We’re cooked.

  • calliope@retrolemmy.com · 2 days ago

    Seems like very thinly veiled advertising for a new version of Google’s AI image generation.

    If AI is getting this good at imitating the things that signal a photo is real, then guys: We are cooked.

    “We are cooked, fellow kids!”

    The author also pretty much says “all other AI was slop before this, right guys?”

    • Hackworth@piefed.ca · 2 days ago

      Yeah, a more honest take would discuss the strengths & weaknesses of the model. Flux is still better at text than Nano Banana, for instance. There’s no “one model to rule them all,” as much as tech journalism seems to want to write like that.

  • bonenode@piefed.social · 3 days ago

    I feel the bus one is actually quite easy to spot as fake: there’s no one with their head down looking at their phone.

    • Sinaf@lemmy.world · 2 days ago

      Most of these images have really shitty resolution as well. Can’t they generate higher-res stuff, or would inconsistencies otherwise be more obvious?

      • Hackworth@piefed.ca · 2 days ago

        Generating higher-res stuff directly requires way more compute. But there are plenty of AI upscalers out there, some better, some worse; they’re also built into Photoshop now. The difference between an AI image that is easy to spot and one that is hard to spot is using good models. The difference between an AI image that is hard to spot and nearly impossible to spot is another 20 min of work in post.
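
        If it helps, here’s a minimal sketch of that generate-then-upscale workflow, assuming Hugging Face’s diffusers library (the model IDs, prompt, and sizes below are just illustrative examples, not a recommendation):

        ```python
        # Sketch only: generate at the model's native resolution (cheap),
        # then hand the result to a dedicated AI upscaler.
        # Model IDs are examples; swap in whatever you actually use.
        import torch
        from diffusers import StableDiffusionPipeline, StableDiffusionUpscalePipeline

        # 1. Base generation at the model's native resolution.
        gen = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        prompt = "people waiting at a bus stop, candid photo"
        low_res = gen(prompt).images[0]

        # 2. 4x super-resolution pass. Downsizing the input first keeps
        #    the upscaler's memory use sane (256 -> 1024 here).
        up = StableDiffusionUpscalePipeline.from_pretrained(
            "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
        ).to("cuda")
        high_res = up(prompt=prompt, image=low_res.resize((256, 256))).images[0]
        high_res.save("bus_stop_4x.png")
        ```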

        • Sinaf@lemmy.world · 2 days ago

          The difference between an AI image that is hard to spot and nearly impossible to spot is another 20 min of work in post.

          Yeah, I don’t doubt it, but you still need human labour. Not just anyone can fake a photo at a believable level.

          • TheWonderfool@lemmy.world · 44 seconds ago

            I would say it mostly depends on the complexity of the photo. Random Instagram model posing for the camera? You can get that out of the box, at the press of a button, on your own machine. Complex photo with multiple subjects and a cluttered background? That would still need a lot of human work.