Google Pixel 9 AI Photo Tool Blurs the Line Between Real and Fake

The new AI-powered Magic Editor in Google Pixel 9 creates hyper-realistic fake images, sparking debates on trust in photography.

Image: Tom Caillarec / Reproduction / Disclosure
The Essentials

  • Google’s new AI photo-editing tool makes it easy to create convincing fake images.
  • Trust in photographs as proof of reality is increasingly at risk, raising concerns about misinformation.
  • Experts warn that society will need new ways to verify truth as AI-generated images become more widespread.

Google has tossed another wrench into the gears of truth with its new AI-powered photo-editing features. The latest addition to the Pixel 9’s Magic Editor makes it ridiculously easy to cook up completely fake but utterly convincing images. So, can anyone actually trust photographs anymore?

At the heart of the problem is this seemingly innocent feature that uses AI to enhance photos by adding elements that blend seamlessly with the original image. Think of it as AI-generated Hollywood magic, but without the credits at the end.

Everything from shadows to lighting matches up so well it could fool even the most skeptical eye. With this tech, it’s now possible to alter reality in ways that would leave even Hollywood’s FX artists shaking their heads.

According to Dr. Harold Hong, a psychiatrist and Medical Director at New Waters Recovery, the implications of this tech go beyond just pretty pictures. “This kind of manipulation erodes trust in what we see, which could lead to anxiety as people struggle to know what’s real.” In a world already drowning in misinformation, throwing doubt on visual evidence only adds to the chaos.

Manipulated History

Photographs have never been the ultimate truth. From Stalin erasing his enemies from photos to subtle framing tricks that change the narrative, we’ve always been able to play with images. But, until recently, pulling off a convincing fake required skill, time, and effort—something most fraudsters lacked. Now? Those barriers are obliterated.

The internet is already awash with lies, but Google’s new tools mean anyone can whip up a believable fake in minutes. And guess what? Most people won’t be able to tell the difference.

Photographer and psychiatrist Omotola Ajibade of FirstClass Healthcare believes society will have to adapt. “We’ve never completely relied on images alone for the truth, especially in areas like the legal system. We’ve always needed other sources of information, and that’s what we’ll lean on even more going forward.”

The Reality We Thought We Knew

Images are a cornerstone of how we verify reality. Courts rely on them. We use them to document wars, prove damages to landlords, and show that, yes, that package was delivered in pieces. But with Google’s Reimagine tool, you can now add or remove anything from a photo with just a few taps. Want to turn a calm street into a crime scene? Easy. Need to fabricate evidence of something more sinister? Done in seconds.

In an article for The Verge, Sarah Jeong demonstrated the terrifying potential of this tech. From transforming an empty room into a scene of chaos to placing incriminating objects around unsuspecting individuals, the possibilities are limitless. And dangerous.

Fake photos aren’t just about celebrity scandals anymore. They could be used for insurance fraud, phony court cases, or even to justify international conflicts. As Pulitzer-winning photographer Deanne Fitzmaurice puts it, “The challenge now is trusting the source and context. We’ve always been able to edit photos, but AI makes it so easy, it’s hard to spot what’s real.”

Fitzmaurice, who started her career in the film era, has seen disruption in her field before, from the rise of digital cameras to the advent of Photoshop. But AI is something else entirely. It’s a threat to journalism’s credibility, which has always rested on truth.

Fighting the Fake

In the near future, photographers might have to prove their work is authentic with anti-AI watermarking technology, which is already in development. But will that be enough? Edward Tian, CEO of GPTZero, thinks AI-detection software will become essential for newsrooms and educators alike. “It’s going to be critical in ensuring accuracy,” he says, “especially with AI making it so easy to cut corners.”
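For newsrooms weighing the provenance and watermarking schemes mentioned above, one concrete (if partial) check is whether a photo carries an embedded C2PA “Content Credentials” manifest. The sketch below is an illustration only, not anything described in the article: it assumes a JPEG input and the C2PA spec’s use of APP11/JUMBF segments, it only detects the presence of a manifest rather than verifying its signature, and the file name is made up. Actual verification would rely on a dedicated tool such as the open-source c2patool.

```python
# Rough sketch: check whether a JPEG appears to carry an embedded C2PA
# "Content Credentials" manifest. Presence of the marker is not proof of
# authenticity; verifying the signed manifest requires a real C2PA validator.

import struct
import sys


def has_c2pa_manifest(path: str) -> bool:
    """Return True if an APP11 segment in the JPEG seems to hold C2PA data."""
    with open(path, "rb") as f:
        data = f.read()

    if not data.startswith(b"\xff\xd8"):          # SOI marker missing: not a JPEG
        return False

    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                        # lost marker sync; give up
            break
        marker = data[i + 1]
        if marker == 0xDA:                         # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:  # APP11 segment carrying a C2PA JUMBF box
            return True
        i += 2 + length                            # jump to the next marker segment
    return False


if __name__ == "__main__":
    photo = sys.argv[1] if len(sys.argv) > 1 else "photo.jpg"  # hypothetical file name
    print("C2PA manifest found" if has_c2pa_manifest(photo) else "no manifest detected")
```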

The real problem, though, is bigger than just spotting fakes. When images lose their credibility, even the real ones will be met with skepticism. And while it’s true that source and context are key to trusting an image, it works both ways. Plenty of shady sources are already spreading conspiracy theories, and now they’ve got a powerful new tool to back up their lies.

It’s clear that the future of photography, and truth itself, just got a whole lot murkier.
