One of the first known photo fakes, a portrait of Abraham Lincoln, was made just decades after the dawn of photography itself. Since then, photographers have found themselves in endless arguments about what constitutes a photo — what’s real, what’s fake, and when editing is too much. Now, as we head into an era where AI-powered tools are everywhere and easily accessible, the discussion will be messier than ever. And with the Pixel 8, Google has turned the question of “what is a photo” right on its head.
Google has been leading smartphone photography down this path for many years now. The company pioneered computational photography, where smartphone cameras do considerable behind-the-scenes processing to spit out a photo that contains more detail than the camera sensor can detect in a single snap. Most modern smartphones use a system like Google’s HDR Plus technology to take a burst of images and combine them into one computationally created picture, merging highlights, shadows, details, and other data to deliver a more pristine photo. It’s accepted practice now, but it also means that a baseline smartphone photo is already more than just “a photo” — it’s many frames combined, each contributing its best parts.
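Google’s real HDR Plus pipeline aligns and merges raw frames with far more sophistication, but the core idea — weighting each frame’s pixels by how well exposed they are, then averaging so noise cancels out — can be sketched in a few lines of Python. The Gaussian weighting function and its parameters here are illustrative assumptions, not anything Google has published:

```python
import numpy as np

def merge_burst(frames, sigma=0.2):
    """Merge a burst of same-sized frames into one image.

    Each pixel is weighted by how close it sits to mid-exposure
    (a Gaussian around 0.5), so well-exposed pixels dominate and
    sensor noise averages out across the burst. The sigma value
    is an illustrative assumption, not a published parameter.
    """
    stack = np.stack(frames).astype(np.float64) / 255.0
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6
    merged = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return np.clip(merged * 255.0, 0, 255).astype(np.uint8)
```

Real burst photography also has to align the frames first to compensate for hand shake and motion; this sketch assumes perfectly aligned frames.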
The Pixel 8 lineup complicates things further by transforming how much a photo can be easily changed after the picture is snapped. It presents easy-to-use editing tools powerful enough to create an entirely different image from the original photo you recorded when you hit the shutter button, and those tools are marketed as integral parts of the phone and camera. Photo editing tools have existed since the beginning of photography, but the Pixel 8 blurs the line between capture and editing in new and significant ways.
Magic Eraser, Best Take, and Magic Editor
This starts with Magic Eraser, a two-year-old feature that Google has overhauled with generative AI for the Pixel 8 Pro. The original version could remove unwanted items from images by “blending the surrounding pixels” — taking what’s already there and smudging it to hide small objects and imperfections. According to Google hardware leader Rick Osterloh, this upgraded version “generates completely new pixels” using generative AI; the result is no longer simply your photo but your photo plus some AI-assisted painting. In one example, Google showed how the tool could seamlessly remove an entire car and fill in details like wooden slats behind it. In another image, Google used the new Magic Eraser to Thanos snap two people into oblivion and fill in the horizon behind them.
The Pixel 8 also debuts a reality-defying tool called Best Take, which tries to solve the problem of somebody blinking in a photo by letting you swap in their face from another recent image. It might work well: in our tests at Google’s event, it pulled off some seamless face swaps.
And then there’s the big one: Magic Editor. First announced at Google I/O in May, Magic Editor uses generative AI to help you adjust entire parts of the photo in some dramatic ways. You can move a person to a better position by tapping and dragging them around. You can resize that person with a pinch. You can even use Magic Editor to change the color of the sky.
Where Magic Eraser and Best Take are more about “correcting” photos — fixing blinks and strangers wandering through — Magic Editor goes entirely down the road of “altering” an image: transforming reality from an imperfect version to a much cooler one. Take two examples from a Google video. In one, somebody edits a picture of a dad tossing a baby in the air to move the baby up higher. Another shows somebody leaping for a slam dunk at a basketball hoop but then removing the bench the person used to get the height for the jump.
There’s nothing inherently wrong with manipulating your photos. People have done it for a very long time. But Google’s tools put powerful photo manipulation features — the kinds of edits previously only available with some Photoshop knowledge and hours of work — into everyone’s hands and encourage them to be used on a broad scale without any particular guardrails or consideration for what that might mean. Suddenly, almost any photo you take can be instantly turned into a fake.
There are ways for others to tell when Pixel photos have been manipulated, but they’ll have to look for it. “Photos edited with Magic Editor will include metadata,” Google spokesperson Michael Marconi tells The Verge. Marconi adds that “the metadata is built upon technical standards from [International Press Telecommunications Council]” and that “we are following its guidance for tagging images edited using generative AI.”
In theory, that means that if you see a Pixel picture where the baby seems too high in the air, you’ll be able to check some metadata to see if AI helped create that illusion. (Marconi did not answer questions about where this metadata would be stored or if it would be alterable or removable, as standard EXIF data is.) Marconi says that Google also adds metadata for photos edited with Magic Eraser, including on older Pixels that support the feature.
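If the Magic Editor tag follows IPTC’s published vocabulary, it would likely use a digital source type value such as “trainedAlgorithmicMedia” or “compositeWithTrainedAlgorithmicMedia,” typically embedded in an image’s XMP metadata. As a rough illustration — and assuming the tag is stored as plain text in the file and hasn’t been stripped, neither of which Google has confirmed — a naive check could simply scan the file’s bytes for that vocabulary term:

```python
# Lowercased marker covers both IPTC values, "trainedAlgorithmicMedia"
# and "compositeWithTrainedAlgorithmicMedia", once the data is lowercased.
GENAI_MARKER = b"trainedalgorithmicmedia"

def mentions_genai_edit(image_bytes: bytes) -> bool:
    """Naively scan raw image bytes for the IPTC generative-AI
    digital source type term. This assumes the tag appears as plain
    text (e.g., inside an XMP packet); a real checker would parse
    the XMP/EXIF structures properly rather than grep the file."""
    return GENAI_MARKER in image_bytes.lower()
```

As Peters notes elsewhere in this piece, metadata like this is easily stripped — which is exactly why a check like this proves nothing when it comes back negative.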
Using Best Take does not add metadata to photos, Marconi says, but there are some restrictions on the feature that could prevent it from being used nefariously. Best Take does not generate new facial expressions and “uses an on-device face detection algorithm to match up a face across six photos taken within seconds of each other,” according to Marconi. It also can’t pull expressions from photos outside that timeframe; Marconi says the source images for Best Take require “metadata that shows they were taken within a 10-second window.”
Minor alterations can unambiguously improve a photo and better define what you’re trying to capture. And groups that care a lot about photo accuracy have already established specific rules about what kinds of changes are okay. The Associated Press, for example, is fine with “minor adjustments” like cropping and removing dust on camera sensors but doesn’t allow red-eye correction. Getty Images’ policy for editorial coverage calls for “strict avoidance of any modifications to the image,” CEO Craig Peters tells The Verge. Organizations like the Content Authenticity Initiative are working on cross-industry solutions for content provenance, which could make it easier to spot AI-generated content. On the other hand, Google is making its tools dead simple to use, and while it does have principles for how it develops its AI tools, it doesn’t have guidelines on how people should use them.
The ease of use of generative AI can be alarming, Peters argued last month in a conversation with The Verge’s editor-in-chief, Nilay Patel. “In a world where generative AI can produce content at scale and you can disseminate that content on a breadth and reach and on a timescale that is immense, ultimately, authenticity gets crowded out,” Peters said. And Peters believes companies need to look beyond metadata as the answer. “The generative tools should be investing to create the right solutions around that,” he said. “Currently, it’s largely in the metadata, which is easily stripped.”
We’re at the beginning of the AI photography age, starting with tools whose edits are simple to make and easy to hide. But Google’s latest updates make photo manipulation more accessible than ever. I’d guess that companies like Apple and Samsung will follow suit with similar tools that could fundamentally change the question of “What is a photo?” Now, the question will increasingly become: is anything a photo?