
Google Photos now lets you edit images by simply asking. Powered by Gemini AI, the new conversational editor can fix, enhance, or transform photos with just voice or text commands.
Google is turning photo editing into a conversation. Alongside the Pixel 10 launch, the company unveiled a new Google Photos feature that lets users request edits in plain language instead of adjusting sliders or searching for tools. Type an instruction or speak it aloud, and the app’s AI will handle the edits automatically.
The system runs on Google's Gemini AI, which powers both basic fixes and more elaborate edits. Instead of selecting a brightness tool or manually removing objects, users can simply describe the change they want, such as removing glare from a shot or brightening a dim photo.
The AI applies changes instantly and can refine results if the first attempt isn’t right. It also accepts multiple instructions in one prompt, so users could remove glare, brighten a shot, and add objects all in the same request.
This launch ties into a broader overhaul of the Photos editor, which now blends natural language with quick suggestions and gesture-based tools such as circling a distraction to erase it. Google frames the update as a way to make polished edits possible for anyone, regardless of editing knowledge.
The conversational approach isn’t limited to fixes. Users can request playful additions like party hats or sunglasses, change a photo’s background, or generate new elements entirely. Because Gemini interprets context, it handles multiple tools under the hood without requiring the user to manage settings or layers.
Of course, this raises the same questions that hover around AI editing broadly: how much of a photo remains authentic, and at what point does it shift into AI illustration? Google’s pitch is that it’s giving users more creative freedom while still showing where edits happened.
Alongside the new editing tools, Google is embedding industry-standard C2PA Content Credentials in the Pixel 10’s camera app. Google Photos will also display these credentials, showing metadata about how and when an image was captured or altered. The rollout begins with Pixel 10 devices and will gradually reach other Android and iOS devices.
These credentials join existing methods such as IPTC metadata and Google’s SynthID watermarking, creating a layered system to flag when AI is involved. How prominently those indicators will appear to casual users remains an open question.
Conversational editing debuts on the Pixel 10 in the U.S., with wider device support expected later. C2PA credential support in Google Photos will expand across Android and iOS over the coming weeks.
Google is effectively betting that making edits conversational and pairing them with visible transparency markers will reset how people think about photo editing. Instead of tweaking tools, the interaction shifts toward telling an AI what you want and trusting it to execute. Whether users embrace that shift, or push back against edits that feel less manual, may determine how quickly this style of editing spreads.
Would you actually use conversational editing, or does it feel like too much control handed to AI? Share your thoughts below.