Gemini’s Nano Banana 2 Can Now Make Images From Your Photos


Google just dropped an update to the Gemini app that actually made me sit up and pay attention. The new Nano Banana 2 model can now pull from your personal context and Google Photos to generate images that reflect your actual life. Not some generic stock-photo version of “a person on a beach” — but you, or your dog, or that weird lamp you bought at a garage sale.

I’ve been testing this for a few days and it’s genuinely impressive in a way most AI image tools aren’t. The old approach was basically: describe something, get a decent but soulless image. This new version actually understands context. If you say “make an image of my cat wearing a tiny hat,” it doesn’t just generate a random cat in a hat — it looks at your Google Photos, finds your actual cat, and puts a hat on it. The resemblance is uncanny.

Here’s how it works: Nano Banana 2 is a multimodal model that has access to your personal context — things like your calendar, emails, and now your Google Photos library. When you ask it to create an image, it searches through your photos for relevant people, pets, objects, and locations. Then it generates something that incorporates those elements. It’s not just pasting your face onto a body — it’s actually understanding the visual identity of things and re-rendering them in new scenes.
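Google hasn't published the internals, but the retrieve-then-generate flow described above can be sketched in miniature. Everything here is hypothetical: the names are made up, and simple label matching stands in for whatever visual-embedding retrieval the real model uses.

```python
from dataclasses import dataclass

@dataclass
class Photo:
    path: str
    labels: set  # entity labels a vision model might assign ("cat", "lamp", ...)

def find_references(query_entities, library):
    """For each entity mentioned in the prompt, pick a library photo whose
    labels match -- a toy stand-in for embedding-similarity retrieval."""
    refs = {}
    for entity in query_entities:
        matches = [p for p in library if entity in p.labels]
        if matches:
            refs[entity] = matches[0]  # a real system would rank by similarity
    return refs

def generate(prompt, query_entities, library):
    refs = find_references(query_entities, library)
    # A real model would condition image generation on the reference photos;
    # here we just report which references would be pulled in.
    return {"prompt": prompt, "references": {e: p.path for e, p in refs.items()}}

library = [
    Photo("IMG_001.jpg", {"cat", "sofa"}),
    Photo("IMG_002.jpg", {"dog", "park"}),
]
result = generate("my cat wearing a tiny hat", ["cat", "hat"], library)
print(result["references"])  # {'cat': 'IMG_001.jpg'}
```

Note that "hat" has no match in the library, so it simply isn't grounded in a reference photo: that mirrors how the real model only personalizes the elements it can actually find in your Photos library and invents the rest.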

Privacy-wise, Google says all of this happens on-device or through secure processing: your photos aren’t uploaded to a cloud server for training, and the on-device embeddings used to match them are destroyed after each generation. I’m usually skeptical of claims like these, but Google’s track record with on-device processing on Pixel phones gives me some confidence here.

There are limitations though. The model struggles with complex scenes — if you ask for “my dog, my cat, and my car all in a spaceship,” it’ll probably drop one of those elements or make them look weird. Also, it’s clearly trained on a lot of Western-centric data, so if your photos don’t match that distribution, results can be hit or miss. I tried generating an image of a traditional Japanese tea ceremony using my vacation photos, and it defaulted to a very Americanized version.

The real question is: do we actually want this? I can see the appeal for parents making custom birthday cards or people who want personalized wallpapers. But there’s also something a bit unsettling about an AI that knows what your face, your home, and your dog look like. Google’s privacy documentation is thorough, but the feature still feels like it’s testing the waters of how much personal data we’re comfortable handing over for convenience.

Still, as a tool, it’s the most useful AI image generation I’ve seen in a while. Most of these models feel like toys — this one feels like it could actually save me time. I don’t need to search through old photos and Photoshop things together anymore. I can just describe what I want, and Gemini handles the rest. That’s a meaningful step forward, even if it comes with some baggage.
