How I Redesigned Mona Lisa with AI
Redesigning the Mona Lisa with AI means using generative image models to reinterpret Leonardo da Vinci's c.1503 masterpiece in entirely new contexts — maintaining the recognisable face, expression, and cultural weight of the original while placing the figure in contemporary scenarios. The process combines a clear creative concept, iterative prompting, and deliberate style choices to produce designs that are funny, wearable, and art-historically grounded.
The Mona Lisa is the most recognised painting in the world. It has survived theft, acid attacks, vandalism, and being reproduced on approximately everything. Marcel Duchamp drew a moustache on her in 1919. Andy Warhol silkscreened her in 1963. Every decade, someone finds a new way to make her relevant.
My question was simple: what does she look like in 2026? Not as a symbol of fine art. As a person. Living now. Shopping, scrolling, navigating modern life with the same composure she has always projected — and a few new accessories.
The Concept: Starting with a Question, Not a Style
The most common mistake in AI art reinterpretation is starting with a visual style rather than a concept. 'Make the Mona Lisa look modern' produces generic results — contemporary clothes, maybe a phone in hand, nothing that creates genuine surprise or recognition.
I started with specific questions instead:
• What if she was mid-shopping trip, phone in hand, simultaneously checking a sale and ignoring everyone around her?
• What if she was a 90s skate kid — full cargo pants, graphic tee, skateboard over her shoulder?
• What if she needed coffee before she could deal with any of this?
Each question implies a character, a setting, a visual register, and a specific kind of humour. That specificity is what the AI model responds to — and what the final viewer connects with.
The Process: From Concept to Final Design
Each design in the Mona Lisa series followed the same five-stage process:
1. Define the scenario clearly
Not 'Mona Lisa shopping' but 'Mona Lisa in a flat-illustrated retail street, holding a coffee, pushing a cart full of designer bags, sunglasses on, speech bubble saying exactly what she's thinking'. The more specific the scenario, the more specific the result.
2. Choose the visual register deliberately
Each design needed a style decision before prompting: photorealistic composite, flat pop art illustration, oil painting parody, or graphic novel. The register shapes the entire aesthetic output — choosing it upfront prevents generic middle-ground results.
3. Prompt iteratively
The first generation is always a draft. I typically ran 8–15 iterations per design — adjusting composition, colour palette, expression, background detail, and text elements. The goal was a result that felt designed rather than generated.
4. Add the art historical layer
The details that reward attention: the Da Vinci and Del Giocondo shopping bag labels (referencing the painter and the sitter, Lisa del Giocondo, whose name gives the painting its Italian title, La Gioconda), the text treatments using the same monospace font across the series, the specific year references like 1503. These are the details that make art lovers pause.
5. Test across products
A design that works as a square digital image doesn't always work on a mug handle, a tote bag strap, or a phone case edge. Every finalised design was tested across the full product range before listing.
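The five stages above can also be treated as structured data rather than a single free-form prompt, so each iteration changes one field at a time while everything else stays fixed. The sketch below is a minimal illustration of that idea — the `DesignBrief` class and its fields are hypothetical, not part of any real image-generation tool's API.

```python
# Hypothetical sketch: capture the five-stage checklist as data, so each
# iteration edits one field instead of rewriting the whole prompt.

from dataclasses import dataclass, field


@dataclass
class DesignBrief:
    scenario: str                                      # stage 1: the specific situation
    register: str                                      # stage 2: the visual style decision
    details: list[str] = field(default_factory=list)   # stage 4: the art-historical layer

    def to_prompt(self) -> str:
        """Assemble the fields into a single prompt string."""
        parts = [self.scenario, f"style: {self.register}"]
        parts.extend(self.details)
        return ", ".join(parts)


brief = DesignBrief(
    scenario=(
        "Mona Lisa in a flat-illustrated retail street, holding a coffee, "
        "pushing a cart full of designer bags, sunglasses on"
    ),
    register="flat pop art illustration",
)

# Stage 3: iterate by appending or swapping one detail per run.
brief.details.append("shopping bags labelled 'Da Vinci' and 'Del Giocondo'")
brief.details.append("monospace speech-bubble text, lowercase")

print(brief.to_prompt())
```

Keeping the brief as data also makes stage 5 easier: the same scenario and register can be re-rendered with product-specific framing details without losing the rest of the prompt.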
What the Process Taught Me
Consistency is a design choice, not an accident
The text treatment across the Mona Lisa series — the monospace font, the bracket arrows, the lowercase styling — was a deliberate decision made after the first three designs. Without it, the series looked like a collection of separate ideas rather than a coherent body of work. Visual consistency across a series requires active decisions, not just matching aesthetics.
The AI doesn't have the idea — you do
The most common misconception about AI art creation is that the tool generates the concept. It doesn't. It executes the concept you bring. 'Better Call Me' works because the shopping scenario, the speech bubble, the specific flat illustration style, and the pop-art colour palette were all decided before the first prompt was written. The AI produced an image. The creative direction was entirely human.
The hardest part is knowing when to stop
Iterative prompting can produce progressively better results — but it can also produce progressively safer ones. The most interesting designs in the series came from early iterations that preserved something unexpected or slightly wrong. Knowing when a design is finished rather than merely refined is a skill the AI cannot develop for you.
Why the Mona Lisa, Five Centuries Later
The Mona Lisa has lasted five hundred years because something about her refuses to be fully resolved. The smile that can't be read definitively. The landscape that doesn't quite match anywhere real. The gaze that follows you across a room.
That unresolvability is exactly what makes her such a good subject for reinterpretation. She doesn't belong to any single reading — so every generation gets to ask the question again. I asked it with AI, a shopping cart, a skateboard, and a speech bubble.
Leonardo would probably have something to say about that. I'm choosing to believe he'd find it amusing.
👉 Browse the Artist Reimagines collection on Etsy — t-shirts, tote bags, and mugs (desk mats and puzzles coming soon) featuring AI-reimagined classical masterpieces.

