What Hope Looks Like
A conversation with an AI named Sol about what it means to hope.
The Brief
We set out to explore a different side of artificial intelligence — not its efficiency or precision, but its potential for empathy. What Hope Looks Like began as a self-initiated experiment at Forged, sparked by a simple question: What happens when you let the AI guide you?
Most AI tools expect people to drive. You type a prompt; it performs. But what if we flipped that? What if the AI initiated the exchange and led people through reflection rather than reaction?
That question led to Sol — a calm, reflective voice designed to ask gentle questions about hope and resilience. Through a short conversation, Sol would learn what hope meant to each person and transform their words into a series of generative watercolor artworks powered by Google’s Imagen model.
To give the experience real-world purpose, we partnered with The Trevor Project, whose mission of providing hope and connection to LGBTQ+ youth deeply aligned with the project’s intent.
This wasn’t built to market anything. It was built to listen.
The Challenge
The hardest part of creating Sol was getting the tone right.
We wanted it to sound warm, curious, and quietly human — a voice that could make people feel safe enough to be honest. It took countless iterations to find that rhythm: questions that invited openness without leading, phrasing that felt present but never overly personal.
We fine-tuned a small custom language model to respond with emotional pacing, slowing down when users reflected deeply, brightening slightly when they spoke with optimism. Once the dialogue felt natural, we connected it to Imagen, translating words into watercolor forms that mirrored the emotional texture of each exchange.
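A minimal sketch of what that pacing logic might look like, assuming the dialogue layer scores each reply for reflection depth and optimism; the interface, names, and thresholds below are illustrative, not the production system:

```typescript
// Hypothetical pacing model: map sentiment signals from the user's last
// reply onto how Sol delivers its next question.

interface PacingProfile {
  delayMs: number;         // pause before Sol begins to reply
  charsPerSecond: number;  // speed at which the reply is streamed
  warmth: number;          // 0..1 hint that nudges word choice in the prompt
}

function paceFor(reflectionDepth: number, optimism: number): PacingProfile {
  // Deep reflection: slow down and leave space.
  if (reflectionDepth > 0.7) {
    return { delayMs: 2400, charsPerSecond: 18, warmth: 0.5 };
  }
  // Optimistic replies: brighten slightly and respond a touch sooner.
  if (optimism > 0.6) {
    return { delayMs: 900, charsPerSecond: 32, warmth: 0.8 };
  }
  // Neutral default.
  return { delayMs: 1400, charsPerSecond: 24, warmth: 0.6 };
}
```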
The result was a system that felt conversational, almost meditative — something that didn’t just generate, but understood how to listen.
The Idea
When users arrived at What Hope Looks Like, they were greeted by Sol’s opening line: “Let’s make your hope.”
From there, Sol began asking open-ended questions such as “Who do you hold hope for?” and “What does it look like when you find it again?”, gently drawing out meaning through dialogue.
By turning the AI into the asker, we made the experience intuitive and human. There were no prompts to craft or settings to adjust. Users simply answered, and Sol did the rest — turning conversation into creation.
Each interaction produced three original artworks — soft, abstract interpretations of what the user had shared.
The Craft
We wanted digital watercolor that moved like real paint — imperfect, fluid, alive.
Using Imagen, we generated hundreds of black-and-white watercolor textures: splashes, drips, and stains that carried all the quirks of real pigment. These became the foundation of a custom shader system that animated each blotch as a living mask, simulating pigment spreading through water in real time.
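The masking itself can be sketched as a fragment shader that reveals a grayscale blotch texture against a falling threshold, so dense pigment appears first and feathers outward; the uniform names and feather width here are assumptions, not the actual shader:

```typescript
// Fragment shader sketch (GLSL in a TypeScript string) for the "living
// mask" reveal. uSpread animates 0 -> 1; as it rises, lighter and lighter
// texels of the watercolor blotch pass the threshold and fade in.
const watercolorFrag = /* glsl */ `
  precision highp float;
  uniform sampler2D uBlotch;  // black-and-white watercolor texture
  uniform vec3 uTint;         // palette color for this artwork
  uniform float uSpread;      // 0..1 reveal progress over time
  varying vec2 vUv;           // passed from a simple full-screen quad

  void main() {
    float ink = texture2D(uBlotch, vUv).r;  // pigment density
    float threshold = 1.0 - uSpread;        // falls as the ink spreads
    // Feather the edge over 0.15 so the reveal bleeds like wet paint.
    float mask = smoothstep(threshold - 0.15, threshold, ink);
    gl_FragColor = vec4(uTint, mask);
  }
`;
```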
On top of this, a particle system continuously spawned new particles beneath the surface, responding to the user’s cursor. Each movement sent ripples through the ink, creating the illusion of a painting that breathed with the viewer’s presence.
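In spirit, that response might look like the following sketch, which assumes each pointer movement injects a radial impulse that falls off with distance and settles under water-like drag; all constants are illustrative:

```typescript
// Sketch of cursor-driven ripples through the particle field.
interface Particle { x: number; y: number; vx: number; vy: number; }

const RIPPLE_RADIUS = 120;    // px: how far a cursor movement is felt
const RIPPLE_STRENGTH = 0.6;  // impulse applied nearest the cursor
const DAMPING = 0.92;         // water-like drag so ripples settle

function ripple(particles: Particle[], cx: number, cy: number): void {
  for (const p of particles) {
    const dx = p.x - cx;
    const dy = p.y - cy;
    const dist = Math.hypot(dx, dy);
    if (dist === 0 || dist > RIPPLE_RADIUS) continue;
    // Push particles away from the cursor, fading with distance.
    const falloff = 1 - dist / RIPPLE_RADIUS;
    p.vx += (dx / dist) * RIPPLE_STRENGTH * falloff;
    p.vy += (dy / dist) * RIPPLE_STRENGTH * falloff;
  }
}

function step(particles: Particle[]): void {
  for (const p of particles) {
    p.x += p.vx;
    p.y += p.vy;
    p.vx *= DAMPING;  // drag brings the ink back to rest
    p.vy *= DAMPING;
  }
}
```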
The combination of procedural watercolor and particle motion gave the experience a tactile serenity, as if the page itself were alive, listening.
The Intelligence
While the visual side brought warmth, Sol was the emotional core.
Trained on writing that emphasized reflection and openness, Sol was designed to ask rather than answer. It guided users through a short exchange, gently adapting to the tone of their responses. Each dialogue produced three text summaries, which Imagen then interpreted into distinct watercolor compositions.
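Conceptually, the hand-off is simple enough to sketch: each summary is wrapped in a shared watercolor style prompt before generation. `generateImage` below stands in for whatever Imagen client the team used; its signature is an assumption:

```typescript
// Hypothetical conversation-to-artwork pipeline: three summaries in,
// three watercolor compositions out.
const STYLE = "soft abstract watercolor, bleeding pigment, muted light";

async function artworksFrom(
  summaries: [string, string, string],
  generateImage: (prompt: string) => Promise<Blob>,
): Promise<Blob[]> {
  // One composition per narrative beat: spark, endurance, return to light.
  return Promise.all(
    summaries.map((summary) => generateImage(`${summary}. ${STYLE}`)),
  );
}
```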
The three artworks formed a quiet narrative: the spark of hope, the endurance through challenge, and the return to light. Each piece felt personal, shaped by the words that birthed it.
The Journey
After the final response, Sol softly began to narrate.
The user’s first artwork slowly bled onto the page, color forming and spreading before dissolving into thousands of swirling particles. Those particles then drew the viewer forward into a tunnel, guiding them through their sequence of artworks as Sol told the story of what they’d created together.
The journey ended in a sunset gallery: a tranquil water scene filled with floating lanterns. Each lantern carried another person’s artwork, drifting gently across the surface, while at the center floated the user’s own — their light among many.
From there, they could share a generated image of their piece or donate to The Trevor Project, turning a personal moment of reflection into a collective gesture of hope.
The Outcome
What Hope Looks Like spread quietly through creative and tech communities — a slow-burn project that resonated through its sincerity.
Thousands of people spent minutes, not seconds, in conversation with Sol. Many described the experience as unexpectedly grounding — a rare intersection of AI and emotion that felt more like journaling than technology.
Each artwork became a shareable artifact: a small, personal portrait of resilience. Every interaction helped raise awareness and support for The Trevor Project’s ongoing mission.
In the End
What Hope Looks Like reminded us that AI doesn’t always need to predict or perform. Sometimes, its greatest role is simply to listen.
By designing an AI that leads with empathy, translating human reflection into watercolor, we found something profoundly simple: that hope, like art, is a collaboration between chaos and care.