Sketching with AI


New Ways of Making Music

I’ve been experimenting with AI since 2018, using it as both a design accelerant and a creative partner. Long before that—back in the early days of the internet—I was lucky enough to work with bands like The Rolling Stones and David Bowie, experiences that shaped my curiosity about how technology can expand the possibilities of music-making. As an amateur musician myself, I’ve carried the question with me ever since: What new musical experiences can we create when technology listens, adapts, and responds to us?

AI finally gives us the ability to sketch answers to that question at a speed and fidelity we’ve never had before. The concepts below come from a series of exploratory prototypes—each one designed to rethink what a musical interface could be when sound, movement, and environment become creative inputs rather than constraints.


Listen Again: A Reactive Soundscape in Your Pocket

Listen Again transforms your phone into a hybrid instrument that blends live environmental sound with generated frequencies and visuals. Using the onboard microphone, it responds in real time—mixing the world around you with ambient or beat-driven layers.

It’s not a music player.
It’s not quite a synthesizer either.
It’s a tool for rediscovering the sound of your immediate environment—expanded, remixed, and reinterpreted. 
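
To make the core loop concrete, here’s a minimal browser sketch using the Web Audio API: the live microphone stays in the mix while a generated drone follows the loudness of the room. The drone layer and the amplitude-to-pitch mapping are illustrative choices of mine, not the app’s actual design.

```ts
// Minimal sketch of a reactive soundscape: live mic + generated drone.
// Headphones recommended; routing the mic to the speakers can feed back.
async function startListenAgain(): Promise<void> {
  const ctx = new AudioContext();

  // Live environmental sound from the onboard microphone.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const mic = ctx.createMediaStreamSource(stream);

  // Analyser lets us react to the room's loudness in real time.
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  mic.connect(analyser);

  // Generated layer: a soft sine drone whose pitch follows the room.
  const drone = ctx.createOscillator();
  const droneGain = ctx.createGain();
  droneGain.gain.value = 0.15;
  drone.connect(droneGain).connect(ctx.destination);
  drone.start();

  // Pass the mic through as well, so the world stays in the mix.
  const micGain = ctx.createGain();
  micGain.gain.value = 0.5;
  mic.connect(micGain).connect(ctx.destination);

  const buf = new Float32Array(analyser.fftSize);
  const update = (): void => {
    analyser.getFloatTimeDomainData(buf);
    // RMS amplitude of the last analysis window.
    const rms = Math.sqrt(buf.reduce((s, x) => s + x * x, 0) / buf.length);
    // Louder surroundings push the drone higher (110-440 Hz, clamped).
    const freq = 110 + Math.min(rms * 4, 1) * 330;
    drone.frequency.setTargetAtTime(freq, ctx.currentTime, 0.1);
    requestAnimationFrame(update);
  };
  update();
}
```

In a real page this would be triggered by a tap, since browsers only start audio after a user gesture.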


SongLines: Turning Movement into Melody

Inspired by Bruce Chatwin’s The Songlines, this concept generates music from your physical movement through space. Pace, direction, rhythm, and path shape a living soundtrack that evolves as you travel. Every journey produces a unique composition—one you can revisit, share, or use as a sonic journal.

In a world where we constantly track steps, calories, and time, SongLines reframes movement as something expressive, not just quantifiable. 
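
One way to make the movement-to-melody mapping concrete: a small sketch, again on the Web Audio and Geolocation APIs, that turns each GPS fix into a plucked note. The pentatonic scale, the 72° heading buckets, and the walk/run threshold are illustrative parameters, not the SongLines design itself.

```ts
// Sketch: heading picks the note, pace picks the octave.
const PENTATONIC = [0, 2, 4, 7, 9]; // semitone offsets from the root

function midiToHz(midi: number): number {
  return 440 * Math.pow(2, (midi - 69) / 12);
}

function startSongLines(): void {
  const ctx = new AudioContext();

  navigator.geolocation.watchPosition((pos) => {
    const speed = pos.coords.speed ?? 0;     // m/s; null when unknown
    const heading = pos.coords.heading ?? 0; // degrees from north

    // Each 72 degrees of heading selects one pentatonic degree;
    // pace sets the octave (walk low, run high).
    const degree = PENTATONIC[Math.floor(heading / 72) % 5];
    const octave = speed > 2.5 ? 1 : 0; // rough walk/run threshold
    const midi = 57 + degree + 12 * octave; // rooted at A3

    // Play a short plucked tone for this GPS fix.
    const osc = ctx.createOscillator();
    const gain = ctx.createGain();
    osc.frequency.value = midiToHz(midi);
    gain.gain.setValueAtTime(0.3, ctx.currentTime);
    gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.8);
    osc.connect(gain).connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + 0.8);
  });
}
```

Recording the sequence of notes alongside the coordinates is what would let a journey be revisited or shared later.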


Wheels: Finding Your Bus Through Sound

Urban environments are full of moments where you need information but can’t look down at your phone. Wheels solves one of the simplest but most common problems in New York City: Is my bus here yet?

Instead of visuals, Wheels uses sound.
Farther bus = softer tone.
Closer bus = louder tone.
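
The whole interaction fits in a few lines. Here’s a sketch assuming a hypothetical fetchBusDistanceMeters() that polls a transit feed (for New York, something like the MTA’s real-time bus data); the 800 m range and the linear loudness curve are placeholder choices.

```ts
// Hypothetical helper: distance in meters to the nearest approaching bus.
declare function fetchBusDistanceMeters(stopId: string): Promise<number>;

async function startWheels(stopId: string): Promise<void> {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = 330; // steady reference tone
  gain.gain.value = 0;       // silent until we know where the bus is
  osc.connect(gain).connect(ctx.destination);
  osc.start();

  setInterval(async () => {
    const meters = await fetchBusDistanceMeters(stopId);
    // Map 0-800 m onto loudness: at the stop = full volume,
    // beyond 800 m = silent.
    const level = Math.max(0, 1 - meters / 800);
    gain.gain.setTargetAtTime(level * 0.5, ctx.currentTime, 0.25);
  }, 5000);
}
```

A logarithmic curve, or pitch instead of volume, might read better on a noisy street; the point is that the signal lives entirely in audio.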

It’s a compact example of designing for real-world constraints—helping people navigate busy streets, accessibility needs, or cluttered environments with audio-first interaction. 


Why Music + AI?

Because music is one of the oldest forms of human expression—and AI allows us to rethink the interface. These sketches aren’t production-ready apps. They’re provocations. They ask:

What if space, motion, and environment became instruments?
What if sound could help us navigate the world—not distract us from it?

AI helps me explore these questions rapidly, iteratively, and playfully. It’s a collaborator that lowers the cost of experimentation and broadens the range of what can be imagined.