Upekkha AI Song Creation Process
- Neil Freebern
- May 6
- 9 min read
Updated: May 22
In this post, I share my detailed journey of creating a song using AI tools — blending human intuition, philosophical inspiration, and cutting-edge technology to shape a work that moves both heart and mind.


Step 1: Initial Inspiration
AI writing assistant: I fed personal journal entries into an AI to analyze and create a bulleted summary of my thinking. At the time I was inspired by Noah Rasheta's podcast "Secular Buddhism," and I asked the AI for a Buddhist word for the space between stimulus and response. I learned about Upekkha and loved the sound of the word. It reminded me of "Kuaga," a song by a former student; the vowel choice on the hook stuck with me. Upekkha... like "Wimoweh" from "The Lion Sleeps Tonight." Just brainstorming.
Step 2: Lyrics
I was learning a lot about equanimity from the podcast, a word I had never used. I took this research and loaded it into the same AI thread. Training the AI on what was in my mind would allow it to guide my creation more authentically. I requested lyrics to match the spirit, angst, and depth of my personal text, using Upekkha as the foundational message. I spent the next day playing with what the AI produced. I pulled out key phrases, added new ones, and worked back and forth with my writing assistant to tell a story of enlightenment through suffering. I figured out what the "hook" of the lyrics was going to be and developed a musical form that met structural conditions: Intro, Verse, Chorus, Verse, Chorus, Verse, Pre-Chorus, Chorus, Outro. I was constructing a structure that ascends, peaks, and softly descends, reflecting the emotional progression of the lyrics. With each lyric, I was able to contemplate the intent and further process my thinking. The wordplay was therapeutic. "To meet it the same, means the work is never done." To meet this challenge with my old skills means I am going to have an issue moving forward. The creative process gave me the space to live in my emotions without judgment and contemplate the fact that we have a choice in how we respond to the world.
Step 3: Using AI Music Generation
Initially, I dumped my lyrics into an AI music generator, musicmusic.ai, using the lyrics as the prompt only. I got a completely produced song, vocal track and all, with lyrics about hearing loss. It was intriguing but fell a bit short. I learned what the tool could do and was initially impressed. I repeated this process and found that the next iteration was a bit limited in depth. The results were intriguing, but I wanted my own lyrics.
Time for the advanced mode in Musicmuse.
I placed the lyrics in the lyrics window, then worked on creating a music prompt to generate a song that matched my intent. To create the best music prompts, I went back to my first AI chat and input a long list of musical attributes that I wanted to hear (a sample prompt assembled from these attributes follows the list):
Key (e.g., C major, A minor)
Tempo (e.g., 120 BPM, slow ballad pace, upbeat dance)
Time Signature (e.g., 4/4, 3/4, 6/8)
Genre/Style (e.g., pop, rock, jazz, ambient, country, hip-hop, EDM, cinematic)
Mood/Vibe (e.g., uplifting, dark, melancholic, hopeful)
Instrumentation (e.g., piano, guitar, synth pads, strings, drums)
Harmonic Tension (e.g., consonant, dissonant, modal, tension-and-release)
Song Form/Structure (e.g., Verse-Chorus-Verse or follow form of lyrics)
Dynamics/Intensity (e.g., soft and intimate, building to a climax, high-energy)
Rhythmic Feel (e.g., straight, swung, syncopated, groovy, spacious)
Reference or Inspiration Tracks (if available, mention songs or artists you want it to resemble)
Special Features (e.g., tempo shifts, breakdowns, moments of silence, climactic peaks)
Vocal Style (if vocals are included: smooth, raw, robotic, soulful)
Lyrical Themes or Keywords (Put your lyrics in the lyrics window, or you can generate new ones in the app)
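To make this concrete, here is a minimal sketch of how these attributes can be assembled into a single plain-text prompt for the generator's music-prompt window. The script and every attribute value in it are hypothetical examples, not the exact prompt I used; the generator simply receives the resulting text.

```python
# Hypothetical example: collecting the attributes above into one
# plain-text prompt for the generator's music-prompt window.
attributes = {
    "Key": "A minor",
    "Tempo": "72 BPM, slow ballad pace",
    "Time Signature": "4/4",
    "Genre/Style": "cinematic ambient pop",
    "Mood/Vibe": "melancholic, searching, ultimately hopeful",
    "Instrumentation": "piano, synth pads, strings, cello, soft drums",
    "Harmonic Tension": "tension-and-release, occasional modal color",
    "Song Form/Structure": "follow the form of the lyrics",
    "Dynamics/Intensity": "soft and intimate, building to a climax, then receding",
    "Rhythmic Feel": "spacious, straight eighths",
    "Special Features": "tempo shifts at cadences, moments of silence",
    "Vocal Style": "raw, soulful vocal",
}

# Join everything into the single paragraph the prompt field expects.
prompt = "; ".join(f"{name}: {value}" for name, value in attributes.items())
print(prompt)
```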
Once the prompt was created, I tested it, and the first iteration was quite good. I listened to it several times, in awe of how well the song was built by AI. The vocal work was humanlike but at times got a bit too predictable. I was still curious, so I modified the prompt with the changes I wanted to hear. I repeated this process several times until a song appeared that resonated with me.
Hearing the text in musical form was so cathartic. When I got to the last iteration, what was created brought me to tears. It matched much of the style I had been composing for years. It was like a synthesis of all of my favorite moments. I could wallow deep in the emotion of every word, on every verse. When the "pulse of a soul" lyric hits and the music holds, it is so poignant. I must have listened to it a hundred times. Was the song perfect? Nope. Did it elicit an emotional response? Yup. Is it art? Not good, not bad!
Step 4: Bringing It into the Studio (Logic Pro)
I wanted to make it better, and I knew the only way was to rebuild and remix the instrumentals. I downloaded the AI-generated track and imported it into Logic Pro. After aligning the track to Logic's grid, I split it into stems (vocal, instrumental, etc.) using Logic's AI features. Breaking the track into stems allowed me to isolate just the vocal lines, adjust their timing and moments of repose, and completely recompose the instrumental parts around them. This gave me full creative freedom to shape the arrangement while retaining the essence of the AI-generated vocals. Analyzing the AI's musical output, I identified the primary chord progressions and designed a new piano part, adding pads, sound effects, bass lines, and cello to enrich the texture. I created two small instrumental breaks and altered the tempo at a few cadence points. Where the lyrics describe a state of awe, the AI had tossed a Neapolitan chord under them. I played with this moment by slowing it down and echoing a rhythmic motif from earlier in the song, letting the oscillation of the note choices evoke a deep emotional response.
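For anyone who wants to try the same stem-splitting step outside Logic Pro, here is a minimal sketch using the open-source Demucs separator in place of Logic's built-in feature. The filename is a placeholder, and this is only one way to do it; Demucs writes the separated stems to a "separated" folder by default.

```python
# A hypothetical alternative to Logic Pro's stem splitting, using the
# open-source Demucs source-separation model (pip install demucs).
import demucs.separate

# Split the downloaded AI track into two stems: vocals and everything else.
# "upekkha_ai_mix.mp3" is a placeholder name for the generator's output file.
demucs.separate.main([
    "--mp3",                  # write stems as MP3 instead of WAV
    "--two-stems", "vocals",  # produce vocals.mp3 and no_vocals.mp3
    "upekkha_ai_mix.mp3",
])
```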
Through mixing and mastering, I refined the overall sound, blending human creativity with the AI's foundation.
Step 5: Artistic Reflection and Deeper Meaning
This entire process raises an important artistic and philosophical question:
What makes a work valid as art when AI plays such a central role?
The lesson I am learning from Upekkha applies equally to the creative process itself. Rather than judge the process as good or bad, valid or invalid, I have come to see this work as a reflection of the present moment, shaped by both human and machine, intention and emergence. Working through this entire creative process offered me a profound sense of catharsis — it helped me process my personal loss and reframe my perception of my ailment. What began as a technical and artistic experiment became, ultimately, a healing journey that allowed me to transform grief into meaning and reshaped how I relate to my own challenges. The emotional resonance I feel from the final track is real, regardless of how it was conceived.
This raises deeper questions: Does this song have less value because AI played a role in its creation? Who truly owns this track, and why does ownership matter in the context of art? Can it be judged on its own merit, apart from its origins? After all, it would not exist if I had not prompted it into being.
At its core, art is human expression — and even though this piece emerged through the collaboration between human and machine, it meets that criterion. It is neither inherently good nor bad; it simply exists, inviting us to engage with it openly. Art, like life, is not bound by fixed categories; it is an unfolding experience that invites us to witness, feel, and reflect.
Reflection on Creative Process
I am contemplating the possibility of replacing the AI-generated vocals with human recordings and considering a public release of this piece. But why? To make it more legitimate? What is the point? Ego? This prompts me to reflect on the motivations behind sharing my work and the potential benefits it may offer to others. Initially, this project was a personal endeavor, crafted for my own experience. It is about the process, not the product. This makes me think.
Although my days of performing may be behind me, my creativity remains vibrant and active. This process has transcended technical execution, evolving into a profound exploration of meaning, creativity, and the dynamic interplay between human expression and machine assistance.
"It is a shift in perception we all have." I discovered that music can still exist in my life. I learned of my new limitations and have found new freedom in the process.
In sharing this work, I hope to foster dialogue and inspire others, contributing to a broader understanding of the relationship between technology and the arts. The insights gained through this creative journey may resonate with those who encounter it, encouraging them to reflect on their own experiences and expressions.
If you're curious about the details of AI music creation, the integration of philosophical themes like Upekkhā, or the emotional impact of blending human and machine artistry, feel free to reach out or follow along as I continue this artistic exploration.
The Results
Upekkhā
This is the re-orchestration mix I completed after deconstructing the AI creation inside Logic Pro.
Upekkhā (Pali; Sanskrit: upekṣā) is a Buddhist concept that refers to equanimity—a state of mental balance and impartiality. It is one of the four brahmavihāras (sublime states) in Buddhism, alongside mettā (loving-kindness), karuṇā (compassion), and muditā (sympathetic joy).
Rather than indifference, upekkhā represents a calm and even-minded approach to life's ups and downs, allowing one to remain unattached to extreme emotions like desire or aversion.
[Verse 1]
It started with a sound that didn’t feel right—
Just hung in the air, like a beam of light
It filled my mind with a shimmer so bright
Hiding the song that once brought delight
I tried to explain it, break it, contain it.
Told myself it would fade—but it chose to stay
Silence, I lost—with my story in tow,
And once that cracked, all the questions took hold.
[Chorus]
What is good? What is bad?
It’s a shift in perception, we each have…
No fight, no flight,
What’s wrong could be right.
OOM-pe-KA...
OOM-pe-KA...
Let it ring...Let it be...
OOM-pe-KA.
[Verse 2]
How do I fix this? When will it stop?
A part of me broke, how do I get it back?
It is here to stay, and the change has begun.
To meet it the same means the work’s never done.
What if it’s not broken?
What if I just bend?
What if the tale I’ve been telling has come to its end?
I thought I was the sound, the tone, the role.
But maybe I’m something deeper—the pulse of a soul.
When the noise fills my head, and I can’t find the ground,
I don’t push it away. I don’t run from the sound...
[Chorus]
What is good? What is bad?
It’s a shift in perception, we each have…
No fight, no flight,
What’s wrong could be right.
OOM-pe-KA...
OOM-pe-KA...
Let it ring...Let it be...
OOM-pe-KA.
[Verse 3]
The sound’s still here—like breath, like time.
I stopped asking why, and started making it mine.
Not good, not bad—just real, just now.
No need to fix it. I just learned how.
My old story is done. I laid it down.
And in its stillness, I found upekkhā’s crown.
I’m not chasing quiet. I’m not holding back.
I’m walking forward on a brand-new track.
Not good. Not bad.
Just present in awe.
That’s how I found—
OOM-pe-KA.
[Outro – Chant]
OOM-pe-KA...
OOM-pe-KA...
Let it be...
OOM-pe-KA...
Here are other AI versions of this song. I find these intriguing, but they miss the depth of angst I am looking for. I post them because I find it fascinating how the musical setting can change the depth of a tune.
Here are my lyrics set to a tune using SUNO AI.
This version has some joy in it. I rendered this several days later with a prompt that was less dark.
Do these songs have value?
Does it sound like AI? I am starting to hear some tendencies as I render more and more songs. I am finding that "timing," or establishing the emotional grip of silence and the approach to silence through tempo alterations, can be lacking. There is also some predictability in some of the renderings that may not sit well with some musical tastes. For me, I find this process fascinating. What are the key musical principles at play? What makes a good song? How much repetition is needed? What type of harmonic complexity is possible? Why does one melody stick with you while another does not? What makes a great hook? This is a learning tool for me.
Why Do We Struggle to Accept AI-Generated Music?
As AI-generated music becomes more common, many people hesitate to see it as “real” art. Why?
At the heart of it, we’re used to thinking of music as a deeply human act — an expression of emotion, experience, and craft. When an algorithm creates a song, it disrupts that narrative. There’s no artist backstory, no struggle or mastery we can connect with. It feels like the art appeared too easily, without the personal effort we admire.
We also fear what it says about creativity itself: if a machine can make something beautiful, what does that mean for human uniqueness? And who even owns the work — the person who prompted it or the team who built the tool?
But here’s the thing: throughout history, every major artistic innovation (photography, synthesizers, sampling) was first met with suspicion. Over time, we adapted, expanded our definitions, and found new meaning in these evolving forms.
Maybe the challenge isn’t whether AI music is legitimate — but whether we’re ready to evolve alongside it.