The song “Walk My Walk,” by the band Breaking Rust, hit number one on the Billboard Country Digital Song Sales chart in November 2025. What was particularly interesting, and scary, was that it was entirely AI-generated, the first time an AI-generated song topped a US Billboard chart. As of November, it had racked up three to four million streams on Spotify and eleven million on YouTube.
I heard the song a few weeks ago. I liked it, along with other songs by the band (“Livin’ on Borrowed Time” and “Whiskey Don’t Talk Back”) that have also generated big streaming numbers. They all have a distinctive country blues sound. I shared a Spotify link to “Walk My Walk” with family and friends for a listen; you can also hear it on YouTube.
I wanted to know more about the band and the vocalist, but information was hard to find, which was odd given how much basic marketing music labels do to promote bands. I eventually discovered that the song was AI-generated by a creator named Aubierre Rivaldo Taylor. AI music has been creeping onto the music charts recently, and what once seemed like only a looming existential threat to artists is now here, sitting at number one on the charts.
According to the AI chat platform ChatGPT, the song was created with the AI music platform Suno. There are no human performers. Even the singer’s “gravelly Southern drawl,” made to sound like a human artist such as Chris Stapleton, was AI-generated, as were the rugged cowboy still and video images that depict the artist’s fictional persona.
I listen to a lot of music and spent many years working in satellite music content distribution, and I couldn’t tell that it was AI-generated. When I learned it was, I thought of my favorite sci-fi film, Blade Runner, starring Harrison Ford as a futuristic detective tasked with hunting down dangerous synthetic humanoid robots called replicants.
In the movie, the only way to know whether someone was human or a replicant was to administer a test that measured involuntary physiological responses to emotionally provocative questions. The test assessed empathy on the premise that a human’s empathetic response would differ from a replicant’s.
The music industry is going to need a lot of Blade Runner AI detectives to determine whether a song was created by human artistry or AI, a distinction this AI song has blurred. Its popularity has reignited the heated debate about AI and the future of music creation by living and breathing artists.
AI models like Suno are trained using vast amounts of copyrighted music from existing databases without the explicit consent or compensation of the original creators, unless side deals are made similar to those OpenAI has made with newspapers and other content providers.
The use of this data to create new, commercially successful songs, without compensation, is seen by artists and music labels as theft, raising questions about intellectual property rights in the world of AI.
How much of “Walk My Walk” came out of digital fragments of works from artists, dead or alive, and how should they or their estates be compensated? Let the lawsuits begin. Several major entities, including music labels and organizations representing independent artists, have sued Suno, a venture-backed AI company, for copyright infringement.
I am on the artists’ side. Our culture romanticizes the artistic process: the poor, struggling musician pouring out their emotions, scribbling notes and lyrics on scraps of paper, waiting for their big break. We watched iconic artists like Bob Dylan, Joni Mitchell, and Bruce Springsteen live through this rite of passage, and it continues today.
However, when cheaply produced AI-generated music competes for listener attention alongside human-created music, it can and will reduce the earnings potential for human artists, especially new artists struggling to make a living. The music industry’s royalty models and federal legislation are outdated and wildly ill-equipped to handle the rise of machine-generated content.
The music industry as a whole has not engendered much goodwill over the years. The industry culture pushes labels to mimic successful artists to reduce risk, and pop music sounds wildly overproduced and less authentic as a result. Music labels act like banks rather than the creative shops they used to be. Giant digital distributors like Spotify dominate the business, while monopolistic concert companies like Live Nation, ticket scalpers, and other price-gouging bad actors have driven ticket prices out of reach for many consumers.
The word “derivative” in the music world has two meanings: one relating to copyright law and the other to critical and compositional discussions. In the latter, a work is described as “derivative” if it sounds unoriginal, heavily imitative, or lacking in fresh ideas.
Under U.S. copyright law, a “derivative work” is a new, original work that is based on or incorporates substantial copyrightable elements of one or more pre-existing works. This differs from a standard cover song (e.g., Harold Arlen’s “Over the Rainbow”), which is a straightforward interpretation of the original, with minimal changes to the core melody or lyrics.
Legally, you must get explicit permission from the original copyright holder to create and distribute a derivative work; the copyright holder reserves the exclusive right to authorize adaptations of their work. Examples include remixes, mashups, and medleys; musical arrangements that significantly alter the original melody, harmony, or lyrics; translations of a song’s lyrics into another language; and works that heavily sample an existing sound recording.
Tech giants’ rapid innovation has allowed, even encouraged, widespread copyright infringement, and AI will obliterate the quaint definition of a derivative work. Imagine every song ever copyrighted ingested into an AI platform like Suno. The platform analyzes a user’s text prompt describing the style, mood, or genre of the song they want, along with any specific instructions or phrases, say, a request for a cool Santana-like guitar riff. And voilà!
We have to support artists, and we need a new regulatory framework to protect the integrity of the music industry, one that requires, at a minimum:
Mandatory AI Transparency: Clear labeling of AI-generated music to help listeners make informed choices.
Build Forensic AI Models: We need AI tools that can uncover the digital building blocks underlying AI-generated content, enabling us to determine artist compensation.
Create New Federal Regulations: Congress needs to update copyright laws to address the challenges posed by AI, prioritizing artist consent and fair compensation.
The live concert experience is safe from the AI monster, since it is impossible for an AI algorithm to replicate the feeling of seeing your favorite artists perform live.
I recently attended the Natalie Merchant concert at the Avalon in Easton, MD. I have followed her since her days with 10,000 Maniacs. At 62, performing an acoustic set accompanied only by a guitarist, she sounded as strong and authentic as ever. She interacted with the crowd with warmth and humor, something an algorithm cannot do, at least for now. Thank God for that.
Hugh Panero, a tech and media entrepreneur, was the founder and former CEO of XM Satellite Radio. He has worked with leading tech venture capital firms and was an adjunct media professor at George Washington University. He writes about Tech, Media, and other stuff for the Spy.


