106 Comments
Richard Grace:

I wish I could add an image to my comment here. I've added a "No AI" brand logo to my music videos and my CDs that I just had made (and finally received today). You know, the typical red circle with a slash through it over a black AI symbol. Of course, I have just 20 subscribers on my YouTube channel, but you gotta start somewhere and that's what I'm doing. I agree about using some AI tools (I use an internet cloud service for my mastering, and my mixing software has new AI-based mastering and stem separation plugins) but not making and recording music using AI. That's moronic. There are no shortcuts.

Su Terry:

I've done the same, but not on my music, rather on my articles.

Richard Grace:

Nice

Deep Turning:

Some sites like Deviant Art have quasi-banned AI-generated material, requiring it to be metadata-tagged. It's available in its own area. There was so much pushback against allowing it to be freely mixed with non-AI material that they were forced to do something. Admittedly, there are some gray zones, like AI-touchup of natural photographs, a digital version of old-fashioned retouching. But the images as a whole are not AI-generated.

Personally, I think there's a whole range of legitimate uses of generative AI. But the copyright and liability issues must be worked out first. No more "disruptive" man-children like our tech titans ramming this stuff down our throats with deceptive and kleptomaniac business models.

Richard Grace:

I subscribe to Adobe Stock, and they are starting to overwhelm the categories I typically buy with AI-generated crap. I think it's time for me to actually push back on this with their customer support. At least they clearly label anything that IS AI-created, but it's super easy to pick them out anyway. All of it is garbage, it's not acceptable, and it cheapens the service. I won't have it and won't buy it, ever.

Deep Turning:

Tell them about Deviant Art. There are ways to cope, but an early, firm stand is needed. Make the categories clear: pure AI, digitally auto-retouched, digitally/manually retouched, straight originals. Ideally, the AI material should be digitally watermarked with date, originating human, and the software that human used. Humans in the loop!
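
For what it's worth, basic provenance tagging of this kind is already easy to sketch, even if it's nothing like a tamper-proof watermark (plain metadata can be stripped; a real watermark would have to live in the pixels). A minimal sketch in Python using the Pillow library, with made-up field names since no standard exists yet:

```python
# Minimal sketch: stamping provenance metadata into a PNG's text chunks.
# The "Provenance:*" field names are illustrative only, not any real standard,
# and text chunks are easily stripped - this is bookkeeping, not a watermark.
from datetime import date
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_image(src_path, dst_path, author, tool, category):
    image = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("Provenance:Author", author)        # originating human
    meta.add_text("Provenance:Tool", tool)            # software that human used
    meta.add_text("Provenance:Category", category)    # e.g. "pure AI", "retouched", "original"
    meta.add_text("Provenance:Date", date.today().isoformat())
    image.save(dst_path, pnginfo=meta)

# hypothetical file names and values, just to show the shape of the record
tag_image("artwork.png", "artwork_tagged.png",
          author="Jane Doe", tool="Photoshop",
          category="digitally/manually retouched")
```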

And always keep in mind, whatever the worthy private, voluntary initiatives, political and legal action is needed to sort out the issues of copyright, liability, and deceptive business practices. We need a Supreme Court that understands enough not to perpetrate and perpetuate travesties like the recent decision to allow continued government-backed online censorship (which is illegal in the normal universe that we've always lived in). The idea that online is a "hole in reality" where normal laws and democratic culture magically don't apply has to end.

Mercia52:

You could always label your music videos and CDs 'proudly an HI product'.

Richard Grace:

I like the simplicity of a logo - it has more punch and visibility given the context of what I do

Mercia52:

Leonardo's Vitruvian Man

Richard Grace:

Exactly

[Comment removed, Jun 25]

Su Terry:

Using a tool to check your grammar and spelling is the same as a carpenter using a level or a tape measure. No worries John!

The Bit Barron:

I can hear myself using this in upcoming conversations: "it's the same as a carpenter using a level or a tape measure." So thank you!

Limne:

I'm glad Ted recognized the necessity for point (3) among artists. ChatGPT may be a lousy encyclopedia, but it's a much better spellchecker. That goes double for when you need to write code, which is something I suspect most artists hate. It's not just programmers wanting art without paying artists, it's artists wanting multimedia who can't afford programmers. Ideally, we could all afford each other - but that's a much broader income inequality issue. Best of all would be fostering a culture of collaboration where creative people of all kinds preferred working together instead of with bots, but then we'd be replacing social media narcissism with actual community and trust in shared ventures, which big tech doesn't want.

The real question is - would you be ashamed to admit to the way you used AI? If you're an artist and you post an AI-generated image as part of your "brand," that's sad. If you re-traced an AI image, ditto. But using AI as a glorified Pinterest-style mood board? I really can't see myself raising ire against reference images and inspiration, whether they come from photography, illustration, sculpture, movies, wallpaper, bubblegum wrappers, or, yes, AI hallucinations. What we need is a culture of transparency where the processes of human involvement take priority.

Richard Grace:

Great question. I got concerned when I realized I wasn’t going to be able to afford a mastering engineer ~ a year ago, when most of my mixes were completed, it was at least $100 per track with only a single turnaround. What if you discover problems after you receive the master file? (Sure, that NEVER happens if you’re a pro 🤣).

I discovered there were some cloud-based subscription services that solved both the cost and the editing issues. Mastering a song mix inevitably uncovers things that you may not notice during composition and initial recording and mixing. That’s why it’s so critical to the process.

Yeah, I wouldn’t have an issue with admitting I use a cloud AI mastering service. I liken it to playing all my own instruments and programming a drum machine. I can’t pay someone to master for me if I keep finding issues, and I can’t hire a drummer, much less set up a complete recording session in my tiny office for a drum kit. (Shudder.)

As for collaboration, I would really appreciate a partner in crime, but many musicians are either flakes or just high-maintenance people whom I do not have the spare energy or time to sustain. (I may in fact have found someone who could be a real partner; but establishing a partnership is akin to meeting a person in the flush of first love - one must exercise care not to frighten the game off before you consummate the relationship!) I sincerely hope to find a real collaborative partnership, so we’ll just have to see.

Sorry, I wander.

Tim N:

Some (most?) mastering engineers will send you MP3s of at least a couple of tunes before you are charged anything, so you can check it out.

[Comment deleted, Jun 25]

Limne:

I'm sure you've never referenced a photo, or an art-book. I'm sure that you learned to draw by constructing the figure in a vacuum and never gained inspiration from another artist, and that you emerged fully formed, and were never placed in a collaborative environment where you were asked to do things "like this." That is a very unique position to be in.

I recently graduated with a Bachelor of Fine Arts where, of course, all of the students spat on AI without knowing the least thing about it. Of course, any time they attempted verisimilitude, they'd be recopying photos they found online. But an AI image that at least has the decency to mash together millions of points of "inspiration" into a generalized facsimile gleaned from common elements amongst them - that is forbidden.

And if using the best available spell-checker is lazy - it is because I have better things to do with my time than thumb through a paper-dictionary double-checking words - or should we ban dictionaries as cheating, too?

[Comment deleted, Jun 25]

Limne:

If you're from Ottawa, there's a very good chance we graduated from the same school. But, with all due respect to our common community - you are being an ignorant crank. A sensible person does not accuse strangers on the internet of being criminals based on flimsy ideology.

First of all, plagiarism is many things, but it is not a crime, it is a civil matter. That's why, unlike with theft, conversion, or other actual property crimes, you have to sue the other person and prove damages. There are no police involved. I further note that there exist no cases in any relevant jurisdiction characterizing anything I have done with the assistance of AI tools as plagiarism - namely because such a claim would be patently absurd, and you'd understand that if you knew the least thing about the law, how AI works, and how it figures into the modern artistic process.

I would hazard a guess that you have no proof at all that you've been so much as scraped by big tech. Go on - show me where you are in the LAION dataset. It's all publicly available, and that case is actually being litigated. But I get the sense you're likely another embittered luddite - the type of person who saw some blurry, grainy, black-and-white art photography and assumed that because it bore a passing resemblance to your blurry, grainy, black-and-white art photography, it must have been stolen from you personally. I mean, it could always be that you're just not very original, but some people are indeed comforted by the idea that their work must have been "stolen" by a jealous rival, or a big corporation, or the evil robots. Moreover, I suspect that if you did graduate from the likes of Ottawa U in the '70s (this is just a guess here) I would have absolutely no interest whatsoever in your work, let alone stealing it.

Besides, and as low-tech as I suspect you to be, this is something you ought to understand: the tool for plagiarizing other artists' work is nothing from AI - it's Google Image Search, where you can find the copyrighted work of anyone who's anyone, without a license, free of charge, in its exact original form.

If you want to know how AI works (I suspect not based on your palpable, calcified lack of curiosity) I've described a little of it in some posts below, but in summary, even to say that AI is a skilled mimic of various artistic styles is a vast over-estimation of its capacities that does a disservice to real artists. If the thing can't generate a proper phony of a master like Vermeer, I can't imagine why you should feel so threatened about whatever it is you do.

Richard Grace:

Well, you know Grammarly is a machine learning tool going on what, 20 years now? You’re not compromising yourself by using it. There are folks in existence who think using a DAW is a moral compromise and that your recording isn’t “real” unless you’re using only vintage analog equipment with 2-inch tape and have a recording contract. And that you’re obligated to pay specialists every stage of the way to shepherd your music across the finish line, whatever that is. Which is, you know, nonsense. I don’t miss the old barriers to entry.

The essential element is that there is a creative human actually formulating, writing/designing/building, and producing in whatever chosen format or medium is required for the art to become a real thing.

Limne:

When people go asking ChatGPT to do that same proof-reading work - folks still get antsy. I mean, there's also the fact that you can get AI to write a paragraph for you and then proofread it yourself so it doesn't sound like it was written by a committee of public relations analysts. Even by that point, people get nervous. Most people can't tell 15-year-old technology from the emerging stuff and are quick to attack anything that's new to them.

Richard Grace:

I also can’t help thinking that the big AI push is not going to amount to as much as Big Tech thinks, for the following reasons: 1) major issues with excess resource usage; 2) diminishing returns in quality the longer you use natural-language prompts; 3) companies throwing money at AI to remake their businesses may be starting to perceive those diminishing returns; 4) persistent violations of intellectual property combined with increasingly loud rejection of the validity of copyrights. None of these things are making them any friends. Techies couldn’t care less about creatives; to them we’re just people who don’t matter. But eventually the Big Scam becomes too obvious to ignore. Just like Web 2.0. I’ve done pretty well in stocks this year but I am definitely keeping an eye on things.

Limne:

Large language models are in a hype cycle like the dot-com bubble, crypto, NFTs, metaverses, and those stupid VR goggles. There's nothing in the architecture that suggests a transformer network will ever be capable of general artificial intelligence. The companies adding chatbots to their products aren't being helped by them - they're just waving something shiny in front of feckless investors.

I think the resource consumption issues will come down with innovations like Samba, or some other iteration - but that just means that every open-source gearhead is going to want to run it on their cell-phones, which will leave nothing for big tech which mostly brings scale to the table. As for intellectual property - I wish the average consumer cared, but they probably care more about the ways the current IP regime fails them than artists, and I suspect the tech giants will try to exploit that to forge a coalition against creative workers. The fact that everyone just gives so much work away for free on social media is a chilling sign of things to come.

That being said, I think there is a revolution in AI in the works. If you know the "Thinking, Fast and Slow" theory of the mind, our current AI is pretty good at thinking fast. When it can "think slow," then they'll have something. They're working on it.

Richard Grace:

♥️

I think Apple's involvement is really going to prove whether this stuff is viable. They are also heading in the very direction you note by having much of the computation take place on your local devices.

As a musician, sharing my work on social media freaks me out. I consider YouTube the best of a bad lot of options.

Dheep':

Saying you can't spell is like what my Dad did for 50 years: "Oh, I can't cook." Yeah, because he had my Mom to cook for him and he didn't want to bother learning to cook.

Oh, he did learn one dish: peanut butter on toast.

erg art ink:

One of my favourite questions: “Who’s cooking dinner?” Cuts right through the wanking every time. We had a phrase in my industry to describe a level of self-flagellating hubris: “Stick milk.”

Amanda Riddell:

Do you see a distinction between using machine learning versus generative AI? For example, Peter Jackson used machine learning to isolate those tracks in the Beatles film, and I use an AI-based engine called NotePerformer to realise my scores (it wasn't initially AI, but the new version uses AI).

To me, that's quite different to typing in a prompt, then out pops a tune.
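
(For the curious: the kind of track isolation Peter Jackson's team did is now roughly within reach of open-source tools. A rough sketch, assuming the Spleeter library and a placeholder file name, and making no claim that this matches their actual pipeline:)

```python
# Rough sketch of machine-learning stem separation using the open-source
# Spleeter library. "song.mp3" is a placeholder; the pretrained "4stems"
# model splits a mix into vocals, drums, bass, and other.
from spleeter.separator import Separator

separator = Separator("spleeter:4stems")
separator.separate_to_file("song.mp3", "stems/")
```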

Svein-Gunnar Johansen:

I believe what you are asking here is where the line goes. My belief is that we will eventually all be using AI for something, whether we have an opinion about it or not. In your example, the answer is: Yes! This is AI. But it's a tool, and it's useful.

The threat from AI in this context is - as in most things - whether the powers-that-(currently)-be side with creatives who want to make some money, or with the "disruptors" whose goal is to make ALL the money.

PS: With regards to "prompt, then out pops a tune": I think it's high time for some appropriate legislation on the matter. Now, I am no lawmaker, but if I were to advise anyone on how to solve this through legislation, I think the following set of rules would get us a fair bit of the way there:

1. Commercial AI models must maintain a public database of every individual piece of art they have trained on, and who created it.

2. Every AI-generated piece of media should keep a record in its metadata of which artists’ works (and in what proportion) were used as references (see the sketch after this list).

3. Artists whose work and style are being used in commercial AI-generated media should receive a viable royalty payment based on this metadata.
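
To make rules 2 and 3 a bit more concrete, the record itself wouldn't need to be anything exotic. A minimal sketch of what such a metadata record could look like, with field names that are purely my own illustration (no such standard exists today):

```python
# Minimal sketch of the per-work attribution record proposed in rule 2,
# plus a naive royalty split for rule 3. All names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AttributionEntry:
    artist: str      # creator of a referenced work
    work_id: str     # identifier in the public training database (rule 1)
    weight: float    # estimated share of influence, 0.0 - 1.0

@dataclass
class GeneratedMediaRecord:
    media_id: str
    model: str
    created: str                                   # ISO date
    references: list[AttributionEntry] = field(default_factory=list)

    def royalties(self, revenue: float) -> dict[str, float]:
        """Split a revenue amount across referenced artists by weight (rule 3)."""
        total = sum(e.weight for e in self.references) or 1.0
        return {e.artist: revenue * e.weight / total for e in self.references}

record = GeneratedMediaRecord(
    media_id="track-0001",
    model="example-music-model-v1",
    created="2024-06-25",
    references=[AttributionEntry("Artist A", "db:123", 0.6),
                AttributionEntry("Artist B", "db:456", 0.4)],
)
print(record.royalties(100.0))   # {'Artist A': 60.0, 'Artist B': 40.0}
```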

I have written some more about this, and about how the music industry has traditionally had some practices that we could build upon, here:

https://backtobasic.substack.com/p/a-pragmatic-approach-to-the-current

Limne:

Number 2 is not a tractable problem because AI simply doesn't work like the kind of stochastic auto-collage people have in mind. Once an input goes into that black box, it dissolves into the mathematics of pattern recognition. I've not yet seen an image generator that can so much as generate a satisfactory Mona Lisa, the most frequently reproduced painting in history.

The AI doesn't know what a hand is - it uses fancy math to suggest what pattern differentiates abstract noise from every image it has seen with a label involving a hand, according to some mathematical approximation of what a word is. That's why AI is terrible at drawing hands. The mathematical "gist" of billions of examples labeled according to no coherent system is not what those of us who have and use hands would recognize as a hand.

That's why you can make up artist names and get works for those prompts. The works of "Minamato no Ichigo von Bismark III of Transylvania" are to be found in the generator because, mathematically, that nonsense name necessarily has a relationship to real names. The same thing happens if you put in the name of an artist who does exist but wasn't in the training set: you get results for that name that look nothing like the real artist's work. The generator will give you results for a prompt like "SFNS$^F&NSIFsj&is[jrf9830343" for that matter.

Given that the machines produce false positives and false negatives, and can't distinguish them from outputs that genuinely resemble the style of an actual artist, there's no metric for the level of inspiration involved. If you tried to define one, you'd end up with the same labeling problems you get everywhere in AI - where the machine mistakes a banana for a cat.

There's also the issue that there are billions of images in, for instance, the anime style. AI relies a lot on the generic commonness between inputs like that. If you could pry anything useful out of the black box - what do you do with a metric suggesting that thousands of people draw basically the same way and aren't special little snowflakes? What do you do when prompting for an artist clearly gives you work in the style of fans of that artist, suggesting most of its "inspiration" was itself plagiarized? From what I've seen, AI isn't even that good at mimicking art styles, because it reduces everything to a generic mass - Gustav Klimt and Egon Schiele, Claude Monet and Pierre-Auguste Renoir, Edgar Degas and Henri de Toulouse-Lautrec, Piet Mondrian and Kazimir Malevich, Vincent van Gogh and Paul Gauguin, Andy Warhol and Roy Lichtenstein, Frida Kahlo and Diego Rivera: as far as the AI is concerned, these kinds of pairs might as well be a single artist who works under different names.

Su Terry:

Astute assessment.

Svein-Gunnar Johansen:

If that's the case, then properly attributing where art is taken from IS going to be a problem.

I do suspect that this data is findable in the models, though. I remember reading during the initial hype phase that AI companies had to tweak the models so as not to create perfect copies of art, but merely approximations. Meaning that connecting creators and works was doable once, before it got Gaussian-blurred into oblivion.

Of course, the current fluid state of the Internet being what it is, I can no longer find any reference to this claim.

But I believe the right amount of pressure applied to the AI companies in the form of regulation will eventually get us there again.

Limne:

In the old days of StyleGAN-type models and so on, "over-fitting" was indeed a problem. If you trained a big model on seven images, it would pretty much spit out those same seven images. The models themselves were often larger than the dataset, so you ended up re-encoding those images much the same way that a bitmap can be converted into an imperfect, compressed JPEG. With much larger datasets, this was generally not as much of a problem, because the "latent space" ended up full of interpolations that mixed thousands or even millions of those images together, all at once, in various ratios. You'd give it lots of dogs, lots of cats, and get mostly dog-cats. You could often, but not always, find a facsimile of an original picture by probing the latent space for generated images that mathematically approximated it - but, oddly, you could often give it similar images it had never seen before and get a pretty good approximation: I gave my own photo to a StyleGAN trained on photographs of faces and, sure enough, it "found" a facsimile of my face with some slight alterations to my hair and ears, and so on. It's a bit like how law enforcement can now DNA-fingerprint criminals based on DNA a cousin or the like uploaded to one of those ancestry websites (see the case of Joseph James DeAngelo Jr., for instance).
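
If you want a feel for what "interpolation in latent space" means, it really is just weighted averaging of the codes that get decoded into images. A toy sketch, where the generator function is a stand-in rather than any real trained model:

```python
# Toy sketch of latent-space interpolation: blend two latent codes in various
# ratios and decode each blend. "generate" is a stand-in for a real generator
# (StyleGAN or similar), not an actual model.
import numpy as np

def generate(z):
    # placeholder for G(z) -> image; a real generator maps a latent vector to pixels
    return f"image decoded from latent code with norm {np.linalg.norm(z):.2f}"

z_dog = np.random.randn(512)   # imagine this code decodes to a dog-like image
z_cat = np.random.randn(512)   # and this one to a cat-like image

for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    z_mix = (1 - alpha) * z_dog + alpha * z_cat   # the "dog-cat" blends in between
    print(alpha, generate(z_mix))
```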

Diffusion-based and transformer-based models are quite a bit different. One works on the principle of entirely "hallucinating" results out of noise, and the other is based on a similar principle to a Markov chain text generator any programmer could have written for you 50 years ago, where you randomly predict the next word based on previous words: it could reproduce an input, but only in the same way that a million monkeys on a million typewriters could eventually write Shakespeare.
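
That 50-year-old Markov chain trick fits in a dozen lines, which is part of my point - a sketch:

```python
# Sketch of the decades-old Markov chain text generator: record which words
# follow which, then walk the table by random choice. No neural network needed.
import random
from collections import defaultdict

def train(text):
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def babble(table, start, length=20):
    word, out = start, [start]
    for _ in range(length):
        if word not in table:
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)

corpus = "to be or not to be that is the question whether tis nobler in the mind"
print(babble(train(corpus), "to"))
```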

I think the training set is much more fertile ground for holding tech accountable, especially because that's information the companies need to keep well-maintained for the sake of their models. But the fact that a computer can recognize my face, or my DNA, or my voice, without me even being in the training set is much, much creepier. Even without being in the training set, there's a good chance the models could act as a replacement for you anyway if enough people opt in, or make their work open source, or enough of the big evil companies who own most artist's work decide to sell it for training purposes.

Svein-Gunnar Johansen:

Compensating artists for using their works in training sets is definitely a good place to start.

Richard Grace:

To Svein: Apple’s Logic Pro has an AI-informed Mastering plugin that I’ve tested and found to be more than sufficient. It also illustrates the process of mastering on a fairly basic level. IMV, for radio-ready mixes you could actually use this alone. It probably is a compromise but I did pay for the software license, and if I let artificial scruples stop me I wouldn’t now have a video channel plus an album and two singles released.

Agree 100% about how AI regulation could be managed.

polistra:

To my mind the important distinction is where the tool is based. If a tool is entirely inside your own computer and never "phones home" to the web, it's OK. If the tool is based in a central server, you're helping Altman gain more material for his crimes. All servers are ultimately connected together, and the tech tyrants are in charge of the connections.

Svein-Gunnar Johansen:

That is a somewhat flawed distinction.

For instance: With Stable Diffusion AI, you can download not just one but several AI models, which can run on your own computers, allowing you to generate as much as you like without "phoning home" to any servers.

These models ARE, however, still trained on art from human artists, and what you generate from them is problematic in the same way as if it were created with an online tool like MidJourney or Udio.
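
To be concrete about the local part: once the weights are downloaded, generating takes only a few lines and no request leaves your machine. A sketch using the open-source diffusers library (the checkpoint name is just one commonly available example, not a recommendation):

```python
# Sketch: generating an image entirely on your own machine with a downloaded
# Stable Diffusion checkpoint, via the open-source diffusers library.
# After the weights are cached locally, nothing "phones home".
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # one commonly used checkpoint; many others exist
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # assumes a GPU; CPU also works with the default float32

image = pipe("a watercolor painting of a lighthouse at dusk").images[0]
image.save("lighthouse.png")
```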

Richard Grace:

Not every AI company is run by Sam Altman or a tech oligarch, although it sure seems that way sometimes.

D. D. Wyss:

AI is useful for many things, but not creative endeavors. Leave creativity to those who can actually be creative, namely humans.

The Delinquent Academic:

Very much agree; another brilliant piece Ted. I've suggested some of what you say in some of my recent articles - and have been moved and inspired (and have directly quoted you!) by what you are saying about culture writ large.

Something I want to add: As consumers, our actions are more important than they once were. We - especially younger generations - need to shake the 'Napster' and 'LimeWire' habits we grew up with (illegally downloading music and other art off the internet) and become art consumers with intention. We as art-lovers have developed taste, and we need to employ that developed taste hand in hand with real monetary patronage of the Microculture. We need to become Micropatrons, proud of our art collections. I outline this mentality here: https://open.substack.com/pub/hemibowman/p/how-to-be-a-micropatron?r=b87nb&utm_campaign=post&utm_medium=web.

I also discuss the artist's journey in the face of AI - to go all in on one's humanity, and leave nothing for the swim back, even in the face of overwhelming odds for our new artist's dystopia, here: https://open.substack.com/pub/hemibowman/p/the-artist-against-the-machine?r=b87nb&utm_campaign=post&utm_medium=web.

It is up to all of us to help independent artists first survive and then flourish.

Jim Frazee:

Completely agree. And I shouldn't have to say it, but AI to music is like Kraft Velveeta to cheese. Do we really want music credits to read like the label on this product: SKIM MILK, MILK, CANOLA OIL, MILK PROTEIN CONCENTRATE, SODIUM PHOSPHATE, CONTAINS LESS THAN 2% OF MODIFIED FOOD STARCH, WHEY PROTEIN CONCENTRATE, MALTODEXTRIN, WHEY, SALT, CALCIUM PHOSPHATE, LACTIC ACID, SORBIC ACID AS A PRESERVATIVE, MILKFAT, SODIUM ALGINATE, SODIUM CITRATE, ENZYMES, APOCAROTENAL AND ANNATTO (COLOR), CHEESE CULTURE, VITAMIN A PALMITATE.

I didn't think so, but if there's money to be made, there's always some toxic waste to sell.

www.jim-frazee.com

Albert:

Thank you Ted for that brilliant piece. I'd like to add one thing that could also help solve these issues. I'm co-founder of an MRT (Music Recognition Technology) startup, and we've started to investigate heavily the opportunities for detecting AI-generated music. At the moment this is pretty straightforward, due to the quality of the audio, but as we all agree, I suppose, it is not going to stay like that. Going a step further, we want to be able to detect the music that has been used to train the model. It's all in research at the moment, but I can keep you updated in the future on where we stand. ✌️
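
(I can't share our actual approach, but to give a flavour of the "quality of the audio" point, here is a toy illustration of the kind of simple spectral statistic one might glance at, using the librosa library. The file name is a placeholder, the threshold idea is arbitrary, and this is nowhere near a real detector:)

```python
# Toy illustration only: a single crude spectral statistic, not a real
# AI-music detector and not our production method.
import librosa
import numpy as np

y, sr = librosa.load("track.wav", sr=None)   # placeholder file name
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr, roll_percent=0.99)
print("99% spectral roll-off (Hz), median:", np.median(rolloff))
# A suspiciously low roll-off relative to the sample rate could be one crude
# red flag about audio quality, but real detection needs far more than this.
```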

Cyndi Kane:

Amen, and amen. Thank you for eloquently advocating for musicians and songwriting. ❤️

George Neidorf:

Do we really need recorded music? Can we exist, as people did for hundreds of years, with only live music? What if, instead of buying recorded music, we used that money to listen to live music and supported our local artists, be they in a club or an orchestral hall? Of course, there would be a lot of music that we'd never hear, but is that a tragedy? Whether it's music, cars, food, or whatever, if you don't like something, don't buy it.

Dheep':

Exactly - don't buy it. But very few people seem to be able to deny themselves anything nowadays. Principles? To stand by? What a joke, huh?

George Neidorf:

Not having money is a good deterrent.

Timothy Bailey:

This feels like an interesting and fundamental question to me. I have wondered if the second iteration of punk will eschew recorded music altogether. I also wonder if it’ll be primarily acoustic and performed exclusively for small audiences. A revolt against scale.

George Neidorf:

It's already starting to happen in towns like Trinidad, Colo., and in other Midwestern towns.

Graham Nolan:

I think you probably do need recorded music. I am lucky enough to be able to afford some live events, but I wouldn't have gone to them if I hadn't been signposted that they were there (by YouTube, amongst others). I am so glad that I was, because after too long a pause I have started to go to gigs again. And I can't stop grinning when the artists play. Just joy.

George Neidorf:

That's a point. I would just go to whatever local venue was having music and take a chance that I would enjoy it. If an artist can produce music that people want to hear, they will always draw an audience. It only takes a few people to tell their friends, and in a short time the venue will fill up. At least that's the way things were in the past.

Jim Frazee:

And this just in: U.S. record labels are suing AI music generators, alleging copyright infringement

https://www.nbcnews.com/tech/rcna158660

VMark:

That’s what it takes. Slow it down with billing.

Daniel Brigham:

I read the article, and the litigation alleges that the AI companies used copyrighted music to train their AI. And surprise, surprise, the AI companies say, "No we didn't." Record labels say, "Yes you did." I personally think the genie is out of the bottle and litigation will go nowhere.

James james:

Isn’t this the same strategy they used with Napster?

Joel Goodman:

LEGISLATION. Ted, your ideas are right on the money (no pun intended) and are a great start, but unfortunately they won’t hold, even if the majors draw that line in the sand, without making these LAWS. If they are not laws, and no legal protection is put into place, then someone else will come along and do it.

VMark:

I so wish there was something we could do. I remember well the days of the AFM and SAG strikes. I remember feeling like the Lone Ranger holding the line. Music is its own reward; people want to work and will take less. It’s reasonable to think that after 50+ years of drum machines, AI will be just as ubiquitous, cutting into more than just the drummers’ revenue stream. Part-human, part-hip-robot pop bands are in our near future. There’s nothing to stop this, but it will ebb and flow. Man-made music may be preferable, as it is now, but we’re only talking about artists. The industry was also musicians and arranger/composers who worked in the trenches. When you can paper the walls with AI choices, the bottom line for humans comes down. You can’t raise a family on that, and they won’t. Musician revenue streams will fade away. Fame will always remain as a business model, but that’s not the entirety of what we called the music business. We can’t turn AI off and we can’t stop it. Unless we can make it hurt financially, it will outpace us in the human race.

Daniel McBrearty:

We all wish you ran a major label or two, Ted, but you don't. I've been saying for a long time that the real villain of the last 20 years was not Spotify/Ek (much as I dislike both), but the major labels, who were utterly, damnably, inexcusably asleep at the wheel while the biggest technical and social revolution in 500 years was blowing up.

Perhaps it's because they grew so fat and stupid on the profits of easy CD reissue. We will never know. Anyway, of all places to hope for salvation - that is the most hopeless.

We are on our own in this, that's the harsh truth. The best solution I see is to try to turn a threat into an advantage by emphasising the "real" nature of our work. McDonald's dominates the fast food world, but not everyone eats there.

That said, I wouldn't be surprised if a bot somewhere is ingesting these words for later scrambled regurgitation already.

Chris Buczinsky:

We are not fighting AI or any single technology, and as long as we continue to be fooled into thinking so, we will always be fighting a rear-guard action— reacting, with lots of handwringing, after the fact.

We are fighting an idea—an old, hoary one that goes back to the advent of the industrial revolution. Dress it up all you want with shiny new toys, the tech gurus are spreading the old time religion of utilitarianism.

For these 21st-century acolytes of the 19th-century faith—and ALL of us suffer from the infection of this idea to one degree or another—everything is a TOOL to GET something else; nothing has intrinsic value.

How could such a pinched philosophy of life ever GET music?

Chris:

Humans crave personal connection and you can’t connect to a machine that hasn’t lived and loved and laughed and cried.

+ and -:

Music has many purposes: listening, dancing, protesting, praying, meditating, selling things, and watching people perform. Record company producers only care about making money from musicians. That is what will lead to their downfall, or open the door to producers who want to advance music and artists. Streaming services will embrace AI, since they can make money from tracks that cost them nothing. They already do this with song mills that put out multiple tracks, paying the writers flat fees with all streaming revenues going to the platforms. Now they will cut out the writing mills and just have their employees writing prompts, using AI to flood their platforms, thus diluting the streaming revenue for real artists. The question is what real creative musicians will do to promote their music and find an audience so they can make a living with live performances, which AI will never be able to do. TikTok, YouTube, X, I don't know. Not streaming, other than Bandcamp. This is a real disruption.

Preston:

As a musician, I feel incredibly strongly about this. AI can do a lot of things, but the genuine emotion and uniqueness that comes from musical expression is not one of them.

Will McCulloch:

Just wait …

Preston:

While your confidence is noted, there are subtleties in human performance that will be extremely difficult for AI to imitate. For example, even professionals make the slightest of mistakes. When it comes to singers, I can't imagine AI being able to emote the way some of the best soloists do when performing operatic arias. Furthermore, AI cannot connect with an audience when it "performs". The audience is as much of a part of performances as performers are in many ways, and AI simply doesn't have the ability to make human connection, let alone show empathy. I am sure it will continue to evolve in ways that will be helpful rather than make us lazier and harm our brains. But, when it comes to live performance especially, AI will have an incredibly difficult time feeding off of the emotions of the audience and responding in kind, which is what makes live performances so intoxicating.

erg art ink:

AI does not comprehend the ethereal. The programmers and their masters certainly don’t. Most people are not capable either, which is why as an audience we love to enjoy a performer who does. AI will never produce a sublime original, simply endless copies of what is already known.

Preston:

Precisely.

Will McCulloch:

My reply was very much tongue in cheek. To your points, I recommend all listen to Cal Newport’s podcast episode “306: Defusing AI Panic” https://podcasts.apple.com/us/podcast/deep-questions-with-cal-newport/id1515786216?i=1000660043961 We are indeed a long way away from the computer having human intelligence or emotion.
