Five Seven Five: Day 7 – Training and Copyright

SD Prompt: Training and Copyright, concept art, cinematic lighting

Yeah, I missed a day. That’s because I found myself suddenly needing to hop into the car and drive to Tucson. As always, life happens.

This post is part of a collection of thoughts I’m putting together as a companion to Five Seven Five, a stand-alone book that uses haiku I wrote as input to an AI art generator. It was a super-fun project to build. You can get it by backing my Kickstarter, or you can pre-order it at several online retailers.

Fun or not, though, the use of AI in any endeavor is filled with intrigue. So as I came closer to launching the project I decided I wanted to explore the bigger picture of how AI is impacting our world, and specifically the creative world I dwell in.

Today I’m finally getting to what I think is the heart of the matter: how AI is trained.


I am not a lawyer. More important, I am not a judge. And when it comes to the training of AI, the situation today requires lawyers and judges to finish their work.

As I type this, news has come that a class action suit has been leveled against several key AI art generators, citing their use of images scraped from the internet and fed without authorization into the learning algorithms of the thing we’re calling AI.

Remember Rick Beato’s question earlier? Who will be paid? Pretty soon, we’ll know.

Let me be clear about this part: I am on the side of the artists. I hope we win. Paying us for the use of our work is the fair thing for generative AI companies to do, even if they aren’t legally required to. Reiterating: I am on the artists’ side. I think people should be paid for their work.

To be direct, however, I’m not confident the artists are going to win, because I’m unconvinced that the companies making these engines are breaking the letter of the law, and because in the end, unless you’re one of many underrepresented classes, the letter of the law is how courts generally work. My view is bleak because, as I understand these AI systems to work, these companies are not directly copying the works they use to train them. The system has not stored a copy of “Starry Night” in any direct way. That would seem to make the copyright claims, as I understand the law to be written, very difficult to support. There is also a fair use piece of the conversation—which again is interesting, but not something I see the courts deciding on the artists’ side.
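
For anyone who wants that point made concrete, here is a toy sketch of my own (a hypothetical illustration in Python, not anything from Stable Diffusion’s actual codebase). The entire learned state of a generator is a fixed pile of numbers whose count never grows with the training set, which is why there is nowhere for a literal copy of “Starry Night” to live.

```python
# Toy illustration (not Stable Diffusion's actual code): a generator's learned
# state is a fixed-size collection of numbers, not a library of stored images.
import torch.nn as nn

# A miniature "image generator" -- everything it ever learns lives in these weights.
model = nn.Sequential(
    nn.Linear(64, 256),
    nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3),  # emits a flattened 64x64 RGB image
)

n_params = sum(p.numel() for p in model.parameters())
print(f"Learned state: {n_params:,} numbers (~{n_params * 4 / 1e6:.1f} MB as float32)")

# Whether you train this on one image or a billion, the count printed above never
# changes; training only nudges the values. There is no slot where a painting is filed away.
```

Real models are vastly larger, of course, but the principle is the same: the training images nudge the numbers and then go away.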

As a partial aside here, I’ve found it interesting to see reports of various artists attempting to duplicate their work using AI generators, and getting really close. They point to these as examples to say their art is in there. I guess that’s true enough as far as it goes. Kind of. Humans have been dealing with forgery for as long as art has been deemed to hold value, and while it’s possible for AI prompters to come up with close representations, that does not actually prove there’s a copyright infringement happening in the storage of the data. Copyright infringement in the prompting, though? Yes. If one attempts to sell a close representation of an existing property, whether made by AI or not, that person is playing with copyright infringement. If you attempt to sell a hand-drawn representation of Mickey Mouse, be prepared.

But the copying of styles—which is something AI generators are very good at, probably because of the way they store data—is a different thing. Artists copy other artists’ styles all the time. That’s a time-honored method of learning. As a writer, for example, I can find lots of publicly available material in which established authors advocate working to mimic the writers who inspire them. The entire “who influenced you” vein of questions fits this bill. Some even advocate writing other writers’ prose out by hand to get a feel for their rhythms and pacing.

Sadly, if I were betting and this class action suit goes all the way to a verdict, I’d say the artists will lose.

That’s the conversation around the front end of the AI generative process—which is where the angst is pointed today. I think that direction is a mistake, though, because the economy of the system is a bit hosed up on the other side too, and it’s on that other side where I think the most ground can be gained in protecting artists’ ability to make their livings.

I think everyone is talking about the training process because the back end—who owns the copyright on what an AI system produces—has already been decided. For now, anyway… Roe v. Wade tells us nothing is forever.

Though some of the tech companies are trying to pretend otherwise (more on that in a moment), no one owns copyright on the output of an AI-generated process. In other words, all output from ChatGPT is public domain. Same for Midjourney, DALL-E, and Stable Diffusion. Same for Sudowrite. Everything is public domain. AI-generated art belongs to no one. To answer Rick Beato’s question: no one gets paid for the use of the work, but everyone can use the work to get paid.

But not so fast there, Baba Looey.

In the last episode of my little project, I said something about trusting tech bros to always find a way to make money. In this case, most of them are now granting a “personal use” license for their engine’s output while selling commercial licenses to those who pay to use the generators. Talk about freaking cheeky, right? By doing this, the companies are trying to cast themselves as a governing body, creating copyright licenses where none exist. This, to me, is copyfraud, and it’s a place folks ought to be focused on.

I should note that this is among the reasons I used the public beta of Stable Diffusion for my project. Its terms of service—which I saved a copy of, just in case—expressly note that the company makes no claims on the use of such output.

If I were predicting the future, I’d say the AI companies will eventually move to a model like the one DreamStudio uses. When you join, you get a small amount of usage credit for free; to keep using the system over the longer term, you pay for time. In other words, you pay them for their system, not for the output of said system. This seems fair. Despite my sometimes derogatory comments about capitalism, I am at heart all for capitalism in certain markets. Just as artists spent labor making the fodder for these tools and should be paid for it, the tech bros spent labor making the tools and should be paid for that.

This dynamic is at the root of my view that AI generators of artistic output of any kind should result not in the death of artists of any flavor (though many will, indeed, perish), but in a radical change in the way artists build their career paths. From my view as an independent creator, the economics of this competition comes down to the cost of AI tools (and my time) versus the price of acquiring or contracting for human-created images.

You can see the dynamic play out in the Written Word’s survey of writers and how independent writers use commercial artists today. Bottom line: once a writer begins to make financial headway, they begin contracting the work out. Before that, they do it themselves. This makes complete sense when viewed through the lens of the capitalist world we live in. When you have no money, you cheap out. Once you’re making money, you can make more by writing than by making visual art, so it’s more lucrative to hire someone to do the covers. Unless a writer just loves making covers, that’s where human artists live.

Thoughts on streaming:

Then there’s another influence on the economics of this situation, one that I keep scratching my head about. Several of the many articles I’ve read about the business side of this game point to the music industry’s organization around streaming as a foundation for why art-generating companies will need to come to the table, as if that business model applied. This I find confusing. Perhaps I’m wrong.

My wife and I listen to Radio Paradise all day long in our condo. It’s a streamed internet service, so the artists get paid through the standard collection agency: I think RP pays a fee for the music, submits routine reports, and the agency delivers micro-payments. It’s like Amazon’s KU for writers. The difference, though, is that these streamers are delivering a full, cohesive product, while the AI generators are delivering something quite different. Where Radio Paradise delivers Billie Eilish’s latest song in full, an AI generator delivers one note, perhaps from that song, perhaps not, and then another note it thinks should come next, again perhaps from that song and perhaps not. No AI generator will deliver The 30th in full to any prompter—though some have tried to get close.
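
To show why that matters for the payment question, here is the streaming-style accounting in miniature: a toy pro-rata split with made-up numbers (this is not Radio Paradise’s or anyone else’s actual arrangement). The math only works because there are identifiable, whole works to count plays against, which is precisely what an AI generator never delivers.

```python
# Toy pro-rata royalty split (all numbers hypothetical; not any real service's terms).
pool = 10_000.00  # license fees the service paid into the pot this month
plays = {"Artist A": 40_000, "Artist B": 9_000, "Artist C": 1_000}

total_plays = sum(plays.values())
for artist, n in plays.items():
    share = pool * n / total_plays  # each artist's cut is proportional to plays of their whole works
    print(f"{artist}: {n:>6} plays -> ${share:,.2f}")
```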

This is another reason I think the tech bro AI companies are likely to win the case that’s been brought against them. Again, I am no lawyer, but this is a different technology that operates in a fundamentally different way, and I’m really interested to see how established copyright law holds up under it.

Right now, though…

I do hope the artists win; I’m guessing they won’t.

That said, I think there is a way for artists to gain ground, and it has to do with that cheeky copyfraud overreach the AI companies are trying to pull. By selling licenses to the output rather than time on the tool, these companies are flat-out making money off output that could not exist without the fodder they used to train the device. With some luck, this practice will be struck down. When it is, one of two things will likely happen: either the companies default to the DreamStudio model, or they create an opt-in training database that mimics the streaming approach.

Either way, I think there will still be a place for human artists in the loop.

But also either way, the world is going to change.

And as we know, change is hardest on older or more established artists—in any field.


You’ll note my video is in a different setting and has no title plate. Sigh. As noted above, I had to run off to Tucson, so I’m just happy to have gotten something out!

