Why the Fear?

I didn’t intend to write a couple thousand words about AI this morning, but watching the scare run through the worlds I pay attention to has been so interesting that this is what came out.

For the general public, I kind of get that sense of fear. We are a set of generations molded by Hollywood and by journalists for whom ringing the warning bell sells. But within the world of creatives I keep my toes in, I admit I’m a bit nonplussed.

How, I keep thinking, can so many science fiction writers be so afraid of this technology?

Haven’t we been writing about it intently enough that we can see the difference between the holodeck and Skynet?

In addition, if we really believe that human-in-the-loop artistic endeavors are valuable specifically because of our humanness, why are we worried we’ll be out of jobs?

Are we really that insecure?

Well, I suppose we are. So much of what we are is tied up in what we do. And we are also all human, which means that—in general—we talk a pretty good game about our stuff, but when the rubber hits the road, we discover our real truths. Then there’s the fact that if we are made redundant by AI, the next step is often not clear. That said, if a fiction writer is honestly worried they’ll be out of a job when AI story generators grow more robust, what does that say about what they think of their work and their readers? I don’t know about you, but I can already report that there are a gazillion other options pulling readers away from my work—many of them free of charge—and that somehow I still manage to rope a few in. I don’t suspect adding a second gazillion options will change the dynamic much.

Readers will always have their choice. If they wind up liking AI work better than mine, well, then I’ll be out of a job. Just like I would be if they ignored my work due to one of the already gazillion options out there.

But I believe in my work. And there’s magic in the idea that a book sale to another writer (be they human or not) has limited or no effect on my own success.

When the AI is writing quality fiction, I’ll be slower than it, of course, but I’ll still hold onto a shred of hope that I can get someone to read my stuff at whatever pace I can create it.

THE GLUT OF COMMENTARY

I’m thinking about this today because it seems like every day a friend or acquaintance of mine (or maybe just a distant social media contact) posts some kind of hit piece on the output of an AI someplace. Mostly the genre of commentary runs along the lines of making fun of the tool, and the message goes something like: “Look how bad these things are, ha! Ha! Ha! And oh, the humanity!” Tucked in there somewhere is almost always an assertion that the AI is stealing from original content.

No, and no. And probably no.

Yes, there are lawsuits being filed, and those lawsuits are going to be interesting. I think all creators who have their work used to train an AI should be paid (even if I’m not sure copyright law will require it). But I suspect the real situation is considerably more nuanced than we want to deal with, and I’m guessing the courts will fall on the side that says using a piece of art as a training device is not copyright infringement. Perhaps I’m wrong, though. We’ll see. We are very bad at dealing with complexity, especially when that complexity brushes up against our survival instinct.

Regardless, none of those lawsuits are geared toward stopping the advancement of AI.

But here’s the thing: For possibly perfectly good reasons, I think people who are hand-wringing over all this stuff are missing the boat when it comes to what’s happening right now. The AI tools being created are not some kind of super-brains. They are not designed to be “right.” They will only be as good, bad, or indifferent as their input. And in the right uses, if the input is very good, the value of an AI is exceedingly high.

If the input quality is high, the output quality is high. In anything with complexity, that has value.

As a bit of a segue, let me talk about the commonly discussed topic of ChatGPT and schools.

ChatGPT IN SCHOOLS

Every time I see a story about a student using ChatGPT to write a paper framed as a way for some lazy-assed kid to shirk work, I shake my head. Not because the kid used ChatGPT, and not because that’s particularly good or bad. Instead, I shake my head because (if I were a teacher, which is easy for me to say, since I am not) I would care less about how the student created the work and much more about how much that student learned in the process. That is the purpose of school, after all. Learning. In fact, if the student learns the same amount in less time, then the tool they used is a fantastic addition.

The issue, of course, is that it’s easy to see how the ChatGPT tool could be used by a notoriously inept student in an attempt to obscure their lack of knowledge, and that we can all remember how, when we were kids, we would have been happy to not have to suffer the drudgery of actually writing a paper on something we simply did not care about.

The argument we make is that, without the ChatGPT tool, the student has to actually do something to write the paper. Or, at least has to go to the internet themselves and do the work it takes to plagiarize everything, by goodness. The mere existence of a paper (when written without ChatGPT/AI) is at least proof of effort, and at best we hope that this effort creates some kind of real learning even if the kid doesn’t care.

The challenge this new AI tool creates, then, is to determine what the student knows, divorced from this evidence of effort. In other words, tests of some sort.

Of course, people are gonna people.

Before going forward, let’s remember that it’s always been relatively easy for any properly motivated student to crib a paper or steal from someone else, and students have always attempted to get out of work. I can argue that the fact that AI tools are now coming so publicly into the spotlight means teachers are more sensitive to the idea that a paper might be plagiarized, and hence might catch more people. Again, though: I am not a teacher. Do not listen to me, except maybe as an outside voice. I humbly note, though, that I’ve often been commended for being able to help people who are stuck in the weeds of daily concerns step back and see the actual goal posts. Perhaps that’s of value here? You tell me.

Regardless, I’m not trying to say it’s easy to make this kind of societal shift in mindset. But I think it is a good thing in the long run.

AND, JUST WHAT IS PLAGIARISM? (A CASE STUDY)

And then there’s the question of just what plagiarism is.

I got interested in the question, so here’s a fun example from just this morning. I asked ChatGPT “What would be good names for a space-faring cat?” and then put the whole response into an online plagiarism checker. Here is the ChatGPT response:


Here are five potential names for a space-faring cat:

    Starclaw: This name combines the idea of space with the cat’s sharp claws.

    Galaxy: Simple and fitting for a cat that travels through space.

    Orion: This name references the constellation known as Orion, which is often depicted as a hunter in the stars.

    Nova: A nova is a type of star that suddenly increases in brightness, making it a fitting name for a cat that stands out.

    Meteor: This name references the celestial bodies that travel through space, making it a fitting name for a cat that travels through the stars.

It’s important to remember that these are just name suggestions, and you should choose a name that you feel represents your cat’s personality and characteristics.


Running this through the plagiarism checker brought back a big red flag. Plagiarism found! So I did a Google search on the answer and found nothing at all concerning. Hmmm…no plagiarism that way.

But still, let’s go further.

I split the response and ran the first three names through the plagiarism checker; they came back clean. The last two suggested the issue was there. Because the checker requires more than a little text, I added those last two to the clean batch one at a time, whereupon I discovered that #4 (Nova) was the culprit.
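For the curious, that isolation step is really just a little search loop. Here’s a minimal sketch in Python of the same idea, assuming a hypothetical check_plagiarism() function standing in for whichever online checker you happen to use (the padding with known-clean entries is there because of the checker’s minimum-text requirement):

```python
# Minimal sketch of the isolation step above. check_plagiarism()
# is hypothetical -- a stand-in for whatever online checker (or
# checker API) you actually have access to.

def check_plagiarism(text: str) -> bool:
    """Returns True if the checker flags the text."""
    raise NotImplementedError("swap in a real checker here")

def find_culprits(suspects: list[str], clean_padding: list[str]) -> list[str]:
    """Test suspect entries one at a time, padded with known-clean
    text so the checker has enough words to work with."""
    culprits = []
    for entry in suspects:
        sample = " ".join(clean_padding + [entry])
        if check_plagiarism(sample):
            culprits.append(entry)
    return culprits

# Mirroring the experiment: the first three names came back clean,
# so they serve as padding while testing #4 (Nova) and #5 (Meteor).
# culprits = find_culprits([nova_text, meteor_text],
#                          [starclaw_text, galaxy_text, orion_text])
```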

So I grabbed that and did a google search on it.

As Keith Jackson once said, Whoa, Nelly!

That first sentence lights up a gazillion stars. Sites like Nova – howstuffworks, homeworkstudy, wikipedia, universetoday, and aandra (to pick the first five) all have very similar wording. Clearly, despite not using the exact phrasing of any of them, the AI is cribbing from information obtained from one of these sites (or another such source). Given that this resolved so boldly, I’d suggest this sentence was surely the piece that tripped my plagiarism check of the whole.

Except…

Let me ask another question here: Which one of those five sites (and more) is the original? And once we’ve established that, what are we gonna do about all those cheaters and stealers on the other sites?

Of course, this idea that these sites are stealing seems farcical to me. I mean, how many ways can you define a nova?

So…

Even assuming all these sites were in the training set, I’d be hard-pressed to say the AI is directly copying. That’s not what a generative AI does. A generative AI takes lots of inputs and then uses some form of probability-based process to build a final representation of a response. I can almost envision it as the computer building a jigsaw puzzle out of all the words out there.
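To give that probability-based notion a little flesh, here’s a toy sketch in Python. A bigram model is to a modern large language model what a paper airplane is to a 747, but the core move is the same: pick each next word from a distribution learned from the training text. Everything here (the corpus, the function names) is illustrative only:

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Record which words follow which in the training text."""
    follows = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)
    return follows

def generate(follows: dict, start: str, length: int = 12) -> str:
    """Walk forward, sampling each next word in proportion to how
    often it followed the previous word in training."""
    word, out = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        word = random.choice(options)  # duplicates make this frequency-weighted
        out.append(word)
    return " ".join(out)

# Illustrative corpus only -- echoes of the definitions above.
corpus = ("a nova is a type of star that suddenly increases in "
          "brightness and a meteor is a body that travels through space")
print(generate(train_bigrams(corpus), "a"))
```

No single source survives intact in the output; what survives is the statistical shape of all of them at once.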

Depending on what the AI has been trained on, a lot of the time, it’s “right,” though. A nova, for example, is actually a type of star that suddenly increases in brightness.

Aside: I note that the second part of the AI’s response: “making it a fitting name for a cat that stands out,” does not return any closely matched phrases. So it seems reasonable to think the meter pegged only due to the phrasing of that one sentence.

And here’s a better question: If a kid does a paper, scans all the material out there, and winds up typing “A nova is a type of star that suddenly increases in brightness,” are they to be expelled for copying?

No.

Of course not.

Again, the problem is real, but the issue is nothing new. Plagiarism detectors have been used in schools for years. But in the context of a learning model, the question of plagiarism in the product’s output (as differentiated from the use of source material in its training) is as hard to define as it is for a human. And that’s before we get into the idea of unethical humans using a trained AI in unethical ways … more on that in a bit.

CLARKESWORLD SUBMISSIONS

Of more recent interest to me was Clarkesworld finding that a huge spike of submissions have come in using AI tools as story generators.

This maybe gets closer to the root of the fear I see in SF writers. Other humans are using AI to create works that might “steal” slots in professional magazines that would otherwise go to us humans. Again, maybe. I suppose. I tend to doubt it, but you never know. The question will be answered by readers, though. If Clarkesworld and others can put out an issue every day filled with AI generated stories, and sell subscriptions well enough to grow the business, then more power to them. If that can happen, that’s where the market of readers will lead the world.

But, again, readers read authors.

I say that as a lowly writer who, throughout my career, has always struggled to find readers. It’s a hard gig. I totally get it. I’ll fully admit that the AI invasion into the space of commercial fiction might well impact writers with different profiles differently. Maybe big names will lose a little, and little guys like me will lose a lot. I’m not sure I’d agree with that right now, but it could happen. Or vice versa.

Even today, though, a writer works for their audience, and the audience makes its intentions toward our work quite clear.

As you can tell, I’m not sold on the idea that the AI will take over the realm of finished-out fiction any time soon. There may be fads and pushes, but I just don’t think that readers will care much for supporting an AI over the long run. Again, I could be wrong here, but I don’t think you can have it both ways. If you’re pontificating that an AI literally cannot replace a human in the loop, then that’s what you’re saying, too.

If you’re truly worried about being replaced by an AI, then you’re saying that an AI can and will write better fiction than you in the eyes of the final arbiter, the reading public.

THE REAL PROBLEM?

The bigger impact (as you can see in the Clarkesworld post) will be to the intake process at the publications themselves—and maybe the intake process at distributors. For now, anyway, they have to figure out how to deal with a gazillion spammers. I’d guess this is a short-term thing, though. The game theory of cheating is a constantly swinging pendulum. As the Clarkesworlds of the world figure out how to deal with the onslaught, the onslaught will eventually subside to a trickle. I propose that true cheaters don’t tend to beat their heads against a wall until it breaks; they go a different direction.

When I was doing my run of blog posts and videos on AI in support of my Five Seven Five project, I finished on the idea that we don’t have an AI problem, we have a human problem. I continue to think this is true. And I think this human problem is playing out in the press, in the communities I belong to, and in the social media streams I follow.

The fear cloaking the problem is, I think, at least partially unfounded, but it helps mask the very real nature of the problems at hand.

The injection of AI into all sorts of areas is going to have very large impacts on many aspects of society. Mostly good, but some bad (due mostly to human nature, and to our deep adherence to the roots of capitalism in our government and corporate structures).

Is it bad, for instance, that there’s a surge in Clarkesworld submissions? Yes. For now, anyway, and for the Clarkesworld employees who are going to have to do a ton of extra work. Will it run them out of business? Hmmm. Maybe. Only Clarkesworld can say for sure, but I’d guess not. I’d guess that, given their need to stay afloat financially, they will make changes to their process if the glut continues. One can come up with many ideas—some that would be “bad” for writers as a whole, and others that may not be.

The point, though, is that there are options.

Things to do, and things that will have to be done.

It’s just that we, as humans, need to figure out how we’re going to deal with something that’s likely to be a sea change. Our systems are not prepared for this, and if we’re honest with ourselves it’s fair to say they never will be. Capitalism is generally a just-in-time proposition, after all. That goes for politics, too. It is a rare political system that can look too far ahead, and the US (where I live) is particularly behind the curve when it comes to the social safety nets that will likely be required to deal with the economic issues related to the worker displacements that AI will drive across the board.

This is not meant to be a political post, though.

A LAST WORD – READERS, WRITERS, AND DISCLOSURE

The Clarkesworld example raises a follow-on question that is relevant for writers in particular: Is it possible that an AI-generated manuscript will get through the net, and that some unscrupulous scallywag will make money from their use of an AI agent to make fiction?

Of course, it is possible.

Is that bad?

Well?

Writers have both an ego stake and a financial stake in the answer. Unfortunately, we don’t get to actually decide it. We live in a consumer-based, capitalistic world, so the answer to that question lies in the hands of readers.

I would suggest some of their reaction depends on how transparent the writer/artist has been.

If it is disclosed that a writer used AI helpers, and the readers like the work anyway, then they are saying the use of AI is fine by them. If readers don’t like the idea of AI in their fiction, and a writer is then discovered to have been silently using the tools, that writer’s career is probably over. This is the unfortunate root of the business side of being a creative—or the fortunate root, if you’re inclined toward the concept of free markets set against the world of creatives. Readers decide.

I personally think that, eventually, readers read writers more than they read stories (but that I, as a writer, need to be able to tell stories that relate to those readers in order for them to want to spend time with my work). I could be wrong, of course. Or I could be right today, but the market might shift to later make me wrong. Perhaps the market will segment, and I’ll be both right and wrong at the same time. I dunno. Time will tell.

But I am fairly certain that it will be readers and fandom who decide the ethics of a writer’s use of AI tools, and it seems valuable, as a writer, to be thinking about it along those lines. That means we need to get used to the idea of what an AI is and how it really works, rather than poke at it for what it’s getting wrong in the vain hope that someone will just shut it down.

Because AI is here.

And AI is going to make a lot of things better—and a lot of techbros even richer than it already has.

Belittling the technology for how it doesn’t work or how it hasn’t yet been trained to work, or purposefully mischaracterizing what it’s doing, seems wrong-minded in the same way that belittling a dog for not being a cat would be wrong-minded. Making fun of an AI engine because its art today creates six fingers instead of five, or because it puts a comma in the wrong place (pot, meet kettle; kettle, meet pot), is, to my mind, missing the point.

But that’s a lot of what I’m seeing right now.

It makes me wonder.
