This is a script of a podcast I am working on about AI – what it is and what we are to do about it. I will probably add to it from time to time.
Some of the listeners to this podcast know Jerry Drye, associate professor of communication at Dalton State. He has been on this podcast three times, and some of you may know his office used to be right next to mine. Actually, his office is in the same place; I am the one no longer there. Ah, retirement. We were trying to hire a new faculty member for the department, and he called a reference at his former institution whom he actually knew well. This gentleman told him how he had published 20 books in the past year. How? He used AI. Or I should say, he more than used it: he simply prompted the generative AI software to write a book of a certain length along certain lines on certain subjects, and voilà, he had a book to publish on Amazon and make money from.
Well, you know what I think about that. Yet I know this person is one of many—there's no way to make an estimate—doing the same, or using it to do their jobs or write their academic papers. To paraphrase George Costanza's father on the old comedy Seinfeld, "I have a lot of problems with AI, and you people are going to hear about them." This is my airing of grievances, and it's not even Festivus. Today's episode is about artificial intelligence and the problems it causes, in my worldview as a writer and educator, human being and Christ-follower, reader and critic of literature. In a previous episode, I had Amber Nagle on to speak about how she uses it and the ups and downs of it for writers. I so appreciated her time, but I did not present my side. Today I will. I might bring in some other voices and research. I hope you find it entertaining, informative, irritating, scary, and helpful.
My first experience with AI was . . . I was going to say that it was a paper from a student who slept through class and suddenly wrote like a Nobel Prize winner. But it goes back further than that. I have used Grammarly for years, and I've talked to chatbots, and really, different forms of AI are all around us. What is it? Here's one definition, from a pretty reliable source on this subject, NASA:
Artificial intelligence refers to computer systems that can perform complex tasks normally done by human reasoning, decision making, creating, etc.
There is no single, simple definition of artificial intelligence because AI tools are capable of a wide range of tasks and outputs, but NASA follows the definition of AI found within EO 13960, which references Section 238(g) of the National Defense Authorization Act of 2019.
Any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets.
An artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action.
An artificial system designed to think or act like a human, including cognitive architectures and neural networks.
A set of techniques, including machine learning, that is designed to approximate a cognitive task.
An artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting.
I did not read the whole definition that the NASA site provides, and you can find the link in the shownotes. I do want to say that I feel something of a hypocrite. This is a podcast about the problems with AI, but I still use it occasionally, far less than I did when I was working. I would like to avoid all use of it whatsoever, but customer service websites use it for questions, it is embedded in websites, and if I use Wikipedia, there is no telling how much of that is AI-generated. A lot of YouTube is now based on generative AI, although I've learned to detect it and stay away from those videos. And of course, there is GPS. I plan to use it as little as possible, but let's be real. I know I will resort to GPS and some other AI tools in a pinch.
My real enemy, if I can call it that, is generative AI, meaning ChatGPT-4, now 5, and its many siblings, because that has the most effect on learning and writing.
I should say I have been working on this podcast for a while, and something is happening with AI all the time. It's either Twitter/X's AI Grok becoming a Nazi and defending the Holocaust, or it's the AI corporations rolling out a new generation of "tools" in early August.
But I recently read an article, linked in the shownotes, arguing that we should not call it artificial intelligence because it doesn't show intelligence. Nathan Beacom, writing for The Dispatch, stated:
In lieu of “artificial intelligence,” I propose a more accurate, ethical, and socially responsible name: “pattern engine.” Early computers, which would find mathematical differences, were called “difference engines.” This name adequately recognized the reality of the machine at hand. “AI”s are indeed engines, and engines made for aggregating patterns and sorting data into statistical correlations. They are, truly, engines that sort things into patterns and produce outputs based on the statistical weight of what has been sorted.
I like what he is saying here, because the technology that makes it possible for that professor to write 20 books is based on large language models. I am not dismissing the science behind large language models and their ability to produce readable, fairly coherent, generally relevant text—more on all that later—in less than five seconds, depending on the length. But … it's not intelligence, if you define intelligence as understanding concepts. The program doesn't understand what it writes about. It is responding to programming—amazing programming—and its output seems guided by intelligence; again, it only seems that way. But its computations ARE based on vast amounts of data.
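For anyone reading this script who wants to see the "pattern engine" idea made concrete, here is a tiny, purely illustrative sketch in Python. It is nothing like the neural networks behind real large language models, which are trained on billions of examples; the sample text and the function names here are made up for the example. But it shows the basic move: producing plausible-sounding words from statistical patterns rather than from understanding.

```python
# A toy "pattern engine": count which word tends to follow which,
# then generate text by sampling from those counts. The output can look
# like language, but nothing here "understands" anything.
import random
from collections import defaultdict, Counter

corpus = (
    "the professor wrote a book about writing . "
    "the student wrote a paper about the book . "
    "the professor read the paper and the book ."
).split()

# Build a table of how often each word follows each word.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def generate(start, length=12):
    """Produce text by repeatedly picking a statistically likely next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words[words[-1]]
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# e.g. "the professor wrote a paper about the book . the student wrote ..."
```

Scale that idea up by many orders of magnitude, with far more sophisticated math, and you have something closer to what the commercial tools do; the point of the toy is only that the engine is sorting patterns, not thinking.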
I also want to steal Nathan Beacom's tripartite description: accurate, ethical, and socially responsible. I'm stealing it to use for a different purpose—just like AI does, but of course AI is not human, despite how it appears in its VERBAL OUTPUT—put a pin in that—and since it's not human it can't be made guilty of anything—put another pin in that. I want to use it as the skeleton of my argument. AI is not accurate, ethical, or socially responsible, or again, I correct myself—the use of it is none of those things. Or should I say, the dependence on it is none of those things.
If you have not listened to my previous conversation on this subject with Amber Nagle, either turn this one off and go to that or listen to it right afterward. But one of the things we talked about there was the accuracy—both in terms of plain facts and writing tasks. Sure, it will be grammatically correct, but the writing will be distinctly AI—more on that later.
The belief that AI will be entirely accurate is wholly without foundation. I encourage you to test it. I did so with a famous pundit/columnist I follow—it got his alma mater wrong, and what he wrote about. Recently the AI Overview on Google said that Russell Moore, the editor of Christianity Today and former head of the Southern Baptist Ethics and Religious Liberty Commission, is the husband of Beth Moore, the Bible teacher who also left the SBC at about the same time. They are not related, as they both regularly point out (Moore is Beth's married name, of course). But now a lot of people will think they are and say they got it off Google.
I bring this up first because it's the practical side of things. If you have to check the output's factuality, how far do you go with that? Every assertion or piece of information? And so where is the time savings, since saving time seems to be the main reason people use generative AI?
I recently found an article online in a search that confessed it was written by AI. Why I even read it, I don't know. It did have sources at the end, so I clicked on them. At least one of the three sources given had absolutely nothing to do with the subject of the article. But at least it was a real article—AI also makes up sources. Those are called hallucinations. Even where it cites a legitimate source, it is likely to misquote it or misinterpret it, because it is a pattern engine, not a context-understanding engine. ChatGPT-5 is supposed to be improved in that regard. Well, maybe.
Back in the spring when I was employed, I was in a meeting with some academic administrators, and one of them reported on a meeting he had been in with employers in a local county. The employers made it clear that they wanted employees who could use AI, and this administrator thought this was something we should heed in our departments. From English, communication, and history, let's just say it wasn't an amenable response. But I've thought about that a lot. Did the employers know that AI was so prone to errors and that its writing style was so . . . AI-ish? What did they want the employees to do with it? Just save time? Did they want them to use it to produce written reports that weren't really going to be read by anyone for any serious purpose but had to be written for documentation, like minutes of meetings, obligatory stuff? For standard, generic emails? For summaries of documents or content (a use I can see the value of, as long as the person reads it for correctness)? I agree that employees should have the skills to use it wisely, but I doubt employers really grasp the problems with it if they see its benefit as saving time and money. I suspect the largest benefit is saving on employees' wages.
As an offshoot of AI being inaccurate, generative AI writing doesn't always pass the smell test in terms of style or authenticity. I realized early that AI has a fixation with certain patterns of writing. One is sets of three. Something cannot be beautiful and efficient; it has to be beautiful, well-designed, and efficient (somewhat redundant?). A YouTuber, Evan Edinger, recorded a video, "I Can Spot AI Writing Instantly — Here's How You Can Too," in which he points out that AI prose often starts with this construction: "Subject isn't just about ABC—it's also about X, Y, and Z." Also emojis on bullet points, the use of the em dash, and lots of generic words that don't mean much. He uses an opening line generated by AI: "Sometimes striking the perfect pose in photography isn't about …" But we don't strike a pose in photography; we strike a pose in a photograph. The poser does not take the photo, right? This is a logic error that a machine without context is unlikely to see. Once I checked Grammarly to see if it would catch a misplaced modifier. It did not, because that is a contextual, human, logic problem. A misplaced modifier would be a sentence like "Lying in a pool of blood, the detective found the victim," where the grammar says the detective, not the victim, is the one lying in the blood.
But in another YouTube video, https://www.youtube.com/watch?v=s0FdYs8ZqmA, the host reports on an article in the New York Times in which a bestselling author wrote a short story based on the same prompt given to a generative AI platform. The AI story read like it was written by a person who has not read much fiction. It used a number of clichés, the characters didn't talk or act like people, there were non sequiturs and red herrings (things brought up that were never closed or referred to again), and there were summaries of action rather than scenes showing it. It was clearly not written by something that knew what it was doing.
AI writing also gives lots of fluff, adverbs and adjectives, always far more than you need. The tone is weird, robotic. And personal examples will not fit the actual writer.
This was how I figured out how students were using it. ChatGPT would be told to answer a certain question on the readings and to include a personal example. The student apparently did not read that personal example, which would say something like "When I was head of HR at a company, I worked with a woman named Gloria…." when I knew the student had never been in that position. Then I saw it everywhere—or not. Actually, it was so obvious that students were using it because they did not know how to use it, how to write the prompts correctly. It was also clear they weren't reading the output, even just to edit it and know what they had said.
In my last two years of teaching I included a unit in a business communication class about generative AI, and even added a reading and exercise related to prompts, so that they would at least know that was something to pursue. These were junior-level students, and I asked them to test and analyze the output, based on three prompts I provided that they could tweak. It was a good assignment for them, and I was able to present on it at two conferences. But…. if I were still teaching, I don't know what I would do. I would have to restructure most of what I did in the classroom to prevent AI use, or at least inappropriate AI use, and to tie any AI use to how it is actually used in the real world.
So let’s shift a bit and talk about some good uses.
1. Super generic writing that doesn't need any thought or creativity. In which case, why is it being created, period? Bad-news-letter kinds of things. Is anyone going to read it and care? Minutes of meetings no one is going to read (but someone would still have to record the meeting or provide notes).
2. Brainstorming for topics.
3. Drafts that you know you are either going to have to rewrite and check, or use as the prompt for the next draft, and the next, until you get what you want. But watch out for the poor writing and the tells that will signal to your readers you used it.
4. Summaries of long and dry material when you are in a time crunch – but I think the only ethical thing to do is admit to using it—which brings up something Amber said. A lot of people are using it and denying it.
5. Writing multiple choice questions based on written text—however, you will have to check them carefully and probably edit.
I am not going to say these are the only five or so ways it could be used, but they are the only five I see as valid now.
But back to my three points: accuracy—it comes up short. Ethics is my second concern, and there is so much there that it's hard to know where to start. Obviously, if you submit the work of AI as your own for pay or a grade without being clear and upfront about it (and that matters—some people will care and some won't), it's like what I said to my students about wholesale plagiarism: you're a big fat liar and I don't care what happens to you (grade-wise). That's a little harsh, and I was supposed to be empathetic, but I don't really see why I should not take lying for what it is. Oh, some say, the students are so overwhelmed, and feel so inadequate in their writing, and we should understand that. I grant you that they might be overwhelmed or feel that their writing is inferior, but that is what learning is about. We would not accept that excuse in math or science classes if a student cheated on an exam, especially a future doctor or nurse or engineer. I saw a meme on Facebook today: "Your future doctor is using AI to cheat on his/her work, so you better live and stay healthy." Ha! And then I saw an ad for an AI tool, and the spokesperson was a young woman in medical school. Yikes. Sure, it's fine that the engineer cheated in his math courses so he could go on to build bridges and buildings that aren't safe. Cheating is cheating. And cheating is not a victimless crime, if you will, any more than adultery is.
Some of this dismissal of students using generative AI for writing comes from a dismissal of the importance of writing in the first place, and a dislike for it. I can understand disliking writing tasks, but not dismissing writing's importance as a form of communication, a way of learning, and an art form. Another issue is that AI denigrates, lowers the value of, what communication is and what writing is. Generative AI "creates" (not really, outputs) coherent and generally relevant text. Its output is verbal (we'll set aside images for the present). As we all know intuitively, communication is significantly nonverbal, and even that is an iffy statement. For years we were told communication is 93% nonverbal, but that was a misunderstanding, based on faulty studies from the 1960s. The real share is much less, but still a lot, and still very important to the meanings we communicate. Plus, what is nonverbal? It's context as well as the things our bodies do to supplement meaning. There are all kinds of relational and social elements, like the white coat the doctor wears and her facial expressions and your past encounters with her. Could an AI teach a class? No, not really, if you understand how much teaching is relational.
In this regard, I'd like to read another quote from an article in The Dispatch:
“As students increasingly outsource writing to artificial intelligence, the consequences may extend well beyond academic integrity. Writing for Engelsberg Ideas, Aaron MacLean argued that AI poses an urgent and profound danger to human reasoning. “An old professor of mine, in my freshman year, once said something wise and important to a seminar I was in when one of my classmates observed that ‘I know what I think, I just can’t get the words down on the page.’ My teacher responded: ‘Well, you don’t actually know what you think, then. The act of writing the thing is the same thing as the thinking of it. If you can’t write it, you haven’t actually thought it,’” he wrote. “Now we have technology that, in essence, is promising to supplant the core and foundational human activity – that of thinking. And if it is a bit sad that human beings are simply less musical than we used to be, this threat is much more serious. Freedom is, in the first and most essential place, intellectual freedom – the ability to reason clearly, to navigate received opinion and accreted (or accumulated) prejudice so as to pursue and sometimes even catch knowledge of things that matter. This activity, when directed at the things that matter most, is called philosophy. It’s a kind of hunt, sometimes pursued in cooperation with others, racing in bands across the limitless plain after our quarry – but in the moment of the kill, we all must kill alone, for ourselves. The hunt for wisdom requires, of course, basic reasoning skills and ideally wide exposure to high-quality efforts of others to reason about things that matter. It requires learning.”
Our dependence on AI now follows thirty years of addiction to screens; I doubt we would have been so nonchalant about it if it had shown up at the advent of the Internet. We have become used to dealing with people mostly through screens—and COVID made it worse. In Nathan Beacom's article, he starts with stories of how individuals confused chatbots with people, leading to bizarre behavior—suicides, despair over romances with chatbots, that kind of thing. Mental health therapy through chatbot is a real thing. My most recent experience with chatbots involved fussing with Medicare supplement insurance companies. I thought I would talk to the little helper, knowing it was not real. Fortunately for me, the chatbot wasn't ready for my questions and got me a person to help.
More and more I am seeing concerns that dependence on AI is destroying our brains. There is a famous study from MIT on this subject, but it's just a start—all studies have limited participants. This one showed much less brain activity in participants using chatbots compared to those who did their own writing. Big surprise, right?
Another very real concern is that if generative AI becomes the primary writing mode, its problems in style and authenticity, as there is more and more of it, will become the kind of writing we come to expect. This problem will be exacerbated because AI will be scraping its own output, and writing quality will decline even more. What do I mean by scraping? Generative AI uses the massive amount of data on the internet to create the patterns it uses to determine the most likely next set of words to answer the prompt. Again, I don't pretend to understand all of it, but there are countless sites and videos about large language models. The more AI is used, the more it will clone itself, scraping its own output. Boy, that's a wild idea. And by the way, it's scraping anything you put up there. Amber said that she prompted it to write something in the style of . . . herself, and it was like her writing!
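To make the "scraping its own output" worry concrete, here is another small, purely illustrative Python sketch; the sample sentence and the loop are my own toy, not how real systems are trained. It fits a crude word-frequency model to a bit of text, generates new text from that model, retrains on the generated text, and repeats. Researchers call the real-world version of this feedback loop "model collapse."

```python
# A toy feedback loop: each round, "train" on the current text and then
# replace the text with output sampled from that model. Rare words tend
# to vanish and can never come back, so variety shrinks round by round.
import random
from collections import Counter

random.seed(0)  # make the toy run repeatable
text = ("the old river ran quietly past the small town while children "
        "played games and farmers sold bright vegetables at the market").split()

for generation in range(1, 6):
    counts = Counter(text)                 # "train" on the current corpus
    words, weights = zip(*counts.items())
    # "Generate" a new corpus by sampling from the current word frequencies.
    text = random.choices(words, weights=weights, k=len(text))
    print(f"generation {generation}: {len(set(text))} distinct words left")
```

The distinct-word count can only drift downward as common words crowd out rare ones, which is a crude stand-in for how quality and variety can degrade when models keep feeding on their own output.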
Ethically, AI use fails the tests, but there is another side of it.
What we are calling AI is flooding social media with fake content, scamming people financially and in so many other ways, wasting our time, and making us dumb. So why does it exist? Because it can. Some would say it exists for the good of humanity, and there are, reportedly, medical uses that can expedite diagnoses and care. But it's hard to believe that a tool that writes papers for students and creates deepfakes—images of real people saying things they didn't, or of fake people saying fake things—is in our best interests. It saves time, and therefore fewer employees are needed. Will people find jobs, or will more be created? It's hard to know; some sources say yes, some no. It sounds like the Magic 8 Ball we used to play with. Some experts say that it will even affect certain types of physicians, such as radiologists, but not others, such as pediatricians, who interact more with patients face to face. Jobs that require a human body can't be done by AI alone. We'll still need servers and nurses and builders and landscapers and so on, until robotics gets that good.
Another ethical question is why this was foisted on the marketplace in the first place. Was there an outcry from consumers that led to its being created? Was anyone saying, we need a program that does all our writing for us, and thus our thinking? Who is the beneficiary of generative AI, really? My recent podcast guest Dr. Eliot Parker spoke of its beneficiaries being corporations that could hire fewer people for jobs that AI could "do." Now, in the scientific realm, such as testing drugs for efficacy and safety, I can see the benefits of it, as well as in writing code. But . . . there are so many other questions. I have been watching a lot of videos on YouTube, and I have links in the shownotes. One is from the wonderful Dr. Lennox, the mathematician and lovely Christian man from Oxford University. What a saint. You want to look up that lecture. Another is from a British mathematician and documentarian, Hannah Fry. She spurred my mind with questions that led me to ask: Will AI kill us by becoming too smart, or by making us too stupid, i.e., through our overuse of it and dependence on it? Hannah Fry sort of disputes that it can become intelligent in a truly human way. I am more of the opinion that we will cede our humanity to it—our creativity and ethics and human effort and reasoning—before it gets too smart for its and our own good.
But to move to another subtopic regarding ethics: for myself as a writer, the major ethical question is, where does the material come from? Generative AI is a massive plagiarism machine. On this one, I recommend a John Oliver video, which I admit is blasphemous but makes some startling points in this area, for both writers and visual artists. AI is an idea and creativity thief. You can use other language to get around that, but that's what it does. In talking to Amber, I noticed we used two metaphors: AI scrapes the Internet, like some kind of scavenger, and it spits out its content. Yuck. It also puts graphic designers out of work. "But they have been using computer tools for years," you might say. This situation is different. Generative AI doesn't create anything. Of course, one could argue humans don't really create anything either, but that's a deeper question.
No one had ever written a book like . . . well, you name it, To Kill a Mockingbird, or Great Expectations, or The Bluest Eye, before. There had been coming-of-age stories, which all of those are, sort of, but no one had written anything like those before. Some might argue that we are just large language models too, that we steal ideas and phrases and images and metaphors from each other and from writers of the past. There is some truth there, a little. But humans have a lived experience from which they write, a commonality, a humanity and complexity. AI can probably write short stories or novels that are entertaining and readable and even bordering on good. But it needs human prompting to train it, and the more we use it, the more we are training it and feeding the beast.
The last area of my concern is social responsibility. There are two areas here: its output and the environment, which does not get talked about enough. Some are aware that Twitter's, or X's, AI engine Grok, what a name, called itself MechaHitler in mid-July 2025 and started putting out all kinds of antisemitic, Final Solution garbage. Why? Supposedly, it was told not to be woke, so its answer was to be a Nazi. Binary thinking, anyone? Seriously, why would it start that? Go there? Bizarre. It was shut down for a while, but what's to say that AI-generated output is not tainted by ideology? Of course it is. Remember, though, I said to put a pin in the concept that AI is not human and therefore cannot be guilty of anything, or prosecuted, or feel shame, etc. It can pretend to, by putting out words that represent those concepts, because it is a pattern engine, but we know it's not shamed or embarrassed or anything like that.
The big question for my generation: can AI become HAL, the computer in 2001: A Space Odyssey, which, to save its own existence, secures the deaths of the astronauts who want to turn it off because they have learned it is too "smart" and controlling? That creepy voice, sort of like the shower scene in Psycho or the alien bursting out of the crewman's chest in the first Alien, is embedded in our psyches. But our narratives matter, and we should not pooh-pooh those stories and visuals.
I am reading a book about the history of science fiction, which involves a biography of Isaac Asimov. Asimov is known for the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These seem idealistic now, but I wonder if some people, a lot of people, believe in them and think we are safe! Like the MechaHitler episode with Twitter's Grok AI tool, this is a human problem, and the machines will end up doing what the humans create them to do. Unless we envision a kind of original sin that would get into the programming. Now I am writing science fiction.
I hope by now you see that Congress should start to take this problem seriously. Instead of leaving it to corporations or the executive branch, the people who are supposed to make our laws should be the ones working toward a reasonable future for generative AI. If we can get them off the TV talk shows long enough to do their jobs. But I digress.
Finally, I challenge you to read up on the environmental impact of AI. Because we access it through our own computers, pulling from the electricity and wi-fi in our homes, we don't realize the huge banks of computers and servers needed for all that data processing. To keep them running, massive amounts of electricity are needed, and to keep them cool, massive amounts of water are needed. Please look at the MIT article on this subject in the shownotes. This is the area I am most concerned about, because, as the article points out, there is unfettered growth in the use and development of AI without much discussion of sustainability: where the electricity is coming from and how it is generated, and whether those demands, along with the demand for water, will affect the environment and the people who need the water and power. I am concerned about it because water policy is important to me and because no one is really talking about it. There is a lot out there about the psychological and intellectual harms, the ethical implications, and the business uses, but not as much about how society can keep natural resources from being depleted by the uses of AI. And let's face it, most of its current uses are pretty silly, like making videos of Bigfoot as a video blogger.
I feel like I can only scratch the surface. For me, I want to stay away from AI as much as I can, but I know I am in the minority. I get depressed, and wonder why I should write if so much writing is helped or sustained by AI, and how my granddaughter will learn to think with this surrounding her all the time as she grows up, and why we can't just be people again and get off our screens. My granddaughter already loves the iPhone and knows she can change things with her finger!
A few years ago I finally read the science fiction classic Dune, a truly brilliant book. One of the characteristics of that future world is that there are no computers, because of a war in the past between AI machines and humans, called the Butlerian Jihad. A holy war against computers. It does make me wonder: where is God in our use of AI? Where is our theology? There are many prophecies about AI, one being transhumanism, in which our minds can be uploaded into the cloud and we can live forever. I don't think that sounds so promising or tempting to most people, but it goes back to what I said earlier about the output of generative AI being language but not context, nonverbals, etc. We are embodied creatures, and the dualism of something like transhumanism goes against everything we are, everything I believe as a Christian. Dualism has been a pernicious philosophy for thousands of years, and it is antithetical to the Christian faith to which I belong. Jesus came in the flesh—that is a core teaching—and he died and rose again in the flesh, and there is a final bodily resurrection. Bodies and physical creation are good and redeemable and will be redeemed.
Then there are the concerns, or promises, of the singularity, a term used to describe the hypothetical point at which technology—in particular, artificial intelligence powered by machine learning algorithms—reaches a superhuman level of intelligence and capability. It would, or could, be doom, depending on who has it, just as we have feared nuclear annihilation since Hiroshima.
Well, that was an uplifting place to stop! I am not a Luddite, but I sincerely believe we have to step back from this technology as individuals, as a society, and as Americans. AI is not just those crazy videos on YouTube where the boa constrictor is fighting a polar bear. It is not Bigfoot as a vlogger. It is not babies re-enacting scenes from movies. It is far more than that.
Be aware, is all I can say. People far smarter than me have more to say and you should read broadly on this subject, as it will have a great deal of effect on your life going forward. As the saying goes, rage against the machine and be as human as you can possibly be.
Music
This is the tenth episode in Season 5. To be honest, I am coming to a time for a hiatus this fall. I have produced, with Clemencia's help, 80 episodes over three and a half years. I have come to a sort of crossroads. I need some ideas because I have exhausted all of mine! Let me know through my website barbaragrahamtucker.net if you have any ideas or contacts. Until then, check the shownotes, click on the Go Fund Me link, and please go back and listen to some of our past conversations.
https://www.youtube.com/watch?v=9Ch4a6ffPZY&t=39s
https://www.youtube.com/watch?v=ixgunKpy61s
https://www.youtube.com/watch?v=0fPUWSv2JCI
https://www.youtube.com/watch?v=-MbD_KdPgZ0
https://www.youtube.com/watch?v=TWpg1RmzAbc&t=4s
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117