Writing, Critical Thinking and AI: Where Are We Headed?


Is society being eroded by “brain rot” and a so-called “cognitive debt” exacerbated by artificial intelligence?  Observers have grown concerned that AI users display weaker brain connectivity patterns.  Quite literally, when we fail to exercise critical engagement, the brain concludes that vital neural pathways are no longer necessary and lets them atrophy.  Students who use AI may turn in polished-looking conclusions, yet when asked to justify them, they become defensive about their reasoning.  In this sense, learning becomes devoid of thinking, and “when you have no investment in the thinking process, you become detached from the work’s accuracy or success.”

At the risk of stating the obvious, critical thinking is vital to a functioning democracy, and we ought to encourage such thinking through writing, publishing and the educational system.  Without critical thinking, we cannot analyze, evaluate or synthesize information to make sound decisions.  Those who score high on validated measures of critical thinking earn better grades, perform more competently at their jobs, and are less prone to manipulation.

Perhaps my own experiences have a bearing on such questions.  Even before the advent of smartphones and the rise of social media, I had a brief stint as an adjunct college history professor.  Ever the idealist, I reasoned that assigning anything less than “writing intensive” essays was a cop-out, since the public should be skilled at rigorous argument.  To my shock and dismay, however, I discovered that students had no idea what a thesis statement was.  Given the distractions of social media and the growing use of AI, I can only imagine critical thinking has deteriorated further since my teaching days.

For further insight on such matters, I attended a talk late last year at the New York Society Library entitled “Is Copyright Dead? Authors and Publishers in the Time of AI.”  Keith Riegert, one of the panelists, had mixed feelings about emerging AI technology.  The CEO of Brooklyn-based Ulysses Press, Riegert also sits on the board of Inkbloom, an AI firm which “transforms the way publishers and agents evaluate manuscripts,” with software delivering “actionable insights, market predictions, and streamlined identification of promising narratives in your slush pile.”

Awaiting reception at New York Society Library

Riegert is a second-generation publisher.  His father Ray was “a brilliant mind” who nonetheless faced significant obstacles related to technology.  When the internet came along, travel writing became more challenging.  However, the indefatigable Ray was inspired by his local baseball team, the Oakland A’s.  Like Brad Pitt in Moneyball, Riegert’s father “got obsessed with the idea of using data to better inform publishing practices.”  Today, Keith continues to incorporate data-driven analytics into publishing.  AI, he says, has represented “a huge productivity unlock.”  “Across the board,” and in “every single department,” he declares, AI technology has allowed publishing to become faster and better.  “I just don’t see any other way to do business,” he concludes, but at the same time “I deeply, deeply resent AI, so it’s a weird place to be in.”

So much for the business side, but what of the implications for writing and critical thought?  While Riegert strictly avoids publishing any content generated by AI, it’s unclear where this slippery slope might lead.  Artificial intelligence, the publisher remarked, has already helped authors draw up book outlines and organize writers’ research.  According to the Authors Guild, an advocacy group, a small percentage of writers have already used AI to generate ideas and create such outlines.  But while some writers may feel creative when using AI for brainstorming, “the range of ideas often becomes narrower,” with a uniformity of style taking root amid similar wording patterns.  In this manner, the power of one’s original and authentic voice becomes neutered.  Moreover, writers from other cultures may find themselves subsumed into English-speaking norms, thus contributing to “AI colonialism”: an unfortunate new coinage.

Overall, Riegert said, “I’m very worried about non-fiction,” since consumers may prefer to simply have ChatGPT text them about a given subject rather than purchase a book.  Even in the best of times, non-fiction writers faced significant headwinds.  Umair Kazi, Director of Policy and Advocacy at the Authors Guild, oversees the organization’s portfolio on copyright and artificial intelligence.  Aside from his legal work, Kazi also translates poetry from Urdu.  Sitting next to Riegert at the New York Society Library, he recalled marveling at the earlier non-fiction writing paradigm.  “When I first came to the Guild,” he said, “I reviewed earlier contracts from as late as 2000, and I couldn’t believe it.”  Kazi was taken aback that people actually made money from being magazine writers, even managing to raise families and send their kids to college.  Today, however, the situation is dramatically different, with the median income for full-time writers amounting to scarcely more than $20,000.

Riegert and Kazi at the discussion panel

While fictional genres such as romantasy are doing well, other forms of writing such as press releases, news articles, business reports and ad copy may go the way of the dodo.  Non-fiction book sales, meanwhile, are down.  “Much of the present whispering,” notes the Times of London, “is that something has gone seriously wrong with non-fiction…Is there anything left to read for those who aren’t interested in ghostwritten celebrity memoirs or self-help manuals?”  The paper posits that non-fiction books are suffering due to competition from podcasts and the “viral spread of short-form videos,” with sprawling histories faring particularly badly.  Unfortunately, older historians are dying out, and “publishing giants are unwilling to invest in new, untested voices.”  Instead, skittish publishers tend to gravitate toward “platform-led publishing” and authors who already command hundreds of thousands of followers on Instagram, TikTok or Substack.

The rise of text-based generative AI applications which “scrape” the web for authors’ content has exacerbated the situation.  Indeed, even as they pursue piratical practices, AI companies fail to acquire permission from writers, let alone offer compensation.  This new AI-generated content has become “ubiquitous,” with the market now flooded with AI books.  These works may look and read like human-written books, and readers may have no way of discerning the difference.  Meanwhile, chatbots deliver instantaneous summaries of books and can even crank out material emulating the voice and style of well-known writers.  In this manner, Kazi notes, we get into “market dilution”: that is to say, the value and energy authors invest in a book are diminished.  Consumers, meanwhile, wind up buying “copycat books” while Amazon turns a blind eye.

Even more perniciously, the piratical assault on writing threatens to erode our very sense of “fact-based reality.”  ChatGPT, in fact, has been known to provide answers “riddled with errors,” including non-existent references, fabricated citations and “hallucinations” unsupported by evidence.  It’s “unnerving,” Kazi remarked, that we’re taking language, which is so “fundamental to how we not only organize our lives, our communities, but our societies,” and turning everything over to algorithms.  “There’s something about this which is very disturbing to me,” the attorney noted, adding, “especially now that we’re seeing reports of chatbot-induced psychosis.”  Lack of fact-checking, Kazi continued, is a huge issue when it comes to books dealing with everything from psychology to self-help.  A large portion of works dealing with herbology, meanwhile, are AI-generated, “so if folks are buying these books and going mushroom foraging, a lot of people could be getting sick, or worse.”

Striking back, authors have petitioned AI firms to halt their unfair practices, and the Authors Guild itself has taken a strong stand against companies which use books to train so-called large language models, or LLMs.  Recently, the Authors Guild filed suit against OpenAI and Microsoft, claiming the corporations committed copyright infringement when they used books to train ChatGPT.  Even if legal efforts prove successful, however, it will be difficult to know whether companies are refraining from “ingesting” books by secretly compiling data sets.

Bizarrely, in this era of “AI slop” and pirated books, writers are now having to prove they are themselves human.  The Authors Guild has launched a new “human-authored” certification which will allow writers to “distinguish their work in increasingly AI-saturated markets,” while providing readers with the “right to know who (or what) created the books they read.”  The advocacy group has teamed up with a startup, which is literally named Created by Humans.  The human-authored mark and logo will be registered with the U.S. Patent and Trademark Office, and the public will be able to trace a book through a public database, thus building trust.  “You can use the certification to make a statement and let the world know this book is created by a human author,” Kazi said.  Perhaps the certification could go right on the cover itself as a badge of honor, or alternatively inside the jacket as a less blatant form of marketing.

Where do we draw the line between AI and human writing?  The Authors Guild believes writers can incorporate AI for spell checks, “minor use,” brainstorming or research, provided the overall text was written by a human.  In a detailed set of guidelines, the group suggests only using AI “as a tool” or “paintbrush for writing.”  “When you use AI to generate text that you include in a work,” the organization remarks, “you are not writing, you are prompting.”  Furthermore, if writers do opt to “use AI to develop story lines or character or to generate text, be sure to rewrite it in your own voice before adopting it.”  Lastly, authors must disclose any AI-generated text, characters or plot to publishers, while being fully transparent about any AI content with readers, “who will feel duped if they are not advised.”

Whether the AI industry will adhere to stringent ethical guidelines is open to question, however.  Tech boosters claim we can use AI “to write a great book that sells, gets good reviews and doesn’t read like a glorified Wikipedia entry.”  The “trick,” we are told, “is to work with AI, not let it take over completely.”  AI can help overcome writer’s block by brainstorming ideas and plot twists while suggesting “fresh non-fiction angles.”  Such tools may also assist with world-building and character creation.  For non-fiction writers, AI can “take your bullet points and turn them into readable paragraphs.”  For best results, we should all incorporate a combination of chatbots and “specialized writing tools” such as Sudowrite.  Needless to say, users must pay to use programs such as Sudowrite and Jasper.

The Non-Fiction Authors Association, no less, says that using AI tools can be really “enjoyable” and save time by performing “tedious tasks” such as research, while providing outlines, brainstorming and “introductory material.”  We shouldn’t fear AI, since we can always insert our individual voice, opinions and personal experience within “pre-generated narratives.”  Ultimately, it may be “futile to resist,” since embracing AI writing technology is “as inevitable as the calculator, the computer, and GPS.”  The business press, meanwhile, is salivating at the prospect of generating bland-sounding “content”: a demeaning, boilerplate term for the work of writers who invest quality time in their craft.

AI programs will transform our lives, we are told, by “punching up” writing projects, “generating a wide variety of content types from the same prompt,” and “experimenting with a range of styles, tones and editorial voices.”  While Sudowrite may help writers “improve fiction” and “blast through writer’s block,” other programs such as ParagraphAI can “offer customized answers” in response to “all different kinds of content.”  By opting in to Grammarly’s paid plan, users will be awed as the program “comes alive” while offering “comprehensive advice,” helping with brainstorming, and “polishing” writing.  Supposedly, the program is designed to avoid plagiarism and can “help generate citations.”  However, users should exercise caution, since the plagiarism scanner “is far from 100%.”

Paperpal, another program suitable for students and researchers, purports to help with term papers, essays or dissertations.  If students can come up with a thesis statement, which is becoming increasingly iffy given the current erosion of critical thought, Paperpal can “help you think through your idea in detail.”  Just like Grammarly, however, one must be “careful” with Paperpal’s suggestions, since the program is “susceptible to error.”

“Sudowrite isn’t evil, but it’s not harmless,” some writers argue.  Others are less sanguine, openly wondering whether we should call AI programs “plagiarism software.”  Yet others remark, “I thought the whole point of creative writing was the creative part,” or “if you’re relying on AI to write your stories in any form, shape, or fashion, you are not a writer,” or “finally, a writing tool for people who hate to write, or hate writers, or both.”

“Sudowrite launches novel-writing AI software for pseudo-writers,” declares Gizmodo.  The publication continues, “What if there was a way to drain the life out of a book with lifeless recycled robot prose?  That seems to be the future that Sudowrite imagines…Sudowrite isn’t some AI breakthrough.  In reality, it’s based on OpenAI’s GPT-3, it’s just tweaked and trained to specialize in the death of the author.  Sudowrite is an insult to writers everywhere, boasting that it can do what writers can’t, saying on its site: ‘Everyone knows it’s 10% writing and 90% editing.’ Spoken like someone who always wanted to write but never could.”

How is higher learning dealing with AI writing tools?  Far from condemning the new technology, the University of North Carolina at Chapel Hill seems to hedge its bets.  The institution advises students that individual instructors may “not think alike on this topic,” and that one should therefore speak with teachers about when it “may be appropriate to use generative AI in your coursework.”  The university openly admits that AI tools may provide false information, bogus citations and biased responses, but then oddly backtracks by adding that it is possible to “maintain your academic integrity and employ the tools with the same high ethical standards and source use practices that you use in any piece of academic writing.”

Others are more critical, remarking that AI could have a “corrosive” effect on education.  Ironically, even though Trump officials declare the federal government should get out of education, “apparently there’s a carve-out for nudging Big Tech’s continued incursion into the classroom.”  Already, many schools are incorporating AI into the curriculum, though it’s unclear whether the rollout is being pursued with adequate forethought or attention to data privacy concerns.  In any case, there’s no conclusive evidence that AI improves learning in comparison to good old human teachers.

In addition, Duke University points out that “overreliance on AI can erode an individual’s critical thinking skills,” with researchers finding that students who employed LLMs for writing assignments displayed “cognitive offloading” accompanied by poorer reasoning and argumentation skills relative to their peers.  Ultimately, if AI is introduced systematically, this could be “a horrible mistake.  It could inhibit children’s critical thinking and literacy skills and damage their trust in the learning process and in one another.”

Do we really want to surrender our own “cognitive autonomy”?  “The design of LLMs,” notes Psychology Today, “induce[s] a dopamine-fueled reward that reinforces our dependence, similar to gamification.”  But fundamentally, as we rely on AI to pursue cognitive tasks, perhaps we should ask ourselves, “what capacities are you developing, and which might you be surrendering?”  Yuval Noah Harari opines that humans may become more “cognitively idle,” which could in turn give rise to “programmed thinking and societal stagnation.”  “Fears about AI are material,” notes the New Yorker, “but they are also philosophical, even existential.  Perhaps the threat is not that computers will become more human, but that humans will, or have, become less so.”

Perhaps we need to take stock of “how writing affects us as people,” since fundamentally “it changes our minds and brains” while encouraging “reflection, logical thinking, and production of tangible texts to foster rethinking.”  Neuroscience teaches us that the brain is “plastic,” that is to say, capable of reorganizing itself in tandem with our physical or mental activity.  The brains of literate individuals differ from those of illiterate people, displaying increased density of white and grey matter.  On the other hand, if we surrender to powerful tools such as Grammarly, the cognitive risks increase, “particularly for less confident writers.”

On an intrinsic philosophical level, what will happen to our own individual writing voices?  Through “predictive texting,” our vocabulary might become more concise, yet less captivating, and thus we may find ourselves inundated with “homogenized mashups of what has come before.”  While AI is “highly adept” at writing everyday e-mails, our “human motivations for writing run deeper.  We write to look outward, as with literary works that convey our perspective on the human condition.  We write to look inward, including to find out what we’re thinking.”

Is it any wonder that AI poses risks not only to one’s own cognitive capacity, but to the wider culture as well?  Already, some countries have moved toward “digital authoritarianism” while cultivating AI-driven mass surveillance.  Once unleashed, AI tools can generate bogus content, disinformation and “malicious narratives.”  In the U.S., AI-generated deepfakes flooded social media during the 2024 presidential election.  Even prior to this, both leading candidates in Argentina’s 2023 presidential election used deepfakes and “tactics that escalated into full-blown AI memetic warfare to sway voters.”

Analysts are concerned about the rise of ominous-sounding virtual “psychochats,” or avatars, which would allow digital candidates to interact with voters.  Needless to say, the proliferation of AI technology within the political system, in tandem with a lack of rigorous fact-checking, “amplifies the risk of psychological manipulation” and represents a “cognitive challenge for democratic societies.”  This scenario has led to the coining of odd new terminology like “digital pollution,” referring to synthetic content flooding the zone, which makes it harder “to find trustworthy sources of information and to trust democratic processes and institutions.”

Given all these potential threats to cognitive autonomy, how can we avert an AI-generated dystopia?  Back at the New York Society Library, Riegert strikes a somewhat more hopeful note.  Though the publisher is concerned about how AI will affect non-fiction, in the long run he believes our fixation on digital screens may increase the perceived value of human-written books.  Walk into Barnes & Noble nowadays and you might find glossily produced paperbacks; increasingly, books are becoming “a luxury market.  It’s an indication you have time to not be on your phone and you have the attention span not to just be scrolling through TikTok.  It is a pure analog form of entertainment.”

Some academics have argued along similar lines: “if you set aside the more apocalyptic scenarios and assume that AI will continue to advance…it’s quite possible that thoughtful, original, human-generated writing will become even more valuable.  Put another way: The work of writers, journalists and intellectuals will not become superfluous simply because much of the web is no longer written by humans.”  Other writers, however, wonder how many people will actually support a living, thriving culture, which might have “to rely on philanthropy to continue to exist.”

(Incidentally, this piece was not written with the assistance of AI writing tools.)

