by Emmanuelle Toulouse, Spring 2024 AILA Assistant
In the 1950s, if you asked a computer to write you a love letter, it would produce something like this: “My sympathetic affection beautifully attracts your affectionate enthusiasm. You are my loving adoration, my breathless adoration. My fellow feeling breathlessly hopes for your dear eagerness.” Today, ChatGPT can produce something that, some might argue, sounds much better: “My beloved. Your presence lights up even the darkest corners of my world. Your touch ignites a fire within me that never fades. With every heartbeat, I am reminded of the depths of my love for you.”
But is ChatGPT’s version really that much better? Is the dramatic improvement of generative AI really a good thing? And what might be lost when you rely on AI to write your love letters for you?
Naomi S. Baron, author of Who Wrote This? How AI and the Lure of Efficiency Threaten Human Writing and Professor Emerita of Linguistics at American University, began her talk, AI and Creativity: What Matters for Human Writers, with these examples and questions. The first installment of AILA’s Spring 2024 Learning and Discussion Series, this virtual event was hosted in partnership with Amherst College’s Writing Center and featured a discussion of ideas from Baron’s latest book, followed by a Q+A with the audience.
After a dramatic reading of the two computer-generated love letters, Baron began her talk with a seemingly simple question: is AI creative? The short answer is yes; AI can create art, music, and text. The long answer, however, is much less clear and depends on how we define creativity. Scholars—ranging from psychologists to neuroscientists to sociologists—have defined creativity in countless ways: something that produces surprise, an act that causes a change in culture, the production of something useful. Joy Guilford, an American psychologist, attempted to quantify creativity: he believed creativity is the tendency to think more divergently (‘outside the box’) than convergently (in accordance with existing norms), and he developed the Alternative Uses Test to measure how much divergent thinking an individual does.
When ChatGPT took the test, it scored very highly in divergent, or creative, thinking. But does this result really mean that large language models or generative AI tools are creative? Guilford’s definition of creativity focused on creative outputs—but what if creativity is more about the ‘creative process’ than the process’s product? We know that humans weigh a product’s creation process when judging its worth: people are willing to pay more for handmade creations even when near-exact machine-made replicas exist, and studies show that people are less likely to praise poems if they know an algorithm wrote them. We also know that the ‘creative process’ is valuable to creators themselves: research shows that being creative helps humans think through ideas, express opinions, work together, cope with challenges, and reduce stress. Arguably, the challenging and rewarding creative process is what makes the products of creativity so valuable.
The rise of generative AI is thus forcing us to re-examine how we define and value intelligence, innovation, and creativity—but what do these questions mean for the future of writing?
Writing, Baron argued, is a form of creativity. Readers therefore care whether a human wrote a text, and authors themselves benefit from the act of writing it. Of course, Baron acknowledged the benefits that AI can offer writers. Collaboration with AI—using it to brainstorm, check grammar, and improve style—can be highly beneficial. However, the ease and efficiency of AI encourage overuse and overreliance, and this slippery slope can have disastrous effects. We don’t want to lose connection with the things we write because AI wrote more of them than we did. We don’t want to forget how to formulate clear arguments and write beautiful texts because we assume AI can do it better or faster than we can. We don’t want to turn writing into a solitary experience when it always benefits from human collaboration. And we don’t want to lose trust in what others write—we know AI can make mistakes, and we don’t want to start questioning everything we read because someone might have used AI to write it.
So, where does this leave us? The ‘turn-it-in’ approach, Baron argued, will get us nowhere. It is becoming harder and harder to tell what is AI-produced and what is not, and governments and schools will inevitably fail to devise policies that effectively identify and limit dishonest use of AI. The key to addressing the problems that come with using AI—and to harnessing its immense potential—is education. Students and adults need to be well informed about how AI can be helpful and how it can be detrimental. “Talking with students about what the tools do and what they don’t do; talking about why learning to write and practicing the craft yourself matters—that’s what’s important,” Baron said. Ultimately, Baron concluded, the rise of generative AI will force us to re-examine what we value and to think about how best to share those ideals. “The notion of trying to make something look best as a product is not a good way to be a human being,” she argued. “It’s how you got there and what you accomplished that matters.”
AILA extends huge thanks to Professor Baron for her thought-provoking remarks and to Jessica Kem of the Writing Center for her flawless facilitation of the event’s Q+A. Stay tuned for our next installment in the Spring Learning and Discussion Series, Emotions on Demand with music scholar and practitioner Ravi Krishnaswami, on March 26th!