As artificial intelligence, or AI, makes rapid technological strides, one piece of software has grabbed the attention of students and professors alike: ChatGPT, a language model trained on text from across the internet to mimic human responses.
The possibilities of this software are virtually limitless. A user could ask ChatGPT to write a creative bedtime story, give instructions to perfect any recipe and, as many academics are coming to realize, even do their homework.
Professor and chair of media and information policy Johannes Bauer said cheating isn’t his main concern with ChatGPT.
“I have a very strong trust that students want to learn; I don’t think that students come to college to cheat their way through a degree,” Bauer said.
According to Bauer, the discourse around ChatGPT will eventually “swing back to a more reasonable state” where academics can “explore the benefits and what the potential risks are and learn new ways of how to handle” them.
He thinks software like ChatGPT can help academics re-evaluate the current “industrial model” of assessing knowledge, which prioritizes learning single facts over complex understanding.
“ChatGPT points us to the fact that this is not a good way of assessing knowledge anyway,” Bauer said. “Maybe it will help us to come up with better ways to do this.”
Writing and rhetoric professor William Hart-Davidson said students using the AI software to cheat on assignments isn’t his primary concern either. What concerns him is the long-term effect on students’ skills if they come to rely on AI.
“I don't think the problem that we have to worry about is cheating. I would say it this way, good writing, wherever you're practicing, it takes practice,” Hart-Davidson said. “What that means today, now that the robots are here, is that you can skip practice — and you know what happens when you skip practice? You might fool yourself; you might fool somebody one time, but you won't get any better.”
Some worry that shortcomings in AI technology can become issues for students. Because the AI draws on a limited pool of information, the data sets it’s trained on may contain racial and gender biases, Hart-Davidson said.
“They’re assembling a text out of a bunch of writing they already have, and if that writing already contains biases or inaccuracies, if it contains racist statements or sexist statements, it’s going to produce those, because that’s what it has,” Hart-Davidson said.
He also raised concerns about where AI sources its data, which may lead it to reproduce content other people have made, such as writing and art, without their consent. The two policy issues surrounding AI learning are the need to develop a “culture of consent” and a “culture of disclosure,” he said.
A solution, he said, would be for people to be informed when their work is contributing to an AI’s training corpus, or to be able to opt out of it entirely. Individuals currently don’t have that ability.
Bauer said that academics need to adapt and learn how to respond to the development of new technologies such as ChatGPT.
“We can’t put the genie back into the bottle, and we’ll have to rethink how we evaluate knowledge, how we can use it in class,” Bauer said.