AI innovation and education: how to navigate learning and teaching
This article on AI and what it means for education is written by Becky Zimmer, a freelance writer based in Humboldt, Saskatchewan, with experience in farm, community, small business and sports reporting.
Historically, technological shifts have always disrupted the familiar.
According to those inside education, recent developments in AI pose one of the biggest challenges post-secondary education has yet faced.
When Darren Hicks stumbled onto his first case of an AI-written essay in the fall semester of 2022, there wasn’t much he could do to prove with 100 per cent certainty that the student hadn’t written it themselves.
The 500-word essay on Hume and the paradox of horror looked well written, but on a closer reading its structure and arguments made very little sense to someone who actually knew Hume’s work. If the assistant professor of philosophy at Furman University in Greenville, South Carolina hadn’t caught a whiff of AI generation, Hicks said he would have passed the student, but he and the institution still counted it as a form of academic dishonesty. This was different from a traditionally plagiarized essay, said Hicks, but without the student actually admitting to using AI generation, he wouldn’t have had a leg to stand on in reporting the student to the academic board.
“A plagiarized essay just screams its nature. It says, I am a piece of crap. This one didn’t. This one was confidently written, it was cleanly written, it really believed it knew what it knew.”
Plagiarism has always been a concern in education, and professors know what to look for. Hicks doesn’t see AI increasing the number of plagiarists in academia, but plagiarism, he argues, is not the conversation we should be having. It’s about more than academic dishonesty, said Hicks, and he likes to think most students are in university to learn and get an education. An essay gets at something fundamentally human, but it is getting harder and harder to prove when that human creation is missing. Beyond the practical argument that AI simply gets things wrong, the bigger question for students should be why they are getting an education in the first place.
“It does raise just fundamental questions about what education is,” said Hicks. “Why are we doing it? What does it mean to be human? The huge questions…if you are (at university) to learn, if you’re in this class, because you’re interested in the material, then it defeats that purpose to use AI to do this stuff.”
As AI development continues, we won’t be signing any global contracts to stop it anytime soon. With that in mind, we shouldn’t be afraid of using it, said Hicks.
“Let’s find a way to work this thing into the classroom. I give the student credit, she already knew about this technology, which is pretty good. It’s her bad luck that I knew about it, too, and was a step ahead.”
Since his research area is educational technology and media, Alec Couros, director of the Centre for Teaching and Learning, is heavily invested in the use of AI in the classroom. He develops programming to teach University of Regina professors how to use technology in their classrooms, and also talks with his grad students about what works for them and what doesn’t from a teaching perspective. Much to the dismay of high school teachers, Couros also shows high school students how to use AI. He is OK with forcing that conversation, he said, since the technology will only continue to advance; it won’t disappear if people simply ignore it.
“We need to better communicate what it is, what we think is appropriate, and also make it quite clear, this is not just about academic misconduct, it’s about your own learning. If you don’t learn to do certain things, it’s not going to benefit you in the long run.”
As we noted in the first story, there are limits to what AI can do. It takes skill to write a clear prompt, and that in itself is a learning experience, said Couros. Asking for proper citations is possible, but again, only if the prompt is clear. The results won’t be perfect, and students need to understand that AI is a good place to start but not the be-all and end-all. If they treat AI output as the final result, they are going to have problems.
From a utopian lens, AI is increasingly taking over menial cognitive tasks within academia, and many see it opening doors for academics to shift their focus to critical thinking, collaboration, comprehension, and creativity.
However, with this utopianism also comes a sense of foreboding for the future of education, Couros said, and we can’t continue down the path we are currently on without addressing concerns in AI development.
This is the second in a three-part series on AI in the world of writing; the first part covered AI in journalism. The final instalment will look at the future as AI continues to develop and some notable figures begin to voice their distrust.
Comment, January 4, 2024 at 11:53 am:
I think this is a big issue, actually. There’s more to writing an essay than writing skills alone: being able to organize information and set it down in a way that clearly communicates your ideas, and being able to structure your arguments logically. I’m increasingly running into students who do not understand what a claim or an argument is, at very basic and fundamental levels.
This isn’t to say that AI is the originator of the problem, but it will exacerbate it, because students will no longer need those thinking skills to write and will be outsourcing their thinking and writing mostly to the AI.
The other issue is that AI is increasingly becoming something of an authority (or treated that way) on topics that it clearly does not have a handle on. For example, there is a writing forum that I frequent and its users have been using AI for editing. Unfortunately, ChatGPT sometimes provides erroneous advice (hallucinations) in place of real grammar rules.
People are even using AI to do research, which, again, opens the door to hallucinations, granting a truly unwise level of epistemic authority to a machine that often just…makes stuff up. As for complicated and nuanced topics, forget it, lol.
Very happy to see this topic being discussed, thanks!