AI: Is Education Doomed?

Story by Marcus Goble | Design by Rody Farr | Photos by Dylan Hanson


A professor begins grading an essay. The paper reads like one of the best they have ever graded. They wonder to themselves, “Either I’m the greatest writing teacher to ever exist, or this is plagiarized.” They plug the paper into a plagiarism checker, and it shows up clean. What, then, has changed?

As AI continues to infiltrate various aspects of our lives, the education system works to adapt, ensuring the technology does not negatively impact learning while also weighing the opportunities it provides.


Generative Artificial Intelligence

Programs like ChatGPT dominate the discourse surrounding AI. Anyone can generate text from a prompt given to the program, which means students can theoretically use it to write their text-based assignments for them.

ChatGPT is a form of generative AI, which produces content based on the data it was trained on. The AI is not sentient, though it may seem to be.

“It has no intent,” Director of the Multimodal Education Center Chad Schone says. “It’s not trying to do anything. It has no agency. It is just doing what it was built to do, which is predict text and predict words that come together based on what it was trained on.”

Is it Plagiarism if AI Generated This Heading?

Generative AI poses a major problem for the education system: students can use these programs to generate entire essays in a matter of seconds. While some students may see this as an absolute win, it raises real concerns about plagiarism.

In normal instances of plagiarism, tools like Turnitin are used to detect cheating. AI-generated text cannot be detected the same way. Turnitin has created programs that attempt to detect AI writing, but they can only go so far.

“Turnitin says they are erring on the side of not catching things in order to be 98% accurate with the detections,” Director of Instructional Technologies and Design for CWU Multimodal Learning Delayna Breckon says.

The technology is brand new. Developments from Turnitin or similar programs may make it easier to detect AI writing, but faculty need to stay cautious so they do not falsely accuse students of using AI.

“We’re not encouraging folks to fail anybody to say this is definitive,” Breckon says. “There’s no way to prove any of these things.”


A Few More Concerns

Generative AI may not always be accurate. It predicts text based on a data set, and programs like ChatGPT only have data up to September 2021. You can test this yourself: ask ChatGPT or Snapchat’s My AI who won the most recent Super Bowl or World Series. Misinformation can easily find its way into projects that use generative AI.

There’s also a bias concern. While an AI cannot have bias, the data set it’s trained on can.

“There are standard worries that people have,” Department of Philosophy & Religious Studies Lecturer David Schwan says, “especially with these large language models, these generative models, that whatever the data set was that they were trained on, any bias or any kind of structural assumptions that are built into the data are going to be reflected, and not always super clearly, but they’re going to bleed through.”

Faculty do not just worry about the content AI creates. Privacy is one of the big concerns for Faculty Senate Chair and Associate Professor of Musicology Mark Samples.

“Anytime you put your words into this system there are privacy issues,” Samples says, “not just for personal information, but also when you’re working in an institution like an educational institution, putting information into ChatGPT is going outside the institution.”

The last of these concerns calls into question what education is.

When you take part in any creative process, you learn to think through that process. When writing an essay, you are developing skills at every stage of the work. Generative AI lets you shortcut that thinking.

“My worry is, especially in settings where we’re trying to develop these skills, people might try to jump over that process a little bit,” Schwan says.


CWU’s AI Stance

CWU’s faculty have a wide variety of opinions when it comes to AI. Some professors have begun implementing generative AI into their assignments. Others have AI written into the plagiarism section of their syllabus.

“There are no policy statements out there right now,” Samples says. “There are no syllabus statements out there right now that are definitive or from the university.”

Enforcing a university-wide ruling on AI comes with some caveats. As Turnitin’s current technology shows, how would a hard stance against AI be enforced? And a ruling in favor may make some faculty uncomfortable.

“Whatever we decide to do, probably needs to keep in mind a couple of things,” Samples says. “Faculty are going to have different comfort levels with this technology. They’re going to have different awareness of this technology, just like students.”

CWU has avoided taking a hard position on AI. The university wants to respect the choices of its faculty while still monitoring the emerging technology.


Is AI Good, Actually?

Many members of our faculty see opportunities in this technology. Some use it, or allow its use, in their classrooms. In writing assignments, Schwan encourages students to experiment with ChatGPT.

“Right now, I’m kind of just trying to keep things collaborative, and negotiate with people and say try to use these tools where you can,” Schwan says. “You do want to produce your own work, but cite where you are using them in different ways.”

Both faculty and students can find uses for generative AI if they think critically about it. Students can produce outlines and brainstorm with the help of AI. Professors can put together a syllabus or summarize class notes at the press of a button.

“These tools are enormously powerful,” Schwan says, “and if what you’re trying to do is focus on maybe being creative or expressing yourself in particular ways, I’m just going to talk about the OpenAI ChatGPT tools, these can save you a lot of grind of certain types of creative activities.”

Some professors also use generative AI to teach.

“You can iterate on multiple papers,” Schone says. “You can edit them and basically have it write and then have students correct the writing. So there’s ways to use it in order to look at writing from a different perspective, allowing students to grade it, which would inform their own writing.”

The process of writing a paper becomes much more condensed as well. Creating an outline turns into a few button presses and some editing. Brainstorming becomes a matter of a quick prompt and some critical thinking.

“One faculty was saying that they are excited about it because it lets them write multiple papers in one quarter rather than just one big term paper,” Schone says.

When used optimally, AI can improve students’ education. It allows for multiple repetitions of big creative projects over the course of a quarter.

“You don’t have time to write 20 papers in a ten week course,” Samples says, “but maybe this tool could allow for students to not turn that in as their own work, but to get more reps.”


Moving Forward

Generative AI continues to advance quickly. It also shows no sign of disappearing. 

“It’s changing,” Breckon says, “it kind of blew me away from my first engagement to my last engagement like with how fast it’s maturing.”

Every time technology evolves rapidly, it creates discourse about whether it will ruin education. Most recently, smartphones were the center of that discourse. Thirty to 40 years ago, personal computers were going to doom education.

“I don’t know what the future holds, and most people don’t,” Schwan says, “and what’s always so terrifying about a new technology is that you’re just not exactly sure where things are gonna go.”
