The introduction of artificial intelligence into higher education has brought new complications. As university administrators adopt new technologies, faculty must navigate a rapidly evolving educational landscape.
Garrett Sims, assistant director of the Office of Student Community Standards, explained that a university-wide AI use policy might unnecessarily limit the use of AI tools for specific course-related applications.
“Given each discipline and each course varies in what ‘acceptable AI use’ looks like, the university empowers instructors and departments to set expectations for their courses,” Sims said in an email.
Kristy Kelly, the director of writing at Oregon State University, is part of a College of Liberal Arts committee tasked with developing policies to identify and address AI use. Her committee surveyed liberal arts professors to understand their most pressing concerns about AI.
“In our poll, the answer was: We want to know what the heck to do when we encounter something that doesn’t fall in line with things we’ve already talked about as acceptable (AI) use in our class,” Kelly said. “(Teachers) don’t want to have to all be figuring it out themselves.”
Some instructors are trying to understand what might lead students to use generative AI.
Kelly said that, in her experience, international students in Ecampus classes often use AI as a translation tool in writing assignments.
“They are definitely struggling with confidence around not wanting to sound like an international student, not wanting to have the ‘writing accent’ in their work, and using ChatGPT to kind of glaze over any linguistic differences,” Kelly said.
Cleavon Smith, a theatre arts and writing instructor at OSU, has come across classwork he suspected was generated by AI. Smith likened it to plagiarism.
“My experience over the last 20-plus years has been that students plagiarize when they lack the confidence in their own ability to do the work, or they’re super cramped for time,” Smith said. “There are always those one or two (students) who just plagiarize because they’re flat out lazy, but that’s not usually the case.”
Smith described how the structure of a writing course and the design of its assignments can reduce the likelihood that students will be tempted to use AI. When instructors incorporate drafting exercises and peer review, students have more opportunities to develop their ideas and receive feedback earlier in the writing process.
“My colleagues who … just say, ‘Turn in this essay on the fourth week of class,’ they’re having a lot more issues with (AI),” Smith said.
According to Kelly, many instructors believe the best approach is to have a conversation with a student they suspect of using AI inappropriately on an assignment.
“I think the earlier you can start a conversation about what is acceptable use (of AI), then you’re much more likely to encourage students … (to use) the other kinds of writing resources that are at their disposal,” Kelly said. “What we certainly can’t have (teachers) doing is putting a zero on a paper and saying, ‘You used AI,’ in the absence of being able to prove it.”
In-class essays and writing samples can help instructors identify AI-generated work by capturing a student’s authentic writing style and voice. Instructors can then compare those samples to later submissions that differ in tone or show other hallmarks of AI-generated text.
Some teachers have also tried hiding trap words and phrases in their assignment prompts. Invisible to a student reading the prompt normally, these traps are designed to produce nonsensical output if a student copies and pastes the assignment prompt into an AI writing tool. An instructor might, for instance, embed a hidden instruction directing a chatbot to work an unrelated word into its response, flagging the submission as machine-generated.
Sims explained that while instructors may use applications to screen students’ work for potential AI usage, those tools by themselves are insufficient to prove that an ethics violation has occurred.
“The university does not consider ‘detection tools’ (Turnitin, ZeroGPT, etc.) as reliable indicators of whether a submission was AI-generated,” Sims said.
When a student is suspected of using AI tools in a manner that violates the cheating, plagiarism or falsification parameters of OSU’s existing code of student conduct, an instructor may submit a report to a College Hearing Officer (CHO).
As with any other potential academic conduct violation at OSU, the CHO meets with the student, reviews the information included in the report, and offers the student the opportunity to respond to the allegations. The CHO then makes a determination based on a preponderance of the evidence.
If the student is found to have violated OSU’s academic integrity policies, they face both academic and educational sanctions, such as grade deductions and a required online academic integrity course.
“The process seeks to balance accountability and support, recognizing the harm of academic misconduct to the community and addressing the root causes of a student’s behavior,” Sims said.
OSU has been on the cutting edge of AI’s integration into higher education, adopting Microsoft’s Copilot AI and constructing the $213 million Huang Collaborative Innovation Complex, which will house an NVIDIA supercomputer intended to advance research and learning with artificial intelligence.
Referring to her survey of instructors, Kelly said that many professors in the College of Liberal Arts are resistant to the presence of AI at an institutional level.
“It makes me wish that (educational) institutions were being a little bit more like, ‘Hold on, we need to understand the implications of this before we just kind of dive in,’” Kelly said. “There’s faculty perspective on that, and then there’s an administrative perspective on that, and sometimes they’re not having the same conversation.”
However, there is growing acceptance that AI will be part of higher education for the foreseeable future.
“A growing perspective is that (teachers) have some sort of responsibility to tell (students) how to manage these tools,” Kelly said. “I feel like I need to prepare students for a workforce where AI is going to be kind of a presumed skill set.”
Ethical concerns in higher education are not new. Long before AI, professors and educational institutions had to contend with cheating, plagiarism and falsification, among other things. Kelly said that students who violate the code of conduct, by misusing AI or otherwise, may be struggling to see what higher education offers beyond a degree that makes them more qualified for a job or enhances their resume.
“The students for whom (AI) is more of a problem are already kind of disadvantaged in this system to begin with,” Kelly said. “So I think we need to get down off our high horse sometimes and try to work it through in a little bit more of a deliberate way.”