Annette Vee has written two artificial intelligence policies for her University of Pittsburgh colleagues to consider including in their syllabi.
One policy completely bans AI usage in the classroom. The other permits AI use under professorial supervision.
Those policies exemplify the crossroads professors face as they navigate a college landscape increasingly shaped by AI.
“It's hard to ignore [AI],” said Ms. Vee, a Pitt professor and the director of the university’s composition program. “I think it can be a good thing or it can be a bad thing, like a lot of technologies. It depends on how it's implemented.”
AI became a cause for concern in many classrooms last school year, when AI research lab OpenAI introduced ChatGPT, a chatbot that responds to prompts in readable text, drawing on vast amounts of internet data. Since its November 2022 launch, the chatbot has composed emails, written poems and constructed five-paragraph essays.
The technology is muddied by ethical concerns, chiefly cheating in the classroom. A Canadian writing professor told the Associated Press in August that academics are in “full-on crisis mode” as they try to prevent their students from taking credit for a bot’s work.
As AI popularity increases, students are more likely to find it mentioned in their syllabi. But the extent to which AI is allowed — or banned — is often up to their professors’ discretion.
“I require it,” said Don Maue, a Duquesne University professor and the director of the school’s Center for Emerging and Innovative Media. “We’re using ChatGPT and other tools to enhance our learning and enhance our experience. It doesn’t replace experience and it certainly doesn’t replace learning.”
Students in Mr. Maue’s classes use AI to generate outlines for school reports, then weigh the accuracies and inaccuracies in the AI-provided text. Mr. Maue also holds “Socrates-style” conversations with ChatGPT during class. The interactions amuse students, but they also help them understand how the system’s database works, the professor said.
Because Mr. Maue’s assignments are experience-based, he isn’t worried about students using AI to cheat. He believes integrating AI in the classroom could equip his students for the workforce.
“I hear that 300 million people are going to lose their jobs because of artificial intelligence,” Mr. Maue said, citing a Goldman Sachs report from March. “My position is my students are going to be the people that know how to integrate these advanced research and writing tools into their work so that they're not losing their jobs. They're actually benefiting from the use of the tools.”
Penn State journalism professor Steve Kraycik has also started to incorporate AI in his classes. Students use AI to help generate news clips and social media posts for their stories.
In journalism, Mr. Kraycik believes it’s key that the creation and review of stories remain human-centered. He plans to develop an AI policy for his syllabi based on Associated Press guidelines.
“Nothing replaces a journalist writing a story and having it edited and getting it on the air,” Mr. Kraycik said. “AI can help us with some other peripheral things, but certainly not the guts of writing journalistic stories.”
Like other universities, Penn State offers a sample AI policy for instructors to consider. While academic integrity policies already bar students from cheating and plagiarizing, AI-specific policies can address gray areas that stem from this technology, like using AI for research or summarization purposes.
Ms. Vee, of Pitt, said it’s important for instructors to include AI policies in their syllabi to give students a sense of clarity.
“Students are allowed to get help with their writing — they go to the writing center and they get help from tutors, their peers [and] their instructors,” she said. “But [they need to know] what help is okay and what help isn't.”
First Published: September 2, 2023, 9:30 a.m.
Updated: September 4, 2023, 5:08 p.m.