ChatGPT presents ethical concerns for schools and profs
Recently, a new artificial intelligence technology called ChatGPT has been gaining traction in the tech world. ChatGPT is a natural language processing system that can generate human-like conversations with users. Created by OpenAI, it is powered by a deep learning algorithm that can understand and respond to user input in real time. It can also generate content based on a prompt, such as thesis statements or even entire essays.
This technology has raised ethical questions as some students have begun to use it to complete assignments, and it leaves schools trying to figure out what policies they need to put in place to address it.
Jennifer Tronti, assistant professor of English and director of the undergraduate English program, said she is aware of this technology and thinks that teachers have to accept that this is a reality now.
“I brought it up in a class and — no names — but people have said that they know classmates (who have used it),” Tronti said. “We know it’s in use already. It’s already something that’s here and around, so I feel that no matter what my personal position is on it, I’m going to have to contend with it and I can do horrible, awful things like make everybody do handwritten essays, which are painful to read. I’m sure they’re painful to write, and I don’t think that’s a great solution.”
Recently, California Baptist University’s Provost Office sent an email to the student body with the subject line “Uses and Abuses of Chat GPT3,” in which it addressed this technology and acknowledged both the “potentials and perils it brings.”
“Your faculty are aware of applications like Chat-GPT3 and GPTZero, which can detect, like SafeAssign, AI-generated text,” said Tae Sung, dean of student success and associate professor of English, in the email.
“A group of faculty is currently exploring how such technologies can be used properly to enhance learning through technology in the same way we already use sophisticated computer software to enhance learning.”
The email also warned against the abuse of this technology.
“If you choose to experiment with these new technologies, avoid any potential honor code violations,” Sung’s email said. “Do not use AI-generated text as your own. Treat it like any open online source that must be evaluated and, if used, cited properly.”
ChatGPT itself, when asked about the ethical implications and practical applications of this new technology, warns against using it improperly.
“It is important to consider the ethical and educational implications of (using ChatGPT to help with writing essays),” ChatGPT said.
“As a language model, ChatGPT can generate text that may be useful in providing ideas or suggestions for your writing, but it is not a substitute for your own original thinking and writing.”
ChatGPT, while not sentient, states that its writing may be used in essays or articles, as long as the use follows OpenAI’s terms of service and is done in a “legal and ethical manner.”
“Using ChatGPT to write essays may be viewed as a form of academic dishonesty and may result in consequences if the work is found to be plagiarized or of poor quality,” ChatGPT said. “Additionally, the use of language models like ChatGPT may perpetuate biases and inaccuracies present in the data used to train the model.”
Dr. Laura Veltman, professor of American literature and associate dean of arts and letters, said she believes the technology could effectively be used as a supplement instead of a replacement.
“There are some ways I think it could be used effectively, like perhaps you want to just brainstorm ideas,” Veltman said. “(You tell it), ‘I want to write a paper on Ophelia in ‘Hamlet.’ What are some common thesis statements that people have used?’ So maybe it generates a list or maybe you don’t even ask what people have used but ‘generate a list of Hamlet thesis statements about Ophelia.’ So it gives you some ideas and you’re like ‘Oh, this is interesting. Now I’m going to go research this one.’”
Tronti said she is considering implementing this technology in her classes to help students learn how to use it properly.
“I feel like I need to create an exercise in class to have my students do (something with it),” Tronti said. “Maybe we’ll do it for one of my composition classes later in the semester, so that way we can play with it. But to me, one of those challenges is that I think — and maybe English teachers are responsible a little bit — but there’s this misconception that a research paper is just ‘here’s my list of facts’ as opposed to ‘here’s the relationship between these facts; here’s the argument that I am building based on this data.’”
Jacob Brook, senior history major and employee at CBU’s Writing Center, said the Writing Center recommends that if students use ChatGPT at all, they treat it like a source and cite it properly.
“The first thing I would say the AI is lacking is trustability, and the second is clarity,” Brook said. “The AI is not always clear. The last thing is that it could hinder your development (in writing and learning).”
Veltman said she is concerned about the direction technology like this could take but is interested in seeing where it leads.
“We could lose the kinds of conversations that come from discussing a novel or a poem or a short story or whatever,” Veltman said. “Those grow out of human interactions. If I just tell you to tell me some interesting ideas from a novel, you can do that, but it doesn’t actually make it interesting. There’s no magic in that and I think we’re being less than the creative selves that we were meant to be. I don’t think we were designed to just find information or to pass along information. We’re designed to grow in community with each other.”