Is ChatGPT as smart as a college student?

Screenshot documenting the form that pops up when a user downvotes a ChatGPT response, as of Dec. 20, 2022. Photo credit: Rolf h nelson, CC BY-SA.

“ChatGPT, the cutting-edge language model developed by OpenAI, is revolutionizing the way we interact with artificial intelligence.”

That is the response of ChatGPT to the request, “write the first line of an article about ChatGPT.”

The sample is characteristic of much of ChatGPT’s output: not perfect, nor particularly creative, but certainly better than one might expect from an AI chatbot.

Changing the request to “write an article about ChatGPT” yields several paragraphs in a handful of seconds. The lengthier the output, the more repetition, clichés and borrowed information become apparent.

ChatGPT, the latest edition of chatbot software released by OpenAI in late November 2022, has shocked many. The chatbot can complete a wide range of tasks in seconds, including writing an essay, poem or letter about a given topic, generating code, translating a passage, conducting research and answering complex questions in a markedly human way. It distinguishes itself from other chatbots with its nuance and its ability to refine its responses based on further input from the user.

At the University of Miami, as at many educational institutions, questions about a new avenue for cheating and plagiarism have abounded.

“Please review your assignments and learning assessment methods to determine how they might be susceptible to ChatGPT, but also consider incorporating the tool in your courses to facilitate learning,” said Jeffrey Duerk, the University of Miami provost, in an email sent to UM faculty on Jan. 19, 2023.

Duerk also encouraged faculty to interact with UM’s writing center resources and attend a series of educational events on artificial intelligence.

His approach to the issue echoes the sentiments of Yelena Yesha, the Knight Foundation Endowed Chair of Data Science and AI and a professor of computer science at UM. Yesha encourages embracing the chatbot’s assets while remaining cautious of its shortcomings.

“It will empower equally, the faculty and educators as well as the consumer, the students and scientists, but it will not replace them,” Yesha said.

ChatGPT’s lack of transparency gives Yesha pause. The technology’s inner workings are largely undisclosed, making it difficult to ensure the resource is reliable and accurate.

She encourages the creators to share their algorithms so the code can be corrected for constraints and biases.

“Certain things or certain information can be either omitted, constrained or misrepresented,” Yesha said. “Technology can inadvertently produce misinformation.”

With Microsoft’s recent multibillion-dollar investment in OpenAI, both companies have voiced commitments to making ChatGPT safe and responsible through future development.

As the chatbot becomes safer, more accurate and more widely used, it could serve a role in more mundane tasks. However, as Yesha describes, there must always be a human somewhere in the process.

In the case of writing code, developers could use ChatGPT to complete more rote tasks, akin to the way many use calculators to do long division.

Victor Milenkovic, a professor of computer science at UM and the founder of UM’s computer science department, compared using ChatGPT to write code to the task of computing the square root of seven.

The calculation is complex on paper, yet simple with a few keystrokes; the calculator is well equipped for such a direct task. It still requires the user to understand what the square root sign means, however, for the answer to serve a purpose. Otherwise, the result is just an unwieldy string of decimals. Similarly, code written with ChatGPT requires the wherewithal of the programmer to be useful.

“If you’re using this tool in the future, you’ll have to learn how to use the tool and, in particular, develop skills to test the program,” Milenkovic said.

He adds that most of programming is a debugging process. He shared a story about a fellow professor who suspected his students were cheating on their coding assignments. Instead of trying to charge them with plagiarism, he graded them on the quality of their faulty programs. The students did not know how to test or debug their own code.
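Milenkovic’s point about testing can be illustrated with a minimal, hypothetical sketch (the function and test below are illustrative, not drawn from his course): generated code can look plausible while hiding a flaw that a short test immediately exposes.

    # Hypothetical, plausible-looking generated helper that computes a median.
    # It is subtly wrong for even-length lists because it skips the averaging step.
    def median(values):
        ordered = sorted(values)
        return ordered[len(ordered) // 2]

    # A short test like this is what surfaces the bug.
    def test_median():
        assert median([3, 1, 2]) == 2        # odd length: passes
        assert median([1, 2, 3, 4]) == 2.5   # even length: fails, exposing the flaw

    if __name__ == "__main__":
        test_median()

Running the test raises an assertion error on the even-length case, the kind of failure a student who only copies generated code would never notice.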

Even when a program functions, Milenkovic uses other methods to combat cheating, many of which could be employed in other fields. He dedicates the lab section of his classes to supervised coding, requiring students to write their own work. He also uses OneDrive, a cloud storage service, to track the history of students’ code, and varies his assignments each semester.

Nicole Hospital-Medina, a frequent lecturer in first-year writing at UM and a poet, uses a similar tactic, asking students to handwrite a paragraph-length response in the first class to establish a baseline for each student’s writing ability.

Hospital-Medina is not too concerned about students using ChatGPT to plagiarize in her class.

“The writing is usually very boring, very generic, very overly generalized, lots of generalizations and no engaging anecdotes. There’s nothing clever about it,” Hospital-Medina said.

This makes it easy to detect the chatbot’s work.

She adds that the chatbot is uncreative and incapable of experimentation, making it a poor poet and further limiting its usefulness in her class.

“The way that ChatGPT is right now, it’s not an asset in any way,” Hospital-Medina said of her writing class.

More generally, she said it could aid start-ups, for example, in place of hiring a content creator or marketer. She cautions, though, that this approach has its own pitfalls, particularly the loss of self-expression.

Across academic subjects, the consensus appears to be that ChatGPT requires at least one human’s judgment, intuition and creativity in complex processes.

With more rudimentary skills, ChatGPT is competent, but using it in place of original work defeats the purpose of learning the fundamentals in the first place.

In language learning, for example, students begin by learning a very basic level of speech. The use of Google Translate, and now ChatGPT, is very difficult to detect at this level. As students grow stronger in the language, automatic translators become less useful, while online dictionaries become more essential.

“Once you develop the skill, then you can use technology as a supplement,” said Viviana Pezzullo, a lecturer in French at UM.

She advocated against prohibiting the technology outright, favoring instead a rethinking of how languages are taught.

“I think learning languages will change,” Pezzullo said. “Right now we are still heavily reliant on memorization.”

To make an assignment “ChatGPT-proof,” educators can try to add a human element. As for the rote tasks that ChatGPT executes well, professors say, embrace it.

“Use it to the fullest extent and do amazing, amazing projects, but be aware of the pitfalls,” Milenkovic said.