In education, help from artificial intelligence is becoming commonplace. The Stanford Daily recently reported that a large number of students are already using the ChatGPT chatbot on final exams. In an anonymous survey of 4,497 respondents, 17% of students said they had used ChatGPT to help with assignments and exams in the fall quarter, and 5% said they had submitted material generated directly by ChatGPT with little or no editing.

Academics fear that as machine learning-based assistants become this powerful, tools such as ChatGPT and GitHub's Copilot (based on an OpenAI model called Codex) will force educators to rethink how they teach and how they grade exams.

Separately, Christian Terwiesch, a professor at the Wharton School of the University of Pennsylvania, and a group of medical researchers, most of them affiliated with Ansible Health, decided to put ChatGPT to the test.

In their studies, they found that ChatGPT has limitations and makes mistakes. Overall, the researchers gave the chatbot an average grade, but they expect AI-based assistants to find a place in education and other fields.

After all, the model has been trained on countless human-written texts, so its ability to produce a satisfactory answer to a question is no surprise.

“First, it does an amazing job at basic operations management and process analysis questions including those that are based on case studies,” Terwiesch notes in his article. “Not only are the answers correct, but the explanations are excellent.”

At the same time, he noted that ChatGPT makes simple mathematical errors and cannot answer more advanced process analysis questions. However, the model responds to human guidance: it can successfully correct itself when it receives hints from a human expert.

The researchers report that ChatGPT’s correct answers closely match the accepted answers, and that the model has improved significantly from a success rate of just 36.7% only a few months earlier.

The usefulness of ChatGPT in an educational setting, despite its frequent errors, was emphasized by Thomas Rid of the Alperovitch Institute for Cybersecurity Studies in a blog post.

Rid describes a recent five-day course on malware analysis and reverse engineering:

“Five days later I no longer had any doubt: this thing will transform higher education,” says Rid. “I was one of the students. And I was blown away by what machine learning was able to do for us, in real time. And I say this as somebody who had been a hardened skeptic of the artificial intelligence hype for many years. Note that I didn’t say ‘likely’ transform. It will transform higher education.”

Rid argues that while concerns about AI as a mechanism for plagiarism and cheating in education need to be addressed, a more important conversation should be about how AI tools can improve educational outcomes.