ChatGPT failed my course: How bots may change assessment

Aurich Lawson | Getty Images

One of the most unpleasant aspects of teaching is grading. Passing judgment on people is never fun, and it’s even less fun when you’ve spent months interacting with those people on a daily basis. Discovering that your students have tried to get a leg up by using an AI chatbot like ChatGPT has made the process even more unpleasant. From a teacher’s perspective, it feels a bit like betrayal—I put in all this effort, and you respond by trying to do an end-run around the assessment.

Unfortunately, the bot-writing horse bolted long ago. The stable is not just empty; it’s on fire.

So what is the right response to ChatGPT in education? Is there even a single correct response?

Before we get into the whys and wherefores of ChatGPT, let’s jump to the conclusion: It’s important that we don’t enter into an arms race. I don’t want to spend my limited time and energy trying to detect the use of writing tools. I don’t want to pay large fees to access tools that detect bot-written text. I also don’t think avoiding bot output by taking a great leap backward to written exams is an acceptable solution—we already generate large numbers of students who can pass exams while not actually being able to apply what they “know.”

We have to ask what we want to assess. And do we really need to use things that bots can produce—essays or reports—as proxies for our assessment?

ChatGPT is just the latest

Students have been using writing aids for a long time. Grammarly and QuillBot have been rephrasing students’ sentences for as long as they have existed. Before that, asking more fluent friends to help get the phrasing and flow right was the norm. And essay mills have been churning out paid-for papers forever. In short, ChatGPT is new only in the sense that it is accessible—and far more prone to churning out nonsense.

But ChatGPT is also different from tools like QuillBot and Grammarly. The latter tools suggest improvements to existing text—they don’t write an entire essay. From my perspective as a teacher of science and technology, these writing tools are something students can learn from if they want to improve their writing. In other words, before ChatGPT, none of these tools got in the way of my evaluating my students.

This is less true for my wife, who teaches high school English. English is the second language of most of her students, so vocabulary and grammar—the exact things QuilBot and Grammarly target—are still important assessment points. Her senior school students possess a wide range of English proficiency, with some being near-native speakers and others only capable of expressing simple ideas. In senior school, the focus shifts to building arguments and conveying them fluidly in different written and non-written forms. The previous tools only helped if the assignment was written and if the student had already managed to structure an argument.

ChatGPT changes the calculus. You can see the appeal of the chatbot in the classes my wife teaches. For good students, it saves them from doing what they can already do; for struggling students, it turns an almost-certain fail into a chance of passing.

Part of the senior school assessment is a writing portfolio, and boy, has ChatGPT been working hard here. Some portfolios, including the reflection on building the portfolio, were entirely written by ChatGPT. Some of the reflections, which are supposed to describe a student’s experience of doing the assignment, even referred to random pieces of writing not included in the portfolio at all; ChatGPT has no context, only statistics.

The school administration is uncertain about how to proceed. Unlike plagiarism, the case against ChatGPT use is less clear-cut, and no one wants to face the threat of legal action.

In the technical courses I teach, the appeal of ChatGPT is high, while the utility of ChatGPT is less obvious. My students must perform research related to one of their practical projects, and that research is partially assessed in the form of a chapter of the project report. Can you really provide ChatGPT with a prompt that gives it enough context on a student research project?

Despite that question, I’ve also seen written material that appears to come from ChatGPT. The suspected ChatGPT chapter was poorly sourced, superficial, voiceless, and dull. Within that chapter, different sections were supposedly written by different students, yet the voice, style, and sourcing did not change at all. ChatGPT’s effort wasn’t all bad, though; the writing was clear and flowed nicely.
