this post was submitted on 16 Nov 2023

Science

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago)

I've been using the new GPTs feature of ChatGPT to improve my own feedback on student work. If you don't know, a GPT is like a customized, purpose-driven chatbot. So I set one up with the purpose of evaluating my feedback and recommending ways to improve it. I can provide the GPT with 'knowledge' about a topic in the form of Word files and PDFs; then, as I grade, I simply give it my feedback and instantly receive suggestions for improved feedback based on my original feedback and the knowledge base.
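In rough terms, the workflow above amounts to bundling the GPT's instructions, the uploaded knowledge files, and the instructor's draft feedback into one prompt for the model. A minimal sketch of that assembly step, with hypothetical names and rubric text (the actual model call is out of scope and omitted):

```python
# Sketch of how a custom GPT's context is effectively assembled:
# instructions + uploaded 'knowledge' documents + the draft feedback.
# All names and document contents here are hypothetical examples.

INSTRUCTIONS = (
    "You evaluate instructor feedback on student work and suggest a "
    "clearer, better-supported version grounded in the knowledge base."
)

def load_knowledge(docs: dict[str, str]) -> str:
    """Join uploaded documents (filename -> extracted text) into one context block."""
    return "\n\n".join(f"### {name}\n{text}" for name, text in docs.items())

def build_prompt(docs: dict[str, str], draft_feedback: str) -> str:
    """Combine instructions, knowledge, and the draft feedback into a single prompt."""
    return (
        f"{INSTRUCTIONS}\n\n"
        f"Knowledge base:\n{load_knowledge(docs)}\n\n"
        f"Draft feedback to improve:\n{draft_feedback}"
    )

docs = {"rubric.txt": "Theses must state a falsifiable claim."}
prompt = build_prompt(docs, "Your thesis is too vague.")
```

In practice the GPT builder handles this packaging for you; the sketch just makes explicit what the model is working from when it expands a short comment.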

It's flawed and occasionally messes up, but more often than not it improves the quality of feedback a great deal, expanding a 2-3 sentence piece of critical feedback into a 2-3 paragraph critical evaluation, with references to the knowledge base and relevant examples of why the student should take the advice.

Anyway, this relates back to the article through the concept of RAG (retrieval-augmented generation): I give the GPT knowledge to work from, and I have found that it still gets things quite wrong, quite often, especially in some use cases. For example, I built a GPT for creating quiz questions from a knowledge base, and it was wrong more often than the feedback GPT.

The feedback GPT is, as this article says, brittle. If I give it multiple students' work, or multiple pieces of feedback, it starts confusing them very quickly, which is not ideal since you want feedback customized per student. Once I realized that, it was solvable by simply starting a new instance of the GPT. But any instructor not paying close attention would see feedback meant for one student end up on another's paper.
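The bleed-over problem above comes down to shared conversation history: everything said about earlier students stays in context for later ones. A toy illustration of the failure mode and the fresh-instance fix (names are hypothetical; a real custom GPT keeps its history server-side):

```python
# Toy model of conversation context. With one shared chat, each student's
# feedback lands in the same history, so later replies can draw on the wrong
# student's context. A fresh instance per student keeps contexts separate.

def grade_shared(feedback_items: list[tuple[str, str]]) -> dict[str, list[str]]:
    """One shared chat: every prior student's feedback piles into the context."""
    history: list[str] = []
    contexts: dict[str, list[str]] = {}
    for student, feedback in feedback_items:
        history.append(f"{student}: {feedback}")
        contexts[student] = list(history)  # model 'sees' all earlier students too
    return contexts

def grade_isolated(feedback_items: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Fresh instance per student: context holds only that student's feedback."""
    return {student: [f"{student}: {feedback}"] for student, feedback in feedback_items}

items = [("alice", "cite your sources"), ("bob", "tighten the intro")]
shared = grade_shared(items)      # bob's context also contains alice's feedback
isolated = grade_isolated(items)  # bob's context contains only bob's feedback
```

The isolated version is the "start a new instance" workaround: nothing about one student can leak into another's feedback because it was never in context to begin with.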