- cross-posted to:
- [email protected]
I’ve used ChatGPT to explain a difficult concept in different terms, and it’s greatly helped me learn.
It’s especially awesome for language practice. You can have full-on conversations with the bot while it corrects your language use and shows you how to improve.
I’ve been using the new GPTs feature of ChatGPT to improve my own feedback on student work. If you don’t know, a GPT is like a customized, purpose-driven chatbot. So I set one up with the purpose of evaluating my feedback and recommending ways to improve it. I can provide the GPT with ‘knowledge’ about a topic in the form of Word files and PDFs; then, as I grade, I simply give it my feedback and instantly receive suggestions for improved feedback based on my original feedback and the knowledge base.
It’s flawed and occasionally messes up, but more often than not it improves the quality of feedback a great deal, expanding a 2-3 sentence piece of critical feedback into 2-3 paragraphs of critical evaluation, with references to the knowledge base and relevant examples of why the student should take the advice.
Anyway, this relates back to the article via the concept of RAG (retrieval-augmented generation): I give the GPT knowledge to work from, and I have found that it still gets things quite wrong, quite often, especially in some use cases. For example, I built a GPT for creating quiz questions from a knowledge base, and it was wrong more often than the feedback GPT.

The feedback GPT is, as this article says, brittle. If I give it multiple students’ work, or multiple pieces of feedback, it starts confusing them very quickly, which is not ideal since you want feedback customized per student. Once I realized that, it was solvable by simply starting a new instance of the GPT, but any instructor not paying close attention could see feedback meant for one student end up on another’s paper.
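For anyone curious what RAG actually does under the hood, here’s a toy sketch. All the names are my own, and the “retrieval” step is just keyword overlap (real systems use embedding similarity), but the shape is the same: fetch the most relevant snippets from a knowledge base, then stuff them into the prompt so the model grounds its answer in them.

```python
import re

def tokenize(text):
    """Lowercase a string and split it into a set of alphanumeric words."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query, knowledge_base, k=2):
    """Return the k snippets sharing the most words with the query.
    Stand-in for the embedding-similarity search a real RAG system uses."""
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(tokenize(doc) & tokenize(query)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, knowledge_base):
    """Prepend the retrieved snippets so the model answers from them,
    not from whatever it half-remembers from training."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, knowledge_base))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

# A hypothetical teaching knowledge base, like the Word/PDF files above.
kb = [
    "Feedback should reference the assessment rubric.",
    "Quiz questions must target one learning outcome each.",
    "Late submissions lose 10% per day.",
]

print(build_prompt("How should feedback reference the rubric?", kb))
```

The brittleness I describe fits this picture: if the retrieval step pulls in snippets about the wrong student (or the wrong topic), the model will confidently write grounded-sounding feedback based on the wrong context.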