Grading Using CoPilot: Next Steps

My newfound enlightenment about how AI can grade against concrete criteria, such as a rubric, has, of course, prompted more questions, or, as some would rather say, it has opened up a can of worms or perhaps even Pandora's box. You be the judge. I want to do a few things:

  1. Go back to some of my assignments, see how I graded my students based on my rubric, and compare that with CoPilot's grading. Of course, if we differ, then why? Is it because of bias or preference towards certain students? I'll cross that bridge if I get to it. So far, I have taken one 100-level essay and submitted it. CoPilot graded it a B-. I gave it a B because the student asked a lot of questions and stuck around after class for deeper discussion. I might have added a few marks for diligence and motivation, but that was not in the rubric. Was a B unfair to the other students? Hmmm.
  2. Go back to my M.Ed. courses and see whether I was graded as CoPilot would have graded me. If the grades differed a lot and I were currently in the course, would I challenge the teacher, or at least ask them to explain the discrepancy if CoPilot gave a much better grade?
  3. Of course, this is not thorough research, and nothing printed here should be deemed conclusive. One more thing: I have found its grading to be consistent (it doesn't lack sleep like most, or probably almost all, educators), but I will keep watching for inconsistencies.
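As a side note for anyone tracking the same comparison, a simple way to quantify the gap between my grade and CoPilot's is to convert letter grades to grade points and subtract. This is just a minimal illustrative sketch; the point scale is a common 4.0 convention, not something from my rubric or from CoPilot.

```python
# Hypothetical sketch: measuring the gap between my grade and CoPilot's.
# The 4.0-scale mapping is a common convention, assumed for illustration.

GRADE_POINTS = {
    "A": 4.0, "A-": 3.7, "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7, "D": 1.0, "F": 0.0,
}

def grade_gap(my_grade: str, ai_grade: str) -> float:
    """Difference in grade points; positive means I graded higher."""
    return round(GRADE_POINTS[my_grade] - GRADE_POINTS[ai_grade], 2)

# The 100-level essay above: I gave a B, CoPilot gave a B-.
print(grade_gap("B", "B-"))  # 0.3 points, about a third of a letter grade
```

Logging a gap like this for each assignment would make it easy to see whether the differences are random noise or a steady pattern of my grading higher than the AI.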

Any more questions you would ask? How will you use this tool?
