I have been thinking about my response to the topic of robograders and automatic assessment as I have read what my respected colleagues in CS, math, science, and education generally have written on the subject.

Once again I find myself in the unpopular position of defending not the current implementation but the idea. In the article linked below, the author closes with, "If it is worth the time for the student to do the work, isn't it worth our time to grade it?" This is a false comparison, IMHO: the student is grokking the concept, learning it, and playing with it; I am initially just verifying the correctness of the answer.

Of course, when necessary I dive further into an answer to see how and why the student made certain choices, but I fail to see why a computer couldn't make the first pass so that I can focus quickly on those who need immediate help. A well-designed question tests not just whether students got the answer but how they got it, and what misconceptions they may hold, whether about fractions, units, definitions, or anything else. Is the technology there yet to do this effectively? On the whole, no, but these things have to start somewhere, all the more so with machine learning, which depends on large data sets to improve.

I would hate for this to be read as an endorsement of those currently using automated grading, especially for high-stakes assessment, or of those trying to diminish or eliminate the role of the educator; nothing is more abhorrent to me. But not spending my entire weekend grading, and instead opening a dashboard with a detailed diagnosis that I can use to drive instruction, support, and further challenge my students, sounds empowering to me as an educator.