Computer says no: Automatic assessments for rapid feedback?


In 2026 I hope that the routine feedback we give students has been computerised, increasing both students’ access to feedback and the speed at which they receive it. If you are marking 100 essays, how many times do you find yourself correcting the same basic errors: incorrect referencing, use of the first person, poor grammar, inappropriate subheadings … the list goes on. Machines can do this! If they did, we could use the time freed up to explore the depth and nuance of the work students have produced, cutting through the moraine of ‘technical’ errors to hit the bedrock of real student scholarship.
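To give a flavour of how simple some of these checks are, here is a minimal sketch of a rule-based checker for two of the errors mentioned above. The rules and feedback messages are illustrative assumptions, not drawn from any real marking tool:

```python
import re

def basic_feedback(essay: str) -> list[str]:
    """Return canned feedback comments for a couple of routine errors.

    Illustrative rules only: flags first-person pronouns, and the
    absence of anything resembling an author (year) citation.
    """
    comments = []
    # Crude first-person check: whole-word I / we / my / our
    if re.search(r"\b(I|we|my|our)\b", essay, re.IGNORECASE):
        comments.append("Avoid the first person in formal academic writing.")
    # Crude referencing check: look for a 'Surname (2019)'-style citation
    if not re.search(r"\b[A-Z][a-z]+ \(\d{4}\)", essay):
        comments.append("No author (year) citations found; check your referencing.")
    return comments
```

A real tool would need far subtler rules (and ideally a trained language model), but even pattern matching at this level could catch a surprising share of the repetitive corrections described above.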

To some extent we already do this: knowledge can be tested automatically and on demand using banks of MCQs, and judicious use of peer marking in online communities allows students to learn from their peers in non-summative exercises. These approaches are increasingly supported by web-based tools such as PeerWise and NoMoreMarking. Well-chosen (and well-annotated) exemplars can provide additional support that students appreciate and that helps them improve.

Will increased automation remove the academic from the picture? Not in the scenario I imagine: a student writes an essay draft and feeds it into a slot in the wall. In seconds, a printed feedback report emerges from the machine. The student can read it, reflect on it, and correct the basic errors without direct staff input. This computer-generated analysis of the essay then forms the basis of a more informed interaction between student and academic, one that explores the work undistracted by technical errors, facilitating deeper learning and so higher attainment.

These programs exist off the shelf, and they could be integrated with the current VLE. More sophisticated versions use machine learning to adapt feedback to local needs. Could it be done? Yes, obviously, with the appropriate resource and support. Should it be done? Again, a yes from me, as long as we don’t use this approach to abdicate our pedagogical responsibilities but rather treat it as an additional tool to help us improve feedback and the quality of student interaction.

2 thoughts on “Computer says no: Automatic assessments for rapid feedback?”

  1. In 10 years’ time we may not have essays at all, as it is likely by then that a computer will be able to write them for a student without us being able to identify that it has happened.

    1. We already have this within the university now, it just isn’t used.
      Turnitin allows draft submissions to be made if you don’t save them to repositories; this can help show students where they are going wrong with referencing. They can then make corrections and submit a final version.

      Turnitin and other online marking systems allow for comment banks to be created where you can drag and drop comments (either created by yourself or system defaults) on to the submitted work. You can use this to make banks of common errors, typing up a response only once to things you know you will see repeatedly across the submissions (e.g. a common miscalculation), which can be used again and again. It might not be instant feedback but it certainly can cut down on marking those repeated mistakes.

      Other universities do this already, so why aren’t we? We pay for the technology, we just don’t use it. The future is partially here!

      As for holes in the wall, we are already beyond that with electronic submission!

