A class of 20 'A' Level Business Studies students undertook a unique experiment. Their task was simple: attempt a 9-mark exam question on Microsoft Teams. Upon submission, instead of the usual manual marking process, AI took centre stage.
Armed with the mark scheme, indicative content, and further guidance, the AI was tasked with determining each answer's level, awarding a mark, explaining its reasoning, providing feedback, and suggesting five improvements. All of this was showcased live on a whiteboard for collaborative analysis with the students.
Upon completion, feedback was transmitted via Teams, directing students towards answer refinement based on AI-generated feedback and collective discussion.
However, as with all experiments, observations varied:
Whilst the AI's marking wasn't always precise, its determination of the mark band was generally accurate. The AI tended to play it safe, usually awarding marks around the median range (4 or 5 out of 9). The feedback and suggested improvements were notably beneficial, guiding students towards enhanced answer quality.
The AI's incessant quest for perfection was evident; despite some students revising their answers based on the initial feedback, AI continuously sought more, seemingly overlooking the time constraints of an actual exam setting.
An unintended but valuable outcome was the heightened engagement and critical evaluation skills demonstrated by the students, as they dissected AI's feedback and contemplated its application and possible improvements.
As a safety net, manual marking was conducted post-experiment.
The scenario in this case study is genuine and based upon real events and data; however, its narration has been crafted by AI to uphold a standardised and clear format for readers.
Key Learning
While AI can augment the marking process, it is not infallible.
Feedback from AI tools can serve as a valuable educational resource for students.
Involving students in the AI feedback process can boost engagement and foster critical thinking.
Manual oversight remains indispensable in ensuring the authenticity and accuracy of AI-generated outcomes.
Risks
Over-reliance on AI for exam marking can lead to discrepancies in grading.
AI's continuous striving for perfection may not align with real-world constraints.
Without human intervention, feedback might lack contextual understanding and nuance.