Roger's discussion paper on AI in education recommends personalised AI tutoring in schools and colleges, but only if we take steps to stop it widening gaps in educational attainment or deepening economic disparities.
Time will tell whether AI Ed-Tech is as good as hoped. Finding the right relationship between the AI, the teacher and the learners remains challenging. But the potential is extraordinary - both good and bad - and adoption is picking up. So we should put the right foundations in place now.
Contracts for AI tutoring systems must protect privacy and ensure that systems can be scrutinised. Ideas of how to apply that scrutiny must evolve over time if, as expected, AI affects how we understand success in education as well as how we learn. But we can set down some principles.
We will need to understand the effect of using products in real-world settings, not just product test data. We will need to understand the impact on different pupils and different communities in ways that allow comparison between products, between settings and between uses. And we will need to have information that is timely and actionable for when problems arise, as they will. Nothing in the way we currently oversee education equips us for this.
Contract terms and oversight requirements should be coordinated by national governments, which will need to develop new capabilities in this area. Where will the money for that come from?
We could start by getting rid of low-value data systems. The National Reference Test costs a few million a year and provides data that lacks the precision or breadth to be of much use for the one purpose for which it was created: informing GCSE grade boundaries. It should be scrapped, and exam boards should strengthen their own mechanisms through comparative judgement.
The Reception Baseline Assessment is another strong candidate for the bin. The plan is to use it for performance management. But the data - noisy at best - depends for its accuracy on the very people who will be judged by the results. This is a well-trodden path to failure and waste.
Key Learning
The adoption of AI into schools and colleges is a moment to think carefully about how we oversee education. If AI takes off as hoped, we will need to rely less on blunt indicators and invest more time in understanding education with sufficient precision to manage AI-associated risks.
It will take time and money to develop the right ways of working. But fortunately, there are some things we could usefully stop doing as well as things we need to start doing.
Risks
Roger identifies several risks associated with the adoption of AI in education:
- Privacy concerns related to data collection and analysis by AI systems.
- Intrusive assessment methods used by AI tutoring systems, which may lead to unintended consequences.
- Potential misuse of data that could disadvantage learners and citizens.
- Uncertainty regarding the impact of AI on human behavior and learning outcomes.
- The need for transparent and trustworthy policies to govern the use of AI in education.
These risks underline the importance of careful consideration and oversight when integrating AI technologies into educational settings, to ensure positive outcomes for all stakeholders.