1. Creating Curriculum Resources or Activities
Risk Rating: Low
Use Description
Using AI to generate lesson activities, lesson plans, presentations, and other educational materials.
Risks Associated
- Inaccuracy of generated content
- Lack of personalisation for student group
- Potential bias in resources
- Lack of pedagogical rigour
Mitigations
- A human always reviews and validates AI-generated content.
- Customise and adapt resources to suit specific classroom needs.
- Use as an opportunity to educate students about AI and its appropriate, ethical use.
2. Parent Communication Email and Report Writing
Risk Rating: Low/Medium
Use Description
Using AI to help draft parent emails or complete student reports.
Risks Associated
- Over-reliance on AI, leading to a loss of personal touch and empathy
- Potential data privacy issues
Mitigations
- Review and personalise all AI generated reports and parent emails.
- Ensure parent/student data privacy is maintained by anonymising names and other personal details within the prompt.
- Use AI tools with strong data protection measures, such as Copilot Enterprise.
- Use AI tools that do not train models on your data (such as Copilot Enterprise), or tools that offer an option to turn off training on your data (such as ChatGPT).
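One way to put the anonymisation mitigation above into practice is a simple redact-then-restore step either side of the AI draft. A minimal sketch in Python, assuming you hold a list of the names to redact; the function names and placeholder format are illustrative, not a vetted tool:

```python
import re

def anonymise(text, names):
    """Replace known names with numbered placeholders before the text is
    pasted into an AI prompt. Returns the redacted text and a mapping so
    the placeholders can be swapped back afterwards."""
    mapping = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"[STUDENT_{i}]"
        mapping[placeholder] = name
        text = re.sub(re.escape(name), placeholder, text, flags=re.IGNORECASE)
    return text, mapping

def reidentify(text, mapping):
    """Restore the real names in the AI draft before it is sent out."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

draft = "Alex Smith has made strong progress in maths this term."
redacted, mapping = anonymise(draft, ["Alex Smith"])
```

Because the mapping stays on the teacher's machine, real names never enter the prompt, and the placeholders can be swapped back into the AI's draft before the email or report is sent.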
3. Data Analysis
Risk Rating: Medium (Dependent on nature of data being processed)
Use Description
Analysing performance data and other educational metrics using AI.
Risks Associated
- Misinterpretation of data
- Potential bias in analysis
- Data privacy concerns
- Poor decision-making based on a limited understanding of the analysis and its results
Mitigations
- Cross-check AI analysis against a manual data review.
- Ensure data used is anonymised and privacy compliant.
- Use AI tools with strong data protection measures, such as Copilot Enterprise.
- Use AI tools that do not train models on your data, or tools that offer an option to turn off training on your data (such as ChatGPT).
- Avoid the use of special category data. This data is subject to strict controls under UK GDPR, and schools must protect it effectively.
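The anonymisation and special-category mitigations above can be sketched as a pre-processing step applied before any data reaches an AI tool. A minimal illustration in Python; the column names and salt are assumptions to be adapted to your own data export:

```python
import hashlib

# Illustrative column names -- adjust to match your own data export.
DIRECT_IDENTIFIERS = {"name", "email"}
SPECIAL_CATEGORY = {"health_notes", "religion", "ethnicity"}

def pseudonymise_rows(rows, salt="change-me"):
    """Drop special category columns entirely and replace direct
    identifiers with a stable salted hash, so results can still be
    linked back to pupils internally but not by the AI tool."""
    cleaned = []
    for row in rows:
        clean = {}
        for key, value in row.items():
            if key in SPECIAL_CATEGORY:
                continue  # special category data never leaves the school system
            if key in DIRECT_IDENTIFIERS:
                clean[key] = hashlib.sha256((salt + value).encode()).hexdigest()[:8]
            else:
                clean[key] = value
        cleaned.append(clean)
    return cleaned

data = [{"name": "Alex Smith", "score": "72", "health_notes": "asthma"}]
safe = pseudonymise_rows(data)
```

Hashing rather than deleting the identifier keeps rows distinguishable across analyses, which supports the cross-checking mitigation without exposing names.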
4. Assessment
Risk Rating: Medium/High
Use Description
Using AI to mark assessments and give feedback.
Risks Associated
- Potential bias in assessment
- Inaccuracy
- Data privacy concerns
- Consent and intellectual property risks
- Lack of transparency about, and acceptance of, the use of AI in the assessment process
- Loss of teacher agency and value in the process
Mitigations
- Review AI-designed assessments for bias and accuracy.
- Ensure student data privacy is maintained by anonymising names and other personal details within the prompt.
- Obtain consent from students before uploading their work to an AI tool (required if the model uses the data for training; recommended for transparency even if it does not).
- Be clear and transparent with students and parents around use of AI for marking and feedback.
- Be clear in your own mind about your purpose in using AI for marking and feedback, and about what you may lose in the process as well as what you will gain.
- Use AI tools with strong data protection measures, such as Copilot Enterprise.
- Establish a clear human moderation process to validate and review AI output.
5. Tutoring Chatbots
Risk Rating: High
Use Description
Implementing AI chatbots to assist students with tutoring and answering questions.
Risks Associated
- Inaccuracy in responses
- Lack of empathy
- Data privacy concerns
- Potential for bias
- Alignment with sound pedagogical practice
- Lack of a clear purpose
Mitigations
- Regularly review and update chatbot content.
- Implement strong data privacy and protection measures that are UK GDPR compliant.
- Use AI tools that are inclusive and accessible.
- Monitor chatbot interactions and provide human oversight.
- Ensure alignment with existing evidence-based pedagogical approaches.
- Ensure alignment with curriculum content.
- Understand the underlying system prompts or chatbot instructions.
- Consider requirements for a Data Protection Impact Assessment (DPIA).
- Seek consent from parents/students where necessary.
- Be clear in your own mind about your purpose in using tutoring chatbots, and about what you may lose in the process as well as what you will gain.
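The monitoring and human-oversight mitigations above could, for example, take the form of a simple interaction log that flags exchanges for staff review. A minimal sketch in Python; the flag terms and log format are illustrative assumptions only, and a school would maintain its own safeguarding list:

```python
import datetime
import json

# Illustrative terms only; replace with the school's own safeguarding list.
FLAG_TERMS = ("self-harm", "medical", "bullying")

def log_interaction(log_path, question, answer):
    """Append each chatbot exchange to a review log (one JSON object per
    line), flagging any exchange that contains a term a member of staff
    should check."""
    needs_review = any(t in (question + " " + answer).lower() for t in FLAG_TERMS)
    entry = {
        "time": datetime.datetime.now().isoformat(),
        "question": question,
        "answer": answer,
        "needs_review": needs_review,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return needs_review
```

Keyword flagging is only a first filter; the human oversight mitigation still requires staff to sample and review the full log regularly.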
Key Learning
Risks
Oversimplification: The assessment provides a broad overview but may not capture the full complexity of AI implementation in educational settings.
Rapid Technological Change: AI technology evolves quickly, potentially making some aspects of this assessment outdated soon after creation.
Context Dependence: The assessment may not account for specific contexts or unique situations in different educational institutions.
Subjectivity in Risk Ratings: The low/medium/high risk ratings are subjective and may vary based on the assessor's perspective or experience.
Incomplete Risk Coverage: There may be additional risks not identified in this assessment, particularly as AI use in education expands.
Mitigation Effectiveness: The proposed mitigations are not guaranteed to fully address the risks and their effectiveness may vary.
Legal and Regulatory Gaps: The assessment may not fully address all relevant legal and regulatory requirements, especially as they evolve.
Lack of Quantitative Measures: The assessment doesn't provide quantitative metrics for measuring risk or the success of mitigations.
Overreliance on the Assessment: Users might rely too heavily on this document without conducting their own context-specific risk analysis.
Ethical Considerations: The assessment may not fully explore all ethical implications of AI use in education.