
An AI Risk Assessment

Primary
Pupil Referral Unit
Secondary
Sixth Form
Specialist
Nursery
SEND
Leadership & Implementation
Practitioners
Chris Goodall

Head of Digital Education, Bourne Education Trust

This assessment outlines five key areas where AI is used in education: curriculum creation, parent communication, data analysis, assessment, and tutoring chatbots. Each area is evaluated for potential risks and given a risk rating from low to high. Common risks include data privacy concerns, inaccuracy, bias, and loss of personal touch. The document provides specific mitigation strategies for each use case, emphasising the importance of human oversight, data protection, and clear communication with stakeholders. Overall, it underscores the need for careful implementation of AI in educational settings, balancing the benefits of technology with ethical considerations and pedagogical integrity.
1. Creating Curriculum Resources or Activities

Risk Rating: Low

Use Description

Using AI to generate lesson activities, lesson plans, presentations, and other educational materials.

Risks Associated

  • Inaccuracy of generated content
  • Lack of personalisation for student group
  • Potential bias in resources
  • Lack of pedagogical rigour

Mitigations

  1. A human always validates and reviews AI-generated content.
  2. Customise and adapt resources to suit specific classroom needs.
  3. Use as an opportunity to educate students about AI and appropriate ethical use.

2. Parent Communication Email and Report Writing

Risk Rating: Low/Medium

Use Description

Using AI to help draft parent emails or complete student reports.

Risks Associated

  • Over-reliance on AI, leading to loss of personal touch and empathy
  • Potential data privacy issues

Mitigations

  1. Review and personalise all AI-generated reports and parent emails.
  2. Ensure parent/student data privacy is maintained by anonymising names and other personal details within the prompt.
  3. Use AI tools with strong data protection measures, such as Copilot Enterprise.
  4. Use AI tools that do not train models on your data (such as Copilot Enterprise), or models with an option to turn off training on your data (such as ChatGPT).
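The anonymisation step in mitigation 2 can be sketched in code. This is a minimal illustration only, not a complete anonymisation tool: the pupil name, report text, and helper functions are invented for the example, and real deployments should consider misspellings, nicknames, and other identifying details that simple replacement will miss.

```python
# Minimal sketch: swap known pupil names for placeholders before draft text
# reaches an AI prompt, then map them back in the AI's response.
# The names and report text below are invented examples.

def anonymise(text, names):
    """Replace each real name with a neutral placeholder; return text and mapping."""
    mapping = {}
    for i, name in enumerate(names, start=1):
        placeholder = f"[Pupil {i}]"
        mapping[placeholder] = name
        text = text.replace(name, placeholder)
    return text, mapping

def reidentify(text, mapping):
    """Restore real names in the AI's response before it goes to parents."""
    for placeholder, name in mapping.items():
        text = text.replace(placeholder, name)
    return text

draft = "Amira Khan has made strong progress in maths this term."
safe_prompt, mapping = anonymise(draft, ["Amira Khan"])
print(safe_prompt)  # "[Pupil 1] has made strong progress in maths this term."
```

Only the placeholder version is ever sent to the AI tool; the mapping stays on the school's own systems.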

3. Data Analysis

Risk Rating: Medium (Dependent on nature of data being processed)

Use Description

Analysing performance data and other educational metrics using AI.

Risks Associated

  • Misinterpretation of data
  • Potential bias in analysis
  • Data privacy concerns
  • Poor decision-making based on a lack of understanding of the analysis and results

Mitigations

  1. Cross-check AI analysis with a manual data review.
  2. Ensure data used is anonymised and privacy compliant.
  3. Use AI tools with strong data protection measures, such as Copilot Enterprise.
  4. Use AI tools that do not train models on your data, or models with an option to turn off training on your data (such as ChatGPT).
  5. Avoid use of special category data. This data is subject to strict controls, so schools must adhere to UK GDPR and protect it effectively.
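Mitigation 1 — cross-checking AI analysis against a manual review — can be as simple as re-computing a headline figure by hand before acting on it. A minimal sketch, with invented scores and an invented "AI-reported" figure:

```python
# Illustrative sketch of cross-checking an AI tool's analysis:
# re-compute a headline statistic manually and flag any mismatch.
# The scores and the AI-reported value are invented examples.

scores = [54, 61, 47, 72, 68, 59]   # anonymised assessment scores
ai_reported_mean = 60.2             # mean claimed by the AI analysis

manual_mean = round(sum(scores) / len(scores), 1)
print("Manual mean:", manual_mean)

# Route any discrepancy to human review rather than trusting either figure.
if abs(manual_mean - ai_reported_mean) > 0.5:
    print("Discrepancy - review the AI analysis before making decisions.")
```

The same habit applies to any figure an AI tool produces: if it cannot be reproduced from the underlying data, it should not drive a decision.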

4. Assessment

Risk Rating: Medium/High

Use Description

Using AI to mark assessments and give feedback.

Risks Associated

  • Potential bias in assessment
  • Inaccuracy
  • Data privacy concerns
  • Consent and intellectual property risks
  • Lack of transparency and acceptance around the use of AI in assessment
  • Loss of teacher agency and value in the process

Mitigations

  1. Review AI-designed assessments for bias and accuracy.
  2. Ensure student data privacy is maintained by anonymising names and other personal details within the prompt.
  3. Obtain consent from students before uploading their work to AI (required if the model uses data for training, and recommended for transparency even if not).
  4. Be clear and transparent with students and parents about the use of AI for marking and feedback.
  5. Be clear in your own mind about the purpose of your use of AI for marking and feedback, and what you may lose in the process as well as what you will gain.
  6. Use AI tools with strong data protection measures, such as Copilot Enterprise.
  7. Establish a clear human moderation process to validate and review AI output.

5. Tutoring Chatbots

Risk Rating: High

Use Description

Implementing AI chatbots to assist students with tutoring and answering questions.

Risks Associated

  • Inaccuracy in responses
  • Lack of empathy
  • Data privacy concerns
  • Potential for bias
  • Alignment with sound pedagogical practice
  • Lack of purpose

Mitigations

  1. Regularly review and update chatbot content.
  2. Implement strong data privacy and protection measures that are UK GDPR compliant.
  3. Use inclusive AI tools.
  4. Monitor chatbot interactions and provide human oversight.
  5. Ensure alignment with existing evidence-based pedagogical approaches.
  6. Ensure alignment with curriculum content.
  7. Understand underlying system prompts or chatbot instructions.
  8. Consider requirements for a Data Protection Impact Assessment (DPIA).
  9. Ensure consent has been sought from parents/students where necessary.
  10. Be clear in your own mind about the purpose of your use of tutoring chatbots and what you may lose in the process as well as what you will gain.
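Mitigation 7 — understanding the underlying system prompt — can be made concrete with a sketch. The prompt wording below is entirely invented, not taken from any real product; the point is that a school should see, and check, the instructions a chatbot actually runs on before approving it.

```python
# Illustrative sketch only: an example of the kind of system prompt a school
# might be asked to review before deploying a tutoring chatbot.
# The wording is invented for illustration.

SYSTEM_PROMPT = """You are a maths tutor for Key Stage 3 pupils.
- Guide pupils towards answers with questions; do not simply give solutions.
- Stay on curriculum topics; politely decline anything else.
- Never ask for, or store, a pupil's personal details.
- If a pupil mentions harm or distress, advise them to speak to a trusted adult.
"""

# A lightweight review step: confirm the safeguards the school expects
# actually appear in the instructions it is approving.
REQUIRED_SAFEGUARDS = ["personal details", "trusted adult", "curriculum"]

missing = [s for s in REQUIRED_SAFEGUARDS if s not in SYSTEM_PROMPT]
print("Missing safeguards:", missing)
```

A keyword check like this is no substitute for reading the full instructions, but it illustrates why access to the underlying prompt matters: without it, none of these safeguards can be verified at all.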

Key Learning

Risks

Oversimplification: The assessment provides a broad overview but may not capture the full complexity of AI implementation in educational settings.

Rapid Technological Change: AI technology evolves quickly, potentially making some aspects of this assessment outdated soon after creation.

Context Dependence: The assessment may not account for specific contexts or unique situations in different educational institutions.

Subjectivity in Risk Ratings: The low/medium/high risk ratings are subjective and may vary based on the assessor's perspective or experience.

Incomplete Risk Coverage: There may be additional risks not identified in this assessment, particularly as AI use in education expands.

Mitigation Effectiveness: The proposed mitigations are not guaranteed to fully address the risks and their effectiveness may vary.

Legal and Regulatory Gaps: The assessment may not fully address all relevant legal and regulatory requirements, especially as they evolve.

Lack of Quantitative Measures: The assessment doesn't provide quantitative metrics for measuring risk or the success of mitigations.

Overreliance on the Assessment: Users might rely too heavily on this document without conducting their own context-specific risk analysis.

Ethical Considerations: The assessment may not fully explore all ethical implications of AI use in education.