
INOCULATING AGAINST MISINFORMATION: How BCoT is Trying to Build Digital Resilience in Learners

Teaching & Inclusive Practices
Case Study
Practitioners Panel
Scott Hayden

Head of Teaching, Learning and Digital

Basingstoke College of Technology (BCoT) is proactively building digital resilience in learners by educating them about AI and misinformation, inspired by Audrey Tang’s approach. Through an in-person induction session, students were introduced to deepfakes, learning to identify digital manipulation. Following this, they engage with a self-marking module, "AI @ BCoT," co-developed by Holly Hunt and Student Digital Leaders, teaching ethical AI use and deepfake detection. The initiative aims to build critical thinking, promote responsible AI usage, and equip learners with the skills to recognise and counter misinformation effectively.

BCoT are addressing the growing challenge of AI-driven misinformation by equipping learners with the tools to recognise and resist digital manipulation. Inspired by Audrey Tang’s innovative approach in Taiwan, their programme is designed to build students' resilience against misinformation through a combination of education, critical thinking, and ethical AI use.

The initiative begins with an in-person session as part of the college induction, where all full-time learners are introduced to how AI is used at BCoT. A key activity includes identifying a deepfake version of the tutor, which helps students develop an early understanding of how digital content can be manipulated.

Following this, learners progress through the self-marking "AI @ BCoT" module, co-authored by Holly Hunt and supported by Student Digital Leaders. The module reinforces the ethical use of AI, including strict guidelines against using generative AI without consent, particularly when it involves a person's likeness. Learners also analyse deepfake content featuring BCoT's Principal, Anthony Bravo OBE, further developing their ability to detect AI-manipulated media.

This approach not only teaches students how to use AI responsibly but also strengthens their critical thinking skills. By engaging with real-world examples and AI tools, learners become better prepared to navigate the digital landscape and understand the potential of AI to both enhance and distort information. Ultimately, BCoT's programme empowers students to recognise and resist misinformation, ensuring they are equipped with the knowledge and skills needed to thrive in a digitally complex world.

Key Learning

High Student Engagement in Person: The in-person activity, where students had to identify a deepfake version of the tutor, generated significant interest. However, roughly 70% of learners guessed incorrectly, indicating how difficult it is to spot deepfakes, which reinforces the need for such training.

Effective Introduction of AI Module: The transition from the in-person session to the self-marking AI module in Week 2 worked well, especially with a human introduction providing context. This approach helped make the subject matter more relatable.

Time-Consuming but Worthwhile: Conducting 15 in-person presentations to reach all learners was a significant time investment, but it proved effective in engaging students and introducing the AI-related topics.

Apathy Due to Misinformation Overload: Some students showed signs of apathy, likely because they are overwhelmed by the volume of misinformation they encounter regularly. The general information overload of induction week may also have contributed to some disengagement, suggesting that timing and content management are key considerations.

Positive Reactions to the Module: Within two days of the module going live, there was notable curiosity and engagement from learners, with follow-up questions and increased interest. This response indicates that the module resonated with students.

Emergence of a Student Steering Group: The module’s success has sparked the formation of a Student Steering Group for AI, showing growing student interest and leadership in shaping future AI education initiatives at BCoT.

Risks

One key risk when working with AI and misinformation is the misuse of generative AI, particularly the creation of deepfakes. It is essential to stress the importance of obtaining consent before using another person's likeness in AI-generated content.

BCoT has a zero-tolerance policy on using anyone's image or likeness in generative AI without their explicit consent. This policy is crucial to protecting individuals' rights and ensuring that AI is used responsibly. Institutions should tie such policies explicitly to any AI-related activities to prevent misuse and uphold ethical standards.