OpenAI's recent addition of "custom instructions" to ChatGPT is a significant development for educators. The feature, found in a user's profile settings, lets educators set contextual instructions that ChatGPT then follows in every conversation.
For educators managing multiple classes, groups, or students with distinct needs, this is a real time-saver. Rather than re-entering prompts on each use, educators can create, save, and quickly apply instructions tailored to a particular audience, so ChatGPT responds in a way that matches students' needs and keeps them engaged with the task or activity at hand.
Three applications illustrate the feature's potential:
Whole Class Scaffolding: Supports the entire class by structuring information or tasks so that everyone can follow along.
EAL Group Scaffolding: Tailored for students with English as an Additional Language (EAL), this level adds linguistic scaffolds that help students access the content.
Individual Student IEP Support: A highly personalised approach for students with Individualised Education Plans (IEPs), offering strategies and supports specific to their learning needs.
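The three levels above amount to keeping a small library of reusable instructions and applying the right one for the audience. A minimal sketch of that idea, in Python rather than ChatGPT's own settings interface, is shown below; the template names and wording are illustrative assumptions, not OpenAI features.

```python
# Illustrative sketch: reusable "custom instructions" stored per audience.
# The keys and template texts here are hypothetical examples.
CUSTOM_INSTRUCTIONS = {
    "whole_class": (
        "Structure every answer with a short summary, key vocabulary, "
        "and step-by-step guidance suitable for a mixed-ability class."
    ),
    "eal_group": (
        "Use plain English, define subject-specific terms, and keep "
        "sentences short to support students learning English."
    ),
    "iep_student": (
        "Break tasks into small steps, give one instruction at a time, "
        "and suggest a checking strategy after each step."
    ),
}

def build_prompt(audience: str, task: str) -> str:
    """Prepend the saved instruction for an audience to a lesson task."""
    instruction = CUSTOM_INSTRUCTIONS[audience]
    return f"{instruction}\n\nTask: {task}"
```

In ChatGPT itself this selection happens once in the profile settings rather than per prompt, but the underlying pattern is the same: one saved instruction, applied consistently to every interaction.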
One essential caveat is data protection. Educators must avoid sharing identifiable student details when working with AI tools, to maintain confidentiality and uphold ethical standards.
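One practical safeguard is to strip student names from any text before it is pasted into an AI tool. The sketch below assumes a hypothetical class roster; a simple substitution like this is a first line of defence, not a guarantee, and human review is still needed.

```python
import re

# Hypothetical roster for illustration; a real list would come from the
# school's own records and never be shared with the AI tool itself.
KNOWN_NAMES = {"Aisha", "Tom", "Priya"}

def anonymise(text: str) -> str:
    """Replace known student names with a neutral placeholder."""
    for name in KNOWN_NAMES:
        text = re.sub(rf"\b{re.escape(name)}\b", "[student]", text)
    return text
```

For example, `anonymise("Tom needs extra time")` yields `"[student] needs extra time"`, so the prompt describes the need without identifying the child.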
The scenario in this case study is genuine and based on real events and data; however, its narration has been crafted by AI to maintain a standardised, clear format for readers.
Key Learning
Custom instructions in ChatGPT make it markedly easier and quicker to deliver tailored educational support.
The flexibility of this feature ensures wide-ranging applications, from whole-class to individualised interventions.
Prioritising data protection is crucial when interfacing with AI platforms.
Risks
Potential for unintentional sharing of sensitive student information if educators aren't vigilant.
Over-reliance on AI-generated instructions might lead to reduced human oversight in addressing student needs.
The efficacy of AI-facilitated support may vary, necessitating consistent monitoring and refinement.