As the use of Generative AI expands rapidly worldwide, this article shares how international school leaders can address the safeguarding concerns it raises.

1. Educate Yourself and Your Leadership Team about the Risks

A recent article in The International Educator shares some real and concerning experiences of international schools related to student use of Generative AI. In particular, schools are reporting an increasing number of incidents of peer-on-peer harm or abuse involving the creation of deepfakes: images or videos manipulated to depict a person's likeness, often without their consent. This can be distressing for the community and, especially, for the people involved.

The majority of deepfakes found online globally are non-consensual pornographic images, most of which target women. The number of deepfake images online grew rapidly from around 14,000 in 2019 to 145,000 in 2021, and the US Department of Homeland Security has outlined risks across sectors including commerce, society and national security.

Understanding these emerging risks is key: it enables international school leaders to take early action to mitigate them and to be prepared to respond to incidents. The following four points describe some ways you can achieve this.

2. Develop a Comprehensive AI Policy for Your School

When ChatGPT was released in November 2022, we at Faria set to work developing a template policy to help schools implement guidance on the use of Generative AI. In this document, you can find the following recommendations for inclusion in your organisation's AI policy:

  • Committing to safeguarding by ensuring that the use of these technologies does not result in negative outcomes for individuals or for the community (8.2: Promoting Societal Wellbeing)
  • Providing reporting pathways for community members to raise concerns about the use of GenAI, in accordance with the school's Safeguarding and Whistleblowing policies (8.2: Promoting Societal Wellbeing)
  • Referring violations of this policy to the relevant policies, including but not limited to the Academic Integrity Policy, Behaviour Policy, Safeguarding Policy and Whistleblowing Policy (9.1: Violations of the Policy)

In addition to these points, school leaders can interweave Generative AI into existing policies and practices, for example by adding it as a category in safeguarding and disciplinary incident logs and by defining the relevant terms in the safeguarding handbook.

3. Provide Professional Development and Training for Teachers

School leaders can prepare teachers to recognise and respond to incidents of peer-on-peer abuse or harm related to the use of Generative AI by including it in annual safeguarding training. This will ensure that teachers are clear that if they know or suspect a harmful image or video is being circulated among students, it should be reported immediately to the school's Safeguarding Lead. Your school should consult with law enforcement, child protection agencies or lawyers to understand in advance how to respond in cases where illegal images may be on a student's device. It is important that teachers know they must not look at any images or videos of students on students' devices.

For child protection training specific to international schools, explore the following options:

4. Create and Share Reporting Pathways

Students need to feel safe to come forward to adults who can help them in situations where a harmful deepfake image or video is being circulated among peers. You can foster a safeguarding culture in your school by providing multiple avenues for reporting, including anonymous ones, and by ensuring students are well informed about how the school may respond to such reports. Promoting upstander behaviours is also important, so students know what they can do when they receive an inappropriate or harmful image or video of someone else.

Students can feel helpless when an image or video of them is being shared, and this can create risks to their mental health and wellbeing. However, there are organisations and agencies that schools, families and students themselves can engage with for support in such cases. Without My Consent and So You Got Naked Online (SWGfL) both use student-friendly language to offer reassurance and guidance for students. School leaders can engage with the following organisations to report abusive, explicit or illegal online content: IWF-ICMEC, Take It Down and CEOP.

5. Engage Parents and Students

The most effective way to learn about online risks to students is simply to ask them! Engaging students in focus groups, surveys and other consultative activities can help teachers better understand students' online lives. Schools can help parents develop this understanding too, by offering workshops and resources that prepare them for the kinds of decisions they need to take around technology and show how they can support their children at home in navigating both the risks and benefits of Generative AI. With technology in particular, students may avoid reporting issues or concerns for fear of having their devices confiscated, so it is important that parents react calmly and in a measured way when they learn about online harm affecting their children or occurring in the school.

Through education and professional learning, engagement with parents and students, and strong policies and procedures, we can collectively benefit from the opportunities of Generative AI while proactively and thoughtfully protecting our students from its risks.

For more resources, information and guidance: CIS Perspectives Blog

About The Author


Leila Holmyard
Director of Teaching & Learning – Head of Safeguarding

Leila Holmyard serves as Head of Safeguarding at Faria Education Group and also supports international schools and organisations globally in strengthening their safeguarding and child protection practices. Leila is currently pursuing a PhD in Education at the University of Bath, focusing on safeguarding in international schools.