The Safeguarding Blind Spot: Avoiding AI leaves students vulnerable.

Artificial intelligence is not approaching education. It is already embedded within it. It is in students’ pockets, on their devices, shaping what they search, what they read, what they create and, increasingly, what they trust. This is not theoretical. It’s now our daily reality.

Yet in too many primary and secondary schools, leaders are still debating whether to delay teaching it or restrict students from using it to support their learning. Meetings are held. Tools are discussed. Concerns are raised. And little is implemented with clarity or conviction.

Further, some senior leaders in schools are publicly calling for outright bans on AI, arguing that it undermines learning and weakens critical thinking. I understand the concern. But banning a technology that is rapidly embedding itself into every aspect of society is not strategic caution. It is short-sighted, and here's why.

When we deny supported access to AI, we do not reduce harm. We push it into unsupervised spaces.

History is clear: when transformative tools emerge, those who refuse to engage with them do not protect young people; they leave them unprepared. AI will not disappear because schools or their leaders disapprove of it. It will continue evolving and integrating. If we respond with prohibition rather than structured guidance, we create a generation forced to navigate powerful systems without education, ethical framing or critical understanding.

That is not protection. That is exposure.

All schools operate in a world where risk is real. The internet carries risk. Social media carries risk. We did not respond to those realities by pretending they did not exist. We built digital literacy, supervision, policies and boundaries. We accept that managed exposure, with guidance, is safer than unmanaged exploration. AI requires the same maturity of response.

Yes, AI presents genuine dangers. It can generate persuasive misinformation. It can reinforce bias. But risk doesn't justify retreat. The complexity of AI demands education, not avoidance. If students are experimenting with AI at home without guidance or modelling, that is far more dangerous than structured use in a classroom environment.

There is also a broader professional responsibility. AI is rapidly embedding itself into every aspect of our day-to-day lives, so the question of AI and safeguarding is not only about preventing immediate harm; there is also long-term potential to consider. Appreciating that potential means we build the judgement, resilience and critical thinking young people need to navigate the systems shaping their futures.

Ignoring AI may feel like the easier option, but it delays readiness and weakens resilience.

Refusing to engage with AI is not principled caution. It is a failure of leadership. If we are serious about safeguarding and future preparedness, structured and well-supported AI education is not optional – it’s essential.

Liam Stewart

Liam Stewart is an experienced educator with over 20 years in K–12 leadership across the UK, UAE, and Central Asia. He currently heads Primary and EYFS at Haileybury Astana and previously held senior roles at Aldar Education, where he oversaw curriculum implementation and regulatory accreditation.

At EDNAS, Liam is responsible for academic strategy and product development, ensuring the platform meets both global education standards and regional classroom needs. He holds an MBA in Educational Leadership from University College London and is a Fellow of the Chartered College of Teaching.

https://www.linkedin.com/in/liam-s-9a826544/