Intelligent Edu.tech Issue 1 | Page 18

EXPERT COLUMN

AI IN THE STAFFROOM, NOT JUST THE CLASSROOM

Have you ever noticed that little note: 'ChatGPT can make mistakes. Check important info'? It's more than just a disclaimer; it's a warning. It's not just that AI platforms can make mistakes, it's that they do make mistakes.

The increased uptake of AI in schools is a well-documented and discussed issue, with much of the public conversation around AI in education focusing on students: using ChatGPT to do homework, plagiarising essays or cutting corners on assessments. But there's a far less discussed, and potentially more damaging, trend quietly taking place behind the scenes – teachers relying on Generative AI tools to plan lessons, create content and even mark student work.
The appeal is obvious. Faced with growing workloads, tight deadlines and immense stress, teachers find in AI instant summaries, lesson plans and model answers. But here's the danger: these outputs often carry biases, factual inaccuracies or pedagogical flaws, and these mistakes can enter the classroom under the guise of authority.
Unlike students, teachers carry institutional legitimacy. If an AI-generated worksheet includes a flawed historical interpretation, or a science quiz subtly reinforces outdated concepts, students aren't likely to question it; they'll learn it as fact. The result? Errors and biases become embedded in instruction, not just assignments.
By Ben Leitch, our CXO, CyberConnections and Digital Content Manager

Even AI-assisted marking raises serious concerns. Some educators use AI to grade essays or give feedback, yet generative models often favour formulaic responses, penalise unconventional arguments and lack cultural or contextual awareness. If teachers accept these suggestions without scrutiny, students may be unfairly graded or misdirected in their learning.
At the higher education level, we've already seen essays receive automatic fails after automated detection systems flagged them as AI-written, even when no AI was used.
The problem isn't that AI is inherently bad; it's that many educators aren't trained to evaluate its outputs critically, and they are so overworked, strained and tired that a quick and simple solution becomes hugely attractive, despite its flaws.
Schools must treat AI literacy as a staff priority, not just a student concern. Teachers need to understand the limitations of generative models, adopt validation practices and approach AI as an assistant, not an authority. ✓