The 9 Questions Every Headteacher Must Be Able to Answer
Picture the scene. It is a Tuesday afternoon governor meeting. The board has just read a news story about AI in schools. A governor leans forward and asks: "So, what exactly is our AI strategy?"
You have an AI policy. You know teachers are using AI tools. You have run at least one INSET session on it. But as the room goes quiet, you realise something uncomfortable: you do not have a clear, evidenced answer. Not because you haven't been thinking about it, but because nobody has ever given you a structured way to look at it.
This is not a failure of leadership. It is a failure of framework. AI readiness in schools is genuinely complex: it spans curriculum, staffing, infrastructure, policies, safeguarding, and more. Most school leaders are managing it reactively, not systematically.
The nine questions below are drawn from 33 international frameworks, including those from UNESCO, the OECD, and the UK Department for Education, as well as the EU AI Act, and together they cover every dimension of school AI readiness. They are not a self-congratulatory checklist. They are the questions that reveal where your school's thinking is solid, and where the gaps are.
Work through them honestly. The answers will tell you more than any survey.
AI readiness is not about having the right tools. It is about whether your school has the structures, skills, and strategies to use them well, and safely.
Question 1
"Can your students critically evaluate AI-generated content, or do they just use it?"
Dimension: Student AI Literacy
There is a significant difference between students who use AI tools and students who are AI literate. AI literacy means understanding how large language models work, what biases they carry, when their outputs can and cannot be trusted, and, critically, how to evaluate what they produce rather than accepting it uncritically.
A school at the Exploring level might have students who use ChatGPT to generate essay introductions. A school at the Established level has students who can interrogate those outputs: spotting hallucinations, identifying omissions, and understanding why the same prompt produces different results on different days.
If you cannot answer this question with specific examples from the current academic year, this dimension needs attention.
Question 2
"Do your teachers have the confidence to teach with AI, or just the permission to use it?"
Dimension: Teacher AI Competency
Most schools have, at some point, told staff they are allowed to use AI tools. Very few have equipped them to use those tools in pedagogically sound ways.
Teacher AI competency covers three things: understanding what AI tools can and cannot do, knowing how to integrate them into learning design without outsourcing the cognitive work to the machine, and being able to model critical AI use for students. A single INSET day does not produce this. A sustained professional development pathway does.
The question is not whether your teachers are technically capable. It is whether they feel confident enough to teach with AI, and whether they know how to do it well.
Question 3
"Does your senior leadership team have a coherent AI vision, with named ownership and an allocated budget?"
Dimension: Institutional Readiness
AI readiness does not happen organically. It requires a deliberate institutional decision: that AI is strategic, not peripheral, and that someone is accountable for it.
A school with strong institutional readiness can answer yes to each of the following: Is there a named SLT lead for AI? Is there a budget line for AI tools, training, and infrastructure? Does the school's development plan include AI? Does the head have a clear view of which AI tools are in use across the school, formally and informally?
In most schools, AI has arrived bottom-up: driven by individual teachers, enthusiastic departments, and students who found ChatGPT before anyone had thought about policy. Institutional readiness means leadership has caught up with, and got ahead of, that reality.
Question 4
"Does your AI policy reflect what your teachers are actually doing in classrooms this week?"
Dimension: Policy and Governance
This is one of the most common failure points we see. A school develops an AI acceptable use policy, often adapted from a template, often written by a compliance lead who is not close to classroom practice. Meanwhile, teachers have moved on: they are using AI for lesson planning, differentiation, feedback, and marking. The policy and the practice have diverged.
A strong AI policy is specific, current, and operationally coherent. It names the tools in use. It describes what staff and students are permitted to do with those tools, and what they are not. It covers data handling, attribution, and the treatment of AI-assisted student work. And it is reviewed at least annually.
If your policy was written more than 18 months ago and has not been amended since, it almost certainly does not reflect current practice. That gap is both a governance risk and a source of genuine confusion for staff.
The most common finding when schools run an AI Literacy Audit: their AI policy and their Schemes of Work contradict each other, often without anyone realising.
Question 5
"Do your students know that AI tools can be biased, and does your school have a process for assessing the ethics of new AI tools before adoption?"
Dimension: Ethical Framework
International frameworks, particularly UNESCO and the OECD, are explicit: AI education must go beyond technical skills to encompass ethical reasoning. Students should understand that AI systems reflect the data they were trained on, that this data is often biased, and that the outputs of AI tools can reinforce inequalities if used uncritically.
At the institutional level, ethical readiness means having a process, even a lightweight one, for evaluating the ethical implications of new AI tools before they are adopted. Who built the tool, and what data was it trained on? What data does it collect? What are the terms of service regarding student data? These are not questions for the IT department alone.
Question 6
"Are the AI tools your students and teachers use properly provisioned, secured, and monitored?"
Dimension: Technical Infrastructure
In many schools, students are using AI tools that the school has not formally provisioned, because the tools are free, public-facing, and accessible on personal devices. This creates a technical infrastructure problem that is easy to miss.
Strong technical infrastructure for AI means: knowing which AI tools are in use across the school (formally and informally), ensuring that school-sanctioned tools meet data protection requirements, having content filtering in place for AI-generated output, and monitoring AI tool use in a way that protects students without surveilling them oppressively.
This dimension is closely linked to safeguarding. If you do not know which AI tools your students are using, you cannot know whether those tools are appropriate for their age or whether student data is being collected and used appropriately.
Question 7
"Is there a structured AI CPD pathway for your staff, or has training been a single INSET day and a shared folder of links?"
Dimension: Professional Development
Professional development for AI is one of the most underdeveloped areas in schools that otherwise have good instincts about AI. There is a pattern: a school does a half-day INSET, shares some resources, and considers the training obligation met. Twelve months later, confident AI users and anxious non-users have diverged further, and the school has no systematic picture of who needs what.
A strong professional development approach for AI is structured, tracked, differentiated by role and department, and embedded in the school's wider CPD framework. It includes time for practice, peer sharing, and reflection, not just knowledge transfer. And it is ongoing, because AI tools are developing faster than any once-a-year training day can address.
Question 8
"Does your assessment policy meaningfully address AI-assisted work, or did someone just add a paragraph to the academic honesty policy?"
Dimension: Assessment Integrity
This is the dimension where the gap between policy intention and classroom reality is most acute. Most schools have added some reference to AI to their academic honesty or academic integrity policy. Very few have fundamentally rethought their assessment approach in light of what AI tools can now do.
Assessment integrity in the AI age requires three things. First, a clear and specific policy on what AI assistance is and is not permitted, varying by task type, year group, and purpose. Second, assessment design that is genuinely AI-resilient: tasks that require personal synthesis, live demonstration, oral defence, or process documentation. Third, a fair and proportionate approach to suspected AI misuse that does not criminalise students for using a tool on which they have received little or no guidance.
If your assessment policy has not been reviewed by heads of department in the last 12 months with AI specifically on the agenda, it almost certainly has gaps.
Assessment integrity is not about catching students. It is about designing assessment that makes AI assistance irrelevant, or at least honest.
Question 9
"Do your safeguarding procedures cover AI-specific risks, including deepfakes, synthetic media, and AI-facilitated grooming?"
Dimension: Safeguarding and Risk
This is, without question, the dimension most likely to be underdeveloped in schools, and the one with the most serious potential consequences.
AI-specific safeguarding risks include the generation of synthetic media (deepfakes) involving students or staff, the use of AI tools to facilitate contact between adults and minors, AI-generated content that escalates into bullying or harassment, the collection and misuse of student data by unregulated AI tools, and student exposure to harmful content generated or amplified by AI systems.
None of these risks are hypothetical. They are already occurring in schools. A safeguarding framework that was not written with AI in mind almost certainly does not cover them. Your designated safeguarding lead should be able to tell you, clearly, how these scenarios are addressed in your current procedures. If they cannot, this needs to be a priority this term.
So, how did you do?
If you worked through those nine questions honestly, you will have a rough sense of where your school is strong and where the gaps are. Some of you will have found that two or three dimensions are well-developed and the rest are thin. Some will have discovered that you have a policy that does not match your practice, or that your safeguarding framework has not been updated to reflect AI-specific risks.
That is not a crisis. It is a starting point.
The schools that will handle AI well over the next decade are not the ones with the most sophisticated tools. They are the ones with the clearest picture of where they are now, and the most structured approach to improving. That clarity starts with asking the right questions.
The challenge is that answering nine questions rigorously, across a whole school, against 33 international frameworks, takes time that most senior leaders do not have. The analysis requires comparing what your documents say against what those frameworks require, which means reading both, in detail, systematically.
That is exactly what the AI Literacy Audit Tool does automatically. Upload your school's existing documents (your Schemes of Work, your AI policy, your staff handbook) and get a scored, evidenced audit across all nine dimensions in under 10 minutes. The system reads your documents, detects contradictions between them, surfaces the evidence behind every score, and generates a board-ready report you can put in front of governors.
The audit is built for school leaders who need answers, not more forms to fill in.
Ready to find out where your school stands?
Upload your school's existing documents (Schemes of Work, AI policy, staff handbook) and receive a scored audit across all nine dimensions in under 10 minutes. Benchmarked against 33 international frameworks. No surveys. No forms. Just your documents.
Run your free AI Literacy Audit at audit.deepeducationnetwork.com
About DEEP Education Network: DEEP Education Network is a professional development platform supporting over 1,000 educators across 50+ countries. We specialise in helping schools and school leaders navigate AI integration through courses, training, and practical frameworks grounded in education research. The AI Literacy Audit Tool was built from our analysis of 33 international AI frameworks and is designed to give school leaders a rigorous, evidenced picture of their AI readiness in minutes, not months. deepeducationnetwork.com
If this was useful, share it with a colleague. The conversation about AI in schools is too important to leave to chance.