The Mythology of Authoritative Knowledge: When the Teachers Learn From a Broken Archive

Authoritative knowledge that may distort histories and identities

In February 2026, the UK Education Select Committee launched a formal parliamentary inquiry into AI and EdTech, asking explicitly whether AI perpetuates inequality between students, how it reshapes learning and assessment, and whether children's digital rights are being protected. The inquiry is accepting written evidence until 10 April 2026. Education Secretary Bridget Phillipson has described AI as potentially the biggest boost for education in the last five hundred years.

Simultaneously, the US federal government has issued an AI education executive order and legislators are tracking 52 bills across 25 states reshaping how AI enters classrooms, with AI literacy set to be assessed on the 2029 PISA examinations for the first time in history. Across Europe, the EU AI Act reaches full enforcement in August 2026, and schools are already being formally classified as deployers of high-risk AI systems subject to strict regulatory oversight. Three jurisdictions. Three converging policy moments. One simultaneous surge of infrastructure building. This essay asks the question none of them has yet asked: what knowledge is that infrastructure transmitting, and whose archive has it inherited?

There is a moment before a system becomes infrastructure. It is the moment when it still appears optional, still appears experimental, still appears as assistance rather than authority. That moment is passing.

Artificial intelligence is now entering education, not as a tool at the margins, but as a tutor, an assessor, a recommender of knowledge, and increasingly, as a quiet arbiter of what is correct. It will not announce itself as such. It will arrive as help. It will arrive as efficiency. It will arrive as access. And in doing so, it will inherit the archive. This is a problem because the archive is not neutral.

Within corrective history, it has long been established that the record of African and diasporic history is not simply incomplete, but structured through erasure, distortion, and misattribution. Civilisations are rendered as peripheral when they were central. Contributions are detached from their origin and reassigned under new names. Knowledge systems are stripped of authorship and reintroduced as if they emerged elsewhere, independently, or later. What appears as absence is often the result of removal. What appears as fragmentation is often the result of disassembly. What appears as marginality is often the result of reframing. Yet, this archive, with all its distortions, is precisely what large language models are trained on.

This is why questions like "is AI biased in education", "does AI reflect historical bias", and "how does AI training data affect learning" are not technical questions alone, but historical ones. The system does not know what has been erased. It does not know what has been renamed. It does not know what has been misattributed. It learns from what is present, and what is present has already been filtered through centuries of classification, omission, and narrative control. It is therefore not simply learning knowledge. It is learning what survives distortion.

This is more than a theoretical risk. The HEPI Student Generative AI Survey 2026 found that 95 per cent of students are now using AI and 94 per cent use it specifically for assessed work. Nearly two-thirds say that assessment itself has already changed significantly in response to AI. The system is not entering education. It has entered.

When such a system enters education, it does not merely assist learning. It begins to stabilise that distortion as a baseline. This is where the mythology emerges. The mythology is that AI will democratise knowledge and, by making information accessible, make understanding equitable. That by providing answers at scale, it will level the field. That by removing human bias, it will produce neutral outcomes.

But this rests on a deeper assumption that is rarely examined: that the knowledge being distributed is already sufficiently accurate, sufficiently complete, and sufficiently just to be scaled. It is not. If the archive is distorted, scale does not correct it. Scale locks it in and amplifies it.

This is what transforms a tool into infrastructure: once embedded in classrooms, assessment systems, and educational pathways, AI does not simply reflect knowledge; it begins to define it, determining what is surfaced, what is emphasised, what is omitted, and what is treated as settled.

And once that happens, contestation becomes deviation. Correction becomes friction, and alternative knowledge becomes error. The system does not need to exclude explicitly. It excludes through selection, and this is where the logic of extraction reappears. Data is taken without clear recognition of origin. Knowledge is stripped from context. Contributions are flattened into training material. Attribution becomes diffused to the point of disappearance. Harm is reframed in technical language that obscures its structure. Bias replaces structural distortion. Gaps stand in for erasure. Outputs are discussed, where consequences should be examined.

This softening is not confined to the technology companies building the systems. It is visible in the language of the very institutions convened to scrutinise them. The UK parliamentary inquiry, the most significant democratic examination of AI in education currently underway in the English-speaking world, frames the problem in terms of risks, safety, and responsible use. It does not ask what the archive contains. The EU AI Act, the most comprehensive AI governance framework anywhere on earth, classifies educational AI tools by risk level and demands transparency, accuracy, and human oversight. It does not ask what knowledge systems and categories were removed before the system was trained. Governance is being built around the container. The question of the contents remains, structurally, tragically, outside the frame.

The language softens. The system stabilises. And the student, encountering this system as a teacher, does not see distortion. They see coherence, fluency and authority.

The question is no longer what is known, but which archives are entirely absent from the stream, and who decides what counts as knowledge. Within the frameworks already available to us, the answer has never been to reject knowledge systems, but to interrogate them.

Forensic historiography can provide methods. It asks where knowledge comes from, who is missing, what has been reassigned, and what has been rendered invisible. It treats absence not as a given, but as a signal. It understands that distortion is not random, but patterned.

African decision frameworks provide the evaluative structure. Ma'at asks whether knowledge is in balance. Ifá asks what consequences follow from its use. Ubuntu asks who is affected by its framing. Ogboni asks who is accountable for its distortions. Palaver asks how harm is addressed and how it is restored. These are epistemic safeguards.

The scholarly work of building with them has already begun. A special issue of the New Zealand Journal of Educational Studies published in March 2026, titled Decolonising AI in Educational Studies, explicitly calls for moving beyond use-versus-ban debates toward the deeper question of whose assumptions, values, and knowledge AI systems encode, and how to broaden the foundation from which those systems are built. Researchers at the University of Leeds are applying learning analytics directly to curriculum reading lists across disciplines to identify and measure colonial bias at the level of the archive itself.

The decolonial AI literature, now substantial and growing across multiple continents, converges on a single point: the question is not whether to use AI in education but whether the archive it inherits has been examined, contested, and corrected before it is scaled. The frameworks to do this work already exist. What has been missing is the institutional will to treat archive correction as the necessary precondition rather than the optional supplement.

Once AI is embedded in education at scale, it becomes exceedingly difficult to dislodge. It shapes curricula. It informs assessment. It influences what students trust. It begins to sit upstream of thought itself, and systems trained on a distorted archive simply generate more of the distortion.

The danger, then, is not that AI will teach falsehoods in obvious ways. It is that it will teach partial truths with complete confidence, and in doing so, foreclose the space in which correction might occur. The further danger is that, once these logics are embedded, they do not remain targeted. They become generalisable.

The students themselves are already naming the consequence, even if the institutional vocabulary is not yet available to explain its cause. A RAND Corporation report published in March 2026 found that 67 per cent of students now believe AI use harms their critical thinking skills, up from 54 per cent earlier in the year. They are describing, in plain language, exactly what happens when a system trained on a distorted archive becomes the primary instrument of inquiry. The capacity to contest, to correct, to think against the grain of received knowledge, is the first thing that thins.

The practical demand, then, is not to remove AI from education. It is to insist that archive correction precedes archive scaling, or at least proceeds in parallel with pathways designed for reversing out incorrect, biased or otherwise problematic data. That forensic historiography is applied to training data before that data shapes what a generation of students is taught to accept as settled knowledge, or that the analytic work to identify and freeze contaminated data is undertaken in parallel with a strategy for amelioration hard-coded into rollout plans. The evaluative structures already developed across African and other non-Western knowledge traditions, asking whether knowledge is in balance, what consequences follow from its use, who is affected by its framing, and who is accountable for its distortions, should be treated not as cultural appendices to the main curriculum but as the epistemic infrastructure the moment requires. For EdTech companies, university boards, and school district procurement teams operating under the EU AI Act's education provisions and the UK parliamentary inquiry's scrutiny, that infrastructure is also the missing governance layer their compliance frameworks do not yet contain.

The UK parliamentary inquiry has a submission deadline of 10 April 2026. The EU AI Act reaches full enforcement in August. The PISA AI literacy assessment arrives in 2029. There is a window. The question is whether those with the tools to act on it will recognise what they are holding.

This is where the UK Education Select Committee's inquiry into AI and inequality, the EU AI Act's classification of educational AI as high-risk, and the global legislative surge now reshaping classrooms in over two dozen US states all converge, asking the right questions about harm, but not yet the prior question about the archive those harms inherit from.

Chinenye Egbuna Ikwuemesi

Chinenye Egbuna Ikwuemesi is a writer, author and systems thinker examining power, infrastructure and the myths that legitimise harm, with a focus on Africa as the first quarry for logics later applied to everyone.
London