The Mythology of Neutral Systems: Facial Recognition Bias Is a Historical Bias

This week, Angela Lipps, a fifty-year-old grandmother from Tennessee, received a public apology from the Fargo Police Department. She had spent 108 days in jail after an AI facial recognition system identified her as a bank fraud suspect in a state she had never visited. US Marshals arrived at her door with guns drawn while she was babysitting her grandchildren. The algorithm said it was her. Nobody called to check. By the time charges were dropped on Christmas Eve, she had lost her home, her car, and her dog.

There is a moment in the life of a system when its decisions stop appearing as decisions. They begin to appear as facts. That moment is where we are now with surveillance.

The Fargo case is the twelfth documented wrongful arrest in the United States to result from a facial recognition misidentification. Of the previous eleven, the overwhelming majority of victims were Black. Lipps is white. That shift is not incidental. It is structural. It is the argument this essay makes.

Simultaneously, on 22 March 2026, Norfolk Police scanned over 50,000 faces in Norwich city centre as part of the UK's national expansion of live facial recognition, scaling from 10 to 50 deployment vans across England and Wales. Essex Police remains under ICO audit after scanning 2.5 million faces using confidence thresholds borrowed from an entirely different algorithm from the one it actually deployed. In Sweden, a government proposition submitted to the Riksdag on 3 March 2026 seeks to make Sweden one of the first EU member states to explicitly legislate for real-time police facial recognition, directly contesting the spirit of the EU AI Act, which reaches full enforcement in August 2026. Three jurisdictions. One simultaneous surge. The question none of the governance frameworks is asking: what is the system working from, and has that history been examined before the system was authorised to act?

Across the United Kingdom, facial recognition systems and predictive policing tools are being expanded, tested, and normalised. They are introduced as enhancements to safety, as efficiencies in detection, as improvements in accuracy. They are presented not as interpretations of reality, but as reflections of it. And because it is a machine, it is assumed to be neutral.

This is the mythology.

The machine does not see the world. It sees data about the world, and that data is not neutral. It is historical. It carries within it the accumulated record of how societies have chosen to classify, record, and act upon different groups of people. It carries patterns of attention, patterns of suspicion, patterns of enforcement. It carries the residue of every decision that determined who was watched, who was stopped, who was searched, and who was recorded.

When such data is used to train a system, the system does not transcend those patterns. It stabilises them. What appears as recognition is often repetition. What appears as detection is often confirmation.

The system does not invent bias. It inherits and operationalises it, then translates that inheritance into the language of metrics. Accuracy rates. Confidence scores. Match probabilities. Harm is rendered as performance, and once harm is expressed as a metric it becomes optimisable rather than questionable. The question shifts from whether the system should be used at all to how it can be made more accurate. In that shift, the original distortion disappears from view.

The Amnesty International UK report Automated Racism, published in 2025, found that at least 33 UK police forces are using predictive policing systems built on data that disproportionately targets Black and racialised communities. In the West Midlands, Black or Black British people were stopped and searched at a rate of 10.3 per thousand people, compared with 2.3 per thousand for white people. The system predicts crime where crime has been recorded. It does not ask why it was recorded there. The feedback loop is the product, not the failure.
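To make that loop concrete, here is a minimal sketch in Python. It is not a reconstruction of any force's actual system; the two areas, their identical underlying offence rates, and the proportional patrol-allocation rule are all assumptions invented for illustration. What it shows is the structural point: a model that allocates attention according to recorded incidents, in a world where incidents are only recorded where attention already is, reproduces its own history.

```python
import random

random.seed(0)

# Toy illustration only: two areas with IDENTICAL underlying offence rates.
TRUE_OFFENCE_RATE = {"area_a": 0.05, "area_b": 0.05}  # chance of an incident per patrol
recorded = {"area_a": 20, "area_b": 5}   # historical records, skewed by past patrol patterns
PATROLS_PER_ROUND = 100

for round_num in range(10):
    total = sum(recorded.values())
    # "Predictive" allocation: patrols assigned in proportion to recorded incidents.
    patrols = {area: round(PATROLS_PER_ROUND * count / total)
               for area, count in recorded.items()}
    # Recording only happens where patrols are: equal true rates, unequal observation.
    for area, n_patrols in patrols.items():
        new_records = sum(random.random() < TRUE_OFFENCE_RATE[area]
                          for _ in range(n_patrols))
        recorded[area] += new_records
    print(f"round {round_num + 1}: patrols={patrols}, recorded={recorded}")
```

Run it and the initial skew never corrects itself: both areas offend at the same rate, yet the area with more historical records keeps receiving roughly four times the patrols and keeps generating roughly four times the records. A greedier allocation rule would widen that gap rather than merely preserving it.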

Within corrective history, this logic is not unfamiliar. Systems of classification have always presented themselves as neutral. The categories of race, of criminality, of civilisation were not introduced as arguments. They were introduced as descriptions, framed as observations of reality rather than constructions of it. And once accepted as descriptions, they became infrastructure.

The same logic now reappears in technological form.

Facial recognition systems are trained on datasets that reflect unequal visibility. Some faces are overrepresented. Others are underrepresented. Some are captured in controlled conditions. Others in surveillance conditions. The system learns to distinguish based on what it has seen, but what it has seen is already structured by power. It is not simply that some groups are misidentified more often. It is that the system has learned, through the data it has been given, who matters to recognise correctly and who does not.
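The same point can be made statistically with a toy matcher, sketched below in Python. Nothing in it corresponds to a real facial recognition pipeline; the embedding size, noise levels, and group sizes are invented for illustration, and a production system would learn its representation from a training corpus rather than averaging enrolment captures. The shape of the result is what matters: the group enrolled from fewer, noisier captures is misidentified more often by exactly the same matching rule.

```python
import random
import statistics

random.seed(7)

DIM = 32            # toy embedding size
N_IDENTITIES = 20   # identities per group
TRIALS = 300        # probe attempts per group

def capture(true_vec, noise):
    """One noisy 'capture' of a face embedding."""
    return [x + random.gauss(0, noise) for x in true_vec]

def enrol_group(n_captures, capture_noise):
    """Enrol a group: each identity's template is the mean of its captures.
    Fewer, noisier captures produce a weaker template."""
    identities, templates = [], []
    for _ in range(N_IDENTITIES):
        true_vec = [random.gauss(0, 1) for _ in range(DIM)]
        samples = [capture(true_vec, capture_noise) for _ in range(n_captures)]
        templates.append([statistics.mean(vals) for vals in zip(*samples)])
        identities.append(true_vec)
    return identities, templates

def misidentification_rate(identities, templates, capture_noise):
    """How often a fresh probe of a known identity matches the wrong template."""
    errors = 0
    for _ in range(TRIALS):
        idx = random.randrange(N_IDENTITIES)
        probe = capture(identities[idx], capture_noise)
        dists = [sum((p - t) ** 2 for p, t in zip(probe, tmpl)) for tmpl in templates]
        if dists.index(min(dists)) != idx:
            errors += 1
    return errors / TRIALS

# Invented contrast: one group enrolled from 20 controlled captures, the other
# from 3 captures taken under noisier, surveillance-like conditions.
ids_a, tmpl_a = enrol_group(n_captures=20, capture_noise=0.6)
ids_b, tmpl_b = enrol_group(n_captures=3, capture_noise=1.0)
print(f"well-represented group:  {misidentification_rate(ids_a, tmpl_a, 0.6):.1%} misidentified")
print(f"under-represented group: {misidentification_rate(ids_b, tmpl_b, 1.0):.1%} misidentified")
```

The matching rule is identical for both groups; the disparity in the output comes entirely from what the system was given to represent each face with, which is the essay's point.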

Human rights organisations have noted that the surveillance technologies now being rolled out across democratic states are frequently developed and field-tested in conflict zones and occupied territories before being normalised into civilian policing contexts. The populations tested on first are rarely the populations who get to consent to the test. The Angela Lipps case makes the structural logic visible in a different register: the harm that was first calibrated against Black bodies has now generalised outward. That is not an accident of the technology. It is the Dorian Grayisation of enforcement in live operation.

Outwardly, the system appears refined, objective, and efficient. It presents itself as an improvement over human judgment, free from prejudice, guided by data. Inwardly, it carries the accumulated record of historical bias, now encoded into models, now operating at scale, now shielded by the authority of computation. Each deployment transfers another layer onto the canvas. Each metric conceals the structure. And because the system is seen as neutral, its outputs are treated as evidence rather than interpretation, granted a form of epistemic authority that exceeds that of the humans who built it.

This is how distortion becomes self-legitimating.

The danger is not simply that such systems will produce unequal outcomes. It is that they will produce those outcomes in a way that appears justified, measurable, and correct. Resistance becomes irrational. Critique becomes anecdotal. Structural harm becomes statistically defensible.

The machine is not neutral. It is the archive, reanimated. And when the archive has not been corrected, what the machine produces is not truth, but continuity.

The practical demand, then, is not to halt deployment entirely. It is to insist that the archive be interrogated before it is operationalised at scale, and that where deployment has already proceeded, mechanisms for identifying, freezing, and correcting contaminated data be hard-coded into governance frameworks rather than treated as optional audit activity. The ICO's rolling audit programme is a beginning. The EU AI Act's August 2026 enforcement date creates a compliance threshold with real penalties. The Swedish Riksdag debate will test whether EU member states can legislate around the Act's spirit while remaining within its letter. But regulation built around risk levels and accuracy thresholds does not ask the prior question. It governs the container. The contents remain outside the frame: what the system learned, from whose history, structured by whose enforcement patterns.

This applies with equal force to every organisation deploying facial recognition and biometric systems beyond law enforcement. Retailers scanning shoppers, financial institutions verifying identity, employers screening candidates, insurers assessing risk. For all of them, the EU AI Act's high-risk classification, its August 2026 enforcement deadline, and the emerging civil rights litigation landscape around AI wrongful identification create identical governance and liability obligations. Biometric risk management is no longer a policing question. It is a board-level question with a compliance deadline.

The question, then, is not whether these systems work. The question is what they are working from, and whether we are prepared to challenge that before we allow them to decide who is seen, who is recognised, and who is acted upon. Because once the system is in place, what it sees will increasingly become what is believed to be there.

In practical terms, this sits at the centre of current debates around facial recognition bias, predictive policing racial disparity, AI wrongful arrest liability, algorithmic accountability, responsible AI deployment, EU AI Act compliance, and biometric data governance. It is also precisely where the UK's national facial recognition rollout, the ICO's ongoing audits, the Angela Lipps wrongful arrest case, the Swedish Riksdag bill, and the EU AI Act's August 2026 enforcement deadline all converge: each asks the right questions about accuracy and risk, but not yet the prior question about the archive those systems inherited. For organisations seeking an AI governance framework that addresses algorithmic accountability at its structural root rather than its technical surface, that prior question is where responsible AI deployment actually begins.

Chinenye Egbuna Ikwuemesi

Chinenye Egbuna Ikwuemesi is a writer, author and systems thinker examining power, infrastructure and the myths that legitimise harm, with a focus on Africa as the first quarry for logics later applied to everyone.
London