Technological Neutrality and the Myth of Impartial Systems
In early 2025, the European AI Act's compliance discussions surfaced a case that had never been properly reckoned with. Amazon's internal recruitment algorithm, developed to automate the screening of job applications, had systematically downgraded CVs containing the word "women's", including references to women's colleges and women's professional organisations. The system had been trained on a decade of CVs submitted to the company, the overwhelming majority of them from men. It learned, correctly, what Amazon's historical hiring looked like. It then reproduced that history as objective assessment. Amazon quietly retired the tool in 2018 without public acknowledgement that the system had failed. The AI Act's compliance frameworks have since required organisations to confront what Amazon preferred to leave unexamined.
That quiet retirement is worth examining carefully. Not as an isolated technical failure. As a structural choice.
The Infrastructure
Algorithmic decision systems are built on three layers of assumption that neutrality narratives render invisible. The first is data selection: which historical records are used to train the system, whose experiences are treated as the default, and whose are statistically marginal or absent entirely. The second is objective setting: what outcome the system is designed to optimise, and whose definition of a good outcome that reflects. The third is consequence allocation: when the system produces errors, which populations absorb those errors and which are insulated from them.
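A toy sketch can make the first two layers concrete. Everything in it is hypothetical: the records, the scoring rule, and the vocabulary are invented for illustration and are not drawn from any deployed system.

```python
# Hypothetical sketch of the first two layers described above. The "data
# selection" layer is the historical record the scorer learns from; the
# "objective setting" layer is the decision that resemblance to past hires
# counts as a good outcome. Both are choices made before the supposedly
# neutral scoring ever runs.
from collections import Counter

# Layer 1, data selection: a hiring history that skews toward one
# demographic's typical CV language (invented examples).
historical_hires = [
    "rugby captain software engineer",
    "rugby captain software engineer",
    "chess captain software engineer",
]

# Layer 2, objective setting: "good applicant" is operationalised as
# word-level resemblance to the historical hires.
vocabulary = Counter(word for cv in historical_hires for word in cv.split())

def resemblance_score(cv: str) -> int:
    """Sum how often each word in the CV appeared among past hires."""
    return sum(vocabulary[word] for word in cv.split())

# Two otherwise identical applications: the term absent from the skewed
# history drags the second score down, and the ranking is then reported
# as an objective output.
print(resemblance_score("rugby captain software engineer"))    # 11
print(resemblance_score("netball captain software engineer"))  # 9
```

The third layer, consequence allocation, never appears in code like this at all: it is decided downstream, by whoever determines what happens to the lower-scoring applicant.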
None of these layers are neutral. All of them are designed. The myth of technological neutrality does not emerge from the systems themselves. It is produced by the institutions that deploy them, because neutrality narratives transfer accountability from human decision makers to algorithmic processes, and algorithmic processes cannot be held responsible.
The Power Structure
The COMPAS recidivism algorithm, deployed across multiple American jurisdictions to inform bail, sentencing, and parole decisions, makes the power structure visible with unusual clarity. The system generates risk scores predicting the likelihood that a defendant will reoffend. Judges use those scores in decisions that determine whether people remain imprisoned or return to their families and communities.
COMPAS was trained on historical criminal justice data, data generated by a system with an extensively documented record of racially differential policing, prosecution, and sentencing. The algorithm learned what that system's outputs looked like and reproduced them as predictive assessment. Independent analysis found that Black defendants were nearly twice as likely as white defendants to be incorrectly flagged as high risk. White defendants were more likely to be incorrectly flagged as low risk.
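The metric behind that finding is a group-wise false positive rate: among people who did not go on to reoffend, what share were nonetheless flagged as high risk. A minimal sketch of the calculation follows, with invented records and field names rather than the actual COMPAS data or methodology.

```python
# Hypothetical illustration of a group-wise false positive rate. The records
# below are invented; they are not COMPAS data.

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were flagged high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    wrongly_flagged = sum(1 for r in non_reoffenders if r["flagged_high_risk"])
    return wrongly_flagged / len(non_reoffenders)

# Each record holds the group label, the risk flag the system produced,
# and whether the person actually reoffended within the follow-up window.
records = [
    {"group": "A", "flagged_high_risk": True,  "reoffended": False},
    {"group": "A", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": False, "reoffended": False},
    {"group": "B", "flagged_high_risk": True,  "reoffended": True},
]

for g in ("A", "B"):
    print(g, false_positive_rate(records, g))  # A: 0.5, B: 0.0
```

The disparity the analysis reported is a gap of exactly this kind, scaled to thousands of defendants: headline accuracy can look similar across groups while the errors fall in opposite directions.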
When these findings were challenged, the system's developers pointed to their methodology. The process was sound. The variables were validated. The myth of neutrality closed the argument before the structural question could be asked: what does it mean to train a predictive system on the outputs of a racially structured process and then deploy that system to determine who goes free?
The Extraction Logic
What is extracted through the myth of technological neutrality is accountability itself. When an algorithm makes a consequential decision, the human institutions that commissioned, deployed, and maintained that algorithm acquire a mechanism for distributing responsibility so widely that it effectively disappears. The system flagged it. The model predicted it. The data showed it. Each formulation moves the decision one step further from any actor who can be questioned, challenged, or held responsible.
The cost of that extraction is not distributed equally. Medical diagnostic AI trained predominantly on images of lighter skin produces significantly higher error rates for darker skin, with documented consequences for the timing of cancer diagnoses in populations already carrying the heaviest burden of healthcare inequality. Hiring algorithms reproduce the demographic profiles of historical workforces. Credit scoring systems produce demographic differentials that investigators cannot explain and institutions will not name. In each case the myth of neutrality allocates the cost of algorithmic error to the communities least positioned to challenge the systems producing it.
The Myth
The Myth of Impartial Systems holds that technological decision making is categorically more objective than human decision making because it operates through data and formal rules rather than through judgment and intuition. This claim performs a specific function: it removes algorithmic decisions from the category of things that require democratic scrutiny and places them in the category of things that require only technical validation.
The myth is not entirely without foundation. Human decision making is genuinely inconsistent, genuinely susceptible to explicit bias, genuinely improved in some contexts by structured processes. The myth extracts this partial truth and stretches it into a general claim: because a system processes data through formal rules, its outputs must themselves be objective. That extension ignores the three layers of designed assumption on which every such system rests.

When the myth holds, institutions that deploy algorithmic systems are insulated from accountability for their outcomes. When the myth breaks, as it did with Amazon, with COMPAS, and with the Dutch childcare benefits system that brought down a government, the institutional response is consistently the same: the tool is retired, the methodology is reviewed, and the structural question of why neutrality was assumed, who benefited from that assumption, and who bore its costs remains unasked.
The Civilisational Pattern
The myth of impartial measurement has a long history and a consistent function. Craniometry measured skulls to produce racial hierarchies and presented its conclusions as biological fact. Intelligence testing measured cognition through instruments designed within particular cultural and linguistic frameworks and presented differential outcomes as natural variation. Eugenics programs processed genealogical and medical data through formal classificatory systems and presented their outputs as scientific determination.
Each of these systems claimed neutrality. Each processed real data through real methodologies. Each produced conclusions that encoded the assumptions, priorities, and hierarchies of the societies that designed them as objective findings. Each was eventually discredited, not because the data was fabricated but because the design assumptions were exposed as political rather than scientific choices.
Contemporary algorithmic systems are not eugenics. The comparison is structural, not moral. The relevant pattern is that systems which claim to measure objectively, while embedding the assumptions of the powerful as default parameters, have a consistent history of allocating their costs to those with the least capacity to contest their conclusions.
The Question
The Amazon case did not surface because the company acknowledged a structural failure. It surfaced because regulatory compliance frameworks created an external requirement to examine what internal governance had preferred to leave unexamined.
The question is not whether algorithmic systems can assist human decision making. They can, and in some contexts do. The question is who designs the parameters that determine what the system optimises for, whose historical experience is treated as the training data for what good outcomes look like, and through which mechanisms the communities most consistently subject to algorithmic error can challenge systems whose neutrality is their primary defence against scrutiny.
Impartiality is a design choice. So is the decision to call it neutral.