A (short) critical history of artificial intelligence from a computational fallacy to tech fascism

Just before we wrap up 2025, we take a dive into a critical history of AI. This is an extended version of a short talk I gave in Helsinki at an event called Critical AI.

Photo by Navy Medicine / Unsplash

Introduction, and the politics of technology

David Graeber spent much of his career making visible the fact that systems we usually treat as natural or inevitable are in fact made up, the products of specific political choices. In The Utopia of Rules, he showed how bureaucracy creates a kind of structural violence through its claims to rationality. In Debt, he revealed how supposedly neutral economic relationships encode power relations going back to ancient times. In Bullshit Jobs, he documented how late capitalism generates meaningless work while destroying forms of labor that actually contribute to human welfare.

The current wave of artificial intelligence invites the same kind of analysis. These 'inevitable' systems require massive computational resources, drawing electricity at scales that strain regional power grids. Their infrastructure demands concentrate development in the hands of a few massive corporations with access to capital and data centers. This concentration matters because it shapes what gets built and for whom. The technology emerges from a specific set of material conditions: venture capital seeking returns, corporations looking to reduce labor costs, and states interested in surveillance capabilities. The outputs reflect these origins. Content moderation systems trained on Western norms get deployed globally. Chatbots designed to sound helpful mask the displacement of customer service workers. Image generators trained on copyrighted work without permission or compensation create new vectors for appropriating creative labor.

Of course, for the wider public, the real labor relations underlying these systems remain deliberately obscured. Thousands of workers in Kenya, the Philippines, and Venezuela spend their days labeling images, rating text outputs, and filtering traumatic content for a few dollars per hour. This annotation work makes the systems function, yet the workers receive no credit and no benefits, while facing the constant threat of termination. Their labor gets repackaged as machine intelligence, while the systems themselves create new forms of bullshit jobs. Professionals shift to becoming "prompt engineers", or they are tasked with writing corporate AI policies, or with managing AI ethics boards that produce documents nobody will ever read... Companies then need to hire workers to fix the errors other workers made while using AI to speed up their work and meet quarterly company goals.

The knowledge these systems encode comes from mass appropriation. Billions of web pages, digitized books, scientific papers, social media posts, and blogs are all scraped without consent. Medical diagnosticians find their knowledge compressed into decision support tools that hospitals use to justify reduced staffing. Translators compete with systems trained on their previous work. Programmers debug code generated from patterns extracted from open source repositories they contributed to. Content creators see their freelance jobs reduced to zero because 'ChatGPT can do the job equally well'. The systems don't replace expertise so much as they enable its cheaper, and worse, substitution.

On the discourse side, the rhetoric around artificial intelligence consistently invokes a feeling of inevitability. Companies describe "racing" toward AI as though acceleration is something we absolutely have to do, no matter the cost. Investors speak of "missing out" on transformation as though this specific form of transformation were a predetermined success (remember the same narrative around web3?). This framing, of course, serves political purposes. Inevitability and FOMO position opposition as futile and paint alternative paths as naive.

Yet the technology itself remains profoundly contingent. These systems require specific legal regimes around intellectual property and data collection. They depend on energy infrastructure and tolerance for their massive environmental costs. They need continued access to training data and human labor. Each of these represents a political choice. Copyright law could require compensation for training data, but doesn't. Energy policy could price carbon to reflect real costs, but does not. Labor law could extend protections to annotation workers, but their work remains invisible. The possibilities exist for creating responsible AI, but they threaten the extractive business model.

We might ask who benefits from treating human knowledge as a resource to mine, or from creating systems that generate endless content, or from deploying tools that enable surveillance at unprecedented scales. But the answers point toward a familiar concentration of wealth and power. The technology amplifies existing dynamics rather than disrupting them. It makes certain forms of extraction more efficient while creating new categories of precarious work to support that extraction. This suggests the problem isn't primarily technical. Better alignment methods, more efficient training, improved safety measures and more policy: these solutions address symptoms. The underlying question revolves around what sort of social relationships these systems instantiate. Do they distribute knowledge or concentrate it? Do they reduce necessary labor or eliminate useful work? Do they serve human (and planetary) needs or generate returns for shareholders and investors? Do they help the creators of knowledge and art to further their work, or appropriate their work for the benefit of lazy people who didn't put the work in? Do they actually benefit anyone apart from the ones who create these systems? These remain political questions that technical solutions cannot resolve.

What follows maps how computational thinking became an ideology, from the fusion of Silicon Valley with surveillance capitalism, to the algorithmic systems that mediate nearly every aspect of social life. This history draws on the work of experts in the field and contemporary researchers who have documented the extractive dimensions of AI. Their insights confirm what Graeber taught us: when power presents itself as a technical necessity, we must look closely at who benefits from treating political choices as engineering problems.


Seeds of the computational fallacy (1960s-1990s)

The ELIZA effect (1964)

Joseph Weizenbaum created ELIZA in the mid-1960s as a simple program that could mimic a Rogerian psychotherapist through pattern matching and response generation. The program was rudimentary. It simply reformulated users' statements into questions, creating an illusion of understanding where none existed. What shocked Weizenbaum was not the program's technical achievement but the human response to it. Intelligent people, including his own secretary, fell for the illusion. They believed they were talking to something that understood them.
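
To make concrete how little machinery was involved, here is a minimal sketch of ELIZA-style pattern matching in Python. This is not Weizenbaum's original script: the handful of rules, templates, and pronoun reflections below are illustrative assumptions, just enough to show how a statement gets reflected back as a question with no understanding anywhere in the loop.

```python
import re
import random

# A tiny, illustrative subset of ELIZA-style rules: each pattern captures part of
# the user's statement so it can be reflected back as a question.
RULES = [
    (re.compile(r"i feel (.*)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I), ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"my (.*)", re.I), ["Tell me more about your {0}."]),
]
DEFAULTS = ["Please go on.", "What does that suggest to you?"]

# Swap first- and second-person words so the reflected fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())


def respond(statement: str) -> str:
    # First matching rule wins; otherwise fall back to a canned non-committal reply.
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(reflect(match.group(1)).rstrip(".!?"))
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    # e.g. "Why do you feel anxious about your work?"
    print(respond("I feel anxious about my work"))
```

A few dozen such rules are enough to sustain the illusion Weizenbaum described: the program never models the user, it only rearranges their own words.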

By 1976, when Weizenbaum published Computer Power and Human Reason, he had recognized a deeper problem. The conversational interface (the chat) became a site of anthropomorphization. Humans projected consciousness onto systems that performed mechanical operations. This tendency revealed something about our hyperindividualistic society: the need for validation was so strong that people would accept it from a simple pattern matcher.

Weizenbaum's core insight was that complex social problems were being reframed as computational puzzles waiting to be solved. This was exactly what every techbro wants: a world where political questions could be answered through better algorithms. The ELIZA experiment showed that this computational fallacy would work because humans were willing participants in the deception, ready to fall for the disguise of care.

The birth of the commercial internet

The 1980s marked the transformation of ARPANET from a military research network into what would become the commercial internet. The personal computer revolution promised to put computing power in everyone's hands. What it actually did was create millions of data entry points for future corporate capture. Apple's 1984 Super Bowl ad claimed they were fighting Big Brother. They were actually building the infrastructure for a thousand little brothers.

The early tech bubble of the 1980s showed the blueprint for speculation-driven innovation that would define Silicon Valley forever. This was the prototype for the boom-and-bust cycles that would repeat in later years.

The Osborne effect, named after the computer company that collapsed after prematurely announcing a new product, demonstrated how speculative capital would drive technological development. Companies would rise and fall based on promises rather than products. This pattern established Silicon Valley's business model, which was never about solving real problems. It was about creating speculative value through technological mystification.

The neoliberal fusion (1990-2000)

Silicon Valley meets the end of history

The 1990s marked the fusion of neoliberal economics with computational thinking. As the Soviet Union collapsed and the Berlin Wall became rubble, Silicon Valley emerged as the new frontier of American capitalism. Francis Fukuyama proclaimed the end of history. Capitalism went global. But this was the kind of capitalism where everything had to be turned into a market, including human relationships.

The internet became a platform for data extraction and market expansion. This decade established the foundational myth that technological progress equals human progress. This equation represented a fundamental category error, because sure, technological progress might expand capabilities, but human progress requires improvements in well-being, justice, wisdom, and meaningful growth.

The 1990s assumed that because we could connect the world digitally, we were automatically creating a better world for humans. Technology without corresponding moral and political development ended up amplifying existing inequalities and creating new forms of harm. Technological progress became a tool in service of market expansion. Innovation alone never guaranteed social progress.

The Dot-Com bubble

The late 1990s dot-com bubble revealed that companies with no viable products could attract billions of dollars in investment simply by claiming to "revolutionize" some aspect of human relationships. This speculative logic would later become central to AI development as well.

This pattern reflected a deeper ideological shift where the techno-bubble became synonymous with rational thinking itself. The tech industry did not just sell products anymore; it sold the idea that their particular way of modeling the world through algorithms and data structures represented the pinnacle of human reasoning. Complex social, political, cultural, and economic problems were reframed as technical challenges waiting for the right technical solution to come from a visionary founder.

This digital chauvinism dismissed centuries of humanistic knowledge as outdated, positioning engineers as the new philosopher-kings who could optimize everything from transportation and hospitality to love, eating, and democracy itself. If we assume that computation equals intelligence, and intelligence equals progress, then any technological innovation can be framed as revolutionary, regardless of its actual social utility. This mindset created a permission structure for endless speculation disguised as inevitable technological advancement.

The surveillance decade (2001-2010)

The marriage of Silicon Valley with the security state

The period following September 11, 2001 marked the marriage of Silicon Valley and the US security state, and mass surveillance was suddenly no longer creepy! It was patriotic!! Google, Amazon, and Facebook did not just emerge as tech companies; they emerged as infrastructures of data collection. The "war on terror" provided the perfect justification for gathering data at unprecedented scale. Social networking, in turn, provided the most sophisticated surveillance apparatus in human history, and we, as users, collectively agreed (whether knowingly, through our actions, or by not reading the fine print) that "connecting the world" and "monitoring the world" meant the same thing.

This period marks the birth of surveillance capitalism, a new mutation of capitalism that extracts value from human experience by converting daily lives into data for computational analysis and behavioral modification. Tech platforms discovered they could offer free services because users were not customers. They were the product being sold to advertisers who wanted to influence behavior. The platforms did far more than just collect data. They created unprecedented psychological profiles that could predict and influence future actions better than people could predict their own behavior.

This period saw "digital transformation" solidify as an ideology that operates through a particular form of solutionism. Every human problem can supposedly be optimized away through better data processing and algorithmic efficiency. The appeal, of course, is seductive. Why endure the messy, slow, conflict-prone process of democratic governance when algorithms can simply calculate the optimal outcome for all of us?

The financial collapse and platform capitalism

The 2008 financial crisis did not just collapse housing markets. It created the perfect conditions for another mutation of capitalism: platform capitalism. As traditional employment became precarious, Silicon Valley offered a solution: turn everyone into entrepreneurs. The "sharing economy" (with companies like Uber, Airbnb, and TaskRabbit) promised freedom and flexibility.

The result was the systematic elimination of worker protections, benefits, and job security, all mediated through apps that made exploitation feel like a game. The financial crisis taught us that traditional capitalism was unstable, so Silicon Valley repackaged it with a better user experience.

The financial collapse left millions underwater on mortgages while destroying traditional employment, creating a population desperate enough to accept "gig work" as the new form of work. Platforms systematically dismantled the social safety nets that had protected workers, replacing collective bargaining and employer obligations with individual "flexibility" sprinkled with algorithmic management. They created a massive apparatus for converting social relations into extractable value, while transferring all risks and costs to the individuals.

This period also saw the rise of "Big Data" as the new gold rush. Every click, search, navigation, and interaction became raw material for algorithmic systems. Big data captures the intimate details of who we are at our most vulnerable moments, like our fears when we search for medical symptoms at 3 AM. This behavioral surplus gets fed into systems designed to modify future behavior for commercial gain. Platforms discovered they could use our own psychological patterns to create addiction-like engagement loops, exploiting cognitive biases and gradually shifting preferences and decisions in directions that maximize their revenue.

The deep learning era (2011-2020)

The invisible labor of machine learning

Behind every "intelligent" system are thousands of invisible workers, data labelers and content moderators, typically recruited from countries of the Global South. Machine learning does not necessarily eliminate human labor; rather, it renders it invisible.

This invisibility is intentional because AI systems depend on vast supply chains of human labor that span the globe, from miners extracting raw materials for data center servers, to clickworkers training image recognition systems for pennies per task. The "artificial" in artificial intelligence is a misnomer because these systems concentrate the cognitive labor of thousands of humans into systems owned by a handful of corporations.

Content moderators spend their days viewing the most traumatic material imaginable while data annotation workers teach machines to "see" by spending ten-hour shifts drawing bounding boxes around objects in images. Meanwhile, gig workers find themselves managed by algorithmic systems that monitor their every movement, optimize their routes, and lay them off through automated processes.

The myth of "autonomous" systems obscures the dependence on human labor, creating a new form of colonialism where the Global South provides both the raw materials and the cheap labor that power the Global North's "intelligent" technologies.

The deep learning hype cycle

The "deep learning revolution" arrived on the back of the same GPUs that were mining cryptocurrencies, all while promising that artificial general intelligence was just around the corner. The deep learning hype deliberately conflates statistical pattern matching with human intelligence. Suddenly, neural networks started "thinking", statistical correlations became "understanding", and brute-force computation became "learning".

Human intelligence became just another technological challenge to overcome. The promise of AI became a secular rapture narrative, complete with predictions of transcendence and warnings of existential risk, all designed to distract from the mundane reality of what these systems actually do.

The "intelligence" in artificial intelligence was always a marketing term, because what we got was not thinking machines, but increasingly sophisticated tools for behavioral manipulation and labor displacement, wrapped in the mythology of digital consciousness to make their social impact seem natural and inevitable rather than politically motivated.

Algorithms-as-governance

By 2016, algorithms had expanded from predicting behavior to governing it, via credit scores, predictive policing, social media feeds, hiring algorithms, and so on. Capitalism no longer needed to explicitly coerce compliance, since it could simply embed its power within algorithmic systems that appear neutral while systematically reproducing existing hierarchies.

When an algorithm denies someone a loan or flags them for police attention, it appears as a mathematical inevitability. The genius of algorithms-as-governance is that it makes power disappear into the black box of digital decision-making. Citizens cannot vote against an algorithm... They cannot petition a neural network. Democratic accountability gets replaced by technocracy. This post-politics created a form of power that operates beyond the reach of democratic institutions.

AI ethics as corporate PR

As the negative consequences of algorithmic systems became undeniable, the tech industry responded with "AI ethics". Corporate AI ethics functioned as an elaborate theater of accountability, without the accountability part. The same companies deploying facial recognition systems for authoritarian governments and building weapons for military contractors suddenly discovered a passion for "responsible AI development". But these ethics initiatives systematically avoided the most fundamental questions: should these systems exist at all? Who gets to decide what problems AI should solve?

Within the corporate sphere, AI ethics got reduced to technical problems such as "bias mitigation" and "algorithmic fairness". Meanwhile, researchers who dared to critically examine or document the harms these systems cause faced retaliation, dismissal, and industry blacklisting. The stories of Timnit Gebru and Margaret Mitchell are notable examples.

The ethics washing provided moral legitimacy for continued expansion while creating the illusion that someone was watching over these systems. Corporate ethics boards became a form of regulatory capture, ensuring that conversations about AI governance happened on terms favorable to the companies building these systems.

GPT and the commodification of language

The development of large language models represents the latest frontier of extraction: commodifying human language itself. Every book, article, essay, and conversation became training data for private systems. Companies started charging us to access compressed, garbled versions of our own collective knowledge. We witnessed the enclosure of human expression, repackaged as "democratizing AI".

Companies like OpenAI claimed that training on copyrighted material constitutes "fair use" while simultaneously arguing that their models are so transformative they deserve their own intellectual property protection. Meanwhile, these systems systematically amplify the biases present in their training data, generating text that reproduces harmful stereotypes while sounding authoritative and deceptively neutral.

When pressed about these issues, companies respond with the classic deflection that the technology itself is neutral, and any problems are "alignment" issues that can be solved with better fine-tuning in the next versions.

The present crisis (2021-2025)

The climate cost

Training a single large language model produces ridiculous amounts of carbon. Data centers now consume roughly 4 percent of electricity in the United States and a rising share worldwide. The AI revolution is an environmental catastrophe, but it gets marketed as clean, efficient, and inevitable.

This is late-stage capitalism meeting the planetary crisis, the condition of our current moment that we vaguely call polycrisis (or polycollapse, though that is another long post for the future). As the window for meaningful climate action continues to narrow, tech companies are backing away from climate pledges and doubling down on the most energy-intensive technologies imaginable. They do that while selling these technologies as solutions to the very crisis they are accelerating. The promise is that as AI becomes more advanced, we will find the solution to the climate problem later on. Do not fret, folx.

The environmental cost is another externalization by the same system that commodifies labor, intelligence, and everything in between. Techno-optimism obscures the basic fact that we cannot compute our way out of a crisis we created through overconsumption and endless growth. The promise that more efficient algorithms will somehow offset exponentially increasing computational demand is magical thinking, created so we can keep avoiding, for a little while longer, the systemic changes actually required within our lifetime (or at least our children's).

Digital surveillance as the norm

COVID-19 normalized digital surveillance. With contact tracing apps and health passports, carrying digital identification suddenly became a civic duty. Remote work surveillance systems exploded (keystroke monitoring, webcam surveillance, software tracking).

This moment revealed how capitalism works in a state of emergency (or 'shock doctrine', as Naomi Klein might call it) through surveillance-as-a-service: the idea that complex social problems can be solved through even more monitoring, tracking, and data collection, and that emergencies are precisely the moments when these surveillance solutions can be forced through.

The exhaustion that defined this period (apart from the physical toll) was the psychic toll of living under constant surveillance while being told it was for our own good. Workers found themselves monitored while being grateful to work from home. Students had to comply with proctoring software that tracked their eye movements during exams. It doesn't matter whether this software worked well, or whether it achieved its stated goals; it succeeded quite brilliantly at normalizing a level of digital control that would have been unthinkable just a few years before. The state of emergency became a state of technological dependency. The temporary became the de facto state.

Cory Doctorow's concept of "enshittification" allows us to understand how every digital platform follows the same decay pattern. Stage one: platforms are good to users to lock them in. Think early Google with minimal ads and great search results, while secretly spending tens of billions to become the default search everywhere. Stage two: abuse users to attract business customers. Google fills results pages with ads marked in tiny gray text, uses surveillance for targeting, all while business customers become dependent on the platform for revenue. Stage three: platforms claw back value from everyone, leaving just enough to keep users and businesses trapped.

The automation of creativity

By training on copyrighted works without compensation, tech companies started laundering intellectual property theft through algorithmic complexity. They trained AI on millions of copyrighted works (often using torrents as well), while claiming this was "fair use" because the algorithm "learned" from them. Tech companies appropriate the cultural commons without permission, then sell it back to us with a price tag. But what we are witnessing is a systematic devaluation of artistic labor and the destruction of what Walter Benjamin called the "aura" of original work.

When Hayao Miyazaki's decades of painstaking craft can be instantly replicated and mass-produced by typing a prompt, we are destroying the economic and cultural foundations that make artistic careers possible. The problem is not that machines may be able to mimic human creativity (they can't), but that we have been conditioned to celebrate the process by which our own creative capacity gets automated away. This leaves us dependent on systems that compress and regurgitate our collective cultural heritage while concentrating the profits in the hands of a few tech moguls.

Deskilling of cognitive labor

Large language models are replacing the process of learning itself; we are witnessing the final phase of deskilling. Junior engineers, forced to use AI-assisted coding tools, may no longer be able to debug code without AI assistance. Writers find their critical thinking skills deteriorating from constant reliance on automation tools. Designers forget how to craft compelling narratives without relying on algorithmic prompts.

Workers describe feeling like "prompt generators", reduced to managing the output of systems they do not understand and cannot control. Deskilling gets presented as empowerment: you are not losing expertise, you are becoming more "efficient". But efficiency at what cost?

Business-as-usual is creating entire cohorts of workers who can no longer perform their jobs without subscription access to proprietary AI systems, transforming professional expertise from a personal asset into a corporate dependency. The same companies that profit from this deskilling then turn around and lay off the now-dispensable workers, having extracted and commodified their knowledge.

Computational thinking in education

When students are taught that every problem can be broken down into discrete, optimizable components, they are being conditioned to accept a world where human complexity gets reduced to data points and algorithmic variables. Many universities are preparing a generation that sees complex social problems as algorithmic puzzles rather than political challenges. They are producing a generation of humans optimized for compliance with machine logic.

The pedagogy of computational thinking systematically excludes forms of knowledge that cannot be quantified (wisdom, intuition, ethical reasoning, cultural understanding, and collective solidarity). The result is students who instinctively trust algorithmic solutions over human judgment, and who experience their own worth primarily through their utility to computational systems.

The new face of fascism

Figures like Balaji Srinivasan and Marc Andreessen promote hierarchical, anti-democratic "network states" and "startup countries": aligned online communities that crowdfund territory around the world and eventually seek diplomatic recognition from existing states. The strategy lies in making destruction look like optimization. When AI systems eliminate diversity programs while protecting "infrastructure", it's presented as efficiency engineering rather than ideological purging. This replaces democratic deliberation with algorithmic efficiency, transferring decision-making power from democratic institutions to private corporations.

The techno-fascists have solved fascism's old legitimacy problem by wrapping authoritarian politics in the language of technological inevitability. Most people accept algorithmic decisions as neutral and objective, even when those algorithms are explicitly programmed to advance reactionary political agendas.


Reclaiming human reason

The last sixty years have shown us what happens when we surrender human capacities to computational systems designed to serve capital rather than humanity. It is not too late to choose differently, but only if we recognize that technology is politics by other means.

The capacity to make meaning from experience, to find connections between disparate events, to interpret the poetic dimensions of existence, to construct narratives that give life significance, represents a fundamentally different kind of intelligence that pattern matching in large datasets will not be able to replicate.

According to Mark Fisher, capitalist realism operates by making alternatives appear impossible. When we stop being able to conceive of different futures, the only imaginable path becomes one where human capacities get progressively automated away.

Reclaiming human reason means insisting that some forms of understanding cannot and most importantly should not be automated. The messiness of democratic deliberation is preferable to algorithmic governance, and in many cases, friction is good. The goal of technology should be enhancing rather than replacing human agency and creativity.

Human intelligence is not pattern matching.

Language is not a statistical correlation.

Creativity is not recombination.

These are reductive claims serving specific political purposes: making human capacities appear automatable and therefore replaceable. We can refuse systems built on extraction. We can demand institutions hire humans rather than deploy chatbots. We can insist that some forms of labor should not be automated regardless of technical capability. We can treat the messy, slow, conflictual process of human deliberation as valuable precisely because it cannot be optimized away.

What if we recognized that the computational fallacy was always a political project dressed up as technical progress?

Resistance requires not better algorithms but different values: human dignity over efficiency, democratic deliberation over algorithmic optimization, collective flourishing over individual productivity. These values cannot be automated. That is precisely why they matter.

On to 2026.


Sources & further reading