The synthetic elephant in the room

When half of the public believes your technology is the problem

Big Tech has done it again. They've created a technology so fundamentally hostile to humans that it's hard to hide the public's revulsion anymore. The Seismic Foundation's latest report reads like an autopsy of democracy itself: 10,000 people across five countries, all reaching the same damning conclusion. AI isn't the future but a present threat to everything we actually care about.

Find the report here

Overwhelmingly, […] people think AI will worsen almost everything they care about. We asked people whether they thought AI would improve or worsen a range of salient issues, ranging from the economy to politics, health and society. The pattern is clear. The trend is negative for every issue except health care and pandemic prevention. Unemployment, Misinformation, and War and Terrorism are the areas where people think AI will do the most damage.

Once the statistical fog settles, only one in three people see AI as hopeful for humanity; one in two see it as a growing problem. This is pattern recognition: people understand, with the clarity that comes from lived experience, that AI represents an attempt to automate human agency itself.

The survey reveals what should be obvious to anyone not spending their days reading hopecore LinkedIn messages: people fear AI will destroy human relationships more than it will destroy their jobs. Sixty percent worry about AI replacing human connections.

What should not come as a surprise is that women and the poor are the most afraid: exactly the people who've learned, through centuries of bitter experience, that new technologies rarely liberate the powerless. Women are 2.2 times more pessimistic about AI than men, not because they're irrational, but because they recognize something oddly familiar: another technology that will be used to exploit, surveil, and control them.

The specificity of these fears should worry anyone with moral instincts. Half of the respondents worry about AI creating sexualized deepfakes of children. Nearly half fear AI-powered scams and election manipulation. These are fears based on what's already happening.

Students, supposedly the demographic most comfortable with new technology, are terrified. Three in five fear AI will eliminate entry-level jobs. Half feel "daunted" by the future of work. The generation that grew up with smartphones can see exactly where this leads: a world where human labor becomes increasingly worthless.

But there's more. Over half the public believes AI developers are "playing god". Only a third think these companies have society's best interests at heart. The respondents see AI for what it is: automated inequality. Not technological progress, but the systematic destruction of human agency dressed up in Silicon Valley marketing speak.

The tragedy is that we're debating regulation and oversight when we should be asking a more fundamental question: What problems does AI solve that couldn't be better addressed by, say, paying teachers more, investing in public healthcare, or creating meaningful work for everyone?

The answer, of course, is that AI doesn't solve human problems, it solves capital problems.

The backlash against unregulated AI won't come from an official body like the European Union. It will come from ordinary people refusing to use systems designed to replace them, from workers organizing against algorithmic management, and from communities choosing human connection over digital convenience. The survey shows it has already begun: a rational response to an irrational system.
