Rachel Coldicutt
5 min read · Jun 1, 2023


On Understanding Power and Technology

I went to a Q&A by the “godfather of AI” Geoff Hinton the other night, and was invited to a (mostly) academic dinner afterwards. In the slightly awkward period at the beginning when we were milling around (also known as “pre-dinner drinks”) I started chatting to a very eminent computer scientist who had retired some years ago, having mostly – I gather – worked in academic labs.

By way of making conversation, the Very Eminent Computer Scientist asked me what I “do”. This is an oddly complicated question at the best of times, because I’m a chronic overthinker, don’t have a convenient Richard Scarry job title, and do a few different things. I’m also an anomaly in that kind of setting, as I’ve been working in technology for almost 30 years without ever doing the sorts of things that generally equate to status: I’m not an academic, I’ve never worked at Google, and I’m a very long way from being fantastically rich. However, when people make polite conversation before dinner, they don’t want a whole spiel about how, actually, you work in an emerging field in which job titles are not yet fully formed and your practice is angled towards realising an inclusive feminist vision of digital technologies because, well, life is short and people are hungry. So instead I said, “I help people understand how technologies work in the world” and dropped the vague “tech ethics” catch-all, and he replied, “How interesting, and, may I ask, what qualifies you to do that?”

Being asked by a professor “what qualifies you to do that?” while standing in the Senior Common Room of a Cambridge college is quite daunting, so I emitted a vaguely incoherent freefall of word association – the kind you kick yourself about while replaying a slew of potential pithy aperçus. I explained that I’d spent twenty years making and commissioning digital products and services, some of them used by millions of people, and so my practice was based on observing what happens to a technology when it goes into the world: how it’s adapted and changed, and how every technology is really unfinished until it’s used by people. Mercifully, at that moment, we were ushered into dinner, but “what qualifies you to do that” stuck with me, and I wished I’d had a better answer.

The question of “what qualifies you” to understand a technology is particularly relevant at the moment, as we enter the nth week of Sam Altman’s AI Hype Roadshow, a cavalcade of open letters and AI doomspeak from World-Leading Authorities, in which the term “AI” has been a compelling vehicle for a wide range of as-yet imaginary concepts.

In this instance, the ability to understand a technology is neither here nor there, because the point has not been to discuss any of the relevant technologies. Instead, the project of Altman and his merry band of doomsayers appears to be to capture power and create obfuscation by making new myths and legends. If there has been a teachable moment, then the lesson has not been one about the potential of technologies but about the importance of media literacy.

And this is by no means a new move; it just happens – this time – to have been astonishingly effective. For several decades, tech companies have been aware that political influence is as important as technological innovation in shaping future market opportunities: from tactical advertising to political lobbying to creating well-paid public-policy jobs that have improved the bank balances of many former politicians and political advisers, getting in first with a compelling political story has played a critical role in creating, expanding, and maintaining their incredibly lucrative markets.

The current “existential threat” framing is effective because it fits on a rolling news ticker, diverts attention from the harms being created right now by data-driven and automated technologies, and confers huge and unknowable potential power on those involved in creating those technologies. If these technologies are unworldly, godlike, and unknowable, then the people who created them must be more than gods, their quasi-divinity transporting them into state rooms and onto newspaper front pages without needing to offer so much as a single piece of compelling evidence for their astonishing claims. This grandiosity makes the hubris of the first page of Stewart Brand’s Whole Earth Catalog seem rather tame, and it assumes that no one will pull back the curtain and expose it as a market-expansion strategy rather than a moment of redemption. No one will ask what the words really mean, because they don’t want to look like they don’t really understand.

And yet, really, it’s just a narrative trick: the hidden object is not a technology, but a bid for power. This is a plot twist familiar from Greek myths, cautionary tales and superhero stories, and it’s extremely compelling for journalists because most technology news is boring as hell. Altman’s current line is roughly, “please regulate me now because I’m not responsible for how powerful I’m going to turn out to be – and, oh, let’s just skip over all the current copyright abuses and potentially lethal misinformation because that’s obvs small fry compared to when I accidentally abolish humanity”. If it reminds me of anything, it’s the cartoon villain Dr Heinz Doofenshmirtz from Phineas and Ferb, who makes regular outlandish claims before trying, and failing, to take control of the Tri-State Area. The difference is, of course, that Phineas and Ferb always frustrate his plan.

My point is not so much that we need Phineas and Ferb to come and sort this all out, but that we need to stop normalising credulity when people with power and money and fancy titles say extraordinary things. When I went to Hinton’s Q&A in Cambridge last week, he spoke with ease and expertise about neural nets, but admitted he knows little about politics or regulation or people beyond computer labs. These last points garnered several laughs from the audience, but they weren’t really funny; they spoke to a yawning gap in the way that technology is understood, spoken about, and covered in the media. Computer science is a complex discipline, and those who excel at it are rightly lauded; but understanding and critiquing power, and holding it to account, is a complex discipline too. Understanding technologies also requires understanding power; it needs media literacy as well as technical literacy, and incisive questioning as well as shock and awe.

If there is an existential threat posed by OpenAI and other technology companies, it is the threat of a few individuals shaping markets and societies for their own benefit. Elite corporate capture is the real existential risk, but it looks much less exciting in a headline.


Rachel Coldicutt

Exploring careful innovation, community tech and networked care. Day job: @carefultrouble.