Can Machines Read?

Rachel Coldicutt
Sep 2, 2023


What does AI mean for authors — and is collective action the solution?

A black keyboard at the bottom of the picture has an open book on it, with red words in labels floating above and a letter A balanced on top of them. The perspective makes the composition form a kind of triangle from the keyboard to the capital A. The AI filter makes it look messy, with a kind of cartoon style.
Teresa Berndtsson / Better Images of AI / Letter Word Text Taxonomy / CC-BY 4.0

This piece was written in June 2023. An edited version was published in the Summer 2023 edition of the Society of Authors' magazine, The Author.

The question of how humans coexist with machines has long vexed all kinds of people, including — but not only — computer scientists, science-fiction writers and management consultants. Recently that preoccupation has become a little more widespread. Thanks to the astonishingly effective PR of a few technology companies, artificial intelligence has made newspaper front pages, filled breakfast TV tickers and created enough speculation on social media to keep data centres all over the world humming at full capacity. Generative AI is the apocalyptic monster of the moment and — if the big tech spin doctors are to be believed — it won’t just eat everyone’s lunch, it will also gobble up the rest of humanity.

However, this preferred corporate future — in which technology is recognised as the superlative force of our age — is not inevitable. Automation and machine intelligence will almost certainly change the ways we live, work and communicate to some degree over the coming decades, but it's important to remember that technological change depends on people to make it happen.

We can and should shape and choose the way those changes play out, and to do that it’s important we understand them. Overall, it seems likely that the biggest issue AI creates for writers is not a crisis of creativity, but one of commercial terms. Automation is deeply unlikely to replace human imagination or sensemaking, even in the long term, but it is probable that the corporate machinations of the companies developing those technologies will continue to shift and undermine existing business models. Big tech companies become big for many reasons, and as Cory Doctorow and Rebecca Giblin’s recent book Chokepoint Capitalism outlines, one reason is their ability to capture the value of creative content.

Don’t believe the hype?

Current levels of fascination with AI are perhaps unusually high in media and political circles, but it is worth remembering that hype and new technologies have a history of going hand in hand.

Brent Goldfarb and David A. Kirsch examined 150 years of technological innovation for their 2019 book Bubbles and Crashes. When they started looking for trends, they noticed that some of the most talked-about technologies were also the ones that were the least well understood, with the least obvious market fit and the most varied range of use cases. Typically, they saw, new technologies that could be used in many different ways attracted more storytelling and speculation; these higher levels of speculation often led to higher levels of investment, which in turn resulted in the founding of more specialist companies. The recent AI spiral is the latest iteration of this hype cycle: other technologies that have recently exploded into public consciousness, before stabilising and receding, include Web3 (remember NFTs?) and the metaverse.

Perhaps one reason AI speculation is so captivating is that it has a rich history. While consumer technologies and social networks have come and gone, the potential for humans to create other kinds of intelligence has long been a compelling spur for both imaginative and scientific inquiry. Even the most rational and enlightened computer scientist cannot, it appears, always resist the urge to become a “Modern Prometheus”, and in the current financial and political environment the rewards for achieving this godlike status are considerable.

A scan of a pen and ink cartoon drawing showing an allegorical (female-presenting, wearing a white robe and red cape) figure of Justice, blindfolded, holding scales in one hand, while a male-presenting figure in a military uniform, with feathered wings protruding, lies in what looks like pain or distress on the ground.
The modern Prometheus, or: downfall of tyranny. This print presented gratis to every purchaser of a ticket or share at Martins Lottery Office 8, Cornhill

Another reason is that many of the current assumptions about technological development draw on Victorian conceptions of technological mastery, rooted in a very technocratic view of progress. As David Graeber and David Wengrow write in The Dawn of Everything:

over the course of the nineteenth century, almost everyone arguing about the overall direction of human civilization took it for granted that technological progress was the prime mover of history, and that if progress was the story of human liberation, this could only mean liberation from ‘unnecessary toil’.

The aspiration to remove effort assumes all work is “unnecessary toil” and does not allow space for sensemaking, critical understanding or the pursuit of joy beyond pure leisure. Applying the same logics to all work assumes that, as humans, we already know enough, have produced enough knowledge, and can continue to exist in the world simply by recapping and regurgitating the understanding we have amassed to date. But if we leave communication, creativity and knowledge production up to bots and algorithms and the people who make them, then we risk consigning humanity to existing in a mise en abyme, trapped within the confines of a reflection of a reflection of a reflection of the knowledge we once held.

A screenshot of a Mac desktop showing an apparently infinite recursion of windows embedded within one another.

The Imitation Game

Computing pioneer Alan Turing's 1950 paper "Computing Machinery and Intelligence" explored the question "Can machines think?". To answer it, Turing sketched out an adversarial process called "the Imitation Game", designed to test the cognitive capacities of a computer.

Now better known as the Turing Test, the game was created to see whether a human questioner could tell the difference between a human and a computer, and it has since passed into popular imagination as a shorthand for invoking sentient machines. However, Turing was describing something quite different to the conscious humanoids that we know from Star Trek or from movies like Ex Machina: the Imitation Game was not designed as a benchmark for the levels of consciousness or humanity a computer can display, but as a test of how good a computer is at pretending to be a person. And just as putting on a pantomime horse outfit doesn't turn two people into a horse, a computer pretending to be a person is not the same as a computer becoming a person.

Computer scientist Meredith Broussard explains this disconnect between the AI we have and the AI we imagine in Artificial Unintelligence. Although some techno-optimists believe its creation is still possible, Artificial General Intelligence — in which machines achieve sentience and robot butlers become our friends — is still mostly the preserve of Hollywood. The technologies we have now are grounded instead in Narrow Artificial Intelligence, which is based on Turing’s aspiration for imitation.

Narrow AI is most easily understood as a “mathematical method for prediction”, which Broussard calls “statistics on steroids”. As such, when ChatGPT writes a sentence, it is echoing sentences that have been written before.
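To make "statistics on steroids" concrete, here is a minimal sketch: a toy bigram model, over an invented miniature corpus, that counts which words have followed which and then "writes" by sampling from those counts. This is an illustration of the statistical principle only, not a description of how ChatGPT is actually built; real systems use neural networks trained on vastly more text, but the underlying move of predicting the next word from the statistics of previous text is the same.

```python
import random
from collections import defaultdict, Counter

# Toy corpus, invented purely for illustration.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog"
).split()

# For each word, count which words have followed it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """'Write' by repeatedly sampling the next word from observed counts."""
    words = [start]
    for _ in range(length):
        counts = following.get(words[-1])
        if not counts:  # no observed continuation for this word
            break
        choices, weights = zip(*counts.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug and the dog"
```

Run it a few times and it produces fluent-seeming fragments that are, quite literally, echoes of sentences that have been written before.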

Human imagination and understanding are more plural and diverse than this, and are grounded in our conscious and physical interrogation of the world around us. Neuroscientist Anil Seth supposes that human consciousness is deeply rooted in our physicality; riffing on the Cartesian idea of the beast machine, he suggests that "We are conscious selves because we too are beast machines — self-sustaining flesh-bags that care about their own persistence." This embodiment is vital to our ability to be and create in the world around us, and is one of the things that differentiates what happens when a machine parses information from what happens when a human reads.

When a person reads, they turn the signs and signifiers on a page or a screen into imaginary worlds and concepts that uniquely draw together their context, their personal experiences, their beliefs and their knowledge. When a machine "reads", it infers meaning from the words on the page and from the way those words have been used in other accessible data sets, but it does not (yet?) have a grasp of what those words mean in the world. As such, current forms of Narrow Artificial Intelligence do not make meaning; they mimic a human making meaning. This impressive but superficial linguistic facility — which leads to computer-generated content that seems convincing but is ultimately meaningless — led computational linguist Emily Bender to coin the term "stochastic parrot" to describe the limits of this mimicry.
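In the same toy spirit, the distributional trick by which a machine "reads" can be sketched in a few lines: represent each word only by the words that appear near it, so that two words count as "similar" when they keep similar company, with no reference to the world at all. The three-sentence corpus below is invented, and real embedding models are enormously larger, but the substitution of usage statistics for meaning is the same.

```python
from collections import Counter
from math import sqrt

# Invented corpus, purely for illustration.
sentences = [
    "the king ruled the old kingdom",
    "the queen ruled the old kingdom",
    "the cat slept on the warm mat",
]

def context_vector(target: str) -> Counter:
    """Represent `target` purely by the words that co-occur with it."""
    vec = Counter()
    for sentence in sentences:
        words = sentence.split()
        if target in words:
            vec.update(w for w in words if w != target)
    return vec

def cosine(a: Counter, b: Counter) -> float:
    """Standard cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

king, queen, cat = map(context_vector, ["king", "queen", "cat"])
print(cosine(king, queen))  # 1.0: identical company, so "identical meaning"
print(cosine(king, cat))    # much lower: the only shared neighbour is "the"
```

"king" and "queen" come out as identical not because anything has been understood about monarchs, but because they appear in identical sentences; the machine has parsed usage, not meaning.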

Bender is co-author, along with computer scientists Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell, of a 2021 paper that explores the difference between understanding language (which humans do) and simply processing it (currently the domain of computers). Called "On the Dangers of Stochastic Parrots", their analysis highlights the fact that humans' ability to find meaning means we might actively seek it out when reading computer-generated text. This is one reason that the outputs of Narrow AI can seem so compelling: why our eyes will hop over the extra thumbs generated by Midjourney or seek coherence in text written by bots. One of our gifts as humans is that we want things to make sense, and so we make sense of them, and infer sensemaking capabilities where none may exist.

Short-term risks

This is not to say that the current business models and methods for creating artificial intelligence do not create some immediate challenges for writers, knowledge creators and creative practitioners. In the short term, the business of writing is likely to be exposed to several stresses.

The most pressing of these is copyright. Artificial intelligence requires large data sets in order to learn, and those data sets are built on words and pictures that humans have created or had a hand in creating. This means that works generated via artificial intelligence are likely to be derivative, and — at the time of writing — the burden of proving this rests with copyright owners, who are expected to trawl the Internet looking for evidence of their work.

Amassing large data sets also makes it easier for artificial intelligence to create more formulaic texts, the kinds that might be relied upon to provide short-term, dependable sources of income. The reformulation of this market is likely to take place in a dispersed way, appearing to go slowly for a long time and then bedding in all at once. Many readers of this piece will probably already know of a creative practitioner in their network who has been displaced from a small job by an automated tool, and this practice will ripple through different sectors at different rates before rapidly establishing itself as a norm. Longer term, this will probably lead to the creation of new jobs and embed increased collaboration with automated tools into existing ones, but in the short term it is also likely to make the day-to-day hustle of making a living harder and more complex for some writers. Getting to an equitable settlement around copyright will create some useful friction, slowing down the speed of this change and, in an ideal world, also creating space for retraining and the renegotiation of contracts.

There are currently a number of legal cases and consultations in progress that will set precedents and initiate frameworks for how human input to artificial intelligence is credited and paid for. Giblin and Doctorow's Chokepoint Capitalism unpacks how big tech has already mined human creativity to build existing digital business models for streaming and social media, and it seems unlikely that the current battle will be the last. It will, however, be a decisive one that will help determine the commercial value of human creativity.

A white-skinned, male-presenting person wearing a red t-shirt and a cowboy hat holds a protest sign that says "Writers Guild on Strike!" in print and "A.I.'s not taking your dumb notes!" in handwriting below.
A picketer, David James Henry, carries a sign that reads "A.I.'s not taking your dumb notes!" By David James Henry — Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=131577476

Perhaps the most important lesson to be learnt from the digital revolution is how important it continues to be for knowledge producers and the creative industries to stand together and act collaboratively and collectively to establish a fair deal for creativity. Technology companies can wield extraordinary resources of money and power to secure the best outcomes for their business models, and are unlikely to be defeated by a single courageous campaign or campaigner. A united front is needed to ensure protections now and in the future; as the capabilities of artificial intelligence improve, new challenges will emerge, but making copyright work will help put in place foundational protections for writers and help establish some boundaries for big tech companies and the technologies they bring to market.

No technology is inevitable. Artificial intelligence won't replace humanity, but it is likely to change the nature of some jobs and many business models; these are not godlike entities, but human structures and processes. As such, the immediate challenge is not to be overcome by fear of AI or its mythical capabilities, but to deliver collective action that ensures human intelligence and creative labour are not subsumed by short-term commercial concerns.

This article draws on research conducted for Coldicutt, R., Williams, A. and Barron, D. (2023) 'The Networked Shift: A Creative Industries Foresight Study'. London: Careful Industries. Commissioned by MyWorld and the Creative Industries Policy and Evidence Centre.
