Three recommendations to improve the safety of safety tech

Rachel Coldicutt
4 min read · Nov 29, 2022


Formal feedback channels for researchers and advocates · A safety tech standards and scrutiny body is created · Ongoing public engagement and feedback

Earlier this year, I was part of a team working on a piece of research to assess the impact of safety tech on the UK digital economy. My role was to think about policy recommendations and framing but — as so often happens with this kind of project — the policy recommendations didn’t make it to the final cut.

The following are personal observations. This isn’t a comprehensive set of policy proposals, but some practical, achievable steps that are important to consider if safety tech is going to be deployed safely, in ways that benefit the people most likely to be affected by online harms.

This is a complex area and a short blog post, so I’ll lead with the recommendations, which are, I hope, relatively self-explanatory, and then explain why these considerations are important.

  1. Researchers, advocates and civil society innovators need to be well-funded and have formal channels of communication, influence and feedback with the safety tech sector
  2. Someone watches the watchers — a safety tech standards and scrutiny body is created, prioritising lived experience of harm over industry representation
  3. Public engagement and feedback is an ongoing part of the process, not restricted to consultation phases

What is safety tech?

A previous DCMS report describes safety tech as “technologies or solutions to facilitate safer online experiences, and protect users from harmful content, contact or conduct”. This is another way of saying that the term “safety tech” covers a lot of different things, of varying technical complexity and with a broad range of social, cultural, and political implications.

As in much online policy, everything that happens on a screen is bundled together here, but this clustering is not very helpful for thinking about governance.

In reality, national security concerns and individual anti-social behaviour have very different causes and remedies, but both are covered, as is the facilitation of online experiences (which might be anything from a text-based chat to an immersive game) and people’s “protection”, in itself a complex and contentious social concept. It also includes the work of the Internet Watch Foundation in detecting CSAM and what happens when someone reports a nipple on Instagram.

Age assurance — often considered a curative for many online ills — is also included. Digital age assurance is a complex problem in a country without a central identity system. Many of the proposed solutions (including, for instance, this report from the Children’s Commissioner) assume it is possible for a patchwork of platforms and third parties to hold sensitive information about people’s online behaviour, or that it is possible for “estimation” technologies to assess whether a person is older or younger than 18. Age estimation is a growing industry that many companies will rely on as more stringent measures are introduced, but it sits atop a tangle of human rights and security concerns that are obscured by smooth processes. Widespread roll-out without sufficient diligence risks proliferating a new set of online harms.
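To make the thresholding problem concrete, here is a minimal, hypothetical sketch in Python. The error figure and the estimator are assumptions for illustration, not any vendor’s published performance; the point is simply that even a modest average error means many 16- and 17-year-olds will be waved through and many 18- and 19-year-olds wrongly blocked.

```python
# A minimal, hypothetical sketch (not any vendor's actual system) of why
# thresholding an age *estimate* at 18 is harder than it sounds: even a
# modest estimation error produces many wrong calls near the boundary.

import random

random.seed(0)

ERROR_SD = 2.5  # assumed standard deviation of the estimator's error, in years

def estimated_age(true_age: float) -> float:
    """Simulate a hypothetical estimator: true age plus Gaussian noise."""
    return true_age + random.gauss(0, ERROR_SD)

def misclassification_rate(true_age: float, trials: int = 100_000) -> float:
    """How often the >=18 decision is wrong for someone of this true age."""
    wrong = 0
    for _ in range(trials):
        decided_adult = estimated_age(true_age) >= 18
        actually_adult = true_age >= 18
        if decided_adult != actually_adult:
            wrong += 1
    return wrong / trials

for age in (13, 16, 17, 18, 19, 21, 25):
    print(f"true age {age}: wrong decision {misclassification_rate(age):.1%} of the time")
```

Under these assumptions, people far from the boundary are classified reliably, while 17- and 19-year-olds are misjudged roughly a third of the time, which is exactly the age range these systems are supposed to protect.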

Likewise, delegating content moderation to machines is complicated.

Automated content moderation is necessary, but it is not always sufficient or problem-free. (See Sarah T. Roberts’s Behind the Screen for an introduction to the human costs of human moderation.)

At its most simplistic, the clash of the technical and the cultural can be seen in what is now known as the “Scunthorpe Problem”: in the mid-1990s, AOL’s bad-language filters flagged the name of the town of Scunthorpe as offensive, blocking residents from creating accounts. While web filtering has improved a great deal in the last 25 years, it remains far from perfect.
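To show the mechanism, here is a minimal sketch (illustrative only, not AOL’s actual code) of how a naive blocklist filter trips over the town’s name, and how the obvious fix simply trades one failure mode for another.

```python
# A minimal sketch of the Scunthorpe Problem: a naive blocklist filter that
# matches banned terms anywhere inside a word flags the town name, while a
# word-boundary-aware version does not. (Illustrative only.)

import re

BLOCKLIST = ["cunt"]  # the substring hidden inside "Scunthorpe"

def naive_filter(text: str) -> bool:
    """Flag if any banned term appears anywhere, even inside another word."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Flag only when a banned term appears as a whole word."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCKLIST)

message = "Welcome to Scunthorpe, North Lincolnshire"
print(naive_filter(message))          # True  -- false positive on the town name
print(word_boundary_filter(message))  # False -- but deliberate misspellings now slip past
```

Patching the substring match with word boundaries stops the false positive but immediately lets deliberate misspellings and spaced-out words through, which is one reason filtering alone never closes the gap.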

The need for transparency, standards and oversight

Safiya Noble’s Algorithms of Oppression offers myriad examples of how “racism and sexism are part of the architecture and language of technology”. Algorithmic bias, and the structural problems of using historical data to make decisions about the present or the future, increase the likelihood that safety tech will replicate and deepen structural inequalities, including, but not limited to, sexism, racism, and ableism. This means that, without sufficient guardrails, safety tech could end up harming the very people it was created to protect.
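What “guardrails” might look like in practice can be quite mundane: for example, routinely measuring whether a moderation system’s mistakes fall more heavily on some groups of users than others. The sketch below is hypothetical and uses synthetic records, but it shows the kind of per-group false-positive check a standards and scrutiny body could require.

```python
# A minimal, hypothetical audit sketch: given a moderation model's decisions
# and human-reviewed ground truth, compare false-positive rates across groups.
# All data here is synthetic and illustrative; a real audit needs real labels,
# careful sampling, and lived-experience input on what counts as harm.

from collections import defaultdict

# (group, model_flagged, actually_harmful) -- synthetic examples
records = [
    ("group_a", True,  False), ("group_a", False, False), ("group_a", True,  True),
    ("group_a", False, False), ("group_b", True,  False), ("group_b", True,  False),
    ("group_b", True,  True),  ("group_b", True,  False), ("group_b", False, False),
]

def false_positive_rates(rows):
    """False-positive rate per group: flagged-but-harmless / all harmless."""
    flagged_harmless = defaultdict(int)
    harmless = defaultdict(int)
    for group, flagged, harmful in rows:
        if not harmful:
            harmless[group] += 1
            if flagged:
                flagged_harmless[group] += 1
    return {g: flagged_harmless[g] / harmless[g] for g in harmless}

for group, fpr in false_positive_rates(records).items():
    print(f"{group}: {fpr:.0%} of harmless posts wrongly flagged")
```

In this toy example one group has its harmless posts wrongly removed at more than twice the rate of the other; publishing that kind of figure, and being accountable for it, is the sort of transparency this post is arguing for.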

While the concept of social and technological bias is well established in discussions of AI ethics (see Meredith Broussard’s Artificial Unintelligence for a very readable primer on this), it is rarely covered in policy discussions of online safety. I touched on some of the challenges of executing algorithmic moderation in just and effective ways in written evidence to the Lords Communications and Digital Committee last year, but it remains a relatively niche concern in UK policy debates — despite its potentially chilling effect.

Abeba Birhane’s “Algorithmic Injustice: A Relational Ethics Approach” outlines one of the foundational problems of “fairness” in machine learning, which is that:

“the reality of the Western straight white male … masquerades as the invisible background that is taken as the ‘normal,’ ‘standard,’ or ‘universal’ position”

This is the same set of biases and cultural norms that led to crash test dummies being modelled on a standard male physique, meaning that safety measures were designed for only one section of the population. While the challenges this creates might present differently in different safety tech contexts, the fact that the Western straight white male is rarely the target of racist or sexist online abuse presents a problem for the rapid roll-out of safety tech. Mitigating this is essential.

Safety tech is a useful part of a holistic response to developing online trust and safety, but it is not and cannot be the only component. There is, after all, an irony to assuming that the problems caused by the unregulated deployment of technologies by private companies can be solved by the unregulated deployment of technologies by private companies. At the very minimum, there must be mechanisms for transparency, accountability, and redress, otherwise safety tech risks entrenching the very problems it has been invented to solve.
