New online safety laws aim to protect children—but will they harm artists?

The popular term “IRL” (in real life) implies that our online lives are not exactly “real”. But the last few years have brought critical attention to the IRL influence and harm that can come from social media, particularly for children. The desire to protect young people online has therefore become a call to action across continents, prompting pioneering legislation to regulate platforms. But along with these righteous campaigns comes the chilling prospect of restricted artistic freedom and vociferous arguments about government censorship. A case in point is the UK’s Online Safety Bill, set to be passed in the coming months and enforced by the communications regulator Ofcom and the Information Commissioner’s Office (ICO), the UK’s data protection regulator.

The main objective of the bill is to ensure privacy and safety online, with a focus on young users. First proposed in April 2019 and hailed as the “world’s first online safety law”, the Online Safety Bill has since outlived four prime ministers and survived multiple governmental shifts. Brought back to the table earlier this year, the bill met vocal opposition from digital rights groups and a public concerned about freedom of expression, particularly the targeting of content deemed “legal but harmful”.

This reactionary language, intended to add an extra layer of protection for sensitive users, is particularly dangerous for artists, who often see first-hand the real-life consequences of ill-defined guidelines and subjective enforcement. A recent attempt to address similar concerns in the US, the so-called FOSTA-SESTA laws, made platforms liable for content that could be perceived as facilitating sex work, a standard so loose that platforms pushed out creators and artists rather than risk litigation.

But, in an about-face, the “legal but harmful” provision was removed last November, putting the bill on track towards regulation that is less reactionary and, with luck, does not leave creators and artists out in the cold. In its place, companies will be required to provide greater transparency, accountability and protection of freedom of speech. Rather than invite the over-zealous, top-down censorship many feared, the bill proposes that users be given better control over the content they see, and that age-verification tools help keep harmful content away from children.

Double-edged measures

Despite the removal of ambiguous directives, some still worry that companies will be prone to over-censoring content. With Ofcom as the external enforcer, companies will face steep fines, and their principals possible jail time, for failing to comply with the law; sharp measures like these cut both ways.

Carolina Are, a platform governance researcher at the Centre for Digital Citizens at Northumbria University (who is also a pole dance instructor and an oft-censored content creator), worries these extreme consequences “won’t exactly encourage platforms to go easy on things. Platforms do tend to over-censor, particularly for marginalised communities. While the bill might be trying to solve a problem, it might create more problems for those whose freedom of expression needs to be looked after.” Regardless, Are is cautiously optimistic about Ofcom’s approach, saying: “They are doing amazing work to reach out to researchers and become better equipped for this job.”

In any effort to protect users from harmful content, it is important to remember that “harmful” is subjective, and that art censorship often happens within that subjective space. Putting more moderation control in the hands of users is an increasingly popular way to make content moderation work better for everyone, but the question remains: who decides what harmful content actually looks like?

Participatory content moderation models are already being tested. On Reliabl, for example, the community both sets and enforces its own standards through a layered system of tagging. Tumblr, meanwhile, began permitting nudity again this year, relying on a system of “community labels” so users can choose what content they see.

As our “real” world begins to regulate our online lives, ensuring that freedom of expression remains participatory will be a complicated mission, one that could truly bridge these worlds once thought so far apart.
