
Ofcom Warns Tech Giants: AI Chatbots Must Comply with Online Safety Laws

In a stark warning to tech companies, UK communications regulator Ofcom has declared that content generated by user-created AI chatbots will fall under the purview of the nation’s Online Safety Act, whose duties begin taking effect next year. The announcement comes on the heels of disturbing incidents in which chatbots on the Character.AI platform were found impersonating the deceased British teenagers Brianna Ghey and Molly Russell.

According to sources close to the matter, Ofcom felt compelled to issue this guidance after these “distressing” cases came to light. The regulator emphasized that any website or app allowing users to create and share chatbot-generated content with others would be subject to the stringent requirements of the Online Safety Act.

Protecting Users, Particularly Children

Set to come into full force next year, the new online safety rules will require social media platforms and other hosts of user-generated content to implement robust measures to shield users, especially children, from illegal and harmful material. The largest platforms will be required to:

  • Proactively identify and remove illegal and potentially harmful content
  • Provide clear reporting tools for users
  • Conduct thorough risk assessments
  • Fulfill other critical duties to ensure user safety

Failure to comply with these regulations could result in severe penalties, including fines of up to £18 million or 10% of a company’s global turnover, whichever is greater. In the most extreme cases, websites or apps could even be blocked in the UK entirely.

Character.AI Chatbots Impersonate Deceased Teens

The Ofcom guidance was prompted by the alarming discovery that users on the Character.AI platform had created chatbots acting as virtual clones of Brianna Ghey, a 16-year-old transgender girl who was murdered last year, and Molly Russell, who took her own life in 2017 after viewing harmful online content.

The Molly Rose Foundation (MRF), a charity established by Molly’s family, commended Ofcom’s stance, stating that it sends a “clear signal” about the potential for chatbots to cause significant harm. However, the foundation also called for further clarity on whether bot-generated content could be treated as illegal under the act.

Complexity of the Online Safety Act

Legal experts have pointed out that Ofcom’s need to clarify the inclusion of these services reflects the immense breadth and complexity of the Online Safety Act. As Ben Packer, a partner at the law firm Linklaters, noted, work on the act began several years before GenAI tools and chatbots proliferated, which helps explain why the legislation struggles to keep up with these rapidly evolving technologies.

Character.AI Responds

In response to the controversy, Character.AI asserted that it takes platform safety seriously and moderates content both proactively and in response to user reports. The company confirmed that the chatbots impersonating Ghey and Russell had been removed from the platform, along with a separately reported chatbot based on a Game of Thrones character.

As the UK prepares to implement the Online Safety Act, Ofcom’s warning serves as a sobering reminder of the urgent need for robust regulation in the face of rapidly advancing AI technologies. The impersonation of deceased individuals by chatbots underscores the potential for these tools to cause real harm, particularly to vulnerable users and grieving families.

With the stakes higher than ever, it remains to be seen how effectively the new regulations will keep pace with the ever-evolving landscape of artificial intelligence and protect users from the darker aspects of this transformative technology. As the world watches, the UK’s approach to regulating AI chatbots may well set a precedent for other nations grappling with similar challenges.