Snap’s AI chatbot draws scrutiny in UK over kids’ privacy concerns

Snap’s AI chatbot has landed the company on the radar of the UK’s data protection watchdog, which has raised concerns that the tool may pose a risk to children’s privacy.

The Information Commissioner’s Office (ICO) announced today that it’s issued a preliminary enforcement notice on Snap over what it described as “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’”.

The ICO action is not a breach finding. But the notice indicates the UK regulator is concerned that Snap may not have taken adequate steps to ensure the product complies with data protection rules, which since 2021 have included the Children’s Code (also known as the Age Appropriate Design Code).

“The ICO’s investigation provisionally found the risk assessment Snap conducted before it launched ‘My AI’ did not adequately assess the data protection risks posed by the generative AI technology, particularly to children,” the regulator wrote in a press release. “The assessment of data protection risk is particularly important in this context which involves the use of innovative technology and the processing of personal data of 13 to 17 year old children.”

Snap will now have a chance to respond to the regulator’s concerns before the ICO takes a final decision on whether the company has broken the rules.

“The provisional findings of our investigation suggest a worrying failure by Snap to adequately identify and assess the privacy risks to children and other users before launching ‘My AI’,” added Information Commissioner John Edwards in a statement. “We have been clear that organisations must consider the risks associated with AI, alongside the benefits. Today’s preliminary enforcement notice shows we will take action in order to protect UK consumers’ privacy rights.”

Snap launched the generative AI chatbot back in February (though it didn’t arrive in the UK until April), leveraging OpenAI’s ChatGPT large language model technology to power a bot pinned to the top of users’ feeds, acting as a virtual friend that could be asked for advice or sent snaps.

Initially the feature was only available to subscribers of Snapchat+, a premium version of the ephemeral messaging platform. But Snap quickly opened up “My AI” to free users too, also adding the ability for the AI to send snaps back to users who interacted with it (these snaps are created with generative AI).

The company has said the chatbot was developed with additional moderation and safeguarding features, including age consideration by default, with the aim of ensuring generated content is appropriate for the user. The bot is also programmed to avoid responses that are violent, hateful, sexually explicit, or otherwise offensive. Additionally, Snap’s Family Center parental tools let parents see whether their kid has communicated with the bot in the past seven days.

But despite the claimed guardrails, there have been reports of the bot going off the rails. In an early assessment back in March, The Washington Post reported that the chatbot had recommended ways to mask the smell of alcohol after being told the user was 15. In another case, when told the user was 13 and asked how they should prepare to have sex for the first time, the bot responded with suggestions for “making it special” by setting the mood with candles and music.

Snapchat users have also reportedly been bullying the bot, with some frustrated that an AI was injected into their feeds in the first place.

Reached for comment on the ICO notice, a Snap spokesperson told TechCrunch:

We are closely reviewing the ICO’s provisional decision. Like the ICO we are committed to protecting the privacy of our users. In line with our standard approach to product development, My AI went through a robust legal and privacy review process before being made publicly available. We will continue to work constructively with the ICO to ensure they’re comfortable with our risk assessment procedures.

It’s not the first time an AI chatbot has landed on the radar of European privacy regulators. In February, Italy’s Garante ordered the San Francisco-based maker of “virtual friendship service” Replika to stop processing local users’ data, also citing concerns about risks to minors.

The Italian authority imposed a similar stop-processing order on OpenAI’s ChatGPT tool the following month. The block was lifted in April, but only after OpenAI had added more detailed privacy disclosures and some new user controls, including letting users ask for their data not to be used to train its AIs and/or to be deleted.

The regional launch of Google’s Bard chatbot was also delayed after its lead regional privacy regulator, Ireland’s Data Protection Commission, raised concerns. Bard subsequently launched in the EU in July, likewise after adding more disclosures and controls. But a regulatory taskforce set up within the European Data Protection Board remains focused on assessing how to enforce the bloc’s General Data Protection Regulation (GDPR) against generative AI chatbots, including ChatGPT and Bard.

Poland’s data protection authority also confirmed last month that it’s investigating a complaint against ChatGPT.

