The European Union has expanded its warning about illegal content and disinformation related to the Israel-Hamas war circulating on social media, this time addressing Meta, the parent company of Facebook and Instagram.
Yesterday the bloc’s internal market commissioner, Thierry Breton, published an urgent letter to Elon Musk, owner of X (formerly Twitter) — raising concerns the platform is being used to disseminate illegal content and spread potentially harmful disinformation in the wake of Saturday’s surprise attacks on Israel by Hamas terrorists based in the Gaza Strip.
Breton’s letter to Meta’s founder Mark Zuckerberg, which he’s also made public via a post on X, is a little less urgent in tone than yesterday’s missive to Musk. But the social media giant has also been given 24 hours to respond to the EU’s concerns about the same sorts of content risks.
“Following the terrorist attacks carried out by Hamas against Israel, we are seeing a surge of illegal content and disinformation being disseminated in the EU via certain platforms,” the EU commissioner writes. “I would ask you to be very vigilant to ensure strict compliance with the DSA [Digital Services Act] rules on terms of service, on the requirement of timely, diligent and objective action following notices of illegal content in the EU, and on the need for proportionate and effective mitigation measures.
“I urgently invite you to ensure that your systems are effective. Needless to say, I also expect you to be in contact with the relevant law enforcement authorities and Europol, and ensure that you respond promptly to any requests.”
We contacted Meta for a response to Breton’s warning, and to ask what steps it’s taking to ensure it can respond effectively to content risks related to violent events in Israel and Gaza, but at the time of writing the company had not responded.
We’ve also reached out to the Commission to ask if it has related concerns about any other social media platforms.
Since Saturday’s bloody attacks, there have been reports of graphic videos being uploaded to Meta platforms. In one report on Israeli television, which has been recirculating in a clip shared to social media, a woman recounted how she and her family learned that her grandmother had been murdered by Hamas terrorists after the attackers took a video of her dead body with her own phone and uploaded it to her Facebook account.
Eye on election disinformation
The bloc’s letter to Meta is not solely focused on risks arising from the Israel-Hamas war. It also reveals the Commission is worried Meta is not doing enough to deal with disinformation targeting European elections.
“I personally raised your attention when we met in San Francisco in June to the fact that Meta would need to pay particular attention to this issue in order to comply with the DSA, and the topic was covered extensively in the stress test carried out by our teams in July,” writes Breton. “However, while we have noted steps taken by Meta to increase mitigation measures in the run-up to the recent elections in Slovakia — such as increased cooperation with independent authorities, improvements in response times, and increased fact-checking — we have also been made aware of reports of a significant number of deep fakes and manipulated content which circulated on your platforms and a few still appear online.”
“I remind you that the DSA requires that the risk of amplification of fake and manipulated images and facts generated with the intention to influence elections is taken extremely seriously in the context of mitigation measures,” he adds, asking Zuckerberg to respond “without delay” with “details of the measures you have taken to mitigate such deepfakes, also in the light of upcoming elections in Poland, The Netherlands, Lithuania, Belgium, Croatia, Romania and Austria, and the European Parliament elections”.
The DSA, a pan-EU regulation focused on content moderation, applies the deepest obligations and governance controls to larger platforms, so-called Very Large Online Platforms (VLOPs). The Commission designated 19 VLOPs back in April, including Meta-owned Facebook and Instagram, and these face extra requirements to assess and mitigate systemic risks attached to the use of algorithms and AI. This means VLOPs are expected to be proactive about identifying and mitigating systemic risks such as political disinformation, in addition to acting swiftly on reports of illegal content such as terrorism.
Penalties for a confirmed breach of the regime include fines of up to 6% of global annual turnover, which in Meta’s case could mean a penalty running to several billion dollars.
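For a rough sense of scale: Meta reported full-year 2022 revenue of around $117 billion, so a maximum fine of 6% of global annual turnover would work out to roughly $7 billion.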
Political deepfakes have emerged as a particular area of concern for the Commission, as developments in generative AI have made it cheaper and easier to produce this type of disinformation. Last month the bloc said it would be meeting with AI giant OpenAI to discuss the issue. But the role social media platforms can play in rapidly and widely disseminating these sorts of fakes is also clearly on the EU’s radar.