The European Union plans to beef up its response to online disinformation, with the Commission saying today it will step up efforts to combat harmful but not illegal content — including by pushing for smaller digital services and adtech companies to sign up to voluntary rules aimed at tackling the spread of this manipulative and often malicious content.
EU lawmakers pointed to risks such as the threat to public health posed by the spread of harmful disinformation about COVID-19 vaccines as driving the need for tougher action.
Concerns about the impacts of online disinformation on democratic processes are another driver, they said.
Commenting in a statement, Thierry Breton, commissioner for Internal Market, said: “We need to rein in the infodemic and the diffusion of false information putting people’s lives in danger. Disinformation cannot remain a source of revenue. We need to see stronger commitments by online platforms, the entire advertising ecosystem, and networks of fact-checkers. The Digital Services Act will provide us with additional, powerful tools to tackle disinformation.”
A new, more expansive code of practice on disinformation is being prepared — the Commission hopes to finalize it in September so it is ready for application at the start of next year.
Its gear change is a fairly public acceptance that the EU’s voluntary code of practice — an approach Brussels has taken since 2018 — has not worked out as hoped. And, well, we did warn them.
A push to get the adtech industry on board with demonetizing viral disinformation is undoubtedly overdue.
The online disinformation problem hasn’t gone away. Some reports have suggested inappropriate activity — like social media voter manipulation and computational propaganda — has been getting worse in recent years rather than better.
However, getting visibility into the true scale of the disinformation problem remains a considerable challenge, given that those best placed to know (ad platforms) don’t freely open their systems to external researchers. But that’s something else the Commission would like to change. Signatories to the EU’s current code of practice on disinformation are:
Google, Facebook, Twitter, Microsoft, TikTok, Mozilla, DOT Europe (formerly EDiMA), the World Federation of Advertisers (WFA) and its Belgian counterpart, the Union of Belgian Advertisers (UBA); the European Association of Communications Agencies (EACA) and its national members from France, Poland, and the Czech Republic — respectively, the Association des Agences Conseils en Communication (AACC), Stowarzyszenie Komunikacji Marketingowej/Ad Artis Art Foundation (SAR), and Asociace Komunikačních Agentur (AKA); the Interactive Advertising Bureau (IAB Europe), Kreativitet & Kommunikation, and Goldbach Audience (Switzerland) AG.
EU lawmakers said they want to broaden participation by getting smaller platforms to join and recruiting all the various players in the adtech space whose tools provide the means for monetizing online disinformation.
Today, commissioners said they want to see the code covering a “whole range” of actors in the online advertising industry (i.e., rather than the current handful).
In its press release, the Commission also said it wants platforms and adtech players to exchange information on disinformation ads that have been refused by one of them, so there’s a more coordinated response to shut out bad actors.
As for those signed up already, the Commission’s report card on their performance was bleak.
Speaking during a press conference, Breton said that only one of the five platform signatories to the code has “really” lived up to its commitments — which was presumably a reference to the first five tech giants in the above list (aka Google, Facebook, Twitter, Microsoft, and TikTok).
Breton demurred on doing an explicit name-and-shame of the four others — who he said have not “at all” done what was expected of them — saying it’s not the Commission’s place to do that.
Instead, he said people should decide for themselves which platform giants that signed up to the code have failed to live up to their commitments. (Signatories since 2018 have pledged to take action to disrupt ad revenues of accounts and websites that spread disinformation; to enhance transparency around political and issue-based ads;
tackle fake accounts and online bots; empower consumers to report disinformation and access different news sources while improving the visibility and discoverability of authoritative content; and empower the research community so outside experts can help monitor online disinformation through privacy-compliant access to platform data.)
Frankly, it’s hard to imagine which of the five tech giants from the above list might be meeting the Commission’s bar. (Microsoft, perhaps, on account of its relatively modest social activity versus the rest.)