The EU has criticised Twitter for not providing enough information regarding its disinformation efforts.
It failed to submit a detailed report on how it was meeting its obligations, in contrast with other signatories of the Code of Practice on Disinformation such as Google, Meta, Microsoft, and TikTok.
These reports should contain information such as how much advertising revenue each company prevented from going to disinformation actors; the amount or value of political ads that were accepted, labelled or rejected; instances where manipulative behaviour was detected, such as the creation and use of fake accounts; and information about the impact of fact-checking.
These reports will be submitted to the Transparency Centre, which is designed to provide visibility and accountability for platforms’ disinformation efforts.
There are now 38 signatories to the code, ranging from large tech platforms like Google and Meta to non-profits, fact-checking organisations and software companies.
The EU says that while reports from other platforms ran to 150 pages, Twitter’s report was just 80 pages. According to the EU, it was “short on data” and did not include information about the company’s commitments to support fact-checking communities.
“We need to be more transparent and can’t rely solely on online information platforms for quality. They need to be independently verifiable,” says Věra Jourová, EU vice-president for values and transparency.
“It is disappointing to see the Twitter report fall behind other sources and I expect them to be more serious about their obligations arising from the Code.”
The reports are meant to be a starting point, showing for the first time the state of play on how firms have implemented their commitments under the code. The next batch of reports is expected in July.
The reports reveal that during the third quarter of 2022, Google prevented more than €13 million of advertising revenues from flowing to disinformation actors in the EU.
The figure for MediaMath, a demand-side platform that helps ad buyers manage programmatic ads, was €18 million.
TikTok claimed it had removed 800,000 fake accounts over the same period, while Meta reported that about 28 million fact-checking labels were applied on Facebook and 1.7 million on Instagram in December 2022.
Microsoft announced that its NewsGuard partnership had resulted in news reliability ratings being displayed 84,211 times in its Edge browser discover pane in December 2022. Twitch reported that in October it blocked 270,921 botnets and fake accounts created through its platform, and also took steps to stop 32 impersonation and hijacking attempts.
Twitter’s brief report might not come as a surprise given its recent announcement about removing third-party access to its application programming interfaces (APIs).
While the company points to its Community Notes feature, which relies on volunteer fact-checkers, as the core of its report, it admits that the feature is not available in all member states.
We’ve reached out to Twitter for a response and will keep you updated.