Transparency Initiatives in the Digital Services Act: An Exciting Step Forward in Online Content Removal Transparency

Lumen Database Team
Apr 5, 2022

In January 2022, the European Parliament voted in favor of the Digital Services Act (DSA), horizontal legislation for the EU’s digital single market that seeks to define platforms’ responsibility for user content. The draft law also contains several concrete provisions aimed at mitigating certain harms of online advertising, including a ban on ‘dark patterns’ when obtaining consent from users (Article 13a), a practice that recently led the French data protection authority to impose fines of over $200 million on Facebook and Google. While the DSA seeks to promote a freer internet in numerous ways, this article focuses on its transparency mandates for content moderation decisions and its provisions mandating researcher access to data.

Transparency Reporting

The DSA contains arguably some of the most fully fleshed-out content moderation transparency mandates of any comparable legislation. For example, Article 13 requires online platforms to publish annual transparency reports. Most ‘very large online platforms’, defined as platforms with more than 45 million users in the EU, already comply with some of the DSA’s requirements for transparency reporting, such as disclosing the total number of content removal notices received, the types of content that removal was requested for, and the action taken in response. The DSA asks for similar transparency reporting from all online platforms and additionally requires the disclosure of further information, such as the automated tools (if any) used in making content moderation decisions and the types of moderation measures taken.
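To make these reporting categories concrete, here is a minimal sketch of how such figures might be aggregated from a platform’s internal notice log. All field names and category labels are hypothetical assumptions; the DSA prescribes what must be disclosed, not any particular data format.

```python
from collections import Counter

# Hypothetical notice log; the DSA prescribes what must be disclosed,
# not how platforms structure their internal records.
notices = [
    {"content_type": "copyright", "action": "removed", "automated": True},
    {"content_type": "hate speech", "action": "no action", "automated": False},
    {"content_type": "copyright", "action": "geo-blocked", "automated": True},
]

# Aggregate the kinds of figures an Article 13-style report asks for.
report = {
    "total_notices_received": len(notices),
    "notices_by_content_type": dict(Counter(n["content_type"] for n in notices)),
    "actions_taken": dict(Counter(n["action"] for n in notices)),
    "decisions_made_with_automated_tools": sum(n["automated"] for n in notices),
}
print(report)
```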

This mandate is imposed not only on external takedown frameworks put in place by law, such as the DMCA in the USA or the mechanism proposed in Article 14 of the DSA; it also applies to ‘internal complaint-handling systems’, the most salient example of which is YouTube’s Content ID. YouTube recently released its first copyright transparency report, which includes some statistics on the complaints handled by Content ID. This is a significant first step in ensuring that the content moderation transparency the DSA mandates remains equitable across all moderation tools, not just public-facing legal tools such as notice and takedown mechanisms.

Reporting by ‘Trusted Flaggers’

The provision for ‘trusted flaggers’ (Article 19) awards special status to certain NGOs, experts, and public bodies, which would then have the ability to expedite content removal by flagging content as unlawful. One key requirement for trusted flaggers is that they must publish annual reports on all content removal requests they submit, along with details such as the hosting provider, the type of content, and the legal provisions invoked. Another requirement for the status is that the flagger operate independently and free of conflicts of interest. All content removal requests by trusted flaggers will be made available to the public by the European Commission by way of a publicly accessible database.

Before this provision, there was no clear division among types of notice senders when it came to handling content removal requests. YouTube’s internal complaint-handling system, Content ID, for example, has operated with a tiered system based on the frequency and scale at which senders submit copyright infringement takedown requests, not on the type of sender, which appears to be the organizing principle of the ‘trusted flagger’ system.

Awarding trusted flagger status to NGOs and certain public bodies may ensure more expeditious removal of some patently illegal content, such as child pornography and terrorist content. However, this accreditation may also be exploited or abused for extra-legal content removal, despite the publicly accessible database of accredited trusted flaggers serving as an oversight mechanism. For example, there is notable potential for abuse of the expeditious takedown process for content that is time- and context-sensitive, such as content posted during elections, where the swiftness of removal may cause a chilling effect even if the content is later reinstated as legitimate. The balanced dialogue that the envisioned system initiates between civil society and institutional actors as potential trusted flaggers on the one hand, and online platforms on the other, is valuable, although ensuring the independence of the trusted flaggers will be vital.

Reporting of Time Taken for Content Removal

Most of the DSA’s content moderation-related transparency reporting clauses (such as Articles 13 and 17) require disclosure of both the average and the median time taken by the platform to act on allegedly infringing content.
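As a rough illustration of the metric itself, the sketch below computes the average and median handling time from a handful of hypothetical notice timestamps; the DSA mandates the disclosure, not any particular method of computation.

```python
from datetime import datetime
from statistics import mean, median

# Hypothetical (received, actioned) timestamps for notices a platform handled.
handled = [
    ("2022-03-01T09:00", "2022-03-01T17:30"),
    ("2022-03-02T12:00", "2022-03-04T08:00"),
    ("2022-03-03T10:15", "2022-03-03T11:00"),
]

# Handling time in hours for each notice.
hours = [
    (datetime.fromisoformat(acted) - datetime.fromisoformat(received)).total_seconds() / 3600
    for received, acted in handled
]

print(f"average time to act: {mean(hours):.1f} hours")
print(f"median time to act: {median(hours):.1f} hours")
```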

The recurring emphasis on mandating disclosure of the average and median time for removing content seems somewhat peculiar. While this practice may yield useful information about the promptness of takedowns for patently illegal content such as child pornography, it may also encourage overbroad content removal by platforms seeking to err on the side of caution. Research has indicated that Germany’s 24-hour takedown requirement for ‘patently illegal’ content led to over-removal. Similarly, if social media platforms moderate content under the working assumption that regulators prefer swifter takedowns for all types of content, or that speed would weigh in favor of a platform’s immunity should a decision be challenged, the likely result is a chilling effect on free speech online.

Data Accessibility for Researchers

Another notable advancement in online transparency that the DSA proposes is its set of mandates around data accessibility for researchers. The DSA proposes databases of content takedown requests at two levels. First, Article 15(4) requires all hosting services, such as web hosting or cloud services, to annually publish the actions they have taken to remove content, and the statements of reasons for doing so, in a publicly accessible database maintained by the European Commission. The explicit caveat for this data accessibility mandate, as with most others in the DSA, is that the database must not contain any personal data.
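To make that caveat concrete, here is a hedged sketch of what a single entry in such a database might look like once personal data has been stripped. The field names are purely illustrative assumptions, not the Commission’s actual schema.

```python
# Hypothetical statement-of-reasons entry for the publicly accessible database.
# Field names are illustrative; the key constraint is that nothing in the
# record identifies the uploader or the notifier.
entry = {
    "hosting_service": "example-cloud-host.eu",
    "content_category": "allegedly defamatory post",
    "legal_or_contractual_ground": "national defamation law",
    "action_taken": "removal",
    "automated_detection": False,
    "statement_of_reasons": "Content removed following a notice citing ...",
    "date_of_decision": "2022-03-15",
    # Deliberately absent: uploader identity, notifier identity, IP addresses,
    # or any other personal data.
}
print(entry["statement_of_reasons"])
```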

Article 31 obliges very large online platforms to provide data to vetted researchers to ‘identify, mitigate and understand systemic risks’ of their business models, including the functioning of their services and the use that users make of them. Researchers can use this information to assess any “actual and foreseeable negative effects for the exercise of fundamental rights”. This is not limited to takedown requests and the actions taken; it includes any other information a researcher may need to assess the functioning of the platform.

Because this provision grants researchers arguably the highest level of data access of any of the DSA’s researcher-access provisions, it works through an intermediary: the Digital Services Coordinator, an independent authority that each EU member state is required to designate. The Coordinator will vet prospective researchers and, once satisfied of their independence and non-commercial interests (as laid out in Article 31(4)), will require the very large online platform to share data with them.

Unlike the data accessibility mandate for hosting providers, the mandate for very large online platforms does not caution against sharing personal data. This is likely because the mandate for hosting providers is to populate a publicly accessible database, whereas very large online platforms are required to share data only with vetted researchers. There are additional oversight mechanisms to ensure that the researchers’ accreditation remains valid through the course of their research.

Researchers, academics, and scholars have for many years been trying to obtain content moderation information from online service providers in order to research blocking practices, cross-reference removal claims, and assess larger takedown trends. In the past, platforms such as Facebook and Instagram have shut down research efforts by external scholars and academics, citing alleged privacy concerns for their users. A law laying out the circumstances under which such data must be shared with vetted researchers will allow for more effective, higher-quality research into the takedown practices of online platforms.

Mandatory disclosures will improve research dramatically, because researchers currently lack access to good data. Not-for-profit initiatives such as the Lumen Project have been working for almost two decades to maintain a database of the content removal requests that online platforms receive. This data is then provided to researchers, who have used it to identify problematic trends in content moderation. Through voluntary partnerships, over twenty online platforms now share takedown notices with the database on topics including copyright, counterfeit products, trademark, patent, government requests, and court orders. Researchers who work with the database are always eager for more data about notices, including the jurisdiction from which a notice originated and whether any action was taken to remove or moderate the content, as well as for more online platforms sharing takedown notices in general. Informal governance groups such as the Global Network Initiative have also called for transparency measures through voluntary membership.

However, internal voluntary transparency initiatives by online platforms are likely to be designed to serve business incentives more than anything else, and this makes legislative mandates necessary. For instance, while the Lumen database holds large amounts of data on takedown requests sent to online platforms, this data remains limited by the voluntary nature of the partnerships and by the limited set of participants. The database can only aggregate the notices that online platforms choose to share, so the data is only as good as those platforms’ internal transparency imperatives. Despite the abundance of data, this remains a limitation the Lumen database cannot overcome on its own, and one that legal transparency mandates would remedy. Hence, creating legal mandates and due processes that require very large online service providers to share data with vetted researchers is a welcome next step toward ensuring continued research into effective content moderation policies and practices.

Conclusion

Compared with other legislative mandates passed globally, the DSA in its current framework seeks a new level of granularity in transparency surrounding content moderation practices. For example, Germany’s NetzDG requires semi-annual reports about the handling of complaints over unlawful content but has no further data accessibility norms for researchers. India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 require large-scale OSPs to publish monthly transparency reports with details of complaints received and actions taken, as well as the number of URLs disabled through proactive monitoring with automated tools, although this information remains high-level rather than the more detailed breakdown that the DSA compels. The US Platform Accountability and Transparency Act, proposed by Stanford Professor Nathaniel Persily, is a legislative effort akin to the DSA in that it imposes extensive transparency and public disclosure obligations and also mandates data accessibility for researchers while preserving user privacy. It is, however, still at the proposal stage.

The DSA offers a valuable opportunity to impose binding transparency mandates that encourage accountability from online platforms and mitigation of the systemic online risks they may pose. Even though the DSA’s current proposed framework will undergo changes, it will certainly have wide-ranging implications for the future of online transparency: like the GDPR, it is likely to serve as a template for similar legislative and regulatory efforts in the future.

About the author: Shreya Tewari is a Research Fellow at the Berkman Klein Center’s Lumen Project.
