The Weaponization of Copyright by Police Officers and the Need to Automate Fair Use

Lumen Database Team
Aug 6, 2021


“Cumbria police officers” by 28704869 is licensed under CC BY 2.0

In July 2021, an on-duty police officer in Alameda County who was being filmed by activists took out his phone and played Taylor Swift's 'Blank Space', hoping that automated copyright enforcement algorithms would flag the footage and hinder its circulation on social media. This was only one of the more recent examples of an unsettling trend in law enforcement: police officers weaponizing copyrighted music to keep bystander recordings of police from going viral.

Recordings of police encounters have become a crucial tool for law enforcement accountability. Minnesota Attorney General Keith Ellison, the lead prosecutor in Derek Chauvin's trial for the murder of George Floyd, described onlooker Darnella Frazier's recording of Floyd's death as an 'indispensable' piece of evidence in Chauvin's conviction. Bystander videos of police encounters serve not only accountability but also civilian protection.

Created in part to address the idiosyncrasies of online copyright infringement, the Digital Millennium Copyright Act (DMCA) requires that service providers promptly remove allegedly infringing content upon notification of its presence, in exchange for protection from any ensuing liability. However, some very large online service providers have found the DMCA's tools inadequate for their needs and have put in place their own private enforcement mechanisms, most of which rely on automatic, algorithmic procedures. YouTube's Content ID is the best known of these.

Although automated copyright enforcement algorithms aren't mandated by law, online service providers use them to cope with the almost unimaginable scale at which content is uploaded to the internet. However, such algorithms are almost by definition ill-equipped to take note of context. Moreover, these tools may be indirectly exploited by third parties in unintended ways to restrict the circulation of specific content. The use of copyrighted music by police officers during interactions with citizens recording them is an example of just such an unforeseen outcome.
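To see why context gets lost, consider what fingerprint-style matching does at its core. The following is a minimal sketch only: Content ID's internals are not public, and every function name, landmark scheme, and constant below is an illustrative assumption, not a real API. The point it makes is structural, not technical: the score depends solely on acoustic overlap, so a song playing faintly behind a police encounter and the same song in a music video can look alike to the matcher.

```python
# A minimal sketch of fingerprint-style audio matching. All names and
# constants are illustrative assumptions, not any real system's API.
import numpy as np
from scipy.signal import spectrogram

def dominant_bins(samples: np.ndarray, rate: int = 22050) -> set:
    """Reduce audio to the set of loudest frequency bins, one per frame."""
    _, _, sxx = spectrogram(samples, fs=rate, nperseg=1024)
    return set(np.argmax(sxx, axis=0).tolist())

def match_score(upload: np.ndarray, reference: np.ndarray) -> float:
    """Fraction of the reference's spectral landmarks found in the upload."""
    ref = dominant_bins(reference)
    return len(ref & dominant_bins(upload)) / max(len(ref), 1)

if __name__ == "__main__":
    rate = 22050
    t = np.linspace(0, 5, 5 * rate, endpoint=False)
    song = 0.8 * np.sin(2 * np.pi * 440 * t)   # stand-in for a track
    street = 0.3 * np.random.randn(t.size)     # bystander background noise
    recording = song + street                  # the song is incidental
    # The score reflects acoustic overlap only: nothing in this pipeline
    # can represent context, purpose, or fair use.
    print(f"match score: {match_score(recording, song):.2f}")
```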

US citizens have a First Amendment right to record the police, though states define the parameters of that right differently. It is generally accepted that citizens can film police on duty as long as they do not interfere with police activities. In the past, police officers have invoked wiretap laws to argue that audio recordings made without officers' consent are illegal, but the Court of Appeals for the First Circuit rejected that argument in Glik v. Cunniffe (2011), holding that arresting a bystander for recording police in public violated the First and Fourth Amendments. Federal appellate courts in the Third, Fifth, Seventh, Ninth, and Eleventh Circuits have likewise upheld the right of members of the public to record police.

As Lumen noted in a piece in July 2020, the weaponization of automated copyright enforcement tools is increasing and is not limited to police tactics. Nor is this the only evidence that automated copyright enforcement often flags material that is clearly not infringing, or that may qualify as fair use. In one experiment, a person who recorded ten hours of white noise and uploaded it to YouTube received five copyright infringement claims. In the famous "dancing baby" case, Lenz v. Universal Music Corp., a home video of a baby dancing while Prince's 'Let's Go Crazy' played in the background drew a DMCA takedown notice from Universal Music Group. The case went up to the Court of Appeals for the Ninth Circuit, which ruled in 2015 that the DMCA requires rightsholders to consider whether the uses targeted by a notice are actually lawful under the fair use doctrine, noting that fair use is not simply a narrow defense to copyright infringement but an affirmative public right. However, the court stopped short of creating a legal test for what may be considered legitimate or incidental use of a copyrighted work.

In the case of police recordings, bystanders have the right to record an officer on duty, and courts have indicated that incidental use of copyrighted material, where the copyrighted work is not at the heart of the content, should not trigger copyright enforcement (as in the dancing baby case mentioned above). Even so, there remains a legal lacuna for circumstances like these, where police officers intentionally leverage copyright enforcement algorithms to keep bystanders' videos of their conduct from circulating effectively. The problem is further exacerbated by algorithmic enforcement's blindness to the context that determines fair use.

Peter K. Yu, Professor of Law at Texas A&M University, in his paper 'Can Algorithms Promote Fair Use?', discussed what it would take to successfully automate fair use analysis. He notes that fair use determinations are currently made ex post: an alleged infringer must go to court to prove that their use of copyrighted material was fair. Just as the scale of online copyright enforcement has pushed platforms toward probabilistic algorithmic matching, Professor Yu makes a compelling argument that automating fair use will require today's precise, post hoc declaration of fair use, made after the content has already been removed, to give way to a rapid ex ante estimate of the probability that the material qualifies as fair use and should remain online.
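What might such an ex ante estimate look like in practice? Below is a deliberately simplified sketch of the kind of probabilistic triage Professor Yu describes. The features mirror the four statutory fair use factors (17 U.S.C. § 107), but the weights, threshold, and names are invented for illustration; no real system or case law endorses these numbers.

```python
# A simplified sketch of ex ante fair use triage. The factor weights
# and threshold below are invented assumptions, purely for illustration.
from dataclasses import dataclass

@dataclass
class UploadFeatures:
    transformative: float   # factor 1: purpose/character of the use, 0..1
    creative_source: float  # factor 2: creativity of the original, 0..1
    portion_used: float     # factor 3: fraction of the work used, 0..1
    market_harm: float      # factor 4: estimated market substitution, 0..1

def fair_use_probability(f: UploadFeatures) -> float:
    """Map rough factor estimates to a probability that the use is fair."""
    return (0.4 * f.transformative
            + 0.1 * (1 - f.creative_source)
            + 0.2 * (1 - f.portion_used)
            + 0.3 * (1 - f.market_harm))

def triage(f: UploadFeatures, keep_threshold: float = 0.6) -> str:
    """Ex ante decision: leave content up or route it to human review,
    instead of removing first and litigating fair use ex post."""
    return "keep online" if fair_use_probability(f) >= keep_threshold else "human review"

# A bystander video where a song plays incidentally in the background.
bystander_clip = UploadFeatures(transformative=0.9, creative_source=0.8,
                                portion_used=0.3, market_harm=0.05)
print(triage(bystander_clip))  # -> "keep online"
```

The hard questions such a system raises are institutional rather than mathematical: who sets the weights, who audits the threshold, and what happens to the uploads routed to review.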

Niva Elkin-Koren, Professor of Law at Tel Aviv University and Visiting Professor at Harvard University, has also pushed for 'Fair Use by Design', arguing that it is a necessity in the era of algorithmic governance.

Automating copyright enforcement with 'Fair Use by Design' at its heart would demand meaningful oversight from all stakeholders: not only copyright holders, but also researchers, lawyers, academics, and those speaking for the public interest. Automating fair use is also likely to require legal reform at some level, including transparency and data collection requirements that could inform better algorithms and enable more effective analysis.

In the interim, social media platforms have stringent copyright enforcement policies on one hand, but varying and sparse rules for instances where the use of copyrighted content may not qualify as a violation on the other. Facebook has stated that its copyright restrictions take into account how much of a video contains copyrighted music, the number of songs in the video, and their respective lengths. Instagram has noted that it would warn users before taking any action on a live stream and give the user a chance to address the issue.
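Facebook has not published the exact thresholds behind those factors, but the shape of the stated policy is easy to make concrete. Every number in the sketch below is a placeholder assumption, not a disclosed value.

```python
# A sketch of the policy shape Facebook describes. All thresholds are
# placeholder assumptions; the real values are not public.
def violates_music_policy(video_seconds: float,
                          song_plays: list[float]) -> bool:
    """song_plays: seconds of each distinct copyrighted song detected."""
    music_fraction = sum(song_plays) / video_seconds
    return (music_fraction > 0.5                 # video is mostly music, or
            or len(song_plays) > 3               # many distinct songs, or
            or max(song_plays, default=0) > 90)  # one song plays at length

# A 10-minute bystander recording with ~40 seconds of incidental music
# passes; a video that is wall-to-wall music does not.
print(violates_music_policy(600, [40]))        # False
print(violates_music_policy(240, [120, 110]))  # True
```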

YouTube, through Content ID, has arrangements with copyright holders under which a video containing copyrighted content may be monetized by the rightsholder. This means that even if the copyright filters are tripped, the video remains online, which, in the case of police recordings, may serve the purpose of finding visibility. However, even setting aside the new issues this creates, it remains at best an unreliable and, ideally, temporary solution to the specific problem of online content that serves a public-interest or oversight purpose.

Another possible interim solution may be an 'audio focus' feature analogous to a camera's visual focus: the person recording chooses the voices to foreground, and the remaining background sound is attenuated, the audio equivalent of the blur we see when a camera focuses on a single object or person. At the least, this would require rightsholders to agree to ignore 'out of focus' sounds; at the most, the audio would be blurred to the point of unrecognizability, which carries its own downside of potentially muting the entire video. Videos modified this way would most likely still trigger some false positives, but the approach could reduce the likelihood of copyright enforcement algorithms catching music that was purely incidental to the video. A toy version of the idea is sketched below.
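Because this is only a thought experiment, the sketch is deliberately crude. A real feature would need source separation to isolate the chosen voices; as a stand-in, the code simply attenuates spectral energy outside a typical speech band. The function name, band edges, and gain are all assumptions made for illustration.

```python
# A toy sketch of the proposed "audio focus". Real voice isolation would
# need source separation; here we merely scale down energy outside the
# speech band, "blurring" the background. All parameters are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

def audio_focus(samples: np.ndarray, rate: int,
                band=(300.0, 3400.0), blur_gain: float = 0.1) -> np.ndarray:
    """Keep the speech band; attenuate everything else by blur_gain."""
    sos = butter(4, band, btype="bandpass", fs=rate, output="sos")
    in_focus = sosfilt(sos, samples)
    background = samples - in_focus
    return in_focus + blur_gain * background

if __name__ == "__main__":
    rate = 22050
    t = np.linspace(0, 2, 2 * rate, endpoint=False)
    voice = np.sin(2 * np.pi * 800 * t)     # inside the speech band
    music = np.sin(2 * np.pi * 6000 * t)    # background track, out of band
    out = audio_focus(voice + music, rate)  # music attenuated ~10x
```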

It may also be worth considering whether Section 512(f) of the DMCA, which gives users the ability to hold rightsholders accountable for DMCA notices sent in bad faith, could be read to encompass the weaponization of copyright by rightsholders and non-rightsholders alike. It is unclear, however, how, if at all, 512(f) could be brought to bear on enforcement mechanisms rooted in private agreements, such as Content ID. Perhaps a new regulatory analog would be required.

Dan L. Burk, Professor of Law at the University of California, Irvine, warned in his article on algorithmic fair use that "the design values embedded in algorithms will inevitably become embedded in public behavior and consciousness. Thus, algorithmic fair use carries with it the very real possibility of habituating new media participants to its own biases and so progressively altering the fair use standard it attempts to embody." Police officers' attempts to weaponize copyrighted music to thwart the circulation of critical or otherwise unfavorable content are a clear sign that this warning rings truer now than ever, and warrants urgent discussion.

About the Author: Shreya is an Employee Fellow at the Berkman Klein Center, where she works on the Lumen Project. She is a passionate digital rights activist and uses her research and writing to raise awareness about how digital rights are human rights. She tweets at @shreyatewari96
