Use of Facial Recognition Technologies on a steep rise in India


In November 2021, Amnesty International, together with partner organizations, took its campaign against facial recognition to Hyderabad, a city in the Indian state of Telangana, which has established a 'Command and Control Center', a hundred-and-seven-million-dollar project meant to support the simultaneous processing of feeds from over six hundred thousand surveillance cameras in Hyderabad. This, combined with the Hyderabad police's existing facial recognition software for identifying individuals, will enable the police to track individuals across the city in real time.

The government has called it an 'augmented intelligence integrated solution' that would provide 'actionable insights and situational awareness' for field officers to make informed decisions in real time. Amnesty International and others, on the other hand, have called in their report for a total ban on facial recognition technology, since it 'inherently poses high risk to human rights'. Several other Indian cities are also in the process of setting up similar control centers for CCTV cameras as part of India's 'Smart Cities Mission'.

At a national scale, the National Crime Records Bureau, the body responsible for managing information on crime and criminals in India, has been issuing tenders for setting up a National Automated Facial Recognition System since 2019, although there are concerns that this is being done without any public consultation or inter-ministerial discussion on the adoption of such a system. As per a May 2021 study conducted by Comparitech, a UK-based pro-consumer cybersecurity and privacy research organization, the Indian cities of Hyderabad, Indore, and Delhi, along with London, are among the twenty most surveilled cities in the world, alongside over a dozen cities in China. However, the U.K. has data protection laws in place that regulate the collection and use of the personal data of its citizens; those laws also provide mechanisms for oversight and grievance redressal.

India's Information Technology Act, 2000 classifies biometric data as sensitive personal data and contains rules for the collection and use of such information. However, these rules apply only to body corporates, not to the government's use of such information. Notably, India lacks a law regulating facial recognition technology (FRT), and no current legislation addresses or qualifies the evidentiary value of information derived from such automated technologies within the criminal justice system.

Currently, the Indian government is deliberating the third draft of a proposed data protection law. The bill marks some important milestones in regulating cross-border data flows and requiring prior consent before the use of personal data, although critics note that among its most concerning features are unchecked biases, overbroad authority for the government to bypass the law, and built-in obstacles to informing data subjects when their personal data has been breached.

The escalating use of FRT in India despite the absence of related legislation can in part be understood by examining how FRT has been used in the past, since this may help predict whether the future use of a large urban CCTV network coupled with FRT is likely to operate under any effective oversight.

In October 2021, in response to questions about footage of Hyderabad police forcing civilians walking on the streets to remove their masks and capturing their photographs, and in some cases also demanding their fingerprints, the local police stations stated that this was part of 'the patrolling cops official duty' and that the police were scanning 'suspicious persons only'.

Nationally, FRT was deployed in various parts of the country when civilians were protesting a citizenship law in India. In what was the biggest riot in the national capital since the 1980s, the police are said to have borrowed from China's surveillance playbook to record the protestors and run the footage through facial recognition software, initially adopted with the stated objective of finding missing children, to identify the 'miscreants', some of whom were arrested and are still in custody. Finally, in the investigation into the Indian government's alleged use of the Pegasus spyware, revelations were made that the government used the spyware to hack and surveil the activities of over a dozen people, including journalists, academics, and the incumbent government's political rivals. During the investigation by the independent expert committee set up by the Indian Supreme Court, the government declined to confirm or deny its use of the spyware, citing national security.

The upcoming projects may make FRT increasingly accessible to the executive wing of the government under the guise of technological and urban development, but without any oversight mechanism, the technology is also likely to be used behind a 'veil', both with secrecy and with overreach, including the possibility of deliberately intimidating civilians, among whom certain marginalized communities stand to face greater harm than others. On the other hand, government representatives have made repeated assurances that FRT will be used to ensure citizens' safety in public spaces and to make the allocation of welfare benefits more efficient. In the past, the government has deployed facial recognition tech for the prevention of human trafficking, and earlier this week it sanctioned another project to use FRT as a means of providing proof-of-life certificates for pensioners, so that senior citizens can receive annual benefits.

An overwhelming majority of researchers and civil society organizations globally concur that FRT must be banned due to its inaccuracy and inherent bias, along with concerns around opacity and the implications for privacy and civil liberties. Pending any such universal ban, a second-best alternative may be that FRT be used only under strictly regulated conditions and with expansive oversight.

Until a cogent law is passed in India specifically regulating the use of biometric data by government agencies, it may be possible to challenge any misuse of FRT under Article 21 of the Indian Constitution, which the Supreme Court of India has read to include the right to privacy as a part of the fundamental right to life. Any exceptions to the right to privacy must satisfy the three-part test of prescription by law, legitimacy of aim, and proportionality of use.

Looking broadly at the three-part test in the context of the use of FRT in India: there is currently no law that prescribes or sanctions the use of FRT, nor is there a pre-specified, legitimate aim for which the technology has been deployed. It may be argued that 'national security, public order and child safety' make for legitimate aims, but as it stands, an exception must also satisfy the other two tests, prescription by law and proportionality, to be considered valid. Regarding proportionality, the Supreme Court of India stated in 2017, in the context of compulsory collection and use of biometric information, that 'there cannot be a sweeping provision which targets every resident of the country as a suspicious person. Presumption of criminality is treated as disproportionate and arbitrary.' For these reasons, a challenge brought against the use of FRT along with CCTV cameras may well succeed in an impartial court of law.

In order to re-balance the power dynamics between the state and its citizens, it may also be useful to make a case for mandatory disclosures on the part of the government. There are various ways to do this, one of which could be a system whereby citizens can seek information from local and national governments about whether their biometric data is part of, or has been run through, facial recognition software. In addition, mandating publicly accessible information flagging the purposes for which FRT data is being used may also be a helpful start. This is likely both to empower citizens and to provide transparency about the citizens' data that the government holds.
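The disclosure mechanism proposed above can be sketched in miniature: an agency keeps an auditable log of every FRT processing event, and a citizen can query whether their biometric data was ever run through the software, and for what stated purposes. Everything below (the record fields, class names, and identifiers) is a hypothetical illustration of the idea, not any real government system or API.

```python
from dataclasses import dataclass, field

@dataclass
class FRTEvent:
    """One logged instance of a citizen's biometric data being processed."""
    citizen_id: str
    agency: str
    purpose: str  # e.g. "missing-person search", "pension verification"

@dataclass
class DisclosureRegistry:
    """Append-only log that backs citizen disclosure requests."""
    events: list = field(default_factory=list)

    def record(self, event: FRTEvent) -> None:
        self.events.append(event)

    def report_for(self, citizen_id: str) -> dict:
        """Answer a citizen's request: was my data processed, by whom, and why?"""
        matches = [e for e in self.events if e.citizen_id == citizen_id]
        return {
            "processed": bool(matches),
            "agencies": sorted({e.agency for e in matches}),
            "purposes": sorted({e.purpose for e in matches}),
        }

registry = DisclosureRegistry()
registry.record(FRTEvent("C-001", "city-police", "missing-person search"))
registry.record(FRTEvent("C-001", "pension-office", "pension verification"))

print(registry.report_for("C-001"))  # data was processed, with purposes listed
print(registry.report_for("C-002"))  # no processing on record
```

The design choice worth noting is that the log records the *purpose* alongside the event: that is what makes the second proposal in the paragraph above (publicly flagging the purposes for which FRT data is used) answerable from the same audit trail.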

It seems clear that the growing use of facial recognition calls for effective law-making that clearly identifies the situations in which law enforcement agencies may process the sensitive personal data of civilians. While advocacy groups and civil society organizations are concerned that the increasing disregard for international privacy and human rights standards may have the worrisome effect of turning the world's largest democracy into an increasingly panopticon state, concrete regulations with effective oversight may make a case to some for measured acceptance of the technology.

About the Author: The author is a Research Fellow at the Berkman Klein Center's Lumen Project.


Collecting and facilitating research on requests to remove online material. Visit lumendatabase.org and email us if you have questions.
