30 April 2018 12:02 PM GMT
Very recently, in the wake of news about social media giant Facebook’s data policy lapse, billions of Facebook users were made to realise that no private information can be ‘secured’ even under the click-wrap agreement between Facebook and the user, which promises no exposure of data to third parties. As if this was not enough, the news of a possible privacy breach by Facebook through its photo-scanning and facial recognition tools has further created the impression that personal images uploaded by users to their own profiles/albums or to shared albums are not safe even in the hands of the ‘company’, i.e., Facebook itself. This situation arose because of a recent order passed by an Illinois court in a 2015 case, whereby the judge opined that Facebook users can pursue a class action suit against Facebook for unethically scanning and storing personal photos. So how did this come to light? It was noted that Facebook was ‘automatically’ suggesting the tagging of people whose photographs were available in a group, or who appeared (unintentionally) in photos shared by others. The judge based his opinion largely on the Illinois Biometric Information Privacy Act, which regulates the collection and use of biometric information, including facial patterns, pupil size, etc.
Nonetheless, in India we face this problem on a large scale, whereby individuals, especially women, government officials, celebrities, etc., are tagged in photos without their explicit consent. ‘Tagging’ does not necessarily mean photo tagging alone; it may also mean profile tagging, whereby the profile photos and pictures (except those in ‘private’ albums) of the person concerned may be exposed to the target audience chosen by the individual who tagged them. While this definitely poses a grave risk of privacy infringement, I am more concerned about the facial recognition technology used for photo tagging.
Let me share some positive uses of photo matching and facial recognition technology by Facebook: as a cybercrime victim counsellor, I have noticed large-scale harassment of women through the creation of fake profiles, which may or may not be the result of revenge porn. Victims, especially women and teenagers, may face extreme frustration when they are informed that they must establish that the photo in question is their own and that they never authorised the perpetrator to upload it in any manner. This involves the onerous task of sending a scanned copy of a photo identity proof, which ultimately may yield no result if Facebook’s policies fail to recognise the reported fake profile or picture as ‘offensive’ or harassing by their standards. The victims in such cases are left with no option but either to file a complaint with the police or to obtain a court order directing the company, i.e., Facebook, to remove the offensive content. To my knowledge, the latter has never happened to date, because once victims of fake profiles or revenge porn are refused help by the website concerned, they tend to lose faith in the police as well, and ultimately they may take up irrational coping methods like contacting amateur hackers, or may even contemplate suicide out of the shame generated by such fake or revenge porn profiles. In cases where the police do register such cases, they may use different legal provisions for framing the FIR or for booking the offence.
Cases thus filed would book the offender for the offences mentioned in those provisions, but none may consider bringing the service provider within the scope of the laws on breach of privacy that a ‘body corporate’ is liable to protect against under S.43A of the Information Technology Act, 2000 (amended in 2008). Several public interest litigations and social cause lawyers have filed cases against websites, including Facebook, for allowing the sharing of content that may be detrimental to children or may hurt religious sentiments, but Indian courts, including the apex court, have almost always remained satisfied with the websites’ ‘assurance’ of stricter monitoring.
Photo matching tools, including facial recognition, were an answer to these problems: Facebook started using a facial recognition mechanism to identify fake profiles and revenge porn content and to remove or block the same in order to prevent further circulation. Users, especially female users of Facebook, felt much relieved when Facebook rolled out a mechanism to safeguard profile pictures: a profile picture guard, along with a transparent ‘cover’ of geometrical patterns, for protection against unethical use of profile pictures by third parties, including harassers, creators of revenge porn content, etc.
Now let me throw light on the negative aspect of the photo matching technology used by Facebook. It means that Facebook would ‘store’ ‘guarded’ and ‘reported’ images, including biometrics, which may be or have been used to commit harassment and offences, including revenge porn; users who have already started applying the cover shield and profile guard have impliedly agreed to let Facebook store their biometrics for safety’s sake. Once the user activates the photo matching technology by clicking on the option of using the profile picture guard, Facebook initiates the storage of biometrics. This act of intimating the user about the profile picture guard mechanism by Facebook (which may fall within the meaning of ‘body corporate’ as defined under S.43A of the Information Technology Act, 2000 (amended in 2008)) may be deemed a fitting application of policies for protecting the privacy of users, as laid down by Rule 4 of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011 (which requires a body corporate to provide a policy for privacy and disclosure of information). But the mechanism of collection of photos, including biometrics, may not be on par with Rule 5 of the same Rules, which lays down the process of collection of information as follows:
(1) Body corporate or any person on its behalf shall obtain consent in writing through letter or Fax or email from the provider of the sensitive personal data or information regarding purpose of usage before collection of such information.
(2) Body corporate or any person on its behalf shall not collect sensitive personal data or information unless —
(a) The information is collected for a lawful purpose connected with a function or activity of the body corporate or any person on its behalf; and (b) the collection of the sensitive personal data or information is considered necessary for that purpose.
(3) While collecting information directly from the person concerned, the body corporate or any person on its behalf shall take such steps as are, in the circumstances, reasonable to ensure that the person concerned is having the knowledge of — (a) the fact that the information is being collected; (b) the purpose for which the information is being collected; (c) the intended recipients of the information; and (d) the name and address of — (i) the agency that is collecting the information; and (ii) the agency that will retain the information.
Nevertheless, users do not exchange sensitive information with the body corporate on the basis of any written agreement. When a user starts applying the cover guard, it is implied that she has allowed Facebook to use photo matching technology for that particular image. Since this is done for the safety of the users, even though the ‘guarded’ images are ‘stored’ without an explicit contract, Facebook would not be liable for any privacy breach as long as the images are not used for any other purpose. But allowing the tagging of individuals by photo matching is definitely a privacy breach when seen from the perspective of Rule 5 of the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. As mentioned above, tagging by photo matching may necessarily include recognition of biometrics and facial patterns, and users whose privacy has thus been violated may well consider legal action.
There is, however, one more privacy loophole which many users overlook: Facebook also suggests befriending other users who may or may not be connected with them through common friends. This may pose an even bigger threat to privacy, because ‘friend suggestion’ allows ex-colleagues, ex-boyfriends or ex-spouses, possible perpetrators and stalkers to mine data about vulnerable victims. I have observed that the ‘friend suggestion’ mechanism, which also relies upon several artificial intelligence tools, including photo matching technology, may be extremely detrimental to women who have deactivated their old profiles to save themselves from perpetrators and have created new profiles with old photographs and old sets of ‘trusted’ friends. The new profiles, with or without photographs, may appear in the ‘friend suggestion’ menu of unwanted people, including the perpetrators. Noticeably, profiles made for the purpose of revenge porn may also get wide publicity in this way.
Unfortunately, India’s courts, as well as its data protection laws, are not well prepared to tackle these problems. Consider court cases in Canada, Ireland, the US, Germany, etc., where courts have extended the scope of the laws to cover the liabilities of websites in cases of revenge porn or general breach of privacy. Indian courts should follow this trend. It is interesting to note that Facebook has recently made its internal policy for content moderation public. A brief analysis of this policy shows that Facebook uses human experts as well as artificial intelligence tools to detect ‘offensive content’. But it has also admitted that this system is not infallible. Victims whose take-down requests have not been responded to in a desirable manner by Facebook may appeal against the decision. Similarly, in case a photo or content has been ‘wrongly’ removed by Facebook, the owner/publisher of the content may ask Facebook to review the action. The policy also reveals that Facebook is trying to align its offensive content detection mechanism with the socio-cultural usage of language and pictures in different countries, so as to understand which content may be considered hate speech, and which pictures may be considered sexually explicit and obscene, from the socio-cultural perspective of the victim’s region.
These efforts may make one believe that Facebook is trying to meet its responsibilities not only under US law but also under regional laws dealing with the liabilities of service providers. But in sum, the lacuna will continue to exist as long as data privacy laws and revenge porn laws are not properly formulated and executed. A properly framed data privacy law or revenge porn law would enable the courts to implement the existing provisions regarding service providers’ liability as well. The photo matching mechanism may be considered a boon to users, especially women, only when the courts and the laws are prepared to regulate websites for the breaches of privacy that may happen due to misuse of the said mechanisms.
Dr. Debarati Halder is Professor and Head of the Research Cell at United World School of Law, Gandhinagar, Gujarat. She is also the honorary managing director of the Centre for Cyber Victim Counselling (www.cybervictims.org). She can be contacted at firstname.lastname@example.org.
[The opinions expressed in this article are the personal opinions of the author. The facts and opinions appearing in the article do not reflect the views of LiveLaw and LiveLaw does not assume any responsibility or liability for the same]
Reuters (2018), “Facebook must face class action lawsuit over face-tagging in photos, judge rules”. Published at https://www.cnbc.com/2018/04/16/facebook-must-face-class-action-lawsuit-over-face-tagging-in-photos-judge-rules.html on 16-04-2018. Accessed on 16-04-2018.
Andrews, Carley Daye; Bieber, Corey; Jacobson, Julia B.; McGinley, Molly K.; Volz, Carl E. (2017), “Litigation Under Illinois Biometric Information Privacy Act Highlights Biometric Data Risks”. Published at http://www.klgates.com/litigation-under-illinois-biometric-information-privacy-act-highlights-biometric-data-risks-11-07-2017/ on 07-11-2017. Accessed on 21-04-2018.
See, for more discussion on revenge porn, Halder, D., & Jaishankar, K. (November 2016), Cyber Crime against Women in India. New Delhi: SAGE. ISBN: 978-93-859857-7-5.
This author has offered a model law on revenge porn: Halder, Debarati (2017), “Criminalizing Revenge Porn From The Privacy Aspects: The Model Revenge Porn Prohibitory Provision”. Published at http://www.livelaw.in/criminalizing-revenge-porn-privacy-aspects-model-revenge-porn-prohibitory-provision/ on 15-09-2017. Accessed on 16-04-2018.
Nsubuga, J. (2017), “Facebook launches new photo-matching tool to fight against revenge porn”. Published at http://metro.co.uk/2017/04/05/facebook-launches-new-photo-matching-tool-to-fight-against-revenge-porn-6556140/?ito=cbshare. Accessed on 22-04-2018.
Chaykowski, K. (2018), “Facebook Publishes Internal Content Moderation Guidelines For The First Time”. Published on 24-04-2018 at https://www.forbes.com/sites/kathleenchaykowski/2018/04/24/facebook-publishes-internal-content-moderation-guidelines-for-the-first-time/#449312be71e7. Accessed on 24-04-2018.