Facial Recognition

Wendy Liu & Jessica Ong

31 Oct 2022
This article is a part of Risk Update 5. 

How it works, concerns with its application, and privacy tips

Introduction

Facial recognition is not a new concept. It first took root in the 1960s, when a group of researchers unsuccessfully attempted to program a computer to recognise human faces. That rudimentary concept has since become entrenched in our daily lives: we use it to unlock our mobile phones, conduct video surveillance, assist law enforcement, and more. At its crux, facial recognition uses technology to transform an image of a face into a numerical representation that can be used to identify an individual. However, with the evolution of this technology has come controversy, particularly around its ethics and the biases it can introduce.

This paper explains how facial recognition works, outlines its applications and the concerns it raises, and offers tips to protect your privacy.

How does it work?

While implementations vary, most facial recognition systems follow the same basic pipeline. A camera feed or stored image is scanned to detect a face; the system then measures distinctive features of that face – such as the distance between the eyes or the shape of the jawline – and converts them into a numerical representation, often called a faceprint. Finally, that faceprint is compared against one or more stored faceprints, and a match is accepted only if the similarity score passes a chosen threshold.
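
As an illustration only, the short sketch below uses the open-source Python face_recognition library – not any of the systems discussed in this article – to show the faceprint-and-threshold idea in practice. The image file names are placeholders.

```python
# Illustrative sketch only: compares two face images by converting each face into a
# 128-dimensional numerical "faceprint" and measuring the distance between them.
# Uses the open-source `face_recognition` library; the file names are placeholders.
import face_recognition

# Load the images and extract the faceprint of the first face detected in each.
known_image = face_recognition.load_image_file("known_person.jpg")
candidate_image = face_recognition.load_image_file("unknown_photo.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
candidate_encoding = face_recognition.face_encodings(candidate_image)[0]

# A lower distance means more similar faces. A match is declared only when the
# distance falls below a chosen threshold (0.6 is the library's default tolerance).
distance = face_recognition.face_distance([known_encoding], candidate_encoding)[0]
is_match = distance < 0.6

print(f"Distance: {distance:.3f} -> {'match' if is_match else 'no match'}")
```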

Applications

Today, facial recognition – and biometrics more broadly – has expanded in application and is quickly becoming part of our everyday lives. The technology is used for a variety of purposes by governments and private organisations. Some uses include:

  • Allowing phone users to unlock their phones, log into apps, and make payments.
  • Automatically tagging people in online photo albums and sorting photos.
  • Enabling passengers to board flights without using a boarding pass or passport.
  • Making advertising more targeted by customising content based on your demographic.
  • Allowing consumers to try on makeup, sunglasses, and hats online.
  • Enabling venue operators to easily spot problem gamblers who have been banned.

Concerns

However, as with any technology that uses our inherently personal characteristics to make judgements, facial recognition has attracted significant scrutiny and controversy over the years.

  • 2016: A study found that half of American adults were in a law enforcement facial recognition database.
  • 2019: A researcher revealed that Amazon’s facial analysis system was far better at classifying the gender of light-skinned men than that of dark-skinned women. This gave impetus to a discussion about bias in such models – and in facial recognition practices more broadly – and about the potential for racial profiling, which could in turn lead to wrongful imprisonment.
  • 2020: London police began using facial recognition cameras to pick out suspects from street crowds in real time. This raised concerns about privacy, potential discrimination, and the misuse of the data collected – regardless of the laws or corporate cybersecurity policies in place.
  • By 2020: Chinese authorities had planned to use a network of cameras throughout cities, facial recognition systems, and various phone applications to monitor individuals for the country’s Social Credit System. While the system in practice is not as bleak as often portrayed, there are valid fears that it could develop into something more Orwellian as the technology is refined and individuals begin to avoid ordinary behaviour in public out of concern it will affect their credit score.
  • July 2022: Consumer group CHOICE referred Kmart, Bunnings, and the Good Guys to the Office of the Australian Information Commissioner to investigate potential breaches of the Privacy Act over their use of biometric technologies. 65% of respondents to CHOICE’s survey said they were concerned about stores using the technology to create profiles of them that could cause them harm.
  • September 2022: The Human Technology Institute released a report, Facial Recognition Technology: Towards a Model Law, proposing rules to govern the use of facial recognition technology in Australia, covering government agencies and corporations that develop, distribute, or deploy facial recognition systems. The report called on the Attorney-General, Mark Dreyfus KC, to urgently regulate the use of the technology.

Privacy tips

Below are four privacy tips to consider when thinking about the way we interact with facial recognition on an everyday basis.

1. Wear a face mask in public settings that use facial recognition.
Wearing a face mask is a simple way to make it harder for the technology to identify your facial features.

2. Evaluate whether photo organisation applications are worth the privacy trade-off.
It may be convenient to have your photos organised by face; however, it is difficult to know how a company may be using – or misusing – your data. You can disable photo organisation, as well as automatic tagging, on social media websites.

3. Consider turning off facial recognition features on your home security system.
Facial recognition for home security is becoming increasingly popular. As such, it is important to know how and where images recorded by the camera are stored and used. It is also worth understanding what the company would do in the event of a security breach.

4. Be wary of taking photos of yourself in any software or application.
Certain software or applications may hold rights over the photos you take as part of their terms of service. Consider investigating how and where the software or application processes your data and whether it is shared with third parties.

