CMO

Facial recognition: Could the risks outweigh the rewards?

Facial recognition offers brands better ways to personalise marketing, but the privacy questions and risks are substantial, warn experts

Taylor Swift doesn’t immediately spring to mind when thinking about facial recognition. Yet the Grammy Award-winning singer has reportedly used the technology at her concerts to identify stalkers.

The power to identify faces is undisputed and has many useful applications beyond spotting serial harassers. Yet facial identification poses significant privacy challenges.

For marketers, facial recognition technology offers an irresistible proposition: more responsive advertising, customer identification, and rich insights and analytics. Through heat maps, eyeball tracking and more, deployed on almost any device with a camera, the possibilities are seemingly endless.

Lately, however, facial recognition has been in the news for all the wrong reasons. Tech giants like Microsoft have faced criticism of their systems on privacy grounds, with advocates arguing such personal identification is invasive and that protections haven’t kept pace with the technology's growth. Researchers have also accused systems developed by IBM and Amazon of producing biased results.

There's no doubt a rich web of personal information can be created about people through their digital trail, and facial recognition hyper-personalises this information, Scott Relf, CEO of social network PikMobile and founder of mobile advertising platform Zave Networks, tells CMO.

“I think it’s [facial recognition] a very important tool. But it raises a whole bunch of privacy considerations," he says. "Folks have become comfortable with data being collected about them by the businesses they’re engaging with. The privacy issue starts to become a concern when these different businesses share information without the person really knowing what’s going on. In some cases, they're giving or selling the data to a third-party, which combines the data, and then it’s even one step removed."

In many cases, facial recognition is being used without a person's consent. "That same facial recognition ID allows the data about you to be matched up with the data about you from everywhere else because the matching can occur based on facial recognition," Relf says. "This network of data about a person can be matched together even better than before.”

The heightened focus is prompting lawmakers and regulators in the US to mull legal protections against the spread of systems that can record and track people. Driving this are privacy advocates, who argue for limits and protections against systems they say are invasive but often shrouded in secrecy.

Closer to home, a Queensland Police report found its facial recognition system was rolled out too quickly and had limited usefulness during the Commonwealth Games.

Facial recognition in marketing

Use cases are two-fold. Shopping centres, individual stores and retail precincts can use facial recognition to identify VIP shoppers and target them with promotions or offers. Or the technology can be used to identify potential shoplifters within a crowd as a way to curb organised shoplifting crime.

“Shoplifting and organised retail crime are compelling security reasons," Gartner research director, Nick Ingelbrecht, says. "There are instances where malls have removed technology because of privacy concerns, such as with Wi-Fi tracking, so there is that kind of negotiation going on."

Proponents argue facial recognition is too useful to ignore. While aiding police in finding criminals and even potential terrorists is an easier sell, the technology also has many helpful use cases in responsive marketing and can improve customer service and customer experience.

Already, an app has been developed that uses facial recognition to identify a business contact by tapping a facial database. Speeding through hotel and airport check-in, personalised in-store marketing and online registration, and customer loyalty and discount schemes are just a few of the customer applications aimed at personalising and streamlining services.

Relf believes people are becoming comfortable with the concept and usefulness of facial recognition, whether it’s Facebook easily identifying and tagging photos of friends, streamlined check-in at hotels, or restaurants using the systems to better manage customer flow at busy times or help regulars order their favourite dish. At the same time, people may not grasp the implications of their data being outside their control, which means businesses have a responsibility to safely protect and manage this personal data.

“IT professionals are in a very delicate spot because their business needs this technology to do their own business better. But they also now have to be very careful where that data with the facial matching piece goes, especially outside their business,” Relf says.

The cost of the technology, Ingelbrecht says, means there needs to be a solid business case behind any facial recognition system.

“This isn’t cheap technology, so there’s got to be a business rationale for deploying it. A chain like Walmart tried it and decided there wasn’t an ROI to justify the investment,” he notes.

Ingelbrecht also suggests businesses, at least in the retail context, aren’t keen to cross the creepy line with potential shoppers and this will help curb the intrusiveness of facial recognition systems in malls and shopping precincts.

“It’s a negotiation between consumers and service providers, so in an airport or retail store, you expect cameras will be there and part is for safety and equally for theft. But when it comes to facial recognition and tracking shoppers through a store, people may be sensitive to that," he continues. "Retailers are equally sensitive to that and they don’t want to discourage shoppers by making them feel they are individually under surveillance.”

Up next: How it works, where the legal issues are lying, and what businesses should do to embrace facial recognition



Facial recognition in the spotlight

Facial recognition works by matching faces, whether for emotion tracking in advertising or identifying criminal suspects, and relies on a dataset of facial images. And this is one of the core issues.
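In most modern systems, the matching step described above amounts to a nearest-neighbour search: each face image is converted into a numeric embedding vector, and a probe face is matched against a gallery of known embeddings. The sketch below is purely illustrative, not any vendor's actual pipeline; the function names, threshold and tiny three-dimensional vectors are invented for clarity (real systems use 128- or 512-dimensional embeddings produced by a trained neural network).

```python
from typing import Dict, Optional

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_face(probe: np.ndarray,
               gallery: Dict[str, np.ndarray],
               threshold: float = 0.8) -> Optional[str]:
    """Return the identity whose gallery embedding is most similar to the
    probe, provided it clears the confidence threshold; otherwise None."""
    best_id: Optional[str] = None
    best_score = threshold
    for identity, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return best_id
```

The privacy concern Relf raises follows directly from this design: the same embedding acts as a stable identifier, so any two datasets containing face embeddings of the same person can be joined without the person's knowledge.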

Microsoft recently removed a database called MSCeleb containing more than 10 million facial images, scraped from publicly available sources, after a Financial Times investigation argued the images were taken without consent and included private individuals. In 2018, China and the US accounted for almost all references to this database in research papers and facial recognition projects. A Microsoft spokesperson reportedly said the database was only intended for academic purposes. 

In addition, several experiments have found the systems can produce results that are racially biased or inaccurate.

OpenMic's mission is greater corporate accountability in the tech and media sectors around privacy, civil rights and freedom of expression. The group makes the business case for more responsible practices by organising large institutional shareholders, such as funds and investment firms, lobbying to improve protections for civil liberties, and highlighting how new technologies can raise the risk profile for companies.

Reputational damage, according to its executive director, Mike Connor, is one of the risk factors for businesses that deploy facial recognition because they jeopardise trust in the eyes of customers and the public in general. The organisation has been active around facial recognition generally and Amazon’s system in particular.

Connor points to the lack of permissions around the technology and says “one of the most troubling aspects of facial recognition technology is that there are no rules”.

“There are no corporate guidelines, or regulatory or legal rules about how it should be used in the vast majority of situations. Research shows it can be biased and inaccurate. There are many troubling questions,” he says.

OpenMic has been at the forefront of pressure on Amazon to limit how its service Amazon Rekognition is used because of concerns about human rights violations. In defending Rekognition against what it says are misleading claims, the online retail giant argues it has not infringed civil liberties, nor is its system biased. Amazon has, however, argued for legislation to protect individual rights and ensure transparency in government use of facial recognition systems.

Still in the US, some cities and states are banning, or considering banning, facial recognition use by government agencies. Even Microsoft's president suggested the US Congress needed to regulate its use and not leave it to companies alone.

As the ethical and privacy debates continue, technology providers are attempting to overcome some of these challenges. One way to improve facial recognition is with better facial data, and that means better datasets for training the systems. In this vein, IBM has released a new dataset comprising more than 1 million diverse facial images in a bid to improve the fairness and accuracy of such systems.

Diversity in Faces is a publicly available dataset with 10 different facial coding methods, including craniofacial measures (head length, nose length, forehead height), facial ratios (symmetry), visual attributes (age, gender), pose and resolution. The computing giant argues the breadth of the training dataset is the key to improving AI-powered facial recognition systems.
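In practice, coding schemes like these reduce to measurements over detected facial landmarks. The sketch below shows, in hedged form, how a craniofacial measure and a facial ratio might be derived; the landmark names and pixel coordinates are invented for illustration, and real schemes work from many more landmark points produced by a detector.

```python
import math

# Hypothetical 2D landmark coordinates (x, y) in pixels. In a real pipeline
# these would come from a facial landmark detector run over each image.
landmarks = {
    "forehead_top": (100, 20),
    "chin": (100, 220),
    "nose_top": (100, 100),
    "nose_bottom": (100, 150),
}


def dist(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


# Craniofacial measures of the kind the coding scheme describes.
head_length = dist(landmarks["forehead_top"], landmarks["chin"])
nose_length = dist(landmarks["nose_top"], landmarks["nose_bottom"])

# A facial ratio: nose length relative to head length.
nose_to_head_ratio = nose_length / head_length
```

Measures like these are descriptive statistics about the dataset itself, which is how IBM can audit whether a training set covers a diverse range of face shapes rather than clustering around one demographic.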

“For the facial recognition systems to perform as desired – and the outcomes to become increasingly accurate – training data must be diverse and offer a breadth of coverage," researchers said in a blog post.

Legal questions

What's clear is legal protections are needed to safeguard personal data. At the same time, it’s necessary to provide certainty for businesses interested in innovating around facial recognition.

“The legislation is changing all the time around us. And if businesses are making investments in these systems, it’s a matter of working out what they can do today and what they may be able to do in, say, five years’ time," Ingelbrecht says. "Obviously, if you're making investments in these systems, you want to be sure you can use them.

"It does mean we’re dealing with a moving entity in terms of regulation, and what we’re seeing in Europe and North America is a progressive catching up of the regulation with the technology.”

Data, while it might hold the keys to better customer intelligence, improved service and better business, nonetheless presents a risk to businesses as well. OpenMic’s Connor echoes these concerns, and his advice is simple.

“It’s becoming increasingly clear having data is not necessarily a good thing," he argues. "There’s been a rush to ‘big data’ and AI so companies aren’t left behind and, to a certain extent, that’s valid. But every piece of data gathered is a potential liability because it raises questions of data security and privacy.

"Good corporate governance is you only want as much data as you need. You shouldn’t be gathering data recklessly because the more data you have, the more you’re exposing your company to potential liability.”

What's more, digital platforms operate in different legal jurisdictions with varying legal approaches, such as privacy laws in Australia or GDPR data protection rules in the EU. Proper data protection, such as what’s required in healthcare, might seem onerous to begin with, but Relf says it's also an opportunity to improve information management systems. Still, he concedes piecing together the privacy rules of each country is a huge challenge for global platforms.

While certain activities, such as the capture or sharing of particular bits of information, can be regulated, the potential for assembling a rich picture of personal data linked to facial imagery poses huge challenges for privacy protection.

“Once you identify a person, connect that with data and start connecting that across other places to form a profile that the person is unaware of, didn’t consent to, doesn’t have the ability to look at or can’t make go away, you've got a concerning issue,” Relf adds.

Follow CMO on Twitter: @CMOAustralia, take part in the CMO conversation on LinkedIn: CMO ANZ, follow our regular updates via CMO Australia's Linkedin company page, or join us on Facebook: https://www.facebook.com/CMOAustralia.