The FaceApp scare: how machine learning technology can put your identity in danger



A software developer explains the fine print we skip when jumping on the latest online trend, like FaceApp, as well as the dangers of machine learning technology that can make you appear to have said or done something that never really happened

Dominic Ligot | Jul 20 2019

As a software developer, I spend the majority of my waking (and sometimes sleeping) hours thinking about how data can make things work, especially in today’s highly digitized world. I think most people are unaware that data algorithms are already learning about us and making decisions for us: from assigning the best driver on Grab and Angkas, to recommending a purchase on Amazon, to plotting the best route to a destination on Waze.

Then comes something like FaceApp. It has been a viral sensation for the past few weeks, and it is something our selfie-driven culture totally embraces: an app that uses AI to apply filters to your photos, letting you see yourself in old age. Photo filters have existed for years, but the reason FaceApp works so well is that it uses neural networks to create them.

Neural networks are an example of “machine learning”: mathematical algorithms that allow computers to “learn” patterns from past data and then use those patterns to make recommendations or decisions about new data. FaceApp developer Wireless Lab trained neural networks on millions of existing photos to learn how faces age, to the point that, given a new photo of a face, the app can convincingly simulate how that person might look when older.
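To make the idea concrete, here is a minimal sketch in Python, using scikit-learn rather than anything from FaceApp itself (the data and network here are my own toy illustration): a tiny neural network is trained on past examples of a hidden pattern, then asked to apply that pattern to an input it has never seen.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy illustration, not FaceApp's actual model: the "pattern" hidden
# in the training data is y = x**2 on a one-dimensional input.
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=(500, 1))
y_train = x_train.ravel() ** 2          # "past data" the network learns from

# A small neural network with two hidden layers learns the mapping.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

# The learned pattern is then applied to new, unseen data.
x_new = np.array([[2.5]])
print(model.predict(x_new))             # should be close to 6.25
```

FaceApp’s networks do the same thing at a vastly larger scale: instead of a one-dimensional number, the input is a photo of a face, and the learned pattern is how facial features change with age.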

Aging is not the only FaceApp filter: you can simulate gender (see how you would look as the opposite gender), styles (how you would look with a different hairdo), and smiles (how a neutral or frowning photo would look with a smile). Given these options, it isn’t surprising that FaceApp has become quite popular, especially with celebrities, who have gotten buzz posting re-aged or re-styled versions of themselves and others on social media.


To Russia with love 

More recently, FaceApp has received criticism from journalists and politicians who raised concerns about data privacy. The developer of FaceApp, Wireless Lab, is headquartered in Russia, and with news about hacking and spying now commonplace, alarms were raised that people’s personally identifiable data (their faces) was being aggregated and stored in Russia.

Wireless Lab CEO Yaroslav Goncharov denied that any user data was being sent to Russia, clarifying that FaceApp stores user photos and information on public clouds such as AWS and Google Cloud. Other members of the security community have also come out to clear the company of the privacy and security concerns.

As a data professional, I don’t think the FaceApp issue is just about privacy and security. It opens a whole different level of debate over what I would call data ethics, which covers data privacy, data ownership, algorithmic bias, and data-driven liabilities.

Data ethics does not automatically follow from legality, although the two overlap. We might say, for example, that anything illegal is also unethical, but something can be perfectly legal and still grossly unethical.

For example, the FaceApp terms of service (the fine print that no one reads) contain clauses that would make anyone flinch if they stopped for a moment to consider what they were signing up for, such as:

“You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content in all media formats and channels now known or later developed, without compensation to you.”


Perpetual irrevocable rights 

I’ll have to defer to the lawyers for the legal interpretation of this, but I wonder: is it really possible to grant anyone perpetual, irrevocable rights to your face and its derivatives? On the one hand, in politics and the entertainment industry, contracts are negotiated over the use of a person’s video, image, and voice, for instance to endorse products and brands in a commercial. On the other hand, the Cybercrime Prevention Act criminalizes any form of computer-related forgery as well as cyberlibel. Would you be liable if you used FaceApp to re-age or re-gender someone and posted their image without their permission?

It gets murkier when you consider the power of the machine learning that FaceApp uses. Similar technology also creates deep fakes: images or videos generated by superimposing one person’s face onto another person’s body. Machine learning also allows the mimicking of someone’s voice and speech. With these technologies, people can be made to appear to have said or done things that never really happened.
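For the technically curious, here is a hedged sketch of the core idea behind many face-swapping models, written in Python with PyTorch; the layer sizes and names are my own illustration, not the code of any particular deep-fake tool. A single encoder learns a shared representation of faces, a separate decoder is trained for each identity, and swapping decoders at inference time renders person A’s pose and expression with person B’s face.

```python
import torch
import torch.nn as nn

# Illustrative shared-encoder / per-identity-decoder architecture
# (a common deep-fake design, greatly simplified; sizes are arbitrary).

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # compress the face to a 256-d code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# In training, each decoder learns to reconstruct its own person's faces
# from the shared code. At inference time, the decoders are swapped:
faces_a = torch.rand(4, 3, 64, 64)       # placeholder batch of person A
swapped = decoder_b(encoder(faces_a))    # A's expression, rendered as B
print(swapped.shape)                     # torch.Size([4, 3, 64, 64])
```

The same trick of learning a shared representation underlies voice mimicry: train on enough recordings of a person, and a model can render new sentences in a voice that sounds like theirs.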

Would the terms of service allow that? Maybe. Is it legal? Maybe not. Is it unethical? Yes.

Before the privacy issues came to light, FaceApp drew criticism in 2017 over its “hotness” filter, which simulated attractiveness, on the grounds that it appeared to favor light skin and European (Caucasian) features. This implied a racial and ethnic bias in its algorithm. FaceApp has since dropped the filter, but bias remains a core problem in machine learning, one that leads to thorny questions such as: should a self-driving car prefer to kill a pedestrian or its own passengers?

FaceApp is a sign of our times: the Fourth Industrial Revolution, which to me is simply the data-driven industrial revolution. It also shows how poorly we appreciate the degree to which data pervades our lives, and how much our legal frameworks need to adjust to the pace of these technologies.

We all need to study our data carefully and take precautions with it. Short advice: be careful what you expose your data to (and your phone, and your emails). Read the fine print on anything and everything you sign up for online. You don’t always have to be on the latest viral trend, especially if it involves your data.

And finally, as Intel’s Andy Grove said: in these times, only the paranoid survive.


Dominic Ligot is a tech entrepreneur and technologist. He is a founding board member of the Analytics Association of the Philippines, where he is an active advocate for data literacy and data ethics. He previously held executive roles in IT and banking covering governance, risk management, fraud, surveillance, and cybersecurity.


References:

1. FaceApp Terms of Service (https://faceapp.com/terms)

2. Article on Wireless Lab CEO email exchange (https://www.msn.com/en-us/news/technology/you-downloaded-faceapp-here-s-what-you-ve-just-done-to-your-privacy/ar-AAEtzVE?li=AA30Mu)

3. FaceApp uses AWS, GCP (https://www.sunjournal.com/2019/07/17/faceapp-adds-decades-to-your-age-for-fun-but-its-also-collecting-your-info/)

4. Is FaceApp Really a Privacy Threat (https://au.pcmag.com/features/62838/is-faceapp-really-a-privacy-threat)

5. Baidu AI can mimic your voice (https://www.digitaltrends.com/cool-tech/baidu-ai-emulate-your-voice/)

6. The Threat of Deep Fakes (https://www.lawfareblog.com/deep-fakes-looming-crisis-national-security-democracy-and-privacy)

7. Racism in FaceApp (https://www.telegraph.co.uk/technology/2017/04/25/faceapp-viral-selfie-app-racism-storm-hot-mode-lightens-skin/)

8. Who should a self-driving car kill (https://qz.com/536738/should-driverless-cars-kill-their-own-passengers-to-save-a-pedestrian/)



