[Image: laptop open to a login screen wrapped in chains and a padlock]

Artificial intelligence (AI) and machine learning technology present an interesting meeting point between our fear of the unknown and our fear of being known (that is, the fear of our private information being exposed to others). The erosion of privacy and the rise of intelligent machines are common themes in science fiction. But while reality still has a lot of catching up to do before we can call Skynet’s customer support or play cards with Agent Smith, many people have expressed genuine concern over the implications of modern technology – especially regarding their privacy.

This trend is demonstrated by the increasing number of legal issues regarding consumer privacy – from Apple’s dispute with the FBI over creating “backdoor” access to iPhones, to the class action lawsuit against State Farm insurance over unsolicited marketing calls. China has also recently passed a number of laws explicitly dealing with data protection in the digital world. It’s clear that consumers care about their privacy, and that lawmakers are implementing laws to protect it.

In this post, we’ll go over a few of these privacy concerns to see if they’re justified, as well as delve into whether such public perception undermines the future of machine learning.

Read our blog post "What is Machine Learning?" for a quick overview of the basics.

What are the primary risks to consumer privacy?

At the recent World Economic Forum in Davos, the topic was “Mastering the Fourth Industrial Revolution.” The revolution largely comprises the Internet of Things (IoT), the proliferation of mobile technology and cloud connectivity, and artificial intelligence and machine learning. Central to the discussion were the implications of these developments for people’s privacy, as tracking and sharing personal information is part of being so fully connected. When this information is used by companies and their marketing initiatives, many people feel that their privacy has been infringed upon.

According to the US Federal Trade Commission, there are two primary risks to consumer privacy: “first, the risk that a company with a large store of consumer data will become a more enticing target for data thieves or hackers, and second, that consumer data will be used in ways contrary to consumers’ expectations.”

A particularly infamous example is described in this Forbes article from 2012, where US retail giant Target used its machine learning algorithms to determine whether a woman was pregnant based on her purchase and browsing history. This caused a stir when a Minneapolis father called the retailer in outrage over maternity coupons addressed to his daughter, who hadn’t told her family that she was pregnant. If a multinational corporation can know more about you than your own parents do, it only makes sense to feel your privacy has been violated.

Read how Retailers are using Machine Learning to target Customers

Unsupervised Machine Learning is still some way off

When it comes to privacy fears, there’s one simple fact that consumers need to remember: it’s not the machines you need to be concerned with; it’s the people who have access to them. This might seem fairly obvious, but it’s a crucial point. Much of the fear people have of AI stems from the idea of self-programmed decision-making without a human agent. But, according to Christian Huitema, an engineer at Microsoft, “Unsupervised machine learning is hard. There are many examples of supervised machine learning, but these are driven by subject-matter experts that guide the machine towards specific discoveries. It will take much more than ten years to master the extraction of actual knowledge from Big Data sets.” While he’s talking specifically about data extraction, the notion of unsupervised machine learning is a clear aggravator of consumer fear that could potentially undermine developments in the field.
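To make the distinction concrete, here is a minimal sketch (not Target’s actual system; the spend figures and labels are hypothetical) of the difference Huitema is pointing at: a supervised model is handed expert-provided labels and merely fits a boundary between them, while an unsupervised one must discover structure in unlabelled data on its own.

```python
# Supervised: labels are provided by experts, so the task is guided.
# Each pair is (monthly spend on baby-adjacent items, pregnant? 0/1).
labelled = [(20, 0), (25, 0), (30, 0), (80, 1), (90, 1), (100, 1)]

def fit_threshold(data):
    """Learn the midpoint between the highest 0-labelled and lowest 1-labelled spend."""
    lo = max(x for x, y in data if y == 0)
    hi = min(x for x, y in data if y == 1)
    return (lo + hi) / 2

threshold = fit_threshold(labelled)  # a new shopper above this is flagged

# Unsupervised: no labels at all. A simple 1-D k-means must find
# the two groups itself, with no expert telling it what they mean.
def kmeans_1d(points, k=2, iters=20):
    centroids = [min(points), max(points)]  # crude initialisation
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

unlabelled = [20, 25, 30, 80, 90, 100]
centres = kmeans_1d(unlabelled)  # two cluster centres, but unnamed groups
```

Even in this toy case, the unsupervised model only produces two anonymous clusters; deciding that one of them means “pregnant” still requires a human expert, which is precisely why fully autonomous discovery remains hard.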

It’s also important to remember the legal protection given to consumer privacy in the digital age. Here in South Africa, the Protection of Personal Information (POPI) Act was signed into effect on the 11th of April, 2014. The act states that companies can only process personal information for the intended purpose for which it was collected. Information processing includes collecting, accessing, reading, storing, disclosing and destroying any given data. Whether the act is being enforced sufficiently across the board is uncertain, but it is undoubtedly a necessary step for consumer privacy protection and peace of mind.

For consumers, the reward must always exceed the cost

So will privacy fears undermine the future of machine learning? We don’t think so. But this ultimately depends on whether companies abide by laws like the POPI Act and don’t use anyone’s information for illicit purposes. It’s also important to remember that the common understanding of what privacy is – and how valuable it is to us – has undergone substantial change as the IoT has grown in prominence. People are increasingly willing to give up information like their location, profession, likes and hobbies, as long as they see a direct benefit or increased convenience. The onus is therefore on companies to take the high road and use machine learning technology to improve their customers’ user experience in a highly visible and convenient way.

Please get in touch if you'd like to learn more about how machine learning and predictive analytics can improve your marketing and business processes. 

Click here for Principa's Offshore Data Analytics services

Image credit: http://thesantaclara.org/