The Privacy Conundrum

easyDNS is pleased to sponsor Jesse Hirsh’s “Future Fibre / Future Tools” segments of his new email list, Metaviews

Clear but also confusing

It’s clear that privacy is essential, but protecting our privacy can be legitimately confusing.

What if, instead of fighting fire with fire, we started using water? What if predictive privacy was the kind of water necessary to put out the fire that is contemporary surveillance-based AI?

Maybe we have enough data already and we don’t need any more. What if we could use the data we currently have to predict when our privacy is in jeopardy, and intervene in those moments, either manually or automatically, to protect it?

This is the concept of predictive privacy. It offers an alternative to the current model of privacy that relies on either preventative measures (e.g. encryption) or reactive ones (e.g. revocation of user permissions).
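To make the contrast concrete, here is a minimal sketch in Python of what “predict and intervene” could look like: score the privacy risk of the present moment from signals we already collect, then either notify the user or act automatically. Everything here, the signals, the weights, the threshold, is hypothetical rather than a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str            # e.g. "home", "office", "public"
    camera_in_use: bool      # some app or device is capturing video
    microphone_in_use: bool  # some app or device is capturing audio
    others_present: bool     # other people detected nearby

def privacy_risk(ctx: Context) -> float:
    """Crude risk score in [0, 1] built from signals we already have."""
    score = 0.0
    if ctx.camera_in_use:
        score += 0.4
    if ctx.microphone_in_use:
        score += 0.3
    if ctx.location == "home" and ctx.others_present:
        score += 0.3
    return min(score, 1.0)

def intervene(ctx: Context, threshold: float = 0.5) -> str:
    """Act automatically above the threshold, otherwise just notify."""
    risk = privacy_risk(ctx)
    if risk >= threshold:
        return "automatic: pause recording devices and notify the user"
    if risk > 0.0:
        return "manual: notify the user and let them decide"
    return "no action"

# Example: a smart speaker and camera are both live while guests are over.
ctx = Context(location="home", camera_in_use=True,
              microphone_in_use=True, others_present=True)
print(privacy_risk(ctx), "->", intervene(ctx))
```

The prediction here is trivially rule-based; the point is the posture: anticipate the risky moment rather than encrypt everything in advance or clean up afterwards.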

What if we took an alternative approach, one that not only anticipates when our privacy is in jeopardy, but also establishes rules and regulations to protect it? Not in the government sense, but in the corporate or organizational sense, as a sign of trust or reliability.

“Our algorithms can balance the need for security with the need for your privacy. They can automatically turn off and avoid recording you when you are not in an active area, or when nobody else is there, or when you turn off our wireless assistant.” – Huawei or some equivalent.
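If a vendor ever did make that promise, it could in principle be written down as an auditable recording policy rather than left as marketing copy. A rough sketch in Python, with invented field names that simply mirror the conditions in the quote above:

```python
# Hypothetical encoding of the quoted promise as a recording policy.
RECORDING_POLICY = {
    "record_only_in_active_area": True,    # stop outside active areas
    "require_other_people_present": True,  # stop when nobody else is there
    "respect_assistant_off_switch": True,  # stop when the assistant is off
}

def should_record(in_active_area: bool, others_present: bool,
                  assistant_enabled: bool,
                  policy: dict = RECORDING_POLICY) -> bool:
    """Record only if every enabled policy condition is satisfied."""
    if policy["record_only_in_active_area"] and not in_active_area:
        return False
    if policy["require_other_people_present"] and not others_present:
        return False
    if policy["respect_assistant_off_switch"] and not assistant_enabled:
        return False
    return True

# The assistant is switched off, so nothing should be recorded.
print(should_record(in_active_area=True, others_present=True,
                    assistant_enabled=False))  # -> False
```

Whether anyone could verify that a device actually runs such a policy is, of course, the trust problem the quote glosses over.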

The Future of Privacy and AI
Here at Metaviews we recognize the paradox that ethics is, generally speaking, an exercise in legitimizing whatever you were probably going to do anyway. This suggests that the futures of privacy and AI are interwoven with each other.

Interdependent and also in constant conflict. Which is why, if there is to be AI-based prediction, perhaps the primary focus of that prediction should be when and where privacy is necessary and when and where it is not. Recognizing that this means different things to different people. Also recognizing that prediction is a fallacy and not at all possible.

Examples of predictive privacy could include:

  • A phone alarm that goes off if someone is filming you without your permission.
  • An alarm clock that goes off if someone is filming you while you’re asleep.
  • A shoe that alerts the wearer when they are being filmed from below.
  • A wearable device that alerts people when they are being filmed by a drone.
  • A virtual assistant that tells you when you need to cover your webcam with a sticker.
  • A “when our privacy is most at risk” app for kids that tells them when they need to delete their social media accounts.

Ok, that last one was only one of the silly examples GPT-3 came up with. Here’s another that made me lol:

  • The first person to buy the “when our privacy is most at risk” shirt gets a free Amazon Echo Dot.

We are constantly being bombarded with advertisements and marketing strategies that try to get us to give out our personal information. While there is no way to completely protect ourselves from this, there are ways we can make sure that our personal information isn’t being shared without our knowledge.

Perhaps the most common way is by using a VPN. With a VPN you can hide your location, encrypt your traffic, and more. But how do you know which VPN is the best for you? It doesn’t really matter, but it’s also hilarious that the machine actually thinks this is an issue.

It is really hard to be a privacy advocate in today’s society. Especially when you try to get a machine to write your newsletter for you. But it is important for us to be aware of how much our personal information can be used and abused. Mostly I’m trying to see if you’re paying attention, but I’m also just shocked that one I wrote using this tech got a positive response.

This paper stresses the severe ethical and data protection implications of predictive analytics and outlines a new approach in tackling them. First, it introduces the concept of “predictive privacy” to formulate an ethical principle protecting individuals and groups against prediction of sensitive information using Big Data and Machine Learning. Secondly, it analyses the typical data processing cycle of predictive systems to provide a step-by-step discussion of ethical implications, locating occurrences of predictive privacy violations. Thirdly, the paper sheds light on what is qualitatively new in the way predictive analytics challenges ethical principles such as human dignity and the (liberal) notion of data protection as the preservation of privacy. These new challenges arise when predictive systems transform statistical inferences, which are knowledge about the cohort of training data donors, into individual predictions, thereby crossing what I call the “prediction gap”. Finally, the paper summarizes that data protection in the age of predictive analytics is a collective matter as we face situations where an individual’s (or group’s) privacy is violated using data other individuals provide about themselves, possibly even anonymously.

This last sentence is essential. Not only is data protection a collective issue, but our privacy is put at risk in spite of, and sometimes even because of, anonymization-based data practices.
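To see what crossing that “prediction gap” looks like in miniature, here is a deliberately toy sketch in Python with scikit-learn. The data, features, and labels are entirely made up: a model fit on what a cohort of donors chose to share (possibly anonymously) ends up producing an individual prediction about someone who never disclosed the sensitive attribute at all.

```python
# Toy illustration of the "prediction gap": cohort-level statistics,
# learned from volunteered (possibly anonymous) data, become an
# individual prediction about a non-donor. All data here is invented.
from sklearn.linear_model import LogisticRegression

# Donors' data: [hours of late-night app use, bought sleep aids (0/1)]
# paired with a self-reported sensitive label (insomnia: 0/1).
donor_features = [[0.5, 0], [1.0, 0], [3.5, 1], [4.0, 1], [2.0, 0], [5.0, 1]]
donor_labels = [0, 0, 1, 1, 0, 1]

model = LogisticRegression().fit(donor_features, donor_labels)

# A non-donor never disclosed anything about their sleep, yet the model
# turns the cohort correlation into a prediction about them specifically.
non_donor = [[4.5, 1]]
print("Predicted sensitive attribute:", model.predict(non_donor)[0])
```

The point is not the model, which is trivial, but who bears the privacy cost: the people whose data trained it are not the person about whom the inference is made.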

Which is why it might be nice to be notified when your privacy is at risk. Especially if that notification was collectively focused and similarly promoted and enabled collective action.

Could an Open Source Information Protection initiative enable a new kind of predictive privacy that mobilized people and made it easy for all of us to be private in a digital world?
