The harms associated with the loss of privacy

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Reconfiguring how our policies and courts understand privacy

Digital technology has not only transformed our relationship with privacy, it has also changed our understanding of the concept as a whole. While some may lament this as a slippery slope along which our definition of privacy is being eroded, there's reason to believe that our privacy is actually being renewed and strengthened.

If anything, the digital era is helping to substantiate why privacy is essential, and why the loss of privacy entails both present and future harms.

The issue of harms has been a frustrating and elusive element of privacy reform and regeneration. Historically, courts have been lenient on privacy issues due to the mistaken perception that the harms are minimal or nonexistent. Ironically, this has also been true in the realm of antitrust.

Thankfully, a range of really smart people have been doing the research and writing necessary to help shift our understanding of how privacy is linked to harms. In particular, Danielle Citron, a legal scholar who specializes in privacy issues, has just pre-released a paper, co-authored with fellow privacy scholar Daniel Solove, that articulates this link.

The diagram above provides a glimpse of why it has been so difficult to demonstrate these harms in court. Not all of these harms are the kinds we'd traditionally think of, but in the digital era they can be powerful and have a lasting impact.

For example, reputational harms may be among the most powerful, but they are also difficult to prove, since the impact often falls on a subject's future reputation. Similarly, data quality harms are another future or long-term consideration, reflecting how we are increasingly judged by our data trail rather than by our person.

Privacy harms have become one of the largest impediments in privacy law enforcement. In most tort and contract cases, plaintiffs must establish that they have been harmed. Even when legislation does not require it, courts have taken it upon themselves to add a harm element. Harm is also a requirement to establish standing in federal court. In Spokeo v. Robins, the U.S. Supreme Court held that courts can override Congress's judgments about what harm should be cognizable and dismiss cases brought for privacy statute violations.

The caselaw is an inconsistent, incoherent jumble, with no guiding principles. Countless privacy violations are not remedied or addressed on the grounds that there has been no cognizable harm. Courts conclude that many privacy violations, such as thwarted expectations, improper uses of data, and the wrongful transfer of data to other organizations, lack cognizable harm.

This has been a huge issue for privacy advocates, and a major source of frustration, not just with regard to the courts, but also with the politicians who write the laws that the courts interpret.

Facebook has been the obvious example. It’s taken well over a decade to recognize the harms being done by the platform, and we still haven’t effectively curbed those harms (or Facebook).

Courts struggle with privacy harms because they often involve future uses of personal data that vary widely. When privacy violations do result in negative consequences, the effects are often small – frustration, aggravation, and inconvenience – and dispersed among a large number of people. When these minor harms are done at a vast scale by a large number of actors, they aggregate into more significant harms to people and society. But these harms do not fit well with existing judicial understandings of harm.

This article makes two central contributions. The first is the construction of a road map for courts to understand harm so that privacy violations can be tackled and remedied in a meaningful way. Privacy harms consist of various different types, which to date have been recognized by courts in inconsistent ways. We set forth a typology of privacy harms that elucidates why certain types of privacy harms should be recognized as cognizable.

The second contribution is providing an approach to when privacy harm should be required. In many cases, harm should not be required because it is irrelevant to the purpose of the lawsuit. Currently, much privacy litigation suffers from a misalignment of law enforcement goals and remedies. For example, existing methods of litigating privacy cases, such as class actions, often enrich lawyers but fail to achieve meaningful deterrence. Because the personal data of tens of millions of people could be involved, even small actual damages could put companies out of business without providing much of value to each individual. We contend that the law should be guided by the essential question: When and how should privacy regulation be enforced? We offer an approach that aligns enforcement goals with appropriate remedies.

This is another important point that the paper makes: recognizing privacy harms does not always mean that they need to be proven. Part of the original problem with privacy harms was courts insisting that there had to be harm in order for a case to be heard or liability found.

And that's why the paper focuses not just on establishing a wide range of possible harms, but also on arguing that those harms are not always the reason we should be concerned about, or enforce, privacy rights.

After all, the big picture is just as important. Privacy is not an end in and of itself. For the most part, privacy is a means for us to engage in activity both superficial and substantive, personal and public. It is a tool to protect ourselves, in the present and in the future.

As a result, privacy is a connecting (t)issue that not only touches other issues, but helps make them relevant and potent. Take data, for example: boring in the abstract, but powerful in the context of our privacy.

Another example is the connection between privacy and identity.

The debate around the role and value of anonymity has always been hindered by false assumptions about privacy and a lack of knowledge of its harms. However, once we value privacy and recognize the harms that can come with its loss, we can suddenly rethink and reassess the value we place on anonymity.

Similarly, surveillance in the workplace is generally permitted, and even encouraged, precisely because we do not factor in larger notions of privacy harms. We expect employees to submit to increasingly pervasive surveillance as a condition of employment. Yet is that reasonable or fair?

Interestingly enough, this can also raise larger collective harms, especially when that surveillance targets not only employees but the public at large, as is the case with Amazon's proposal to add AI cameras to its delivery vans.

We talked about what Amazon is up to on Monday’s edition of the Metaviews show on Twitch.

On Wednesday’s edition, Ken Chase and I dug into this notion of privacy harms, and why they matter.
