Asking the AI gods to be fair

easyDNS is pleased to sponsor Jesse Hirsh's “Future Fibre / Future Tools” segments of his new email list, Metaviews

Narratives of technology frame our sense of justice

We’ve had a request for a salon on the future of education, so that’ll be this week’s topic. The time for this one will be 8pm Eastern (with pre-show starting around 7:30pm). Please join us!

A key ingredient in understanding technology is an understanding of media. Not that either is ever truly accomplished; a consequence of the medium being the message is that we and our media/technology are always changing.

Yet understanding media as a precursor to understanding technology helps us connect narrative to meaning, and frames to features. What we think a technology or tool can do is a direct reflection of the stories we tell (ourselves) about it.

We’re rapidly approaching a (narrative) conflict in the field of AI, surveillance, and predictive analytics. The traditional story of AI, fused with magic, divinity, and inevitability, is increasingly being challenged by an alternate tale that situates machine learning not only as a tool, but also as a potential weapon, especially in the hands of those with wealth and power.

Hopefully readers of this newsletter are familiar with, or at the very least aware of, the latter narrative, and with it carry an inoculation against the former. Once we realize the AI emperor is naked, it’s difficult not to snicker and laugh at attempts to depict the machine as holy or omnipotent.

Or so we hope. Our goal as storytellers, as philosophers, researchers, or even readers, is to counter the dominant narrative with our own. We have a moral responsibility to disarm the weapon, and to pull back the curtain on the Wizard of Oz so that all may see the humans who control it.

Unfortunately there remains an entire industry dedicated to the mercenary pursuit of narrative and propaganda, eager to be hired by those who can afford it to amplify and solidify the mythology of technology.

The headline of this article is arguably just as bad as the initiative itself, especially if it helps cast the initiative as part of some evolution of intelligent life, and as something we need to embrace given its inevitability.

When the Chula Vista police receive a 911 call, they can dispatch a flying drone with the press of a button.

On a recent afternoon, from a launchpad on the roof of the Chula Vista Police Department, they sent a drone across the city to a crowded parking lot where a young man was asleep in the front seat of a stolen car with drug paraphernalia on his lap.

When the man left the car, carrying a gun and a bag of heroin, a nearby police car had trouble following as he sprinted across the street and ducked behind a wall. But as he threw the gun into a dumpster and hid the bag of heroin, the drone, hovering above him, caught everything on camera. When he slipped through the back door of a strip mall, exited through the front door and ran down the sidewalk, it caught that, too.

Watching the live video feed, an officer back at headquarters relayed the details to the police on the scene, who soon caught the man and took him into custody. Later, they retrieved the gun and the heroin. And after another press of the button, the drone returned, on its own, to the roof.

Each day, the Chula Vista police respond to as many as 15 emergency calls with a drone, launching more than 4,100 flights since the program began two years ago. Chula Vista, a Southern California city with a population of 270,000, is the first in the country to adopt such a program, called Drone as First Responder.

There’s been a lot of pushback against this article from the kind of critics and researchers we like to follow. It’s not just that the piece frames AI as intelligent without substantiation, which an editor should have caught; it also lacks context, let alone any discussion of appropriate policy.

While helicopters are not new to policing, drones are not just a cheaper form of aerial policing. They also carry their own political and cultural baggage, as people in rural Pakistan or East Africa could attest, given their decades-long exposure to drone attacks.

Yet there is a role for drones when it comes to police work in general.

The issue comes back down to effective regulatory control and good public policy:

For most of you reading these words, suggesting that regulation and public policy are necessary elements of the successful use of technology is a given, if not something at least worth considering.

However, it is important to recognize that this is a relatively recent change in the narrative surrounding our technology (and related industries). We’ve spent the majority of the Internet’s existence arguing that regulation of technology was either oxymoronic or impossible.

There was no evidence behind this, it was just the assertion of industry as amplified by the self-described news and opinion industry. Thankfully this argument is largely dissolving, but the damage has been done.

Like a monster slouching towards Bethlehem to be born, we now must wrestle with a culture that regards AI within the context of religion and divinity.

While the AI-as-god narrative is likely to continue building momentum and believers, we can find comfort or hope in the competing counter-narrative: the story that says AI is a tool, one that requires rules to ensure it is used responsibly.

Last week we discussed the paradox of bias and AI. While bias of some form may always be with us, that doesn’t mean we should not strive towards fairness.

There are numerous definitions of fairness for AI models, including disparate impact, disparate treatment, and demographic parity, each of which captures a different aspect of fairness to the users. Continuously monitoring deployed models and determining whether the performance is fair along these definitions is an essential first step towards providing a fair member experience.

Although several open source libraries tackle such fairness-related problems (FairLearn, IBM Fairness 360 Toolkit, ML-Fairness-Gym, FAT-Forensics), these either do not specifically address large-scale problems (and the inherent challenges that come with such scale) or they are tied to a specific cloud environment. To this end, we developed and are now open sourcing the LinkedIn Fairness Toolkit (LiFT), a Scala/Spark library that enables the measurement of fairness, according to a multitude of fairness definitions, in large-scale machine learning workflows.
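To make the first of those fairness definitions concrete: a model satisfies demographic parity when the rate of positive predictions is the same across subgroups. Here is a minimal illustrative sketch in Python (not LiFT's actual API, which is a Scala/Spark library) of measuring the demographic parity gap:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction
    rates across subgroups. A gap of 0.0 means perfect demographic parity."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" receives positive predictions 75% of the time,
# group "b" only 25% of the time, so the gap is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.5
```

Monitoring a deployed model then amounts to recomputing this gap (and its analogues for other fairness definitions) on fresh traffic and alerting when it drifts.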

Introducing the LinkedIn Fairness Toolkit (LiFT)

The LinkedIn Fairness Toolkit (LiFT) library has broad utility for organizations who wish to conduct regular analyses of the fairness of their own models and data.

  • It can be deployed in training and scoring workflows to measure biases in training data, evaluate different fairness notions for ML models, and detect statistically significant differences in their performance across different subgroups. It can also be used for ad hoc fairness analysis or as part of a large-scale A/B testing system.
  • Current metrics supported measure: different kinds of distances between observed and expected probability distributions, traditional fairness metrics (e.g., demographic parity, equalized odds), and fairness measures that capture a notion of skew like Generalized Entropy Index, Theil’s Indices, and Atkinson’s Index.
  • LiFT also introduces a novel metric-agnostic permutation testing framework that detects statistically significant differences in model performance (as measured according to any given assessment metric) across different subgroups. This testing methodology will appear at KDD 2020.
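The permutation-testing idea in that last bullet can be sketched independently of LiFT: shuffle the group labels many times and ask how often the shuffled metric gap is at least as large as the observed one. The following is a generic Python illustration of the technique, not LiFT's Scala implementation; the function and variable names are mine:

```python
import random

def permutation_test(scores, groups, metric, n_permutations=10_000, seed=0):
    """Metric-agnostic permutation test: p-value for the observed difference
    in `metric` between groups "a" and "b", under the null hypothesis that
    group membership has no effect on the scores."""
    rng = random.Random(seed)

    def gap(labels):
        a = [s for s, g in zip(scores, labels) if g == "a"]
        b = [s for s, g in zip(scores, labels) if g == "b"]
        return abs(metric(a) - metric(b))

    observed = gap(groups)
    shuffled = list(groups)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(shuffled)          # break any real link to group membership
        if gap(shuffled) >= observed:  # count gaps at least as extreme
            extreme += 1
    return extreme / n_permutations

mean = lambda xs: sum(xs) / len(xs)

# Group "a" scores clearly higher than group "b": a small p-value is expected,
# i.e., the performance difference is statistically significant.
scores = [0.9, 0.8, 0.85, 0.95, 0.2, 0.3, 0.25, 0.1]
groups = ["a"] * 4 + ["b"] * 4
print(permutation_test(scores, groups, mean))
```

Because the test only needs the metric as a black-box function, the same machinery works for accuracy, AUC, false-positive rate, or any other assessment metric, which is what "metric-agnostic" means here.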

While there’s reason to regard “fairness” as a bias unto itself, it’s a bias we can and should strive for.

However fairness is not as straightforward as some may think. It is intrinsically linked to justice, and justice is as often a source of conflict as it is a response to it.

Should we be pleading with our AI gods to ensure that the police drones patrolling our communities are fair?

Or should we be demanding rules and regulations that govern the use of the drones and ensure that said use adheres to our definitions of what is fair?

What role does journalism play in facilitating this debate? Or in sabotaging it?

In thinking about the way we frame these stories, and how we construct these stories, I’m trying to work through my own emotions about how our society informs itself.

The way official narratives emerge and become endorsed. As well as the way conspiracy is incubated and cultivated.

Perhaps the problem lies in the simplistic form by which we assign credibility to narratives, when instead we should be fostering literacy and critical thinking.

These are dangerous times, not least because our capacity to converse and learn has been disrupted, and appears both precarious and pervasive. Precarious due to the dominance of digital monopolies and the accompanying toxic narcissistic culture. Yet also pervasive due to the ease with which we can express ourselves, and the digital tools that make it near impossible to suppress said expression.

I’m still struggling to wrap my head around this paradox. Although it reminds me of a line I was fond of in my youth: what do you call a society composed of multiple paradoxes? A paradise!?
