What to do about bias?

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Fairness and transparency are a reasonable response

Join us today for another Metaviews salon where we’ll discuss memes, satire, politics, and the pandemic. The session begins at 12 noon Eastern time, but we ask that you show up early so we can start on time.
https://zoom.us/j/92115776566?pwd=TGRsWEFzQi9mSzNscXd2Z1dhVjB2QT09

The relationship between algorithms and bias is both a problem and an opportunity. A problem, in that algorithms have the potential to amplify and exacerbate entrenched biases and inequality. An opportunity, in that we can now document and identify institutional or cultural bias in ways that were not previously possible.

The larger question, or metaview, is whether this newfound ability to identify and engage with bias will help us understand the larger role of bias and subjectivity in society.

Before we address that, let us first take a look at a new report from the UK-based Centre for Data Ethics & Innovation (CDEI). The group was created by the UK government in 2018 as a means of giving both public sector and industry groups a global advantage. It stems from the belief that ethics makes for better products and services, especially in an era when trust in government is scarce and ethical corporations are seen as an innovation (rather than an oxymoron).

By focusing on bias, the CDEI hopes to address issues of trust, and influence policy towards earning and reinforcing that trust.

The government commissioned CDEI to review the risks of bias in algorithmic decision-making. This review formed a key part of the CDEI’s 2019/2020 Work Programme, though completion was delayed by the onset of COVID-19. This is the final report of the CDEI’s review and includes a set of formal recommendations to the government.

Key recommendations include:

  • Government should place a mandatory transparency obligation on all public sector organisations using algorithms that have an impact on significant decisions affecting individuals.
  • Organisations should be actively using data to identify and mitigate bias. They should make sure that they understand the capabilities and limitations of algorithmic tools, and carefully consider how they will ensure fair treatment of individuals.
  • Government should issue guidance that clarifies the application of the Equality Act to algorithmic decision-making. This should include guidance on the collection of data to measure bias, as well as the lawfulness of bias mitigation techniques (some of which risk introducing positive discrimination, which is illegal under the Equality Act).

These are really interesting and crafty recommendations. They acknowledge the pervasiveness of bias, as well as the new responsibilities that technology creates when it comes to our relationship with bias (and with institutions).

On the one hand this report uses bias to make a strong argument in favour of algorithmic transparency. On the other hand it embraces and promotes the idea that data and algorithms can be used to identify bias, as a means of mitigating or working against it. This includes the prudent question of how such a process should work. More on that below. Let’s also look at their blog summary:

However, the evidence is far less clear on whether algorithmic decision-making tools carry more or less risk of bias than previous human decision-making processes. Indeed, there are good reasons to think that better use of data can have a role in making decisions fairer, if done with appropriate care. Though a report on bias inevitably considers risks, there is also an opportunity here. Data gives us a powerful weapon to see where bias is occurring and measure whether our efforts to combat it are effective; if an organisation has hard data about differences in how it treats people, it can build insight into what is driving those differences, and seek to address them.

To date, the design and deployment of algorithmic tools has not been good enough to achieve this consistently. There are numerous examples worldwide of the introduction of algorithms persisting or amplifying historical biases, or introducing new ones. We must and can do better. Making fair and unbiased decisions is not only good for the individuals involved, but it is good for business and society. Successful and sustainable innovation is dependent on building and maintaining public trust.
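
The report's claim that hard data lets an organisation see where bias is occurring can be made concrete with a very simple audit. The sketch below is illustrative only: the decision records and the "group" and "approved" columns are made up, and the metric shown (the gap in selection rates, sometimes called the demographic parity difference) is just one crude way of quantifying a disparity.

```python
# Illustrative sketch: measuring a simple disparity in decision data.
# The data and the "group" / "approved" column names are hypothetical;
# a real audit would use richer metrics and proper statistical testing.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   1,   0,   0],
})

# Selection rate per group: the share of positive decisions each group receives.
selection_rates = decisions.groupby("group")["approved"].mean()

# Demographic parity difference: the gap between the highest and lowest rates.
# A gap near zero suggests similar treatment; a large gap flags a disparity
# worth investigating (it does not by itself prove unlawful discrimination).
parity_gap = selection_rates.max() - selection_rates.min()

print(selection_rates)
print(f"Demographic parity difference: {parity_gap:.2f}")
```

A gap like this is the starting point for the insight the CDEI describes, not a verdict: the next step is to ask what is driving the difference and whether it can be justified.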

I think fairness and transparency are excellent responses, or aspirational goals, that help democratize the use of algorithms and automated decision making. Similarly, I agree that meaningful public trust is essential for democratic, automated decision making to be successful and sustainable.

However, I'm not at all convinced that unbiased decisions are ever possible, or even desirable. This is what I meant earlier by the metaview of bias and algorithms. On the one hand this technology makes it easier to identify bias; on the other hand it should also change our relationship to, and perception of, bias overall.

For example, is bias always bad?

It's a bit like how people used to, and in many cases still do, use the word "chemicals" as if all chemicals were bad: "I don't want chemicals in my food." Yet it's worth remembering that water is a chemical. Everything is chemical.

What about a bias in favour of democracy? In favour of inclusivity and participation?

Should we not regard those biases as good and virtuous? Shouldn’t our decisions employ and leverage those biases?

The issue should not be the elimination of bias, but the identification and management of biases.

Perhaps the folks at the CDEI anticipate this, as their recommendations may help us move towards such a position.

Although many of the recommendations in this report focus on actions for government and regulators, which was the core remit we set out initially to look at, there is much that individual organisations can and should be doing now to address this issue. Organisations remain accountable for their own decisions whether they have been made by an algorithm or a team of humans. Senior decision-makers in organisations need to engage with understanding the trade-offs inherent in introducing an algorithm. They should expect and demand sufficient explainability of how an algorithm works so that they can make informed decisions on how to balance risks and opportunities as they deploy it into a decision-making process.

Organisations often find it challenging to build the skills and capacity to understand bias, or to determine the most appropriate means of addressing it in a data-driven world. A cohort of people is needed with the skills to navigate between the analytical techniques that expose bias and the ethical and legal considerations that inform best responses. Some organisations may be able to create this internally, others will want to be able to call on external experts to advise them. As part of our openly commissioned research into bias mitigation techniques, we worked with a partner to build a web application that seeks to explain the complex trade-offs between different approaches; there is more to be done in this area to build understanding of the options available.
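
To give a flavour of the trade-offs the CDEI's web application tries to explain, here is a hedged, synthetic sketch of one common family of mitigation techniques: group-specific decision thresholds. All scores, labels, and threshold values below are invented for illustration; the point is only that narrowing the selection-rate gap can come at some cost to overall accuracy.

```python
# Illustrative sketch of one mitigation trade-off: applying group-specific
# decision thresholds to bring selection rates closer together, at some cost
# to overall accuracy. Scores, labels, and thresholds are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.choice(["A", "B"], size=n)
# Hypothetical scores where group B systematically scores lower.
score = rng.normal(loc=np.where(group == "A", 0.6, 0.4), scale=0.15)
# Hypothetical "true" outcomes, correlated with the score.
label = (score + rng.normal(0, 0.1, n) > 0.5).astype(int)

def evaluate(decision):
    """Return overall accuracy and the selection-rate gap between groups."""
    acc = (decision == label).mean()
    gap = abs(decision[group == "A"].mean() - decision[group == "B"].mean())
    return acc, gap

# Option 1: one threshold for everyone.
single = (score > 0.5).astype(int)
# Option 2: per-group thresholds chosen to bring selection rates closer.
per_group = (score > np.where(group == "A", 0.55, 0.45)).astype(int)

for name, decision in [("single threshold", single),
                       ("per-group thresholds", per_group)]:
    acc, gap = evaluate(decision)
    print(f"{name}: accuracy={acc:.3f}, selection-rate gap={gap:.3f}")
```

Notably, per-group adjustments like this are exactly the kind of bias mitigation technique whose lawfulness under the Equality Act the report asks the government to clarify.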

This is a helpful and constructive approach to automated decision making in general. It encourages the recognition that adopting automation comes with new costs and responsibilities; in this case, managing bias, which demands a new skill set and a need to value diversity.

There's an inherent subjectivity in this that will not be easy to manage, but the right approach can produce substantial payoffs or dividends. Many of the positive attributes we associate with automation (responsiveness, personalization, accuracy) can be attained in the pursuit of understanding, and dare I say it, designing bias in automated systems. Similarly, the ability to anticipate, if not accommodate, subjective experiences is rare and of growing value.

Which is why we can't treat bias as an isolated bug, or, in my case, a feature. Instead we need to situate it in a larger context that connects bias with the production of machine learning systems in general.

A related question is how bias relates to accuracy. I suspect the two are linked, but the nature of their relationship needs to be teased out further. For example, I can imagine a system that is biased and accurate, or biased and inaccurate.
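
To make the "biased and accurate" case concrete, here is a toy illustration (all numbers invented): a set of decisions that is 95% accurate overall, yet whose errors fall entirely on the smaller group.

```python
# Toy illustration of "biased and accurate": overall accuracy looks high,
# yet the errors are concentrated in one group. All numbers are invented.
import numpy as np

group    = np.array(["A"] * 90 + ["B"] * 10)
label    = np.array([1] * 45 + [0] * 45 + [1] * 5 + [0] * 5)
# Hypothetical predictions: near-perfect for group A, poor for group B.
decision = np.array([1] * 45 + [0] * 45 + [0] * 5 + [0] * 5)

for g in ["A", "B"]:
    mask = group == g
    print(f"group {g}: accuracy={(decision[mask] == label[mask]).mean():.2f}")

print(f"overall accuracy={(decision == label).mean():.2f}")
# Prints 1.00 for group A, 0.50 for group B, and 0.95 overall.
```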

Perhaps there's a larger risk in focusing on the technology at the expense of the big picture: thinking that the answer to bias lies in better systems, when bias may instead be an inherent by-product of subjective experience.

Similarly we may be focusing on explainability at the expense of understanding power and the systems that serve it.

Some of the brightest minds in the field are meeting this week on that subject, but unfortunately they’re doing so in private.

We will, however, share the results of that session as they're published.

As for the CDEI report, our friends at the Ada Lovelace Institute shared a relevant response from Anna Thomas, who was one of three independent advisors to the CDEI Review:

The pandemic has seen an explosion of digital technologies at work. Over the summer we saw public frustration boil over about the harms and accountability in the wake of the Ofqual A-level grading farrago. Even today, a new survey suggests 1 in 5 employers are tracking workers online or planning to do so.

Invisible and pervasive, automated technologies involving mass data-processing have taken over an extraordinary variety of tasks traditionally carried out by people, such as HR professionals, teachers, vast numbers of managers and public servants, and many others in response to drives to meet new demands and increase efficiency.

As the CDEI report argues, the role of automated decision making systems is to make human decisions better, not to replace humans in the decision making process. Perhaps this distinction is lost in the larger mythology or deterministic narrative that AI is taking over the world. For what do reports like this matter when the world they imagine has little connection to the one in which we find ourselves?

Yes, roadmaps and frameworks are essential tools for change, but the motivation for change needs to come from recognizing the problems with the present and the status quo.

Second, it recognises the scale and breadth of both individual and collective harms, which are posed by use of automated technologies trained on data that embed historic inequalities and patterns of behaviour and resource. In turn, as the scale and speed at which these tools are adopted increase, so too must the pace, breadth and boldness of our policy response to meet these challenges and rebuild public trust.

The report's recognition of this issue should not be downplayed: it is a milestone. Challenges connected to the potential of algorithmic systems to amplify and project different forms of individual and collective inequality into the future have too often been minimised, or avoided altogether.

Understanding and responding to adverse equality (and other) impacts will mean building cross-disciplinary capabilities and expertise at the CDEI itself and more widely within Government, regulators and industry (as IFOW and the Ada Lovelace Institute have recently modelled).

However, getting the public sector on side, while significant (and far from successful), is not enough in a society where the private sector sets the tone and pace for technological development and deployment.

Third, many of the recommendations to improve public-sector transparency beyond strict requirements of the existing legal regimes – including a new, mandatory transparency duty – are strong, and supported by a detailed summary of existing legal requirements. But the report (while recognising that decision-making is dispersed and traditional divisions do not always stand up) stops short of extending this recommendation to the private sector.

This takes us to what I believe is the report’s Achilles’ heel. The truth is that voluntary guidance, coordination and self-regulation have not worked, and further advisory or even statutory guidance will not work either. In spite of striking moves in the right direction, today’s report, with its focus on ‘bias’ associated with individual prejudice, stops short from making the logical leap to regulation.

If strong, anticipatory governance is indeed crucial (as both IFOW and the CDEI say) then new regulatory mechanisms are required to ensure that the specific actions which have been identified as necessary are taken.

This is important. If the existing behavioural modification industry is to be reined in, then new laws are necessary, and they will need teeth. And let us also keep in mind that automation and machine learning touch and impact all sectors; how we govern them influences how everything is governed.

Here in Canada this debate is taking flight with the proposed act to enact the digital charter. Similarly, in the US, antitrust and privacy debates are rapidly picking up speed. Europe has been in the lead in this area and is showing no signs of slowing down. And let's not forget China, which is in the process of drafting a Personal Data Protection Law that we will profile in a future issue.

The question remains, however: how does policy intersect with subjectivity? How do we legislate and regulate things like bias, which are subjective and, in their own way, pervasive?

Perhaps the answer is that policy debates in general, and government in particular, need more weirdos.
