easyDNS is pleased to sponsor Jesse Hirsh’s “Future Fibre / Future Tools” segments of his new email list, Metaviews
When the state writes a policy that it has no idea how to enforce
Next week’s salon will focus on the growing power of the digital monopolies, the role of antitrust, and the rise of alternatives. We’ll be meeting at 10am Eastern on Tuesday, with the animal pre-show starting at 9:30am Eastern. The timing is to accommodate our European friends. We’ll be shifting week to week depending on who is able to make it and what works best for committed participants.
This week the Canadian government introduced legislation titled the “Digital Charter Implementation Act”, which proposes to upgrade the country’s privacy laws while also creating a new regulator to govern both data and AI.
Yesterday, Canadian Innovation Minister @NavdeepSBains introduced the Digital Charter Implementation Act, which proposes a national privacy standard for Canada akin to Europe’s #GDPR. https://t.co/NCYonwwfbi
— Cory Doctorow #BLM (@doctorow) November 18, 2020
There’s a lot in this proposed legislation, and in addition to the potentially large fines included in the upgrade, the following measures have received much of the attention:
The law is complex and will undergo many changes, but its two most salient features are:
I. The right to refuse to have your data collected and used; and
II. The right to have your data deleted if you change your mind.
With stiff penalties for companies that don’t comply.
2/
— Cory Doctorow #BLM (@doctorow) November 18, 2020
Cory’s thread is interesting, and worth reading, as it gets into issues of consent that have plagued privacy laws throughout the digital era.
However I want to focus on the algorithmic transparency elements that the government is proposing.
I’ll admit I’m surprised that the Canadian government has followed through on the whole digital charter thing. Not to discourage them, as it is a step in the right general direction, but it has always felt as if it was a product of copy-and-paste policy rather than a reflection of a genuine understanding of the issues, let alone the ability to do something about them.
As if clever policy people had looked at what was happening in the world, read the latest papers and critiques, and stitched together the language without really reflecting on what was being proposed or what the consequences would be.
Bias bad, transparency good. Let’s make sure we can embrace automated decision making while also getting rid of the messy and complicated stuff: algorithmic transparency is the answer.
Yet is that even possible? Perhaps it doesn’t matter. There’s a big difference between a right to explanation, a right to know, and a right to request.
More on that in a bit, but let’s start by recognizing that the GDPR, which has become the reference point for upgraded privacy legislation, does not have effective algorithmic transparency:
Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation https://t.co/sdWrl4kC4c
— Luciano Floridi (@Floridi) January 26, 2017
This right to explanation is viewed as an ideal mechanism to enhance the accountability and transparency of automated decision-making. However, there are several reasons to doubt both the legal existence and the feasibility of such a right. In contrast to the right to explanation of specific automated decisions claimed elsewhere, the GDPR only mandates that data subjects receive meaningful, but properly limited, information (Articles 13-15) about the logic involved, as well as the significance and the envisaged consequences of automated decision-making systems, what we term a ‘right to be informed’. Further, the ambiguity and limited scope of the ‘right not to be subject to automated decision-making’ contained in Article 22 (from which the alleged ‘right to explanation’ stems) raises questions over the protection actually afforded to data subjects. These problems show that the GDPR lacks precise language as well as explicit and well-defined rights and safeguards against automated decision-making, and therefore runs the risk of being toothless.
From all appearances the Canadian legislation has not added any greater clarity or precision to how we conceive of the right to explanation. Instead the current Canadian proposal uses the language of a “right to request” an explanation, which sets no criteria or expectations around the clarity or accessibility of such an explanation.
There is currently a growing field of explainability, which we touched upon in an earlier issue, that at the very least provides some framework for how to approach the matter.
This desire for satisfactory explanations has spurred scientists at the National Institute of Standards and Technology (NIST) to propose a set of principles by which we can judge how explainable AI’s decisions are. Their draft publication, Four Principles of Explainable Artificial Intelligence (Draft NISTIR 8312), is intended to stimulate a conversation about what we should expect of our decision-making devices.
Their proposed four principles are:
- Explanation: Systems deliver accompanying evidence or reason(s) for all outputs.
- Meaningful: Systems provide explanations that are understandable to individual users.
- Explanation Accuracy: The explanation correctly reflects the system’s process for generating the output.
- Knowledge Limits: The system only operates under conditions for which it was designed or when the system reaches a sufficient confidence in its output.
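To make the first two principles a little more concrete, here is a minimal, hypothetical sketch of what it looks like for a system to deliver evidence alongside its output: a small decision tree that returns both a prediction and the rules that produced it. This is not drawn from the NIST draft or the proposed legislation; the loan-screening features and data are entirely made up, and it assumes Python with scikit-learn installed.

```python
# Toy illustration: an automated decision that ships with its own explanation.
# The "loan screening" features and data are synthetic and hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic training data: [income_k, debt_ratio, years_employed]
X = np.array([
    [35, 0.6, 1], [80, 0.2, 5], [50, 0.4, 3], [20, 0.8, 0],
    [95, 0.1, 10], [60, 0.5, 2], [30, 0.7, 1], [70, 0.3, 6],
])
y = np.array([0, 1, 1, 0, 1, 0, 0, 1])  # 1 = approve, 0 = decline

feature_names = ["income_k", "debt_ratio", "years_employed"]
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def decide_with_explanation(applicant):
    """Return the decision plus the threshold tests that produced it."""
    decision = int(model.predict([applicant])[0])
    # decision_path lists every internal node the applicant passed through.
    nodes = model.decision_path([applicant]).indices
    tree = model.tree_
    steps = []
    for node in nodes:
        if tree.children_left[node] == tree.children_right[node]:
            continue  # leaf node, no test to report
        feat = tree.feature[node]
        threshold = tree.threshold[node]
        op = "<=" if applicant[feat] <= threshold else ">"
        steps.append(f"{feature_names[feat]} = {applicant[feat]} {op} {threshold:.2f}")
    return decision, steps

decision, explanation = decide_with_explanation([45, 0.55, 2])
print("decision:", "approve" if decision else "decline")
print("because:", "; ".join(explanation))
print(export_text(model, feature_names=feature_names))  # the full model logic
```

Whether a rule trace like this is “meaningful” to the person being assessed, as the second principle demands, is exactly the open question, and for large neural models there is no comparably compact trace to print out at all.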
One of the insights we noted in that issue was that most explanations for decisions in the pre-automated world are unsatisfactory. While courts of law go through considerable effort to explain and justify their decisions, that does not make those explanations accessible or meaningful to lay people. And only a tiny fraction of the decisions we encounter come from the courts. Most come from opaque systems and arbitrary managers who offer little to no explanation at all.
What makes the government think that the use of automation will improve any of this? Most organizations cannot explain themselves as it is, let alone once they adopt technology they will almost certainly not understand.
In this context of pervasive technological ignorance, is a commitment to algorithmic transparency just an empty promise to enable automated decision making systems that cannot be held accountable or used responsibly?
Perhaps we should demand proof that algorithmic transparency is possible and viable before accepting it as a pre-condition for radically expanding automated decision making.
Don’t get me wrong, I do think such transparency and explanations are theoretically possible; they just don’t currently exist. They also face tremendous resistance in an industry that recoils at any restraint or regulation. There are still many machine learning experts who assert that algorithmic transparency is impossible given the growing scale and speed of machine learning models and networks.
This debate is so crucial, and also complicated, that it has caused some governments to stop using automated decision making systems.
“Most systems are implemented without consultation with the public, but critics say this must change. The use of artificial intelligence or automated decision-making has come into sharp focus” as technocrats deploy them to end-run narrative explanation. https://t.co/7sU9sz6VBx
— Frank Pasquale (@FrankPasquale) August 25, 2020
Councils are quietly scrapping the use of computer algorithms in helping to make decisions on benefit claims and other welfare issues, the Guardian has found, as critics call for more transparency on how such tools are being used in public services.
It comes as an expert warns the reasons for cancelling programmes among government bodies around the world range from problems in the way the systems work to concerns about bias and other negative effects. Most systems are implemented without consultation with the public, but critics say this must change.
The use of artificial intelligence or automated decision-making has come into sharp focus after an algorithm used by the exam regulator Ofqual downgraded almost 40% of the A-level grades assessed by teachers. It culminated in a humiliating government U-turn and the system being scrapped.
The fiasco has prompted critics to call for more scrutiny and transparency about the algorithms being used to make decisions related to welfare, immigration, and asylum cases.
To what extent is the Canadian government creating a hole in their digital charter by embracing and depending upon algorithmic transparency before the concept has been properly developed?
Not only does this risk the concept being watered down so that it becomes achievable, it also means the broader regulatory framework depends upon a regulatory method that is unproven and potentially ineffective, especially if the explanations are as opaque and biased as the black box systems themselves.
It’s also worth noting the language around “consumer privacy” rather than “citizen privacy”. All of this is framed as marketplace relations rather than societal ones. That is a problematic assumption, given that the foundations upon which this proposed privacy upgrade is built are precisely what make it so flawed:
NEW: See our media release: "Privacy Bill C-11 Hollows out Consumer Privacy." https://t.co/Klyjq9pzcP #cdnpoli #privacybill #consumerprivacy 1/2
— Public Interest Advocacy Centre (@CanadaPIAC) November 17, 2020
Consumer privacy in Canada will be destroyed if Bill C-11, the Digital Charter Implementation Act, 2020 [including Part 1 – Consumer Privacy Protection Act], is passed, said the Public Interest Advocacy Centre (“PIAC”) today. 2/2
— Public Interest Advocacy Centre (@CanadaPIAC) November 17, 2020
This new Bill is intended to replace and strengthen the federal Personal Information Protection and Electronic Documents Act (“PIPEDA”) but conversely hurts consumer privacy by removing key consent requirements.
PIAC Executive Director, John Lawford stated: “We are aghast that the federal government feels it can weaken consumer privacy with a doublespeak Bill that removes a consumer’s right to protect his or her personal information that is used for any ‘business activity’ if it is ‘de-identified’ or used for what the government deems is a ‘socially beneficial purpose’. This counterproductive Bill should be withdrawn and rewritten to protect consumers, not to favour big business,” he added.
Just as algorithmic transparency is presently a myth, well, so too is de-identified or anonymized data. We touched upon this in an issue from over a year ago, and the research in this area continues to advance. The methods for de-anonymizing data continue to grow, and the viability of so-called de-identified data sets is radically decreasing.
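A toy example of why “de-identified” offers weak protection is the classic linkage attack: a dataset stripped of names is simply joined against a public dataset on quasi-identifiers like postal code, birth date, and sex. The records and column names below are entirely made up (and it assumes Python with pandas); the point is only that removing direct identifiers does not prevent re-identification when the remaining attributes are unique enough.

```python
# Toy linkage attack on synthetic data: no real records are used.
import pandas as pd

# A "de-identified" health dataset: names removed, quasi-identifiers kept.
health = pd.DataFrame({
    "postal_code": ["M5V 2T6", "K1A 0B1", "M5V 2T6"],
    "birth_date":  ["1985-03-12", "1990-07-01", "1972-11-30"],
    "sex":         ["F", "M", "F"],
    "diagnosis":   ["diabetes", "asthma", "hypertension"],
})

# A public dataset (say, a voter roll or social profile) with names attached.
public = pd.DataFrame({
    "name":        ["Alice Tremblay", "Bob Singh", "Carol Wong"],
    "postal_code": ["M5V 2T6", "K1A 0B1", "M5V 2T6"],
    "birth_date":  ["1985-03-12", "1990-07-01", "1972-11-30"],
    "sex":         ["F", "M", "F"],
})

# Joining on the quasi-identifiers re-attaches names to "anonymous" records.
reidentified = health.merge(public, on=["postal_code", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Latanya Sweeney famously estimated that ZIP code, birth date, and sex alone uniquely identify roughly 87% of the US population, which is the real-world version of this toy join.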
Yet just as with the flawed polling industry, there is both a reluctance to acknowledge this trend and an incentive to ignore it.
Rather than address these flaws, rather than provide a strong foundation for a new era of privacy and data protection, the government will instead rely upon these myths to make it seem as if they’re taking action, when in fact they’re building a Potemkin village of privacy protections.
Of course the legislation is still in its early days, and can (and should) evolve. This is a minority parliament, so in theory other parties can have input. However I’m not sure how widespread the relevant expertise is among the opposition parties.
We’ll keep an eye on this and let you know if and how it all plays out. #metaviews