Can an AI registry open the black box society?

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews

Or perhaps we should ask Pandora first?

One of the primary reasons that a black box society is anti-democratic is the inability to scrutinize or understand the decisions algorithms make about us.

This is compounded by the micro-political decisions that algorithms are tasked with making. Little things like traffic lights, service eligibility, and service prioritization. Technocrats might not regard such systems as being part of a democratic society, but the more we automate and move aspects of government into algorithmic systems, the more we abdicate our power and responsibility as humans.

One of the challenges in countering this process is to audit what decisions are being made by algorithms, and then demand that they be transparent. In an ideal world this happens before systems are implemented or deployed, yet we don't live in that world, at least not yet. Instead we find ourselves reacting to these systems, and having to track them down and pry them open.

With that in mind, I’ve been proposing that AI in general be regulated, starting with licensing. Not restrictive licensing per se (although that is part of it) but rather licensing for the purposes of auditing and understanding. Licensing as a mechanism towards transparency.

Licensing in general tends to act as a data collection or information gathering process for a government. It enables the ability to know who is active and what they’re doing. In this case, who is using AI, and what are they doing with AI.

This would certainly enable the restrictive aspect. For example, preventing people from using AI to harm, hurt, threaten, or discriminate against people. Or from using AI for monopolistic practices or other activities that raise antitrust concerns.

However the primary purpose of the licensing would be to create a registry that helps educate people as to what is happening with the technology and society. It would help map out how the technology is being used and why.

AI is too powerful a tool to be used in secret. Rather it merits the kind of transparency and responsibility that comes with such power. Mandating an AI license would not hinder innovation but rather spread it, so that more people understand what is possible and more importantly what is responsible.

While I do not know of any such licensing, we are starting to see public registries emerge, albeit only in the context of public sector usage.

Amsterdam and Helsinki are blazing a trail when it comes to the accountability of automated systems, and their respective registries are a good start and an example for others to follow. Here are the links to both:

These are just over a month old, and only list a few projects, but their precedent is important. They set a new standard and expectation for what public sector organizations should adhere to when it comes to the ethics and transparency of AI.

On September 28, 2020, at the Next Generation Internet Summit, Helsinki and Amsterdam announced the launch of their open AI registers. They are the first cities to offer such a service in the world (City of Helsinki 2020). The AI registers describe what, where, and how AI applications are being used in the two municipalities; which datasets were used for training purposes; how algorithms were assessed for potential bias or risks; and how humans use the AI services. The registers also offer a feedback channel, which is meant to enable more participation, with information about the city department and the person responsible for the AI service. The goal is to make the use of urban AI solutions as responsible, transparent, and secure as other local government activities, to improve services and citizens’ experiences.
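To make the shape of such a register concrete, here is a minimal sketch of what a single register entry might hold, based on the categories the passage above describes (what, where, and how the AI is used; training datasets; bias or risk assessment; human oversight; and a feedback contact). The field names and the example values are illustrative assumptions, not the actual schema used by either city.

```python
from dataclasses import dataclass


# Hypothetical register entry. Field names mirror the categories the
# Helsinki and Amsterdam registers are described as covering; they are
# illustrative, not copied from either city's real schema.
@dataclass
class RegisterEntry:
    name: str                 # what the AI service is
    department: str           # where it is used (responsible city unit)
    purpose: str              # how and why it is applied
    training_datasets: list   # datasets used for training
    risk_assessment: str      # how bias and risk were evaluated
    human_oversight: str      # how humans use or supervise the service
    feedback_contact: str     # channel for citizen feedback


# An invented example service, for illustration only.
entry = RegisterEntry(
    name="Parking enquiries chatbot",
    department="Urban Environment Division",
    purpose="Answers residents' routine parking questions",
    training_datasets=["FAQ corpus", "anonymized service logs"],
    risk_assessment="Reviewed for biased or misleading answers",
    human_oversight="Staff monitor conversations and correct errors",
    feedback_contact="ai-register@example.city",
)
```

The point of the sketch is that each entry pairs a technical description with a named, accountable contact, which is what turns a list of systems into a feedback channel.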

The AI registers are currently being populated. Anyone can check them. At the time of writing, there are 5 AI services available in the Helsinki AI Register and 3 in the Amsterdam AI Register. The plan is eventually to have all the cities' AI services listed in the registers. At the moment, eight services are not many, but, despite their still limited number, the overall project is extremely interesting for several reasons, and one can learn a few lessons from it. Let us see them.

By making the use and governance of these systems transparent and publicly accessible, while also providing means for public engagement, the registries present AI as a public service.

Following the "normalization" of AI, it is interesting to see how the project presents AI as just another utility. AI is increasingly offered as a service, or AIaaS (Newman 2020), especially in the case of machine learning and natural language processing capabilities. Yet AIaaS is a bit of an understatement, to say the least, since the truth is that, contrary to gas or water, AI is a new form of mindless agency into which one can tap to deal with problems that otherwise would require human intelligence and perhaps a huge (sometimes unfeasible) amount of other resources, like time. At the same time, part of the value of the project seems to lie also in the recognition that AI as a "utility" is a great means to deal with increasingly complex urban environments. As the population of the world moves to live more and more in megacities, the latter may not become "smart", but they can certainly be managed much more intelligently by using AI systems that provide more effective and efficient services, in ways that are open and transparent to public scrutiny and feedback.

The benefit to the operators of these systems, in this case municipal governments, is that it legitimizes the use of the system, while also cultivating greater trust, both in the system and in the government overall. This is the pay-off that comes with public engagement and public education: obtaining buy-in for systems that would otherwise be treated with suspicion and fear.

The flip side of fostering such expectations of ethical behaviour, however, is that it also engenders similar expectations of action if said ethics are not followed.

A fair question, and one that emphasizes the political nature of this technology and the way it is used. Especially when used in and by the public sector, using public funds.

Open source government, while relatively new, is a necessary response to the anti-democratic threat posed by the black box systems that currently comprise AI.

Similarly, open source government, as the latest iteration of democratic government, is infectious, and like a xerocracy, easy to copy.

Will this kind of transparency be demanded by citizens of their governments? Could it become a kind of democratic intervention to mitigate the anti-democratic tendencies of existing black box algorithmic systems?

More importantly, how might it be applied to the private sector? Should the private sector enjoy the privilege of using this technology in secret, or should their algorithms be subject to greater public scrutiny and oversight?

Where should the line be drawn? To what extent is AI not just a tool, but also a weapon? Or a drug? Or something so powerful that it requires a certain level of oversight and transparency? #metaviews
