The National Security Commission on Artificial Intelligence

easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews


An earnest effort amidst an oxymoronic administration


While this pandemic has disrupted the otherwise unstoppable rise of automation, artificial intelligence (AI), and the culture of data-driven decision making, efforts to normalize and gloss over the governance issues of these technologies have continued unabated.

The US National Security Commission on Artificial Intelligence was initiated by the US Congress in 2018 “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”

While the commission took some time to select commissioners, and was partially delayed by the pandemic, the group is now floating trial balloons and preliminary ideas as it works through its larger policy process.

The commission has released a number of reports and white papers that deploy the usual rhetoric and responsibility lingo that governments around the world are using with regard to AI and its potential.

None of this is final; it reflects some of the discussion and debate the commission has been engaged in. They produced this one-page summary to highlight the current range of (potential) recommendations:


In particular, the recommendations surrounding education and public service caught my attention.

As a digital fellow with the Canadian School of Public Service's Digital Academy, this is something that intersects with a range of my interests, although the Canadian version of what this US commission is proposing is alarmingly tiny. One of the challenges we've struggled with is the issue of scale.

To bolster U.S. competitiveness in AI, the council recommends steps such as creating a National Reserve Digital Corps, modeled on military reserve corps, to give machine learning practitioners a way to contribute to government projects on a part-time basis. Unlike the U.S. Digital Service, which asks tech workers to serve for one full year, the NRDC would ask for a minimum of 38 days a year.

Commissioners also recommend creating an accredited university called the U.S. Digital Services Academy. Graduates would pay for their education with five years of work as civil servants. Classes would include American history, as well as mathematics and computer science. Students would participate in internships at government agencies and in the private sector.

A joint Stanford-NYU study found that only a small percentage of federal agencies are using complex forms of machine learning and that trustworthiness of systems used for what it calls algorithmic governance will be critical to citizen trust. Released in February, the report urges federal agencies to acquire more internal AI expertise.

The Canadian federal government is large compared to other Canadian organizations, but tiny compared to the US federal government. Upgrading or training the many staff and professionals who work in these organizations is no easy job, and scaling such efforts is part of the problem. It's easy to prototype these sorts of initiatives, but having them apply to these massive public sector organizations is another task entirely.

That's partly why these recommendations from the NSCAI should be taken seriously and subjected to critical scrutiny.

The government’s highest priority investment in artificial intelligence needs to be its AI workforce. It is not adopting AI as quickly as the private sector, and potentially as quickly as our adversaries. Most government teams developing AI solutions we have met face high barriers when they begin a project. They include limited access to data sets, constrained system authorities, and less computing power than they need. As a result, projects are slower and more expensive than they might be, delaying the fielding of systems that can decrease costs, increase capabilities, and help improve national security.

An educated, trained, and empowered AI workforce can act as a catalyst, enabling the government to create and adopt AI capabilities far more quickly and effectively than it does now. If a workforce can manage data, purchase and maintain compute; if domain knowledge and AI experts can work together, then it will create and adopt AI capabilities more quickly and more effectively. Just as importantly, a well-trained workforce will better understand when and how to purchase commercial solutions for immediate implementation, when to adapt commercial solutions to organizational needs, and when to develop custom software. Other priorities, such as internal projects, acquisition and contracting reform, and improving public-private partnerships will all improve faster and more effectively with an AI literate workforce.

However, it's also crucial to recognize that education is never neutral, and that increasingly the open-ended approach to learning is being replaced with a far more focused and purpose-driven approach.

This understandably leads people to jump to conclusions about the role of said education.

This concern about the intersection of pedagogy, ideology and technology is not coming out of nowhere, but reflects the broader politicization of the commission’s work:


This argument seems as empty as the one that suggested economic engagement with China would induce democratic values.


The US technology industry continues to pretend that what it does and proposes is not ideological or political, when it plainly is to the rest of us. This is partly why the work of this commission will remain contentious, and why its legitimacy will continue to be questioned.

National security can be used as a cover to shroud a process in secrecy, and that may have been the case here, had it not been for a recent court ruling.

In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI)—a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission’s activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, “Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations.”

That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.

The commission’s most recent meeting, held last week, was the first that could be observed by the public. It revealed some interesting and relevant dynamics:


The Twitter account above produced a thorough summary and play-by-play of the public meeting that I recommend reading, although this tweet pretty much sums it up:


If you’d like to watch or skim the meeting yourself, here’s the video:

The governance of AI is arguably one of the most important policy areas facing society. It provides the opportunity to rethink many of the values we take for granted, as well as the institutions we depend upon.

However, the broader process surrounding the governance of AI is dominated by technology companies that cannot, and perhaps do not want to, bring legitimacy to the endeavour. Instead, in their desire to reinforce their power and position, they may be undermining their chance to get it right.

After all, let us not forget or overlook the kind of reaction these initiatives inspire among the already paranoid US public:

Here at the Academy of the Impossible we also have an AI commission, led by our three rabbits, who will do their best to replicate life, and maybe even intelligence.
