easyDNS is pleased to sponsor Jesse Hirsh's "Future Fibre / Future Tools" segments of his new email list, Metaviews
An earnest effort amidst an oxymoronic administration
While this pandemic has disrupted the otherwise unstoppable rise of automation, artificial intelligence (AI), and the culture of data-driven decision making, efforts to normalize and gloss over the governance issues of these technologies have continued unabated.
The US National Security Commission on Artificial Intelligence was initiated by the US Congress in 2018 “to consider the methods and means necessary to advance the development of artificial intelligence, machine learning, and associated technologies to comprehensively address the national security and defense needs of the United States.”
While the commission took some time to select commissioners, and was partially delayed by the pandemic, the group is now starting to float trial balloons and preliminary ideas as it works through its larger policy process.
#NSCAI Second Quarter Recommendations have been submitted to Congress! Learn more here: https://t.co/ytY6tlbEpV pic.twitter.com/rVLyyix07t
— NSCAI (@AiCommission) July 22, 2020
This commission has released a number of reports and white papers that rehearse the usual rhetoric and responsibility lingo that governments around the world are using with regard to AI and its potential.
None of this is final; it reflects some of the discussion and debate the commission has been engaged in. They produced this one-page summary to highlight the current range of (potential) recommendations:
We have condensed 180 pages of #NSCAI Q2 Recommendations down to one and would love to hear your thoughts and feedback! pic.twitter.com/tYC5Idjm1e
— NSCAI (@AiCommission) July 22, 2020
In particular, the recommendations surrounding education and public service caught my attention.
#NSCAI recommends the government increase its efforts to train and recruit technically skilled civilians by expanding Scholarship for Service programs, creating a National Reserve Digital Corps, and establishing a U.S. Digital Service Academy. pic.twitter.com/CRi8sotGFH
— NSCAI (@AiCommission) July 22, 2020
As a digital fellow with the Canadian School of Public Service's Digital Academy, this is something that intersects with a range of my interests, although the Canadian version of what this US commission is proposing is alarmingly tiny. One of the challenges we've struggled with is the issue of scale.
The National Security Council on AI (#NSCAI)
recommends creating a National Reserve Digital Corps
to give #machinelearning practitioners a way to contribute to government projects on a part-time basis. https://t.co/CzkZ3Z8zOj #Datascience #AI #ML
— Philippe Rebelo (@PhilippeRebelo) July 27, 2020
To bolster U.S. competitiveness in AI, the council recommends steps such as creating a National Reserve Digital Corps, modeled on military reserve corps, to give machine learning practitioners a way to contribute to government projects on a part-time basis. Unlike the U.S. Digital Service, which asks tech workers to serve for one full year, the NRDC would ask for a minimum of 38 days a year.
Commissioners also recommend creating an accredited university called the U.S. Digital Services Academy. Graduates would pay for their education with five years of work as civil servants. Classes would include American history, as well as mathematics and computer science. Students would participate in internships at government agencies and in the private sector.
A joint Stanford-NYU study found that only a small percentage of federal agencies are using complex forms of machine learning and that trustworthiness of systems used for what it calls algorithmic governance will be critical to citizen trust. Released in February, the report urges federal agencies to acquire more internal AI expertise.
The Canadian federal government is large compared to Canadian organizations, but tiny compared to the US federal government. Upgrading or training the many staff and professionals who work in these organizations is no easy job. Scaling such efforts is part of the problem. It’s easy to prototype these sorts of initiatives, but having them apply to these massive public sector organizations is another task entirely.
That’s partly why these recommendations from the NSCAI should be taken seriously and subject to critical scrutiny.
Check out #NSCAI Commissioner @MignonClyburn's feature in @TheHillOpinion: The Artificial Intelligence Investment the Government Must Make https://t.co/fpknPPX0Vv
— NSCAI (@AiCommission) July 21, 2020
The government’s highest priority investment in artificial intelligence needs to be its AI workforce. It is not adopting AI as quickly as the private sector, and potentially as quickly as our adversaries. Most government teams developing AI solutions we have met face high barriers when they begin a project. They include limited access to data sets, constrained system authorities, and less computing power than they need. As a result, projects are slower and more expensive than they might be, delaying the fielding of systems that can decrease costs, increase capabilities, and help improve national security.
An educated, trained, and empowered AI workforce can act as a catalyst, enabling the government to create and adopt AI capabilities far more quickly and effectively than it does now. If a workforce can manage data and purchase and maintain compute, and if domain knowledge and AI experts can work together, then it will create and adopt AI capabilities more quickly and more effectively. Just as importantly, a well-trained workforce will better understand when and how to purchase commercial solutions for immediate implementation, when to adapt commercial solutions to organizational needs, and when to develop custom software. Other priorities, such as internal projects, acquisition and contracting reform, and improving public-private partnerships will all improve faster and more effectively with an AI literate workforce.
However, it's also crucial to recognize that education is never neutral, and that increasingly the open-ended approach to learning is being replaced with a far more focused, purpose-driven approach.
This understandably leads people to jump to conclusions as to the role of said education.
Eric Schmidt plans to launch US Digital Service Academy, with unanimous support of the National Security Commission on AI + recommendation to Congress. Complete merger of military-industrial-academic complex // cold-war style digital escalation vs China https://t.co/J6Xk7vJKrt
— Matthew Claudel (@matthewclaudel) July 22, 2020
This concern about the intersection of pedagogy, ideology and technology is not coming out of nowhere, but reflects the broader politicization of the commission’s work:
There's a lot to say about the NSCAI (beyond the USDS Academy) and I'll leave most of it for others, but it's important to understand that the perceived threat is Chinese AI supremacy and that the belief is that US AI supremacy will inherently promote democratic values.
— Emma Lurie (@emma_lurie) July 22, 2020
This argument seems as empty as the one that suggested economic engagement with China would induce democratic values.
Also telling: “You absolutely suck at machine learning,” Mr. Schmidt told General Thomas “If I got under your tent for a day, I could solve most of your problems.” General Thomas said he was so offended that he wanted to throw Mr. Schmidt out of the car. https://t.co/NfKH4h1RwZ
— Emma Lurie (@emma_lurie) July 22, 2020
The US technology industry continues to pretend that what it does and proposes is not ideological or political, when it screams as such to the rest of us. This is partly why the work of this commission will remain contentious, and why its legitimacy will remain in question.
National security can be used as a cover to shroud a process in secrecy, and that may have been the case here, had it not been for a recent court ruling.
The NSCAI, currently chaired by Eric Schmidt, has been hiding its activities? A judge rules that this must change?
Increasing Transparency at the National Security Commission on Artificial Intelligence – Lawfare https://t.co/1h6J3WgkiB
— Scott C. Lemon (@humancell) July 4, 2020
In 2018, Congress established the National Security Commission on Artificial Intelligence (NSCAI)—a temporary, independent body tasked with reviewing the national security implications of artificial intelligence (AI). But two years later, the commission’s activities remain little known to the public. Critics have charged that the commission has conducted activities of interest to the public outside of the public eye, only acknowledging that meetings occurred after the fact and offering few details on evolving commission decision-making. As one commentator remarked, “Companies or members of the public interested in learning how the Commission is studying AI are left only with the knowledge that appointed people met to discuss these very topics, did so, and are not yet releasing any information about their recommendations.”
That perceived lack of transparency may soon change. In June, the U.S. District Court for the District of Columbia handed down its decision in Electronic Privacy Information Center v. National Security Commission on Artificial Intelligence, holding that Congress compelled the NSCAI to comply with the Federal Advisory Committee Act (FACA). Under FACA, the commission must hold open meetings and proactively provide records and other materials to the public. This decision follows a ruling from December 2019, holding that the NSCAI must also provide historical documents upon request under the Freedom of Information Act (FOIA). As a result of these decisions, the public is likely to gain increased access to and insight into the once-opaque operations of the commission.
The commission’s most recent meeting, held last week, was the first that could be observed by the public. It revealed some interesting and relevant dynamics:
Josh Davisson asks a great question: "Why not urge *Congress* to establish human rights safeguards for AI systems?" rather than simply recommending best practices.
The NSCAI immediately turns the question over to the CSO of Microsoft, Eric Horvitz, to say it's too complicated. pic.twitter.com/pslAHD0wd7
— Tech Inquiry (@tech_inquiry) July 20, 2020
The Twitter account above produced a thorough summary and play-by-play of the public meeting that I recommend reading, although this tweet pretty much sums it up:
We hope you enjoyed this recap of how the CEO of Oracle, former CEO of Google, Director of Google Cloud AI, CSO of Microsoft, board member of SAIC, and former CEO of In-Q-Tel are helping determine how their own industry is regulated and funded by the DoD.
Fin/
— Tech Inquiry (@tech_inquiry) July 20, 2020
If you’d like to watch or skim the meeting yourself, here’s the video:
The governance of AI is arguably one of the most important policy areas facing society. It provides the opportunity to rethink many of the values we take for granted, as well as the institutions we depend upon.
However, the broader process surrounding the governance of AI is dominated by technology companies that cannot, and perhaps do not want to, bring legitimacy to the endeavour. Instead, in their desire to reinforce their power and position, they may be undermining their chance to get it right.
After all, let us not forget or overlook the kind of reaction these initiatives inspire among the already paranoid US public:
Humanity is being downgraded & enslaved.
This virus is a front for an A.I. take over.
Techno-Fascists Are Attacking Us.
DARPA & NSCAI #WW3 Has Begun!!
— David DeGraw (@davidVdegraw) July 23, 2020
Here at the Academy of the Impossible we also have an AI commission, led by our three rabbits, who will do their best to replicate life, and maybe even intelligence.
Ida B. Wells trying to keep it cool, while Peter Kropotkin the Rabbit cannot resist snuggling with Rosa Luxemburg pic.twitter.com/aQBufQm9cD
— Jeanette Herrle (@jeanetteherrle) July 27, 2020