easyDNS is pleased to sponsor Jesse Hirsh’s “Future Fibre / Future Tools” segments of his new email list, Metaviews
Reflection on how children are an essential part of automation
The industrial revolution would not have been possible without child labour. Is this current industrial revolution, driven by automation and data, also dependent upon child labour (at least conceptually)?
In September we published an issue that pondered whether children should have the right to privacy. No surprise, we think kids do deserve privacy, especially given that at some point they tend to understand the technology better than their parents do.
Pitting parents against schools is an easy way to illustrate the privacy dynamics of password control and other forms of regulating how kids use technology. However, it does not necessarily prepare parents for how to approach their own relationship with technology and their kids.
Most children who have grown up in the digital era have figured out how to protect their privacy from their parents. Whether via fake social media accounts or encrypted tools at home (like VPNs) to avoid further scrutiny, young people are forced to innovate to evade unwanted attention from adults.
However this adversarial approach is unnecessary and avoidable. Parents and children should be working together to protect each other and foster better privacy practices. Using technology is about curiosity and learning, which is best done collaboratively, and with trust.
The idea that children should control their own passwords upsets a lot of parents who do not trust the Internet (or their kids). Often discussions around children’s privacy get overruled by concerns around children’s safety, even though the two are interdependent.
Many parents foolishly think they can do a better job of protecting their children online, all while using a baby monitor or smart camera that still has its default password or configuration.
Ideally parents and children are on the same side, but as with most Internet issues, there are far more than two sides, or two parties.
In the case of children, there’s school, there’s entertainment, and then there’s friends and other family members, who for much of recent memory have largely been online contacts.
While we may take for granted our own relationship with algorithms, we must also reflect on the relationships children have with algorithms. Babysitter, guide, overseer, intermediary, gatekeeper, gamekeeper, and so on. There’s a growing list of algorithms our kids interact with, and I imagine most parents do not question or interrogate those algorithms as to the nature of the relationships they have with the child.
There is a general assumption that AI is presently being developed by adults, but that’s not entirely true. Control of the development is currently in the hands of adults, but children definitely participate in the data used to train AI, as well as in the interactions with algorithms on platforms like YouTube and TikTok.
UNICEF appears to be one of the few organizations that is, at the very least, talking about the role kids can and should play in the development of AI.
Although as is often the case with NGOs and charities, their language is so depoliticized as to be almost meaningless. In this case it stops at the idea that adults should take children into consideration, rather than going further and acknowledging, let alone encouraging, that kids participate in the development of technologies that will greatly impact their lives.
Policy guidance on Artificial Intelligence for children https://t.co/ilAo0Jdb6V #ai #artificialintelligence #machinelearning #deeplearning #aiethics pic.twitter.com/zv4LDbQu1w
— Nige Willson (@nigewillson) January 17, 2021
Artificial Intelligence (AI) systems are fundamentally changing the world and affecting present and future generations of children. Children are already interacting with AI technologies in many different ways: they are embedded in toys, virtual assistants and video games, and are used to drive chatbots and adaptive learning software. Algorithms provide recommendations to children on what videos to watch next, what news to read, what music to listen to and who to be friends with. In addition to these direct interactions between children and AI, children’s lives and well-being are also indirectly impacted by automated decision-making systems that determine issues as varied as welfare subsidies, quality of health care and education access, and their families’ housing applications. This impact has implications for all children, including those from developing countries who may be equally impacted by lost opportunities as a result of not being able to enjoy the benefits of AI systems.
As the world’s leading organization for children, UNICEF recognizes the potential that AI systems have for supporting every child’s development. We are leveraging AI systems to improve our programming, including mapping the digital connectivity of schools, predicting the spread of diseases and improving poverty estimation. While AI is a force for innovation and can support the achievement of the Sustainable Development Goals (SDGs), it also poses risks for children, such as to their privacy, safety and security. Since AI systems can work unnoticed and at great scale, the risk of widespread exclusion and discrimination is real. As more and more decisions are delegated to intelligent systems, we are also forced, in the words of a UN High Level Panel, to “rethink our understandings of human dignity and agency, as algorithms are increasingly sophisticated at manipulating our choices.” For children’s agency, this rethinking is critical.

Due to the extensive social, economic and ethical implications of AI technologies, governments and many organizations are setting guidelines for its development and implementation. However, even though the rights of children need acute attention in the digital age, this is not being reflected in the global policy and implementation efforts to make AI systems serve society better. Simply put: children interact with or are impacted by AI systems that are not designed for them, and current policies do not address this.

Furthermore, whatever is known about how children interact with and are impacted by AI is just the start. The disruptive effects of AI will transform children’s lives in ways we cannot yet understand, for better or for worse. Our collective actions on AI today are critical for shaping a future that children deserve.
Efforts to democratize the benefits of AI systems for all children urgently need to be broadened. The first step is to recognize the unique opportunities and risks that AI systems represent for children, and then to act to leverage and mitigate them, respectively, in ways that recognize the different contexts of children, especially those from marginalized communities. Children’s varied characteristics, such as their developmental stages and different learning abilities, need to be considered in the design and implementation of AI systems.
It bothers me that even a document as well-meaning as this still regards children as passive consumers of information, rather than active producers who engage with the world they encounter.
AI is not a passive medium. It is inherently an interactive (and iterative) medium that changes based on usage and engagement.
Similarly, I’d take issue with the idea that AI is not currently designed for children, as YouTube suggests otherwise. Perhaps a more accurate frame would be to say that AI is not currently designed for children’s best interests. These systems are, however, designed to hold children’s attention, just as they hold ours.
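To make that feedback loop concrete, here’s a toy sketch in Python of how engagement-driven recommendation works in principle. This is emphatically not any platform’s actual system; the video categories, click probabilities, and the epsilon-greedy strategy are all illustrative assumptions.

```python
# Toy sketch (not any platform's real system) of the feedback loop that
# makes recommendation AI "interactive and iterative": an epsilon-greedy
# bandit that drifts toward whatever a simulated child keeps clicking.
import random

videos = ["craft tutorials", "unboxing toys", "math puzzles", "prank clips"]
shows = {v: 0 for v in videos}   # how often each category was recommended
clicks = {v: 0 for v in videos}  # how often it was clicked when shown

def recommend(epsilon=0.1):
    """Mostly exploit the best-performing category; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(videos)
    return max(videos, key=lambda v: clicks[v] / shows[v] if shows[v] else 0.0)

def child_clicks(video):
    """Hypothetical viewer who is most drawn to prank clips."""
    return random.random() < (0.8 if video == "prank clips" else 0.2)

for _ in range(1000):
    v = recommend()
    shows[v] += 1
    clicks[v] += int(child_clicks(v))

# The loop concentrates on whatever held attention, not what was worthwhile.
print({v: round(shows[v] / 1000, 2) for v in videos})
```

Run long enough, the loop concentrates recommendations on whatever maximizes clicks, with no notion of whether that serves the viewer’s best interests; that gap is precisely the governance question.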
It is fair to argue, however, that AI is transforming, and will continue to transform, children’s lives in ways we cannot yet understand. That in and of itself is worth considering in terms of AI governance and transparency policies.
According to some of the tech experts who started the major social media forums, it’s not a coincidence that the majority of Gen Z has anxiety and a series of mental health issues. Most of us were either in primary school or junior high school when we first had social media.
— Duchess of Hastings (@PsilocybinQ) January 11, 2021
We’ve been up against artificial intelligence since we were children. Who vetted these super computers for our psychological development?
— Duchess of Hastings (@PsilocybinQ) January 11, 2021
Ironically, these issues are not new, even as a generation comes of age having spent its entire life mediated by algorithms.
Is more education the answer?
Why Children Need To Learn About Artificial Intelligence #ArtificialIntelligence #ui via https://t.co/fzHFyvK11X https://t.co/IegbVEdb6y
— Tetraˌtɛtrəˈɡræmətɒn | יהוה (@miraclegers) January 9, 2021
Unfortunately, we cannot fully predict the effects of AI on society. The advancement of AI may lead to social manipulations on a massive scale, a proliferation of autonomous weapons, a heightened loss of privacy, and total surveillance by dictatorial states. Although AI-powered systems have a capacity to be much safer than the traditional ones that rely on human control, occasional mistakes could be devastating for a self-driving car, a robotic surgery, or an intelligent power grid. Due to such high stakes and the counter-intuitive nature of AI, it would be wise to have it introduced to children early, not only as an academic subject but also as a living experience.
Presently, students do not get exposure to AI concepts, challenges, and software applications at school. In response, innovative educators and education companies are taking it upon themselves to create AI curricula for middle and high school students, who can benefit from learning how to develop AI algorithms and recognize their performance failures due to biased data. One of these curricula is Inspirit AI, developed and run by Stanford and MIT alumni and graduate students. It offers “AI boot camps” for high school students from around the world. The company’s curriculum development is led by Daniela Ganelin, an MIT Computer Science graduate with a concentration in AI and years of local and international teaching experience.
A few companies are also aiming at providing smooth entry into the world of AI for younger students. One of the best known examples is the Teachable Machine by Google that allows children with no coding skills to train an AI program to recognize images, sounds, and poses while instantly observing the results of their choices. Online chatbots (AI conversational programs, such as Mitsuku) may serve as another gateway into the world of AI. Engaging and funny, chatbots demonstrate the successes and failures of AI in understanding human language.
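For readers curious what a tool like Teachable Machine is doing under the hood, here is a minimal sketch of the transfer-learning pattern such tools typically rely on. This is an illustration, not Google’s actual code; it assumes TensorFlow is installed and that an ./examples folder holds one subfolder of images per class.

```python
# Minimal sketch of the transfer-learning pattern behind tools like
# Teachable Machine: a frozen pretrained vision model extracts features,
# and only a small classifier head is trained on the child's examples.
# Assumes ./examples/ holds one folder per class (e.g. examples/cat/).
import tensorflow as tf

# Load labelled images from folders; folder names become class labels.
train = tf.keras.utils.image_dataset_from_directory(
    "examples", image_size=(224, 224), batch_size=16)
num_classes = len(train.class_names)

# Frozen pretrained backbone: the "hard" learning is already done.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, pooling="avg")
backbone.trainable = False

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale to [-1, 1]
    backbone,
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # trainable head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=5)  # a handful of examples per class often suffices
```

The design point is that the heavy lifting (the pretrained backbone) is already done; a child’s handful of examples only trains the thin classification layer on top, which is why results appear almost instantly.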
If we leave AI literacy to private companies, how do we expect children, or anyone, to develop critical literacies?
This is the paradox. There’s always been a kind of concern about children and AI. On a superficial level this has been the “get kids to code” mantra, which correlates anxiety about future employment with angst over agency and machine supremacy. On a deeper level, however, it is a blind faith that more data is better: that subjecting kids to pervasive surveillance is good, both for their safety and for their (future) health and well-being.
Children today will generate 500x the data in #patientrecords than their grandparents. Is your #enterprise using #ai to garner #insights for making #business decisions? See how #HPE is helping #healthcare #customers prepare for the #ageofinsight https://t.co/mudMcwN4Cu
— Manoj Suvarna (@msuvarna) January 11, 2021
Encouraging children to learn more about their tools and the automated systems that govern us all is great. However there’s reason to be skeptical about where this learning comes from, and to ask whether the private sector is really the best place for this essential and hopefully critical pedagogy.
The “Explore Space with Over the Moon” learning path guides learners through beginning concepts in data science, machine learning and artificial intelligence. I love the imaginative ways we're teaching children about #STEM https://t.co/Oi66oR7x2U
— Deb Adeogba (@DebAdeogba) January 16, 2021
Codey Rocky introduces children to the world of Artificial Intelligence (AI) and the Internet of Things (IoT) through easy-to-use coding software (mBlock) and activities. #LogicsTeacher #LogicsPD
Learn more at https://t.co/L4aBSq2RKQ pic.twitter.com/zXvKclgrjq
— LOGICS Academy (@LOGICSAcademy) January 13, 2021
I’m not suggesting the above learning resources are bad. Quite the opposite. If you do have kids, they may be worth exploring. They’re also not unique; there’s a sea of resources online for learning almost anything you want about automation and machine learning. The doors to the industry are relatively open.
However none of this involves critical thinking or grapples with social consequences. There’s still a huge void when it comes to governance issues, especially with regard to children.
On the one hand we’re arguing that kids should be more involved, more engaged, and generally considered and included in the larger design process. On the other hand, this must also involve their parents (and family members).
If we are to truly “think of the children” then we must also think of everyone. In recognizing that kids are subjects of the technology and yet not included in its governance or design, we stumble upon the larger flaw: most people are not included, and the effect on them is not something we tend to consider.
In this context, advocating for the rights of children in AI development is a gateway to arguing for the rights of all. Children just happen to provide a moral high ground.
Here’s why all of this is relevant, though: children may also provide the blueprint for how AI moves to the next level of sophistication and power.
The Child as Hacker
“We propose that children’s learning is analogous to a particular style of programming called hacking, making code better along many dimensions through an open-ended set of goals and activities.”
open-access pdf https://t.co/4tsMf5Dn3p pic.twitter.com/5rJbQbuBuX
— hardmaru (@hardmaru) October 3, 2020
The future of automation lies in emulating the minds of children. The future of learning lies in recognizing that hacking is not a crime; it’s a pedagogy exhibited by our youth!
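A toy sketch of what the paper’s phrase “making code better along many dimensions” can look like in practice. The functions are hypothetical, chosen only to illustrate how the same routine gets “hacked” toward open-ended goals:

```python
# Toy illustration of the "child as hacker" idea: the same routine
# revised along different dimensions of "better," the way the paper
# suggests children iteratively improve their mental programs.

# Version 1: works, but only for the single case that prompted it.
def doubles_up_to_ten():
    return [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]

# Version 2: "hacked" for generality; the goal itself has expanded.
def doubles(n):
    return [2 * i for i in range(1, n + 1)]

# Version 3: "hacked" for abstraction; doubling becomes one case of many.
def multiples(factor, n):
    return [factor * i for i in range(1, n + 1)]

assert doubles(10) == doubles_up_to_ten()
assert multiples(2, 10) == doubles(10)
```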
Now talking about the child's "intuitive physics" and "intuitive psychology"; how a child will try to solve a problem, debugging as they go (makes me think of his paper in TiCS, 'The child as hacker' https://t.co/FFdZEOu0ok)
— Andrew Caines (@cainesap) November 17, 2020
Josh Tenenbaum is one of the authors of the paper above, and the tweet thread from his talk has some interesting insights. Here’s the end of the thread, with a link to the video:
Here's the recording of Josh Tenenbaum's talk https://t.co/mj2omoQzLR from the @CamLangsci Symposium on 17 November. Enjoy!
— Andrew Caines (@cainesap) December 10, 2020
The irony here is that in order to make computers (and machine learning) more accessible, we now seek to make them more human, by first understanding how humans learn.
Will this result in greater inclusion and participation from children? Or eliminate the need for their input altogether? Is the value of participation in the eye of the beholder, or in the agency of the participant?