Kids Code Jeunesse (KCJ) broadly agrees with the principles of the Mozilla Trustworthy AI Whitepaper, and we applaud the work of coordinating such a huge effort. As advocates for kids, and for the grown-ups who care for them, there are a few considerations we’d like to propose.
On August 8th, we hosted a UNESCO AI session with Algora Lab and Mila, the Quebec Artificial Intelligence Institute, for a group of teens. The full report is here. This gave us critical insights from some of the most important stakeholders: kids. We created a kid-friendly litepaper to help other grown-ups host these conversations. Members of our team also contributed to AI & Education, an international online workshop for UNESCO’s AI & Ethics recommendations with Future Society, and AI & Health, an online “delibinar” facilitated by the Montreal AI Ethics Institute.
There’s a lot of ground to cover, and Mozilla has outlined the key challenges. But the core language is still rooted in the existing industrialized system.
AI poses a threat to kids and wider society when it is opaque, incomprehensible, and mired in intellectual language. Education is part of the solution, and human needs must be centred at all stages of development.
The first issue we have is with how people are represented - or not - in the whitepaper. Users are mentioned only in the passive: as sources of data, or as the recipients of biased outcomes. Simply lumping all users and persons impacted by AI together as “consumers” is limiting.
There is certainly value in framing consumer purchasing power - but it isn’t just buyers who interact with AI. Systems that use AI touch us all.
In 2020, only three types of people are living outside of our digital society:
It is the most vulnerable, with the least purchasing power, who are most affected by biased and discriminatory AI. This can include people with “poor” credit scores because they have never had access to a credit card, or refugees with limited English having their asylum applications rejected by an automated system.
Kids, for example, are using, consuming and engaging with AI. According to Pew Research, 81% of parents let their kids watch YouTube at least on occasion, but nearly 4 in 10 parents feel their kids have seen inappropriate content on the platform. While YouTube states that recommendations are based on “quality content,” a former YouTube engineer has claimed that the algorithm favours “divisive and sensational content” to extend watch time.
With no transparency on how content is recommended to kids, they remain a captivated, yet captive, audience. They have no purchasing power, so they cannot vote with their wallets. As dependents, their engagement with AI is filtered through grown-ups, who may not always have the capacity to moderate that content.
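To make the stakes concrete, here is a minimal, hypothetical sketch of a recommender that optimizes only for predicted watch time - the objective the former engineer describes. The Video fields, titles and numbers are invented for illustration; the point is that nothing in such an objective ever asks whether content is appropriate for a child.

```python
from dataclasses import dataclass

# Hypothetical illustration only - not YouTube's actual system.
@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the model's guess at how long a viewer stays
    kid_appropriate: bool           # known to humans, never consulted by the ranker

def recommend(videos, k=2):
    # Rank purely by predicted engagement; appropriateness never enters the objective.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)[:k]

catalog = [
    Video("Calm bedtime story", 4.0, True),
    Video("Sensational conspiracy rant", 22.0, False),
    Video("Science experiment for kids", 7.5, True),
]

for video in recommend(catalog):
    print(video.title)  # the sensational video ranks first, by construction
```

In this toy version, the divisive video wins simply because it keeps viewers watching longer. Any system built on this kind of objective inherits that blind spot unless safety is designed in explicitly.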
AI has no capacity for grey areas or misunderstandings. It has no understanding of the nuances of the human condition, nor of the subtly different needs of different groups - or even within groups. Recreating the “right-wrong” dichotomy as “creator-user” removes the human touch necessary for trustworthy AI.
The paper should reframe “users” as digital citizens, since most humans on the planet are directly or indirectly impacted by AI. Digital citizens include the people using AI, affected by AI, profiting from AI, and creating AI. Removing the contrast between creator and consumer incorporates all stakeholders. Inclusivity should be one of the goals of creating trustworthy AI.
While a user is a passive endpoint, a citizen is an engaged participant in democracy. This whitepaper should serve to activate the public to engage with these concepts and calls to action.
In our consultation with teenagers, this theme came up a lot:
“We need to stop treating people like pawns [for example] workers aren’t just workers; they are not disposable. They have families.”
Another added:
“We must think about individual people’s rights: the rights of each person.”
- Youth response to UNESCO AI recommendations in collaboration with Algora Lab and Mila
In thinking about people’s rights to understand how AI is influencing the world around them, it is critical that explainability - the ability to explain something in an accessible and understandable way - is a pillar of trustworthy AI. Explainability, as a concept, requires an audience. Including it in the process of creating AI therefore ensures that digital citizens are part of these considerations, by design.
Transparency, accountability, explainability, and trustworthiness are not add-ons. They are as critical as the keyboard that types the first line of code.
The whitepaper rightly highlights why we need diversity in the teams that create AI - but it needs to go further.
First, let’s challenge short-term outcome 3.4.
The list of organizations that Mozilla hopes to catalyze is a good start. But their spheres of influence are all post-development.
Advocates and digital citizens must play an active role in developing these technologies. They must be brought into the rooms where industry “has ideas” and “identifies customer needs.”
Digital citizens’ lived experience, opinions and ability to understand AI have to be part of the development process. Otherwise we are creating lumbering systems that become “too big to change.”
As one of our youth representatives pointed out:
“Humans will always look to gain something so it’s hard to leave a capitalist system, but we need more regulations instead of profit.”
- Youth response to UNESCO AI recommendations in collaboration with Algora Lab and Mila
To ensure the highest level of inclusion, stakeholder groups need to be discussed in more detail. Let’s consider short-term outcome 1.3.
This is our raison d’être: engaging kids (and the grown-ups around them) with technologies like AI, and empowering them to take an active role in development. We call on Mozilla to make sure kids are included in all considerations around trustworthy AI.
Today’s kids are the first generation to grow up in a world so influenced by AI. Toddlers are learning to talk with Alexa. School-aged students ask Siri for homework help. Teens try to “game” the algorithms that determine success on TikTok or YouTube. They are facing a long future, which is being shaped by the decisions that we make now. Their views are critical to integrate.
This is something our teen contributors acknowledged:
“[We must hold] companies accountable in their sourcing of resources, and how they’re training AI: ensuring it includes all possible cases: efficiently but also ethically. Training it well and properly, not focusing just on the symptoms but applying it to the root of the problem.”
- Youth response to UNESCO AI recommendations in collaboration with Algora Lab and Mila
We need to involve the grown-ups around kids - parents, caregivers, educators, community members - in discussions around trustworthy AI. And they need to learn how best to help kids understand the influence AI has on their lives.
We began this with the launch of our #kids2030 initiative in April 2019. We committed to teaching 1 million children and 50,000 educators about AI, coding, digital citizenship, and the UN’s Sustainable Development Goals (more on those later). We bring AI into classrooms and learning spaces. With CCUNESCO, we launched the rallying call to #GetAlgoLit with The Algorithm Literacy Project. We hope to continue this, because...
To include the least represented groups, we need to shift from passive consumption to active engagement. Education is critical to drive these changes, and it must be accessible to all: from kids in school to midlife career changers, retired technophobes to policy-makers.
There are hints at wider education in the whitepaper, but there is space for evolution. For example, in short-term outcome 2.4, the hope is that:
As outsiders to the development of AI, artists and journalists play a fundamental role in unpacking AI concepts. In some cases, they hold the powerful to account. Yet these are still exclusive occupations, often accessible only to privileged individuals.
What if we gave digital citizens the tools to understand the AI systems that they interact with?
We must also consider the missed opportunities that come from concentrating AI education at the university level.
Despite COVID-19 and the global digital divide, educators and care-givers are fighting to keep kids learning. This is an investment in the future: if we are to build back better, we need to build better leaders. This segment of society - of families, and schools, and youth - must be named and featured in the whitepaper.
Otherwise, we run the risk of creating a system that serves the knowledgeable better, at everyone else's cost; a system guarded by privileged gatekeepers that fails to serve users.
In a similar vein, we’d like to challenge the limits of what short-term outcome 4.4 is seeking:
This is undoubtedly important, but it focuses primarily on the inner workings of the government system. What about education? Canada invests 6.2% of its GDP in education. After birth itself, education is often a Canadian’s first direct interaction with government services. It should therefore play a significant role in how the government invests in, and legislates around, AI.
Regulation needs to extend beyond retroactively dealing with existing AI systems and the data they collect: we must equip digital citizens with the foundational knowledge to understand how their interactions with AI influence these systems, and the control they wield. Regulation, in turn, needs to restrict private companies’ ability to deploy untested and unchecked systems that can wield significant influence over the wider world.
The UN has already undertaken a huge amount of work in creating the Sustainable Development Goals (SDGs) as a framework for change. Using the SDGs to guide the principles of trustworthy AI would strengthen the paper’s own trustworthiness and add to a global language of impactful change. It would provide a structure for a framework that seeks to create fair and just AI: a system that respects the needs of people and the limits of the planet. This is especially important given how AI processing adds to greenhouse gas emissions.
As one teen put it:
“It’s easy to not be ethical. It’s more expensive to treat workers right and higher cost to use ethical resources but it has such a huge impact on how things work. It’s hard to change as it’s been that way since the Industrial Revolution.”
Another added:
“What we [personally] do will have an impact but if [large organizations] make an effort to make change, it will make a larger impact in the long-run.”
- Youth responses to UNESCO AI recommendations in collaboration with Algora Lab and Mila
The goals most relevant to this paper - to integrate into the short-term outcomes and long-term roadmap - are:
Curricula must incorporate AI and the skills that underpin it - like algorithm literacy and data literacy - as the foundations of digital literacy.
Every level and every room at every point in the development process must include diverse voices. Quality assurance and focus groups for new products must also be diverse.
AI is going to change the future of work: automating away whole sectors while creating entirely new roles and industries. The need to ensure a fair transition is pressing.
Industry is a key stakeholder, so it makes sense to align Mozilla’s framework with this Global Goal. Further, as we advance industry with AI, it is essential that the points raised in this litepaper, and by our younger citizens, are considered when building out infrastructure and future innovation. Trust and accountability must be embedded intentionally.
Inclusion is critical to ensuring that the rights of all digital citizens influence the design and delivery of trustworthy AI. Moreover, access to AI, and education about AI, must be available to all, to ensure no one is left behind or left unrepresented by AI innovation.
Governments have an opportunity to incorporate trustworthy AI policies into their own operations, and to make policy that protects citizens against predatory or untrustworthy applications of AI. AI and privacy, AI and competition, AI and criminal law, and other relevant areas of policy need to be considered when establishing governance and protection measures to build trustworthy AI environments.
We cannot achieve trustworthy AI or the SDGs alone. Opening this paper up for comment was a remarkable way to begin this radical process. We encourage more partners to be welcomed to the table, especially our youngest citizens, as we commence the Decade of Action on the UN SDGs.
For AI to be truly trustworthy, it must consider the whole of society. That includes kids and education. But it must also include people without internet access, without digital literacy, and so many more. To move from a binary of consumer and creator, we must visualize an ecosystem of diverse stakeholders. It doesn’t fit neatly into a matrix or framework. But to protect the interests of all, we may have to sacrifice a little narrative simplicity.
By advocating for our youngest digital citizens, who have the most at stake in our future, we hope we have given the Mozilla team some food for thought. We hope these ideas challenge the team to include more voices and needs in their recommendations. And we hope to join them on their quest for a new global standard of trustworthy AI.