When a child overhears Alexa announcing the weather forecast or Google Home warning their parent about the morning traffic, it’s easy for them to see these devices as all-powerful robots, even as magic. And because kids are now growing up with this supposed magic all around them, it’s crucial for them to understand exactly how these voices know that rain is coming this afternoon or that the drive to school will take twice as long.
One of the key lessons every child should learn is that AI technology is developed by humans and created with human-centric goals in mind. They should know that Alexa and Google Home are not autonomous robots: people with an arsenal of computer engineering knowledge are responsible for the intelligent voices coming out of these sleek devices. And because AI-powered technology is programmed by people, these machines are prone to the same mistakes we make and the same biases we carry as human beings.
This is where AI ethics come in. Through our #kids2030 initiative, we want children to learn the basics of AI and participate in activities that reveal its inner workings. But we also want to raise their awareness about the ethical issues embedded within the way AI technology is programmed and used.
“In today’s world, kids have to understand how to work with artificial intelligence and technology, not against it. It is our duty as parents, guardians and educators to support them in learning the digital skills they need to build a better future.” - Kate Arthur, Founder and CEO at KCJ.
After understanding that AI technology doesn’t magically appear out of the ether, the second step is learning how it acquires so much information. The fact that its intelligence is called “artificial” should be a hint, but the answer here is data. And the more, the merrier. When AI designers develop a given technology, they need to collect an enormous amount of data, because the more examples a system learns from, the more accurate its predictions tend to be. The source of that data? You and I: our personal information, our daily routines, our interests, and how it all manifests itself online. With that come questions of privacy: how ethical is it to use information that was never intended for the development of a given technology? Because children are now getting cell phones before hitting their teens and living in a tech-filled world, we think it’s important for them to know the ins and outs of the technology around them.
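For older kids and curious grown-ups who want to see this idea in action, here is a minimal Python sketch using the scikit-learn library. Everything in it is invented for illustration: the synthetic dataset simply stands in for the personal information real systems collect, and the point is only that accuracy tends to grow with the amount of data.

```python
# A toy sketch, not KCJ teaching material: the dataset, model, and
# numbers are all made up purely to illustrate one idea, namely that
# the more examples an AI system learns from, the more accurate it
# tends to become.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A synthetic stand-in for the personal data real systems collect
# (routines, interests, clicks): 5000 labelled examples.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train on progressively larger slices of the data and watch the
# model's accuracy on unseen examples climb.
for n in (50, 500, 4000):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:4d} examples -> accuracy {score:.2f}")
```

That appetite for data is precisely why our personal information is so valuable to AI developers, and why the privacy questions follow so close behind.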
Another potentially flawed aspect of AI is that machines often inherit the biases of their creators. Developing an entirely objective device would only be possible if its creator had no biases, which is far from the case at the moment. Those biases frequently creep in through the data itself: a system trained mostly on examples from one group of people will work best for that group and worst for everyone else. Because the AI industry is dominated by a rather homogeneous group, the resulting technology can carry racial biases, for example, which can lead to genuinely discriminatory consequences. One potential answer is to diversify the industry by bringing more women and people of colour into the development of AI devices.
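For readers who want to see that effect concretely, here is another small, hypothetical Python sketch, again using scikit-learn with invented groups and numbers: when one group dominates the training data, the model quietly learns to serve that group best.

```python
# A hypothetical illustration of data bias: two invented "groups"
# whose data follow slightly different rules. Group A dominates the
# training data; group B is barely represented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n examples for a group whose data is centred at `shift`."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)  # group-specific rule
    return X, y

Xa, ya = make_group(2000, shift=0.0)  # well represented in training
Xb, yb = make_group(50, shift=1.5)    # barely represented in training
model = LogisticRegression().fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# On fresh examples, the model works well for the majority group and
# noticeably worse for the under-represented one.
for name, shift in (("group A", 0.0), ("group B", 1.5)):
    X_test, y_test = make_group(1000, shift)
    print(f"{name}: accuracy {model.score(X_test, y_test):.2f}")
```

With only fifty examples from group B, the model’s mistakes fall almost entirely on that group: exactly the kind of discriminatory outcome a more diverse, more representative industry would be better placed to catch.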
These topics can be tough to explain to children, but it’s not impossible. The Algorithm Literacy project, an effort by KCJ and CCUNESCO, aims to present all of these issues in a way that kids will understand. Through interactive projects, discussion guides, and videos, we want to show kids that they are smarter than AI tools. They can make their own decisions and shape their own online experiences.
We want kids to see that AI devices are not almighty robots. In fact, they’re devices created by the minds and hands of people, and just like people, they can make mistakes. More importantly, we want kids to understand that mistakes can be fixed, and that they can be the ones who grow up to fix them.