Artificial intelligence (AI) relies on its creators for training, through a process known as “machine learning.” Machine learning is the process by which a machine develops its intelligence from outside input.
But its behavior is determined by the information it is given. And at the moment, AI is a field dominated by white men.
How can we ensure the evolution of AI doesn’t further encroach on Indigenous rights and data sovereignty?
AI risks to Indigenous art

AI can generate art, and anyone can “create” Indigenous art using these tools. Even before AI, Aboriginal art was widely appropriated and reproduced without attribution or acknowledgement, particularly in the tourism industry.
This could worsen now that anyone can generate art through AI. The issue is not unique to Indigenous people: many artists have seen their styles misappropriated.
Indigenous art is embedded with history and connects to culture and Country. AI-created Indigenous art would lack this. There are also implications for financial gain bypassing Indigenous artists and going to the producers of the technology.
Including Indigenous people in creating AI, or in deciding what AI can learn, could help minimize exploitation of Indigenous artists and their art.
What is Indigenous data sovereignty?

In Australia there is a long history of collecting data ‘about’ Aboriginal and Torres Strait Islander people, but little data has been collected ‘for’ or ‘with’ them. Aboriginal scholars Maggie Walter and Jacob Prehn write of this in the context of the growing Indigenous Data Sovereignty movement.
Indigenous Data Sovereignty is concerned with the rights of Indigenous peoples to own, control, access and possess their own data, and decide who to give it to. Globally, Indigenous peoples are pushing for formal agreements on Indigenous Data Sovereignty.
Many Indigenous people are concerned with how the data involving our knowledge and cultural practices is being used. This has resulted in some Indigenous lawyers finding ways to integrate intellectual property with cultural rights.
Māori scholar Karaitiana Taiuru says:
“If Indigenous peoples don’t have sovereignty of their own data, they will simply be re-colonized in this information society.”
How mob have been using AI

Indigenous people are already collaborating on research that draws on Indigenous knowledge and involves AI.
In the wetlands of Kakadu, rangers are using AI and Indigenous knowledge to care for Country.
A weed called para grass is having a negative impact on magpie geese, which have been in decline. While the Kakadu rangers are doing their best to control the weed, the sheer size of the area (two million hectares) makes this difficult.
Collecting and analyzing information about magpie geese and the impact of para grass using drones is having a positive influence on goose numbers.
Projects like these are vital given the loss of biodiversity around the globe, which is causing species extinctions and ecosystem collapse at alarming rates. As a result of this collaboration, thousands of magpie geese are returning to Country to roost.
This project involves Traditional Owners (collectively known as Bininj in the north of Kakadu National Park and Mungguy in the south) working with rangers and researchers to help protect the environment and preserve biodiversity.
By working with Traditional Owners, researchers were able to program monitoring systems with geographically specific knowledge not otherwise recorded, reflecting the connection of Indigenous people with the land. This collaboration highlights the need to ensure Indigenous-led approaches.
In another example, in Sanikiluaq, an Inuit community in Nunavut, Canada, a project called PolArtic uses scientific data with Indigenous knowledge to assess the location of, and manage, fisheries.
Changing climate patterns are affecting the availability of fish, and this is another example of Indigenous knowledge providing solutions to biodiversity issues caused by the global climate crisis.
Indigital is an Indigenous-owned profit-for-purpose company founded by Dharug, Cabrogal innovator Mikaela Jade. Jade has worked with traditional owners of Kakadu to use augmented reality to tell their stories on Country.
Indigital is also providing pathways for mob who are keen to learn more about digital technologies and combine them with their knowledge.
Future challenges and opportunities for Indigenous inclusion

Although AI is a powerful tool, it is limited by the data that inform it. The projects above succeeded because the AI was informed by Indigenous knowledge, provided by Indigenous knowledge holders with a long-held ancestral relationship to the land, animals and environment.
Research indicates AI is a white male-dominated industry. A global study found 12% of professionals across all levels were female, with only 4% being people of color. Indigenous participation was not noted.
In early June, the Australian government’s Safe and Responsible AI in Australia discussion paper found racial and gender biases evident in AI. Racial biases occurred, the paper found, in situations such as where AI had been used to predict criminal behavior.
The paper’s purpose was to seek feedback on how to lessen potential harms from AI. Advisory groups and consultation processes were raised as possible responses, but not explored in any real depth.
Indigenous knowledge has a lot to offer in the development of new technologies, including AI. Art is part of our cultures, ceremonies and identity. AI-generated art risks mass reproduction without Indigenous input or ownership, and misrepresentation of culture.
The federal government needs to consider having Indigenous knowledge inform the machine learning that underpins AI, in a way that supports data sovereignty. There is an opportunity for Australia to become a global leader in pursuing technological advancement ethically.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Citation: Indigenous knowledge informing ‘machine learning’ could prevent stolen art and other culturally unsafe AI practices (2023, September 9) retrieved 9 September 2023 from https://techxplore.com/news/2023-09-indigenous-knowledge-machine-stolen-art.html
Machine learning contributes to better quantum error correction
Researchers from the RIKEN Center for Quantum Computing have used machine learning to perform error correction for quantum computers, a crucial step toward making these devices practical. Their autonomous correction system, despite being approximate, can efficiently determine how best to make the necessary corrections.
In contrast to classical computers, which operate on bits that can only take the basic values 0 and 1, quantum computers operate on “qubits,” which can assume any superposition of the computational basis states. In combination with quantum entanglement, another quantum characteristic that connects different qubits beyond classical means, this enables quantum computers to perform entirely new operations, giving rise to potential advantages in some computational tasks, such as large-scale searches, optimization problems, and cryptography.
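The superposition described above can be written compactly. In standard Dirac notation (not used in the original article), a single-qubit state is a weighted combination of the two basis states:

```latex
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \alpha, \beta \in \mathbb{C}, \qquad
  \lvert \alpha \rvert^2 + \lvert \beta \rvert^2 = 1
\]
```

The complex amplitudes α and β carry more information than a classical bit’s single value, which is what the article means by “any superposition of the computational basis states.”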
The main challenge towards putting quantum computers into practice stems from the extremely fragile nature of quantum superpositions. Indeed, tiny perturbations induced, for instance, by the ubiquitous presence of an environment give rise to errors that rapidly destroy quantum superpositions and, as a consequence, quantum computers lose their edge.
To overcome this obstacle, sophisticated methods for quantum error correction have been developed. While they can, in theory, successfully neutralize the effect of errors, they often come with a massive overhead in device complexity, which itself is error-prone and thus potentially even increases the exposure to errors. As a consequence, full-fledged error correction has remained elusive.
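The trade-off between protection and overhead can be illustrated with the simplest error-correcting code of all: the classical three-bit repetition code. This is only a classical analogy, not the bosonic scheme studied in this work, but it shows the core idea the article describes: redundant encoding plus a correction step suppresses errors, at the cost of extra hardware.

```python
import random

def encode(bit):
    # Repetition code: copy one logical bit into three physical bits.
    return [bit, bit, bit]

def apply_noise(bits, p, rng):
    # Flip each physical bit independently with probability p.
    return [b ^ 1 if rng.random() < p else b for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit whenever at most one flip occurred.
    return 1 if sum(bits) >= 2 else 0

rng = random.Random(0)
trials, p = 100_000, 0.05
errors = sum(decode(apply_noise(encode(0), p, rng)) != 0 for _ in range(trials))
print(f"logical error rate: {errors / trials:.4f} (physical rate: {p})")
```

The logical error rate is roughly 3p², well below the physical rate p, but it takes three bits to store one. Quantum codes face the same overhead pressure, magnified by the device complexity the article mentions.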
In this work, the researchers leveraged machine learning in a search for error correction schemes that minimize the device overhead while maintaining good error correcting performance. To this end, they focused on an autonomous approach to quantum error correction, where a cleverly designed, artificial environment replaces the necessity to perform frequent error-detecting measurements. They also looked at “bosonic qubit encodings,” which are, for instance, available and utilized in some of the currently most promising and widespread quantum computing machines based on superconducting circuits.
Finding high-performing candidates in the vast search space of bosonic qubit encodings represents a complex optimization task, which the researchers addressed with reinforcement learning, an advanced machine learning method in which an agent explores a possibly abstract environment to learn and optimize its action policy. With this, the group found that a surprisingly simple, approximate qubit encoding could not only greatly reduce the device complexity compared to other proposed encodings, but also outperform its competitors in its capability to correct errors.
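The agent–environment loop mentioned above can be sketched with a toy problem. The snippet below is a generic epsilon-greedy agent on a multi-armed bandit; it is purely illustrative and unrelated to the actual bosonic-encoding search, and every name in it is invented for this example.

```python
import random

def run_bandit(true_means, episodes=5000, eps=0.1, seed=0):
    # Minimal epsilon-greedy agent: it estimates each action's value from
    # sampled rewards and mostly exploits the current best estimate,
    # occasionally exploring at random.
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n
    values = [0.0] * n
    for _ in range(episodes):
        if rng.random() < eps:
            action = rng.randrange(n)                          # explore
        else:
            action = max(range(n), key=lambda i: values[i])    # exploit
        reward = true_means[action] + rng.gauss(0, 0.1)        # noisy environment
        counts[action] += 1
        values[action] += (reward - values[action]) / counts[action]  # running mean
    return max(range(n), key=lambda i: values[i])

best = run_bandit([0.2, 0.5, 0.9])
print(f"best action found: {best}")
```

A real encoding search replaces the three arms with a vast space of candidate encodings and the noisy reward with a simulated error-correction performance score, but the explore-then-exploit structure is the same.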
Yexiong Zeng, the first author of the paper, says, “Our work not only demonstrates the potential for deploying machine learning towards quantum error correction, but it may also bring us a step closer to the successful implementation of quantum error correction in experiments.”
According to Franco Nori, “Machine learning can play a pivotal role in addressing large-scale quantum computation and optimization challenges. Currently, we are actively involved in a number of projects that integrate machine learning, artificial neural networks, quantum error correction, and quantum fault tolerance.”