An authority on human rights and AI, South Africa's Rachel Adams founded the African Observatory on Responsible Artificial Intelligence (AORAI) in early 2022 to provoke reflection. What does AI mean for Africa? What are the risks and benefits for African societies? Can the deployment of these technologies undermine democracy? How can we build on African value systems, traditions and cultural codes to set ethical guidelines? And what would an AI by Africa and for Africa look like? These are some of the questions the Observatory's members are exploring.
“We want to bring the African experience to the world stage,” says Adams, who also directs the Just AI Center. “The discussions surrounding artificial intelligence are shaped by countries of the global North. As a result, the tools developed reflect their ethical, normative and governance standards: the countries of the South must have their place in these discussions.”
Through the AORAI, which is supported by Canada’s International Development Research Centre, among others, the researcher wants to provide tools to African countries to supervise the implementation of AI and provide guidance.
Solution or loss of control?
In concrete terms, how is AI being deployed in Africa?
The banking world is among the first to adopt it. From east to west, financial institutions are rushing to integrate AI into their credit-granting and customer service systems. The Canadian firm Proto, for example, has had great success with African banks: its technology automates the intake of customer complaints in a variety of local languages, such as Kinyarwanda and Twi. The Africa Digital Financial Inclusion Facility sees the digitization of banking services as a way to provide more efficient and inclusive services.
AORAI members are critical. “When it's the machine that decides you don't qualify for a [bank] loan, a job, a visa . . . where do you turn?” asks Namibian researcher Kristophina Shilongo in an interview.
The fairness of intelligent systems relies heavily on the data with which they have been trained. “The data available in Africa is largely based on men's experience,” says Rachel Adams. “As a result, the services are biased. But it's not just an AI problem: this is the result of deep structural problems.” It is to counter this trend that AORAI is committed to including gender and race in all its work.
The AORAI network is also working on the decolonization of AI. “We want to identify, in the AI world, the dynamics inherited from colonialism, in order to defuse them,” explains Adams.
Importing technologies that are not adapted to local contexts is a problem, she says. For example, facial recognition systems “trained” outside the continent are prone to error. This is particularly worrisome when they are used for police surveillance, as in Johannesburg, where more than 5,000 AI-assisted cameras from the Danish company iSentry & Milestone record the actions of passersby.
Africa is also targeted by companies for “beta testing” their innovations. Populations that are not always able to give free and informed consent are used as guinea pigs. Cambridge Analytica, a company specializing in psychological and ideological profiling, tested its capacity for political influence in Nigeria and Kenya before turning to the election of Donald Trump in the United States.
Then there are abuses related to data.
In Kenya, lending apps grant loans at high interest rates without assessing borrowers’ creditworthiness, while massively harvesting borrowers’ data, such as location, text messages, contacts and call history.
To counteract these abuses, Nokuthula Olorunju, a South African member of the Observatory, is working on AI governance and the legal framework of cyberspace. In her view, regulating technology calls for a collaborative, bottom-up approach. “We're reaching out to people to listen to their experiences and find out what technology and their data mean to them. From there, we have a better understanding of their perceptions and realities, to build on to protect them.”
To identify what constitutes ethical AI in each country, Rachel Adams and an international team set out to create the Global Index on Responsible AI. This tool will make it possible to compare countries on the basis of criteria established globally, not simply imported from the countries of the North.
“This instrument is developed in the South, and especially from the African continent,” explains Adams proudly. “It will be based on regional conceptions of human rights, with the hope of being as inclusive as possible.”
The aim of the index is to help citizens become aware of what is at stake in their societies. Despite the concerns raised by researchers, AI innovations are part of the future and represent valuable opportunities.