
Ensuring generative AI is responsible AI: the case for allowing time to find the evidence

 

Matthew Smith

Senior program specialist, IDRC

The emergence of powerful generative artificial intelligence (AI) applications like ChatGPT is raising an important question for international development: How can we manage the risks and benefits of AI technologies when their advances are moving faster than our ability to understand their impact?

The most recent version of ChatGPT is already 10 times more powerful than the 2022 model. We have barely had time to reflect on the potential benefits and risks of the original model in the Global North, much less explore the particular problems the latest version could pose for the Global South.

Research can help. But it needs time.

Many research projects we currently support focus on predictive AI, which forecasts outcomes from historical data. Generative AI differs in that it creates new content rather than predicting a predefined outcome.
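
To make that distinction concrete, here is a deliberately toy sketch in Python (an illustrative example added for this piece, not code from any IDRC-supported project). The first model predicts a predefined outcome from historical observations; the second generates new text resembling, but not copied from, its training data.

```python
# Toy contrast between predictive and generative AI (illustrative only).
import random
from collections import defaultdict

# --- Predictive AI: learn from historical data, then predict an outcome ---
# Hypothetical "historical data": (rainfall_mm, crop_yield_ok) observations.
history = [(10, 0), (20, 0), (60, 1), (80, 1), (90, 1)]
threshold = sum(r for r, _ in history) / len(history)  # naive decision rule

def predict_yield_ok(rainfall_mm: float) -> int:
    """Predict a predefined outcome (1 = adequate yield) from past patterns."""
    return int(rainfall_mm >= threshold)

# --- Generative AI: produce new content resembling the training text ---
corpus = "ai can help ai can harm ai must be responsible".split()
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)  # word-level Markov chain: a crude generative model

def generate(start: str = "ai", length: int = 6) -> str:
    """Emit a new sequence of words rather than predicting a fixed label."""
    words = [start]
    for _ in range(length - 1):
        options = chain.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(predict_yield_ok(75))  # -> 1: a prediction about a known outcome
print(generate())            # -> e.g. "ai can harm ai must be": new content
```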

In our 2018 publication, Artificial Intelligence and Human Development: Toward a Research Agenda, we examined a growing body of work being conducted by scores of AI researchers and developers across the Global South. This article features some of their views on why generative AI must be responsible AI: safe, trustworthy, sustainable and ethical.

Prof. Jerry John Kponyo of the Kwame Nkrumah University of Science and Technology in Ghana is an IDRC research partner through the Artificial Intelligence for Development in Africa (AI4D Africa) program, a joint endeavour with the Swedish International Development Cooperation Agency (Sida). Kponyo says that generative AI tools like ChatGPT present significant opportunities to move Africa toward achieving the UN Sustainable Development Goals.

“These models can be used to develop more accurate and efficient healthcare systems that can help diagnose and treat diseases more effectively, even in remote areas,” says Kponyo. “They are also helpful in creating educational resources that are more accessible to a broader range of students, regardless of their location or socio-economic background.”

Ernest Mwebaze at Uganda’s Sunbird AI, part of an IDRC-funded network devoted to developing gender-inclusive AI, says generative models can also help solve the historical shortage of data infrastructure in Africa — a major problem for responsible developers.

“There has not been enough contextual content from the African continent to support the training of AI models, and we’ve seen the effects of bias in data being propagated because of this,” says Mwebaze. “Generative AI can significantly improve current efforts to create contextual content from Africa.”

Despite the possibilities it presents, generative AI also poses significant risks.

Generative AI’s ability to produce convincing, human-like text makes it a dangerous tool for creating disinformation, a challenge that IDRC-supported research has already helped demonstrate to be significant. For example, such uses of generative AI have been shown to negatively affect women’s participation in democratic processes.

The jury is still out as to whether generative AI will create as many jobs as it will make redundant. But Scott Timcke of Research ICT Africa, which is designing the IDRC-funded Global Index on Responsible AI, says those job numbers may not be as important as generative AI’s broader societal effects.

"Savings on labour and increased productivity will have an economic effect, but also a political effect,” Timcke says. “In societies where wealth buys disproportionate political influence, tools like ChatGPT and generative AI models will reinforce the processes that produce social inequality."

The Global Index’s driving purpose is to understand the development and use of AI and its social implications around the world. Critically, the index emphasizes the perspectives of underserved and marginalized groups. Those are the people Timcke is worried will fall through the cracks. “In the proverbial arms race between Big Tech firms around who will deploy generative AI models, regulators and lawmakers must ensure that the social good is not a casualty,” Timcke says.

The perpetuation of existing inequities through generative AI applications is a major concern. The training ground for these applications is the Internet, which is full of both good and bad data. The worst is toxic data: images and text depicting gross abuses such as child pornography. Such data can be filtered out, but the filtering process itself can reinforce inequalities. For example, US-based OpenAI has been criticized for paying Kenyan workers less than USD 2.00 (CAD 2.70) per hour (about the average salary in Kenya) to clean up text data, thereby subjecting them to hours of disturbing content.

While toxic data can be removed, problematic social views are much tougher to erase from the online data on which AI models depend. For instance, GPT-3 has been shown to persistently associate Muslims with violence.

“There is the risk of reinforcing existing biases and inequalities, particularly in cases where the generative AI models are trained on data that reflect existing social, economic and cultural biases,” says Lucía Mesa Vélez, of the Ladysmith Collective, which provides gender, equity and inclusion mentorship and training in AI research. “This could result in further marginalization of already vulnerable groups, such as women and the LGBTQIA+ population, in the Global South.”

Moving forward, it is critical that we do not pin all our hopes on AI to solve underlying systemic challenges like inequality. Vélez says that’s happened before.

“As we have seen with digital technologies like mobile phones — which were at times held up as having the capacity to deliver empowerment ‘to women’s fingertips’ — simply adopting new or innovative approaches to old problems is insufficient. We need to proceed with care and caution, including by marshalling insights from the vast existing knowledge about the promises and pitfalls of leveraging technology and data science for development.”

Proceeding with care and caution also means taking into consideration the diversity of contexts. As Kponyo explains, “[t]here are over 2,000 languages spoken across Africa, making it one of the most linguistically diverse continents in the world. Thus, developing language models that recognize and understand these languages, not just the commonly used ones like English, French and Arabic, is crucial.”

Kponyo, Mwebaze and Vélez are all part of the AI4D Africa program, a good example of the ecosystem approach IDRC uses to fund research on information and communication technologies for development. The four-year initiative with Sida supports the African-led development of responsible and inclusive AI, including through high-quality research and talent strengthening, and by informing contextually appropriate strategy and policy.

At the heart of this approach lies a fundamental question: Who is responsible for developing AI and for whom is it being developed? Those who create and refine AI models inevitably embed their values into them, but when technologies have the potential for global impact, whose values should be prioritized?

Answering these questions also takes time. So does gathering evidence, which is essential for better understanding the benefits, risks and policy options. That is why it is crucial to proceed cautiously with AI development and deployment, so that marginalized populations are protected.

Given the power of these technologies to bring about significant social and economic disruption, the right safeguards must be in place to protect the public good. The international community has made progress towards widely accepted principles on the development and implementation of responsible and ethical AI — such as the UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted by 193 member states in 2021 — but we’ve only just begun to understand the potential rewards and risks, and how to regulate them.

Critically, this conversation must cease to be dominated by voices from the Global North and include perspectives and research from the Global South on how such technologies can be managed to benefit everyone, no matter where they live. With the emergence of generative AI, this approach is more warranted — and more necessary — than ever.