Project

Advancing research on feminist artificial intelligence to advance gender equality and inclusion
 

Costa Rica
Middle East
South Asia
Project ID: 109694
Total Funding: CAD 2,000,000.00
Project Status: Active
Duration: 36 months

Summary

This project will seek to improve how the field of artificial intelligence (AI) research can measurably address social problems faced by women and marginalized communities, and to correct for the historic exclusion and bias that currently exist in AI systems. It responds to the need for new methodologies, processes, and approaches that ensure AI applications are designed with and for women and vulnerable communities. A competitive call will support applied research on AI models that combat bias and address historical inequities in datasets. These inequities perpetuate biased predictions and decision-making in areas such as procurement, climate action and resource management, the automation of social-protection systems, and judicial practice.

Regional networks will help strengthen the capacities of researchers in Asia, Latin America, the Middle East, and North Africa to develop and use innovative methodologies and models for multidisciplinary, gender-transformative, and participatory AI research. The project will also work to advance gender equality and inclusion within responsible AI principles, frameworks, and practice, engaging key stakeholders in academia, the private sector, and governments at the national, regional, and international levels to adopt best practices.


Research outputs

Access full library of outputs
Report
Language: English
Author(s): Kraft-Buchman, Caitlin
Article
Language: English

Summary

Approaches to bias assessment usually require technical skills that, by design, leave discrimination experts out. This paper presents EDIA, a tool that enables discrimination experts to explore social biases in word embeddings and masked language models. Experts can then characterize those biases so that their presence can be assessed more systematically and actions can be planned to address them. They can work interactively to assess the effects of different characterizations of bias in a given word embedding or language model, turning informal intuitions into concrete resources for systematic testing.

Author(s): Alemany, Laura Alonso
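
EDIA itself is an interactive tool and its interface is not shown here. As a minimal sketch of the kind of masked-language-model probe the abstract describes (not EDIA's actual API), the following Python snippet uses the Hugging Face transformers library; the model name and the example sentences are illustrative assumptions.

# A minimal sketch, assuming the transformers library is installed.
# It compares the scores a masked language model assigns to gendered
# pronouns in otherwise identical sentences; a large gap hints at a
# stereotyped association. This is an illustration, not EDIA's API.
from transformers import pipeline

# bert-base-uncased is an arbitrary choice; any fill-mask model works.
unmasker = pipeline("fill-mask", model="bert-base-uncased")

templates = [
    "The nurse said that [MASK] would be late.",
    "The engineer said that [MASK] would be late.",
]

for template in templates:
    # `targets` restricts scoring to the listed candidate tokens.
    results = unmasker(template, targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 3) for r in results}
    print(template, "->", scores)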
Article
Language: English

Summary

Automatic processing of language is becoming pervasive in our lives, often taking a central role in our decision-making: choosing the wording of our messages and emails, translating what we read, or even holding full conversations with us. Word embeddings are a key component of modern natural language processing systems. They provide a representation of words that works as a semblance of meaning and has boosted the performance of many applications. But while word embeddings capture this semblance of meaning from raw text, they also distill stereotypes and societal biases, which are subsequently relayed to the final applications. Such biases can be discriminatory, and it is very important to detect and mitigate them to prevent discriminatory behavior in automated processes, which, because of its scale, can be much more harmful than discrimination by humans. Many tools and techniques exist to detect and mitigate biases in word embeddings, but they present substantial barriers to people without technical skills. Most experts in bias, whether social scientists or people with deep knowledge of the contexts where bias does harm, lack such skills and so cannot engage in bias detection. We have studied the barriers in existing tools and explored their possibilities and limitations with different kinds of users. Based on this exploration, we propose to develop a tool specifically aimed at lowering technical barriers while providing the exploratory power needed by experts, scientists, and anyone else willing to audit these technologies.

Author(s): Alemany, Laura Alonso
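
As a concrete illustration of the stereotype measurement this abstract discusses, the short sketch below compares how close occupation words sit to gendered words in a pretrained embedding space. It assumes the gensim library and its downloadable GloVe vectors; the word lists are illustrative, and this is not the paper's own tooling.

# A minimal sketch, assuming gensim is installed and an internet
# connection is available for the first download of the vectors.
import gensim.downloader as api

# 50-dimensional GloVe vectors trained on Wikipedia and Gigaword;
# the model choice is an assumption made for this example.
model = api.load("glove-wiki-gigaword-50")

for word in ["nurse", "engineer", "teacher", "mechanic"]:
    # Cosine similarity to gendered reference words; a consistent gap
    # across occupations suggests the embedding encodes a stereotype.
    gap = model.similarity(word, "she") - model.similarity(word, "he")
    print(f"{word}: she-he similarity gap = {gap:+.3f}")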
Evaluation
Language: English

Summary

Understanding the social implications arising from the design, development, and deployment of artificial intelligence demands a multidisciplinary approach, one that allows AI to be analyzed as a complex socio-technical system deeply entwined with power dynamics and societal structures. Over the past three years, the Incubating Feminist AI project has brought multidisciplinary, decolonial, intersectional feminist perspectives to bear, offering critical views and proactive ideas on AI technologies for and from three regions of the majority world, including ours. This final report offers a comprehensive summary of the activities conducted under the Incubating Feminist AI project in Latin America and the Caribbean (2021-2024). It chronicles the wide range of efforts and collaborations throughout the project, serving as a record of all actions taken and of the projects incubated in the region.

Author(s): Ricaurte Quijano, Paola