
Feminist AI Research Network: Combatting gender-based violence with artificial intelligence innovations

 

While artificial intelligence (AI) holds potential for achieving development goals, it also threatens to deepen inequalities, especially for women and other vulnerable populations. That’s why IDRC supports responsible AI, funding projects that address barriers to equality, such as the innovations within the Feminist AI Research Network (FAIR).

FAIR is a global network of scientists, economists and activists united in their dedication to finding ways to make AI and related technologies more effective, inclusive and transformational. FAIR-supported projects aim to identify and correct digital biases by fostering collaboration and developing AI solutions that reflect feminist principles.  

One cohort of FAIR projects addresses threats to women’s safety and security, which are important barriers to gender equality. Globally, 27% of women experience intimate partner violence and 6% experience non-partner sexual violence in their lifetimes, with regional rates as high as 45% and 11%, respectively. Those statistics represent multiple, unique contexts that demand different solutions. Here’s a look at three feminist AI innovations co-designed by regional innovators and community members to combat gender-based violence in ways that best meet the needs of users.

Research highlights

  • IDRC’s Feminist AI Research Network (FAIR) aims to identify and correct biases within digital spaces by fostering collaboration and developing AI solutions that reflect feminist principles – three such solutions are SafeHER, AymurAI and SOF+IA. 
  • SafeHER is an app designed for women transit users in Manila, the Philippines, based on their lived experiences and needs. It provides tools such as SOS alert, live location sharing, scream detection, and a buddy system to enhance their safety on public transport. 
  • AymurAI was developed to address the lack of data on gender-based-violence cases in the Argentinian judicial system, ultimately fostering greater accountability and transparency within the judiciary when it comes to gender-based violence.    
  • SOF+IA is a web-based feminist chatbot created to support victims of technology-facilitated gender-based violence on social-media platforms. It guides users on how to report cases, provides digital self-care tips and evaluates whether a situation can be reported to police.

SafeHER

Manila, in the Philippines, has one of the most dangerous transport systems in the world for women: at least 80% report harassment or sexual assault. While some safety apps exist, FAIR research partner Hazel Biana found that most put the burden on women to protect themselves, with suggestions such as staying alert and travelling with weapons. Biana also found that existing apps neither tackle the underlying issues of violence against women nor take into account the functions that women want.

By contrast, SafeHER, developed by Biana and a team of researchers at De La Salle University’s Social Development Research Center in Manila, is designed around women transit users’ lived experiences and stated needs. The app’s tools include an SOS alert, live location sharing, scream detection and a buddy system that lets women find other solo commuters. Features planned for future development include best-route recommendations and crash detection. The app is in alpha testing and will soon undergo further testing and deployment in areas around the university.

Developers are in talks with government agencies, such as the Philippine National Police, the Philippine Commission on Women and local government units, to address incidents of gender-based violence on transit systems. Overall, the app aims to empower women, challenge victim-blaming norms and raise awareness of women’s safety concerns on transit. Ultimately, the SafeHER team hopes data from the app can be used to influence policies that make public transit safer for women.

Image: SafeHER promotional image

AymurAI

AymurAI, developed by Data Género in collaboration with a court in Buenos Aires, Argentina, was created to help address the lack of data on gender-based violence in the city’s judicial system and across Latin America as a whole. The tool has two features. The first, a dataset feature, identifies and extracts relevant information from court rulings in gender-based-violence cases while ensuring that all sensitive information, such as identities, is protected. Once the data are verified by a criminal-court staff member, the app makes them publicly available online, improving both the quality and the quantity of data on gender-based violence and related court rulings. The second feature anonymizes legal rulings, so that complete ruling texts can be uploaded while protecting privacy and sensitive information.
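To make the anonymization step concrete, here is a minimal, purely illustrative sketch in Python. It is not AymurAI’s actual implementation (which presumably relies on trained named-entity recognition rather than a hard-coded list); the names, placeholder format and function name below are all assumptions for demonstration.

```python
import re

# Assumed example identities for illustration only; a real system would
# detect names automatically with a named-entity-recognition model.
SENSITIVE_NAMES = ["Maria Gomez", "Juan Perez"]

def anonymize_ruling(text: str) -> str:
    """Replace each known sensitive name with a neutral placeholder,
    so the full ruling text can be published without exposing identities."""
    for i, name in enumerate(SENSITIVE_NAMES, start=1):
        text = re.sub(re.escape(name), f"[PERSON_{i}]", text)
    return text

ruling = "The court finds that Juan Perez assaulted Maria Gomez on 3 May."
print(anonymize_ruling(ruling))
# The court finds that [PERSON_2] assaulted [PERSON_1] on 3 May.
```

The key design point the sketch mirrors is that redaction happens before publication, and a human reviewer (in AymurAI’s workflow, a court staff member) verifies the output before the data go online.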

The project team, led by Data Género executive director Ivana Feldfeber, hopes the data can then be used to identify patterns that could lead to more serious offences, such as femicide. Although AymurAI is tailored to Argentina’s specific legal and cultural context, it processes Spanish-language text, so the team is now exploring how to adapt it for use in Mexico, which would significantly increase the program’s reach and impact. Overall, the app aims to illuminate gender-based violence from a judicial perspective and foster greater accountability and transparency within the judiciary. Recently, AymurAI was recognized by UNESCO in its Global Toolkit on AI and the Rule of Law for the Judiciary.

SOF+IA

Gender-based violence is increasingly prevalent online. An 18-country survey led by the Centre for International Governance Innovation and supported by IDRC found that approximately six in 10 women, as well as transgender and non-binary people, have experienced what is known as technology-facilitated gender-based violence (TFGBV). TFGBV includes abuse such as harassment, tracking, the non-consensual sharing of intimate images and death threats. In Chile, researchers with Datos Protegidos and ODEGI designed SOF+IA, a web-based feminist chatbot that provides support and resources for those experiencing TFGBV on social-media platforms. SOF+IA guides users on how to report cases on those platforms, provides digital self-care tips and evaluates whether a situation can be reported to police. Data generated by SOF+IA will also be used to raise public awareness of TFGBV through informational data visualizations and through tailored notifications to users when coordinated attacks or harassment against women occur on social-media platforms, particularly during times of conflict or sociopolitical crisis.

Conclusion

Feminist AI extends far beyond the scope of gender-based violence. FAIR innovation projects investigate biases and discriminatory stereotypes in large language models and create communities for women employed in crowd work, who label the data used to train algorithms. The A+ Alliance, which houses FAIR in collaboration with IDRC, has also created a global directory of feminist AI experts and practitioners, part of a growing ecosystem that supports an inclusive digital future with gender equality at its core.

Contributors: Hannah Whitehead, Research Award Recipient, Education and Science, and Abbey Gandhi, Program Management Officer, Education and Science