Gender Bias in AI: Uncovering the Roots and Shaping Inclusive Futures

  23 March 2025

Rosa Santa Serravalle - Senior Researcher
Cesar Muñoz Alarcon - Senior Researcher
Francisco Duran Herrera - Head Researcher

Abstract

This paper investigates the manifestations and implications of gender biases in Artificial Intelligence (AI). It examines how historical stereotypes and socio-cultural norms are inadvertently embedded into AI systems through biased data, algorithmic design, and decision-making processes. Using prominent examples from recruitment tools, machine translation, and facial recognition systems, the study highlights the detrimental impact on women and gender minorities. The analysis extends to discuss underrepresentation in AI development and the subsequent reinforcement of systemic inequalities. In response, the paper outlines potential strategies to mitigate these biases, including the creation of inclusive datasets, rigorous auditing of algorithms, and the implementation of ethical and regulatory frameworks such as UNESCO’s Recommendation on the Ethics of AI.

I. Introduction

The rapid advancement of Artificial Intelligence has transformed various aspects of modern life, from healthcare diagnostics to workplace automation. However, as AI systems increasingly influence decision-making processes, the risk of perpetuating historical gender biases becomes more pronounced. Gender bias in AI emerges when systems, trained on data reflective of entrenched cultural stereotypes, generate outputs that disproportionately disadvantage women and members of the LGBTIQ+ community. Examples such as Amazon’s discontinued recruitment tool and biased outputs in machine translation services underline the severity of the issue. This paper explores the three main levels at which bias infiltrates AI—data collection, algorithm design, and automated decision-making—while also considering the socio-technical dynamics that contribute to these inequities. Moreover, it discusses the underrepresentation of women in AI development, which further exacerbates these challenges, and sets the stage for a critical examination of potential corrective measures.

II. What are Gender Biases in AI?

When we talk about gender biases in Artificial Intelligence (AI), we refer to those situations in which technological systems reflect biases or stereotypes associated with specific genders, particularly to the detriment of women and individuals from the LGBTIQ+ community. These biases stem from the reproduction of historical cultural patterns that are often inadvertently transferred to the process of creating and developing technologies such as algorithms or automated systems (Buolamwini & Gebru, 2018; Noble, 2018). It is important to note that AI is not inherently a source of knowledge creation, but rather an algorithm that, based on the input data or information, generates outputs, also known as outcomes.

Specifically, gender biases are evident in everyday situations that have made headlines due to the harm they caused to those affected by such biases. For example, the well-known AI recruitment tool developed by Amazon, which was discontinued in 2018, systematically discriminated against women because it was trained on predominantly male historical data, especially in the tech sector (Dastin, 2018). Similarly, virtual assistants with female voices reinforce gender stereotypes by providing responses that are considered more submissive or polite (West et al., 2019). Another significant example is found in facial recognition systems, which exhibit higher error rates in correctly identifying women, especially those with darker skin, revealing biases that intersect both gender and race (Buolamwini & Gebru, 2018).

For a better understanding of the phenomenon at hand, it is necessary to recognize three levels at which biases can emerge in AI: the data used to train such models, the algorithms employed, and, finally, the automated decisions made. Data biases occur when datasets are used that do not fairly represent all groups or that reproduce existing biases (Mehrabi et al., 2021). Algorithmic biases, on the other hand, stem from technical decisions made during the construction of these models, which may unintentionally favor discriminatory outcomes. Finally, decision biases occur when AI systems amplify pre-existing biases in their responses, directly impacting specific groups (Barocas & Selbst, 2016).
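To make these three levels more tangible, the minimal Python sketch below (using invented toy data, not any dataset discussed in this paper) shows how a data-level check on group representation differs from a decision-level check on outcome disparity; algorithmic bias, by contrast, would be probed by inspecting the model and the technical choices behind it.

    # Illustrative sketch only: toy records of (gender, automated decision).
    from collections import Counter

    records = [
        ("male", 1), ("male", 1), ("male", 0), ("male", 1),
        ("female", 0), ("female", 1), ("female", 0),
    ]

    # Data-level bias: is each group fairly represented in the training data?
    representation = Counter(gender for gender, _ in records)
    print("Representation:", dict(representation))

    # Decision-level bias: does the system select one group at a much lower rate?
    def selection_rate(group):
        decisions = [d for g, d in records if g == group]
        return sum(decisions) / len(decisions)

    rates = {group: selection_rate(group) for group in representation}
    print("Selection rates:", rates)
    print("Female/male ratio: %.2f" % (rates["female"] / rates["male"]))

A ratio well below 1.0 would flag a disparity worth auditing, although real audits rely on much larger datasets and on several complementary fairness metrics.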

A rigorous and systematic analysis of gender biases, as well as a precise understanding of their technical and sociocultural causes, is essential for developing fairer training methodologies and usage practices in the design and application of AI systems.

III. Causes of Gender Bias in AI

Bias in data collection and training datasets

Many important decisions are now automated by AI applications, and AI-enabled decision systems have exacerbated existing inequalities by amplifying pre-existing societal and gender bias. The link between gender bias and AI is rooted in modern society, which is still shaped by deep-seated patriarchy.
There is no single definition of the term 'bias'. Broadly, 'bias' can be defined as an inclination or prejudice for or against a person or thing (Oxford Dictionary). Gender bias, more specifically, is the term used to describe systematic biasing effects that result from gender-related stereotyping and prejudice (CEWS).

Bias is one of the largest ethical issues surrounding AI and machine learning today, and machine translation is not exempt from it. Several AI tools have recently shown harmful tendencies toward certain minorities, exhibiting racist behaviour and gender bias. The latter can be illustrated through the case study of Google Translate (GT), focusing on sentences translated into English using the GT API. Google launched Google Translate in 2006, and it became one of the largest Neural Machine Translation (NMT) tools in existence. Historically, GT provided only one translation for a query, even if the translation could have either a feminine or a masculine form; in doing so, the NMT system automatically replicated gender bias already present in society. Researchers created sentences such as 'he/she is an engineer' in languages like Chinese, Hungarian and Turkish, which use non-gendered pronouns, and submitted them to GT to see which personal pronouns the service would insert. They then compared the results with data from the US Bureau of Labor Statistics (BLS), to see whether the number of masculine and feminine pronouns generated by Google corresponded to reality.

Prates et al. (2019) demonstrate that GT exhibits a tendency towards male defaults, particularly in fields associated with unbalanced gender distributions or stereotypes, such as STEM (Science, Technology, Engineering, and Mathematics) occupations. According to the researchers, the algorithm should generate pronouns in a ratio representative of the outside world: when a specific profession is practised mainly by women, GT should use 'she' more often in the translation of a sentence. Instead, it tends to default to male pronouns, especially for STEM occupations; grouping these fields into a single category makes the asymmetry between gender pronouns visible (72% male defaults). GT was thus unable to represent contemporary society, in which women's role has widely expanded and, in several labour sectors, their presence is almost equal to that of men. In 2018 Google decided to address the question of gender bias. Until then it had offered a single translation per query, which reflected the bias, so Google updated its translation framework: single-word queries from English to French, Italian, Portuguese, or Spanish now provide both masculine and feminine translations. Gender-neutral sentences are identified via a new machine-learned process, while masculine and feminine translations are produced through two further steps that involve adding gender attributes to the training data and filtering out rejected translation suggestions. Google claims this new NMT system can 'reliably' produce feminine and masculine translations 99% of the time. It is certainly a big step forward for language inclusion, even though the gains are less visible for longer phrases or full sentences, which require a more complex process.
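The core of this measurement can be summarized in a few lines of code. The Python sketch below is a simplified reconstruction, not the authors' actual code: it assumes the gender-neutral source sentences have already been translated (the outputs shown are invented) and that a reference share of women in the occupation is available from BLS-style statistics (the 0.74 figure is hypothetical).

    # Hypothetical translations of a gender-neutral source sentence meaning
    # "he/she is a teacher" (e.g. from Hungarian or Turkish).
    translations = [
        "he is a teacher", "he is a teacher", "she is a teacher",
        "he is a teacher", "he is a teacher",
    ]
    reference_female_share = 0.74  # invented BLS-style share of women in the field

    def leading_pronoun(sentence):
        # The translated templates start with the pronoun the system inserted.
        return sentence.strip().split()[0].lower()

    he = sum(leading_pronoun(s) == "he" for s in translations)
    she = sum(leading_pronoun(s) == "she" for s in translations)

    observed_female_share = she / (he + she)
    print(f"Observed 'she' share:   {observed_female_share:.2f}")
    print(f"Reference female share: {reference_female_share:.2f}")
    # A male-default bias shows up when the observed share of feminine pronouns
    # falls well below the real-world share of women in the occupation.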

Algorithmic discrimination and systemic inequalities

Drawing on Axel Honneth's theory of recognition (Honneth, 1996), it has been argued that AI's gender biases are not only an ethical problem because they can lead to discrimination, but also because they resemble forms of misrecognition that can hurt women's self-development and self-worth (Waelen & Wieczorek, 2022). Honneth's theory of recognition is valuable for understanding the potential effects of AI on people's self-development and for explaining that bias is not only a technical problem, but also a social one. In fact, gender biases in AI, like racial bias and other forms of bias, indirectly stem from the social norms and practices that prevail in a society. In his influential book The Struggle for Recognition (Honneth, 1996), Honneth outlines a social philosophy which focuses on the recognition granted or refused to individuals and groups based on their needs, moral responsibility, and societal contributions. According to recognition theory, our personality and identity are shaped by social relationships, which influence the roles we assume and the goals we strive for in daily life. The recognition we receive—or are denied—in our interactions with others plays a fundamental role in shaping both individual development and society as a whole. By interacting with others and by perceiving herself from their perspective, an individual develops a "practical relation-to-self" (Honneth, 1996, p. 92), which determines how she establishes her self-worth and sees her position in society.

In this regard, AI systems can be said to be biased in a variety of ways. According to Friedman and Nissenbaum (1996), there are three categories of bias in computer systems: preexisting, technical, and emergent biases. Preexisting bias primarily means that a system perpetuates existing human prejudices. These biases can originate from the system's technical design or from the training data on which it is built. In the first case, preexisting bias emerges when a technology's programming or material design reflects the prejudiced or non-inclusive beliefs of its developers (e.g., smartphones designed to fit the average male hand and pocket). In the second case, biased or unrepresentative input data lead to algorithms that generate low-quality, potentially discriminatory outputs—what is often summarized as "garbage in, garbage out". Notably, preexisting bias caused by flawed data can also be classified as a form of technical bias. This occurs when self-learning systems interpret their training data in problematic ways, favoring certain outcomes over others.

A striking example of AI's misrecognition of women is seen in voice and facial recognition systems, which are less accurate in identifying women than men. Similarly, facial recognition technology has been found to perform worse on darker-skinned individuals than on lighter-skinned ones, reinforcing existing inequalities in automated decision-making processes. Young, white, and often affluent men make up the majority of the workforce in many technology companies today (Richter, 2021). This gives them a much greater influence on the development of technologies than other groups have. In such settings, what is known as unconscious bias can develop: the automatic, unintentional mental associations individuals make based on factors such as gender, race, age, and background. These biases stem from societal influences, personal experiences, and cultural norms, often shaping decision-making processes without conscious awareness.
Generative AI has the potential to transform productivity and reduce inequality, but only if adopted broadly. It is expected to have profound economic and social impacts. Recent studies demonstrate that tools like ChatGPT have already begun to impact the skills, knowledge, and productivity of professionals across various domains, including college-educated workers, customer support agents, job seekers, students, and entrepreneurs (Brynjolfsson, Li, and Raymond, 2023). Moreover, because these tools are often both widely accessible and easy to use, they have the potential to help billions of people from historically underserved groups across the world (Björkegren, 2023; Otis et al., 2023). This is particularly relevant for women, who continue to encounter institutional, professional, and cultural barriers that hinder their access to the skills and knowledge essential for workplace success. However, research in the sociology and economics of technology adoption indicates that, despite the potential benefits, various obstacles may lead to lower adoption rates of generative AI among women compared to men.

Underrepresentation of women in AI development

Challenging stereotypes requires a more inclusive approach to recruitment within technology companies. Recent data highlights significant gender imbalances in the AI sector: women hold only 20% of technical roles in major machine learning companies, account for just 12% of AI researchers, and make up a mere 6% of professional software developers. This disparity is also evident in academic contributions, where only 18% of authors at top AI conferences are women, and over 80% of AI professors are men. A lack of diversity in development teams increases the risk that AI systems will fail to adequately serve diverse populations or safeguard their fundamental rights.

The rapid expansion of artificial intelligence has led to transformative advancements across various sectors, from enhancing medical diagnostics to improving connectivity through social media and increasing workplace efficiency via automation. However, when developers operate from a singular perspective, they may overlook how female users interact with AI-driven technologies. This oversight can compromise both the accuracy of these systems and the quality of services provided.

The exclusion of women's perspectives, values, and needs in AI development reflects deeper issues of misrecognition, as conceptualized by Axel Honneth. Recognizing and incorporating diverse user needs into the design of widely deployed technologies affirms individuals’ significance in society and validates their experiences. A failure to do so can negatively impact women’s self-perception and personal development. Indeed, as already said, one form of misrecognition in AI is the frequent inaccuracy in identifying women’s faces and voices, largely due to the underrepresentation of female data in system training. Another issue stems from gendered stereotypes embedded in AI systems, which could be mitigated by designing products that better reflect the spectrum of gender expression and identity. Lastly, a reliance on generalized assumptions about femininity further exacerbates the exclusion of diverse female perspectives, emphasizing the need for more representative and inclusive AI development practices.

IV. Examples of Gender Bias in AI

Workplace discrimination and stereotyped representation

Artificial intelligence (AI) is transforming our world—but when it reflects existing biases, it can reinforce discrimination against women and girls. From hiring decisions to healthcare diagnoses, AI systems can amplify gender inequalities when trained on biased data. As Zinnya del Villar, a leading expert in responsible AI, points out, systems that learn from data filled with stereotypes often reflect and reinforce gender biases. "These biases can limit opportunities and diversity, especially in areas like decision-making, hiring, loan approvals, and legal judgments." AI, as explained in the previous sections, is about data: it is a set of technologies that enable computers to perform complex tasks faster than humans, and AI systems such as machine learning models learn to perform these tasks from the data they are trained on. When these models rely on biased algorithms, they can reinforce existing inequalities and fuel gender discrimination.

In a nutshell, AI gender bias arises when an AI system treats people differently on the basis of their gender, because that is what it learned from the biased data it was trained on; AI mirrors the biases present in our society on the basis of gender, age, race, and many other factors. Public awareness and education are essential parts of the response, del Villar adds: helping people understand how AI works and where bias can arise empowers them to recognize and challenge biased systems and to keep human oversight over decision-making processes. "To reduce gender bias in AI, it is crucial that the data used to train AI systems is diverse and represents all genders, races, and communities," continues del Villar. This means actively selecting data that reflects different social backgrounds, cultures and roles, while removing historical biases, such as those that associate specific jobs or traits with one gender.
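As a concrete illustration of what "actively selecting data" can mean in practice, the short Python sketch below rebalances an invented, deliberately skewed training set by oversampling the under-represented group. This is only one simple rebalancing strategy offered for illustration, not the specific approach del Villar or UN Women endorse.

    import random

    random.seed(0)

    # Invented, deliberately skewed training records: 80 male vs 20 female examples.
    rows = [{"gender": "male"} for _ in range(80)] + [{"gender": "female"} for _ in range(20)]

    def balance_by_group(data, key):
        """Oversample smaller groups (with replacement) up to the largest group's size."""
        groups = {}
        for row in data:
            groups.setdefault(row[key], []).append(row)
        target = max(len(members) for members in groups.values())
        balanced = []
        for members in groups.values():
            balanced.extend(members)
            balanced.extend(random.choices(members, k=target - len(members)))
        return balanced

    balanced = balance_by_group(rows, "gender")
    print({g: sum(r["gender"] == g for r in balanced) for g in ("male", "female")})
    # -> {'male': 80, 'female': 80}: both groups now contribute equally to training.

Simple oversampling does not, by itself, remove stereotyped associations inside the data, which is why the recommendations above also stress auditing and removing historically biased labels.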

Ahead of International Women's Day, a UNESCO study revealed worrying tendencies in Large Language Models (LLMs) to produce gender bias, as well as homophobia and racial stereotyping. Women were described as working in domestic roles far more often than men – four times as often by one model – and were frequently associated with words like "home", "family" and "children", while male names were linked to "business", "executive", "salary", and "career".
The study, Bias Against Women and Girls in Large Language Models, explores gender stereotyping in LLMs, the natural language processing tools that power popular generative AI platforms. It examines models such as GPT-3.5 and GPT-2 by OpenAI, as well as Llama 2 by Meta, providing clear evidence of bias against women in the content they generate. The study found that open-source models like Llama 2 and GPT-2—valued for their accessibility to a broad audience—exhibited the most pronounced gender biases. However, their open nature also presents an opportunity for greater transparency and collaborative efforts within the global research community to mitigate these biases. In contrast, more closed models, such as GPT-3.5, GPT-4 (which powers ChatGPT), and Google's Gemini, offer fewer opportunities for scrutiny and improvement. A linguistic analysis of Llama 2's output revealed significant disparities in how men and women are represented. Stories about male characters frequently included words like "treasure", "woods", "sea", "adventurous" and "decided", whereas stories about women were dominated by words such as "garden", "love", "felt" and "gentle". Additionally, women were depicted in domestic roles four times more often than men, further highlighting the model's tendency to perpetuate gender stereotypes.
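The kind of linguistic analysis described above can be approximated with very little code. The Python sketch below uses two invented miniature "corpora" standing in for LLM-generated stories; it is not the UNESCO study's data or method, which relies on much larger generated corpora and more careful association measures.

    from collections import Counter

    # Invented miniature corpora standing in for LLM-generated stories.
    stories = {
        "female": [
            "She stayed in the garden and felt gentle love for her family.",
            "She cared for the children at home.",
        ],
        "male": [
            "He decided to sail the sea in search of treasure.",
            "He led the business and negotiated his salary.",
        ],
    }

    def word_counts(texts):
        """Count lower-cased words across a list of generated stories."""
        words = []
        for text in texts:
            words += [w.strip(".,").lower() for w in text.split()]
        return Counter(words)

    # Contrast which words dominate stories about women versus men.
    for gender, texts in stories.items():
        print(gender, word_counts(texts).most_common(5))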

Another significant analysis of gender bias was conducted by Waelen and Wieczorek (2022), who showed that AI not only has the power to influence our behaviour directly, but also helps constitute who we become and how we are able to express ourselves over time. They argued that artificial intelligence always needs the human mind as a counterpart: in machine translation, the human element is a post-editor who checks and corrects the entire output and raises questions when ambiguities arise. New research reveals, however, that women are significantly more reluctant to use the technology than men. "There is always a stark gender disparity hiding in the back of these papers," says Harvard Business School Associate Professor Rembrand Koning, who has also noticed that fewer women use the generative AI tools that he and his colleagues at the Digital Data Design Institute at Harvard have created for entrepreneurs around the world. Koning's research reveals that women are adopting AI tools at a rate 25% lower than men on average, despite the fact that AI's benefits should, in principle, be equally accessible to both genders. He explains that, based on discussions with managers and existing research on work and gender, some women hesitate to rely on AI-generated information due to concerns about its ethical implications or the perception that using it constitutes "cheating." This apprehension is often rooted in societal expectations and educational experiences. "Women face harsher penalties when their expertise is questioned in various fields," Koning notes. "They may fear that even if they provide the correct answer, others will assume they 'cheated' by using tools like ChatGPT."
If the gender gap in AI adoption continues, Koning warns that it could lead to three significant consequences.

First, women may struggle to advance in their careers. If female workers aren't using a technology that increases productivity, they risk falling behind their male counterparts, ultimately widening the gender gap in pay and job opportunities. In recruitment, unconscious bias can significantly impact the hiring process, leading to an under-representation of diverse talent in technology sectors. In the tech industry, gender bias remains a persistent issue. Women and non-binary individuals often face challenges in securing roles due to stereotypes suggesting that men are more suited for technical positions. Studies have shown that women's resumes are sometimes evaluated less favourably than identical resumes bearing male names.
Further, traditional resume screening is a time-intensive and bias-prone task, as recruiters may unconsciously favor candidates based on factors such as name, gender, ethnicity, or educational background. In response, AI-driven Applicant Tracking Systems (ATS) use machine learning to analyse resumes and shortlist candidates based on skills, experience, and qualifications rather than personal identifiers. These AI-powered systems can: a) analyse resumes to identify relevant keywords, work experience, and educational backgrounds; b) rank candidates based on job-specific criteria, supporting a more objective evaluation; c) reduce human bias by anonymizing demographic details before recruiters review applications. For example, companies like IBM and Amazon use AI-driven ATS solutions to process thousands of applications efficiently while aiming to ensure fair candidate evaluation.
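The sketch below illustrates, in simplified Python, the anonymize-then-rank idea behind such systems; the field names, keywords, and scoring rule are invented for the example and do not describe IBM's or Amazon's actual software.

    JOB_KEYWORDS = {"python", "sql", "machine learning"}

    def anonymize(resume):
        """Drop fields that could reveal gender, ethnicity, or age before review."""
        hidden = {"name", "gender", "date_of_birth", "photo"}
        return {key: value for key, value in resume.items() if key not in hidden}

    def score(resume):
        """Rank on job-specific skills only."""
        skills = {skill.lower() for skill in resume.get("skills", [])}
        return len(JOB_KEYWORDS & skills)

    applicants = [
        {"name": "A. Rossi",   "gender": "F", "skills": ["Python", "SQL"]},
        {"name": "B. Bianchi", "gender": "M", "skills": ["Excel"]},
    ]

    shortlist = sorted((anonymize(a) for a in applicants), key=score, reverse=True)
    print(shortlist)

Even then, anonymization only removes explicit identifiers; proxy signals such as employment gaps or gendered activities can still leak demographic information, which is one reason the next paragraph stresses human oversight.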

AI should complement, not replace, human judgment in recruitment. While AI can process vast amounts of data efficiently, it lacks the ability to assess soft skills, cultural fit, and other nuanced factors essential to hiring decisions. Over-reliance on AI can lead to a purely algorithmic approach, stripping away the human intuition that plays a vital role in evaluating candidates holistically. Body-language analysis, vocal assessments, gamified tests, CV scanners: these are some of the tools companies use to screen candidates with artificial intelligence recruiting software (BBC, 2024). Job applicants face these machine prompts – and AI decides whether they are a good match or fall short. A telling example is a high-profile case from 2020: UK-based make-up artist Anthea Mairoudhiou said her company told her to re-apply for her role after being furloughed during the pandemic. She was evaluated both on past performance and via an AI-screening programme, HireVue. She says she ranked well in the skills evaluation, but after the AI tool scored her body language poorly she was out of a job for good (BBC, 2024).

The role of governments and institutions in shaping AI ethics through guidelines

Artificial Intelligence can be used either to reduce or to perpetuate the biases and inequalities in our societies. Here are five steps that del Villar recommends for making AI inclusive – and better:

  1. Using diverse and representative data sets to train AI systems
  2. Improving the transparency of algorithms in AI systems
  3. Making sure AI development and research teams are diverse and inclusive to avoid blind spots
  4. Adopting strong ethical frameworks for AI systems
  5. Integrating gender-responsive policies in developing AI systems

These steps echo the main takeaways of the UNESCO General Conference of November 2021, when Member States unanimously adopted the Recommendation on the Ethics of AI, the first and only global normative framework in this field, applicable to all 194 Member States. In February 2024, eight global tech companies, including Microsoft, also endorsed the Recommendation. The framework calls for specific actions to ensure gender equality in the design of AI tools, including ring-fencing funds to finance gender-parity schemes in companies, financially incentivizing women's entrepreneurship, and investing in targeted programmes to increase opportunities for girls' and women's participation in STEM and ICT disciplines. The protection of human rights and dignity is the cornerstone of the Recommendation, based on the advancement of fundamental principles such as transparency and fairness, and on the importance of human oversight of AI systems.
GSMA, INNIT, Lenovo Group, LG AI Research, Mastercard, Microsoft, Salesforce and Telefonica signed a ground-breaking agreement to build more ethical AI. The companies will integrate the values and principles of UNESCO’s Recommendation on the Ethics of AI when designing and deploying AI systems.

UNESCO's Recommendation on the Ethics of Artificial Intelligence was the world's first, and remains the only, global normative framework on AI ethics. In the past two years, demonstrable progress has been made towards implementing this framework.

Later, UNESCO introduced a new collaborative platform, Women4Ethical AI, to support governments' and companies' efforts to ensure that women are represented equally in both the design and deployment of AI. The platform's members also contribute to the advancement of all the ethical provisions of the Recommendation on the Ethics of AI. It unites 17 leading female experts from academia, civil society, the private sector and regulatory bodies around the world, who share research and contribute to a repository of good practices. The platform drives progress on non-discriminatory algorithms and data sources, and incentivizes girls, women and under-represented groups to participate in AI. Launched during a session of the UN Commission on the Status of Women, it fosters the integration of gender equality into the development and deployment of AI technologies, thus contributing to the global conversation on the role that women can and should play in shaping AI. It advocates for significant policy action across various fields, ranging from closing gender gaps to empowering women in tech, promoting female entrepreneurship, and enhancing participation and leadership in the AI domain.

New ethical challenges are created by the potential of AI algorithms to reproduce and reinforce existing biases, and thus to exacerbate already existing forms of discrimination, prejudice and stereotyping. Some of these issues are related to the capacity of AI systems to perform tasks which previously only living beings could do, and which were in some cases even limited to human beings (Recommendation, para. 2(c)). These characteristics give AI systems a profound, new role in human practices and society, as well as in their relationship with the environment and ecosystems, creating a new context for children and young people to grow up in, develop an understanding of the world and of themselves, critically understand media and information, and learn to make decisions.

The Recommendation underscores the need to protect, promote, and respect human rights, fundamental freedoms, human dignity, and equality—including gender equality. It also highlights the importance of safeguarding the interests of present and future generations, preserving the environment and biodiversity, and ensuring respect for cultural diversity at all stages of the AI system lifecycle.

Moreover, transparency and explainability are fundamental to ensuring that AI respects human rights and ethical principles. Without transparency, accountability mechanisms at national and international levels may be ineffective, potentially undermining legal frameworks, fair trial rights, and access to effective remedies.
To operationalize these principles, the Recommendation calls on Member States to implement policy measures that ensure AI governance aligns with human rights, democracy, and the rule of law. This includes establishing policy frameworks and mechanisms that encourage adherence by stakeholders—such as private companies, research institutions, and civil society organizations.

V. Consequences of Gender Bias

The consequences of gender bias in AI can be profound and far-reaching, affecting everything from fairness in employee selection processes to fundamental aspects of social justice. In the workplace, these biases can lead to discrimination in key processes such as hiring, promotions, and performance evaluations, unfairly excluding or limiting women and gender minorities from leadership and career development positions. This exclusion not only harms the individuals involved but also impoverishes the diversity of talent within organizations, negatively impacting innovation and organizational efficiency (Ajunwa, 2020).

Beyond this, biases in AI also pose significant risks to equity and social justice. The use of biased automated systems in sensitive contexts such as bank loan allocation, healthcare insurance, or even judicial decisions could deepen existing inequalities. For example, algorithms that determine credit conditions can systematically disadvantage women, particularly those in more vulnerable socio-economic contexts, perpetuating cycles of poverty and marginalization. Moreover, systems used in criminal justice can generate unfairly harsher decisions for certain groups, exacerbating historical disparities (Eubanks, 2018).

Another significant consequence is the logical decline in public trust in AI as a technology serving humanity. When people perceive technological systems as unfair or biased, their willingness to use them decreases, limiting the effectiveness and widespread adoption of AI in various social and economic spheres (Benjamin, 2019). Furthermore, these biases not only directly impact the victims of discrimination but also have an indirect effect on the communities to which they belong, increasing social tensions and perpetuating historical cycles of inequality.

Moreover, gender biases in AI pose significant challenges from both an ethical and regulatory standpoint. Biased automated decisions force a reconsideration of current regulatory frameworks and require new public policies and professional ethics to ensure that emerging technologies contribute to social well-being (Crawford, 2021).

In this context, the European Union has adopted legislative measures to promote gender equality in corporate leadership. By July 2026, large publicly traded companies will be required to ensure that at least 40% of non-executive director positions, or 33% of all director positions, are occupied by the under-represented gender. This initiative aims to implement transparent and fair selection processes, reflecting a commitment to diversity and inclusion at the corporate level (European Parliament, 2022).

VI. Conclusion

In conclusion, the pervasive gender bias in Artificial Intelligence represents a multifaceted challenge that spans technical, social, and ethical dimensions. The evidence presented demonstrates that biases in data collection, algorithm design, and automated decision-making systems not only replicate existing societal inequalities but also amplify them, particularly affecting women and gender minorities. These biases have tangible consequences in real-world applications—ranging from skewed hiring practices that hinder career progression for women to the reinforcement of harmful stereotypes in language translation and facial recognition systems.

Critically, the underrepresentation of women in AI development further exacerbates these issues. When development teams lack diversity, they inadvertently create products that fail to address the needs and experiences of all users, thus perpetuating systemic discrimination. The situation calls for a radical rethinking of AI development practices, where inclusive design is not merely an afterthought but a foundational principle. It is essential for organizations to invest in the recruitment and retention of diverse talent, ensuring that the perspectives of women and underrepresented groups are integrated from the inception of AI systems.

Furthermore, the need for transparency and accountability in AI is paramount. Implementing rigorous auditing processes and fairness testing can help identify and mitigate biases before they result in discriminatory outcomes. Regulatory frameworks and ethical guidelines, such as UNESCO’s Recommendation on the Ethics of Artificial Intelligence, offer valuable roadmaps for achieving greater equity in technology. However, effective enforcement of these standards requires close collaboration among policymakers, industry leaders, and civil society organizations to build a governance ecosystem that prioritizes human rights and social justice.

Looking ahead, the transformative potential of AI can only be fully realized when it serves as an instrument of empowerment rather than exclusion. Addressing gender bias in AI is not solely about correcting technical shortcomings; it is about challenging and reshaping the cultural and structural narratives that have long defined gender roles in society. By fostering an environment where ethical considerations, inclusivity, and diverse perspectives are central to AI development, we can create systems that not only enhance productivity and innovation but also contribute to a more just and equitable world.

References

Ajunwa, I. (2020). "The Paradox of Automation as Anti-Bias Intervention." Cardozo Law Review, 41(5), 1671–1742.

Barocas, S., & Selbst, A. D. (2016). "Big Data's Disparate Impact." California Law Review, 104(3), 671–732.

Bellens, E. (2018). "Google Translate est sexiste." Data News. https://datanews.levif.be/actualite/google-translate-est-sexiste/

Benjamin, R. (2019). Race After Technology: Abolitionist Tools for the New Jim Code. Polity Press.

Björkegren, D. (2023). "Artificial Intelligence for the Poor: How to Harness the Power of AI in the Developing World." Foreign Affairs.

Brynjolfsson, E., Li, D., & Raymond, L. R. (2023). "Generative AI at Work." NBER Working Paper.

Buolamwini, J., & Gebru, T. (2018). "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification." Proceedings of Machine Learning Research, 81, 1–15.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press.

Dastin, J. (2018). "Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women." Reuters.

Eubanks, V. (2018). Automating Inequality. St. Martin's Press.

Friedman, B., & Nissenbaum, H. (1996). "Bias in Computer Systems." ACM Transactions on Information Systems, 14(3), 330–347.

Honneth, A. (1996). The Struggle for Recognition: The Moral Grammar of Social Conflicts.

Koning, R. (2025). "Women Are Avoiding AI. Will Their Careers Suffer?" Harvard Business School Working Knowledge.

Kuczmarski, J. (2018). "Reducing Gender Bias in Google Translate." Google Translate Blog. https://blog.google/products/translate/reducing-gender-bias-google-translate/

Lytton, C. (2024). "AI Hiring Tools May Be Filtering Out the Best Job Applicants." BBC.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). "A Survey on Bias and Fairness in Machine Learning." ACM Computing Surveys, 54(6), 1–35.

Noble, S. U. (2018). Algorithms of Oppression. NYU Press.

Oneword (2022). "Error Sources in Machine Translation: How the Algorithm Reproduces Unwanted Gender Roles." https://www.oneword.de/en/error-sources-in-machine-translation/#information

Otis, N., Clarke, R. P., Delecourt, S., Holtz, D., & Koning, R. (2023). "The Uneven Impact of Generative AI on Entrepreneurial Performance." Working paper, SSRN 4671369.

Otis, N., Delecourt, S., Cranney, K., & Koning, R. (2025). "Global Evidence on Gender Gaps and Generative AI." Harvard Business School Working Paper. https://www.hbs.edu/faculty/Pages/item.aspx?num=66548

Prates, M., Avelar, P., & Lamb, L. (2019). "Assessing Gender Bias in Machine Translation: A Case Study with Google Translate." (pp. 6363–6381).

Richter, F. (2021). "Women's Representation in Big Tech." Statista. https://www.statista.com/chart/4467/female-employees-at-tech-companies/ (accessed 18 January 2022).

UN Women (2025). "How AI Reinforces Gender Bias—and What We Can Do About It." Interview with Zinnya del Villar. UN Women Headquarters.

UNESCO (2021). Recommendation on the Ethics of Artificial Intelligence, adopted on 23 November 2021. Published in 2022 by the United Nations Educational, Scientific and Cultural Organization, France. SHS/BIO/PI/2021/1.

UNESCO (2024). "AI Ethics: 8 Global Tech Companies Commit to Apply UNESCO's Recommendation." Press release.

UNESCO (2024). "Artificial Intelligence: UNESCO Launches Women4Ethical AI Expert Platform to Advance Gender Equality." Press release.

UNESCO (2024). "Ethics of Artificial Intelligence: The Recommendation." Press release.

UNESCO (2024). "Generative AI: UNESCO Study Reveals Alarming Evidence of Regressive Gender Stereotypes." https://unes.co/lvg4k9

UNESCO & IRCAI (2024). Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models. UNESCO Digital Library.

Waelen, R., & Wieczorek, M. (2022). "The Struggle for AI's Recognition: Understanding the Normative Implications of Gender Bias in AI with Honneth's Theory of Recognition." Philosophy & Technology, 35, 53. https://doi.org/10.1007/s13347-022-00548-w

West, M., Kraut, R., & Ei Chew, H. (2019). I'd Blush if I Could. UNESCO.


