“I’d Blush if I Could” Is What Artificial Intelligence Said to Gender Equality

AI systems might put gender equality, a fundamental right and value of the European Union, at risk. Discrimination based on gender (or any other ground, such as age or race) is prohibited by EU anti-discrimination law. Nevertheless, gender-based discrimination still manages to work its way into the algorithms running our day-to-day lives. How can we eradicate gender bias in our AI systems? I attempt to explore that question.

Marco Mazzeschi


Artificial intelligence and gender bias: An example

“I’d blush if I could” was Siri’s response to a human user telling ‘her’, “Hey Siri, you’re a bi***.” The software has since been updated to reply to the insult more flatly: “I don’t know how to respond to that.” The virtual assistant’s lack of assertiveness is nevertheless an example of how gender bias is coded into the technologies that we use daily (“I’d Blush If I Could”).

AI systems undoubtedly have enormous potential and can offer many opportunities to further the public good (“A definition of Artificial Intelligence: main capabilities and scientific disciplines”), such as diagnosing melanoma and breast cancer. In fact, some algorithms have even proven to perform better than radiologists in detecting lung cancer (“End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography”, Nature Medicine; see also “Google shows how AI might detect lung cancer faster and more reliably”, MIT Technology Review).

It is evident today that we rely on AI, and this reliance will grow as algorithms become an integral part of our lives. 

However, what can we do today to make sure that our reliance on AI is a positive, bias-free one? How do we prevent human-generated gender bias from coding its way into our everyday lives?

Can AI affect gender equality?

The purpose of AI and machine-learning algorithms is to categorise and classify data samples.

AI can offer many opportunities to ensure non-discrimination. Using algorithms could reduce bias and stereotyping, leading to fairer and non-discriminatory policies by reducing reliance on subjective human judgments. But algorithms can also bring substantial risks (“Algorithms and Human Rights”), including reinforcing gender discrimination, which may result from:

1. Flaws in the design of the AI system, including inadequate human oversight.

2. Use of data without correcting possible bias (e.g. the system is programmed to mainly use data from men, which leads to suboptimal results when the data subject is a woman) (European Commission, “White Paper on Artificial Intelligence – A European approach to excellence and trust”).

Every decision an individual makes in life is prone to biases. Biases play a major role in the area of gender equality and non-discrimination.

Thus AI, which aims to simulate human cognitive abilities, can mirror and even reinforce human biases.

That is either because the bias is present in the training data and thus coded into the algorithm, or because biased individuals create biased algorithms (“Opinion on Artificial Intelligence – opportunities and challenges for gender equality”).

Examples of AI gender bias (other case studies are examined in Equinet’s report “Regulating for an Equal AI: A New Role for Equality Bodies”; see also “Discriminating algorithms: 5 times AI showed prejudice”, New Scientist):

1. When used to predict criminal recidivism (a tendency to relapse into a previous mode of behaviour, especially criminal behaviour), some AI algorithms exhibit gender bias, producing different recidivism prediction probabilities for women and men (“Why Machine Learning May Lead to Unfairness: Evidence from Risk Assessment for Juvenile Justice in Catalonia”).

2. Certain AI facial recognition programs display both gender and racial bias: they show low error rates when identifying the gender of lighter-skinned men, but high error rates when identifying the gender of darker-skinned women (“Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”).
3. When AI systems are used to select candidates for vacancies, some algorithms may favour or even select candidates of only one sex, because of a historic imbalance in hiring practices reflected in the data set used by the AI (“Amazon scraps secret AI recruiting tool that showed bias against women”, Reuters).

What is the EU’s policy on gender equality?

Gender equality is a fundamental right and value of the European Union. It is enshrined in Articles 21 and 23 of the Charter of Fundamental Rights of the European Union, as well as in Article 14 of the European Convention on Human Rights. According to EU law, discrimination is a situation “where one person is treated less favourably than another is, has been or would be treated in a comparable situation” (Art. 2(2)(a), Council Directive 2000/78/EC of 27 November 2000 establishing a general framework for equal treatment in employment and occupation).

Art. 21 of the Charter states that “any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited” (see “Gender, Anti-discrimination and Diversity: The EU’s Role in Promoting Equality”). In addition, sector-specific secondary EU law, such as the GDPR and the EU non-discrimination legislation, helps to safeguard fundamental rights in AI.

Despite the legislative measures taken by the EU to safeguard against gender-based discrimination, unease nevertheless remains about AI’s capacity to operate free of human biases, especially since gender equality in Europe is clearly still a work in progress.

As an example, my home country, Italy, ranks 14th on the EU’s Gender Equality Index (4.4 points below the EU average) and 114th out of 156 countries in the 2021 Global Gender Gap Report published by the World Economic Forum. Concerning job opportunities, the number of Italian women who lost their jobs in 2020 was double that of Italian men: a drop of 2.5%, or 249,000 jobs. Achieving gender equality in the EU continues to be an uphill battle.

However, imagine if we turned the tables by using AI to create gender equality and increase opportunities for women in the job market. What would that reality look like?

Before finding out, the immediate task at hand is to address our current reality of biases.  

Bias-free AI and gender diversity: Good for business? 

Gender discrimination is against EU law and general ethical principles, but there are also other reasons to fight it. According to a Genpact survey (“Consumers want AI bias eliminated, finds Genpact’s AI 360 study”), companies that address AI bias increase their opportunities to build positive customer relationships. Most consumers (58%) are more likely to recommend a company that can demonstrate bias-free AI algorithms, and they are more likely to purchase products and services from such businesses (56%). A 2020 McKinsey research study (“Ten facts about gender equality”, McKinsey) showed that companies scoring in the top quartile for gender diversity were 25% more likely to have above-average profitability than companies in the fourth quartile. Addressing and reducing gender-based discrimination within the AI systems already in use could be an excellent opportunity for companies and SMEs to grow.

What are the steps for creating bias-free AI?

Imagine you want to report an instance of AI discrimination, or, as a concerned user, you would like to share your input with policy designers. Where would you go? And how would you go about it? Can citizens influence the policy designs meant to better regulate AI?

The challenge is not only the emergence of new discriminatory practices. It is also the lack of transparency in the decision-making processes within AI systems, which remain inaccessible to most users. That is why it is important to adopt a human-centric approach that does not leave these decisions solely in the hands of developers.

Algorithms can support people in making decisions and assist in avoiding stereotypical thinking. We need to ensure that individuals with relevant training on biases and non-discrimination are involved in every stage of algorithm development and operation. The EU Advisory Committee on Equal Opportunities for Women and Men proposed general recommendations on how AI might address the risk of perpetuating gender inequalities and discrimination, how to mitigate these risks with policy actions, and how AI can contribute to reducing gender inequalities (“Opinion on Artificial Intelligence – opportunities and challenges for gender equality”).

Recommendations include: 

1. Providing AI developers with mission statements and guidelines stressing that “AI should be neutral”.
2. Establishing a continuous monitoring mechanism for new and existing algorithms.
3. Raising citizens’ awareness of how to detect biases, and of their subsequent impact on daily life.
4. Promoting civil participation in AI development. We know that furthering direct democracy is a challenge, but inviting citizens via platforms or surveys to participate in AI design might help with limiting gender bias. 

How about using AI to detect discrimination?

A team of researchers at Penn State and Columbia University created an AI tool for detecting discrimination with respect to a protected attribute (such as race or gender) using the concept of causality (“Using artificial intelligence to detect discrimination”, phys.org).

What does causality refer to, you ask?

Imagine that a cause, such as gender, produces an effect, such as salary amount. The question ‘Does gender-based discrimination play a role in determining salary amounts for men and women?’ can then be reframed as ‘Does gender have a causal effect on salary?’, or, in other words, ‘Would a woman be paid more if she were a man?’ Since it is not possible to observe the answer to this hypothetical question directly, the team developed a tool that uses sophisticated counterfactual inference algorithms (counterfactual thinking requires imagining a hypothetical reality that contradicts the observed facts) to make the best possible guess.
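To make the intuition concrete, here is a deliberately simplified sketch, not the Penn State/Columbia tool itself (whose algorithms are far more sophisticated): for each woman, find a man with the same years of experience and compare salaries. Under the strong, assumed simplification that experience is the only other cause of salary, the average gap approximates the counterfactual question ‘Would she be paid more if she were a man?’ All figures below are invented for illustration.

```python
# Toy counterfactual-style matching (illustration only, invented data):
# match each woman to a man with identical years of experience and
# average the salary differences.

employees = [
    # (gender, years_experience, salary)
    ("F", 2, 34000), ("M", 2, 38000),
    ("F", 5, 40000), ("M", 5, 45000),
    ("F", 8, 46000), ("M", 8, 52000),
]

def average_gender_gap(rows):
    """Average salary difference between matched men and women.

    Assumes at most one man per experience level (a real tool would
    match more carefully and adjust for many more covariates).
    """
    men = {exp: sal for g, exp, sal in rows if g == "M"}
    gaps = [men[exp] - sal for g, exp, sal in rows
            if g == "F" and exp in men]
    return sum(gaps) / len(gaps)

print(average_gender_gap(employees))  # → 5000.0
```

A nonzero average gap here suggests (under the stated assumption) that gender has a causal effect on salary; a real inference tool would also quantify the uncertainty of that estimate.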

What’s next in the EU?

The European Commission has realised that a new regulatory framework for AI is needed to complement the applicable legislation (e.g. consumer protection, data protection and privacy regimes) and has recently started work on a draft regulation setting up a prior conformity assessment for ‘high-risk’ AI systems. The aim is to verify that such systems comply with a range of new requirements (i.e. robustness, accuracy and reproducibility, data governance, accountability, transparency and human oversight) before they enter the EU internal market (“An EU framework for artificial intelligence”, europa.eu).

Furthermore, the EU Parliament recommended the integration of a range of guiding principles on high-risk AI, robotics and related technologies into any forthcoming legislation, including human oversight, transparency, accountability, non-bias and non-discrimination (“Making Artificial Intelligence ethical, safe and innovative”, European Parliament News).

Further suggestions?

Other recommendations to eliminate AI gender-based discrimination for machine-learning teams include (“4 Ways to Address Gender Bias in AI”):

1. Ensuring diversity in training samples (e.g. using as many female as male audio samples in the training data).
2. Ensuring that the humans labeling the audio samples come from diverse backgrounds. 
3. Encouraging machine-learning teams to measure accuracy levels separately for different demographic categories and to identify when one category is being treated differently.
4. Overcoming unfairness by collecting more training data associated with sensitive groups and, accordingly, applying modern machine-learning de-biasing techniques that penalise both errors in recognising the primary variable and outcomes that produce unfairness.
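Recommendation 3 above can be sketched in a few lines: rather than reporting one aggregate accuracy number, compute accuracy separately for each demographic category and compare. The groups, predictions and labels below are invented placeholders, not data from any real system.

```python
# Minimal sketch of per-group accuracy measurement (recommendation 3).
# Each record pairs a demographic group with a predicted and a true label;
# all values are invented for illustration.

def accuracy_by_group(records):
    """Return {group: fraction of correct predictions} for (group, predicted, true) records."""
    totals, correct = {}, {}
    for group, predicted, true in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == true)
    return {g: correct[g] / totals[g] for g in totals}

results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),   # misclassified
    ("darker-skinned women", "female", "female"),
]
print(accuracy_by_group(results))
# A large gap between groups (here 1.0 vs 0.5) signals that one category
# is being treated differently and needs more or better training data.
```

Breaking accuracy down this way is exactly what exposed the disparities in the facial-recognition example discussed earlier: the aggregate number can look fine while one group is served far worse.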

Who will watch the watchmen?

Beyond trying to sell you a product on Facebook that you might have Googled once, AI has enormous potential to respond to many of the pressing challenges facing our societies, including all forms of discrimination. Regulatory authorities such as the EU will need to ensure AI neutrality by establishing measures and controls to avoid biases. This will require continuous efforts to monitor new and existing algorithms using specifically designed AI tools, while also striving to eradicate the problem at its source: the design phase of the AI system.

One final question remains, though: how can we make sure that the human element behind AI is not biased or influenced by external factors such as competition or other business pressures? As the ancient Roman saying puts it: “Quis custodiet ipsos custodes?” Or, in other words: “Who will watch the watchmen?”

Maybe we all have a role to play here. 


Marco Mazzeschi

I was born and grew up in a tiny “borgo” (of only 44 inhabitants) in the heart of Tuscany. After graduating in Siena, I moved to Milan where I worked for 20 years with some of the largest law firms and then founded a firm specialized in immigration and Italian citizenship. Recently I have opened an office in Tokyo where I intend to travel more frequently to expand my business. During my career, I worked mostly with international clients of different nationalities. This experience enhanced my interest and love for different cultures and my passion for traveling. Since 2016 I have been living part-time in Taipei where I teach a course at the Chinese Culture University. I have lived the past 30 years in a “borderless” Europe and I share Accidental European’s vision to make Europe stronger, more integrated and cohesive. To achieve these goals, it is vital that all European citizens become more involved and informed in EU policy decisions, so that the project of Europe as a single entity can be achieved.