Artificial Intelligence and Equality: The Need for Gender-Responsive Data

DI MURO, ALESSIA
2021/2022

Abstract

In recent years, the rapid development of Artificial Intelligence (AI) has brought numerous benefits to society, including greater efficiency and speed in processes that would otherwise be lengthier. However, the use of algorithms in automated decision-making may result in discrimination and reinforce existing social hierarchies and inequalities. This thesis investigates algorithmic discrimination and the specific impact that the gender data gap, namely the persistent lack of data about women, has on this phenomenon. Chapter 1 outlines the nature of AI models and the different mechanisms of algorithmic discrimination. It highlights the challenges that algorithmic discrimination poses to traditional concepts of EU anti-discrimination law and discusses intersectionality as a lens through which this kind of discrimination can be examined more effectively. Chapter 2 explores the gender data gap and its impact. It analyzes its theoretical foundation, focusing in particular on the myth of male universality and on the invisibilisation of women, of which the gender data gap is both cause and effect. It provides an overview of existing gender data and of the remaining gaps, together with specific instances of algorithmic discrimination that result from them. Because AI models rely heavily on data, they are more likely to perpetuate biases and discrimination when gender-specific data are not used in their design. Finally, Chapter 3 discusses possible solutions and strategies to address the issues raised in the previous chapters. It encourages the development of more inclusive and representative datasets and highlights the importance of transparent, unbiased and accountable AI systems. In particular, it emphasizes the need to make full use of the existing tools and notions of both anti-discrimination law and data protection law.
Files in this record:
869286_tesi_alessia_di_muro_869286.pdf (not available)
Type: Other attached material
Size: 708.8 kB
Format: Adobe PDF

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14240/68578