EXPLAINABLE AI FOR GRAPH-BASED MACHINE LEARNING TASKS

SARTORI, FLAVIO
2020/2021

Abstract

Many real-world processes and phenomena, ranging from drug-drug interactions to social networks, are based on interplays among connected entities and can therefore be represented as graphs. A promising machine learning approach for this kind of data is Graph Neural Networks (GNNs), which employ neural networks to embed node features and extract structural information from the graph. However, GNNs are built on nested non-linear structures, so the learned decision process is not easily understandable by human beings. As a consequence, virtually all trained GNNs end up being uninterpretable black boxes. This lack of transparency is a barrier to the adoption of these systems in tasks where interpretability is essential, such as autonomous driving or medical applications. This problem is tackled by a new research field named eXplainable Artificial Intelligence (XAI), which comprises a suite of methods and algorithms enabling human beings to understand, trust and better manage machine learning models. This thesis focuses on the study of a state-of-the-art XAI technique called GNNExplainer: first, we improved its stability on GNN black-box models for the Node and Graph Classification tasks; then we developed a methodology to explain the Link Prediction task as well. Our results analyse and extend the current state of the art in XAI for graph-based tasks.
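To make the GNNExplainer idea mentioned above concrete, the following is a minimal, illustrative sketch, not the implementation developed in this thesis: it learns a soft mask over the edges around a target node so that the masked computation graph preserves the model's original prediction, while sparsity and entropy penalties keep the explanation small and close to binary. The model `gnn`, its assumed `edge_weight` keyword argument, and all hyper-parameters are hypothetical placeholders.

```python
# Illustrative sketch of the GNNExplainer objective for node classification:
# optimise a per-edge mask so the masked graph still yields the original class.
import torch
import torch.nn.functional as F

def explain_node(gnn, x, edge_index, node_idx, epochs=200, lr=0.01,
                 size_coeff=0.005, entropy_coeff=1.0):
    """Return a learned importance weight in (0, 1) for every edge."""
    gnn.eval()
    with torch.no_grad():
        # Class predicted on the full graph; this is what the mask must preserve.
        target = gnn(x, edge_index).argmax(dim=-1)[node_idx]

    # One learnable logit per edge; a sigmoid turns it into a soft mask.
    edge_logits = torch.randn(edge_index.size(1), requires_grad=True)
    optimizer = torch.optim.Adam([edge_logits], lr=lr)

    for _ in range(epochs):
        optimizer.zero_grad()
        mask = torch.sigmoid(edge_logits)

        # Assumption: `gnn` accepts per-edge weights that scale message passing.
        log_probs = F.log_softmax(gnn(x, edge_index, edge_weight=mask), dim=-1)

        pred_loss = -log_probs[node_idx, target]           # keep the original class
        size_loss = size_coeff * mask.sum()                # prefer few edges
        ent = -(mask * torch.log(mask + 1e-12)
                + (1 - mask) * torch.log(1 - mask + 1e-12))
        entropy_loss = entropy_coeff * ent.mean()          # push mask towards 0/1

        (pred_loss + size_loss + entropy_loss).backward()
        optimizer.step()

    return torch.sigmoid(edge_logits).detach()
```

The edges with the largest learned weights form the explanatory subgraph returned to the user; the thesis itself studies the stability of this kind of optimisation and extends it to link prediction.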

Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14240/32692