Diagnosis of COVID-19 from chest radiographs using artificial intelligence
Barbano, Carlo Alberto Maria
2019/2020
Abstract
The possibility of using chest X-ray (CXR) imaging for early screening of COVID-19 patients is attracting great interest from both the clinical and the AI community. This is motivated by CXR's greater availability and ease of use in emergency settings compared to other methods such as computed tomography (CT). In this study we provide insights, and also raise warnings, on what it is reasonable to expect from applying deep learning to COVID classification of CXR images. We contribute by validating the generalization capability of the methods that have circulated in the scientific community in recent months, and by showing how significant the biases introduced by publicly available CXR datasets can be. We then illustrate a more reliable, "explainable-by-design" pipeline for COVID detection: a two-step approach in which the final diagnosis is based on the detection of common radiological findings and lung pathologies. All of our experiments have been carried out bearing in mind that, especially for clinical applications, explainability plays a major role in building trust in machine learning algorithms. Furthermore, we propose a novel application to medical images of a non-discriminatory regularization technique, aimed at reducing hidden biases in the datasets used. The contributions of this work are enabled by CORDA, a COVID CXR dataset collected for this study by two of the major emergency hospitals in Northern Italy during the peak of the COVID pandemic. The proposed approach, supported by almost 1k radiographic images in our dataset, achieves promising performance in COVID detection, compatible with - and sometimes even higher than - that of expert human radiologists.
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/20.500.14240/156323