
Federated Learning: An Empirical Analysis of Convergence Rates in Different Learning Regimes

MANCUSO, LORENZO
2019/2020

Abstract

The term federated learning was introduced by Google in 2016 to describe a machine learning setting in which many entities, called clients, collaboratively train a model under the orchestration of a central server while keeping the training data decentralized. Each client's raw data are stored locally and never exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective. Federated learning is an active research topic today because it allows organizations to implement distributed learning in a privacy-conscious fashion. A number of interesting questions are still open:

• With distributed or parallel training, are the model parameters guaranteed to converge to the same state as in a centralized setting?
• If they do not converge to the same state, how far are we from the centralized solution, and how far are we from the true optimum?
• What additional assumptions or conditions are needed to reach a “good” convergence?
• How much faster can distributed training be compared to non-distributed training, and how can this speed-up be evaluated?

To address some of these questions, this thesis experiments with several federated learning settings and studies how convergence rates and final accuracies vary with the choice of network and of data partitioning.
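To illustrate the setting described in the abstract, the following is a minimal sketch of a federated training round in the FedAvg style: each client runs a few steps of local training on its own partition, and the server aggregates the resulting models as a weighted average. The model, function names, and hyperparameters are illustrative assumptions for a toy linear-regression example, not the thesis's actual implementation.

```python
import numpy as np

def client_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local update: a few epochs of gradient descent
    on its own local data, for a linear model with squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def server_aggregate(client_weights, client_sizes):
    """FedAvg-style aggregation: average of the client models,
    weighted by the number of samples each client holds."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy simulation: 3 clients, each holding its own partition of the data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (40, 60, 100):                     # unequal partition sizes
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):                         # communication rounds
    local = [client_update(global_w, X, y) for X, y in clients]
    global_w = server_aggregate(local, [len(y) for _, y in clients])

print("estimated weights:", global_w)       # should approach true_w
```

How the local data are partitioned across clients (balanced and IID versus skewed) is exactly the kind of choice whose effect on convergence the thesis studies empirically.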
Files in this item:
797491_lorenzomancuso-tesimagistrale-federatedlearninganempiricalanalysisofconvergenceratesindifferentlearningregimes.pdf (Adobe PDF, 2.31 MB), not available
Type: Other attached material
Documents in UNITESI are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14240/29205