Federated Learning with Dynamic Regularization - Implementation and Experiments

BARGETTO, CRISTINA
2022/2023

Abstract

Traditional machine learning algorithms typically require centralizing data. This can be a challenge in real-world scenarios where data is generated at many different locations: transmitting all the data to a central location is usually infeasible due to resource limitations and privacy concerns. Federated learning addresses this problem by allowing clients to collaboratively train a global model while keeping their raw data stored on their local devices. The choice of the aggregation algorithm in federated learning significantly impacts the performance of the resulting model, so it is essential to understand the strengths and weaknesses of the different aggregation algorithms. The aim of this work is to assess and compare the performance of FedDyn (Acar et al., 2021) against competing methods in different federated scenarios. The algorithms I compare are FedAvg (McMahan et al., 2017), FedProx (Li et al., 2020), Scaffold (Karimireddy et al., 2019), and FedDyn. In contrast to these methods, which either attempt inexact minimization or use devices to parallelize gradient computation, FedDyn introduces a dynamic regularizer for each device at each round, ensuring that, in the long run, the device solutions align with the global solution. The experiments indicate that, among these four algorithms, FedDyn achieves the best performance, reaching the highest final accuracy.
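To make the dynamic-regularizer mechanism concrete, below is a minimal sketch of one FedDyn round, assuming full client participation and a model represented as a flat NumPy parameter vector. The names (feddyn_round, grad_fns, h_states) are illustrative, not taken from the thesis or from any reference implementation. Each device minimizes its local loss augmented with a linear term driven by its state h_k and a proximal term toward the current server model, then updates h_k so that, over rounds, the stationary points of the local objectives drift toward the global optimum.

```python
import numpy as np

def feddyn_round(theta_server, grad_fns, h_states, alpha=0.01,
                 lr=0.1, local_steps=50):
    """One FedDyn round with full participation (flat-parameter sketch).

    theta_server : current server model, shape (d,)
    grad_fns     : grad_fns[k](theta) -> gradient of device k's loss f_k
    h_states     : per-device regularizer states h_k, shape (n_devices, d);
                   updated in place, initialized to zeros before round 1
    alpha        : FedDyn regularization strength
    """
    local_models = []
    for k, grad_fk in enumerate(grad_fns):
        theta = theta_server.copy()
        for _ in range(local_steps):
            # Gradient of the FedDyn local objective:
            #   f_k(theta) - <h_k, theta> + (alpha/2) * ||theta - theta_server||^2
            g = grad_fk(theta) - h_states[k] + alpha * (theta - theta_server)
            theta -= lr * g
        # Dynamic-regularizer update: h_k absorbs the local drift
        h_states[k] -= alpha * (theta - theta_server)
        local_models.append(theta)

    # Server step: average the device models, then correct by the mean state
    theta_avg = np.mean(local_models, axis=0)
    return theta_avg - h_states.mean(axis=0) / alpha
```

As a sanity check, with quadratic local losses f_k(theta) = 0.5 * ||theta - c_k||^2 on each device, repeated rounds drive the server model toward the mean of the c_k, the minimizer of the global objective, even though each device only ever optimizes against its own c_k.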

Use this identifier to cite or link to this document: https://hdl.handle.net/20.500.14240/108026