CLC number: TP391.4

On-line Access: 2025-02-10

Received: 2024-04-12

Revision Accepted: 2024-05-14

Crosschecked: 2025-02-18


ORCID:

Ruipeng ZHANG: https://orcid.org/0000-0002-4372-4987
Ziqing FAN: https://orcid.org/0009-0009-1459-3250
Jiangchao YAO: https://orcid.org/0000-0001-6115-5194
Ya ZHANG: https://orcid.org/0000-0002-5390-9053
Yanfeng WANG: https://orcid.org/0000-0002-3196-2347


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


Fairness-guided federated training for generalization and personalization in cross-silo federated learning


Author(s):  Ruipeng ZHANG, Ziqing FAN, Jiangchao YAO, Ya ZHANG, Yanfeng WANG

Affiliation(s):  School of Artificial Intelligence, Shanghai Jiao Tong University, Shanghai 200240, China

Corresponding email(s):  ya_zhang@sjtu.edu.cn, wangyanfeng622@sjtu.edu.cn

Key Words:  Generalized and personalized federated learning; Performance distribution fairness; Domain shift




Abstract: 
Cross-silo federated learning (FL), which benefits from relatively abundant data and rich computing power, is drawing increasing attention due to the significant transformations that foundation models (FMs) are driving in the artificial intelligence field. Unlike in cross-device FL, the intensified data heterogeneity in this setting stems mainly from substantial data volumes and distribution shifts across clients, which requires algorithms to balance personalization and generalization comprehensively. In this paper, we address the objective of generalized and personalized federated learning (GPFL) by enhancing the global model's cross-domain generalization capabilities while simultaneously improving the personalization performance of local training clients. By investigating the fairness of the performance distribution within the federation system, we explore a new connection between the generalization gap and the aggregation weights established in previous studies, culminating in the fairness-guided federated training for generalization and personalization (FFT-GP) approach. FFT-GP integrates a fairness-aware aggregation (FAA) approach, which minimizes the variance of generalization gaps among training clients, with a meta-learning strategy that aligns local training with the global model's feature distribution, thereby balancing generalization and personalization. Our extensive experimental results demonstrate FFT-GP's superior efficacy compared with existing models, showcasing its potential to enhance FL systems across a variety of practical scenarios.
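To make the aggregation idea in the abstract concrete, the following is a minimal toy sketch, not the paper's actual FAA algorithm: it assumes each client's generalization gap can be estimated as validation loss minus training loss, and it upweights clients with larger gaps (via a softmax with a hypothetical temperature `beta`) before a weighted parameter average, so that subsequent rounds pull harder on under-generalizing clients and shrink the spread of gaps.

```python
import math

def fairness_aware_weights(train_losses, val_losses, beta=1.0):
    """Illustrative aggregation weights from per-client generalization gaps.

    Gap for client i is val_losses[i] - train_losses[i]; clients with
    larger gaps receive larger weights (softmax with temperature beta).
    """
    gaps = [v - t for v, t in zip(val_losses, train_losses)]
    exps = [math.exp(beta * g) for g in gaps]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(client_params, weights):
    """Weighted average of per-client parameter vectors (plain lists)."""
    dim = len(client_params[0])
    return [sum(w * p[i] for w, p in zip(weights, client_params))
            for i in range(dim)]

# Usage: client 0 has the larger gap (0.30 vs. 0.05), so it gets more weight.
weights = fairness_aware_weights([0.2, 0.3], [0.5, 0.35])
global_params = aggregate([[1.0, 2.0], [3.0, 4.0]], weights)
```

The paper's FAA may derive its weights quite differently; this sketch only shows the general pattern of steering aggregation by fairness of the performance distribution rather than by data size alone, as in standard FedAvg.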

