FAIR-BFL: Flexible and Incentive Redesign for Blockchain-based Federated Learning

26 Jun 2022  ·  Rongxin Xu, Shiva Raj Pokhrel, Qiujun Lan, Gang Li

Vanilla federated learning (FL) relies on a centralized global aggregation mechanism and assumes that all clients are honest, which makes it difficult for FL to avoid a single point of failure and to cope with dishonest clients. These challenges in the design philosophy of FL motivate blockchain-based federated learning (BFL), which couples FL with blockchain to gain benefits such as democracy, incentive, and immutability. However, vanilla BFL has two shortcomings: its capabilities cannot be adjusted dynamically to adopters' needs, and it relies on clients' unverifiable self-reported contributions (e.g., data size), since inspecting clients' raw data is not allowed in FL for privacy reasons. We design and evaluate FAIR-BFL, a novel BFL framework with greater flexibility and a redesigned incentive mechanism that resolves these challenges in vanilla BFL. In contrast to existing work, FAIR-BFL offers unprecedented flexibility through its modular design, allowing adopters to adjust its capabilities dynamically to business demands. Our design also enables BFL to quantify each client's contribution to the global learning process; this quantification provides a rational metric for distributing rewards among federated clients and helps discover malicious participants that may poison the global model.
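To make the contribution-based incentive concrete, the sketch below illustrates one plausible reading of the idea (not the paper's actual algorithm): score each client's update by how much it improves a hypothetical validation loss, split a reward pool in proportion to positive scores, and flag clients whose scores fall far below the median as potentially malicious. All function and variable names are illustrative assumptions.

```python
# Hedged sketch of contribution-weighted rewards and outlier flagging.
# Assumes per-client validation losses are available after applying each update.
import numpy as np

def contribution_scores(global_loss, client_losses):
    """Score = reduction in validation loss relative to the current global model."""
    return global_loss - np.asarray(client_losses)

def distribute_rewards(scores, reward_pool):
    """Split a fixed reward pool in proportion to positive contributions."""
    positive = np.clip(scores, 0.0, None)
    if positive.sum() == 0:
        return np.zeros_like(positive)
    return reward_pool * positive / positive.sum()

def flag_malicious(scores, k=2.0):
    """Flag clients whose score is more than k median-absolute-deviations below the median."""
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    return scores < med - k * mad

if __name__ == "__main__":
    # Toy round: the last client's update degrades the global model.
    global_loss = 0.90
    client_losses = [0.80, 0.78, 0.82, 1.40]
    scores = contribution_scores(global_loss, client_losses)
    print("scores :", scores)                              # [ 0.10  0.12  0.08 -0.50]
    print("rewards:", distribute_rewards(scores, 100.0))   # last client gets nothing
    print("flagged:", flag_malicious(scores))              # only the last client is flagged
```

The point of the sketch is only that a verifiable, model-based contribution score can drive both reward allocation and poisoning detection; the paper's own quantification and detection procedure may differ.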


Datasets

MNIST


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Malicious Detection | MNIST | FAIR-BFL | Average Detection Rate | 75 | #1 |

Methods


No methods listed for this paper.