
Towards Explainable Multi-Party Learning: A Contrastive Knowledge Sharing Framework

Multi-party learning provides solutions for training joint models with decentralized data under legal and practical constraints. However, traditional multi-party learning approaches face obstacles such as system heterogeneity, statistical heterogeneity, and incentive design. How to address these challenges and further improve the efficiency and performance of multi-party learning has become an urgent problem. In this paper, we propose a novel contrastive multi-party learning framework for knowledge refinement and sharing with an accountable incentive mechanism. Since naive model parameter averaging conflicts with the learning paradigm of neural networks, we simulate the process of human cognition and communication and cast multi-party learning as a many-to-one knowledge sharing problem. The approach integrates the acquired explicit knowledge of each client in a transparent manner without privacy disclosure, and it reduces dependence on the data distribution and communication environment. As demonstrated by experiments on several real-world datasets, the proposed scheme achieves significant improvements in model performance across a variety of scenarios.
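To make the contrast between naive parameter averaging and many-to-one knowledge sharing concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it compares a FedAvg-style element-wise averaging of client parameters with a simplified knowledge-sharing step in which clients expose only their soft predictions on a shared public batch and a server model distills them. The distillation loss, the toy models, and the `public_x` batch are illustrative assumptions; the paper's actual contrastive objective and incentive mechanism are not reproduced here.

```python
# Minimal sketch (assumptions, not the paper's method): FedAvg-style parameter
# averaging vs. a many-to-one knowledge-sharing step via soft predictions on a
# shared public batch. Models, data, and the distillation loss are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

def average_parameters(models):
    """FedAvg-style aggregation: element-wise mean of client parameters."""
    avg_state = {}
    for key in models[0].state_dict():
        avg_state[key] = torch.stack(
            [m.state_dict()[key].float() for m in models]).mean(dim=0)
    return avg_state

def share_knowledge(server, clients, public_x, epochs=1, lr=1e-3, tau=2.0):
    """Many-to-one sharing: the server distills the averaged soft predictions
    of all clients on a shared unlabeled batch, so neither raw client data
    nor client parameters are exchanged."""
    opt = torch.optim.Adam(server.parameters(), lr=lr)
    with torch.no_grad():
        # Each client contributes only its output distribution (explicit knowledge).
        teacher = torch.stack(
            [F.softmax(c(public_x) / tau, dim=-1) for c in clients]).mean(dim=0)
    for _ in range(epochs):
        opt.zero_grad()
        student_log_probs = F.log_softmax(server(public_x) / tau, dim=-1)
        loss = F.kl_div(student_log_probs, teacher, reduction="batchmean") * tau ** 2
        loss.backward()
        opt.step()
    return server

if __name__ == "__main__":
    make_model = lambda: nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
    clients = [make_model() for _ in range(3)]
    server = make_model()
    public_x = torch.randn(128, 20)                        # stand-in for a shared public batch
    server.load_state_dict(average_parameters(clients))    # baseline: parameter averaging
    server = share_knowledge(server, clients, public_x)    # knowledge-sharing alternative
```

The key design difference the sketch highlights is what crosses the network: parameter averaging exchanges full model weights, whereas knowledge sharing exchanges only model outputs on public data, which is what makes the shared knowledge explicit and inspectable.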
