Quantum Privacy-Preserving Perceptron

With the extensive application of machine learning, the privacy of sensitive data in training examples has become an increasingly serious issue: during training, personal information or habits may be disclosed to unintended persons or organisations, causing serious privacy breaches or even financial loss. In this paper, we present a quantum privacy-preserving algorithm for perceptron learning. The original training examples are protected in two main steps. First, when the current classifier is checked, quantum tests are employed to detect possible dishonesty on the part of the data user. Second, when the current classifier is updated, private random noise is used to mask the original data. The advantages of our algorithm are: (1) it protects training examples better than the known classical methods; (2) it requires no quantum database and is therefore easy to implement.
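To make the second step concrete, below is a minimal classical sketch of a perceptron whose updates use only a noise-masked copy of each training example. The quantum-test step is omitted, and the noise distribution (Gaussian), its scale `sigma`, and the exact update rule are illustrative assumptions rather than the paper's construction.

```python
import numpy as np

def noisy_perceptron_update(w, x, y, sigma=0.1, rng=None):
    """One perceptron update where the training example is masked
    with private random noise before being used.

    Assumptions (not from the paper): Gaussian noise of scale
    `sigma` is added to the example, and the standard perceptron
    rule w <- w + y * x_noisy is applied on a misclassification.
    """
    rng = np.random.default_rng() if rng is None else rng
    if y * np.dot(w, x) <= 0:          # current classifier misclassifies (x, y)
        x_noisy = x + rng.normal(0.0, sigma, size=x.shape)  # mask the example
        w = w + y * x_noisy            # update sees only the noisy example
    return w

# Toy usage: linearly separable data in 2D.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
labels = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

w = np.zeros(2)
for _ in range(10):                    # a few passes over the data
    for x, y in zip(X, labels):
        w = noisy_perceptron_update(w, x, y, sigma=0.1, rng=rng)

accuracy = np.mean(np.sign(X @ w) == labels)
print(f"weights: {w}, training accuracy: {accuracy:.2f}")
```

The point of the masking is that the party performing the update never sees the raw example; the noise scale trades off privacy against convergence speed, the usual privacy-utility trade-off in the classical setting.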
