[Resource Topic] 2022/1483: Towards Practical Secure Neural Network Inference: The Journey So Far and the Road Ahead

Welcome to the resource topic for 2022/1483

Title:
Towards Practical Secure Neural Network Inference: The Journey So Far and the Road Ahead

Authors: Zoltán Ádám Mann, Christian Weinert, Daphnee Chabal, Joppe W. Bos

Abstract:

Neural networks (NNs) have become one of the most important tools for artificial intelligence (AI). Well-designed and trained NNs can perform inference (e.g., make decisions or predictions) on unseen inputs with high accuracy. Using NNs often involves sensitive data: depending on the specific use case, the input to the NN and/or the internals of the NN (e.g., the weights and biases) may be sensitive. Thus, there is a need for techniques for performing NN inference securely, ensuring that sensitive private data remains secret. This challenge belongs to the “privacy and data governance” dimension of trustworthy AI.

In the past few years, several approaches have been proposed for secure neural network inference. These approaches achieve increasingly strong results in terms of efficiency, security, accuracy, and applicability, marking substantial progress towards practical secure neural network inference. The proposed approaches draw on a variety of techniques, such as homomorphic encryption and secure multi-party computation. The aim of this survey paper is to give an overview of the main approaches proposed so far, their different properties, and the techniques used. In addition, remaining challenges towards large-scale deployment are identified.
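As a toy illustration of one technique the abstract mentions (secure multi-party computation via additive secret sharing; this sketch is not from the paper), the following shows how a linear layer can be evaluated on secret-shared input: the client splits each input value into two random shares, two compute parties each process one share locally, and only the recombined result reveals the output. All names and parameters here are illustrative assumptions.

```python
import random

P = 2**31 - 1  # public modulus for the arithmetic shares (illustrative choice)

def share(x):
    """Split a secret integer into two additive shares mod P."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(a, b):
    """Recombine two additive shares into the secret."""
    return (a + b) % P

# A tiny "neural network" linear layer: y = w . x + bias
w = [3, 1, 4]   # model weights (public to both compute parties in this toy setup)
bias = 7
x = [2, 0, 5]   # client input, secret-shared so neither party sees it in the clear

x_shares = [share(xi) for xi in x]
x0 = [s[0] for s in x_shares]  # shares held by party 0
x1 = [s[1] for s in x_shares]  # shares held by party 1

# Linear operations commute with additive sharing, so each party
# computes its share of the dot product locally, without interaction.
y0 = sum(wi * a for wi, a in zip(w, x0)) % P
y1 = (sum(wi * b for wi, b in zip(w, x1)) + bias) % P

print(reconstruct(y0, y1))  # 3*2 + 1*0 + 4*5 + 7 = 33
```

Non-linear layers (e.g., ReLU) are where this simple picture breaks down and where the surveyed protocols differ most, typically requiring interaction between the parties or alternative techniques such as garbled circuits.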

ePrint: https://eprint.iacr.org/2022/1483

See all topics related to this paper.

Feel free to post resources that are related to this paper below.

Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.

For more information, see the rules for Resource Topics.