Welcome to the resource topic for 2025/1684
Title: FHEMaLe: Framework for Homomorphic Encrypted Machine Learning
Authors: B Pradeep Kumar Reddy, Sameeksha Goyal, Ruchika Meel, Ayantika Chatterjee
Abstract: Machine learning (ML) has revolutionized various industries by leveraging predictive models and data-driven insights, often relying on cloud computing for large-scale data processing. However, this dependence introduces challenges such as bandwidth constraints and network latency. Edge computing mitigates these issues by enabling localized processing, reducing reliance on continuous cloud connectivity, and optimizing resource allocation for dynamic workloads. Given the limited computational capacity of sensory nodes in ML systems, edge devices provide an effective solution by offloading processing tasks. A critical challenge in this paradigm, however, is ensuring user privacy while handling sensitive data in both cloud and edge processing. To address this, we propose a Fully Homomorphic Encryption (FHE)-enabled framework that performs ML computations directly on encrypted data, eliminating the need for decryption. The main challenge in designing such a framework is that complex ML implementation steps must be revisited with suitable optimizations to match FHE processing requirements. Different standard libraries provide the basic computation blocks on which encrypted ML processing is built. These libraries vary in supported computation operators, computational complexity, and memory demands, which in turn introduces latency and throughput challenges, especially on resource-constrained edge nodes. For example, the general-purpose HE library CKKS (Cheon-Kim-Kim-Song), with its packing and approximate homomorphic operation support, is widely considered the best choice for privacy-preserving AI implementations. However, our analysis shows that leveled CKKS is limited in implementing complex operators and is therefore unsuitable, without approximation, for certain ML algorithms such as KNN, Logistic Regression, or general neural-network activations. To avoid the accuracy drops associated with such approximations, the Torus-based FHE library (TFHE) can be a better choice for making certain ML implementations feasible. Moreover, our study shows that, compared to TFHE, CKKS's large memory requirement makes it unsuitable for resource-constrained edge devices. Thus, the choice of underlying library is crucial when designing such a framework, given the trade-off between latency and accuracy. In this work, we propose an integrated framework, FHEMaLe, for encrypted ML processing that takes the model architecture, desired accuracy, and platform preference as inputs and selects an appropriate execution environment accordingly: a cloud platform leveraging the CKKS homomorphic encryption library, or an edge platform using the TFHE library. Further analysis shows the limitations of performing FHE-based ML on a single edge device; our framework therefore partitions encrypted data, transmits it via a fabric API, and performs distributed encrypted ML computations across an edge cluster. We implement distributed ML inference for algorithms such as K-Nearest Neighbors (KNN) (cloud CKKS = 248 sec, edge TFHE = 37 min), Support Vector Machine (SVM) (cloud CKKS = 18 sec, edge TFHE = 4.15 min), and Logistic Regression (LR) (cloud CKKS = 17 sec, edge TFHE = 7.82 min) on a cluster of 11 edge nodes. This work explains why KNN suffers a major performance bottleneck in the encrypted domain and may not be a good choice for encrypted ML processing without application-specific optimizations. Furthermore, our encrypted operators are capable of supporting encrypted NN processing (cloud CKKS = 57 sec), and we explain why CKKS is the preferred choice in this case. The distributed nature of our implementation shows promise of further improvement and scalability with the support of a larger cluster.
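A small illustration of the trade-off the abstract describes (this sketch is not from the paper): leveled CKKS can only evaluate additions and multiplications, so a non-polynomial activation such as the logistic sigmoid used in Logistic Regression must be replaced by a low-degree polynomial, at some cost in accuracy. The fitting range and degree below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Fit a degree-3 polynomial to sigmoid over a bounded input range,
# as is commonly done for CKKS-based logistic regression inference.
# CKKS can then evaluate this polynomial on ciphertexts using only
# homomorphic additions and multiplications.
xs = np.linspace(-4.0, 4.0, 401)
coeffs = np.polyfit(xs, sigmoid(xs), deg=3)
poly = np.poly1d(coeffs)

# The residual error is exactly the "accuracy drop" that an exact
# (e.g. TFHE gate-based) evaluation would avoid.
max_err = np.max(np.abs(poly(xs) - sigmoid(xs)))
print(f"max |sigmoid - poly3| on [-4, 4]: {max_err:.4f}")
```

TFHE, by contrast, evaluates arbitrary functions exactly via bootstrapped lookups on individual ciphertexts, which is why the paper reports it avoids approximation error at the cost of much higher latency per operation.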
ePrint: https://eprint.iacr.org/2025/1684
See all topics related to this paper.
Feel free to post resources that are related to this paper below.
Example resources include: implementations, explanation materials, talks, slides, links to previous discussions on other websites.
For more information, see the rules for Resource Topics.