Leveraging Sparsity for Privacy in Collaborative Inference
Abstract
Collaborative inference (CI) is hampered by high communication costs and privacy risks, and existing defenses often force a trade-off between efficiency and formal privacy guarantees. In this work, we present a framework that leverages activation sparsity as a dual-purpose mechanism addressing both challenges simultaneously. Our approach uses a lightweight Sparse Autoencoder (SAE) to learn a sparse representation, which is then protected by a novel two-channel noise mechanism grounded in information theory. This design provides a tunable privacy budget while remaining computationally inexpensive. Evaluations on CIFAR-10, Tiny-ImageNet, and FaceScrub show that our method achieves a state-of-the-art privacy-utility trade-off, sustaining high accuracy at sparsity levels of up to 97\%, while offering superior resilience against strong model inversion attacks. Our results underscore that sparsity can be transformed from an effective compression tool into a powerful and theoretically grounded privacy defense, paving the way for more practical and trustworthy CI systems. We provide code at https://github.com/an7123/privacy_ci.
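To make the pipeline concrete, the sketch below shows one plausible reading of the client-side flow the abstract describes: an SAE encodes an intermediate activation into a sparse code (here via a top-k rule), and noise is then injected through two separate channels, one for active and one for inactive coordinates. The architecture, the top-k sparsification, the Laplace noise family, and all names and scale parameters (`b_on`, `b_off`) are illustrative assumptions for this sketch, not the paper's exact design; the actual SAE and noise calibration are specified in the method section.

```python
# Hypothetical sketch of the client-side CI pipeline suggested by the
# abstract: SAE encoding -> two-channel noise -> transmission to server.
# Top-k sparsification and Laplace noise are assumptions, not the
# paper's confirmed design.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, dim: int, code_dim: int, k: int):
        super().__init__()
        self.encoder = nn.Linear(dim, code_dim)
        self.decoder = nn.Linear(code_dim, dim)
        self.k = k  # number of active units kept per sample

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        z = torch.relu(self.encoder(x))
        # Keep only the top-k activations per sample; with k = 3% of
        # code_dim this yields ~97% sparsity, matching the abstract.
        topk = torch.topk(z, self.k, dim=-1)
        mask = torch.zeros_like(z).scatter_(-1, topk.indices, 1.0)
        return z * mask

def two_channel_noise(z: torch.Tensor, b_on: float, b_off: float) -> torch.Tensor:
    """Add Laplace noise with separate scales to the active ('on') and
    inactive ('off') coordinates; the two scales act as a tunable knob
    on the privacy budget (larger scale -> stronger perturbation)."""
    on = (z != 0).float()
    lap = torch.distributions.Laplace(0.0, 1.0).sample(z.shape)
    return z + lap * (b_on * on + b_off * (1.0 - on))

# Usage: the client encodes an intermediate activation and perturbs it
# before sending it to the server, which completes the inference.
sae = SparseAutoencoder(dim=512, code_dim=2048, k=64)  # 64/2048 ~ 3% active
x = torch.randn(1, 512)  # stand-in for a client-side activation
z_noisy = two_channel_noise(sae.encode(x), b_on=0.1, b_off=0.05)
```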