SiGBDT: Large-Scale Gradient Boosting Decision Tree Training via Function Secret Sharing

Jiang, Yufan and Mei, Fei and Dai, Tianxiang and Li, Yong (2024) SiGBDT: Large-Scale Gradient Boosting Decision Tree Training via Function Secret Sharing. In: ASIA CCS '24: Proceedings of the 19th ACM Asia Conference on Computer and Communications Security. ACM, New York, pp. 274-288. ISBN 9798400704826

Full text not available from this repository.

Abstract

As a well-known machine learning model, Gradient Boosting Decision Tree (GBDT) is widely used in many real-world settings such as online marketing, risk management, fraud detection and recommendation systems. Due to limited data resources, two data owners may collaborate with each other to jointly train a high-quality model. As privacy regulations such as HIPAA and GDPR come into force, Privacy-Preserving Machine Learning (PPML) has drawn increasing attention. Recently, a line of work [3--6] has studied function secret sharing (FSS) schemes in the preprocessing model, where the online stage of secure two-party computation (2PC) is significantly improved. While recent privacy-preserving GBDT frameworks mainly focus on improving the performance of a single module (e.g. secure bucket aggregation), we propose SiGBDT, a globally silent two-party GBDT framework via function secret sharing on a vertically partitioned dataset. During the training process, we apply FSS schemes to construct efficient modular protocols, including secure bucket aggregation, argmax computation and a node split approach. We run in-depth experiments and find that SiGBDT consistently outperforms state-of-the-art frameworks: it is at least 3.32× faster in LAN and at least 6.4× faster in WAN.
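
To make concrete what the abstract's "secure bucket aggregation" and "argmax computation" steps compute, the following Python sketch shows the plaintext analogue of histogram-based GBDT split finding: per-bucket gradient/hessian sums followed by an argmax over candidate split gains. The function names, the regularization parameter lam and the XGBoost-style gain formula are illustrative assumptions, not taken from the paper; SiGBDT performs these steps obliviously on secret-shared values via FSS rather than in the clear.

import numpy as np

def bucket_aggregate(bucket_ids, grads, hess, n_buckets):
    # Sum gradients and hessians per feature bucket; this is the
    # aggregation that a "secure bucket aggregation" protocol would
    # carry out on secret shares instead of plaintext values.
    G = np.zeros(n_buckets)
    H = np.zeros(n_buckets)
    np.add.at(G, bucket_ids, grads)
    np.add.at(H, bucket_ids, hess)
    return G, H

def best_split(G, H, lam=1.0):
    # Scan the boundaries between buckets and pick the one with the
    # highest XGBoost-style gain; a secure protocol would obtain this
    # argmax without revealing the per-bucket statistics.
    GL, HL = np.cumsum(G)[:-1], np.cumsum(H)[:-1]
    GR, HR = G.sum() - GL, H.sum() - HL
    gain = GL**2 / (HL + lam) + GR**2 / (HR + lam) - G.sum()**2 / (H.sum() + lam)
    return int(np.argmax(gain)), float(gain.max())

# Tiny example: 5 samples of one feature discretized into 4 buckets.
bucket_ids = np.array([0, 2, 1, 2, 3])
grads = np.array([0.5, -1.2, 0.3, 0.8, -0.1])
hess = np.ones(5)
G, H = bucket_aggregate(bucket_ids, grads, hess, n_buckets=4)
print(best_split(G, H))  # (split boundary index, gain)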

Item Type: Contribution in Book/Report/Proceedings
ID Code: 229615
Deposited On: 28 May 2025 10:45
Refereed?: Yes
Published?: Published
Last Modified: 28 May 2025 23:13