
Privacy Preserving Machine Learning via Multi-party Computation

Series: M.Tech (Research) - Thesis Defence (ON-LINE)

Speaker: Mr. Chaudhari Harsh Mangesh Suchita, M.Tech (Research) student, Dept. of CSA

Date/Time: Jun 15, 11:00

Location: ON-LINE

Faculty Advisor: Dr. Arpita Patra

Abstract:
Privacy-preserving machine learning (PPML) via Secure Multi-party Computation (MPC) has gained momentum in the recent past. Assuming a minimal network of pair-wise private channels, we propose an efficient four-party PPML framework over rings, FLASH, the first of its kind in the regime of PPML frameworks, that achieves the strongest security notion of Guaranteed Output Delivery (all parties obtain the output irrespective of the adversary's behaviour). The state-of-the-art ML frameworks such as ABY3 by Mohassel et al. (ACM CCS'18) and SecureNN by Wagh et al. (PETS'19) operate in the setting of 3 parties with one malicious corruption but achieve the weaker security guarantee of abort. We demonstrate PPML with real-time efficiency, using the following custom-made tools that overcome the limitations of the aforementioned state of the art: (a) dot product, which is independent of the vector size, unlike the state-of-the-art ABY3, SecureNN, and ASTRA by Chaudhari et al. (ACM CCSW'19), all of which have a linear dependence on the vector size; (b) truncation, which is constant-round and free of circuits such as the Ripple Carry Adder (RCA), unlike ABY3, which uses such circuits and has round complexity of the order of their depth. We then exhibit the application of our FLASH framework in the secure server-aided prediction of vital algorithms: Linear Regression, Logistic Regression, Deep Neural Networks, and Binarized Neural Networks. We substantiate our theoretical claims through improvements in benchmarks of the aforementioned algorithms when compared with the current best framework, ABY3. All the protocols are implemented over a 64-bit ring in Local Area Network (LAN) and Wide Area Network (WAN) settings. Our experiments demonstrate that, for the MNIST dataset, the improvement (in terms of throughput) ranges from 11x to 1390x over LAN and WAN together.
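To give a feel for the dot-product claim in the abstract, the following is a minimal, self-contained sketch (not FLASH's actual four-party protocol) of why a dot product over secret-shared vectors can be computed with communication that does not grow with the vector length. It uses standard 3-party replicated secret sharing over the ring Z_{2^64}; all function names and the party-to-share assignment are illustrative assumptions, not taken from the thesis.

```python
# Illustrative sketch only: replicated secret sharing over Z_{2^64} and a
# dot product whose per-party work is local summation, so the single
# communication step afterwards (one ring element per party) is
# independent of the vector length. Not the FLASH protocol.
import secrets

MASK = (1 << 64) - 1  # arithmetic modulo 2^64


def share(x):
    """3-out-of-3 additive sharing of x over Z_{2^64}."""
    x1 = secrets.randbelow(1 << 64)
    x2 = secrets.randbelow(1 << 64)
    x3 = (x - x1 - x2) & MASK
    return x1, x2, x3


def local_product_shares(xs, ys):
    """Each party locally combines the replicated shares it holds
    (party i holds shares i and i+1), so that z1 + z2 + z3 = x * y mod 2^64."""
    x1, x2, x3 = xs
    y1, y2, y3 = ys
    z1 = (x1 * y1 + x1 * y2 + x2 * y1) & MASK  # computed by party 1
    z2 = (x2 * y2 + x2 * y3 + x3 * y2) & MASK  # computed by party 2
    z3 = (x3 * y3 + x3 * y1 + x1 * y3) & MASK  # computed by party 3
    return z1, z2, z3


def shared_dot_product(xvec, yvec):
    """Each party only accumulates its local product shares across all
    coordinates; re-sharing the three accumulated values afterwards costs a
    constant amount of communication, regardless of len(xvec)."""
    acc = [0, 0, 0]
    for x, y in zip(xvec, yvec):
        zs = local_product_shares(share(x), share(y))
        acc = [(a + z) & MASK for a, z in zip(acc, zs)]
    return acc  # additive shares of the dot product


if __name__ == "__main__":
    xs = [3, 1, 4, 1, 5]
    ys = [9, 2, 6, 5, 3]
    shares_out = shared_dot_product(xs, ys)
    result = sum(shares_out) & MASK
    assert result == sum(a * b for a, b in zip(xs, ys)) & MASK
    print("dot product reconstructed:", result)
```

The point of the sketch is only the communication pattern: cross-term products are computed locally and summed before any interaction, which is the general reason dot-product protocols in this line of work can avoid communication linear in the vector size. FLASH's four-party construction and its malicious-security machinery are presented in the talk itself.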

Online link to join the Microsoft Teams meeting:
https://teams.microsoft.com/_#/pre-join-calling/19:meeting_MmMxMDZkNmItZWZkOS00ZGJhLTgyYzYtNjlhODZiYjk5NzNj@thread.v2
