
Academic Talk: "Differentially Private Machine Learning: Improvement, Byzantine Resilience, and Input Perturbation"

Published: 2023-12-07

Speaker: Tianhao Wang (王天豪)

Tencent Meeting: 175-430-402

Time: 2023-12-11, 09:00


Abstract:

This talk covers our recent and ongoing work on differentially private machine learning (DP-ML). First, I will present simple yet effective strategies to improve the performance of differentially private stochastic gradient descent (DP-SGD), the widely adopted method for DP-ML. Then I will discuss ways to defend against Byzantine attacks when DP-SGD is used in federated learning. Finally, I will talk about our recent exploration of input perturbation with DP and synthetic data generation, another popular approach to DP-ML.
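For readers unfamiliar with the DP-SGD mechanism referenced above, the following is a minimal sketch of one DP-SGD step: per-example gradients are clipped to a fixed L2 norm, and Gaussian noise calibrated to that clipping norm is added before the averaged update. This is a generic illustration, not the speaker's method; the toy linear model, function name, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of one DP-SGD step (per-example clipping + Gaussian noise).
# The model, data, and hyperparameters are hypothetical placeholders.
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_multiplier=1.1):
    """One DP-SGD update for a toy linear regression loss (illustrative only)."""
    n = X.shape[0]
    clipped_sum = np.zeros_like(w)
    for i in range(n):
        # Per-example gradient of 0.5 * (x_i . w - y_i)^2
        g = (X[i] @ w - y[i]) * X[i]
        # Clip each per-example gradient to L2 norm <= clip_norm
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
        clipped_sum += g
    # Gaussian noise scaled to the clipping norm (the per-example sensitivity)
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
    return w - lr * (clipped_sum + noise) / n

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = X @ np.ones(5) + rng.normal(scale=0.1, size=64)
w = np.zeros(5)
for _ in range(100):
    w = dp_sgd_step(w, X, y)
print(w)
```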


Bio:

Tianhao Wang is an assistant professor of computer science at the University of Virginia. His research interests lie in data privacy and security and their connections to machine learning and cryptography. He obtained his Ph.D. from Purdue University in 2021 and subsequently held a postdoctoral position at Carnegie Mellon University. His work on differentially private synthetic data generation won multiple awards in NIST competitions.


Invited by: 唐朋 (Peng Tang)

Reviewed by: 魏普文 (Puwen Wei)

