Weighted Pointer: Error-aware Gaze-based Interaction through Fallback Modalities
Ludwig Sidenmark, Mark Parent, Chi-Hao Wu, Joannes Chan, Michael Glueck, Daniel Wigdor, Tovi Grossman, Marcello Giordano
International Conference on Machine Learning (ICML)
We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy (DP). Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu (Wang et al., 2020), we study general convex loss functions with the assumption that the distribution of gradients has bounded k-th moments. We provide improved upper bounds on the excess population risk under concentrated DP for convex and strongly convex loss functions. Along the way, we derive new algorithms for private mean estimation of heavy-tailed distributions, under both pure and concentrated DP. Finally, we prove nearly-matching lower bounds for private stochastic convex optimization with strongly convex losses and mean estimation, showing new separations between pure and concentrated DP.