International Conference on Learning Representations (ICLR) 2017
In this paper, we explore theoretical properties of training a two-layered ReLU network $g(x; w) = \sum_{j=1}^{K} \sigma(w_j^\top x)$ with centered $d$-dimensional spherical Gaussian input $x$ ($\sigma$ = ReLU). We train our network with gradient descent on $w$ to mimic the output of a teacher network with the same architecture and fixed parameters $w^*$. We show that its population gradient has an analytical formula, leading to interesting theoretical analysis of critical points and convergence behaviors. First, we prove that critical points outside the hyperplane spanned by the teacher parameters ("out-of-plane") are not isolated and form manifolds, and we characterize in-plane critical-point-free regions for the two-ReLU case. On the other hand, convergence to $w^*$ for one ReLU node is guaranteed with probability at least $(1-\epsilon)/2$ if the weights are initialized randomly with standard deviation upper-bounded by $O(\epsilon/\sqrt{d})$, consistent with empirical practice. For networks with many ReLU nodes, we prove that an infinitesimal perturbation of the weight initialization results in convergence towards $w^*$ (or its permutation), a phenomenon known as spontaneous symmetry breaking (SSB) in physics. We assume no independence of ReLU activations. Simulations verify our findings.
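As a rough illustration of the teacher-student setup described in the abstract, here is a minimal NumPy sketch (not the authors' code): a student network with the same architecture is trained by stochastic gradient descent on spherical Gaussian inputs to match the teacher's outputs. The width $K$, dimension $d$, step size, batch size, and initialization scale are placeholder choices, and the mini-batch gradient is only an empirical stand-in for the population gradient analyzed in the paper.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code): teacher-student training
# of a two-layer ReLU network g(x; w) = sum_j relu(w_j^T x) on centered
# spherical Gaussian inputs. All hyperparameters below are placeholder choices.

rng = np.random.default_rng(0)
d, K = 10, 2                                # input dimension, number of ReLU nodes
w_star = rng.standard_normal((K, d))        # fixed teacher parameters w*
w = 0.01 * rng.standard_normal((K, d))      # small random student initialization

def g(w, x):
    """Two-layer ReLU network output sum_j relu(w_j^T x) for each sample in x."""
    return np.maximum(w @ x.T, 0.0).sum(axis=0)

lr, batch, steps = 0.05, 4096, 2000
for _ in range(steps):
    x = rng.standard_normal((batch, d))     # centered spherical Gaussian input
    pre = w @ x.T                           # (K, batch) pre-activations of the student
    err = g(w, x) - g(w_star, x)            # student output minus teacher output
    # Gradient of 0.5 * mean((g_w(x) - g_{w*}(x))^2) w.r.t. each w_j:
    # only samples where node j is active (pre > 0) contribute.
    grad = ((pre > 0) * err) @ x / batch    # (K, d)
    w -= lr * grad

# Check distance to the teacher up to permutation of the K nodes (here K = 2);
# whether w recovers w* depends on the initialization and step size.
err_id = np.linalg.norm(w - w_star)
err_perm = np.linalg.norm(w - w_star[::-1])
print("distance to teacher (best over permutations):", min(err_id, err_perm))
```

Running this sketch with a small initialization scale mirrors the regime discussed above, where the student is expected to converge towards $w^*$ or a permutation of its rows.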