A Method for Animating Children’s Drawings of the Human Figure
Harrison Jesse Smith, Qingyuan Zheng, Yifei Li, Somya Jain, Jessica K. Hodgins
arXiv
Effective Theory of Transformers at Initialization
Emily Dinan, Sho Yaida, Susan Zhang
arXiv
We perform an effective-theory analysis of forward–backward signal propagation in wide and deep Transformers, i.e., residual neural networks with multi-head self-attention blocks and multilayer perceptron blocks. This analysis suggests particular width scalings of initialization and training hyperparameters for these models. We then test these suggestions by training Vision and Language Transformers in practical setups.
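As a rough illustration of the kind of width scaling such an analysis motivates, the sketch below initializes linear layers with standard deviation 1/sqrt(fan_in), which keeps activation statistics roughly stable as model width grows. The helper name init_width_scaled and the specific 1/sqrt(fan_in) rule are generic assumptions for illustration, not the paper's exact prescription.

import math
import torch
import torch.nn as nn

def init_width_scaled(model: nn.Module) -> None:
    """Initialize linear layers with standard deviation 1/sqrt(fan_in).

    This keeps pre-activation variance O(1) as width grows; it is a
    standard width scaling, shown here only as a generic example.
    """
    for module in model.modules():
        if isinstance(module, nn.Linear):
            fan_in = module.in_features
            nn.init.normal_(module.weight, mean=0.0, std=1.0 / math.sqrt(fan_in))
            if module.bias is not None:
                nn.init.zeros_(module.bias)

# Widen the model; the per-layer initialization rescales automatically.
for width in (256, 1024, 4096):
    mlp = nn.Sequential(nn.Linear(width, 4 * width), nn.GELU(),
                        nn.Linear(4 * width, width))
    init_width_scaled(mlp)
    x = torch.randn(8, width)
    print(width, mlp(x).std().item())  # output scale stays roughly constant across widths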
Reasoning over Public and Private Data in Retrieval-Based Systems
Simran Arora, Patrick Lewis, Angela Fan, Jacob Kahn, Christopher Ré
arXiv
ImageBind: One Embedding Space To Bind Them All
Rohit Girdhar, Alaaeldin El-Nouby, Zhuang Liu, Mannat Singh, Kalyan Vasudev Alwala, Armand Joulin, Ishan Misra
arXiv