
TAP: Efficient Derivation of Tensor Parallel Plans for Large Neural Networks

We present TAP, a framework that substantially accelerates the derivation of tensor parallel plans for large neural networks.
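
To make the term concrete, a tensor parallel plan assigns each operator a sharding of its weights and activations across devices. The sketch below is illustrative only (plain NumPy, hypothetical shapes and device count, not TAP's derivation algorithm); it shows the column-parallel split of a single linear layer that such a plan might select.

    # Illustrative only: a hand-written column-parallel split of one linear layer,
    # the kind of per-operator sharding decision a tensor parallel plan encodes.
    # This is NOT TAP's algorithm; shapes and device count are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 512))        # activations: (batch, hidden)
    w = rng.standard_normal((512, 2048))     # weight of a single linear layer

    # Serial reference computation.
    y_full = x @ w

    # "Plan": shard the weight column-wise across 4 hypothetical devices,
    # compute partial outputs independently, then concatenate.
    num_devices = 4
    w_shards = np.split(w, num_devices, axis=1)
    y_parts = [x @ w_i for w_i in w_shards]   # each device holds one partial output
    y_tp = np.concatenate(y_parts, axis=1)    # all-gather along the output dimension

    assert np.allclose(y_full, y_tp)
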

ParaGAN: A Cloud Training Framework for Generative Adversarial Networks

We present ParaGAN, a cloud training framework for GANs that demonstrates near-optimal scaling performance on BigGAN.
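
For context, scaling performance is commonly reported as scaling efficiency: the achieved speedup over a single worker divided by the ideal (linear) speedup. The snippet below computes this standard metric from hypothetical throughput figures; the numbers are not results from ParaGAN.

    # Hypothetical throughput figures (samples/sec) -- not results from ParaGAN.
    def scaling_efficiency(throughput_n: float, throughput_1: float, num_workers: int) -> float:
        """Achieved speedup over one worker, divided by the ideal speedup."""
        return (throughput_n / throughput_1) / num_workers

    print(scaling_efficiency(throughput_n=3800.0, throughput_1=250.0, num_workers=16))
    # 0.95, i.e. 95% of linear scaling
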