Deep Learning using TensorFlow
Today, one of the most significant challenges in deep learning is ever-increasing training time as models grow more complex and training datasets grow larger. To address this challenge, cloud providers have launched instance types with multiple powerful graphics processing units (GPUs) in a single node.
In this presentation we will:
- Share how you can achieve single-node, multi-GPU parallelization using native TensorFlow and Keras with a TensorFlow backend.
- Present results from our studies that show how training time varies with the number of GPUs in the node.
- Run through a demo of a TensorFlow use case on Qubole.
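As a flavor of what single-node, multi-GPU parallelization looks like in code, here is a minimal sketch using `tf.distribute.MirroredStrategy` from TensorFlow 2.x, which replicates the model across all visible GPUs and averages gradients between them. This is an illustrative assumption about the setup, not the exact code from the demo; the synthetic data and layer sizes are placeholders.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy mirrors the model on every visible GPU and
# synchronizes gradients across replicas. With no GPUs present,
# it falls back to a single replica on the CPU.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Model variables must be created inside the strategy scope so
# they are mirrored onto each device.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(32,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# Synthetic data stands in for a real training set; the global
# batch is split evenly across the replicas.
x = np.random.rand(1024, 32).astype("float32")
y = np.random.randint(0, 10, size=(1024,))
history = model.fit(
    x, y,
    batch_size=64 * strategy.num_replicas_in_sync,
    epochs=2,
    verbose=0,
)
```

Scaling the global batch size with the number of replicas keeps the per-GPU batch constant, which is the usual starting point when measuring how training time varies with GPU count.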