SESSION

A Scalable Implementation of Deep Learning on Spark


Artificial neural networks (ANN) are one of the most popular models in machine learning, in particular for deep learning. The models used in practice for image classification and speech recognition contain a huge number of weights and are trained on big datasets. Training such models is challenging in terms of both computation and data processing. We propose a scalable implementation of deep neural networks for Spark. We address the computational challenge with batch operations, using BLAS for vector and matrix computations and reusing memory to reduce garbage collector activity. Spark provides the data parallelism that enables training to scale. As a result, our implementation is on par with widely used C++ implementations such as Caffe on a single machine and scales nicely on a cluster. The developed API makes it easy to configure your own network and to run experiments with different hyperparameters. Our implementation is easily extensible, and we invite other developers to contribute new types of neural network functions and layers. The optimizations we applied and our experience with GPU CUDA BLAS may also be useful for other machine learning algorithms being developed for Spark.
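The batching idea mentioned in the abstract — replacing per-example vector operations with a single matrix multiplication that a BLAS library can execute efficiently — can be sketched as follows. This is a minimal NumPy illustration, not the talk's Scala implementation; the function name, layer sizes, and activation choice are hypothetical.

```python
import numpy as np

def forward_batch(X, W, b):
    """Forward-propagate a whole batch through one fully connected layer.

    X: (batch, in_dim) stacked input examples
    W: (in_dim, out_dim) weight matrix
    b: (out_dim,) bias vector

    One GEMM call (the @ operator delegates to BLAS) replaces `batch`
    separate matrix-vector products, which is far friendlier to the
    CPU cache and to BLAS-level parallelism.
    """
    return np.tanh(X @ W + b)

# Hypothetical sizes: a batch of 4 examples, 3 inputs, 2 hidden units.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
b = np.zeros(2)
H = forward_batch(X, W, b)
assert H.shape == (4, 2)
```

Memory reuse, as described in the abstract, would additionally preallocate the output buffer and write into it (e.g. via the `out=` argument of `np.matmul` in NumPy, or reused arrays on the JVM) so that each training step does not allocate fresh intermediate arrays for the garbage collector to reclaim.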


About Alexander

Alexander Ulanov is a senior researcher at HP Labs. His research focuses on large-scale applications of machine learning, in particular deep learning. Alexander has made several contributions to Apache Spark. Previously, he worked on text mining, classification, and recommender systems and their real-world applications. Alexander holds a PhD from the Russian Academy of Sciences.