Facebook brings GPU-powered machine learning to Python

A port of the popular Torch library, PyTorch offers a comfortable coding option for Pythonistas

Facebook's AI research team has released a Python package for GPU-accelerated deep neural network programming that can complement or partly replace existing Python packages for math and stats, such as NumPy.

A Python implementation of the Torch machine learning framework, PyTorch has enjoyed broad uptake at Twitter, Carnegie Mellon University, Salesforce, and Facebook.

Torch was originally implemented in C with a wrapper in the Lua scripting language, but PyTorch wraps the core Torch binaries in Python and provides GPU acceleration for many functions.

Torch is a tensor library for manipulating multidimensional matrices of data employed in machine learning and many other math-intensive applications. PyTorch provides libraries for basic tensor manipulation on CPUs or GPUs, a built-in neural network library, model training utilities, and a multiprocessing library that can work with shared memory, "useful for data loading and hogwild training," as PyTorch's developers put it.
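The two building blocks described above, tensor manipulation and the built-in neural network library, can be sketched in a few lines. This is a minimal example using current PyTorch APIs; the layer sizes are arbitrary choices for illustration:

```python
import torch
import torch.nn as nn

# Basic tensor manipulation: create random matrices and multiply them
x = torch.rand(3, 4)
y = torch.rand(4, 2)
z = x.mm(z_arg := y)      # matrix multiply, analogous to numpy.dot
print(z.shape)            # torch.Size([3, 2])

# A tiny model built with the bundled neural network library
model = nn.Sequential(
    nn.Linear(4, 8),      # fully connected layer: 4 inputs -> 8 outputs
    nn.ReLU(),            # nonlinearity
    nn.Linear(8, 2),      # 8 inputs -> 2 outputs
)
out = model(x)            # forward pass over the batch of 3 samples
print(out.shape)          # torch.Size([3, 2])
```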

A chief advantage of PyTorch is that it lives in, and lets developers plug into, the vast ecosystem of Python libraries and software. Python programmers are also encouraged to use the styles they're familiar with, rather than write code specifically meant to be a wrapper for an external C/C++ library. Existing packages like NumPy, SciPy, and Cython (for compiling Python to C for the sake of speed) can all work hand in hand with PyTorch.
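The NumPy interoperability mentioned above works in both directions. As a sketch, `torch.from_numpy` produces a tensor that shares memory with the source array, so in-place changes on one side are visible from the other:

```python
import numpy as np
import torch

# A PyTorch tensor backed by a NumPy array's buffer (zero-copy)
a = np.ones((2, 3), dtype=np.float32)
t = torch.from_numpy(a)

t.add_(1.0)          # in-place add on the tensor...
print(a)             # ...is visible in the NumPy array: all values are 2.0

b = t.numpy()        # and back to a NumPy array, also sharing memory
```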

PyTorch also offers a quick method to modify existing neural networks without having to rebuild the network from scratch. The techniques used to do this are borrowed from Chainer, another neural-network framework written in Python. The developers also emphasize PyTorch's memory efficiency thanks to a custom-written GPU memory allocator, so "your deep learning models are maximally memory efficient. This enables you to train bigger deep learning models than before."
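The define-by-run style borrowed from Chainer means the computation graph is rebuilt on every forward pass, so ordinary Python control flow can reshape the network on the fly. The `DynamicNet` class below is a hypothetical illustration of this idea, not code from the PyTorch project:

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    """Applies the same layer a data-dependent number of times."""

    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(10, 10)

    def forward(self, x):
        # The loop count depends on the input, so the graph differs
        # from call to call -- no rebuilding the model from scratch.
        repeats = int(x.sum().abs().item()) % 3 + 1
        for _ in range(repeats):
            x = torch.relu(self.layer(x))
        return x
```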

While PyTorch is optimized for machine learning, that's not its exclusive use case. Its tensor computation can work as a drop-in replacement for similar NumPy functions; PyTorch provides GPU-accelerated versions of those functions and can drop back to the CPU if a GPU isn't available.
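The CPU fallback can be made explicit in a few lines. This sketch uses current PyTorch APIs to select the GPU when one is present and run the same computation either way:

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.rand(1024, 1024, device=device)
y = x @ x.t()        # runs on whichever device holds x
print(y.device)      # cuda:0 on a GPU machine, cpu otherwise
```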