Review: MXNet deep learning shines with Gluon

With the addition of the high-level Gluon API, Apache MXNet rivals TensorFlow and PyTorch for developing deep learning models


When I reviewed MXNet v0.7 in 2016, I felt that it was a promising deep learning framework with excellent scalability (nearly linear on GPU clusters), good auto-differentiation, and state-of-the-art support for CUDA GPUs. I also felt that it needed work on its documentation and tutorials, and needed a lot more examples in its model zoo. In addition, I would have liked to see a high-level interface for MXNet, which I imagined would be Keras.

Since then, there has been quite a bit of progress. MXNet moved under the Apache Software Foundation umbrella early in 2017, and although it’s still “incubating” at version 1.3, it feels fairly well fleshed out.

While there has been work on Keras with an MXNet back end, a different high-level interface has become much more important: Gluon. Before Gluon was incorporated, you could write either easy imperative code or fast symbolic code in MXNet, but not both at once. With Gluon, you can combine the best of both worlds, in a way that competes with both Keras and PyTorch.

What is Gluon for MXNet?
