Microsoft smartens up the ML.Net machine learning API

Microsoft’s TensorFlow-compatible machine learning framework debuts a reworked API for more flexible pipelines and improved prediction performance


Microsoft has released the 0.6 version of its ML.Net machine learning framework, aimed at .Net developers. The update adds a new and more useful model-building API set, the ability to use more existing models to provide predictions, and better performance overall.

The original ML.Net API limited the kinds of pipelines you could build and had some clumsy restrictions on labeling and scoring data. The new API lets training and prediction processes be composed from multiple components, joined together in a variety of combinations, instead of requiring a single linear pipeline. The goal is to emulate the design of APIs used to drive other frameworks like Apache Spark.
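A minimal sketch of the composable estimator/transformer pattern ML.Net moved to. The naming here follows the API as it later stabilized (`MLContext`, `Append`); the exact 0.6 spellings differed, and the data class and column names are illustrative assumptions:

```csharp
using Microsoft.ML;

// Illustrative input type; the property names are assumptions.
public class HouseData
{
    public float Size { get; set; }
    public float Price { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new HouseData { Size = 1.1f, Price = 1.2f },
            new HouseData { Size = 1.9f, Price = 2.3f },
        });

        // Each stage is a component; Append chains them, and the same
        // transform can be reused in other pipelines with other trainers.
        var pipeline = mlContext.Transforms
            .Concatenate("Features", nameof(HouseData.Size))
            .Append(mlContext.Regression.Trainers.Sdca(
                labelColumnName: nameof(HouseData.Price)));

        var model = pipeline.Fit(data);
    }
}
```

Because `Concatenate` and the trainer are separate components rather than slots in one fixed chain, the featurization step can be shared across several trainers, which is the kind of decomposition Microsoft describes.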

According to Microsoft, the new ML.Net API makes it possible to do things like “share a given transform’s execution and transformed data with multiple learners and trainers, or decompose pipelines and add multiple learners.”

The old ML.Net API will be phased out by moving it into a legacy namespace so that existing software can continue to use it for the time being.

The new ML.Net API also uses strongly typed C# constructs, so errors made while designing a pipeline surface at compile time rather than at runtime.
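A sketch of what that strong typing buys, again using the later stabilized API naming; the sentiment classes and column names are hypothetical:

```csharp
using Microsoft.ML;
using Microsoft.ML.Data;

// Hypothetical schema classes for illustration.
public class SentimentInput
{
    public string Text { get; set; }
    public bool Label { get; set; }
}

public class SentimentPrediction
{
    [ColumnName("PredictedLabel")]
    public bool IsPositive { get; set; }
}

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();
        var data = mlContext.Data.LoadFromEnumerable(new[]
        {
            new SentimentInput { Text = "great", Label = true },
            new SentimentInput { Text = "awful", Label = false },
        });

        var pipeline = mlContext.Transforms.Text
            .FeaturizeText("Features", nameof(SentimentInput.Text))
            .Append(mlContext.BinaryClassification.Trainers
                .SdcaLogisticRegression());

        var model = pipeline.Fit(data);

        // The prediction engine is generic over the input and output types,
        // so wiring up a mismatched prediction class is a compile-time
        // error, not a runtime surprise.
        var engine = mlContext.Model
            .CreatePredictionEngine<SentimentInput, SentimentPrediction>(model);
        bool positive = engine.Predict(
            new SentimentInput { Text = "really great" }).IsPositive;
    }
}
```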

Previous versions of ML.Net allowed re-use of TensorFlow models. The new ML.Net API expands on this by allowing an existing TensorFlow model to be loaded and used for predictions without having to write a training process to go with it. TensorFlow scoring in general is also faster, with some predictions sped up by multiple orders of magnitude.
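In outline, loading a frozen TensorFlow graph for scoring alone looks like the following sketch (using the later stabilized API surface from the Microsoft.ML.TensorFlow package; the model path and tensor names are placeholders for your own graph's):

```csharp
using Microsoft.ML;

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // "saved_model.pb" and the tensor names below are placeholders;
        // requires the Microsoft.ML.TensorFlow package.
        var tfModel = mlContext.Model.LoadTensorFlowModel("saved_model.pb");

        // No trainer is appended: the frozen TensorFlow graph is used
        // purely for scoring predictions.
        var pipeline = tfModel.ScoreTensorFlowModel(
            outputColumnNames: new[] { "Softmax" },
            inputColumnNames: new[] { "Input" });
    }
}
```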

ML.Net 0.6 also introduces the ability to score predictions using models created in the open ONNX format. ONNX models can be exported and re-used by other frameworks including TensorFlow and Scikit-learn. ML.Net has long had the ability to export models as ONNX; now it can take in ONNX models and use them for scoring predictions as well.
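Consuming an ONNX model for scoring follows the same shape; a sketch using the later stabilized API from the Microsoft.ML.OnnxTransformer package, with a placeholder model file:

```csharp
using Microsoft.ML;

class Program
{
    static void Main()
    {
        var mlContext = new MLContext();

        // "model.onnx" is a placeholder; requires the
        // Microsoft.ML.OnnxTransformer package (backed by ONNX Runtime).
        var pipeline = mlContext.Transforms.ApplyOnnxModel(
            modelFile: "model.onnx");

        // Fitting this against a data view whose schema matches the
        // model's inputs yields a transformer that scores predictions
        // with the imported ONNX graph — no .Net-side training step.
    }
}
```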