AWS unveils open source model server for PyTorch

Intended to ease production deployments of PyTorch models, TorchServe supports multi-model serving and model versioning for A/B testing


Amazon Web Services (AWS) has unveiled an open source tool, called TorchServe, for serving PyTorch machine learning models. TorchServe is maintained by AWS in partnership with Facebook, which developed PyTorch, and is available as part of the PyTorch project on GitHub.

Released on April 21, TorchServe is designed to make it easy to deploy PyTorch models at scale in production environments. Its goals include lightweight, low-latency serving and high-performance inference.

The key features of TorchServe include:

  • Default handlers for common applications such as object detection and text classification, sparing users from having to write custom code to deploy models.
  • Multi-model serving.
  • Model versioning for A/B testing.
  • Metrics for monitoring.
  • RESTful endpoints for application integration.
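The RESTful endpoints mean that applications can call a served model over plain HTTP rather than linking against PyTorch directly. As a minimal sketch, the snippet below builds a POST request against TorchServe's default inference endpoint (port 8080, path `/predictions/{model_name}`); the model name `my_model` and the idea of a running local server are assumptions for illustration, not part of the article.

```python
# Sketch: calling TorchServe's REST inference API from Python.
# Assumes a TorchServe instance is running locally with a registered
# model named "my_model" -- both are hypothetical here.
import urllib.request

# TorchServe's default inference port is 8080.
INFERENCE_URL = "http://localhost:8080/predictions/{model}"

def prediction_request(model: str, payload: bytes) -> urllib.request.Request:
    """Build a POST request carrying the raw input bytes (e.g. an image)."""
    return urllib.request.Request(
        INFERENCE_URL.format(model=model),
        data=payload,
        method="POST",
    )

# To actually run inference (requires a live TorchServe instance):
# with open("kitten.jpg", "rb") as f:
#     req = prediction_request("my_model", f.read())
# print(urllib.request.urlopen(req).read())
```

Because the endpoint is ordinary HTTP, the same call works from any language or tool (for example `curl`), which is what makes the server easy to integrate into existing applications.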

TorchServe supports any deployment environment, including Kubernetes, Amazon SageMaker, Amazon EKS, and Amazon EC2. It requires Java 11 on Ubuntu Linux or macOS. Detailed installation instructions can be found on GitHub.

Copyright © 2020 IDG Communications, Inc.