PyTorch 1.0 accelerates Python machine learning with native code

The PyTorch 1.0 release candidate introduces Torch Script, a subset of Python that can be JIT-compiled into high-speed native code independent of the Python runtime


An official release candidate of PyTorch 1.0, the Python-centric deep learning framework created by Facebook, is available for developer testing. One of the most touted features of the new release is the ability to define models in ordinary Python code and have the performance-critical parts selectively accelerated, an approach similar to those taken by competing frameworks.

Python’s traditional role in machine learning has been to wrap high-speed, back-end code libraries with easy-to-use, front-end syntax. Anyone who writes machine learning modules in Python quickly discovers that native Python isn’t nearly fast enough for performance-critical research work or production use.

PyTorch’s developers have introduced a feature in PyTorch 1.0, called Torch Script, that strikes a balance between Python’s accessible syntax and the performance of native code. Torch Script is a subset of Python that PyTorch can just-in-time compile into fast native code that doesn’t rely on the Python runtime.
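In practice, the idea looks something like the following minimal sketch, which compiles a small function with the @torch.jit.script decorator as documented for the 1.0 release candidate (the function name scaled_sum is purely illustrative):

    import torch

    # Compile a small function to Torch Script; arguments default to tensors.
    # The compiled body no longer needs the Python interpreter to execute.
    @torch.jit.script
    def scaled_sum(x, y):
        return (x + y) * 2

    print(scaled_sum(torch.ones(3), torch.ones(3)))  # tensor([4., 4., 4.])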

Torch Script works in one of two ways. New code can be written directly in the Torch Script language, which by design compiles readily to native code. It’s also possible to take existing Python code, run it through PyTorch’s tracing mechanism (surfaced as the @torch.jit.trace decorator in the release candidate), and have the recorded operations just-in-time compiled to native code. Because a trace only captures the operations executed for the example inputs it is given, data-dependent control flow is lost, so this route is not as effective as writing Torch Script directly.
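A rough sketch of the tracing path, using the torch.jit.trace call as documented for the release candidate (the helper add_relu and the example inputs are illustrative):

    import torch

    def add_relu(x, y):
        # Plain PyTorch code with no Torch Script annotations.
        return torch.relu(x + y)

    # Tracing runs the function once on example inputs and records the tensor
    # operations it performs; the recorded graph is then compiled to native code.
    traced = torch.jit.trace(add_relu, (torch.randn(4), torch.randn(4)))

    print(traced(torch.randn(4), torch.randn(4)))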

According to the Torch Script documentation, “[Torch Script] makes it possible to train models in PyTorch using familiar tools, and then export the model to a production environment where it is not a good idea to run models as Python programs for performance and multi-threading reasons.”
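A hedged sketch of that export path: a module is traced, saved to a self-contained archive, and reloaded without the original Python source. The Scale module and the scale.pt file name below are illustrative; the save and torch.jit.load calls are those documented for the 1.0 release candidate:

    import torch

    class Scale(torch.nn.Module):
        def __init__(self, factor):
            super(Scale, self).__init__()
            self.factor = factor

        def forward(self, x):
            return x * self.factor

    # Trace the module on an example input, then serialize the compiled program.
    module = torch.jit.trace(Scale(3.0), torch.randn(2))
    module.save("scale.pt")

    # The archive can be loaded back into Python, or into a C++ application
    # through the libtorch API, without needing the original model code.
    restored = torch.jit.load("scale.pt")
    print(restored(torch.ones(2)))  # tensor([3., 3.])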

Torch Script’s approach echoes other methods for developing high-performance software in Python. For example, Anaconda’s Numba library compiles specified functions to native code, using either just-in-time or ahead-of-time compilation. Numba can generate code that runs without Numba itself present, but that code still depends on NumPy and on the Python runtime generally.
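For comparison, a minimal Numba sketch (the running_total function is illustrative); the first call compiles the function to native code for the argument types it receives:

    import numpy as np
    from numba import jit

    # nopython mode compiles the whole function to machine code; falling back
    # to the Python interpreter would otherwise negate most of the speedup.
    @jit(nopython=True)
    def running_total(values):
        total = 0.0
        for v in values:
            total += v
        return total

    print(running_total(np.arange(1_000_000, dtype=np.float64)))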

Another commonly used package, Cython, allows Python to be turned incrementally into C by way of custom syntax declarations. Cython can work with the full range of Python and C types, as well as all of Python’s syntax. Torch Script, by contrast, is restricted to operations on PyTorch tensors, integers, and floating-point numbers, and it can’t use constructs such as exceptions.
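That restriction shows up in Torch Script’s type annotations. In the 1.0 release candidate, argument types other than tensors are declared with MyPy-style comments, as in this illustrative sketch (repeat_add is a made-up name):

    import torch

    @torch.jit.script
    def repeat_add(x, times):
        # type: (Tensor, int) -> Tensor
        # Only tensors and a handful of scalar types such as int and float
        # are available; arbitrary Python objects are not.
        result = x
        for i in range(times):
            result = result + x
        return result

    print(repeat_add(torch.ones(2), 3))  # tensor([4., 4.])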
