PyTorch 1.4 has been released, and the PyTorch domain libraries have been updated along with it. The popular open source machine learning framework has some experimental features on board, so let’s take a closer look.
PyTorch Mobile and Java bindings
PyTorch Mobile was first introduced in PyTorch 1.3 as an experimental release. It should provide an “end-to-end workflow from Python to deployment on iOS and Android,” as the website states. In the latest release, PyTorch Mobile is still experimental but has received additional features.
For example, developers can now customize build scripts, which can help reduce on-device footprints by optimizing the library size. According to the release notes, a customized MobileNetV2 can be 40% to 50% smaller compared to the prebuilt PyTorch mobile library.
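The custom build flow works by compiling only the operators a given model actually uses. A minimal sketch, assuming PyTorch is installed: script the model you plan to ship and dump its operator list with `torch.jit.export_opnames`, which the mobile build scripts can consume to produce a trimmed-down library (the `TinyNet` model here is a hypothetical stand-in for a real network such as MobileNetV2):

```python
import torch

# Hypothetical stand-in model; in practice this would be the network
# you intend to deploy on-device.
class TinyNet(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x + 1.0)

# Script the model, then export the list of operators it uses.
# The custom mobile build takes this list to compile a smaller library.
scripted = torch.jit.script(TinyNet())
op_names = torch.jit.export_opnames(scripted)  # list of operator name strings
print(op_names)
```

Only the operators in this list need to be linked into the on-device binary, which is where the reported 40% to 50% size reduction comes from.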
PyTorch comes with support for Python and C++. As an experimental feature, Java bindings are now included as well. They are currently available only for Linux and only for inference, but the PyTorch team plans to keep working on them in the future.
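The Java bindings operate on serialized TorchScript modules rather than Python code. A minimal sketch, assuming PyTorch is installed: export a module from Python, which the experimental Java API (`org.pytorch.Module.load`) can then load for inference; the round trip is checked here from Python since the Java side requires a JVM setup:

```python
import torch

# Hypothetical example module to export for Java-side inference.
class AddOne(torch.nn.Module):
    def forward(self, x):
        return x + 1.0

# Serialize the scripted module to a file.
scripted = torch.jit.script(AddOne())
scripted.save("add_one.pt")

# Round-trip check from Python; on the Java side this would be
# something like Module.load("add_one.pt").forward(...)
reloaded = torch.jit.load("add_one.pt")
out = reloaded(torch.ones(3))
```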
Another experimental feature in PyTorch 1.4 is distributed model parallel training, which should “help researchers push the limits,” as the scale of models continues to increase.
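Model parallel training in 1.4 is built on the new distributed RPC framework, which lets workers invoke computations on each other's model shards. A minimal sketch, assuming a PyTorch build with `torch.distributed.rpc`: a single-process, `world_size=1` setup that calls a computation on itself, just to show the API shape (address, port, and worker name are arbitrary choices for this example):

```python
import os
import torch
import torch.distributed.rpc as rpc

# Arbitrary rendezvous settings for this single-process example.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")

rpc.init_rpc("worker0", rank=0, world_size=1)

# In a real model parallel setup the target would be another worker
# holding its shard of the model; here the worker calls itself.
result = rpc.rpc_sync("worker0", torch.add, args=(torch.ones(2), torch.ones(2)))

rpc.shutdown()
```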
Domain library updates
Along with the release of PyTorch 1.4, its three domain libraries—torchvision, torchtext and torchaudio—have also received upgrades.
torchvision 0.5 adds new features for TorchScript, ONNX and production deployment, while torchaudio 0.4 focuses on improving the currently available transformations and datasets as well as backend support. The third library, torchtext 0.5, has mainly received upgrades regarding the dataset loader APIs.
Breaking changes and more
Aside from bug fixes and performance improvements, PyTorch 1.4 introduces some backward-incompatible changes affecting Python, JIT, and C++.
For Python, one of these changes affects torch.optim: Scheduler.get_lr() can no longer be used. Instead, Scheduler.get_last_lr() should be called to get the last computed learning rate.
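The change can be illustrated with a short sketch, assuming PyTorch 1.4 or later; the model, learning rate, and scheduler settings here are arbitrary example values:

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.5)

optimizer.step()
scheduler.step()

# get_last_lr() replaces the old get_lr() pattern and returns the most
# recently computed learning rate, one value per parameter group.
last_lr = scheduler.get_last_lr()
```

After one scheduler step with `gamma=0.5`, the returned rate is half the initial value.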
Read more on PyTorch 1.4 in the GitHub release notes; the highlights are summarized in a blog post.
The post PyTorch 1.4 adds experimental Java bindings and additional PyTorch Mobile support appeared first on JAXenter.