To kick things off, what is PyTorch and why do we care? It’s basically this awesome library that lets us create neural networks for deep learning. And who doesn’t love deep learning? It’s like the ultimate brain food for your computer. But here’s where AI-Dock comes in: it’s a containerized version of PyTorch, which means we can run it on any cloud service without having to worry about setting up all those pesky dependencies and configurations.
Now let’s look at some of the cool features that come with this PyTorch container. First off, it has GPU acceleration built in, so your neural networks will train and run dramatically faster than they would on a CPU alone. And if you want to use other libraries like DALI or RAPIDS for data loading and preprocessing, they’re already included too.
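Before diving into those libraries, here is a quick sanity check you can run inside the container to confirm the GPU build of PyTorch is active. This is just a sketch; the exact version string and device name depend on your image and hardware.

```python
# Quick check that the CUDA-enabled build of PyTorch sees a GPU.
import torch

print(torch.__version__)              # CUDA builds typically report something like "2.x.x+cuXXX"
print(torch.cuda.is_available())      # True when a GPU is visible inside the container
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible GPU
```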
DALI is designed to accelerate data loading and preprocessing pipelines for deep learning applications by offloading them to the GPU. It primarily focuses on building data preprocessing pipelines for image, video, and audio data. These pipelines are typically complex and include multiple stages, leading to bottlenecks when run on the CPU. Use this container to get started on accelerating data loading with DALI.
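To make that concrete, here is a minimal sketch of a DALI image pipeline, assuming the container ships the nvidia.dali package and that `./images` (a hypothetical path) contains JPEGs arranged in per-class subdirectories.

```python
# Minimal DALI pipeline sketch: read JPEGs, decode and preprocess on the GPU.
from nvidia.dali import pipeline_def
import nvidia.dali.fn as fn
import nvidia.dali.types as types

@pipeline_def(batch_size=32, num_threads=4, device_id=0)
def image_pipeline():
    # Reads encoded JPEGs and integer labels from ./images (hypothetical path).
    jpegs, labels = fn.readers.file(file_root="./images", random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")       # decode partly/fully on GPU
    images = fn.resize(images, resize_x=224, resize_y=224)  # GPU resize
    images = fn.crop_mirror_normalize(                      # normalize on GPU
        images,
        dtype=types.FLOAT,
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
    )
    return images, labels

pipe = image_pipeline()
pipe.build()
images, labels = pipe.run()  # one preprocessed batch, resident on the GPU
```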
RAPIDS is a suite of open source software libraries and APIs that gives you the ability to execute end-to-end data science and analytics pipelines entirely on the GPU. RAPIDS focuses on common data preparation tasks for analytics and data science, providing massive speedups with minor changes to a preexisting codebase. Use this container to get started on accelerating your data science pipelines with RAPIDS.
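For example, cuDF (the RAPIDS DataFrame library) mirrors much of the pandas API, so existing code often needs only an import swap. This sketch assumes the container includes cudf; the CSV path and column names are hypothetical.

```python
# Minimal cuDF sketch: load a CSV into GPU memory and run a GPU-accelerated groupby.
import cudf

df = cudf.read_csv("transactions.csv")                 # hypothetical file, loaded on the GPU
summary = df.groupby("customer_id")["amount"].sum()    # aggregation runs on the GPU
print(summary.head())
```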
Training
The version of PyTorch in this container is precompiled with cuDNN support, which provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers. NCCL is integrated with PyTorch as a torch.distributed backend, providing implementations for broadcast, all_reduce, and other algorithms.
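Here is a minimal sketch of using the NCCL backend through torch.distributed, assuming the script is launched with torchrun so that RANK, WORLD_SIZE, and LOCAL_RANK are set in the environment.

```python
# Minimal distributed sketch: initialize NCCL and run an all_reduce across ranks.
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")        # NCCL handles the GPU collectives
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

tensor = torch.ones(1, device="cuda")
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)  # sums the tensor across all ranks
print(f"rank {dist.get_rank()}: {tensor.item()}")

dist.destroy_process_group()
```

Launched with, for example, `torchrun --nproc_per_node=2 script.py`, each rank should print the total number of participating processes.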
Inference
TensorRT is an SDK for high-performance deep learning inference that includes a deep learning inference optimizer and runtime, delivering low latency and high throughput for inference applications. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate seamlessly into the JIT runtime. After compilation, using the optimized graph should feel no different from running a TorchScript module.
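As a rough illustration, here is a sketch of compiling a model with Torch-TensorRT, assuming the container provides the torch_tensorrt package; the ResNet-50 model and input shape are just illustrative choices.

```python
# Minimal Torch-TensorRT sketch: compile a model and run inference through the optimized graph.
import torch
import torch_tensorrt
import torchvision.models as models

model = models.resnet50(weights=None).eval().cuda()
example_input = torch.randn(1, 3, 224, 224, device="cuda")

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(example_input.shape)],
    enabled_precisions={torch.float16},   # allow FP16 kernels for lower latency
)

with torch.no_grad():
    output = trt_model(example_input)     # runs through the TensorRT-optimized module
```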
For more information about PyTorch, including tutorials, documentation, and examples, see https://pytorch.org/docs/. For the latest Release Notes, see the PyTorch Release Notes. For a full list of the supported software and specific versions packaged in this container image, see the Frameworks Support Matrix. To review known CVEs on this image, refer to the Security Scanning tab on this page. By pulling and using the container, you accept the terms and conditions of this End User License Agreement.