GPUs are increasingly becoming the computing device of choice for many scientific computing and machine learning workflows. As these workloads shift, so must the software used to program them. Support for native GPU computing has been available in the Julia programming language for many years, and with the release of Julia 1.0 last year it has finally reached stability and widespread use. Unlike many other programming languages, Julia exposes not only high-level access to GPU-accelerated array primitives (such as matrix multiplication, Fourier transforms, or convolutions), but also allows developers to write custom GPU kernels, taking advantage of the full power and flexibility of the underlying hardware without switching languages. This ability also makes it easy to reuse and move code from CPU-based applications to the GPU, lowering the barrier to entry and shortening the time to solution.
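Both styles can be sketched in a few lines. The example below uses the CuArrays and CUDAnative packages (the Julia GPU stack at the time of writing) and assumes an NVIDIA GPU is available; the kernel name `vadd!` is illustrative.

```julia
using CuArrays, CUDAnative

# High-level array programming: operations on CuArrays run on the GPU.
A = CuArrays.rand(1024, 1024)
B = CuArrays.rand(1024, 1024)
C = A * B            # matrix multiply dispatches to a GPU-accelerated BLAS

# Custom kernel: ordinary Julia syntax, compiled for the GPU.
function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

a = CuArrays.fill(1.0f0, 4096)
b = CuArrays.fill(2.0f0, 4096)
c = similar(a)
@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```

Note that the kernel is written against the same arrays used by the high-level code, which is what makes moving code between the CPU and GPU incremental rather than a rewrite.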
Like Julia itself, Julia’s GPU support is used for an impressive variety of applications, from machine learning to climate modeling. Modern machine learning would be unimaginable without the computational power of GPUs. Users of the Flux.jl machine learning library for Julia can take advantage of GPUs with a one-line change and no further code modification. In addition, Julia’s differentiable programming support is fully GPU-compatible, providing GPU acceleration for models at the cutting edge of machine learning research and scaling from a single user with a GPU in their laptop to thousands of GPUs on the world’s largest supercomputers.
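As a rough sketch of what that one-line change looks like, consider a small classifier in Flux.jl (the model architecture here is only illustrative, and a GPU-enabled Julia installation is assumed):

```julia
using Flux

# A small, hypothetical model defined exactly as it would be for the CPU.
model = Chain(Dense(784, 128, relu), Dense(128, 10), softmax)

# The one-line change: move the model's parameters to the GPU.
model = gpu(model)          # equivalently: model |> gpu

# Inputs moved to the GPU flow through the model unchanged.
x = gpu(rand(Float32, 784))
y = model(x)
```

On a machine without a GPU, `gpu` is a no-op, so the same script runs in both environments.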
Of course, use of Julia on GPUs is much broader than just machine learning. Pumas AI uses Julia’s GPU support to compute personalized drug dosing regimens, using the DifferentialEquations.jl suite of solvers, probably the most comprehensive collection of differential equation solvers in any language. Since GPUs are a native target for Julia, running these solvers on GPUs requires minimal modifications.
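In the common case, the modification amounts to placing the problem’s state on the GPU. The sketch below is a minimal, hypothetical example (a linear ODE with a randomly chosen matrix), assuming CuArrays and a GPU-capable solver configuration:

```julia
using DifferentialEquations, CuArrays

# State and parameters live on the GPU; the solver operates on them in place.
A  = CuArrays.rand(Float32, 100, 100) .- 0.5f0
u0 = CuArrays.rand(Float32, 100)

f(u, p, t) = p * u                      # u' = A*u, evaluated on the GPU

prob = ODEProblem(f, u0, (0.0f0, 1.0f0), A)
sol  = solve(prob, Tsit5())
```

Swapping `CuArrays.rand` for `rand` recovers the CPU version of the same program, which is the sense in which the port requires minimal changes.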
The same story played out in a port of a massively parallel multi-GPU solver for spontaneous nonlinear multi-physics flow localization in 3-D by Stanford University and the Swiss National Supercomputing Centre (CSCS), in work presented at JuliaCon 2019 earlier this year. In this instance, Julia replaced a legacy system written in MATLAB and CUDA C, solving the “two-language problem” by allowing both high-level code and GPU kernels to be expressed in the same language and share a single code base.
[Figure: NVIDIA Arm server reference design platform]
Additionally, Julia was selected by the Climate Modeling Alliance as the sole implementation language for their next-generation global climate model. This multi-million-dollar project aims to build an Earth-scale climate model providing insight into the effects and challenges of climate change. For such a massive task, both productivity and first-class performance are non-negotiable requirements for the implementation language, and after extensive evaluation the CliMA project leaders selected Julia as the only system capable of delivering both.
To further promote the use of Julia on GPUs, Julia Computing and NVIDIA are excited to announce the availability of the Julia programming language as a pre-packaged container on the NVIDIA GPU Cloud (NGC) container registry, making it easy to rapidly deploy Julia-based GPU-accelerated applications. NGC offers a comprehensive catalog of GPU-accelerated software for deep learning, machine learning, and HPC. By taking care of the plumbing, NGC enables users to focus on building lean models, producing optimal solutions and gathering faster insights.
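Deploying from NGC follows the usual container workflow. The commands below are a hypothetical sketch; the exact repository path and image tag should be taken from the NGC catalog entry, and an NVIDIA driver plus a container runtime with GPU support (Docker 19.03+ shown here) are assumed:

```shell
# Pull the Julia container from the NGC registry (tag is illustrative).
docker pull nvcr.io/hpc/julia:v1.2.0

# Start an interactive Julia session with all host GPUs exposed.
docker run --gpus all -it --rm nvcr.io/hpc/julia:v1.2.0 julia
```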
Additionally, NVIDIA and Julia Computing are pleased to announce that Julia just works out-of-the box with the early access preview CUDA stack for Arm server systems, allowing Julia users to take advantage of GPU acceleration independent of the underlying CPU architecture.
In June of 2019 at the International Supercomputing (ISC) conference, NVIDIA announced its intent to deliver a complete CUDA software stack for Arm, and at this year’s North American Supercomputing conference (SC19), NVIDIA is delivering on the promise by jumpstarting the HPC developer tools ecosystem. Julia support for this software stack will be available at launch and Julia users intending to develop for the Arm platform should be able to run their existing applications without modification. The Arm platform is rapidly improving as various building blocks from storage to networking to GPU processing start working together.
Julia’s native support for NVIDIA GPUs is one of the easiest ways to get started with GPU programming. Learn more by reading the documentation, and get started by trying the Julia NGC container today. We’re looking forward to seeing what you’ll build.