The Compilers for Machine Learning workshop
was recently held at CGO 2019. Since
compiler techniques affect a large part of the machine learning stack,
this workshop aimed to highlight research that incorporates compiler
techniques and algorithms in optimizing machine learning
workloads. The workshop included talks from various projects: Julia (Julia Computing), TVM (UW), Glow (Facebook), XLA (Google), nGraph (Intel), TensorRT (Nvidia), and the soon-to-be-released MLIR (Google).
Our talk introduced the abstractions in the Julia language and the
kinds of compiler transforms involved in implementing them. We then
took a deep dive into how Julia's dynamic semantics interact with
static analysis in our JAOT (Just-Ahead-Of-Time) compilation
model. Building on these capabilities, the Zygote system implements
automatic differentiation, effectively treating it as a compiler
problem and giving us differentiable programming for free. Finally,
compiler backends for TPUs give us high-performance
execution. All of this comes together beautifully in Neural
ODEs, which we had to show off as our first slide!
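To make the differentiable-programming point concrete, here is a minimal sketch of what Zygote looks like from the user's side: you write an ordinary Julia function, and Zygote derives its gradient by transforming the compiler's intermediate representation (assuming the Zygote.jl package is installed).

```julia
using Zygote  # source-to-source automatic differentiation

# An ordinary Julia function -- no special AD types or tracing required.
f(x) = 3x^2 + 2x + 1

# Zygote compiles a gradient function directly from f's IR.
# The analytic derivative is 6x + 2, so at x = 2.0 we expect 14.0.
df = gradient(f, 2.0)   # -> (14.0,)
```

Because the differentiation happens as a compiler transform rather than through operator overloading on a special tensor type, the same mechanism works for control flow, user-defined structs, and other plain Julia code.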
The slide deck from our talk is available online. A PDF is also available in case Google Docs are blocked.