Julia and Julia Computing are featured in a new insideHPC white paper titled “AI-HPC Is Happening Now.”
insideHPC is a leading blog in the high-performance computing (HPC) community.
The article notes that “Julia … recently delivered a peak performance of 1.54 petaflops using 1.3 million threads on 9,300 Intel Xeon Phi processor nodes of the Cori supercomputer at NERSC. The Celeste project utilized a code written entirely in Julia that processed approximately 178 terabytes of celestial image data and produced estimates for 188 million stars and galaxies in 14.6 minutes.”
Julia Computing CTO (Tools) Keno Fischer explains, “We used Julia on the world’s sixth most powerful supercomputer to achieve a performance improvement of 1,000x over unoptimized single core execution. We have demonstrated that Julia scales effectively and efficiently from a single laptop or desktop to dozens or hundreds of nodes in the cloud and multithreaded parallel supercomputing at petascale. Julia has been downloaded more than 1.2 million times, an annual increase of +161%. Julia is also helping quantitative finance analysts on Wall Street and rocket scientists at NASA’s Jet Propulsion Laboratory achieve faster computing speeds with higher productivity.”
About Julia and Julia Computing
Julia is the fastest modern high-performance open source computing language for data, analytics, algorithmic trading, machine learning and artificial intelligence. Julia combines the functionality and ease of use of Python, R, Matlab, SAS and Stata with the speed of C++ and Java. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity. Julia provides parallel computing capabilities out of the box and unlimited scalability with minimal effort. With more than 1.2 million downloads and 161% annual growth, Julia is one of the top programming languages developed on GitHub, and adoption is growing rapidly in finance, insurance, energy, robotics, genomics, aerospace and many other fields.
Julia users, partners and employers hiring Julia programmers in 2017 include Amazon, Apple, BlackRock, Capital One, Citibank, Comcast, Disney, Facebook, Ford, Google, Grindr, IBM, Intel, KPMG, Microsoft, NASA, Oracle, PwC and Uber.
Julia is lightning fast. Julia is being used in production today and has delivered speed improvements of up to 1,000x for insurance model estimation and for parallel astronomical image analysis on supercomputers.
Julia provides unlimited scalability. Julia applications can be deployed on large clusters at the click of a button and can run parallel and distributed computations quickly and easily on tens of thousands of nodes.
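As a minimal sketch of Julia's out-of-the-box parallelism using the standard Distributed library (the worker count and the toy computation here are illustrative assumptions, not from the release):

```julia
using Distributed

# Add four local worker processes; on a cluster, workers can
# instead be launched across many nodes.
addprocs(4)

# pmap distributes the work across the available workers.
squares = pmap(x -> x^2, 1:8)

println(squares)  # [1, 4, 9, 16, 25, 36, 49, 64]
```

The same pmap call runs unchanged whether the workers live on one laptop or on many machines.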
Julia is easy to learn. Julia’s flexible syntax is familiar and comfortable for users of Python, R and Matlab.
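For example, a one-line function definition and a vectorized call read much as they would in Matlab or in Python with NumPy (a minimal sketch; the function f is just an illustration):

```julia
# A one-line function definition with math-like syntax.
f(x) = 3x^2 + 2x + 1

# The dot syntax applies f element-wise, as vectorized
# operations do in Matlab or NumPy.
values = [1.0, 2.0, 3.0]
results = f.(values)

println(results)  # [6.0, 17.0, 34.0]
```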
Julia integrates well with existing code and platforms. Users of C, C++, Python, R and other languages can easily integrate their existing code into Julia.
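For instance, C functions can be invoked directly with `ccall`, with no wrapper or glue code (a minimal sketch assuming a POSIX system where the C library's getpid is available):

```julia
# Call the C library's getpid directly; Julia needs no
# bindings or generated wrapper code for C calls.
pid = ccall(:getpid, Cint, ())

println("Running as process ", pid)
```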
Julia code is elegant. Julia was built from the ground up for mathematical, scientific and statistical computing. It has advanced libraries that make programming simple and fast and dramatically reduce the number of lines of code required – in some cases, by 90% or more.
Julia solves the two language problem. Because Julia combines the ease of use and familiar syntax of Python, R and Matlab with the speed of C, C++ or Java, programmers no longer need to estimate models in one language and reproduce them in a faster production language. This saves time and reduces error and cost.
Julia Computing was founded in 2015 by the creators of the open source Julia language to develop products and provide support for businesses and researchers who use Julia.
Need help with Julia? We provide training and consulting services, and we build open source or proprietary packages for our customers on a consulting basis. We are based in Boston, New York, San Francisco, London and Bangalore, with customers across the world.
© 2016 - 2020 Julia Computing, Inc. All Rights Reserved.