The Rise Of AI

Artificial Intelligence (AI) has risen to prominence in the past ten years across academia and industry. One of the most successful branches of artificial intelligence is deep learning.

Deep neural networks have been around since the 1940s, but have only recently been deployed widely in research and analytics, following improvements in technology and the computational horsepower afforded by modern graphics processing units (GPUs). Neural networks can now carry out tasks such as vision and speech processing that were previously considered difficult to automate, and these capabilities are being combined into world-changing applications such as self-driving cars, Amazon Echo, personal assistants, chat bots, and many others.

The Challenges

Interesting and exciting as these applications are, businesses today are faced with a number of challenges:

  • Safety and regulation. How do we ensure that learned algorithms do what they are intended to do, increase transparency in financial models, or ensure safety with self-driving cars and unmanned aerial vehicles?
  • Expertise. Companies such as Google, Facebook, and Amazon can devote dozens of engineers to fine-tune their deep learning models and extract the best performance from their programs, but these resources aren’t available to domain experts in other organizations.
  • Scale and deployment. These applications need to run on a wide variety of hardware: models may be trained on clusters of GPUs with petabytes of data, and the trained models may then be deployed on the web, on smartphones, or on the tiniest of ARM systems.

Addressing the challenges

There are a number of ways to address these challenges. One popular approach has been to build sophisticated libraries for deep learning; examples include MXNet, Caffe, Torch, Theano, and TensorFlow. Given the need for speed, these libraries are implemented in C++ and have wrappers for Julia, Python, R, and other easy-to-use programming languages. However, this two-language approach is limiting: it reduces flexibility and makes it difficult to customize models to the problem at hand. There are hundreds of domains in health, education, transportation, retail, manufacturing, and elsewhere where a one-size-fits-all approach is unlikely to work.

Enter: Julia

Julia was designed by its creators to solve exactly this two-language problem: the same code serves for both prototyping and production. Julia is also designed to be efficient, which saves both compute costs and human effort.
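To give a flavor of what this means in practice, here is a minimal sketch (our own illustration, not from any particular library): a numeric kernel written as a plain Julia loop is compiled just-in-time to native code, so the readable prototype can itself be the production implementation, with no rewrite in C++.

```julia
# A naive dot product written as a plain loop. In a two-language setting this
# prototype would be rewritten in C for speed; Julia's JIT compiles it to
# native code directly, so no second implementation is needed.
function dot_product(x::Vector{Float64}, y::Vector{Float64})
    s = 0.0
    @inbounds for i in eachindex(x)
        s += x[i] * y[i]
    end
    return s
end
```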

Solving the two language problem

Julia co-creator Stefan Karpinski explains how Julia solves the two-language problem in this video, shot at Open Data Science East 2016.

Mike Innes - A Fresh Approach To Machine Learning

At PyData London 2017, Julia’s Mike Innes summarizes the flexibility and features Julia brings to the table in this domain.

Julia’s deep mathematical roots make it easy to translate algorithms from research papers into code with no loss in translation, thus improving safety and reducing model risk. Being just-in-time (JIT) compiled, Julia programs run at the speed of C while remaining as easy to write as Python or R. The most important aspect, however, is the community: Julia is open source, licensed under the liberal MIT license, and includes contributions from hundreds of programmers and researchers around the world. Julia also runs on all kinds of silicon: chips from Intel, IBM, NVIDIA, ARM, and many others yet to come. Our introductory machine learning blog post on k-nearest neighbors gives a flavor of how this versatility is useful in machine learning.
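As an illustration of how directly such an algorithm translates into Julia, here is a minimal k-nearest-neighbors classifier in plain Julia. This is our own sketch, not the code from the blog post mentioned above: prediction is a majority vote over the k training points closest in Euclidean distance.

```julia
# Minimal k-nearest-neighbors classifier (illustrative sketch).
# `train` is a vector of (features, label) pairs.
function knn_classify(train::Vector{Tuple{Vector{Float64},Symbol}},
                      x::Vector{Float64}, k::Int)
    # Distance from x to every training point, paired with its label.
    dists = [(sqrt(sum((p .- x) .^ 2)), label) for (p, label) in train]
    sort!(dists; by = first)

    # Majority vote over the k nearest neighbors.
    votes = Dict{Symbol,Int}()
    for (_, label) in dists[1:k]
        votes[label] = get(votes, label, 0) + 1
    end
    best_label, best_count = first(dists)[2], 0
    for (label, c) in votes
        if c > best_count
            best_label, best_count = label, c
        end
    end
    return best_label
end
```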

Julia In Use

See how Julia is carving out some interesting deep learning use cases.

Tangent Works

Tangent Works uses Julia to build a comprehensive analytics solution that blurs the barrier between prototyping done by data scientists and production development done by developers.


Diabetic Retinopathy Medical Diagnosis

Diabetic retinopathy is an eye disease that affects more than 126 million diabetics and accounts for more than 5% of blindness cases worldwide. Timely screening and diagnosis can help prevent vision loss for millions of diabetics worldwide. IBM and Julia Computing analyzed eye fundus images provided by Drishti Eye Hospitals and built a deep learning solution that provides eye diagnosis and care to thousands of rural Indians.


Efficiency, Scalability and Reliability

Computational speed and efficiency are also essential to producing high-quality machine learning software. Julia’s efficiency allows companies and researchers to deploy their code at scale. For example, Lawrence Berkeley National Lab (LBNL) achieved cluster-scale Bayesian inference on massive astronomical data using a combination of Julia’s native multi-threading and multi-process parallelism, the first of its kind for a high-level language. Julia’s efficiency also allows its parallel analytics to be faster than more popular frameworks such as Spark, as demonstrated by our blog post on parallel recommendation engines.
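A small sketch of what Julia’s native multi-threading looks like (our own toy example, not the LBNL code): the input is split into chunks, each chunk is summed on its own task, and the partial results are combined. Run Julia with multiple threads (e.g. `julia -t 4`) to see the parallelism.

```julia
using Base.Threads

# Sum a vector by splitting it into one chunk per thread and summing each
# chunk on a spawned task; the partial sums are then combined serially.
function threaded_sum(xs::Vector{Float64})
    nchunks = max(nthreads(), 1)
    len = cld(length(xs), nchunks)   # chunk length, rounded up
    tasks = [Threads.@spawn sum(@view xs[i:min(i + len - 1, end)])
             for i in 1:len:length(xs)]
    return sum(fetch.(tasks))
end
```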

Leveraging these capabilities has led to the creation of a number of deep learning libraries in Julia, such as Mocha, Merlin, and Knet, while still retaining the ability to use MXNet and TensorFlow. Our blog contains some interesting examples of deep learning in Julia, such as parallel neural styles on video.
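One reason such libraries are compact in Julia is that a neural network layer can be expressed directly in the language itself. The sketch below (our own illustration, independent of any particular library) defines a dense layer as a callable struct; the forward pass is a single line of generic array code.

```julia
# A dense (fully connected) layer with tanh activation, written in plain
# Julia as a callable struct: `W` is the weight matrix, `b` the bias vector.
struct Dense{T}
    W::Matrix{T}
    b::Vector{T}
end

# Forward pass: matrix multiply, add bias, apply the activation elementwise.
(d::Dense)(x::AbstractVector) = tanh.(d.W * x .+ d.b)
```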

As artificial intelligence moves beyond research labs and academia, the industry needs tools that are open, easy to use, well tested, scalable, and reliable.

Julia Computing, Inc. was founded with a mission to make Julia easy to use, easy to deploy and easy to scale. We operate out of Boston, New York, London, Bangalore, San Francisco, Los Angeles and Washington DC and we serve customers worldwide.
© 2015-2017 Julia Computing, Inc. All Rights Reserved.