MLIR: accelerating AI with open-source infrastructure

September 9, 2019

Machine learning now runs on everything from cloud infrastructure with GPUs and TPUs, to mobile phones, to even the smallest hardware, like the microcontrollers that power smart devices. The combination of advances in hardware and open-source software frameworks like TensorFlow is making possible all of the incredible AI applications we’re seeing today, whether it’s predicting extreme weather, helping people with speech impairments communicate better, or helping farmers detect plant diseases.

But with all this progress happening so quickly, the industry is struggling to make its many machine learning software frameworks work with a diverse and growing set of hardware. The machine learning ecosystem depends on many different technologies, with varying levels of complexity, that often don’t work well together. The burden of managing this complexity falls on researchers, enterprises, and developers. By slowing the pace at which new products driven by machine learning can go from research to reality, this complexity ultimately limits our ability to solve challenging, real-world problems.

Earlier this year we announced MLIR, open-source machine learning compiler infrastructure that addresses the complexity caused by growing software and hardware fragmentation and makes it easier to build AI applications. It offers new infrastructure and a design philosophy that enable machine learning models to be represented and executed consistently on any type of hardware. And today we’re announcing that we’re contributing MLIR to the nonprofit LLVM Foundation, which will enable even faster adoption of MLIR across the industry.
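To give a flavor of what “consistently represented” means in practice, here is a minimal sketch of MLIR’s textual IR (our illustration, not from the announcement): a function whose body holds an op written in MLIR’s generic form. The function name, the tf.Add op from the TensorFlow dialect, and the tensor shapes are illustrative assumptions.

    // A small function in MLIR's textual IR. The "tf.Add" op belongs to the
    // TensorFlow dialect; ops from other dialects (including hardware-specific
    // ones) live in the same IR, so compiler passes can lower this function
    // step by step toward a particular target.
    func @add_tensors(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
      %sum = "tf.Add"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
      return %sum : tensor<4xf32>
    }

Because every dialect shares this one underlying representation, a pass can rewrite the high-level op into lower-level ops for a given chip without inventing a new exchange format at each stage of the pipeline.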

MLIR aims to become the new standard for ML infrastructure, and it comes with strong support from global hardware and software partners including AMD, ARM, Cerebras, Graphcore, Habana, IBM, Intel, MediaTek, NVIDIA, Qualcomm Technologies, Inc., SambaNova Systems, Samsung, Xiaomi, and Xilinx, who together make up more than 95 percent of the world’s data-center accelerator hardware, more than 4 billion mobile phones, and countless IoT devices. At Google, MLIR is being incorporated and used across all of our server and mobile hardware efforts.

Machine learning has come a long way, but it’s still early days. With MLIR, AI will advance faster by empowering researchers to train and deploy models at larger scale, with more consistency, velocity, and simplicity across different hardware. These innovations can then quickly make their way into products that you use every day and run smoothly on all the devices you have, ultimately making AI more helpful and more useful to everyone on the planet.
