Artificial Intelligence developers get more freedom with Intel's open-source AI compiler


Intel has announced that it has made its nGraph compiler open source in order to give developers more freedom to choose frameworks and hardware when building artificial intelligence solutions. nGraph is a framework-neutral deep neural network (DNN) model compiler that helps users build AI-based solutions.

With nGraph, data scientists can focus on data science rather than worrying about how to adapt their DNN models to train and run efficiently on different devices. Highlights of Intel's engineering challenges and design decisions follow below; see the project's GitHub repository, its documentation, and Intel's SysML paper for additional details.

It supports various deep learning frameworks and optimizes models for multiple hardware targets, giving researchers a real choice of both framework and hardware. It lets framework owners add unique features with much less work, allows cloud service providers to address a larger market more easily, and helps enterprises maintain a consistent experience across frameworks and back ends, all without performance loss.

“Finding the right technology for AI solutions can be daunting for companies, and it’s our goal to make it as easy as possible. With the nGraph Compiler, data scientists can create deep learning models without having to think about how that model needs to be adjusted across different frameworks, and its open-source nature means getting access to the tools they need, quickly and easily.” – Arjun Bansal, VP, Artificial Intelligence Software, Intel

How does it work?

Install the nGraph library, then write or compile a framework against it in order to run training and inference models. On any supported system, specify nGraph as the framework backend to use from the command line. The Intermediate Representation (IR) layer handles all the device abstraction details, letting developers focus on their data science, algorithms, and models rather than on machine code.
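As a concrete sketch, the TensorFlow path works through a bridge package. The package name (`ngraph-tensorflow-bridge`) and the `ngraph_bridge` module below are taken from the ngraph-bridge project's published instructions at the time of the announcement; names and versions may have changed since, so treat this as an assumption and check the GitHub repository for current steps:

```shell
# Sketch only: package/module names are assumptions based on the
# ngraph-bridge project's README and may have changed since.
pip install ngraph-tensorflow-bridge   # install the TensorFlow bridge

# Inside a TensorFlow program, importing the bridge is enough to
# route graph execution through nGraph:
#   import tensorflow as tf
#   import ngraph_bridge
```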

More details are as follows:

  • The nGraph core creates a strongly-typed and device-neutral stateless graph representation of computations. Each node, or op, in the graph corresponds to one step in a computation, where each step produces zero or more tensor outputs from zero or more tensor inputs.
  • Intel has developed a framework bridge for each supported framework; it acts as an intermediary between the nGraph core and the framework. Currently, it has bridges for TensorFlow/XLA, MXNet, and ONNX. Since ONNX is only an exchange format, the ONNX bridge is augmented by an execution API.
  • A transformer plays a similar role between the nGraph core and the various devices; transformers handle the device abstraction with a combination of generic and device-specific graph transformations. The result is a function that can be executed from the framework bridge. Transformers also allocate, deallocate, read, and write tensors under the direction of the bridge. Currently, there are transformers for Intel Architecture, Intel NNP, and NVIDIA cuDNN, with additional devices under active development.
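
The core ideas above — a stateless graph of ops, each producing tensor outputs from tensor inputs, walked by a transformer to execute it on a device — can be illustrated with a toy sketch. This is not the real nGraph API; all class and function names here are hypothetical:

```python
# Toy sketch (not the real nGraph API) of a device-neutral, stateless
# computation graph: each node ("op") is one computation step producing
# an output from zero or more inputs, and a tiny "transformer" walks
# the graph to execute it.

class Op:
    def __init__(self, name, inputs=()):
        self.name = name            # kind of computation step
        self.inputs = list(inputs)  # upstream ops feeding this node

class Parameter(Op):
    """A graph input: bound to a concrete value only at execution time."""
    def __init__(self):
        super().__init__("Parameter")

class Add(Op):
    def __init__(self, a, b):
        super().__init__("Add", (a, b))

class Multiply(Op):
    def __init__(self, a, b):
        super().__init__("Multiply", (a, b))

def execute(output_op, bindings):
    """A toy 'transformer': recursively evaluate the graph.
    `bindings` maps Parameter nodes to concrete input values."""
    if isinstance(output_op, Parameter):
        return bindings[output_op]
    args = [execute(i, bindings) for i in output_op.inputs]
    if output_op.name == "Add":
        return args[0] + args[1]
    if output_op.name == "Multiply":
        return args[0] * args[1]
    raise ValueError("unsupported op: " + output_op.name)

# Build (a + b) * a once; because the graph is stateless, the same
# graph can be run repeatedly with different inputs.
a, b = Parameter(), Parameter()
result = Multiply(Add(a, b), a)
print(execute(result, {a: 2, b: 3}))   # (2 + 3) * 2 = 10
print(execute(result, {a: 4, b: 1}))   # (4 + 1) * 4 = 20
```

A real transformer would additionally apply generic and device-specific graph rewrites (fusion, layout changes) before emitting an executable function, rather than interpreting nodes one by one as this sketch does.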

Currently, the nGraph Compiler supports three deep learning compute devices and six third-party deep learning frameworks: TensorFlow, MXNet, neon, PyTorch, CNTK, and Caffe2. Intel will continue to add frameworks and devices in the coming months.

Source: Intel