Hi, my name is Faith Chu and I'm a Program Manager on the ML Platform Frameworks team. It's hard to ignore the growing impact of AI and machine learning as it expands into every industry, from automotive and healthcare to arts and games. For sustainable and fair growth in this hotspot of innovation, it's critical to have an open ecosystem that supports flexibility in development. As champions of open and interoperable AI, we at Microsoft have invested in the Open Neural Network Exchange, or ONNX, initiative, and we are proud to share that we have open-sourced ONNX Runtime, a scoring engine for ONNX format models. ONNX Runtime is now available on GitHub.

ONNX is an open format for representing traditional and deep learning ML models, supported by a community of over 20 leading companies. ONNX Runtime is a high-performance engine for running these models. It is fully compliant with the operators defined in the ONNX spec and works with both CPU and GPU across a growing number of platforms, including Linux, Windows, and Mac. It is designed with an extensible architecture to support plug-in hardware accelerators, allowing it to stay up to date with the latest innovations. Companies such as NVIDIA and Intel are actively contributing by integrating custom accelerators into ONNX Runtime.

You can get a pre-trained model from the ONNX Model Zoo, or train a model in any popular framework, such as TensorFlow, Keras, Scikit-learn, PyTorch, Core ML, and more, and convert it to ONNX. ONNX Runtime is simple to use: at a high level, once you have a model, you create a session, set the input data, and score the model (a minimal code sketch of this flow is shown below). You can integrate ONNX Runtime into your code directly from source or from pre-compiled binaries, but one simple and popular way to operationalize it is through Azure ML, deploying a service for your application to call.

Let's see this end to end in action. Here I am opening up a Jupyter notebook, which we will use to convert, load, and run the Tiny YOLO model using ONNX Runtime. This particular model was originally published in Core ML format and is used for real-time object detection. First, we will download the model and convert it to ONNX, using the open-source ONNXMLTools and coremltools packages to load, convert, and save the model in ONNX format.

To deploy this model as a service in Azure, we will use the Azure ML SDK. After creating a workspace, we can register the model we just converted for use. Using Azure ML, we will create a scoring file that contains the instructions to execute the runtime: in this case, we create a session, define the input and output data formats, and finally run the session with the given input. Next, we can build and deploy an image with this model and scoring file using Azure Container Instances, and it can also be productionized using Azure Kubernetes Service for production-level traffic. Since this can take a few minutes, we'll just use a service I deployed earlier with the same configuration.

Let's put this to use in an application. Here, I'm pasting in the URL that will run the model and uploading an image I took last weekend. In this image, you'll see that the car is identified, as well as the bike and the person. We can also see this with live video: here, you'll see that I'm identified as a person, and this bottle of hot sauce is identified as a bottle.
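As a quick illustration of the high-level flow mentioned above (create a session, set the input data, score the model), here is a minimal Python sketch. The model path, input name, and input shape are placeholders for illustration, not part of the demo itself.

```python
# Minimal sketch of scoring an ONNX model with ONNX Runtime (Python API).
# "model.onnx" and the input shape are placeholders; a real model defines its own.
import numpy as np
import onnxruntime as rt

# Create an inference session for the model.
session = rt.InferenceSession("model.onnx")

# Inspect the expected input so we can feed correctly shaped data.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Score the model: pass a dict mapping input names to numpy arrays.
dummy_input = np.random.rand(1, 3, 416, 416).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[0].shape)
```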
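The conversion step in the demo uses the open-source ONNXMLTools and coremltools packages. A minimal sketch of that step might look like the following, assuming the Tiny YOLO .mlmodel file has already been downloaded locally; the file and model names are placeholders.

```python
# Sketch of converting a Core ML model to ONNX with coremltools and onnxmltools.
# Assumes "TinyYOLO.mlmodel" has already been downloaded; file names are placeholders.
import coremltools
import onnxmltools

# Load the Core ML model specification.
coreml_model = coremltools.utils.load_spec("TinyYOLO.mlmodel")

# Convert the Core ML model to an ONNX model.
onnx_model = onnxmltools.convert_coreml(coreml_model, "TinyYOLOv2")

# Save the converted model in ONNX format.
onnxmltools.utils.save_model(onnx_model, "tinyyolov2.onnx")
```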
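Creating the workspace and registering the converted model with the Azure ML SDK could look roughly like this; the workspace name, subscription ID, resource group, and region are placeholders.

```python
# Sketch of registering the converted model with Azure ML (azureml-core SDK).
# Subscription, resource group, and workspace names are placeholders.
from azureml.core import Workspace
from azureml.core.model import Model

# Create (or retrieve) an Azure ML workspace.
ws = Workspace.create(name="onnx-demo-ws",
                      subscription_id="<subscription-id>",
                      resource_group="onnx-demo-rg",
                      location="eastus2",
                      exist_ok=True)

# Register the converted ONNX model so it can be deployed as a service.
model = Model.register(workspace=ws,
                       model_path="tinyyolov2.onnx",
                       model_name="tinyyolov2")
```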
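The scoring file mentioned in the demo typically defines an init() function that creates the ONNX Runtime session and a run() function that executes it on incoming requests. Here is a simplified sketch with image preprocessing and postprocessing omitted; the registered model name and JSON payload format are assumptions carried over from the previous sketch.

```python
# score.py - sketch of an Azure ML scoring script; preprocessing/postprocessing omitted.
import json
import numpy as np
import onnxruntime as rt
from azureml.core.model import Model

def init():
    # Called once when the service starts: create the ONNX Runtime session.
    global session
    model_path = Model.get_model_path("tinyyolov2")  # registered model name (assumed)
    session = rt.InferenceSession(model_path)

def run(raw_data):
    # Called per request: parse the input, run the session, return the output.
    data = np.array(json.loads(raw_data)["data"], dtype=np.float32)
    input_name = session.get_inputs()[0].name
    result = session.run(None, {input_name: data})
    return json.dumps({"result": result[0].tolist()})
```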
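Building and deploying the service, and then calling its URL as in the demo, could then be sketched as follows. This uses Azure Container Instances for dev/test traffic (an AKS deployment configuration would be used instead for production-level traffic); "ws" and "model" carry over from the registration sketch, the service name and payload format are illustrative, and a real request would send a preprocessed image rather than random data.

```python
# Sketch of deploying the registered model to Azure Container Instances and calling it.
# "ws" and "model" come from the registration sketch above; names and payload are illustrative.
import json
import numpy as np
import requests
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

# Environment with the packages the scoring script needs.
env = Environment("onnx-demo-env")
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=["numpy", "onnxruntime", "azureml-defaults"])

# Point Azure ML at the scoring script and its environment.
inference_config = InferenceConfig(entry_script="score.py", environment=env)

# Deploy to Azure Container Instances; use an AKS configuration for production traffic.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)
service = Model.deploy(ws, "tinyyolo-service", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)

# Call the scoring endpoint with (already preprocessed) image data.
dummy_image = np.random.rand(1, 3, 416, 416).astype(np.float32)
payload = json.dumps({"data": dummy_image.tolist()})
response = requests.post(service.scoring_uri, data=payload,
                         headers={"Content-Type": "application/json"})
print(response.json())
```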
Here at Microsoft, we are using ONNX Runtime to improve prediction latency and efficiency for many of our models across core scenarios, including Bing search, image and multimedia content recognition, Office productivity services, and more. In online scenarios, it can decrease user-perceived latency for a better, smoother user experience, and for offline computations, it can save on machine costs by increasing throughput. Compared to the original models scored in various frameworks, we have seen significant latency gains from running these models in the ONNX format using ONNX Runtime. We encourage you to try this out and to contribute to the continuously growing community of ONNX supporters.