Hey, I'm Chris Lauren and I'm here to talk to you about Azure Machine Learning. Azure Machine Learning helps simplify building your machine learning models by using our automated machine learning capabilities. Or if you want to build your own model, you can easily scale that out in the Cloud using our Python SDK with any open-source framework, and you can manage the end-to-end workflow using our Azure Machine Learning Pipelines, which is like DevOps for machine learning. It also helps you easily deploy your trained models to the Cloud and to the Edge. Now, there are a lot of different ways to use Azure to power your machine learning. You can use pre-trained Cognitive Services, which are available via REST APIs, or you can use the Azure Machine Learning service to train your own custom models using any framework of your choice, including PyTorch and TensorFlow, and then deploy those to powerful infrastructure using our CPUs, GPUs, or FPGAs to speed up your inferencing. Now, the end-to-end workflow includes preparing data in the Cloud using something like Azure Databricks, then using your own Python code in an IDE of your choice, such as Jupyter, VS Code, or PyCharm, with our Python SDK to train your model and keep track of your metrics. You can then register your trained model, deploy it to the Cloud or to the Edge, and orchestrate the whole thing using Azure Machine Learning Pipelines. Now, we make it really easy to scale out model training in the Cloud using powerful GPUs, and we even automatically provision these powerful compute clusters on-demand, so you only pay for them when you use them. You can use the latest GPUs from NVIDIA to train your models, or you can use FPGAs to score your trained models super fast. Once you have your trained models, you can deploy them to the Cloud using Docker containers, or to the Edge, so you can bring the intelligence local to where your application is being deployed. 
Let me show you how to do this. To use Azure Machine Learning, I'm going to show you how to take a bunch of handwritten numbers and recognize which digits they are using the handwritten MNIST dataset. You can see below that I've got a bunch of these handwritten numbers, and I've got them labeled to identify whether this is a three, this is a zero, and so on, and that's going to help me train the model to recognize the digits. To use Azure Machine Learning, I'm first going to create a workspace, which is logically a container that holds all of your compute targets, your experiments, your data stores, your trained machine learning models, your Docker images, and your deployed services in one place, which makes it really easy for teams to work together. To create that, I simply need to call Workspace.create using our Python SDK. Super easy. Behind the scenes, it actually creates an Azure Blob data store, it creates Application Insights, and more, and it ties them all together to make them really easy to use. I'm going to go ahead and load the dataset up into my Blob store, and then provision a powerful cluster of GPU machines. Now, since I've already created this ahead of time to speed up the demo, I'm simply going to go ahead and grab this compute target, called the GPU cluster. But take a look at how easy this is: I can specify that there should be a minimum of four VMs and a maximum of 20, so as I submit more machine learning jobs to my cluster, it's going to automatically grow and shrink. I can get the latest state, and then see how many nodes are actually working right now. Now, to get this started, I'm going to create an experiment, and an experiment is a way of keeping track of all my attempts to train this machine learning model and all the key metrics that are important. 
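The workspace, cluster, and experiment steps above look roughly like this with the Azure Machine Learning Python SDK (v1). This is a sketch rather than the exact demo code: the subscription ID, resource group, workspace name, and cluster name are placeholders, and running it requires an Azure subscription.

```python
from azureml.core import Experiment, Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

# Create the workspace; behind the scenes this also wires up supporting
# resources such as Blob storage and Application Insights.
ws = Workspace.create(
    name="connect-demo",                       # placeholder name
    subscription_id="<your-subscription-id>",  # placeholder
    resource_group="connect-demo-rg",          # placeholder
    location="eastus2",
)

# Provision an auto-scaling GPU cluster: it grows from 4 to 20 nodes as
# jobs are submitted, and you only pay for nodes while they're in use.
compute_config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_NC6",  # an NVIDIA GPU VM size
    min_nodes=4,
    max_nodes=20,
)
gpu_cluster = ComputeTarget.create(ws, "gpucluster", compute_config)
gpu_cluster.wait_for_completion(show_output=True)
print(gpu_cluster.get_status())  # latest state and active node count

# An experiment groups all the training runs and their key metrics.
experiment = Experiment(workspace=ws, name="mnist-digits")
```

On later runs you'd fetch the existing cluster with `ComputeTarget(workspace=ws, name="gpucluster")` instead of re-provisioning it, which is what grabbing the precreated compute target in the demo amounts to.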
Now, when I run this experiment using my own Python script on that GPU cluster I just created, I call submit with this TensorFlow estimator, and the TensorFlow estimator makes it really easy to automatically keep track of the key metrics and scale out across the different GPU machines in parallel. You can see I've run this ahead of time, and I have access to all of the log files that were streamed back as I trained my model, and this enables me to get a clear understanding of how well the model is performing using Jupyter widgets right here in the notebook. Now, one of the really important parts of training machine learning models is tuning hyperparameters. This usually involves a lot of manual guess-and-check, which is really tedious. However, I can use our HyperDrive service, which simply enables me to specify which parameters I care about, along with a policy by which to cancel jobs that are performing worse than other jobs, keeping only the models which are performing the best. Again, using Jupyter widgets inline, I can tell which runs are going really well and which are not. All of this is automatically stored in Azure, such that even from the Azure portal, other people on my team can get the same insight into how things are going. You can see that some of these runs stopped early, their lines don't go all the way, and that's because they were not performing as well as some of the other runs. I only want the best quality metric out of the whole batch, and there's no point in wasting compute cycles and paying for extra compute on a run that's not performing well. Once I have the best model, I can deploy it to the Cloud. But before I do that, I want to show you how easy this can be. 
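A hedged sketch of the submit and HyperDrive steps with the SDK (v1): it assumes an existing workspace config file, a cluster named gpucluster, and a hypothetical train.py that accepts the command-line arguments shown and logs a metric called validation_acc. None of those names come from the demo itself.

```python
from azureml.core import Experiment, Workspace
from azureml.core.compute import ComputeTarget
from azureml.train.dnn import TensorFlow
from azureml.train.hyperdrive import (
    BanditPolicy, HyperDriveConfig, PrimaryMetricGoal,
    RandomParameterSampling, choice, uniform,
)

ws = Workspace.from_config()
gpu_cluster = ComputeTarget(workspace=ws, name="gpucluster")
experiment = Experiment(workspace=ws, name="mnist-digits")

# The TensorFlow estimator packages the training script plus its
# dependencies and runs it on the GPU cluster.
estimator = TensorFlow(
    source_directory="./scripts",
    entry_script="train.py",   # hypothetical training script
    compute_target=gpu_cluster,
    use_gpu=True,
)
run = experiment.submit(estimator)
run.wait_for_completion(show_output=True)  # streams the log files back

# HyperDrive samples the hyperparameter space and early-terminates runs
# that fall behind the best one, so you stop paying for weak runs.
sampling = RandomParameterSampling({
    "--learning-rate": uniform(1e-4, 1e-1),
    "--batch-size": choice(32, 64, 128),
})
hd_config = HyperDriveConfig(
    estimator=estimator,
    hyperparameter_sampling=sampling,
    policy=BanditPolicy(slack_factor=0.1, evaluation_interval=2),
    primary_metric_name="validation_acc",  # assumed metric name
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
    max_total_runs=20,
)
hd_run = experiment.submit(hd_config)
```

The BanditPolicy here is what produces the truncated lines in the widget: any run whose metric drifts outside the slack factor of the current best run at an evaluation interval is cancelled early.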
If you don't know how to train your own machine learning model, you can use our AutoML service to simply say, I want to create a classifier, which is predicting the digit in this case, and then provide it a way to get the data. This will automatically train a machine learning model for you; it'll run many models in parallel and pick the best one. Now, since I just trained a model, you saw me do it, I'll show you how we can even use Visual Studio Code with the Visual Studio Code Tools for AI extension. I can keep track of my experiments in VS Code. You can see my GPU cluster that I used as well, and I have that model that I trained that got the best results. Now, I'm going to show you how to deploy that to the Cloud. First, you need to write a Python script that has an init function and a run function, so it can score the model. To deploy it, you simply right-click and choose to deploy a service from the model. Walking through this wizard, you can select that you want to deploy to an Azure Container Instance, give it a name, and select that scoring script I showed you. Then you can either provide your own Dockerfile or simply choose a default one, and select a YAML file listing all the dependencies, like TensorFlow in this case, that you want included in the Docker image. At that point you can control characteristics of the Azure Container Instance, like how many CPU cores and how much memory to use, and then you can deploy it. Now, to speed this process up, I've already deployed my service here, which you can see, and I can use Python code from anywhere, or a curl command, to call and score against this web service. You can see, I'll go ahead and run this. I'll activate my environment. This will load up some random test data, and it'll call the service. 
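The init/run contract mentioned above can be sketched as a minimal scoring script. The stand-in classifier below is hypothetical, just to keep the sketch self-contained; a real script would load the registered TensorFlow model inside init. What matters is the shape of the contract: init runs once at container start, and run receives each request body as a JSON string and returns JSON.

```python
import json

model = None  # populated by init() when the service container starts

def init():
    # Azure ML calls init() once when the scoring container starts.
    # A real script would load the registered model here; this sketch
    # substitutes a trivial stand-in that predicts 0 for every image,
    # purely so the contract is visible end to end.
    global model
    model = lambda images: [0 for _ in images]  # hypothetical stand-in

def run(raw_data):
    # Azure ML routes every POST to the service through run(); the
    # request body arrives as a JSON string.
    images = json.loads(raw_data)["data"]
    predictions = model(images)
    return json.dumps({"predictions": predictions})
```

The "data" key in the request body is a convention of this sketch, not a fixed requirement; the deployed service simply hands run whatever JSON the caller posts.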
You can see here's the URL of the web service, and I got that by calling the scoring URI right here. Then you can see that the prediction is three and the label is three, so that is indeed correct. Or you can pass in a bunch of individual images and score them in bulk, and you can see a bunch of the scores here, which is fantastic. Now, you can do the exact same thing, of course, from a Jupyter Notebook or anywhere else, which makes it super easy to use Azure Machine Learning from any tool you choose, using any framework, to train any kind of machine learning model. So get started training machine learning models today for free. You can learn more about all the services I just talked about in our latest blog post, or grab the Jupyter Notebook samples to try it out for yourself. I hope you have a great Connect. Thanks for watching.
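Calling the deployed service from Python is nothing more than an HTTP POST, which is why a curl command works just as well. Here's a small sketch using only the standard library; the scoring URI is whatever the deployed service reports (service.scoring_uri in the SDK), and the "data" payload key is an assumption that must match what the scoring script's run function parses.

```python
import json
import urllib.request

def make_payload(images):
    # Build the JSON request body; the "data" key must match the key the
    # scoring script's run() function reads.
    return json.dumps({"data": images}).encode("utf-8")

def score(scoring_uri, images):
    # POST the images to the live web service and return the parsed
    # JSON predictions. scoring_uri comes from the deployed service.
    request = urllib.request.Request(
        scoring_uri,
        data=make_payload(images),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())
```

Scoring in bulk, as shown in the demo, is just a matter of putting several images in the list passed to score.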