Hi everyone, and thank you. Good morning, good evening, or good afternoon, wherever you are in the world, and welcome to Artificial Intelligence for Every Developer. I'm John Paolo, and a lot of the work I'm doing here at IBD is about emerging technologies, and that includes artificial intelligence. Until a few years ago, artificial intelligence was a field restricted to a small circle of scientists. With the work Microsoft and other big companies are doing, and the investment they are making in artificial intelligence, it's now possible for every one of us to use and develop artificial intelligence. So let's dig into the slides and start talking about this course, because we have a lot to cover and not that much time. About this course: the takeaway is that by the end you will understand what artificial intelligence is, so define it in the context of science; what machine learning is; and what deep learning is. You will see the ways you can actually develop artificial intelligence and infuse it into your applications to make them smart. You will also understand the core concepts of deep learning, some of the terminology and the algorithms, and we will develop some cool stuff along the way, so let's go ahead. Artificial intelligence has been around for a long time, since the early 1940s. In 1959, the concept of machine learning was introduced for the first time, but what is really changing the world is the introduction of deep learning. Deep learning was introduced before 2012, but 2012 was the first time it was famously used at scale to make a computer intelligent.
In 2012, and this was reported in the major journals in the United States, like Wired and The New York Times, a group of scientists was able to build a network of 16,000 computer processors with one billion connections and let it browse YouTube looking for cats. The brain simulation was exposed to something like 10 million randomly selected thumbnails from YouTube over the course of three days, and after being presented with a list of 20,000 different items, it began to recognize cats using deep learning algorithms. This date is really important: it was 2012, and they had to build a network of 16,000 processors to make this happen. Why? This is all about computational power, and I want to make a note on this, because it's something not everyone realizes and it's very important. Let's start from 1995. In 1995, we had five million transistors on a chip, about the population of New York City at that time. 1995 was the era of the first Pentium processors; Windows 95 had also launched and was changing the world. By 2005, the number of transistors on a single chip had grown to 160 million, about the population of the entire East Coast, and it was the era of the Pentium 4: the internet was growing and mobile was around the corner. In 2010, we got one billion transistors, and this is the post-iPhone, post-iPad connected world, where we do gesture and voice recognition in our living rooms for 150 bucks. In 2015, we got 7.5 billion transistors on a chip, and that's a transistor for every single man, woman, and child on earth. How fast is this growth? Three years later, in 2017, we filled up another planet. It took 30 years to fill up New York City and just 10 more to fill up the entire world. That's the computing power that is changing how we will see the world in the next couple of years.
That's why now is the right moment to jump into the artificial intelligence field and start developing for it. Artificial intelligence is a branch of science: it's the science of making things smart, and it can be defined as human intelligence exhibited by machines. That's artificial intelligence in general. What about the state of the art, what we can do today? Artificial intelligence today is nothing like the artificial intelligence of Terminator. We are not there, and we won't be there any time soon. Artificial intelligence today is a form of narrow AI: a system can do just one or a few defined things as well as or better than a human. This is important. It can recognize something, or maybe detect a credit card fraud in real time. There are several use cases for artificial intelligence, like object recognition, speech, and creative applications like style transfer, as you see in this picture: on the left the original picture, in the middle the style that will be transferred to it, and on the right the result. How cool is this? The common thread is that today you can infuse any of your applications, services, or websites with artificial intelligence to make them smart. We defined artificial intelligence; now let's talk about machine learning. Machine learning can be defined as an approach to achieve artificial intelligence through systems that get better the more you run them. Machine learning involves teaching a computer to recognize patterns by example rather than programming it with specific rules, as we are used to doing. Machine learning is something different: we teach the system to recognize patterns, and those patterns are found within data. Basically, it's creating algorithms that can learn complex functions from data and make predictions out of them.
So, as we said, machine learning is about predicting stuff, and it's intelligent because it takes some data to train the system, learns patterns from this data, and then classifies new data it has never seen before. The result it gives us is a best guess with a probability. What's the difference from traditional programming? Take a spam filter as an example. If you were asked to develop a spam filter today with classical development, let's say C# development, what would you do? You would write a long list of if-then statements, a long list of rules. With machine learning, we instead train a computer system to classify emails; once it's trained with a lot of data, the system is able to apply the learned model and classify new email. Deep learning can be generally defined as a technique for implementing machine learning. One such technique is a concept known as the deep neural network; let's say deep learning is the context, and the deep neural network is where our artificial intelligence code runs. So, how do we develop artificial intelligence? We have actually three ways. The first one, the easy way, is using artificial intelligence as a service, with the Azure Cognitive Services. We have a bunch of services covering a lot of domains, like vision, speech, language, knowledge, search, and so on. Using them is very simple. I'm covering this part really fast because it's not the core of our course today, but just to give you a glimpse: you can find a lot of courses on Channel 9 about how to use Cognitive Services. Basically, as we said, it's just using a service, so we just need to invoke it.
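To make the rules-versus-learning contrast concrete, here is a minimal sketch, not production code: a rule-based filter as a hand-written list of checks, next to a toy "learned" filter that simply counts how often each word appeared in spam versus legitimate mail in the training data. All the words and messages are made up for illustration, and a real system would use a proper algorithm rather than raw counts.

```python
from collections import Counter

# The classical approach: a hand-written list of rules (if-then).
def rule_based_is_spam(text):
    rules = ["free money", "click here", "winner"]
    return any(phrase in text.lower() for phrase in rules)

# A toy "learned" approach: count word frequencies per class
# from labeled examples, then score new mail by those counts.
def train(examples):
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def learned_is_spam(counts, text):
    words = text.lower().split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return spam_score > ham_score

training_data = [
    ("free money waiting for you", "spam"),
    ("you are a winner claim now", "spam"),
    ("meeting notes from today", "ham"),
    ("project status and budget", "ham"),
]
model = train(training_data)
print(learned_is_spam(model, "claim your free money now"))
print(learned_is_spam(model, "budget meeting moved to monday"))
```

The point is the shape of the two approaches: the rules never change unless a developer edits them, while the learned filter changes its behavior whenever you retrain it with more data.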
In this case, the sample is sentiment analysis: set some parameters and our Azure subscription key, then send the request to the service, and we get a result. Nothing simpler than this. The second way is still easy, but some more work: since Build 2018 we have custom Cognitive Services. This means we are given the ability to customize the general services to our specific domain. What does this mean? For vision, for example, we can provide a set of our own pictures so the service can then recognize me, my colleagues, my family. It's a customizable service. How do we do that? It's really simple. Go to the Custom Vision AI site and create a new project. If we want to do classification, the project type should be classification; the classification type is multi-class, because we want to classify several different elements. And if we choose any domain that is compact, we can then export the model to a lot of targets that I will show you later on. Once we have created the project, just drop in your pictures; in this case they are some fishes, some flowers, and some stick figures. Let's train our model and get the result. If we are happy with the precision of the trained model, we are good to go, and since we picked a compact domain for the project, we can export it to ONNX. What does that mean? It means we can then use it, for example, in our UWP application. Using it is, again, very simple: just download the model and put it in the project, and the project will automatically generate all the classes and code needed to invoke the ONNX model. We just have to load the model. In this case, it was for recognizing shapes: we draw with the pen on the ink canvas, and the shape gets recognized. To recognize it, just a few lines of code collect the strokes from the InkCanvas (the ink is simply drawn on an InkCanvas control).
Then we transform the strokes so the model can evaluate them, and we get the result. OK, how is it possible that we can run artificial intelligence right on our client, whatever it is: a server, a Xamarin app, a UWP application? Thanks to Windows ML. Windows ML is another piece of technology from Microsoft that helps us use artificial intelligence without having to go deep into the core of AI, so we can focus on our own domain. What does this mean? It means we can use our model without worrying about how to make it work. When the model is asked for a prediction, it may have to run against the CPU or against the GPU; with Windows ML, all of this is handled under the hood, and we don't have to care about it. We just need a model, and there are a lot of already-trained models you can find online. At the end of the course I have a slide with all the links I collected over these years, including a list of pre-trained models you can use. So, as I said, everything related to the hardware is taken care of by Windows ML. Kudos to Microsoft for this great piece of technology. If you want to get started with ONNX, just go to the site and look at the getting started guide and the tutorials. One of the other important features of ONNX is this: if you look around the world for machine learning and deep neural network models, or for how to develop them, they are usually built with several different frameworks, like TensorFlow, Caffe, or something else. With the ONNX project, there is also a way to export models made with other SDKs to ONNX, and then you can run them on your systems.
So, you can also train your data in the cloud, you can use the WinML tools, and, as I said, the Custom Vision AI site. OK, those were the first two ways of using AI, but with them we are not really developing AI, we are just using it. If we want to dig deeper, get our hands dirty, and write our own artificial intelligence code, we have to go through some process. First of all, we have to understand the core concepts, terminology, and algorithms of artificial intelligence, so that we can then develop our first deep learning application. So again, what is the way to develop artificial intelligence? We need a set of data, actually a lot of data, to build and train the model, and then we can deploy it and use it; we can deploy it with ONNX as well. Let's take a step back to building and training the model. Until a few months ago, before WinML, before ML.NET, before the Visual Studio Tools for AI were released, the way to develop artificial intelligence as a .NET developer was to go completely out of our comfort zone: we needed to use Python and tools we were not used to. Thanks to the AI Tools for Visual Studio, we can now build and run artificial intelligence using the Visual Studio toolset, and there are also AI tools for Visual Studio Code, so we can use the tools we are used to using every day as .NET developers. Then, to train the model, we can use our local hardware, or we can train it on dedicated virtual machines on Azure with a lot of AI-oriented computational power, which means lots of GPU power. So, let's take a step back: prepare the data. What does this mean? Preparing the data means first identifying the attributes of the things you are trying to classify. If I'm trying to classify fruit, maybe the features, the attributes, I need are color and weight. Dimension usually refers to the number of attributes.
So in the case of color and weight, we have data with two dimensions. A feature, or attribute, is one particular type of data, and the items are generally called data points. One data point identifies one object, in this case a fruit, and it contains many different attributes, and usually the attributes are many. Once you know which features to use, the challenge is to find enough data to train the model. Imagine you need to recognize cats: you may need 10,000 sample images of cats if you want to get a good result. Regarding data, depending on the problem you are trying to solve, wherever you are trying to infuse AI, data can be anything: database rows, audio samples, video samples, images, text, anything you need. Now, there is something important to say. The data challenge, finding the right data for your problem, matters when you are actually developing, actually trying to solve your problem. If you just want to learn artificial intelligence, you don't have to care about it at all, because there are a lot of ready-made datasets around, and again, in one of the last slides there are a lot of links covering this part too, how to get data so you can start learning artificial intelligence. But anyway, data is really important when you go into production to solve your own problem. One important thing, and this is related to the amount of data as well, is that machine learning cannot predict stuff it doesn't know about. What does that mean? Let's imagine we want to classify animals and we train our system with just two data points, each with the attributes name, number of legs, color, and weight. The two data points we train on are just: dog, four legs, color black, 10 kilos; and chicken, two legs, orange, 5 kilos.
Now, if we train the model with just these two data points and then ask it to evaluate a cow, four legs, color black, 200 kilos, I believe the model will predict dog, because it only knows about dogs and chickens, so that is its best guess. There is a nice story a friend of mine told me. He has a daughter, and they were used to dogs because they had a dog at home. Then they moved to Seattle, because he was joining Microsoft, and there were lots of horses around, and his daughter came to him and said, hey daddy, daddy, here there are some big dogs. That's funny, because she had never seen a horse. So the father had to tell her, no darling, this is not a dog, this is a horse, so that her brain got trained and she also learned how to recognize and differentiate dogs from horses. So, there are several ways to train a model, usually divided into supervised and unsupervised learning. Supervised learning is when the machine learning model is trained with data that is labeled. What does labeled mean? In our case, where we want to classify animals, the label will be the name of the animal: dog, chicken, cow, et cetera. Then we give three inputs, which are number of legs, color, and weight, and we tell the system what output label we expect. We use machine learning on this data to predict future, unseen data. What does that mean? It means we have a boundary. If we represent the data on two axes, we will have a boundary, and the boundary can be a straight line, a curve, or any function, let's say. If the label is dog, the point could be a circle; if the label is chicken, a red cross; and we can add many more classes. So, that was labeled data; then we have unsupervised learning.
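Going back to the dog-and-chicken example above: here is a minimal sketch of why the model answers "dog" for the cow, assuming a simple nearest-neighbor classifier (the talk doesn't name a specific algorithm, so this is just one illustrative choice). With only a dog and a chicken in the training set, every query must map to one of those two labels.

```python
import math

# Training set from the example: (number of legs, weight in kg) -> label.
training = [
    ((4, 10.0), "dog"),
    ((2, 5.0), "chicken"),
]

def predict(features):
    """Return the label of the nearest training point (1-nearest-neighbor)."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = min(training, key=lambda item: distance(item[0], features))
    return nearest[1]

# A cow: 4 legs, 200 kg. The model has never seen a cow,
# so the best it can do is the closest label it knows.
print(predict((4, 200.0)))  # -> dog
```

The cow is far from both training points, but it is slightly closer to the dog, so "dog" is the best guess; the model literally cannot answer "cow" because that label never appeared in its training data.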
Unsupervised learning is when the machine learning system learns from an unlabeled dataset. Imagine we have some points on a graph representing three different things, and the machine learning system must recognize by itself that there are three different clusters and categorize them by itself. This is tricky, because the number of clusters may not be known in advance, so it has to take its best guess. Sometimes the clusters are also not as clear as the ones in the picture below, so the best guess may not be that good. Still on ways of training, another concept is reinforcement learning: learning by trial and error, rewarding and punishing the system. What does this mean? In the case of video games, the machine teaches itself how to play by playing the game millions of times. The system is rewarded when it makes a good move; when it loses, we give it no reward or a negative reward. Over millions, billions of iterations, the machine learns how to maximize rewards without a human explicitly telling it the rules. This can lead to better-than-human performance when it finds paths no one thought of before, and that is really amazing. So, there are many ways a machine learning system can learn patterns to classify data, as we said. In this example, we use a line to divide two clusters; then we can predict future data by saying anything above the line belongs to one cluster and anything below the line belongs to the other. But we can also use a cubic curve, as we saw before: instead of a straight line, we have a cubic curve, as you see in the picture below, and the machine learning model will predict future points it has not seen before by understanding which area each point belongs to. So now, let's talk about neural networks.
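Before moving on, the clustering idea described above can be sketched with k-means, one common unsupervised algorithm (the talk doesn't name one, so this is an illustrative choice): it alternates between assigning each point to its nearest center and moving each center to the mean of its points. The data points and the choice of k=2 are made up for the example.

```python
import math

def kmeans(points, k, iterations=10):
    """Toy k-means: returns one list of points per cluster."""
    # Naive initialization: use the first k points as starting centers.
    centers = [list(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        # Assignment step: each point goes to its nearest center.
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # Update step: move each center to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centers[i] = [sum(v) / len(cluster) for v in zip(*cluster)]
    return clusters

# Two obvious groups of points; note that no labels are given.
data = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
for cluster in kmeans(data, k=2):
    print(cluster)
```

With clearly separated groups like these, the algorithm settles quickly; with overlapping groups, or the wrong k, the clusters it finds can be poor, which is exactly the "best guess may not be that good" caveat from above.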
Our brain consists of something like 86 billion interconnected neurons. Each neuron responds to certain stimuli and passes a signal on to others. There may be a bunch of them dedicated to recognizing cats, some for each attribute we care about, each having a different weight. The weight is how important that feature is to the overall task, understanding animals, for example. If all these neurons fire, your brain tells you that you saw a cat. An artificial neural network, a model loosely modeled on the brain, is used to calculate the probabilities for the features it is trained to look for. So with neural networks we try to mimic the functionality of a brain. Do you remember when I told you the story of the horse and the dog? That's why: now the brain of that little girl will fire some different neurons. So, this is the representation of an artificial neuron. We have three arrows on the left corresponding to the inputs coming into the neuron, let's say with values like 0.7, 0.6, and 1.4, and next to them the weights assigned to the corresponding inputs. The inputs get multiplied by their respective weights, and the sum is taken: with three inputs x1, x2, x3 and three weights w1, w2, w3, we compute the weighted sum. After that, we usually add the bias to the sum, and you will also see this in the code; the bias is just a constant number, let's say one, added for scaling purposes. So the sum will be something like x1*w1 + x2*w2 + x3*w3 + b. It's not strictly necessary to add a bias, but it's good practice and it helps speed up the process. After adding the bias, we reach the threshold step: if the newly calculated sum is above the threshold value, the neuron gets excited, so it passes the value out to the output; otherwise it doesn't. Exactly: the value goes in one side and, if the threshold is passed, comes out the other.
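The weighted-sum-plus-bias-plus-threshold neuron just described fits in a few lines of code. The input values echo the illustrative numbers from the slide, and the weights and threshold are arbitrary choices for the sketch:

```python
def neuron(inputs, weights, bias=1.0, threshold=2.0):
    """A single artificial neuron with a step activation.

    Multiplies each input by its weight, sums them, adds the bias,
    and fires (outputs 1) only if the total clears the threshold.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > threshold else 0

# Example inputs and weights (illustrative numbers only).
x = [0.7, 0.6, 1.4]
w = [0.5, 1.0, 0.8]

# 0.7*0.5 + 0.6*1.0 + 1.4*0.8 + 1.0 = 3.07, which is above 2.0
print(neuron(x, w))  # -> 1, the neuron fires
```

Training a network means adjusting the weights and biases; the wiring above stays the same.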
Now, this is the math behind all of this, and as developers we don't really have to care about it too deeply; but in my links there is a good book to start from, written by a professor at a university in California, and when you get the book you also get access to the full semester recordings, video recordings where the professor teaches the course based on that book. That's really good if you want to dig deep into the math of this. The activation function we saw over there is another part of the neuron's processing, and there are several activation functions. The most used, 90% of the time, are the ReLU, the tanh, and the sigmoid: the tanh goes from -1 to 1, the sigmoid from 0 to 1. These are the most used, and there is something to say here about artificial intelligence: it's so complex that it's not an exact science. The way we develop artificial intelligence is by testing. Because of this, we can apply one activation function or another to our neuron: train the model with one activation function and get the result, train it with another and get that result as well, and do the same with different deep neural network architectures. By testing on the data and comparing results, we find what works best. I want to show you something about that. As I told you before, there are a lot of datasets ready to use for teaching purposes. One of the best known is the MNIST database, a database of handwritten digits with 60,000 labeled training examples and 10,000 samples that can be used for testing. This is another important concept in artificial intelligence: when we train our model and then want to test it, we have to use data it has never seen before, because using the same data would be like cheating, right?
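Returning to the three activation functions mentioned a moment ago, ReLU, tanh, and sigmoid: they are just small formulas, so here they are as a quick sketch, evaluated at a few points so you can see their ranges.

```python
import math

def relu(x):
    """ReLU: 0 for negative inputs, identity for positive ones."""
    return max(0.0, x)

def tanh(x):
    """Hyperbolic tangent: squashes any input into (-1, 1)."""
    return math.tanh(x)

def sigmoid(x):
    """Sigmoid: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# Evaluate each function at a negative, zero, and positive input.
for f in (relu, tanh, sigmoid):
    print(f.__name__, [round(f(x), 3) for x in (-2.0, 0.0, 2.0)])
```

Swapping the activation function in a network is exactly the kind of test-and-compare experiment described above: the rest of the network stays the same, only this small function changes.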
It's not good to use the data we used to train the system to then test it. This is another important rule: always keep the training dataset separate from the test dataset. Here, as you can see, there are several different classifiers that can be used to create a machine learning model. What does that mean? We take the dataset, train a model with one of these classifiers, possibly with some post-processing, and then get a result. The best result we can get is the one we want to use; in the case of MNIST, you can see here which classifier performs best. OK, let's go back. Activation functions: we got the concept of a neuron; now imagine that we can stack neurons in multiple layers, as happens in the brain, and stacking up artificial neurons helps us create intelligent systems. Why? Because now we are talking about deep neural networks, and a deep neural network consists of interconnected layers of neurons. There are hidden layers between the input and the output, and each layer can learn from the one before. This is important: hidden layers usually reduce dimensionality so the network can generalize and not overfit the input data, and the layers can learn features. A simple example: one layer learns to detect parts of a face, and a later layer combines them into whole faces, and so on. The most used deep neural networks are convolutional neural networks and recurrent neural networks. As for machine learning task types, we have regression, which predicts numerical values like the price of a house; classification, is this a cat, a human, or whatever; clustering, grouping the most similar examples, like Amazon's related products; and sequence prediction, where given a sequence 1, 2, 3, 4, 5 we want to predict the next number. So, we are very near to starting to develop our application, and for that the tools are really important.
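The layer-stacking idea from a moment ago, where each layer's outputs become the next layer's inputs, can be sketched as a tiny forward pass. The weights here are random, so this network isn't trained and its outputs mean nothing; the sketch only shows the wiring between layers:

```python
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a ReLU activation.

    Each output neuron computes a weighted sum of all the inputs,
    adds its bias, and applies ReLU.
    """
    return [
        max(0.0, sum(x * w for x, w in zip(inputs, ws)) + b)
        for ws, b in zip(weights, biases)
    ]

def random_layer(n_inputs, n_outputs):
    """Random weights and zero biases for an untrained layer."""
    weights = [[random.uniform(-1, 1) for _ in range(n_inputs)]
               for _ in range(n_outputs)]
    biases = [0.0] * n_outputs
    return weights, biases

# A 3-input network with one hidden layer of 4 neurons and 2 outputs.
w1, b1 = random_layer(3, 4)
w2, b2 = random_layer(4, 2)

x = [0.7, 0.6, 1.4]           # input features
hidden = layer(x, w1, b1)      # the hidden layer learns intermediate features
output = layer(hidden, w2, b2)
print(len(hidden), len(output))  # -> 4 2
```

Training would then adjust w1, b1, w2, and b2 so the outputs become useful; the "deep" in deep learning is just having many such layers in a row.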
The Visual Studio Tools for AI: go ahead, grab them and install them; there is also a samples repository, and on Channel 9 there is a very good video about how to install and use them. WinML is something really important; it was released something like six months ago, I think around Build. Before these two things, as I told you, the only way to develop artificial intelligence was to go out of our comfort zone; now what we can do is use the Visual Studio Tools for AI and also WinML, and with WinML we can develop artificial intelligence in C#. I will show you what the difference is. MNIST stands for the Modified National Institute of Standards and Technology database of handwritten digits, and we are going to use it for the rest of the course, so this is an important one. So, let's develop our first application. The MNIST database, and this is another link you will find later in my resources, is, as I told you, a database of handwritten digits, and it consists of a total of 70,000 images, each 28 by 28 pixels. Now, one important thing: multi-dimensional sets of data are usually called tensors, and another thing to understand is that we need to take each picture and load it into a single-dimensional array, doing what is called flattening. The image is 28 pixels by 28 pixels, so we take the first row of 28 pixels and lay it out, then we take the second row and put it alongside, and so on; 28 multiplied by 28 comes out to 784 pixels, and this is the vector we are going to use for this kind of work. Now, why do I like the MNIST database? I like it because it's something visual: when we train the model, we see the result visually, we understand whether the machine
learning has worked or not, because the data is visual. If we handwrite a one, for example, and then ask the model to recognize it, we can see whether it did well. There is another very nice resource that I really want to show you, let me find it in my bookmarks. Here we are. This is another blog that is in the resources, and it's very, very good. So, neural networks, where is it? OK, here we go. This is a paper this researcher wrote, based on the MNIST database, the result of a study over a semester or so. One thing I was telling you about the data, let's see if we can visualize it, because it's very nice; it's taking a bit longer, but it's loading. Why is it good to use visual data when we are starting to understand artificial intelligence? This is a 3D representation of the MNIST trained model, built with one of the models we are going to use. Every one of these points is a point used to train the model, and look at what you can see here: this is an 8, but it sits in the neighborhood of the 6s, right? Why? Because it really looks like one; we are not sure if it's an 8 or a 6, and this is where artificial intelligence can maybe fail. This is a 5, but it could be something else, it can happen. Now let's see some examples. As I told you, the Visual Studio AI Tools have a very good samples repository where you can find a lot of examples for different SDKs. I cannot find the source on my laptop right now, but you can go online, and in the list of resources I will give you there is also the link for the samples. Oh, here it is, the samples repository. So this is the MNIST database implementation using
the CNTK SDK, actually, and this is Python. So we are already partly in our comfort zone, because we are using our tool, and that's a step ahead because we don't have to use other tools, kudos to the team that developed the Visual Studio AI Tools, but with Python we are still outside our comfort zone. Nevertheless, I want to show you the most important part of the code, where the deep neural network is created, to then build the model and train it. So if we run this code, and as you can see it works fine with breakpoints, we can use all the features we have in Visual Studio, that's fantastic, it creates the model, adding some layers to our deep neural network. Do you remember the slides where we were talking about the layers of a deep neural network? This is the code creating the layers for the model. Then the data gets downloaded, or if it's already local it's not downloaded again, and then, exactly, it starts training our system, so we are actually training the AI model. I wanted to show you my task manager, but it gets a bit, how to say, yeah, here it is. Let's stop this, we already have the result. We can open it in File Explorer, if my laptop is able to do it, it's slowing down a bit, and here we go: this is the model CNTK produced, from the last line of the code over there. We are running out of time, so I want to show you the C# samples, OK, and the ONNX model, so that we can then use the ONNX model in our component. We are a bit late, so something I want to show you is how to do it with WinML: you just create a project, add a dependency to the WinML package, and then, for example, here I am using the iris data, which is another dataset, for classifying iris flowers
based on these measurements, how long the parts of the flower are. And here we go, we can do it in C#: just create a pipeline, load the data, the same flow we were looking at with CNTK before, now in C#; we can add our label, add our features, and then, this is the most important part, add our classifier, and once we have the classifier we can train on the data and predict. I have another sample I want to show you, and this repository is also publicly available, linked in the last slide, it's in my repository. This is a slightly more complex example, and I want to show it to you because it is using a regressor: we read the training data and the test data, train our model, evaluate the result, and then use the model. It's very simple, let's start it. But there is something else I want to show you before the end, let's see, come on, OK, coffee break, OK. We train on the data, and the prediction is this one, so we got our result. There is also a way to export the model to ONNX, and once we have the ONNX model, well, that's something for later, I don't need it now. I have trained this model with CNTK on the MNIST database, just because I didn't have the time to add an MNIST sample with WinML, but I will update this repository with that sample too. Then all we have to do is drop the ONNX file into the project: when you drop it there, Visual Studio already creates all the classes you need for the classifier, and then you are good to go and use it. As I showed you in the code, this is the model, but keep in mind, this is the model that we developed now, and this is the real difference: we have developed our model, we have created our deep neural network, and now we are going
to use it. You can look at the code by yourself: it loads the model, I can write something over there, say recognize, and that's it. If you want to see the process, it's really simple: let's erase and draw a 7, then recognize. What am I doing there? I'm getting the strokes from the canvas, converting them into the input format the model expects, passing them to the model, and asking it to evaluate. When it's evaluated, I get the result; in this case it's a bit more involved, because I get an array with all the guesses and I have to find the best one, so I just iterate over the array and take the best guess, which in this case is, let's say, a 6. This time the model wasn't that good at recognizing it, but let's give it another try and see what happens: evaluation, and now it's an 8. OK, we have infused artificial intelligence into our application, and that's most of it. All these resources will be available for you to download, and these slides too, so you can get all the resources. As you can see, there are a lot of good starting points. One thing I want to point out is this one: a three-and-a-half-hour course on the theory of deep learning, you must see this course. And that's it, some thanks to the people who helped me and inspired me for this course and for teaching myself artificial intelligence, because this is something really hard at the beginning. I hope this will help you, and if there are any questions, this is the moment. You mean the slide with the resources? Yes, absolutely, thank you.