So welcome back to the .NET Conf bunker here at Channel 9. It's about four in the morning here, Pacific Standard Time, but we are having an amazing time at .NET Conf. Thanks so much to everybody who's tuning in wherever you are around the world. We're broadcasting now so that it's a lot more convenient for you: you've got folks from your regions, your areas, presenting to you. We've got folks on six of the seven continents. It's amazing. It is awesome. We couldn't get Antarctica. We tried; we have friends there that hang out with penguins, but they were a little busy with the penguins. Look at that, it's almost lunchtime in the UK. Good stuff. And where's our next talk coming from? I think our next talk is coming from London. You know what, let's see if we can start bringing them in now. We're going to be going to Mete in London. I love the flyover. We got the fly-in. Did you pre-program each one of those, or did we just go to that location? I like the script. There we go.

Hey, good morning. Happy lunchtime, Mete. Hello, can you hear me? Yes, absolutely. Good, thanks for the introduction. Shall I start, or you guys? Absolutely, it's all yours. You have the conference; the whole world's watching.

Great. Hello everyone. That's why I love online conferences: the whole world is your audience, basically. I don't know where people are, but I'm based in London. It's lunchtime in London, but I'm sacrificing my lunch so I can talk to you, because I did this conference last year. Last year I talked about ASP.NET Core containers on Kubernetes, and it was a lot of fun, so I'm very happy to be here again today. My name is Mete Atamel; I'm a developer advocate at Google. This is my Twitter, and I already have a version of the slides there, so if you want the slides, just follow me. The application that I'm going to show is already on GitHub; I don't know if you can see it, but the link is at the bottom of my slides, and at the end of my talk I'll provide it again as well. So if you want to run the app yourself and look at the code, you can get it on GitHub.

To save some bandwidth, I'm going to turn off my video and then we'll start the presentation. Let me do that. I think that worked. And let me move this around. The talk is about connecting this time: I'm still talking about .NET containers, but this time connecting a Google Home device to a .NET container. And you might be wondering, where is this coming from? How did you get this idea?

Oh, okay, sorry about that. Let me turn that on. Can you see it now? You see my camera, but, oh, I have to do screen sharing, I guess. Let's see, I haven't used Skype for a while. Where do I get the screen sharing in Skype? Share screen, yeah. Yeah, I got it. Now I can turn off the video. Now everything should be good. Right, okay, cool. Now you should be able to see my screen.

So anyway, what I was saying is that my coworker Chris Bacon, who's a C# developer on the Google Cloud client libraries team, and I wanted to do a talk at the beginning of the year, and we were looking for something cool to show. And we said, okay, Google Home is kind of hot nowadays; why don't we look into Google Home and see if we can program it? So that's how the idea started.
Initially we thought it would be really difficult, because when you talk about a device like a Google Home, you need to talk to it and it needs to understand what you're saying. Then once you get the input from the user, you need to parse it, try to understand what the user is saying, and get the entities out of the words. And then you need to somehow respond to that, right? So just thinking about this, our initial assumption was that it would be quite difficult to do. But we got the hello world application working really quickly, and then we thought, okay, what else can we do with this? And then the idea came along: we have the device that people talk to, and we have the cloud with all the big data, the machine learning, and all the APIs that the cloud provides. So what does it take to connect the device to the cloud? Again, our initial assumption was that it would take a long time, but it was really quick; we were able to relay a user's request to the cloud quite fast. And from then on, it was pure fun. We basically said, okay, let's use some machine learning to do something interesting, and let's use some big data processing to elevate the Google Home application. And in the end, we have this application. So this talk is about that; we'll go through it and see how it works.

But before I start my presentation, I want to make sure that my application is working. I don't have a Google Home device here right now, so I'm going to use a simulator. Let me lower this. This is a Google Assistant simulator, and I'm going to be using it. I also have a front end for my application that I'll show you later. So first, let's start with this.

One thing to mention: when you write an application for Google Home, you need to choose a phrase that triggers your application. Normally, when you talk to a Google Home device and ask something like "how is the weather?", that will be handled by Google, because Google has the weather data. But if at some point you want the control to go to your application, there has to be a key phrase that triggers your application. For example, if you write a stock application, you would say "talk to my stock application", or if it's a shopping application, you would say "talk to my shopping application", something like that. Since this is a test application, the way you test it is to just say "talk to my test application". That's how you start, and that's why most of the time when I show this, I have to say "talk to my test application" so the control moves to my test app.

So let's do that. The cool thing about the simulator is that you can type to it or talk to it. I usually try to talk, but if it doesn't understand me, I'll type, so we'll see what happens. Let's start with "talk to my test application". Or you can also click on this; it's showing me a suggested input, so let's just click on it to make it easy. Okay: "Let's get the test version of my test app." "Hello from Google Home meets .NET Containers app." So as you can see, this "hello from Google Home meets .NET Containers app" is coming from my application; at this point, the control is in my application. So let's ask our application something: "Can you say hi to everyone?" "Hello .NET Conf people all over the world. Great to be here today."
Great, so it seems that my application is working. We'll come back to this; I just wanted to double-check that everything works. So let's go back to the presentation.

Before I get into details, I want to give you an overview of what this application looks like and get some terminology straight. The idea of our application was that we would talk to a Google Home application, and that would be caught by Google Assistant. Actually, I call all of these Google Home applications, but they're really Google Assistant applications. Google Assistant is the thing that captures the voice on many devices. For example, if you have an Android phone, you have Google Assistant there. If you have a Google Home, you have Google Assistant there. The simulator has it too. So any device that has Google Assistant has this application enabled, basically.

What happens is that the user talks to whatever device they have, and it goes to Google Assistant. At that point, Google Assistant decides: is this something I'm going to handle, or is this something I need to pass to a custom application? If it's Google kind of information, like "how is the weather?" or a stock price, it will be handled directly by Google. But if you trigger your application, the control passes to your application.

In this case, I'm using something called Dialogflow, and we'll talk about Dialogflow. Basically, if you want to extend Google Assistant, you use something called Actions on Google; Actions on Google is the framework to extend Google Assistant. And then there's something else called Dialogflow, which is a framework that wraps Actions on Google. I'm going to explain the difference between Actions on Google and Dialogflow and why we used Dialogflow. But eventually, the control goes to Dialogflow. And Dialogflow makes the same kind of decision: it first tries to see if it can handle the user's request within Dialogflow itself. If it can, it handles it and returns the response. If not, it can call an endpoint. That endpoint can live anywhere, but it has to be an HTTPS endpoint; that's the only requirement. So a request comes in, Dialogflow asks whether it can handle it, and if it can't, it passes it to the HTTPS endpoint you defined, and from then on your code handles the user's request.

In our case, we are running an ASP.NET Core application in Google Cloud. This application is basically an ASP.NET Core container. We deployed it on App Engine, but you could also deploy it on Kubernetes Engine; as long as it's behind HTTPS, it doesn't really matter where it's running. I'm going to talk about the differences between the two and why we chose App Engine. Then, once we made that connection, we integrated with Google Search to search for some images that I'm going to show. We integrated with the Vision API to get some intelligence out of those images using machine learning. We integrated with BigQuery to analyze some big data and bring some intelligence back to the user. And finally, no application is complete without logging, tracing, monitoring, and debugging, right?
So for that, we integrated with Stackdriver, and that gives us all the things we need to maintain the application. In a nutshell, this is the application, and I want to take you through the journey of how we built it and what kind of stuff you can do with it.

All right, so let's first talk about Dialogflow. What is Dialogflow? Dialogflow is an end-to-end development platform for building conversational applications. If you want to build an app for Google Home, or a chatbot, or anything like that where people can talk to it or type to it, you can use Dialogflow. The great thing about Dialogflow is that it has integrations with many technologies. It has the Google Assistant integration that I'm going to show, but you can also integrate it with Skype, Slack, or Twitter. There are lots of integration points, so you can use Dialogflow as the one thing you integrate with many places. In this demo I'm only integrating with Assistant, but it's useful to know that you can integrate with multiple sources. It works on phones, it works on home devices; it pretty much tries to work everywhere it can. It also tries to work all around the world: it's not a US-only or Europe-only service, it works across the globe, and it supports multiple languages as well. It doesn't support every single language, but there's a list, around 10 languages right now, and the list is growing.

As I mentioned, Dialogflow is a channel for Google Assistant. You can use the Actions on Google SDK to program Google Assistant directly, and that works, but the Actions on Google SDK is kind of limited. It gives you what the user said, but then you need to make sense of it yourself, and you need to program that yourself. So it's not so easy. Dialogflow, on the other hand, provides natural language processing, so it can actually extract entities out of the expressions the user says. It also provides a really nice UI where you can define the conversation instead of doing it all in code, and I'm going to show this shortly. That's why we used Dialogflow: it makes it really easy to take what the user said, process it, and hand it to you.

So I mentioned the natural language processing in Dialogflow. For example, if the user says something like "a flight from Los Angeles to Hawaii for less than $300", Dialogflow detects this and tries to extract entities from the expression. In this expression, Los Angeles is a city, so it picks out a city; Hawaii is a state; and $300 is an amount and a currency. It picks those out for you and gives them to your application to process. That's very useful, because you can literally mark what you want extracted, and it does that job for you. And it's quite smart: you don't have to give it the full list of things it should extract, you just give some examples of what you want. So that's Dialogflow. What I want to do now is show you the Dialogflow console.
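To make the entity extraction concrete: in the webhook code we'll see later, the values Dialogflow picks out arrive on the request as a protobuf Struct keyed by parameter name. Here is a minimal sketch, assuming the Google.Cloud.Dialogflow.V2 NuGet package; the parameter name "geo-city" is a hypothetical example, not necessarily what this app defines.

```csharp
// Sketch: reading an entity that Dialogflow extracted for you.
// "geo-city" is an illustrative parameter name; use whatever name
// you gave the parameter in the Dialogflow console.
using Google.Cloud.Dialogflow.V2;
using Google.Protobuf.WellKnownTypes;

public static class EntityExample
{
    public static string ExtractCity(WebhookRequest request)
    {
        // QueryResult.Parameters is a protobuf Struct of extracted entities.
        Struct parameters = request.QueryResult.Parameters;
        return parameters.Fields["geo-city"].StringValue; // e.g. "Los Angeles"
    }
}
```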
So the console is where you start your project. The first thing you need to do is create an agent. You can think of an agent as the project, or your application. Here I have two agents: one is a Hello World agent, and the other one is the Google Home meets .NET Containers agent. In the agent, you have the description of the agent, the time zone, the language, stuff like that, and there are also different API versions you can use, but I'm using the latest V2 API. Once you have the basic agent, you usually don't need to change much here. What you can do, though, is export and import agents. Once you have your agent set up the way you want, you can export it as a zip file and someone else can import it. If you get the code for this application, I have the agent exported already for you, so you can simply open Dialogflow, import my agent, and you'll have all the stuff I'm showing you from the zip file.

Once you have the agent, you need to tell it what kinds of things it should listen for, and those are called intents. If you look at the intents, there are a couple of things to note. First, there's something called the default welcome intent. This is the intent that fires when you say "talk to my test app" or "talk to my stock application", something like that. If you look here, this is the welcome intent, and you'll find the response here: I'm saying "Hello from Google Home meets .NET Containers app". So this is the response that I defined, and that's how it gets triggered. This is an intent that's handled solely in Dialogflow: a request comes in, like "talk to my test app", it goes to Dialogflow, and Dialogflow handles it directly, right here, using this text.

The other one is the default fallback intent. If you say something to Dialogflow and it doesn't understand it, it falls back here. It's the default way of handling a request. And here, as you can see, there are no training phrases or anything like that, only responses. So when you say something Dialogflow doesn't understand, it picks one of these text responses and says something like "what was that?", "I didn't get that", "I missed that", stuff like that; you can choose whatever you want in here.

And the last thing I want to show here: remember I said "say hi to everyone"? For that, I used this greeting.conference intent. If you look at this intent, the first thing you notice is that I have some training phrases. These training phrases are "greet everyone", "say hi to everyone", "say hi". Any of these phrases will trigger this, but not just these phrases, anything similar to them. So if I say "say hello to everyone", it will still trigger this intent. That's what I like about Dialogflow: you just give it examples, you don't list everything. If you say something like this, it gets here, and the text response, again, is handled by Dialogflow.

So that's what we have. Let me go back to my presentation now. We got the initial Dialogflow application working quite quickly. The next thing we wanted to do was connect it to the cloud. And nowadays, containers are the default way of packaging and running applications for many people.
So I just want to talk briefly about containers on Google Cloud and the choices we made, and then I'll show you more. Basically, what we did is create an ASP.NET Core application and deploy it to App Engine, but it can also run on Google Kubernetes Engine, and I want to briefly cover the differences.

When it comes to deploying your code on Google Cloud, this is the spectrum that you have. On one side of the spectrum you have Compute Engine, which is basically virtual machines on Google Cloud. You can have Windows virtual machines in different versions, and a bunch of different Linux virtual machines. You can also have container-optimized virtual machines, so if you want to run containers on a VM for whatever reason, you can do that as well. On the other end of the spectrum, we have Cloud Functions. These are basically serverless Node.js or Python functions that you deploy without caring where they run; they're automatically maintained and run by Google for you. And in the middle, we have App Engine and Kubernetes Engine. These are the two main choices when it comes to running containers.

The easiest way to run containers is App Engine. In App Engine, you basically define your application, define a Dockerfile, and just say gcloud app deploy, and it deploys to App Engine. Under the covers, App Engine will run your containers on two VMs and automatically scale up to 20 VMs, but all of that is abstracted away from you, so you don't have to worry about it. And then there's Kubernetes Engine for people who want to run containers on Kubernetes. The choice is really up to you; you can do both. If you want more control, you probably want Kubernetes Engine, but if you want ease of use and you just want to say "this is my code, run it", you'll probably go with App Engine.

So what does App Engine give you? Basically, you say "this is my container, run it", and it runs your container automatically. It gives you dashboards; it gives you versions, so every time you deploy your application you get a new version; once you have versions, you can do traffic splitting; and it has autoscaling. It gives you the defaults you want with a single command: you just say gcloud app deploy, and you get all these features. And if you want more control, as I mentioned, you can use Kubernetes. I think the previous talk covered Kubernetes, so I won't go into it too much, but it's basically one of the most popular open source container management platforms, and there's something called Google Kubernetes Engine, which you can think of as Kubernetes as a service. It's Kubernetes maintained by Google for you, so with a single command you can get a Kubernetes cluster with master and worker nodes, and you can deploy your containers using kubectl, just like on any other Kubernetes cluster.
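Whichever target you pick, the container itself is just an ordinary ASP.NET Core app. A minimal entry point might look like the sketch below; this is an illustration rather than the repo's code, and it assumes a Startup class and that the platform injects the listening port through a PORT environment variable, which App Engine flexible does (8080 by default).

```csharp
// Program.cs: minimal entry point for an ASP.NET Core container.
// App Engine flexible (like most container platforms) tells the app
// which port to listen on via the PORT environment variable.
using System;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        string port = Environment.GetEnvironmentVariable("PORT") ?? "8080";

        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()             // Startup class assumed
            .UseUrls($"http://0.0.0.0:{port}") // listen on all interfaces
            .Build()
            .Run();
    }
}
```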
So these are the options you have. Now let's ask our application where it's running. I'll go back to my simulator, and this is the front end of my application, a simple ASP.NET front end. What I'm going to do here is ask my application where it's running: "Where are you running?" "Running on Google App Engine. Project ID is home meets .NET containers." As you can see, it's telling us it's running on App Engine. So we've made the connection from our application, from Google Home and Dialogflow, to the cloud. I want to show you briefly what you need to make this connection, and then we'll move on to more interesting things with machine learning and big data.

First, let's go to the Dialogflow console again. One thing you'll notice in the Dialogflow console is that one of the intents is called platform.describe. This is the intent that handles it when I say "what environment are you running in?" or "where are you running?", things like that. When you say that, it triggers this intent, and as you can see, there's no text response here. Instead, there's a checkbox that says "enable webhook call for this intent". That tells Dialogflow: don't try to handle this yourself, just call the webhook. The webhook is defined under Fulfillment. In Fulfillment, there's a URL you have to enter, and it has to be HTTPS. This is basically my App Engine endpoint, already deployed, that's going to handle the webhook calls from Dialogflow.

By the way, I should also mention there's an inline editor here where you can define a function in Node.js and deploy it to Google Cloud using something called Firebase. You can define the function right here and deploy it from here, and it's the easiest way to define webhooks. But we chose not to do this, because we're going to have lots of intents with lots of logic in them, and trying to do all of that in here doesn't make sense. That's why we wanted to do it separately ourselves. Also, to be honest, who wants to write Node.js when you have C#, right? I know some of you will hate me for that, but when you have C#, you want to write C#, at least for me. So that's why we didn't bother with the inline editor.

All right, so that's how you define the webhook, and at this point the request gets into our code. Let me show you some of our code; this code is on GitHub, by the way, but let me walk you through the flow. This is an ASP.NET Core web app, and we have a ConversationController. This ConversationController is the thing that handles the conversation calls from Dialogflow. The request arrives here, on this conversation route; we get the HTTPS request right here, and this calls DialogflowApp.HandleRequest. So let's look at DialogflowApp. Under the Dialogflow folder there's DialogflowApp, and if you look at HandleRequest, it gets the HTTPS request and tries to extract a WebhookRequest from the body of the request. What is this WebhookRequest? It's a Dialogflow thing. If you look at the top of the file, there's a Google.Cloud.Dialogflow.V2 NuGet package that we're using. Once I have that package, I can extract the WebhookRequest, which is basically the contents of the Dialogflow request, from the body of the HTTPS request.
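That parsing step is worth seeing in code. The sketch below is not the repo's exact controller, but it shows the usual pattern with the Google.Cloud.Dialogflow.V2 package: the webhook JSON is protobuf-backed, so you decode it with the protobuf JsonParser rather than a plain JSON deserializer.

```csharp
// Sketch of a Dialogflow webhook endpoint in ASP.NET Core.
using System.IO;
using System.Threading.Tasks;
using Google.Cloud.Dialogflow.V2;
using Google.Protobuf;
using Microsoft.AspNetCore.Mvc;

[Route("conversation")]
public class ConversationController : Controller
{
    // Protobuf-aware parser; ignore fields this library version doesn't know.
    private static readonly JsonParser Parser =
        new JsonParser(JsonParser.Settings.Default.WithIgnoreUnknownFields(true));

    [HttpPost]
    public async Task<ContentResult> HandleAsync()
    {
        string body;
        using (var reader = new StreamReader(Request.Body))
        {
            body = await reader.ReadToEndAsync();
        }
        WebhookRequest request = Parser.Parse<WebhookRequest>(body);

        // The intent name (and session) drive the dispatch described next.
        var response = new WebhookResponse
        {
            FulfillmentText =
                $"You triggered {request.QueryResult.Intent.DisplayName}."
        };

        // A protobuf message's ToString() serializes it as JSON.
        return Content(response.ToString(), "application/json");
    }
}
```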
Once I have that, I have things like the session (every conversation has a session in Dialogflow), and I can see the intent and the intent's name, things like that. So I have the context of the call from this WebhookRequest. What's happening here is that I have the session ID from Dialogflow, and I need to decide whether this is a new conversation someone just started or an existing conversation. This GetOrCreateConversation method figures out whether to create a conversation or get an existing one, and once we have the conversation, either way, we call its HandleAsync method. So we're passing the request down the chain.

If you look at the conversation, it gets the WebhookRequest and takes the intent name from the request, and now we want to match the intent we defined in Dialogflow to a handler on the server. This FindHandler method does that: it looks at the intent name and finds a handler for it. I won't walk through how it does that; the code is here if you want to take a look. But in a nutshell, all the intents are under the Intents folder, and if you look at the platform.describe handler, you'll see an Intent attribute at the top, and this attribute has the same intent name as the one in Dialogflow. By having this Intent attribute with this value, we match the intent in Dialogflow to a handler on the server. That's how it happens.

So the request gets here, and what we're doing here is calling Platform.InstanceAsync, which is a Google Cloud API call. Google Cloud is basically figuring out where we're running, and at this point it figures out that we're running on App Engine. Once we have the detailed description of where we're running, we do two things. We call DialogflowApp.Show, which displays whatever we pass on the web page: we pass in the text and display it on the page. And we also return a WebhookResponse and set its FulfillmentText. The fulfillment text basically tells Dialogflow "say whatever I told you to say", so this spoken description here is what I'm telling Dialogflow to say.

So that's the whole chain: Dialogflow goes to an HTTPS endpoint, from the HTTPS endpoint we go to the conversation, from the conversation we go to the intent handler, the intent handler does whatever it wants, and then we return a WebhookResponse to Dialogflow, and Dialogflow says whatever we want it to say.

All right, let me go back to my presentation for a second. At this stage, we have Dialogflow connected to Google Cloud, running on App Engine, so things can get more interesting. This is the part of the talk I have the most fun with. At this stage we said, okay, let's see what we can use on Google Cloud to make this more interesting, and the first thing we looked at was the machine learning APIs. On Google Cloud, and on other clouds as well, there are APIs for consuming machine learning. By consuming machine learning, I mean this: usually, when you want to do machine learning, you need to have data, you need to determine what you want to do with that data, you need to train a model on that data, and once you have a trained model, you need to expose it through an API to get at the intelligence machine learning gives you. That's a lot of work.
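Before moving on to the machine learning APIs, here is roughly what the attribute-based dispatch described above can look like. This is an illustrative reconstruction, not the repo's code; the attribute and base-class names are made up for the sketch.

```csharp
// Sketch: matching Dialogflow intent names to server-side handlers
// with an attribute, discovered via reflection at startup.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Google.Cloud.Dialogflow.V2;

[AttributeUsage(AttributeTargets.Class)]
public sealed class IntentAttribute : Attribute
{
    public string Name { get; }
    public IntentAttribute(string name) => Name = name;
}

public abstract class BaseHandler
{
    public abstract WebhookResponse Handle(WebhookRequest request);
}

// The attribute value matches the intent name defined in Dialogflow.
[Intent("platform.describe")]
public sealed class PlatformDescribeHandler : BaseHandler
{
    public override WebhookResponse Handle(WebhookRequest request) =>
        new WebhookResponse { FulfillmentText = "Running on App Engine." };
}

public static class HandlerRegistry
{
    // Build the intent-name -> handler map once by scanning the assembly.
    private static readonly Dictionary<string, BaseHandler> Handlers =
        Assembly.GetExecutingAssembly()
            .GetTypes()
            .Where(t => t.GetCustomAttribute<IntentAttribute>() != null)
            .ToDictionary(
                t => t.GetCustomAttribute<IntentAttribute>().Name,
                t => (BaseHandler)Activator.CreateInstance(t));

    public static WebhookResponse Dispatch(WebhookRequest request) =>
        Handlers[request.QueryResult.Intent.DisplayName].Handle(request);
}
```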
Instead of doing all that work yourself, you can rely on what are called machine learning APIs. These are APIs that expose a model that's already been trained for you. For example, there's Cloud Speech-to-Text: it takes voice and turns it into text using machine learning, using a model that Google already built. Same thing for Text-to-Speech: you can convert text to human-sounding speech. There's the Video Intelligence API, which extracts intelligence from videos. Natural Language is also really good: you pass in some text in English, and it can detect whether the text is positive, negative, or neutral, things like that. And my favorite is the Vision API. With the Vision API, you pass in an image, and it tries to extract information about that image.

Just to show you an example, there's a public demo called Vision Explorer that you can try yourself. In Vision Explorer, we can pass in some images and see what we get back. For example, we pass in an image of a cat, and what we get back is basically JSON, with some descriptions and scores. If we look at it in a graphical way, we can see that we get some labels: we passed this image to the Vision API, and it's telling us it's a cat, 99% sure; an animal, 96%; and it even knows it's a British Shorthair cat, 93%. So it gives you quite accurate information about the image. It gives you the dominant colors in the image, and it also tells you whether the image contains adult, medical, or violent content, stuff like that.

If you pass in an image with text, it can extract the text from the image as well. For example, here we have a traffic sign; the Vision API tells us this is a traffic sign, 90%, but it also picked up the text, and it tells you where in the image the text is. And the last thing I want to show is that when you show people to the Vision API, it can detect people's expressions. For this one, it figured out that it's a social group, 98%, and maybe folk dance, 56%, which is pretty good. But for me, the interesting part is that if we turn this on, it detects people's faces and their expressions. Person one, the person here: the Vision API tells us there's not much expression, which seems to be true. And person two, this person, is joyful, because she's smiling. So you can get this kind of information from the Vision API.

So we wanted to use the Vision API, and instead of me talking about what we did, let me just show you, and then I'll explain the details afterwards. Let's go back to my simulator: "I want to use Vision API." "Okay, you can ask Vision API to search for images first. What do you want to search for?" "Search for images of London." "Found some pictures of London. Now select a picture." "Select second picture." "Picture two selected. You can describe, show landmarks, or ask whether the image is safe." So what happened right now is that we used Google Custom Search to search for images of London, we picked one, and now I'm going to use the Vision API to describe the image and get the labels out of it, do landmark detection to see if there's a landmark, and do a safety check on whether this image is safe or not.
So let's see if it works. "Can you describe the image?" "This picture is labeled: city, cityscape, landmark, tourist attraction, skyline, urban area, ferris wheel, metropolis, bridge, and metropolitan area." That seems pretty accurate. Then let's see if there are any landmarks: "Are there any landmarks in this picture?" "This picture contains London Eye." One funny thing: it thought I said "are there any rabbits in this picture", but somehow it still went to landmark detection. I don't know how that happened, but it works. And then let's do the safety check: "Is the picture safe?" "This picture is fine." So it's telling us that the picture is fine. So yeah, we just used machine learning in our application.

Let me show you quickly how this works, but before the code, let's look at the Dialogflow console first. If you look at the Dialogflow console, starting with the intents, we have a number of vision-related intents. The first one is vision.intro. When you say "I want to use Vision API", it triggers this, and all it does is set what's called an input context and an output context. An input context on an intent means the intent will only be triggered when that context is active; in this case, this intent can be triggered no matter what. The output context means that when this intent is called, at the end of it, it sets this vision context. So when you say "I want to use Vision API", it basically sets the vision context. That's all it does, and it's important, because if you look at the other vision intents, for example vision.search, it has the vision context as an input. So vision.search is only triggered if we're in the vision context. That's how you control what gets triggered and when. It also has some output contexts, like search and vision.

One cool thing here is the training phrases in vision.search: I say things like "let's see some dogs", "dog pictures". As you noticed, I didn't say dogs in my demo, I said London. What's happening is that we provide these example expressions and mark "dog" as an entity that we want Dialogflow to pick out for us. We call this entity the search term. So when you say "show me images of London", Dialogflow picks out "London" and inserts it as the search term in the request, and we can take that search term and use it later. This is Dialogflow's entity detection, which we get for free, which is nice.

Let me show you the other intents, and then we'll look at the code. So that's image search, and once we search, we select an image; that part isn't very interesting, it just selects one of the images. Then vision.describe is the one that actually makes the call to the Vision API. In here, as you can see, the input context is vision.select, so this only gets called in that context, otherwise this intent isn't triggered, which makes sense. These are the phrases that trigger this intent, such as "describe this image", and the response is a webhook call: there's no text response, our code handles it. And it's the same for the other ones; vision.landmarks is triggered the same way, and it calls our code as well. So now we can actually look at our code. Let's go to Visual Studio Code.
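The handlers we're about to walk through each reduce to a single Vision API call. A minimal sketch, assuming the Google.Cloud.Vision.V1 NuGet package; the image URL is just whatever the search step produced.

```csharp
// Sketch: the three Vision API calls behind describe / landmarks / safe.
using System;
using Google.Cloud.Vision.V1;

public static class VisionCalls
{
    public static void Analyze(string imageUrl)
    {
        var client = ImageAnnotatorClient.Create();
        var image = Image.FromUri(imageUrl);

        // "This picture is labeled ..."
        foreach (EntityAnnotation label in client.DetectLabels(image))
        {
            Console.WriteLine($"{label.Description} ({label.Score:P0})");
        }

        // "This picture contains London Eye."
        foreach (EntityAnnotation landmark in client.DetectLandmarks(image))
        {
            Console.WriteLine($"Landmark: {landmark.Description}");
        }

        // "This picture is fine."
        SafeSearchAnnotation safe = client.DetectSafeSearch(image);
        Console.WriteLine($"Adult: {safe.Adult}, Violence: {safe.Violence}");
    }
}
```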
In here, under the Intents folder, there's a Vision folder. Let's look at search first. If you look at search, the webhook request eventually ends up in this method, and what we're doing is picking up the search term. This is the search term that Dialogflow picked out and passed to us. Then we create a search client (this is the Google Custom Search API that we're using), we search for images, we get some images back, we display an image in our front end, and we send a response to Dialogflow. This is what we want Dialogflow to say, and it says "found some pictures of London, now select a picture". So that's search; it's quite easy. Then there's select, and we don't have to look at the details, but basically, when you say "select the first picture", that gets passed in as an index from Dialogflow, and we use that index to select the picture.

And then the machine learning happens in describe. In describe, the first thing you'll notice is that we're using Google.Cloud.Vision.V1; that's the NuGet package to talk to the Vision API. We get the request, we create a vision client (the actual class we use to call the Vision API), and we make a single call, in this case DetectLabelsAsync. This says: given this image (we basically just point at the image URL), run label detection on it. This gives us some labels, and then we just take those labels and say "this picture is labeled such-and-such". So that's it, one API call to the Vision API. Same thing with landmark detection: if you look at it, it's the vision client with DetectLandmarks, almost the same code. And safety detection is the same as well: a client and DetectSafeSearchAsync. That's it. It's cool that we can do machine learning with a single call, but that's what the machine learning APIs are for.

So that was the Vision API. The second thing we wanted to use was BigQuery. Let me go back to my presentation briefly. BigQuery is Google's massively parallel processing engine, basically. The idea of BigQuery is that you ingest your data, and I mean terabytes of data; BigQuery works better and better with more and more data. If you have lots of data, you ingest it, and BigQuery stores it; it has a lot of storage behind it, and a lot of compute behind it. The idea is that you run SQL queries against this big data, and it runs really fast, because BigQuery is heavily optimized to take the query, split it into smaller queries, and run it as fast as possible. And it's fully managed, so you don't have to worry about clusters, machines, anything like that.

Another thing about BigQuery is that it comes with public datasets. If you go to the BigQuery public data page on Google Cloud, there's all this public data available in BigQuery that you can use to try it out, or even use in your application if you want: things like GitHub data, for example, where all the GitHub commits, everything that's on GitHub, is ingested into BigQuery; and there's Hacker News, where everything on Hacker News is ingested. We wanted to use some of these. Actually, before I show the application, I want to point out one of my coworkers, Felipe Hoffa, who's also a developer advocate in my group.
He analyzed GitHub data using BigQuery and figured out which companies are the top contributors to open source, and he has a blog post where he explains what he did. I won't go into the details, but you can go to Google Cloud, open BigQuery, and, since I've copied and pasted his SQL statement, I have it here: GitHub top contributors. Let me move this around. This is just the SQL statement; it's quite complicated, but he explains it in the blog post. If you run the query, it looks at the GitHub data: it looks at all the commits, it looks at the emails on those commits, and it tries to figure out whether the person has a company email, like @google.com or @microsoft.com, counts them up, and displays which company has the most contributions. It ran, and it looks like Microsoft has the top contribution, with Google a close second, then Red Hat and IBM, which is pretty cool. But what's even cooler is that this query completed in 20 seconds. It processed almost 900 gigs of data, and I did it with one click. To me, that's really impressive. So we wanted to use this in our application.

Again, instead of me talking about it, let me just show you what we did. The first thing I want to do is say: "I want to search for news." "Okay, you can ask BigQuery about top Hacker News or global temperatures on a certain day." So in BigQuery, there are two public datasets we were interested in. One was Hacker News; we wanted to search everything that happened on Hacker News. The other was global temperatures; there's a dataset that keeps track of global temperatures in all countries since 1910 or something like that.

Let's try Hacker News first: "What was top Hacker News on May 1st, 2018?" Now it's running the SQL statement seen here. "BigQuery scanned 697 megabytes in 4.6 seconds. The top title on Hacker News was: Amazon threatens to suspend Signal's account over censorship circumvention." Interesting. So you can see we ran SQL, we got some stats about the BigQuery run, and we're displaying the top 10 Hacker News stories from May 1st, 2018. And that was pretty quick. The second one is global temperatures, so let's ask about that as well: "What was the hottest temperature in USA in 2016?" It's running another SQL statement, against the global temperatures. "Scanned 5271 megabytes in 2.9 seconds. The hottest temperature in United States of America in the year 2016 was 48.7 degrees Celsius, at the Stovepipe Wells 1 SW monitoring station." I don't know where that is, but that looks like a really high number; 48 degrees is probably 120 or 130 Fahrenheit, I guess. That's a lot. And you can ask for the coldest temperature as well, in pretty much any country.

And the way we did this, again, just to show you briefly: if we go to the Dialogflow console, under Intents, the BigQuery intro intent sets the BigQuery context, just like before. Once we have the context, let's look at the Hacker News intent. It gets called when we're in the BigQuery context, and the training phrases are "what was on Hacker News yesterday", "what was the top Hacker News yesterday". We're just picking up the date. Here I'm giving one example with a specific date, but you can also say "yesterday", and as long as you mark these and tell Dialogflow to pick this up as a date, it will figure out what "yesterday" is and give you a proper date. That's the cool thing about it.
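The Hacker News handler we'll look at next boils down to a parameterized query against the public dataset. A minimal sketch, assuming the Google.Cloud.BigQuery.V2 NuGet package; the table is the public bigquery-public-data.hacker_news dataset, and the exact SQL in the repo may differ.

```csharp
// Sketch: top Hacker News stories for a given day via BigQuery.
using System;
using Google.Cloud.BigQuery.V2;

public static class HackerNewsQuery
{
    public static void Run(string projectId, DateTime day)
    {
        var client = BigQueryClient.Create(projectId);

        // Parameterized standard SQL against a public dataset.
        const string sql = @"
            SELECT title, score
            FROM `bigquery-public-data.hacker_news.full`
            WHERE type = 'story' AND DATE(timestamp) = @day
            ORDER BY score DESC
            LIMIT 10";

        var results = client.ExecuteQuery(sql, new[]
        {
            new BigQueryParameter("day", BigQueryDbType.Date, day)
        });

        foreach (BigQueryRow row in results)
        {
            Console.WriteLine($"{row["score"]}: {row["title"]}");
        }
    }
}
```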
So we basically pick up the date and pass it to the BigQuery code that I'm going to show you. And it's the same thing with the temperatures. In temperatures, we have more things to look for: we're looking for the hottest temperature in France in 2015, so we're picking up "hottest" or "coldest" (or "highest" or "lowest"), we're picking up the country, France, and we're picking up the year. So the entities are more complicated here, but again, Dialogflow does the work for us. And by the way, if you say "in 2015, what was the highest temperature?" and you don't say the country, Dialogflow will prompt and ask "which country?", which is pretty cool as well, because these parameters are marked as required. Since they're marked as required, Dialogflow makes sure the user provides them, so I don't have to worry about that myself.

All right, and just to look at the code again: let's go to the BigQuery handlers and look at Hacker News first. The first thing we do is use the BigQuery NuGet package. Then it gets into HandleAsync: from the webhook request that Dialogflow gives us, we pick up the date, which is the only thing we're interested in for Hacker News. Then we create a BigQuery client that we use to talk to BigQuery, we specify the table (the public dataset we want to get the information from), we define the SQL statement, and we define the parameter, in this case the date; that's the only thing we pass to the SQL statement. We show the query on our web page, then we start the clock so we can time how long it takes, then we execute the query and get the results. In the end, we show the query to people in the browser, and we also return a response: Dialogflow will speak the fulfillment text, which is the stats about the query, "scanned this many megabytes in this amount of time", stuff like that. It's the same with the global temperatures; the only real difference there is that there are more things to extract from the request, so that logic is here. I look at all the things Dialogflow should provide to me and do some error checking, so if there's something Dialogflow isn't giving me, I say something. The rest is pretty much the same, so I won't show you the rest of the code. So that's how we got BigQuery integrated.

And the last thing I want to show you; I want to wrap up in five minutes, but I want to make sure I show this to you, because it's important. This is all cool: we can interact with the cloud, we have machine learning, we have big data. But all of this means nothing if you can't maintain your application. For that, we thought it was really important to use something like Stackdriver to maintain our application. What is Stackdriver? It's basically Google Cloud's monitoring, logging, debugging, error reporting, and tracing tool. Logging is a central place where all the logs go. Error reporting: anything your application throws that isn't caught is reported there, and you get stats about the errors. Tracing is HTTP tracing, so all the calls into your application are traced, and you can see stats about them as well.
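Wiring all of that up is a single call on the web host builder, which comes up in the next part of the demo. A minimal sketch, assuming the Google.Cloud.Diagnostics.AspNetCore NuGet package; the project ID, service name, and version strings are placeholders.

```csharp
// Sketch: enabling Stackdriver logging, tracing, and error reporting
// for an ASP.NET Core app with one call on the web host builder.
using Google.Cloud.Diagnostics.AspNetCore; // UseGoogleDiagnostics extension
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        WebHost.CreateDefaultBuilder(args)
            // (projectId, serviceName, version): placeholder values here.
            .UseGoogleDiagnostics("my-project-id", "my-service", "1.0.0")
            .UseStartup<Startup>()  // Startup class assumed
            .Build()
            .Run();
    }
}
```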
And finally, the thing that I want to show you is debugging. You can actually point at your code on GitHub, get it loaded in your browser, put a breakpoint in it, and get a snapshot from a live application running in the cloud. So in the five minutes left, I just want to show you quickly what we have here.

But before that: to enable Stackdriver, all I have to do is this. In my application, if I go not to Startup but to Program, I just want to show you that when I create my application, I just say UseGoogleDiagnostics, passing my project ID, the service name, and the version of my application. And that's it; this enables Stackdriver for me. Also, in your Dockerfile, if you want to use the debugger, you need to start the app in a special way, but it's basically just wrapping the dotnet call.

Once you do that, here's what you get. I come here and go to Stackdriver in the Google Cloud Console. Let's look at logging first. Logging is not exciting, but it's useful: it's a central place where I can see, for example, version 6 of my App Engine application and all my logs, and I can do searches and things like that, which is really useful. The other thing is tracing: all the HTTP calls into my application are traced. I have a long endpoint that my web page calls once in a while, so I can see those long calls; and I have the conversation endpoint that Dialogflow calls, so I can click on conversation and see the calls that are being made and when. And if I click on an actual call, I can see what's being called underneath: here I see that the conversation endpoint calls DialogflowApp, DialogflowApp calls the conversation, and that eventually ends up in the BigQuery call we just made. So all that is there.

The other thing is error reporting. If you look at error reporting, my application has some timeout issues that I didn't fix, on purpose, so I could show them to you. As you can see, there are timeout exceptions happening; I can see how often they happened; I can link them to an issue if there's an issue for them; and I can get someone notified if I want to. So it's very useful as well.

But the thing I want to show before I finish my talk is debugging. What you can do in debugging is first point at your source code: you go here, add source code, and point at GitHub, and it loads your code here. It won't upload it to Google Cloud, so we won't see your code; it only loads it in the browser. Then you see your code, just like here, and you can put breakpoints. For example, this vision search handler is the handler that gets called when we search for an image. So I can come here and say, I want to look at the search term; maybe there's something wrong with the search term. I put a breakpoint here, and now Stackdriver is waiting for it to be hit. When it gets hit, it takes a snapshot of the call and all the variables, but it keeps running, so it won't stop anything.

So let's go back to our application, just to see that this works: "I want to use Vision API." "Okay, you can ask Vision API to search for images first. What do you want to search for?" "Search for images of London." "Found some pictures of London. Now select a picture." If you go back here, as you can see (well, you couldn't really see it happen), the snapshot was already captured.
I can already see that the search term is London, while my application is running, in production. To me, this is really valuable, because anyone who has done any kind of production debugging knows how hard it is, and how hard it is to instrument your code and your application. So having this is very, very valuable for me.

And I think that's all I wanted to say. Let me just double-check. Yeah, there's Stackdriver, and that's all I have. If you want the slides, this is my Twitter; I posted the slides there two days ago. And if you want the code, this is the link to GitHub. There's a readme there that explains how to set it up yourself; it's not that complicated, so you can play with it if you want. And I guess in the last five or ten minutes we can take some questions. Are there any questions?

So I don't see any questions right now, but there are definitely a lot of people out there in the chat room who are interested and have been tuning in and watching with us. You have a really long link there; can you do me a favor and send me the link to that repository in an email a little later? We'll make sure we attach that link to the recording of this video, so that folks are able to get to that demo, because, man, you showed some really cool stuff. That was cool. I love how we're showing just how broad everything .NET is: we worked with Google Home, GKE, and you saw all this on a Mac, which, I mean, three years ago you couldn't have imagined, and now everything runs on a Mac, running on Linux. It's absolutely amazing.

I had a question about that awesome image cloud. Mind if I ask what that's written in? That was in your browser, wasn't it? It was amazing looking. Which cloud do you mean? The universe visualization, the Vision API one; that was neat looking, I really liked that. It's a demo that's publicly available, and it shows the Vision API in a visual way; maybe you can attach that to the show notes as well. Yeah, very cool stuff.

Ultra hall in the chat room is saying ".NET is wicked cool in the correct hands", so compliments to you, very good stuff. Compliments to the people who created .NET, to be honest. That's good, I like that; that's the point of open source: everybody collaborates, and we all get very cool stuff out of it. Well, I think that's all we have time for. Thank you so much for joining us; this was great, we really appreciate it. A golf clap for you, sir. Thanks very much for having me; again, it was a lot of fun.