Hi everyone, I'm Marko from Google. You may notice I have a bit of an accent: I'm from Finland. Let me quickly introduce myself. I started my career at Nokia. Then I got into cloud computing in 2009, so that's what I have been doing for nine years now. I was at Amazon Web Services as a technology evangelist, and now I'm at Google Cloud. Today's topic is IoT and data, and how we can use machine learning in IoT solutions. IoT devices are really, really tiny things, because there's a lot of them. There may be thousands of them, or millions of them. They need to run from battery power, maybe from solar power, or from a battery for a year. But we don't want to manage them by hand, because there's literally thousands of them. So if you think of Singapore, we have drainage canals. If you put sensors in them, the system understands how much water there is and what the water level is. Now think about doing those measurements manually on canals like that, and how you could automate it instead. Most of the real IoT use cases in the world are kind of like industrial internet or machine-to-machine: really tiny sensors. That's an ESP8266, I think. That's about $3.
Let's say $2 to $5, so the unit cost is super small. They are very weak. They can only do very simple things, like take a sensor reading or control a motor to open a door. But that's the beauty of it. They don't really have an operating system, most of them. They just run a binary forever! That makes them super reliable. And because they are weak and because they are cheap, they don't consume a lot of power, so they can actually run from batteries for months, even years. They have things like deep sleep, where you can program the sleep, and they wake up and take a sensor reading. Send it out. A simple reading: it's 32 degrees again in Singapore. And then sleep again, right? So there's a lot of data coming. Why is there a lot of data? Because you might have thousands of these devices. When they're all sending data, you already have big data. Now if you want to manage this data manually, it's kind of difficult, not that good, it takes you a lot of time. So it's much better to let robots handle it: automation. So this is one way of doing it. You connect your devices securely somewhere. Cloud is one way; you can have your own data center if you want, but then you have more hassle. Anyway, you connect your devices securely somewhere. Why? Because the devices can't handle the data; they can't do processing most of the time. The second thing is that you want to collect the data centrally somewhere, so you understand the whole picture: what's going on with all my devices? So that's why you need to, in most cases, get a big picture in the middle. So you collect the data securely, and then you do things like real-time analytics. It's not good enough to analyze tomorrow. Yeah, we know it was a bad situation. If you have real-time analytics, you can react.
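The wake-read-send-sleep cycle described above can be sketched roughly like this; the sensor read, the publish hook, and the field names are all stand-ins for real hardware and MQTT code, not anything from the actual devices:

```python
import json
import time

def read_temperature():
    """Stand-in for a real sensor read (e.g. via an ADC driver)."""
    return 32.0  # it's 32 degrees again in Singapore

def build_message(device_id, temperature):
    # The payload stays tiny so a weak microcontroller can send it cheaply.
    return json.dumps({
        "device": device_id,
        "temperature_c": temperature,
        "timestamp": int(time.time()),
    })

def run_cycle(device_id, publish, sleep_seconds=60):
    """One wake/read/send/sleep cycle of the device loop."""
    msg = build_message(device_id, read_temperature())
    publish(msg)          # on real hardware, e.g. an MQTT publish
    return sleep_seconds  # then the chip would deep-sleep this long
```

On a real ESP8266 the sleep would be a deep sleep that powers most of the chip down between readings.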
Like again the Singapore flooding canal: if the water level goes up, then you do some actions, like, I don't know, they open some gates, and the water distributes somewhere else. For that you need real-time analytics. And then one of the best ways now to actually get value out of your data is the brain picture on the bottom: machine learning. With machine learning, you don't have to do if-then-else kind of programming anymore. In the old days, we always used to do: if the value is this, then do that; if the value is something else, do this. That's only okay for a few use cases. Machine learning is great at finding signals in your data. Us humans, we cannot really understand weird numbers coming from millions of sensors. It's just a random mess for us. The best thing about machine learning is that it identifies patterns in the data. It's much better than traditional programming at identifying these patterns. It doesn't matter what the pattern is. It could be a connected car, where it can start noticing, like, gearbox temperatures going up, and maybe, I don't know, some friction or torque differences in the gearbox. That can be used for predictive analytics, predictive maintenance. Based on these signals in the data, this weird data, we predict with a certain confidence that after a while your gearbox will break. That's a very common use case. And that's difficult to do with normal programming. The other nice thing about machine learning is that you don't have to program it in a traditional way. You feed it data and it learns. It starts to recognize the patterns. There are many ways of doing machine learning. You can do it on your laptop with things like TensorFlow, which Google has open sourced. There are some accelerators available; you can stick them on your computer and you get GPU-like accelerated chips. Or you can run machine learning in the cloud.
If you use Google Cloud for that, we have a service called Cloud ML, where we have specialized hardware to run machine learning models. Your models, but they run on specialized hardware called TPUs, Tensor Processing Units, the same ones we use internally. They're just more efficient: they run it faster and cheaper. And then with TensorFlow, real experts can write their own custom models. I'm not one of them. I don't know anything about TensorFlow programming; I still have to learn it. I am a very simple developer, but I can use these: many providers and many software packages provide pre-built machine learning models. There's Inception, the image analysis machine learning model, and YOLO, and so on. For example, at Google we have some machine learning models that can give you value with a simple API call. Like: translate this text, or understand what this person is saying in the speech and change it to text, or what is in my video feed? Are there cars? Are there trees? Are there animals? So they are very easy to use without programming. How do people use machine learning? This is Global Fishing Watch. Those light dots are fishing boats. So they do things like big data analysis and visualization of fishing boats. But they actually use machine learning. They track the boats, and they have made machine learning models that detect, based on a movement pattern, what kind of a fishing boat this is. So that's pretty interesting. And then they can do things like that. We want to eat our own dog food, or drink our own champagne, at Google. So we have very large data centers. They have cooling and heating and whatever, a lot of things that are consuming energy. And then waste heat is just waste. The data centers should be very efficient, so that they are green and everything. So we had been tuning them manually for years and years. Then we wanted to try: how about machine learning?
What if we feed all these sensor readings from our data centers into a machine learning model? Could it optimize the situation even better? Make our data centers more efficient? Well, yes. They did an experiment, and the machine learning model improved the efficiency of the data center cooling by 40%. That was just shocking internally at Google. And you know self-driving cars, like Waymo et cetera, those cute cars. They are basically IoT devices that use machine learning in real time to understand things like the LiDAR data and all these things. Very crucial, of course. So I wanted to build one. This is the first real demo I will show you. I don't know how to build a Waymo car; it's a bit beyond my capabilities. So I built this. It's a connected car prototype for one of our automotive customers. I wanted to show them an example. I wanted to have a modular powertrain, so I used Lego. It's modular. So it has a modular powertrain here, very simple. And it has an ESP8266 microcontroller here, a very, very simple $5 chip. And then there's an accelerometer here, so it knows the position and G-forces in three axes: X, Y and Z. Its license plate is G00613, of course. I'm trying to do some leet speak here. So let's see how this works. What did I actually build? I have the car here in my hand. It's connected securely. In this case it goes through a gateway; it's a Raspberry Pi here. It's connected securely to the cloud. And it's got real-time analytics and stuff like that. Let's see how it actually works. So I've got the car here. And if we go to that gateway Raspberry Pi here, then I will just show you one thing. We have way too many cables today; just bear with me one second. I will show you how simple it is to configure these devices. So we take the connected car, and we plug it into my machine, and we go to my Arduino. So this car has been programmed with the Arduino IDE. And you can see here that I simply tell the car where the gateway is: where am I sending the data?
And I will show you what kind of data I'm sending. So now what I'm doing is I'm flashing the car with a new version of the software. So I'm compiling the binary software for the car. And after it's compiled into binary, it will be sent to the flash memory of the chip. And then the microcontroller will simply run this code forever in a loop. So now we are uploading, connecting to the car's controller. Flashing it. And all I did was I changed the address of where we send the messages. And then let me show you the code. So it's very, very simple. There's a Wi-Fi client. So this microcontroller connects to the internet. There's a PubSub client for sending simple MQTT messages. And then there's the Accelerometer library. So I'm basically reading things like acceleration. And jumping a little bit here in the code, this is the loop. It's literally called the loop. It's getting the Accelerometer values for x, y, and z-axis. And then publishing those and sleeping one second. And as you can see here, this is the Raspberry Pi gateway machine, if you will. You can see that now the car has started sending data. So if you can read there, it says license plate G00613, x, y, and z values for g-forces and acceleration. And timestamp and stuff like this. So now our car is connected. We start driving along. But what happens if we have an accident? Accident detected for vehicle license G00613. Oh dear, we crashed. This is a really simple example of streaming data. And then acting on a threshold. In this case, if the acceleration of the x-axis, like going that way, was very high, then that's bad. And the idea here is that the car would also have GPS or GLONASS positioning system. And then it would send, it's streaming the data, especially if there's an accident. So then let's say worst case scenario. You have an accident, you hit your head and you are passed out. Maybe you're bleeding. I don't want to sound very negative, but it can happen. 
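The accident check behind that alert is just a threshold on the streaming readings. A minimal sketch, assuming a 3 g threshold and a message shape like the one printed on the gateway (the talk only says the x-axis acceleration was "very high"):

```python
ACCIDENT_THRESHOLD_G = 3.0  # assumed value; the talk doesn't give the number

def detect_accident(reading, threshold=ACCIDENT_THRESHOLD_G):
    """Flag a crash when the x-axis g-force exceeds the threshold.

    `reading` mirrors the message the car publishes:
    {"plate": "G00613", "x": ..., "y": ..., "z": ...}
    """
    return abs(reading["x"]) > threshold

def alert_on_stream(readings):
    """Scan a stream of readings and report accidents per license plate."""
    alerts = []
    for r in readings:
        if detect_accident(r):
            alerts.append(f"Accident detected for vehicle license {r['plate']}")
    return alerts
```

In the real demo this check runs in the streaming pipeline, not on the device, so the $3 chip only has to send raw readings.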
In this scenario, we have a $3 chip here, sending data up to the cloud. And then we can alert emergency services and they will start going. They see the G-force of the crash, they see the license plate, they see when it happened and where it is on the map with the coordinates. And the ambulance starts coming. You can't call the ambulance because you are, well, passed out. Let's say that you topple over your car. You're driving alone and then, oops. Oh my. So the car toppled over. So we can have different kinds of reactions based on what happened. So the car went sideways, on its roof, or on its nose, or something. Yes. So that was the car. And how am I actually doing real-time analytics? I'm using a technology, an open-source software called Apache Beam, which is one of the pieces of software that Google has open-sourced. So we open-sourced Beam. This is the same thing that we use internally for our real-time and batch streaming data handling. And you can run Apache Beam on Spark, or on your laptop, or on Flink, different Apache open-source projects. If you want to run it in Google Cloud, we have a managed service called Dataflow. This is actually my building's real-time analytics pipeline, this one here, running with Apache Beam. So you can do things like real-time analytics very easily. Okay. So that was the connected car. It noticed accidents. It was a simple example, but that's the whole point. The hardware here is a Lego block and a $3 microcontroller. But it's a connected car. Let's move on. So I joined Google a year ago, just over a year ago. And I was really happy. Yay, I'm at Google, I will finally get yummy hardware. Like all the Googlers I saw. Literally, that's what I thought: everybody has Google Glass. I will finally get the Google Glass augmented reality things. Or I will get some augmented reality tablets. And then reality hit me: no. Literally nobody gave me any hardware like this.
So then I remembered that at home I have funny stuff. I have Raspberry Pis with a camera. I have a laser projector. So what if I put these together? I'm an engineer, so let's get to work. And I built this. It's a cat detector, because you know the internet is built for cats. The internet is pipes full of cats, 70% of them. So we have a cat at home, and I built this with my Raspberry Pi. It has a camera here. I take a picture of our cat. And then I send the image of the cat to the Vision API, which is a machine learning model. It has the image of the cat, and it will send back, most likely, the label "cat". So the English word cat. But because we are international, I take the English word cat and I send it to another machine learning API, the Translation API. And I get back, for example, "mao" in Chinese or "goyangi" in Korean. And then I send this word, "mao" for example in Chinese, to gTTS, a text-to-speech API. And I get back an MP3 or a streaming audio connection that has "mao" as a spoken word. Then I use the laser projector, that's the augmented reality part here, to project the metadata, like cat and "mao", somewhere in the real world, near the real object. So I like augmented reality. And my device will also say "mao", like that. Okay, let's see if this works. I'll grab the speakers. And I will show you first a quick demonstration. So I needed to calibrate my system. So I'm projecting four faces of my manager on the wall. And then the Google Vision API is able to detect that there's a person and where the face is, and then give me the bounding box coordinates. So in this way I can actually align the camera image, what it sees, with where the laser projector projects. So I can have alignment. He inspired me with this idea, so I call this Oivind calibration. That's my manager. And you can see that these faces of Oivind are actually projected by the laser projector.
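The whole cat detector is three API calls in a row. Here is a minimal sketch of that pipeline with the clients injected as plain callables; the names and signatures are illustrative stand-ins, not the real Google client libraries:

```python
def describe_object(image_bytes, target_lang, label_image, translate, synthesize):
    """Cat-detector pipeline: label the image, translate the label,
    then synthesize the translated word as audio.

    The three callables stand in for the Vision, Translation and
    text-to-speech API calls.
    """
    label = label_image(image_bytes)             # e.g. "cat"
    translated = translate(label, target_lang)   # e.g. "mao"
    audio = synthesize(translated, target_lang)  # e.g. MP3 bytes
    return label, translated, audio
```

Injecting the clients like this also makes the chain easy to test without any network access.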
So at home here, the Raspberry Pi is sitting on top of the laser projector, and that block there is the camera. So when it's working, I will show you a live demo as well, but I'll show you this video first. When it's working, it looks like this. I start the app. After this, I will show you a live demo. We put an object in front of the camera. It takes a picture, sends it to the Vision API and the Translation API. We change the language, because it's just a one-API-call difference. And see, the laser projector is projecting in English and in another language, and also speaking through the Raspberry Pi. This was the biggest problem: how do I write Hindi fonts? Honestly, the biggest programming challenge for me was finding the font. All right. And cat, yay. The real cat refused, of course, because they are the master race. So I had to replace the real cat with a fake cat for that. So let's now see a live demo of this system. I have to speak a little bit here; hopefully you can hear something. And this is the same Raspberry Pi as in the video just now. This one, I think. And now here, I don't have the laser projector, of course. I'm connected to the venue projector here, but it looks the same. We start the app, and then let's do the hand first. So this is now a live feed through the Raspberry Pi. We can see the same as the camera sees. So that's my hand. We take a picture. And it goes to the Vision API, the Translation API, gTTS: three APIs. Huh? But this is the best one in Chinese: hand model. Yes. And this will be featured later. You may be wondering, this is getting very strange now, this presentation. So let's see. Let's have only the doll in the picture and take a picture. Let's see what happens. Oh, fashion model. It's a girl. And she has quite long hair. Okay, maybe one more thing. Let's try the bottle, a water bottle, a very common household object. Let's try this. Yes. All right, that works. So that was a super quick demonstration of using machine learning models with a simple API call.
So the augmented reality at home in my system was the laser projector. I will quickly show you how fast it is to actually call machine learning APIs. So this is the software running on that Raspberry Pi, for the demo you just saw. It's 367 lines of code. And basically all we do is take a photo with the Raspberry Pi camera, so we get an image, that's very easy. And after we take the photo, look how short this code is: vision client, label detection. We just call the pre-built machine learning model for vision, give it the image, and we get back the response, in this case a Python dictionary that contains the labels, like bottle and hair and girl, with a confidence value. How many percent confidence? So it's super simple. And that's the whole point: using pre-built machine learning models, it's extremely simple and fast to integrate machine learning into things like IoT. But how about custom models? I have seven minutes left, so I want to show you one more thing. What we saw earlier with the Raspberry Pi was using pre-built machine learning models. That's great if it works for your use case. But in many use cases you need to have your own custom model for your own business problem. Now traditionally that looks like this. On the left are the machine learning APIs that are pre-built. Super easy to use, even I can use them. You make one API call, you give it the data and you get back the response. It's called inference. You don't have to do anything, it's super easy to use. On the right are the experts. That's the expert zone: custom models that you create using TensorFlow, for example. Of course then you get a custom model, but it's very involved and you need to know things like TensorFlow. What is the difference? If you use a standard API with a pre-built model, you get back a common categorization. In the case of images, for example, you would get cat. I know it's a cat, but I want to know which cat. Who is it?
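The response handling described here, pulling the confident labels out of a label-detection result, might look roughly like this; the dictionary shape is a simplification of what the real client returns:

```python
def top_labels(annotations, min_confidence=0.7):
    """Filter label annotations down to confident ones, best first.

    `annotations` is a list of {"description": ..., "score": ...} dicts,
    a simplified stand-in for the Vision API's label results.
    """
    confident = [a for a in annotations if a["score"] >= min_confidence]
    confident.sort(key=lambda a: a["score"], reverse=True)
    return [a["description"] for a in confident]
```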
With a custom model you could train a machine learning model, a neural network, to recognize between your different pets. That's Bob. It's not Skippy or Fluffy, it's Bob. So that's good, but it's kind of like this: the pre-trained model is not good enough for you, and you think, I should learn TensorFlow, maybe next year. Ta-da, how about AutoML? AutoML is something new. It's a new idea which is kind of a cheat code, and I love cheat codes. With AutoML you get a custom model without having to know TensorFlow. I still recommend that you, and I, learn TensorFlow, but meanwhile we can use things like AutoML. AutoML is kind of like Inception: it's literally machine learning models creating machine learning models. So we have something called the meta-learner, that's like the AutoML itself, that's trying to create a model for you. And it does this by spinning up baby models and then optimizing and tuning them and selecting the best performing model, in this example the one in the middle. So it does all the heavy work for you. It does this hyperparameter tuning, which I don't even know what it means. You get as a result a custom model that you can use. And there are a couple of things to learn there. AutoML uses something called transfer learning. So if you think of images, we already have the image machine learning model that we used earlier. The image model already knows the difference between cats and dogs and trees and dolls and bottles. Your images are probably not that different. So what we can do with transfer learning is take the existing model, the deep neural network, and replace the last layer. The last layer is the one that gives the categorization: which one is it? Is it a bottle? Is it a girl? Is it long hair? You replace that layer with your custom labels, with your custom data. So that's kind of a cheat code as well. The lower levels of the model already know the difference between a cat and a dog and a tree, so we can use those.
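The replace-the-last-layer idea can be shown with a toy sketch: pretend the frozen lower layers have already turned each image into a small feature vector, and train only a new final layer (here plain logistic regression, by gradient descent) on those features. This is an illustration of the transfer-learning concept, not how AutoML is actually implemented:

```python
import math

def train_last_layer(features, labels, lr=0.5, steps=500):
    """Train only a new final layer on top of frozen features.

    `features` stands in for the pre-trained network's penultimate-layer
    outputs; we fit a logistic-regression layer on them from scratch.
    """
    n, d = len(features), len(features[0])
    w, b = [0.0] * d, 0.0
    for _ in range(steps):
        grads_w, grad_b = [0.0] * d, 0.0
        for x, label in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
            g = p - label                   # cross-entropy gradient
            for j in range(d):
                grads_w[j] += g * x[j]
            grad_b += g
        w = [wi - lr * gj / n for wi, gj in zip(w, grads_w)]
        b -= lr * grad_b / n
    return w, b

def predict(x, w, b):
    """Classify one feature vector with the trained final layer."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

The lower layers never change; only `w` and `b`, the new categorization layer, are learned from your labeled examples.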
And then learning to learn means that there's the meta-learner, which is the big model spinning off these baby models and selecting the best one. And because it's done automatically, that makes your life easier. So, well, we have a service called Cloud AutoML, but the concept is the same; this is a generic concept. You give this service example data. So basically, in the case of images, for example, you give it a lot of examples: these are all handbags, these are all shoes, and then these are all hats. You just feed it labeled data, and it learns. It learns to recognize between those. When it has learned the recognition, you have your own model, and then you call your own custom model with your categories. And you get back a response: yes, that's a shoe, and that's a handbag. Without TensorFlow coding. So that's my last demo. You may be wondering about this. This is my smart-ish building. This is a smart building. So it's supposed to look like a skyscraper. It has a lot of sensors: temperature, humidity, UV, air quality. It has lights and a door. It has two brains. It has an Arduino, which is great for the analog things. Many of the sensors are analog, and the lights and the door motor are analog, so the Arduino is good for those. Then there's a Raspberry Pi, which is a generic computer. It's connected to the internet, to Google in this case. And the Raspberry Pi is also controlling the Arduino over a serial cable. Good old serial communication from way back when. And the architecture looks like this. We've got our building right here. The Raspberry Pi is the connection to the internet, a secure connection. It does real-time analytics, et cetera, just like the car, actually a very similar pipeline. But there's one more difference. There's a camera here, a Raspberry Pi camera. I wanted to see if I could make an entrance camera for myself, which recognizes my friends: hey, your friend Jan is visiting. Hey, Jan.
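The serial link between the Raspberry Pi and the Arduino could carry the sensor readings as simple text lines. A sketch of the Pi-side parsing, assuming a key=value wire format (the talk only says the two boards talk over a serial cable):

```python
def parse_sensor_line(line):
    """Parse one key=value line the Arduino writes to the serial port,
    e.g. "temp=30.3,humidity=61,uv=0". The exact wire format here is
    an assumption for illustration.
    """
    readings = {}
    for field in line.strip().split(","):
        key, _, value = field.partition("=")
        readings[key] = float(value)
    return readings
```

On the real building the Pi would read these lines in a loop (e.g. with pySerial) and forward them to the cloud pipeline.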
So I wanted to try if I could use AutoML Vision for that, because I wanted to learn how to use AutoML. So for my building, I wanted to build this entrance camera. And I will very briefly show you the AutoML. It's super easy to use. The idea here is not to plug the product, but to show you how AutoML as a concept works. You give it a lot of data, training data. You import the data somehow. And then you specify the labels that you're interested in. So in my case, I have three labels. There's this one, Barbie. So I uploaded pictures of Barbie. My manager thinks this is somehow very creepy, but I think it's funny. Then there's another doll, Jenny. These are my girlfriend's dolls. They are not my dolls, just to be clear. And then images of me. So these are the three labeled categories of images I uploaded. Then this is my own personal TensorFlow for dummies: I just press a button that says train. Thank you. And I get a custom model. Literally, that's all you do. And then you can evaluate the success of the model. It gives you some percentages, how confident it is. And then you can do things like query, so you can use the model. So let's try. Let's try first by uploading that picture of Barbie and that picture of me to the user interface. So these two photos. And I click predict. I have my own custom model now in the cloud that I trained with those example images. And maybe the first request will take some time, because the service is still in alpha, so it hibernates your models when you don't use them. So the first one might take a bit of time, but then the next one is fast. So what we are doing now is waking up my custom model. It's still in alpha. And then we are testing it first using the user interface here, the web interface. So we are uploading two pictures: one of Barbie and one of me. Here are the photos again. This is sample image one, sample image two. We uploaded those.
Yeah, it's hibernated. It will take a bit of time. And then it will tell us which one of those three categories it is. So literally, that's what it does. There's one more category, which is none of the above. So the none-of-the-above category, or label, is like: it's not Barbie, it's not Jenny, it's not Marko, it's probably something else. And waiting, waiting. So the beauty of this is literally that using the concept of AutoML, you can create custom models really easily without actually knowing TensorFlow. So it's kind of a cheat code. So that's the first test image: it's 100% confident it's Barbie. That's the second one: it's 99% confident it's me. Okay, so it looks okay. Now let's switch to the last demo. I know I'm slightly running out of time, but I will be brief with the demo. Now we switch to the building. So now the image will be coming from the building. You can see the building there. We start the client app on my building. I move the sound to the building. And then we boot the building. Yes. So I start the Python client app on the building's Raspberry Pi. The Raspberry Pi is the brain of the building. It reboots the Arduino; the Raspberry Pi controls the Arduino for analog things. And then it connects to the cloud securely using its keys and certificates. Welcome to Google Cloud. So this is the user interface of that smart building. And now you can see it's reading things like air quality, air pressure, humidity and temperature. It's pretty warm here, 30.3 degrees. And UV is zero, because there's no natural light here. But then there's a funny thing. There is a camera, and there's a doorbell. So let's try like this. Let's say that you have a visitor for your house and she presses the doorbell. And I just wanted to see if I can recognize my friends. Who is coming? Okay. So first the Vision API gives me the bounding box and I crop the image. Then the cropped image goes to AutoML. And if we are lucky, well, maybe it's still hibernating a bit.
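The crop step between the two models, taking the face bounding box the Vision API returns and cutting that region out before sending it to the custom AutoML model, boils down to a little rectangle arithmetic. A sketch, with the pixel-tuple format and the padding margin as assumptions:

```python
def crop_box(face_box, image_w, image_h, margin=0.2):
    """Turn a detected face bounding box into a crop rectangle,
    padded by a margin and clamped to the image bounds.

    `face_box` is (left, top, right, bottom) in pixels; the margin
    keeps some context around the face for the second model.
    """
    left, top, right, bottom = face_box
    pad_x = (right - left) * margin
    pad_y = (bottom - top) * margin
    return (
        max(0, int(left - pad_x)),
        max(0, int(top - pad_y)),
        min(image_w, int(right + pad_x)),
        min(image_h, int(bottom + pad_y)),
    )
```

The resulting rectangle would then be handed to an image library to do the actual crop before the AutoML prediction call.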
The first one might crash, because my timeout programming is not correct in the building. This is a prototype. So if the first one crashes, which is 60% likely, I will start the app again. But I just want to show you that I actually use machine learning in two phases here. This is phase one, where I use the standard Vision API. It actually takes a really wide-angle picture, and the normal Vision API tells you that there is a person and gives the coordinates of the face. So then I simply take those coordinates and I crop the image. And then I send this cropped image to AutoML, which is currently sleeping. So we wait for it to wake up, and then the next one will work. I have asked the service team to please not hibernate my model, so we don't have these demo effects. And there's the crash, yes. It's a predicted crash, so it's not that bad. Let's try again. So my doorbell is not very successful, because she's still waiting downstairs. By the way, we are rebooting the building. So that's new. So maybe that's the future of smart buildings: you need to reboot your house. Go cloud. Now it should work. See, it's very persistent. She's still there. Well, how about if Barbie comes with a friend, Jenny? This is the weirdest Google presentation you've probably ever seen. And then two girls are coming. Maybe it's a party. Let's see what happens. Oh, that was bad. See, my model is not the best in the world. But let's say that I come with Barbie to the party. Barbie and the party. Let's try this option. Okay, so I have a temporary girlfriend. Yes. By the way, just one thing I want to mention: it's not facial recognition, it's image categorization. But in this case, the images look like those people. I'm not sure if she's a person, but she looked like a person. Okay, I know I'm out of time, so thank you very much. I hope you enjoyed it.