Welcome to our session. I want to start the whole presentation with a little bit of a thought exercise, a little task that you can keep thinking about for a while. I want you to imagine that you have to build a system that will be counting the number of people. Say you have it outside this room and you want to measure how many people are going in and out of the room. You can use whatever you want: hardware, software, whatever. But I really want your system to comply with four simple rules. It has to have a quite strict privacy policy, so I don't want the names or identities or faces of people going in and out of the room to be leaked anywhere. I want it to perform really well, so it doesn't matter if it's just one person or 100 people passing in front of a sensor or camera, whatever that would be. I want it to be quick, so if anything happens, I want to know that pretty fast. And I want it to be relatively cheap to run. So, four simple rules. Think about that for now. And while you're doing that, we're going to spend a few minutes telling a few words about ourselves. Hi, my name is Thanos. I'm working for the University of Oslo, where we are building an open source cloud infrastructure based on OpenStack for the research and education sector in Norway. In my spare time, I'm helping out in IT communities and working with IoT devices. And my name is Rustam. I work in a company called Computas, also in Oslo, Norway. I am a Java Champion and a Google Developer Expert for cloud. I'm also running a few communities in Oslo; one of them is the cloud developer community. The thing we're going to talk about is a kind of pet project that we had. Well, let's go back to our example, because I think we can solve that problem with IoT devices. IoT devices are cool, accessible, easy to get hold of, and the number of these devices is just growing so fast.
So maybe one of those devices can solve our problem. And that was actually exactly our thought. We thought about that and it was like, that's probably a good start. Well, let's see where it got us. Before we do that, we should show you this stack; it's also the plan for the talk today. What you have at the bottom is general IoT devices. Those devices are usually sold to you as smart devices, but they're really just a bunch of sensors with a bit of network connectivity. They're not really smart, because somebody else, some cloud or something, is being smart on their behalf. But, well, IoT devices. Then you have IoT edge devices on top of that, which have a little bit more power; we'll explain what they have. And on top of that, there are two more things that connect them all together: the fog and the cloud. We'll explain all of those in a little bit. You might have guessed that we'll be spending most of this talk on the IoT edge, so we decided to split that part up into three different parts: the devices we had to look at, and the software, hardware, and architecture of those devices. That will be our plan for today. But let's start from the bottom, the IoT. So what's IoT in general? With IoT devices, or the Internet of Things, we can say that it's the connection between our world and the digital world. These devices have internet connectivity. They generate and collect massive amounts of data, send that data over the network to some servers or cloud for processing, and then receive the results. That's great, but there are some challenges as well, because as internet connectivity grows and mobile technology improves so quickly, our lives become more and more dependent on such devices.
And as the number of these devices is growing so fast, privacy concerns and security issues become a real thing, because they will affect our lives. However, overall, we can say that the growth of IoT devices is great, because they give more access and opportunity to more people. And since we're talking about IoT devices growing so fast, it's always nice to show some kind of graph. I really like this one, because it's not just showing that the devices are growing; we've seen those graphs before. This one is actually comparing it to other things. The funny thing is that the dark blue band at the bottom, the one that is slightly declining, is actually computers: PCs or, well, Macs, whatever. Then you have smartphones and smart TVs and smart home devices and all the other things. But nothing is really growing as fast as the gray thing in the back there, which is general IoT devices, those internet-connected devices. And this is kind of cool. Those devices are really nice, and there is a reason why the number is growing so fast. One of those reasons is that they're very specialized to do what they're supposed to do. They're just very good at doing that one little thing: measuring temperature, accelerometers, GPSes, whatever. And they're being produced in such large numbers that it's getting cheaper and easier to produce them. They're getting smaller in size, the prices are getting lower, and they're quite efficient at what they do. They also very often come with some kind of tracking abilities, so they would have GPSes, accelerometers, compasses, all those kinds of things. Well, that's all good, right? But not everything is good.
As I mentioned earlier, security can be a challenge, because now we have many more devices that want to interact and communicate with each other. We have more software and hardware that can have potential bugs, so it's more complex. Privacy can also be an issue, because in most cases we send raw data over the network to some servers, and both the servers and the network can leak data. As we are talking about lots of hardware devices, the environment can be an issue too: we are making so many devices that we have to think about how to recycle them. And these devices are dependent on the internet; in most cases, if we don't have internet connectivity, we cannot use them. So the next thing in our stack is the IoT edge. To explain it in a few words, we can say that edge computing improves operations and cuts costs. Well, first we need a definition for edge computing. As IDC says, edge computing is a mesh network of micro data centers that process or store critical data locally and push it to a central data center or cloud. So edge computing takes the data produced by IoT devices and processes it on the device, locally. IoT devices send almost all their data somewhere else for processing, while edge devices keep most of the data on the device and do most of the processing locally. It means that such devices have their own compute, storage, and network on the device, just in a smaller form factor. Before we go any further, I really want you to hold on to one thought, and that thought is this thing that says "a mesh network of micro data centers". Just hold on to that; we'll get back to it in a second. Next. Well, obviously those devices have some positive sides again, because they're trying to address some of the issues that we've seen with general IoT devices.
They provide low latency, because they don't have to ship that much data over the network. Sometimes you're not even able to ship that much data over the network: you might have sensors on, say, a boat somewhere in the middle of the ocean, where you don't have anything but a tiny satellite link, so you cannot send gigabytes of data. So it gives you low latency, because you don't send the data, you process most of it locally. If, and only if, you implement it correctly, it will also help you with the privacy issues, because you won't be sending the sensitive data over the network. It will give you near real-time availability and data transmission, because, again, you don't send that much data anywhere. And in general it will give you some kind of productivity increase, because you process things yourself; you don't have to rely on somebody on the other end doing the processing and telling you what to do. But when we are talking about edge computing, almost every time we are talking about one device. It means that there is no redundancy. If we lose our device, we lose our data, and if our device fails, we have to replace it. If something like that happens in the cloud, we can just start a new virtual machine and it works. And when it comes to security, there are two different arguments. On one side, we can say that edge computing is more secure because there is less data in communication with third parties, and that's great. But on the other side, we can say that it's less secure because the device itself can be attacked much more easily. So in designing such devices, we need to think about access control, using a VPN, and, well, we have to remember to patch and update our devices regularly.
Going back to this idea of IoT devices versus IoT edge devices: most of the devices you would normally have at home are IoT devices. Even the home assistants of any kind that you might have at home would usually, in the most common sense, be IoT devices, because they record what you tell them but don't process much of it. They actually send most of it over to a server somewhere and get back the intent. Or even if they don't do that, they will still send the data over to the cloud somewhere. We've seen the issues with pretty much all of them in the newspapers over the last few months, with voice recordings being leaked or ending up in a different place than they were supposed to be. Also, when you have smart lights or things like that at home, you end up using a solution that's been already made for you: you pull out your phone, open an app, click a button, and then it would typically go to some kind of bridge, to some kind of gateway, possibly send some stuff to the cloud and all the way back, and then turn on the lights or change their color or whatever that would be. And it happens so fast that you don't really think about it. But wouldn't it be nice if you could provide all that functionality, with a little bit more processing power, just locally on your device? Because the problem is that when you have those devices listening in, things like this happen. I'll just give you a few seconds, right? It's a little bit sad, but this is the reality we live in. And, well, there are obviously ways to counteract that, of course. This is the ultimate solution. So, you know, when coming home to anyone, just say "Alexa", "Siri" and all the other wake words and see if they are actually listening.
But okay, let's go back. The thing is that sometimes you don't really want to send all that data over, and there might be several reasons for that, right? I think these devices are really helpful. But the problem is the combination: they're collecting information from several sources, combining it all with a specific person, time, and location, and sharing that information with other companies. That's the problem. So it would be much better if they could handle all that functionality on the device, to protect data and provide proper privacy. All right, so two more things that we need to mention. The fog. How many of you have heard of fog? Nice, very nice. For the record, there were about six or seven hands, I think. And for the record, this is the all-time high I've seen while doing this presentation; usually it's just two hands. And cloud, anyone? Your hands are working? Yeah, yeah. That's usually a given, because most of you have heard of cloud, but not very many have heard of fog, and some have different definitions of it as well. So what is actually fog, or fog computing? In a very rough and simplified version: fog would be anything that you see there with the blue background. It's the thing that connects all the IoT devices together. Remember I asked you to hold on to this thought of a mesh of micro data centers? That's the thing that connects them all, and it's also the thing that connects them to the cloud, or to some kind of data center, or whatever that would be. So, just to sort out these terms: edge computing can be any devices, such as wearable devices, home assistants, networked sensor systems, and so on, while fog computing is the network that is needed to transfer data from the device to its final destination.
And, well, cloud, which is someone else's computer. So now that we have all the bits and pieces together, let's have a look at what we had to go through to select the software, the hardware, and the architecture for the devices that we wanted to use for this pet project of ours. To do that, we need to go back to the thought experiment we started with. You had to build a system that would be counting the number of people, typically with a camera or whatever, and it has to comply with the privacy concerns that we have, have good performance, low latency, and low cost. All right, so devices. We needed to find hardware for that. Most of the public cloud providers have already thought about this: both Microsoft and Google have hardware for it, while AWS has an operating system. And there are some more general alternatives, such as the Raspberry Pi, the Banana Pi, and the Arduino. But there are some devices that have something extra for running machine learning on them. Talking about machine learning: edge devices have limited resources compared to the cloud. To bridge the gap between edge computing and the cloud, some companies have built devices with purpose-built accelerators, a tiny chip whose job is to take over the most complex and expensive part of the calculation. That speeds up the process and frees the CPU, so we can use the CPU for everything else. And that is what's great with these devices: you still have tiny little hardware (a Raspberry Pi is not super powerful, is it?), but it gives you a bit of extra capability. There are a few such devices, and one of the few we looked at is the NVIDIA Jetson. It's a tiny little machine, Raspberry Pi-sized, with its own operating system and the extra hardware that it brings.
And it can do neural networks, it can do AI kinds of things. Whenever you run that, it runs on a separate chip that offloads the CPU for you. Then you have USB accelerators; that's the one you have there, the Intel one. It looks like a USB stick I had, I don't know, 15 years ago, with 16 megs or something; the same form factor. But it's actually a pretty cool thing, because it can do the processing on any computer: you just plug it in and it will do that. And then there is the Google Edge TPU, which also comes in two flavors. One is similar to the USB accelerator, just plug and play. And the other is a separate box with its own operating system and everything, like the NVIDIA Jetson; it looks kind of like this. We have it live here as well; you can have a look at it later. It's a bunch of cables and everything, so it's hard to move, but you have a better picture there. The hardware is pretty much like a Raspberry Pi, but most of the magic happens on this little board that you see under the cooling and the fan. That's where the CPU and the TPU and all this AI stuff is. The thing on the bottom is just for convenience: to connect network, cameras, USB, and things like that. And speaking of TPUs, you might have heard about TPUs before. TPUs, anyone? Going once, twice, okay, a few hands. The TPUs you've heard of are probably a little bit different, because this one is a tiny little thing. This is the chip compared to a penny. And still, it provides a pretty nice performance boost; we will see an example of that. But the TPUs you might have heard of look kind of like this. A little bit bigger. More than a little bit, actually. And that's not even the whole thing.
I don't remember the whole setup exactly. This is a picture I took at Google Cloud Next in San Francisco, the last one, in May or so. But the whole setup is actually to have six or eight of them in a row, and that would be the cluster. So a bit different performance, but still: compared to the size, compared to the power, compared to everything, it's a different kind of story from the regular tiny CPU you'll find in IoT devices. Anyway, going back to our example. Now that we know which hardware we chose, we can go back and see how we implemented our experiment. We decided to use machine learning because in our case we have lots of data, it should be quick, and with that much data it is more complex to do it with a simple script. To do that, we had to understand some terms in machine learning: classification, detection, and tracking. Classification is when we know what kind of object we are talking about. In detection, we know what kind of object we have and where it is in the image, so we know the coordinates of that object. And tracking is quite like detection, but in tracking we have a series of images, so we have to do both classification and detection for each image in that series. To explain it a little better, in puppy language: in classification, we know that we have puppies and not kitties. In detection, we know where our puppy is sitting. And in tracking, we know where it is moving. But before we explain how tracking works, because tracking is a little bit more complex, we can show you some examples of classification and detection. We did a recording of that, but if we have some time at the end, it's running on this machine, so we can show it to you live as well. But for now, it's a recording that we need to play. Play, press play.
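To make the classification versus detection distinction above concrete, here is a minimal Python sketch. The `classify` and `detect` functions are stand-ins for real model calls; the names and output formats are our own illustrative assumptions, not any particular library's API. The key difference: a classifier gives one label for the whole image, while a detector gives labels plus coordinates.

```python
# Hypothetical stand-ins for real model calls; the names and output
# formats are illustrative assumptions, not a specific library's API.

def classify(image):
    # Classification: one (label, confidence) pair for the whole image.
    return ("pug", 0.97)

def detect(image):
    # Detection: a list of (label, confidence, box) per object found,
    # where box = (x, y, width, height) tells us WHERE each object is.
    return [
        ("dog", 0.91, (34, 50, 120, 180)),
        ("person", 0.88, (0, 10, 60, 200)),
    ]

image = None  # stand-in for real pixel data

label, confidence = classify(image)  # what is in the image?
objects = detect(image)              # what is in it, and where?
```

Tracking then builds on detection by repeating it over a series of frames and matching the boxes between frames.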
So what happens is that we've created a very simple webpage running locally on this device that we see here. I don't know if I can lift that thing. Well, yeah, there are a lot of cables. That's the device. So everything is running locally: the image processing and all those kinds of things, and also the web server that serves this page. So if we start the server, okay, again, we start the server, and then we have a bunch of images that we just downloaded from the internet to use for this example. Mostly animals, because they're fun. And obviously we had to implement the "hot dog, not hot dog" app thing; I don't know if you've seen the series Silicon Valley, where they have this super revolutionary AI app. So we did that as well. And then we can do two things. We can do classification. This is actually classifying that, well, this is a pug, and it also shows us how certain it is about it being a pug. But then we can also do detection. The funny and interesting part is that it's using two different models. One of the models actually knows what the concept of a person is, and the other one doesn't. So in some of the examples it will actually recognize a person, and in some others it won't. And the quality of the things it recognizes, the things it's trained to recognize, is also different. For example, here we actually know that there are four hot dogs. But for this one, the koala, it will say, well, it's a koala, because we're doing classification. If we're doing detection on the same image, that model is not trained to recognize koalas, so, surprise, surprise, it will tell you that it is a dog. Well, it kind of looks like a dog, right? A weird dog. An Australian dog. Everything tries to kill you in Australia, doesn't it? Well, anyway. And then you also have hairy dogs; it actually recognizes that it's a dog. Impressive. Okay, enough with the fun. Let's go back to the thing.
So now that we have classification and detection in place, we need to look at how we do that with video, because images are fine, but you don't have to process a still image many times a second. And the reason video is a bit hard is that, well, ever since this guy called Eadweard Muybridge in the late 1800s created the whole animation concept, when he realized "I can take lots of pictures, switch them really fast, and our brains will think that things are moving", it's been like that ever since. The only things that have changed are the number of images per second and the quality of the images. This is one of those first animations; you might have seen it. It's usually in a rotating drum kind of thing that spins really fast, and, well, you can see the horse running. The moral of this story is that video is just a bunch of images, lots of images. And to process all of those images, you need to do classification, detection, all those kinds of things, on hopefully each frame, or maybe every 10th frame or whatever, but quite often. To do that on a CPU, you would need a very powerful CPU, because it will be quite heavy, and then you would have much bigger hardware; you would need a huge machine outside this room, filming and trying to process all of that. With machine learning accelerators, those operations are very optimized and they go really fast. I don't know if you noticed, but the example we had with the dogs would show you the time it used to detect, and it was just a few milliseconds.
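The "video is just lots of images" point can be sketched like this, with a hypothetical `detect` placeholder standing in for the per-frame model call (our assumption, not a real API). Running the model on every Nth frame instead of every frame is the usual trade-off between accuracy and compute load:

```python
# Run detection on every Nth frame of a clip. detect() is a
# placeholder for a real model call.

def detect(frame):
    return [("person", 0.9)]  # pretend each frame contains one person

def process_video(frames, every_nth=1):
    results = []
    for i, frame in enumerate(frames):
        if i % every_nth == 0:          # skip frames to save compute
            results.append(detect(frame))
    return results

# 10 seconds at 30 frames per second = 300 frames to process
frames = [None] * (10 * 30)
all_frames = process_video(frames)             # 300 model calls
sampled = process_video(frames, every_nth=10)  # only 30 model calls
```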
And then, when you have all that in place, you have to do tracking, and I'll explain in a second why, but the thing is that tracking also demands a lot of CPU power. The reason for that is, it's the wrong computer, okay? The reason for that is: say you have 10 seconds of video and you're filming at 30 frames per second. Just for calculation's sake, it's easy to multiply 10 by 30, so you'll be sitting there with 300 frames, right? And you have two people in the frame. If I ask any of you in this room to count the number of people within those 10 seconds, you will do it without thinking. A computer doesn't know that. The computer has no idea whether it's 600 people in that video, because, well, 300 frames times two people, or whether it's actually two, if you just do detection. So you have to do something smart: you have to send it over to some kind of comparison step. And the people will be moving, and if a third person appears, it should give that third person another ID and keep the IDs of the two other people in the video. And all of that has to happen many, many times a second. These are our four main steps that we do for each image. We start with capturing an image from the camera, and then we do resizing, because our machine learning models expect a specific image size. Then we run the processing on the image, and when we have the result, we display it. We do all those steps for each frame. So here is the system in action, so we can see how accurate it actually is. Rustam and I are more human than the picture on Rustam's access card. And it runs at about 120 frames per second, so the inference time is eight milliseconds. It means it goes through all those four main steps in just a few milliseconds for each image. That's quite fast.
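The counting problem above, 300 frames times two people is not 600 people, is exactly what ID assignment solves. Here is a deliberately naive centroid-matching sketch, our own illustration rather than the tracker used in the talk: each detection is matched to the nearest existing track, and anything too far from every known track gets a new ID. Real trackers (correlation filters and the like) are far more robust.

```python
# Naive centroid tracking: detections per frame come as (x, y) centre
# points (an assumed format). Matching across frames turns "detections
# per frame" into "distinct people seen".

import math

def assign_ids(frames_of_centroids, max_dist=50.0):
    next_id = 0
    tracks = {}   # track id -> last known centroid
    history = []  # per-frame list of assigned ids
    for centroids in frames_of_centroids:
        frame_ids = []
        unmatched = dict(tracks)  # tracks not yet claimed this frame
        for c in centroids:
            # match to the nearest unclaimed track, if close enough
            best = min(unmatched,
                       key=lambda t: math.dist(unmatched[t], c),
                       default=None)
            if best is not None and math.dist(unmatched[best], c) <= max_dist:
                del unmatched[best]
                tracks[best] = c
                frame_ids.append(best)
            else:
                tracks[next_id] = c   # too far from everything: new person
                frame_ids.append(next_id)
                next_id += 1
        history.append(frame_ids)
    return history, next_id  # next_id == total distinct people seen

# two people roughly still for three frames, a third appears at the end
frames = [[(10, 10), (100, 10)],
          [(12, 11), (101, 12)],
          [(13, 12), (102, 13), (200, 50)]]
ids, total = assign_ids(frames)  # total is 3, not 7 detections
```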
And the point is that it's not really a device-specific thing. If you find another device that can do AI processing on board, you'll probably get similar results. So we're not really talking about the devices; the whole idea is the concept of moving things you would otherwise process in the cloud or on big hardware down to the small devices, because if you were streaming eight hours of video to the cloud, that might cost you quite a bit. If you do everything on this device, it costs you the price of the device plus whatever energy the thing consumes. And it does not consume much, because right now I'm actually running it from a battery pack. So, you know, it could be like that. And another cool video is the difference between, let's see if we can do this, the difference between the CPU and the TPU. So now we have, if we go back, because now it has caught up. Okay, so now it's running on the TPU; it's doing the TensorFlow processing on this little chip. And now we switch to the CPU; you can see it up there in the corner, it says CPU. It's really, really laggy; we're down to two frames per second. And now we switch back to the TPU again and we're up to 70. So that's quite a performance boost: a kind of faster Raspberry Pi for almost the same cost. Okay. So next, full screen, it won't click. That's weird. Oh, well. We'll do it like this. So, the things that we learned. The first is that since we're doing many operations a second, we need to optimize a lot. Optimize, optimize, optimize, all the time. The other thing we learned is that different machine learning models have different performance, and that will affect your final result, because the model that we used in the beginning for face recognition ran at 120 frames per second.
The one that we used later for persons, and we'll show you a live demo of that in a second, ran at around 70 to 80 frames per second. So a little bit slower, because it has to do some other things. The more objects a model has to recognize, the slower it may be, and things like that. Another thing we learned the hard way, which is a very obvious thing now, and probably very obvious to most of you, but we did not think about it to begin with, is that you should not put much code in synchronous functions. That thing they teach you at school and universities, that print functions are really expensive? It's very true. I tried to print NumPy arrays in the debug console just to see how things work. It killed my performance: it went down from 120 frames per second to two. So don't do that. We've done it so you don't have to. And, again, optimize, optimize, and optimize. Probably the coolest slide of the deck is this one with a Snapchat screenshot. It's written in Russian, but I'll translate it for you. It says: when you have one gig of memory and you're creating 16 gigabytes of swap so you can compile stuff. And that was the thing, because we tried to compile this little thing, a really big C library for tracking, for correlation of images, and it was not built for the ARM architecture. So we had to build it ourselves. It took us a few days to build it. So it's a bit hard. And you might wonder if it was actually smooth sailing like this, because, well, another reason the slide is super hip is that we have emojis as well. So, smooth sailing. In reality, it was a little bit more like this: a little bit cloudy, not in a good sense, a little bit wavy, and a bit thunderstormy.
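The print lesson generalizes: keep blocking I/O out of the per-frame hot path. One simple pattern, sketched here with made-up names (`process`, the record format, and the numbers are all illustrative assumptions), is to buffer debug records in memory during the loop and emit them once afterwards:

```python
# Buffer debug output instead of calling print() inside the hot loop.

def process(frame):
    return frame * 2  # stand-in for real inference work

def run(frames):
    debug_records = []
    results = []
    for i, frame in enumerate(frames):
        result = process(frame)
        results.append(result)
        # a cheap in-memory append instead of a per-frame print()
        debug_records.append((i, result))
    return results, debug_records

results, records = run([1, 2, 3])
# one single write, outside the hot path:
print("; ".join(f"frame {i} -> {r}" for i, r in records))
```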
Then we were a bit like this, because the thing is not working and it's not compiling, and this is our pet project, and we're spending nights and evenings banging our heads against the wall, having no idea how to make things work. And then, you know, you learn things and you go back to smooth-ish sailing. We learned quite a lot, and, I mean, we got this thing almost the moment it became available. It was in beta for a very, very long time; it got out of beta in October or November or something. So a lot of things changed. It's not only us that learned; the thing itself got better. We should mention that, compared to other devices that we tried earlier, it was much easier to get started with. This other one had quite a lot of issues. The first requirement, when I got it less than a year ago, in 2019, was: install Ubuntu 16.04. And I was like, thank you. So yeah, that was a lot of pain. I think that got a little bit better afterwards, but now that ship has sailed, if we're sticking to the smooth sailing analogy, and we're using another device. But it was quite fun. So, demo time. Should we do some demos? We can try to do some live demos. I think we're also going to do another thing that you're not supposed to do: we're going to switch between two computers while doing a presentation. That's a very dangerous move. Don't do that. Again, we're going to do it for you so you don't have to. Kind of stretching our luck with the demo gods. But, black screen, it's a good sign. Okay, there you go. Yeah, we'll mirror displays. Here you go. Displays, mirror, apply, keep configuration, yeah. Okay, so now we have the, let's see if we're running. Yes, we're running. So this is the first demo.
And this is the cool thing, because just remember that classification has no concept of a human; it doesn't understand that at all, while detection has an idea of what a human or a person looks like. So classification will just return you things like this. We can zoom in a bit. It will say, well, it's a hot dog, and I'm pretty certain, 99% certain, it is a hot dog, and things like that. If we do classification on something with a person in it, it will tell you: well, I know it's a Chihuahua. And that's right, because there is a Chihuahua in the picture, but it has no idea that the person is there as well, so it ignores that. So again, that's the thing about the different models you'll use: you should use the ones that are trained for what you want to use them for, or you can train your own. Last year we got this thing shipped to us just before Easter, so we actually trained it to recognize Easter bunnies, eggs, and chickens, just by scraping the internet for 20 or 30 images each of bunnies, chickens, and eggs, and training the model on the device, and it would actually recognize those pretty well. So you can train your own models. Another cool thing is that if we do the same thing with detection, it will actually know where the person is and where the dog is, so they will have the orange boxes around them. Another fun thing: take this image in the middle there, of a lady with a band, a stretchy band thing; that one is actually quite fun. If we do detection, it will know that it's a person, I guess, come on, yeah, there you go. So it's a person, and we're pretty sure, it's 98%. But if we do classification, and it has no idea what the concept of a person is, can you guess what it will actually say? Do you recognize the pose? I'll show you in a second.
Dumbbells, because it has seen a lot of images of people sitting there doing a shoulder press, right? So it really recognized that, well, this is probably the kind of thing I've seen with dumbbells. Kind of a funny thing. Yeah, let's show the live video. Okay, so now we have another demo of the actual thing we were building: we have live video processing and counting the number of objects. It worked with a very variable success rate in front of an audience, because usually there would be a lot of light coming towards the camera, but in this case I think it might actually work pretty well. So now we're starting the thing that we've built. We can make it bigger, yes, come on. Is it big enough? So now we're running this script that we've built. It starts a server in the background there. We're providing a model to use for the processing, and we're providing a threshold, so we don't want to see anything below 80% certainty that it's a person. We're also filtering for persons only, so we don't want to see anything else it finds or might find, and we're printing the coordinates of each object. So we need to refresh. Hey, it works, wow. Okay, so one person, hey. One object, two objects. Okay, so it works. Now we're gonna do the scary part: turning it over to the camera. Say hi. How many people do you see? None? None, actually. Zero objects. You're not human enough. I have some bad news for you, people. Maybe we should get closer, can we? Oh yeah, we can. No, we can't, we're gonna pull the power. There's this green thing. Well, we can try. How are we doing? No, still nothing? Okay, we have some bad news for you. Our machine learning and artificial intelligence supercomputer decided that, well. See? It works like here. It works on my machine. It needs some more optimization and stuff like that.
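The filtering step the script applies, an 80% threshold, persons only, print the coordinates, can be sketched roughly like this. The names and data shapes are our own illustration, not the actual project code, and the detections here would in practice come from the model running on the device:

```python
from typing import List, NamedTuple, Tuple

class Detection(NamedTuple):
    label: str
    score: float                       # model confidence, 0.0 to 1.0
    bbox: Tuple[int, int, int, int]    # (xmin, ymin, xmax, ymax) in pixels

def filter_persons(detections: List[Detection],
                   threshold: float = 0.8) -> List[Detection]:
    """Keep only 'person' detections at or above the score threshold."""
    return [d for d in detections
            if d.label == "person" and d.score >= threshold]

# One fake frame's worth of raw model output:
frame = [
    Detection("person", 0.98, (40, 10, 200, 470)),
    Detection("dog",    0.91, (220, 300, 330, 460)),   # dropped: wrong label
    Detection("person", 0.60, (400, 20, 520, 450)),    # dropped: below 80%
]

persons = filter_persons(frame)
print(f"{len(persons)} object(s)")     # 1 object(s)
for d in persons:
    print(d.label, d.score, d.bbox)    # person 0.98 (40, 10, 200, 470)
```

The same loop would run per frame against the live camera feed, with the count overlaid on the video.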
It's a pet project, and the whole idea was actually not to create a super surveillance system; it was to play around with all these concepts and see how it goes. Now the fan is running because it got really hot processing all the video. But the cool thing is that we're doing 73 frames per second, so roughly 14 milliseconds to process each frame and try to recognize whether there are people in it or not. The whole idea was to play around with this technology and see how far we can get with it and what we can do with it. You can also, for example, teach the thing on the fly to recognize objects, by connecting a button or something, pushing the button and saying, well, this is a water bottle, and it will recognize the water bottle after a few tries. It's kind of cool. Time for questions? Any questions? Ah, yes, we should do that, that's a good point. Um, you're less human now if it works. So I mean, it's better news, right? Better than nothing. No, I'm just kidding, very bad joke. Okay, it's still working. Oh, the other way around. How are we doing? Oh, oh, oh, oh, we have one. Okay, who is 60% human? Hey, we have one. So we need to go over it again. This is actually fun. This is supposed to build you up, not break you down. Well, some good news, some bad news, you know. Perfect, thanks for the tip, I didn't think about that. You kind of stand here on the stage and it doesn't work and you're like, oh well. Any questions? Any thoughts? Any comments? Anything? Here you go. What's the cost? Oh, the cost of this thing. When we got it, it was, and I think it's still the same, around 150 bucks for the board and 50 for the camera, so 200 bucks in total. Plus chargers and stuff like that. You need some power and you need some cables, but those are things you have at home. For example, this thing right now is running on my battery pack.
So things like that are extra. I think the other boards are pretty much the same price, and the USB accelerators are actually much cheaper. This one is almost the same price as the whole board; I got it the moment they released it, because it's the second version. But the USB accelerator for the Coral, this Google Coral Dev Board thing, that one is around 70 bucks, 75 or something. I haven't tested that yet. The question is the performance difference between the USB accelerator and the Dev Board. I have not tested it, but I would expect it to be pretty much the same, because the dev board is really for prototyping: technically it's a system-on-module sitting on top of a bigger carrier board that provides all the expansion ports and everything. So the USB accelerator should perform pretty much the same, I would guess, but I'm about to order one, so I can probably tell you in a few weeks or months, maybe at the next edition of this conference. We'll see how that works. I might be wrong, but I think it actually has the same architecture as this thing; it's Movidius chips made by Intel in that one. It used to be, at least; I'm not sure if they changed that. Yes, one there and one there afterwards. Yeah. So the question is how this fits into the whole cloud thing, because we did not really talk about the cloud. That's actually on purpose, because this thing right now is not connected to the internet. It has a wireless chip, but it's not connected to any Wi-Fi, on purpose, so we can show the demos totally offline. But the idea was, and still is, to have a backend in the cloud that will collect only the numbers of people passing. It's only the results. Yeah.
So you only have the processed results. You don't leak the faces or anything, because it runs on the live feed and throws it away; we're not storing anything. The thing you see in the corner there, the number of objects, is what would be sent within some kind of time frame. So it's a very simple backend in the cloud. In general you can use these things for different purposes. For example, parking systems are more and more moving towards reading the number plates of cars instead of all the other kinds of cards and tickets. To do that, they're usually not able to process everything in the cloud, so they have to process it on local edge devices and then send just the text data over. The same goes for the toll ring in Oslo, where you pay to drive through: the first stage of processing also happens on the edge, and then it's sent over to the cloud. So the point is that you strip all the sensitive data from your thing and then ship it to the cloud. If that answers the, yeah. There was one question in the back. Oh, whether it's going to count one person once or more than once, that's up to the implementation. That's the detection part. What we're doing now, and we need to optimize it somehow, is that you take the square around a person. For example, the thing that is in the red square right now: you pick that image and send it to a library. Right now we've been playing around with something called dlib, which does correlation between images. It compares between frames and then has a look at whether it thinks it has seen that object before or not.
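The dlib correlation tracker itself isn't shown here; this is a much simpler stand-in we wrote to illustrate the "have I seen that object before" idea: compare each new bounding box against the boxes from the previous frame by overlap (intersection-over-union), and only count boxes that match nothing as new objects. The threshold value is our own choice for the sketch.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]   # (xmin, ymin, xmax, ymax) in pixels

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two boxes; 1.0 = identical, 0.0 = disjoint."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def count_new_objects(prev_boxes: List[Box], curr_boxes: List[Box],
                      match_threshold: float = 0.5) -> int:
    """A box that overlaps nothing from the previous frame counts as new."""
    return sum(1 for c in curr_boxes
               if all(iou(c, p) < match_threshold for p in prev_boxes))

prev = [(10, 10, 110, 210)]
curr = [(14, 12, 114, 212),    # same person, slightly moved -> not new
        (300, 20, 400, 220)]   # appeared this frame -> new
print(count_new_objects(prev, curr))   # 1
```

A real tracker does better than raw overlap, since it follows the image content of each patch between frames rather than just the geometry, but the counting logic on top looks much like this.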
But in general, if you just do regular detection, it will count them many times, so you have to do something on top of that. There are also machine learning models for that; we have not looked at those. There is an experimental version, built especially for the Coral by Google Research, that we also have on the list to try, but that will probably come at some point later. They just opened it up; I talked to a guy who was on that team a few months ago, and they pushed it to the public just a few weeks after that. So probably there will be some more cool stuff coming out of that. Yeah, any other questions? There you go, there's one, yeah. We have not tried that yet. The question was whether we have tried to improve the accuracy of our model by having validation sets and so on. I think that also implies that you train your own model, right? We did not do that for humans. It was a bit difficult too, because otherwise it would just be the two of us running in front of the camera, and then it would be really well trained on us and very bad at everybody else. And it's kind of hard to find people. Well, we haven't looked at that, but it would probably be a good thing. And just a day or two ago there was a huge release of data sets for all kinds of machine learning tasks, so that might be a very good idea to try; now it's easier to get hold of data that is not sensitive. They just released it; I retweeted that like two days ago or something. So yeah. But it's a very good point, and we'll do that. For us it was more for fun and giggles; we were just playing around and seeing how far we can stretch this thing. But it was really good at chickens. Easter bunnies, chickens, perfect. Just with 30 images, not that much really.
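The reason retraining on 30-odd images per class works at all is the transfer-learning trick: keep the pretrained layers frozen and train only a small head on top. Here is a deliberately toy illustration of that idea in plain Python (our own sketch, nothing like the actual on-device training): the "frozen" part is a fixed feature function, and gradient descent updates only the head weights.

```python
def frozen_features(x: float):
    """Stand-in for the frozen, pretrained layers: these never change."""
    return [1.0, x, x * x]            # bias, linear, quadratic features

weights = [0.0, 0.0, 0.0]             # the small trainable "head"

def head(feats):
    return sum(w * f for w, f in zip(weights, feats))

# Toy "new task" data: y = 2x^2 - x + 3 sampled on x in [-1, 1].
data = [(x / 20.0, 2 * (x / 20.0) ** 2 - (x / 20.0) + 3)
        for x in range(-20, 21)]

lr = 0.5
for _ in range(3000):                 # full-batch gradient descent
    grads = [0.0, 0.0, 0.0]
    for x, y in data:
        feats = frozen_features(x)    # frozen layers: no update here
        err = head(feats) - y
        for i, f in enumerate(feats): # only the head weights learn
            grads[i] += err * f
    for i in range(3):
        weights[i] -= lr * grads[i] / len(data)

print([round(w, 2) for w in weights])   # [3.0, -1.0, 2.0]
```

With good frozen features, the head needs very little data to fit the new task, which is exactly why a handful of bunny, egg, and chicken images was enough on the device.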
Yeah, it was like 30, 35 images of each. And then we would give it an image it hadn't seen before. What you actually do on this thing, which is very fast, is that you don't train the whole model from the bottom up. You cut off the last few layers and rebuild them with this training step. So you freeze the other layers and build just a few last layers on top of a model that is generally good at things, like this one that knows all these kinds of objects. It's much easier to train it to recognize another object than to recognize, I don't know, bird sounds or something like that. Although there is actually already a model for this thing that recognizes bird sounds and images of birds. There was this project where they built a bird feeder that would make funny sounds if it recognized anything other than a bird, for example a squirrel, so it would scare them away with the sound. This thing was in a bird box, so it's a very expensive bird box, but yeah, it works. Any other questions? All right. Then I think I'll say thank you for coming. Thank you.