So basically this interpreter will just interpret the detection result: which category each detection belongs to, as well as the detection coordinates. That's why you can pull from my Git repo later, OK? What's the key point? Basically, if you get output like that, it shows that the pipeline is working. Let's see whether I can detect you guys myself. OK, detected as person. Pass. Let's detect live. Sorry, this is a MobileNet, so do not expect very high accuracy. You can see the MobileNet also runs this slowly on such cheap hardware, but that's just to show you. And the accuracy is like, OK, I can detect the front few people, detect my colleagues at the back, yeah, very good.

So if we just go back, I hope it doesn't take long. Just now you saw the speed of detection when you run the Intel NCS on a Raspberry Pi. Let me give you a rough figure of what's going on. You can see that a Titan X can run very fast, but it draws about 45 watts. I did a simple calculation based on this number; I actually measured the power draw before, and it's a constant draw of about 41 to 45 watts. That comes to about $7.20 per month extra on your electricity bill, based on the current tariff rate. Then there's the Intel i7-6700K, a recent i7 processor: it runs at about 7 frames per second and consumes much more power. The TX1 that we showed you just now actually ran at 2.9 frames per second, I don't know why, maybe because I'm using a microSD card. It's about 15 watts based on the specification. A Raspberry Pi 3 without the Intel NCS is about 0.5 frames per second, very slow. It means that if you install a home surveillance system and someone runs fast enough, you may not even catch him in the blink of an eye. So 4.5 FPS with the Intel NCS gives you a better safety margin. That's probably something better.
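For reference, the monthly cost quoted above can be reproduced with a quick calculation. The tariff of roughly $0.22/kWh is my own assumption, chosen because it matches the ~$7.20/month figure for a constant ~45 W draw:

```python
def monthly_cost(watts, tariff_per_kwh, hours_per_day=24, days=30):
    """Estimate the cost of running a device continuously for a month."""
    kwh = watts / 1000 * hours_per_day * days   # energy used in kWh
    return kwh * tariff_per_kwh

# A GPU drawing a constant ~45 W at an assumed tariff of ~$0.22/kWh:
print(round(monthly_cost(45, 0.22), 2))  # ~7.13, close to the $7.20 quoted
```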
And it costs about the same per month to keep it running, so your parents will not complain about why you spend so much on the electricity bill. Deep learning is very power hungry. Yeah, and that's all for my talk. Do you have any questions?

Do you actually see people using the Intel NCS in production systems? Do you think it's feasible?

In a production system you would not be using the USB-based stick, because USB still has a bottleneck. It's better to use the chip directly. So basically, with the Myriad 2 VPU inside, people just take out the small chip and use some other protocol to communicate, which is much faster, like a DSP interface. With USB you are almost guaranteed a bottleneck somewhere as the image gets transferred in and the result comes out, so you have some latency.

Anything else you want to find out? How much does this cost? This thing costs about $144. You can buy it from Amazon, and prices differ a bit; if you buy it from RS Online, it's $144. More? No questions?

As for training the model, I use some sort of transfer learning. Basically, I download a Google model that's already trained and fine-tune it for my own classes. That takes about 24 hours; if it's trained from scratch, it takes about one week. I'm using Google datasets of about 80,000 images, but just dogs and humans.

Any further questions? Oh, that was back when I was doing autonomous vehicles. It's a problem faced by researchers as well, for anyone doing autonomous vehicles. There are a lot of datasets on cars and humans, but not a lot on bicycles. Because of that, if you put in bicycles, the amount of data is so little that when you put it into a model you have a data-scale problem. The network is biased towards humans and cars, and when you give it an image with a car, a human and a bicycle, the bicycle is unlikely to be detected.
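One common fix for this imbalance is to downsample every class to the size of the rarest one. A minimal sketch, using the talk's rough class counts purely as placeholders:

```python
import random
from collections import defaultdict

def balance_downsample(samples, seed=0):
    """Downsample every class to the size of the rarest class.

    `samples` is a list of (image, label) pairs; the counts below are
    hypothetical (cars and humans vastly outnumber bicycles).
    """
    by_label = defaultdict(list)
    for img, label in samples:
        by_label[label].append(img)
    floor = min(len(v) for v in by_label.values())  # the "weakest link"
    rng = random.Random(seed)
    balanced = []
    for label, imgs in by_label.items():
        balanced += [(img, label) for img in rng.sample(imgs, floor)]
    return balanced

data = ([("img", "car")] * 150_000
        + [("img", "human")] * 80_000
        + [("img", "bicycle")] * 1_000)
print(len(balance_downsample(data)))  # 3 classes x 1,000 = 3,000
```

As the talk notes, this balances the classes at the cost of throwing away most of the car and human data, which is why overall accuracy drops.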
So one way is to downsample all the data and make sure every class has about the same amount as the weakest link, which is the bicycle. But then that causes an overall accuracy drop in your network, because you lack the other types of data. You guys can contribute. You can augment one picture and produce four pictures, but if you have only 1,000 bicycle images, for example, you get 4,000. Look at cars: they are at around 150,000. If you augment that amount too, it's still much higher, but if you leave it at that amount, it's also fine. Data augmentation is definitely one of the methods to increase the dataset, but if the dataset is very small, there's only so much you can do.

So at this point in time it's just a toy, for example. I actually built a simple surveillance app: when it detects a human being, it sends me an email. That's all you can do for now. Definitely, if you look at applications at a larger scale, Singapore is doing this thing called smart lamp posts, and this kind of frame rate is, I think, just acceptable for smart lamp posts, given that you are detecting someone walking on the pavement and trying to detect faces to match against a database. I think those are better use cases. But for anyone doing hobby stuff, the Intel NCS can give your robot, say a Roomba or whatever, some sort of vision capability, rather than just touch sensors to bump around with. There are also other types of computer vision embedded systems around that you can try out; this one is more flexible in a way.

Is it possible to run multiple units of that? Yes, oh yeah, something I forgot to cover: this can scale up to four sticks with a linear speed increase. Another thing to remember is that the form factor is a bit weird, so if you don't buy a USB extension, you will block up all your Raspberry Pi USB sockets and you cannot plug in a mouse or anything.
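The one-picture-to-four augmentation mentioned in the answer above can be as simple as flips plus a rotation. A sketch with NumPy; the specific transforms here are my own illustration, not necessarily the ones the speaker used:

```python
import numpy as np

def augment4(img):
    """Turn one image into four: original, horizontal flip,
    vertical flip, and 180-degree rotation."""
    return [img, np.fliplr(img), np.flipud(img), np.rot90(img, 2)]

# 1,000 bicycle images become 4,000 -- still far short of ~150,000 cars.
bicycles = [np.random.rand(32, 32, 3) for _ in range(1000)]
augmented = [aug for img in bicycles for aug in augment4(img)]
print(len(augmented))  # 4000
```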
So you need a USB 3.0 extension. Yeah, I didn't realize this at first either.

The limit of four is? I think that's what Intel has tested; they reported four. But if you have a USB hub with, say, 12 sockets, feel free to try out 12 of them and see whether it works. Each stick also draws power, though, so your power supply may not be enough if you scale higher.

Is there one more question? This is only for inference; it cannot train. For training, I have to use a GPU. Even a CPU cannot do it, it's too slow.

OK, there are no further questions. I think that's all for today. The speakers will still be around, so if you want to ask them questions, you can stick around. Thank you very much for coming down today. The next meetup is around the second or third week of May; we will let you all know through Facebook again. Thank you very much for the support and for coming down. Thank you very much.