So, if I speak here, it's not audible, right? No, that mic is not working. Your collar mic is not working. Yeah, my collar mic is not working. The collar mic is on. Test, test. Okay, I think this one is for the recording, for the benefit of those who cannot come, but I'll still use it because I speak very softly.

So, can you see the screen? If you go to one of the stations, there's an icon called Coral Workshop. You just double-click that and it goes straight to the GitHub repo. We structured it that way so that, in case there's any update to the scripts, you'll always get the latest version from any of the IoT stations.

The goal of this is to at least get some experience of what it's like to deploy TensorFlow models using the Edge TPU, on one of the IoT prototype boards that we have, which is a Raspberry Pi with the Coral Accelerator. And on top of that, at the end of the workshop we also add another piece, because how can you talk about IoT without sensors and actuators, right? So we added an additional board, a HAT that sits on top of the Raspberry Pi, called the Sense HAT. The Sense HAT gives you other sensor data like temperature and humidity, it has an accelerometer as well, plus an LED matrix for output and a joystick. For this workshop we'll just use the LED matrix, but you'll have access to the code, so in case you want to explore the other Sense HAT APIs later on, you can start from there; there's a small sketch of what that looks like at the end of this part.

There are not many prerequisites for this. The setup we have already includes all the driver installation needed for the Edge TPU. Proficiency in Python helps, because most of the examples we have are in Python. Coral also has a C++ API, but we haven't explored it; we use Python because we feel it's more accessible for a general audience.

The first task you'll do as part of the workshop is to prepare the workspace, because we have a number of people going through each station. Once you get to the machine, maybe the previous person made changes, modified the scripts to check the behaviour and so on. You don't have to worry about that, because with this command we remove the whole folder and start fresh, and the next one clones the repository again and gets the latest from GitHub.

Once that is done, which I've already done, I'm connected to the Pi using SSH as well as VNC, because later on we'll need VNC to access the camera. For now I'll just use the SSH approach so that we can run the commands; the first few tasks are command-line driven, so we'll use that approach for now. Once you clone this, you'll go into the Coral Workshop directory, which is what we have currently in the terminal.

By the way, the font on the screen, is it okay, or do you need me to increase it a little more? Good. Let me just set up the appearance size. The font is 18, maybe 24. Is that manageable? So, we'll just use this. We're in the Coral Workshop folder, the same as what we have in the repo. Martin gave a good introduction for the afternoon workshop.
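In case you want a head start on the Sense HAT side, here is a minimal sketch of its Python API, assuming the sense-hat package that ships with Raspbian is installed; the message text and colours are just placeholders, not something from the workshop scripts.

```python
from sense_hat import SenseHat

sense = SenseHat()

# Read a couple of the environmental sensors.
print("Temperature (C):", sense.get_temperature())
print("Humidity (%):", sense.get_humidity())

# Light the 8x8 LED matrix green, scroll a message, then clear it.
sense.clear(0, 255, 0)
sense.show_message("Hello IoT", text_colour=[255, 255, 255])
sense.clear()
```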
So, you would have learned about the different deployment options, TF Lite, the formats that need to be deployed, and of course the advantage of deploying that to the Edge TPU. Generally, let's say you have a Coral, you bought a Coral device, and then the next thing is, say a friend, or someone in the morning, gives you a model: "Hey, I have this really nice model about emotions, do you want to use it for your own application, say on your Raspberry Pi?" With traditional modelling, the output of the model is the frozen .pb file, right? We cannot just use that directly: for one thing it's big, and for another it's not ready yet for the Coral Edge TPU device.

The Coral device has a number of model requirements, and you need to meet them in order to take advantage of the Edge TPU. I mean, you can still run the model on a plain Raspberry Pi, just without taking advantage of the Edge TPU. But for the Edge TPU case there are certain model requirements, and one of them is that your training needs to be done as quantization-aware training. Anyone who was here last week during the AI day, I heard someone demoed how it was done. But when we were preparing this, and I think you would have heard from Wei Ying this morning as well, we had concerns and issues running quantization-aware training with TensorFlow 2.0. Maybe that is resolved by now; we just haven't found the fix yet.

So what we're going to do here is use a pre-trained model that is already in TensorFlow Lite. The starting point, the input for the workshop, is already a TensorFlow Lite file; let's say you have a friend that exported the model to TensorFlow Lite. But we cannot use the TensorFlow Lite file directly either: it needs to be compiled into an Edge TPU model first before we can use it on the device.

There are two ways to convert this TensorFlow Lite model into an Edge TPU model: a command-line approach and a web-based approach. I'll just show that for a bit. If you go for the command-line approach, you need to install everything, but why go there if a browser-based approach is good enough for your purposes? So the browser-based approach is the one I'll be using. A moment ago I was discussing the model requirements; you can see the list of requirements here: quantization-aware training, the tensor sizes, the compiler, and so on. Your model needs to be checked against these. Otherwise, if you just take a floating-point model, generate a TF Lite file, and then say "can I just upload this to the site, download the result and run it on my Edge TPU?", that doesn't happen, because without meeting those model requirements the compilation will fail. So, at least to ensure the model was built with quantization-aware training, there is already, for demo purposes, a set of ready-made quantized models that you can just download and use, and that's what we currently have; there's also a short sketch below of how you could quantize a model yourself.
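Just as a side note on those model requirements: in this workshop we start from ready-made quantized models, but if you ever want to produce your own fully integer-quantized TF Lite file, a rough sketch using TensorFlow 2.x post-training quantization (an alternative route to the quantization-aware training mentioned above) could look like this. The saved-model path and the random representative dataset are placeholders only.

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # In practice, yield a few hundred samples that look like real inputs;
    # random data here is only a placeholder for the sketch.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("my_saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization so the Edge TPU compiler can map the ops.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("model_quant.tflite", "wb") as f:
    f.write(converter.convert())
# The resulting model_quant.tflite still has to go through the Edge TPU compiler
# (the web compiler used in this workshop, or the edgetpu_compiler command-line tool).
```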
So this one, the TF Lite files you can see here, the ones included in the workshop, are two of the popular ImageNet models. One is MobileNet, which is quite small, and the other one is the Inception model. We got them from the hosted models: if you go to this link on tensorflow.org, the quantized models, you'll see a bunch of quantized image classification models with varying accuracy and model size. Once you download a particular model and unzip it, you'll see various files there; you have the protobuf files, and one of them is the .tflite file. What you'll do with the .tflite file is... I'll just go here to the corresponding folder where I extracted my repository. I'll just delete this for now and keep that there. So you'll essentially have this file for Inception, and the other one for MobileNet.

Once you have that, if I go here, you'll need to do this one .tflite model at a time. I'll just browse, and this is the quantized TF Lite model that I've downloaded from that site, so I'll just open it. It takes a while to compile; it's compiling in the cloud currently, and once it's done it should generate a file with edgetpu appended to the name. So if you uploaded a mobilenet quant .tflite file, it will generate another file, which you can download, with _edgetpu in the name. That is the Edge TPU model we can deploy to the Edge TPU. The compilation happens in the cloud; you can also do it on the command line, but in our approach we compile it here and download the result. Once we have this, the next step is to use it for classifying images, so why not run that inference on the Edge TPU?

Once you can download the model, I'll just download this; it takes a few minutes. Just place it back in the same test_data folder where you got your initial .tflite files, so that here you have the .tflite, which is the quantized model, and you also have the Edge TPU model. In the same way, you do that for Inception and also for MobileNet. Later on in the workshop exercise we can compare these two models and how they perform on the Edge TPU. I'm going to skip this download because I already downloaded it previously, and we'll go straight to classification.

The two models that we've downloaded are MobileNet and Inception. If you download these, you'll notice that MobileNet is only a few MB, but Inception is around 40 MB or so, significantly heavier than the MobileNet model. Once we have that, just ensure that we have the edgetpu files in the... let me just increase this font, I think it's not visible at the back. Once we have the edgetpu .tflite models, just make sure they're in the same folder, so we have an _edgetpu.tflite for Inception and one for MobileNet.

Once you've confirmed that, let's do classification. Let's classify this fellow; it should be a cat, a cute cat. Just make sure you're in the intended folder, so that the script you intend to run is there. Once you've checked that, let's try MobileNet first. The classification script we have takes the model, then the labels, and also the image, the cat image that you can see on the screen.
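If you want to see roughly what a script like that does under the hood, here is a minimal sketch using the tflite_runtime interpreter with the Edge TPU delegate; the workshop's classify_image script comes from the Coral demos, so the exact code differs, and the file names below are illustrative only.

```python
import numpy as np
from PIL import Image
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the compiled model and hand it to the Edge TPU delegate.
interpreter = Interpreter(
    model_path="mobilenet_v1_1.0_224_quant_edgetpu.tflite",  # example file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details["shape"]

# Resize the test image to the model's input size and run inference.
image = Image.open("cat.jpg").convert("RGB").resize((width, height))
interpreter.set_tensor(input_details["index"],
                       np.expand_dims(np.asarray(image, dtype=np.uint8), 0))
interpreter.invoke()

# Print the top 3 ImageNet labels; scores of a quantized model are uint8.
scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])[0]
labels = [line.strip() for line in open("imagenet_labels.txt")]
for i in scores.argsort()[-3:][::-1]:
    print(labels[i], scores[i] / 255.0)
```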
So once we have that, by the way, for this to work, this classify_image script is part of the default demos you get when you install the Edge TPU API following the Coral documentation. When you have the Coral board or Coral accelerator, generally the first step is to set up the API environment, which you do on your Pi, for example. Once you've done that, you have demo scripts like classify_image, which you can use to test the corresponding models once you have them. In this case, we're just going to test how MobileNet classifies this particular image. We copy this, paste it here, and hit enter. So it classifies it as an Egyptian cat.

MobileNet is quite interesting: it's quite small and quite fast. But one of the complaints people have about MobileNet we'll see in the next example. It's the same thing: we're going to try another image using the same MobileNet model, but this time we're going to identify this image. We're in Singapore, so why not use something from Singapore? We copy that and let's see what MobileNet classifies it as. So, the Merlion. The first result is missile. Does this look like a missile? Maybe it's the spray on the left side going through the mouth of the lion, a horizontal missile or something. So that's the MobileNet approach, or that's what happens if you use MobileNet.

But we downloaded two models, right? One MobileNet and one Inception. What's the result if we use Inception instead of MobileNet? Remember, Inception was the 40-megabyte file we downloaded. Let's try the same image, but using Inception this time. So, Inception, the _edgetpu.tflite file that we compiled. It takes a while, it's slower. But at least for the water part it says fountain, because, I mean, it's horizontal, it goes out from the side; a horizontal fountain or something. At least the water part it got right.

Coming back to the models we've explored so far: from the hosted models, we took the first one, which is the smallest but also the fastest in the TF Lite benchmarks, and compared it with the Inception V4 quantized model, which is the biggest and heaviest but also the slowest. In terms of accuracy, though, Inception has around 79.5% top-1 accuracy, while the top-1 accuracy of that MobileNet is only around 39%. But you get the speed benefit: the benchmark says 3.7 milliseconds. So that's the contrast between the two.

We'll put the Inception and MobileNet models aside for now and come back to them later, because another thing people are interested or keen to explore with Edge TPUs is training. Generally when we do training we need huge resources and do it in the cloud. But let's say we don't have internet, or we have a use case where we need to do at least some of the training at the edge. Of course there are multiple ways to do it, but one way is to not train the whole thing. How about the case where you already have a base model? In this case we use it as an embedding extractor: you keep all the layers except the last one, and on top of that you train on your new data, so only the last layer is learned for the new data.
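To make that last-layer idea concrete, here is a minimal sketch of the general pattern in Keras: a frozen base model acting as the embedding extractor, with only a new classification head being trained. This is just the concept, not the on-device script used in the workshop, and the training call at the end is a placeholder.

```python
import tensorflow as tf

# The pre-trained base model (minus its classification head) acts as a frozen
# embedding extractor, so none of its weights are updated.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights="imagenet")
base.trainable = False

# Only this new last layer is trained on the new data (e.g. 5 flower classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(flower_images, flower_labels, epochs=5)  # placeholder training call
```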
So for the new data we're going to use here, we'll use the same thing, MobileNet. But on top of the MobileNet extractor, we're going to add in flower photos, so that it's able to identify: is it a daisy, is it a sunflower. I'm not sure if we have a sunflower, but we have around five classes of flowers that we're going to use. For this approach, as per the documentation, the limit is around 200 images per label; that's what the documentation currently limits us to. But the question is, is that good enough?

So what we're going to do, and I've done this already: when you go to the IoT stations later, just run this step. It will copy the flower data, all the labels, all the different classes, and then you'll separately download the embedding extractor, which in our case is based on MobileNet. So MobileNet is our starting point, and on top of that we'll use the new flower data to make a new model. Once that is done, you can do on-device transfer learning. I've done this, I've downloaded the photos and also the embedding extractor, but let me run the on-device transfer learning again so that we can regenerate the file.

During the on-device transfer learning, if you notice, there's a test ratio. The test ratio we're using here is 0.95. That means, how much of our data are we using? With a test ratio of 0.95, only 5%. So, 5% of the data, is it good enough? We'll see; there's a small sketch of what that split means after this part. Did I copy that? We're just going to do the transfer learning now, using the flower data that we've downloaded. So you have tulips, roses, dandelion, sunflowers, and daisy, that's one, two, three, four, five classes. It's currently evaluating the scores for each of the classes. We'll see the top-1 later; the lowest one is around 75% accuracy, but on a case-by-case basis maybe that's good enough. Using just 5% of the data and achieving 75, 76 percent, in some cases that might be good enough, especially where you don't have much memory to hold the data, right? You don't want a huge amount of data occupying space. When you're doing transfer learning in the cloud, you can take as much space as you want, but there are certain cases that restrict or constrain you from doing that. That's why there are scenarios like this, and people are thinking in these terms, so at least there's a way if we need to do the training at the edge.

We're left with one more evaluation, which is for the daisy. Daisy, that's Donald's partner, I'm sure, it's a Disney thing. It takes a while. If you go through the workshop later, there are usually two bottlenecks: the first one, up front, is cloning the repo, which is around five minutes, and the next one is this, because this one takes time. But we still want to do it, because it's transfer learning and we want to see how it performs and how it's used on edge devices. So now you can see the top-1 is around 79%, which, using 5% of the data, is something that in some cases may already be good enough.

So let's say we're done with the training. Why not have another image: hey, I have a model I just trained, let's see if it can identify this picture. So I'm going to download this rose, if it's not already downloaded. It's already downloaded here, rose.jpeg.
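Just to spell out what that 0.95 test ratio implies, here is a tiny sketch of the kind of split it describes; the function name and the shuffling are illustrative, not the workshop script's actual implementation.

```python
import random

def split_by_test_ratio(samples, test_ratio=0.95):
    """With test_ratio=0.95, only 5% of the samples are kept for training."""
    samples = list(samples)
    random.shuffle(samples)
    n_train = max(1, int(len(samples) * (1 - test_ratio)))
    return samples[:n_train], samples[n_train:]  # (train, test)

# Example: 200 images per label -> roughly 10 used for training, 190 for testing.
train, test = split_by_test_ratio(range(200))
print(len(train), "train /", len(test), "test")
```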
Then, with the new model that has the flower information, we're going to run it and check whether it identifies the rose as the right flower. So you have the five classes here, right: tulips, roses, dandelion, sunflowers, daisy. What's the percentage that it's likely to be a rose? Of course, if you need to evaluate more classes, you'll need additional labelled data. But for this, at least the first result that comes up is roses. So even using only 5% of the data, it's able to identify the correct class, and for this kind of scenario you mainly just want the correct class to be the one identified.

Next, we only have two tasks left, so we should be done soon, and we only have five stations, so we can take turns and hopefully all of us can try the workshop material at the IoT stations. From here, remember the models we downloaded a while ago, one Inception and one MobileNet. We have another program that we've prepared. The original program that comes with the Edge TPU demos does live classification, but that script is initially written for, if any of you have worked with a Raspberry Pi, the Pi Camera only. So you'd need to get that additional Pi Camera and use it with your Raspberry Pi. But the Pi Camera is quite flimsy, and if we're going to invest in more usage of the device, why not use a traditional webcam? That way it has more utility, which is why we use this normal Logitech webcam: it has a microphone, and you can even use it as a regular webcam connected to your computer, so it's more versatile. But the existing script doesn't work with it, so we had to take another approach and use another library to enable the webcam; I think it's another utility library, and then OpenCV to grab the image. Then we feed that into the model, like we did a while ago when classifying the image, using the Edge TPU model that we compiled, and the labels are always the ImageNet labels, because both of these models are trained on ImageNet data; a rough sketch of that loop is shown after this part.

So I'm just going to copy this. Oh no, I cannot just copy this, because we're going to need the camera now, right? So I'm going to run it inside VNC. I'm only using VNC here because I have the Pi and I need to connect from my laptop. Later on at the IoT setup you don't need VNC, because the Raspberry Pi has an HDMI port that connects directly to the monitor at the station.

Let me just show this last bit. The first one is Inception. When I run this, notice whether you see anything about the accuracy, or whether there's a lag in the camera. So I'm going to run Inception with the ImageNet labels. Can you see it on the screen? Okay, it's starting up. This is image classification. It's able to identify this as a microphone, the monitor at the back, and then, is there any other... oops, what is this? Some furnace, or at least a tripod or something. So, as you noticed a moment ago, it was able to identify the microphone; the accuracy part is there. But if you look at the video, what do you notice? It's quite laggy. We're using Inception, and it's quite heavy. The accuracy is there, but in terms of performance, you might think in this case where... oh. Okay. Yeah. What happened? Yeah. So if I close the application right now, the script is designed to give us the frames per second, which is interesting to know.
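For those who want to poke at the live classification script later, here is a minimal sketch of that webcam loop, assuming OpenCV for frame capture and the tflite_runtime interpreter with the Edge TPU delegate. The workshop script adds the Sense HAT feedback and differs in the details, and the file names below are placeholders.

```python
import time
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="inception_v4_quant_edgetpu.tflite",  # example file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details["shape"]
labels = [line.strip() for line in open("imagenet_labels.txt")]

cap = cv2.VideoCapture(0)  # the USB webcam
frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV gives BGR frames; convert and resize to the model's input size.
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (width, height))
    interpreter.set_tensor(input_details["index"],
                           np.expand_dims(resized.astype(np.uint8), 0))
    interpreter.invoke()
    scores = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])[0]
    top = int(scores.argmax())
    cv2.putText(frame, labels[top], (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("classification", frame)
    frames += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

print("approx FPS:", frames / (time.time() - start))
cap.release()
cv2.destroyAllWindows()
```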
The approximate frames per second is about 0.5. Just remember that for comparison later. So it's quite laggy, but at least the accuracy is there. The next thing we want to explore is MobileNet: how does it perform compared to Inception? Using the same script, the only difference when running it is the model being used. The previous one was Inception, now it's MobileNet, the one we compiled via the Edge TPU compiler a while ago. So it's still image classification, and it's still able to identify things. But as you can see, the camera, compared to Inception a while ago, still lags, but it's less laggy. The accuracy is also a concern now, though, even if the performance is quite different.

If you go to the IoT stations later, you'll also get a chance to open the code, because we modified it: if it detects something, the Raspberry Pi will display a green D, for detect, and if it doesn't detect anything, it will turn red and show an X. You can see it for yourself later. Right now it's detecting something, which is why it's showing green; at the stations, if it detects nothing, it will show the X. But you're free to modify the code, for example to make it show Hello World or something like that at one of the stations.

So those two were image classification, but how does it perform if we do object detection? There's another model you can use for object detection; here in the workshop we just use the straightforward COCO model, so that you get the bounding boxes as well. This is the last demo I'll show, and then you're free to try it out yourselves and update the scripts if you need to. I'll just remove this. By the way, the COCO model here is MobileNet-based. If I place something in front of it, it will update if it detects something different. Some things are being detected as background; in fact, most are background. Maybe the accuracy for this is not there yet; maybe we need to use another model, but for now I only have this one, so at least you can see the bounding boxes (there's a rough sketch of reading those detection outputs at the end of this part). Black Widow.

So this is the last task for the workshop. If you want to try it yourself, modify the models a bit, try uploading them, and modify some of the scripts; you can also search for the Sense HAT APIs and then modify the code to interact with the Sense HAT. There are five IoT stations that we have set up for you, so you can just take turns and run through the workshop yourselves. And that's basically it. The next two workshops will be on PWA, where I think we're going to use TensorFlow.js, followed by deployment on Android. So for now, if you want to try, you can go to the five stations, and if you're waiting, maybe we can start with the next workshop. Neil.
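One last note for anyone modifying the object detection script at the stations: here is a rough sketch of how COCO SSD-style TF Lite detection models are typically read out, assuming the usual four post-processed outputs (normalized boxes, class ids, scores, count). The file names are placeholders and the workshop script differs in its details.

```python
import cv2
import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

interpreter = Interpreter(
    model_path="ssd_mobilenet_coco_quant_edgetpu.tflite",  # placeholder file name
    experimental_delegates=[load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
_, height, width, _ = input_details["shape"]

frame = cv2.imread("snapshot.jpg")  # placeholder: one frame grabbed from the webcam
resized = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), (width, height))
interpreter.set_tensor(input_details["index"],
                       np.expand_dims(resized, 0).astype(np.uint8))
interpreter.invoke()

# SSD-style detection models usually expose boxes, class ids, scores and a count.
outputs = interpreter.get_output_details()
boxes = interpreter.get_tensor(outputs[0]["index"])[0]    # [ymin, xmin, ymax, xmax]
classes = interpreter.get_tensor(outputs[1]["index"])[0]
scores = interpreter.get_tensor(outputs[2]["index"])[0]

# Draw bounding boxes for detections above a confidence threshold.
h, w, _ = frame.shape
for box, cls, score in zip(boxes, classes, scores):
    if score < 0.5:
        continue
    ymin, xmin, ymax, xmax = box
    cv2.rectangle(frame, (int(xmin * w), int(ymin * h)),
                  (int(xmax * w), int(ymax * h)), (0, 255, 0), 2)
cv2.imwrite("detections.jpg", frame)
```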