A very good evening to all the educators joining us live for the International Bootcamp on Coding, Artificial Intelligence and Robotics for school educators. We are here with one more wonderful session for all of you on day five, where we are going to talk about gesture-controlled robots. I hope you are all ready to make your robot move with gestures; we will be starting our session in just a few moments. Till then, you can let us know in the chat where you have joined us from, along with the name of your country. I would also like to call on my colleague Tina. Hello Tina. Good afternoon. Good afternoon, and welcome, everyone, to Level 2 of the International Bootcamp on artificial intelligence and robotics. On behalf of the STEMpedia family, I welcome you once again to this International Bootcamp Level 2, and I request all the educators to share your name along with your country name in the chat. Thanks, Tina. We can see that Christina has joined us from Croatia. I would also like to share that we have all become part of one family: I now feel very familiar with all the educators, seeing their activities on Telegram, getting their issues resolved, and the next day seeing them come back and talk to us on the live session. It is an amazing, interactive process, and to make it a little more amazing, we want all of you to please join our Telegram group. You can scan this QR code to join the group, where we will help you with your issues and support you with any problems you are facing. Apart from that, this will be one of our mediums of communication with you for the complete bootcamp, as well as for the further developments coming up from STEMpedia and the new bootcamps planned for you.
I hope you have already joined the Telegram group; if not, you can of course still join us. Moving ahead to how exactly we came up with this wonderful bootcamp: we would like to thank all our collaborators, who have helped us reach the maximum number of educators around the world. The bootcamp is hosted by STEMpedia, and the co-host is ARTPARK, the AI and Robotics Technology Park in Bangalore. The complete bootcamp is also supported by NITI Aayog's Atal Innovation Mission, which is an integral part of the central government's education initiatives in India. To take you further, I would request Tina. Tina, can you hear me? Sure. It is our pleasure to have the educators' forum and the American India Foundation as our community partners; they help us reach the maximum number of educators. We also have many international partners, such as Purpose Smart Education (Palestine), the National Institute of Education (Singapore), Apex Coding Academy (Egypt), and others. All of our collaborators and STEMpedia have made a wonderful effort to bring all of our educators to this platform. Now it is time to discuss the basic objective behind this international bootcamp. The objective is to empower educators by deepening their knowledge and understanding of coding, artificial intelligence and robotics, and thus equipping them with the necessary 21st-century skills. This bootcamp will give our educators hands-on experience with the latest technologies, such as robotics, artificial intelligence and machine learning, in a fun and very interactive manner. By learning more about these modern technologies, our educators will become more confident talking about them in front of society. Last but not least, we are helping our teachers become ambassadors of their own institutions.
Now, why should you participate in this bootcamp? We are giving you an opportunity to learn with international educators, so that through this program you can join our international educators' community, where you can share your doubts and discuss among yourselves. As I mentioned before, you can acquire the skills you require in this 21st century. One important thing I want to add here is that we will be giving you all the teacher resources, as well as learning resources for your students. On completion of this bootcamp, you will earn a badge and a certificate accredited by ARTPARK, STEMpedia and STEM.org. I think it is time for you to take it forward. Thank you. Yeah, sure, Tina. So now I think we are at the schedule part; just a second. We are now going to give you an overview of the schedule. We have covered so many things during Level 1 and Level 2 of the bootcamp; let's see what exactly we have covered in the last few days and what we are going to cover today. The first day was an amazing start, with an introduction to robotics where we talked about the basics and made a robot move in the forward and backward directions. The next day, on 28 June, we started with the self-driving robot, one of the very interesting activities, where our educators came up with amazing projects, making their robots move by showing different recognition cards in front of the camera. The day after that covered one of the very important topics of robotics, the line-following robot, which helped us make Quarky move along a line so that it can be used in big and amazing projects.
The next day, we came up with a session on the AI delivery robot, where we combined our learning of recognition cards with our previous learning of line following, and built an amazing concept where we were able to stop our Quarky at different locations just by showing numbers. The numbers worked as house numbers, so we had Quarky making deliveries to house number one, house number two and house number three. And today we are here again with a wonderful session on the gesture-controlled robot, where we will be using hand-detection techniques and machine learning concepts to make our robot move in different directions. We also have an important session coming up on 3 July, where we will discuss your certificates, your badges, and how you will receive the resources. Apart from that, it is going to be very entertaining to see the different projects made by the amazing educators who joined us on the complete bootcamp journey. So without further delay, we can move ahead with today's session. Although we have covered the learning management system so many times, we wanted to cover it again, because it is a very important part: it is from here that you will be getting your certificates, badges and everything else. Let me just show it to you. You get access to the complete LMS, the learning management system, with your own dashboard, at ai.thestempedia.com, where you just have to sign in by clicking on the sign-in option. Once you have signed in with your credentials, click on My Courses and then on the Educator Bootcamp Level 2 to see all the learning content and the recorded videos.
Once you are there, you will be able to see all the learning content related to each session, along with embedded videos, which you can see here. A very important part: we want you to fill in the daily feedback form after watching the sessions, live or recorded. That means even if you are not able to watch the sessions live, you can watch them later at your convenience, fill in the form, and submit the activities to earn your wonderful badges and certificates. Today we are going to cover two activities: one with hand detection and one with machine learning. Now I think we are pretty clear on this part, so I will share my screen and move ahead with today's topic. If you still have not joined our Telegram channel, please join us, so that we can support you with your queries. Now, although we already took this session in Level 1, I wanted to cover it again to give you a little more of an idea, so that we can use this concept again today. So let's discuss machine learning. Here you can see a small diagram showing a four-stage process for human learning. This becomes very important when we talk about machine learning, because to understand it we first have to understand how the human learning process works. If you look at it closely, we have four different stages: first, sense the environment; second, analyze the information; third, decide and act; and fourth, increase your knowledge. Now, let's start with the first.
Let's take an example to understand the human learning process. Last time we discussed cats and dogs, and how exactly we learned to tell a cat from a dog; today, let's understand it through gestures. I will just turn on my camera here to show you. From very early childhood, whenever any guest comes to our home, or we go somewhere, we start observing the different gestures made by people. If I want to say hi or hello to everyone, I raise my hand and wave it, right? We observe these things from childhood and we adopt them. Similarly, if we want to stop someone, what do we do? We bring our hand up and hold it out like this, meaning we want you to stop here. There are many other gestures; for example, we generally do namaste like this, and when any person does this, we understand that the person is greeting us. So there are different gestures we have learned since childhood. Now, after understanding how gestures work, consider walking down a road. Sometimes someone stops you, or someone waves a hand to say hi; you automatically understand that the person is greeting you, and you wave your hand back. But if you are standing at the roadside, and cars and buses are going by on the road, and you wave your hand in the same way, what does that mean? It means you want the bus or car to stop so that you can board it.
So, with time, we increase our knowledge and understanding of the different gestures, and depending on the scenario we understand what exactly a gesture means. The same gesture can be understood differently in a different scenario, but since we are humans, we are capable of understanding things very well, and we are able to make the right decisions. Now, moving ahead: how exactly have we learned all of these things as humans? Of course, we have used our different senses, like vision, touch, smell, taste and hearing. Using these senses, we have given data to ourselves, training ourselves on different gestures and different situations, so that we can come up with a particular output, or I should say a particular decision, whenever we need to. In this complete process, if you wave your hand at a baby for the first time, you are not going to get a response, because the baby does not yet have the understanding that a waving hand means hi. What happens is that their parents or teachers tell them: see, that person is waving his or her hand, which means he is greeting you. You might have also seen that people hold the baby's hand and move it to wave back. What is happening here is basically training: we are training human babies to wave their hand to say hi. This is how things work: first the training happens, then there is an analysis of the information, by machines or by humans, and then a decision and action are taken upon it.
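The sense → analyze → decide and act → increase knowledge cycle we just walked through can be sketched as a tiny loop in code. Everything here, the gesture names, the stored meanings, the responses, is an illustrative assumption, not part of any real library:

```python
# A minimal sketch of the four-stage learning loop, assuming made-up
# gesture names and meanings purely for illustration.

knowledge = {}  # gesture -> meaning, grown over time ("increase your knowledge")

def learn(gesture, meaning):
    """Stage 4: increase knowledge by storing a new gesture-meaning pair."""
    knowledge[gesture] = meaning

def decide_and_act(gesture):
    """Stages 1-3: sense a gesture, analyze it against what we know, act."""
    meaning = knowledge.get(gesture)   # analyze the information
    if meaning is None:
        return "unknown gesture"       # nothing learned yet for this input
    return meaning                     # decide and act on what was learned

# Training: a parent or teacher supplies the meaning, just like with a baby.
learn("wave", "say hi back")
learn("open palm", "stop")

print(decide_and_act("wave"))       # a gesture we have been trained on
print(decide_and_act("thumbs up"))  # a gesture we have not learned yet
```

The same gesture could map to a different meaning in a different scenario; that context-dependence is exactly what makes real gesture understanding harder than this dictionary lookup.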
Finally, we increase our knowledge: waving a hand is not the only way of greeting; there are other ways, like saying hello, saying namaste, or, in some places, shaking hands. Those are also part of greeting, so in the end we understand that greeting can be done in many ways. Now, if someone puts their hand forward towards you, it is not as if you would fail to understand the meaning; you understand that it is for shaking hands. So I think you have an idea of how human learning works. Now let's understand machine learning. As per the definition, machine learning is the process of machines learning how to act by themselves, without any human intervention. That means: assume there is a robot and I want it to work automatically. If I wave my hand, my machine should also wave its hand to greet me; if I make this gesture with my hand, I am asking the machine to come forward; if I gesture like this, I am asking my machine to go backward. If all these decisions are taken by the machine automatically, it means the machine has gone through a machine learning process to take those decisions. Now, how exactly does the machine get this understanding? First, of course, we give an input of images or sound information to the model. That means we first give it a lot of images: this is a hand; if it looks like this, it means go back; if it looks like this, it means come forward. Then we arrange the pictures in a dataset, so that we can give the input under different labels: this is hi, this is namaste, this is hello. That way, when we test it later, we get the desired output. These inputs are then sent to the model for training.
In the model, an algorithm operates on the input to extract the higher-level information inside it. That means that if a person is just holding a hand up, static, it does not mean hi; only if the hand is waving is it a hi. This is very granular information that the algorithms extract; the algorithm works on the pixels of the images to find out what exactly is in them. The final thing is the output, where we test whether the machine is performing well with the training it has been given. If the machine is not performing well, I have to go back to the initial stage and give it a little more data, so that I can get a great output. Okay, so now we are going to start with an activity. I hope you are all ready with your PictoBlox, and I will just share my screen. We are going to start with an activity where we make our Quarky move with different hand gestures. I will share my screen now. Okay, I hope my screen is shared. The first and very important thing is, of course, that we should have PictoBlox and we should have a Quarky. Another very important thing is that you should have signed in to PictoBlox: once you have signed in, you are able to use the different extensions provided in PictoBlox. Today we are going to start with hand detection, and then we will move ahead with machine learning. So I will click on the Add Extension option in the bottom corner; I hope you are all able to locate it. Now I will click on this and add the extension called Human Body Detection.
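The train → test → add-more-data cycle just described can be made concrete with a toy classifier. The nearest-centroid model below is only a stand-in for the real algorithm, and each "image" is a made-up pair of feature numbers rather than real pixels:

```python
# Toy illustration of training on labeled data, testing, and retraining.
# Assumption: every "image" has already been reduced to a short list of
# pixel-like numbers. This nearest-centroid classifier is a simple stand-in.

def train(dataset):
    """Find the similarities in each class: average all samples per label."""
    centroids = {}
    for label, samples in dataset.items():
        n = len(samples)
        centroids[label] = [sum(col) / n for col in zip(*samples)]
    return centroids

def predict(centroids, sample):
    """Output the label whose average 'template' is closest to the sample."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lbl: dist(centroids[lbl], sample))

dataset = {
    "hi":   [[1.0, 0.1], [0.9, 0.2]],   # e.g. waving-hand features
    "stop": [[0.1, 1.0], [0.2, 0.9]],   # e.g. open-palm features
}
model = train(dataset)
print(predict(model, [0.95, 0.15]))  # falls near the "hi" template

# If testing shows poor results, go back, add more data, and retrain:
dataset["stop"].append([0.15, 0.95])
model = train(dataset)
```

The labels here play the same role as the classes we will create shortly on Teachable Machine: each label collects only the images that belong to it.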
As we know, this extension works with offline PictoBlox; that is, you do not require the internet to use it. You will need a little patience while all the model details are loaded into PictoBlox, after which they can be used to make a wonderful project. So now I have added the extension called Human Body Detection. The first and most important part is that I have to bring in a block that will help me with the input data, because until I give an input, my machine is not going to work. Since I have to take an input, I have to turn on the camera; I also have to use the block that says "when green flag clicked". Next, I am going to add a block from Settings, which is "show detection". If you observe this extension's blocks, they are categorized into three parts: Settings, Pose Detection, and Hand Detection. Settings means I have to set up the environment so that I can use the utilities present in the other blocks of this extension. Now, I have already added blocks to turn on the camera and to show the detection. The next part is that I want everything to work on a live basis, that is, instantly: if there is any change in the footage, I want to see the change live. So I will add a "forever" loop here. And now, as I said, I have to tell my machine that I want it to analyze something. A machine can analyze and come up with a solution, but the most important part is that you have to give it a command to initiate the process. To initiate the process, I have a block that analyzes the image from the camera for the hand. Now it has initiated the process, but how exactly am I going to check whether the process is working well, and whether I am going to get some output or not?
What I can do here is add a condition. I will go to Control, where there is a block that says "if then else". We have already discussed this block, but again I would like to draw your attention here: this is the block we use whenever we want to work with a condition. Conditions are very important for machines, because there can be many possibilities, and while making a project it is necessary that we consider all of them. There can be a negative possibility or a positive one; we have to consider both, and also assign an output to each of them. Right now I am going to add a block here that says "is hand detected?". This can be true, or it can be false. That is why I am using the condition: if it is true that my machine has detected a hand, then I want my machine to do something. Let's say I will add, from Looks, "say Hello". And if it does not detect any hand, I want it to say "No hand". Now let's keep this inside the forever loop and see what happens. I just need to switch my camera; one moment. Now I will click here. Right now you can see it is saying "No hand detected", because yes, there is no hand on the screen right now. But what happens if I bring my hand in? I will just stop this; give me a second. Right now it is saying "No hand detected", but as soon as I bring my hand in, it is going to say "Hello". You can see: it says "No hand detected", "No hand detected", and as soon as I bring my hand in, it says "Hello". So we have done this part. Now, let me just clear the screen first; I will hide the detection. So, what we did till now was detect the hand, and I was just asking the sprite to say something.
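The block script above (forever → analyze the image → if hand detected, say "Hello", else say "No hand") can be sketched in plain Python. The boolean frames below simulate the camera input, since the real detection comes from the PictoBlox extension, not from this code:

```python
# Sketch of the if/then/else script. The hand_detected flag stands in for
# the "is hand detected?" block of the Human Body Detection extension.

def respond(hand_detected):
    # Both the positive and the negative possibility get their own output,
    # exactly as in the if-then-else block.
    if hand_detected:
        return "Hello"
    else:
        return "No hand"

# Simulate a few camera frames instead of a real forever loop.
frames = [False, False, True, False]
for frame in frames:
    print(respond(frame))
```

Covering the "else" branch matters: without it, the script would stay silent whenever no hand is in view, and we could not tell whether detection was running at all.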
But now I want to work with Quarky, so I am going to bring my Quarky into play. Give me a second; I will just switch the camera so that it is visible to you. I hope my Quarky is now visible to everyone. First, I need to select a board, so I will select the Quarky board here, click on Connect, and connect my Quarky over Bluetooth. Once I have done this, the next part is that I have to use certain blocks. For now I will just remove these blocks, because I want something different to happen. As I said, when we talk about hand gestures, we make this gesture when we want to stop something, so why not bring in a Robot block and add a "stop robot" block for the case when a hand is detected? Also, to make it a little more interactive, I can add a display block, where I will fill the display with red. So if a hand is detected, the display matrix turns red and the Quarky stops. And if there is no hand, I want my Quarky to keep going, so I will put "go forward" here. Of course, 100% speed is going to be far too much for my small board, so I will try 30%; although that is probably still going to be quite fast, let's try it. Now if I press the green flag, you can see it is moving, and as soon as I put my hand up, you can see it stopped. You can see how exactly it is working: it moves, it detects the hand, and it stops. So now it is done, and when I want to stop it, I just put my hand up. You can see how it is able to detect the hand and move. Let me just show this part after stopping the screen share.
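The Quarky script just built boils down to one rule per frame: if a hand is detected, light the display red and stop; otherwise, go forward at 30% speed. The `Robot` class below is a hypothetical stand-in that only logs commands; it is not the real Quarky API, just a way to see the control logic on its own:

```python
# Sketch of the gesture-stop script. Robot is a made-up stand-in whose
# methods record what the real Quarky blocks would do.

class Robot:
    def __init__(self):
        self.log = []
    def go_forward(self, speed):
        self.log.append(("forward", speed))
    def stop(self):
        self.log.append(("stop", 0))
    def show_display(self, colour):
        self.log.append(("display", colour))

def on_frame(robot, hand_detected):
    if hand_detected:
        robot.show_display("red")  # make it interactive: red = stopped
        robot.stop()
    else:
        robot.go_forward(30)       # 100% is far too fast for a small board

quarky = Robot()
for hand in [False, True, False]:  # simulated detection results per frame
    on_frame(quarky, hand)
print(quarky.log)
```

Running the rule once per camera frame is what the forever loop does in the block script; the robot's behaviour is entirely driven by the latest detection result.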
Right now, I will just bring the Quarky, keep it here, and let it go; as soon as I bring my hand in, it gets stopped. Note that only your hand should be visible in the camera. I remove my hand and it moves again; I bring my hand back and it stops. So, using hand gestures, I am able to stop my vehicle. It is going out of the frame, so I will just bring it back here and stop this. I hope you have got the idea of how, using my hand, I am able to stop the Quarky. Now, what if we want a technology where we do not only stop it, but make it respond to different gestures? Right now, even if I make different gestures, the Quarky is only going to detect a hand: whether you show an open palm, a single finger, or a closed fist, all of these are just a hand, and whenever it detects a hand, it is going to stop. But what if I want more? I will just open a new file first. Someone mentions that PictoBlox has been downloading data for ten minutes. Sir or ma'am, please restart PictoBlox and check again. Also make sure the RAM is not getting overused: if there are other applications working in the background, it sometimes becomes difficult for PictoBlox to open, so please close all the other tasks that are not required at the moment from the Task Manager and then restart PictoBlox. Okay, so now I will bring up a new project here. I have turned my Quarky off for now, and I will just share my screen again. Now you will be able to see that we are going to talk about Teachable Machine.
Why are we talking about Teachable Machine? As we know, it is an amazing platform where we can train our own models to get outputs. So what are the different types of models you can work with on Teachable Machine? You can work with an image model, you can work with a pose model, and you can also work with audio; but right now the audio model does not work very well, so we generally prefer the image model, which works very well. So what are we going to do? I am bringing my PictoBlox back again. We are going to use the Teachable Machine platform, but it is very important to understand that while we add the Machine Learning extension here using Add Extension, we have to move over to Teachable Machine to create the model. Once you click on the Machine Learning extension, you will observe that there are no blocks added; I think you are able to see that. Instead, there are two clickable buttons: Create a Model and Load a Model. What I am going to do here is start with creating a model. Once I click on it, it will open and take you to Teachable Machine, where we are going to train our model. We could use a pose project, and you can try that too, but right now we are going to use an image project, so that we can work with it very easily. Okay, so here you can see there are Class 1 and Class 2 options. Okay, thanks; wonderful, it is amazing to see the comments. Now, I have to make my gesture-controlled robot.
If I say that I have to make my robot move in different directions, one can be left, one can be right, and I can also train my model on stopping and going forward. So we can take these four classes. And what are classes? Classes are nothing but the different labels of the data. If we say that this is the "go" class, it means we have to give images related only to the "go" class. So right now I will make a class called "go", and one more class, "right". From here you can add more classes: I will make "left", and I will add one more class, "stop". Now I have added four classes, and I also have to give the images. Last time we worked with a cat-dog classifier, where we uploaded the data; today we are going to use a webcam. So I will click on Webcam and you will be able to switch on your webcam. Here there is something a little tricky: you have to keep the background clear. Only if you keep the background clear will you get a good result. I will try my level best to do so; just give me a second. I am turning my laptop so that I can get a clear background. I hope this is going to work; yes, this is pretty clear. So this is how we are going to use the model. For each class we have to train it on a certain gesture. What we can do is take this gesture as going right, this one as going left, this one as stop, and this one as go. So there are four gestures and we are going to use all four of them. Just give me a second to make this a little easier for me. Yes, now I will be able to train it properly.
Now, for the training, watch this very carefully: there is an option saying Hold to Record. As soon as I click on it, it starts recording photos. What I want is that whenever I show this, the victory sign, my machine should go forward. So I will give the images, and I will try to take them in slightly different ways, so that the model is more robust. And how many images do you have to give? Keep in mind that you should give around 150 to 200 images, not more than that; otherwise it is going to consume a lot of RAM. Now for "right", I have to show this angle. I am going to bring my hand up; just a second. I have tilted my laptop to the left-hand side, so it is a little difficult, but I will try my best. I think this is going to work; yes. So for "right", we are going to train it with this gesture. Oh, I am sorry; by mistake I added the data to the wrong class, so I will just delete it. We cannot put two different gestures under one label, that is, one class. So now I will click Webcam here and give around 180 images again. Next, I have to add images for "left", and "left" is pretty easy, so I will click on it. I have given more than 200, so I will delete some of them, since I do not want my machine to lag: the more images you give, the more RAM it uses. So I will try to keep it around 180 only. Now, coming to "stop": stop is very simple; we can use a closed fist for it. I will directly choose Webcam and use Hold to Record. Also note that right now I am using only one hand, but you may want to use both hands.
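The data-collection rules above (roughly 150 to 200 images per class, delete the extras, never mix two gestures under one label) can be expressed as a tiny sanity check. The class names match the four labels used in the session; the counts are made-up examples:

```python
# Sketch of a dataset-balance check, assuming the 150-200 images-per-class
# guideline from the session. Counts below are illustrative, not real data.

def check_dataset(counts, low=150, high=200):
    """Return the labels whose sample count falls outside [low, high]."""
    return [label for label, n in counts.items() if not low <= n <= high]

counts = {"go": 180, "right": 180, "left": 210, "stop": 180}
print(check_dataset(counts))   # "left" has too many images; delete some

counts["left"] = 180           # trim the class back down
print(check_dataset(counts))   # now every class is within the guideline
```

Keeping the classes balanced matters as much as the absolute count: if one gesture has far more samples than the others, the model tends to favour that class when it is unsure.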
Someone asks: is it okay to give just five images for each class? No, of course not, and I will tell you why. When we were working with cats and dogs, there was a very large difference between the images. But when we are talking about the human hand, there is not a big difference between these images if you look at them closely. That is why we cannot give very few images; we have to give a minimum of 150 to 200 images in order to get a good result. Okay, I hope this is clear now. Coming back: I will just stop the video again here. I have given the images from different angles to train the model, and now I am going to click on Train Model. As soon as I do, you have to understand what is going to happen. The first important thing is that the machine is going to find the similarities between all the images in each class. If you see a message saying "page unresponsive", that means it is using a lot of RAM, and you have to click Wait. There are two Chrome windows, two cameras and many things running on my PC, so sometimes the PC cannot handle it, but you should always press Wait. Now, an important point I want to draw your attention to: it is going to find the similarities. I have chosen the wrong pen colour; let me fix that. Observe this part closely: are you able to see a number growing? 10 of 50, 11 of 50, 12 of 50. What exactly is this number? We call it an epoch: the number of times the machine trains itself, because in a single pass of training we cannot get a wonderful result. So it trains itself again and again. Every time a training pass runs, there are calculations in the back end: first the similarities between images of the same class are found, and then the differences from the images of the other classes.
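The "10 of 50, 11 of 50" counter can be illustrated with the simplest possible training loop: a model that adjusts one number a little per epoch, so the error shrinks pass by pass. This toy fits a single weight, not images, and the data is made up; it only shows why one pass is not enough:

```python
# Toy illustration of epochs: fit a single weight w so that w * x
# approximates y. Each of the 50 epochs makes one small adjustment.

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]        # the "right answer" is w = 2

w = 0.0
losses = []
for epoch in range(50):                                      # "epoch N of 50"
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.05 * grad                                         # small step per epoch
    losses.append(loss)

print(round(w, 3))             # ends up very close to 2 after 50 epochs
print(losses[0] > losses[-1])  # the error went down over the epochs
```

After the first epoch the weight is still far from correct; it is the repetition over many epochs that brings the error down, which is exactly why the counter runs to 50 instead of stopping at 1.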
That way the machine can understand what the differences are between the images given as input data and what the similarities are within each class — for instance, all the go-class images would only have two fingers, and in all the right-hand images the thumb would be somewhere at the downside. These types of calculations happen. Now, once the calculation is done, you would be able to see — I should say — the testing part. Right now it is getting very confused. Okay, I would again need to do the same setup, so just give me a second. Okay, so right now I'll do the same. Right now it is not able to find any answer, but let's see whether it's able to see my hand and recognize it. Yes, this is a stop — it has trained itself correctly. It is left — perfectly correct. You can see — I will try to show. So it is detecting it now, and there is a percentage also; I'll tell you what exactly that percentage means. Here you can see this is go. I'm not able to show my hand completely, but still I'll try. So right now you can see that while I'm giving a right hand, it is not so sure whether this is a right-side turn or not. What it is giving me is a percentage: how confident my machine is. Okay, so right now if I show this, it's understanding left. If I show this, it's understanding — because now you have to understand that I have trained my machine with a certain background, while right now I'm changing the background. If I change the background, my machine is not going to work well. So you have to make sure that you are testing with the same background. If you want to work with a different background, then you have to train the machine with that background as well. Okay, I hope this part is pretty clear. So now we have also tested it, and I'm going to just clear all the drawings. The next part is exporting the model, so I'll just click on "Export Model."
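About that percentage: a common way classifiers turn raw class scores into confidence percentages is the softmax function, which squashes the scores so they sum to 100%. The scores below are hypothetical, and this is a generic sketch of the idea rather than a claim about Teachable Machine's exact internals.

```python
import math

def softmax(scores):
    """Convert raw class scores into confidences that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores for the classes go, left, right, stop.
scores = [2.0, 0.5, 0.3, 0.1]
confidences = softmax(scores)
for label, c in zip(["go", "left", "right", "stop"], confidences):
    print(f"{label}: {c * 100:.1f}%")
```

The highest percentage is the class shown as the answer; a low top percentage (like the uncertain right-hand example above) means the machine is not very sure.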
And here I basically have to click on "Upload my model." I saw this was one of the teachers' doubts — that it is showing something else. No, this is always going to say upload model; once it is uploaded, it is going to show you that the model is completely uploaded, or up to date. So I'll just click it here. If you are seeing something else, that means you have clicked on the model link. You don't have to click on the model link; you have to copy that model link and paste it into PictoBlox. Okay, that's superb — I can see one of our educators has mentioned it's working; that's superb. So now you can see I've got this model link. I hope you are all able to see this; let me just mark it again. Basically, what I've got is this model link. Now I have to copy it from here so that I can of course take it to PictoBlox. Here you can see it is saying my cloud model is up to date. Okay, now I'll just clear the drawing, copy this, and take it to PictoBlox. So I'll just close Teachable Machine now and open PictoBlox. Now, the important part I have to do here is click on "Load a model" — that is the second option — paste the model link, and then click on "Load model." It will take a little bit of time; you have to be patient with it, because, as you can understand, we have of course used so many images — we have trained our model on so many images. Now it has uploaded everything. Okay, now you'll observe this block — let me just zoom in and show you — that this block has all four classes we had. That means our complete information has been successfully transferred here. So now the next part comes up: how exactly am I going to make my Quarky work with this — how exactly am I going to write the script?
So, as you know, the first and very important part whenever we are working with artificial intelligence is that I have to take input data. I'll just zoom out, actually. Yeah. So this block is going to help me turn on the camera. Once the camera is turned on, of course, I also have to check whether it is working well or not. For ease, what I'm going to do is also bring in the recognition window. You'll observe a small change: there will be a small recognition window coming up on top, so that there is no problem with the stage. So now I'm also going to add a block which says "when green flag clicked." These three blocks are only going to help me with the settings part. Now, the testing part has to be taken care of inside a forever block, as we will be working with things in real time. And now come the conditions, because we have four different conditions here, right: it can go, it can stop, it can take a right, or it can take a left. So I would be using a condition, and I already have this block with me: "is identified class." Let me zoom out a little, or let me just use this option. You will observe this block here: "is identified class from the web camera go." Okay, so basically what is going to happen is that at this point I want my Quarky to move ahead — but how am I going to get that block? Here I have to select the board. There was again a question: "I'm not able to see the Quarky palette." So it is very important that you select the Quarky board: you have to click on Boards, select the Quarky, and only then would you be able to find this. Now what I'm going to do here is bring a robot block. Okay. And I'm going to click it here: go forward at 100% speed.
Okay, but of course, as we have already seen, 100% speed is going to be very high, so right now I'm making it 30, so that it is easy for us to follow and we can easily change our actions. And to make it more beautiful, let's bring some green color. Okay. Now, else. Here you have to understand: right now we used if — that is one condition. In the else part you have to use one more if-else, not a plain if — you have to use this block again, because there are four different conditions. So we would be putting one condition inside the other. Right now I'm going to put it like this, and now I'm going to check for the second one. So: if the identified class from the web camera — now, we have tested go; let's keep it stop. Okay — or rather, let's keep it left. And now, if it is left, I would try to have some motion on my Quarky; for that I have to go to Robot. What I want is for my Quarky to move to the left side. So from the dropdown I'll select this, and again, to make it a little slow, I'll make it 30. Okay. Next, to give Quarky a nice look, let's put some color on it. I'll put this mustard color. Okay. Now — the model is still loading in PictoBlox for someone. What you can do is just stop the process, click back again on "Load a model," and re-paste the link. Also, please check that you have copied the link correctly; that is very important, because sometimes a part of the link, or even a single character, is left out of the copy, and then it becomes a little difficult for the machine to understand. Okay, now again, let's make a ladder of if-else here. So I'll just bring an if-else. And now we are going to put the next one, which is right. I'll bring that block and put it here: if it is right, then I want the same kind of block again. So we'll just go to Robot, drop this down here, and from the dropdown I'm going to select right.
And again with a little speed, like 30%, and to make it a little beautiful, let's add some colors to the Quarky. So I'll just click here and fill it with blue. Okay. Oh, sorry — how exactly am I filling the color? I'm opening this matrix, selecting a color, and there is an option to click here that says "fill"; that is going to fill the complete matrix. Now, the last one: do we have any more conditions? Of course not — in that scenario we can directly use if; we do not require an if-else now, because there is only one condition left. So what I'm going to do here is check the next one, and right now I'm going to check it with stop. And if I'm bringing a stop, of course I can bring a red color — generally stop signifies that — and along with it I want a stop block. Okay, so if I just zoom out my screen, you would be able to see all four conditions: go, left, right, and stop, each even marked with a particular color — green, mustard, blue, and red. Now the question comes up of how exactly I'm going to start. So what I'm going to do here is just set up my camera: I'll go to Settings, Audio/Video Settings, and I'm just going to change this. Okay. Along with this, I want to show you the Quarky working on my camera. Let me just put my camera a little lower — I hope it is good now. Yeah, it seems good now. Okay, so let me just put my Quarky here. Okay, this is also ready. So now I want to connect; for that I have to turn on the Quarky. As soon as I turn it on, I can see a blue light. Now I'll click on Connect, go to Bluetooth port, and I'll see "Quarky ED6" — that's the name of my own Quarky — and click on Connect. As soon as it gets connected, you can see the blue light has turned green. Now, as soon as I press this green flag —
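The block script built above — one if followed by nested if-else blocks for the four gestures — can be sketched in Python to make the logic explicit. The functions passed in here (`forward`, `left`, `right`, `stop`, `color`) are hypothetical placeholders standing in for the PictoBlox/Quarky blocks; they are not a real API, and in the actual project this logic sits inside the forever block after the camera is turned on.

```python
# Python sketch of the four-condition if/else ladder from the block script.
# All robot actions are hypothetical stand-ins for PictoBlox blocks.

def run_gesture_control(identified_class, actions):
    """One pass of the forever loop: act on the identified class."""
    label = identified_class()
    if label == "go":
        actions["forward"](30)        # 30% speed, as chosen in the session
        actions["color"]("green")
    elif label == "left":
        actions["left"](30)
        actions["color"]("mustard")
    elif label == "right":
        actions["right"](30)
        actions["color"]("blue")
    else:                             # "stop": the last condition needs no if-else
        actions["stop"]()
        actions["color"]("red")
    return label
```

The nesting mirrors the blocks exactly: each else holds the next if-else, and the final else handles stop directly, since only one condition remains.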
— you can see this recognition window coming up; this is what I was talking about. Okay, so what I want here is — just a moment. You can see, even if I'm not showing anything, it is going to do something. So let me just bring stop — you can see it got stopped. You can see with left it is taking the left direction. Go forward. Stop. Okay, then of course we can extend it — right now I haven't trained it on back, but of course we can even train it to go back. Let's make a direct left turn here. Okay, I'll just stop it. And let's try right. So it is trying to take a right. As soon as I show stop, it is going to stop. For right it's having a little bit of a problem, but that's because my hand is not sufficiently visible. Okay, right now it is able to detect stop. So I hope you have got an idea of how exactly I'm able to control my robot with gestures. Right? And I think that's pretty cool, right — that we're able to make the robot move with hand gestures. Now, how exactly I did it is, I think, pretty clear to everyone, but I would again like to show and demonstrate this to you without sharing my screen. I hope this is clearly visible. "Blue light is showing but forward not moving" — so there is a high chance that you have kept the blue light for forward; if you have coded it my way, the blue light is kept for the right side. If you kept it for forward, you have to make sure that you first make the forward movement and then bring the display light; then it is going to work well. Okay, so let's see this once again by, I should say, seeing it on a big screen. So I'll try connecting; for that again I need to turn on the Quarky. Once I've turned it on, I'll just click on "Try Again" to check my Bluetooth, and here I go, connecting the Quarky. And let's start making my Quarky move in different directions. Let me just start it from here, and let me start with go. You can see it is coming up as go.
Let me just stop it by showing my stop gesture. And now let me make it go in the left direction, and let's try a right direction and stop it. And then a small right, and stop it. Let's make it go forward, stop it, take a left turn, and go back to the initial position — I'm trying to take it to the initial position — and I hope you have got a fair idea of how exactly it's working. Now, I'll just share my screen and tell you a very important thing: if I am not showing anything, it is still going forward — it is still detecting go. So instead of that, what you can do is even train the machine with a blank — okay, like a blank background — so that if there is nothing, I want my Quarky to stop in place. In that way the machine is going to respond even to your blank background, when you're not showing anything. Still, if it's not moving, then first check whether your Quarky is able to move with just the go forward direction — I mean, just connect your Quarky and check it with a block which says go forward at 100% speed. Okay — "stop and left are too similar." So you can choose any of the colors, that's not a problem; of course I can change this color to green, yellow — even white is a good color. If you have not seen how exactly white looks — actually, I like the white color on Quarky; it looks pretty good. So let me just show it to you by connecting it. Okay, if you're finding any problem with movement, you first have to check whether it's moving standalone or not — that means, is it able to move with just a single block or not. So you can see this bright white light coming out of the Quarky. You can even set, I should say, the brightness on it, clear the screen, and do many more things as per your instructions. So I hope this is pretty clear, and you have definitely got an understanding of how we can move ahead with the gesture-controlled robot.
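Training an explicit blank class, as suggested above, is one fix for the robot driving forward when no gesture is shown. An alternative sketch uses the confidence percentage instead: if the model's best confidence is below a threshold, treat the result as stop. The function name and the 0.7 threshold here are illustrative assumptions, not part of the session's block script.

```python
def decide(label, confidence, threshold=0.7):
    """Fall back to 'stop' when the model is not confident enough.
    An alternative to training an explicit 'blank' class; the 0.7
    threshold is an arbitrary illustrative value to tune by testing."""
    return label if confidence >= threshold else "stop"

print(decide("go", 0.92))  # prints "go"   -- confident detection
print(decide("go", 0.40))  # prints "stop" -- unsure, e.g. blank background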
So I'll stop my share here for a moment. Now, an important part here I would like to tell is I can just share this model link. Okay, so that it can be used by everyone, even if you're using it phone. So how exactly would be working on phone is of course you cannot create the model but of course you can use our model. Right. So let me just paste this. So Tina I'm just pasting this for you on chat. So this is the link which you would be shortly getting on your chats. Okay, so now the things come up that if in case you are working on phone. So in that scenario, you have to directly copy this link. Okay. And once you've copied this link you have to paste it on the load model option. And then you have to load it. Okay, simple go forward is not working how to start. So I'll tell you what you can do is you can just first check that everything is working fine. So for that, let me just try to show you a troubleshooting. Okay, so I'll just share my screen. So what I want if in case it's not working for you, don't worry. The initial step is connect your quirky with the servo. I'm sorry, not servo serial port basically. Okay, Bluetooth connected fine. Okay, you can add let me first answer the first question that if your motor is not working, you have to connect your quality with the serial port that is USB C type cable. Once you have connected with a USB C type cable, then I want you to please click on upload firmware option. Let me just show it to you. So you would get an upload firmware option. Here just above the stage part. Okay. And you can do this only on your laptop desktop. So you have to upload the firmware. Okay, connected with USB C type cable and upload the firmware and again test it is still you are facing any problem related to the motor motion. You can join our doubt session. Of course, we are going to help you and support you there. The second question is how can we add more classes. 
So right now, you won't be able to add more classes to the same model, because that training has been completed and stored in teachable machine. If in case you want more classes. Okay, so you have to train the model from beginning in the teachable machine. Okay, so this is a way out you can do. And of course I think the things are pretty clear still let me just share my screen. So we used this machine learning extension, using which we created the model. And we have even get shared you the model links by clicking on load model, you have to paste the link. And this is very important educators who are using the smartphone. Okay, now here you can copy and paste the modeling which is shared. Not talking about some application so your machine learning is getting very much used an object and any male classifiers, it is getting used it identifying the different shapes and the gestures. So we'll answers that whether the people are wearing mask or not. Also in the healthcare industries, where it is getting helping the doctors to detect the different diseases. Okay. Making the games just we did one of it, like making a rock paper scissor game so today but we did it a little different. And I hope this is very clear and to everyone. Now, if you are still have not joined our telegram group please join our telegram group, so that of course we can help you support you in that, and also please do not forget to fill up the feedback forms. Especially for all the educators in India, you have to use the first QR code. Okay, and for the educators who have joined us, especially through the community of meet the IO, please fill out the second form. You can just scan the QR code from here and also you can directly fill up this feedback form in your dashboard. Those educators are finding a difficulty of course we are looking to the difficulties and for other educators who have already shared us. 
Our technical team is working on it and definitely I can assure you that we are going to support you with the issue till the time it is not getting resolved. So you have to only keep a patience with us. Definitely all the issues will be getting resolved whatever it is whether it's a technical whether it's related to robot it's related to your PC. If you want to have a word with our target educators, please join us on the doubt sessions at 6pm and 10pm. And then of course we can support you on that. So that was basically all of which I wanted to cover and I hope you have understood correctly. There was a small question that how exactly we can add the classes. So, let me directly open the teachable machine. I show it to you you can add the classes in the teachable machine by clicking and add class option I think this was even answered by Pat Scott thanks Pat for answering this. So you can just click on add classes option here. Okay, so this is an add class option. This is basically going to help you to add more classes. I recommend you not to go more than seven or eight classes with at least 180 or 150 to 200 images in each. Otherwise it can become little difficult for your machine to handle those things. So I hope these things are pretty clear still you have anything you can of course let us know we are going to support you with that. So that was basically all from my end. So yeah, thank you for joining us on the day number five. We would be coming up with a closing ceremony where we would be talking about the, I should say the certification batches and of course how can you become an instructor and of course implement these robotics and a concept for your own students. So do not forget to join us live in coming class which is going to happen for on the third of July. So I hope you have all enjoyed the today's session definitely you have got a good understanding of the things. So that was all from my end. Thank you everyone thanks for joining in. 
Okay, let me just take one question how can we get the model link. Once we have already trained after closing the window. So after closing the window if you haven't taken the modeling, you have to retrain it basically because once you have closed the window, the complete things have lost. So if you want to take you can take our model link and work on it. But yeah, there can be little difficulty because the background I'm using and you are going to use would be little change right. Okay, so thank you and over to you Tina. Thank you, Ayush. It's really amazing to see Ayush with a single AI powered quirky kit, we can make so many real time projects like self driving car line for over guests and robot and so on. We all are super excited to make our own guest control robot using quirky and Victor block hope all our educators enjoy today session. Thank you all our dear educators for joining today. See you all on Sunday.