Hello everyone, welcome to the PictoBlox machine learning environment tutorial series. In this tutorial, we are going to learn about hand pose classification, one of the machine learning model types that can be trained in PictoBlox. Hand pose classification works by analysing the position of your hand with the help of 21 data points. You can map different hand poses to various classes and use them to execute actions. In this project, we will make a Beetle in the Maze game. We will control the beetle using hand gestures and make sure it does not touch the maze walls. For this task, we will need 5 classes: forward, backward, left, right and stop. Open PictoBlox and select the block coding environment. Go to the machine learning environment by selecting the Open ML Environment option under the Files tab. As we are training our models in Python, it is important that we have the required dependencies. To download these dependencies, simply click on the gear icon on the top right of the screen and select the Download Dependencies option. This will download and update the dependencies required to train the model. Click on the Create New Project option to initialise your project. Choose an appropriate name for the project and select Hand Pose Classifier as the model type. Click on the Create Project button and you will see the Hand Pose Classifier window. When you are greeted with the Hand Pose Classifier window, you will see two classes, Class 1 and Class 2, made for you by default. Edit the first class name to forward. There are 3 ways you can add data to your project: using your device webcam to click samples, using the Upload button to upload samples from your device, and using the Upload Classes from Folder button to import a data set. For this example, we will be capturing samples using the camera. Put your hand in front of the camera and you shall see the data points line up. If the data points do not line up, the classifier will not take inputs.
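To make the "21 data points" idea concrete, here is a small sketch of how a tracked hand might be turned into a feature vector a classifier can learn from. The landmark layout and the wrist-centred normalisation are assumptions based on common hand-tracking pipelines, not PictoBlox's exact internals.

```python
import math  # not strictly needed here, but handy if you extend to distances/angles

NUM_LANDMARKS = 21  # wrist, plus 4 joints for each of the 5 fingers (assumed layout)

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y) landmark points into a 42-value feature vector,
    translated so the wrist (landmark 0) sits at the origin."""
    if len(landmarks) != NUM_LANDMARKS:
        raise ValueError(f"expected {NUM_LANDMARKS} landmarks, got {len(landmarks)}")
    wx, wy = landmarks[0]  # the wrist is conventionally the first point
    features = []
    for x, y in landmarks:
        features.extend([x - wx, y - wy])  # position relative to the wrist
    return features

# Example: a fake, perfectly straight "hand" laid out along the x-axis.
fake_hand = [(i * 0.05, 0.5) for i in range(NUM_LANDMARKS)]
vec = landmarks_to_features(fake_hand)
print(len(vec))  # 42 values: one (x, y) pair per landmark
```

Centring on the wrist means the same pose produces similar features no matter where the hand appears in the camera frame, which is why the classifier can focus on the pose itself.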
Use the front of your palm to capture samples for the forward class. You need at least 20 samples for the model to train. For this example, take around 200 samples for each class. If you want to delete a sample, hover over it and click the delete button. Rename Class 2 to backward and take samples from the webcam, using the back of your hand for this class. Click the Add Class button and you shall see a new class. Rename the class to left, tilt your hand to the left, and capture the samples. Add another class and rename it to right. Tilt your hand to the right and capture the samples. For the final class, add a new class and rename it to stop. Make a closed fist and capture samples. As a rule of thumb, you should try to add an equal number of samples to every class; large imbalances in the data can be a problem while training the model. Training is where the classifier extracts features from the samples and trains the model to recognise the poses in the classes. The goal is to come up with a model that can classify unseen samples into the defined classes. Use the Advanced tab to alter the hyperparameters of the model. In hand pose classification, you can play around with the epochs, batch size and learning rate. Do note that the learning rate is an extremely sensitive hyperparameter and can greatly affect the performance of your model. PictoBlox gives you an option to train the model in both JavaScript and Python; just flip the switch on top of the training box to toggle between the two. Training the model might take some time. Keep an eye on the accuracy graph while the training runs. You can view a comprehensive report of the model's performance in the training report. The training report consists of the accuracy and loss curves of the model, the confusion matrix, and the true positives, false negatives and false positives for each class. Once training is complete, it's time to test our model on unseen data. The model trains itself on the samples we provide.
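To see what the three hyperparameters actually control, here is a toy mini-batch training loop on made-up 1-D data. This is not PictoBlox's trainer; it is a minimal gradient-descent sketch where each knob (epochs, batch size, learning rate) appears explicitly.

```python
import math
import random

random.seed(0)
# Fake 1-D samples: class 0 clusters near -1, class 1 clusters near +1.
data = [(random.gauss(-1, 0.3), 0) for _ in range(100)] + \
       [(random.gauss(+1, 0.3), 1) for _ in range(100)]

EPOCHS = 20          # how many full passes over the whole data set
BATCH_SIZE = 16      # how many samples per weight update
LEARNING_RATE = 0.1  # step size: too high diverges, too low trains slowly

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b = 0.0, 0.0  # a one-weight logistic model, just for illustration
for epoch in range(EPOCHS):
    random.shuffle(data)
    for i in range(0, len(data), BATCH_SIZE):
        batch = data[i:i + BATCH_SIZE]
        gw = gb = 0.0
        for x, y in batch:
            err = sigmoid(w * x + b) - y  # prediction error for this sample
            gw += err * x
            gb += err
        # one gradient step per mini-batch, scaled by the learning rate
        w -= LEARNING_RATE * gw / len(batch)
        b -= LEARNING_RATE * gb / len(batch)

accuracy = sum((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data) / len(data)
print(f"final accuracy: {accuracy:.2f}")
```

Try doubling the learning rate or cutting the epochs in half and re-running: the final accuracy shifts noticeably, which is exactly why the tutorial warns that the learning rate is the most sensitive of the three.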
But a model is only useful if it can classify unseen data just as well as it classifies the training data. This is where model testing comes into play, and you can test the model right here. Much like training, testing can be done using your device's webcam or by uploading data from your device. Start the camera to test your model. Do note that even while testing, the data points must line up on your palm. Click on the Export Model button on the top right of the testing box and PictoBlox will load your model into the block coding environment. Observe how we have blocks for the model we just trained in the blocks palette. You can click on the open recognition window block to see the model working. You already have this code available in the PictoBlox project, and we will be editing the same. Add the is identified class block in the if condition and set the appropriate directions: forward, backward, left, right and stop. Now add the analyse image from block and select stage as the option. Click on the green flag to see your code in action. There you have it: you just used hand pose classification to make your very own Beetle in the Maze game.
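The if-condition blocks above boil down to one mapping: each recognised class moves the beetle in a particular way. Here is a hedged Python sketch of that game logic; the function name and the movement amounts are illustrative assumptions, not PictoBlox's actual block or API names.

```python
import math

def step_beetle(pose, x, y, heading):
    """Return the beetle's new (x, y, heading) for one recognised pose.
    heading is in degrees; 0 means facing right along the x-axis."""
    if pose == "forward":
        x += math.cos(math.radians(heading))
        y += math.sin(math.radians(heading))
    elif pose == "backward":
        x -= math.cos(math.radians(heading))
        y -= math.sin(math.radians(heading))
    elif pose == "left":
        heading = (heading + 15) % 360   # turn counter-clockwise
    elif pose == "right":
        heading = (heading - 15) % 360   # turn clockwise
    # "stop" (or anything unrecognised) leaves the beetle where it is
    return x, y, heading

# Usage: turn left once, then step forward along the new heading.
x, y, heading = 0.0, 0.0, 0
x, y, heading = step_beetle("left", x, y, heading)
x, y, heading = step_beetle("forward", x, y, heading)
print(round(x, 2), round(y, 2), heading)
```

In the real project, the game loop would also check whether the new position touches a maze wall and refuse the move if it does, which is what the "ensure it does not touch the maze" blocks handle on the stage.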