Hello everyone, welcome to the PictoBlox machine learning environment tutorial series. In this tutorial, we are going to learn about hand pose classification, one of the machine learning model types that can be trained in PictoBlox. Hand pose classification works by analyzing the position of your hand with the help of 21 data points. You can map different hand poses to various classes and use them to execute actions. In this project, we make a beetle-in-the-maze game in Python coding. We will control the beetle using hand gestures and make sure it does not touch the maze. For this task, we will need five classes: forward, backward, left, right, and stop. Let's get started.

Open the beetle-in-the-maze game template for Python coding. Go to the machine learning environment by selecting the Open ML Environment option under the Files tab. As we are training our models in Python, it is important that we have the required dependencies. To download them, simply click on the gear icon on the top right of your screen and select the Download Dependencies option. This will download and update the dependencies required to train the model.

Click on the Create New Project button to initialize your project. Type an appropriate name for the project and select Hand Pose Classifier as the model type. Click on the Create Project button and you will see the hand pose classifier window. When you are greeted with the hand pose classifier window, you will see two classes, Class 1 and Class 2, made for you by default. Edit the first class name to forward.

There are three ways you can add data to your project: using the device's webcam to click image samples, using the Upload button to upload images from your device, or using the Upload Classes from Folder button to import an entire dataset. For this example, we will take images from the camera. Put your hand in front of the camera and you should see the data points line up on it. If the data points do not line up, the classifier will not take inputs. Show the front of your palm to capture samples for the forward class. You need at least 20 samples for the model to train; for this example, take around 200 samples for each class. If you want to delete a sample, hover over it and click on the delete button.

Rename Class 2 as backward and take samples from the webcam, using the back of your hand for this class. Click the Add Class button and you will see a new class. Rename the class to left, tilt your hand to the left, and capture the samples. Add another class, rename it to right, tilt your hand to the right, and capture the samples. Add the final class and rename it stop; make a closed fist and capture the samples. As a rule of thumb, you should try to add an equal number of samples to every class, because large imbalances between classes can cause problems during model training.

Training is where the classifier extracts features from the samples and trains the model to recognize the poses in each class. The goal is to develop a model that can classify unseen samples according to the defined classes. Use the Advanced tab to alter the hyperparameters of the model. In the hand pose classifier, you can play around with the epochs, batch size, and learning rate. Do note that the learning rate is an extremely sensitive hyperparameter and can greatly affect the performance of your model. PictoBlox gives you the option to train the hand pose classifier in both JavaScript and Python; just toggle the switch on top of the training box to switch between the two.
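To give a feel for how those hyperparameters fit together, here is a minimal sketch of the kind of small network that could be trained on the 21 hand landmarks (63 values if each point has x, y, and z coordinates). This is not PictoBlox's internal training code; the layer sizes, the placeholder data, and the hyperparameter values are assumptions for illustration only.

    # Illustrative sketch only -- not PictoBlox's internal training code.
    # Assumes each sample is 21 landmarks x 3 coordinates = 63 features.
    import numpy as np
    import tensorflow as tf

    NUM_CLASSES = 5          # forward, backward, left, right, stop
    LEARNING_RATE = 0.001    # the sensitive hyperparameter mentioned above
    EPOCHS = 50
    BATCH_SIZE = 32

    # Hypothetical placeholder data standing in for the captured samples:
    # 200 samples per class, 63 landmark features each.
    X_train = np.random.rand(NUM_CLASSES * 200, 63).astype("float32")
    y_train = np.repeat(np.arange(NUM_CLASSES), 200)

    # A small dense network that maps landmark features to the five classes.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(63,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

    # Epochs and batch size enter here, just like the controls in the Advanced tab.
    model.fit(X_train, y_train, epochs=EPOCHS, batch_size=BATCH_SIZE,
              validation_split=0.2)

Changing LEARNING_RATE, EPOCHS, or BATCH_SIZE in a sketch like this corresponds to adjusting the sliders and fields in the Advanced tab.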
Training the model might take some time. Keep an eye on the accuracy graph while training runs. You can view a comprehensive report of your model's performance in the training report, which consists of the accuracy and loss curves of the model, the confusion matrix, and the true positives, false negatives, and false positives for each class.

Once training is complete, it's time to test our model on unseen data. The model trains itself on the samples we provide, but a model is only useful if it can classify unseen data just as well as it classifies the training data. This is where model testing comes into play. Much like training, testing can be done either with the device's webcam or by uploading data from your device. Start the camera and test your model. Click on the Export Model button on the top right of the testing box, and PictoBlox will load your model into the Python coding environment.

Set the beetle as the sprite for this project. Now set the default direction, size, and position of the beetle sprite. For clarity, we'll make our beetle say the direction it's moving in. To move the sprite, we'll make use of conditional statements: four if statements, one for each movement class, and each one will change the direction of the beetle and move it five steps in that direction. If the sprite touches the maze, we need to send it back to the starting position, so add a conditional statement for that as well. Finally, if the sprite reaches the banana, we display that the user has won the game and stop execution by using the break statement. Run the script to see your code in action. There you have it. You just used hand pose classification to make your very own beetle-in-the-maze game.
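To make that last step concrete, here is a minimal sketch of what such a script could look like in PictoBlox's Python coding environment. The sprite and object names ('Beetle', 'Maze', 'Banana'), the exact method names, the starting coordinates, and the recognize_pose() helper standing in for the exported model are all assumptions for illustration; the code PictoBlox generates and the blocks-to-Python API you see may differ.

    # Illustrative sketch only -- object names, coordinates, and the
    # recognize_pose() helper are assumptions, not PictoBlox's generated code.
    sprite = Sprite('Beetle')

    START_X, START_Y = -200, -150       # hypothetical starting position in the maze
    sprite.setsize(50)                  # default size
    sprite.setdirection(90)             # default direction (facing right)
    sprite.gotoxy(START_X, START_Y)     # default position

    while True:
        pose = recognize_pose()         # hypothetical helper wrapping the exported model
        sprite.say(pose)                # say the direction we're moving in, for clarity

        # Four if statements, one per movement class; 'stop' leaves the beetle in place.
        if pose == 'forward':
            sprite.setdirection(0)      # up
            sprite.move(5)
        if pose == 'backward':
            sprite.setdirection(180)    # down
            sprite.move(5)
        if pose == 'left':
            sprite.setdirection(-90)    # left
            sprite.move(5)
        if pose == 'right':
            sprite.setdirection(90)     # right
            sprite.move(5)

        # Touching the maze sends the beetle back to the starting position.
        if sprite.istouching('Maze'):
            sprite.gotoxy(START_X, START_Y)

        # Reaching the banana wins the game and stops execution.
        if sprite.istouching('Banana'):
            sprite.say('You won!')
            break

Whatever the exported code looks like on your machine, the structure to notice is the same: four direction conditionals, a reset when the maze is touched, and a break when the banana is reached.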