In the previous video, we talked about how to transform raw images into vectors and use these embeddings for clustering. But we can do much more with images. We could, for example, use them for classification. Say I am the owner of an online flower shop. I have many images of flowers, from roses and daisies to tulips and orchids. Can Orange tell a tulip from an orchid? Let's check.

I have created a folder, flowers, with 9 subfolders containing images of different flowers, from daisies to lilies. Let us load the main folder, flowers, with Import Images. Orange treats each subfolder as an image class label. The widget loaded 82 images belonging to 9 classes. Great, let us observe the results in Image Viewer. The images were loaded correctly.

Now we will pass these images to Image Embedding, which returns a vector representation of each image in a data table. A quick inspection in a Data Table widget confirms that each image has a class label and is described with 2048 additional features from the deep-network embedding.

Now we will use cross-validation to check how well we can predict the type of flower from its image. We connect Test and Score to Image Embedding. Test and Score also needs a learner; logistic regression is a good choice for this problem, so let's connect it to the widget. Test and Score now performs 10-fold cross-validation on our images and reports the accuracy. It seems the area under the ROC curve, AUC for short, is really high, and the classification accuracy is also not bad. But I would rather know where our model went wrong. Connect Confusion Matrix to Test and Score. Confusion Matrix reports actual flower classes against predicted classes and provides a data instance count for each combination. Whatever we got right is on the diagonal; misclassifications appear in the off-diagonal cells. Here, for example, our model predicted callas for two bouquets that were actually made of tulips.
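For readers who prefer scripting to widgets, the same pipeline can be sketched in plain Python. This is a minimal sketch, assuming scikit-learn in place of Orange's widgets: a synthetic dataset stands in for the flower images, and random vectors stand in for the 2048-dimensional embeddings (the class names and sizes here are illustrative, not the actual tutorial data).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_classes, n_per_class, n_features = 9, 10, 2048

# Synthetic "embeddings": each class clusters around its own center,
# mimicking the vectors produced by Image Embedding.
centers = rng.normal(size=(n_classes, n_features))
X = np.vstack([centers[c] + 0.5 * rng.normal(size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# 10-fold cross-validation with logistic regression, as Test and Score does.
model = LogisticRegression(max_iter=1000)
pred = cross_val_predict(model, X, y, cv=10)
print("classification accuracy:", accuracy_score(y, pred))
```

Because the synthetic classes are well separated, the cross-validated accuracy comes out high, just as it does for the embedded flower images in the video.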
And for two bouquets, the model predicted roses where there were actually callas. Let's see what this misclassification is about. We can select this cell by clicking on it and then connect Image Viewer to Confusion Matrix. It looks like these bouquets indeed contain quite a lot of roses among a few callas, so the classification was not so wrong after all. We can also select other misclassifications in Confusion Matrix to further understand the behavior of our model.

Today we've learned how to use embeddings for image classification and how to inspect and interpret misclassifications. Try it out yourself with images of your own. Image clustering and classification are great fun thanks to deep-model embeddings.
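The cell-selection step can also be mirrored in code. A hedged sketch, again using scikit-learn rather than Orange's widgets: selecting a cell in Confusion Matrix amounts to filtering the instances with a given (actual, predicted) pair. The labels and predictions below are made-up illustrations, not the tutorial's real data.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

labels = ["calla", "rose", "tulip"]          # illustrative class names
actual = np.array([0, 0, 1, 1, 2, 2, 2, 1])  # true classes
pred   = np.array([0, 1, 1, 1, 2, 0, 2, 2])  # model predictions

cm = confusion_matrix(actual, pred)
print(cm)  # correct counts on the diagonal, errors off-diagonal

# Selecting a cell = filtering instances with that (actual, predicted) pair,
# e.g. bouquets predicted as roses that were actually callas:
idx = np.flatnonzero((actual == 0) & (pred == 1))
print("misclassified instance indices:", idx)
```

Those indices are exactly the images that Image Viewer would show after clicking the corresponding cell.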