Hi, I'm Scott Thomas, and I'm here with Nitu Elizabeth Simon. We're going to be talking about how to combine a really, really old-fashioned machine with artificial intelligence. First, we'll start off with the sleep aid from our legal department, and we've also included some very specific instructions on exactly what devices we used, so you can replicate these results if you need to.

Now, our project objective was not to deploy thousands of factories, just to show Intel's customers how to do this. We work for Intel. Intel does not sell final solutions, but we do help our customers sell final solutions, or as we call them, market-ready solutions, MRS for short, because everything at Intel is a three-letter acronym.

I'm going to start with the hardware design and how we integrated our computer with the machine, but first we have to cover briefly what the textile industry looks like. Every one of these thousands of threads has to work. If one of them breaks, everything manufactured from that moment on is defective. The margins are very tight; the mills can't afford a lot of defects, so they have to watch it very closely, and this is the state of the art of textile inspection: this poor woman stands there a maximum of two hours at a stretch, because by the end of two hours you're brain-dead. Operators have to switch very frequently, or else they miss defects. This is ripe for some automation. These are what defects look like; this is what our system needs to find.

I started with the hardware, the old-fashioned textile machine. It's just a motor and some basic controls, and I added the image capture equipment and the machine controller. Nitu did all of the real-time inferencing and compute, and we had to work together so that her system could control my machine.

First problem: all the software developers were ready to get started, but I had no data. They had nothing to build the inferencing models against. So I just brought one of my SLR cameras from home and took pictures of fabric stretched over an LED ceiling-light panel. It was good enough; we just had to get data so they could get started and we could get going.

Once I got the developers off my back and gave them some data so they could get developing, I had to get hold of a fabric machine. We just ordered an off-the-shelf machine. It's very simple. It came with a series of its own problems, but it was really cheap and already done, proven engineering. In this picture, the operator stands behind the machine; you can't see the operator in the back. The fabric is stretched up the back in front of these three cameras, the little white boxes here, and this is the lighting system; the fabric goes up over the top and then in front of the operator. That's the way it works. We put the cameras on the back so that a defect could be found and then shown to the operator on the other side. That's why we designed it that way.

The physical design here is a top-down view, not from the back. The operator's head is over here somewhere, looking at the fabric, and the cameras are over here. It's about two meters of fabric that we had to view, and we needed about four pixels per square millimeter. We engineered it that way, four pixels per square millimeter, and the camera design backs itself out from there.
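To make "the camera design backs itself out" concrete, here is a minimal sketch of the sizing arithmetic, using the figures from the talk (a 2 m fabric width, 4 pixels per square millimeter, three cameras). The 10% overlap margin is a hypothetical placeholder; the talk doesn't give the actual stitching allowance or camera models.

```python
# Back-of-envelope camera sizing for the fabric inspection rig.
# Known from the talk: 2 m fabric width, 4 px/mm^2 density, 3 cameras.
# Assumed for illustration: 10% field-of-view overlap between neighbors.
import math

FABRIC_WIDTH_MM = 2000       # ~2 m of fabric in view
PIXELS_PER_SQ_MM = 4         # target density from the design spec
NUM_CAMERAS = 3
OVERLAP = 0.10               # hypothetical stitching margin

# 4 px/mm^2 means 2 px/mm in each direction (2 x 2 = 4).
linear_px_per_mm = math.sqrt(PIXELS_PER_SQ_MM)         # 2.0 px/mm

total_px_across = FABRIC_WIDTH_MM * linear_px_per_mm   # 4000 px
px_per_camera = total_px_across / NUM_CAMERAS * (1 + OVERLAP)

print(f"{linear_px_per_mm:.0f} px/mm linear resolution")
print(f"{total_px_across:.0f} px across the full fabric width")
print(f"~{px_per_camera:.0f} px horizontal per camera")   # ~1467 px
```

On those numbers, roughly 1,500 horizontal pixels per station, an ordinary 2-megapixel industrial camera at each position is plenty, which is why three modest cameras were preferable to one high-resolution camera with a wide lens.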
There was a thought to use one higher-resolution camera with a big fisheye lens to cover the whole thing, but that causes pincushion distortion, which can be corrected, but it's just a can of worms. We needed it quick, we needed it now, simple. This is how we deployed it.

Now, to get it integrated with Nitu's machine, I added an Arduino. The project manager said, I need it cheap and I need it quick; if you're an engineer, that means an Arduino. So I just added a relay to the Arduino. The Arduino controls the relay with five volts, and the machine runs at up to 380 volts, and we had terrible wiring diagrams. They had no idea how the machine was wired, so I selected relays rated for up to 380 volts. It turned out to be just a 24-volt control voltage, but still, we were ready for anything, because we didn't know what we were going to get. I recommend that in the future you use a programmable logic controller. That's more money, and it's not quite as simple to program, but it's much more robust.

Now, we had a lot of problems. As you can imagine, there was color tint from the cameras. We didn't bother fixing it; we just trained with the color tint. If it's all the same, it doesn't matter; the training model doesn't know it's ugly. There was the poorly documented machine wiring, like I said. Also, they hadn't wired it according to their own drawings, so it caught fire as soon as we plugged it in. That was exciting, but once the adrenaline wore off, it worked. Also, we brought SSDs from the United States, and they did not work with the computer at our manufacturer's site in China. Nitu had to completely start over from scratch on a really bad network connection. Finally, though, it did come together in the end. This is my Arduino board and relays, the three-light stack light, and a simple USB RS-232 interface to attach Nitu's system to mine. It finally worked in the end. What a relief. Next, Nitu will talk about software.

Thank you, Scott. I'll cover the software design, implementation, and the challenges. Any AI machine vision solution development goes through these four stages; it's a cyclic, cooperative process. As Scott mentioned, we used Basler cameras for data collection. They come with the Pylon software package, which we used for data collection and pre-processing. For annotation, we used OpenVINO's Computer Vision Annotation Tool (CVAT), which helped us annotate, or label, all the data as good and bad. For training, we used the TensorFlow Keras libraries, and for inferencing we used an Advantech industrial PC with an Intel Core i5, running the OpenVINO toolkit. On top of that sat the controller code Scott mentioned and the Edge Insights platform for data collection and processing.

This slide shows you the practical implementation of our solution. The fabric here is directly from the factory. All of these fabrics were defective pieces, with defects in each frame, so for annotation we basically had to crop the good images out of them. We used both binary classification and multi-class classification. With binary, we classified frames as defect and good. For multi-class, we decomposed the defect class further into discoloration and weft defects; examples of what a discoloration or a weft defect looks like are shown here in the image. With respect to training, we got very good accuracy for the binary classification process.
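To make the training stage concrete, here is a minimal sketch of a binary defect classifier in TensorFlow/Keras. Everything here is illustrative: the talk does not give the team's actual network architecture, image size, or hyperparameters, and the data/train/good and data/train/defect folder layout is an assumption for the example.

```python
# Minimal binary fabric classifier (illustrative sketch, not the team's model).
# Assumed layout: data/train/good/*.png and data/train/defect/*.png
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary",
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32, label_mode="binary",
    validation_split=0.2, subset="validation", seed=42)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(defect)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
model.save("textile_classifier")   # TF 2.x SavedModel, ready for conversion
```

For the inferencing side, a Keras SavedModel is typically converted to OpenVINO's IR format with the Model Optimizer (`mo --saved_model_dir textile_classifier`) and then run through the OpenVINO runtime on the edge PC. Here is a sketch, assuming the 2022-era openvino.runtime Python API and placeholder file names:

```python
# Minimal OpenVINO inference sketch (illustrative; paths are placeholders).
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("textile_classifier.xml")     # IR from Model Optimizer
compiled = core.compile_model(model, "CPU")
output_layer = compiled.output(0)

frame = np.zeros((1, 224, 224, 3), dtype=np.float32)  # stand-in camera frame
score = compiled([frame])[output_layer]               # sigmoid defect score
print("defect" if score[0][0] > 0.5 else "good")
```

On the real rig, the frame would come from the Basler cameras via Pylon, and a positive detection would drive the stack light and relay over the RS-232 link Scott described.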
For the live inferencing, you can see the entire setup here. It was running very well, and the accuracy we got was also pretty good, around 99%. The model trained on the white fabric also worked with the green fabric. The multi-class accuracy, though, was very bad in both cases.

Now I'll cover the challenges and the learnings from the software perspective. The network, as Scott mentioned, was very, very slow; both Ethernet and Wi-Fi were very slow. We couldn't do any training there, so we had to send all our data back to our team in the US, who trained the models and sent them back to us for inferencing the next day. Also, since the network was so slow, the Docker images were taking a long time to build, so we had to individually download the packages through the VPN and then build the Docker containers from offline installations of those packages.

With respect to data collection, we had to pick up some domain knowledge about what counts as a defect in the textile industry. There was a library available online that explained the different kinds of defects and how to distinguish between good and bad frames. Also, like I mentioned, all the fabric had defects in it, so technically all the frames had defects in them. For good data, we had to crop the images and then resize them for training purposes, and we believe this contributed to the poor accuracy we got with the multi-class process. We did use some augmentation techniques to increase the data set, but that also generated more erroneous data, which could have impacted the accuracy as well.

So the model worked, but it was not scalable enough. The model we trained on the white fabric worked on the cream fabric, but it failed on the brown fabric, so it was not a scalable model. It also worked only for simple fabrics, simple patterns and colors, and it would fail if the pattern was more complex. The model was also very, very sensitive to folds and creases. And because of the slowness of the network, our team in the US had to train the models.

Inferencing was very good: we got a 30-frames-per-second average inferencing speed. What we noticed is that as the motor speed increased, frames started to drop, so our system was not robust enough to handle the higher motor speeds.

So that ends our presentation. Thank you, and let us know if you have any questions.