Hello everyone. Thanks for joining. We have the pleasure of being joined by Mr. Connor McCauskey. He's the project manager of the CAVE Lab at MIT CTL and also the digital learning lead for the MicroMasters program. He's been doing fantastic AI-powered projects here at the center, so we're going to hear his ideas about how AI will impact supply chains in a minute. Thank you for having me, I really appreciate it. Thanks for joining.

So before we get started, just a couple of notes about the midterm and also the final. For the first time we tried limiting the feedback that everybody gets on the midterm, and I realize that makes things more difficult for you. But you also need to consider that this is intended to increase the integrity of the exams, which in the long run helps maintain the very good reputation of the certificate that you get. So think of the exam as pure assessment; do not expect any kind of learning. It's just something you submit, and it's for grading. Learning should happen in the graded assignments and other parts of the course. It takes some time to get used to, but that's the new policy we're following for the midterm and the final. So that's one thing to keep in mind. The graded assignments and everything else will stay the same. So make sure you ask about any questions or ambiguities you have, because there will not be an opportunity to learn from mistakes on the midterm and carry that over to the final. These exams are pure assessment as we move forward.

And then today we will talk about the applications of artificial intelligence in supply chain management. We have two polls for you. One of them will involve you submitting ideas as questions, so we're using the question area for that. The benefit is that everybody will see the idea you're submitting and can vote for it.
So be prepared for that, and maybe save your other questions for later. We'll get to the Q&A towards the end of the talk. All right. Without further ado, I'm just going to switch to our slides. Yes. So Connor, go ahead.

All right. Perfect. I guess I'll start here then. So today we're going to be talking about artificial intelligence and what is state of the art. Again, Tina, thank you for having me. I really enjoy these types of conversations. I love where supply chain is going in this field, and I love to see people's ideas for where supply chain is going to go. That's one of the reasons we're using questions today: we want to make it interactive for you, so you can give us ideas that we can begin to research, or even get you stimulated to research them yourselves and hopefully build this field out further. So let's switch right back over to the slideshow here. Yeah, perfect.

The agenda for today: we're going to start with what's state of the art in artificial intelligence. We'll begin with voice recognition and then object recognition, then jump to augmented reality, self-driving vehicles, and self-learning machines. Then I'm going to take a slightly different path and explain what makes neural networks, and deep neural networks in particular, powerful, and we'll walk through an interactive example on that. Finally, we'll take a couple of minutes to talk about next-generation AI. After that, we'll have some time for Q&A, discuss some of these processes further, and have an open discussion with you to get as much input from you as possible, as well as to help answer some of the remaining questions you might have. Awesome. So perfect. Actually, I think we're going to skip voice recognition.
We found some tech difficulties this morning where we can't play audio from a different video, so let's skip over that one and start right here with single-object recognition. If you're looking at the slideshow, you can see there appears to be some red creature, as well as maybe some sand or dirt and gravel around it. We want to train the neural network to learn what that is. In this case, we're giving it an entire picture and using what's called a convolutional neural network: we analyze subsections of the image, we analyze them in groups, and then we group those subsections into larger groups. We begin to form an idea of what's happening in the image. That's the idea behind convolutional neural networks.

So we can see here, using Google's TensorFlow, this is determined to be a mite. Now, this is a supervised machine learning process, which means we need to look at pictures that existed in the past and what they were labeled as. There has to be a human there at some point saying, hey, when pictures like this appeared in the past, we labeled them as a mite. You can see that based on this convolutional neural network, we think it might be a mite, but there's also a probability, and that's what those bar charts below show, that it might be a black widow, a cockroach, a tick, or even a starfish. So we're not doing a very good job with this one; we're not very sure that it's a mite. That's what makes neural networks powerful in a lot of senses: we can say, hey, look, we think it might be this, but here are ten other options it might be. Applying this to supply chains can be pretty powerful as well. So let's go to the next one. This one is pretty clearly a container ship. We can easily tell; it's pretty easy for us to guess.
It turns out TensorFlow does a pretty good job guessing that as well. You can see that we also think it might be a lifeboat. So based on similar pictures and images we've labeled online, we can get an idea of what these things are.

Do you want to also comment on TensorFlow itself and what it does? Yeah, so TensorFlow is a neural network package. Typically, you would code it in Python or in C. The idea is you're coding a neural network to learn from data that you're giving it; in this case, we're giving it pictures. We code a specific framework, and we update pieces of that framework so that in the end we can better predict what that picture is and what's inside of it. Now, there's a lot of math that goes into this. We're going to try to avoid the math for now; later in this live event, we'll explore how TensorFlow does this a little more specifically using a visual example.

Yeah, so what's really important for you to know here is what happens in a lot of these machine learning algorithms, and also in neural networks: you give the algorithm lots of observations, and it will pick up on the patterns and similarities. If you give it 100 pictures of a container and tell it each time that this is a container, over time it will learn the pattern that exists within those images. And if you show it a new container, it will match that pattern and predict that it's a container. Again, this comes back to your standard garbage-in, garbage-out problem too. For example, if you always mark something as a box when it's really a cargo container, in the end it will predict that it's a box, because what you trained it to do is say, hey, learn that this is a box. We trained it wrong. So if I give it bad data, I will get bad results.
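The top-few-guesses-with-probabilities output that TensorFlow shows for the mite picture can be mimicked in a few lines of plain Python. This is only an illustrative sketch: the raw scores below are made up for the mite example from the talk, not real network outputs, and a real classifier would compute them from the image itself.

```python
import math

def softmax(scores):
    # Convert raw classifier scores into probabilities that sum to 1.
    # Subtracting the max first keeps the exponentials numerically stable.
    m = max(scores.values())
    exps = {label: math.exp(s - m) for label, s in scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

def top_k(scores, k=5):
    # Return the k most likely labels, best first -- the "bar chart" view.
    probs = softmax(scores)
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Hypothetical raw scores for the mite image (invented for illustration).
scores = {"mite": 2.1, "black widow": 1.4, "cockroach": 1.1,
          "tick": 0.9, "starfish": 0.3}
results = top_k(scores, k=3)
for label, p in results:
    print(f"{label}: {p:.2f}")
```

The useful property, as discussed above, is that the network never commits to a single answer: it hands back a ranked list with probabilities, so a downstream process can decide what "sure enough" means.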
So a lot of times when you're training these things, it's important to make sure your data coming in is good and reliable. Otherwise, what you get out will be an exact reflection of what you put into the model. Also, lots of biases could get in, right? Absolutely. If I'm training a model to identify criminals' faces, and all I'm supplying is people from a specific race or a specific group, then that's a problem. Yeah. Or if the sample you've pulled is from a specific region, which has more of that group or that height or that gender. It's very important to determine what that is.

Let's go a little further and skip through these last ones. With this one we're essentially saying, hey, look, we can predict it's a motor scooter or a leopard. And I want you to note the leopard. Underneath leopard, you can see we also think it might be a jaguar, a cheetah, a snow leopard, or an Egyptian cat. So you can see we actually have the right idea, and that's because in the image, again, we're analyzing subsections of what's happening. We might pick up on an ear, or spots, or a limb that's shaped a certain way, and the combination of all of those leads the algorithm to eventually determine it's some sort of cat. The specific spots and the facial type might then lead it to determine it's a leopard. So again, it's learning from smaller groupings in the picture to determine an end output.

So let's take a jump here, from a single object in a picture to analyzing different subsections of a picture to determine the objects inside it. Instead of saying, hey, look, we have identified this as an ear or an arm, we're training it to identify sub-objects in the picture. In this case, we're recognizing a TV there; you can see the blue outline is a TV or a monitor. You might also see the red outline is a bookshelf.
Now, we don't capture the whole bookshelf because we don't actually have enough data there. We can determine with our eyes that we're seeing a bookshelf, but the computer just doesn't have enough information to outline it. So this is one of the limitations of artificial intelligence in this case, specifically multi-object, or even single-object, recognition. Let's go to the next part. If we want to identify objects in a group, or specific types of objects, this can be very powerful as well. Here you can see we're identifying a banana as well as some oranges.

So I guess this leads us to our first poll, right? Let's see. Here we tried to demonstrate what object recognition is and what it can do. Now let's think about the different ways this can impact supply chains. If you have algorithms that can look at images and understand what they are, what are the applications to supply chains in different areas? Think of logistics, think of factory operations, anything. Let me activate our poll. OK, let's go on Slido and look at the first poll: what are the applications of object recognition in SCM? I'll give you a few seconds to input your answers.

So I think it's important for us to keep in mind what is actually happening here. We're thinking beyond just, hey, look, this is a truck moving from A to B. This could be how we load the truck. This could be how I pick up items, or how products flow through my factory. So keep your minds really open on this and try to come up with some interesting new ideas. All right. I am assuming we have some poll responses coming in right now. Is that correct? Yes. I'm not receiving the messages from you guys. Yeah, we are not receiving the poll data.
OK, so while we're trying to figure that out technically, let's take a second to discuss some of the other applications that we've thought about in this field. While we fix this, let's switch to some of the ideas that we have already put up here, so we can discuss them, and then we'll see what's going on there. OK. Absolutely.

So one of the ideas, I think you came up with this one, that I really like is using drones and image recognition for inventory. Will you tell me more about that? Yeah, absolutely. I was working at Walmart for a while, and one of the very interesting ideas the team was pursuing back then was to use drones for monitoring inventory levels in the warehouse. Imagine a very big warehouse where tons of people are working and tons of things are moving in and out. It's really hard to keep accurate records of how many of each item we have and when they run out. There are a lot of opportunities for things to go missing, or for somebody to pick something and forget to record it, so the IT system shows that we have it, but it's just not there. So keeping accurate inventory records is very challenging, and it's very much needed as well. When you have a customer, you can't say, I think I have the package, but it's somewhere in the warehouse. If it's not on the right shelf, you don't have it. So you need some tool to go and look at the shelves periodically and see what's actually there.

The idea here was to have a drone fly over and look at the shelves, and from there identify which shelves are empty. Each shelf has a label, so the algorithm's task is to take an image and identify an empty shelf (that's one image recognition job), then look at the label and decipher what kind of product the label refers to, then link the two and update the records.
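The drone workflow just described (spot an empty shelf, read its label, update the records) can be sketched as a reconciliation step. Everything below is hypothetical: the shelf labels, counts, and the idea that the vision models emit `(label, is_empty)` pairs are illustrative assumptions, not Walmart's actual system.

```python
# Toy sketch of the drone workflow: one vision model flags empty shelves,
# another reads each shelf's label, and the joined results are used to
# correct the inventory records.

# Hypothetical inventory records: shelf label -> count the IT system believes.
inventory = {"A-101": 12, "A-102": 7, "A-103": 3}

def reconcile(frames, inventory):
    """frames: list of (shelf_label, is_empty) pairs from the vision models.

    Returns the corrections applied: shelves the records said had stock
    but the camera saw empty.
    """
    corrections = {}
    for label, is_empty in frames:
        if is_empty and inventory.get(label, 0) > 0:
            # Records disagree with reality -- zero out the count so a
            # replenishment (or investigation) gets triggered.
            corrections[label] = 0
    inventory.update(corrections)
    return corrections

# One pass of the drone: A-101 looks stocked, A-103 looks empty.
frames = [("A-101", False), ("A-103", True)]
corrections = reconcile(frames, inventory)
```

In a real deployment the interesting work is in the two vision models; the record update itself, as this sketch suggests, is the easy part.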
That was an amazing application of drones, and you could think of all kinds of other things that image recognition on drones could do. Comparing this to what had previously been done: it was done by hand, right? There's a lot of counting and manual data entry. So we're really trying to get these repetitive tasks into a place where we can automate them. This doesn't mean that our warehouse is managed by machines. It just means that we're attempting to automate tasks that are very repetitive and hard for humans to do, to make them easier. Absolutely, and this was almost a tenfold improvement over what people could do.

So let's see, do we have any questions? For some reason the polls are not showing up. We're having some technical difficulties with the polls, so I guess we'll just skip over them for now. Let me give it one more shot. Perfect. I just want to make sure we don't miss this opportunity. It looks like the polls aren't working, but the questions will work. If you want to ask a quick question, I can try to address it now while we figure out some tech things.

So this is actually a good spot for us to go back and look at some of the questions. We have a good question from SheShare: how should we prepare for AI in SCM from a career perspective, and what are the job opportunities? Essentially, as AI moves into supply chains, what are the things we should be prepared for? I'll try to address this from a broad perspective. The interesting thing with artificial intelligence is what it can do, but more interesting for most of us is what it can't do. There are a lot of things artificial intelligence cannot do yet. It can't manage a supply chain. It can't handle a lot of processes even when driving a truck. For example, a lot of people talk about how autonomous vehicles are going to change trucking. On the highway, yes, we can do a lot of optimization.
When we get into a city, if the truck actually has to go over a curb to get from one place to another, artificial intelligence can't do that; we give it rules saying, hey, you can't go over a curb, you can't do this. As long as those rules exist, we're going to need humans who can say, hey, look, this is a gray area; I know this is how we have to do it to solve the problem. So I think a lot of the question with AI is how we actually manage what's going on in our supply chain, and having an artificial intelligence process do that is incredibly difficult just because of the complexity. Giving it enough data to learn what a good supply chain management strategy is would take many companies' data and many processes over many years. And even then, when something new happens, the only thing it can rely on is what happened in the past. We as humans are pretty good at using what happened in the past, but coming up with a new idea that's never been done before is something that artificial intelligence just can't do. In the future, there might be processes that help it do that, but for now, a lot of artificial intelligence research is about what AI can't yet do in those fields.

Do we have polls coming back? Yes, we figured it out. But just a quick note there: depending on the level of familiarity you have with these technologies, you definitely want to invest some resources here. Given the capabilities that AI has, it's really conceivable that it will creep into all kinds of tasks that we do, and maybe change the nature of our jobs. You've got to be prepared, especially for the next five years, or even faster. So definitely invest in some capabilities; understanding how these systems work and knowing some level of computer science is a must.
A lot of our on-campus MIT students are now spending a lot of time coding in Python or other technologies, which you didn't have to do before as a supply chain professional. So definitely invest some time there. And over time, and I've gone through this process myself, self-training on a lot of these technologies, what you'll find is that the capabilities they give you are significant in terms of automating a lot of the boring things you do every day and achieving a lot more efficiency.

So actually, I would love to go off that a little more. Let's jump back to what AI is really good at, and like you said, it's repetitive tasks. If you have a well-defined repetitive task that happens in your supply chain, that is a ripe place for AI to take it and make it something different. For example, maintaining vehicles and determining when they're likely to break down. If you can put enough sensors in a truck, and you have enough data saying, hey, look, trucks are likely to break down when these sensors start doing this, it's a ripe place to determine, hey, I should do preventative maintenance on my truck. Now, again, that doesn't mean an AI machine is going to be repairing your truck; that's very hard, and it's hard to determine where the problem is. But having AI tell you where there's likely to be a problem, so you can fix it beforehand, is a very, very powerful potential application. Absolutely. It's finding very specific things that can be codified and are simple enough. Even though we say artificial intelligence, it's not as intelligent as we are, hopefully. Yeah, yes.

All right, so let's take a quick look at the poll. All right, cool. So what are the applications of object recognition in SCM? Inventory management: we just talked about that. Someone says testing; that's probably testing the quality of the product. Defect detection is a great use case. That's a great point.
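The preventative-maintenance idea above can be reduced to a toy version: learn a single sensor cutoff from historical breakdowns, then flag new readings above it. A real system would use many sensors and a proper model; the readings and the one-threshold "model" below are purely illustrative assumptions.

```python
def learn_threshold(readings, failed):
    # Pick the cutoff that best separates trucks that broke down from
    # healthy ones in historical data -- a stand-in for a trained model.
    best, best_acc = None, -1.0
    for t in sorted(set(readings)):
        correct = sum((r >= t) == f for r, f in zip(readings, failed))
        acc = correct / len(readings)
        if acc > best_acc:
            best, best_acc = t, acc
    return best

# Hypothetical history: a vibration reading per truck, and whether it failed.
history = [0.2, 0.3, 0.35, 0.8, 0.9, 1.1]
failed = [False, False, False, True, True, True]
cutoff = learn_threshold(history, failed)

# A new reading at or above the learned cutoff triggers preventative
# maintenance before the truck actually breaks down.
needs_maintenance = 0.85 >= cutoff
```

The point of the sketch is the shape of the problem, not the method: a well-defined, repetitive question ("will this truck break down soon?") with plenty of labeled history is exactly the kind of task the discussion identifies as ripe for AI.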
You could have... oh, produce is another great example of object recognition. Absolutely. When is my produce going bad? That's a really great point. You could have an algorithm look at the product and identify defects on the production line. Damaged packaging is another really good one. Anomalies. Ooh, okay, so like a package that's a different size or a different shape? Absolutely. And imagine how much added efficiency it has over humans; humans are terrible at doing repetitive tasks. So yeah, counting, counting, that's another good one. A lot of quality risks, okay. Quality, a lot of answers about quality, and warehouse management. Yes. At the warehouse, we talked about counting inventory, but there could be other things as well, like damage to products in the warehouse.

Ooh, I saw a good one there: automating decision making. Now, on a broader level, that might not be somewhere AI is ready for yet, but for a very specific decision-making process, for example, how much inventory do I stock of this product next week, AI could be very good at automating that decision. Oh, there's a good one here: fruit size recognition. Fruit size, okay, so sorting produce by size. Yeah, yeah. Okay, I like that one a lot. And then you can categorize them and sell them at different prices, right? There we go, yeah. Cool. All right, tons of great ideas. Different SKUs. Okay, so SKU management: what's flowing through my product line? Where is it in my factory? Oh, I like that one. Picking SKUs as well. Okay, cool. Very, very interesting. Absolutely.

So now let's go back and continue. In the second part, we'll talk about some more of the technologies, like augmented reality and self-driving cars, and we will then continue with our brainstorming question. So let's finish here. Yeah.
I just wanted to mention that on the right, there's an application being developed; I think Walmart is trying this, and some other retailers as well. What's happening here is we are looking at an image of the produce and seeing whether it's gone bad or is as intended, but at the same time, this also has the capability to look at the supply chain data: what temperatures this product has been exposed to, how long it's been in transit, what the conditions were at the different facilities that handled it, and overall come up with an accurate prediction of when it's going to go bad. You can imagine how important this is going to be for a retailer; they want to know how much time they have to sell the product. And this might not involve vision at all. It might be pure data input: given all the information I know about the product, without using any pictures, I predict how long I have until it goes bad. Absolutely. Very cool application.

Awesome. Augmented reality. So another really good application of artificial intelligence, in image recognition and a few other fields, is augmented reality. We use image recognition to determine where, say, a couch is in the room, or where a couch should be in the room. This is a good example; I think IKEA is using this. If I want to buy this couch, I can use an augmented reality app on my phone, point it at the corner, and it determines where my room is, where the floor is, what's going on, and it's able to put a couch on the floor in the corner so I can say, hey, this is what my new couch would look like in my room. A very powerful sales tool, but beyond that there are a lot of really good applications in supply chains. From a sales perspective, it's a great application off the bat, but we see other applications that range to, say, fixing a truck. That's another great application.
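The no-vision version of that prediction (pure supply chain data in, remaining shelf life out) can be sketched as a simple formula. The penalty rates and ideal temperature below are invented for illustration; a real model would be fitted to historical spoilage data rather than hand-picked.

```python
# Hypothetical rule-of-thumb model: every degree above the ideal storage
# temperature and every day in transit eats into the remaining shelf life.
# All coefficients here are made-up assumptions, not fitted values.
def remaining_shelf_life(base_days, avg_temp_c, days_in_transit,
                         ideal_temp_c=4.0, temp_penalty=0.5,
                         transit_penalty=1.0):
    loss = max(0.0, avg_temp_c - ideal_temp_c) * temp_penalty
    loss += days_in_transit * transit_penalty
    # Shelf life can't go negative -- at worst the product is already bad.
    return max(0.0, base_days - loss)

# Produce shipped for 3 days at an average of 8 C, starting from 14 days
# of shelf life: 14 - (4 degrees over * 0.5) - (3 days * 1.0) = 9 days left.
days_left = remaining_shelf_life(14, 8.0, 3)
```

This is the sense in which it "might not be vision at all": the temperature and transit history alone carry enough signal for a retailer to decide how long they have to sell the product.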
If I know the part that's likely to go bad, because the AI in my truck told me this is where I need to do preventative maintenance, I might be able to wear special goggles that point and say, hey, here's the area that's going to go bad. So I don't even have to look at a separate computer output. I wear my goggles, the truck comes in, it loads that data, and then I can look at the truck engine and it automatically highlights the piece I need to fix, or the nut that is loose. So it's adding extra information on top of what our eyes see to improve efficiency. Yeah.

So next, let's go to self-driving vehicles. We actually have Lex Fridman here; he does a lot of really great work with autonomous vehicles, and the AgeLab and the mechanical engineering side do a lot of really interesting research in that area. I would highly recommend you check out the AgeLab's autonomous vehicle work. In the meantime, let's jump over to what really happens with autonomous vehicles. We have a series of sensors that we put on cars. One of the big ones is LiDAR: a system that spins quickly on the top of the car and maps out the entire environment around it. That helps us know how far we are from objects, as well as what else is happening in our environment. Is that image data? Not necessarily; think of it as spatial data for what's happening around the car. We also have image data coming in that might help us determine the difference between a tree and a pedestrian, among other things. We use the LiDAR to determine how far we are from the things around us, because sometimes image recognition doesn't do distance very well. Then we might have other sensors in the car. All of this data goes into a central place where it's analyzed and gives us output as to what should happen.
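The "central place" where sensor data comes together can be sketched as a tiny fusion step: the camera says what each object is, the LiDAR says how far away it is, and a simple rule acts on the combination. The object IDs, labels, and distances below are made up, and a real autonomous-driving stack is vastly more involved than this.

```python
# Toy sensor-fusion sketch: join camera labels with LiDAR distances by a
# shared object ID, then apply a trivial decision rule to the fused view.
camera = {"obj1": "pedestrian", "obj2": "tree"}   # what each object is
lidar = {"obj1": 12.5, "obj2": 40.2}              # how far away, in metres

# Fuse only objects that both sensors agree exist.
fused = {oid: {"label": camera[oid], "distance_m": lidar[oid]}
         for oid in camera.keys() & lidar.keys()}

def should_brake(fused, danger_labels=("pedestrian",), min_dist=15.0):
    # The camera alone can't judge distance; the LiDAR alone can't tell a
    # tree from a pedestrian. Only the combination supports the decision.
    return any(o["label"] in danger_labels and o["distance_m"] < min_dist
               for o in fused.values())

brake = should_brake(fused)
```

The design point matches the transcript: each sensor covers the other's weakness, and the decision logic only makes sense over the fused data, not over either stream alone.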
Again, this comes back to supervised machine learning, which means we're training it on given inputs. One of the ways we train it is to put a human driver in a car with all these sensors on it. What the system learns is: what does the human actually do in all of these situations? It begins to learn how to drive like a human does. That assumes the human is a good example, and again, this comes back to garbage in, garbage out. But then it might also learn: when a human does something that leads to an accident, we can learn from that as well and say, hey, don't do that; that's bad; the human did a bad thing. Then the machine really starts to learn based on all of the data and the environment around it. I don't want to go too much into the math on that, but the idea again comes back to: we have this data, this process around us, and we're able to mark when it's going well and when it's not. The machine gets to learn and begins to give us decisions in the future.

Driving is a really repetitive task, so it's a ripe area for disruption. Driving in heavy traffic, with people cutting us off in a crazy area: maybe not so repetitive. In the future, it might be repetitive enough that there's value in automating it. Now, most autonomous vehicles operate very well on highways, because a highway is a very controlled, simple environment, normally pretty well marked. Then we get to these city areas where sometimes you have to do something illegal, or pull out of your lane to get around something or else you'll stop traffic for an hour, these types of things. And that's a very good point. When we design AI tools, they work okay, say, 99% of the time, but that 1% is what kills us.
We already have cars that can drive very well, as you said, on highways or in some very controlled environments. But when we think about night, day, snow, rain, pedestrians jumping in: if they can't handle that, we still can't rely on the technology completely. So a lot of the development and innovation time in these kinds of things goes into handling exceptions and rare cases. Yeah, absolutely. And going off that, one of the biggest factors in a self-driving vehicle's ability is how well the road is actually painted. If there's fresh paint on the road and it's well marked, it's a lot easier for a self-driving vehicle to know where it is, where it should be, and what's going on. Now, a lot of roads around the world, especially in Boston, are not well painted, and that leaves self-driving vehicles in situations where it's hard to know what to do. In those cases, making a mistake is so expensive that it's better to let the human take over. That's why a lot of manufacturers are pushing towards not-quite-full autonomy: in certain circumstances, the car says, hey, human, you need to take over here. And that's why we're still going to have steering wheels in cars for a decent amount of time yet.

Yeah. So one approach is to make your algorithm avoid making decisions in rare or special cases where it's known that it's not going to do a good job. Another is to make the algorithm more and more intelligent: give it more data and more patterns to learn, so that it can handle special cases. But there's also a third approach, where you make the environment simple enough for the tool to operate in. For instance, in China, they have designed a highway for autonomous cars. It has enough sensors, and all kinds of infrastructure for sharing information between the road and the car, that it becomes really easy for the car to navigate. So that's another approach.
So you may not want to adapt your AI tool to the reality; maybe you've got to change the reality, change the environment, and make it simpler. Yeah. And then the question becomes: at what expense is it worthwhile? In a lot of businesses right now, sometimes it doesn't make sense to invest that heavily in AI, but rather to choose the right areas where there's low-hanging fruit, where it's easy to build an AI solution that saves a lot of time and money right now. And a lot of the process actually comes back to convincing management: getting those quick wins, showing that there's true value here, and showing where there's not value. Being able to show both of those can be very helpful. Great.

So let's end on self-learning machines. I think we're going to show a video here of a genetic algorithm working in combination with a neural network. The idea is: how does an object, given certain realistic physical parameters, learn to do something? What we're doing here is saying, hey, you're a three-piece object with certain motion abilities; learn to jump over a ball. It does look a little blurry here, but you can get the idea, so let's jump forward a couple of minutes. In the beginning, we just take three random objects, put them together, give them pivot points, and say: jump over this ball. We tell it where the ball is; it has some sensors that give it data about how and when it might jump. But other than that, we don't tell it how to jump. We say: you need to learn to jump over the ball. So think about it: this is an object learning how to jump over a ball on its own, just by trying different strategies and figuring out which ones worked and which didn't. And in the beginning, you'll see that a lot of the time it just doesn't do a good job.
The ball hits it, but then it learns: it looks like this approach doesn't work, so we'll try something else. One of the other things it's doing is determining where it messes up; it's important for it to learn, hey, I messed up here. The way it measures a mess-up is how much of the ball passes through how much of its body. Right now you can see it's kind of cheating, because it's just letting the ball pass through the bottom of its body. Now it's beginning to learn that if it does some weird motions when the ball comes through, it can almost jump over it, and so it gets a better outcome. The score isn't binary: touching less of the ball is a better outcome. So we train this, and right now we're on iteration 75 with this object shape, but in each iteration we let a thousand, maybe ten thousand, balls go through while it learns, at first just randomly jumping around. Eventually it gets to a point where it can pretty consistently jump over a ball. It figures out the best strategy to achieve this.

And you could think of this ball as any kind of problem we have that the algorithm could try, if you give it enough time and data to try different strategies and figure out which one works best. Absolutely. So let's say you give this machine the ability to learn about your supply chain. Are you willing to fail for 249 years in order to come up with a good strategy for something as simple as jumping over a ball? That becomes a real question: at what point do we have enough data for this to be reasonable to do? Self-learning machines are a great example. For some tasks, they've been repeated enough times with enough data that it's a great place to do something. Maybe learning to follow a human with a pallet truck: combining autonomous vehicles with a self-learning machine that learns to follow a human around the factory floor.
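The trial-and-error learning in the video can be caricatured with a minimal genetic algorithm: each candidate is just a single jump-timing number, fitness rewards timings close to a (made-up) ideal moment, and the best candidates survive and mutate into the next generation. All numbers here are illustrative; the real demo evolves whole body motions, not one parameter.

```python
import random

random.seed(0)  # make the random search reproducible

IDEAL = 0.6  # hypothetical "best moment to jump"; the learner never sees it

def fitness(timing):
    # Stand-in for "how little of the ball passed through the body":
    # the environment scores each attempt, and the score is not binary.
    return -abs(timing - IDEAL)

# Start from 20 completely random jump timings, like the early flailing.
population = [random.random() for _ in range(20)]

for _ in range(100):  # generations of trying, scoring, and retrying
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]
    # Each survivor produces four mutated offspring: small random tweaks
    # to a strategy that already worked reasonably well.
    population = [min(1.0, max(0.0, s + random.gauss(0, 0.05)))
                  for s in survivors for _ in range(4)]

best = max(population, key=fitness)
```

After enough generations, the best timing clusters near the ideal moment, which is the whole trade the talk describes: no instructions, just an enormous number of scored attempts.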
And as workers pick, they can put items into baskets, picking from multiple baskets together without having to drag a cart around. And that's where supply chain professionals like you have a very important role: identifying which areas in supply chain are better suited and more cost-efficient for AI. As you said, not everything can be economically justified with AI. So it's the business's job to identify where and how we're going to apply AI and what the benefits are compared to the costs. Absolutely. And there's a lot of entrepreneurial thinking going on here: where are my pain points that are really repetitive? Absolutely. Start identifying those, and AI is right there. And a lot of it is about complexity. So far, as far as we've seen, AI is still not able to handle very complex situations that are really easy for our brains to handle, because we pull in lots of information and years of experience in the world, all in one place. And contextual understanding. And contextual understanding. We see an apple and we understand it's a fruit, but even that association is very hard for an algorithm. So the complexity matters a lot. You want to find an area that's not complex and that's repetitive, which means automating it would be economically justifiable, and that's simple enough to implement. Absolutely. I think that's great advice. All right. So we have a couple more applications, and I think we're doing pretty well on time. We already talked about augmented reality in repair operations. On the right you can see somebody repairing an engine, and their glasses are also showing the part numbers. You could imagine all kinds of other information shown here, such as when a part was installed, how old it is by now, and which parts need a periodic check. These things are extremely hard for a human to keep track of and keep records on, but for a machine they're very easy. Very, very easy.
And so this is where the conditions are right for disruption: something humans are bad at, it's repetitive, I have to keep looking at a screen, but it's easy to actually implement. OK. And on the left you can also see a Boston Dynamics picking robot. I don't know if it'll be easy to demonstrate, but we'll see how the technology cooperates. I think we had it somewhere here. All right, it's here. OK, cool. So let's look at this. This was developed by Boston Dynamics, who have built a series of very, very realistic robots. In the beginning, we'll see the robot just walking outside, and then it follows with the actual picking. All right, it doesn't look like it's showing properly, but... Oh, perfect. OK, it's getting better. Yeah. So the robot understands where the box is, how to put its hands around it, how hard to press it, and where to put it. And the important thing is: if you program all of these movements 100% precisely, it's going to work very well, but only in ideal situations. There are all kinds of other cases that jump into your operations and disrupt them. For example, the box slips down, or something hits the robot, or the robot loses balance. How do we develop an algorithm that is capable of handling all of these exceptions? That's where AI is really powerful. And here you can see some of these exceptions: somebody pushes the box down, and the robot is able to come back, identify the object again, and attempt to pick it. Once you get your AI tool to handle these exceptions and manage the complexity we have in the real world, that's where it's really, really powerful. I'd like to point out that Boston Dynamics does a really, really great job with a lot of these things, but you have to remember they are very far away from replacing humans.
They still have a lot of work to do on things like: how do I grab things? What do objects look like? How does that work? They're solving a lot of these early problems, and there's a lot more to solve. I think we'll be looking to them to see what's going on in the future and how they solve these problems. Perfect. Okay. So we want to invite you to go back to Slido. We definitely did not cover all there is today; we just wanted to point you toward some important applications and give you some food for thought. Now, with that, we want to ask you to think about other applications of AI. Maybe think about your own company, or specific areas that have been a pain point for you: very costly, repetitive, and simple enough that AI could improve the operations. So go on Slido and post it as a question, so that everybody else can also see it and vote on it. We want it to be interactive, with everybody voting on the ideas, and then we'll look at maybe the top five ideas and see what they are. Okay, so just go back on Slido and submit your ideas as a question. Okay, let me also make sure. Okay. As supply chain professionals, you are in a very good position to identify different ways to improve operations and profitability, because supply chain is responsible for a large share of the costs a company incurs. Much of that money goes to purchasing materials, making new products, and moving products into the hands of customers. Even small improvements in any of these activities can turn into huge profit increases. Absolutely. And beyond costs, supply chain can really add value. A lot of people don't think of it like that; they think of it as a cost center. Amazon typically thinks of it as a place to add value.
Having two-hour delivery means it's not just that I'm buying a product, but that I have it in two hours. That's value for a lot of customers. So finding ways to create value through how you deliver, in a specific time window or with whatever extra your customer desires, can be very beneficial. And supply chains can oftentimes hit those processes right on the head. All right. So as we talk, look at the questions. They're now live; up-vote the good ideas and good applications, and we'll see how it goes. Do we have any good suggestions coming in yet? All right, we can actually follow them. Some old questions are here as well, so you can ignore those for now. We have a good question here: would AI stop the need for forecasting? That's a great question. Artificial intelligence can be very useful in forecasting. At the same time, sometimes it's a bit too much: you end up with a model that's really not that robust to things outside of what it's seen before. For example, a traditional forecasting model like a Holt-Winters model might actually be better in the long term than an advanced artificial intelligence model. That comes back to the fact that the Holt-Winters model doesn't assume it knows that much; it just assumes that certain parameters and time-based patterns will continue. Whereas with artificial intelligence, sometimes the model gets a little bit too smart for its own good, and it starts making projections that are way out of line. When we go through an example in a few minutes, we'll discuss where those places are and why sometimes traditional forecasting might be much better. It might be because we really have no idea what the future is going to hold, and sometimes simple exponential smoothing might just be the best.
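As a concrete reference point for the simple-models argument above, simple exponential smoothing fits in a few lines of Python. This is a minimal sketch, and the demand numbers below are made up for illustration:

```python
# Minimal simple exponential smoothing: the next-period forecast is a
# weighted blend of the latest observation and the previous level.
def ses_forecast(series, alpha=0.3):
    level = series[0]  # initialize the level at the first observation
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level  # forecast for the next (unseen) period

# Illustrative weekly demand (made-up numbers).
demand = [100, 102, 98, 101, 99, 103, 100]
print(round(ses_forecast(demand), 1))
```

With `alpha = 0.3`, the forecast stays close to the recent average of this stable series. The model assumes only that the current level will continue, which is exactly the humility that can make it more robust than an over-fitted AI model.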
Even though the AI model might claim to be better on the data it has seen, the future just isn't going to match that standard. There was also a good question that I don't want us to miss: what are the tools that help apply AI to supply chain? What should people think about and maybe plan on learning? Yeah, that's a great question. Most of the tools that make AI applicable in supply chain actually revolve around data collection. It's surprising: it's not really "oh, I have this advanced model." Most of it is about collecting the right data that can give meaningful results. Building out that data collection process is really what produces a powerful result, because you have powerful data going in. Once you have the correct data coming in, you can identify the key factors that will actually drive an algorithm, and then developing the algorithm itself becomes the easier part. In general, there are a couple of different trains of thought. Typically, Python is the go-to language right now. There's a wide variety of frameworks that use Python as essentially their front end: it's the place where you code for those programs. TensorFlow is a really good one. PyTorch is good, so you can code in Python, and then there are layers that sit on top, like Keras, where I can write a program and say: use TensorFlow, use PyTorch, use Theano. So there are several packages you can learn in Python. SAS also has some good artificial intelligence tools, with easy ways to build neural networks, and R has some pretty good functions as well. And if you want, you can code it yourself; you could even do a JavaScript version. There aren't really many libraries in JavaScript, but you can code those matrix multiplications and functions in any language you want and build it out yourself. But the most famous one is Python; SAS, maybe less so.
But you want to go with tools that are easy, especially if you don't have that much exposure; you want to go with tools that are easy to learn. The problem with R, for instance, is that it can't handle all the kinds of programming needs we have; it's mostly for statistical work. So Python would be, I guess, the safest. It's a pretty good general-purpose language, although when you start getting into really large data sets, you have to do a lot of special things to make Python work right, and there are some challenges at that point. But once you get to that point, you're going to be investing in those processes regardless of which language you choose. Absolutely. So let's look at the applications. James has a very good idea here. He asks: do examples exist of an inventory allocation or customer order routing agent based on reinforcement learning? So you would handle customer order routing or inventory allocation on the fly using reinforcement learning. I think inventory allocation is a really good use case. Instead of programming a really complex model, or even an advanced statistical model, we just say: over time, maybe we don't understand all the distributions going on; just learn a way to stock so that we're hitting about a 95% service level over a longer period. That's a really good use case for reinforcement learning. People have spent their lives coming up with sophisticated mathematical models to figure these problems out. But to be able to derive the equations, you have to rely on very carefully chosen assumptions, and as you know, the world never follows our assumptions. So all of these models are to some extent inaccurate. A completely different approach is to let the algorithm learn what needs to be done. And if you give it enough time and data, and you carefully articulate the problem in the correct way (and that's the art of it),
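That reinforcement learning idea can be sketched in a few lines of Python. This is a toy illustration under made-up assumptions (uniform demand from 0 to 10 units, a $5 margin per unit sold, a $1 holding cost per unsold unit), not a production model: a stateless epsilon-greedy learner tries stocking levels and keeps a running average of the profit each one earns.

```python
import random

random.seed(7)

# Toy newsvendor: each period we stock q units; demand is uniform on 0..10.
# Assumed economics (illustrative only): $5 margin per unit sold,
# $1 holding cost per unsold unit.
PRICE, HOLDING = 5.0, 1.0
ACTIONS = list(range(11))  # candidate stocking levels 0..10

def profit(q, demand):
    return PRICE * min(q, demand) - HOLDING * max(q - demand, 0)

# Stateless Q-learning (a multi-armed bandit): average profit per level.
q_value = {a: 0.0 for a in ACTIONS}
count = {a: 0 for a in ACTIONS}

for _ in range(50_000):
    # Epsilon-greedy: mostly exploit the best-known level, sometimes explore.
    if random.random() < 0.2:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=q_value.get)
    r = profit(a, random.randint(0, 10))
    count[a] += 1
    q_value[a] += (r - q_value[a]) / count[a]  # incremental average

best_level = max(ACTIONS, key=q_value.get)
print(f"learned stocking level: {best_level}")
```

For these invented numbers, the classic newsvendor critical-fractile answer is 9 units; the learner typically settles near that without ever being told the demand distribution or the formula, which is the point.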
It can learn over time the best policy for inventory allocation, best policy for ordering, which could be very, very much more complicated than what we could achieve through our equations and math. Absolutely. And then the other thing you need to keep an eye on is if you have something that's never happened before, it's had no way to learn how to react to that. And so assuming we're kind of going with the status quo and my business doesn't change a lot, that's very powerful. But if my business has huge, wide changes to like the entire structure, really hard for that algorithm to learn. So just keep in mind what type of business you have, how big the changes are, how big are your order fluctuations on a weekly basis, those types of things to determine whether or not you want to go with a more of a stock model or more of an advanced reinforced learning model. Shishir is also saying one application could be continuous forecasting on real-time. So real-time forecasting. Yesterday we actually was speaking with some company representative, they said, we have, if you want to forecast for a few days later, we can do it, but we want to understand what's exactly going to happen like one hour from now, two hours from now. So to be able to have an algorithm quickly analyze the positive data that you have and come up with a prediction or an estimation of how many products you have here, how many people are in your stock or how many vehicles are in your parking lot. That's, I guess, something very, very important. Good application of AI. I think we'll probably take the next 10 minutes or so to quickly... Five minutes. Yeah, five minutes to go through a quick example of what makes deep neural networks powerful. So lots of questions here. We'll do our best to answer them later. But we definitely want to switch here to a demonstration of TensorFlow. Basically how neural networks work and how they identify the difference in the data. Okay. 
So let's just start by looking at some data. Here you can see a gray chart with orange dots and blue dots. You can think of these as certain chemical levels and whether someone is male or female: if they're female, they're blue; if they're male, they're orange. So we're predicting some difference. This could be a supply chain problem where we're predicting whether something is a bad apple or a good apple, a bad product or a good product, et cetera. So we're just predicting a simple binary outcome here. But let's look at what makes these models powerful. If I go here, I actually have two variables, x1 and x2. x1 runs along my x-axis, so as I move along the x-axis, that's my x1 value, and x2 runs along my y-axis. Given some weight multiplied by x1 and some weight multiplied by x2, I'm going to try to predict an output. Right now, I've randomly initialized those weights; these are these two lines here. If I want this to become better, I want to train it: I make a prediction, see what I got right and what I got wrong, then adjust the weights and update them again. So I'm going to actually run the model training. If you want to learn more about this, come to SC4x; we do a lot of machine learning there. But the idea is that I've now updated the model: a positive multiplier times x1 and a positive multiplier times x2 gives me a predicted output that's meaningful. You can see down here we have a scale where negative one is orange and positive one is blue. So when I'm more positive over both of these values, I'm blue, and when I'm more negative, I'm orange. So when x1 is high and x2 is also high, we predict blue. And the background color is the prediction: you can see the background is blue where the dots are blue as well.
So we're predicting correctly. Yeah. If, for example, the background were blue over here and orange on the other side, we would actually be predicting the exact opposite, meaning we're more wrong than right. Now we seem to have a pretty good model. But what if we have a slightly more complicated data structure? That first example was really simple: a line could separate the two groups. Let's say we have something a bit more complex, like these two groupings of data here. You can see there's clearly a circle that would separate the two. It's easy for us humans to see where it is, but how do we actually get the algorithm to find it? A really clever person might say: hey, these are circles, so I'm going to use x1 squared and x2 squared. Given my standard x1 and x2, I'll try to predict using those squared features, and I don't know the weights, but I'll update them just like I did for the line. Notice that it's totally wrong now, but when the algorithm updates the weights, the circle will actually match. So what we're doing now is updating those weights, since we don't have good ones yet. And we can quickly see that we're training a circle that actually separates our data well. But again, this takes a really clever person to say: I recognize there's a circle here. What if you don't have really clever analysts? Or what if instead of x1 and x2 you have x1, x2, and so on up to x20,000? With 20,000 variables, analysts can't determine all the relationships between them. How do we find an algorithm that does that for us? One of the ways is by adding intermediate predictions, and this is where neural networks get very fun. So I'm going to add a layer of intermediate predictions here, and you can see that I've got two intermediate predictions.
From x1 and x2, I make an intermediate prediction; if I hover over it, you can see it makes a straight line. But I might also make a second intermediate prediction, and then I use a combination of those intermediate predictions. So layer one makes a prediction and delivers it to layer two, and layer two makes another prediction based on it. What this does is allow the algorithm to mix and match different combinations of variables and bend lines into more flexible shapes. So let's train this with two neurons, and what we end up seeing is a cone forming. This is powerful, because the combination of the two nonlinear lines we can see here gives us a cone: one of them has a negative weight and one has a positive weight, and that's why their combination gives us something different. So, Sina, what would you do here? What would we need to encapsulate those points? I think it's getting close, but it doesn't have enough flexibility. So maybe add another layer or... Yeah, let's add another layer and another neuron here. Three lines should be able to encapsulate something; think of how a triangle can go around a region. So let's train that real quick, and you can see that we're quickly forming the interior of that circle. And this is what makes deep neural networks powerful. I can actually add more layers, so this triangle can become groups of triangles in different areas. That's really what's powerful behind neural networks. Again, if you want to learn more about this, join SC4x: we do a lot of machine learning, data analysis, and other systematic processes there. Thank you very much. Thank you very much for joining. Yeah, it's been great.
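The demo's "intermediate predictions" can be reproduced in plain Python with nothing but multiplications and a nonlinearity, echoing the earlier point that you can code these matrix operations in any language. The sketch below is illustrative only (the data, network size, and learning rate are all invented): it trains a one-hidden-layer network to separate the inside of a circle from the outside, with no hand-built x1-squared feature.

```python
import math
import random

random.seed(1)

# Circle data, as in the demo: class 1 inside radius 1, class 0 outside.
def make_point():
    x1, x2 = random.uniform(-1.5, 1.5), random.uniform(-1.5, 1.5)
    return (x1, x2), 1.0 if x1 * x1 + x2 * x2 < 1.0 else 0.0

data = [make_point() for _ in range(300)]

H = 8  # number of hidden "intermediate predictions"
w_in = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(H)]
b_in = [0.0] * H
w_out = [random.uniform(-1, 1) for _ in range(H)]
b_out = 0.0

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Layer 1: each hidden unit draws one "line" through the input space.
    h = [math.tanh(w[0] * x[0] + w[1] * x[1] + b) for w, b in zip(w_in, b_in)]
    # Layer 2: combine those lines into a more flexible shape.
    return h, sigmoid(sum(wo * hj for wo, hj in zip(w_out, h)) + b_out)

lr = 0.1
for _ in range(500):  # epochs of plain stochastic gradient descent
    for x, y in data:
        h, p = forward(x)
        d_out = p - y  # gradient of log-loss w.r.t. the output pre-activation
        for j in range(H):
            d_hid = d_out * w_out[j] * (1.0 - h[j] * h[j])  # tanh' = 1 - tanh^2
            w_out[j] -= lr * d_out * h[j]
            w_in[j][0] -= lr * d_hid * x[0]
            w_in[j][1] -= lr * d_hid * x[1]
            b_in[j] -= lr * d_hid
        b_out -= lr * d_out

accuracy = sum((forward(x)[1] > 0.5) == (y == 1.0) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```

Each hidden tanh unit plays the role of one "line" from the demo; the output layer combines eight of them into a closed region around the circle, which is what the extra layer and neurons bought us on screen.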
So this is a wonderful topic, and we hope we've given you enough food for thought about the applications to your environment, your company, or even beyond. So think more about this, and we'll be in touch. Perfect. Thank you very much. Thank you again.