Hi, everyone, and welcome. Thank you for joining us today. This is Laura Eyerge. You probably know me already; I'm a course lead here at MIT CTL for the MITx MicroMasters in Supply Chain Management program. I'm very happy to be here today, co-hosting this live event once again with Mr. Kellan Betz. Welcome, Kellan, also a course lead here at the MicroMasters. And today we are really fortunate to have Bhaskar Vallabragada joining us. Welcome, Bhaskar.

Thank you, thank you. Nice to meet everyone.

We're very, very happy to have you, and we are sure our audience is super interested in our topic today. Before going to the agenda for the session, we would like to start the event with a poll, just to learn about the expectations of the audience today. So probably Emma can help us launch the poll. Yes, there I see it. So let us know: why are you here today? Do you just want to learn more about machine learning and AI technologies? Do you want to learn more about their applications in supply chain, or the challenges of applying them? Or are you here because you want to start using them and you don't really know where to start? While we let that populate, and thank you, because I see a lot of answers already, I'll give the floor to Kellan for the agenda for the session.

Awesome. Well, thank you, Laura, and hi, everyone. For the next 20 minutes or so, Bhaskar is going to discuss some of the complex challenges supply chain managers face when using traditional planning techniques, due to the interactions between upstream and downstream supply chain entities such as suppliers and customers. He'll then discuss new machine learning (ML) and AI techniques to handle these complex interactions, focusing on a couple of exciting use cases, primarily demand sensing and production optimization. He'll also touch on some future applications of AI. We've all heard about AI in the news a lot lately, so he'll cover large language models like ChatGPT, and then discuss decision chaining and generative AI, another hot topic in the news as well. We'll follow up Bhaskar's presentation with a few prepared questions and dive into some of these topics a little more deeply, and then we'll definitely save the last 15 minutes or so for your questions from the audience, so start thinking of those as we get started here. Please use the webinar Q&A feature, that Q&A button there in Zoom, to ask those questions. We'd love to see the introductions in the chat, but please use the Q&A feature for your questions so we can keep track of them. And make sure you're logged in with your name, since we won't be picking questions from anonymous users. We'll also have a couple more polls as we go along, so be prepared to participate in those as well.

Awesome. So maybe we could end our first poll here and share the results. The question was: why are you here today? It looks like the majority are definitely here to learn about the use of ML and AI technologies in supply chains in general. That's great. We're going to dive into some of these topics, and these are big topics, so hopefully we'll hit that at a general, high level as well. And Bhaskar, I don't know if you have any thoughts on those poll results?

No, this makes complete sense.
People are in general interested in understanding how ML and AI can be applied to supply chain. They've seen the applications all over the place, but they're specifically interested in the supply chain domain.

Awesome. Thank you all for participating in our first poll. Love to see those MicroMasters learners who don't miss any of our live events. Welcome, and thanks for joining us again. So with that, let's kick things off, Bhaskar. Maybe, just to start things off, you could share for a few minutes a little bit about your background, kind of the story of how you got to where you are today.

Absolutely. Thanks, Kellan and Laura, and I'm glad to share some of my learnings in the supply chain space. I'm based in beautiful, rainy Seattle. By way of background, I am a chemical and environmental engineer; that's where I got my training and my degree, with a PhD out of the University of Washington. I spent the initial years just doing nuts-and-bolts engineering: designing wastewater treatment plants, sizing pumps, valves, and fittings, writing operations manuals, and wearing a hard hat walking the shop floor. After those first few years, though, I spent the majority of my career in the digital marketing space, something completely different but very data intensive, building out large-scale platforms, spending time in product, and running a few different profitable businesses. More recently, around 2018, I joined ThroughPut as the chief technology architect, building out their platforms and bringing together some of that engineering and data analytics knowledge into supply chain.

Thank you. Thank you for sharing that. I think that's inspiring for many people in our audience. We get a lot of questions about switching to a different path in the middle of a career, how to do it, and whether it makes sense or not, so I think we're also going to learn a lot from that part of your discussion. I think we're ready to jump into your presentation, so you're welcome to start sharing and take us with you on this journey.

So this is a supply chain schematic that most people have seen at one time or another. It's fairly straightforward: supply chain is about transporting and transforming all of the raw materials into finished goods in the hands of the consumers, right? And of course there are a whole bunch of steps involved along the way, as everyone is aware. When you look at the schematic, it looks fairly simple and straightforward, up until you see what's happening in the real world. This is a Sankey view of one of the vertically integrated operations that we work with. You see that there are a whole bunch of nodes, from suppliers to production to warehouses to points of sale to customers, and they're all interconnected, and you see the flows going between them. The challenge that ends up happening is that at each of the nodes, it's a many-to-many relationship. You have multiple suppliers sending their raw materials to multiple plants, plants working with multiple suppliers and having their own production systems, bills of materials, et cetera. And that's what makes it complex, right? If you just overlay the same data on a map view, again, it looks like a mishmash of flows from one location to another. So how do you go about managing such a complex network?
When we talk to supply chain managers or professionals, the primary things that come to mind are: look, I'm just looking to meet my customer demand, or manage profitability by reducing cost or increasing the operational efficiency of my production systems, and overall, reduce the lead times of the chain. In many cases, the way they do it is with some sort of business intelligence system, whether it's Power BI, Tableau, Qlik, or several others, and they'll have some visualizations. But as you know, BI just gives you visual charts and line graphs; it doesn't really provide the necessary insights, recommendations, or decisions. That's left up to the analyst or the operator to figure out. So the question always comes: well, how do we take it to the next step? And the next step is really building out an end-to-end solution.

But along the way there are several challenges, right? The challenges primarily relate to availability of and access to data. In most organizations, if you look at the data infrastructure, they'll have an ERP to manage the system. The challenge is not having an ERP; the challenge is that they have multiple ERPs, for a variety of reasons, because of mergers and acquisitions or because they have different siloed businesses, and that makes it difficult. Likewise, they're constantly working with third parties, so there are issues with access to the data and with the quality, consistency, and integrity of the data. And the final aspect, in terms of moving from BI to some sort of ML solution, is the necessary budget and allocation, and the associated ROI that you have to sell to your CFO. All of these essentially end up forcing the organization to say: okay, you know what, if you want to apply ML, if you want to solve some supply chain problems, and you have these data challenges, let's go with simpler, rule-based solutions. That's what ends up happening in the marketplace, right?

So what you end up seeing are solutions built around one of the nodes, the nodes I talked about: supplier, customer, warehouses, production, et cetera. For example, for customers and suppliers, you'll do things such as customer segmentation, ranking, some level of cross-sell/upsell, and churn analysis; for inventory, standard inventory recommendation modules or solutions. For demand, the standard solutions that come to mind are demand estimation and forecasting, et cetera; and for distribution networks and production, some type of linear programming optimization solution. These are the standard solutions people implement, and the algorithms end up being a whole host of them: from rule-based segmentation and RFM analysis for customer segmentation, to OR tools, linear optimization tools, and vehicle routing tools for handling distribution and production, to time series models for forecasting, whether it is ARIMA, SARIMA, or several others that I'll talk about, and some type of regression analysis to handle recommendations. For example, when people want to do cross-sell/upsell, they'll use collaborative filtering, something that has become popular with recommendation engines. So these are the somewhat standard techniques that are currently used.
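To make one of those techniques concrete, here is a minimal sketch of rule-based RFM (recency, frequency, monetary) segmentation, one of the standard customer-segmentation methods named above. The column names, quartile scoring, and segment labels are illustrative assumptions, not anything prescribed in the talk.

```python
# Minimal rule-based RFM segmentation sketch. Assumed (hypothetical) input:
# a transactions DataFrame with columns customer_id, order_date, order_value.
import pandas as pd

def rfm_segments(transactions: pd.DataFrame, as_of: pd.Timestamp) -> pd.DataFrame:
    rfm = transactions.groupby("customer_id").agg(
        recency=("order_date", lambda d: (as_of - d.max()).days),
        frequency=("order_date", "count"),
        monetary=("order_value", "sum"),
    )
    # Score each dimension 1-4 by quartile; 4 is best (recent, frequent, high spend).
    rfm["r"] = pd.qcut(rfm["recency"].rank(method="first"), 4, labels=[4, 3, 2, 1]).astype(int)
    rfm["f"] = pd.qcut(rfm["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)
    rfm["m"] = pd.qcut(rfm["monetary"].rank(method="first"), 4, labels=[1, 2, 3, 4]).astype(int)

    def label(row) -> str:
        # Simple rules on top of the scores; real cutoffs would be business-specific.
        if row.r >= 3 and row.f >= 3:
            return "loyal"
        if row.r <= 2 and row.f >= 3:
            return "at_risk"  # bought often before, going quiet now
        return "occasional"

    rfm["segment"] = rfm.apply(label, axis=1)
    return rfm
```

The point of sketches like this is that they are entirely rule-based: the thresholds encode the analyst's judgment rather than anything learned from the data, which is exactly the limitation Bhaskar contrasts with ML approaches later.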
I just want to talk briefly about a couple of use cases that we worked on and have experience with. The first one centers around a grocery chain based out of Europe. This is a fairly large chain, with about 50,000 SKUs, multiple distribution centers, and retail outlets. With a supermarket chain, one of the challenges, of course, is shelf life and wastage: as soon as a product expires, you end up having to throw it out, dispose of it, or sell it at a much lower price. We wanted to understand the demand forecasting aspect of all of their SKUs, all 50,000 of them. What I'm going to show you is an example of a single SKU; it happened to be a meat product.

At the top, you see a timeline view of the demand profile. The blue bars at the top of the graph are the sales, and the yellow line on top of them is the model predictions. The bottom graph is also the demand graph, except we have applied some statistical techniques to identify some of the outliers and flag them. A few things you observe right away. Number one: overall, visually, the model seems to work very well right off the bat. Second: you had long periods of low demand with some spikes in between. There was a weekly effect on demand, so, for example, Sundays were low and Saturdays were high, et cetera, but overall the levels were low with some periods of high demand. Now, when we tried to model this using standard ARIMA-type time series models, it didn't work very well, and that's primarily because these peaks were not seasonal. So the next question that came up was: what is it that is affecting the spikes in demand during those time periods?

Luckily, we had information from the company we worked with on the promotional campaigns they were running. I've listed the three primary campaigns: one was price reduction; the second was offering discounts, at different levels of discount; and the third was offering loyalty points when somebody bought items. Once you overlay this information on top of the demand graph, the sales graph, you see right away that both P1 and P3, the price reduction and the points, did not have much impact, whereas P2, the discounting, had a significant impact. In fact, the model was able to take that information and essentially use it for future predictions. So when they realized, hey, we're stocking up on inventory and we want to get rid of it, they could apply those discounts. They were able to do this for multiple SKUs and manage both their wastage and their inventory levels for multiple products.

The second use case centers around production. This particular use case dealt with plastics mold manufacturing, which is a fairly straightforward and simple process: you essentially have an injection molding machine, you push raw material through it, and out comes the end product. The top-left graph here shows the various processes, which were set up for the most part in parallel. The challenge they were having was twofold. The first was that the demand was very seasonal.
They saw high demand during the June-to-July timeframe, and the rest of the year was much lower. The second was that they felt they were capacity constrained. So their way of handling it was to build up inventory until the peak arrived and then draw it down, which makes sense to a certain degree, except that carrying such high inventory levels involves huge carrying costs, not to mention the limitations of space, et cetera. Also, not being able to fulfill some of the demand meant they were losing some sales. So the question always was: should we be buying more machines?

I'm going to go through the next few charts, and they are a little busy in terms of data, but I do want to highlight a few things, because this is what goes into the analysis and figuring out how to apply some of the models. When you look at machine or process operation in a production setting, there are three key parameters you look at: the defect rate; the cycle time (the flow rate is how fast you're producing the product, and cycle time is the inverse of that); and uptime or downtime, how much the machine is up or down. All of them combine to form the OEE, the overall equipment effectiveness. This is something the operators are judged on, given bonuses on, incentivized on, so there's a huge incentive for them to keep this thing up and moving.

I'm going to focus on one aspect of it, because that's the one we ended up using, and I'll go through the rationale behind it. We looked at the machines, which we are calling processes here. Each machine is able to produce multiple products. The key observation was that even though the machines were all rated the same for producing certain products, certain machines would actually produce certain products at a higher rate than others. That was a big deal, because once you know that information, you can go ahead and figure out how you want to optimize the system.

Second, we looked at categorizing the three loss elements I highlighted: the uptime, the cycle time, and the defects. This is a waterfall chart, and let me explain it a little. The waterfall chart shows how many units were produced during the time period and the estimated losses from various operations. When we categorized the losses alongside the production, there was about 78% loss associated with various parts of the system: the uptime losses, the flow or cycle-time losses, and the defect-rate or yield losses. As you can see right off the bat, the uptime or downtime losses were not significant, and neither were the yield or defect losses, whereas the flow losses were significant. So that's where we ended up focusing our energy. If you hear about predictive maintenance, for example, that's the part of the production system it affects; if you're trying to fix defect or quality problems, that's a different part of the production system. What we found was that the flow losses were much higher. So we used that information, right?
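As a small aside, here is a sketch of the OEE decomposition and loss waterfall just described (availability times performance times quality). The field names and the example numbers are illustrative assumptions about what the machine logs might contain, not the client's actual data.

```python
# Sketch of the OEE decomposition and the loss waterfall described above.
# All inputs are illustrative assumptions about what machine logs contain.
from dataclasses import dataclass

@dataclass
class MachineRun:
    planned_minutes: float   # scheduled production time
    run_minutes: float       # time the machine was actually up
    ideal_cycle_min: float   # rated minutes per unit for this product
    units_produced: int      # total units, good and bad
    units_defective: int     # units scrapped for quality

def oee(run: MachineRun) -> dict:
    availability = run.run_minutes / run.planned_minutes
    performance = (run.ideal_cycle_min * run.units_produced) / run.run_minutes
    quality = (run.units_produced - run.units_defective) / run.units_produced
    # Waterfall: theoretical capacity minus each loss bucket, in units.
    capacity = run.planned_minutes / run.ideal_cycle_min  # units if nothing were lost
    uptime_loss = capacity * (1 - availability)
    flow_loss = capacity * availability * (1 - performance)
    yield_loss = run.units_defective
    return {
        "oee": availability * performance * quality,
        "uptime_loss_units": uptime_loss,
        "flow_loss_units": flow_loss,
        "yield_loss_units": yield_loss,
    }

# Example: a machine rated at 1 unit/min, up 400 of 480 planned minutes,
# producing 280 units of which 6 are defective. Flow loss dominates here:
# 80 units lost to downtime, 120 to slow running, 6 to defects, 274 good.
print(oee(MachineRun(480, 400, 1.0, 280, 6)))
```

Decomposing the losses this way is what tells you where to spend your effort; in the case above, as in the client's data, predictive maintenance (uptime) and quality fixes (yield) would both miss the dominant flow loss.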
We used that information in a mixed integer programming, or linear optimization, model. We looked at different time horizons, and the primary decision variable we were trying to find was how many units are produced per product, per resource, per time period. That is what we were trying to identify. And of course there were constraints: we needed to meet the demand, and we needed to keep capacity utilization below a certain point. We had the data on the flow and cycle times from the operation, and the objective was to maximize profit and increase capacity, right? The output of it is essentially a map: a map that shows, for a given time period, how you want to allocate your resources, which machines should produce which product at what rate. We were able to provide recommendations based on that. The end result was an increase in capacity utilization, and they were able to reduce inventory significantly compared with how much they had to store before.

So this is all great. But as I explained, a lot of these solutions end up being point solutions addressing a certain part of the supply chain. Really, what people are looking for is to be able to solve some of the more complex challenges that we face. So if you start looking at future-facing ML and AI, you have to talk about the large language models, right? Those are the topical thing, and then I want to talk about a few other things in addition to that.

As I mentioned, if you're talking about AI, ChatGPT has brought LLMs, the large language models, to the fore. If you just put in a prompt asking, "What are large language models?", what comes back is: they're also known as LLMs; they are AI models designed to understand and generate human-like text based on the patterns and information they learn from vast amounts of text data. I would suspect the majority of the folks listening in have already used OpenAI or Bard or another type of LLM, putting in a prompt and getting responses back; I would be surprised if nobody has. So that part is accepted. But what people are really interested in knowing is: okay, how do I use this in my daily work? How do I use this in my supply chain domain?

One thing people want is to be able to query their own data. That comes up often, and there are already solutions out there from the major players. You can take your documents, and those documents can be PDF reports, but they can also be databases, CSV files, et cetera, and load them using APIs into OpenAI, Bard, or other solutions. What ends up happening is that the documents are vectorized and converted into embeddings, essentially a vector representation of the documents. Once that's done, you run a similarity search, which returns all the relevant documents, and then you use those as context to query against. Pretty simple. And you can use this to ask the standard questions of your supply chain: hey, what is the revenue of my top three products? Who was the customer we lost last year, or last quarter? Which items need to be ordered? Et cetera. So all the standard questions can be done.
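Here is a minimal sketch of that retrieve-then-ask flow. The `embed` function is a hypothetical stub standing in for whichever embedding API you use (OpenAI, a local model, and so on); nothing here is a specific vendor call.

```python
# Sketch of the flow just described: embed documents once, embed the question,
# run a similarity search, and return the top matches as context for the LLM.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stub: call your embedding model of choice here."""
    raise NotImplementedError

def build_index(docs: list[str]) -> np.ndarray:
    vecs = np.stack([embed(d) for d in docs])
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # unit vectors

def top_k(question: str, docs: list[str], index: np.ndarray, k: int = 3) -> list[str]:
    q = embed(question)
    q = q / np.linalg.norm(q)
    sims = index @ q                   # cosine similarity via dot products
    best = np.argsort(sims)[::-1][:k]  # indices of the k most similar docs
    return [docs[i] for i in best]

# The returned snippets are then pasted into the prompt as context, e.g.
# "Given these documents: ..., what is the revenue of my top three products?"
```

Production systems typically replace the brute-force dot product with a vector database, but the mechanics are the same: vectorize, search by similarity, and hand the matches to the LLM as context.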
One of the concerns people have is putting their own documents directly into the cloud to be indexed. That again is a common problem, and there are local LLMs coming up that allow you to do exactly that, but on your own local network. But people want to go beyond just asking questions of their data: they want to be able to make decisions out of it, to take it to the next level. And this is where, when you look at a decision-making process, it involves multiple steps. For all decisions, you first ask a question, you query some data, you get some recommendations, then you go to the next set of questions, et cetera. So it's a decision chain. Fortunately, there are open-source frameworks being built, such as LangChain, that allow us to do just that. They allow us to interact with the LLMs, get a response back, apply it to a real-world situation, whether that's doing a web search or looking at your data, make a decision, and follow up with the next step. So it's a chain that can be fully built out and driven purely by data. A lot of these things can be done with rule-based or logic-based systems, but being able to do it purely based on data is what this allows.

A use case would be something as simple as: okay, you want to fill a customer order. We could write a whole program to do it, or we can put that question to an intelligent agent. The intelligent agent takes the question, goes to the LLM, figures out what data it needs and what calculations it needs to make, does that, and comes up with recommendations. In this case it would first check: when is this order due, and what quantity is required? Next it will check whether the stock is available; if not, it will create a production order, and if it is available, it will schedule a delivery. All of that can be done using data, training data, as opposed to rules.

And finally, neural networks, or deep learning, are something that has been around for years, except the potential is only coming to light more recently, again because of ChatGPT and a few others. Traditionally, a neural network, which is essentially a network of nodes, contains your input layer, where you push in your training data; a bunch of hidden layers, where the calculations and transformations are made; and an output layer, where the end result comes out. That has been used primarily for things such as image recognition, natural language processing, and speech recognition. What people thought before was that the results were limited to those types of solutions. Large language models changed that; more importantly, they showed us what else is possible with these things. And then of course there's autonomous driving, which is taking off. With neural networks, while the results are there, they do require a large dataset, as highlighted by these LLMs, and a huge amount of compute, but the potential is there.
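To make the input/hidden/output description concrete, here is a minimal forward pass through such a network in NumPy. The layer sizes and activation are arbitrary illustrative choices, and training (fitting the weights to data) is omitted.

```python
# Minimal feed-forward network matching the description: an input layer where
# training data is pushed in, a hidden layer of transformations, an output layer.
import numpy as np

rng = np.random.default_rng(0)

def relu(x: np.ndarray) -> np.ndarray:
    return np.maximum(0.0, x)

# Arbitrary illustrative sizes: 8 input features -> 16 hidden units -> 3 outputs.
W1, b1 = rng.normal(scale=0.5, size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 3)), np.zeros(3)

def forward(x: np.ndarray) -> np.ndarray:
    hidden = relu(x @ W1 + b1)  # hidden layer: weighted sums plus a nonlinearity
    return hidden @ W2 + b2     # output layer: where the end result comes out

x = rng.normal(size=(1, 8))     # one example pushed into the input layer
print(forward(x))               # untrained output; training would fit W1, W2 to data
```

Everything from image recognition to LLMs is built from stacks of layers like these; the data and compute requirements Bhaskar mentions come from fitting the weights at much larger scale.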
The question then again becomes: hey, how do you apply this to the supply chain domain? And this is where, as we talked about, you have data fragmentation, et cetera. So there is the concept of simulated data. There is already some data available, and it varies depending on the type of organization, but some large organizations have significant amounts of data, and if you layer simulations on top of it, you can actually build a large enough training dataset to build neural nets that can solve some of these complex challenges. Let me stop there and see if there are any questions.

Thank you. Thank you very much, Bhaskar. I think we learned a lot, and I also found it very interesting to see the use cases, because you brought together all the layers of complexity that we can see in a supply chain. Usually we learn about optimization, or only about inventory, or demand sensing, as you said, all separately, but here we got them all together in a single problem that covers all of them, with the tradeoff of cost and time. It's great to see how technology can help us in those use cases, so thank you for bringing those to the floor today.

I was thinking about the challenges, and you touched upon that a little a few minutes ago. You talked about fragmented data, but you also talked about access to data being a challenge, and then, of course, having siloed data in different ERPs. I've seen that in many companies, so I totally see the point of that being a challenge. And since we need a huge, massive amount of data to train some models (ChatGPT, for example, uses much of the internet's information for training), can you share more about the strategies, or how you approach that challenge of accessing data and having the right data for what we need? You mentioned something about simulations, but I wanted to know more about how to make the data massive when we need it massive.

No, absolutely, those are all important challenges. In fact, let me say that 80% of the work is actually in the data preparation, and 20% is in the analysis and running the models. Let me start with the fragmentation of the data. What we have seen is that many times companies have ERPs; the challenge ends up being that they have multiple ERPs. It might be domain by domain: for example, a company wants to use one particular ERP to handle their demand, sales orders, and inventory, or even their finance; another for their transportation and distribution; and production is a completely different ballgame, because you need manufacturing execution systems for that, the MES systems. So right there, if you have enough parts to the business, you will end up having multiple ERPs; that's almost a given. But it also comes from internal decision making: many times you have siloed parts of the organization, different businesses with different reasons for wanting certain ERPs, and you see fragmentation and multiple ERPs because of that. Finally, you have mergers and acquisitions. No two companies ever have the same ERP, and when you acquire a company, you're going to inherit theirs. People will say you can migrate it, but that's a much longer process.
So that's the data fragmentation. The access-to-data issue is tied to third parties. You end up working with suppliers, third-party logistics providers, fulfillment centers, and all of them have their own systems. They may not want to share the data because the dataset contains information about other players; they may simply not have the capability to share it; or they just won't share it for competitive and other reasons, typically because they charge a markup or have pricing information built in that they don't want to expose. And then of course there are other challenges with data quality and volume; for example, we have seen the same product being called two different things in two different periods. We see a lot of challenges like that.

In terms of how you address this: the data fragmentation, while it is a problem in supply chain, is something for which a fair number of tools are available nowadays. You have open-source tools, such as Airflow and others, that help you with the pipelining, and all of the major players have data stitching and data gluing tools. When we say the datasets are fragmented, it's not just that the structure is different but also how the data is rendered: some might be CSV files; if you're connecting via an API, it might be some sort of JSON format, and even the JSON formats differ across ERPs, right? But there are solutions that help with that pipeline process. I can of course go into detail about what they do, but I would say it's a reasonably well-solved problem. That doesn't necessarily mean it's easy, but it can be done.

The next challenge is that, again, multiple ERPs have different ways of representing the same information. Take a sales order: ERP one will have a particular table structure, and the second will have a different one. The way to handle this, just as we did when solving the demand sensing problem, is to identify the metrics and parameters that are absolutely needed and impactful. These are very specific to what you're trying to solve: a timestamp, of course; how much quantity was ordered; how much quantity was delivered; what the key nodes involved are. You can distill down and standardize those elements. You may miss a few parts, but if you focus on the standardized elements, you can transform the data much more easily from a mapping perspective.

In terms of data volume, there is a significant difference. Just for comparison's sake: GPT-2, I believe, is a 1.5-billion-parameter model, trained on about 40 GB of data. GPT-3 was about 175 billion parameters and used 45 terabytes of data. And GPT-4, I believe, is close to 1.7 trillion parameters, an order of magnitude more again, and uses a terabyte or more of data. So it is a significant volume.
In comparison, if you look at some of the decision support systems available in the supply chain space, they're transaction-based, and depending on the scale of the business, the data can range anywhere from hundreds of megabytes to gigabytes; you only get to tens or hundreds of terabytes with the top 5 or top 10 Fortune 500 companies. The other approach, demonstrated by AlphaGo for example, is to do simulation and build up the dataset. So that's how you solve the problem. A long-winded answer, but it is a huge problem.

Awesome. Well, thank you, Bhaskar. It's a problem many of us have probably experienced in different capacities. It's good to know there are now robust tools available, at least for gluing data together and for pipelining. It's also interesting that you tied this to the demand sensing topic, which I'd like to dive into a little more deeply: in some cases, maybe you just want to focus on the specific parameters or pieces you're interested in and not necessarily absorb the whole volume of data, because maybe you don't need it all. So building on that, and focusing on the demand sensing example you brought up earlier: can you contrast that with time series models? With time series models you're just fitting a model to transactional historical data in some sense, but in the demand sensing use case you brought in this other element of data, the promotional calendar. From an algorithm, machine learning perspective, how does that look different? Obviously it's very different from just fitting a model to transaction data. What does the machine learning side of that look like on the demand side?

Absolutely, absolutely. In fact, it ends up being a fitted model too. When we look at the data, we use forecasting and sensing for two different aspects: one is on the demand side, and two, certain things on the lead time side. In terms of the demand profile, when you're looking at selling a good or a service, it typically follows some sort of pattern: a daily or hourly pattern, a weekly pattern with certain days being higher than others, a seasonal pattern. But what we have also seen is that that's just not enough, right? There are a lot of other interactions that end up changing the demand profile. We talked specifically about promotions, but we have seen other use cases. For example, there was a cement manufacturing company we were working with; for cement and concrete, demand depends on political cycles, especially in some countries. Right before the elections, you see a jump in concrete and cement usage and demand goes up, so you have to build that in. There are things like that which end up playing a role. So you always have to bring in some sort of third dataset.
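As one hedged illustration of bringing in such a third dataset, here is a sketch that folds a hypothetical promotion flag into a time series model as an exogenous regressor, using statsmodels' SARIMAX. The data, model orders, and effect sizes are synthetic, illustrative assumptions; the team's actual work used an additive model, as discussed next.

```python
# Sketch: bring an external driver (a hypothetical promotion flag) into a
# time series model as an exogenous regressor, using statsmodels SARIMAX.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
idx = pd.date_range("2023-01-01", periods=120, freq="D")

# Synthetic history: a weekly cycle, plus a demand lift on promotion days.
promo = pd.Series((rng.random(120) < 0.1).astype(float), index=idx)
base = 20 + 5 * np.sin(2 * np.pi * np.arange(120) / 7)
sales = pd.Series(base + 15 * promo.to_numpy() + rng.normal(0, 2, 120), index=idx)

model = SARIMAX(
    sales,
    exog=promo,                   # the "third dataset": the promotion calendar
    order=(1, 0, 1),              # non-seasonal ARMA terms (illustrative)
    seasonal_order=(1, 0, 1, 7),  # weekly seasonality (illustrative)
)
fit = model.fit(disp=False)

# Forecasting needs future values of the driver, e.g. a planned discount week.
future_idx = pd.date_range(idx[-1] + pd.Timedelta(days=1), periods=14, freq="D")
future_promo = pd.Series([1.0] * 7 + [0.0] * 7, index=future_idx)
print(fit.forecast(steps=14, exog=future_promo))
```

Note the practical implication: once the driver is in the model, forecasting requires knowing (or planning) its future values, which is exactly how the grocery client could simulate "what if we run a discount" scenarios.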
And then, of course, you're talking about interactions: substitute products, new products, and all those things. There's a whole bunch of interactions that end up happening, and a plain vanilla solution technically cannot handle all of them. When you look at time series forecasting, I kind of put the methods into three buckets. You have the traditional autoregressive moving average, the ARIMA-type models. You have the additive regression models, such as Prophet, which is what we ended up using in this particular case. Or you can use neural networks, whether recurrent neural networks or LSTM-type networks, to model them. We tried ARIMA and SARIMA, and they didn't work, primarily because of these exogenous variables. There is SARIMAX, which includes the exogenous variables, but for us the additive model was just easier to implement, so that's what we went with. We have not tried RNNs, recurrent neural networks, for this, but when you're looking for interactions between various products, I think that's probably something we need to look into.

Thanks, Bhaskar. We have so many questions in the Q&A feature, so we appreciate the interest of the audience. I don't think we'll get to every question, but we will try to cover most of the topics. Thanks, everyone, for that. So, Bhaskar, I was impressed with all you showed about the interactions, and I went back to some common discussions we have in the supply chain. You've covered seasonality, but there's also the possibility of outliers, and you briefly touched upon predictive techniques for identifying outliers. I'm very interested in that, because we often deal with black-swan kinds of events, big disruptions such as natural disasters, and those may actually show up in our data as outliers. I've heard about how different ERPs deal differently with outliers when bringing the data to you, and how much manual work we sometimes have to put in to find those outliers. I was wondering, based on your experience, what's the best approach to deal with them, and also whether the different tools we have in machine learning and AI applications will give us different types of results, and whether we should go with one or the other based on what we're looking for.

Sure, sure. My answer might be simpler than that, but let me set up the problem. In most cases, as I said, there are a lot of interactions happening within the supply chain. But in addition to that, most players are optimizing their own node, if you will: suppliers are trying to maximize their profits; production systems are designed to be as efficient as possible; the same with the transportation systems, which are designed to maximize the utilization of their trucks, the utilization of their drivers, et cetera. So each of the nodes is operating extremely efficiently, and it works very well, because you get a lot of things just in time, except when there's a disruption, a black swan event that you're not expecting. When that happens, essentially all hell breaks loose.
Now you have delays across the board, and when those delays happen at each of the nodes, people go the other way and start ordering more. It's a cascading effect that not only creates those constraints and bottlenecks but also takes much longer to come back to normal. That's what we have seen. And this is what lean teaches us: lean teaches us to be as efficient as possible, just in time, et cetera, but there is a price to pay for it, and it's essentially these black swan events. One suggestion, when you're operating a supply chain business, is to think about some of the TOC, or Theory of Constraints, concepts, because they help you identify the constraints in your system and manage things around them. You're not necessarily operating everything at 100% efficiency, but that really doesn't matter, because operating a given node at the highest efficiency may not get you the end result. That's one way of doing it. In addition to that, you want to have all the flexibility possible within the system, sufficient supplies, et cetera. The thing I would say about ML and AI, though, from a scenario planning perspective, is that this is what they allow you to do: ML and AI allow you to build these different types of scenarios, which you can then use to figure out how to best handle the next one.

Awesome, thank you, Bhaskar. It's very interesting to hear the idea of bringing together classical concepts like the Theory of Constraints with some of these more modern concepts like AI. The Goal, Goldratt's book, was definitely influential in my earlier career, so it's interesting to see that tied into some of these newer concepts.

So in the interest of time, I know we're running short here, maybe we could launch our third poll, and then we'll jump into the audience Q&A. We have a bunch of questions, so we definitely appreciate you jumping in there with your questions, and we'll probably try to group some of these, because a couple of the questions draw on similar topics. While you're doing that, for our third poll: what was the most interesting part of today's session for you? We'd love to hear your feedback. Maybe it was just expanding your knowledge of ML and AI generally; maybe it was the specific applications of these in supply chains.

While you do that, the first question I wanted to jump into dives into this idea you mentioned of decision chaining. One of the questions here asks about the difference between decision chaining and what he called hyper-automation; this is from Tarun, and I don't know if you're familiar with that concept. And another question, to tie these together, comes from John Coffee: if you could speak a little more about how the intelligent agent in this decision chaining process works, like how you train the agent, or what that agent looks like.

Absolutely, absolutely. So I believe both of the questions are around decision chaining versus hyper-automation.
If I understand correctly, hyper-automation still requires some level of coding, right? Some level of rule-based inputs into the system. What we are talking about is not doing that, but having a system that actually learns, and learns from the data, to make that next set of decisions. For decision chaining, I'd again refer you to the open-source project LangChain. The concept is that LLMs allow you to converse with the data and extract information, but that's all they do; you need other things on top. Along with that, there is a concept called agents, or tools, within the framework. The LLM is a static model: it has been trained on data up to a certain point in time, but you want to be able to use the latest information. So there's the concept of an agent: the LLM can transpose what you're asking into actions, and if that requires, just as an example, checking a specific traffic pattern or checking what the current situation is, the associated agent can go and actually do it. Or say it needs to make a math calculation that hasn't been pre-built. You can obviously write code for a specific mathematical operation, but if you want the system to figure out which mathematical operation to do, feeding the question to the agent allows you to do that. The way it really works is that when people build these agents, they list out everything the agent can do in its description, and that's how the framework uses the LLM to figure out which particular agent to use and go to the next step. I hope I answered that question, and I'd refer the person, I believe it's Tarun, to check out some of those resources.

Thanks, Bhaskar. I'm wondering, and I'm also bringing some questions from the audience here, because probably the first contact most people had with machine learning or AI was, as you said, playing with OpenAI's tools, with questions and prompts. When we train our own models and have our own tools, we know what we're feeding into those models; we know the data, the problems with our data, our assumptions. But when we're working with OpenAI or any similar tool, things change. Philip, for example, is raising a concern here about the lack of references, because in the past we would do all the searching ourselves and come to a result based on sources we trust or think are accurate; when the AI does that work for us, filtering and selecting our response, we don't have those references. What would be your recommendation for those who are starting out with AI and using it to make decisions for their company, in terms of the accuracy of the information provided?

Yeah, absolutely, that is one of the challenges: the tools don't list out the references. Now, when you load your own documents into the system, and I'm talking not specifically about the tools that are online, but about loading your own documents, let's just say I want to ask questions and make a decision based on them.
Once you load the document into the system and it indexes it, when it comes back with an answer, if you ask it to explain how it came up with that answer, it will actually list the documents associated with it; it will tell you the context it used to arrive at that result. So that's one way of getting at it. And for those other kinds of things, whether it's dealing with the security issues (people have a lot of security concerns about uploading data) or addressing certain specific use cases, you can train the model to do it, have it give you the information, and have it list out exactly how it came up with that assessment. So hopefully that helps.

Awesome, thank you, Bhaskar. Maybe we could take a quick peek at our poll number three results here. Again, the question was: what was the most interesting part of today's session? Thank you all for your feedback on that. It looks like many of you, about 40%, are interested in learning about the specific applications of AI and ML in supply chains; that's great to hear. And then also just the general knowledge of ML and AI. I don't know if you have any thoughts on those final poll results?

They seem to be fairly evenly split, but in general, again, I cannot agree more: AI is very topical, and there are quite a few use cases in supply chain. People are hungry to figure out how we can leverage some of these tools.

Awesome, absolutely. So we're running short on time here, so maybe just one last question that I'll pull from the audience, again grouping together a couple of these. This is a more forward-looking type of question. Jason asks what the level of adoption of AI in supply chain is generally, and I'll combine that with the question from Sina Vasan (and I apologize if I'm not pronouncing your name correctly), who asks what the future looks like: what is going to be the impact of these tools on the supply chain domain, for professionals and for businesses?

Sure, sure. The current level of adoption is, as I said, mostly point-based solutions. People are using machine learning, dabbling with it; people want to use it; there are projects all over the place. But there aren't solutions designed to address the global problem. As I listed out, there are very specific methodologies, collaborative filtering, forecasting techniques, regression, tree-based models, being applied in different situations. In terms of the future, once again, I think the great thing about the LLMs and ChatGPT is that beyond actually providing the results, they showed what is possible, and that's what is more important. And then you can point to AlphaGo and AlphaGo Zero in terms of showing us what simulation can do. Just for context, AlphaGo Zero was built purely on simulated data.
There was no actual human play or expert knowledge involved. So there is a lot that can be done. Having data is great, but augmenting it and adding to it, which can be done externally through scenario planning and other techniques, can allow us to better manage some of these challenges. So no, the future is pretty bright. Again, there are challenges, let's not take that away, but overall I look forward to using these various tools.

Thank you. Thank you, Bhaskar, because you have shown us the tools, the challenges, the assumptions behind them, and the limitations. We've covered so many topics, and I know we could go deep on any and all of them, so hopefully you'll be back and join us again in the future for another webinar so we can go deeper. Thank you to the audience for your engagement. Thank you, Kellan. I don't know if you have any final words for the audience.

No, yeah, always a pleasure to co-host with you, Laura. And thank you, Bhaskar, for your time today; I appreciate your insights. I know we could definitely go into these topics in a lot more detail, so hopefully we'll bring you back again one day.

Absolutely. Yes, thank you. Thanks, everyone.

Thank you. And for everyone joining us, this is the last webinar wrapping up the summer series. For those taking our courses, SC1x and SC3x, this fall, or if you are following our SCM webinars, stay tuned, because the fall webinar season will start soon, and hopefully we'll get to see you there as well. Thank you, everyone. Thank you for your interest in our webinar today. See you soon.