Hello everybody! Good morning, good evening, wherever you are. Thank you very much for being with us today. It's a pleasure. We are really excited to share input from many centers of the MIT Global SCALE Network. We are going to talk today about AI-driven supply chains. So let me share the screen so we can elaborate a little bit more. I hope you can see my slides now. Can you? Great, thank you. Today we have a great set of panelists who are going to share our latest research on applications of AI in supply chain management. We want to emphasize that these are all applications; we want to show you that AI is a reality in today's landscape of supply chain and operations. Let me introduce today's panelists. Let's start with Yasel Costa. Yasel is an industrial engineer from the Universidad del Magdalena in Santa Marta, and he obtained his doctorate from the prestigious German institution Otto von Guericke University Magdeburg (I can't pronounce it well). His research interests span a variety of topics: supply chain network design, sustainable operations, green vehicle routing problems. He is also the director of the PhD program at the Zaragoza Logistics Center, the first center, the one that created the core of the MIT Global SCALE Network. Welcome, Yasel; we are glad to have you here. The next panelist is, let me see if I pronounce it correctly, Çağıl Koçyiğit. She is an assistant professor at the Luxembourg Centre for Logistics and Supply Chain Management at the University of Luxembourg. Her research focuses on optimization under uncertainty, applications in policy design, and learning for optimization, especially for resource allocation, fairness, and equity. Very exciting topics. She holds a PhD from the École Polytechnique Fédérale de Lausanne. So yes, this is a great panel. I will introduce myself as well. My name is Maria Jesús Sáenz. I am the director of the MIT Digital Supply Chain Transformation Lab and also the executive director of the MIT Supply Chain Management master's program. I have been working with the Global SCALE Network since 2003; actually, I started at the Zaragoza Logistics Center, so I know the network very well and I am very proud of what we are doing to shape the future of supply chains. Okay, before starting, let me explain what the MIT Global SCALE Network is. We are a set of centers all over the world: we at MIT are here, together with the Zaragoza Logistics Center, the Luxembourg center, the Ningbo supply chain center in China, and CLI in Colombia, which is itself a network of universities and institutions across Latin America. In total, these are our figures: more than 10 educational programs (master's degrees, executive education, certificates) and more than 80 researchers and faculty from all over the world covering a variety of topics, all of us working in logistics and supply chain. Our main feature is that all of us do applied research; we want to shape the future of supply chains. This is why we work with more than 150 corporate partners. Every year we educate more than 200 students, and we have a rich network of alumni all over the world who are super committed and come back to MIT every single year. Also before starting, I wanted to emphasize that we have a lot of other events. In just a couple of hours, at 11 a.m.,
today we have Dr. Christopher Mejía talking about social-driven supply chain network design: how AI can help bring nutrition to underserved communities. Please go to the CTL events website; there you will find, for example, the POMS conference in Latin America and our annual MIT CTL event, Crossroads. Go to CTL events and please register. We'd love to share our insights with all of you and to discuss your challenges and opportunities. Okay, we have one hour, so we need to watch the clock very carefully. This is our panel dynamic: after these short introductions, we are going to have three case studies. As I told you, we want to make it very practitioner-oriented, very actionable. I'm going to start by talking about how Dell is living this right now (we are working closely with Dell) and how its supply chain is using AI in different areas, especially end-to-end planning. The second case study is from Dr. Yasel Costa of the Zaragoza Logistics Center, as I mentioned. He will talk about bio-inspired AI and the optimization of delivery routes at Samsung. Again, we are bringing in companies to illustrate that this is a reality today. The third case study, by Çağıl Koçyiğit, is about data-driven decisions with AI. She will talk especially about efficiency and interpretability, and about the trade-offs between these two key words for AI. I love it. Then we will have a panel discussion with you. The dynamic is that you put your questions into the Q&A, and we will moderate and read them in order to bring the questions in. I would say that for the last 25 minutes we want to have time for a discussion with you, so we are going to try to keep our presentations short. So let's start. Some weeks ago here at MIT CTL, all the researchers, around 60 of us, sat together for almost two hours just to discuss: what is artificial intelligence? What do we understand by artificial intelligence? And the beauty of it is that we couldn't agree; we couldn't reach a consensus on one single definition of AI. This makes sense. Why? Because AI can be interpreted as an aspiration about what could be. So it's very important, for whatever kind of AI application we are doing, to define in advance what we mean by AI. This is why we decided to agree on a definition of AI, a focal point, for the three applications that we are going to share with you. And this is what we think is a good understanding of AI for the purpose of today's webinar. I am sure you have other definitions in your own applications, and that's totally okay. Please don't interpret this as the definition; we don't want to bring the definition here, because the field is now so broad that it is difficult to have one single definition. So, for the purpose of this webinar, AI can be defined as the ability of a machine, an algorithm, or a technology to perform cognitive functions associated with human minds, such as perceiving, learning,
We are emphasizing learning because the three of us are going to emphasize how AI is helping us to learn, where "us" means the organizations applying AI, as well as interacting with the environment, problem solving, and interpreting, among others. So I will start with how Dell is interpreting this, and I want to be quite quick, because we want to keep the webinar agile. Let's start with what we understand by AI-driven digital supply chain transformation. What Dell did here is much more complex than just renewing technology, renewing algorithms, or translating processes into algorithms; it is much more than that, as we will see in the Dell case. So let me start. This is the definition we use in the Digital Supply Chain Transformation Lab at MIT: AI-driven supply chain transformation is the application of AI as a technology (it could be algorithms, or cobots and robots driven by algorithms) that uses data to transition toward a value-driven, end-to-end supply chain. If I have to highlight two key words here, they are value and end-to-end. Value is something that you expect, and sometimes AI helps us to discover it. End-to-end is an aim, a goal; only a few companies are really doing end-to-end, but let's see how Dell is doing it. What we have observed in companies is that there are different challenges and difficulties in applying AI, especially end-to-end, where it is much more complex. The first challenge we observe is the Frankenstein effect: you have different components of AI, typically one AI in last-mile delivery and one AI in forecasting, and they are not talking to each other. They are isolated pieces that need to be polished and polished in order to form a more cohesive view of AI. This is a journey; it's not something that happens in a few months. It requires years. I will share what Dell is doing; Dell has been working with this approach for, I would say, five years now, and they continue working toward this vision. Beyond the Frankenstein effect, there are other challenges. Technocentrism is when a company focuses too much on technology, so that everything revolves around the technology: "let's translate the way I optimize my last-mile delivery cost according to how I am running my last-mile delivery right now." That is the wrong starting point, because the idea should be to envision how you want to do last-mile delivery and then build the algorithm for that future vision, instead of only translating what you are doing right now. Technology can help you be more efficient, but the focal point is not the technology, not "what can technology do for me?" The focal point is how to envision my last-mile delivery process and then how AI can help. It's a completely different vision, and you will achieve more by focusing on that vision. The third challenge is scalability. Companies sometimes build pilot prototypes of AI applications. This is great, but typically these pilots rely on highly motivated people with very clean, available, and granular data, a perfect data set. When you go to reality, all these ingredients are not so easy to get.
So this is why it's important that companies have the capability of scaling up: of being able to prototype and then move the prototype to scale, to more regions, to more processes, to more SKUs, et cetera. The lack of scaling capability is a problem, and we have observed that the most successful companies with AI are comfortable exploring, experimenting, and then scaling up. This is very important. So let's talk about Dell. Dell started its digital supply chain transformation journey in 2017. They were asking "what can technology do for me?" and they discovered that they should instead focus on their vision, their strategy, and their performance expectations. They developed five practices; let's focus on this one: make the right commitment. Let me explain how they deployed AI and, especially, how they connected leadership, vision, and strategy with performance as an anchor point to make AI scale up. For them, making the right commitment meant putting commitment at the north star of their vision, a commitment made beforehand. When they commit to a customer, say 100 laptops for a retailer delivered in, let's say, four days, this is a commitment established in advance. Then they can monitor the commitment, the order, end to end. And after the fact, they can go back and analyze with AI what happened and what could be going on, in a forward-looking way, with root cause analysis, for example. This approach of before the order, during the order, and after the order is very powerful, especially when you know how to anchor the expected performance of that commitment to an order, end to end. So Dell built this kind of loop with AI, and it is really interesting in terms of how they measure performance, both of the business and of how AI is impacting the business, and in terms of how they are scaling it up, expanding the reach and the effect of AI. First, they started with value identification. In this case the value expectation, the north star of the AI-driven supply chain transformation, was commitment. They wanted to measure, to quantify, commitment with a KPI, and they developed a KPI that aims to be end to end: the perfect order index. It is the percentage of orders for which every element of the order meets the commitment, end to end. Take, for example, a logistics service provider that is preparing an order: what is the percentage of the time that it stays within the committed terms? The perfect order index is very end to end, because you decompose the order into its different components while it is being prepared, but also in advance and forward-looking. So this is how they quantified value in terms of a KPI, the perfect order index. It is not a simple on-time-in-full measure, because what they are doing is splitting it into the different components contributed by each participating stakeholder. Then comes value creation: the key learning indicators. What are they learning, how are they activating these loops, how are they scaling up, how are they progressing with artificial intelligence? Remember, AI learns, or is expected to learn.
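To make the perfect order index concrete, here is a minimal sketch in Python. It is an illustration of the idea only, not Dell's system; the stakeholder names, the day counts, and the OrderComponent structure are all hypothetical.

```python
# Minimal sketch of a perfect-order-index (POI) calculation.
# Illustration only; all names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class OrderComponent:
    stakeholder: str       # e.g., factory, 3PL, last-mile carrier
    committed_days: float  # what was promised before the order
    actual_days: float     # what actually happened

def perfect_order_index(orders):
    """Share of orders in which EVERY component met its commitment."""
    perfect = sum(all(c.actual_days <= c.committed_days for c in order)
                  for order in orders)
    return perfect / len(orders)

def stakeholder_score(orders, stakeholder):
    """How often one stakeholder met its own piece of the commitment,
    the basis for the incentive (value appropriation) step."""
    comps = [c for order in orders for c in order
             if c.stakeholder == stakeholder]
    return sum(c.actual_days <= c.committed_days for c in comps) / len(comps)

orders = [
    [OrderComponent("factory", 2, 1), OrderComponent("3PL", 2, 3)],  # 3PL late
    [OrderComponent("factory", 2, 2), OrderComponent("3PL", 2, 2)],  # perfect
]
print(perfect_order_index(orders))        # 0.5
print(stakeholder_score(orders, "3PL"))   # 0.5
```

A key learning indicator can then be built on top of this, for example as the period-over-period delta of the index, which is exactly where the next step of the loop picks up.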
So the key learning indicator is important. It is, for example, the delta: how much did we increase the perfect order index, or how much did it decrease? Say we had a problem with a logistics service provider that caused a delay, so the index is decreasing. Why did this happen? A decrease in POI should trigger a root cause analysis, precisely to avoid it happening again in the future. There are also net promoter scores and other typical KPIs. Then they need to translate all of this into money (of course, money is important) and map the AI money map, end to end: what are the different impacts of the perfect order index? If the commitment changes, how could that turn into losing money, or the opposite: if we are improving, how much are we saving? And value appropriation is very important. We are talking about supply chains, so how do we incentivize our stakeholders? For example, the logistics service provider that is always on time, as expected: because we monitor its contribution to the perfect order index, to POI, we can incentivize that adherence to the commitment. And then the loop starts again. AI is present in several facets here: before making the commitment, because we predict capabilities; during the commitment, because we execute in real time and decide what prescriptive actions to take; and after the order, because we can explore future scenarios with root cause analysis. All of these are predictive capabilities: forecasting demand, of course, but also forecasting lead times, root causes, et cetera, and also, for example, resilience, monitoring the risks. With that, I am finishing. I told you we wanted to be quick and dynamic, to make sure you stay engaged. Let's go to the second case with Dr. Yasel Costa. Doctor, ready? Maria, yes. Can you see my screen? Yes, perfect. Excellent. So thank you so much, Maria. I'm so glad to be here joining you; I've been learning a lot from your presentation. I do have another definition of AI. It's certainly not science fiction, right? And I do like that word about learning. We consider that AI is constantly learning from different kinds of sources, some kind of creative learning, right? And this is exactly the point of my presentation today, in an applied context. When you look at the different AI-based algorithmic proposals, they are linked, in my understanding, with two fields: mostly knowledge discovery, but there are also applications in the context of optimization, such as traditional problem solving. And the learning I mentioned basically has to do with natural inspiration. When we learn from nature, we abstract the most creative knowledge, and we have been doing that repeatedly for a very long time and across different industries: the manufacturing sector, the biological sector, the pharmaceutical sector, and so on. From that learning come many application contexts. In the field of knowledge discovery, for instance, one of the most famous examples is neural networks, a natural inspiration related to the bioelectricity that flows through our brain. There, we simply want to understand what the best output is, considering multiple inputs.
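A quick hypothetical sketch of this input-to-output learning, and of how it compares with plain regression on a nonlinear relationship. The data is synthetic and the model sizes are arbitrary choices, not anything from the talk.

```python
# Hypothetical comparison: plain linear regression vs. a small neural
# network on a nonlinear target. Synthetic data; arbitrary model sizes.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 2))
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.normal(size=400)  # nonlinear target

lin = LinearRegression().fit(X[:300], y[:300])
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   random_state=0).fit(X[:300], y[:300])

print("linear  R^2:", round(lin.score(X[300:], y[300:]), 2))
print("network R^2:", round(net.score(X[300:], y[300:]), 2))
```

On a target like this the network's held-out R² is far higher, which is the sense in which the bio-inspired model is "superior" here; the simpler model, as Çağıl discusses later, remains the easier one to interpret.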
So when we compare that with traditional regression analysis, the AI-based algorithm is simply superior, right? And I would say that the most effective AI applications all have natural inspiration in some part of their source. In many cases, when we hear people getting excited about ChatGPT: what do we have behind the ChatGPT algorithm? It's clear: multiple kinds of neural networks trained on billions and billions of cases in the knowledge base. That's where you constantly see multiple applications of artificial intelligence, and particularly bio-inspired algorithmic proposals. But this is not all. There are many other application contexts where we can see different sorts of natural inspiration. A very well-known one is evolutionary algorithms, particularly the first one proposed, the genetic algorithm, all inspired by the evolution of species, where the best-adapted individuals prevail. In our case it is no longer individuals: when I put this into the context of logistics, it can be seen as a distribution problem where we take two different solutions and cross them in order to get a better-adapted solution, which in our context means less travel distance, right? For instance, one of these parent solutions has four vehicles in its fleet and the other has three, and the crossover gives us a better-adapted solution, with better total travel time, that has a fleet size of three. This is what we were trying to do, but with a different source of inspiration. Ant colony optimization is also well known, and it is truly inspired by the behavior of real ants, which constantly find the shortest path between the nest and the food source. Artificial intelligence in this context reveals a nice feature, which is swarm intelligence. A single ant (it basically doesn't matter whether it is real or artificial) makes a random selection of a path. Here we clearly see that the shortest path is this one. Once an ant discovers, or randomly selects, this shorter path, it leaves a trail of pheromone, and the next ant will tend to take the trail where the pheromone smell is strongest. That kind of collective behavior, what is technically called swarm intelligence, helps us a lot to solve transportation problems, which can be described, for instance, by a matrix like this one. And if we have multiple ants departing from different cells and following all the subsequent stages, then we explore a greater area of the solution space, the solution space that traditionally describes transportation problems, as well as resource assignment problems and even forecasting problems like the ones Maria mentioned in the field of knowledge discovery. We used this inspiration to solve a realistic problem, set in Chile, in Santiago de Chile particularly. The problem was a daily delivery process with 350 customers geographically spread out, and, as I mentioned, there was a 3PL that was hired to do the product deliveries in a last-mile context.
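Before the case details, here is a compact sketch of that pheromone mechanism in Python, on a toy single-vehicle tour. It is an illustration under assumptions: the parameters (alpha, beta, evaporation rate, number of ants) are arbitrary, and the real engagement used a much richer routing model with capacities and time windows.

```python
# Toy ant-colony optimization for a single-vehicle tour (TSP-style).
# Illustration only; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
n = 12                                    # customers
pts = rng.random((n, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)  # eye avoids /0
tau = np.ones((n, n))                     # pheromone trails
alpha, beta, rho, n_ants = 1.0, 2.0, 0.1, 20

def tour_length(tour):
    return sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))

best_tour, best_len = None, np.inf
for _ in range(100):                      # iterations
    tours = []
    for _ant in range(n_ants):
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i, cand = tour[-1], list(unvisited)
            # attractiveness = pheromone^alpha * (1/distance)^beta
            w = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
            tour.append(rng.choice(cand, p=w / w.sum()))
            unvisited.remove(tour[-1])
        tours.append(tour)
    tau *= (1 - rho)                      # evaporation
    for tour in tours:                    # shorter tours deposit more pheromone
        L = tour_length(tour)
        if L < best_len:
            best_tour, best_len = tour, L
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            tau[a, b] += 1.0 / L
            tau[b, a] += 1.0 / L

print(best_len, best_tour)
```

Across iterations, shorter tours accumulate more pheromone and come to dominate the ants' choices; that is the swarm-intelligence effect just described. The case that follows required a multi-vehicle version of this idea with time windows.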
And in that regard, they charged money based on, among other things, the fleet size, and they had a heterogeneous fleet of vehicles, what is traditionally called vehicles with different capacities. And it was very challenging. Why? Because when they made a contract with the customer, the customer clearly emphasized one of the most difficult constraints in this problem setting: the time windows. Very tight time windows impose a very hard constraint on the optimization process. And sometimes, when you have demand peaks during the day, with customers appearing that you wouldn't imagine, the problem is no longer only stochastic; it also has a dynamic structure, like the so-called dynamic vehicle routing problem, where customers appear and disappear and therefore the structure of the problem changes over the planning horizon, right? And at the moment this was examined, the scheduling of the process was manual, which definitely takes a lot of time, not even a reasonable time for such an operational decision. So that's pretty much the idea of the problem. And there were certainly penalties when deliveries were late, and delays happened most of the time. And we are talking about a problem that is not considered small-scale in the field of VRPs: more than 50 nodes, more than 50 customers to deliver to, is considered a problem of substantial complexity, right? So this is the way one day of deliveries looked. As I mentioned, the 3PL charged based on the fleet size and many other things, but particularly the fleet size. The way the business made the decision, it took eight trucks to complete the workload they had at the moment. But when we used our ant colony optimization, AI-inspired, we reduced the fleet size by 50%, not to mention a substantial reduction in total transportation cost, about 38%. My time is about to be gone, so this is a summary for more days of route planning: in total, over just 10 days, we could save about 24% on a cost metric that I don't have time to describe, with a substantial reduction in fleet size as well. And one of the most important reductions was this: compared with exact methods, which mostly find the optimal solution, we substantially reduced the computational time. And compared with the time they were habitually using to schedule the vehicles, it was also a substantial reduction. So this is one example of how bio-inspired methods can be applied to a very frequent problem in the logistics context, which in this case is transportation. I hope you liked it, and I'll hand it over to my dear colleague. Thank you very much, Maria. Thank you. Let me share my screen. Thank you, guys. Do you see my slides? Yes. Thank you. Can I just start? It's fine. Hello, everyone. I'm going to talk about the interplay between efficiency and interpretability when considering data-driven decisions with AI.
Even though there is typically a trade-off between the efficiency and the interpretability of AI decisions, I'm going to show you that achieving both simultaneously can be possible in practice, by discussing a recent project of mine and my collaborators. The project I'm going to discuss is not directly related to supply chains or logistics; it's a resource allocation problem. But I'm going to argue that a similar data-driven solution approach can be used for other resource and capacity allocation problems, including those that arise in supply chains and logistics. Okay. When we talk about decisions, including data-driven decisions with AI, we want them to be both efficient and interpretable. Efficiency typically involves maximizing payoffs while minimizing costs. And interpretability means that humans can understand and explain how decisions are made. To emphasize: interpretability is not just about understanding the models used; it is important to understand the decisions themselves. This is important in practice because it allows us to trust the decisions made by the models, making their implementation easier for us. Actually, in my interactions with practitioners from various fields, including healthcare, logistics, and energy, this desire for interpretability emerges as a common theme. Practitioners always express that they do not want decisions made by a black box; they want to understand the decision-making process. Besides enabling trust, interpretability is also important for human-machine collaboration, which is arguably safer than relying solely on machine-made decisions: if humans can understand the decisions, they can make adjustments as needed. The efficiency of AI is unquestionable from my point of view, but interpretability raises concerns. For example, you may be aware that there are ongoing lawsuits against various institutions, including some law firms and banks in the US, raising concerns about AI-made decisions allegedly discriminating against people based on protected features such as race. It is really important to understand the decisions and proactively prevent any potential discrimination or ethical issue. There is typically a trade-off between the efficiency and the interpretability of AI decisions. The more advanced the model you use, the better the decisions it tends to offer. On the other hand, more advanced models and their decisions (consider models such as gradient boosting and neural networks for forecasting) are less interpretable than simpler models such as linear regression or decision trees. In the remainder of my talk, I'm going to present a recent project of mine focusing on learning policies for allocating scarce housing resources to people experiencing homelessness in LA. This project isn't directly about supply chains or logistics, but I'm going to argue that the solution approach can be applied to other resource and capacity allocation problems; in fact, we are trying to establish a similar data-driven solution framework for freight shipping revenue management at the moment. Okay, so the work I'm going to talk about is inspired by housing allocation for individuals experiencing homelessness in LA County.
According to the Los Angeles Homeless Services Authority, LAHSA, there are more than 75,000 people experiencing homelessness in LA, whereas the availability of permanent housing units used to support these people is extremely limited. LAHSA currently uses a vulnerability tool to decide how to prioritize people for different housing resource types. When an individual seeks help, a survey is completed for this individual, containing questions such as: how long has it been since you lived in stable housing? These survey responses are then used to calculate a vulnerability score for each individual and to make decisions about prioritization. Unfortunately, the current system is linked neither to outcomes nor to capacity limitations. Our objective in this project is to use the data that is already there, specifically the data from the LA County Homeless Management Information System database, to learn optimal policies for the online allocation of scarce housing resources to people experiencing homelessness, maximizing outcomes (specifically, exits from homelessness) while considering capacity limitations and fairness with respect to protected features such as race. We propose a very simple queuing policy. This policy establishes separate queues for each of the housing resource types. When an individual arrives at the system seeking help, the policy assigns the individual to the queue for the resource that maximizes their estimated likelihood of exiting homelessness if they receive that particular resource, minus the opportunity cost of assigning that resource. The likelihoods and opportunity costs are estimated from the data we have. We can use interpretable parametric models, such as logistic regression, for estimating the likelihoods, for example; we showed on real data that these types of models actually perform well. To ensure different notions of fairness, we can adjust the opportunity cost for different groups, for example lowering this cost for minority groups. We actually managed to prove theoretically that our proposed policy is optimal in the long run, meaning as the number of individuals arriving at the system grows. But let me show you our results on real data, because we also tested our policy there. This plot shows the proportion of the population with a positive outcome, specifically the proportion that exits homelessness, on test data under historical allocations and under our proposed policy. "Outcome minority priority" here represents our proposed policy where we enforce fairness for outcomes. This means that we want outcomes for minority racial groups to be at least as high as those for the majority racial groups, and in this case we consider Black/African American, Hispanic, and other groups to be minority groups. What you can see from this plot is that, under our proposed policy, outcomes for almost every group improve in comparison to the historical allocations, and the overall improvement roughly amounts to 300 more people exiting homelessness per year on the test data. Due to limited time, I can only give you a glimpse of our work and findings, but if you're interested, I want to share this QR code that takes you to our paper. In addition, I would like to mention that my coauthor, Phebe Vayanos, recently gave a TED AI talk on this topic, so if you are interested, I encourage you to watch the recording of her talk, which is available from the TED web page.
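To give a feel for how simple and readable such a policy is, here is a minimal sketch in Python. It is a reconstruction of the rule as described in the talk, not the authors' code: the resource names, feature dimensions, cost values, and fairness discount are all hypothetical, and the data is synthetic.

```python
# Sketch of the queue-assignment policy described above. Illustration only:
# resource names, costs, and the fairness discount are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
RESOURCES = ["permanent_housing", "rapid_rehousing", "transitional"]

# Hypothetical historical data: one fitted logistic model per resource type,
# estimating P(exit from homelessness | survey features, resource r).
models = {}
for r in RESOURCES:
    X_hist = rng.normal(size=(500, 6))           # survey-derived features
    y_hist = (X_hist @ rng.normal(size=6) + rng.normal(size=500)) > 0
    models[r] = LogisticRegression().fit(X_hist, y_hist.astype(int))

# Opportunity cost of consuming one unit of each scarce resource (estimated
# from data in the study; plain made-up numbers here), plus a fairness
# adjustment that lowers the cost charged when serving a minority individual.
opportunity_cost = {"permanent_housing": 0.30, "rapid_rehousing": 0.15,
                    "transitional": 0.05}
fairness_discount = {"minority": 0.8, "majority": 1.0}

def assign(features, group):
    """Send the arriving individual to the queue with the best net score:
    estimated exit probability minus the (adjusted) opportunity cost."""
    def score(r):
        p_exit = models[r].predict_proba(features.reshape(1, -1))[0, 1]
        return p_exit - opportunity_cost[r] * fairness_discount[group]
    return max(RESOURCES, key=score)

print(assign(rng.normal(size=6), "minority"))
```

Every ingredient is inspectable: the logistic coefficients say which survey answers drive the estimated exit probability, and the costs say why a scarce resource was withheld, which is exactly the interpretability point.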
Okay, so to conclude: I presented a data-driven solution approach for resource allocation that is both efficient and interpretable. Even though the housing allocation problem isn't directly related to supply chains or logistics, this solution approach can be applied to other resource and capacity allocation problems; in fact, the approach itself is inspired by bid-price policies used in network revenue management. As I mentioned before, with collaborators from ACL we are currently establishing a similar solution approach for freight shipping revenue management, and I anticipate that this approach could incorporate sustainability targets in a way similar to the fairness integration. For example, in procurement or supplier selection, targets of the sort: I want at least 25% of all purchased goods and services to come from green suppliers. This is the end of my talk. Thank you very much for your time and attention. I would be happy to answer your questions during the discussion part. Thank you very much, Yasel and Çağıl; it has been great. As good logisticians, we are right on time, which also shows our commitment. We have tons of questions, so I'm going to try to go through them one by one. Let's try to be agile in answering quickly so we can get through as many as we can; this is part of the idea of the webinar. Dr. Costa: Sunita Ray recommends your ant colony optimization Python course that you gave at MIT some years ago. Thank you, Sunita. Are there any more of these, as popular as this one? Oh, yes. Well, for the sake of simplicity and to save time, I did not present here the progress we have made, but we have proposed other variants that explore more areas of the solution space. To make it simple: we have other variants that examine greater areas of the solution space and provide better solution quality. Someone else was asking whether it beat CPLEX. Of course it doesn't beat CPLEX, which is an exact solver, but it was very close; in many instances it was very close to the absolute optimum. And the computational time was pretty much the same, although you might think that exploring more costs more time. No; there have been a lot of improvements since that time. Thank you for that question. I'm glad you recalled my talk at MIT. The next question: how can we prevent or filter bad data that is fed to the AI, and what can we do to undo it? An example of bad data could be a feedback loop. Who wants to answer this? I can answer what I would do. Basically, there are a lot of methods in AI and machine learning that deal with noisy and bad data, to robustify the solution against such noisy or bad data. Well-known methods include regularization and robust optimization. So there are available methods to prevent such cases. A priori, it can be difficult to say what is noisy or bad; there are methods for detecting that as well, but even if you are not able to tell, as I said, you can robustify your solution against certain noise. Yeah, thank you very much. Another question, on Dell; I think this is for me. How does pricing analytics interact or align with this end-to-end value chain? Will this happen during sales and operations planning? Pricing analytics should be a component of the end-to-end supply chain: if you have an order, at the end of the day the order should have a predicted price.
It will be a price that is offered in the commitment, and then, forward-looking, you can also predict how the price could change. So it is not purely a supply chain function; it is also a function of marketing, commercial teams, et cetera. But definitely, in order to measure the trade-offs with AI for cost-to-serve, you need this as an input, because otherwise it could create distortions if the price changes based on something unexpected, for example commercial promotions. The forecasting should be able to understand why the price is changing; maybe it is changing because of an exogenous variable, say a certain disruption that shifts the price. The more you can capture information that is exogenous to your supply chain (how the world is moving, what is happening out there, things that do not come directly from your supply chain), the better the predictive capabilities you can build with these exogenous factors. It's a very general answer, but you should feed price information into your models, because it is also the way to monitor cost-to-serve trade-offs, even though pricing is not typically a supply chain decision. Okay, the next one, from Julia Zhao: what is the difference between AI and data science for supply chains, in your understanding? Wow, this is good. Yasel, do you want to answer that one? Well, these are overlapping fields, honestly. Whatever you do in terms of knowledge discovery, which in my understanding is the more comprehensive terminology, whether you are using a neural network or some other kind of inspiration, natural or not, you can use it in data analytics for whatever application context in the field of supply chain management. Maybe if you had asked this question 10 years ago, we would have said that data analytics clearly meant regression analysis and the traditional, more mathematically oriented methods, while AI meant the more computationally oriented ones, right? But nowadays it's hard to discriminate. Exactly, this is difficult. This is why at the beginning we defined what we mean by AI and how we deploy these cognitive functions, especially learning. Does data science learn? Of course. So again, how do you discriminate? This is why one single definition of AI does not work. Sorry, it's not a yes or no; it depends on how you apply it. Whatever you are doing, if it impacts your performance and allows you to learn, to transform your supply chain, to be better, or to test new business models, then it is good. Okay, the next one is for me, from Amit Ray. Thank you, Amit. Can you help us understand how AI helps to improve end-to-end visibility? Traditionally, companies use ERPs and other systems for creating visibility; how can AI help further? These are very good questions. The ERP plays a key role, but what we have observed in the most successful cases is that visibility is much more than what you have internally in your ERP, much more than that.
I mean, advanced companies are using external signals: not only the internal signals coming from your ERP or from, say, your manufacturing operations, but external signals about what is going on in the world that can help contextualize your operational actions. Contextualization is another beautiful feature we expect from AI, not only interpretability, as we presented, but also contextualization. So end-to-end visibility is not only internal, within an ERP. Let me give an example. There are startups that intensively collect data using AI, knowledge graphs, and natural language processing about what is going on with suppliers all over the world. It's real-time information: for example, ESG scores, your sustainability scores. You can feed that into your internal system, whatever the platform (an ERP or, say, a procurement tool), and incorporate this information about the current status of current suppliers, or potential future suppliers, in order to decide what your best set of suppliers would be, for example when you are launching a new product, a new business model, or a new action in the market. So again, end-to-end visibility is much more than ERPs; that is what we observe in the better companies. Related to this, there is another question in the chat about how we can extract information that we maybe don't track. There are some beautiful AI-based applications, from startups doing beautiful work, that, for example, scan all the emails you exchange using natural language processing and extract the key insights from those emails in order to enrich visibility. So it is not purely the structured data in your ERP, order management system, warehouse management system, or transportation management system: you are extracting data that is not structured, data from the decision makers, from emails, for example, to feed how a process runs or how to standardize it. Again, this is the beauty of AI: it can learn from structured and from unstructured data. That is the power; you can transform all your decisions and everything that is going on into the language of data. For some companies this may sound like science fiction, but for others it is a reality; they are playing with these toys in order to build more and more end-to-end visibility. Let's go with the next one. At the company I am working for, we are going to implement new demand and replenishment software that already incorporates AI algorithms. One of the challenges we face is that the maturity level of our data is not what is expected for this type of software. Welcome to the club; we are all in the same situation. How can we match the company's need to implement this type of software with the low reliability of the data? So, who wants to answer this question about replenishment?
Let me just say that this is perhaps related to some other questions I went through, which asked about applying the algorithm I proposed to other application contexts, like inventory management or the resource allocation you were mentioning. Of course, for whatever problem you can model as a network, the traditional ant colony optimization can be used, and in that particular case for replenishment too. There are even variants for continuous optimization where you could easily apply it. So maybe that is not exactly this question, but I went through those questions to save time, Maria. Maybe I can answer the question a little bit. Data is very important in the case of AI, but there are also AI models that generate data from limited data. That could be one solution. I am not immediately sure it would apply to the particular case here, but you have probably seen, for example, tools generating photos of people, or of dogs and cats, that are not real photos; they just learn from the photos fed to the models and generate similar ones. Similar approaches could be possible in the case of limited data as well: through some simulation, you could generate more data that could be useful. Synthetic data, yes. In that regard, and related to other questions, don't forget another approach that is also popular right now: possibilistic distributions. With data scarcity and some judgmental opinions, you can develop what are called fuzzy inference systems to translate judgmental opinions and various kinds of information into numeric ranges, which you can then use in subsequent work. Good. From Miram Bhatkar, something about demand forecasting: is there a percentage range of improvement that we can achieve from using a demand forecasting system that applies AI, compared to another system that does not? I will start answering this, because we have been doing a lot of work on AI/ML demand forecasting. My recommendation is that you contextualize and customize the way you do demand forecasting. Just plugging in an available plug-and-play software model could be good, but try to do some customization around what you need. It is not only the software you buy from a vendor; it is how you include your own features and behaviors, not only from the data, but also, for example, exogenous factors that could affect your demand. And it is true that the most sophisticated AI/ML models do not always provide the better results in demand forecasting; several studies show that in certain contexts, traditional demand forecasting with the right setting can bring very good results. So you need to work hard to contextualize, to properly input your context and your expectations. In the case of Dell, for example, commitment was very important, so they were doing demand forecasting and also lead-time forecasting, and they were actually combining the two in certain contexts.
So this means that your demand forecasting can be richer if you input and align it with more features that affect demand, and also if you go upstream to other effects that create uncertainty in your demand realization. At the end of the day, there is no single recipe, and I think there is no single answer. We should not trust an answer of the sort "you can increase accuracy by 5.5% if you apply this demand forecasting model versus that one." What we have been doing in our lab is creating automated systems that test different kinds of AI/ML models and then compare and contrast them, in order to learn not only how each model can best adapt, but also which model is best for the different circumstances and contexts you want to predict. Yeah. Any input here from either of you? No? Okay. So thank you, Hernan, big regards. Feel free to jump in if you see a question you feel comfortable answering, because I am just following the queue. Yasel, since you are also reading: any questions you want to answer? Well, there are some linked to the part you mentioned. It is hard to generalize, to say that every time you use, say, a random forest for estimating customer demand, it always improves on traditional approaches by some fixed amount. It is very hard to generalize. What I do know is that under certain circumstances there is no way to beat a neural network, for instance; this has certainly been formalized. There is a huge variety of application contexts. If someone wrote a book saying "under these circumstances, here is a ranking from best performance to worst performance of these AI-based methods," that would be a very nice body of knowledge to put out there. But it is hard to generalize; it is very hard to generalize. I don't see any other question for me, so I'm okay. Yeah, I think it is hard, and I would even say dangerous, to expect to generalize. Every context is different, because your expectations are different and your business runs in a different way. So put effort into contextualizing. Çağıl, any question and answer from you? I see a couple of questions for me. One interesting one: to prevent ethical concerns associated with AI, talking about race, for example, would it be sufficient to eliminate the corresponding information from the data, ensuring that the AI doesn't use this information? This is a good question, actually, because I feel like some people have this perception, but it is not necessarily true: even if you remove race from your data completely, there may be other information that is highly correlated with race itself. So it does not guarantee that your AI won't be using race to make decisions; removing the feature is not sufficient. No, I think this is great. So again, thank you everybody for your time, for being with us this hour, and especially thank you to Yasel and Çağıl for your very insightful contributions. Thank you also to the marketing and communication team at MIT for being with us and helping to support this. And at 11 a.m. today we have another event; it is aimed at our master's community, but you are all invited. Okay, thank you. Have a beautiful day. Bye. Thank you. Bye. Bye, guys.