Thank you all so much for joining our webinar today. This is a dedicated webinar for our partners and prospects on how our forecasting AI models can help you identify early, emerging and unexpected risks and add value to your food safety workflows. We get a lot of questions from our partners on how our AI predictive models work, what the rationale is, and what the accuracy is behind them. So we thought we would take this opportunity to dedicate an hour to discussing the rationale behind our forecasting models and how it is reflected in the Fudakai AI models. During this session we'll focus on the following key pillars. What is the approach Fudakai follows regarding its forecasting capabilities? What are the benefits and use cases for such technology in food safety prevention? How can you use our ingredient and hazard prediction dashboards to extract meaningful information for your supply chain? Could we have a tailor-made forecasting model applied to each company's parameters? And finally, how do our partners experience the use of AI forecasting models in their workflows? With that, I'd like to welcome our speakers: our data team leader and AI expert, Michalis Pela-Costaldino, who will walk you through our models and rationale; Rebecca Ferrer, Senior Manager of Global Food Safety at PepsiCo, responsible for global programs governing food safety culture, health data analysis and risk mitigation of ingredients, who will reflect on how our forecasting models have added value to PepsiCo's workflows; and finally, our data scientist and prediction expert, Costadinos Spechrivanis, who will lead the Q&A session. During the webinar, feel free to share your questions in the chat; we'll collect those and answer them during the live Q&A at the end of the presentations. Please note that the webinar is recorded and the recording will be shared right after the session. And with that, I'm going to hand it over to Michalis. Thank you so much.
Thank you, Marina, and thank you very much for your introduction. Now, in case some of you in the audience do not know who we are: we are Agrino, the data and analytics company using AI to predict food safety risks. And we are here today to talk about these predictions and how they work, specifically within the context of our software-as-a-service solution, Fudakai. This is what we will be focusing on today. Before we start, a couple of things about how our predictive analytics work overall. We are talking about a multi-step process that starts with the data. We first need to choose the data we will use to train, test, and validate our AI models. You are seeing a plural here on the slide because we train multiple AI models and select the one that performs best in order to apply it at scale; by application at scale, we mean applying it to all the ingredients and hazards available in our system. As I said, though, it all starts with data. Data is the underlying engine that fuels our predictions. When it comes to Fudakai, our data platform has collected and analyzed more than 600 million data records coming from various sources, various regions around the world, and covering many different types of data; you can see an overview on this slide. In terms of the actual prediction approaches, within Fudakai we use two different ones. The first relies on forecasting techniques; we will cover it when we talk about our ingredient and hazard prediction dashboards. The second relies on correlation approaches; we will talk about it when we discuss our tailor-made prediction dashboards.
Regardless of the approach, keep in mind that the end goal for each of these prediction approaches is to identify what will take place for a specific ingredient, a specific hazard, a specific region or geography, or any combination of them over the next 12 months. We will focus initially on our forecasting techniques: the ingredient prediction and hazard prediction dashboards. In a nutshell, our approach can be broken down into the following steps. It all starts with a specific product, ingredient or hazard, for which we apply forecasting techniques based on historical time series data on recalls and border rejections announced by public food safety authorities around the world. We use this historical data to apply our forecasting techniques and train our AI models. Based on their outcomes, we use this output, what our most accurate models believe will take place, to perform a risk assessment in the future for a specific ingredient or a specific hazard. And finally, since we are talking about dashboards, the end goal is to showcase the results of these AI models in a meaningful way. Something you will encounter throughout this presentation, and while using our system, is the core concept of accuracy. So let's spend a moment to discuss how we calculate accuracy within our system. When it comes to training a new AI model, the training actually takes place twice. First, we train our models after extracting the past 12 months of data from the dataset. The model then attempts to predict what has already taken place. By comparing what our models believe has already taken place with what has actually taken place, we use these two numbers to calculate the accuracy.
If it is accurate enough, we add the missing data back into the dataset and retrain our model in order to move 12 months into the future. From a data point of view, keep in mind that whenever you go into any of our prediction dashboards, whether the ingredient or the hazard prediction dashboard, there are roughly 2,400 different models trained for the dashboard outcomes to be presented. Having said all that, let's dive into our ingredient prediction dashboard. Keep in mind, this is a forecasting approach that utilizes historical incidents, food recalls and border rejections of the past to forecast the next 12 months and perform risk assessment in the future. So what is there? First up, there is a dedicated block where, based on each user's customizations, one can quickly identify which of the ingredients in their supply chain are the products expected to have an increasing tendency; basically, which products or ingredients are expected to have more recalls over the next 12 months. And of course, one can dive in deeper. As we said, the ingredient prediction dashboard takes a specific ingredient or product as its starting point and performs a deeper analysis from there. By selecting a specific ingredient or product, one can quickly identify what our most accurate models believe will take place over the next 12 months. If we compare this number to what has actually taken place over the past 12 months, we are able to calculate the tendency. This is how we highlight products and ingredients that are expected to have an increasing or decreasing tendency based on our models' outcomes. For those of you familiar with our prediction dashboards, you know that we do this on a monthly and weekly basis, and you can perform this deeper analysis accordingly.
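The two-pass training just described can be sketched in a few lines of Python. This is a minimal illustration, not the actual Fudakai implementation: `seasonal_naive` stands in for whichever forecasting model is really trained, and the accuracy metric (one minus the relative error of the 12-month totals) is an assumed simplification, since the exact metric isn't spelled out in the talk.

```python
import numpy as np

def seasonal_naive(history, horizon=12):
    """Stand-in forecaster: repeat the last 12 observed months."""
    return np.tile(history[-12:], horizon // 12 + 1)[:horizon]

def backtest_accuracy(monthly_counts):
    """Pass 1: hide the last 12 months, forecast them, and score
    the forecast total against what actually took place."""
    train, held_out = monthly_counts[:-12], monthly_counts[-12:]
    forecast = seasonal_naive(train)
    predicted, actual = forecast.sum(), held_out.sum()
    # Illustrative accuracy: 1 minus the relative error of the totals.
    return max(0.0, 1.0 - abs(predicted - actual) / max(actual, 1))

def forecast_next_year(monthly_counts, min_accuracy=0.8):
    """Pass 2: if the backtest is accurate enough, retrain on the
    full series and forecast the next 12 months."""
    if backtest_accuracy(monthly_counts) < min_accuracy:
        return None  # model rejected for this ingredient/hazard
    return seasonal_naive(monthly_counts)

# Example: 4 years of monthly incident counts with a repeating pattern.
series = np.array([3, 2, 4, 5, 6, 8, 9, 7, 5, 4, 3, 2] * 4)
print(backtest_accuracy(series))         # → 1.0 on a purely seasonal series
print(forecast_next_year(series)[:3])
```

In the real system this two-pass scheme would run once per ingredient/hazard time series, which is how the dashboard ends up behind roughly 2,400 trained models.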
So you can study the monthly distribution of forecasted incidents, highlighted here with the red dotted line, and identify the month or months where you should expect most of the incidents concerning the specific product or ingredient. Something else very interesting is that we also showcase the validation period, highlighted here with the yellow dotted line. We launched our ingredient prediction dashboard back in September 2021, and this yellow dotted line shows what our models believed back then would take place in the future. So you can look at this validation period and identify cases where our model did pretty well, but also cases where our model was not able to perform accurately enough. Apart from that, as we said, we have more than 600 million data records at our disposal, and these are also analyzed at the geography level. So we know which incidents, recalls and border rejections concern fruits and vegetables, and where they originated from. Using this data, we are able to perform ingredient prediction for a specific geography or a specific country, and you can quickly identify which countries of origin for fruits and vegetables are likely to be affected the most, as opposed to others. You can do the same thing with hazards. Still talking about ingredient prediction, specifically for fruits and vegetables, there is a dedicated block in our dashboard that identifies emerging hazards. Let's take a moment to talk about what emerging hazards are for us. Emerging hazards are something very new: unknown hazards, hazards that have taken place for this specific ingredient, fruits and vegetables in this case, for the first time over the past month, but never before in the roughly 40 years of data that we have. This is what an emerging hazard is, and you will see it highlighted as new in our dashboard.
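The "new" flag for emerging hazards boils down to a set-membership check against the full history. A minimal sketch, with an assumed record layout (ingredient/hazard pairs) chosen purely for illustration:

```python
def emerging_hazards(history, last_month):
    """Flag hazards reported for an ingredient in the last month
    that never appeared in the full history (~40 years of records)."""
    seen_before = {(rec["ingredient"], rec["hazard"]) for rec in history}
    return sorted({
        (rec["ingredient"], rec["hazard"])
        for rec in last_month
        if (rec["ingredient"], rec["hazard"]) not in seen_before
    })

# Hypothetical records, standing in for decades of recall data.
history = [
    {"ingredient": "strawberries", "hazard": "norovirus"},
    {"ingredient": "strawberries", "hazard": "pesticide residues"},
]
recent = [
    {"ingredient": "strawberries", "hazard": "norovirus"},       # known before
    {"ingredient": "strawberries", "hazard": "ethylene oxide"},  # first time
]
print(emerging_hazards(history, recent))
# → [('strawberries', 'ethylene oxide')]
```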
But we also apply these forecasting techniques to specific hazards. So out of all the hazards that have taken place for fruits and vegetables, we train our models on each of these time series and highlight here the hazard or hazards that are expected to have an increasing tendency of incidents over the next 12 months as opposed to the 12 months before. You can quickly identify them using the dedicated block in our dashboard. Using this data, we are able to perform risk assessment in the future. And talking about risk assessment, let's spend a moment to discuss Fudakai's risk assessment formula. You can see it highlighted at the top of the slide. It takes as input three different parameters, but let's for now focus on the last one, the probability. The probability is basically how often a specific hazard takes place for a specific ingredient or product category. Using what our most accurate models believe will take place in the future for a specific ingredient and a specific hazard, we use these numbers to apply Fudakai's risk assessment formula into the future, in a forecasting way. How is this visualized within our dashboard? First of all, you can see a quick snapshot of the predicted risk assessment for every hazard that has taken place for fruits and vegetables. That is one thing. The other is that, as we said, we do this on a monthly basis, so you can also study the risk evolution for a specific product category and a specific hazard over the next 12 months. And this pretty much covers our approach as far as ingredient predictions go. Some things to keep in mind, based on the slides we have already seen: we are talking about a forecasting technique that uses as input historical incidents, food recalls and border rejections over the years for a specific ingredient.
The forecasting analysis can take place on either a monthly or a weekly interval. And of course, based on this data, you can dive in deeper on a specific hazard or a specific geography that is of interest to you, where you may be sourcing your ingredients. Again, keep in mind that the ingredient prediction dashboard always has as its starting point the ingredients and products available within Fudakai. Finally, based on our models' outcomes, we are able to perform risk assessment in the future using what our models believe will take place over the next 12 months. And that's it about ingredient predictions. Let's switch our attention to our hazard prediction dashboard. As you will see in the slides to come, this is a very similar approach. Again, it is a forecasting technique using historical data coming out of recall and border rejection announcements. However, the main difference here is that we use a specific hazard or hazard category as the starting point. So you can investigate which hazards are expected to have an increase over the next 12 months, or which products are likely to be affected by a specific hazard over the next period of time. The analysis is very similar. First up, you can identify what our models' outcomes are for a specific hazard or hazard category. In the examples to follow, we have selected chemical hazards as an example, and you see here what our most accurate models believe will take place for chemical recalls and border rejections over the next 12 months. By comparing this number with the 12 months before, we are able to calculate the respective tendency. And again, similar to the approach we follow for ingredient predictions, we do this on a monthly basis. Highlighted here with the red dotted line, you see the monthly distribution of forecasted incidents for chemical hazards.
So, what do our most accurate models believe will take place on a monthly basis over the next 12 months for chemical hazards? You can easily identify peaks or low points. And again, keep in mind that you can also use the validation period. We launched our hazard prediction dashboard back in, I think, June 2022, and you can see what our models believed back then would take place for chemical hazards, compare it with what actually took place, this blue line here, and evaluate the results yourself. Similarly to what we do in our ingredient prediction dashboard, you can also dive in deeper on a specific geography or region. You can see this on the chart on the left: for this to take place, we train our models specifically on data coming out of each of these regions, for chemical hazards in this case. So you can quickly identify: a stronger red color means more cases, a lighter red means fewer, all of it based on our most accurate models' outcomes. Another interesting thing in our hazard prediction dashboard is the outbreak block. This is one of the two places within Fudakai where you can encounter outbreak data. There is a dedicated block within our hazard prediction dashboard that outlines and showcases chemical-related outbreaks announced worldwide. You can study them in detail, study the countries that have been affected by an outbreak, or see the daily distribution of them. Apart from that, and similarly to what we do in our ingredient prediction dashboard, you can identify increasing and emerging cases for specific products. First up, the increasing cases: out of all the products that have been affected by chemical hazards, which are the ones that our models believe will show an increasing tendency over the next 12 months? You can see them highlighted in a dedicated block.
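The increasing/decreasing tendency described here reduces to comparing two 12-month totals: forecast versus what was actually observed. A minimal sketch; the 5% threshold separating "stable" from a real tendency is an illustrative assumption, not a documented Fudakai parameter:

```python
def tendency(past_12m_actual, next_12m_forecast, threshold=0.05):
    """Compare what the models expect over the next 12 months with
    what actually took place over the past 12 months."""
    past, future = sum(past_12m_actual), sum(next_12m_forecast)
    change = (future - past) / max(past, 1)
    if change > threshold:
        return "increasing", change
    if change < -threshold:
        return "decreasing", change
    return "stable", change

# Example: 40 chemical-hazard incidents observed, 50 forecast.
label, change = tendency([4] * 10 + [0, 0], [5] * 10 + [0, 0])
print(label, f"{change:+.0%}")  # → increasing +25%
```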
Again, the tendency is based on what our most accurate models believe will take place over the next 12 months, compared to what actually took place over the past 12 months. Combining these numbers, you will see identified here the ones that have an increasing tendency. And I'm sure some of you have already guessed it: the next block is dedicated to products and ingredients that have been affected by chemical hazards for the first time ever over the past month, as opposed to the roughly 40 years of data that we have. There is a dedicated block for that, and you will see it in your dashboard highlighted with the new keyword. Before we move on to our tailor-made approach, let's spend a moment to recap what we have seen so far for the hazard prediction dashboard. Again, we stress that we are talking about a forecasting technique. We use time series data, historical incidents and border rejections, for a specific hazard. And even though it is a similar approach to the ingredient prediction dashboard, the main difference is that the starting point is a hazard category or a specific hazard. The rest of the analysis is quite similar: you can perform a deeper geographic analysis on the predicted incidents for a hazard, identify any products that have been affected for the first time, and so on. Keep in mind again that this is also a place where you can see outbreak data for a specific hazard. And this pretty much covers our forecasting techniques as far as Fudakai goes, but it does not conclude the predictive analytics offered by Fudakai. Another dashboard that we have is the tailor-made prediction one, in which the approach is quite different from the forecasting techniques we just covered. Before we dive into the specifics, keep in mind that here we are talking about a correlation approach.
By correlation, we mean attempting to identify connections, links, between different types of data. You will see in the slides to come that in order for a tailor-made prediction dashboard to be generated, many more data points, data records, and many different types of data are used to generate the final outcome. You will see some differences in terms of risk assessment: the actual formula used by a tailor-made prediction dashboard is different from what is used anywhere else within Fudakai. It can be applied to a specific use case, either a specific product, a specific hazard, or a combination of them. As you can see, we have dedicated blocks to identify outliers or emerging cases. And the final thing to keep in mind for everything in the slides to follow is that we are talking about the implementation of a dedicated model per use case. Specifically, what we are about to see in the next slides is the tailor-made model we built to identify fraud cases happening in beef. As we said at the start, it all starts with data. And indeed, our tailor-made prediction dashboard takes into account many more different types of data. Specifically, in order for the tailor-made dashboard on fraud in beef to be generated, we use production data, trade data, price data, animal disease data, news and media references, and finally lab tests. You can see them quickly highlighted on the chart on the right. A small note here: in order for this tailor-made dashboard to be generated, specifically for fraud cases happening in beef, more than three million data records were used, combined, correlated, and then fed into our model for the final predictions to take place. As we said, we are talking about a correlation approach. So the first step is to ingest the data, harmonize them, process them; but then we need to identify relationships in the data, and this is what the correlation step takes care of.
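The correlation step can be sketched with a plain Pearson correlation across the drivers. The monthly series below are made up purely for illustration; they mirror the relationships discussed for the beef fraud case (production moving with trade, trade moving inversely with price), and the actual system may well use a more sophisticated measure:

```python
import numpy as np

# Made-up monthly driver series for the beef fraud use case:
# production up -> trade up; trade up -> price down (inverse link).
production = np.array([100, 110, 120, 115, 130, 140, 150, 145, 160, 170])
trade      = np.array([ 50,  56,  61,  58,  66,  72,  77,  74,  82,  88])
price      = np.array([9.0, 8.6, 8.1, 8.4, 7.8, 7.2, 6.9, 7.1, 6.5, 6.1])

names = ["production", "trade", "price"]
# Pearson correlation matrix: rows/columns follow the order of `names`.
corr = np.corrcoef(np.vstack([production, trade, price]))

for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i < j:
            print(f"corr({a}, {b}) = {corr[i, j]:+.2f}")
```

On these series, production/trade comes out strongly positive and trade/price strongly negative, which is exactly the kind of hidden relationship the correlation step is meant to surface.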
By correlation, we are talking about how strong the relationship between data types is. For example, if one increases, what does another data type do? Does it increase as well, or does it decrease? We have a specific example here; you can see the tailor-made case correlation example. For instance, you can see the correlation between trade and price: it is an inverse correlation, meaning that if more trade takes place, then the price is lowered, and the other way around. Whereas as production increases, so does trade. The important thing to keep in mind is that this is the second step of our tailor-made prediction approach: attempting to identify hidden relationships between the data. Now let's move on to the risk assessment that we follow. As we said in the introductory slide for the tailor-made prediction dashboard, the risk assessment formula we use here is a bit different from what we use anywhere else within Fudakai. The actual formula is highlighted here on the slide, and you will see two things of interest. First, the risk assessment formula does not take incidents into account at all; incidents are a variable that does not affect the risk in our tailor-made approach. This is what we are trying to showcase, and you will see this in a later slide: we are trying to identify relationships between the risk assessment as calculated by our tailor-made prediction approach and what actually took place, which is the incidents. That is one thing to keep in mind. The other is that this custom risk assessment formula takes into account changes in data, so outliers and out-of-the-ordinary behavior. How does it actually work? We have historical data at our disposal for each of the parameters, the drivers, that we use to perform the tailor-made predictions.
Based on this historical data, for each one of these drivers we train a different forecasting model in order to move 12 months into the future and forecast what the value, the behavior, of each driver will be over the next 12 months. Having all this data at our disposal, we then execute the tailor-made risk formula to assess the actual risk at each point in time. In terms of validation, it is a bit different from what we use in our ingredient and hazard prediction dashboards: to validate the outcomes of this tailor-made model, we use known outbreaks. Specifically, in this case of fraud in beef, we have highlighted the horsemeat scandal, and you can see on the left side of your screen the validation period we are using. The interesting thing is that an increase in the risk assessment, as calculated by the tailor-made prediction approach, was able to identify the actual outbreak roughly a year earlier. And this is something very interesting, because, as we said, incidents are a variable that does not directly affect the risk at all. Apart from that, of course, there are dedicated places where we show emerging risks. By emerging risks, in the case of tailor-made predictions, we are talking about outliers, something out of the ordinary taking place. On the right of your screen, you will see an actual screenshot taken just a couple of days ago from our system, where, specifically for fraud cases in beef, our models believe there is a high risk for the presence of veterinary medicinal residues in beef originating in Uruguay, expected to take place during April. And finally, a demonstration of how the risk assessment, as we calculate it within our tailor-made prediction approach, compares with what actually took place, our incidents. You can see this in the chart on the right.
With red, you see the evolution of the risk assessment based on the approach we described. With orange, you see the incident trend line. And the interesting thing is that, as you can see, in many cases the increase in risk is observed six to 12 months earlier than what actually took place in terms of incidents. So in the case of this tailor-made approach for fraud happening in beef, the model was accurate enough to predict what would take place and identify these changes in risk almost a year before something actually happened, and you can see this at various places in the chart. Now, as we wrap up our tailor-made prediction dashboard approach, what you have to keep in mind is that it is a different approach from our ingredient and hazard prediction dashboards. It is a correlation approach that uses much more data than incidents alone. Since we are talking about correlation, there is a step of correlating the data with one another, identifying these relationships we talked about. We apply a specific risk assessment formula, different from the one used elsewhere in Fudakai, so the risk assessment is performed in a custom way. In terms of success cases and validation, at many points in time it was able to identify outbreaks almost half a year or a complete year before they actually took place. However, keep in mind that for this to be implemented, apart from the actual technology involved and the dedicated work of training and developing the model, there is also domain expertise required to identify which data are most likely to play an important role in identifying these changes in risk early on. And with that in mind, I think that is enough about us. Time to hear from Rebecca Ferrer, Senior Manager of Global Food Safety at PepsiCo, on their use case for predictive analytics in their line of work. Thank you, Michalis.
I thoroughly enjoyed listening to your part of the presentation, because it affirms for me yet again how Agrino continues to be at the forefront of using data science to really help us in this food safety area. What I'd like to talk to the group about today is how PepsiCo is using Agrino for a few use cases: one, to support our food safety intelligence surveillance; two, to help us understand how we can better improve our starter hazard assessment for ingredients and commodities; and three, how we can enrich the hazard analysis for specific ingredients using the information that we get in Fudakai. But first, let me give a little background about PepsiCo, because while you may feel the company is large, what you may not realize is how large we really are, and this is why having a tool like Agrino is so helpful for us. Each day, there are over a billion occasions where consumers are enjoying PepsiCo products in over 200 countries and territories. In our value chain, we work with over 6,000 suppliers, and we procure, manage and use over 15,000 different ingredients. So, looking at this from the perspective of a global food safety function, we understand that we play a really important part in what PepsiCo is trying to do to create a positive value chain. Our mission is to create more smiles with every sip and every bite, and at a billion opportunities per day, we really need to maintain the trust of our consumers. Food safety is very integral to PepsiCo's code of conduct. We need to understand what hazards are present in our starting materials, the materials we use to make the products our consumers love. So first, I'll talk about our food safety intelligence process. We have associates in each sector who are continuously monitoring and responding to food safety risks.
We go into several tools, including Fudakai, to understand what incidents have occurred, and we use some of those predictive models to anticipate what is coming ahead. We really need to be in a proactive stance when it comes to managing food safety risks. Our partners and stakeholders are relying on us to identify what is coming ahead in terms of food hazards. What is it that we can do? Do we need to take action within our supply chain? Do we need to communicate to our suppliers about emerging risks? Do we need to do additional surveillance testing? All of these things are really important, and we rely on the data and the insights that we get through our food safety intelligence process. And when there is a severe action that needs to be taken, we are able to respond very quickly, because we have our internal teams in place working with their cross-functional partners to ensure the necessary steps occur. With our large ingredient base, it would be a monumental task if each of our food safety and compliance experts had to review each and every single ingredient and conduct the hazard analysis. In a historical context, this would have been done in our company, and other companies, by researching lots of different portal sites, scientific references and scientific literature, and Agrino is able to condense all of this into the Fudakai platform. So, we have a process by which we have a starter hazard analysis in place for different types of ingredients and different types of commodities. But we want to make sure, in this ever-changing world, that this starter assessment is continuously updated, that it is kept dynamic. What we know today isn't necessarily what we're going to know tomorrow, and that information has to be kept up to date. So, to support those initial hazard assessments, we are collecting and reviewing the hazard data that we see through Agrino on a continuous basis.
Then, specifically for ingredients, our starter hazard analysis can only do so much. We know that, particularly for chemical hazards, there are incidents that are specific to a region, specific to a geography. And where we really appreciate the partnership with Agrino is that, as a global supplier and procurer of foods, a lot of the commonly used data sources do tend to be North America-centric and Europe-centric. We challenged our Agrino associates to see what other databases, what other sources of data, we could get from countries in Asia, Africa and Latin America, to make sure that we really have as global and holistic an approach as possible to our ingredient hazard analysis. By incorporating those additional sources of data, we have a more comprehensive perspective of what is going on in our supply chain. All in all, I think having this better data leads us to a position of better food safety and better compliance. And those are a couple of the key use cases in how we use Fudakai and the services from Agrino. Thank you. Thank you, Rebecca. And thank you, Michalis, both for your presentation and for sharing the use case. We do have some questions, so I think we should dive into those, and Michalis and Costadinos will help us with them. Also, please feel free to share any questions you have in the Q&A, and we'll make sure to respond to them. To start, the first question is: in which case should we choose a tailor-made model compared with the global ingredient and hazard predictions? Okay, Marina, thank you very much, and thanks to whoever posed the question. It's an interesting one, because, at least in my mind, and Rebecca or Costadinos, please feel free to chime in, it's not that having one cancels out the use of the other. Each of our prediction dashboards covers a different scope.
For instance, with our global prediction dashboards, ingredient and hazard, you can quickly identify, based on past trend and seasonality, what is expected to take place for a specific hazard or a specific ingredient in a geography or country of your interest. Our tailor-made approach, though, adds other kinds of data on top of that, because our global prediction dashboards only take into account historical incidents. Our tailor-made approach can take into account different kinds of data, correlate them together, and signal changes in risk or expected behavior early on. So, at least in my mind, it's not about using one or the other; it's about knowing when to use one or the other. Excellent, thank you, Michalis. The next question is: do we include new testing methods or technologies in the variables that help us develop and improve the models' algorithms over time? That's an interesting question, and I'm not sure I fully understand it, but in case it has to do with the actual technology, the AI methods out there, the quick answer is yes. Our data science team, led by Costadinos, is constantly in the loop, incorporating the newest technology out there, testing it with our data, and making sure that what's available in Fudakai and its dashboards stays up to date with the latest advancements in technology and AI in general. However, if the question has to do with technologies in the food safety sector, so let's say a new analytical method is found and tested on specific ingredients, this is something that could potentially be caught by our tailor-made approach, which uses more kinds of data. I'll give an example: a dataset used in our tailor-made prediction approach could be literature review, for instance, where these bleeding-edge technologies are announced; then, by incorporating them with other historical data and correlating them together, we would be able to perform predictions in the future. Great. Another question here is more of a request.
It would be useful to see how the predictions are tested or performed, to develop a level of confidence. What type of... Sorry, go ahead. No, no, please, if there was another part of the question. Thanks, Marina. Yes, there was one more: what kind of statistical or scientific model do we use for predictions?

Mm-hmm. I guess this refers to our global prediction dashboards. The way that we're validating our predictions was covered in the slide that talked about accuracy; perhaps we can go back there just to demonstrate this as well. The way that we're calculating the accuracy is by training our model twice. The first time, the model does not know what has happened over the past 12 months and attempts to predict what will take place. By comparing the two sets of values, predicted against actual, we're able to calculate the accuracy. Then we add the data back into the mix and train our model again in order to move 12 months into the future. This is how we're calculating the accuracy, and the actual validation can be seen wherever you encounter these yellow dotted lines: this is the validation period, and you can perform the validation yourself using our tool. As for the last part of the question, on the statistical methods or models specifically: what we're currently using is a forecasting model developed by Facebook, and in the coming months we will incorporate an LSTM as well.

Great, thank you. And one last question we have here is: as the algorithm develops and considers new variables, how can users be informed about the model's updates and improvements?

Yeah, so one can be informed of the outcomes of our models in various ways. The obvious solution would be to log into our system once a day, go through each of our dashboards, and quickly identify the peaks in the data, the new hazards, the emerging or increasing hazards, and so on.
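The two-pass validation scheme described above (hold out the last 12 months, forecast them blind, compare against what actually happened, then retrain on the full history to forecast forward) can be sketched in code. Fudakai's production model is a Facebook-developed forecaster; the seasonal-naive baseline, the made-up incident counts, and the accuracy metric below are illustrative assumptions, not the actual setup:

```python
# Sketch of the two-pass validation scheme: train without the last 12 months,
# predict them blind, score the forecast, then retrain on everything.
# A seasonal-naive baseline stands in for the real model so the example
# is dependency-free; the incident counts are hypothetical.

def seasonal_naive_forecast(history, horizon=12, season=12):
    """Predict each future month as the value observed one season earlier."""
    return [history[-season + (i % season)] for i in range(horizon)]

def mape(actual, predicted):
    """Mean absolute percentage error over the validation window."""
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return sum(abs(a - p) / a for a, p in pairs) / len(pairs)

# 36 months of hypothetical incident counts for one ingredient/hazard pair.
incidents = [10, 12, 9, 14, 20, 25, 30, 28, 22, 15, 11, 10,
             11, 13, 10, 15, 22, 27, 33, 30, 24, 16, 12, 11,
             12, 14, 11, 16, 24, 29, 35, 32, 25, 17, 13, 12]

# Pass 1: the model does not see the last 12 months and predicts them blind.
train, held_out = incidents[:-12], incidents[-12:]
validation_forecast = seasonal_naive_forecast(train)
accuracy = 1 - mape(held_out, validation_forecast)

# Pass 2: add the held-out data back in and move 12 months into the future.
future_forecast = seasonal_naive_forecast(incidents)

print(f"validation accuracy: {accuracy:.0%}")
```

The validation forecast plays the role of the yellow dotted line on the dashboard chart, and the second-pass forecast the role of the red dotted line.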
So one way would be logging into our system and going through the specific blocks in your dashboards that highlight these changes. That's one way. The other is through our weekly insights emails, as we call them; these are emails sent out once a week, highlighting the major outcomes and outliers we saw in the past week's data. So basically, you can either log into our system to see whether increasing or emerging risks exist, or rely on the email alerts that you receive from Fudakai.

Perfect. And I think there was one more: if we can go back to the graphs with the predicted values and the past predicted values, and explain the difference between them.

Of course. I think this refers, for instance, to this one. Basically, there are two places where you can see these kinds of graphs in our system: in the ingredient prediction dashboard and in the hazard prediction dashboard, the moment you select a specific ingredient or hazard, a similar chart will be generated. This chart contains three different lines. The blue line shows historical incidents: what took place over the past years, for chemical hazards in this case. The yellow dotted line shows what our models believed, back in the past, would take place in that specific period of time. As we said during the presentation, we launched our hazard prediction dashboard back in June 2022, and ever since then, up until today, it's been performing these forecasts. You can see the historical forecasted values over the months appearing in this yellow dotted line, and this is what we call the validation period. So by comparing the blue line and the yellow dotted one, you can actually assess the validation yourself: see points in time where our model did pretty well, but also cases where our model was not accurate enough.
That is as far as historical assessment and validation go. And finally, the last part of the chart you will see is this red dotted line, which shows the predicted, the expected forecasted incidents for chemical hazards. You can use it to identify peaks in time, or lower points in time, over the next few months.

And one more we have here: how do you select the drivers in the tailor-made predictions?

Okay, it's an interesting question actually. The first part, identifying the data that are likely to have an effect in this tailor-made approach, happens through domain expertise. So the first step, when we're about to launch a new tailor-made prediction dashboard, is to sit with experts from the companies that are interested in this tailor-made prediction dashboard and discuss together which data types are most likely to have an effect in a tailor-made prediction approach. After identifying the data, the second step in a tailor-made prediction approach takes place: the correlation step. This is where we're actually able to calculate, mathematically, how strongly one data type correlates with another. So for instance, disease cases and disease deaths have a very strong correlation; you can see this in these numbers here, very strong, in red. Other data types may not correlate as strongly. So basically, identifying important data types in a tailor-made prediction approach is a two-step process: first, with domain expertise, identify potential data types that may be of interest; and then, using the mathematical approach of correlation, identify which ones actually show a strong relationship with one another.

Thank you. And with that, we'll conclude our Q&A. Thank you so much to our speakers for the presentation and for the Q&A, and thanks to all of you for joining the webinar today. We hope you had a chance to get a better understanding of our AI models and the value they could bring to your organization. I just want you to keep in mind that, following the webinar, you will receive a separate email with the recording of the session. You will also receive an email with a two-minute survey that we would really like your input on; if you have the time, we would really appreciate it. And finally, in the same email, you will also have a link where you can request a four-week free trial of any of the predictive dashboards that might be of interest to you, so you can try them out. Thank you all for joining. Thank you so much. Thank you very much, everyone.