So hello. Good afternoon, good evening, and potentially good morning. It's my absolute pleasure to act as chair for this webinar. The title is "The Future of Food Safety: Can We Really Trust Predictive Analytics?", and I think that is a really, really important question. So first of all, I'll introduce myself. My name is Chris Elliott. I'm professor of food safety at Queen's University Belfast and also professor of food security at Thammasat University in Thailand. And I've had a very long interest not only in predictive analytics but in artificial intelligence generally — probably 10 years of asking lots of questions about this kind of emerging technology. There has been a lot of coverage in the media about artificial intelligence. Just in the UK alone in the last 24 hours, we've had two really positive news stories about AI: one about the development of new antibiotics, and a second about somebody who was totally paralyzed being able to walk again for the first time in many years. But we've also had media coverage about the potentially catastrophic role of artificial intelligence. So this is a wonderful opportunity to talk about AI and actually demystify it for many people, because what we're thinking about here is a very positive application: how we can prevent major food safety incidents from happening. There are about 100 people registered for this webinar, which just shows you how many people are interested not only in food safety but in ways of moving from being reactive to proactive. So this is the idea of thinking about how we predict the future. We jokingly call it the digital crystal ball, but actually it is about data — about data sciences.
And I think you're going to get a lot of information about how these data sciences actually work and function: the way that models are built and how they are checked in terms of their robustness. Also, on the theme of data, it's really important to think about data in many different ways. How do you go about collecting it? How do you verify the data is accurate and correct? But also, how do you trust where your data is going, what it's going to be used for, and who might get access to it? Hugely important questions. And again, there will be a lot of discussion about the integrity of your data. How robust is it? How safe is it? A very, very important topic. And in this era of thinking about future food safety risks — as a professor of food safety, this is a lot of what I do. We investigate big issues in food safety, but we also try to think about what is coming down the line because of changes. And just to think of some of those changes that impact on food safety: I always say the number one change is our climate. That is already having a huge impact on food safety, and it will have many more impacts going forward. Do we know now what they are? Well, the answer is no. Through predictive analytics, we will get a much better idea of those changes. But what also changes is people's perception of what a food safety risk is. Often that is brought about by changes in regulations and legislation. Food safety legislation changes virtually every day of the week in some part of the world, and generally those changes are about lowering levels or bringing new hazards into risk management plans. The last area we need to think about in terms of future food safety risks comes from something really positive: the desire of so many people to develop a truly sustainable food system. Very positive.
But what sits behind the drive for sustainability are the unexpected consequences, and as researchers and scientists we are trying to think about what some of those unexpected consequences might be. In our drive for circularity, there will be new risks for sure, and some existing risks will increase in severity as well. We have to try to get ahead of those — to understand, for all of these different scenarios that I've talked about, what might be happening in the future. And it will not be with 100 percent certainty; we will talk about the probability of events happening. But I think once we get that understanding, we can really start to change, build, and modify food safety management plans. So these are the topics that we're going to talk about today. I'm incredibly interested in the subject, and we have two, what I would call, subject matter experts who are going to impart a lot of information to us. The first is Manos Karvounis, who is the head of research and innovation at Agroknow; Agroknow is a Greek AI company. The second presentation, which is fantastic, is by Yvonne Pfeiffer, who is global data services lead at SGS DigiComply. Both of them are going to give us presentations that hopefully will demystify, as I said, some of the areas around predictive analytics. Your involvement will come through as well: please think about any questions that you might have and just put them into the Q&A box. I will moderate those and ask as many of the questions as possible to both Manos and Yvonne. And who knows, I might even try to answer one of them myself — but I'll leave the tough questions to Manos and Yvonne, that's for sure. But that's enough from me. Hopefully that has set the scene and introduced what we're going to do for the next hour or so. I really hope that you enjoy it and find it useful and helpful.
So what I'll do now is hand over control to Manos. Manos, you're going to take us through, for the next 20 minutes or so, your knowledge and experience around predictive analytics. Thank you very much, Chris. Thank you for setting the stage. You touched on so many fundamental issues around food safety, each of which could be a separate webinar or indeed a project in itself. In any case, hi everyone. I am Manos. As Chris said, I'm leading our innovation team at Agroknow. And today I hope I can help us all become that bit more confident that predictive analytics and food safety can work well together and be applicable in real-world situations with real business value. What we'll be talking about today is in part our internal work at Agroknow on the FOODAKAI product, but we're also coordinating an important European Horizon project called EFRA, where we work together with SGS DigiComply and many other excellent partners across Europe to advance the field of food risk prediction. So without further ado, let me get us started with the most provocative question I could think of: can we predict food safety incidents before they happen? No need to vote, but do take a moment to think this through. What is your opinion? Is this something feasible? Can we predict food safety incidents before they happen? Hold on to this answer for the rest of the webinar, and we will revisit it. Personally, I have a background in computer science, and today I will be supporting a rather bold statement: there is nothing left to achieve in computer science to be able to predict a vast array of phenomena. But it is true, at the same time, that in many areas and for many problems, machine learning and AI in general have not yet produced significant results in the real world. So if the fundamental tools are there, what are we missing? And why would some of us answer yes to the previous question and some answer no?
So my statement here would be that we are missing, in most cases, two things. The first is a good working understanding of what is feasible through AI, and the other is how we achieve good cross-domain collaboration. But really, as we'll see by the end of this presentation, hopefully there is no magic involved, and I will do my best to explain this today. Okay, so let's start with the first of these two things: understanding what is feasible. To understand this, let's start with two concrete cases where we can try to apply AI to get predictions for the food chain. In the first case, let's say I want to understand how Salmonella will grow in my Petri dish. This is an environment I can control very precisely, accounting for most influential factors such as temperature, humidity, growth medium, etc. I can very meticulously test different scenarios and log the resulting growth. And with all this diverse data, there is a multitude of machine learning models I can use to create a rather accurate model indeed — similar successful attempts can easily be found in the literature. But let's take a moment and think about what I have managed to create. I have created a model that answers the following question: under a given set of conditions, how will bacteria evolve in my Petri dish? Okay, let's keep this in mind and increase our scope a bit. Let's pose a new question we hope AI can help us answer: will I have a Salmonella incident in my global supply chain? Instinctively, we can understand that the second problem is vastly harder, even if fundamentally we're asking about the same pathogen, Salmonella, and its growth and behavior. The difference, of course, is that the factors we need to take into consideration are much more numerous, and the conditions cannot be precisely controlled to establish clear causal relations.
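As a side note for the technically curious: the kind of controlled-growth setting Manos describes is often approximated with simple growth curves. A minimal sketch in Python — all parameters here are purely illustrative, and real predictive-microbiology models in the literature are considerably more sophisticated:

```python
import math

def logistic_growth(t, n0, rate, capacity):
    """Classic logistic curve: cell count at time t (hours), starting
    from n0 cells, with a growth rate set by the conditions and a
    carrying capacity set by the medium (illustrative values only)."""
    return capacity / (1 + ((capacity - n0) / n0) * math.exp(-rate * t))

# Hypothetical Petri dish run: 100 cells, conditions held constant.
counts = [logistic_growth(t, n0=100, rate=0.6, capacity=1e6) for t in range(24)]
print(round(counts[0]), round(counts[-1]))  # growth flattens near capacity
```

In the lab, the appeal is exactly what Manos notes: the few parameters that matter can be measured and controlled, so a fitted curve like this can be genuinely accurate.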
So in my experience, when someone answers yes to the initial question of whether we can predict food safety incidents, I suppose they're thinking closer to the Petri dish example, and when answering no, closer to the global supply chain example. But today I would like us to stop thinking in either of these two corners. One example is much more complex and impactful: it is much closer to the real-world challenge we want an answer for. The other has less impact, since it is a laboratory setting, but it's also less complex. Actually, there seems to be a trade-off between complexity and impact, meaning that we cannot freely move anywhere in this space; we are more or less confined inside this band. Let's have a look at the visualization, because it can give us an interesting idea: is there potentially a sweet spot? That is, specific, important questions that we can actually answer efficiently through AI — questions that have serious real-world impact and for which we can account for enough factors to create a useful predictive AI model. Next, we will look at concrete examples moving around in this sweet spot. Okay. Now let's turn to the second part: what does cross-domain collaboration look like when it is structured well? Let's look at this through various working examples — questions we can answer efficiently right now, today. To get us started, let's say I am a food company and I have multiple facilities. I have a particular audit schedule, and I want to improve on it by incorporating a data-driven, predictive risk assessment approach — that is, to intervene when and where needed, before it's too late. Developing a relevant AI model is an active area of research for us at Agroknow, and it is aligned with the overall vision of our EFRA project.
So my goal through this working example is to showcase, hopefully in an easy-to-understand way, how collaboration in these cross-domain teams is structured, but also to show that using AI to predict food safety risks involves no magic at all. It's a very structured process that anyone can get a feeling for, and where food experts and computer scientists can work together to find really good solutions to problems with real business value. Okay, so let's say we put together our cross-domain team. The first thing we do when working on these cross-domain challenges is to understand why we're creating this AI model in the first place, which is not a given for many of the participants — and that's understandable. So what are the things that make us worried, that we hope we can avoid or do better? This serves as a focus for everyone in the team, and it motivates people. In this case, the food safety experts would share with us things like: we're constantly worried that we might miss a critical hazard that then sneaks up on us, leading to recalls. Furthermore, with our audit resources being so limited, it's really crucial for us to spread them out for the best outcome, and it feels like constantly juggling. Okay, next it's time for the AI experts. We look at the family of AI models that may be best fitted to the challenge, given what we're aiming for. Let's say that after enough cross-domain discussion, we end up designing a data-driven approach that prioritizes audits based on a predicted time to incident. That is: which of my facilities is predicted to have an increased chance of a food safety incident relative to the others, and what is the expected timeframe for this? The food company can then audit proactively and remedy the situation.
Okay, so the computer scientist will say, or think: this seems like asking how long a given facility will survive without an incident, and the time dimension seems to be important. So one thing we can try is called survival analysis. This is a particular family of AI models that use multiple factors to predict a relative increase in the probability of an incident over any desired timeframe. Before we look into what survival analysis is — and actually use it as an example of the deeper principles of what AI models do — let's go back to our cross-domain team and look at the most crucial part of the whole process, which can make or break the effort. The next thing we do is where cross-domain collaboration is most crucial: computer scientists and food safety experts have to sit down together to understand the factors that influence the probability of an incident occurring. Here, most likely, we also need to narrow down our scope to a particular type of incident, so the factors can be tractable. So just remember this idea of the sweet spot. Now, when the food safety experts say factors, the computer scientists usually hear columns. This seems weird, but remember it along with the sweet spot idea — this, I hope, will be another takeaway message. I will turn to an example now so we can understand what this means. And I know it is an obvious simplification for some of the computer scientists out there, but I have found it enhances cross-domain collaboration a lot. So when you say factors, I say columns. Okay, let's understand this with an example — a rather simple one. Let's say we're working with a poultry company that owns a series of facilities in a rather complex supply chain. And let's say we're concerned about Salmonella cross-contamination incidents in the facilities, and we would like to intervene proactively based on the perceived risk. First, we have to make our goal clear at this stage.
It is to collect data for the factors that influence the time to a Salmonella incident in poultry facilities. We would start by monitoring some of the facilities — a representative sample — for, let's say, one year, 365 days. Some facilities will experience an incident and others will not. We also note various factors that we believe might be related to the arising of an incident. To simplify, let's say these factors are things like: a risk score coming from sample analysis, a score for the hygiene measures used, a score for the feed quality used if we're talking about farms, and a score for equipment maintenance. For a particular facility — let's say the first row in this table — the facility remained Salmonella-free for 112 days. It has a risk score of 10, a hygiene measure score of 7, a feed quality score of 3, an equipment maintenance score of 6, and it had an incident at 112 days after the start of the study. Okay, the factors are oversimplified, but that is all, really. Food safety experts will say factors; computer scientists will see columns — columns in exactly this type of table. If we can put enough columns in this table to make it wide enough, and also put enough records in it to make it long enough, then we can trust that computer scientists can find an AI model that will produce great predictions. And another message: even if we do not put enough columns in this table, even if we cannot account for all the contributing factors, the AI model will still be useful. I will return to that in a moment, but let me continue a bit with this example. The next step involves only the computer scientists. What we do then is construct the AI model and do what we call training — that is, present the model with the previous table. It is not as easy as it sounds.
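The "factors become columns" idea can be made concrete in a few lines of code. A minimal Python sketch — the first row is the facility from Manos's example, while the other rows and all score names are hypothetical:

```python
# Each row is one monitored facility over the 365-day study window.
# Factors named by the food safety experts become columns; the last
# two columns record whether an incident occurred and after how many
# days (incident-free facilities are "censored" at day 365).
facilities = [
    {"risk": 10, "hygiene": 7, "feed": 3, "maintenance": 6,
     "incident": True, "days": 112},   # the facility from the example
    {"risk": 4, "hygiene": 9, "feed": 8, "maintenance": 9,
     "incident": False, "days": 365},  # hypothetical: stayed clean
    {"risk": 7, "hygiene": 5, "feed": 6, "maintenance": 4,
     "incident": True, "days": 201},   # hypothetical
]

columns = list(facilities[0])
print(columns)          # a wider table  -> more factors accounted for
print(len(facilities))  # a longer table -> more examples to learn from
```

This is exactly the shape of table that survival-analysis tooling expects: factor columns, an event flag, and a duration.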
Since it involves trying a lot of different arrangements and techniques, curating the table, and parametrizing the model — but it is a set process that computer scientists know how to do very well. So let's not delve too much into this; for example, in this particular case, we have found that the Cox proportional hazards model works rather well. Okay. So what is the end result after all this, after we have found a good AI model and trained it with the table? What do we get? What we get at the end is an audit prioritization AI model: a trained model that, using real-time records, will predict the time to incident per facility. So, for example, the model may say that facility A can expect an incident in the next 50 days with a probability of 75%. The company can then do an audit and take preventive measures before the incident happens. Let me very quickly also say a couple of things about how we can incorporate time-evolving factors. In this example, we can see that scores can change over time — for example, due to the introduction of new hygiene measures. There are AI model formulations that can handle this. And an advanced formulation of the approach would be, during the training of the model, to connect it directly with the digital food safety system of the company, taking in the raw records as they are logged — records related to things such as hygiene, feed, and equipment. Such records are in any case mandatory under HACCP and other food safety management systems, and a digital logging system can make them visible. After training, the model will be ready to provide ongoing recommendations for facility audits. Okay, so let's wonder a bit: is this still in the sweet spot, or is it a bit too ambitious? Let me say something again that I already alluded to. Even if we do not account for all factors — that is, we don't have all the necessary columns — can we create a qualitatively better, data-informed approach?
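To give a feel for what a trained Cox proportional hazards model hands back, here is a toy sketch. The coefficients are invented for illustration; a real model would learn them from the table:

```python
import math

# Hypothetical coefficients a fitted Cox model might return: positive
# values raise the hazard (a higher risk score), negative values lower
# it (better hygiene, feed quality, equipment maintenance).
beta = {"risk": 0.30, "hygiene": -0.20, "feed": -0.10, "maintenance": -0.15}

def relative_hazard(facility_a, facility_b):
    """exp(beta . (a - b)): how much more likely facility A is than
    facility B to experience an incident at any given moment."""
    dot = sum(beta[k] * (facility_a[k] - facility_b[k]) for k in beta)
    return math.exp(dot)

a = {"risk": 10, "hygiene": 7, "feed": 3, "maintenance": 6}
b = {"risk": 4, "hygiene": 9, "feed": 8, "maintenance": 9}
ratio = relative_hazard(a, b)
print(ratio > 1)  # A is at higher relative risk, so audit A first
```

A full Cox model also estimates a baseline hazard over time, which is what turns ratios like this into statements such as "an incident within the next 50 days with probability 75%".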
One that is better than the traditional approach? In many cases, yes. For example, we can get a better data-driven approach than HACCP-based standards for auditing. So let's keep this in mind and explore the idea a bit further: how far can we go by not accounting for all the potential factors, but still getting back useful results? Okay, this is going to be a rather strange slide, but bear with me — I will try to explain as much as I can. Let's start with a thought experiment. Let's say that in a perfect world, Listeria incidents in the global supply chain are represented by this function. This is an ideal one — ideal at least for an AI expert. And then I ask: how many incidents am I expecting next, even without taking any factors into consideration? If the time series behaves so periodically, I could predict the future behavior, right? We all could, and no magic is involved. Okay, let's keep this in mind. One can actually construct arbitrarily complex functions by adding up periodic ones. In this example, the sum of the first three functions generates the last one. Something that computers can do astoundingly well is find such patterns. Here we're looking at a couple of examples that are straightforward for a human, but an AI model can analyze many interwoven levels of such patterns extremely quickly, and merely by its ability to do things extremely fast, it can find patterns we cannot — it finds them as easily as we can extend the line in the first example. But is this magic? No, it's just very fast computation. It has its limitations, of course, but it also has incredible capabilities that are applicable in real-world situations. So here is an example of an AI model finding and using patterns over the real timeline of public Listeria incidents. Let's take a closer look at this one. The actual time series of Listeria incidents that have gone public around the world is the black line.
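The "sum of periodic components" idea on the slide can be sketched numerically. A small illustration, assuming NumPy is available and with invented cycle lengths, where a frequency analysis recovers the strongest hidden cycle from the combined signal:

```python
import numpy as np

# Build a "complex" monthly incident series as the sum of three simple
# periodic components (annual, half-year, and quarterly cycles) -- the
# same idea as summing the three curves on the slide.
months = np.arange(120)
signal = (np.sin(2 * np.pi * months / 12)
          + 0.5 * np.sin(2 * np.pi * months / 6)
          + 0.25 * np.sin(2 * np.pi * months / 3))

# A computer "finds the pattern" by inspecting the frequency spectrum:
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(months), d=1.0)  # cycles per month
dominant_period = 1 / freqs[np.argmax(spectrum[1:]) + 1]
print(dominant_period)  # the strongest hidden cycle, in months
```

Here the dominant period comes out as the 12-month component, because it has the largest amplitude; real AI forecasters pick up far messier, non-sinusoidal patterns, but the principle of decomposing a complex series into simpler recurring parts is the same.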
Now, using what we call deep learning forecasting models — whatever that is — we can predict how the time series will evolve in the next months, and this is the red line. To build confidence in the predictions, we also present the orange line, which is a prediction that uses the earlier part of the black line to predict something that has already happened. Having the orange and black lines close together intuitively means that we have good results, and it increases our confidence in the red line as well. So we can then ask: why does this work so well? Because the computer has done pattern matching at an incredible scale and found patterns that exist but that we as humans do not have enough time to find — patterns much more interesting and deeper than the examples on the left, yet found, in a sense, using the same fundamental approach as when we spotted patterns by looking at the time series on the left. Did we employ any magic to do it? No, it was rather straightforward. When you do billions of pattern matches per second, many more patterns appear, straightforwardly, and no magic at all is involved. So currently, in FOODAKAI, one can find interactive dashboards that present the predictions of the trained AI models for many different food ingredients and food hazards, tailored to a user's particular supply chain, however complex. I will very quickly also say a couple of things about extreme event prediction, because I think I can almost hear what some of you are thinking: can such an approach predict unexpected extreme events? For example, here is a famous case that you might remember from a few years back. Well, remember what we said a couple of slides before: more factors means more accuracy. To account for extreme events, you need to put more factors — more columns — into your data, and more records. The better your table, the more accurately you can predict.
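The orange-line check Manos describes is commonly called a backtest: hide the most recent part of the series, forecast it, and compare against what actually happened. A toy sketch with invented monthly counts, using a seasonal-naive forecast (repeat the same month from the previous year) as a stand-in for the deep learning model:

```python
# Two years of hypothetical monthly incident counts.
history = [30, 28, 35, 40, 52, 60, 58, 45, 38, 33, 31, 29,   # year 1
           32, 27, 36, 42, 55, 63, 60, 47, 40, 35, 30, 28]   # year 2

holdout = 6                        # pretend the last 6 months are unknown
train, actual = history[:-holdout], history[-holdout:]
forecast = train[-12:][:holdout]   # the same months, one year earlier

# Mean absolute error between forecast (orange) and actual (black):
mae = sum(abs(f - a) for f, a in zip(forecast, actual)) / holdout
print(round(mae, 2))  # small error -> more trust in the true forecast
```

The same scoring step works unchanged whatever model produces `forecast`, which is why backtesting is the standard way to decide whether a forecaster has earned trust.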
But even with a small table with a few columns, the AI model will remain useful, and the predicted lines will get qualitatively better the more data you have. In essence, this means that adding more factors — more columns — will lead to predicting, more and more accurately, the extreme events that we previously could not account for with the smaller table. This is what we're currently doing in the innovation labs at Agroknow, using more advanced AI models that can find such deeper patterns. And apart from that, I personally would be very happy to sit down with any of you — data scientists and food safety experts together — to create a tailored AI model for your needs. Okay, very quickly, let me talk a bit about the larger EFRA project that we're involved in and coordinating. Along with our partners, we're doing something very important, I think. Let me very quickly introduce the partners: Stockholm University from Sweden, a research institute from the Netherlands, and partners from the UK, Italy, and Croatia, plus, as with you today, Agroknow from Greece and SGS DigiComply. With such an excellent group of organizations, we are also looking into deeper challenges. For the remaining time in my presentation, I would like very briefly to mention one of them. All the AI prediction models that we saw can be improved the more, and the more diverse, examples they encounter. The wider and the longer the table, the better the predictions — and this has a very deep implication. Let's say we have our audit prioritization model set up, and we have a company that wants to train it and use it. We do so, and they do get back a useful model, but it is a model informed only by their own data. So in a sense, they only get deeper insight into what they already know. But in the industry, there are also other companies with their own internal facilities.
If the model could be trained over their examples as well, the final model would be much more powerful — in a sense covering the entire industry rather than any particular company. Of course, any individual company would be hesitant to expose sensitive data. They might say: I want the model, but what if my data are exposed, what if the data is misused and my reputation is harmed? What we're looking to do is prove the concept that you can get the model without exposing the data. The idea is actually pretty straightforward: the model moves around the companies, getting trained with the local data. No data is moved around — only the model — making it stronger and informed by all participating companies, and the final model is given back to all participants. The devil is in the details, of course, and there are many of them, but the vision is clear: to create sector-specific intelligence networks built around privacy-preserving AI. And alluding to that, let me say: if you would like to learn more, together with Chris we are co-organizing a summit on exactly this topic of intelligence sharing. You will hear about it from all viewpoints — business experts, regulatory experts, technical experts — and we would be very happy to have you join us. This is the registration link, and there is a QR code you can scan; we will also send the link in the chat. Keep in mind that we will also have interactive brainstorming sessions in small groups with any interested participants, so book your place in one of these meetings and have a say in how this important field will evolve in the future. That's all from me. Many thanks for that, Manos. I personally really enjoyed the presentation. It answered some questions that I had, but it also raised some new questions, which I may pose to you later on. So please, for all of our attendees at the webinar today, there's a Q&A box — just type in your questions for Manos, and hopefully we'll have time to answer them later on.
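For readers who want to see the shape of the privacy-preserving idea in code — the model travels, the data stays put — here is a toy federated-averaging loop, with made-up company data and a deliberately tiny linear model. Real setups (secure aggregation, differential privacy) are far more involved:

```python
# Each "company" holds private rows of (hygiene score, maintenance
# score, incident risk). Only model weights ever leave a site.
companies = {
    "A": [(7, 6, 0.8), (9, 9, 0.2), (5, 4, 0.9)],
    "B": [(8, 7, 0.4), (6, 5, 0.7)],
}

def local_update(weights, rows, lr=0.01):
    """One pass of gradient descent on local data; returns new
    weights only -- the rows themselves are never shared."""
    w = list(weights)
    for h, m, y in rows:
        err = (w[0] * h + w[1] * m) - y
        w[0] -= lr * err * h
        w[1] -= lr * err * m
    return w

weights = [0.0, 0.0]
for _ in range(50):  # each round, the model "visits" every company
    updates = [local_update(weights, rows) for rows in companies.values()]
    weights = [sum(u[i] for u in updates) / len(updates) for i in range(2)]

print(weights)  # a shared model, trained without pooling raw data
```

The server only ever averages parameter vectors; swapping in a neural network changes the model, not the protocol.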
I think that maybe the example that you gave around Salmonella was really, really good, and I could see so many different applications for it. So thanks. What we will do now is move on to the second presentation, if that's okay. Yvonne, we'll ask you to give your presentation around informing regulatory decisions with food risk intelligence. Thank you. Thank you, Chris, and also a warm welcome from my side. My name is Yvonne, from SGS DigiComply. And after we have heard a lot of interesting things about food safety risk prediction, I will talk about how we can also do prediction for regulatory changes, and why this is so important, especially for the food safety sector. Next slide, please. So, if we have a look at food safety regulations, there are different challenges that we are dealing with. On the one hand, we have the regulators and the public authorities. For them, it's really hard to implement new food safety regulations or update existing ones, because it's a lengthy process that requires complex decisions involving multiple stakeholders. And when they are working on a new regulation, they are not able to use one tool that combines regulatory data with food safety data so that they have everything in place; they have to use several sources to get all the relevant data. And even when they have the data that they need, we are also dealing with the fragmentation of the data ecosystem, so it's really hard to bring all the relevant information together. Another problem they are dealing with, in some cases — think about the risk assessments that are performed by EFSA — is that not all data are available, and some must be generated within the study. On the other hand, we have the regulatory affairs specialists in the food industry. For them, it is not easy to find all the relevant information they need to be sure that they have everything in place.
And then also, once they have found it, they must bring all the relevant information together. Think about the food safety manager in a large company who is not only responsible for Europe but also for China and the USA: they have to deal with all the relevant kinds of regulations, and it's not easy to find everything they need. And of course, they also must stay up to date, follow changes in regulation, and they are responsible for detecting early triggers for a new regulation. Think about, for example, mineral oil or PFAS — these are not new topics; we have been discussing them for several years. But to detect such a trigger, and to get an understanding of what it means for my company and which processes we must have in place if the regulation comes — that's not so easy. And that's the reason why — next slide, please — one part of the EFRA project is the prediction of regulatory changes. The idea behind this is to detect these early triggers to help, on the one hand, the authorities and the public sector, but also the private sector, to be better prepared when we have new triggers in food safety. And next, yeah. And how do we want to do that? For this part of the project, we have the EFRA data and analytics marketplace — let's say this is the hub of the project for this part. We want to combine regulatory data with food safety risk data. And by combining such data and software sources, we also address sustainability aspects: think about it — if we are all using the same database, we also lower computational energy waste, in real time. By using the whole EFRA network, we will have multiple participants who will extract and contribute value with data-mining software and give the data to the marketplace. On the other hand, we have the data consumers, and here we also want to involve authorities as potential consumers; they can use our database services.
And this is also where financial income can come in. We have already started working on the first use case, and this use case — next slide, please — is that we want to have a look at pesticides in potatoes. If you think about pesticides and potatoes, even if you only think about Europe, it's a really complex topic, because you have to monitor several pesticides which are regulated in the EU database. If you are, for example, using our SGS DigiComply platform — this is our platform for food safety and regulatory compliance — we combine not only the regulatory part; we also have incidents and recalls in it, and we use the laboratory data of our laboratory network. This could also help in the future to get a better holistic overview, and it's helpful for following policies and laws. So if you get an understanding of, okay, what are they talking about at EFSA, which risk assessments are they performing, then it's also easier to understand which regulations may come in the next few months. Combining these several data sources will give you a good overview of the actual situation. We also started a project at DigiComply called Smart Test Protocol. Here the idea is also, by combining regulatory data with food safety data, to give you an easier way to update your test plan. Think about the potatoes: you will have several pesticides that you have to monitor, but maybe you will also have other contaminants, like heavy metals, or microbiology parameters that you want to follow. Each of these parameters has a total risk based on its severity and likelihood, and due to the linkage to regulatory data, food safety data, and scientific data, you will get a really good overview and understanding of which changes will come and which will affect your test plan or your product quality. But here we have to focus on the regulatory data and food safety data.
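The severity-times-likelihood prioritization behind such a smart test plan can be sketched very simply. The parameters and scores below are hypothetical examples, not DigiComply data:

```python
# Hypothetical test-plan prioritization for potatoes: each monitored
# parameter gets a total risk = severity x likelihood (both on 1-5),
# and the plan is re-ranked whenever new regulatory or incident data
# changes a likelihood score.
parameters = [
    {"name": "chlorpropham (pesticide)", "severity": 4, "likelihood": 3},
    {"name": "cadmium (heavy metal)",    "severity": 5, "likelihood": 2},
    {"name": "Salmonella",               "severity": 5, "likelihood": 1},
]

for p in parameters:
    p["total_risk"] = p["severity"] * p["likelihood"]

test_plan = sorted(parameters, key=lambda p: p["total_risk"], reverse=True)
print([p["name"] for p in test_plan])  # highest total risk tested first
```

The value of linking regulatory, incident, and laboratory data is precisely that the `likelihood` inputs stop being static guesses and start reflecting the current situation.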
But coming back to our example for this AFRA project, pesticides and potatoes, we should also keep a holistic overview and think about other topics, like the European Green Deal with the Farm to Fork strategy. Here we have targets for 2030: we have to reduce our chemical pesticide usage by 50%, and of course this will also affect food safety topics. And then we have climate change, so we are dealing with drought, heat and waterlogging in the field. And if you think about potatoes, potato plants are not stress tolerant, actually. There are some research projects to make them more stress tolerant, but talking about today, they are not stress tolerant. So here we also have the consequences of poor harvests, poor quality, and maybe also more pathogens. So what I will say is, as Chris also mentioned in his introduction, climate change will have a large impact on food safety. And if we want to do a prediction for regulatory changes as well, we should follow not only the food safety relevant regulation but also the sustainability side. And we should also link to weather data, for example, or to other crises that may occur. Think about the Russia-Ukraine war and the supply chain interruptions; this is also a topic that may affect food safety. And yeah, this is what I wanted to present today: if we really want to do prediction for regulatory changes, it's important not to think only about food safety data. We are talking about global food risk data. And the goal for this part of the AFRA project is to have real-time informing of regulatory decisions based on global food risk data. We want to reduce computational energy and resources, create an open food intelligence network, and bring together all the relevant stakeholders from the public and also from the private sector. And now I will hand over to Chris again, and thank you for your attention. Thank you very much for that.
Again, a really good presentation, and I think very different actually from Manos's, because you talked about very specific issues that are facing lots of businesses in terms of changes that are happening in the regulatory environment and sustainability, and how businesses can really start to think about the future risks associated with those. So it was a really good presentation. Now, there are a number of questions I'm going to pose to both of you, but actually I'm going to start with some really good questions that have come into the Q&A box, and again I encourage people: please ask as many questions as possible. So, there's a question from Felix, and I have to tell you, Felix, you beat me to it, because I was thinking about asking a very similar question. This one is for Manos. Manos, you talked about the columns, and to me columns are different sets of data. In the example that you showed around Salmonella, I would call those direct measurements: what's happening on the farm, what's happening with the feed. What about indirect measurements that may well be capable of building the robustness of the models? Felix comes up with a couple of examples, say human behavior (everybody says food safety is a culture, after all), so what about human behavior aspects, and what about inputting weather data as well? What are the chances of starting to put some of those indirect columns into the model building to increase the robustness? Great. Thank you, Chris. Thank you, Felix, for this question. You're of course right: the more factors we can account for, direct or indirect, the better the model will be in the end. And even if in a case we have too many factors, some of them only indirectly related to what we're trying to do, AI practitioners have ways of measuring which of these factors are the most relevant for the question.
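One common way of measuring which factors matter, as Manos describes, is permutation importance: shuffle one column and see how much the model's accuracy drops. Below is a minimal, self-contained sketch; the "model" is a fixed rule standing in for a trained classifier, and the feature names (farm hygiene, feed quality, noise) are hypothetical.

```python
import random

# Toy dataset: each row is (farm_hygiene, feed_quality, noise) -> contamination risk
random.seed(0)
rows = [(random.random(), random.random(), random.random()) for _ in range(500)]
labels = [1 if (0.7 * h + 0.3 * f) > 0.5 else 0 for h, f, _ in rows]

def model(h, f, n):
    # Stand-in for a trained classifier: uses hygiene and feed, ignores noise
    return 1 if (0.7 * h + 0.3 * f) > 0.5 else 0

def accuracy(data):
    return sum(model(*x) == y for x, y in zip(data, labels)) / len(labels)

def permutation_importance(col):
    # Shuffle one column and measure how much accuracy drops
    shuffled = [row[col] for row in rows]
    random.shuffle(shuffled)
    data = [tuple(s if i == col else row[i] for i in range(3))
            for row, s in zip(rows, shuffled)]
    return accuracy(rows) - accuracy(data)

for name, col in [("farm_hygiene", 0), ("feed_quality", 1), ("noise", 2)]:
    print(f"{name}: importance {permutation_importance(col):.3f}")
```

Shuffling the noise column leaves accuracy unchanged (importance 0), while shuffling a column the model actually uses makes accuracy drop, which is exactly the ranking signal practitioners use to decide which direct or indirect columns earn their place.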
So there is no problem in this direction, but let me rephrase the question a bit and say this: even if we do not account for all the important factors, we will get a useful AI model. It might be an AI model that overlooks something, that's true, but it will still be useful, because it will have the ability to notice things that a human approaching this problem the traditional way might miss. So let's not aim at building the most powerful AI model that could possibly be built by writing down every conceivable factor. We can make great strides even with the first factors we can think of; noting down the most important factors will already have a big impact. I'd like to pose another question from our audience, to Yvonne this time, and I think it's very much involved with regulations. Brendan asks about the sugar beet industry: currently in Europe and the UK the harvest has been severely affected by blight, reducing yields. Part of that has been driven by regulations about the agrochemicals that can be used on that particular crop. And now we're finding sugar being imported from different countries around the world, which is adding to the environmental footprint but also comes from very different regulatory frameworks. So, in terms of changing supply chains and the risks associated with those changes, is this something you're already thinking about, even in terms of this area of the regulatory environment? This too is complicated. Whether it's due to a regulation that you have to change your supply chain, or because we have an interruption of the supply chain, different scenarios are possible.
Of course, it's not easy to keep track of all the restricted substances which are regulated in the different countries; in some countries, more or different active substances for pesticides are allowed than in Europe. It's really not easy to follow it all or to stay up to date, and I think managing this manually would not work. And I don't know if you also think about the climate topics; these have to go hand in hand. On the one hand, we have more regulations or restrictions on using pesticides; on the other hand, we have climate change, and we have to deal with all of these topics at once. So yeah, it's not easy to answer. Thanks very much for that. This next one is a question for both of you, and maybe I'll come back to you first again, Yvonne. There's a question about the development of AI models in terms of fraud. So, will AI be a tool that actually helps detect fraud in supply chains, or will AI be a tool that helps fraudsters deceive supply chains? Two very different approaches there. If we are talking about fraud, there are several aspects that we have to follow, starting with the type of fraud. If you collect the right data, you can think about, okay, how much honey, for example, could actually be produced. And if you follow this number, you are able to make a prediction and can say, okay, you cannot sell this much honey, because this much could not have been produced. So I think for these topics it's easier. But if you think about organic products, for example, it's not so easy. I think it really depends on the type of food we are discussing. But what do you think about it, Manos? What's your view? Thanks, Chris. This is a very interesting question, because it alludes to an interesting counterbalance between having tools for a good purpose and the same tools being used for a bad purpose.
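The honey example Yvonne gives is essentially a mass-balance check: flag a product when the volume reaching the market exceeds what could plausibly have been produced. A minimal sketch, with hypothetical tonnage figures and a hypothetical tolerance parameter:

```python
def mass_balance_flag(reported_sales_t: float, est_production_t: float,
                      tolerance: float = 1.10) -> bool:
    """True if reported sales exceed estimated production capacity
    by more than the tolerance (here 10%, to absorb stock carry-over)."""
    return reported_sales_t > est_production_t * tolerance

# Illustrative honey figures (hypothetical, in tonnes):
# more sold than the region could produce -> flagged for investigation
print(mass_balance_flag(reported_sales_t=12_000, est_production_t=9_000))  # True
print(mass_balance_flag(reported_sales_t=9_200, est_production_t=9_000))   # False
```

Real systems would estimate production capacity from data such as hive counts, yields and weather rather than take it as a given, which is where the predictive models come back in.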
So, one could frame the question: okay, who has the better tools, the good guys or the bad guys? The truth is, everybody has the same tools. Computer science offers these models to everyone, and more or less all actors have access to the same power of tools. The real question here is who has access to the intelligence, and who has access to the data that can train these models. And that is why it is extremely important for the good guys, for the food industry, to start sharing this intelligence in a more confident way, and to make sure that they train their models with all the data they have as an industry, sector-wide. That way they create the most powerful models, and nobody else has access to this data to do any harm. And on this particular topic we will have a very nice discussion at the summit as well, Chris, where you will show two very interesting initiatives and how they are helping in exactly that, especially in authenticity. Right. Thank you both. Another question has been asked, and believe it or not, it's virtually identical to the question that I wrote down in case there weren't enough questions coming in, though I can tell you there are a lot of them. Sorry, Yvonne, it's back to you again. I think it fits in with the Smart Test Protocol that you talked about, and certainly with changes in legislation around pesticides, fungicides, herbicides and so forth. There are likely to be, I think, undesired consequences of that: we may get increased problems with mycotoxins, and we may get increased problems with microbial contaminants. So is this part of what you're trying to develop in terms of this Smart Test Protocol, or again is that something further down the line for the future?
So this is definitely a topic for the future. We have started working on it, firstly to connect the regulatory data with the safety data, but it's a totally interesting and important part. If you think about, okay, how can we detect new triggers, you need the holistic overview to see, okay, when we are reducing substance XYZ, what impact will that have. Think about ethylene oxide, for example: it was used to avoid contamination with Salmonella, for example. And if you look at the recalls in the period when we had the ethylene oxide problem, you can also see, okay, we detect more Salmonella in such samples. So I think it's really interesting to see it not per hazard; you have to see it per commodity or product and ask, okay, which possible hazards can I have in this product and how will they affect each other. Thanks. Manos, back to you. This is a topic we have talked about quite a lot, and we're going to be talking about it quite a lot again in just a couple of weeks' time. But it's about how regulators and the private sector can work better together: what are the hurdles, what are the barriers at the moment stopping that happening in the food safety domain? So, Chris, if I understand correctly, this alludes to the problem of: okay, if I share my data, am I not exposing myself very deeply to legal liability concerning this data? From what we have seen in the field, from successful initiatives such as FIIN, the regulators understand the very important role that private intelligence-sharing networks have to play; in a sense, the food industry tries to regulate itself. So we have seen, at least in the UK through the example of FIIN, that regulators understand that, and they're actually cooperating well within the boundaries of such a private intelligence network, and they're actually trying to be helpful. Chris, actually, I think this might be an interesting question for you to chip in on as well.
What do you think? How have you seen this interplay between regulatory bodies and FIIN in the UK? So I think you're absolutely right. There are massive tensions in terms of sharing sensitive data, absolutely right, and it's not difficult to understand why those problems exist: it's about protecting your business, protecting reputation, all sorts of different factors. And regulators love to collect data, absolutely love it, but they really don't like to share data themselves, actually, and that's from experience. But in FIIN, which for those of you who don't know is the Food Industry Intelligence Network, which operates in the UK but includes many national and international companies, we have found a wonderful mechanism for sharing data safely between companies, but also between companies and regulators. And what I will do now, as this is a wonderful opportunity, is advertise our first international summit on privacy and food risk intelligence; I would really recommend that you sign up for that. There are some phenomenal presentations on this particular topic. So what I'm going to have to do now is, first of all, thank all the people for asking questions; there are more and more questions coming into the chat box, but we have rather run out of time to deal with them. It's really been incredibly good, because they have pinpointed some key areas of both the presentations of Manos and Yvonne. I want to thank you both for your presentations; they were really, really good. Personally, I've learned a lot from both of them, and it's got me thinking about various things. At the outset I talked about demystifying the whole area of artificial intelligence, and really, to me, it is good mathematics: good mathematical modeling, where the more data, the more inputs you have into the model, the more robust it gets.
I think Manos would agree with me that how far ahead the predictions can look depends on the quality of the data, which again is very important, because we didn't get a chance to talk about how far ahead you can foresee a food safety risk: is it one month, three months, six months, one year? Really, a lot of that comes down to the quality of the data that goes into building the model. I think you've covered really nicely the area of regulations and how regulations are changing. I think probably one of the biggest driving factors of changing food safety regulations going forward will be food security, because many parts of the world now understand that the world's food system is not as robust as we once thought it to be; the global pandemic and the war in Ukraine have really highlighted that. And there's going to have to be, I think, much more risk assessment and much more risk management around food safety, in terms of really thinking where the priorities lie for auditing and for testing, and I think predictive analytics is going to play a huge role in that. What I'm going to do now is say thank you to our speakers, Manos and Yvonne. I want to thank all of you for joining our webinar today, and all of those who asked questions, and I hope you all found it as helpful and useful as I did. Thank you all very much. Thank you. Have a nice evening.