Hi everyone, I'm Marina Biefsa, head of customer success at Agronome. Thank you all for joining today. With us we also have Michalis. Michalis, would you like to introduce yourself?

Of course, Marina. Hello everyone from me as well. I am Michalis; my background is in computer science. I've been with Agronome for roughly eight years now, and with our data product, Fudakai, since its very beginning. I work at Agronome as a data engineer and also lead our data team. Marina, back to you.

Thank you so much, Michalis. Here is our topic for today: we hear a lot about food safety workflows. Food safety executives are unsure about the best ways to process, combine, and integrate high-quality data on external risks into their workflows. Many already have comprehensive, streamlined methods for the collection, harmonization, and integration of this volume of data into their everyday workflows. However, the extremely complex processes around the integration and use of horizon-scanning tools make the successful management of extensive datasets very difficult. The challenge remains the same for everyone: how can food and beverage companies seamlessly process, combine, and integrate large volumes of data into their already complex workflows? Michalis will now give us an overview of this from a data perspective.

Thanks, Marina. Indeed, what we're here to talk about today is how we can integrate food safety related data into our existing workflows, wherever we work in the food industry. Specifically, we have identified three use cases to showcase what kinds of insights can be extracted by integrating such data into our day-to-day workflows, which processes are expected to be affected by such an integration, and what challenges are involved in such a data integration.
And finally, for every use case that we present, we also plan on showing specific examples of how one can make use of the data at our disposal in order to enhance data-driven decision making.

Moving on to the next slide: you have heard the word "data" a number of times. Indeed, at Agronome, and specifically in Fudakai, what we have been doing ever since 2016 is collecting and processing food safety related data announced around the world. We have been doing this for roughly eight years now. You can see a quick snapshot of the data we currently have at our disposal on the top side of your screen: specifically, the data sources we are aggregating data from. You can see that they cover pretty much the whole world. On the lower side of the screen, you see specific data insights taken directly from our data platform. More specifically, as I'm sure many people in the audience already know, in order to cover the food safety sector holistically from a data perspective, a number of different data types are involved. Food recalls and border rejections are one, but we also need to take into account other kinds of data like price data, production data, trade data, and so on. Collecting all of this data has resulted, at least for us, in more than one billion processed data records coming from food safety authorities around the world.

Thank you so much for the overview, and we'll talk in a lot more detail about the use cases in a minute. Before we start, we have a quick poll that we would like you to answer. I'm going to share the poll with you: what challenges do you encounter when incorporating high-quality food safety data into your daily workflows? You have a few potential answers.
I'll give everyone a couple of minutes to answer. I see we are at 53% participation. We'll give everyone a little more time, one more minute, and then I'll end the poll. Very interesting results so far, I'd say.

Definitely, and we'll be able to discuss this a bit further later in the webinar as well.

And I'll share the results. Moving on, we'll start with our first use case. The first thing we'll discuss is how companies tend to develop complex monthly reports to share with their teams in order to take preventive measures. Many food companies have managed to streamline methods for the collection, harmonization, and integration of their data into their everyday workflows. They need that data, one, to develop complex monthly reports on individual ingredients as well as suppliers, and to later analyze and correlate that information into actionable insights for distribution to the relevant teams. They also tend to use those reports to develop trends and forecasts every month to share with each plant manager, so that they can put the appropriate preventive measures in place. However, what happens when the data changes constantly, when they need to add new ingredients and new suppliers to their supply chain, and when there are multiple data sources for monitoring and alerting? A workflow can be set, but the data needs to be updated and validated on a constant basis to stay ahead of any emerging risks and potential incidents, and at the same time to inform internal teams as well as plant managers so they can take the appropriate measures. Michalis here has a very relevant scenario from one of our partners that he'll be able to share, and he'll walk you through a bit of the process.

Great, Marina, thank you very much. Before we dive into this first scenario we have identified: interestingly enough, it maps directly to the poll answer that scored the highest.
So during the poll, what most of the people in this webinar mentioned is that the main challenge is scanning the web across heterogeneous data sources and different languages, and being able to identify which data records are relevant to their own day-to-day work. Indeed, this is the use case we plan on showing first. Now, to dive deeper into this first use case: we know, and the audience knows far better than we do, that the food safety sector is a very complex one in terms of the supply chain. Food companies source ingredients from various regions around the world, using numerous suppliers spread all over the globe. What we are showing here is a way to easily simulate your internal supply chain within a system, a solution, such as our own Fudakai. In the image on the top left side, we have added what we call a preference. This basically simulates your own supply chain in terms of ingredients and the geographies, the regions, where you are sourcing your ingredients from. In this example, let's assume that we are sourcing specific herbs and spices, like basil, cinnamon, and so on, from countries based in Asia, like India, Thailand, and Turkey. This is one side of the supply chain we want to simulate in a data system such as Fudakai. The other is incorporating supplier related data. Most food companies out there already work with numerous suppliers around the world. What we are showing here is a way for this data to be incorporated into a system such as our own and say, okay, I am working with supplier A, which is based in Thailand, from which I'm getting basil, cumin, and the rest of the ingredients you see in this image; and I'm also working with supplier B, a supplier based in New Delhi, India, from which I'm getting chili peppers and basil.
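As a rough illustration of the preference mechanism Michalis describes, here is a minimal sketch of matching incoming food-safety records against a supply-chain preference. This is not Fudakai's actual implementation; all class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Preference:
    """A simulated supply chain: followed ingredients and sourcing regions."""
    ingredients: frozenset
    origins: frozenset

@dataclass(frozen=True)
class Record:
    """One food-safety announcement (recall, border rejection, ...)."""
    ingredient: str
    origin: str
    hazard: str

def is_relevant(record: Record, pref: Preference) -> bool:
    """A record triggers an alert when ingredient and origin are both followed."""
    return record.ingredient in pref.ingredients and record.origin in pref.origins

# Example preference: herbs and spices sourced from Asia, as in the webinar.
pref = Preference(
    ingredients=frozenset({"basil", "cinnamon", "cumin", "chili peppers"}),
    origins=frozenset({"India", "Thailand", "Turkey"}),
)

records = [
    Record("basil", "Thailand", "salmonella"),
    Record("wheat", "Canada", "mycotoxins"),       # outside our supply chain
    Record("cinnamon", "India", "heavy metals"),
]
alerts = [r for r in records if is_relevant(r, pref)]
```

Updating the supply chain then amounts to replacing `pref`; every subsequent record is matched against the new preference.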
What are the advantages of integrating, of simulating, your supply chain within a system such as Fudakai? First of all, I'm pretty sure there is no human being who can go through one billion data records. So you can get customized email alerts for every food safety related record announced out there that matches one of your preferences, that is relevant to your supply chain. You can also account for the fact that we are working and living in a dynamic environment such as the food safety sector: you can incorporate updates into your day-to-day work, updates to the customizations you have made, to better reflect your day-to-day workflow. So, step number one: add your own ingredients, add your own suppliers, get email alerts, but also incorporate any potential updates. If we move on to the next slide: most people in the audience, I'm pretty sure, generate reports at specific intervals, like monthly, quarterly, or yearly reports, based on their own supply chain. Again, using a system that has already been customized to fit your own supply chain, you can generate these kinds of reports automatically, store them in case they are very complex ones, and regenerate them anytime you need them. What's important to mention here is that, as supply chains get more and more complex, you can generate these reports and share them with the matching stakeholders within your organization. For instance, say we have a specific monthly report that we generate for every supplier we source herbs from, another one for fruits and vegetables, and so on. You can generate as many of these reports as you want, take newly available data into account, and share them with whoever should receive these kinds of insights.

Thank you so much for the explanation and the showcase.
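The periodic reporting step can be sketched as a simple grouping of records by supplier and month; regenerating a report is just re-running the aggregation over the latest data. This is a hypothetical structure for illustration, not Fudakai's report format.

```python
from collections import defaultdict
from datetime import date

def monthly_report(records):
    """Group announced records by (supplier, YYYY-MM) for per-supplier reports."""
    buckets = defaultdict(list)
    for supplier, ingredient, announced in records:
        buckets[(supplier, announced.strftime("%Y-%m"))].append(ingredient)
    return dict(buckets)

# Invented example records: (supplier, ingredient, announcement date).
records = [
    ("Supplier A", "basil", date(2024, 3, 5)),
    ("Supplier A", "cumin", date(2024, 3, 18)),
    ("Supplier B", "chili peppers", date(2024, 3, 21)),
]
report = monthly_report(records)
```

Appending newly collected records and calling `monthly_report` again yields the updated report on demand.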
Here's another poll for you; I'm going to share it with you right away. Perfect. On a scale from one to five, how useful would you find such complete customization and report automation in your workflows? We see that for most of our partners it is vital to have the ability to continuously update their customizations and preferences as their supply chain changes. It's vital to get a lot of information on time and to be fully prepared, and then of course to share that with their partners, plant managers, suppliers, and any stakeholder involved in the process. I see fives for the most part, which is great. I'll give everyone one more minute, as you can see some more replies coming in. And I'll end the poll. Thank you so much for taking the time to participate.

Moving on to our next use case. We have noticed through our partners that many retailers have extremely complex processes around the use of horizon-scanning tools to monitor their white-label finished products, as well as suppliers, across several retail locations around the world. It's vital that they not only monitor external data through various data sources, but also have the ability to integrate their own data into such platforms: in order, one, to have a harmonized view of their internal and external data in a single dashboard that supports risk monitoring, assessment, and prevention, and two, to export food safety data analytics on finished products, hazards, and suppliers from horizon-scanning tools, which they can then provide to their store managers so they can take the appropriate preventive measures worldwide. We believe we have a use case here that we can showcase in a scenario that will elaborate on this a little more.

Thank you very much for the intro, Marina. Now, before we dive into this second scenario we have identified: as we mentioned initially, we talked about the data that is out there and made publicly available.
Indeed, it's a lot of data. It takes some time to go through it and identify the records relevant to you, but this is only one side of the story. The other is the data that is internal, generated internally by food companies around the world. I'm pretty sure that almost everyone in the audience working at a food company is already doing some kind of internal lab testing, or audits of their own suppliers. This kind of data is stored in various systems within an organization, at times possibly disconnected from one another. Systems you may already be familiar with are SAP or Salesforce, or whichever system you are using to store your own internal lab tests, audit scores, and so on: your own internal data. What we want to showcase in this second scenario is a way to combine these two different data sources: what does the outside view, the publicly available data, say, and what does your own data say? Now, as you can understand, the biggest challenge here is actually making these two different data sets talk the same language. From a technical perspective, this is what's called a harmonization process: making sure that whatever the external data or the internal data are talking about, they are talking about the same thing, using the same terminology, the same vocabulary. Only after this step has taken place can we extract insights combining these two data sets. Going through the specific example we have identified here: again, we're using our usual suspects, supplier A and supplier B, and we have identified two different sources of data we will use in order to perform a supplier risk assessment for these two suppliers. Highlighted in blue are the insights directly extracted from the publicly available data. For instance, supplier A has been involved in roughly 30 recalls on a national level, 25 border rejections, 14 inspections, and so on.
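The harmonization step Michalis mentions can be sketched as mapping source-specific terms onto one controlled vocabulary; only then can external and internal records be compared. The vocabulary entries below are purely illustrative, not Fudakai's actual taxonomy.

```python
# Illustrative controlled vocabulary: raw terms from any source or language
# map to one canonical English term.
VOCABULARY = {
    "lead": "heavy metals",
    "Pb": "heavy metals",
    "Blei": "heavy metals",          # German-language source
    "Salmonella spp.": "salmonella",
    "salmonelle": "salmonella",      # French-language source
}

def harmonize(raw_term: str) -> str:
    """Map a raw term to the canonical vocabulary, or flag it for curation."""
    return VOCABULARY.get(raw_term.strip(), "UNMAPPED")

external_terms = [harmonize(t) for t in ["Pb", "salmonelle"]]         # public feeds
internal_terms = [harmonize(t) for t in ["lead", "Salmonella spp."]]  # lab data
```

Once both sides speak the canonical vocabulary, records about "Pb" from a border-rejection feed and internal lab results on "lead" can be counted against the same hazard.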
Whereas for supplier B, as far as the publicly available data is concerned, we only know that they have been involved in seven food recalls on a national level. This is one side of the story, and this data can of course be used to generate a risk assessment for these specific suppliers. What we want to showcase here is actually highlighted in green: what if we add internal data into the mix? As we said, food companies are generating their own data internally, whether it's lab tests, audit scores, and so on. How about taking that into account as well when performing a supplier risk assessment? Highlighted in green in this specific example, we are adding the score we assign to each supplier we're working with, based on the audits we're doing. The main advantage here is that we can combine both of these data sets and perform a supplier risk assessment on top. What we also want to showcase in this example is that, interestingly enough, if you look at the data, by taking both sources of data into account, supplier A scores five times higher in terms of risk assessment than supplier B. This can be an indicator that something is being announced out there for this supplier that we may want to take into account. And what we can do to tackle that, Marina, if we move on to the next slide, is dive into further detail. What we're showing here, again sticking with supplier A, the one that scored highest in the supplier risk assessment with a score of 53: this catches our eye, and we want to dive in deeper and ask, okay, what actually happened? What is the reason behind this assessment? Again, what I want to showcase here is that any system like our own data analytics solution has a way to provide a unified and aggregated view on top of the data, in this specific example for supplier A.
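A combined supplier score of the kind described could, for example, weight public records against an internal audit score. The weights and formula below are invented for illustration; this is not the risk formula used in Fudakai.

```python
def supplier_risk(recalls, rejections, inspections, audit_score,
                  w_public=1.0, w_audit=0.5):
    """Blend public food-safety records with an internal audit score.

    Weights are illustrative; a real formula would be calibrated per use case.
    """
    public = 2.0 * recalls + 1.5 * rejections + 1.0 * inspections
    return w_public * public + w_audit * audit_score

# Figures loosely based on the webinar example (supplier A vs. supplier B).
score_a = supplier_risk(recalls=30, rejections=25, inspections=14, audit_score=20)
score_b = supplier_risk(recalls=7, rejections=0, inspections=0, audit_score=10)
```

The point of the sketch is the shape of the computation, not the numbers: folding the internal audit score into the same formula is what lets the two data sets drive one assessment.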
We are aggregating, and this is highlighted in blue, all of the data we know about this specific supplier. Whether this data is coming from a publicly available data source or from an internal data set provided by the food company we're talking about is irrelevant. The important thing is that all of this data is combined and shown here in one unified view, both in terms of total numbers, like how many inspections, food recalls, and border rejections have taken place, and in terms of risk assessment: what is the risk assessment for this specific supplier given the food recalls and border rejections, and what is the assessment given the geography, the region this supplier's company is based in. But also, highlighted in green, the interesting thing here is that we are able to identify insights concerning risk assessment specifically for the ingredients one is sourcing from this supplier. Highlighted in green here: out of all of the ingredients we are sourcing from supplier A, what are the prevalent hazards for each, and what's the respective risk assessment for them? The main advantage of using such an aggregated view for a specific supplier you are working with is that you can both get an overview of what the food safety authorities out there are announcing for this specific company, and see how this reflects on your own supply chain, on the ingredients you are sourcing from this supplier. This covers the supplier side of things. If we move on to the next slide: as we mentioned initially, in order to cover the food safety sector holistically from a data perspective, more data needs to be added into the mix. Incidents such as food recalls and border rejections are one kind, but many other data are also important. Lab tests are one such case.
What we want to showcase here, and the audience is more familiar with this than we are, is that internally, food companies are performing their own lab tests and storing them in some internal system. You can use a solution such as Fudakai to integrate this kind of data, this wealth of insights you are already generating internally, and combine it with the publicly available data, as highlighted here in this dropdown of data sources: we have the food safety authorities of Belgium, Germany, France, the Netherlands, and so on, but we also have a specific "my company" data source dedicated to our own internal data. We are of course the only ones with access to it, but we can use it in combination with the other data sources and generate insights on top. An example of such an insight could be: what are the food safety authorities out there testing for in herbs and spices, what specific parameters are they checking when performing their own lab tests, and are the lab tests we are doing internally coherent with what the food safety authorities out there are checking? If not, perhaps we should update our lab test plan or our audit plan. This covers this part of the use case. And finally, Marina, if we move on to the next slide: most probably you are already familiar internally with BI tools used for data analytics. I'm pretty sure that most of the people in the audience are already familiar with Power BI, provided by Microsoft. If you already have such a system in place internally, a data analytics solution, some kind of BI tool that aggregates the data and visualizes it in some fancy way, then needless to say, the kind of insights we just presented, the combination of externally available data and internally available data, can be incorporated into such a tool to generate insights on top.
By the way, what you are seeing here, and pretty much all of the screens you will see throughout this webinar, are taken directly from either Fudakai or our internal BI tool. And also stay tuned: we don't want to give any spoilers, but in roughly a month we have a webinar specifically dedicated to data integration, so stay tuned; we will talk a bit more on this topic over the next month.

Thank you so much. And here I'll share with you another poll based on this specific use case. It's really important to highlight that these requests come in from our partners quite often: ways to do such integrations of internal and external data, and to do that in one dashboard that's actually user friendly and easy for everyone on their teams to use. I'll give you one or two minutes to answer; I see some people have already answered. Just to repeat the question, in case you can't see it: on a scale from one to five, how useful would you find the integration of all internal and external data into one dashboard? I see some more answers coming in. Thank you so much for participating.

And we move on to the next use case we have prepared for today. We have noticed, and we've discussed this in the previous use cases as well, that food companies have managed to streamline the collection and harmonization of data for risk monitoring and assessment. However, due to unforeseen disruptions in their supply chain, as well as other geopolitical and environmental factors, food companies need to be able to anticipate unexpected risks in their supply chain. With food safety risks always increasing, it is more pressing than ever for companies to have a complete overview of all emerging risks when compiling risk intelligence reports.
Food risk prevention measures enable companies to be proactive; taking the right risk mitigation measures on time can prevent risks associated with ingredients as well as with suppliers. Through early warning features, they can identify potential hazards and take the necessary mitigation measures to prevent recalls, plan and intensify supplier audits based on the forecasted risks in a region, check and update the lab testing plan according to emerging risks, and always be one step ahead. And Michalis here has prepared a relevant scenario that he'll walk us through.

Thank you, Marina, again, for the introductory part. Now, before we dive into this third scenario: interestingly enough, this one is actually tied to the first poll we took in this webinar. The second-highest-scoring answer was that the biggest challenge is identifying emerging and/or new hazards and incorporating them into internal workflows. Hopefully this scenario is for the people who selected that specific choice. What we want to showcase in this scenario is a way to utilize, A, all the data that is made publicly available and, B, the data generated internally by companies; use them in combination with one another and generate insights. The kind of insights we're looking for in this example is the identification of emerging and/or increasing risks. What do we mean by emerging? This is highlighted in red here in the slide. We identify emerging risks as the ones showing an increasing tendency in terms of risk, using the data we have at our disposal. In the specific example we are seeing here, heavy metals scores quite high for cinnamon. This is a very interesting indicator, because we can easily tie it to the lead-in-applesauce outbreak that we recently experienced, I believe it was in the United States.
Now, this kind of insight takes advantage of the historical data at our disposal to perform analytics on top, so we can easily identify which hazards are showing out-of-the-ordinary behavior in terms of risk assessment, and we can prioritize them in our internal lab tests. But this is not all. We can also use this wealth of data to identify very new or unknown hazards, highlighted in blue on the lower side. We're identifying some of these hazards specifically for the ingredients we added and have been following throughout the examples in this webinar. Interestingly enough, these new or unknown hazards are hazards that have occurred for the first time over the past days or weeks, as opposed to the roughly 45 years of data we have at our disposal within Fudakai. This is a very interesting insight, because usually these kinds of hazards are either not taken into account when drafting a lab test or audit plan, or score lower in terms of prioritization among the hazards we track in our day-to-day. In a nutshell, we can use the emerging hazards and the new hazards to make sure that we have added them to our own workflows on a day-to-day basis, and also prioritize them higher if we have not already done so. This is one thing; this is the early warning side of things. Before we move on to the next slide: now we will attempt to move towards the forecasting capabilities of this data integration we're talking about. Before we dive into the actual data and insights, let's spend a moment on these forecasting techniques. As we mentioned in the initial slide with the data records at our disposal, we have many data records that have been announced over the past roughly 45 years. This is as far as our data goes; of course, internally within food companies, possibly much more data exists.
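One simple way to operationalize "new" versus "emerging" hazards, in the spirit of what is described above (my own sketch, not Fudakai's method): a hazard is new if its first-ever record is recent, and emerging if its recent record count is out of line with its historical rate.

```python
from datetime import date, timedelta

def classify_hazards(records, today, recent_days=30):
    """Split hazards into 'new' (first seen recently) and 'emerging' (rising)."""
    cutoff = today - timedelta(days=recent_days)
    first_seen, recent, total = {}, {}, {}
    for hazard, announced in records:
        first_seen[hazard] = min(first_seen.get(hazard, announced), announced)
        total[hazard] = total.get(hazard, 0) + 1
        if announced >= cutoff:
            recent[hazard] = recent.get(hazard, 0) + 1
    new = {h for h, d in first_seen.items() if d >= cutoff}
    # "Emerging": recent count exceeds the average historical monthly count.
    emerging = {h for h in recent if h not in new
                and recent[h] > (total[h] - recent[h]) / 12}
    return new, emerging

# Invented records: (hazard, announcement date).
records = [
    ("heavy metals", date(2023, 1, 10)), ("heavy metals", date(2023, 6, 10)),
    ("heavy metals", date(2024, 1, 5)), ("heavy metals", date(2024, 1, 10)),
    ("heavy metals", date(2024, 1, 15)),
    ("novel toxin", date(2024, 1, 18)),    # first-ever occurrence
    ("salmonella", date(2023, 2, 1)),      # old and currently quiet
]
new, emerging = classify_hazards(records, today=date(2024, 1, 20))
```

Both sets would then feed prioritization: add the new hazards to the lab test plan, and bump the emerging ones up.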
The idea of these forecasting techniques is to utilize all of this data to train forecasting models, in order to attempt to predict what will most probably take place for a specific ingredient and/or a specific hazard, a specific reason behind recalls, or a combination of them, possibly adding regions into the mix, and attempt to predict what will take place, in this case over the next 12 months. The interesting thing here is that this way we can identify outliers, emerging cases that are expected to take place in the future. Specifically, the screenshot we're seeing here involves these forecasting techniques applied on top of the product category of herbs and spices, and we're showcasing the hazards that are most likely to increase over the course of the next 12 months. In this case, we see that many pesticides seem to be quite prevalent, but the presence of foreign bodies and chemical hazards also shows up. The main idea is both to identify hazards that are likely to increase in the future, and to make sure we take them into account when drafting our own internal audit or lab test plan.

Thank you so much for the use cases, and as I share one more poll, the last one for today, I want to highlight that we have seen these preventive measures taken by our partners, and we have some very successful use cases. Through the alerting system Fudakai offers, they've been able to warn suppliers, as well as clients, ahead of time to avoid potential recalls. And the question for the poll: on a scale from one to five, how useful would you find food safety forecasts in your workflows? Glad to see most answers are towards five; we'll give everyone one more minute.

That's a good thing, Marina, and while people are answering our poll, let me also add that use cases two and three, which we just saw, can also work in combination with one another.
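As a toy stand-in for the forecasting models mentioned, a least-squares trend over monthly record counts already shows the idea of flagging hazards likely to increase. Production systems would use proper time-series models; the figures below are invented.

```python
def trend_slope(counts):
    """Least-squares slope of a monthly count series (records/month per month)."""
    n = len(counts)
    mean_x = (n - 1) / 2
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Invented monthly record counts for two hazards in herbs and spices.
monthly_counts = {
    "pesticides":     [2, 3, 4, 6, 7, 9],   # clearly rising
    "foreign bodies": [5, 5, 4, 5, 5, 4],   # roughly flat
}
likely_to_increase = sorted(h for h, c in monthly_counts.items()
                            if trend_slope(c) > 0.5)
```

Hazards whose fitted slope crosses a threshold would be the ones surfaced as "likely to increase over the next 12 months" and fed into the audit or lab test plan.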
So in use case two, we saw how internal data can be added and combined with external data; whether it's in a BI tool or a system like Fudakai is irrelevant, the basic idea is combining two different data sets. And in use case three, we saw how these two data sets can be used in combination with one another to perform forecasting. The results of the forecasting techniques, the identification of hazards that are emerging or likely to increase, can also be incorporated back into internal systems and/or BI tools.

Definitely, and we've seen this need from partners as well, whether they're working on a project internally that they'd like us to fit into, or they'd like support on our side with those tools. It's definitely a major need to be able to combine the data and also evolve that along with the prediction capabilities. Thank you so much for taking the time to answer the poll.

Moving on to the next part of the webinar, we have an open Q&A. Feel free to share your questions in the chat or the Q&A; we're able to see and read both. The first one we see here is: how easy is it to deploy such a solution into a company's workflows? I can start with that, and of course feel free to add. I would say, having worked with multiple partners, multiple companies, from smaller to larger scale, it is a process to deploy such solutions and incorporate them into the workflows. We work very closely with partners from a customer success perspective to train team members, and to have regular meetings and touch points with them, so they understand the solution and how they can incorporate it into their workflows, substituting time-consuming activities, like reporting for example, which they can do directly through Fudakai in a more efficient manner.
Perfect, and I definitely agree with what you said, Marina. Let me just add on top of that that, at least the way we work, when trying to integrate a solution such as ours into the day-to-day life of food safety professionals, we take it one step at a time: identify the specific pain points they are experiencing in their day-to-day, and see how each one can be tackled with a system such as our own. And if I may, Marina, I also see another question in the Q&A that ties into what you just described, so perhaps we could take this one now. Someone asks: how much time does it usually take to adopt such a solution across all FS&QA teams?

That is a great question, and it does tie into that. Adoption obviously changes with time. We would say that with proper training and proper touch points, we would need three to six months for a team to be fully engaged on the platform. And as we move forward into years one, two, and three, the adoption factor is definitely higher, because they get accustomed to the platform, they get used to using it even more, and they have integrated it fully into their workflows. We'd like to think that it's a user-friendly platform, so it's usually easy to adopt as long as it's set up properly.

And Marina, no time to catch a breath; I see another question related to what you just described. Someone else is asking: should the person be dedicated full-time?

That's a great question as well. We've seen that adoption can work in various ways. It really depends on whether it's a centralized team, or whether we're talking about various regions around the world and various teams. A project manager, a dedicated person from the partner's side, is always advised, because they hold that central information for everyone and can share it across. So it is always great to have a dedicated person.
They do not need to be full-time, though, but it helps to have a main point of contact, especially when we're talking about various regions worldwide and various teams, which is what we usually do with Fudakai.

Perfect. No, no, please, please. I see a question here: what are the typical challenges or problems faced when adopting such a solution? I see a lot of questions around adoption, and this is really interesting, as we've seen this challenge arise a lot with our partners. There are a few challenges here. It's about collecting the data and being able to represent it in a dashboard, but also about making people actually use it, incorporate it, and update their daily workflows with something new. If the rollout is done properly and in a systematic way, it definitely works. The typical challenges or problems we've seen are that if the solution is just another solution for a partner, without proper training, management, and the proper ways to build those workflows and substitute time-consuming actions with specific features in Fudakai, then it can be problematic; it can be a challenge for people to adopt it. If it's done properly, in a systematic way, expanding across teams, whether broken down by region, by food safety team, or by any other structure the partner has, then it definitely works.

Perfect. Then, Marina, perhaps you can take a moment to catch your breath, and I can take another question, which has to do with the identification of emerging hazards. Someone is asking: does the system identify hazards that are not yet looking relevant, for example emerging hazards that have not resulted in a recall? As we've seen throughout this presentation, a solution such as our own is basically a data and analytics solution, so in order for a hazard to be identified as emerging, some kind of data needs to exist within the system.
Now, the way we assess risk for specific hazards follows the risk assessment formula we have incorporated internally. For a hazard to be identified as emerging, at least one case, one data record, must exist over the past year. In essence, no emerging hazard will be identified if we do not have at least, say, one food recall or one border rejection. But of course, this is linked to the formula we are using internally for risk assessment. If someone incorporates our data and assesses risk using their own internal formula, then whether a hazard is identified as emerging depends on the formula they use. For the one we are using, at least one record needs to have taken place over the past year. Thank you, Michalis. And I see a question about data harmonization in the chat, which is a very common question: how is the data harmonized in the platform? Do you want to take that? Yeah, definitely. Let me say I am super happy that someone actually asked this question; it means the concept of data harmonization is known to people in the audience, which is very nice. So how is the data harmonized? Internally, for our data platform, we have our own vocabularies, our own taxonomies: basically, lists of terms we use to annotate each of the data records we have. We have a predefined list to describe every product category, every ingredient, every commodity out there. The same goes for hazards: again, a taxonomy, a vocabulary, that we use to annotate our data records with hazards. This is, by the way, organized in a hierarchical manner. And the data is harmonized before being made available to the end user in, for instance, Fudakai, regardless of the language or format in which the raw data is announced.
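The vocabulary-based annotation Michalis describes, mapping raw source terms in any language onto controlled, harmonized English taxonomies, could be sketched minimally as follows. The vocabularies, terms, and field names here are invented for illustration and are not the platform's actual taxonomies:

```python
# Hypothetical controlled vocabularies: raw source term (any language)
# mapped to a harmonized English term.
PRODUCT_VOCABULARY = {
    "erdbeeren": "strawberries",
    "fraises": "strawberries",
    "strawberries": "strawberries",
}
HAZARD_VOCABULARY = {
    "salmonella spp.": "salmonella",
    "salmonelle": "salmonella",
}

def harmonize(record):
    """Annotate a raw record with controlled vocabulary terms,
    regardless of the language of the source announcement."""
    return {
        "product": PRODUCT_VOCABULARY.get(record["product"].lower(), "unclassified"),
        "hazard": HAZARD_VOCABULARY.get(record["hazard"].lower(), "unclassified"),
        "source": record["source"],
    }

raw = {"product": "Erdbeeren", "hazard": "Salmonella spp.", "source": "DE"}
print(harmonize(raw))  # {'product': 'strawberries', 'hazard': 'salmonella', 'source': 'DE'}
```

In the real platform this mapping happens in the back office, on hierarchical taxonomies covering every product category and hazard, before anything reaches the end user.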
We harmonize everything into our own vocabularies, which, by the way, are in English, and then make it available in Fudakai. So the harmonization step basically takes place on the backend, on the back-office side of things. Thank you, Michalis. I see a few questions about training: whether training is provided, and whether the team is available for in-person training or workshops if needed. The answer is definitely yes. We provide training online for the most part. There are dedicated account managers for each account, who provide very detailed training on Fudakai to cover all teams on the partner's side. And if there is a need for an in-person workshop with any partner, we make sure to cover that as well, whenever it is possible. Our partners are located worldwide, so that gives us the opportunity to meet and plan workshops in person. And there is also a question here: are retailers following different deployment paths? We work closely with both food manufacturers and retailers, and we have set up various deployment scenarios and paths to cover different needs. So yes, there are different deployment scenarios for retailers, because their needs and challenges are different, and different deployment scenarios for food manufacturers, and these are all customizable based on every partner's needs. Perfect, Marina. I see another question; it is a fairly quick one. What if our team's main working language is not English? Do you want to take that, Marina? Okay, so the platform is fully curated in English. As Michalis mentioned at the beginning of the use cases, we cover multiple data sources around the world, all in several local languages. We have a wonderful curation team that takes that information and unifies and harmonizes it in English for everyone. There is no plan at the moment to have the platform available in another language.
But we do understand the challenge and take it into consideration. And Marina, I am also seeing some other questions here around the ease of integration into day-to-day workflows. We partially covered that previously; let me just add, on top of what we have already discussed, that concerning data-wise integration specifically, as long as the companies using our system are on some kind of known infrastructure, the integration is very straightforward. I will give a specific example: if a food company is using Azure or AWS infrastructure for their own BI tools and their own analytics, then the integration, whether from Agronome's side to the company's side or the other way around, is very straightforward. If they are using some kind of custom solution, however, it is of course not such a straightforward approach, and we would have to treat it as a custom project and work specifically on that case. Thank you so much, Michalis, and thank you to everyone who shared their questions in the chat and the Q&A. If there are any we have not answered, we will make sure to follow up after the webinar; we have them all in store. And as we are wrapping up, Michalis, would you like to share a few thoughts on the use cases shared so far? Yeah, of course, Marina. What we attempted today was to identify three specific use cases that showcase the importance of integrating external data insights, or generating internal data insights, and adding them into your own workflows, into the day-to-day life of food safety professionals. In the first use case, what we saw was a way to make sense of the wealth of information announced out there concerning food safety, and to make the information that reaches your mailbox, for instance, tailor-made to your own supply chain.
In the second use case, we saw how we can build on top of modeling our supply chain in an external solution such as Fudakai, and how we can extend the insights generated from such a tool by also incorporating our own data into the mix. And in the third use case, having added a lot of data through the first two, the main idea was: can we use all this data to generate risk assessment insights, and/or apply AI forecasting techniques on top, to be able to stay ahead of potential cases? Thank you so much, Michalis. It has been a really interesting session. And if I can comment a little on the questions we got through the chat and the Q&A, it seems that the concerns, the worries, and the challenges are very common to everyone. We often get these types of questions from partners: how do others use the platform? What are some successful use cases? Is there training? What is the integration like? How do others manage to incorporate Fudakai into their workflows to save time? Hopefully, we got to answer some of those questions today with Michalis, gave you some real-life examples and ideas, and inspired you a little. We hear from partners that the challenges are many, but common across the board. There is too much historical data scattered across digital and physical touchpoints, which makes it extremely challenging to gather it all in one place. Then, once that data is gathered and collected, the next challenge is to visualize it in one dashboard, in order to be able to assess, make informed decisions, and take preventive measures. And finally, once the dashboard or the app is set up, it needs to be easy to train on, easy to incorporate, and easy to navigate on a daily basis, for people to actually adopt it, adopt a new way of working, and incorporate it into their workflows.
Those are the challenges we are constantly trying to address with Fudakai, with the continued support of our partners, because the shared interest and goal is mutual: to reduce food safety risk in the supply chain for all. Thank you so much for today. Thanks, everyone.