So a very warm welcome to our First International Summit on Privacy Preserving AI for Food Risk Intelligence. My name is Chris Elliott. I'm Professor of Food Safety at Queen's University Belfast and Professor of Food Safety at Thammasat University in Thailand. And I am co-organising this event along with Manos Parvunas from Agroknow. A very warm welcome to all of you, the participants. We've had over 100 registrations, so I think it shows how important, how topical the subject of AI, and particularly around privacy, is. And we have put together, I think, a really quite exceptional panel of speakers today who will give you some information about their own demands around AI, around privacy. And each will give a bit of a case study, and we very much welcome you to participate after the presentations and ask some questions. And please feel free to start typing your questions into the chat box, and I will deal with those questions as we can during the panel discussions. I think it's very, very timely to have this summit on AI. It's a topic we hear more and more about on the news, on many different forms of media. I would say approximately 50% of the news stories are good news, 50% are bad news. But I think the real news is AI is here, and what we really have to do is try to understand it better, and then think about what the best applications for AI are. And this summit is very much about food risks. And most people in the food industry understand that food risks and food safety, it's something which is non-competitive. It's something that we all have to work together on to try to ensure our food is safe and authentic. But also we realize that the data that is generated in terms of food safety and food risk monitoring is incredibly sensitive, it's very confidential. And actually it's extremely difficult to think about how you can share that data, share that data between companies, or even share the data with regulatory agencies. And again, we'll have some case studies where we will show there actually are really good ways where you can share data, but do it in a way that protects your business, protects the interests of all of your stakeholders. So again, please think about questions that you would like to ask. The more difficult the questions, the better, because all I have to do is ask other people the questions, I don't have to answer them. So what a wonderful position it is to be the chair of this. But to be more serious: now, it is interesting that just in the past 24 hours, there was a very high level summit between the Prime Minister of the United Kingdom and the President of the United States. And they were talking about AI and the benefits and some of the risks. They were also doing it in an environment which was unbelievably polluted, huge smoke passing over large parts of the United States of America. And of course, the cause of that smoke is climate change. And here are some of the really big risks, and there are so many things that we can think about in terms of trying to mitigate against climate change through using AI. And also we know that climate change has a massive impact on food safety as well. So there are lots of different issues coming together. And these are the things that we hope to talk about today. So in terms of our panel members, each of them will be giving a short presentation, and I will ask them to introduce themselves in terms of their background and their areas of expertise. And just to kind of walk through our program a little bit.
In a couple of minutes I will start to talk about the Food Fortress. And the Food Fortress is something that we're extremely proud of in Northern Ireland, because it started off with a crisis, and we ended up with something which actually gives us a competitive advantage. Then Manos will give us some really good background information in terms of the systems that can be used to collect information in a federated approach that can really help build really robust AI models, but still protect the integrity of your own data. Tim Hill will talk about another case study, and it started off as very UK-centric, but it is actually really expanding to many different parts of the world. And that's how to exchange information and intelligence about food authenticity, and doing it in a way which is incredibly safe as well. Then Suley will talk about — or sorry, first of all we'll have Bas, who will talk about a lot of the technical background to federated AI learning as well. And this is a presentation that's for somebody who is a novice in AI, like myself. I'm really interested to learn more about the work of Bas and WFSR. And then finally, Suley will talk about consumer views and the importance of consumer acceptance in these types of applications, which again is hugely important. You can put in place the best systems in the world in terms of food, but you know what, if your consumers don't understand it, if consumers don't like it, we're pretty much wasting our time. And then after that, we will go into our panel discussions. So what I'm going to do now is I'm going to start with my presentation. And just to say that I can't share my screen at the moment, Manos, because I think you are sharing, so maybe you can stop sharing. No problem. Let me say a couple of things about our brainstorming sessions at the end as well. Thank you very much for introducing the agenda, and everyone here contributing to the event. I think we have put together a great agenda. And of course an important part of the day is near the end: we have the brainstorming sessions, where we invite you all to participate and voice your opinion on what you will be hearing about today. In the meantime, please write down any questions you have freely as we're going through the keynote speeches. We will answer as many as we can during the discussion panel, as Chris also mentioned. And after the discussion panel, you are all invited to participate in small group brainstorming sessions as well. So if you're interested in that, just remain till the end. Okay, so let me stop sharing, Chris, so you can share your slides. Go ahead. Okay, and just to check — can you see my slides okay, Manos? Thank you very much. So, what I want to do is to talk just briefly about a real crisis. This was a crisis which happened in 2008-2009 on the island of Ireland, both north and south of our island. What happened was, we got a very small amount of dioxins. It was actually calculated to be less than one gram. One gram of dioxin got into our animal feed supply chain across the island of Ireland and caused an enormous food safety scare. Now, at the time it was calculated that the cost was about 125 million euros. After the crisis, we did some better calculations. And the true cost was closer to a quarter of a billion euros. And I always say that one gram of dioxin was the most expensive substance on the planet: one gram, a quarter of a billion euros.
It caused a total recall, a global recall of all pork produced on the island of Ireland. It caused massive reputational damage to our industry. It actually came very close to getting into the dairy supply chain of the island of Ireland, which would have been even more severe. For a small island, we produce 1% of all the world's milk, but we produce 10% of the world's infant formula. This is a hugely important industry. So we had this crisis. And in terms of a crisis, there are plans to manage the crisis. A lot of it is about finding out what the problems are. A lot of it is about trying to communicate with different stakeholders, particularly consumers, about the food safety risks. And actually the food safety risks associated with the scare were minimal, but it was incredibly difficult to communicate that at the time. So I will tell you, preventing such incidents is much, much better than trying to manage them. And I say that because I was involved in the very early steps of trying to manage this crisis. And what was very clear to me at that time, every company was blaming other companies. There was a massive degree of fighting and arguments between the industry and the regulators. Everybody trying to blame everybody else. And these are really common features of what happens when you get a food safety incident, particularly a large one. And what I wanted to say is we actually moved from blaming everybody to getting everybody to work together. And I think it's a wonderful model to know about and to understand how we went from fingers pointing at each other to the hands coming together. And it is really about data, and how you can manage and how you can exploit data. So from being adversaries to moving to partnership. How did we do this? We achieved it by, first of all, agreeing that the industry had to come together to work together, and think about how to try to prevent future incidents happening. And that individual companies, even the large companies — and we have some really large animal feed companies on the island of Ireland; we import over 6 million tons of animal feed each year — should not think about the risks as an individual business. We should think about the risks as an industry. The industry has to have a single agreed form of risk assessment and then a single risk management plan. And how did we do that? I was very pleased to be contacted by the feed industry on the island of Ireland, and by Pat Wall, the former chair of EFSA, to think about how we could come up with a way that a single risk assessment and risk management program could be put in place. The only way we thought we could do that was by getting very, very sensitive and very confidential information from a lot of the companies, some that were implicated in the scandal and lots of the companies that were not implicated in the scandal. We collected a lot of very sensitive information about the amount of testing that they did for particular risks. And then we looked at that data and looked for the strengths and weaknesses of individual companies, but also on a collective basis. And that information was shared with me and my research group. No other company was able to see the data from any other company. It was always kept very confidential. And then what we were able to do was to think about the application of cutting edge science and technology. The objective was to build the world's best quality assurance scheme.
So going from a point of crisis to a point of actually having a commercial advantage. What we did was we devised a way that sampling could be carried out, and samples were taken based on the size of the company and the number of risks associated with the particular commodities that they were importing and sourcing. We devised a testing program. We devised a way that results could be shared across all of the members in a form that protected individual businesses. Another hugely important part of what we were able to achieve was not only to get the buy-in from regulators, but a huge amount of support from the regulators as well. What they saw was the industry coming together to self-regulate, to self-police themselves, and the regulators, whose resources are always diminishing in many different parts of the world, were getting access to the data. Now, where we are now in terms of this scheme, we call it the Food Fortress. We are now a brand, and we have over 80 companies on the island of Ireland, but also in Great Britain, who are members of this scheme. And on the island of Ireland, we cover virtually 100% of all animal feed materials. And that is a monitoring program which now looks after six million tons of animal feed each year. It's not only the animal feed industry that we are supporting, it is the entire agri-food industry on the island of Ireland. Now, for instance, in Northern Ireland where I come from, it's a small part of a small island. The value of our food industry is well over five billion sterling, over six billion euros. And we have Michael Bell with us today, who is the CEO of one of the main organizations that helps to support that very large agri-food industry. And I think what is also really very positive about putting such a scheme in place: it is now used by the industry as a way of promoting the quality, the safety, not only of our animal feed, but of all the food that comes from our island of Ireland. I regularly meet buyers who come from many different parts of the world, particularly from Southeast Asia, from North Africa, from the Middle East, who want to buy dairy products, who want to buy meat that comes from this very, very highly run, very well-policed agri-food supply chain. What we now have is, we believe, the world's leading quality assurance scheme for animal feed. We have the highest levels of traceability, the highest levels of quality. Now, it took us quite a number of years to get there. What I have summarized in the last seven or eight minutes actually took us about five years, and it took quite a long time because of the issues of sensitivity of data. The system is now in place. It's something that we've shared with a lot of other stakeholders in different businesses, and I think it's an exceptional way in terms of managing data to reduce food safety risks. So that's my case study. And again, please feel free to ask any questions that you have about the Food Fortress and how we established it. And what I will do now is, I will pass you back to Manos, and Manos is going to talk more about the background to this federated approach of data sharing. So thank you very much for listening. Thank you very much, Chris. Let me share my screen again. All right, so from a very successful initiative, let's now go a bit into how things could look in the future. Okay, so hi everyone. Again, I'm Manos. I'm leading our innovation team at Agroknow.
Today, my main hope is that I can help us all become a bit more confident that predictive analytics for food safety can work well and be applicable in real-world situations with true business value. And most importantly, that we can do all this while ensuring the confidentiality of the safety data involved. So what we will be talking about is in part our internal work at Agroknow on this type of project, but we're also very happy to be coordinating an important European Horizon project called EFRA, which is also co-organizing this event, where we work together with many excellent partners across Europe to advance the field of food risk prediction. So without further ado, let me get started with a provocative question for today: can we predict food safety risk while ensuring confidentiality? All right, so no need to vote here, but take a moment to think this through. What is your opinion? Is this something that you feel is feasible? Can we predict food safety incidents before they happen, and also do this in a way that does not expose any sensitive data to third parties? By the end of this presentation, we might revisit this question. All right, so I would like us to examine this through a realistic example. Let's say we are working with a poultry company, and the company owns multiple facilities that have a set audit schedule that's based on hazards. We would like to improve on it through a data-driven and predictive approach. That means an AI model with which the audit schedule would be updated and prioritized based on real-time information and the prediction of potential risks. To ensure that no data needs to leave the company premises, we decide to train the AI model directly on the company premises, using their local data and their local computational resources. This is an interesting approach. It's a nice baseline. It can ensure confidentiality, which is not so much of an issue when we're talking about a single company. We will see interesting ways to extend this to multiple companies later in the presentation; that will be the essence of this presentation. So before all that, so we're all aligned, let's start with some steps we would follow to train this model and create such an AI predictive approach. So to tackle the challenge, every time we need to put together a cross-domain team with food safety experts and computer scientists. And the first thing we do when working on these cross-domain challenges is to understand why we're creating this AI model in the first place, which believe me is not a given many times for many of the participants. So what are the things that make us worried, that we hope we can avoid or we hope we can do better? This serves as a focus for every activity, but as the motivation as well. So in this case, the experts might share with us statements like: I'm constantly worrying that we might miss something critical and it will sneak up on us, leading to recalls. Or: with our audit resources being so limited, it's really crucial for us to spread them out for the best outcome, and it feels like we're constantly juggling. So after this, next it is time for the AI experts. We will look at the family of AI models that may be best fitted to the challenge, given what we're aiming for. So let's say that after the cross-domain discussion we decide that we're aiming for a data-driven approach to prioritizing audits based on a predicted time to incident. That is, which of the facilities is predicted to have an increased chance of a food safety incident relative to the others.
What is the expected time frame for this? So the food company can then audit proactively and remedy the situation. So the computer scientist will say something like: this seems a lot like asking how long a given facility will survive without an incident, and the time dimension that was mentioned is important. So a specific approach would be to use survival analysis, which is a particular family of AI models. It does not concern us too much today, but it's a particular way of doing things, a particular type of AI model. So before we look into how we can use survival analysis in this way, and actually use it as an example for some general principles of what AI models can do, let's go back to our cross-domain team a bit and look at the most crucial part of the whole process, what can make or break the whole effort. Okay, the next thing we do is where cross-domain collaboration is most crucial. Computer scientists and food safety experts, we sit down together to understand the factors that influence the probability of an incident occurring. Here, most likely we also need to narrow down our scope to a particular type of incident, so that the contributing factors can be considered properly. Okay, so now, when the food safety expert says factors, the computer scientist will always hear columns. This is an interesting takeaway message and we will delve into it a bit more in the next slide. It's a bit of an oversimplification, I know, from the point of view of the computer scientist, but it helps cross-domain collaboration a lot. So, let's understand all this with one example. Let's say we are working with a poultry company that owns multiple facilities, as we said, in a rather complex supply chain. Let's say they are concerned about salmonella cross-contamination incidents in their facilities and would like to intervene proactively based on the predicted risk. So first, to make the goal clear: it is to collect data for the factors that influence salmonella incidents in the poultry facilities. So we start by monitoring some of the facilities for, let's say, one year, 365 days. During that time some of the facilities will experience an incident and others will not. So we note down various factors that we believe might be related to the arising of an incident, and we track them through time. To simplify very much, let's say these factors are the columns in this table, okay? For each facility, how many days it remains salmonella-free; after that time span, whether it had an incident or not. And then we have the contributing factors. So it could be something like a risk score based on some on-site analysis, a hygiene measures score, a feed quality score, an equipment maintenance score, something related to employee behavior, et cetera, et cetera. So the factors are simplified, I know, but really that is all there is. The experts will say factors, the computer scientists will hear columns, and that is columns in such a table. And what is the trick here? What is the effort? If we can put enough columns in this table and make this table long enough, we can trust that we can find a particular AI model that will produce very good predictions. And even if we do not put enough columns in this table, even if we do not account for all possible factors, the AI model will still be useful. It will provide a data-driven approach to something that might not have been data-driven before. So the next step, I won't go too much into it, but the next step falls on the computer scientists. Here is where we construct the model.
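[Editorial aside: to make the "factors become columns" idea concrete, here is a minimal, purely illustrative sketch of such a table and of a survival model fitted to it. The column names, scores and the use of the lifelines library are assumptions made for this example, not the project's actual implementation; a small ridge penalty is added only to keep the toy data numerically stable.]

```python
# Hypothetical sketch: the "factors as columns" table and an audit-prioritization
# survival model. All column names and values are invented for illustration.
import pandas as pd
from lifelines import CoxPHFitter  # one common survival-analysis library

# One row per monitored facility over the 365-day observation window.
facilities = pd.DataFrame({
    "days_salmonella_free": [120, 365, 87, 365, 210],   # time observed
    "incident":             [1,   0,   1,  0,   1],     # 1 = incident, 0 = none so far
    "hygiene_score":        [0.62, 0.91, 0.80, 0.55, 0.70],
    "feed_quality_score":   [0.70, 0.60, 0.85, 0.92, 0.66],
    "maintenance_score":    [0.58, 0.95, 0.49, 0.66, 0.90],
})

# Cox proportional-hazards model: time-to-event as a function of the factors.
model = CoxPHFitter(penalizer=0.1)  # small penalty keeps the tiny toy dataset stable
model.fit(facilities, duration_col="days_salmonella_free", event_col="incident")

# Higher predicted hazard = higher relative risk = audit this facility sooner.
risk_ranking = model.predict_partial_hazard(facilities).sort_values(ascending=False)
print(risk_ranking)
```

[In practice the table would be far wider and longer, and a survival model is only one of several reasonable choices.]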
What does that mean? Essentially it means introducing the AI model to the previous table. Okay, it's a bit more tricky than that, a lot more tricky, but that's the essence in a sense. So what is the end result? The end result, after constructing the model and training it, in this particular case would be audit prioritization by an AI model. That is, a trained AI model which, fed with up-to-date records, would predict the time to incident per facility. Okay. This is how we would go about constructing a model for a single, specific food company for a particular type of incident. Now, going from one to two or more companies while still ensuring confidentiality is a whole lot more challenging. This is why we have put together the EFRA project. So let me introduce you very quickly to the consortium. We have a university partner from Sweden, Wageningen Food Safety Research from the Netherlands, Moy Park from the UK, CNR and others from Italy, partners from Croatia and Greece, SGS Digicomply, and Agroknow from Greece. So let's look at what happens if we want to add a couple more companies to the mix — why we would want to do this and why it's tricky. So as we'll see from Bas's presentation as well, all AI models can be improved the more, and the more diverse, examples they encounter. So remember, the wider and the longer the table, the better the predictions. So this has a very deep implication. Let's say we have our audit prioritization model set up and we have a company that wants to train it and use it. We can do so; they get back a useful model, but in a sense this is a model that has been informed only by their own data. So in a very particular sense, they get back a deeper insight into what they already know. But the industry has other companies as well, with their own internal facilities. So if the model could be trained over all their individual examples, the final model would be much more powerful, in a sense covering the entire industry, rather than any particular company. Of course, an individual company would be hesitant to expose sensitive data. They might say, okay, the model is great, but what if my data is exposed, my reputation maybe. So what we are looking into is to prove the concept that you can get the model without exposing data, and actually the high-level idea is straightforward. The model moves around the companies, getting trained with the local data, but no data is moved around, only the model, making it stronger and informed by all participating companies. And the final model is given back to all the participants. Of course, the devil is in the details, and Bas will share with us a lot of these details, but the vision is very clear. It is sector-specific intelligence networks built around such privacy-preserving AI. So, very quickly, to also introduce Tim's presentation immediately afterwards: we see this as a further evolution of very successful intelligence networks such as FIIN. FIIN follows a data submission and anonymization approach, and the participating companies get back consolidated reports focused on authenticity issues. Very successful in the UK, gradually spreading across Europe, and we will hear more from Tim in the next presentation. So, that's all from us. Tim, would you like to go next? Thanks, Manos. Give me one second. Okay, that's coming up. Okay, Manos? Yes, it's great. Go ahead. Thank you. Good morning, everyone. My name is Tim Hill. I'm a partner with Eversheds Sutherland; we're an international global law firm.
And I'm part of our environment, health and safety team, which covers food safety and food hygiene issues. We were involved in setting up FIIN in the UK — you've heard this mentioned by both Chris and Manos — that's the Food Industry Intelligence Network. And I'm going to give you a quick overview of how that works and, again bringing together the last two presentations, how this data sharing and in particular the anonymity of that data works in practice. Very briefly, background on FIIN. It was set up in around 2015. It came out of the horse meat scandal in the UK, where clearly products that shouldn't have been in there were getting into the supply chain. And it caused a major issue for quite a number of the food manufacturers and retailers. One of the major ones was one of our clients, and they approached us in conjunction with a number of other key industry players and asked for our help in setting up the industry-leading organization of FIIN, and working out how they would operate to keep everyone's data safe, but at the same time, as Manos just said, getting the maximum amount of data to make it as worthwhile as possible. So very briefly, FIIN is aimed at integrity, authenticity and traceability of food products. There are currently 63 members in the UK and these are pretty much all of the UK's leading food producers and retailers. So everybody from the major supermarket chains and the sort of main brands that you would see, through to the equally large and important brands behind the scenes who make a lot of own brand and other materials, either under their own name or for others. So that's pretty key: it has a very broad sort of membership base who will contribute their data across the system that we help provide. The idea is that it's a very reciprocal arrangement. So by being a member, you accept that you will provide your data. So it only works if everybody supplies their data, and equally, you will only continue to be a member if you are supplying that data, and that's the only way you will get access to the anonymized data of everything pulled together. There's a process for quarterly data submission. And that's then analyzed and reported out. And that includes things like testing of different products, any contamination that's been discovered, any mislabeling, any fraud. So it's a very broad range of issues and a very broad range of products. So pretty much every product you can think of, whether that's meat, arable, fruit, manufactured products, herbs and spices, honey — everything is covered. And it goes through this series of quite detailed data requests which each of the companies submit on a quarterly basis. If you fail to supply your data, you get a warning, then you get a second warning. If it's a third time, you're effectively asked to leave the membership. So it is taken very seriously because, as I said before, it only works if everybody contributes their data. And that's the only basis on which you'd be able to then see the combined data of everyone else. So unless you contribute, you can't play, effectively. The key thing is this has become a commercially successful endeavor. So there is a membership fee involved, but effectively it's not for profit, but it pays for itself. So there's clearly a benefit to all of the organizations taking part in this; that covers the admin costs for both ourselves and for Creme Global, who do the data holding. And so it works in that sense.
So it helps support the industry, it helps provide an industry-wide sort of safe haven for providing all of this information, and that assurance that all of the businesses can then take, not only about their own product, seeing where they are against sort of market trends, but also — to the last presentation — it's helping to work through predicting where things might arise, or early warning of problems elsewhere in the industry. One of the key ways in which it works to protect that data is this effectively double anonymity of all the data that's supplied. So how does that work in practice? Every company who joins is given secure access: they can nominate as many people as they need from their organization to be given an email, but they get a unique email and a unique password, so only they can submit the data on a quarterly basis. And that data is then submitted via a code name. So each company is given a code name. It's effectively a color plus a mode of transport. So for example, black bus, red car, yellow submarine. They become the anonymous names, so that anyone who sees the data will only see the anonymous name. It's not linked to the company in any way. So if the data is produced, it is exactly that: it is simply combined data with no traceability to any of the contributors who put into it. That is all then managed by the third party, Creme Global. They liaise through one of my colleagues. Again, we don't have access to the data itself. On a quarterly basis, we are provided with assurance that the anonymity has gone through, and we are provided with a list of any members who haven't made the submissions that they should that quarter. So not the actual data, but simply a trigger to say this company has not logged in and provided their data yet. And we then provide that back to FIIN, who follow up with the members by name to say: you haven't provided your data, it's really important we have it by this deadline, or effectively that's your first strike. And then they obviously follow up each and every quarter afterwards. Usually — we understand companies have issues from time to time, or people might leave — so missing one is not the end of the world, and most of the members get back on track. It's quite rare we have to get to the stage of sort of expelling members, but it has happened, and that's important to keep the integrity of the model, so that everybody who is benefiting from it is also contributing at the same time. That's the double anonymity side: we, if you like, organize the in and out of the data, but we don't see the data itself. Equally, the company that handles the data doesn't link it with any company names. They simply have it linked with this code name. So this double anonymity provides the reassurance to all of the members that they can supply their data, which clearly might have very sensitive information in it if they've discovered an issue or a problem. But it keeps that secure, and it just allows everybody to see the data itself and not where it's come from. The other key bit of why FIIN works and is so important is when you build in the layer of how the regulator views this. So in the UK we have the Food Standards Agency, the FSA, and they have a very positive view of FIIN and how it works. There's a strong degree of trust and confidence. They see the sort of big picture, so instead of thinking this would be a great way of accessing data on businesses that aren't being compliant, they've taken a much bigger, pragmatic view.
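[Editorial aside: the separation Tim describes — one party holding only the company-to-code-name mapping and submission status, another holding only code-named data — can be sketched in a few lines. Everything below is hypothetical and invented for illustration; it is not FIIN's or Creme Global's actual system.]

```python
# Illustrative sketch of "double anonymity": the facilitator never sees the data,
# the data host never sees real company names. All names and values are invented.

# Held only by the facilitator (e.g. the law firm): mapping and submission status.
code_names = {"Acme Foods Ltd": "yellow submarine", "Beta Dairy plc": "red car"}
submitted_this_quarter = {"yellow submarine": True, "red car": False}

# Held only by the data host: results keyed by code name, never by company name.
submissions = {
    "yellow submarine": [{"commodity": "herbs", "tests": 40, "failures": 1}],
}

# The facilitator can chase missing submissions without ever touching the data...
for company, code in code_names.items():
    if not submitted_this_quarter[code]:
        print(f"Reminder to {company}: quarterly data not yet submitted")

# ...while the data host aggregates results without knowing who is who.
total_tests = sum(row["tests"] for rows in submissions.values() for row in rows)
print("Combined tests reported this quarter:", total_tests)
```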
It's much more important for the industry as a whole to have this integrity of their supplies and food chain, so that it can benefit everybody, rather than singling somebody out simply because their data might show they've got a potential issue. So the building of that trust with the regulator is really important. They also respect the anonymity. They understand how it works. They know that if, for example, they came to FIIN, FIIN wouldn't be able to get that data, because they don't have access to it. And equally, if they came to us as lawyers, they couldn't get it from us. So they know it's not worth trying, and they don't want to try and get behind it, because the integrity of the system is much more important. They clearly have their normal powers of investigation, powers to require individual companies to supply data to them about that company. And equally, certainly in the UK, those companies have legal obligations to supply data to the regulator if and when there are certain issues. So you have to self-report a lot of these things anyway. And that's what the regulator uses to investigate the company, but they don't go anywhere near this collective data. And they respect the fact that that has real value in remaining anonymous and allowing everybody to share that data, safe in the knowledge that it will be used by the industry, but it can't be singled out against them in particular. So that separation of the data from the contributors to the system is really key. But what I can say is that it's now been in place for quite a number of years. It works very, very well. There is a quarterly meeting of the Board of FIIN, who will review that data and pick up on any trends. There are members' meetings — there's one in fact next week — where the wider members can get together to discuss broader themes and issues in the industry. So again, it provides more than just the data as well for those who want to contribute, and becomes a really important forum that everybody can take part in, provided they sort of play the game and provide their data. That then gives them access to this huge wealth of information that they can then take back to their own organizations and look at how they are sort of benchmarking themselves against the wider sector. Clearly, not every data entry will be relevant to all members, because they don't necessarily work with those food types or whatever. But that has never been an issue. And the fact that you get the overall benefit from where the data is coming from is the real draw and the attraction. And that's why we have such strong membership and such good feedback on the products and the output that is put forward in terms of the data made available. That's all I was going to say for now, but if people have questions arising from that about practicalities, please put them in the chat and I'll join the discussion later on. Okay, thank you very much. All right, let's continue with Bas. Are you ready to put up your slides? Yes, I am. Thank you for the introduction. Yes. I hope you can see it. Okay. So my name is Bas van Mokalden. I'm the head of data science at Wageningen Food Safety Research. And I will tell a little bit more about the technical foundation of federated learning. So what kind of AI do we do at WFSR? We make all sorts of very cool things, such as web crawlers to scan the internet for food risks, and we also analyze satellite data. We do a lot with genomics and mass spectrometry, which we analyze with AI, and microscopy. But all of this is quite sensitive data.
So that's why I will talk a bit more about the federated learning approach for this. We do this in the EFRA project, but notably we also coordinate the HOLiFOOD project; I saw its coordinator among the participants, so reach out to her if you have any more questions about that. We also participate in other related projects. I say we — we have quite a substantial team of 16 people, ranging from very senior to just out of college with brand-new knowledge, who develop these AI algorithms. So what kind of algorithms am I talking about? I'll give you a short example. This is, for example, a dashboard that we developed for emerging risks and early warning systems. We did this using European Media Monitoring, so we analyzed, for example, media and blogs from it. And by doing so, we detected 10 compounds that were not previously known to be used as stimulants in food supplements. So that's very powerful: just by gathering all this data and looking for trends, we can analyze and find new risks that are happening. So as Manos already mentioned, there is an increase of performance with an increase in the amount of data. And this is particularly true in recent years. So why is that? There are a lot of new algorithms developed that are very powerful with a lot of data. For example, you might have heard of deep learning. This is the example of images of Chihuahuas and muffins, where the task is to tell which one is a Chihuahua and which one is a muffin. Computers were only doing so well on this up to a certain point, until Alex Krizhevsky introduced AlexNet, which was a deep learning network, which really cut the error on the challenge at hand. And then everyone saw that deep learning had a lot of potential to become a very powerful tool. But deep learning is very data-hungry, so we want to collect as much data as possible. One way to do it, as Tim just discussed, is to bring the data to the algorithm. So you collect the data, you anonymize it somehow, do some other tricks to make sure that sensitive data cannot be traced back to the owner or to other sensitive parts of the data, and then train the algorithm. But that's not always possible and it also requires a lot of effort. So what you can also do is bring the algorithm to the data. So the data stays in place, the algorithm goes to the data and updates over it. What that means — I will share this video with a short introduction to that. When it comes to the food we consume every day, we want it to be safe. We all benefit from detecting food safety problems as early as possible. Food safety prediction models can do that. But these prediction models need data from all stakeholders in the food supply chain, such as authorities, farmers and food companies. Today this data is not easily shared, because data is considered sensitive and data protection should comply with regulations like the GDPR privacy regulation. The food safety data train concept solves this problem. It is designed to make data from different stakeholders accessible with maximum data protection. The data of each stakeholder is linked to a secure data station, like a railway station. These data stations are connected via a secured infrastructure, like a railway network. And like trains, validated prediction models travel towards the data stations. Here the data can be used. The prediction model can train itself and become smarter, while the data never leaves the data station.
Data station owners can control which prediction models can access the data station and for what purpose. In return, the data station owners can benefit from smart food safety prediction models, or they might receive a compensation. In this way, data becomes available for food safety control purposes with maximum data protection, ultimately improving food safety for us all. Learn more about our work? Visit www.wur.eu. Okay, so problem solved, you might say. Well, not really. There are a lot of technical details that still have to be addressed. I will go over some of the core challenges with federated learning. One of them is that it's expensive in communication, because you have to send models — which in the deep learning sense become larger and larger — across networks, and depending on what kind of internet connection you have, and whether connections can be down, there are a lot of things to consider there. There can be system heterogeneity. For example, you might have a supercomputer with a lot of GPUs, or a cluster with a lot of GPUs, to really train a very sophisticated model. But if you then want to train on a second device that only has a CPU, or is maybe just a cell phone or something in the field, then that limits your abilities to those of the least powerful machine. And that's something that can be tricky. So that's something that we also have to think about, how we can solve that. And there are ways to solve this, which are, for example, outlined in the paper that I cite here. Statistical heterogeneity — I will go into that in a bit more detail in the next slide. And privacy concerns — I will also come back to that. So statistical heterogeneity: let's say that you have a distribution at center one, or as Manos called it, the first poultry company, if I remember correctly. And then at the second poultry company, you have the same distribution, so your model trains on similar distributions. And that's fine, because then it probably will end up with the same predictions in the end. But what if the third distribution of your data is skewed, or is just a totally different distribution? Well, there are ways to tackle this. Some examples, but not all of them, include shuffling the order in which your algorithm updates over the distributions, so that you eliminate a bias towards one or the other distribution. You can give weight to the distributions. For example, if company one is by far the largest and probably also has the most data, then maybe you should weigh that in your algorithm. And there are other fancy algorithms for multi-task learning or meta-learning that I won't go into in detail. But those are also ways that we will explore in this project. Another concern was privacy concerns. So especially if you go to deep learning, you have a lot of weights in your algorithm that you can update. And in theory, by tracing back those weights, you could identify privacy-sensitive data again. So we will try to see if we can do that, and therefore investigate how robust the privacy conservation is, using explainable AI. That's a totally different field, which is very interesting and very hot and upcoming. Please look at papers about it if you're interested in that as well. To finish off, there are different flavors of federated AI learning. So you can learn personalized models on each device and not learn from peers — so that's not really federated. You can combine everything into one algorithm that's updating everywhere, so there is one global algorithm updating everywhere.
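[Editorial aside: the "one global algorithm updating everywhere" flavor, combined with the weighting-by-data-volume idea Bas mentions, is essentially federated averaging. The sketch below is a minimal, assumed illustration with an invented linear model and made-up company data; real deployments such as the data-train concept add secure transport, access control and far richer models.]

```python
# Minimal sketch of federated averaging: model parameters travel, data stays put.
# Companies, data and the simple linear model are invented for illustration.
import numpy as np

def local_update(global_weights, X, y, lr=0.01, epochs=50):
    """Train locally on one site's data; only the updated weights leave the site."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
companies = {                              # each site keeps its own (X, y) locally
    "poultry_co_A": (rng.normal(size=(200, 3)), rng.normal(size=200)),
    "poultry_co_B": (rng.normal(size=(50, 3)),  rng.normal(size=50)),
}

global_w = np.zeros(3)
for _round in range(5):                    # a few federated rounds
    local_weights, sizes = [], []
    for X, y in companies.values():
        local_weights.append(local_update(global_w, X, y))
        sizes.append(len(y))
    # Weight each site's update by how much data it holds (statistical heterogeneity).
    global_w = np.average(local_weights, axis=0, weights=sizes)

print("Global model after 5 rounds:", global_w)
```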
Or you can learn personalized models for each device and learn from peers. And this is done quite often in mobile applications as well. So what I want to leave you with: federated learning is a very interesting but challenging way to combine data and to conserve privacy. We will try to investigate solutions to the challenges that I just posed. We will test the robustness of privacy conservation with explainable AI. And we will look at all the design choices that have to be made. Okay. Thank you. Thank you, Bas. A lot of detail on the things we talked about. All right. Let me now share my screen for you. Thank you. Okay. You have to go to the first slide though. All right. Yeah, I have to go to the first slide. Do you see the full slides or do you see the presenter view? Now we see everything. Yeah, maybe you can go to the next slide already. My name is Suley, I'm a domain expert with the Technical University of Delft, and we are part of a Horizon-funded project called TITAN, on digital technologies for transparency in the food supply chain. Next slide, please. Today I will be talking about privacy considerations for consumer-level applications. And I first want to explain a little bit about the way we work at my faculty. We tend to look at socio-technical systems, so we look at the ecosystem with all the stakeholders, and we try to understand what their role is, and how technology can either facilitate, support or replace their role. Next slide, please. This typically informs us on the requirements of IT architectures or IT systems, because we need to understand what motivations or obstacles are perceived by the stakeholders in adopting technologies. And we also learn which considerations they need to make, so which trade-offs they put into their equations when deciding about using technologies. And one of the biggest obstacles that we hear of is interoperability between IT systems, so this is why we also always look at the language between IT systems, trying to define ontologies and semantic models in order for different IT systems to be able to communicate with each other. Next slide, please. And when talking about transparency, it's much more than just bouncing around raw data. And I think it's very important to understand that transparency also has a lot to do with making data work: data needs to be readable and useful for all the different stakeholders in the value chain. Next slide, please. So one of the advantages of the food supply chain is that we are already dealing with a mandatory transparency situation, because of the General Food Law regulation. There's already a rapid alert system for food and feed in place, so in cases of health hazards or issues leading to food recalls, there's already a system in place that connects all the stakeholders in the value chain to retrieve the batches that have been contaminated. The only problem is that today it is mostly a paper reality. There's a lot of bureaucracy involved, and because it's very hard to trace the actual batches, you are bound to mass-communicate the issue, and that can lead to additional damage. So, starting from this mandatory transparency status, the potential to digitize that reality delivers the possibility of creating more speed and precision in recovering batches that have been contaminated, maybe before these batches hit the shelves. Also, monitoring and anticipation then, you know, allow you to act more swiftly.
For instance, by adding data from sensors in the value chain, especially in cold chain situations, that can even prevent contamination from getting to the consumer, and thus limit reputational damage. Next slide, please. We do, however, need to take into the equation what considerations we need to make. Next slide, please. Because even though there's a lot of potential in digitizing the current situation, there are also a lot of concerns. The confidentiality concerns of companies have already been named by the previous speakers. Privacy for consumers is also an issue, because consumers, even if they're not really aware of it, are protected by the GDPR and thus have to give their consent to be approached and to share data. Another consideration is how long you will make this data available, because obviously, once you need to have that availability, you also need to store the data, and that leaves an environmental footprint. So this is something that you need to consider when choosing the IT architecture. Another consideration is accessibility, and that ties back into the confidentiality and privacy issue, because who gets access to what kind of data — and that you could mitigate by introducing rule-based access rights. And that also allows you to tie it back into the availability of the data, because even when you don't know which stakeholder will need to access the data in the future, with rule-based access you can define who in the future would get access. Again, interoperability is one of the biggest issues in accessibility of data, so providing APIs, for instance, to make computer systems talk to each other does increase the effectiveness of the IT architecture. Another consideration to take into the equation is standardization and harmonization; especially when we're dealing with imports from outside of Europe, it serves a purpose to define certain digital standards when we're talking about food products. And last but not least, readability is very important, because in the value chain all stakeholders have different angles. So they might use the same data, but they would need a translation, basically, to make it usable for their purpose. So the user interface is a very, very important thing that can increase the incentive for sharing data and providing access within the value chain. Next slide, please. So when we're considering consumers: in many cases retail organizations already have loyalty programs in place. So a potential way to increase the speed and accuracy of food recalls is connecting to consumers about contaminated batches through loyalty programs. Because the consumers will need to give their permission to be reached out to in those cases, due to the GDPR regulation, you would need to consider why a consumer would give you such permission. And one of the ways of doing so is to enrich the loyalty program with other applications that might facilitate the consumer's life, and in doing so maybe even use the same data that is available in the value chain — the data that is used to identify health risks, or even prove authenticity and identify food fraud. So some of the applications might be connected nutritional programs, personalized programs connected to promotions; even the sustainability aspect could play a role there. The front-of-pack labelling space is very limited. So, if you want to communicate more information on the product, whether it's health or social and environmental sustainability, you could communicate that again through the loyalty program.
As such, food safety is an element that becomes available for consumers and is not the only thing that is being communicated. And therefore, as part of the total package, consumers might be more enticed to give their approval to be reached out to. And I think that's the last slide, Manos. Yes. Thank you so much for your attention. That's great. Okay, I think that more or less concludes our brief statements at the beginning. Let me give the floor back to Chris, and let's invite all our panelists as well to open their cameras. Let me share. I think we have 45 minutes for the discussion panel. So, Chris. Thank you. Thank you very much, Manos. And thank you to all of the speakers. I mean, that was phenomenal, really interesting information, and from many different aspects, many different dynamics about sharing information, some of the problems, some of the issues, some of the opportunities. What I'm going to do is take the opportunity now to ask the panel some questions that I would really like to ask them, but also please add your questions into the chat box. There are some starting to appear now, and I will also moderate those questions as well. What I would like to do now is — we have a panelist who didn't give a presentation. I would like to introduce Michael Bell, who is the chief executive of the Northern Ireland Food and Drink Association, and maybe, Michael, just a couple of lines of introduction from yourself, just about your particularly important role in the food industry. I'll come back to you with hopefully not too challenging a question. Okay, thank you very much, Chris, and thank you, colleagues, for very interesting presentations. My role is to represent over 120 companies and help them with government, with strategy and with inter-company cooperation. The association is actually based on Michael Porter's clustering theory, for those who are interested in sort of the principles behind it, and that at its core states that companies gain competitive advantage when they work with each other. So that's a sort of core principle; co-opetition I suspect is the more modern word for it. Collectively the group of companies produces enough food to feed roughly the equivalent of 10 million people. So this is a small but significant food industry on the world map. Many thanks, Michael. I mean, that's a very good introduction. You are chair of an organization with many, many members, well over 100 members. And, you know, we've talked a lot about sharing sensitive information. I'm just interested, from your own role as a chief executive, you must get access to a lot of very, very sensitive information. And you have your own mechanisms in terms of how to share that information. And for instance, I know we had a lot of discussions during the COVID pandemic about the safety of workers and trying to get procedures in place. That to me is one example of the crucial role of informal data sharing. So maybe you could just share with us, you know, other examples — without necessarily naming anyone — of just how you manage that. Okay. Yeah, happy to. I think some basics: data sharing is ultimately done by the managing director of the business authorizing the data to move, and managing directors of businesses have a statutory duty to act in the best interests of the business. So one of the things that I think, with no disrespect to academia, it fails to understand is that directors of businesses are constrained by law.
And so it's not just that they're cynical or reticent. They're actually constrained. So the personal relationship with the institution, particularly in the early days, is very, very critical. I think trade associations, who spend their working time communicating backwards and forwards with managing directors and dealing with their problems, form a very useful interface between academia and the industry, and I think, Chris, you would agree that during the formation of Food Fortress, the trade bodies were a useful conduit to get the message out and to get things put together. I was impressed with the use of the word heterogeneity today, because that's a word I wouldn't use with managing directors. It goes in the same camp as organoleptic or multifactorial. This is all about the word integrity, and every managing director will buy into integrity. Nobody will buy any of our products if we do not continue to build our integrity and present our integrity. And what we're talking about today is mission critical to that. I'd make a slight aside if I may, Chris, just on that. The recent BBC Panorama documentary just this week, which was talking about ultra-processed food and particularly aspartame, took as its core data a non-peer-reviewed piece of work. And I think that's quite dangerous territory for us to get into, where academia is presenting things to the media that have not had academic rigor applied. And I don't think that helps either academia or the industry going forward. I think we all collectively need to build integrity in all facets of this process. I mean, many thanks, Michael. I mean, you've actually brought up a couple of different really important areas that I hadn't thought about. You know, there are legal responsibilities within a business, particularly which reside with the chief executive and the board as well. There are also the legal obligations to share information on food safety with regulators as well; you know, that's not something that is discretionary. And I think, you know, you're now actually talking about integrity of data, which is I think fundamental. I think we will explore that a little bit more. I think what I would like to do, maybe Tim, is bring you in here now, because you gave a very good, you know, case study about FIIN, and you know I'm a huge champion of FIIN as well. And I do remember in a lot of the early discussions about FIIN, there were massive, massive worries about sharing information, huge amounts of information. So I think there are two aspects that might be worth sharing with people, because again my recollection was: what is the potential for a regulator, or a newspaper, to say under freedom of information, you have to share that information with us — which breaks that whole network of trust? That's the first thing, Tim, maybe we would like to explore. And then I think the second one is, you know, what about cybersecurity? So what impact do cybersecurity issues or potential breaches of data have, and how do people trust that the way that you collect and manage data is cyber-secure? So maybe freedom of information one and cyber number two. Yes, thanks Chris. I think the freedom of information one is really important. I think there's a technical answer, that you could argue that FIIN is not a public organization and not subject to strict freedom of information in the same way as, say, local authorities or regulators might be.
But at the same time, there are increasing ways and means of people trying to get data, and equally, asking for data and being told you can't have it is not a great answer either, because the perception is you're trying to hide something, whether you are or you're not. The reality is — again, I'd go back to the double anonymity bit — even if people got hold of the data, it doesn't mean anything to them, because without that code, if you like, of which name links to which code name in the system, it is just data. For all sorts of reasons, FIIN doesn't make its data publicly available. A large amount of that is because it's so complicated it would be very easy to misunderstand it. So it's not that there's anything being hidden in the data; it's more that it needs to be interpreted properly and understood by those with sufficient knowledge. So I think freedom of information should not be an issue, because the protections around the double anonymity make that worthless anyway. But also, data, like everything else — you've got to understand what you're dealing with; you can't just take a figure and extrapolate from it. And I think on the cybersecurity point, in a way it's probably similar. It's hosted by a third party with the best that they can then offer in terms of cybersecurity; that is absolutely key to them. And fundamentally, that is their entire business gone if they have a major cyber breach, so that's fundamental. But it's the same issue, I think: even if someone were able to get hold of the data, it's exactly that — it's just data — without being able to link it back to a particular organization. So that, hopefully, is where the protections also then lend themselves to the benefits: the data has a value for the members because they can compare industry data against their own, which they obviously have access to. That ability to almost benchmark and see what others are doing, hopefully get a degree of early warning, and then act on that accordingly. And thank you very much for that. I mean, that was really very, very helpful information on both of those; they were quite tricky questions that I put to you. What I'd like to do now, maybe, Bas, is to move on to you, because I was intrigued by your presentation; the amount of work that's done in Wageningen Food Safety Research is phenomenal, and many congratulations on that. And I think the example that you gave about the muffins versus the Chihuahuas — I've never seen a better example actually in terms of how machine learning can really operate. I guess the one thing that surprised me was, over more than a decade, has there been really that much progress? You know, to me it moved from 27% inaccuracy to 16%. So as an analytical chemist, having a 16% kind of false error rate is still pretty large. So I mean, what's the likelihood of more progress in terms of getting that down closer to zero? Yeah, thanks for the question. It's a little bit more complicated, as you might have imagined, than just Chihuahuas and muffins. So the thing with this data set is there are over 15 million labeled, higher-resolution images, so it's not just these 10 muffins and Chihuahuas; those are just an example that I took. And there are over 22,000 categories, and there's also ambiguity in the categories. So for example you could have a picture of a dog eating a cherry, and then the picture is classified as a dog, but it can also be classified as a cherry. So it's not truly incorrect, but it's incorrect by the label.
I'd like to move on now maybe to you, Bas, because I was intrigued by your presentation; the amount of work that's done at Wageningen Food Safety Research is phenomenal, and many congratulations on that. And the example that you gave about the muffins versus the Chihuahuas: I've never seen a better example of how machine learning really operates. I guess the one thing that surprised me was whether, over more than a decade, there has really been that much progress: to me it moved from 27% inaccuracy to 16%, and as an analytical chemist, having a 16% error rate is still pretty large. So what's the likelihood of more progress in terms of getting that down closer to zero? Yeah, thanks for the question. It's a little bit more complicated, as you might have imagined, than just Chihuahuas and muffins. The thing with this data set is that there are over 15 million labeled high-resolution images, so it's not just these ten muffins and Chihuahuas; those are just an example that I took. There are over 22,000 categories, and there's also ambiguity in the categories. For example, you could have a picture of a dog eating a cherry, and the picture is classified as a dog, but it could also be classified as a cherry, so it's not truly incorrect, but it's incorrect by the label. And the second thing is that it's a lot of labels, 15 million images, so that's prone to errors. Human labeling errors: I think Aliou, who is also joining the call, has a better grasp of the numbers, but I think it's at least 2 to 3, 4 to 5 percent in every data set. So even if you approach zero, it's a fake figure, because there's simply error in the labels in these big data sets. But to further increase your enthusiasm: that was in 2012, and there have been significant breakthroughs afterwards, so it keeps on going down, and it has really reached the level of human observer error. Basically, this data set is now more or less solved, and we've moved on in computer vision to more difficult data sets and more difficult tasks, which require totally different approaches. You've all heard of the generative models that are in place; I think ChatGPT is the one most in the news now, but there are all sorts of very cool and very promising techniques, and there's a debate on how effective they are but also on how we should handle them. And I think the privacy-preservation work we're doing here is a very important aspect of that, also seeing if we can sort of break these systems to extract the data that we don't want to be extractable. That's also a very interesting and very important output of this project, in my opinion. Thanks, Bas, that's excellent. I want to ask you a second question, because one of the obstacles you talked about was the expense, the cost, of this federated approach. I'm an Ulster Scot, so we're very careful with our money, but you're Dutch, and you're even more careful with your money than we are. So how big a hurdle is that expense, and will some people just sit back and go, not for me now, I'll wait two, four, five years until the cost comes down, and then I will engage? Yeah, that's a very good question. To be fair, with a lot of the big pushes in technology, and also in infrastructure, not just technical infrastructure, a lot of input is needed from governments and agencies, and of course from the EU as one of the funding entities for researchers. As an example, I originally come from healthcare, and there's a Dutch initiative that just got 70 million euros to really develop such systems. So you're right, it's very expensive, but I think it's also very important. And in the end, the expenses are more in the development than in the maintenance; once something is up and running, it will be a lot cheaper, I think. Thanks, that's excellent, Bas.
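Bas's earlier point that label errors put a floor under any reported accuracy can be illustrated with a toy simulation; the 5% noise rate below is an arbitrary assumption for illustration, not a figure from any real dataset. Even a classifier that always predicts the true class still appears wrong on roughly the fraction of labels that are corrupted:

```python
import random

random.seed(0)

N_EXAMPLES = 100_000
N_CLASSES = 10
LABEL_NOISE = 0.05   # assume 5% of labels are wrong; purely illustrative

true_classes = [random.randrange(N_CLASSES) for _ in range(N_EXAMPLES)]

# Build a "labelled dataset" in which a fraction of labels are corrupted at random.
labels = [
    t if random.random() > LABEL_NOISE else random.randrange(N_CLASSES)
    for t in true_classes
]

# A perfect classifier predicts the true class every time...
predictions = true_classes

# ...yet its measured error against the noisy labels is not zero.
measured_error = sum(p != l for p, l in zip(predictions, labels)) / N_EXAMPLES
print(f"measured error of a perfect classifier: {measured_error:.1%}")
# roughly LABEL_NOISE * (N_CLASSES - 1) / N_CLASSES, i.e. about 4.5% here
```

This is why, as Bas says, chasing a headline error rate of exactly zero on a large human-labelled benchmark is chasing a fake figure.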
Perhaps I'll move on to Suley now. Again, your presentation was really excellent, and you covered so many things that I hadn't thought of before. I'm going to follow up on just a couple of them; I've got a very long list I could ask you. You talked a lot about transparency, and we have talked a lot about transparency over the past number of years. Transparency is not just about food safety; as you said, it's about sustainability, it's about ethics and so forth. So I'm really interested to hear from you how important consumers really think transparency is, because we're in the middle of a cost-of-living crisis, food inflation is really high, and what I'm picking up in the UK is that the pattern of buying food is completely different now. I think some of the things that we thought were important two years ago are less important now. So that's my first question for you, and I'll probably limit myself to one more after that. Okay, well, thank you. Yeah, the inflation doesn't help, but at the same time, also working with industry, I hear back that consumers are actually asking for life cycle analysis and for environmental footprints. I suppose the propaganda machine of the European Commission does contribute to that, and regulatory frameworks like the Green Claims Directive and the deforestation-free products regulation absolutely help increase the awareness, and the whole climate activism movement is not new either. So the awareness of consumers of their own contribution to that dynamic is increasing and increasing. To be honest, I've been to many barbecues where the cousin of a colleague would all of a sudden start talking about sustainability, someone who doesn't have a very evident affinity with the subject, so I think more and more consumers are being sensitized to the subject. And also a lot of producers are communicating about it, and a lot of retailers are communicating actively in the shop, so the sensitivity is there, and once you start offering more and more, consumers start learning more and more, so their level of understanding and demand also rises accordingly, I think. Yeah, many thanks for that. My next question was around the loyalty schemes that you talked about. Now, I've known for many years that the data from loyalty schemes is unbelievably important and many people would like to get access to it and exploit it. Even the owners of the loyalty schemes really don't want to share anything in terms of the information that they collect; it's a massive resource to them. So I guess there are two things there: in terms of sharing data from loyalty schemes, is there a real willingness to do that from the owners of the data themselves? And even more importantly, for us consumers, do we realize the amount of information that companies are collecting through loyalty schemes and exploiting, actually without our knowledge? And will there potentially be a time where people will go, well, actually, I'm going to leave this scheme, because my data was passed on to somebody else I don't know about, and guess what, it's on the BBC news that there's been a big leak of that information? So I'm just interested in consumer concerns and perceptions, and also the industry perspective. I think consumers in that sense might not be that aware of the commercial aspects of loyalty schemes; they just see the advantages of any promotions they get. A typical example is giving away your biometric data on irises just to get certain advantages, which to me is a very good example of giving highly private information away for commercial purposes, and in loyalty schemes it's no different. I think consumers see mostly benefits. On the other hand, in the case of communicating and increasing the efficiency of food recalls and issues with health and safety, you don't necessarily need the information from the loyalty schemes; it suffices to trace back the batches of sold contaminated products and reach out to the people who actually bought the product directly.
So that's something that can be managed by the retail organizations themselves, and the data doesn't have to be touched by any external organization as such; it's just about increasing the efficiency and effectiveness of the communication line that's already basically described in general food law. In terms of sharing data, I think in a way you can draw a parallel with Finn there, where you would need to guarantee anonymity of the data. For loyalty schemes, I think it might also be useful for retail organizations to start sharing some of the information that sits in loyalty programs, on consumer behavior, on purchase behavior, on behavior within the supermarket, so as to comply with some of the requirements of those new frameworks that are being introduced under the European Green Deal and Farm to Fork. One very simple example might be reuse systems when wanting to decrease single-use packaging. That richness of data is, I agree, very, very valuable, and we might want to look into how to make it usable for the industry as a whole, rather than just for the retail organization. Excellent, many thanks for that. Manus, you've been sitting very quietly, probably hoping that I wasn't going to ask you anything, so seriously bad news for you now, because I'm going to ask you about the serious chicken company. I'm really looking forward to going into a supermarket and buying that particular brand; you promoted it very nicely. Why would different chicken companies really want to share that information? Surely is there not enough data you can get from an individual company to build models that will feed back to them as well? That's the first part of the question. And then secondly, chicken companies work differently: they've got different systems, the feeds will be different, the production systems will be different, perhaps the breed of bird will be different, the climate is different. Are you trying to compare apples and pears and coming up with some sort of hybrid fruit that nobody wants? That's a great question, and it cuts to the chase 100%. All right, so let me start first with why we might want to encourage this type of, I wouldn't say data sharing, because you don't share the data as such, but that's the idea: you share only an AI model, for a very particular purpose that is known to all entities beforehand. So, by design, the confidentiality and privacy concerns are not that deep in this case, because everybody knows what we're trying to do. There are a lot of things I could say. First of all, let me allude back a bit to what Bas already mentioned: the more diverse examples you have, the more powerful your model. I understand, Chris, that in many cases the examples the model will encounter might not be homogeneous, and again, as Bas said, heterogeneity is a very important topic. To answer your question directly, what does that mean? It means that most likely the relevant information is not to be found inside our own processes, in the sense of the processes of one particular organization, or in what they have encountered in the past. The relevant information for something novel, for something emerging, might be hidden in the data of adjacent industries. That's the idea.
And of course, let's also keep in mind that the complexity of the patterns these models can find is so deep, compared to what a human could do with the same data, that what you, Chris, or I, or anyone here might perceive as apples and oranges might actually be the very interesting, deep underlying pattern that somehow permeates across the industry. I don't know if that satisfies you, Chris, but that's the idea: if we really want to look into what we don't know, it's very important to have as many diverse examples as possible. Thanks, Manus, it's a very good answer, and it brings me on to the second question for you. I'm going to use another term that Michael Bell will probably add to his list of words that he absolutely hates, and it's metadata. So you talk about columns, I talk about metadata, and the columns are basically additional pieces of information that you can add to help build the model. What I'm particularly interested in is collecting metadata not just about where the sample came from and what was tested; I want to know when it was sampled, what the weather was like at the time the sampling happened, all of those different things. So how important are, for you, columns, and for me, metadata, in really building these very robust models? Great question again, Chris. The idea is that, in a sense, factoring in enough of these columns, enough of this data, enough of these contributing factors to what we want to predict, is paramount. But it is paramount in a particular sense. It's not that if we do not account for enough contributing factors we're not doing something interesting, because, as I said in my previous answer, the types of patterns a well-trained model can find are extremely interesting. And because we already have some of this data, with some of these factors, we tend to believe that whatever is hidden there we more or less already know about: we have our processes, we know how to deal with this type of cross-contamination, these Salmonella pathways. To allude back to what I said, okay, you do, but you don't yet have a data-driven approach to it, even for factors that you believe you have accounted for. If you have enough data, we can create models that find very interesting patterns. So that's an answer to the question: if we don't have access to all the types of metadata that we would like, can we still construct an interesting model? Yes, we can; we can construct an unexpectedly interesting model. Now, to go back to what you said, Chris, before talking about federated learning, before talking about confidentiality and privacy, there is a fundamental step to be taken, which is increasing the digitization of the domain. For example, there is a lot of record keeping involved in a traditional food safety management system. Many industries, and I know, Chris, this happens a lot, still keep these records in physical copies, and they record them in a very interesting way: the timestamp is there, where the record was taken is there, many of them have flow diagrams of their operations so you know where a particular record came from, but it's all on paper. If it were digitized, through one of the food safety management systems that already exist, the information would be there to be exploited in AI models. So one message here is: increase the digitization. The other message is that then come confidentiality and integrity, and then there is a lot that can happen with AI models.
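The core of the federated set-up Manus describes is that each company fits a model on its own records, including whatever extra columns or metadata it holds, and only the model parameters leave the premises; a coordinator averages them and sends the result back. A heavily simplified sketch of that federated-averaging idea follows, assuming three hypothetical companies that each hold a small table of numeric features (for example storage temperature, transport days, a supplier risk score) and a contamination outcome; real deployments add secure aggregation, differential privacy and many more training rounds.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_local_data(n):
    # Hypothetical per-company records: columns might be storage temperature,
    # transport days, supplier risk score; y = 1 means a positive contamination test.
    X = rng.normal(size=(n, 3))
    y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(float)
    return X, y

companies = [make_local_data(n) for n in (120, 80, 200)]   # data never leaves each site

def local_update(w, X, y, lr=0.1, epochs=20):
    """Train locally and return only the updated weights, never the data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # logistic regression prediction
        w = w - lr * X.T @ (p - y) / len(y)        # gradient step on local data
    return w

w_global = np.zeros(3)
for _ in range(10):                                # federated rounds
    local_weights = [local_update(w_global, X, y) for X, y in companies]
    sizes = np.array([len(y) for _, y in companies], dtype=float)
    # The coordinator sees only weight vectors; FedAvg is their size-weighted mean.
    w_global = np.average(local_weights, axis=0, weights=sizes)

print("global model weights:", np.round(w_global, 2))
```

The metadata Chris asks about simply becomes extra columns in each company's local feature table; the more informative columns each site can contribute, the richer the patterns the shared model can pick up.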
Thanks very much, Manus, again a question very well handled. I'm basically going to ask Michael and Tim the same question, and Tim, you've got some early warning about what's coming your way, because it's going to be quite similar. But I had promised Michael Bell we would come on to this; I promised to buy him a nice lunch, but in a cheap restaurant. And during that lunch, Michael, I was going to ask you a question, but I'm actually going to ask it now, because I know, more seriously, you wanted to join this summit because you wanted to learn more about the information and the opportunities around artificial intelligence. If you were going to go back to the board of NIFDA and say, listen, we know about the Food Fortress, it works really well, and I've just heard a lot more about federated approaches to collecting and sharing information, what would be your recommendations to that board? Is it to sit back and just watch this space, or do we have another chance, Northern Ireland PLC, to come together and start to share information, which will be beneficial to the integrity of our supply chains and will also give us a competitive advantage? That's a great question, Chris. The Food Fortress, in my perception, is increased resilience to threat with no massive on-cost, and that's a very attractive pitch to go to an MD with, I think. What I'm really dealing with is the collection of risk data, and then using risk data to actually increase resilience and integrity. And to go back to metadata for a minute, which is data about data: I would talk to them about data about data, descriptive, administrative and structural, because that they'll get, rather than the conversation we've just had, which is explaining what the heck metadata is. But seriously, I don't think this is optional. I think we need to stand back for a second and look at the rising tide of threats and challenges to a food system which is incredibly complex, very difficult to approach, and which has fundamental conflicts within its various member parts. In the UK, retailers have had to have a government referee put in place to manage proven criminal behavior towards the supply base; that's not a great set of relationships historically. We've just had a recent major case of fraud, where there was criminal activity which wasn't picked up by the existing systems. So I think we have to continually earn consumers' trust. The analogy I give most people to help them grasp it is when you buy a carton of drink and you punch the straw through the top: how many of us honestly have given any thought whatsoever to what's inside the box before we suck? We have total trust in the product. We've got to keep earning that, because as soon as we have another dioxin incident, we will fundamentally damage the overall trust in our systems. But also, because we live in what I call the TripAdvisor world, where if 100 people say something on the internet then it's true, I think the magnitude of distortion around incidents is becoming bigger as well. Many thanks for that, Michael. Just listening to you now, probably what we need to do for Northern Ireland PLC is to have a summit on the applications of AI for our food industry, and just looking at my screen at the moment, I think we've got some fantastic speakers who could think about joining that. So that's something we will discuss at the lunch. Okay.
Tim, for you: you and your company represent a lot of very large, serious food businesses right across the UK, and I think you've really sold Finn to a lot of us; it works very well. But with your corporate hat on, if you were talking to the technical directors, board members and CEOs of a lot of the food industry about AI, what other opportunities do you see for them now to get ahead of the game, keep ahead of the risks, keep ahead of the regulators, and maybe build competitive advantage into the UK food industry? Good question. Actually, in a way, it's a different way of picking up the question, I think, from Karen Constable in the chat about whether the Finn process would have revealed the country-of-origin issue with pork, which I think is what Michael was referring to as well, if the companies had been members. I think there are two slightly different parts to that. Yes, the more members you have, the better your data, and that's to Manus's point about the size of your spreadsheet of data. But I think, as you say, Chris, what I would be pushing for is to say, look, Finn has worked incredibly well, and I think it has got a lot of confidence across the industry and with the regulator. But things move on; we're eight years on, and one of the, I wouldn't say limitations, but one of the factual realities of Finn is that it's on a bit of a delay, because it's quarterly data that needs time to be assessed, reviewed and compared. That can be incredibly useful for medium- and longer-term trends; you can spot things that are happening, and indeed it has spotted many things that start small and grow. But what it's less good at are those things that happen really quickly, and like every bit of regulation, whether it's food, safety or environmental, as soon as you have regulations, guidance, requirements and regulators policing them, you find people who deliberately look for the loopholes, for the weak spots and the ways around them. So for me, the advantage, and certainly I've learned a lot from the session today and from the preparation sessions we've had about how AI is developing, is that predictive ability. That's the next step really, because by providing the data now you are protecting yourselves as best you can, but things are going to move a lot faster in many different ways, so it's about having an insight into the next stage of how you can use that data as more of a predictive tool. Hopefully it will respond faster. Equally, though, I think there is another aspect to it, because in my experience of a lot of these sorts of issues, and I'm not for a minute commenting on that particular issue because I don't have the knowledge of it, in general terms quite a lot of issues in our world fall down not because the system itself is wrong but because it's just not applied very well. Most companies will have, on paper, fantastic systems for quality checks, assurance checks, audits of their supply chains and where everything comes from. But it's rare that it's the system that's at fault; it's that there's a weak link somewhere in how it's enforced. Somebody misses something or is persuaded by some sort of criminal body to deliberately look the other way. So the weaknesses are generally in the way in which those checks and balances are put into place, not the checks and balances themselves.
Hopefully that would be another means whereby AI would assist in getting past all of that, so you're not just relying on a single point of contact or a single check; you're bringing in that collective ability to say, well, actually, look, this is an area we really all should be looking at a lot more closely now. It's not just crossing your fingers and thinking, well, I haven't had a problem yet, therefore my systems are all fine. We talk about this a lot more in the safety field that I deal with, but commentators talk about watermelons: everything looks nice and green on the outside, but the minute you cut in, it's a bit red and it's not right. That's one of the challenges with traditional audits and data: when businesses are so busy on so many different things, lots of green lights suggest it's okay, so you look at the problem areas, but actually, is it really okay? Are you doing those deep dives and getting underneath the skin of these issues, and checking that, although your systems may not have failed, they really are working 100% correctly all of the time? Terrific. Many, many thanks, Tim and Michael, for taking, I think, a really holistic industry perspective there. And Tim, you also give me a nice segue into the question that Karen Constable has posed, which again is a very good one. This is for Manus and Bas, either or both of you. First, a bit of context: in the UK, very recently we had another scandal about meat authenticity. The scandal is really quite complex, but to simplify it, a company claiming to sell British beef was selling beef from a different country because of the price differential. One of the calls for action after that said that what we need is a country-wide, that's the UK, check on mass balance: the amount of meat produced in the UK versus the amount of meat sold in the UK. I've joined a couple of different working groups to talk about this, and the feedback is, no, it's just too complicated to think about doing that, which to me was quite disappointing. But from both of you, and you're real experts in collecting all these different sorts of information, do you think something like that would be achievable, just as a model, not looking at something company specific but country specific? I'll ask you, Bas, first, and then Manus, you can come in after. To me, this sounds like either it's not authentically British or, if it was mislabeled, then it's basically food fraud. We have developed some models that can detect food fraud to a certain degree of precision and can also distinguish different types of food fraud; one of them is actually country of origin. So it is possible to do this with such techniques. And I think it's also important to keep in touch with the enforcement agencies to better understand how these cases are detected now and how such an algorithm would be useful to detect food fraud. Does that answer your question, Chris? Yes, thank you very much for that, and maybe Manus, what's your perspective? Great question, Chris, again; you're always cutting to the chase. All right, so country-wide infrastructure, country-wide data sharing and all that. The first thing a computer scientist will say is that this seems like the type of structure, or type of endeavor, that has single points of failure: everybody needs to contribute, otherwise the scheme is not going to work.
So as a first answer, I would say that I find it difficult to see happening in practice. Now, having said that, there's another challenge that I think is very interesting; Tim mentioned it before. Let's say you create this type of intelligence network at country scale. How do you make sure that the right organizations, the possible culprits, so to speak, actually participate in such a network? And apart from that, even if they are required by law to participate, how do you ensure that they share the data in which this type of incident can actually be found? Even if they are in the scheme, are they somehow hiding the most critical data? It is true that a lot can happen through regulation. On the other hand, I will allude back to what I said before about the type of patterns that AI can unearth. Even if we are missing some actors, because of the interconnection of the supply chain some of the indicators will come out in other parts of the chain, in actors that are participating in such networks. Through AI, if we have enough intelligence, we can find even minute deviations from what we would normally expect, through anomaly detection or other ways of going at this. So, even if we can't have everyone, we can have enough interested actors, even upstream in the supply chain, to catch such issues much earlier than we do now. So, no need for country-wide infrastructure; a good need for good intelligence in the actors that have the incentive to cooperate. Many thanks, Manus; I think your answers were really good. And just to say that, probably, if we had looked at the right indicators, there was actually enough information, enough data available, that the fraud would have been identified. And this is back to your columns, Manus: we just weren't looking at enough columns. I see, Suley, you've got your hand up, so just be careful, because I'm coming back to you for the next question. I just wanted to comment on what Manus and Bas said, because I think there's already a lot of information sitting around in different institutions, like slaughterhouses, even health and safety bodies. So theoretically, if you would connect those data points and then do a mass balance calculation, you would already see hotspots in differences in numbers and volumes, and that would potentially allow you to identify where the discrepancy occurs. The problem is that these systems are not connected nowadays. Thank you very much, Suley. Michael, you've got your hand up. Just to make a quick comment on that one, Chris: I think it's documented that the legal compliance officers in the UK, i.e. the EHOs, are substantially under-resourced, and their ability to visit and inspect sites and so on is diminishing rather than increasing. The industry struggles to fully trust the regulator, unsurprisingly, for fear of being hit over the head by the regulator, and it has been proposed to make more use of the customers' data systems, like BRC. Well, those two have problems, because they become effectively another tool of control by the customer over the supplier, and as I've already mentioned, that has proven to be a criminal relationship in the past. So what you've been discussing today, which is independent, high-integrity, neutral academic management of data, may also provide a very cost-effective and much more efficient solution to some of these problems than the legacy tools that we have on the pitch at the minute.
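Suley's mass-balance idea is, at its core, simple arithmetic once the existing data sources are connected: sum what each source declares as produced under a claim and what downstream sources declare as sold under that claim, and flag where sales exceed what could have been supplied. A toy sketch with invented volumes, in tonnes per quarter, might look like this:

```python
# Hypothetical declared volumes for a single claim (e.g. "British beef"), tonnes per quarter.
declared_production = {        # e.g. aggregated from slaughterhouse / processor records
    "Region A": 1200.0,
    "Region B": 800.0,
}
declared_sales = {             # e.g. aggregated from retailer records
    "Region A": 1150.0,
    "Region B": 1100.0,        # more sold than produced: a potential hotspot
}

TOLERANCE = 0.05  # allow 5% for stock carry-over and measurement error (arbitrary choice)

for region, produced in declared_production.items():
    sold = declared_sales.get(region, 0.0)
    if sold > produced * (1 + TOLERANCE):
        excess = sold - produced
        print(f"{region}: sold {sold:.0f} t vs produced {produced:.0f} t, "
              f"{excess:.0f} t unaccounted for; flag for investigation")
    else:
        print(f"{region}: mass balance within tolerance")
```

In a privacy-preserving setting, each organization could compute its own regional totals locally and share only those aggregates, in the spirit of the federated approach discussed earlier.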
I think that's a very good point that you make. In so many walks of life, people talk about not having enough resource, not having enough staff, yet we're hearing that AI could replace the jobs of tens of thousands of people in the future. So I think that's a very good point. My very last question is for Suley. We've talked about the Effort project today, which I think is really important, and I'm thrilled that you talked about the Titan project and the Hollywood project as well. So do you think we're doing enough in Europe in terms of really trying to capture how to exploit AI for the food industry and trying to make food safer, or do you think these are just really very early days and we need to do a lot more work? I think we're in an exploratory phase here, and I think a lot will also be conditioned by enforcement through regulation and by demanding more transparency throughout the supply chain. Now, again, I repeat, we are already dealing with mandatory transparency in the food supply chain, so presumably this industry might be able to move much more easily and much faster with digitization than, for instance, the automotive industry or any other industry for that matter, because it's already mandatory; you're just improving the current situation by digitizing it. And as such it could also be used by the European Commission as an example for other industries of how to achieve that kind of transparency throughout the supply chains. I think once companies embrace the holistic benefits of sharing and learning from each other's data, they can also create competitive advantage together as an industry, and I think that's the main difference between the circular economy that's envisioned for Europe and the traditional linear economy, where everybody is basically looking for their own benefit and individual growth; through collaboration you can achieve so much more than on your own. I think AI, and combining technologies and using AI to interpret the data and to deliver possible scenarios that can inspire strategic decisions, is a potential that is really necessary for us to be able to integrate all the ambitions that the European Commission has developed over the years. And just to summarize, food safety is generally driven by regulation. It doesn't matter what part of the world you're in; that's what gets people to move, to change, to do things: it's about regulation. But I do think that with the advent of science and technology we have the opportunity to actually move ahead of the regulators, because companies and businesses will see better protection of their brands, competitive advantage and so many other things. Now, because I know we're pretty much out of time for the discussion panel: the discussion was really, really excellent. I asked the most difficult possible questions I could think of, and unfortunately you all answered them extremely well, so I'll have to work harder next time. To Suley, Manus, Tim, Michael and Bas, thank you very much. Your inputs have been fantastic, from so many different angles, on this really, really important subject. Personally, I've learned a lot. I have written down so many different questions here, so don't be surprised to hear from me offline with further questions. So now is the time for me to step down from the chair of this session. Thanks everybody, all of the speakers, but also thanks to the audience for staying with us.
What I'll do now is pass back to Manus, and Manus is going to lead the session on interactive brainstorming. So Manus, over to you. Thank you so much, Chris. Thanks everyone, and thanks to the panelists. Indeed, Chris, you asked everything that I think was at the top of your head: very hot questions, very important, and very nice answers. And as you said, Chris, I hope there will be a follow-up to this summit next year, maybe in hybrid form. And as we get acquainted with each other, and with the audience as well, now that they are going to participate, we can go even deeper; this is just the first one. Okay, so what I will do now is stop the recording. The idea is to relax a bit; whoever wants to can open their cameras, but let me stop the recording first.