I'd like to say a very warm welcome to this GFSI panel session. It's about public-private data sharing in terms of trying to prevent some of the serious issues that can happen across our food supply systems. For many years what I have done is I've investigated serious issues about food safety and about food fraud, because that's kind of my job. My name is Chris Elliott, I'm Professor of Food Safety at Queen's University Belfast, and for five years plus I've really thought about this whole idea of collecting and analysing data and then producing really important outcomes from that system. One of the initiatives that I instigated back in 2014, six years ago, came after the horse meat scandal in Europe, when one of the recommendations that I made to the UK government was that there have to be much, much better ways of sharing sensitive information. The output and the outworking of that was the formation of what we call FIN, and FIN is the Food Industry Intelligence Network. There is only one FIN in the world, in the UK, and it comprises nearly 50 businesses with a turnover of, I think, something like 150 billion per year; it's really very large national and multinational companies, and we'll talk a lot more about FIN over the next hour and a half. We have also been thinking about the collection and interpretation of data not to look at issues that are present now but at ones that may arise in the future, and this is the whole idea of predictive analytics: predicting issues and then dealing with them, going from being reactive to being proactive.
What we have done is put a really nice panel of people together. Francesca, could you move our slides forward? There are going to be two rounds of questions; normally in a boxing match at the Olympics there are three rounds, but I'm only going to spar with the panel for two rounds because I'm such a nice guy. It's going to be a bit of a question and answer session, a bit of a conversation, maybe provoking a little bit of controversy, because of course that's what I really like to do. And I want to invite you, the audience, to use the Q&A chat box: let's get some difficult questions, certainly not for me, but yes, absolutely for the panel. So if you just move on to our next slide, I'll now introduce our panel, and I think it is a phenomenal blend of people from the food industry, from a regulatory environment, but also from the world of data. First of all we have with us Sarah Mortimer, who is vice president of global food safety at Walmart, the world's largest food retailer. We have Peter Whelan, who is a good friend of mine; Peter is director of audit and investigations at the Food Safety Authority of Ireland, and Peter and I have investigated a few things together over the years. Our third panel member is Helen Sisson, also a good friend of mine and one of the co-founders of FIN; Helen's day job is technical director of the Two Sisters Food Group, one of the largest food manufacturing companies in the UK. And fourth and last, but by no means least, is Giannis Stoitsis, who is founder and CTO of Agroknow; Agroknow is a data analytics company, and we've been doing a lot together over the past 12 months looking at data collection, predictive analytics, and looking for these safe spaces where information can be analyzed and exchanged. So if we just move on to our next slide: round one of the boxing match, as I call it.
This is about opportunities: opportunities in terms of unlocking the massive data sets that are out there. The world is full of data, there is absolutely no doubt about that; individual companies have data, you can get open source data, you can get closed source data, but one of the big issues is trust. Who do you trust with your information? Who do you trust with your data? This can be highly confidential information, it can be sensitive to your business, it can create problems if it gets into the wrong hands. So round one is really about this whole question of opportunities and data trust. What I'm going to do is ask Helen the first question. Now, I've already said, Helen, you're one of the co-founders, the co-creators, of FIN. So in terms of your own experience of setting up a data sharing platform, and it could be FIN or it could be other things, what do you think the critical points were in terms of actually getting FIN to operate and creating that trusted space? And then we'll talk a little bit about not only sharing information between companies, but between companies and regulators as well. So over to you, Helen, what are your initial thoughts? Yeah, thank you Chris. As Chris has said, the concept of an intelligence sharing network came out of Chris's report after the horse meat scandal, and a group of technical leaders from across the industry in the UK sat in a room and said, what are we going to do about this? We all agreed that what we wanted to do had to be meaningful; we didn't want to tick a box, we actually wanted to create something that would add value to all our organisations. And we had to think about how we would do this, how we were going to set it up, and I'll be honest, the initial meetings in the room with people from different companies were a little bit of "I'm not sharing mine with you" and a little bit of "oh, how are we going to do this?"
I think a eureka moment in the early days of setting up FIN was establishing how we were going to make this data safe. A lot of what people were worried about was, well, what happens if it gets out? Even in the early days there was a bit of nervousness about what you do with the regulator, what about the media, how are we going to create this safe haven, which I think Chris called it in his report. What we did was establish a relationship with a legal firm, and we identified a way of getting the information to flow through what we called our legal host, through the legal firm, and that data, when it came out the other end, was going to be anonymised. That gave companies the confidence to submit their data into this safe space: that data couldn't be linked to any company other than by the legal host, who knew who was submitting what, and only in the event of any questions or points of clarity that might come up later on. Ultimately all that information would get consolidated and come back out in what we call the quarterly report. That really was a moment in time where people went, oh, I can see how this can work, I can see how my data can be safe, and also I can see what I can get out of it. Because that was the other thing when we set up FIN: we're asking people to trust each other, we're asking one food business to trust another food business. We also had an understanding that if we find any deviations in the data, or any test results that look a bit odd, we're not all going to go rushing off delisting suppliers or doing something silly; we'll follow the investigations, we'll do the traceability, and actually use that information to inform our own insights, to inform our own testing programs. And that in its own right had a positive effect. We could see examples where, in the early days, everybody was testing meat because of the scandal that had happened, but
actually when one company might have found an erroneous result in one particular supply chain, the following quarter we found that other companies had gone and had a look at that supply chain. So from the early days of getting that trust and getting people to share the information, people could start to see how they could use that information to the benefit of their own businesses. I think it really was about creating that safety, that place of safety for the data, and there was a bit of a leap of faith in the early days amongst companies to create that trust amongst themselves. We also always had this bit of a motto: we're going to walk before we can run. I guess as we develop these questions we can talk about how things have evolved since then, but it was that creation of the safe space that was the breakthrough. Yeah, I think that's absolutely right, the worry about whether your data could be tracked back to you and cause issues for your business, and that leap of faith that you talked about, Helen, I think was absolutely magnificent; there was a small group of what I call absolute leaders, technical leaders of the UK food industry, who led that. You also said it has evolved, it's evolved over the five years since its creation, and the next leap of faith was then sharing the data with regulatory authorities. Can you just talk through how you managed to deal with that and all the additional layers of complexity? Yeah, so I think it's fair to say in the early days of FIN the regulators were very interested in what we were doing, but we felt that we needed to get trust amongst the members first. Once we'd got that trust, and as I say the value was starting to be created for the members of the network, we started to have dialogue with the regulators, and because of the UK system that meant having dialogue with all the devolved authorities, so we were
in separate dialogue with, particularly, Scotland, Ireland and England. There was a point of principle: whatever we did with data sharing between FIN and the regulator needed to be two-way. We were very clear, for the network, that that was really important: if FIN was going to share its data in an anonymized way with the regulator, what would FIN get back in return, what would the members get back in return? That was a really important principle. We needed also to keep that theme of protecting the data; we knew, and had established, how we were doing that for the members amongst themselves, but we needed to flow that theme through to how we worked with the regulator. What we did with each of the regulators and each of the devolved authorities was create an intelligence sharing agreement. It sounds a bit formal, and maybe it was, but it was important for the members to buy into that and have the confidence, again, to move FIN on as a network. That first agreement was signed roughly a year after FIN got up and running: FIN was formed in the middle of 2015 and got formally set up as a company, if you like, in the October, and it was the following September that we signed our first agreement. So it was a year of getting the network up and running and honing the data sharing, but as I say, it was about having a formal agreement, each party knowing what it was doing with the other party, protecting the information, and creating a two-way process. And that's moved on even further: it's gone from just sharing the data to now, every quarter after the FIN report is published, we sit down, in fact Peter and I are in a meeting tomorrow, with all the regulators and have a multi-agency intelligence sharing meeting, where we share the FIN data and they share what's going on in their world, and that may result in, and has resulted in the past in, particular investigations or actions
being taken as a result of that forum. The key actions and insights from that meeting get put on the FIN website, which was another development, so that members also get some form of output from that forum, which takes place between a couple of the board members and the regulators. So it's very much an evolving information sharing, from initially the food industry through to all the regulators as well. That's absolutely super, Helen, thank you. And just for the audience, I do believe, and I stand to be corrected, that the data that sits in FIN is the world's largest repository of food authenticity testing; that's the scale that FIN has reached now. I think this is a good time to go to Giannis, because Giannis, you're a guy who really likes to manipulate and look at data. So in terms of the massive datasets that are being collected, how do you think we can go from that reactivity I talked about to proactivity, to preventing the big food safety issues and scandals that might happen in the future? Thank you very much, Chris. First of all, it's a pleasure and an honor to be on the same panel with Sarah, Helen, Peter, and of course you, Chris. I will start with the second part of the question, which is about which data sources can give us better predictions. The value of a data source in terms of predictions depends very much on the parameters that we need to include in our prediction models. So the best data sources are the ones that will provide information and data for a parameter that is very important for the prediction of the risks that we want to make. I will give an example from the fraud perspective.
If, for instance, the fraud risk depends very much on production and trade information, then having data sources for these parameters is very important, in contrast to having data for parameters that are not important for predicting the fraud risk for a specific ingredient or commodity. So the data sources that provide information for factors that are very critical to the risk predictions are the best ones, the ones that can provide value. And of course we need to be very careful there, because this data should be accurate, should be trusted data, and should be combined and interconnected so we can apply the algorithms to clean and accurate data. Regarding the scenarios that we can build from such massive data, from the big data that we have in the supply chain: I will mention three, but of course these are not the only ones, there are many more that we could still discuss in this panel. I will start from the scenario that is very close to what Helen described for the case of FIN. It is very important to have data sharing in order to align all the lab test data, whether for authenticity purposes or for safety purposes, generated by private but also by public labs. Having an aggregated view of all the lab tests, and knowing the protocols that were used for the tests, so that we can be sure we can use this data in a safe way because it is trusted, will establish a scenario that can strengthen the predictive capabilities, because we'll create the critical mass of analytical results, of lab tests, that could be used to identify emerging risks in the supply chain; what Helen already described goes very much in that direction. The second scenario is from the certification sector.
So for instance, if we focus on only one part of the certification sector, like the inspection results produced by governments but also by certification bodies and by third party auditors: this is very important, very critical information that right now is trapped, is closed, in local databases, in documents, in PDF files, and this does not allow us to have all the required information, based on these outcomes of the inspections, that could be used to predict very important events that may happen in the food supply chain. Another very interesting scenario, which we also heard about during the last three days at GFSI, is how to share and combine data that comes from consumers. We call this in our team the consumer's voice; it may include reviews, complaints, comments about the taste, or even reports of foodborne illness. E-commerce in the food sector will play a very important role in the next years, so being able to share such data between the e-commerce platforms and the manufacturers and the retailers can provide a critical mass of data that will help the development of risk prediction models. So this is also a very interesting scenario. And of course, for all these scenarios it's very important for this data to be delivered in an easy, unified way, so it can be used by the people who are developing the prediction models, whether that's inside the food company or in a technology company that provides services to the food company. This is what we are also trying to do: to aggregate, collect and process this data. But the most important thing, giving this data out in an easy and unified way, is a very important priority that we are working on, and one we should be looking at when we are talking about data-sharing scenarios.
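As a rough sketch of the kind of unification Giannis is describing (a sketch only: the field names, synonym table and threshold below are invented for illustration, not taken from any real FIN or Agroknow schema), lab results from different sources could be mapped onto shared product and hazard terms before any prediction model sees them:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass(frozen=True)
class LabTest:
    """One lab result expressed in the shared vocabulary."""
    product: str    # standardized product term, e.g. "skimmed milk powder"
    hazard: str     # standardized hazard term, e.g. "melamine"
    source: str     # e.g. "private_lab" or "public_lab"
    positive: bool  # did the result exceed the limit / confirm the issue?

# Toy synonym table standing in for a controlled vocabulary.
PRODUCT_SYNONYMS = {
    "SMP": "skimmed milk powder",
    "milk powder (skim)": "skimmed milk powder",
}

def normalize(raw: dict) -> LabTest:
    """Map one source-specific record onto the shared terms."""
    product = PRODUCT_SYNONYMS.get(raw["product"], raw["product"])
    return LabTest(
        product=product.lower(),
        hazard=raw["hazard"].lower(),
        source=raw["source"],
        positive=raw["result"] == "positive",
    )

def emerging_risks(tests: list[LabTest], min_positives: int = 2) -> list[tuple[str, str]]:
    """(product, hazard) pairs with repeated positives across the pooled data."""
    counts = Counter((t.product, t.hazard) for t in tests if t.positive)
    return sorted(pair for pair, n in counts.items() if n >= min_positives)

# Two labs report the same product under different local names; pooling
# them reveals a repeated positive that neither source shows on its own.
raw_records = [
    {"product": "SMP", "hazard": "Melamine", "source": "private_lab", "result": "positive"},
    {"product": "milk powder (skim)", "hazard": "melamine", "source": "public_lab", "result": "positive"},
    {"product": "olive oil", "hazard": "dye", "source": "private_lab", "result": "negative"},
]
risks = emerging_risks([normalize(r) for r in raw_records])
```

The point is only the pattern Giannis outlines: once terms are standardized, results from many sources can be counted together, and that pooled view is where the critical mass for spotting emerging risks comes from.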
Many thanks for that, Giannis, and I think that takes me on really nicely to the question I'd like to pose to Sarah, because Walmart is a reasonably large company and I think you collect huge, vast amounts of data. Can you think of some scenarios where the collection and analysis of that data has really helped boost the predictive capabilities for your company, and maybe the broader food industry? We do have vast amounts of data, and a lot of that is internal data, and we're looking at predictive capability. We have 11,000-odd stores, and we have tons and tons of audit data, temperature monitoring data, sanitation data, and all of that is increasingly being collated so we can look at what it tells us. Some of it's collected by third parties, some of it's collected by our internal associates, and we're aggregating that more and more. I guess we've tested our way a little bit in this area by looking at all the data we have readily available to us, data that isn't fraught with the legal difficulties that Helen described when FIN was first set up, because it's our data and it's much more readily accessible. That said, some of it of course resides with third parties, so there's the question of how much we trust the security of that data, but it's certainly been very helpful in terms of predicting likelihood of failure at the store level. I'm really interested in FIN, because I can certainly see a scenario of how data sharing could boost preventive capabilities in the future. As Helen was talking, and as you were talking, Giannis, I was thinking about, and I've been thinking a lot about, leafy greens in the United States; it's a huge challenge for many of us. Thinking about the learnings of FIN, could that be leveraged in some way as an opportunity to share more data and information than we have in the past?
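Sarah's point about predicting likelihood of failure at store level can be caricatured very simply. This is a hedged illustration, not Walmart's actual model: the feature names and weights are invented, and the only idea shown is that routinely collected signals (audit findings, temperature excursions, sanitation flags) can be combined into one risk score used to prioritize attention:

```python
import math

# Invented weights for illustration; a real system would learn these
# from historical audit and failure data.
WEIGHTS = {"audit_failures": 1.2, "temp_excursions": 0.8, "sanitation_flags": 1.0}
BIAS = -3.0

def failure_risk(signals: dict) -> float:
    """Logistic score in (0, 1): higher means failure looks more likely."""
    z = BIAS + sum(w * signals.get(name, 0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def prioritize(stores: dict[str, dict], top: int = 2) -> list[str]:
    """Rank stores so limited inspection effort goes where risk is highest."""
    return sorted(stores, key=lambda s: failure_risk(stores[s]), reverse=True)[:top]

stores = {
    "store_a": {"audit_failures": 3, "temp_excursions": 2, "sanitation_flags": 1},
    "store_b": {"audit_failures": 0, "temp_excursions": 1, "sanitation_flags": 0},
    "store_c": {"audit_failures": 0, "temp_excursions": 0, "sanitation_flags": 0},
}
ranking = prioritize(stores)
```

The design choice this sketches is the move Sarah describes from reacting to failures to directing effort where the aggregated data says failure is most likely.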
Walmart is probably the biggest purchaser of leafy greens in the U.S., possibly anywhere, but as you know, leafy greens have been fraught with problems over the last years, with E. coli challenges and recalls; most years we've had recalls, and multiple ones, highly costly and highly disruptive. Walmart has pioneered, I guess, the use of blockchain as a data trust example: we've got immutable data going from our suppliers, with over 90 percent of our leafy greens supplier data now going onto the blockchain, so we can very rapidly trace to the field where our leafy greens have come from. But that's immutable data; it relies on it being correct when it goes in. What I'm thinking about in listening to Helen is test data, and how we could aggregate test data for the common good, for the good of the industry, where we've got multiple companies doing testing, multiple suppliers as well as customers testing. I know there are some projects being discussed to look at whether that could happen: pre-harvest testing, test-and-hold data, a lot of test data that I think would be available and that would really, really help us in predicting prevalence, and therefore in root cause analysis for the future as to how and where and why some of these issues happen. We know the root cause exactly, in that it's mostly cattle, or animals, in proximity to leafy greens produce agriculture, but the patterns and the reasoning aren't always clear as yet. A lot of companies are getting together on that; interestingly, Helen, just as you were describing, a lot of retailers get together, a lot of producers get together, and we've got multiple coalitions underway, but we haven't yet taken that next step in terms of how we can get further along in preventive capability that will give us better prediction. So it's a really exciting conversation, I think, and I'm really glad to have joined the discussion. Yeah, thanks for that, Sarah. And I think the implementation of
blockchain, as you talked about, is fantastic, and I'm involved in a multitude of blockchain projects, but the value of blockchain is the value of the quality and the accuracy of the data that goes into it; it's as easy to put bad information into a blockchain as into a ledger. I think often we say the verification of blockchain comes through the analysis, the testing, that goes on, so I think there are phenomenal opportunities there. The example that you gave is a really good one, because I think very, very good, hard data analytics added to all of the testing that goes on will give you much, much more information about what is coming down the road, so thank you for that. I'm going to switch a little bit now to Peter. So Peter, you are the cuckoo in the nest, because you are the one that everybody dreads: you're the regulator. And joking apart, sharing information with regulators really can be extremely difficult for a number of different reasons, and we know that a lot of regulatory authorities don't have access to that much data, food safety data, certainly food authenticity data. I know you very much welcomed the opportunity of joining FIN, and I just wonder, what has that brought to you as a regulator, and how do you think what has happened there could be expanded to other areas where public-private partnerships can work in terms of this whole area of data sharing?
Yeah, I think, Chris, just taking up that last point: the concept of generating and storing and holding data is a very old one. We've all filled up database after database after database, everybody wanted more and more and more, and it was all in silos. I think the real move here that's novel and innovative is that we're getting rid of this silo idea. The Food Standards Agency in the UK brought out a document called Regulating Our Future, and they suggested that there would be big sharing of data with industry and that that would generate all types of benefits. The FDA have now said, with data trusts, that they think there should be all of this sharing of data, and they're even looking at sharing it amongst themselves, because they have difficulties between each of their states and how they share data. We've been talking to EFSA as well, the European Food Safety Authority, and they're talking about the concept of data lakes, where data is stored in a well-tagged way that respects confidentiality, but all of the data is connected. It doesn't necessarily have to be in one location, but it's all connected, and it's interoperable and usable by anyone who has whatever model they like pointed at that data, so it doesn't even have to be in a common language. That's a great move forward. The EU recognizes this, because in the EU Food Fraud Network we're now amalgamating into IMSOC, the Information Management System for Official Controls, the RASFF data, the administrative assistance and cooperation data on food fraud, TRACES; there's a whole lot of databases being merged together there. Now, merging data is one thing; as you say, it's what we do with it. But for me, FIN has been the greatest example of how you bring data and share data.
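Peter's description of a data lake (data kept in separate places, but well tagged, connected, and queryable by whoever points a model at it) can be sketched in miniature. Everything here is invented for illustration: the two toy silos, the tag names, and the convention that records expose only tags and an origin type, never a submitter's identity:

```python
# Two separate "silos": an authority's results and an industry network's
# results. Records never move into one physical store; shared tags make
# them queryable as if they were one collection.
AUTHORITY_DB = [
    {"tags": {"hazard": "salmonella", "product": "poultry"}, "origin": "official_control"},
    {"tags": {"hazard": "lead", "product": "spices"}, "origin": "official_control"},
]
INDUSTRY_DB = [
    {"tags": {"hazard": "salmonella", "product": "poultry"}, "origin": "member_test"},
    {"tags": {"hazard": "dioxin", "product": "feed"}, "origin": "member_test"},
]

def federated_query(sources, **wanted):
    """Return every record, from every silo, whose tags match the query.

    Confidentiality is respected by construction: records carry only
    tags and an origin type, nothing identifying who submitted them."""
    return [
        rec
        for source in sources
        for rec in source
        if all(rec["tags"].get(k) == v for k, v in wanted.items())
    ]

# One question spans both silos without centralizing the data.
hits = federated_query([AUTHORITY_DB, INDUSTRY_DB], hazard="salmonella", product="poultry")
```

The design point mirrors the transcript: the data stays where it is, but consistent tagging makes the outputs of one organization usable as the inputs of another.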
For us, as you said, most of the time when we get vast amounts of data it's when we take it: when we have an investigation and an industry sector has hidden information, or a particular firm has hidden information, and we go with warrants, we go with legal provisions, and we take all the data we can get, and we analyze it and we look for wrongdoing. But that's not the place we want to be in, because that's the last resort. With FIN, that's not the case. With FIN, we make great use of the data that we have there. We use it to inform our official controls. We use it to inform our investigations. We use it in our chemical sampling program, which we develop every year. We use it to give our labs focus. We use it in the coordinated control program in the EU; we've actually had food selected for that on the basis of FIN data. We use it in Opson, and we have informed Opson investigations and targeted actions on the basis of FIN data. So tremendous use of data there. There's a big move away from traditional inspection. Traditional inspection, where an inspector turns up and wants to inspect the premises and do an audit, doesn't cut it anymore, and certainly in terms of investigation of food fraud it has to be intelligence-led and it has to be forensic. But to move away from traditional inspection, we do need to move to a situation where we leverage each other's data, or leverage each other's data analytics, or share third party audits, so that the outputs from one organization can become the inputs for another, and we're not seeing repeated generation of data that doesn't need to happen. If it's done once, it should be available, and then it's the input for the next person who wants to use it.
I suppose for this to work, really, we have to embrace ways of processing and analyzing big data in particular, and we need to be focused on how we generate targeted requests for specific purposes. You don't want the regulator coming in saying, I want to see everything and I want it all available to me. That does no good for the regulator and no good for the industry sector, so we need to be careful about how we target a specific purpose for what we do. We also need to be very careful about how we balance confidentiality, transparency and accountability; that's really important, because if a regulator gets access to a lot of data and confidentiality, or a proprietary interest, is broken, well then people won't feel comfortable sharing that data anymore, and properly so. But what we can bring is an enormous amount of data: a microbiological sampling program, a chemical sampling program, a residue sampling program, a pesticide sampling program, authenticity sampling; we do sampling attached to outbreaks, we do a coordinated control program, we have the Opson results, we're part of the EFSA horizon scanning. So we have a huge amount of data, and we need to find ways to move that data into the democratization of data, where it can be used by everyone: it's not just my data or your data, it's our data. But to make great use of this, we have to move, as Giannis was saying, to artificial intelligence and machine learning, so we have to get smart about how we analyze big data sets, what we want from them and how we're going to use them. And then we have to demonstrate benefits: benefits like more rapid traceability, finding the source of an outbreak and stopping it really quickly. Linking outbreaks through genome sequencing has been a huge advantage of data in investigating outbreaks and coming to quick results, and then learning those lessons to put in preventative measures. Because we
all want public health to be protected. Whether you're a regulator, whether you're industry or whether you're an academic, everybody wants the same result: a good product on the market that's authentic, that is what it says it is, and that's safe. We're all looking towards that goal, so food authorities are not competing; we've no competitive advantage. When industry share data, you might think that your competitor has an advantage and be afraid of what they might say, but we don't have that, so we don't come to the table looking for any competitive advantage. But we do need to incentivize data sharing, and we need to be part of that incentivization. So if we can say, share your data with us, we'll use it to risk-profile, and then on a risk basis we look at the inspection frequency or the inspection type, industry might welcome that, because sometimes the inspection doesn't do a whole lot for us and doesn't do anything for you; but if we have more focus, because we can depend on the data sets we have, we can change that. And food safety culture: three weeks ago the European Union brought in new legislation, and they've now made it a legal requirement for industry to have a food safety culture, and they've made a very specific legal requirement for managers and management to demonstrate, legally, that they've introduced this food safety culture and taken all of these actions which have led to safe food. So what better way than sharing your data to show that? I'll just finish on this: digital ecosystems are the way forward, we should be part of them, we need to be part of them, and we need to come to the table with a positive view. Peter, many thanks, and I think a lot of the points that you raised will come back in round two, when I ring the bell for that. Now, we've had some questions come in from the participants, and please feel free to ask some more. What I will do is use the chair's prerogative: I'm going to ask
the first question. I think what we've heard from all of the speakers is about the wealth of data that is out there, and I often compare it to an onion, because an onion is just layer after layer after layer. What are the mechanisms by which you can actually take all of those layers of data, de-complicate them, and turn them from data into intelligence, these sorts of dashboards that we talk about? I'd like to put that to Giannis. That's your challenge: how does a company like Agroknow take and aggregate all of that data and convert it into something that is really very easy to follow? Thank you, it's a great point, thank you Chris. What we do in order to connect all these different layers, and I like very much the idea of thinking of it as an onion, is based on two things. The first is how we can use standards in the way that data is represented for important properties and fields, like for instance products or hazards or companies: how we can use the standards to ensure the interoperability of the data between the different layers. For instance, information about a company will appear in almost all layers, so if the company is identified in a standard way, with a global identifier like the GLN, or the one that was proposed by FDA, then it is possible to track this company throughout the different layers, throughout the different stages of the supply chain. Following such an approach for all the different important parameters, properties or fields, name them as you like, is very critical in order to have a combined and interconnected dataset, interconnected big data from all these different layers. The second thing that we apply is that we are using vocabularies, we are using ontologies, that hold the knowledge of the domain. What does this mean? With an example: if we use a very well structured and hierarchical ontology
or taxonomy or vocabulary for hazards and for fraud issues, then we can have there the knowledge of the domain. So for instance, if we have, under chemical hazards, the different types of hazards as child terms, then every time that we identify a mycotoxin, we can automatically give the machines the knowledge that mycotoxins are part of the chemical hazards. In this way we can connect the data throughout the different layers, and we can also keep the knowledge for the specific data that we are collecting; we can have the semantics of the specific data. This enables the very nice intelligence dashboards that you mentioned, where you can drill down to very specific problems and answer very specific questions, like for instance: give me all the aflatoxin cases that were identified in pistachio nuts in border rejections, import refusals, but also food recalls. So we can go through the different layers of the onion that I have already described: the layer of the border, the layer of the market, of the shelves of the retailer. Having such knowledge, and such a standard way of identifying the properties of the data, we are able to provide answers to such complex questions, and this is only possible if the data are interconnected, if we are using data standards and semantics that put the knowledge of the domain inside the system we are using. Many thanks, Giannis. We have quite a number of questions starting to come in now, all very data-centric. I'm going to bat the next one to Helen, and the question is: is the data collected under FIN made available anonymously? What do you actually do with the data that FIN collects? Yeah, so we collect the data on a quarterly basis from each member; we have a standard template, and we've literally just launched a brand new database in March as part of the latest development of the network. So the information from that member company gets put into the law firm, if
I want to put a better word and they scramble that data and consolidate it all so when that data comes out in the form of a report it's completely anonymized and so all members see that report they see all the consolidated data so they'll see all the tests all the foods are divided into subgroups which is in line with the BRC global standard categories for food so if you want to search on meat or you want to search on olive oil or you want to search on whatever material you can do that but the data is completely anonymized there's only if there's only the law firm can if there's a query thinking oh that doesn't look quite right or is there an error in the submission only the law firm can go back to the member company and query it the board or all the other members do not know whose data is useful yeah thanks for that okay the next question in terms of the data is about how do you assess the quality of it you know because lots and lots of data comes into fin is are there any you know simple analytical chemist and we have got lots of gates that you have to go through before you actually take the data and think it's meaningful so how does fin deal with what's called the assurance of data yeah I mean that's a network we don't specifically at this point specify certain testing or test methods but we do collect that information yeah so so we are you know somewhat reliant on member company submitting their data into the network and you know we do sort of make some recommendations in certain areas around certain test methods for certain ingredients you know and what we have under the board we have a technical steering group and where there's particular challenges in particular sectors so one of the examples would be sort of free range eggs we might commission a bit of a study on that which we did in this case to say you know what is the best test method for that particular ingredient material supply chain and make recommendations to the members on what to use but what we 
do is we require an element of free text that goes into it as well, so if there's a result that doesn't look quite right then we're asking members to provide as much information as possible on that investigation. You might, for example, get a speciation issue where you're picking up one species that shouldn't be there, but do we think that could genuinely be fraudulent activity, or could it actually be poor GMP practice in the factory because of carry-over between one line and the other? So we use the members' data, but we also use their investigations and traceabilities to back up what the results might be saying; I guess that's probably one of the main ways we do it. Can we sift every result? No, we can't, because the data does need to remain anonymous, and in order to do that you've got, to a degree, to trust what's coming in. The one thing I perhaps didn't mention is that it's not just test data we use in the network; we also collect some members' traceability information as well. We recognise there's a lot of testing and that does form the majority of the data, but where people are using traceability exercises to challenge supply chains we collect those too, and that information also goes into the network, gets consolidated and is sent back out to members in the final report, which again can back up what some of the test results might be saying.
Super, Helen, many thanks. Now to you, Peter. A number of questions have come in in terms of regulatory issues, and one that I think is very pertinent is about public authorities' willingness to share food safety data with the community. Is that something that you do at the moment, or that you think could be done, and again, how would you manage such a complex process?
Yeah, the Food Safety Authority of Ireland believes in transparency and is very transparent, so on our website there's a huge volume of data available. All of our recalls and all of our alerts are put up on our website so that everybody can see them and see the reasons for them, and we also share our pesticide data, our veterinary medicines data, our zoonoses data and our contaminants data with EFSA, and EFSA make them all publicly available. Not many people know this, but all of that data is out there and is publicly available, so there is already a lot of sharing of data. We are restricted in some ways: whereas we can publish our own enforcement actions, and we also publish the report that goes with each enforcement action, we are restricted by European legislation in that we can't reveal what we found on official controls, so we can't actually say we found the following on an inspection and put it up on the website. But we do respond quite openly to FOIs, freedom of information requests, and we give out as much data as we're legally permitted to. We also have to respect GDPR and data protection, but we're inclined to give as much as we can; we find excuses to give data rather than to hold it back.
Thanks, Peter. Sarah, the last question that's come in from the audience I'd like to put to you, because as a company you probably get lots and lots of information about food safety risks, and a lot of that will be through, as we talked about, social media trawling and so forth. But what you won't get there is any information, clues or evidence about authenticity. So how does your company collect information about authenticity, what might be the sources, and do you think that might be a weakness in your current armoury?
Well, we, like many others, I guess, subscribe to some of the services, which I suspect FIN is behind, in terms of horizon-scanning kind of tools, and we find those incredibly helpful to give us an idea of where to look. Many of our markets' testing programmes have to be based on what we feel are the likely places that we should be looking; we can't test
everything and so on. But I guess it's about building trust with your suppliers as well. I really like the onion piece, because it's drilling down from your immediate supplier down to their suppliers, to their suppliers, to their suppliers; of course, that's often where we have issues, and many of my counterparts also have issues, where it's those hidden ingredients that cause problems. But certainly horizon-scanning type tools are very valuable to us, and we do have test programmes based on risk assessments that we conduct in a number of the markets. Are they perfect? Probably they can be improved, but we are strengthening our programmes all the time. In terms of threat and vulnerability assessment type programmes, we're looking to standardise now across our markets; we haven't had a standardised programme before, that's just coming in this year, and we're implementing a more standardised approach. Then we have to figure out how we can share more widely across our internal businesses too, because, as you say, Chris, we do have a lot of internal data and we have a lot of markets to share it with. But we do find some of the external tools incredibly valuable in helping us focus on where to look; you can't really look everywhere, so where do you start?
Yeah, absolutely, so many thanks for that. I'm going to move on to the second round of questions. Thanks for all of the questions that came in from the audience, and please keep them coming in; I don't want to give this panel any easy moments at all. So, Helen, now back to you. We've talked about a lot of the opportunities, and you've talked about some of the unbelievable benefits of having FIN, but there must have been a myriad of technical issues that had to be dealt with. You know, I sit as, I think I call it, a critical friend on the board of FIN, and I just sat and listened to all of these different technical problems that never entered my mind when I said to industry, go and do this, it would be a good idea. So again, for those people who are thinking about setting up data-sharing initiatives, what were the real things that you thought were nearly showstoppers as FIN was being developed?
Yeah, I mean, I think that's a big question. I've already mentioned the protection piece, which was fundamental to getting us off the ground, and once we'd got over that hurdle it was, you know, how, and if, are we going to collect the data, and what are we going to collect? In honesty, we really did start with a spreadsheet, and then we thought, what food categories do we need, how do we separate them out? You know, if we do fats and oils, then what about olive oil, what about other fats and oils? Then we thought, oh, actually, country of origin would be quite useful information to have as well, and it started to trigger a framework and a format around how we collected that data, the categories we collected it in and the information we wanted. So it went from starting with a few food groups, you know, meat, dairy, to then breaking those food groups down into the various sectors and collecting further information. As I say, that started with a simple spreadsheet, and the network launched with 21 members, and that was challenging enough. As the membership grew, the next phase was to get a slightly more sophisticated basic database, and we worked with another company to put that in place. Then, literally just a few weeks ago, we've done quite a huge development, actually, in terms of a much more sophisticated data collection system, which is going to allow members to upload their information much more easily and will give members their own dashboard in terms of being able to manipulate that data and play with it in a more meaningful way. So, as I say: what to collect, how to collect it and how to present it. Then there are the member benefits as well: the members initially
got a quarterly report, with some support from Chris to give a bit of an overview on that report. Then we set up a technical steering group, and that technical steering group would review the report; based on what the results were telling us, and based on the knowledge of a group of technical experts in the room, a technical steering group report went into the quarterly report, and that would make recommendations on what to look for and where to target your testing. We also have an annual members' meeting, and one of the questions we had at the last meeting was, could we have a top-ten list of ingredients to target? So now that report has a top-ten list of ingredients to target. Along the way there's also a newsletter on food fraud, which another partner helps us create, and we have the website where we put posts. We also have a facility now, because one of the other challenges is: what happens if I get a bit of intelligence, what do I do with it, and I've got that bit of intelligence outside the quarterly cycle? Do I really want to wait another three months to share it? So we have a member alert system where we can say there's a bit of intelligence in this area, or a member's flagged a particular challenge, and we can get that out to members very quickly. So, as I say, from a technical point of view, it was getting that initial trust, it was getting the confidentiality secured, it was creating a means to collect that data on a quarterly basis. Increasingly, as the network's developed and the membership's developed, because we've more than doubled the membership now, it's how do we make that data collection easier, how do we make creation of the quarterly reports easier, and then how do we make it of even more value to members through being able to manipulate that data and use it for their own purposes, making sure that members are clear on the benefits they get from the network. As I've said, there are all the added-value extras: you don't just get the report, you get a lot of extras being part of the network, and I think that's really helped us grow the network and extend our reach. We obviously hope to continue to grow the network, and particularly to target areas where we might feel we're a bit underrepresented in certain categories. But hopefully that gives you a reasonable overview of some of those challenges.
Yeah, it was tremendous, because I think you've just summarised about five years' effort in about three minutes, so very well done for that. Can I ask you a question that I didn't tell you I was going to ask you? You know the three rules: if you want to join FIN, what are the three rules?
There's probably two, actually. One is pay your membership, and the second is participate. It's really quite simple: the membership fee is really modest, you know, for very large companies it's £3,000 a year, which, for all the benefits that come out of it, is a pretty modest fee. And we do expect you to participate, because we do not feel it's fair that you can get access to all this information if you don't put anything into it. So we do monitor that: members will get a nudge from the legal people, because they know who it is, if they don't submit any data in a quarter. You're almost allowed one quarter's grace, because we recognise people can have challenges, but in the interest of fairness to all the other members we require you to participate. So really it's two rules, Chris, to be honest.
Yeah, okay, even better, thanks for that. Janis, it's your turn to come under the spotlight, and, you know, there were the questions that you thought you were going to be asked, and now there's the question that I am going to ask you, which is slightly different, because I want to ask you about quality of data as well. I'll go back: I'm an analytical chemist, and every piece of data we generate we
check, and we double-check, and we triple-check, and it has to be, you know, quality assured. How can you say you do the same thing with all of the disparate sorts of data that you collect, particularly data from, I would say, the grey literature, from social media? Its reliability, its accuracy: convince me that you've got systems in place that can cut out the background noise.
Great question, and good that it's not one of the ones I have noted down; this makes it more interesting. I will start with the data that we are getting from official sources, from authorities, from organisations where we know a very good job is done in terms of data, and then we'll go to the most challenging part of any data source out there: social media, media sites and so on. For the controlled data sources, the ones that are official, what we are doing is cross-checking the data, comparing it with the historical data that we get from the same organisation, so we can see if it has the same format or the same quality, or if there are any extreme values that indicate something is not right, either in terms of the units or in terms of the terminology that is used. These are processes that run continuously, and every time we get data from a specific data source, even if it is marked as a high-quality data source in our systems, we will still compare the data with the historical data that we have and with the standards that we have internally in the aggregation system. That is the one part. The second part is that we rely very much on the metadata; there's no good data without metadata. When we have metadata we can be sure that a specific column in a specific spreadsheet is referring to the analytical result, and that in this analytical result a specific unit is used. If we don't have enough meta-information, metadata, for a specific data set, we will not use that data set, or we will try to contact the organisation and ask for clarifications on it. So that is the part about the more accurate, more controlled data sources. And just to add on that: we always have human experts who are supervising the results of the algorithms, of the text processing and of all the enrichment and analysis of the textual and numerical data, so this is always in place and we have confirmation by domain experts. When we are moving to sources like social media and media sites, then, in this case, we have of course the food safety experts and the data experts who are supervising and confirming the quality of the information, and this is supported by the algorithms that we are using. It's not that we are checking all the fields one by one; we can do that on a more mass logic, using specific dashboards that we have to approve the data that we are collecting. In some cases we will also apply algorithms for fake news verification if something is very new in terms of data source, and we are also cross-checking whether the same information is announced by different sites, by different professional sites that have high credibility in the food safety sector. So we will not publish something that is announced somewhere for the first time without checking the quality of the information and cross-checking it with other data sources. These are the different layers that we are using to make sure that we deliver accurate information, and that we use accurate and clean data to apply the prediction models that we have.
Janis, many thanks, and I hope it didn't
shock you too much with that question; you covered it extremely well. Sarah, on to you. You're probably wondering what I'm going to ask you now, and it is really about the ability to share data across food supply chains. But this is where I get a little bit nasty and say, you know what, I actually don't believe in food supply chains. I don't think they exist, because every time I look at a supply chain I just see how complex they are; I call them supply chain networks. And my goodness, a company like yours must have, what, 50,000 different SKUs, and that's a lot of networks. So how does your company think about sharing information across those different supply chains, or, as I prefer, supply networks?
With great difficulty, I could just answer, and stop there. And you're not far off in terms of the SKUs; we ship containers of grapes around the globe, for example, I mean, it's just massive. And the complexity, it is a network; we use your diagram quite frequently, actually, to show how complex it is. What are the concerns, given the complexity of the supply chain? Well, the validity of the data, for sure, you know, how trustworthy is the data? But as an industry, when we're thinking about what we can share and who we can share it with, there's this whole antitrust piece that comes into it, which I think Peter was mentioning much earlier: thinking about who owns that data and how we can share it in a way that doesn't compromise antitrust laws, and, the biggest market for us being the US, there are very stringent controls there, so that's a big concern too. And when we're thinking about sharing data, and we've talked a lot about food fraud data, and I touched on the leafy greens piece, even then, as retailers coming together, we're really careful that we don't share anything related to who our network is. It's still in its infancy, I think, as I listen to FIN, around how we share more test data than we have in the past. But I was also thinking, as we're talking, we've just been at the GFSI conference, and I think about the difficulties that we've got in uploading, when we see something and think that company probably shouldn't really be certified: how do I share that if I've got a non-disclosure agreement? I mean, as basic as that. When you think about how we could leverage what we see in a better way to really help, again, the consumer, going back to the consumer and preventative action, how can you put all of that information together, which isn't necessarily test data, it's observational data, and how valid is it? I mean, it's fraught with challenges within that complex network of a supply chain, but actually, if we could find a way to do it, we could probably prevent quite a few issues. When you look at the root cause of recalls, and you look at, well, that company was certified, what was the root cause of that, could it have been prevented, did anyone know about it, could we have shared it? But how can you do that? That's the hurdle we have to overcome. And the question here is about the main concerns of stakeholders in the food supply chain; then you get to the regulatory piece. We have had discussions with FDA: how can we work more collaboratively, how can we share? Industry at large's concern, I guess, is what the ramifications are if they find something: is their duty then to share it, and do they end up in a recall situation which they would otherwise have just dealt with somehow? I think there are lots of concerns buzzing around still that we haven't yet really got to the bottom of, and a great opportunity to do so. But I do think it's a stakeholder discussion. I love the fact that that's where it started, in the UK and Ireland, and you've got to a point where you're really comfortable with that, and I think there are learnings we can probably leverage in many other examples of data sharing, more broadly for the common
good, because we get more and more data, and we have to figure out how we do something with the data that actually matters and can actually help prevent foodborne illness and deliver safer food at the end of the day.
That's terrific, many thanks. So the last of the round-two questions is for Peter, and I think Peter's probably sitting there going, what's he going to ask me? Peter, what I'm going to say first of all is that you gave me a masterclass a number of years ago in looking at the complexity of networks, and I will never forget it; you drew it on a PowerPoint presentation. The question really to you is how you as a regulator start to think about: how can I map these networks, how can I understand where bad practice, where malpractice, is happening? Is it something that the authorities should do, or, again, is this really about partnerships? Because, you know, a lot of food businesses are much, much better at mapping supply chains now. So what's your perspective on the complexity of supply chains and how you can get the right information and share the right information? Peter, we cannot hear you. Sorry, I thought you'd fallen off your chair.
No, I'm still with you. I absolutely agree with you: the food chains are networks, and I describe them as mazes; they really are unbelievably complex and global in nature, with the amount of movements of food and sales and transactions. So, validity of data: if I'm travelling down the road at 110 kilometres per hour and I know the speed limit is 100, do I go home and ring up the police and say, I'm non-compliant and maybe you should investigate me or penalise me? Why then do we expect, if industry finds, say, Listeria monocytogenes at 110 colony-forming units in their ready-to-eat foods at day 12, before the use-by date, what prompts them to share that information? Why would someone pick up the phone? Is it in the interest of public health, that I genuinely want, for the common good, to protect consumers? Is it that I want to be compliant with legislation, that there is a requirement on me? Is it the fear of being caught, and that if you're caught it's more damage to your reputation? Or do you hide it? My fear with big databases is that, as regulators, we want to and have to find the non-compliant results; that's where we have to be. And if people decide that they're going to hide it, or not reveal it, or hide it in a data set so that it is there but it's hard to find; if you have an incomplete data set, or the integrity of the data is in question, or there's hidden or incorrect data, then you're making decisions on the basis of completely wrong data, and that serves no one's purpose, certainly not the public's, and our remit really is to protect public health. We've had two very big investigations, one very recently, where we went into a very big company, and the auditor for the third-party certification had been there the day before, literally the day before, and gave a fantastic report and a great rating, and we found the darkest fraudulent practices we've ever come across in a food company in Ireland. And we thought, how does this tally? So why would we want to feed in data from third-party certification or accreditation if that's the quality of what you're getting? Now, this is in terms of food fraud, and how much can one person do in terms of food fraud in an audit, I have to say. So we have to be concerned about the validity of the data going in, in order to make good decisions. I suppose there are a few other things, like what we've mentioned before: confidentiality versus transparency versus accountability. And we're years behind as well in the food safety authorities; we're not Googles or Amazons or eBays, so in terms of artificial intelligence and machine learning and the internet of things and analysis of big data, we're not there, we're in our infancy. We've started small: we looked at TripAdvisor and we
generated tools to look right across TripAdvisor reviews of restaurants, and it actually showed us some things that are very interesting, because people who think they've suffered food poisoning in a restaurant, or had bad service or bad food, put it up on the internet now. So you can get some very interesting results just by going through what TripAdvisor reviews might say online, and that might target where we might go, or outbreaks that might have occurred that we didn't know about. So, yeah, there are a whole lot of difficulties to weigh against the benefits. There are things like ownership of data, and who at the end of the day has ownership of this data; is it anonymised data, and does that give us the best use of the data when it is anonymised; is the legislation keeping up with the need to share, because we are restricted in some ways by legislative requirements; have we the resources and capability to do this properly and in an effective manner? It's change management, it's a new environment, and we need to manage change to move in this direction. I think if we can, it's going to show us a whole lot about food chains and complex food systems and complex mazes of food movements. But the whole thing still depends on whether we can depend on this: is this database genuine, is the information correct, have people stepped outside it? Because the people we chase through fraud investigations are never going to share the data. They're never going to put up: I'm selling illicit alcohol that I made in a unit that nobody knows exists, and I make it from industrial alcohol, and I'm buying buckets from China, and I have labels printed in Ukraine. You're never going to see that in the data set. So whereas it does have tremendous use for legitimate industry, it still doesn't get us past the illegitimate carry-on, I suppose.
Yeah, Peter, thank you, and I think the last point you made is unbelievably salient, because analysing data around food safety is very, very different from analysing data around food fraud, and I've learned that over many years. Now, what I'm going to do, remember I told you there were only going to be two rounds? Well, I lied to you, because round three is about to begin, and here are the rules of round three: I'm going to ask you a question and you have to answer me within one minute, okay? And after that I'm going to come to the audience questions, so please type in any questions you have. I'm going to start off with Helen. Helen, I know you like a challenge. FIN, you've got nearly 50 members, if not more; is it an exclusive club for the Brits and the Irish, or is it open to the world? Tell us.
Yeah, it is open. I think initially when we started FIN we talked about protecting the UK and Ireland markets, and we were perhaps a bit parochial, but again, linking back to, you know, walk before you can run, we needed to establish the systems and the processes and rules to get the network up and running. But as we've grown, and as our systems have got more sophisticated, we do recognise that with global supply chains it just makes sense to open it up, so the answer is yes, it is open to non-UK members.
Thank you, thanks very much for that. I think, Sarah, you're next. Now, you've got all of the data that you collect, and you've got all of the clever companies like Agrinote to do all of the data analysis. What's the one thing, you know, when you wake up in the morning and turn on your computer, your iPad or whatever it is, and you have a piece of information, what would be the one piece or two pieces of information that you could get every morning that you can't currently get? How would it make your job easier, how would it protect your company and, as I always say, how would it protect your consumers, those three things? So what's that big piece of data that you would like to have analysed in front of you? That was more than a minute.
It really is the emerging issue, I guess. It's like,
look here today, you know, look here now, this is happening. It's the immediacy, the timeliness, I suppose. Sometimes I feel we don't know until some way down the road, and we could have prevented selling things if we'd known sooner. So speed and validity, I would say: a piece of data that comes to me fast, that's valid, and that I can react to and do something about would be helpful.
Yeah, thanks for that. Janis, it's your turn now. What I would like to ask you is, you know, your company is trying to produce, I would call it, a coalition of the good: those people who are trying to really do the right thing, trying to prevent food safety issues and crises around fraud. How is it you're trying to achieve this, how are you trying to bring the coalition of the good together?
This is a good one, huh, in one minute, and we'd need some months to deal with the issue that Sarah mentioned. The most important thing that we are trying to achieve, in order to help the food industry, is to deliver all the interconnected data for the important parameters and factors that we need to take into consideration, which can help us to build the prediction models that will predict and prevent food safety incidents in the supply chain. This is what we are dreaming of, this is what we are working on, and this is a very important challenge that keeps us awake every day: to connect all this data, all the available data that's out there.
Thanks very much. Peter, the final one-minute question goes to you, and I guess it's around what this whole session is about: public-private partnerships, food businesses starting to trust regulators and regulators trusting food businesses more. What do you think the future is? Do you think that the direction of travel of what we've done in FIN is the right model to follow, or do you think there are better ways we should be thinking about to build that relationship which will protect businesses and protect people?
Yeah, I can answer that in a minute, all right, because I do think FIN is absolutely the way to go. However, I'd like to wake up in the morning and see FIN saying, you know what, we don't need to anonymise this data anymore; we'll put out that data, attributed to every company, and we trust that the regulator will take a proper view of it in partnership with us. Because information lies with industry, it doesn't lie with the regulators. We as regulators don't wake up in the morning and have any information, but industry have all of that information, and particularly in food fraud the information is with industry; they know much, much more than we do about what's going on out there. And I think eventually this will happen. I don't think it's today or tomorrow, but eventually I don't think we'll have to have anonymised data sets; we'll have people comfortable to share, because we're all in this for the same reason: we're all chasing the bad guy, we're all protecting public health, we're all protecting people's interests. As long as we have that common goal, I think that's the direction of travel. That might be a pipe dream, but I'd love to see it go in that direction.
Okay, I think it's a very, very important point you make, so thank you for that. So, a couple of questions from the audience just to finish this extremely good session. One of the questions is: there are the known knowns, those things, I mean, Sarah, you talked about the massive issues around E. coli, and we know about massive adulteration that's going on in things like herbs and spices at the moment in different parts of the world. But it's dealing with the unknowns, these emerging hazards; some of them might be consequences of the Covid pandemic, it could be consequences of climate change, it could be other things. Really, in terms of data and data analytics, how are we going to start
to manage those things which previously, you know, you'd put up your hands and go, well, there's no way we could ever have expected or anticipated that? So Janis, over to you first: on the analytics side, how do you deal with the unknowns?

Yes. Dealing with the things that have happened so far is the easy part; it still has some challenges, as we mentioned, but you can do it. Managing the unknown is more challenging, but still we can do things. I will mention two. The first is spotting an increasing trend for a hazard, for instance a biological hazard, and here I will mention the ETO case: salmonella was found to be increasing in sesame seeds over a long period, the last two or three years. This very much increased the risk of an unauthorised chemical being used to address that biological hazard, and that is exactly what happened. So when we see an increasing trend in a hazard, we can predict that it will be addressed with another hazard, the use of a chemical to fight the biological one. That is one way we can identify unknowns. The other is that we can identify how a problem may expand to other ingredients and commodities, because they come from the same regions where the same practices are used by the growers or producers. For instance, if you have an increasing trend for a pesticide used in Turkey, or in Egypt or another region, then based on our knowledge of where that pesticide is also used, in which other ingredients and crops, we can predict that the same problem may also be found in those crops. So these are two ways of identifying the unknown. We could discuss for four hours if we also included other factors like the weather, climate change, the environmental conditions; there are many such cases. I just selected two that are easy to understand and very realistic to work with.
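The two approaches described here, flagging a hazard whose incidence is trending upward and then propagating that flag to commodities that share growing regions and practices, could be sketched roughly as follows. This is a minimal illustration only: all the notification counts, the slope threshold, and the commodity mapping are hypothetical stand-ins, not real surveillance figures or a real FIN or RASFF data model.

```python
# Rough sketch of the two signals described above: (1) flag a hazard whose
# monthly notification counts are trending upward, and (2) propagate that
# flag to commodities from the same regions and growing practices.
# All counts, thresholds and mappings here are hypothetical.

def trend_slope(counts):
    """Least-squares slope of counts against month index (0, 1, 2, ...)."""
    n = len(counts)
    mean_x = (n - 1) / 2
    mean_y = sum(counts) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(counts))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical monthly notification counts per (commodity, hazard) pair,
# standing in for a feed of alerts such as RASFF notifications.
notifications = {
    ("sesame seeds", "salmonella"):     [2, 3, 3, 5, 6, 8, 9, 12],
    ("sesame seeds", "ethylene oxide"): [0, 0, 0, 0, 1, 1, 2, 4],
    ("paprika", "salmonella"):          [1, 1, 2, 1, 1, 2, 1, 1],
}

# Hypothetical mapping of commodities that share growing regions and
# agricultural practices, so a problem in one raises the risk in the others.
shared_region = {
    "sesame seeds": ["tahini", "sesame oil"],
}

def build_watchlist(data, related, min_slope=0.3):
    """Flag rising (commodity, hazard) pairs, then extend each flag to
    commodities linked by shared regions and practices."""
    rising = [key for key, counts in data.items()
              if trend_slope(counts) > min_slope]
    watchlist = set(rising)
    for commodity, hazard in rising:
        for other in related.get(commodity, []):
            watchlist.add((other, hazard))
    return watchlist

for commodity, hazard in sorted(build_watchlist(notifications, shared_region)):
    print(f"watch: {hazard} in {commodity}")
```

In this toy data the rising salmonella trend in sesame seeds is flagged alongside the later ethylene oxide signal, and both flags are extended to tahini and sesame oil; a fuller model of the ETO case would also treat a rising biological hazard as an explicit predictor of an unauthorised chemical countermeasure, rather than waiting for the chemical's own notifications to climb.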
Janis, many thanks for that. What I can say to the panel now is: time to relax, I'm not going to ask you any more questions. For the last couple of minutes of the session, just a little bit of wrapping up, a bit of thought collection. What's really clear is that there is a wealth of data, huge amounts of data, out there, and we've been thinking about how to collect that data together, how to analyse it, how to turn it from data into intelligence; there are more and more techniques based on AI, machine learning and so forth, and that's really clear. On how to use the data to prevent big food safety issues and fraud issues, I think we have explored some of the really difficult stuff about how you share information between companies, and between companies and regulators. It's not straightforward, it's not easy at all; there are many, many difficulties around all of this. I'm a big advocate of FIN and what FIN has done, and I tell you, I've looked across the world and I haven't seen anything else like it, nothing where the amount of information that is shared, converted into intelligence and then actioned, not only by food businesses but by the regulators, comes close. It is something I would really encourage you to look at in more detail.

What we want to do for the participants is think about this whole idea of building a community on public-private data sharing. On the slide there is a URL, openfoodintelligence.org, and I really encourage you to log on to that. It asks a few questions and offers you the chance to become part of a community. There's no charge associated with this and we're not asking you for information; it's really about building this community around information sharing and intelligence sharing. So I would very much encourage you to join.

What I'm going to finish off with is just a big round of thanks. I think
we've had one and a half hours of pretty intensive discussion. I have grilled the panel to a degree they probably weren't expecting, but the quality of the information provided was very open, very honest; no stones were left unturned about some of the issues. So I would very much like to thank the panel. I'd like to thank Sarah from Walmart for the view of the world's largest retailer. I want to thank Peter, because Peter, I know very well, undertakes really complex investigations where information is king. Helen, not only for the work on FIN but also, as one of the leading technical directors in the UK, for showing that degree of leadership and talking through some of the big practical difficulties and technical issues that got in the way of FIN and how they were overcome. I'd also like to thank Janis, because I probably put you under the spotlight more than anyone else, in terms of how you collect the data, how you analyse it, how you quality assure it. So I want to thank all of you for the dialogue and the conversation. I hope the audience of this session found it useful; I certainly did, and there are lots of things I will go away and think about. Wherever you are in the world, I hope you continue to have a nice day, be it morning, afternoon or evening, and don't be too worried about what the next big food safety risk is, because data analytics is going to tell you what it is, and hopefully before it actually happens. With that, I thank you all very much. The last person to thank is Francesca, who has been operating all of the slides for me; she actually didn't trust me to move the slides forward and back, and I think that was a pretty wise call. Francesca, thank you. Thank you all very much, and I hope you enjoy the rest of your day. Goodbye.

Thank you, Chris. Thank you so much, Chris. Thank you, Sarah,
thank you, Peter, thank you, Helen, thank you, Francesca. Bye-bye. Bye-bye.