Good morning, good afternoon, or good evening, depending on where you're connecting from or watching this. It is my pleasure and my honor, and I'm very excited that I'm here with you, Eric, today. Because what we are starting together today is a series of interviews, conversations, relaxed chats, however you call them, that will hopefully shed some light on the magic, the black magic, behind what AI is and how it can become a practical solution and technology for digital food safety. So this is the series of Farsight chats on AI and food safety. I'm Nikos Manouselis, and with me I have Eric Westblom, CEO and founder of Provision Analytics. Hi Eric.

Hey Nikos, it's great to see you again.

Good to see you. What I understand from taking a look at your website, and also from the way that you communicate, is that Provision Analytics is making digital food safety easier.

That's the goal, that is the goal.

This is what is really impressive in my eyes, because I see that you're highlighting a lot how we can make technology easier, how we can make technology more usable. Usability is everywhere in the way that you're communicating the value that you offer, and I think that making such innovative and difficult solutions, like the AI-powered ones, easy to use is essential.

That's right. And you know, whether it's AI or food safety software, a lot of this stuff can be complicated; you're collecting a lot of information on a lot of workflows. We want that to be streamlined and configurable, so that it is as easy as possible to use for food safety.

What exactly is Provision doing? Give us a little bit of background.

We started the company in 2018 with the goal of creating a flexible software platform to digitize all aspects of food safety, quality, compliance, regulatory, all the stacks of paperwork that exist within these various facilities in the food supply chain.
We've really sharpened our focus in recent years to fresh produce and that chunk of the supply chain: the packers, the growers that feed into them, and ultimately wholesale distribution, transport, and cold storage. Everything that touches that supply chain. As we've continued to grow, there has again been an ongoing focus on being flexible, thinking about the number of different food safety standards that are out there: FSSC, SQF, BRC, GlobalGAP, Primus, and a laundry list of other acronyms. How can we be flexible and accommodating to all of those, while making sure that the product is as easy to use as possible?

So the product is a software platform, a software solution, that is doing what?

We allow users to capture the data for all of the forms and records within their food safety program. This could be anywhere from 50 to 100 to 300 different templates. Think about sanitation, pest control, pre-op inspections, metal detector checks, water test records, everything that goes into a food safety program. We work with clients to configure all of those forms, then simultaneously the unique workflows within the business, and then ultimately the reporting and the data insight that can be associated with an audit, or with driving workflow and process improvement internally. So that's everything that we're doing within our product, and we're continuing to expand the modular offering from there.

Who is the typical user of the software, Eric? Who are you expecting to be using the software on a daily or weekly basis?

It's interesting, we've seen that grow over time. It's anybody on a production line or a packing line: growers, quality assurance managers, some exposure to QC, all the way up to supervisors. And then that started expanding out into what I was talking about: sanitation staff, pest control staff, facilities staff. And now we're seeing even some pickup and translation over to pure-play health and safety.
Making sure that the health and safety operations within these facilities have all of the data capture that they need. So that's maybe the organic growth of the product within some of these companies that we work with. And then we're starting now to build it up towards elements of supplier management and vendor management, where you've got procurement involved, gradually reaching into these different areas of the businesses.

Which means different people as well within the organization, going from the production line level up to headquarters, the corporate part of a company.

That's right. And that would speak to both our small clients and some of our larger ones: ultimately, some of that supervisory or even C-suite level is starting to look at this, where there's not only efficiency to be gained or considered, but market access and then liability. It depends on how you're looking at food safety: is it a cost center, a liability or risk center within the business? Or is it a differentiator in terms of market access, and how does it help you maintain relationships with some of your biggest buyers?

And where does AI come into play?

Yeah, where does AI come into play in this world? I think we're still at the bottom of that inflection point in the industry as a whole. Now more pragmatic, general-user tools have come out, like ChatGPT. We use that internally at our business all the time. It's interesting, right? And so our view is: how do you extrapolate something like ChatGPT, or these large language models, and how can you start applying them to our product, or to food safety at a macro level, or whatever that means? And then you could start looking at the perceivably more complicated things after that.

Can you give me a scenario, a use case, where you see such a solution, like a large language model, being useful and relevant for one of your users?
Yeah, what we're starting to poke around with a little bit internally is: if you were to apply one of these large language models to a food safety program, how much efficiency can be gained? Because in food safety you capture a lot of data in a lot of different instances of paperwork. You fill out a sanitation record on a daily basis, or you fill out packing, cooling, storage, or harvest records, whatever these are at the farm level or the packing level. If you're filling out this paperwork, you have a lot of instances of that data captured on repeat. Today, our software does a tremendous job of speeding up the audit preparation component. Instead of taking a whole bunch of binders or paper spreadsheets and organizing them in preparation for an audit, our software can compress all of that into customized tabular reports to pull out what you need, when you need it. So what we're looking forward to with AI, and what we're poking around with, is: could you just dynamically ask an LLM for the information you want, on the spot? Can that be streamlined and sped up? I think so, but there's the combination of: is it pulling what you need, and then what does user adoption look like? And I think that's going to trail the rest of AI adoption across humanity.

So what you're saying is that we have lots of data that we are collecting and trying to organize, coming in different formats, but also at different instances, different points in time, and from different workflows. And you already do a great job, you're saying, in facilitating and making easier the way that people interact with and discover the information that they need within all this data. But what you would like to explore is whether you can do it in a more natural way: this kind of interaction with a system that will provide, or will extract, the information from all the data. Give me an example of a question that I could ask Provision Analytics so that I can get such an answer.
I think a great example of this would be, you know, "Hey Provision," or whatever that equivalent is: "Prepare my audit. Give me all of the paperwork for the last year associated with packing, cooling, and storage. For FSMA 204, pull all the key data elements associated with this product." And again, rather than going and configuring these tabular reports, you could pull dynamic information right out of your database, in real time, in any formatted layout that you want. So whether it's the FDA demanding it or some of your biggest customers, you can shape it into bullets, or you could put it into a long-form readout, right? It's the dynamic nature with which you can pull and organize that data very quickly.

So what you're describing here, if I get it right, and I will try to translate it into simple components in my mind, is, first of all, that the question itself hides the complexity of the requirements for the final format of this report. I don't have to define it using some kind of screen or selections or whatever; I just state my request, and then the machine, the model, translates this into the fixed requirements that it knows FSMA needs. That's one part. Then the second part, if I get it right, is taking all the data that I have in storage, in my databases, and practically preparing the actual report in a format that is required and acceptable by the inspection authority. Do I get this right? Is this what you have in mind?

Yep, you're absolutely correct. And if you consider some of the largest buyers in the food supply chain today, think about the biggest retailers or even the large restaurant consortiums, we're starting to see them creating addendums, their adjustments to core food safety standards like SQF, BRC, whatever that is. Well, what used to be one audit preparation report that you could produce for one standard now has a whole bunch of different variations that are potentially coming down, currently or in the future.
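To make the idea in this exchange concrete, here is a minimal sketch of the translation being described: a free-text audit request becomes a structured report specification, which is then used to pull matching records and shape them for an auditor. Everything here is hypothetical, the record store, field names, and form types are illustrative, and the parsing is a simple rule-based stand-in for what a large language model would do in practice.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical record store standing in for a client's database of
# food safety form submissions (all names and fields are illustrative).
RECORDS = [
    {"form": "packing", "year": 2023, "product": "apples"},
    {"form": "cooling", "year": 2023, "product": "apples"},
    {"form": "sanitation", "year": 2022, "product": "pears"},
]

@dataclass
class ReportSpec:
    """The fixed requirements recovered from the free-text request."""
    forms: List[str] = field(default_factory=list)
    year: Optional[int] = None

def parse_request(text: str) -> ReportSpec:
    """Rule-based stand-in for the LLM step: free text -> structured spec."""
    spec = ReportSpec()
    lowered = text.lower()
    for form in ("packing", "cooling", "storage", "sanitation"):
        if form in lowered:
            spec.forms.append(form)
    for token in text.split():
        if token.isdigit() and len(token) == 4:  # naive year detection
            spec.year = int(token)
    return spec

def run_report(spec: ReportSpec, records=RECORDS) -> List[str]:
    """Pull matching records and shape them as bullet lines for an auditor."""
    return [
        f"- {r['form']} record, {r['year']}, {r['product']}"
        for r in records
        if r["form"] in spec.forms and (spec.year is None or r["year"] == spec.year)
    ]

spec = parse_request("Prepare my audit: packing and cooling paperwork for 2023")
report = run_report(spec)
```

In a real system, only the `parse_request` step would be delegated to a language model, constrained to emit the structured spec; the retrieval and formatting stay deterministic, so the output can still be trusted for an audit.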
So rather than configuring a whole bunch of reports and trying to monitor all these different standards, I think there's probably an opportunity to just train models on what the expectation is, and then call on the information you need for the entity that's looking at you, or for the use case that you're trying to serve, with it being organized for you dynamically, instead of having to handcraft and hand-tune these things over time.

That's interesting. So what I hear you describing is that if I find a way to train and configure my AI model to do these mappings and transformations between the different versions of the same information, of similar reports, internally, then I can use it to generate them again and again and again, depending on the request. Okay, so I have a difficult question for you here, because this is something that I hear from the people we talk to in the market. To do something like this, this kind of configuration, I would assume that you do it together with one of your customers. You sit together next to a customer, they tell you about the typical requests that they get, and then you try to prepare the model that will serve such a scenario. But then what you're saying is that you will take this thing that you have co-developed with your client and go and reuse it for another customer, who might also be their competitor. How does that sound? Is this a blocker? Is this an opportunity? What do you think?

Well, I think, first and foremost, in our case, our clients own their data. So any applicability of the type of thing that we're talking about would be generic and applied to their data sets. Ultimately, there would be a goal to aggregate and anonymize information to provide some other value-added outputs, like industry benchmarks or something like that. Naturally, what comes with that is data governance and accessibility, obviously ensuring anonymity. There are some considerations there that, frankly, we haven't dived into yet.
We don't know how to slice and dice that today. It's all part and parcel of what Provision Analytics, and any other company out there in any other industry, is going to have to figure out: what does that look like? First and foremost to us, the customers own their data. The last thing that Provision wants to do is make a client uncomfortable, where historically some software companies, 10 or 20 years ago, or maybe even more recently, have made the mistake of trying to own data. I think that's challenging. But there are naturally going to be a bunch of challenges around data governance, and what you display, and how and where you're building these things. The LLMs in that example are probably going to have to be isolated to individual client data instances, if that makes sense.

That's a very interesting argument that you're making, or maybe I hear two of them. Let's see if I get them right. I'm taking it slow so that people who watch us can understand the complexities that you're describing, because what you're describing is quite complex. First of all, you're saying: I will try to train a generic model. So it's not something that will be based on, and working only with, my client's data.

Correct.

And second, this means that I don't take my client's data away. I am rather using the use case there, the opportunity, to train a model that will add value to their data. Which means that if I can take the model and apply it in a different case, nothing will happen to my customer's data. Which is, I think, an excellent argument.

Yeah, and I think that's an initial thesis on the idea. We build from there, and the industry is going to have to learn from each other.

But you're also describing a second scenario that is also interesting.
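The isolation Eric mentions, LLMs scoped to individual client data instances, can be sketched very simply: any retrieval that feeds a model's context is filtered to a single tenant before the model ever sees a row. The table, field names, and helper functions below are hypothetical, a sketch of the pattern rather than Provision's actual implementation.

```python
# Hypothetical multi-tenant record table (names are illustrative).
RECORDS = [
    {"tenant": "acme-packing", "form": "sanitation", "note": "line 2 cleaned"},
    {"tenant": "acme-packing", "form": "pest-control", "note": "traps checked"},
    {"tenant": "other-farms", "form": "sanitation", "note": "cold room rinsed"},
]

def retrieve_for_tenant(tenant_id: str, form: str):
    """Only rows belonging to this tenant are eligible to become LLM context."""
    return [r for r in RECORDS if r["tenant"] == tenant_id and r["form"] == form]

def build_prompt(tenant_id: str, form: str, question: str) -> str:
    """Compose the model's context exclusively from the tenant's own rows."""
    rows = retrieve_for_tenant(tenant_id, form)
    context = "\n".join(r["note"] for r in rows)
    return f"Records:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("acme-packing", "sanitation", "Was line 2 sanitized?")
```

The design choice this illustrates: isolation is enforced in the retrieval layer, not by trusting the model, so one client's records can never leak into another client's answers.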
I find it very interesting that you can also develop and train a model for a particular client, where the data that has been used, and the model that you have developed and trained for them, feed into their data and into the solution that you're offering there. So in scenario A, you would be trying to develop something that is quite generic, with a clear disconnection from the data, so that it can be applied to other cases. In scenario B, you're looking for a solution that can be customized, configured, and deployed on a client-per-client basis. Where do you see more value, or which is your heart closer to?

I don't know where I see more value, to be honest with you. I think each has its own highlights. Take one of the tools that I use on a daily basis: we use Notion for all internal documentation. Well, Notion's got a beautiful, simple, easy-to-use AI that will just summarize whatever you need based on your whole company's documentation. That saves tens of hours a week, maybe 100 hours a week, for our team, just summarizing things, preparing everything from internal memos to outbound documentation and emails, the whole works. And I think about that as an analogous use case, where Notion is locked down to your individual data instance, so it can report on your company's information, but you can also pull from outside. There would be some analogue there to consider when, you know, Provision, or your company, or whoever, takes on these types of use cases.

And of course it's like an onion, right? You peel one layer, and then you've got another question for me, and another question, and pretty soon it runs away from us, right? But I really like the fact that what you're describing here as an analogy is something that you are actually trying and using, and that you see value coming out of it.
So you say: we do try to use it in this way, and we get value by letting such a model, such a solution, be trained on our data, and we feel comfortable with that, and confident that nobody else using this technology has access to our internal data. And this is very important. But you also brought up another interesting scenario, an opportunity. What if we have a way to anonymize some of the data that one of our customers has, so that we can create larger, aggregate data sets and have more power in what we get from such AI models? What do you think about that? How close are we to doing something like this?

I can't predict the future on a timeline of how close we are. I think naturally you're going to have early adopters, and you're going to have a number that will never adopt. You can only compare it to something as ubiquitous as Google versus DuckDuckGo, right? With DuckDuckGo, nothing is tracking any of your data, and you're getting a search engine that works. But with Google, I would argue, I, and I think a lot of people, are on the other side of the curve: take all of my information, listen to me all the time, to make my life easier. And there are always going to be those early adopters that, I think, start to take that sentiment. And as the value propositions start to arrive, and the outputs, say, aggregate benchmarking for risk in apples, if you could look at that, gradually that value proposition will grow enough, and then more and more people will start sharing. I think it's going to take a while, because there's going to be a lot of fear, and there's going to be a lot of, perhaps, relative naivety about how these things work, before people realize that there are a lot of positives that can come out of it, and it's not all just Terminator 2, do you know what I mean?
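The "aggregate benchmarking for risk in apples" example can be sketched as well: pool results across clients, drop the identifiers, and only publish a statistic when enough distinct contributors stand behind it, so no single company's data can be inferred. The data set, field names, and threshold below are hypothetical, a sketch of one common anonymization pattern, not a full data governance solution.

```python
from collections import defaultdict

# Hypothetical pooled test results from several clients (illustrative).
RESULTS = [
    {"company": "a", "commodity": "apples", "positive": 0},
    {"company": "b", "commodity": "apples", "positive": 1},
    {"company": "c", "commodity": "apples", "positive": 0},
    {"company": "a", "commodity": "pears", "positive": 1},
]

def benchmark(results, min_contributors=3):
    """Per-commodity positive rate, suppressed below a contributor threshold."""
    groups = defaultdict(list)
    for r in results:
        groups[r["commodity"]].append(r)
    out = {}
    for commodity, rows in groups.items():
        contributors = {r["company"] for r in rows}
        # The company identifiers never leave this function; a commodity with
        # too few distinct contributors is suppressed entirely.
        if len(contributors) >= min_contributors:
            out[commodity] = sum(r["positive"] for r in rows) / len(rows)
    return out

industry_view = benchmark(RESULTS)  # pears is suppressed: only one contributor
```

The suppression threshold is the key design choice: it is what turns "shared data" into "shared insight", since a benchmark backed by a single company would effectively republish that company's private numbers.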
So what you're saying is that if we find ways to start showing the value of having larger, aggregate data sets that combine private data from, let's say, two parties that agree to join their data, compared to the value that we get today by keeping them siloed and protected, then we can make a case, we can demonstrate the value, and we can get more people on board.

Yeah. So here's a very old use case, from far before the advent and commercialization of true AI. I had the opportunity to sit with the Danish egg and food council and their data science team in 2018, the same year I founded the company. And they walked me through the collaborative effort that the entire poultry supply chain had made within Denmark, where they had effectively eradicated Salmonella within their supply chain. It was an open model whereby every hatchery, every grower and processor, everything through the supply chain, had to share their data into a massive data lake. From memory, I'm on video so I'm probably going to get this wrong, but I believe it started in the late 80s, and it's a multi-decade aggregation of data in that supply chain, where everyone kind of looked and said: look, if we all just pool our information together, we can all eliminate risk, and it's a net positive for everybody. That's a kind of utopian view on some of this stuff, where there's going to be competitive information in there, but that gradual adoption over time became this huge output and net benefit for everybody. I'm not necessarily saying that would work in Greece, in your case, or that it would work in Canada, in our case, or in North America. But slowly, this adoption and aggregation of data exposure can provide net benefit, especially to the food supply chain, which is so ubiquitous and so critical to the world, right?

What do you think would be the obstacles or challenges in doing so?

Perceived competitive advantages and landscape.
I think naturally, whether you're a food company, a bank, or a metal manufacturer, everybody thinks they have a competitive edge, and in a lot of cases they do; they do something different or unique, but it's all within the same realm. As long as you're not uncovering or giving away true unique insight, I think there is a positive in the ultimate net output of a lot of this stuff, as long as you're not giving up proprietary competitive secrets.

Do you see a cultural differentiation there? Does culture play a role? Do we have countries that are more open to following this road, like Denmark, compared to more traditional ones?

Yes, I do. I'm not going to name those countries, but I'll tell you that some of them are further north and some of them are further south.

Okay. So there are plenty of factors for us to keep in mind, right? That's interesting.

Absolutely.

What's your biggest fear, Eric, in terms of AI in the food supply chain, and in particular in food risk prevention? That is, in food safety and food risk prevention?

Oh man, that's a hard question. I think naturally these models aren't perfect, and if they start producing data insight, direction, guidance, that type of thing, they're not perfect. They're never going to be 100%. So I think there could be some risk around information that's captured and how people use it. And then, naturally, I believe inherently that people are lazy. This is the backbone of technology: everybody is constantly trying to innovate and create the next thing to reduce workload. And the more dependent people become on tools like AI, the more you kind of stop. Let's take ChatGPT, right? You ask it a question and it pukes out two pages' worth of content. People just skip the proofread, and there could be a whole bunch of nonsense in there. So I think that, in the short term, would be one of the bigger risk points.

That we rely upon AI too much, and we skip the thinking and the verification part.

Yeah, the real work, right?
The real work. And this might backfire; that's what I sense at the end of the day. Okay, so we've had around 20-25 minutes of conversation. We talked about different things and aspects related to AI and data and their applicability to food safety. If you wanted to highlight one thing, the most important thing that we said this afternoon, what would you choose? What should our viewers keep in mind and not forget?

I think that these tools are coming and they're innovating. They're a tremendous way to augment the efficiency of your business. But for now, at least, they are not a way to replace entire work streams. I think it's a mechanism for efficiency, not a replacement. I think that's a big thing. I'll just keep using ChatGPT as the example; it's so ubiquitous, and I know so many people that use it. I know some leaders in other businesses that were laying off staff in the first two or three months after it came out. I was like: wow, you think you can just immediately start eliminating people? That's pretty early in the adoption cycle to start eliminating humans. Again, it goes back to efficiency versus replacement. That's probably the biggest macro takeaway.

So we are still learning the adoption cycle. Let us not fire our people just yet. You and I can keep our jobs for now.

Yeah, yeah. We can keep our jobs and make sure that we are more efficient using this technology.

That's a positive message for our closing.

That's right.

Excellent. Eric, thank you so much. This was Eric Westblom, CEO and founder of Provision Analytics, connecting all the way from Canada with me, Nikos Manouselis, in Greece. Thank you so much, Eric.

Amazing. Thanks, Nikos. Bye-bye.

Bye.