Okay, well, hopefully you didn't give away all of the discussion points for today. We have a word cloud pulled up on there. So if you're on Slido, if you have a use case where you're exploring improved data availability or data quality and where it could benefit your business, please feel free to put that in there. We can leave that up for a few minutes while we introduce the panel, and then maybe we'll look at what people have said. So rather than go through your bios, I mean, we could do that, but they're all online, what I'd love is to hear each of you share a little bit about your data journey, because it's so relevant to what you're each doing in your roles. So Ash, we'll start with you and then come this way. Sure. Good afternoon, everyone. I'm Ash. I'm the founder and CEO of KoiReader Technologies. My data journey began, I would say, in 1998, when I got introduced to neural networks, and I felt they were limited because I couldn't quantify how much salt I wanted to add. So I started trying to integrate neural networks with fuzzy logic, and I published a paper on neuro-fuzzy synergism, which won a national award from IEEE. But I was trying to do that with floppy disks. So we have come full circle, and in 2016 I started tinkering with the same technology that has come to be known as ChatGPT today. We were trying to build LLMs to democratize evidence-based health, and we took the tech and built what I call a visual cognitive engine. Data comes in many formats. The engine we have created is basically kind of a Tesla of OCR. What we are lacking today in the industry is the ability to read information. We can detect objects, but AI still cannot read information like we do and understand it. So we built that, layered many other things on top, and I brought that into the supply chain industry. Background-wise, I've been with the Big Four. 
I started the value chain execution practice at PricewaterhouseCoopers, later expanded it into a digital supply chain practice, and I was a vice president at XPO Logistics from May 2019. I've gone rogue since. Randy. Hey, I'm Randy from Google. I'll go a little bit backwards: I'll give you a little bit about how I use data today, then a little bit of my relationship with data over my career. So I lead product for Google's shopping data and catalog system. Fundamentally, what we're trying to do is bring in billions and billions of product listings from a very diverse and wide variety of sources worldwide and make sense of it. Understand what products are being sold, things like variations and materials; who's selling it, at what price and availability; who makes it; what's the brand's value, what do they believe is true, what's their voice; and then how users talk about these things in the world. How does it show up on social media? How does it show up in terms of reviews? We're doing all that to give a perspective and hopefully match our customers, our shoppers, with the products, the merchants, the brands that match their needs, and needs not just in terms of cost and quality, but also that value that I talked about. So that's kind of my journey with data at Google. Prior to that, I spent time really endeavoring to build a bridge between the virtual and physical world, is the way I put it. I've worked on consumer-grade gaming and ASR systems. I've worked on self-driving cars and, of course, shopping now. And really, I've learned to appreciate the foundation of this data lifecycle: the acquisition, the understanding, the enrichment, and the use of it as a platform, as a foundation, as this launch pad for bringing what I envision to reality. And so hopefully we'll get into a little bit more. Awesome. Very cool. My name is Charlie Roberto. I'm the president and chief operating officer of Gather AI. And I'll start at the end, too. 
So we offer a fleet of autonomous robots, autonomous drones that fly around inside warehouses, automating cycle counting in vertical racking. Our mission is to digitize the physical world, and we'll get into that. But the way that I got here is, as a young buck out of Carnegie Mellon, I was at a sewer robotics startup, which smelled like you think it would smell. We built these autonomous robots to automate this kind of terrible job of inspection, and then we realized: what are we going to do with this mass of data? That was actually the harder problem, the aggregation, analysis, tracking. It's a very well-known story today. From there, I went on to be head of product at a company called AddThis, which offered website tools installed on 15 million domains. We had five billion unique cookies across the web, 100 billion pages a month, just kind of the largest data network across the internet, which we sold to Oracle Data Cloud in 2016. After a few years at Oracle Data Cloud, we got the band back together and took over a company called SparkPost, which runs the world's largest email infrastructure network. We process 40% of the world's B2B and B2C email, and it's another data platform where just this mass of information allows you to extract unique insights and do data activations that aren't possible anywhere else. So now, in supply chain, I'm missing that, right? There isn't a supply-chain-wide pool of data, and I think that's our holy grail: how do we figure that out collectively so that we can all benefit from these insights? We're gonna talk about that too. Okay, wait, but before we go on, I have to bring up the six degrees of separation from Carnegie Mellon, because as we got on the prep call, you all were talking about it. So maybe you can each tell a little bit about your Carnegie Mellon story, just for a friendly university rivalry. 
So yeah, by the way, my first job was as a wireline data logger at an oil company, and I fell off the rig and broke my leg. That's how I ended up in academia, and I went to CMU. That's where you're from? Yeah, that's where I'm from. You painted the fence? I did not paint the fence. Yeah, I didn't drink enough. It can be fixed later if you want. Yeah, yeah, I still could, there's time. I was a Miami boy who ended up in Pittsburgh after MIT denied my college application, but they're officially forgiven. You and Spider-Man. Yeah, we both lost. So I was there at the Robotics Institute for undergrad and masters. I was out of robots for 15 years in enterprise SaaS, and I'm back. Yeah, I'm also Robotics Institute, by the way, so small world. Mine's by proxy. I was born and raised in Pittsburgh, and I actually still work in Pittsburgh. I did attend school, that building, not this building, at the Media Lab at one point. But my connections are more by proxy than direct. I did not go to Carnegie. But in 2005, I had admission to Carnegie Mellon's Tepper School of Business. And I deferred that to 2006, which they kindly accepted. So I went to Pittsburgh one week before the program was about to start. I had taken the GMAT, put in the hard work, secured the finances, and all that. One week before the program starts, I go to a bar, meet my cohort, and something gets into me. And I just walked away and never went back to business school. So that's my Carnegie card. It couldn't have been him, because he didn't drink. Not enough. All right, well, I have some questions for you all. I'm going to maybe target a question at a person, but please, rest of the panel, feel free to chime in. And if you have questions, you can enter them into the Slido and we'll get to some of those later. So Charlie, you mentioned it just briefly in your intro, but talk a little bit about your efforts to link analog and digital data together. Yeah, it's a very exciting topic. So we're at the Media Lab. 
We're in the birthplace of the transhumanist movement. Do you know about this? It's the effort to use technology to transcend our limits as humans, like cyborg stuff, for real. And if that sounds ridiculous to you, my observation is we're kind of already there. Like, we woke up years ago as these sort of omnipotent beings with our phones. We can learn anything we want. I can summon a pizza to be here, from someone I don't know, in minutes. And people even use them for drone strikes. Like, it's kind of out there. All we're missing is the Neuralink. But we're already kind of part of this global digital network; we just have an analog interface, the screen. And as soon as you leave the internet, you're back to being a mortal, like a normal, boring person who can't do these things. Except Sanjay, he's here from the Matrix. Except Sanjay, yeah. So I feel like, as a citizen of the internet, you see this digital frontier as the real manifest destiny, and the question is how we advance that. And it's happening now, enabled by commodity robots. Teslas, basically iPads that drive around all roads all day long, are one example of it. There's a company called Planet. They have a fleet of sun-synchronous satellites, and for the first time, anyone can commercially buy 24-hour satellite data of the whole surface of the Earth. And they're finding new use cases for that, like anti-whaling, where a ghost ship can't run from a satellite. And that's kind of revealed this last-mile problem: there's 100 times more texture and richness of analog data to understand and want to digitize. So that's really what we're trying to solve: how do you know as much as possible about the real world? Like, I know all these things, but I don't know my cholesterol right now. Pythons are taking over the Everglades in Florida, where I live, and I don't know where they are. Robots are really, I think, a major part of solving that problem. 
Ash, you mentioned something about how industry's thinking isn't quite there. So what's missing in industry's thinking, and how should leaders be approaching their data strategies? Sure. When I was a VP at XPO Logistics, I had no clue about AI. I had some clue, but not to the extent that I know today. And I was responsible for the entire digital portfolio, from investment in AI to blockchain to all the innovative stuff. Truth be told, if I had stayed in that role, I wouldn't know what to do today, because there's so much information being presented in front of business leaders. A lot of business leaders woke up one day and AI was right in their face. I think for a lot of business leaders, ChatGPT was that awakening moment. But it didn't happen overnight; we started building LLMs in 2016. So what I've seen in the industry is that there are about 5% of business leaders who truly know how to use AI and drive a business strategy, how to think holistically about an AI strategy, versus the 100 vendors all saying the same things every time a new tech comes out. Then there are about 5% of business leaders who are looking at what others are doing, reaching out to vendors, talking to them, trying to replicate and understand. The other 90% don't know where to even begin. They don't have an understanding. That's the reality of the industry. And I realized that when I was presenting to a business leader, after talking very passionately about AI, visual cognitive engines, the art of the possible with automation. After a one-hour presentation, I was asked, what's a GPU? And that's when it hit me. I've been on several calls like that since, and that's why it's very important to talk about these things in this forum, so educational sessions can be held by senior leadership for their next in command. People are probably gonna come up to you afterwards and ask you what a GPU is. So actually, though, that's a great segue. 
Before we get into probably everyone's top-of-mind topic, generative AI, because what conference doesn't talk about it, we have a question for the audience: what is your timeline for introducing a policy and a practice around broad company use of generative AI applications? So if you wanna put that up, we have a few choices of answers, timeline-based; and for those of you who have never heard of it, welcome, we're glad to share some expert perspective from this panel. But we'll let people kind of post up their answers. So obviously we have to talk about it. What I love about the first session is that the foundation of automation is data. You can't automate if you don't have access to the data. And I know we're gonna cover that in a little more detail today, but Charlie, you had said the world has gone AI crazy. So we'll start with you, but I wanna hear from the rest of the panel as well. What makes generative AI great? Or, you could pick either side; maybe you all should debate the pros and cons of generative AI. I'm a fan. We've all kind of witnessed the miracles that it's producing, if I can say that, and probably also been into the uncanny valley of where the weird spots are. But I feel like it's widely recognized as a technology innovation. We have LLMs, we have transformers, we have all these bits of it. But I almost feel like those innovations are inevitable; neural networks have been around since the 60s, the 50s. Thank you, the 40s, yeah. Oh wow, okay, so I'm dated. But I think the harder part of getting to the AI revolution that we're experiencing now is actually the training data set. And we're only getting these things because there is an internet that provides unlimited reams of information to train on. If it wasn't there, it doesn't matter how much technology you have, you're not gonna get those sorts of insights. And I love this example you just gave about the divide: a data set of 17 million versus 70 million completely changes it. 
So as you look at your own operations, and especially within supply chain, the quantity and quality of your data is gonna really govern, I think, how much the technology can do for you, so get your house in order there. And then looking to the future, how we can share data, I think, is gonna be critical for us to realize the same kind of miracles that we've seen in ChatGPT. So there are 10% of the people who, maybe jokingly, maybe not, said "what is generative AI." So Sanjay, do you wanna describe it? Because, Chris, I don't know if you were here, but as Chris was exiting his panel, he said people say doing math on the computer is AI. So maybe you wanna just give a brief definition of generative versus other types of AI. So the old AI was more pattern recognition and a little bit of prediction. For example, recognizing a voice, or the face recognition on your iPhone, or when you go through Global Entry and it does face recognition, that's all of that sort. It's based on neural networks: deep neural networks, convolutional neural networks. Generative AI is a different thing. It figures out, for example, how Sanjay sounds, and then it can generate new statements, fake audio of me doing stuff or saying stuff, right? You've seen the ones of Tom Cruise, et cetera, where it's not Tom Cruise. But that's only the beginning of generative AI. It starts with voice, then it goes to video, it goes to images, like DALL-E, Midjourney, et cetera, et cetera. That's the beginning of the beginning. Generative AI can also be used to design a structure like a building and make sure that it's structurally sound. And then you can say, you know, the conference room is too small, I want to make it bigger, and it will come back with a bigger design. So we're right at the beginning of a new era where you can almost, again using LLMs, say to the machine, I want this, and it will design it for you. 
So we're in an era of generating computationally the new realities that we create manually today. Music: that's why we had the writers' strike, right? Media, all of that, is going to be perhaps the first set of industries to be impacted by it, but I think all industries will be impacted. Randy, one thing: I was really interested to see that about 60% of the respondents say they already have a policy and practice around gen AI in place. You can hit on that if you want, but I think part of what I find fascinating is bringing AI to the non-AI practitioner. Like, I don't have to be a data scientist to interact with an LLM. So maybe you can highlight a little bit about that. I'll build on two points, I think, that Sanjay mentioned, because they resonate with me and answer these questions. One is the right tool for the right problem, right? With gen AI, we used to kid around that if you want to go solve a problem that's unsolvable, throw AI at it and it'll just magically happen. Within AI and these iterations and evolutions, I look for the right tool for the right problem. Don't throw generative AI at everything. That brings me to the second point, which is maybe why some people have policies in place: this changes our considerations with AI, as you mentioned, right? How are we responsible with it? How do we consider the authenticity of it? How do we consider sources that a machine generated versus ones a human generated, right? How do we convey this trust back to users? That becomes very, very top of mind. And some of it is, yes, I have a policy of where to use it and when to use it. The other part is, hey, here are places where maybe we want to be a little bit more thoughtful about when and where and how we convey it to users. So both of those. And the last thing I'll say is cost: if you don't know what a GPU is and you go after some gen AI stuff, you'll very quickly find out what a GPU is, and what many of them cost. 
You know, there are industry-wide solutions coming out; when should I use them? When do I need to build my own training data? Big considerations that I think everyone needs to start thinking about for the industry. And maybe one other area, ancillary to this: unsupervised versus supervised learning. I think you had some thoughts on when you would use each. Yeah, I mean, a simple way to say it, because once you get into the nuances there are a lot of gray areas: if you give it labeled data sets, you know, dog, cat, dog, cat, dog, cat, that's one way to learn. The other is unsupervised, or self-supervised, where it's just reading text. Most text, thank heavens, has correct grammar, right? If you're reading English articles on CNN, whatever. So it's learning the grammar, and it's also learning the probability of one word following another. "I will meet you at the..." well, the next word is likely "restaurant," it's not gonna be "garbage can," right? So "restaurant" is more likely. To build these models, the AI system has to read large volumes of data and get really good at that. That's self-supervised, unsupervised. Supervised is where you're feeding it examples, you know, and it's learning patterns. Fair? Yeah. Okay, you said garbage can, so that leads to garbage in, garbage out, which, Randy, you brought up. So how are we doing on the data hygiene front? Because these systems need good data versus bad data. What's the progress? What's holding us back? It's a big question, so I'll give you a faceted answer. I think in a closed system, if I look especially at the applications, industries, domains where data has long been recognized to hold value for effectiveness and efficiency, with the tools, the techniques, the technology innovations that we've heard about and continue to hear about, we've moved forward, we've gotten better. The facet here is that the world doesn't stand frozen in time, right? 
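Sanjay's "I will meet you at the..." point, a model learning the probability of one word following another from raw text, can be sketched as a toy next-word count. This is a deliberately minimal stand-in with an invented four-sentence corpus; real LLMs learn these statistics with transformers over vastly larger data, not literal bigram counts:

```python
from collections import Counter

# Toy corpus standing in for the "unlimited reams" of web text.
corpus = (
    "i will meet you at the restaurant . "
    "we ate at the restaurant . "
    "he waited at the restaurant . "
    "she threw it in the garbage can ."
).split()

# Count which word follows "the". The text itself provides the
# training signal, no human labels needed: that's self-supervised.
following_the = Counter(
    corpus[i + 1] for i, w in enumerate(corpus) if w == "the"
)

total = sum(following_the.values())
print({w: round(c / total, 2) for w, c in following_the.items()})
# "restaurant" comes out far more probable than "garbage"
```

Contrast this with the labeled dog/cat examples of supervised learning, where a human must tag every sample before the model can learn from it.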
And I think as more industries, more segments, more applications come to understand this, the ways in which data can be generated change; that data frontier is growing. And so I actually would say there are probably great examples where we can look and say we've done pretty well, others where we're just keeping our head above water, and others where we've probably lost ground and may never regain it, right? And we should become comfortable with that. So my faceted answer is: it's great, but with this explosion of use cases and everything else, it continues to be somewhere with lots of headroom, opportunity, and innovation yet to be had. So that's on "are we maintaining." I wanna answer the challenges bit. And I'm product, so, I know this is a data and tech kind of panel, but I'll take a user bent on this one a little bit. Garbage in, garbage out. One of the things that we don't fully recognize, at least that I've not always fully recognized in my career, is that there's a friction, right? Good data, high-precision, correct, rich, complete data, comes at a cost. And that cost is the ease of use for the data providers, the data creators: how am I gonna get that? So one thing that I like to challenge myself and my teams on is, can we really deeply empathize with the creators and providers of this data? How often is what we're asking them for part and parcel of their day-in, day-out life? We heard about Walmart and big ecosystems; I mean, data interchanges are there. But how about the local store down the street that someone from my neighborhood owns, right? There's a very different expectation of how they're using the data. How comfortable are they at reformatting data as it comes in? That's another challenge, especially across a variety of providers. And the last thing is that there's gotta be some level of value for this effort, so don't forget it, right? 
It's not just, oh, I'm gonna give you the data and I get a return on it; the person entering the data, what's their incentive, when they get it wrong, to fix it, right? Each piece of data costs something to provide. How do I look at that? How sensitive are my models to errors in the data? Maybe if there are three examples and one outlier, it's a huge impact. If it's one in 100,000, am I as worried about it? So really thinking about that, and really challenging the tech and where we're going: hey, we will never get to 100% perfection. Roll back a minute and say, how do we address those challenges from a deep user understanding? What do I control? What do I not? Super important here, and something that will never go away. Can I just say something to that? I think there are sort of two categories here. One is just errors in the data. We all have errors in the data. And for companies who want to build AI systems on top of their digital platforms, AI is going to be a very unforgiving tenant. You'd better get your digital infrastructure right, because it will train on your data. If your data is not correct or good, it's gonna have incredibly bad outcomes. That's point number one. And by the way, just as a plug, some of my students created a company called Cleanlab, which cleans data; I was on the board, and they just raised a big round. So cleaning data and getting the digital infrastructure right before you build on AI is an important thing. The second thing, I think, is bias. I'll give you an example of bias. There was this training example where they tried to train AI to recognize wolves versus dogs, like huskies, right? Wolves versus huskies, and they got it to work. And then they realized they weren't actually separating wolves from huskies; it was just that all the wolf pictures had snow in the background. So they were identifying snow, right? So leaving aside human bias, data bias is a very serious problem. 
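The wolf-versus-husky anecdote can be made concrete with a toy sketch. Here the "model" is simply the snow-background rule that, per the story, the network effectively learned; the tiny data set is invented for illustration:

```python
# Each sample: (fur_color, background); label: 1 = wolf, 0 = husky.
# In the biased training set, every wolf photo has a snowy background.
train = [
    (("grey", "snow"), 1), (("grey", "snow"), 1), (("white", "snow"), 1),
    (("grey", "grass"), 0), (("white", "grass"), 0), (("grey", "grass"), 0),
]

def accuracy(rule, data):
    """Fraction of samples where the rule's prediction matches the label."""
    return sum(rule(x) == y for x, y in data) / len(data)

# The shortcut the network found: predict "wolf" iff the background is snow.
snow_rule = lambda x: 1 if x[1] == "snow" else 0

print(accuracy(snow_rule, train))  # 1.0 -- looks perfect on the biased data

# Deploy on photos where a husky stands in snow and a wolf on grass:
test = [(("white", "snow"), 0), (("grey", "grass"), 1)]
print(accuracy(snow_rule, test))   # 0.0 -- it was detecting snow all along
```

The spurious feature gives perfect training accuracy and total failure once the correlation breaks, which is exactly why data bias is so hard to spot from metrics alone.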
And of course, when it shows up in human beings, it's a wretched situation. IBM, for example, and others have rolled back their face recognition algorithms because of these biases. You heard the anecdote of a skin cancer detector that trained on the presence of a ruler they would put in the image to measure the size of the... So just look for rulers, exactly. Okay, yeah, wow. Yeah, so there's still a lot to learn, right, in terms of how we train the machines better. You know, Randy, you talked about creators and you mentioned the word incentives. So Charlie, I wonder, right, about incentives: why is data sharing such a struggle? Okay, this is my space, right? So when you asked: we're all about sharing data across supply chain ecosystems in order to improve business efficiencies. And yet for the better part of, well, my career's really long, I think they put it on the website, but for the better part of all that time... Aged data. Exactly, that's bad training data. But for the better part of all this time that I've been working in supply chain, we continue to bump up against these barriers to data sharing. So why is it such a struggle? How are we gonna get all these different partners? It's not a linear chain, it's very much a web. How do you get people to share the data? Because I think that just enriches the value and makes the models better. But what's preventing that? I think it's a very human, very normal feeling: it's my data, it's my IP, it's my competitive advantage. It's very unique, so it makes me special, right? Yeah, I have people and operations and all this sort of stuff, but really, I'll add culture. Culture and data are the two things. And Meta agrees, by not releasing the model weights. So here's an example, right? And I really did see this earlier. If I ask you, hey, give me all your personal information, I wanna know the names of all your friends, I want all your personal pictures, and I want you to tell me everywhere that you go... A Zucker-book? 
Yeah, you're gonna call the police on me, right? But if I call it Facebook and pitch it as the cure for loneliness and human connection, then it's like, oh, this is awesome, right? Now I'll keep up with my friends. So the incentives are what's missing. In AI, they talk a lot about vertical versus horizontal AI. What that means is: if you have an AI solution, are you solving a direct problem where it's applied, or are you supplying technology that people build on top of? And I think data sharing is not a horizontal problem, meaning it's not gonna get solved by saying, hey, we're gonna have a co-op, everybody throw your data in, you'll get data out, and what you do with it is up to you. That, I think, is a non-starter. But if you solve real problems for people in a localized way, and then you get the incentives aligned, people will say, oh yeah, that problem's really meaningful to me, and I will contribute data to a bigger pool as long as I have control and safety and I understand how it's being used. I'm willing to do that. So I think we're missing a catalyst, and it's gonna come from companies who solve problems in the real world. So given that point, Ash, we talked a little bit about data ownership, but the first thing Charlie said is, it's my data, I own it. So from your perspective, because you're working with people who need to give you data in order for the systems you're offering to help them, what are some of the considerations you think need to be given around data ownership? Whenever I'm going into corporations, the first conversation that I've started having is: what's your AI strategy? I think we've got to start from there, because you started talking about generative AI, which is the end product of all the data and all the AI systems that drive efficiency, accuracy, safety, et cetera. In a lot of corporations, we moved from client-server into the SaaS world. 
When we go to these organizations, the procurement team produces a SaaS contract, the AI vendor redlines it, and the procurement team asks, why are you special? That's the kind of conversation taking place, because a SaaS contract is not valid for the AI world anymore. Data ownership: who owns the machine learning models? What models is the vendor bringing? When you're working with a vendor, how do you protect your data? What data do you share, and where does that ownership reside? And while you're doing this in a collaborative way, how do you push the organizational goals forward? And the legacy systems, the SAPs, Oracles, Blue Yonders of the world, were designed for very structured data. Now we are producing video data, data from robots even. We are talking about automation, but how do you know that your robots are meeting their SLAs? If you look into the industry, at Ocado and AutoStore, implementations were paused; what happened? Robots are not perfect, they fail a lot; what happens then, right? So how does this whole interaction come together? Because data has gone beyond structured; it has taken unstructured forms, which include visual images and video. And we are getting into some of the territories Sanjay talked about. So if we can get to that level, where do data ownership and protection stop, right? A lot of what I see in the industry is that, because AI arrived so fast, a lot of organizations are trying to build IT systems, single applications, and push the data into a Snowflake. And talk to any large organization about business and IT: business wants to move at the speed of light, and IT has a three-year roadmap, right? By that time, the era that we are in will be over; we'll be in, you know, AGI, artificial general intelligence. We are in artificial intelligence right now. 
So we've got to really take a holistic perspective on how we supercharge without losing data protection. That's the kind of conversation you've got to have with your partners and vendors, and you cannot have it unless you start from an AI strategy, where business, IT, and your partners all come together, and you've got to have the right AI-first partners. I was in a conversation with a multi-billion-dollar company in New Jersey three or four weeks ago, and I won't name the company, but it's a really large automation company, and the CTO stood up there and said: your assets don't have license plates; we can train an AI model. These assets are coming off a production line; you don't even know what the attributes look like. You need at least 2,000 images minimum to train an AI model, and if you want accuracy, that has to go to like 50,000 or more. So where do you house this data? What systems are you going to use? How does your contract look? That data strategy, that AI strategy, is paramount before we can make any progress on AI, and some organizations are doing it well. The rest, you know, have got to come and talk to the right partners about their AI strategy. You know, so I think we've covered some good hygiene, and we've talked about ownership. I want to hit a little bit more on availability of data, and it struck me in the last session when they were talking about automation. I had the opportunity to visit an automated e-commerce fulfillment center, and we went to the decant section, which is where they're ingesting product either directly from suppliers or from one of their standard distribution centers. And the first thing I was struck by, and this is because I work in standards and we're all about whether you can read the barcode and get what you need off of that, or the RFID tag and get what you need. 
And so the first thing that the person standing at the decant station has to do is key in the expiration date of the product. So you can scan a UPC off of, say, a bottle of ketchup, but the expiration date is not machine-readable; you have to key that in. And if you make an error there, everything else that happens to that ketchup in that fulfillment center is probably flawed. So I was thinking of both of you, Charlie and Ash. Anyway: this absence of machine-readable data, is it hindering automation? And what do we do about it? So when I was talking about the Tesla of OCR early on, this gap in the industry was one of the key points. We have filed over 20-plus patents over the last six, seven years, and one of the first technologies we pioneered is what I call autonomous OCR. If you look at optical character recognition, what we call reading, there are probably hundreds of vendors globally today. Every vendor hard-codes that at this specific pixel you will find this information, or they're training based on fixed formats. That flexibility is what they don't have today; you can talk about Silicon Valley, really massively funded companies, but just because they sold doesn't mean they have the tech. So what we did is create technology which can actually read information in real time and make sense of it. In a warehouse, we cannot talk about inventory accuracy if you cannot read labels. And the expiration date is another piece of data on the label. And now the labels are becoming more expansive with the industry's adoption of 2D barcodes. There's more information: there's the expiration date, lot code, visual information, even color. So you've got to read all of this, and that's basically what we did when we built this visual cognitive engine. 
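As a concrete illustration of the kind of data a 2D barcode can carry, here is a minimal sketch of parsing a GS1 element string for three common Application Identifiers: GTIN, expiration date, and lot code. It is not a full GS1 parser (real element strings define dozens more AIs and stricter rules), and the sample label value is invented:

```python
# Parse a GS1 element string as encoded in, e.g., a GS1 DataMatrix barcode.
# Minimal sketch covering three common Application Identifiers (AIs):
#   01 = GTIN (fixed 14 digits), 17 = expiration date YYMMDD (fixed 6),
#   10 = batch/lot (variable length, terminated by the FNC1/GS separator).
GS = "\x1d"  # group-separator character standing in for FNC1

FIXED_LEN = {"01": 14, "17": 6}

def parse_gs1(data: str) -> dict:
    fields, i = {}, 0
    while i < len(data):
        ai = data[i:i + 2]
        i += 2
        if ai in FIXED_LEN:                  # fixed-length field
            n = FIXED_LEN[ai]
            fields[ai] = data[i:i + n]
            i += n
        elif ai == "10":                     # variable length: read to separator
            end = data.find(GS, i)
            end = len(data) if end == -1 else end
            fields[ai] = data[i:end]
            i = end + 1
        else:
            raise ValueError(f"unhandled AI {ai!r}")
    return fields

# Invented example: GTIN + expiry 31 May 2026 + lot "LOT42A".
label = "01" + "00614141123452" + "17" + "260531" + "10" + "LOT42A"
print(parse_gs1(label))
```

With a label like this, the expiration date the decant worker currently keys in by hand comes off the scanner along with the GTIN, in one read.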
It has the ability to read expiration dates, broken barcodes, visual information, and color, and connect that with what's happening around it. When you're dealing with a decanting operation, you can read the barcodes, but if some product fell on the floor and you don't know it, your inventory count just went off, right? So we built the whole engine to read that. And this is why I'm saying these things need to come together before we can talk about accuracy and efficiency in the supply chain. Charlie?

I'll add to that, yeah. I think ultimately, reading the data is the responsibility of the technologist; it's our job. We're not going to be successful if we go to our customers and say, oh yeah, we have this great solution, just relabel everything. That's a non-starter. So we have added OCR to our product, we have case counting in pallets now, and we go to great lengths to just work with what's there. I think that's going to be the dominant mode. There are limits, right? I talked to a prospect the other day: they have these really nice pallet barcodes, and sometimes the forklift drivers put the pallets in sideways. Or if you're working in a retail store and the UPC is on the back of the bottle, there's not much you can do. So there's not a lot of compassion for the robots, but I feel like robot accessibility is going to be a thing. We have moved from barcode readers to scanning codes with mobile devices and QR, and we're actually working with GS1 on a common location standard: how do you mark an inventory location? What would be a standard label that robots can easily read, where you might not be nicely framing up your phone or a barcode reader to the code, where you might just have to opportunistically collect data at a glance? How do you work better in that setting? The more we do stuff like that, the more it will enable dynamic environments.
We wouldn't need to have a digital map of the warehouse; we could just fly and SLAM our way through it if the labels are correct. So it's really our job, but there are things we can do to advance the field.

Yeah, and on that topic, I'd like to introduce one more perspective that I personally find very helpful: macro and micro. AI systems like what Sanjay was talking about earlier (this is a wolf, this is a husky): that's a macro problem, cats and dogs. Micro is when we're looking at every minute thing in front of us that we humans act on, because we have that perception capability. When I see something in a second, when I'm looking at this room, I'm forming an opinion of what's going on based on information, whether it's something minute or somebody just standing up. So label information: if it's a broken barcode, how do you fix a barcode? AI systems are capable of that, reconstructing a barcode. That's micro. If a person is grabbing six products in a fist, you can't even see what the person is grabbing, and if you can't see, how do you process? These are micro-level problems, but all of them drive accuracy; these are examples from person-to-goods or goods-to-person picking applications. So we've got to think in terms of macro and micro when we're talking about data in AI, and bring it all together. That's when we'll get to the 99.9% accuracy and efficiency that supply chain organizations seek.

You know, I can add another example. 2D barcodes are all the rage, QR and Data Matrix among them, because they store more information, but they're not very robust to material-handling damage. You get a little crease or a little scratch on one and it's gone; you lose too many bits. 1D barcodes are far more robust to glare, shrink wrap, all kinds of stuff like that.
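The redundancy point can be made concrete with the simplest form of 1D resilience: UPC-A's mod-10 check digit. If exactly one digit is obliterated, the check equation pins it down uniquely. A minimal sketch in Python; the helper name and the sample digits are illustrative, not something from the panel:

```python
def recover_digit(digits):
    """Recover one obliterated UPC-A digit (marked None).

    UPC-A check equation (1-based positions from the left):
    3 * (sum of odd-position digits) + (sum of even-position digits)
    must be divisible by 10.
    """
    missing = digits.index(None)
    # Odd 1-based position -> even 0-based index -> weight 3.
    weight = 3 if missing % 2 == 0 else 1
    total = sum((3 if i % 2 == 0 else 1) * d
                for i, d in enumerate(digits) if d is not None)
    # Solve weight * d = -total (mod 10); 7 is the inverse of 3 mod 10.
    inverse = 7 if weight == 3 else 1
    return (-total * inverse) % 10

# A valid UPC-A, with the eighth digit (a 9) smeared out:
damaged = [0, 3, 6, 0, 0, 0, 2, None, 1, 4, 5, 2]
print(recover_digit(damaged))  # → 9
```

This only recovers a single unknown digit; real scanners also lean on the bar-level redundancy discussed here, where each digit's bar pattern can survive smears running along the bars.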
So there are real-world considerations that don't always emerge when you're at the drawing board. I think there will be some combination of limits on how much data you can store in a physically readable label, plus APIs and web standards for accessing all the rest of the metadata associated with the products.

You got in my head on that one, because the biggest challenge is that the more data, the bigger the barcode. We look particularly at one-dimensional barcodes like the UPC, which has been a 50-year solution for point-of-sale retail because it does one thing really well: it goes beep. Oh, and by the way, it looks up a price, right? But to your point, the price isn't resident in the barcode; the price is resident in a system. And this is what you said to me a couple of years ago when we were thinking about a cold-chain application: all the heavy lifting is being done in the intelligent software. Now, figuring out whether that's in the cloud or at the edge, I think that's the real challenge across a lot of industry use cases. But the reality is, you need unique enough data embedded in whatever the machine-readable carrier is. And I would also say 2D barcodes probably aren't a 50-year solution the way 1D was. It's going to have to be a combination of sensors, because now you're trying to gather so much more data about things as they move through the supply chain. But I don't know if you have a perspective on that.

Yeah, look, we walk around this conference with name tags, right? That's a human-readable format. But if I know Melanie, I don't read the tag; I say, hey Melanie, how are you? And machine readability is the link between these two worlds we live in. We live in an analog world, we pretend we're in a digital world, and that link is readability. Machine readability has to be robust. We've got to focus on robustness, right?
With QR codes you can put in more data, but that's not the essence of the link. The essence is: can I tell what the identity is? The data can be transmitted in other ways, and I think we keep confusing that. It's about identification, you know? And a 2D barcode has very little redundancy compared to a 1D barcode, where the lines are longer, so a smear blurs it but doesn't defeat it. So I think we have to work on the resiliency of the machine-readable format we use, or we just go to recognition based on CNNs, et cetera. Resiliency, persistence, and uniqueness; everything else can be handled by the data.

I'll mention a small anecdote. There's a company out of MIT called Dust Identity. Yeah, Dust, yeah. They care about authentication of luxury goods: diamonds, handbags, anything else that can be forged or fenced. If you haven't heard of them, they make unique labels using diamond dust, and the diamonds end up in whatever random configuration they land in, which can't be forged. The funny thing is, you can't even see it; it's nano-scale. Lasers read it, and it can't be copied, right? So there are all different facets to explore, and really it's an ID problem. Yeah, agreed.

All right, we have some questions from the audience; you can put them up on the screen. I have one last question for Sanjay, and then maybe we'll do the most upvoted questions. One thing for you, Sanjay, because you talk about how we're going to implode on ourselves if we aren't careful: what should the considerations around data pollution be? What should we be concerned about?

We have so many ways to screw things up. For example, right now a lot of the news articles on the internet, a lot of the content, is AI generated.
You may have read about AI bots that can take the transcript of a game and write an article about the game, right? And there's another whole concept: if you train machine learning on AI-generated data, after a point you get to something called model collapse. It's sort of breathing its own fumes; the AI stops learning and actually collapses. So we're in a weird place right now. Until recently the data was fresh, and that's what we were teaching AI with. With LLMs there's so much data that it corrects itself, but if there's polluted data, you get all sorts of other craziness happening. My worry is that we're going to depend on it too much. It's the uncanny valley, as you said, and it's going to lead to bad consequences. LLMs are not a knowledge source; they're a grammar, semantics, and next-word thing. That's it. And the vast majority of ways in which people use ChatGPT right now treat it as a source of knowledge. That right there is a recipe for disaster.

You know, I heard Charlie or Chris mention human in the loop in the first session. One interesting thing, and I saw it here at the Media Lab earlier this spring (I think you and I talked about it), was that a human unassisted by AI could make a decision, and that decision was right X% of the time. When given good information by an AI, the rate of good decisions went up. But when given bad information by an AI, the human second-guessed their own intuition and made a worse decision. So that human-in-the-loop element can go both ways.

Yeah, autonomous driving. You see the Tesla accidents, et cetera. People over-rely; rather than overruling the autonomous car, they let the car overrule them. This is just a whole new world. We're not in Kansas anymore, for sure.

All right, it looks like the audience wants to hear this one. Randy, maybe we can start with you.
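Sanjay's model-collapse point can be made concrete with a toy experiment: fit a Gaussian to data, sample from the fit, refit on those samples only, and repeat. The learned variance tends to shrink generation after generation until the model has collapsed onto almost nothing. A minimal sketch in Python; the generation count and sample size are arbitrary illustration, not anything discussed on stage:

```python
import random
import statistics

def collapse(generations=600, n=10, seed=0):
    """Refit a Gaussian on samples drawn from the previous generation's fit.

    With no fresh real data, the fitting noise compounds and the learned
    variance drifts toward zero: the model 'breathes its own fumes'.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0               # generation 0: the real distribution
    variances = [sigma ** 2]
    for _ in range(generations):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        mu = statistics.mean(sample)   # refit on synthetic data only
        sigma = statistics.stdev(sample)
        variances.append(sigma ** 2)
    return variances

v = collapse()
print(f"variance: {v[0]:.3f} -> {v[-1]:.2e}")  # shrinks by orders of magnitude
```

With only ten samples per generation, a few hundred rounds are typically enough to shrink the variance by many orders of magnitude; mixing in fresh real data each round is what prevents the drift.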
What is the lowest-hanging fruit for applying AI solutions to supply chain processes? Where are the quick wins?

Oh, man, quick wins. For me, it's really the basics of identification: the products, the supplies, where they are, et cetera. And then, instead of trying complex models, we see a lot of benefit in what you two were talking about: don't over-rely on one signal, but build confidence models around it. That's where we get a little more certainty. Understand that the reality of the world is not always pristine; rely on pristine data when you can, and compensate elsewhere. So that's probably it from my point of view. I'm sure these guys have more.

Yeah, so maybe this is just rewording the question: what supply chain process (I have an opinion here, but I'll hold off on it for a minute) is currently quite broken, could probably have been solved decades ago, but we're still fighting it, where maybe AI can be the unlock? There are probably many, but what supply chain use case comes to mind for you, Ash?

Right. So when we started building, we built a platform we call Koi Vision. Think of it as a set of Lego blocks you can use to build any number of applications, because that's the opportunity. But what are those Lego blocks? Once you look into those and align them with cargo, process, inventory, and asset (the four things in any supply chain, irrespective of industry), one common denominator is label scanning. So we started working with a Fortune 100 corporation, and their label scanning accuracy was abysmal. Essentially, we went in and demonstrated how they could completely change the game on label scanning accuracy, because labels are still cheap compared with diamond dust and nanotech and other things.
Those mean infrastructural change, but the label is where the journey begins, inbound and outbound, in your supply chain, and we have to be able to scan it properly. What we demonstrated, I'll tell as a very quick story. When I went in, I said that we are among the top 1% of AI companies in the world, and the supply chain director scolded me. She said, I'm very concerned when you say something like that. This was two years ago. Two months later, I was in front of the global CEO and the entire C-suite team. They had cartons in front of them. She walked up to a carton that had overlapping labels and multiple barcodes, 2D and 1D, faded, and she took a pen, punched a big hole into the barcode, and asked me to scan it. Five seconds later the results were on the screen, and people were blown away: how are you doing this? What we demonstrated, against any vendor in the industry, is that we can elevate label scanning accuracy by 25 to 76%, to near perfect. Once you can do that, we can talk about dock door visibility, inbound and outbound; we can talk about decanting operations; we can talk about inventory tracking and all these different applications. I think that's the beginning of the journey, and marrying that with other visual objects, visual information, visual data naturally comes into play.

Charlie? Oh, what were you going to say? I want to hear Charlie's first. Hold on, then I'll tell you.

For us, it's inventory. We built the company around it after the 250 or 300 discovery calls that we did. You have these error-prone manual operational processes for material handling, and then you put more error-prone labor on top to try to fix that, and it's expensive, and it doesn't work. So that's the easiest thing, the lowest-hanging fruit.
It's very common for me to do an ROI analysis with a customer and find out that they're losing five times more money on labor, when they go to pick and the stuff's not there, than they do on the inventory itself. And whether or not you use robots, or use our product, you should just do inventory more often, and we can automate that. Tim Barrett's in the audience; he has our drones in six of his warehouses. And we have a new customer called Stadium Goods: they have 500,000 individual shoeboxes in a warehouse, and it would have been impossible without inventory automation. To me, that's a very obvious one.

I do agree. I would say the one that keeps plaguing us is forecasting. Forecasting is terrible; we're still overproducing, underproducing, missing on-shelf availability. I was recently rereading some reports from Capgemini. They were written in 2009 and called Supply Chain 2020, and they talked about how we were going to solve the on-shelf availability problem. Well, now when you think of Supply Chain 2030, we're still trying to solve on-shelf availability, right? And now we have omnichannel thrown into the mix, so where are you pulling the inventory from? So foundationally, yes: what do I have, and where is it? That goes to the heart of unique ID and location. But it's also getting inventory to the right place and having it available while maximizing profitability; we're losing so much to food waste, so much profitability, because we're still struggling to get forecasting right.

Just to make you feel better, we were trying to solve that problem in 2000 as well. Yeah, exactly; every decade we roll it forward. All right, let's see. So we're talking about low-hanging fruit; I don't know if this next one is distinct.
In fairness, because I don't want to over-commercialize GS1: I think there is a common modern data model standard for the exchange of information. GS1 has certainly built one. We put it in place as an outgrowth of the RFID work we did in the early 2000s, with physical event traceability all the way down to the serialized unit level. I think adoption is one of our big barriers. But maybe we'll skip that one and talk about the next question: the most exciting use cases. I don't know, Randy, do you see one up there that you'd like to answer? Or you could make up your own, too.

I think the exciting bits, and maybe this goes beyond the AI supply chain, because honestly I also deal with the end users, are some of the summarization and creation pieces that are happening: making sure we're not over-biasing on certain things, but really making it more custom and more obvious to the user. How does this look on me? What does this product do? That's exciting to me. Within the supply chain, I'll give it to my colleagues here, who are closer to it, to say what's exciting there. The last bit I'll say, and maybe this is a general comment, is: suspend disbelief. Sometimes we look at problems that couldn't be solved with previous iterations of AI, and they can be solved now, faster, with smaller data sets, and so forth. That's actually what's most exciting to me right now. I'm getting older, and things that weren't solved, that couldn't be solved with the traditional methods, are becoming possible again.

All right, we are standing between this group and their lunch, so please feel free to talk to the panelists after this.
I know you all have questions, maybe specific to what you heard up here on the stage, but I just want to thank Ash, Randy, Charlie, and Sanjay again for their time today. I appreciate it.