Hi, everyone. Thanks very much for attending. It's good to see so many familiar faces here. And yeah, I probably saw some of you in the EU session. My name is Ed Beiford. I'm a product manager. I work on messaging and streaming, primarily working with some of the RabbitMQ teams at VMware, helping customers and prospects understand how to use event-driven architectures and how to figure out all the hard things about distributed systems, messaging, and streaming events. In my spare time, I like to train for triathlons. I enjoy it. I mean, nobody really likes running, I guess, but I do my best. And obviously, recently, keeping track of how many prime ministers we have in the UK, because that's been a revolving door, which is embarrassing. All right. Thank you, Ed. And I'm Marilyn Basanto; I introduced myself earlier. I run all the product management for edge computing. My fun fact is actually not very dissimilar to Ed's: I also love training for half-marathons. I've done 53 of them, and I'm due for another one very soon. So yeah, besides talking about data at the edge, we compare our stats. All right. So, what I was alluding to this morning. The funny part about this morning is that the reason giving that keynote was so much harder for me is that I really didn't want to give away all of the content from this presentation, and this is the part I was actually most excited to talk about. So at the edge, I know I talked a little bit earlier about the manufacturing vertical, but I was talking a lot about the instant experiences. Since we never let go of our phones, everyone expects those instant experiences now. From the customer's side, they want to be delighted now. And from a business perspective, I want to show efficiency, I want to show cost savings, I want to show the justification for putting these new experiences in the store, right? Because to the consumer, it's the fun part.
But for the businesses, it's: what am I getting out of this? Not just the data, but how am I improving my operations? So the overall story, of course, is what we all know: workloads are moving to the edge, and it all has to do with the line-of-business applications, all the different use cases. What's interesting and challenging, I think, for us, and I assume for others selling edge into enterprises, is getting to the right target audience. So here we are at KubeCon, and our audience is developers. And developers are excited about the tech. They're excited about how I'm going to enable them to get these new types of workloads running at the edge. But it's interesting when you go talk to the enterprise; at least for VMware, we typically talk to the IT buyer. And the IT buyer says, well, we know we need the platform, but then you ask them, so what are your apps? What's driving you to the edge? And they say, oh, we know we need it, it's being asked for, but they can't quite articulate it. So how do you get these multiple groups of people for the edge to come together to talk about these new immersive experiences, and bring together both sides of the house: the business operations, and the developers building the new experiences? Okay. So I realize now that this is probably a similar slide to this morning's and I should have just kept one, but the point I wanted to make here is that some of the biggest spending is happening at the edge. In particular, research produced at the beginning of the year by IDC looked at which verticals, and those verticals really are retail and manufacturing. So it's no surprise that at VMware we're focusing on retail and manufacturing, which fits with adjacent markets such as automotive, power utilities, and fulfillment centers. These are all the areas where we can really help revolutionize what's happening at the edge with our tech.
So going back to retail, because that's what will lead us into the rest. I've already talked a little bit about the transformation this morning, so to recap: we know edge is the future, and going back to instant delight, we're really augmenting the human experience. That's where we're focusing. So then again, how do you get to those instant experiences when, as I said this morning, in many cases all this data is everywhere, in different systems? To highlight the retail example, if you look at this chart, it really shows, well, we use the term store of the future, but I like to call it the digitization and modernization of retail stores. A lot of the stuff in the stores gets done by the manager with pencil and paper in hand, or they've got some legacy VMs, or you're stuck with the ISVs who wrote your point of sale; it's very costly to write your own point of sale. They know they want to put in computer vision. There's a multitude of ISV vendors working on optimizing things like loss prevention algorithms; they focus on building just that one key inferencing capability for retail or for one vertical. And so the retail store says, well, I'm going to put that in my store to take a look at how fraud and loss are happening, and then you can take computer vision and apply it to inventory control. And then the other thing about retail is that they also have fulfillment, because if you think of it, especially during COVID when stores were closed, they ended up becoming their own mini fulfillment centers.
And although of course that was happening manually, a lot of that technology had to come very quickly over the last two years. Now the retailers are looking at how to actually put this together in a more automated, digitized way, rather than having to hastily stand up curbside delivery and self-checkout and all those bits. What's interesting in these use cases, just to touch on it a little more, is that they cover a variety of the topics I mentioned. Some are for customer engagement, some are for business operations, and, not to be left out, some are for safety and regulation. So it's ensuring our solutions satisfy both sides of the business. Okay, so let's talk a little bit about the data problem. You've now seen some of the use cases being deployed in retail, and like I said, these are things that happen day to day, seemingly mundane things, but let's walk through the workflow of what that looks like. So let's say you're in the store and someone falls over. The obvious questions come to mind about what you need to do, and of course the context is important: what the fall was, who's in the store, all these different bits. So the first thing you assess is: how serious is this? Do I need to call paramedics? What do I need to do? So now you need to figure out who's in the store right now and who should make that call. Typically, the inventory of who has come to work that day is stored either in an Excel sheet or in some database somewhere. Then it goes to: who's got the skills? Which of your employees should you send out? Who's had the first aid training? Who's got the right qualifications to go help someone urgently?
That's also typically in yet another database, your training data, or it's in the cloud. And then third, is this a security incident? Did someone get pushed because someone is stealing? Is it some type of incident where you then have to log an incident report? So that goes back to figuring out what happened. Did something get spilled? Now you've got the cameras and the surveillance footage of that incident that you have to keep for police reports or for insurance purposes. When the incident happened, did something get spilled in the store? Do you have to dispatch someone to come and clean it up? And then lastly, putting it all together: does the manager need to be involved, or is it something the employees can handle themselves? If you go back to all the different sources at the top, these are all in separate systems. For this one incident, how does it all come together to communicate across these systems? So you can see that the data problem across the different systems just explodes. And this is just one scenario of one thing that can happen in a store in a day; typically these types of incidents happen multiple times a day. So now you have the volume of all the different eventing happening. Okay, so I've covered that for one use case. What happens when you've got numerous use cases like this happening at the same time? You've got a whole mess of data systems that don't talk to each other, and it becomes a huge spaghetti monster of a challenge to solve. So now I'm going to pass it over to Ed. All right, yeah, thanks Marilyn. So we talked about one specific case and one problem. You can imagine that's multiplied, as Marilyn said, over numerous use cases. But really what we wanted to turn to now is thinking about what the goal is.
And we think of the goal as being the ability to unlock the value that the developer can provide to a customer. We know that business value doesn't come solely from infrastructure; it also comes from being focused on what the customer needs, or what that internal tool needs. Essentially anything that gets in the way of a developer just delivering value, accessing data, provisioning infrastructure, even just thinking about the myriad of different devices that exist at the edge location, all of that is noise that gets in the way of producing an application or tool that brings business value. So we were looking at some of the customers we speak to over the course of our jobs, and we recognized five main data-related challenges that get in the way of developers delivering that value. One of those challenges is limited bandwidth. This is a problem across multiple edge locations, and the cost per location can really add up when you've got hundreds of sites, thousands of sites, maybe even tens of thousands. You've also got different types of connections: the traditional asymmetric connection is probably too low bandwidth, and things like MPLS connections, more symmetric, are probably too expensive. And the locations end up becoming real net data producers because of the number of different devices that exist across that environment. That can add up to a lot of ingress and egress traffic when you're thinking about where that data ends up, which data center, which region. You've also got data being stored in silos. As Marilyn mentioned, there are a lot of different types of edge applications, and people think, we're already doing edge. That's true, but those things end up in silos because there are a number of different ways you can implement them.
Oftentimes enterprises aren't thinking about a joined-up way to do this across multiple different use cases. They'll have delivered a computer vision solution or a point-of-sale solution, but not thought about ways to make efficiencies across those. And because of that, there's a range of different vendors, which might necessitate different kinds of workflows to access that data. So again, your developers are having to learn different ways to access data and connect things together. There's then no common API for publishing and subscribing to data, which again makes things more complex for developers. And the event formats can be different, so your developers are having to deliver new event-handling logic for each event source. Now, those were some of the challenges, but you didn't come here for challenges, you came here for answers. So why do we need those kinds of things? Well, the main patterns we're seeing people try to implement are these three. Of course there are a myriad of others, but these are the main patterns that seem to help with these problems. You've got AI and ML inferencing, and we've already discussed several times that AI and ML inferencing should be happening at the edge for expediency. And as we know, the majority of AI and ML training, at least, is about gathering data in order to make those models work. That's why data is very important here. You do need somewhere to store that data, and what we see is people implementing, or wanting to implement, a consolidated event stream. You might have streams of data in different locations, but sometimes it's the context between them, and as we'll see in the demo, it's the context between what's happened in one part of the store and what's happening in another part of the store, that means you really need to tie those things together. So you need something that consolidates that. And then you also need to be able to do analytics on that stream.
So: analyzing the events, but also analyzing the sequence of events, to iteratively improve on those processes. And if we drill down a bit further, what we can do is take a set of open source components, and really, these are the heroes of our story. We've got the Spring application framework. We've got RabbitMQ, which can do message queuing and event streaming. And you've also got a key-value store in Apache Geode. There are others, of course; you can swap these in and out for different solutions, but these are the ones we're going to go through. So what I want to do now is move away from retail for a moment and think about logistics. In this case, it's a package sorting application at a logistics company. We want this to be customizable and composable, but again, we're going to use some of those heroes we just mentioned to think about how to implement different parts of this application. So what's happening here? Well, imagine a delivery company, a logistics company, with packages coming down a conveyor belt, and it needs to be determined which delivery chute each package should be pushed into by a particular diverter arm. That decision needs to be made quickly. You can't be communicating back up to the cloud; you need to make it locally. And if it cannot be made within a certain time frame, the package is going to go to the wrong place, you may need to slow down the belt, you may need to halt sorting processing, and again, that costs money. So this diagram is laid out a little bit like a service blueprint, if you're familiar with that. We've got the front stage, and at the front stage you've got the developer who's sitting there writing application source code. The developer wants to push that application source code to the edge. And they don't know exactly, or they don't need to know exactly, where it's going to be deployed, or how it's going to be deployed.
That's all going to be handled by a common API. Then you've got over here this edge compute stack, which is essentially taking the application, building it, versioning it, deploying it, making sure that it's running, and making sure that dependencies are installed in those edge locations. And down here, you've got the physical edge location, so you've got some compute and you've got running apps. Essentially that process means that you've now got the source code up and running, again via a set of common APIs. Now you've got your package, rather, and that's being scanned and inferenced via local AI. What happens is the package is travelling down the conveyor belt, and as soon as the package is scanned, there's an event that gets generated: a package recognised event. So we're going to have somewhere to put that. And as previously mentioned, we need to put it somewhere that has an end-to-end view of the events happening in the business, so we put it into a consolidated event stream. But the process doesn't stop there, because you've also got the whole journey of the package. You've got the diverter arm which is moving and routing packages to different locations. That's, again, a decision and also a diverter arm movement, and we'll see why that's important later. Then, once the package goes into that delivery chute, it's going to go to a particular delivery truck, so you want to change the status of the package again. And then you want to make sure that package reaches the customer, and even process feedback relating to that delivery, so that you can drill down and see if there are any problems and inefficiencies in the process. All of those events are being stored in the consolidated event stream. Now, as we said, a lot of inferencing happens at the edge.
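The package journey just described, recognised, diverted, loaded, delivered, can be pictured as a sequence of events appended to one ordered log. The sketch below is a minimal in-memory stand-in for that consolidated stream; in the architecture described in the talk it would be a RabbitMQ stream, and the event names here are illustrative assumptions, not a published schema.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal in-memory stand-in for the consolidated event stream.
// In the real architecture this would be a RabbitMQ stream.
public class ConsolidatedStream {
    // An append-only log of "packageId:eventType" entries, oldest first.
    static final List<String> stream = new ArrayList<>();

    static void publish(String packageId, String eventType) {
        stream.add(packageId + ":" + eventType);
    }

    public static void main(String[] args) {
        // The package lifecycle from the logistics example:
        publish("pkg-42", "package-recognised"); // scanned by local AI
        publish("pkg-42", "diverter-arm-moved"); // routed to a chute
        publish("pkg-42", "loaded-on-truck");    // status change
        publish("pkg-42", "delivered");          // reached the customer
        // Because every step lands in one ordered stream, later consumers
        // (analytics, audit, model training) see the end-to-end journey.
        System.out.println(stream);
    }
}
```

The point of the single stream is ordering: an audit or a digital-twin model can replay the whole journey of a package rather than stitching it together from separate silos.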
And we also want to make sure that the important events are being filtered and synced back to headquarters, depending on what network is available and what bandwidth you have. Why do we want to do that? Well, there are a number of different reasons. You might want to have an audit trail. You might want to train the rule sets there and have future analytics use cases developed based on that data and that stream of events. You also want to update that rule set, and then, of course, use it to iteratively improve on the model and have that passed back down to the application. So here we have the icons put into place which describe some of the open source components we can use. You've got the application deployment mechanism in Knative. You've got RabbitMQ acting as the consolidated event stream. And then you've got a number of different processes happening which allow you to filter the data using Spring, using the key-value store in Geode, and then using the deployment rule set to deploy that AI model back to the edge. And why is that important? Why are some of those future analytics use cases important? Well, there are a number of different things you can do with that data, and the fact that you have a common API which allows you to do that greatly reduces the time to develop some of those new applications. So, you know, we can take some examples here. Training staff can be easier because you've actually built up a digital-twin-type model of an end-to-end package workflow and everything that happens in it, so you can help staff understand what to do. You can also do things like predictive maintenance. We mentioned the diverter arm that's moving packages to different locations.
You can keep track of what that means for the diverter arm and what you need to do in terms of maintaining that machine, so that you can prevent it from breaking down and stopping the flow of packages. You've then also got things like future automation use cases: forklift trucks, or AI-controlled trolleys that could move packages to different locations. And you've even got dynamic scheduling of maybe even automated, self-driving delivery vehicles in the future. But in order to do all those things, in order to train that, you need to start keeping track of the events that are happening today, so that you can work out, or even model before you start one of these expensive projects, what the improvement might be. We're going to segue now from logistics back to retail. These are some of the patterns that we've seen in logistics, but we find some similarities in retail as well. And we're going to go into a demo. Okay. So what we've got here is a dashboard. The idea of this dashboard is that it shows, in a retail location, a fridge door being open or closed. There are also a number of sensors in this demo which show the temperature of various food items, and you can see there are alerts and warnings. So the intelligence here comes not just from taking the silos of data from each of the sets of sensors, but also from processing and correlating the data at the edge location and then filtering it such that only the relevant alerts are communicated back to headquarters. We've got some components here: communications being done locally using MQTT via RabbitMQ; Apache Geode, or GemFire, used to correlate events, so the door open and higher temperatures; and Spring Cloud Data Flow, which we're using to deploy the applications that are going to analyze some of the sensor data. And they're deployed as streams.
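The correlation step in the demo, door state plus temperature readings, boils down to a small decision function. This is a plain-Java sketch of that logic, not the actual Geode/GemFire implementation; the 30-degree threshold, the severity names, and the method signature are all assumptions for illustration.

```java
// A high temperature alone raises a local alert, but only a high temperature
// combined with an open door becomes an actionable warning that is worth
// forwarding to headquarters. In the demo this correlation is done with
// Apache Geode/GemFire; this stand-in just captures the decision table.
public class FridgeCorrelator {
    static final double THRESHOLD_C = 30.0; // illustrative threshold

    enum Severity { NONE, ALERT, WARNING }

    static Severity classify(double temperatureC, boolean doorOpen) {
        if (temperatureC <= THRESHOLD_C) return Severity.NONE;
        // Temperature alone is worth flagging locally, but it is only
        // actionable (someone should close the door) when correlated with
        // a door-open event from another sensor.
        return doorOpen ? Severity.WARNING : Severity.ALERT;
    }

    public static void main(String[] args) {
        System.out.println(classify(11.0, false)); // below threshold
        System.out.println(classify(36.0, false)); // hot, door closed
        System.out.println(classify(36.0, true));  // hot and door open
    }
}
```

The design point is that neither sensor is interesting on its own; the value comes from joining two independent event sources at the edge before anything leaves the site.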
A stream is essentially a Spring Boot application that's going to be deployed at the edge. And we have here, as you can see, a scripting language which defines the rule set. We're looking for temperatures over a certain value and the door being open or closed. If the door's open, we send a warning. And we're also logging everything into a RabbitMQ stream so we can order and track what's happening later if we want to. So as you can see here, we're creating the streams, and Spring Cloud Data Flow is actually deploying those streams and deploying those applications. There we go, the deploying is happening now. The deploy has been successful, so we should switch back to the application. The important thing here is that only the correlated events are communicated as being actionable. We don't currently see any alerts or warnings. What we need to do is generate some data from the sensor. So as you can see, we've got a temperature sensor here showing a value of 11 degrees. We're generating that, and when we head back to the application, because the temperature is below the threshold, there are no alerts and, of course, no warnings. And if we go and generate a higher than expected temperature for the frozen food, I'm going to start generating something like... it's 36 degrees for sensor 2. These are just simple programs that are generating some values and feeding them through RabbitMQ. We go back to the application, and we start to see some alerts, but no warnings as yet, because we only want to be communicating the important things back to headquarters, and the important things are where there's something actionable. So what we're going to do now is also trigger the fact that in aisle 8 there's a door open. So we're generating some more sensor data for aisle 8. Clearly, I'm talking too fast. And so, yeah, aisle 8 has a door open, and what we're seeing is that warnings are coming through because there's a door open.
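The filtering the demo relies on, everything logged locally but only actionable warnings crossing the link to headquarters, might be sketched like this. The event labels and the method are hypothetical, standing in for the Spring processing the demo deploys via Spring Cloud Data Flow.

```java
import java.util.List;
import java.util.stream.Collectors;

// Process everything locally, replicate to the cloud only what is actionable.
// This sketch filters a local event log down to the entries headquarters
// should see; the severity labels are illustrative.
public class CloudReplicator {
    record SensorEvent(String sensorId, String severity) {}

    // Only WARNING events (correlated, actionable) cross the WAN link.
    static List<SensorEvent> forHeadquarters(List<SensorEvent> local) {
        return local.stream()
                .filter(e -> e.severity().equals("WARNING"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<SensorEvent> local = List.of(
                new SensorEvent("sensor-1", "INFO"),
                new SensorEvent("sensor-2", "ALERT"),
                new SensorEvent("sensor-2", "WARNING"));
        // Of three locally logged events, only one is worth the bandwidth.
        System.out.println(forHeadquarters(local));
    }
}
```

This is the same limited-bandwidth argument from earlier in the talk: the edge site is a net data producer, so the filter is what keeps the egress traffic proportional to what is actually actionable.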
And so the idea is that the employee should go and close the door. And as we see here, neatly, we've got a local application running on the left, and data being replicated to another application that's running in the cloud. So again, we want to process locally and then broadcast data to the cloud only as required. All right. Thank you very much. Do we have time for any questions? Any questions? Well, if you don't have any questions, we're going to be at the booth outside, so come ask some questions there.