Good morning, good afternoon, good evening, and welcome to a special edition of Red Hat Live Streaming. I am joined by Ortwin and Benjamin, two fellow Red Hatters. Today they are showing off something they call BobbyCar. I'm super curious to see it; I know the backstory, and trust me, you will all appreciate and enjoy this. Let's do a quick round of intros and then dive in. Benjamin, please feel free to introduce yourself. Yeah, thanks, Chris. I'm Benjamin. I joined Red Hat in 2019 as a solution architect for application services. In the meantime, I've transferred into another role, as an account solution architect for public health care in Germany. I have a long development background from before I joined Red Hat, where I worked in a development role, so the BobbyCar demo was a nice recap of all the development topics. That's it about me, I'll hand over to Ortwin now. Yeah, thank you, Benjamin. My name is Ortwin Schneider. I'm a domain solution architect at Red Hat, also in Germany, responsible for the manufacturing industry, and I have a lot of automotive customers. So BobbyCar is quite close to what I do in my normal job, which is one reason why we created this demo. As for my background, I started as a Java developer, but today I'm quite a bit away from real engineering; this is just the fun stuff. I did a lot of people management before Red Hat, and I also joined Red Hat in 2019, like Benjamin. We've known each other since, I think, 2005, we've done a lot of projects together, and we're kind of the former dream team. So it's cool that he's also at Red Hat, and it's great to be here. Today we have, let's say, two parts.
So we'd like to present the demo, which is a multi-product demo based on OpenShift in the context of edge computing, IoT, connected vehicles, and so on. The second part is more the development story: how could you develop for this BobbyCar demo, how could you extend it, and so on. That's what we've prepared. To give you some context, I'll start with a few slides to get you into what it's all about, and then we'll switch to the demo parts, alternating between slides and demos. So let me start the screen sharing. Ah yes, screen sharing, may the gods smile upon you. I still see a black screen... now you've got it. All good. OK, cool. First, some organizational stuff: if you have any questions after this session regarding the demo, here you find our contact data. There is, of course, the GitHub repository where you can have a look; all the code is lying there, and there's a Helm chart repository as well. Another thing: I've written an ebook chapter with Markus Eisele, whom a lot of you will know, about cloud native architecture, and there is also a blog post about BobbyCar and building cloud native solutions, in case you need additional information. From here I'd like to start with the project. BobbyCar: what is it, what is it about? Described in one sentence, it's a microservice-based, cloud native application and a smart IoT transportation demo, which highlights OpenShift and a lot of the Red Hat middleware stack in a business-relevant context. And this is important: one aspect of the background was that we wanted to not only show individual technologies, like API management or messaging or whatever.
We wanted to bring all the stuff Red Hat has together and make it work together in a meaningful context. That was one thing. The other thing, as I've mentioned, is that I have a lot to do with automotive customers and with IoT and edge architectures, the industrial edge as well as the vehicle edge. So I thought it would be a great idea to build a demo with all the capabilities we have in this context. You could consider BobbyCar as, let's say, a subset of an IoT reference architecture. For example, if you look at what Azure or AWS have in their reference architectures and which components they use, this is an example showing that these are the components we have from Red Hat, and we can build those architectures with open source technology as well. And then, BobbyCar: in Germany, at least, everybody knows what a Bobby-Car is. You've got a picture, right? These little cars, red is the original color, where your kids sit and push with their feet, and they drive like crazy. They drive you crazy, and you have to kind of control them. So the demo is about car simulation: we have cars, and the color fits quite nicely with our Red Hat color. That's how we came to BobbyCar. The difference is, these are cloud native Bobby-Cars in this scenario. Or the racing cars, which I thought was hilarious, that these things actually get raced. It's like, of course they get raced. It's a car. Of course it would be a racing vehicle. Please go ahead. So for this demo there are basically two core concepts. As mentioned, there are the Bobby cars, and then we have BobbyCar zones. The Bobby cars are actually vehicle simulators, and we have implemented them in Quarkus, let's say our cloud native Java stack.
And we've used a lot of extensions there, like reactive messaging and metrics, some of the stuff Quarkus offers. We simulate multiple Bobby cars with one container: one thread of the container simulates one car. In our demo this represents the vehicle or edge tier. Basically, how it works is: when we start the simulation, each Bobby car picks a random route from a pool of routes, in GPX format, at startup, and it simulates driving from start to end. While driving, it sends telemetry data to an IoT cloud gateway: the current GPS position, speed, RPM, fuel consumption, and so on. We've also made it a little bit configurable; for example, we've taken real gear ratios from real cars, and you can configure different driving strategies, things like that. So this is the core, our Quarkus application, the vehicle simulator, which is the Bobby car. The second thing is the BobbyCar zone, which is a geographical zone of a certain type, like a circle or a rectangle. You can specify an area where you want to define specific mobility services that should get deployed to the car. When a car enters this zone, you want either to deploy a specific configuration which is valid for the zone, or specific mobility services which are relevant in this environment. That's the concept of a zone. There are also things like priorities and rules for what happens when zones overlap, but these are basically the two concepts. From an implementation perspective, we have implemented the zones as Kubernetes custom resources, so you could, for example, create new zones declaratively with a GitOps approach. And the high-level architecture looks like this: on the left side we have the edge tier, where we have the Bobby cars.
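A zone like the one just described could be expressed as a Kubernetes custom resource roughly like this; note that the API group and field names below are illustrative and not the actual CRD schema from the repo:

```yaml
apiVersion: bobbycar.example.com/v1alpha1   # illustrative group/version
kind: BobbycarZone
metadata:
  name: frankfurt-downtown
spec:
  type: circle            # e.g. circle or rectangle
  center:
    lat: 50.1109
    lon: 8.6821
  radiusMeters: 1500
  priority: 10            # could resolve conflicts when zones overlap
  config:                 # configuration pushed to cars entering the zone
    maxSpeedKmh: 30
```

Because it is a plain custom resource, a zone like this can live in Git and be applied declaratively, which is what makes the GitOps approach mentioned above work.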
We can scale them up and down and use all the benefits Quarkus offers. Then we have several regional IoT clouds; by the way, everything is running on OpenShift, in our case also the Bobby cars. So we have the cloud environment the Bobby cars are connected to, over MQTT for example. They send the metrics over MQTT to a Kafka broker, so everything coming in runs into Kafka, which is our high-throughput streaming platform, the central point for data ingestion. Then we have a distributed in-memory cache, Data Grid, where we store all the recent updates: all the metrics, all the cars, all the configurations live in this distributed cache, for performance reasons. We want to be able to restore the complete IoT state within milliseconds, for example, and this is a pattern we often see at customers. These little boxes here are integrations, for example from MQTT to Kafka, from Kafka to Data Grid, or exposing data via WebSocket to a dashboard. For those we use Apache Camel K, the cloud native integration component. So that's the regional cloud environment. From there, we use Kafka MirrorMaker to replicate the data to a central cloud environment, where all the data from the local cloud environments comes together. There we could use, for example, Open Data Hub and TensorFlow I/O, some machine learning technologies, to work on the data coming in. That's the high-level architecture. There are a lot more details, but this should give you an idea of how the setup looks. With this, I come to the first part of the demo, the core demo, where we want to show two things, as I've just mentioned.
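One of those little integration boxes, MQTT to Kafka, could be sketched as a Camel K integration in the YAML DSL. The endpoint and topic names below are made up for illustration; they are not the actual ones from the BobbyCar repo:

```yaml
# mqtt-to-kafka.yaml -- a Camel K integration, deployable with `kamel run mqtt-to-kafka.yaml`
- from:
    uri: "paho:car-telemetry"                # MQTT topic (illustrative name)
    parameters:
      brokerUrl: "tcp://mqtt-broker:1883"
    steps:
      - to: "kafka:car-metrics?brokers=kafka-cluster-kafka-bootstrap:9092"
```

The appeal of Camel K here is exactly what the architecture needs: each integration is a few lines of routing, and the operator builds and runs it as its own pod on the cluster.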
The first thing is all these connected cars: how are they connected, how does it look? The second thing is how we apply location-based configurations. These are the two core things. For that, we zoom into this regional IoT cloud environment, because this is what we've actually built for this demonstration. There we have the Bobby cars with all the possibilities to configure them, with driving strategies, specific configurations, and so on. What they actually do is send the GPS data over a Kafka bridge, via HTTP, to Kafka, while the metrics go over the MQTT broker to Kafka. All the components here are Camel K components: there's one for MQTT to Kafka, there's a dashboard service exposing the data from Kafka for a real-time dashboard, there's a zone change detection service, and a service transferring the data to the cache. Whenever there's a zone change, an event is emitted and sent to MQTT; the cars are subscribed to certain topics, so they get the event and pull the actual configuration, in case there is one, from the cache. All the information is in the cache, and there's a cache API service the cars pull the information from. So this is basically the architecture. With this, I'll jump directly into the first part of the demo and show you what you've seen on the slides. So I go to my OpenShift environment, and you see I have a quite current... no, at least it is a 4.7. It's not 4.7? No, no, 4.8. I mean, it literally came out in the last week. Yeah, it's quite current. So, a 4.8 OpenShift environment, and there is a project, BobbyCar. This is our BobbyCar project, where we've deployed all the stuff you've just seen, all the regional IoT cloud components, in this main namespace.
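At its core, the zone change detection just described boils down to checking which zone a car's latest position falls into and emitting an event only on a transition. Here is a minimal sketch of that logic in plain Java; the class, record, and method names are invented for illustration and are not taken from the BobbyCar repo:

```java
import java.util.Optional;

// Simplified stand-in for the zone change detection service.
public class ZoneChangeDetector {

    // A circular zone: center coordinates in degrees, radius in meters.
    record Zone(String name, double lat, double lon, double radiusMeters) {
        boolean contains(double carLat, double carLon) {
            return haversineMeters(lat, lon, carLat, carLon) <= radiusMeters;
        }
    }

    // Great-circle distance between two GPS points.
    static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        double r = 6_371_000; // mean Earth radius in meters
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.pow(Math.sin(dLat / 2), 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.pow(Math.sin(dLon / 2), 2);
        return 2 * r * Math.asin(Math.sqrt(a));
    }

    // Returns the new zone name if the car crossed a boundary, empty otherwise.
    static Optional<String> detectChange(Zone[] zones, String previousZone,
                                         double carLat, double carLon) {
        String current = "none";
        for (Zone z : zones) {
            if (z.contains(carLat, carLon)) { current = z.name(); break; }
        }
        return current.equals(previousZone) ? Optional.empty() : Optional.of(current);
    }

    public static void main(String[] args) {
        Zone frankfurt = new Zone("frankfurt", 50.1109, 8.6821, 2_000);
        // Car far outside the zone: no transition, so no event.
        System.out.println(detectChange(new Zone[]{frankfurt}, "none", 50.2000, 8.9000));
        // Car at the zone center: transition from "none", so an event is emitted.
        System.out.println(detectChange(new Zone[]{frankfurt}, "none", 50.1110, 8.6820));
    }
}
```

In the actual demo this decision is made in a Camel K service and the resulting event goes out over MQTT; the sketch only shows the containment-and-transition check at the heart of it.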
Let me switch to the developer perspective, and we should see the topology. You see all the components here: the Kafka cluster running, the Kafka bridge, the several operators, the Camel K integrations, for example this Kafka-to-Data-Grid integration, the caching service, and here's the car simulator, our Bobby car. You see there's one pod currently running, and this one pod is configured to simulate 20 cars. And there is a dashboard component, an Angular application; we'll have a look at it in just a second. So that's all the components. In case you want to deploy it yourself, you go to the GitHub repo. In general, we have packaged everything as Helm charts. For example, if you go to the project's Helm charts here, you see I've customized the Helm chart repos with the BobbyCar repo, and you see all the Helm charts for BobbyCar, which you can very easily deploy from here. There are some core charts, like for the operators and for the apps; these three combined are the core demo. And then we have optional things; if you want to showcase development, you can install the optionals. I really like that it's so nicely integrated into the OpenShift console, and it's very easy: you just have to add the repo, right? Yeah, exactly, the Helm chart repo. You just specify your Helm chart repo in a custom resource and it pops up in the console. Quite nice. Then let's have a look at the dashboard, because that's the only place where we have a visualization; all the other stuff is a lot of data transfer and not very visual, so we've created the dashboard.
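For reference, the custom resource that makes a chart repository show up in the OpenShift console is a `HelmChartRepository`. A minimal one looks roughly like this, with a placeholder URL standing in for the actual BobbyCar chart repo:

```yaml
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: bobbycar-charts
spec:
  name: BobbyCar Charts          # display name shown in the console's Helm catalog
  connectionConfig:
    url: https://example.github.io/bobbycar-charts   # chart repo URL (placeholder)
```

Once this cluster-scoped resource is applied, the charts from that repository appear in the developer catalog for one-click installation, which is exactly what is shown in the demo.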
This is our dashboard component, the Angular application I mentioned, and basically what we have is a map view. You see these red circles here; we are currently in Germany, this is Frankfurt, and every red circle is a BobbyCar zone. Whenever a car enters a zone, it will get some configuration. Our cars are driving right here; I'm not sure if you can see that they're actually moving, but you should see some moving markers. We have the different zones, and one thing we could do is, for example, create new zones from here, specifying the location and so on. I won't do it now; you get the idea of what you could do with it. Another interesting thing here is the real-time query. What we want to do is... sorry. I just see dots moving, and that makes me happy. Yeah, I like to see real-time data coming in here. So basically these are the 20 cars, and they're sending their data to MQTT, from MQTT to Kafka, to WebSocket, to this dashboard. That's the data flow you see here. By the way, the first time you open this map, it loads all the data from the cache; the cache contains the current snapshot, right? Right, yeah. For example, if I reset the map, you see that it directly restores the latest state, and then it gets the updates over the WebSocket from the real-time data. Ah, you see, here we have some cars entering the zone. Let me see if we can hit this one. If I select this car, you see it's driving here, and you see the zone change event, right? There was the zone change event directly. This is the detail view; let's just select one car.
In this case it's a Volkswagen, but this is just a gimmick; you can switch the cockpits, like to BMW or Mercedes or whatever you like. Depending on the sponsor we have, we can adjust them. There is a kind of heads-up display where you also see the metrics for this car: how fast it's going, what type of car it is, what the current zone is, and so on. And you see it has arrived in the next zone here. We'll just wait a few seconds to see the zone change again, and if it's successful, we should also see the new zone here. OK, it's in there, and you see it's applied in the car now: we have this Niedernhausen zone. So it's working at this stage. That was one thing: you can have a look at the cars and see whether everything is running the right way. The other thing was the real-time query. If I'm interested in a certain area and want to see the real-time data for it, I have this gray circle, which is like my search area, and I can say: I'm interested in all the cars driving here, and I want the average speed, the carbon dioxide emission, and so on. The driver distraction metric I'll come to later, because that's a machine learning use case in this context. And then we can execute an interactive query against the Kafka Streams state and get the current data for all the cars in the area. So we query the internal state of the Kafka Streams application. This is a real-time query aggregation, which is quite nice, because later on, when we extend the demo, we want to react when certain thresholds are exceeded, right?
So we want, for example, to automatically create certain zones for such an area, to specify a maximum speed limit or something, to react to this data. Ortwin, do we have street view data? Ah, OK, because we didn't see that. Here, yeah, this is quite nice. Let me check here, a car detail. Yeah, you see it here: if there's street view data available... so this car is kind of driving like crazy, jumping between a lot of different angles here. Now it looks good, OK. It takes some time, yeah. OK, now it's kind of driving backwards. Yeah, I was about to say, it seems like it doesn't understand what elevation it's at or something. OK. Where can I buy this car? Where can I buy my shape-shifting, auto-transporting car? All right. One other thing about the car simulator: you can configure it through environment variables. Let me just show you the environment: you see we have specified 20 cars for the car sim, right? And you can also specify things like how fast the simulation should run, a simulation factor of three times, for example. You can configure delays and other things like connectivity to MQTT and Kafka here as well. The other thing, let me switch back to the pod... no, not the pod, sorry, the route for the car simulator. Where is my car simulator? We also have some custom metrics, because one thing we wanted to show is how to integrate this into monitoring: you want to scrape it with Prometheus and have all this data. Here you see, for the cars currently running, all the custom Prometheus metrics we have implemented.
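The environment-based configuration just shown could look something like the following excerpt of the simulator's Deployment spec; the variable names here are invented for illustration and are not necessarily the ones the chart actually uses:

```yaml
# Excerpt of a Deployment pod template -- variable names are illustrative
env:
  - name: CARS_PER_POD
    value: "20"               # one thread per simulated car
  - name: SIMULATION_SPEED_FACTOR
    value: "3"                # run the drive three times faster than real time
  - name: CAR_START_DELAY_SECONDS
    value: "1"                # stagger car start-up
  - name: MQTT_BROKER_URL
    value: "tcp://mqtt-broker:1883"
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: "kafka-cluster-kafka-bootstrap:9092"
```

Keeping all of this in environment variables is what makes the later scaling demo so simple: every new pod picks up the same configuration and immediately simulates another batch of cars.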
And probably... ah, I've almost... You wanted to scale, Ortwin. OK, right, let's scale, why not? So that was one pod, and I said one pod simulates 20 cars. Let's scale it up a little and see what happens. So let's go to the car simulator, details; I'll do it in the console. Two pods, three, four, five. Five pods should be 100 cars. First of all, we should see the Quarkus pods... yeah, there you go, they're up and running. And if we now go back to the map, we see in the headline that the new cars are coming in: 45, 50... You can configure a delay; I think there is a one-second delay before the next car starts. So you see the new cars dropping onto the map, and now we should have the 100 cars. Yeah, we're good to go with 100. Quarkus is quite nice here because of its high density; you could very easily scale to thousands of cars and simulate them that way. And you can also autoscale the rest of the infrastructure in OpenShift; a lot of things scale automatically. So it's a cool architecture. That was the first part of the demo, the core demo: we have the cars, we had a look at them, and at the real-time query feature. With that, I'll jump back to some slides for the second part, and then we'll jump into the next demo part. Oh, by the way, let me share these two slides, because it is not only a demo: currently I'm working with two partners on real-world use cases. One of them is over-the-air updates with the eSync Alliance. For those of you who don't know the eSync Alliance in automotive: it's a new standard, with standard protocols and technologies for how over-the-air updates for vehicles can be implemented, and a lot of automotive companies are in it.
Basically, the architecture is: there's an eSync server component, where you can plan your campaigns, for example to update the firmware of the ECUs in the car. So there's a server component running in the cloud environment, and in the car there's an eSync client, which is kind of the IoT gateway. Connected to this gateway component we have all the different ECUs, and you can update the ECUs, the client components, and the car components through this architecture. Currently I'm working on it; I have a running eSync server in this OpenShift environment. What we want to do in the context of BobbyCar is: whenever a car enters a zone, we want to update the car configuration with this eSync architecture, with this industry standard for over-the-air updates. So this is in the context of BobbyCar. The other use case, also quite nice, is with another great partner: we're building what we've called the human driving perception platform. It is about capturing the driver's emotion. So there's emotion detection, and we combine it with the data we get from the car's CAN bus, all the CAN data from the ECUs, to really see what the driver distraction is at a certain point, and how it relates to the current data we get from the CAN buses and from the ECUs. From that we calculate, in a first step, something like a vehicle collision risk. And there are other questions, like why is the driver distraction very high in a certain area or at a specific street corner, and we can do some machine learning with this data. This is also a new thing, because normally in the real world they don't have this emotion recognition combined with the other available data. So this is a use case we built into BobbyCar.
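The fusion step described above can be imagined as a scoring function that combines the distraction estimate from the camera model with CAN-bus signals. The formula and weights below are purely illustrative toy values, not the partner's actual model:

```java
// Toy sketch of fusing driver distraction with CAN-bus data into a risk score.
public class CollisionRisk {

    // Inputs: distraction in [0,1] from the camera model, speed in km/h,
    // following distance in meters -- both from the CAN bus.
    static double riskScore(double distraction, double speedKmh, double followingDistanceM) {
        // Illustrative weighting: risk grows with distraction and speed,
        // and shrinks as the following distance increases.
        double speedFactor = Math.min(speedKmh / 130.0, 1.0);
        double distanceFactor = 1.0 / (1.0 + followingDistanceM / 50.0);
        double score = 0.5 * distraction + 0.3 * speedFactor + 0.2 * distanceFactor;
        return Math.min(1.0, score);
    }

    public static void main(String[] args) {
        // Attentive driver, moderate speed, large gap -> low score.
        System.out.printf("%.2f%n", riskScore(0.1, 50, 100));
        // Distracted driver, autobahn speed, small gap -> high score.
        System.out.printf("%.2f%n", riskScore(0.9, 130, 10));
    }
}
```

The real platform replaces this hand-tuned weighting with machine learning on the combined video and CAN data, but the shape of the problem is the same: several live signals in, one risk value out, continuously.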
Let me just show you some of these components as well. Go away... OK, here. We're back in our OpenShift cluster, and what I'd like to show, just a second, is this HDPP, the human driving perception platform. What you see here are two containers, or two pods, two deployments actually. One is the actual application; both of these components normally run in the vehicle. It's a Python application with OpenCV, TensorFlow, and some other machine learning libraries, and the execution of the machine learning model runs in this container. There's also a pretty huge PVC where all the video data lies, all the CAN data, some millions of data sets from the CAN bus, and so on. Now let me scale this one up. So this is the actual execution of the machine learning, doing all the real-time calculation, and it sends all the data to this cloud gateway, a standard IoT gateway component. These two components run in the car, and this gateway is then connected to the BobbyCar infrastructure; the communication protocol is also MQTT, but there are standard messages and a specification for this component. So this gateway gets connected. I've also implemented a little something like a device registry; let me just check... it is here. This is a Quarkus application as well. It has a REST API, and here you can really onboard new in-car IoT gateways. It uses a MongoDB, and once you have done the registration process, the data goes over MQTT and then to Kafka and to the rest of the BobbyCar infrastructure. Let me just show you the REST API: there is a web UI, and you see these are some APIs to register gateways, IoT gateways, and so on.
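Conceptually, that device registry is a small CRUD service keyed by gateway ID. Stripped of Quarkus, the REST layer, and MongoDB, its core could look like this; all names here, including the topic scheme and the sample VIN, are invented for the sketch:

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// In-memory stand-in for the gateway registry (the real service uses MongoDB).
public class GatewayRegistry {

    public record Gateway(String id, String vehicleVin, String mqttTopic) {}

    private final Map<String, Gateway> store = new ConcurrentHashMap<>();

    // Registration: assign an ID and a per-gateway MQTT telemetry topic.
    public Gateway register(String vehicleVin) {
        String id = UUID.randomUUID().toString();
        Gateway gw = new Gateway(id, vehicleVin, "gateways/" + id + "/telemetry");
        store.put(id, gw);
        return gw;
    }

    public Optional<Gateway> find(String id) {
        return Optional.ofNullable(store.get(id));
    }

    public static void main(String[] args) {
        GatewayRegistry registry = new GatewayRegistry();
        Gateway gw = registry.register("WVWZZZ-TEST-VIN"); // VIN is made up
        System.out.println("registered, telemetry topic: " + gw.mqttTopic());
    }
}
```

In the demo, this registration step is what turns an anonymous in-car gateway into a known device whose MQTT traffic the backend will accept and route into Kafka.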
So this is a little thing I've implemented there. Now, if we go back to the BobbyCar dashboard, our actual machine learning application should be up and running. The story is: we are in this cockpit view, and from here we want to see the emotion recognition. Let's see if it's working. This is now streaming... yeah, you see it's running. Everything you see here, the stream, is coming from this in-car container, and all the calculations are running in real time. You see the driver: we have two cameras, an inside camera and an outside camera. It detects what is outside, how the driver is reacting, and it calculates a driver reaction, the driver distraction, and the vehicle collision risk. These three values down here are then sent through the backend to BobbyCar, to Kafka and so on, for further machine learning logic there. So this is the working stage of this POC. I can see how this could be so helpful, and this driver distraction ML thing, to be honest with you, I feel like it's going to come to every car at some point, right? It's just a matter of time. I mean, even my fairly recent Escape has some capability to realize what I'm doing in the car; it recognizes there's a human of a certain weight class sitting in the seat, that kind of stuff. It's only a matter of time until this becomes just normal, right? Right, right. But it's great: you can see all the components are running in OpenShift. You can do all the simulations there; you can use OpenShift, the complete stack, for everything: for the cloud backend infrastructure and for simulating the vehicle edge as well. It's cool.
That was just to show you that there are also some real-world use cases we're working on in this context. The next step would be... I think I have some slides, but regarding time? You have 10 minutes for the slides. 10 minutes, OK, that's not much. I'll go very quickly over the slides, because I think we should jump into the next demo and proceed. From the storyline: the car simulator is Quarkus, so why Quarkus, and what is the cool story about Quarkus? Quarkus is a cloud native Java stack. The message here is: everybody who's a Java developer should have a look into Quarkus, because it's really great. Java is quite an old language, I think 26 years now, and back then we didn't have containers or Kubernetes or the cloud native world we live in today. With Java you mainly wrote more or less monolithic applications, for application servers and so on. The challenge with Java in the modern environments of today is resource consumption, memory consumption, and also startup times. This is exactly what Quarkus addresses, and it lets us use Java in modern architectural approaches like serverless, or even smaller, functions as a service, and things like that. Basically, the idea of Quarkus is that it does about 80% of the work at build time and as little as possible at runtime. Quarkus, or its extension framework, is a framework for other frameworks which kick in at build time. It tries to optimize everything possible at build time, and it uses GraalVM to be able to compile the code natively as well, into machine code, to get very fast startup times.
I think this picture shows it quite nicely. And this leads to: Quarkus is a great technology, a cloud native Java stack, for serverless, for microservices, and so on. Why is it great for serverless? As you know by now, in this BobbyCar context we're handling a lot of events; it's basically an event-driven architecture, like every edge architecture. All the things the cars emit are events, and this is a great foundation for an event-driven system in a serverless context: you can scale by demand, scale to zero, and use all the serverless features. For the implementation, Quarkus is a great option because of the really fast startup times you need in this context, and also the low resource footprint. I have a few slides regarding serverless. The basic pattern is: you have some kind of event, and it could be anything, an HTTP request, an async Kafka message, an upload to an S3 bucket, whatever. This event triggers your application, and the serverless framework or platform is responsible for scaling your application and producing some, more or less, sensible result. That's the basic pattern. And one slide back: the cool thing from our perspective is that we can use the complete set of Apache Camel components as event sources for our serverless platform. So you can integrate almost anything into the serverless world we have here, which is quite cool. And from a developer perspective, serverless abstracts away the infrastructure layer.
So from a developer perspective, you really focus on writing your code, your actual business logic. You push your code, and the platform takes care of the rest: compiling, building, packaging, dependency resolution, all this stuff, and also the scaling policies. We have OpenShift Serverless, which is great and part of the OpenShift platform. Now let's go to the next one: how we use serverless in the BobbyCar context. I think this is interesting; let me go to the presentation mode. We have two use cases. In the BobbyCar core demo we have the BobbyCar zones, as mentioned, our circles. The first use case is that we want to monitor the lifecycle of those zones: whenever someone adds a new zone, edits a zone, or deletes a zone, we want to trigger, for example, an approval process, because in the real world a BobbyCar zone would be a very relevant entity. So we want to start an approval process, and this could be implemented with Kogito as a BPMN process, for example, or by executing some Decision Manager rules, some business rules, things like that. The second use case is that we want to consume the zone change events from Kafka and feed them as cloud events into our serverless infrastructure, and we have implemented a Quarkus Funqy service to react to those events. So these are basically the two things. The serverless architecture looks like this: we have the event sources, for Kafka and the Kubernetes API server source, a broker sitting here, and then our consuming applications, the Knative services, plus some triggers; for example, the approval service is only triggered when we create a new BobbyCar zone. So that's the serverless stack.
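The "only trigger the approval service on zone creation" behavior maps directly onto a Knative `Trigger` with an attribute filter. The sketch below assumes an `ApiServerSource` watching the zone custom resources; the broker, service, and resource names are illustrative:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: zone-approval-trigger
spec:
  broker: default
  filter:
    attributes:
      # ApiServerSource emits this type only when a watched resource is created
      type: dev.knative.apiserver.resource.add
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: zone-approval-service   # illustrative Knative service name
```

An audit service, by contrast, would use a trigger without the `filter` block, so it receives add, update, and delete events alike; that matches the behavior shown later in the demo.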
And let's jump directly into the demo. I go to my development environment. By the way, this is the Git repository; if you clone it, here you have the components. So all the components you've seen — there's a helm directory, and here we have all the Helm charts. I haven't deployed the serverless stack yet, so this is what we do right now. There's the serverless Helm chart, and I will just deploy it now into our current Bobbycar namespace, so everything should be running then. We can then switch back. Let me just see if it's actually deploying the release. Yeah, I'll check it. Just as a reminder, Ortwin, could you please scale down the perception pod? Oh yeah, this is very important, because my cluster is quite limited right now in terms of resources. So let me just find it... I scale it to zero, yeah. So this is the application; let me just scale it to zero. Okay, now back to our Bobbycar namespace, this is where we just deployed the Helm release. Okay, the view is a bit confusing... yeah, it has been deployed. So you see exactly the same architecture, right? The event sources, and you also see here the zone changes. Let me see in the dashboard if we have some cars currently. Yeah, so they're entering a new zone, and you see that there are automatically three pods scaled up now to handle the zone change events, depending on the amount of cars entering the zone. I have specified the concurrency for handling these services as one, so that we see some auto-scaling here, right? So you see, okay, there are some zone change pods running automatically, and it will soon scale them down again. And here is our approval service. Now, while it's scaled to zero, we can just create a new Bobbycar zone and then see how it triggers the approval service, right? Yeah, and the audit service is always triggered.
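The "concurrency of one" behavior shown here — one pod per concurrently handled zone-change event, scale-to-zero when idle — maps to a Knative Service definition along these lines (a sketch; the service name and image are placeholders, not the demo's real ones):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: zonechange-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"  # allow scale to zero when idle
    spec:
      containerConcurrency: 1   # each pod handles one request at a time,
                                # so a burst of events fans out into several pods
      containers:
        - image: quay.io/example/zonechange-service:latest  # placeholder image
```

Setting `containerConcurrency: 1` is what makes the auto-scaling visible in the topology view: three cars entering a zone at once means three pods.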
So for every change, for example. So yeah, let us deploy a new Bobbycar zone. Let's go back to our repository; there is a config directory, and there is a demo zone. I want to create a new zone without any specific configuration. This is the Wiesbaden freeway intersection in Germany, and I want to specify a zone for this location. So let me just add it directly here... yeah, and create the zone. Let's go back to the developer perspective. Okay, still a bit confusing. But you see, okay, the approval process is scaled up now. So this addition triggered the approval process, and we can also, for example, have a look at the logs, and you should see there is the Wiesbaden freeway intersection — you see there's the CloudEvent, right? All the attributes of the CloudEvent, and the data is, in this case, the complete JSON source of this new zone. And this is the input, for example, for a business process, right? Yeah, this is just to give an idea of some of the use cases, how you could use serverless in this context of Bobbycar. This is just a snippet; there's a lot more you could really do in the real world in the serverless context. So I think, hopefully, that was a bit helpful. Oh, it was very enlightening and insightful, trust me. All kinds of ideas are spilling out of my head now, but I'm curious what the audience thinks. If you have any questions — how this demo was built, that kind of thing — please feel free to ask. All right. Fortunately, we will talk about this now; all the bases are covered. Yeah, so that was, let's say, the core demo part: what Bobbycar is about, plus a bit of serverless. I mean, there's a lot more in the pipeline, like GitOps deployment, Advanced Cluster Management and a lot of other things we're currently working on, like an operator for Bobbycar and so on.
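The CloudEvent seen in the approval service's logs follows the standard CloudEvents attribute set, with the new zone carried as the event data. A sketch of its shape (all values here are invented for illustration, and on the wire this would normally be JSON):

```yaml
specversion: "1.0"
type: zone.created             # assumed event type for a newly added zone
source: /apis/bobbycar/zones   # assumed source URI
id: 1b3c5d7e-0000-4000-8000-000000000001
time: "2021-03-01T10:00:00Z"
datacontenttype: application/json
data:                          # the complete zone definition as payload
  name: wiesbaden-freeway-intersection
  position:
    lat: 50.05
    lon: 8.27
```

The `type`, `source`, and other context attributes are what Knative triggers can filter on; the `data` block is what a downstream business process would consume.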
But yeah, this is currently the core demo, what it is about. And the second part is now really the developer workflow for Bobbycar, right? So in this second part I will put on my hat and definitely switch roles now. So I'm not Ortwin anymore, I am Mr. Subatomic, and Benjamin is from now on Mr. Karl Native. He's the senior software engineer, and I am, let's say, the director of Bobbycar engineering and the owner of the project. And yeah, I hired Karl Native because he's a great Java software developer and I want him to extend Bobbycar, right? His job covers the inner loop development — I want him to extend the demo, to code, debug and so on, do unit tests, all the things a good developer does — and also the outer loop: when he's finished with his features, we need some continuous delivery, right? So CI/CD is very important here. That is his job. And what I've done is I've sent him just an email with one link to set up his IDE. And I think from here, Karl, you can get started. I even changed my title. I saw that, it was very, very slick. Yeah. So, thanks, Mr. Subatomic. I've got your email and I saw that you sent me a link. To follow what I'm doing here, I'm just sharing my screen; I hope you can see it now. Yeah. Okay, great. I've already copied this link into my browser, and as you can see, it points to something in OpenShift. Karl, Mr. Subatomic mentioned that everything which is part of the development environment is running, again, in OpenShift. Right, yeah. So the hostname points to OpenShift, and then I see something like a factory — maybe a factory to create a workspace, Mr. Subatomic? Exactly. Cool. And it points to some file. I will click... yeah, just hit the enter button. I'm doing this. Oh, what's happening here? It's initializing a workspace. Wow, this is cool.
Every time I've joined a new company, I had to wait a week or more until my actual environment was ready to use and actually working. So yeah, in the meantime, let's have a look at what I've created here and what this YAML file is all about. I will just make it a bit bigger. Thank you. So yeah, I'm looking into that; I didn't know about this before, but it looks like we have a workspace definition in YAML. Wow, this is really reproducible, right? You can just send your link and everyone will get the exact same workspace. Wow. The components, okay, I have several components with several purposes. There's an AMQ, okay, this must have something to do with AMQ. Ah, okay, the MQTT broker, right? And by the way, you have the exact same container image for your development — like for the MQTT broker — as we have in production. So you essentially develop against the production setup. Wow, so I cannot say "it works on my machine." Exactly. Okay. I also see some plugins. So what we are talking about is a completely new IDE, but I also see VS Code here. I know VS Code, it's a popular development environment — how does that fit in, Mr. Subatomic? Right, this cloud IDE has the same foundation as VS Code, let's say, and this also gives you the opportunity to use most of the VS Code plugins. You will have a look in just a second, and it looks quite similar. Okay. And what I also see is that there are some commands that I can run. Right, you can also configure all the necessary commands, so you don't have to care about setting up all the commands for debugging, testing, deployment, Sonar scan, whatever. We can configure everything you need upfront, so it should be very easy for you to get started. That is one of the main ideas here, and we really have consistency for all our developers. Nice, nice. And I see it's opened in my browser.
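A workspace definition like the one being inspected here is an Eclipse Che / CodeReady Workspaces devfile. A heavily trimmed sketch of such a file might look like this — the image tags, plugin ID, and command are illustrative placeholders, not the demo's actual devfile:

```yaml
apiVersion: 1.0.0
metadata:
  name: bobbycar-dev
components:
  - type: dockerimage            # same MQTT broker image as production (assumed tag)
    alias: amq-broker
    image: registry.redhat.io/amq7/amq-broker:latest
    memoryLimit: 512Mi
    endpoints:
      - name: mqtt
        port: 1883
  - type: dockerimage            # build container for the Quarkus services
    alias: maven
    image: quay.io/eclipse/che-java11-maven:latest
    memoryLimit: 2Gi
  - type: chePlugin              # VS Code-compatible Java tooling
    id: redhat/java11/latest
commands:
  - name: run car simulator
    actions:
      - type: exec
        component: maven
        command: mvn quarkus:dev
        workdir: /projects/car-simulator
```

Because the whole environment — broker, build container, plugins, commands — is declared in one file, sharing the factory link really does give every developer an identical workspace.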
And yeah, it also checked out the code. I can see the Car Simulator, and I'm really curious how this will work out. You have, from now on, 10 minutes to get really productive, you know. Oh, okay, I'll give my best. So, okay, this must be the main application. I can open it, I can see syntax highlighting and all the stuff I expect from an IDE. That's nice. Let's have a look at what this devfile we've seen will do for me. Ah, okay, I can see the commands. So let's run the Car Simulator. I click on this command... ah, and it's running. So what it does is, yeah, it renders logs in my console here. And okay, it's a Maven project, as I can see, and we need to download all the dependencies. Cool. What can I do in the meantime? I see there are several containers — those must be the components from the devfile. Let's try AMQ; there's an AMQ link here, I click on it. You promised me that I can use the very same version, so let's have a look if this is true. Wow, it works. No producers, no consumers. Let's see, what else do we have in this project? Ah, application.properties. I already know this from Spring Boot and similar frameworks, so we also have this in Quarkus. Let's see here. Okay, Kafka is somehow mocked, I guess. Ah, but here, messaging — there are some messaging properties. So let's remove the comments here. It runs here in the same container, so I guess I could do it like this with localhost. Oh, you're doing amazing, man. Yes. Yeah, but it's still starting, right? So please... Okay, let me look at the code. What else do we have? Events — maybe I'll find the code for publishing the events to MQTT. Okay, nice, here it is. So we have to wait a bit. Do we have any questions so far that we could answer in the meantime? Thanks, let's see. No, no questions have been asked. So you're either blowing people away... maybe, maybe not. Let's look at the POM file in the meantime. It's a Maven project, so I will find all the dependencies in the pom.xml.
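Messaging properties of the kind being un-commented here would, for a Quarkus app using the SmallRye Reactive Messaging MQTT connector, look roughly like this. The connector property keys are the standard ones; the channel and topic names are guesses for illustration, not the demo's actual configuration:

```properties
# Outgoing channel publishing car telemetry to the MQTT broker.
# Channel and topic names are illustrative.
mp.messaging.outgoing.car-telemetry.connector=smallrye-mqtt
# The broker runs as a component in the same workspace, hence localhost:
mp.messaging.outgoing.car-telemetry.host=localhost
mp.messaging.outgoing.car-telemetry.port=1883
mp.messaging.outgoing.car-telemetry.topic=bobbycar/gps
```

Pointing `host` at `localhost` works precisely because the devfile runs the same broker image next to the application container.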
And yeah, I can see that we have several dependencies on Quarkus, all the different components, a health check. Oh, I know them — SmallRye, isn't that a project from MicroProfile, Mr. Subatomic? It is, it is. So Quarkus actually implements MicroProfile, that's cool. You're absolutely right. Wow. So you're quite familiar with it, right? Yeah, right, I developed several projects with MicroProfile before, running on a classical application server. So I'm very curious. I hope this will start soon, but we can already have a look at the other commands. Oh, rsync — what is this, Mr. Subatomic? I know it's for synchronizing files. Yeah, sometimes you want to develop locally with your local IDE — I don't know what you use, IntelliJ or Eclipse or something — and you probably want to sync that with this server-side workspace. That is why I have this rsync container component in there, so you could work offline and sync your actual code. Okay, in the meantime the service started, and I clicked on the link in the right navigation and I can see the service. It's actually started. Let me check some of the endpoints. Here's something for health checks, I can find the metrics, nice, and the Swagger UI, very nice. So the Car Simulator also has a REST interface. Let's try it out, send a request. Oh, and okay, it gives me the route that is actually simulated, nice. So what I do next — I think I have to look at the code again. Oh, it's yellow, what's wrong with this code? Ah, there is an unnecessary import. So this will be my first commit: I will delete this. And I'm getting productive in the first 15 minutes, great. Let me see Git — where can I see my change? Stopping the service... And hopefully you don't commit every import removal in general, right? Ha ha. Ah, I can't see it yet. Perception service, still running. Ah, here it is. I have to stage it: "removed import."
Commit. You know, the push is the one button I wish they would have out there, not in a submenu, you know? I get it, they're working with limited space, but that would be the one button I would always have. But before I do the push, Mr. Subatomic told me that I should have a look at OpenShift again, because whatever is triggered after this push must be in this environment again. So let me look at the Pipelines section. Bobbycar-dev is its own development project. There is a car simulation pipeline, nice, and it's actually visualized, cool. I can see the steps. So this should be triggered by my code push. Let's switch back. So, I've pushed this to the Git repository. And yeah, let's wait if something happens. In the meantime... ah, there's something happening. Wow, a new pipeline run is starting, cool. Ah, quite cool, it's animated. Nice. Git clone starts, then we will have our unit tests running, then we do the Sonar scan — I know SonarQube from my previous job. I love the pipeline interface, it's just lovely. And then we deploy the artifact. Yeah, it's so cool, you instantly understand what's happening, right? Yeah, don't be confused: there is a pipeline metrics tab — if you go to the pipelines, there is a metrics tab, which is quite nice — but don't be confused about the actual data there. Our pipelines don't take seven hours or anything; that's just the accumulated demo data. But it's quite nice to see, especially for me, how long it takes and how it works. But we have only two minutes left. So I'm actually pushing back the meeting that I have — feel free to take as much time as you need. Okay, so what I wanted to say is, we've seen that there's SonarQube and that there's an artifact push into some repository. So where can I find all these servers? Let's look at the pipeline — it's actually, again, YAML, as we all expected.
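The stages seen in the pipeline view — clone, unit tests, Sonar scan, deploy — correspond to a Tekton (OpenShift Pipelines) `Pipeline`. A condensed sketch is below; the repository URL and the exact task names and parameters are assumptions for illustration, not the demo's real pipeline:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: car-simulator-pipeline
spec:
  workspaces:
    - name: source
  tasks:
    - name: fetch-source
      taskRef:
        name: git-clone
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: source
      params:
        - name: url
          value: https://gitea.example/bobbycar/car-simulator.git  # placeholder URL
    - name: unit-tests
      runAfter: [fetch-source]
      taskRef:
        name: maven                # assumed Maven task
      workspaces:
        - name: source
          workspace: source
      params:
        - name: GOALS
          value: ["test"]
    - name: sonar-scan
      runAfter: [unit-tests]
      taskRef:
        name: sonarqube-scanner    # assumed task name
      workspaces:
        - name: source-dir
          workspace: source
    - name: deploy
      runAfter: [sonar-scan]
      taskRef:
        name: openshift-client     # assumed: runs oc against the dev project
      params:
        - name: SCRIPT
          value: oc rollout restart deployment/car-simulator
```

Each `runAfter` edge is what the console renders as the animated arrow between pipeline stages.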
And okay, the repository is somewhere in OpenShift, nice. And Nexus, also in OpenShift. So, okay, I can also see the namespace or project name, Bobbycar-dev. So I'm in Bobbycar-dev; let's have a look at the resources here. Go to the topology, and yeah, the whole environment for our development is in OpenShift again. Very nice. I have the Git repository where our source code can be found, and I see my commit, "removed import," actually. Nice. Then we have Nexus, and I'm sure we can find the artifacts here. Okay, the pipeline is still running, so I guess these are old ones, but ours should appear here. And we have SonarQube — ah, nice, there is the analysis. And I hope we have only nine code smells after this run, because I've removed one, right? I've removed the unnecessary import. So let's give it some time. And yeah, I click on that because I'm curious. Oh, I can even see the logs of each step instantly. Okay, it's again downloading all the dependencies; that takes quite some time. Yeah. Nice, it looks great. Karl Native, you're really doing great. For my first 15 minutes, right? Man, I have one question. As you're a Java developer — do you perhaps also have some experience with Apache Camel? Because in our Bobbycar context, all the integrations are built with Camel, and there is also a workspace definition for Camel. I'm just curious if you could have a look. Okay, yeah, I can try. Let's have a look. Ah, there's a second workspace; let's switch to this one. So I can have several workspaces, right? Right. That's cool. Awesome. So you have to tell me what's happening here. I have another workspace, and this one has more capabilities in terms of tooling; it has different plugins. For example, there is this Apache Camel K plugin that helps you develop Camel K integrations.
And for example, you also see the Bobbycar components there, where we have all the Camel K integrations, like the MQTT-to-Kafka service and so on, right? And there is some integration into the IDE as well. So what you could do, just to get started — I'm not sure if you have to log in to, for example, the Bobbycar namespace or project where our actual project is running — you could see the running integrations there and easily start a new integration. I'm not sure if you're logged in. Of course we give our developers admin access to everything. Of course. Only Karl Native. Only Karl Native, fair enough — he's the architect for this whole thing. Okay, I'm logged in. So basically you should be... no, I can't. Okay, but what you could do is just create a new integration. You go to the commands, search for Camel K, and then you see what the plugin offers you. Yeah, create just a new integration file... and nothing is happening. Okay, so there seems to be something wrong with the environment. Finally — at the end of our demo, something should fail. Of course something breaks; otherwise no one would believe it's a live demo. Let's fix it. Okay, maybe restart the workspace, Mr. Subatomic? Should I? It definitely could help; I'm not sure what the problem actually is. Yeah, just stop and start it again. I guess there is some connectivity issue with WebSockets or something. Yeah, it was inactive. It looks like it. Anyhow, let's give it a final try. Yeah. And you know, there's a question here in chat — I'll read it while this is loading. Interesting project: I was going through the repo, and it is mentioned that Bobbycar is a sample implementation of an IoT reference architecture. Is there a reference architecture — like, are you referring to another reference architecture here and just implementing it?
Or is there some higher-level thing that you could point to, where it's like, this is our implementation of a reference architecture? And the answer can be no, that's totally fine — like, this is our reference architecture for doing an IoT kind of deal. Yeah, I mean, there are different reference architectures out there, and they normally have, let's say, four to seven layers, and they all have some things in common, right? So I'm referring to a more or less general reference architecture: if you combine all the things you find out there — also the examples from AWS, from Azure and so on — you see how the architecture looks and you generalize what the core components really are, what technology is behind them, what they have in common. And that is the more or less generic reference architecture for an IoT environment, where you have things like cold path data processing, hot path and so on, different streaming. And this is just one example of how you could build such a thing with Red Hat technologies. So I'm not referring to a specific reference architecture, and as far as I know, from the Red Hat side, we don't have a specific reference architecture regarding IoT either. No — we don't do reference architectures in general at Red Hat anymore, just because there are so many different scenarios now and so many different vendors that we hook up with. There's no one cloud reference architecture — all the clouds are different. There's no one on-prem architecture, because on your premises there could be varying things that are different from what we would suggest or do if we were on site. So the general idea is: this is a reference you can use to implement in your environment. Cool. "Any luck getting it working?" is the question. See... oh, okay, this time it worked.
The old workspace also said that it was offline, so there was a problem. So now we have created a new integration, right? I have Java code, very simple: there's a timer, every second it will trigger a route with the ID "java" and log something at level info. How can I run this, Mr. Subatomic? Yeah, just run it to get you going in this Bobbycar environment. Exactly — context menu, and then you can just start this with the Camel K integration entry, you see the last entry there, right? And then you can specify the different options — so, in dev mode. And if you then switch to the OpenShift environment, for example in the topology view, you should see the integration coming up; you should also see it in the topology. The pipeline has finished now. Oh, okay, cool. Nice, it was even successful. What a surprise. Yeah. Okay. So can we see it? No. Is it already running? It's about to, let's say. No, no. Yeah, it's just waiting. Created. Yeah, now the thing is — you see, right, in the Bobbycar components repository what is checked out there, like the MQTT-to-Kafka service and all the others. What you could do, for example, which is quite nice: you could really develop new versions, let's say, even in the production environment — that is possible. You could grab the data, you could subscribe to the MQTT broker and build your logic with the actual data coming in, right, from this production environment, and test it there as well. This is really a nice thing, because if you are writing or implementing an integration, you need the targets and sources for the integration, and they are usually in the production environment or something near the production environment.
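The generated Java integration described here — a timer firing every second and logging at info level — has a direct equivalent in Camel K's YAML DSL. A minimal sketch (file name and body are illustrative):

```yaml
# timer-demo.yaml — minimal Camel K integration: fire every second, log the body
- from:
    uri: "timer:java?period=1000"
    steps:
      - setBody:
          constant: "Hello Camel K from Java"
      - to: "log:info"
```

With the `kamel` CLI this could be run in dev mode with something like `kamel run timer-demo.yaml --dev`, which streams the logs and live-reloads the integration whenever the file changes — the same behavior the Che plugin triggers from the context menu.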
So what I, as a developer, want is the possibility to push my piece of code into that environment, so that I can test the integration between the real services, the real integration points, right? And this is what is nice here. And I don't know why it takes so long. What we would see here is that you could, for example, directly change the logic and you would see that it gets redeployed instantly. So it's also a kind of live reload, as you might know it from Quarkus, for example — fast inner-loop development in this context. But yeah, obviously there is something wrong; I'm not sure why it's not working. We see our integration now here, but it's not working. So yeah. Okay. Well, do we want to troubleshoot this, do we want to wait, do we want to push forward? We can switch to the CLI, for example, and troubleshoot it. Because the Camel K operator is installed, we have new custom resource definitions, like Integration, and we can have a look at all the integrations in this project. Very slow... I don't even see it. Did you say you were resource-constrained on this cluster? Yeah, yeah. Okay. How is it called? It's very slow; there is something... Yeah, I was about to say something. We also had, let's say, serious DNS problems today; I'm not sure if there is something wrong again in this cluster. Maybe that came back — the last few commands errored out. It's so very slow. No, I don't see an IntegrationKit at all right now; I don't think it's created. Yeah, looks like it's not created. Should I rerun this? I don't know if it's a good idea, but you can just try it. Yeah. What does it say? Starting... Okay, it's asking you which config. There you go. Updated. Okay, let's see if there is something. Oh, I have an idea.
Do we need to change the project? I didn't change the project before. Which project are you in? Yeah — we are in the CodeReady Workspaces project, right? So I need to switch. You're not in the Bobbycar project. No. Okay. So what you could do now is also install the Camel K operator in your CodeReady Workspaces project, to make it work in your development environment. But if you don't have the Camel K operator there, it won't work — and that's why I haven't seen the integration. You didn't see the other integrations either. Okay, I see — I hadn't made a namespace selection. Okay. Now we should also see it in the topology. It should appear here... yeah, it shows up here. Oh, okay. Nice. Yeah, now you could just change something, right, and then... Yeah, I will switch back. Sorry. Okay. So we can see now in the output: "Hello world, hello Camel K from Java." And we can just change this — "from OpenShift" — and see... now it's redeployed, rebuilt. And... let's see. Yeah, "Hello world from..." — it hates me. I see it, yeah, with every new line. Nice. You still have fun, as you can see. Yeah. What you could do — let's just show one of the examples we have, and this is not the hello world. We could look now, for example, at the Kafka-to-Data-Grid service, right? So this is the actual implementation where we do the zone changes and the caching. This is the actual implementation we use here. Yes, it has a lot of inner classes, because we needed to do it in a single Java class. But the nice thing really is: we could start developing or extending it in this environment and test it with the data coming in and so on. So I think it's quite nice.
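The Kafka-to-Data-Grid service mentioned here is a Camel route consuming zone events from Kafka and caching them in Data Grid (Infinispan). A rough sketch of what such a route could look like in the Camel YAML DSL — the topic, cache, and endpoint addresses are assumptions, and the real demo implements this as a single Java class rather than YAML:

```yaml
# kafka-to-datagrid.yaml — sketch: cache zone-change events in Data Grid
- from:
    uri: "kafka:bobbycar-zonechange?brokers=bobbycar-cluster-kafka-bootstrap:9092"
    steps:
      - setHeader:
          name: CamelInfinispanOperation
          constant: "PUT"
      - setHeader:
          name: CamelInfinispanKey
          simple: "${header[kafka.KEY]}"   # use the Kafka record key as cache key
      - setHeader:
          name: CamelInfinispanValue
          simple: "${body}"
      - to: "infinispan:zones?hosts=datagrid-service:11222"
```

Running a variant of this in dev mode against the live MQTT/Kafka traffic is exactly the "develop against production data" workflow described above.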
Oh, and in the meantime it shows up here — yeah, because I changed the project; this was my fault. These are the four running services we have right here. Yeah, just check the logs of one of them, right? This should be now... follow log. Yeah, and you see, okay, these are the metrics and positions, which are then exposed via WebSocket for the dashboard, right? Dashboard streaming, yeah. These are the running integrations. Cool. Nice. This is the other integration. Yeah. You're offline again. I'm offline again. Okay. I don't know what... Activity... no, I'm online again. So, but I think, in essence, this was our demo. I think we covered all the parts now. We had, yeah, human error — let's say it happens — but we found out how to fix it. We are at the end of our demo and happy to wrap up. Maybe there are still some questions, I don't know. Well, there's a question here about what our Twitch channel address is, but they found it right as I typed it. So cool. So yeah, thank you very much for this demo — an awesome integration of multiple cloud native capabilities here. The pipeline is still running... yeah, we should also see that. Yeah: nine code smells. And we should also have a new artifact — let me refresh... I think, yeah, version six, so it's also released. This is the proof that the pipeline actually did something. Nice. All right. Well, Karl Native and Mr. Subatomic, thank you very much for coming on the channel today. But seriously, Ortwin and Benjamin, this is really cool stuff, right? This architecture kind of blew my mind when I first saw it. So it was one of those things where I hope you go back and watch this if you missed something. And if you're watching after the fact and you have any questions, feel free to reach out to me — Short at redhat.com — or you can find me on Twitter.
Just @ChrisShort, with two S's. And I can get any questions you might have routed to the right people here. And coming up on the show, in a little over half an hour, we're going to be talking about database disaster recovery with our friends from Crunchy Data, so please tune in for that. And if you haven't, subscribe — and, you know, share if you see fit; I always appreciate that from the audience. So yeah, if there's nothing else, Karl and Mr. Subatomic, I will sign off for now and we'll catch up with you all here in a little bit. But thank you for coming on — this was an awesome demo, despite the human error. That happens. Thank you. Thank you for having us. All right, have a nice day. Yeah, take it easy. Everybody stay safe out there.