So my session today — I did a similar session before, but I'm going to approach it from a different angle. We're going to talk about game telemetry, and I'm going to use Red Hat OpenShift Streams for Apache Kafka to show how we process game events and telemetry from a game server with Kafka, and then expose them through APIs built with Quarkus and the like. So hopefully it's a little bit of fun. And speaking of fun, I hope it's friendly as well, so everyone here, feel free to ask questions in the Q&A. I had to get a video game reference in — I love this game — and it's fun to start with one in a session that talks a little bit about video games.

But before we dive in, I should talk about Kafka itself. What is Kafka? A lot of people probably know, but it's an open source distributed event streaming platform. What that really means is that it's a distributed system comprised of multiple nodes, and that gives you high availability, horizontal scaling, and a fault-tolerant log of events for distributed processing of data. You can use it for streaming analytics, for high-performance data pipelines, and simply for integration as well, and we're going to touch on all of that in this session.

In terms of how it works, without going into too much detail: this is a super high-level diagram that takes some liberties with accuracy, but basically you have producers, which produce messages; those get sent to brokers in the Kafka cluster, and topics are how you segment your messages. Topics are broken down into partitions, so you could have, say, three partitions with multiple consumers consuming from them in parallel, which lets you scale up to large volumes of data and process them in real time very quickly. And because you have multiple brokers, there's replication between them: a partition could be replicated across two of your three brokers, and that gets you fault tolerance and high availability as well. Even if one broker goes down, you're not in serious trouble yet — you have time to recover. So it's really cool technology.

And it has multiple APIs. In a previous session I talked about the Streams API, but today we're going to focus on just the second and third bullet points here: producing and consuming. As you can imagine, Kafka is a kind of messaging platform, so you produce messages and you consume them. Then, of course, there are admin APIs to inspect brokers and manage your clusters. And the really powerful APIs are the Streams API, which lets you do very interesting processing on the data in your Kafka topics, and Kafka Connect, which makes it easy to take data from a source system — a database, an S3 bucket, or even just a filesystem — and pump it into Kafka, and also the opposite: take data out of Kafka and pump it somewhere else using Kafka connectors. And they're very low-code. So it's a really cool technology and ecosystem, and it's all open source.

But getting back to today's session: we're talking about video game telemetry, and that's maybe a bit of a vague term, so let's talk about it and see why we care. I'm going to use a practical example to explain it: at Red Hat Summit this year, we built a game.
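To make the produce-and-consume model concrete before we get to the game, here's a minimal sketch using the plain Apache Kafka Java client. The broker address, topic name, and payload are placeholders for illustration, not values from this demo:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TelemetryProducer {
    public static void main(String[] args) {
        // Placeholder connection details for illustration only.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record key (here, a match ID) determines the partition,
            // so all events for one match stay on one partition, in order.
            producer.send(new ProducerRecord<>("game-events", "match-123",
                    "{\"type\":\"attack\",\"hit\":true}"));
        } // close() flushes any buffered records
    }
}
```

A consumer on the other side subscribes to the same topic; with several consumers in one consumer group, Kafka spreads the partitions across them, which is where the parallelism described above comes from.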
And that game was called Ship Wars, because each year at Red Hat Summit we generally do an interactive demo with the audience, and the purpose of it is to show off tech. This demo is pretty straightforward on the surface: it's a five-by-five version of Battleship, which is a very small grid — only 25 squares, whereas regular Battleship is a 10-by-10 grid of 100 squares. But this simplified Battleship was interesting because it ran in your web browser, and you weren't playing against other humans: you were playing against an AI. You're going to get to play it in this session to help me with the demo in a little while. We also added that bonus round you just saw, where you can tap to get a high score. So it's a fun little game for a tech demo.

What was really interesting is that we deployed it in three regions: in North America on Azure, in Europe on AWS, and in — I think it was Tokyo — on Google's cloud. What was even more interesting is that we were using Red Hat OpenShift — Red Hat's opinionated, productized version of Kubernetes — to deploy the game's back-end services. I was one of the people who worked on this, with a bunch of other awesome developers, and as developers we would just build our container images and push them to OpenShift; it didn't matter which cloud it was running on. That was quite nice, because I didn't have to worry about infrastructure: I just built my images and deployed them, OpenShift scaled them up and down, and I didn't have to think about the underlying infrastructure provider. And what was even more interesting is that we synced — geo-replicated — the game state between all three regions, so you could get into competition with people across the world for a high score during this demo. Katia Aresti from Red Hat did a cool session on that, so check that out too.

But getting back to Kafka and events, let's see how this game telemetry scenario comes together. If we think about this Battleship game, there are a few events in this game — in any game, really. First, the human player, which is you or me, connects to the game and positions their ships on the grid. Then the AI we programmed connects to play against you; that's happening on the server side. Those are events that the game server receives: your connections. Then, as you play the game, you get hit, miss, hit, miss, and so on as you place your pegs on the game grid. Those are all events too. They're interesting pieces of data, not just for the game rules but also from a telemetry point of view — for reporting, for understanding how people play your game, and for gauging how effective your AI is, for example. So this is all really valuable information. But the game server isn't really concerned with the value of it, you could say; the game server just wants to execute the game rules. Other people care, though — data scientists, or marketing people, for example. If you play any of the big e-sports games, they have huge merchandise lines and items you can buy, so marketing would care about players' activity and how they play the game.

That was a very simple example, but if you look at the real world and the real gaming industry, it's huge.
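Each of those gameplay moments can be modeled as a small, self-describing event payload. The session doesn't show the actual Ship Wars schema, so the record below is a hypothetical illustration of what an attack event might carry:

```java
// Hypothetical shape of a Ship Wars attack event. The real payload
// isn't shown in this session, so these field names are illustrative.
public record AttackEvent(
        String matchId,   // which game the event belongs to
        String playerId,  // who fired the shot
        int x,            // column on the 5x5 grid
        int y,            // row on the 5x5 grid
        boolean hit,      // outcome according to the game rules
        long timestamp    // when the shot was fired (epoch millis)
) {}
```

The same event serves both audiences described above: the game server only needs `x`, `y`, and `hit` to enforce the rules, while downstream consumers can aggregate over `playerId` and `timestamp` for reporting.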
And modern video games are constantly evolving as well. They get updated after they're released, and the teams behind them need to decide what to focus on when they update. They have millions of daily active users, and those users generate tons of events, which are data points. That's where Kafka comes in, as you'll see in a minute, because you need a high-throughput pipeline to process the number of events that come from a video game. For example, on the right here I have the top 10 games on Steam, the popular PC gaming platform. You can see Counter-Strike at the top with 600,000 players at the time I took this screenshot, all playing at the same time. Imagine each of those players is responsible for just 10 or 20 events: 10 or 20 events times 600,000 is 6 to 12 million unique events. Ingesting and processing that data is a monumental volume. Kafka is really good at ingesting data, and your downstream consumers can process it as quickly as they're able without being overloaded. And you don't want point-to-point connections everywhere, either.

Looking at a practical example from a game I play — and this one isn't associated with Red Hat — there's a popular online shooter that exposes an API of telemetry events. I can use their API to build applications, and there are third parties who build applications on it too. The telemetry they expose includes 52 unique event types from the game. When you play a match in this online game — when you shoot another player, heal yourself, get in a car and drive, or just jump out of an airplane; it's a crazy action game — those are all events, and they record every single one of them and expose them through their API. So it's really cool, and you get all these stats: there are global leaderboards for how people play, and there are even tools that use this API to show replays of your games afterwards in a web browser, so you can see how everyone played. These third-party use cases are really interesting. And even think about Twitch: Twitch has real-time data use cases as well, where streamers stream their games. Streamers might use APIs from the games they play to link into their Twitch stream, and there's interactivity in Twitch itself: its APIs are real-time, people send messages to streamers, and those messages appear in the streamer's feed, all powered by Twitch's APIs. So there are lots of real-time use cases, and these are just the fun ones, not even the business-oriented ones. You can see how important real-time data is, even in this one industry.

And the reason Kafka is so powerful is that it gives you that hub-and-spoke type of architecture. We know Kafka gives you high throughput for huge numbers of events, but it also allows you to decouple things. Your game server is going to emit all these events, and it could emit them to each service individually, but that doesn't make sense, because then you create strong coupling between the different services that process and need that data. Whereas if your game server just sends the events to Kafka, it doesn't care who consumes them downstream: it uses a common protocol, and whoever needs them downstream can get them. It also gives you a nice separation.
So your game server development team can just push things to Kafka without needing to think too much about what happens downstream. They do need to format the data nicely, but ultimately they don't need to talk to every single team that needs the data, which creates a nice separation into layers. It's a nice architecture.

Coming back to our demo and our game today: what we did was forward all the events from our game servers — each server in each region — to a centralized Kafka, which was Red Hat OpenShift Streams for Apache Kafka. So we were using a managed Kafka service. We actually also had a local Kafka in this architecture. I'm not going to spend too long explaining it, because it's kind of crazy, but on the left you have the game, and the game connects to this green game server here. When the players take actions, the game server emits them as events; those events go to a Kafka broker local to the region, and then they get forwarded to our managed Kafka. This is kind of nice, because there's other stuff going on here: there's a leaderboard, there's the scoring that generates the leaderboard, and there's also a cache. When the game server emits these events, they get processed downstream by things like the leaderboard and the scoring, but the game server doesn't need to worry about how those work — it just emits the events, and the other services do their thing with them. That was quite nice for me: I worked on the game server piece, and some of the other folks on the team worked on the scoring and the leaderboard. We had to agree on what data would be exposed, but I didn't have to send anything directly to them at a specific URL or in a specific API structure. I could just emit events and move on. So it was really nice.

And speaking of Kafka, the OpenShift Streams for Apache Kafka we used is a service you can try, and I'm going to show it in the demo today. You can head over to our development preview and try it out for free. I think there was someone on just before I joined the session — I can't remember her name — who did a cool demo of Red Hat's Developer Sandbox, which you can use to create software, deploy it on OpenShift, and also connect it to this service, which is very cool. And this service is built on open source, as you might expect from Red Hat: we run these Kafka clusters using Strimzi, an upstream open source project for running Kafka clusters on OpenShift. So even though this is a managed service, it's built on open source technology, and we're not saying you have to go managed — you can also run Kafka on your own clusters with something like Strimzi. What's nice about this service is that it's fully managed and hosted, so you don't have to worry about operating a Kafka cluster. Personally, I'm not an expert at managing Kafka, but I do know how to interact with it and send events to it. The service lets Red Hat take care of running the infrastructure so you can focus on building your applications. It has a nice UI, a CLI, and APIs, and it even has a feature called service binding that makes it easy to inject credentials into your applications, as you'll see in my demo in a moment.
And of course, being a managed service, it has a high SLA — 99.95% — with 24/7 global support. So without further ado, I think we've got the gist of what I'm going to demo, so I'm going to jump right in.

This service, as I said, is a managed Kafka service from Red Hat, and you can find it on Red Hat's Hybrid Cloud Console at console.redhat.com. This is what it looks like when you log in and check out the service. I'm on the dashboard here, and I can see my Kafka clusters. I have one Kafka cluster, and you can see it says I created it 18 hours ago. I'm using the development preview that all of you can try out after this session if you're interested, and that means my cluster will be around for 48 hours before it gets cleaned up automatically. I can even click on my cluster and see details about it: you can see I've deployed it in Northern Virginia on AWS, and I can get the connection information — the bootstrap server. We also have this service account section here, and even information on how you authenticate to the cluster. Since it's on the public internet, we want to make sure you're using SASL and SSL to connect to these clusters, and you'll see that in the demo in a moment. If I click into my cluster, I get some nice dashboards with metrics like used disk space and how many bytes are going in and out of the cluster. My graphs aren't particularly interesting, because I haven't been doing much with this cluster. We can also go to things like topics — I have no topics at the moment; I'll create some in a minute — and consumer groups, and I won't have any consumer groups either, because I haven't deployed anything yet.

So the first thing I need to do is show you this game, and I have it running here on OpenShift. If I go to OpenShift, I have the four core services for the game running, and one of those is an Nginx container; I can visit its public URL to see and play the game. Now, the problem is, I haven't actually connected my game server to Kafka yet. If I go over to this Node.js pod and take a look at the logs — we can see, scrolling down, there's a bunch of logs — my server started, but it says it didn't find any Kafka bindings. So it doesn't know how to connect to Kafka yet, and I won't get any data into the Kafka cluster I've spun up over here on Red Hat OpenShift Streams. I need to fix that before I get you all playing, because I want to capture events from your games. So let me show you how I can do that.

There are different ways to do it, but I'm going to show you how to do it with our CLI tool. If I go to my terminal here, what I need to do quickly is export a token. Yeah, there we go. Okay. So I'm going to use the rhoas CLI — the Red Hat application services CLI. I have it installed already, and you can see it has a bunch of different commands available: I can use the cluster command to perform operations on my OpenShift cluster and link it with my Kafka instances, and there's a kafka command here as well. So let me show you how I can interact with my Kafka instances using this CLI. I'll run the rhoas kafka list command, and you can see it lists the same Kafka instance we saw in the UI, showing that it's on AWS. What I want to do next is check whether I have any topics — I don't think I do.
So before I actually connect my Kafka cluster to my Node.js back end, I definitely need some topics, so let me go ahead and create them. I have the list of them here in my notes, because there are a few I need to create for this game. The command to create a topic is pretty simple: it's rhoas kafka topic create, and you give it the name of the topic and the number of partitions. I press enter to run it, and it creates the topic; some JSON is printed describing the topic's configuration, which I don't need to worry about for this demo. So I'm going to copy and paste those commands and create the topics I need. And if we go back to the UI and inspect our Kafka instance's topic list, you can see they're showing up now. So I have the topics necessary to play the game.

Next, I'm going to connect my Kafka instance to my game server, which is this pod on OpenShift. Let me show you how. I'll use the rhoas cluster connect command — sorry, I initially said bind; connect is the one I mean here. It's going to allow me to create a link. Hmm, I don't have a token — what's going on? Oh, I've been logged out of my session, so I need to log into my OpenShift cluster. I'll copy this token here — hopefully no one does anything nasty with it — and now I can log into my OpenShift cluster and run the command again, and it's going to create some secrets inside my project namespace. (From the chat: "You're logging in, but you forgot to copy the last three characters, so the port number is wrong." Thank you, Ed.) There we go. I've logged into the cluster, and now if I try to connect — yep, there we go, it's running this time.

What that did was create a resource known as a KafkaConnection. For example, if I run oc describe on the KafkaConnection, we can see there's now a resource in my Kubernetes — my OpenShift — namespace, and it contains all the information that instructs my application on how to connect to the Kafka instance. You can see it says it needs to use SASL over SSL, it has the bootstrap server host, and it also created a secret containing the service account information — a username and password — used to connect.

So all I have to do now is run one more command: rhoas cluster bind. That's going to ask me which of the services in my OpenShift cluster I want to connect to my Kafka instance, and that's going to be my game server here. I'll say yes, and if I switch back to OpenShift, you can see a new pod of my game server spinning up. If we go to the logs — it's going to take a second, because the game server has to run through a startup process when it restarts — it's going to connect to my Kafka instance and be able to send messages over there. And you can see here, "Kafka producer connected" to my broker, down here. So now I'm going to share this URL with all of you and get you to play the game, and then we're going to do some fun stuff with Quarkus. I'll go to the stage here and share the URL so you can all join in. Go ahead, open up the game, and play in the background — just fire a few shots.
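For reference, the CLI workflow narrated above looks roughly like the following. The exact flags can differ between rhoas CLI versions, and the topic name shown is just one of the ones used in this demo:

```sh
# Authenticate with Red Hat application services and the OpenShift cluster.
rhoas login
oc login --token=<token> --server=<cluster-api-url>

# List Kafka instances and create a topic for the game events.
rhoas kafka list
rhoas kafka topic create --name shipwars-attacks --partitions 3

# Create a KafkaConnection resource in the current OpenShift project,
# then inspect it to see the bootstrap server and SASL/SSL details.
rhoas cluster connect
oc describe KafkaConnection

# Inject the connection's credentials into a chosen deployment
# (the game server) via service binding.
rhoas cluster bind
```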
And what I'm going to do now is show you how I can locally develop a Quarkus application that deals with some of the events coming from my cluster. For example, I have a kafkacat script here — kafkacat is basically a CLI that lets me interact with my Kafka instance running in Red Hat's cloud — and I'm going to look at the Ship Wars attacks topic. And oh wow, okay, you're all busy: there's tons of information flowing through here. I'm seeing all of your attacks being streamed from the OpenShift cluster to my Kafka instance. But there's only so much that can be done with this data sitting in Kafka, right? I'm not going to give external users permission to connect to my Kafka instance directly — I'm going to expose an HTTP API, probably. Within my company I might grant direct access, but I might also have a team responsible for a data pipeline that exposes services internally.

So let's take a look at VS Code and something I've been working on. Here I have a Java service written with Quarkus. It's fairly straightforward, because Quarkus makes it easy to build these things — I'm not really a Java developer, and even I can get this working, which is fun. This application exposes a telemetry endpoint, and you can see it's using MicroProfile Reactive Messaging to connect to the Ship Wars attacks topic. Effectively, this creates a consumer, and I'm going to expose an API endpoint that lets us see all the shots coming in. A client could then connect over HTTP, with something like an API key, to access this service. I can run this really easily in my local terminal: I'll run mvn quarkus:dev to start the service on port 1990, and I'll be able to see all your attacks through this HTTP API. So I'm just developing locally using Quarkus, and if I make a curl request to the shots endpoint in my other terminal — oh, there we go, someone's making shots; someone's helping me here — the attacks are streaming through over HTTP.

But you're probably not going to expose an endpoint that returns every single shot. If you look at the example I gave earlier — the Battlegrounds game I like — they have endpoints where you can filter for a specific player. You might not want all the attacks for every player; you might want to look at just your own profile, like a typical API query. Using Quarkus, that's actually easy to do. All I need to do is add a query parameter here, with something like @QueryParam — it could be a match ID, for example, as a String. Then, on the shot stream, we can apply a filter: for each event — this is the actual data coming through from the Kafka topic — I can get the data, get its match field, and check that it equals the match ID that was passed in. I should do a null check and so on, but this is a demo, so we won't. If I now go back to the game, I'll get my own match ID and start playing — you can see my match ID here at the bottom of the UI. Now if I start playing... I already got a hit, which is quite nice.
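For context, consuming that topic from a terminal with kafkacat looks something like this — the bootstrap server and service account credentials are placeholders:

```sh
kafkacat -C -t shipwars-attacks \
  -b <bootstrap-server-host>:443 \
  -X security.protocol=SASL_SSL \
  -X sasl.mechanisms=PLAIN \
  -X sasl.username=<client-id> \
  -X sasl.password=<client-secret>
```

And the Quarkus endpoint described above would look roughly like the sketch below. The class, path, and channel names are assumptions based on the narration; the channel would be mapped to the Kafka topic in application.properties:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import org.eclipse.microprofile.reactive.messaging.Channel;
import io.smallrye.mutiny.Multi;

@Path("/shots")
public class ShotResource {

    // MicroProfile Reactive Messaging channel backed by the
    // shipwars-attacks topic (wired up in application.properties).
    @Channel("shipwars-attacks")
    Multi<String> shots;

    @GET
    @Produces(MediaType.SERVER_SENT_EVENTS)
    public Multi<String> stream(@QueryParam("match") String matchId) {
        // With no ?match=... parameter, stream everything; otherwise
        // only emit events for that match. A real service would
        // deserialize the JSON and compare a proper field (and use
        // @Broadcast if multiple clients subscribe concurrently).
        return shots.filter(s -> matchId == null || s.contains(matchId));
    }
}
```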
But what I need to do is pass the match ID as a query string parameter to my Quarkus application's endpoint. And now we basically have an API where we can query our own matches in real time. So if I start playing here — there you go, you can see I have three shots, and now the AI fires again and I get the fourth. So it's very easy to start connecting your applications to this managed service, even while developing locally, and you can see how you can build up these APIs to expose your telemetry to public or internal users within your company. That's quite cool.

Another service I have here is an aggregator — a service I've also been developing. If I go to my VS Code here — where's it gone? — and open VS Code: again, this uses Quarkus and Kafka Streams, and I'm just developing it locally like I always would. Taking a look, we again have a REST endpoint — the shot data endpoint — and you can query for a user's games: the record of all the games they've played. If some of you want to put your username in the chat, I can look it up here, because I'm not going to push this one to the OpenShift cluster; if not, I can just look at my own. So I'm going to run this, and you can see it has a path where I can pass the username. I'll run it here, and I'll be able to bring back some of your game records from this service that's connected to my Kafka cluster. This is starting up using Quarkus, and if I give it a minute, it's going to build up those records using Kafka Streams, and then I can query the endpoint.

If I go to my other terminal, I can do a curl. My name is PuddleKeeper here, so I can get all the games that I've played — I haven't played many, so it's going to be pretty empty, but if you've played more than one, I'm guessing we'd get a bunch of records back. Hmm, that didn't work the way I expected; let me try that again. The demo gods — are they with us? Kafka Streams is not connected. Let's see here; let me restart. If it doesn't work, it doesn't matter — we saw the other example. But what I'm trying to show is that we're collecting all these telemetry events and can easily expose them using APIs, and we can build those with Quarkus, or Java, or Node, or whatever you'd like to use.

Oh, I never created the topic that this service needs. So yeah, let me show you that. If I go back over to the UI, I need to create a topic for this particular service to work. I can go in here, enter a topic name, and click Next; I can set as many partitions as I need — I'll just go with three — keep the default retention settings, click Next, and finish. And now I just have to create one more topic, so I'll go ahead and do that. Everything behaves nicely now, so what we should hopefully see is that I'm able to query this API and get back game records. Let me see if someone put their username in the chat... Volcano Wonder — let's do that; let's query for that when this starts up. So I'm starting up another service over here. Oh, the topic's not there? Why is it saying that? Oh — because I never finished creating it; that's why. All right, this should start up in a second, and we'll see loads of events getting processed. I'm just developing locally, but it still works just fine, and then our API should start working. Yeah, there we go.
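As a rough sketch of what that aggregator's Kafka Streams side might look like, here's a topology that re-keys attack events by player and counts them into a named state store that a REST endpoint could query. The topic, store, and helper names are assumptions, not the demo's actual code:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class ShotAggregatorTopology {

    public StreamsBuilder build() {
        StreamsBuilder builder = new StreamsBuilder();

        // Raw attack events; keyed by match ID in this sketch.
        KStream<String, String> attacks = builder.stream(
                "shipwars-attacks",
                Consumed.with(Serdes.String(), Serdes.String()));

        // Re-key by player so the count groups per player, then
        // materialize the counts into a queryable state store.
        KTable<String, Long> shotsPerPlayer = attacks
                .selectKey((matchId, event) -> extractPlayer(event))
                .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
                .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as(
                        "shots-per-player"));

        return builder;
    }

    private String extractPlayer(String eventJson) {
        // Placeholder: a real implementation would parse the JSON
        // payload and return its player/username field.
        return eventJson;
    }
}
```

The REST endpoint would then read the "shots-per-player" store by username via Kafka Streams interactive queries, which matches the query-by-username behavior described in the demo.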
Okay, we're seeing tons of events getting processed, which is perfect, and now our API over here should start working. Once I'm happy with these services, I can deploy them on OpenShift or Kubernetes or wherever I like, really, and hook them up to a production cluster — this is just a development cluster, obviously — and expose them via some API management solution to my users. Ah, it's not working. Well, not to worry; we saw one example of exposing these APIs and having fun with them. I know we're running short on time, so I'll end here by saying: try out the service. If you're a better Java developer than me, you might not run into bugs like I just did, because you'll write your Java services correctly. But I'd like you all to check out the service: head over to Red Hat Developer, sign up for free, give it a go, and have fun with Kafka without having to worry about managing it. So Edson, I think it's back to you. All right, thank you, Evan.