Hello everyone. Welcome to our next session, building modern microservices at scale with Data Grid and Quarkus. My name is Ruhot Mons. It's my pleasure to welcome you all and to introduce our speaker for today's session, and that's Shafshad. Hey, thank you Ruhot. So glad to be here. I'm going to talk about building modern microservices at scale with Red Hat Data Grid and Quarkus, and I'm going to deep dive into some of these things. So welcome to the session. I hope you're excited and ready for a quick presentation and a demonstration as well. Before you kick off, I do want to remind people that you can ask questions either via the chat window or via the Q&A window. The recording will be made available afterwards. And please also follow Shafshad; he put his contact details here, so don't hesitate to reach out. We will try to answer all the questions live if they come up, and if you have questions later, we will try to answer those as well. So reach out via these channels. And with that, take it away, Shafshad.

Awesome. Thanks, Ruhot. Any questions I'm not able to answer, we'll definitely get back to, so feel free to just post them. I will take the questions as quickly as possible, and Ruhot, you can help me with some of those as well. So, about me: I've been a Java developer for quite a long time actually. Maybe I shouldn't really say how long, but if you've done Swing and native UI back in the day, then you'll find me in that zone as well. I'm a developer advocate at Red Hat. I work with architecture and solution architecture, and I've also been an engineer in the past. I'm an open source enthusiast, an InfoQ editor for the Java queue, and of course a volunteer coach and trainer as well. So lots of things, but most interestingly, ask me anything about Java and backend architecture, and that's exactly what I'm going to talk about today. I have my contact details here in case you want to reach out as well. I'll definitely post the slide deck later so you can grab it too.

Before I go in, what am I going to cover? I'm going to talk a little bit about Quarkus, the ultimate Kubernetes-native framework. I'm going to talk about Data Grid, which is a caching solution; if you have heard about the Infinispan upstream project, we're going to go into a bit of detail around that and how it works on Kubernetes, on a distribution like OpenShift, and what features it has. And of course, I will be showcasing demos throughout this talk as well. We're going to do a demo of cross-site replication, and hopefully you'll see how data is replicated across different data centers and you'll be able to see your cache entries, et cetera. So hopefully it's interesting; stay tuned for this.

So what is Quarkus? Quarkus is a Kubernetes-native framework. I say it's the ultimate framework. It's not a new thing; it's on version 3.0. But it is definitely one of the frameworks that you, as a Java developer, should look at. It gives you the possibility to write microservices. It gives you the possibility to write serverless functions, whether you do that on AWS Lambda, Azure Functions, or if you're writing functions on Knative as well. If you're doing microservices and working with databases like MongoDB, Postgres, and others, or you're doing streaming and event-driven design, Quarkus has a lot to offer when it comes to that, whether it's rules, whether it's integration, et cetera.
But most of all, Quarkus is also very fast when it comes to booting up. It's able to do that in a relatively very short time, as you can see on the screen on the right here. It has the possibility to be compiled to native using GraalVM, and of course it can also be used in JVM mode, which, as a lot of people don't mention, is also optimized much more than the traditional stack you would see. The figures I'm showing here are test figures from a simple REST API endpoint. When we talk about memory, it also takes a lot less memory. As you see at the bottom of the screen, a traditional app would take a lot more memory, but with Quarkus in JVM mode it's almost half of that, and native goes even further. I don't want to go into a lot more detail because there are other talks covering this today. I think Kevin showed something around Quarkus, and my colleague Eric Deandrea is also going to talk about that later today. So if you want to learn more about Quarkus, those are the talks to get into.

What I love most about Quarkus is the way it lets me develop applications. It's simple. It fits into my workflow. I don't have to restart things. I can have Testcontainers integrated really nicely. I have continuous testing, so I can test my application while it's actually running on my machine without restarting it over and over again. It has a nice CLI, et cetera. But before I go forward, let's just do that: let's quickly take a look at what a simple Quarkus application would do.

So here I have my console, and I hope you can see my console. If not, then Ruhot, please just tell me. I'm using the Quarkus CLI, let's say version 3.1.1.Final. If you're using SDKMAN or something similar, or you want to download the Quarkus CLI directly from quarkus.io, you can do that too. What I will do now is, of course, create an app. I already have a demo-1 app because I tried this before. So I'm passing parameters to Quarkus saying: create an app, it's called demo-2, and it should have a package called, let's say, demo-2. So here we go. It's going to create a simple Maven project for me. If I go into my app here, it's a simple Maven project; it has source, et cetera. So let's take a look at that. Yes, I'm going to make this bigger so you can see it as well. It has generated a project for me, which is a simple greeting resource project. It has an application.properties, which is currently empty. And then, of course, it creates some tests as well, right?

If I try to, and let's talk about developer friendliness or developer joy here, if I just do a simple quarkus dev over here, it's going to spin up my local Quarkus application, which is this one, running. And I see that it gets a little slow, probably because I'm streaming right now, but let's see that it goes fine. It's going to spin up the project over here. You can see that the application has started on localhost:8080. But at the same time, you see that it has options like, hey, there are tests, do you want to run tests? So if I press R, it's going to run my tests, and it says that everything is running and successful. So it's awesome; the app is actually running. If I go back to my browser here, here's my app, which is running. I have a hello endpoint, which returns, let's say, "Hello from RESTEasy Reactive", a simple app, nothing specific at this point.
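For reference, the greeting resource generated by a freshly created Quarkus 3 app looks roughly like the sketch below; the package name is assumed from the demo (demo-2), not confirmed in the recording.

```java
package demo2;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

// Minimal JAX-RS resource of the kind "quarkus create app" scaffolds by default.
@Path("/hello")
public class GreetingResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello from RESTEasy Reactive";
    }
}
```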
If I go back into my resource and just change this greeting message to "Hello from DevNation" or "Hello DevNation", here you see that my tests already start failing. And this is what continuous testing is doing: it's continuously testing. I could stop it or pause it if I wanted to. But it is failing because, if I go and look at the specific test, it's expecting a different message than the one the endpoint now returns. So let's just take that message and make sure the test expects the right message, paste it over, and now you'll see that it starts passing. And if I go back to my app here and hit hello, you see that it has simply changed, and I don't need to restart my application at all.

Another thing that I could do now, while this is running, is go in and add an extension. So here I'm going to add an OpenShift extension. What the OpenShift extension does is simply... Could you do me a small favor? It's very small, so could you boost it a little bit? Or maybe choose the white background? Maybe that helps; the audience has some difficulty reading along with you. Okay, hang on a second, we're going to do that. This is just a much smaller demo, so we're not going to take too much time on this, but yeah. So in the background, what has happened is that Quarkus has added the extension into my POM file, the OpenShift extension, as I can see here. So now I can actually deploy to my Kubernetes cluster, which is OpenShift, once this extension is added. At this point, I haven't restarted the app; I have done nothing of the sort.

So what I'm going to do now is simply go and deploy this application to OpenShift. I could have done this from my command line as well, and we're going to do that later, but just to show you: all the extensions that I have are listed here in the Dev UI, which has all the details. I'm sure some of the other talks are going to go into more detail. But here I can say, okay, deploy my application to OpenShift, and accept the untrusted certificate, because my OpenShift environment has an untrusted certificate. Can you boost this as well a little bit? Because it's rather small still. Just a simple zoom in. Hang on. It's not zooming in. Demos are awesome. Why is it not zooming in? Okay, there you go. That's much better. Thank you. Sorry about that; somehow my computer keys are saying, hey, don't do this anymore. No worries, at least the audience can see now. Thank you. Perfect.

So here, of course, my application is being deployed on OpenShift. So where's my OpenShift cluster? Again, I probably need to zoom this in as well. Sorry. There you go. If I go into my project's developer console, this is my current project, demo-2, and here my demo-2 app that we were using is now deployed onto OpenShift while we were fixing our zoom. Simple definition. Of course, zoom in again. Hang on. There you go. That's deployed. So in simple terms, it's a simple framework that is Kubernetes-native, and it helps you do that. And this is pretty much all I'm going to show about Quarkus today, since this is not a talk about Quarkus, so bear with me. Good stuff. It's a great framework, and we're going to see a little bit more of how it works with a cache, because if I add, say, a Postgres extension, it's going to spin up a test container in the background as well, but I'll show some of that later.

So what is Red Hat Data Grid? Red Hat Data Grid is an in-memory data grid that stores key-value pairs. It supports multiple data types.
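The test the speaker adjusts here is, in a default Quarkus project, roughly the REST Assured test sketched below; the expected string follows the change made in the demo, and the class name is the generator's default rather than something confirmed in the recording.

```java
package demo2;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

@QuarkusTest
class GreetingResourceTest {

    @Test
    void testHelloEndpoint() {
        // Continuous testing re-runs this whenever the resource or the test changes,
        // so the expected body must match the message returned by GreetingResource.
        given()
          .when().get("/hello")
          .then()
             .statusCode(200)
             .body(is("Hello DevNation"));
    }
}
```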
It lets you give your cache entries a lifespan. So imagine that usually you would keep the cache inside a map in your application, and once you do that, it's within your application, in the same memory. As a microservice, when you start to scale that, or you have multiple microservices, all of the instances would store that map locally because they're all running in different JVMs. What Data Grid does is give you the possibility to keep those key-value pairs on a distributed set of nodes, the data grid itself, which takes care of how it scales, how changes are handled, and what kind of events you need out of it. As soon as it gets distributed, you need to start to think about those things, but your application no longer needs to keep that data in its own memory. Obviously, there are going to be some use cases where you want to keep some of that hot cache locally; you can use a near cache for that, or you can just use an embedded cache as well. There are tons of features that Data Grid has for you to store your application's data in memory and scale with it. There are other features like transactions; a lot of caches don't support that, but Data Grid supports transactions. It supports querying and searching as well, et cetera, and cross-site replication, which we will see later today.

From the infra perspective, Data Grid is also pretty cool when it comes to how it scales out. On a Kubernetes environment, you have the Data Grid operator. The operator manages all the different aspects, for example encryption: just by providing some of the CR to the operator, it's going to set up that encryption for you with SSL/TLS, whatever you want, and if you don't want it, that's up to you. Whether you want to expose it as a route, as an ingress, or maybe using a load balancer, all of those things are provided to you through the infrastructure as well. It inherently scales out; it has a hashing algorithm that lets it understand how it's supposed to scale out. So there are multiple features on the infra and ops side as well.

Applications you would typically use this for are IoT applications, mobile applications, anywhere you see the need to store your data temporarily or even long-term, where you will gain read performance as well as write performance by doing it.

Like I said, it's a key-value store. The simple operations are put and get, and that's just how you interact with Red Hat Data Grid. When it comes to data types, it has scalar types; you can use text, numeric, binary, any form. You can upload files into it if you wanted to, and documents such as XML and JSON you can put in as well. The default is Protobuf; you use Protobuf to work with Data Grid. Of course, there are use cases where you might want to store session details, et cetera. Like in our example, when we have a cart, we're using a session there, where we store that cart in our Data Grid as well. Finally, the Java collections: you can use them too, as keys or values. So you have a breadth of different objects that you can work with at the same time.

Another interesting thing that Red Hat Data Grid can do for you is give you events.
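As an illustration of the put/get interaction just described, here is a minimal sketch using the Infinispan Hot Rod Java client (the upstream client Data Grid is based on); the host, port, credentials, and cache name are placeholders, not values from the demo.

```java
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class DataGridPutGet {
    public static void main(String[] args) {
        // Connect to a Data Grid / Infinispan server over the Hot Rod protocol.
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("datagrid.example.com").port(11222)
               .security().authentication()
                   .username("developer")
                   .password("changeme");

        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            RemoteCache<String, String> cache = cacheManager.getCache("carts.local");

            // The two basic operations: put a value under a key, then read it back.
            cache.put("cart-42", "{\"items\": 3}");
            System.out.println(cache.get("cart-42"));
        }
    }
}
```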
Any time there's an update, deletion, expiration, or insertion, the Data Grid server is able to listen for those events and pass them on to your application as well. It helps you, for example, when you have petabytes of data in your cache and you want something to happen within the cache: a new entry came in and you want to run a distributed server task over the entire data set. You could do that too. You can have a process doing real-time aggregation and analysis, and you can do change data capture use cases together with Debezium, being able to queue some of this data as soon as the events come in. You also have the possibility to query: basically, once your data is in there, you create a query, whether it's a simple query or an Ickle query, and you're able to listen to that query; as soon as the result of that query changes, you will be notified of those events. You are able to index the cache, and you can provide the indexes through the Protobuf schema as well, which lets you do searching with Lucene or Hibernate Search style mechanisms, and it of course gives you statistics when that happens too.

Security is one of the most critical features, I would say. It gives you SSL/TLS encryption, whether you're doing it towards the client, between the servers, or between cross-data-center regions, and it gives you role-based access control as well, so you can have multiple users with different roles operating on top of the cache.

There are other options as well where, let's say, you can load the entire data set from an underlying database. There are file stores, there are other data stores, and then there are SQL cache stores. Traditionally, you would write an application that aggregates all the information from a SQL query against the database and then puts it into the cache. Data Grid has a feature where you can actually provide the query in the Data Grid cache configuration, and it's going to pull that data and store it into the cache for you. You can define the expiration, lifetime, et cetera. That's some interesting stuff, but also, if the database goes down, the cache is still able to operate and your applications don't go down with it; from a resilience perspective, that's also quite good. It also gives you the possibility to set a lifespan on your cache, expiration, passivation, all those different features you can use with the cache as well.

Then there's the redundancy side, which I'm going to quickly demo in a bit, where you can basically replicate from one server to another or one cluster to another: the ability not just to do that within one cluster as a distributed or scattered cache, but also to replicate the same cache across different sites. You can define the cache and the schemas that you want to use there.

I mentioned scattered. In this case, when a cache is created, based on the hashing algorithm that Data Grid uses, your data is distributed across multiple nodes within the cluster. Orange, yellow, blue, and green over here are different data sets that are being scattered over the different nodes. If one node goes down, it's going to rebalance and redistribute.
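As a sketch of the event mechanism described above, a Hot Rod client can register a listener and react to entries being created or expiring; the class name and the printed messages below are illustrative, not from the demo.

```java
import org.infinispan.client.hotrod.annotation.ClientCacheEntryCreated;
import org.infinispan.client.hotrod.annotation.ClientCacheEntryExpired;
import org.infinispan.client.hotrod.annotation.ClientListener;
import org.infinispan.client.hotrod.event.ClientCacheEntryCreatedEvent;
import org.infinispan.client.hotrod.event.ClientCacheEntryExpiredEvent;

// Listener that receives server-side cache events over the Hot Rod connection.
@ClientListener
public class CartEventListener {

    @ClientCacheEntryCreated
    public void onCreated(ClientCacheEntryCreatedEvent<String> event) {
        System.out.println("New cart entry: " + event.getKey());
    }

    @ClientCacheEntryExpired
    public void onExpired(ClientCacheEntryExpiredEvent<String> event) {
        System.out.println("Cart entry expired: " + event.getKey());
    }
}

// Registered on a RemoteCache obtained from a RemoteCacheManager:
//   cache.addClientListener(new CartEventListener());
```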
Of course, redistribution has a cost, as those of you who know caching or have run it in production will know, but it has the possibility to do that, and you can either replicate to all the nodes or choose a scattered redistribution; that's up to you. You also have the possibility to keep data on the heap, which is what's traditionally done, but you can also have off-heap data within the cache, which lets you keep the metadata, data management, all of those things you don't really need, out of the same heap. So to optimize performance, you're able to do that as well. And then, like I said, you also have the possibility to store it externally in a database or use a file store, which is on disk. Depending on the use case, you could go with any of those options.

Let's get into the demo scenario that I have prepared for today. I have an inventory service, and this is a store, an eStore. The inventory service has a simple POJO; I'm using Quarkus and the Panache framework with the repository pattern here, but I'm just showing you the model. In the model, I have a simple item ID, a location, a quantity, and a link to that particular inventory item. That's what my basic inventory model looks like. Of course, there's a lot more code in there, but that's the basic model. The catalog is a catalog of all the products. It takes data from inventory; it calls the inventory service, and it also has the item ID, title, description, and quantity. All of this information needs to correlate somehow, because on the eStore you have to have both a catalog and an inventory. The catalog takes care of the actual data about the product. Then I have a cart, which has the total of the cart items, the shipping cost, some promotions, et cetera, all of these as part of the model, and then the different cart items. Basically, when we shop on the store, we get all the cumulative data; every time we do something, it goes into the cart as well. This is the total overview: I have an Angular app that calls these services to get the data or put the data into them, and that data gets cached over there as well.

Let's just go back; I'm going to switch here and try to get a bigger view. I have multiple services: the cart service, catalog service, inventory service, et cetera. At the moment, I want to work on the cart service, so I'm just going to open that one. In my cart service, I have something called the cart service implementation. It has a simple thing: a cart cache variable, which basically says, okay, I want to be able to use this carts.local cache. But, hey, do I actually have carts.local? That's a different question; we're going to take a look at that. Then, of course, it creates that remote cache on the cluster. In the resources, in my application.properties, I have set up that I'm going to talk to this server; this server is my caching server, with the host, my user, because I mentioned security and RBAC, and the password as well.

Let's quickly go back and check the OpenShift front end. Here I have my NYC project, and of course I've already deployed some of the components, like my eStore, which is here at the moment. So I have my eStore deployed, which is a simple store.
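The cart service implementation isn't shown in full in the recording, but with the Quarkus Infinispan client extension it would look roughly like the sketch below; the cache name carts.local matches the demo, while the class shape, method names, Cart type, and the property values in the comments are assumptions.

```java
import io.quarkus.infinispan.client.Remote;
import jakarta.enterprise.context.ApplicationScoped;
import org.infinispan.client.hotrod.RemoteCache;

// Connection details would live in src/main/resources/application.properties,
// along these lines (all values are placeholders):
//   quarkus.infinispan-client.hosts=datagrid.example.com:11222
//   quarkus.infinispan-client.username=developer
//   quarkus.infinispan-client.password=changeme
@ApplicationScoped
public class CartServiceImpl {

    // Client-side handle to the remote cache named "carts.local" on the Data Grid cluster.
    @Remote("carts.local")
    RemoteCache<String, Cart> cartCache;   // "Cart" stands in for the demo's cart model class

    public Cart getOrCreateCart(String cartId) {
        // RemoteCache implements ConcurrentMap, so the usual map operations apply.
        return cartCache.computeIfAbsent(cartId, Cart::new);
    }

    public void saveCart(String cartId, Cart cart) {
        cartCache.put(cartId, cart);
    }
}
```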
I'm not a front-end person, so you'll have to live with my skills here. Then, of course, I have the catalog service, which talks to a catalog database, and the inventory service with an inventory database. Then I have the cart service. And this one we can, for now, just get rid of: let's delete this deployment; this was our test app.

My Data Grid is installed over here. If I go in and look at my administrator view, it's deployed through the operator. In my operators, I have multiple instances, and the site that I'm working on is called NYC. NYC has two replicas and multiple resources. Here you would see that it has created the certificates. Sorry, one second. Okay, so here you go. There's a generated secret, there's security, there are all those different things, and there are the routes that we're using. In this case, I have an external route that I could use, so let's see if I can use that one. So here is my cache console; I'm already logged in, so it goes straight there. And what I'm going to do is create a cache. You have to zoom in on that one as well. Okay, sorry. There you go. It's called carts.local. It's distributed. It uses ProtoStream encoding, et cetera. No extra capabilities; I could use multiple capabilities, but we're just going to show you a simple cart here. Does it have a concurrency level? Yeah, sure. We could use heap storage. And then, of course, next. And here I have my carts.local.

Now that I have carts.local, what I'm going to do, since I have that configuration here and the password is hopefully the same as last time, is use my OpenShift extension. Last time, remember, I deployed through the browser; this time I'm going to deploy from my command line. So I'm basically telling the Quarkus Maven plugin: deploy this onto my environment. And it's going to build this, create my jar file, and deploy it directly into my namespace on Kubernetes. And now it starts with... Sorry, this sounds like a broken record, but could you also do a small bump up here? Okay. Hang on. Okay.

So if I go back to my developer view here, that's the thing. If I look at my cart service, you can see that there's a build happening, build number 23. It's being deployed at the moment from my console. The push is successful. It seems like it's already deployed, which is great, and it's going to replace the pod that's running; I have the new cart service. So if I go back here and call this, it's going to go through, and here you can see that I have received a response from my cart as well. Well, if you increase the size... Yes, I just realized that again, and I'm like, sorry, this is my mistake. But anyway, that's the developer side. We can go into the cache container: if you go into the cache container, into carts.local, I'm going to see that, yes, my cart is here. If I go back to my store and start adding items, I will quickly see that it has the data as well, because it's been written there too. So this is awesome. I mean, it's rebalancing; it has a very basic config; it's just a simple cache living on one server. It has metrics as well. But what I haven't shown you, and in respect of time, of course, I'm not going to set up a whole cross-site configuration again.
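The cache here is created by clicking through the Data Grid console, but the same thing can also be done from client code. Below is a minimal sketch using the Hot Rod administration API, assuming the built-in distributed cache template and placeholder connection details; it is not the method used in the demo.

```java
import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CreateCartCache {
    public static void main(String[] args) {
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("datagrid.example.com").port(11222)
               .security().authentication().username("developer").password("changeme");

        try (RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build())) {
            // Create the cache if it does not exist yet, based on a built-in
            // distributed cache template, and return a handle to it.
            RemoteCache<String, String> carts = cacheManager.administration()
                    .getOrCreateCache("carts.local", DefaultTemplate.DIST_SYNC);
            System.out.println("Cache ready: " + carts.getName());
        }
    }
}
```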
But here you can see that I have cluster membership: I have two nodes in the cluster, two replicas. But then, if I go back here, I also have another site, which is the London site. And in the London site, wow, okay. In the London site, if I go there and open the console again, I have carts as well, and you can see that some of the data is already there; that's because I've tested this before. So what I'm going to do now is change my cache to "carts" instead, which is a replicated cache, which will basically take all the data that we have on the NYC site and also replicate it to the London site. So in case we have a failback scenario or anything else, we would be able to handle that too. At this point, I've done nothing about fallbacks and failovers; I have simply changed the name of the cache. I'm going to deploy this again, and I'm going to move back quickly because I don't want you to have the small view of the debugger there.

I'm going to go back to NYC here, and on NYC we will see that our build is running; it's pending, and it's going to come up. And as soon as it comes up, we will have a different cache, and that cache is going to start replicating. If I go in, I can see in the logs again that, yes, it's being pushed, it's being built at the moment, it's creating the container, it's pushing my jar file, it's pushing the image into the registry. Awesome, it looks great. If I go back to my cart here, the new pod is already in place. Awesome. So now when I go back and hit this, I'm going to get a new cart, and let's just add some stuff to it to see that it's actually working like we said. So this is our London site, and this is our primary site, which is our New York site. If I look at the New York site, all those items have come into the cache. If I go into my London site, I will see that the same items have also come into that site. So that's awesome. Absolutely. Yes, yes. Oh, zoom in. Yes, perfect. Thank you. So here you can see that it has that as well.

Now, just to make things a little bit more fun, and bear with me here, I'm going to try a fallback. Because what you want is that, sure, you have a New York site, but you also have a London site, and if the New York site is failing, you want to be able to fall back and maybe write the data to the London site, because that's exactly what you might need in a production environment. So we're going to do that. I changed some code and went blazing fast, but basically I'm using the SmallRye Fault Tolerance extension, and I'm using the fallback method, basically saying, hey, randomly fail over every time you call the cart. And when that happens, there's a fallback service that I have defined, which is the carts backup service. So it's going to do that too. I'm going to deploy this, and hopefully it will work. It's deploying, and the same messages that we see when it's being pushed on OpenShift, we basically see in the same log console here: it's built on an OpenJDK 17 image stream, which we're using on OpenShift. OpenShift has this concept of image streams, where you can define a certain image stream with a container image and pass your binary into it. So it's going to create that container from the source that we are pushing in, and then of course push it into the image registry as well.
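The fallback wiring described here uses MicroProfile Fault Tolerance, which Quarkus provides through SmallRye Fault Tolerance. A minimal sketch could look like the following; the class and method names, the backup cache name, and the simulated random failure are assumptions based on the spoken description, not the demo's actual code.

```java
import io.quarkus.infinispan.client.Remote;
import jakarta.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.infinispan.client.hotrod.RemoteCache;

import java.util.concurrent.ThreadLocalRandom;

@ApplicationScoped
public class CartLookupService {

    @Remote("carts")          // primary (NYC) replicated cache
    RemoteCache<String, String> carts;

    @Remote("carts-backup")   // hypothetical cache name standing in for the backup/London path
    RemoteCache<String, String> cartsBackup;

    // If this method throws, SmallRye Fault Tolerance invokes the fallback method instead.
    @Fallback(fallbackMethod = "getCartFromBackup")
    public String getCart(String cartId) {
        // Simulate the "randomly fail" behaviour mentioned in the demo.
        if (ThreadLocalRandom.current().nextBoolean()) {
            throw new IllegalStateException("Simulated failure on the primary site");
        }
        return carts.get(cartId);
    }

    String getCartFromBackup(String cartId) {
        return cartsBackup.get(cartId);
    }
}
```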
So now if I go back here to my OpenShift console, my app is running. I think this is the new one, right? 44, yes. It's up. Okay. What did we... yes, perfect. So it was 45, not 44; that almost scared me. But now when I go back to my home page and start to add stuff, I'm just going to make a couple of calls. And if I go back here, I see that I have done something wrong. Hang on. I have, okay. So it's supposed to fall back, and it's going to fall back nicely, but of course I've done something wrong. We have two minutes left, so let me just quickly look here. And I think it's also a good time to take questions, because we're almost out of time. Well, at the moment the audience hasn't asked any questions just yet, so you can use some of your time to take some more time to explain stuff. So all good. Yes. Okay. Awesome.

So what I'll do is... so what we've seen so far, of course, is... there's one question coming in. Thank you, Navin. He asked, or she, I'm sorry, I'm not completely sure: is the cache two-way replicated when NYC is back online, and when we move back from London to NYC, will we get the data back in New York as well? Yes, you can definitely do that. You have the possibility to define that. You can have an active-active setup, which goes both ways. Or you can also go back, let's say, into your configuration, and in case you wanted to do it manually, you can start transferring state as well. You can also take a site offline to do that if you wanted to. So yes, there's the possibility to do that. You have the opportunity to define the topology that you want to use and the rules that you want to use for it as well. Yes. Cheers. Awesome.

So what we've done so far: we used an active-active setup, and we're able to cache that cart into it. But I strongly recommend that you go ahead and look at Red Hat Data Grid in more detail; this was a teaser, of course. It has a lot of different features when it comes to monitoring, observability, and the operator experience as well, where you're able to use it natively on top of Kubernetes. But I also understand that a lot of you might actually be using a Redis caching solution. In that case, Data Grid also has a tech preview at this moment where you can use the RESP protocol. This means that if you have a Redis client, you would be able to connect to Data Grid without even knowing that it's actually Data Grid working behind it. So from the operational experience you will still have Data Grid, but from the client experience you'll be able to use Redis. So in case you want to move, you don't need to worry too much: the basic Redis protocol should work there, of course not the administrative commands, but the general data operations, et cetera, all of that should work. And then again, of course, thank you for listening. I have my details here and I'm happy to engage and take your questions if there are any at this point.

So Kihoro asked, they joined a little bit late, how does Quarkus help to deploy to the cache cluster? I think it doesn't really matter that it's Quarkus, right? You can do it from any application? Well, yeah, you could do it from any application, but Quarkus definitely helps you, because with Quarkus, if you look at my model here, I have different options. Well, this part is commented out here, but I can define the cache directly from my code.
I have created the cache before, but then, let's say, my cache schema, my Protobuf schema, is also created just by the annotations here, which means that if I go back into my carts, into my data container here, I see the schema that was created. This was created by the Quarkus app when I started it. And when it comes to simple things, if I ran mvn quarkus:dev over here, what this would do, hopefully — it will probably throw some exceptions because I'm connected to the other properties — is that Dev Services would also spin up a local Data Grid for me. So if I went here into my Dev UI, I would be able to see that I also have the possibility to work directly with my cache there as well. So here's my Infinispan console, which is basically... Yes, zoom that in again. Yes, yes, it's coming, hang on.

So, Kiho asked a follow-on question: if I have an on-premise OpenShift, can I join the service too? Absolutely. With the right subscription for Red Hat Runtimes, you can deploy the operator on your on-premise OpenShift and then you go from there. Yeah, I'm running this on-prem; this is my own instance of OpenShift running. So pretty much, if you hit this URL, you'd be able to hit it right now. And yes, you can easily install the operators: you just go to the OperatorHub, search for Data Grid, and you'll have it there, and then you have the possibility to install it on your cluster as well. So yeah, perfect.

We'll have to wrap up. There's one final question that I will answer in the outro. So first of all, thank you, Shaf. It was a great presentation. Too bad we had to struggle a little bit with the fonts; I'll requisition a smaller monitor for you so we can avoid this in the future. As I said in the intro, and it's also still true in the outro: everything we've done here today will be uploaded to the Red Hat Developer YouTube channel, so you can watch it later. Shaf will share the presentation slides, and you can find his contact details as well. If there are any follow-up questions, please don't hesitate to either go to the general global chat or reach out directly to Shaf himself. He's already busy in chat, so maybe, Shaf, you can share your Twitter so that people can easily find you. Next up is the break. So we'll enter the break; hopefully you'll have time for a snack and to build up some of that energy. Please step away from the computer for a little while, and then we'll start the keynote: Burr Sutter is going to take us into a session on becoming the developer's developer. Plus, I don't know if there are still many spots available, but Shaf is also running a lab later today; if possible, try to see if you can join there. If not, there are four other content tracks happening right after the keynote. So don't go anywhere. I hope you had a wonderful time, and good luck with the rest of the conference.