So, hello everybody, my name is Edson Yanaga and here I have Burr Sutter too. We'll be presenting for you today Reactive Microservices with Vert.x. We're both Directors of Developer Experience at Red Hat, just in case you want to follow us. We talk a lot about DevOps, microservices, Java, domain-driven design, and all sorts of cool stuff, including Vert.x. Our Twitter handles are @burrsutter and @yanaga. And just in case you're wondering, I'm Brazilian. My grandparents are Japanese, but I was born and raised in Brazil, okay? But I took a long time to get here, 45 hours, so I was a bit tired, but now I'm almost okay. I'll be jet-lagged when I come back home too; I'm leaving on Saturday. And just to give you a few tips, we've just released a new book from O'Reilly, Migrating to Microservice Databases, in case you're wondering about microservices and how to deal with your data if you still have an old legacy relational database to deal with. In this book I try to cover a bit of these strategies: how you can try to split your data and, more importantly, how you integrate the data later. So if you want to get a free copy of the ebook, you just have to type this URL. Or if you go to developers.redhat.com slash book, you'll be able to get a list of the books and download them for free. Just in case you're going to Voxxed Days Singapore this Friday, we'll have a few physical copies of the book as giveaways. So if you pass by our booth, you can probably get a copy. In fact, I think I'm doing a signing too, Friday afternoon. So just in case you want a signed copy, I guess we'll have a few copies there. And I always like to start my sessions with this quote from Forbes, which means that now every company is a software company: you don't work for a bank, you work for a software company; you don't work for an industry, you work for a software company. Because software is changing the world; software is changing how people interact with each other.
Software is creating what we call this new digital economy. And I'm one of the people that used to think that economics had everything to do with money. But after studying behavioral economics, and later getting interested in behavioral psychology too, I realized that economics has nothing to do with money, but everything to do with how people interact with each other to take a system from one state to another. And people have motivations for taking a system from one state to another: those motivations might be love, hate, power, and, yes, money. So the digital economy has everything to do with how people interact with each other in this new world, which is ruled by software. Some great examples of these new companies from the digital economy: the largest car transportation company in the world owns no cars, which is Uber. The largest lodging company in the world owns no real estate, which is Airbnb. The largest online retailer in the world owns no stock, which is Alibaba. And the largest content network in the world produces no content, which is Facebook. What all of these companies have in common is that they only exist because of software. So software is changing the world, and we as software developers might think, and might say, that we are the most important people in this world from now on, because nothing, absolutely nothing, is going to happen in the next thousand years without a software developer behind it. So if you're going to find a solution for cancer or for poverty and hunger, it's going to be through software. And we're here today to talk about reactive programming, and people tend to mix up reactive systems and reactive programming. We're not going to talk about reactive systems, which are a new kind of system. When we discuss that kind of system, we mean systems that are supposed to be reliable, scalable, and supposed to have very strict times for answering a request.
So it's not in the sense of real-time systems, but they should be fast, reliable, self-healing, distributed, scalable, and have all of those kinds of aspects. What we're going to talk about is reactive programming, which you might think is just async programming. Yes, you use async programming for reactive programming too, but it's a bit more than that. And we're going to do that using Eclipse Vert.x, which is a project that a lot of Red Hat engineers work on, but it's a project hosted by the Eclipse Foundation. So if you type vertx.io, you'll be able to go to the project website and check it out. And I found it very interesting to be talking about Vert.x today because we just had a session about Groovy. One of the cool things: most of us are Java developers, and we can create Vert.x applications using Java, but we can also do that using JavaScript, Groovy, Ceylon, Scala, Kotlin... did I miss any of them? They might be adding another one right now, but all of them run on top of the JVM. So if you like Groovy, you can use Vert.x with Groovy; if you like Kotlin, you can use Vert.x with Kotlin. You have all of these different opportunities to use this new reactive programming toolkit for your applications and, better than that, all of them integrate with each other. So you can be writing your JavaScript applications with Vert.x, and they will communicate seamlessly with your Vert.x applications written in Java, in Kotlin, and everything else. We have some very cool demos that show some of these capabilities later, because I'm doing the dry talking here at the beginning, and Burr, who is our demo master, will be presenting some very cool demos of how you can apply Vert.x technology today. And what makes Vert.x so important and so cool for us? I have to discuss this.
Have you ever noticed that even though in the past 10 years network technology has evolved so much that we're talking about 40 gigabits per second, 100 gigabits per second, or even more over fiber optics, the web is still slow? We still take a long time to load things, to load applications, even though we thought our web applications would become ever more responsive. Responsiveness didn't improve by the same amount as network technology did. So what happened, or what didn't happen, in the past few years that kept our applications from evolving? One of the answers is that we're still using the HTTP protocol, and most websites are still on HTTP/1.1. HTTP/2 will probably solve many of these issues, but HTTP/2 alone is not enough, because in the past few years we've become overwhelmed by the number of requests our applications have to handle every day. We're used to a traditional threading model for developing applications. You also have to consider that it's not only the number of users issuing requests to our backend services that has increased; now we have a lot of microservices, and a lot of IoT devices that are increasing exponentially the number of requests hitting our backend servers. So we have internal traffic and outside traffic hitting the same endpoints with the same technologies and the same threading model we had 20 years ago. How can we change that? The Node.js guys had a great idea when they created the Node.js architecture with what we call the event loop, where everything is asynchronous, et cetera. But people who are used to the JavaScript world and use Node.js for backend applications realized that the problem with the event loop is that you have machines with 16 cores these days and you only use one of them, because it's all single-threaded.
So Vert.x is built on top of this abstraction: Vert.x also uses the event loop, where everything is asynchronous, but Vert.x uses what we call a multi-reactor pattern. We don't have a single event loop per machine; we have an event loop per core. So if you have 16 cores in your machine, you can have 16 different event loops. That's the default behavior; of course you can change how many event loops you have. You have one thread per event loop, and you can have as many event loops as you want, but by default it's set to the number of cores you have in your machine. But you might be thinking: okay, I have this new event loop model, and Node.js was created for that, so there everything is already asynchronous; but in the Java world, most of the libraries were designed for the traditional threading model, where everything is synchronous. How do I bridge the gap between these two different worlds? And the answer: Vert.x wouldn't be useful if we weren't allowed to use these old, synchronous libraries within this new asynchronous world of the event loop. So you can mix and match. For example, you need to issue a database request using JDBC, but JDBC is synchronous. Is my application going to be blocked? I only have a single thread; is it going to be blocked while I wait for the database? The answer is no, because Vert.x has what we call a hybrid threading model, where you can use both kinds of libraries and not get blocked by that. We have a demo to show that, too. And of course, if you have a distributed application that is supposed to be scalable and reliable, Vert.x also bundles an event bus by default, so you can write a message to the event bus and all of the instances of your application that are participating in the event bus will be able to pick it up and handle the request.
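That hybrid threading model can be sketched like this. This is a minimal sketch assuming Vert.x 3 core on the classpath; the `slowJdbcQuery` method is a hypothetical stand-in for a real blocking call such as a JDBC query:

```java
import io.vertx.core.AbstractVerticle;

public class HybridVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // Offload the blocking call to a worker thread so the event loop
    // stays free to handle other events in the meantime.
    vertx.executeBlocking(future -> {
      // Runs on a worker thread; it is safe to block here.
      String rows = slowJdbcQuery();
      future.complete(rows);
    }, res -> {
      // Back on the event loop once the blocking work is done.
      System.out.println("query returned: " + res.result());
    });
  }

  // Hypothetical blocking operation standing in for real JDBC code.
  private String slowJdbcQuery() {
    try { Thread.sleep(100); } catch (InterruptedException ignored) { }
    return "42 rows";
  }
}
```

Everything outside `executeBlocking` keeps running on the event loop; only the blocking body is moved to Vert.x's worker pool.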
So you can have a very reasonable response time for all of the messages being propagated on the bus. And this event bus propagates not only across the backend instances of your Vert.x application; if you're using Vert.x with JavaScript, it doesn't have to be a backend application. If you're using Vert.x in the browser, the event bus that you have in the browser communicates with the event bus that you have in your backend services. So I can have a JavaScript application, do something, have that message propagated to the backend, where it can be handled by any instance and processed in parallel, and then get the response sent back to the JavaScript application. It's bidirectional and shared by default. That's one of the cool things we have in Vert.x, too. And the basic notion behind this whole Vert.x framework for developing applications is the verticle. A verticle is the unit that is going to be processed by our event loop. And I'm going to show you very quickly how we can create a very simple Vert.x application. So let me open here: I have a ready-to-go project. We're going to create a new verticle, which I'll call MainVerticle. To create one, you just have to extend AbstractVerticle. Then I create a main method; that's how "complicated" it is to create a Vert.x application. I just have to call Vertx.vertx() and tell it to deploy a verticle, which is going to be an instance of this MainVerticle. That's it. I have a Vert.x application already running, but it's not that useful because it's doing nothing so far. So let's try to create an HTTP server. Well, thank you very much. And by the way, I don't know if I mentioned it, but it's a very nice opportunity, because just in this room we have three different Java Champions. By championing date: we have Burr, we have myself, and we have Emmanuel.
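The verticle just typed looks roughly like this. A sketch assuming Vert.x 3 core as the only dependency; the class name MainVerticle follows the talk:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// The basic deployment unit in Vert.x is the verticle.
public class MainVerticle extends AbstractVerticle {

  public static void main(String[] args) {
    // Create a Vert.x instance and deploy an instance of this verticle.
    // That's the whole application; it just does nothing yet.
    Vertx.vertx().deployVerticle(new MainVerticle());
  }
}
```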
We have three different Java Champions from Red Hat here today. So I think it's very nice to have these two great guys and myself together in this room. So to start my application, my verticle, I just have to override the start method. And if I want to create an endpoint, first I need a way to know that a specific request used this URL, so do this and do that, et cetera. For that we need a router, and I pass it the Vert.x instance. So I have everything I need to route the requests in my verticle. Then I need to create an HTTP server, which I just have to type: vertx, oops, createHttpServer, requestHandler, router, accept, and listen on port 8080. Okay, that should be enough for me to have a complete HTTP server. You have a lot of built-in libraries: networking and HTTP/1.1 and HTTP/2 are already built in, and the event bus is already built in too. So you just do that and you have a full-featured HTTP server. If I run it, it's already running. If I go to my browser, localhost 8080, it's running, but it's doing nothing because I didn't tell it to do anything. So let's try to create a resource. And one of the cool things that I like: instead of having to type a lot of different annotations and configure or deploy a new class, if you're used to Node.js programming, you can do all of that using a cool DSL. You just keep typing, like you would do on your command line. With Java 8, you can use lambdas for the simple stuff. So if I say router.get, let's say for the root URL, I want you to handle it how? Well, maybe I want to get the response and reply with just "Hello Singapore". If I do that and run it again, by the time it finishes reloading it's already "Hello Singapore". You can see it's very fast to reload and rerun the application, because it's very lightweight.
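Those few lines look roughly like this. A sketch assuming vertx-core and vertx-web on the classpath; `router::accept` is the Vert.x 3.x idiom the speaker dictates:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

public class HelloVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // The router decides which handler serves which URL.
    Router router = Router.router(vertx);

    // A lambda handler for GET on the root URL.
    router.get("/").handler(ctx -> ctx.response().end("Hello Singapore"));

    vertx.createHttpServer()
         .requestHandler(router::accept)  // Vert.x 3.x style
         .listen(8080);
  }
}
```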
And if I want to do anything else, for example give a JSON response, maybe I just want to create a class. I know in Groovy it would be much simpler to create a Java bean. But I have a private String name and a private int age; I create a constructor for both of them, and I create getters and setters, just to be compliant with the Java bean specification. Now maybe I come here and type router.get, and say that on the person endpoint I want my handler to do what? I want the response to use Json.encodePrettily, and I want to create a new Person. So I'm going to create a new Person, "Edson", and I'm 38. I'm not as young as this guy here, okay? So I'm just encoding that and replying with it. If I rerun and go to localhost slash person, I have a JSON response. So it's pretty straightforward. You have DSLs for everything. That's what I particularly like about Vert.x: you just keep hitting the dot key and code completion, and you know what you have to do. Everything is fluent; everything is pretty obvious when you're programming Vert.x. And there's not really much magic here: you just type plain Java code, very few lines of code, and you have the replies that you need. And how big is that? Because if you create a Java EE application or a Spring Boot application, you can expect your fat JAR to be like several hundred megabytes, at least when you add all of the dependencies. Well, Vert.x has a lot built in; isn't it expected to be big too? Well, maybe not that much. With just the HTTP server and the event bus as the dependencies, all the Vert.x core stuff embedded, I'll just package the application and see how big it is. Okay, let's go to target. And just in case you're wondering how big a Vert.x application is: it's six megabytes.
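The JSON endpoint just described can be sketched like this. Assumptions: Vert.x 3 with vertx-web; Jackson, which Vert.x bundles, handles the bean serialization behind Json.encodePrettily:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.json.Json;
import io.vertx.ext.web.Router;

public class PersonVerticle extends AbstractVerticle {

  // A plain Java bean so the bundled Jackson mapper can serialize it.
  public static class Person {
    private String name;
    private int age;

    public Person(String name, int age) { this.name = name; this.age = age; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }
  }

  @Override
  public void start() {
    Router router = Router.router(vertx);

    // GET /person replies with a pretty-printed JSON body.
    router.get("/person").handler(ctx ->
        ctx.response()
           .putHeader("Content-Type", "application/json")
           .end(Json.encodePrettily(new Person("Edson", 38))));

    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}
```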
With the full HTTP server and the distributed event bus, which can communicate across different languages, browser and backend: six megabytes. And that's one of the bigger ones. When we have a very, very big Vert.x application, you can get to like 11 megabytes. That's what we could call a monolithic Vert.x application, because at 11 megabytes it's a very big Vert.x application. "So how many threads is the web server?" The web server? Yeah, one thread. In fact, one thread per event loop, defaulting to the number of cores that you have in your machine. So if you run on a single core, it has only one thread. You might be thinking: how can it handle multiple different requests with just one thread? And you're going to show that, right? Right. Okay, Burr has the demo that answers that, because the fact that it has only one thread and is completely async enables Vert.x to simultaneously process many more requests than a traditional HTTP server like Tomcat or Undertow or any other kind of web server. Because how does the traditional threading model work? You create a thread pool, and since threads are an expensive resource that take some time to create, you create, say, 200 threads in your thread pool, and when you get a request, you allocate one of the threads to process it: one thread per request, and so on. If you have too many simultaneous requests, say 250 requests at the same time, what is going to happen? You have 200 threads; the 200 threads get allocated and the other 50 requests are put on hold. They have to wait until one of the threads finishes its work and the freed thread can be allocated to a waiting request. That limits the potential for scalability very much. So how does Vert.x handle that? It has a single thread, and all of the processing is asynchronous, so it never gets blocked.
In fact, you should never block it. If you have a blocking operation, like a JDBC connection, you should use the blocking API of Vert.x to say: this is a blocking request, I want you to process it using a traditional threading model. But everything else should be processed in the single-threaded model using asynchronous processing. Maybe all of this is kind of abstract, but it's much cooler to see it in practice. That's why I'm going to call Burr to show how things really work in practice. Let's show you a bunch of demos. Yeah. Let's talk about that. We've got to switch the microphones over fast. Okay, I'll let you pin that on there. I'm going to be using my phone to actually get to the public internet, and if you guys have your phones, I'm going to need your help in a little bit. In other words, if you guys don't participate in the demo, it's not a fun demo. Okay. Let's see, we've got screens. Okay. So everything Edson just said is specifically what we're about to show you, but we're actually going to go beyond that. You've now seen the basics of what it takes to set up a Vert.x application, and it is that hard: you basically have a main method, you include one thing in your pom.xml, and you're off and running. Now, as you get more sophisticated, you can add more things to your pom.xml and build applications that look more like this. But let's start off with something easy. This is a little Spring Boot application we have, to kind of make a point about threading. So it's Spring Boot based on Tomcat; it uses the standard enterprise Java model of doing things. In this case, it has 100 threads available to it. If I have my web application... let's just come over here. By the way, there's a free book on this topic up at developers.redhat.com. Let's do this. Let's bring up this one. So here's my little Spring Boot application.
I'm going to come right here and... well, this window is so small. Let's see if I can make things a little bit bigger. But right here, you can see a standard Spring MVC application in Spring Boot. Now I'm going to throw 200 concurrent requests at it. Watch what happens to this application. So I'm still trying to make requests, and now I'm locked up. Basically, there are 100 threads in the thread pool and 200 requests now, some of which are in the queue, and the user at the front end is going to have to wait until there are available resources; you can see the browser essentially just waits. So this concept is what we've been living with for the last 20 years of Java. This is not new; you might just not have thought about it. And you might not have thought about the fact that there was actually a queue involved, because that's what happens in this case if you have more requests than you have threads for. And if you think I'm picking on Spring, I'm not: here's WildFly. I come from the JBoss team of old; this is our application server, and it works the exact same way. So let's get that guy up here. There it is, up and running. Same thing as before: if I throw in too many requests, it's just going to basically lock up the browser, and the user just has to wait and wait and wait. Now, there's no thread that's asleep here; it's just a long-running business process. It's calculating pi. So in this case, if you could listen to my machine, my CPUs are going nuts right now because it's calculating something fairly heavy. In the case of a Vert.x application, though, you basically offload the request, if you will, to a background thread. And you do that without any real work. So there's my Vert.x application. And then we're going to basically throw in, again, a bunch of requests.
And you'll notice that the system continues to respond the whole time it's running all those crazy requests, because the foreground thread, the event loop, is never blocked. If you do block it, you will get an exception from Vert.x saying: sorry, you spent too much time on the event loop, stop. It's real simple. So we basically make sure you cannot block the foreground thread, and everything can be responded to. The whole concept here of being responsive is the key attribute of any reactive system. If you read the Reactive Manifesto, the number one rule is: be responsive. You've got to be responsive in the face of overwhelming load, and you've got to be responsive in the face of failure. And of course, we use Kubernetes as our ultimate backplane for all of this. Mostly what Edson and I focus on is Kubernetes, or OpenShift as an example, but here we're talking about Vert.x, so we're not going to spend too much time on the Kubernetes aspects. But let's jump into another demonstration. You saw the web application server already. What I'm going to do is have you guys hit it on Amazon. I'm running it here. You basically run an application like this: vertx run. Actually, I forgot to do something; I'm going to get denied because I'm running on port 80. So you're going to go to this URL: web.burr.red. There we go. Go to web.burr.red and you can put in hello and your name. And then you are now connected to my server. I just stood up a web server right there on the internet that you can access from your phone. That's how "hard" it is. We work really hard in Vert.x land so that some really interesting things get done super fast. This is just a little web application that I set up and am running. Now, this is going to get a little bit more interesting. Did you guys hit that? web.burr.red, right? Let me make this a little bit bigger; it's a little bit hard to see, maybe. It's just a VM, right?
I'm SSH'd into it now, and I SCP'd it up from the local machine. That's all. It's just a Java file, because we actually treat all the programming languages of Vert.x like dynamic languages, including Java. So Java compiles dynamically in this case and runs; Groovy compiles dynamically and runs. It doesn't really matter; we treat them all the same way. "Yes? Can you show us how much memory?" RAM for this particular component? I'm not sure if I can. Let's see here. It's up and running, but I'm curious myself. So, SSH to web.burr.red. Let's see, can we figure out how much RAM is associated with that? And you can see it slowed again. Okay, whoops, it helps if you put the p in front of the s in ps. All right, so there is that guy right there. And how do you ask for the memory associated with it? Okay, so I'm looking at top as well; I'm curious to see how much CPU has been used. It doesn't really seem to be much CPU being used there. ps -efl? Oh, -efl, okay, sorry. Whether that's pages or kilobytes... Where is it in here? This one? Yeah. So whatever that translates to. Is that kilobytes or pages? I can't remember. That should be kilobytes. Okay, but let's try this next. All right, so if you have your phone still ready, I want to show you the event bus concept. I'm going to basically bring up the event bus. Same idea as before. The nice thing about this one is it actually shows all the different languages. So come back here and go to web.burr.red, okay? The concept of the event bus is super straightforward. It basically creates a server-side cluster across all the JVMs, which communicate with each other through the lightweight event bus. And so you always have the event bus available to you. And it's just a simple pub/sub model or request-reply model; it works either way.
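In Java, that pub/sub versus request-reply distinction is just two methods on the event bus. A sketch assuming Vert.x 3 core; the address name `my.feed` is illustrative, echoing the demo's channel:

```java
import io.vertx.core.AbstractVerticle;

public class FeedVerticle extends AbstractVerticle {

  @Override
  public void start() {
    // Register a consumer: any verticle on the (possibly clustered)
    // event bus listening on this address receives the messages.
    vertx.eventBus().consumer("my.feed", msg -> {
      System.out.println("received: " + msg.body());
      msg.reply("pong"); // only matters for request-reply senders
    });

    // publish: pub/sub, delivered to every consumer on the address.
    vertx.eventBus().publish("my.feed", "hello everyone");

    // send: point-to-point request-reply, delivered to one consumer.
    vertx.eventBus().send("my.feed", "ping",
        reply -> System.out.println("reply: " + reply.result().body()));
  }
}
```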
And you can push messages to the browser super easily; this takes almost no lines of code. Okay, so in this case, the server is pushing to your phone. Anyone have it on their phone yet? Okay. So what I can do now is, if I jump back on that server... where am I here? I can actually add other members to this cluster. So let's add the Ruby publisher. Let me SSH to web.burr.red. And unfortunately, my cell phone is not making a particularly good router, but I'm getting in there. All right. Let's run the Ruby publisher. And then you guys should see the Ruby publisher by now, and we should see the Java publisher coming online. So if you watch, basically we have all these messages, JavaScript, Ruby, and the server, all coming through. That concept of the event bus takes almost no lines of code. If I come over here now, let's go look at this. We have JavaScript also; so, my consumer.js. Here's what that looks like from a JavaScript standpoint; that's the consumer side. The publisher is equally simplistic. The nice thing is the API is available to all of them. Now, this event bus concept is incredibly powerful, and that's why we talk about it a lot in a Vert.x presentation. This is a simple use case, just a push use case. By default, it will use WebSockets if it can find them, and it'll fall back if it can't; it's using SockJS. And the server side of this, just to show you what the code looks like... where is my web server here? All right, we use a bridge, and you basically determine, like a firewall, which inbound and outbound channels are permitted. In this case, I basically say: make the outbound available on this thing called my feed. And therefore everybody is a peer on this network: I can consume and produce all the way out to the browser and back again. Also Node.js applications; those are part of this too. But again, this is not that interesting.
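The bridge configuration just described can be sketched like this. Assumptions: Vert.x 3 with vertx-web; the address `my.feed` and the periodic publisher are illustrative, matching the talk's description rather than the actual demo source:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.handler.sockjs.BridgeOptions;
import io.vertx.ext.web.handler.sockjs.PermittedOptions;
import io.vertx.ext.web.handler.sockjs.SockJSHandler;

public class BridgeVerticle extends AbstractVerticle {

  @Override
  public void start() {
    Router router = Router.router(vertx);

    // Like a firewall: only the listed addresses cross between
    // the server-side event bus and the browser.
    BridgeOptions options = new BridgeOptions()
        .addOutboundPermitted(new PermittedOptions().setAddress("my.feed"));

    SockJSHandler sockJSHandler = SockJSHandler.create(vertx);
    sockJSHandler.bridge(options);
    router.route("/eventbus/*").handler(sockJSHandler);

    // Publish a message every second; every bridged browser sees it.
    vertx.setPeriodic(1000, t ->
        vertx.eventBus().publish("my.feed", "hello from the server"));

    vertx.createHttpServer().requestHandler(router::accept).listen(8080);
  }
}
```

On the browser side, the vertx-eventbus.js client connects to `/eventbus` over SockJS and registers handlers on the same addresses.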
Let me show you something a little bit more interesting. Where'd it go? What did I do with that application? There we go. Okay, let's bring this guy down; shut these guys down. Sometimes you have to worry about job processes sticking around out here on my little server; it's not a very big server that I'm running. Okay, some of you guys need to go away. And funny enough, jps doesn't find these guys half the time. All right, ps -jf. Did we get rid of them all? All right, they're all gone. Okay, so let's try this one. So, sudo... this is another example of a fat JAR, by the way. All right, what is it, a07? So here's the fat JAR we built with mvn package, a six-meg fat JAR like you saw earlier. And then I'm going to sudo java -jar that a07 jar. Okay, let's bring it up. Now connect; you're going to want to refresh again to connect to this guy. And there we go, I've got a bunch of people connecting; I have a little message that says socket connected. And what it's going to do is give you a little user interface that looks like this one. Let me make this a little smaller. Okay, this is a finger-painting game. So now we're going to be bidirectional, right? We're actually going to have you paint pictures for me. And now I've got my dashboard up. There we go, you guys are painting away. So basically, each stroke that you guys add to the screen... I can put my name here, right? All right. So as each stroke falls from your phone, through whatever 3G connection you have, up to my little server, just a tiny little server I'm running right now, through the event bus, it's serializing the HTML5 canvas and pushing it out to my dashboard. Okay. Now, again, I think there are four lines of server-side code here; all the real code is actually the ugly JavaScript I hacked together to make the finger-painting part work. Okay.
Because we basically take the HTML5 canvas, serialize it through the event bus from the browser to the server, and back out to the browser, and that's how you get all this up here. These are things you can't really do, or even envision, in a traditional enterprise Java world, right? And again, I come from the traditional JBoss world, things like that. That's why I'm showing these things, because you're like: holy crap, that's a very different way of thinking. Okay. And if you refresh on your mobile browser, you actually get another palette and can start another drawing. That's a different issue. Now, this is still not that interesting; let me show you the interesting one now. Okay, you got your phone ready? I'm just going to leave that up, because I actually have a different server now. Let's actually go over here. You want to go to game.burr.red. And let me pull that up. Okay, it's going to take a little while to load on your phone; this is kind of a heavy-duty JavaScript application, in this case. This game actually performs at 60 frames per second. So, play game. Start game. All right, this is essentially Fruit Ninja for your browser. Okay. As you pop balloons, you'll see your score showing up there in real time on my dashboard, as you guys score by hitting balloons. I can come over here and hit balloons too. All right. Now, here's what's crazy about this. This is on a slightly bigger server. Every balloon pop... "The Wi-Fi's not working." What's not working? Oh, the website's connected because of the internal network. Oh, I'm running over my phone. If you're on your phone, you should be okay, right? "Where's the Wi-Fi?" Oh, the Wi-Fi doesn't work? Well, you can't win the game then. Look at these guys. All right, so these folks are all playing the game and scoring points.
But here, you have to understand, this game is actually written in Groovy, by the way; Groovy is the back end. But here's kind of the cool thing about this game. Oops, I shouldn't actually move all these windows around too much. Let's go here. Okay, I can come in here and make changes to the game. Like, right now you guys are scoring too many points. I can change the opacity, change the size, change the speed. Okay, and we can make this a lot harder; change the background and update. There we go. The game was too easy; we made it harder. Oh, okay, I can come up here and make really big balloons, really slow, and really bright, enabled. We can make the red balloons worth 10, right? Blue balloons 15. We can also change the color again. And go. Okay. And then we also have this concept of the golden snitch. This is something the team added when we worked on this; they're like: hey, Burr, we're going to add you to the game. Okay. And you can see that every interaction you guys make is an interaction with my server, which then updates the dashboard. Now, here's what you don't see happening in the background: there's actually a full Java EE application calculating the achievements in the background. Vert.x is the edge service, the gateway, if you will, that handles all the asynchronous, real-time interaction with your phones. It then, in turn, asynchronously calls a traditional Java EE app that runs Drools, if you're familiar with our business rule engine. It's running a spreadsheet to calculate those business rules, right? Storing them in a database and then coming back up and updating the dashboard. So your balloon pops are probably slightly delayed from what you see here on this dashboard, but you hardly even notice, because it's all running in basically three JVMs on a single server right now. Okay.
And you probably think, well, that's super cool, but let's try this. OK. All right, boom. What? You think we're done? No, we can go back to playing again. OK, now we'll be done. Game over. So I'm curious: there's actually a bug in the game. Did anyone stay in the game? Are you still in the game right now? Yeah. OK, because there is a bug. I have to work on the game-over logic a little bit, because the pause logic goes to everybody, but the game-over logic doesn't go to everybody, depending on the connectivity you have. Because if you're paying close attention, your phone's actually connecting and disconnecting a lot. We just have a little bit of JavaScript in there that says, oh, I lost the connection, reconnect. And we're pretty smart about making sure the game configuration gets back out to you real fast, except for the game-over event. So I've got to go spend some time looking at that one. But yes, sir? Is this demo available to download? What's super funny is that when we were building this game, we ended up spending way too much time testing it, right? So actually, everything you see here is open source. What's that? Is the code available from the repo? Oh, yeah. If you go to my GitHub repo, you'll find it. This is a complicated piece of code, I'm not going to kid you. Who's the winner? Warlock, something? OK. Hold on. What? Spice Warlock. Spice Warlock. All right, but let me bring up this last screen here. Nope, not that one. Where'd this thing go? Right here. Just to kind of put things in context, when I say this is a complicated game, this is what it's doing, all right? So you have several repositories representing the game engine. Like I said, this is mostly written in Groovy as an example, because we can use multiple languages in a Vert.x system. That's what's handling all the interactions with all the user interfaces.
There's an Achievement service written in Java, and then, of course, there's a traditional Java application using JAX-RS and traditional Java EE that's doing the calculations of what you achieved. And I did keep the database in memory in this case to make it perform well on a nice little tiny server. All right? But you guys were just playing the game, and, oh, I messed up. I forgot to show you something else that was important about that game. We need to be done, I know, you guys have to go home. But let's see here. Game.bird.red. There's a scoreboard too. And let's see what it says. You had 9,000 transactions pushed into our system right at that point. And these are real transactions, I'm not kidding, right? It really runs through the whole back-end system, hits the database, everything. It's just like any other business transaction. So this is the power of Vert.x. Hopefully this kind of makes the point. You can't do this with traditional technology unless you work really hard, and in some cases it takes almost no lines of code from a Vert.x standpoint. Now you're probably thinking, well, I don't really build games. But look at this code here; I'll just give you one last use case that Vert.x is ideally suited for. By the way, I have an IoT demo too; I do temperature streaming into this also. If you have a scenario where you're reading from an API, getting an event stream from that API, using those events to call another API, then using that data to call yet another API, Vert.x fully implements RxJava. We're committed to RxJava as a programming model specifically. So you have the callbacks, you have futures and composite futures, and those are Vert.x futures, not Java futures. Java futures don't quite work here; they're blocking, not async. So: Vert.x futures, Vert.x composite futures, and then RxJava.
And in this case, you can see it's really hitting the GitHub API, getting more data from the GitHub API, and getting more data again, zipping it all into a composite JSON and shooting it back to the user who is waiting for us. So if you're dealing with a microservices architecture where you have to write edge services that respond quickly but aggregate data from multiple services, this is a great solution, and you can use just this one feature. It's a single jar file, including your app. There's not much else like it. But I do have stickers, by the way; I even have enough stickers for more than our top ten winners. And if you guys have any other questions, I know we need to let you go, but I'm willing to stay here later. But where'd this go? Congratulations, by the way, if you won our game. Any specific questions? Where's the source code? Where's the what? Source code. Source code? You probably should just email me, because it's all out here on GitHub, but I have a lot of repos. And this is a lot of repos. So... Which one is the Vert.x game server? Yeah, let me see. There's Scoreboard, there's Leaderboard, there's the User Interfaces, the Mobile App Admin, the Mobile App itself (this is actually the balloon popping), the Achievement service, the Score service, the Game server, right? So there's the Game server itself. They're all out there in my GitHub. And I do have a long document that describes how to set it all up on Amazon if you want to. Yes, sir? How does it handle thread-local variables? Thread-local variables? You know, I haven't looked at thread-local variables, but it's only one thread, okay? So you've only got one thread. They work on the blocking API, but not on the event loop. And you're never supposed to block anyway. You only block when you know you have very dire circumstances. That balloon game is a good example of never blocking, right?
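The call-an-API, then-another, then-zip pattern above can be sketched with plain JDK `CompletableFuture` composition (Vert.x has its own futures and RxJava bindings for this; the `fetch` helper here is a made-up stand-in for a real non-blocking HTTP call):

```java
import java.util.concurrent.CompletableFuture;

// Edge-service aggregation sketch: fire three "API calls" concurrently,
// then zip the results into one response for the waiting client.
public class AggregateExample {
    // Stand-in for a non-blocking remote call that completes later.
    static CompletableFuture<String> fetch(String what) {
        return CompletableFuture.supplyAsync(() -> what + "-data");
    }

    public static String aggregate() {
        CompletableFuture<String> user = fetch("user");
        CompletableFuture<String> repos = fetch("repos");
        CompletableFuture<String> followers = fetch("followers");
        // Combine all three when the last one finishes; no thread sits
        // blocked while the "requests" are in flight.
        return user
            .thenCombine(repos, (u, r) -> u + "," + r)
            .thenCombine(followers, (ur, f) -> "{" + ur + "," + f + "}")
            .join();
    }

    public static void main(String[] args) {
        System.out.println(aggregate()); // {user-data,repos-data,followers-data}
    }
}
```

The point of the talk's example is the same: the composition is declarative, and the client waits only as long as the slowest upstream call, not the sum of all of them.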
That's the only way you get to that level of scale with such a simple little system. So does that mean your web server only needs one logical processor? Yeah. Keep in mind this technology was built at VMware initially, right? So Tim Fox was working at VMware. He had worked for Red Hat, went to VMware onto what became the Pivotal team, and built this over there. He then came back to Red Hat. VMware donated it to Eclipse, right? And Red Hat now employs the engineers who work on the Vert.x project, as an example. And when Tim did this, he basically looked at what was happening in the Node.js ecosystem, and Node.js works on a single processor, right? It was a single-threaded system also. And everybody was like, oh my God, Node.js outscales everything we've ever seen before. So Tim basically said, I can outscale that, and he did, right? And so you'll see all these benchmarks of Vert.x versus Node.js as an example, because he wanted to prove that the JVM can handle that workload too. And it's just the event-loop magic. Believe it or not, single threads are faster than multiple threads, but only if treated very carefully. If you look at some of the Netflix performance engineers' work, they always benchmark their stuff against Vert.x, because they want to know how fast their system is; it's always the baseline. And if you go to the TechEmpower benchmarks, you can just look for the Vert.x benchmarks online. TechEmpower actually benchmarks all the web frameworks, including Netty and Spring, and there are like 45 of them. There's Python and everything else, C++. And so Vert.x and Undertow, which are the things we work on (this is actually based on Netty), actually do very well. And Node.js often ends up somewhere around 12th position, which is kind of funny. Node does well in other categories, though. But like I said, I did bring some stickers. You guys want some stickers?
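The "single thread, treated very carefully" idea is just an event loop: one thread draining a queue of short, non-blocking tasks, so handlers need no locks and pay no contention. A bare-bones sketch in plain Java (illustrative only, nothing like Vert.x's actual internals, which sit on top of Netty):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Bare-bones event loop: one thread drains a queue of tasks.
// Handlers need no locks because they all run on this single thread.
public class TinyEventLoop implements Runnable {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();
    private volatile boolean running = true;

    public void submit(Runnable task) { queue.add(task); }
    public void stop() { submit(() -> running = false); } // stop is just another event

    @Override public void run() {
        while (running) {
            try {
                queue.take().run(); // each task must be short and non-blocking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        TinyEventLoop loop = new TinyEventLoop();
        Thread t = new Thread(loop, "event-loop");
        t.start();
        AtomicInteger counter = new AtomicInteger();
        for (int i = 0; i < 1000; i++) loop.submit(counter::incrementAndGet);
        loop.stop();
        t.join();
        System.out.println("handled " + counter.get() + " events on one thread");
    }
}
```

The corollary, which comes up in the Q&A below, is that any task that does block (JDBC, heavy computation) must be pushed off this thread, or the whole queue stalls.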
And do you guys have any other questions? Yes, sir? When you're doing the worker model, you have your main thread, and then, say, when you're talking to JDBC, presumably there's another thread pool doing those blocking calls. Now the question is, how do you make sure that... But we also have async database drivers. Okay, yeah, even better. Let's talk about that, right? So in that case, how do you make sure the operating system scheduler won't put the event loop to sleep and go off serving the rest of the batch of thread pools, right? Because worst case, you can always push those jobs off to other whole servers if you need to. One reason the event bus exists is so you can literally distribute this across a network. So if, let's say, I deploy on a single node, won't I run into a very tricky situation once the load gets high enough, right? Yeah, so keep in mind, some people always ask us, what is our management philosophy? How do you spread this across a cloud? Kubernetes is our answer to that, right? That's the world we live in with the Google team. Red Hat and Google work on Kubernetes, right? And we call that OpenShift specifically. But that's the management backplane. So if you wanted to scatter this across 25 different servers and have it balanced across those 25 servers, you would use the technology you see here plus Kubernetes to manage those pods. So that's one answer to your question, but I'm not sure if it fully answers it. The other part is, let's say I'm doing some sort of heavy number crunching; would that have any effect on the...? Oh, absolutely, because the first demo I showed you is specifically calculating Pi, and it takes about eight seconds on my computer. That's where the wait time is.
And so specifically, you don't do that on the event loop, right? You do it in the background, on what's called a worker thread or worker verticle. It's still the same concept; it's just a thread pool at that point, right? Work has to be done on a thread. So if you know you're not responding instantly with, here's my answer, here's my web page, here's my JSON, whatever it is, you do that in a background thread. The cool thing is that the foreground thread and the background thread communicate asynchronously. So you're basically still all async everywhere you need to be, except for that one little exception. In that case, aren't you going to pay, let's say, the context-switch cost, or the cost of synchronization between multiple threads? If you're doing all this on a single CPU, you might run into that contention. I think it uses a ring buffer, so it's actually lockless. So you don't lock, per se, anything at that point. In a typical use case, when people are using Vert.x as the front end for handling, like, 100,000 connections simultaneously, and you have some transactional work that takes longer to process, you just create another process, Vert.x forwards to it, and the response fires back when it's ready. So you can scale that independently too. In your case, it's always responsive. Then how do I monitor the whole cluster? Say I want to see the distribution of the load, spikes in the load, stuff like this. Is there anything ready to use out of the box? You want to monitor the whole cluster? Well, if you're using New Relic, you can use New Relic. Of course, I have to say that it can be... Is Hawkular integrated? Not really, per se. I mean, everything is exposed through JMX. Go crazy, right? Use JConsole, right? So literally in JMX, you'll see the number of event loops, the number of threads. You'll see the queue depth.
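The worker-thread handoff described above (don't compute Pi on the event loop; compute it on a worker pool and get the result back via an async callback) can be sketched with the JDK alone. This is the pattern, not the Vert.x `executeBlocking` API itself, and the Leibniz-series Pi computation is just a stand-in for any CPU-heavy job:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Worker-offload sketch: heavy computation runs on a worker pool, and the
// caller (standing in for the event loop) receives the result via an async
// callback instead of blocking.
public class WorkerOffload {
    private static final ExecutorService workers =
        Executors.newFixedThreadPool(4, r -> {
            Thread t = new Thread(r, "worker");
            t.setDaemon(true);
            return t;
        });

    static CompletableFuture<Double> computePiSlowly(int iterations) {
        return CompletableFuture.supplyAsync(() -> {
            // Leibniz series: deliberately CPU-heavy, which is exactly
            // why it must never run on the event loop.
            double pi = 0;
            for (int k = 0; k < iterations; k++) {
                pi += (k % 2 == 0 ? 4.0 : -4.0) / (2 * k + 1);
            }
            return pi;
        }, workers);
    }

    public static void main(String[] args) {
        computePiSlowly(1_000_000)
            .thenAccept(pi -> System.out.printf("pi ~= %.5f%n", pi))
            .join();
    }
}
```

The foreground stays free to handle other events the whole time; only the `thenAccept` callback runs when the worker finishes, which mirrors the "foreground and background communicate asynchronously" point above.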
Because there is a queue here, just like there is everywhere else, of work waiting for the event loop to handle it, and you'll see all of that. You'll want to monitor the event loop, and specifically the queue depth on the event loop, along with memory consumption and other things. But what you won't see is a high number of threads. In a normal Java app you would normally see a high number of threads; in this case, you don't. And it's easy to cluster, right? We use Hazelcast for cluster discovery. So basically, like that event bus demo was doing, every time you bring up a new JVM, it finds it, finds it, finds it, right? And then they share the workload across that group of JVMs. It doesn't matter if they're all on a single machine or scattered across a bunch of machines. And you can cross data centers if you need to, though that's always an iffy thing no matter what in this world. But it's pretty cool. Yeah, this is our Hazelcast gentleman here, right, as an example. And so there are a lot of cool things you can do with it. And actually, there is one slide I can show you to kind of answer some of your questions, if I have it here... right here. No, not this one. You know, there are a lot of capabilities. This is the stack, if you will, right? We just kind of focused on this one layer, right? But, you know, Hazelcast, ZooKeeper, Ignite, and we also have some Infinispan with JGroups integration there too. Integration with different messaging environments. We have async database drivers; Mongo, Redis, and Couchbase are really good ones, as an example. There's service discovery and circuit breaking. So you can use Hystrix for circuit breaking if you want to, but this is a true async circuit breaker. Hystrix is not; it's a synchronous circuit breaker, so you're blocking again. This is a non-blocking religion, right? So you just have to keep that in mind.
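The distinction drawn here between a synchronous and an async circuit breaker is worth making concrete. In an async breaker, failures are counted on async completions, and while the breaker is open, calls fail fast with an already-failed future rather than blocking any thread. A minimal sketch of that idea (illustrative only, not the Vert.x circuit breaker API, and omitting the half-open/reset state a real breaker has):

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Supplier;

// Async circuit breaker sketch: after maxFailures consecutive failures
// the breaker opens, and further calls fail fast without blocking.
public class AsyncBreaker {
    enum State { CLOSED, OPEN }
    private State state = State.CLOSED;
    private int failures = 0;
    private final int maxFailures;

    AsyncBreaker(int maxFailures) { this.maxFailures = maxFailures; }

    synchronized <T> CompletableFuture<T> call(Supplier<CompletableFuture<T>> op) {
        if (state == State.OPEN) {
            // Fail fast: the caller gets an already-completed future.
            return CompletableFuture.failedFuture(new IllegalStateException("circuit open"));
        }
        return op.get().whenComplete((v, err) -> onResult(err));
    }

    private synchronized void onResult(Throwable err) {
        if (err == null) { failures = 0; return; }
        if (++failures >= maxFailures) state = State.OPEN;
    }

    public static void main(String[] args) {
        AsyncBreaker breaker = new AsyncBreaker(2);
        Supplier<CompletableFuture<String>> failing =
            () -> CompletableFuture.failedFuture(new RuntimeException("backend down"));
        Supplier<CompletableFuture<String>> healthy =
            () -> CompletableFuture.completedFuture("ok");
        breaker.call(failing).exceptionally(e -> "fallback").join();
        breaker.call(failing).exceptionally(e -> "fallback").join();
        // Two failures tripped the breaker: even a healthy call now fails fast.
        System.out.println(breaker.call(healthy).exceptionally(e -> "circuit open").join());
    }
}
```

Nothing ever blocks waiting on the protected operation, which is the property being contrasted with a synchronous breaker like Hystrix.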
Dropwizard metrics, different programming languages like we showed you. So there's a lot here. The shared data is actually also very cool. You mentioned thread locals as an example; basically, you don't have any state within the context of the event loop. So there's actually a shared-data mechanism to share data across all your handlers on the event loop, which is cool and works perfectly asynchronously. There's also one to share data cluster-wide, too, which is nice. And it's just a simple programming model for that, too. Does that help? Was this cool? And again, I have stickers, and I'm happy to stay all night if you want to ask more questions. But I know some of you guys need to get home, and you're too polite to leave while the rest of us are sitting here, right? But thank you guys so much. And again, I have various stickers for this and other things if you want to try this out.