Okay, so my name is Clement Escoffier. I work for Red Hat, and I'm going to show you briefly how to build reactive distributed systems with Vert.x. The first thing we need to discuss is what Vert.x is. Vert.x can be summarized by this sentence: Vert.x is a toolkit to build distributed and reactive systems on top of the JVM, using an asynchronous, non-blocking development model. That's a lot of buzzwords, so let's define them. The first one is toolkit. Vert.x is not a container. Vert.x is not an application server. Vert.x is not a framework. It's a toolkit: just a plain, boring jar that you put on your classpath. Once it's on your classpath, you can start using Vert.x. All the Vert.x components we deliver are plain, boring jars. There is nothing smart or complicated about them, but thanks to this, you can use them everywhere. You can use them on a classpath, in a fat jar, embedded in a WAR, embedded in an EAR — anywhere, because there is no magic behind it. These slides are actually a Vert.x application: all of them are served by Vert.x. And we are going to see that it's not a simple Vert.x application — it's a clustered application, and all the other components I'm going to start will interact with these slides. So these slides are just a jar, with two dependencies: vertx-web, which is a component to build modern web pages and modern web apps with Vert.x, and a cluster manager, which will be Hazelcast in this demo — plus a couple of transitive dependencies. In this demo everything is packaged in a fat jar, but that's just a detail. Vert.x is made to build distributed systems. And there is no better definition of a distributed system than this one, which means: a distributed system is going to fail. Whatever you do, it's going to fail. So you have to live with that, and you have to deal with these failures.
And it's not just any kind of distributed system: it's a reactive system. Reactive systems were defined by the Reactive Manifesto a couple of years ago and are built on four characteristics. First, a reactive system is responsive: it responds to requests in an acceptable time — and acceptable depends, of course, on your use case. It's elastic: your load is going to evolve, so you need to be able to scale up, scale down, kill nodes, start new nodes, and so on. It's resilient: as we said, distributed systems fail, so you need to be prepared to handle these failures. And it's built on asynchronous message passing, because asynchronous message passing gives you location transparency, loose coupling, and so on. Something to understand is the difference between reactive programming and reactive systems. There are a lot of talks at JavaOne about reactive — if you search the agenda for "reactive", you will probably find 25 talks. But there are two meanings behind reactive. Reactive means "showing a response to a stimulus" — that's not from me, that's from the Oxford dictionary. From this, there are two branches. The first branch is reactive programming. That's at the programming level. It inherits from functional programming — it's really a subset of functional programming where you handle streams. A stream is a sequence of data, errors, and an end-of-stream marker. You have a pipe with data flowing inside, and you observe this pipe: every time there is a data item or one of these events, you are notified and you produce another stream. You have to produce another stream to stay in the reactive programming paradigm. In this branch we have all the Rx libraries — RxJava, RxJS, RxGroovy — and we also have Spring Reactor, which is reactive programming. Vert.x, on the other hand, is for reactive systems. We are not at the same level: we are at the architectural level.
A reactive system still shows a response to a stimulus, but the stimuli are not stream items: the stimuli are messages, failures, load, metrics, and so on. What you build as a reactive system is actually a living body: every time something happens, it reacts. It inherits most of its concepts from dynamic and autonomic systems, and from actor and agent systems. Something to understand: functional programming was defined in the sixties. Actor and agent systems were also defined in the sixties. So there is nothing new here — actually, in this whole presentation there is nothing new, just recycled ideas. In this branch we have Akka, and we have Vert.x. The last part of the definition is that it runs on top of the JVM. That doesn't mean it has to be developed in Java. We are at JavaOne, so all the code I will show here is Java, but I could have used Groovy, Ruby, JavaScript, Ceylon, Scala, or Kotlin. So, Vert.x the toolkit. We will build distributed systems, and there are two ways to build a distributed system. You can use a framework that hides the whole complexity, or you can use a toolkit that lets you handle this complexity — because, anyway, it's going to fail. So it depends on your passion for debugging a live system in production. Generally, we don't really like debugging systems in production, so you need a way to face this complexity yourself. Failure will be a first-class citizen, and Vert.x is not an all-in-one solution: it just provides the building blocks. Which building blocks? We provide TCP and UDP, HTTP/1.1 and HTTP/2 servers and clients, WebSockets of course, and a non-blocking DNS client. This one is actually very important: every time in Java you resolve a host name — new URL with a name — that's a DNS request, and it's blocking. If your DNS server is slow, your code is going to be slow, and there is nothing you can do: it's inside the JVM.
In Vert.x, you have a non-blocking DNS client that handles that. Then clustering: built-in clustering, a built-in event bus, distributed data structures, built-in load balancing, built-in failover, pluggable service discovery, circuit breakers, metrics, a shell, and a lot of other things. And this is just a small part of the components we provide. We build reactive systems, so applications built with Vert.x are going to be fast — I will show you some numbers at the end of this presentation; you will see it's really, really fast. They are going to be elastic: they scale up and down, and you do that just by starting a new node. Nothing complicated, and the load is dispatched among all the nodes you have started. They are going to be resilient: failure is a first-class citizen. The code might look a bit complicated at the beginning, but all the new platforms are built with failure as a first-class citizen. If you look at JavaScript, or if you look at Go, failures are right there — in Go, you have to check for an error after every call. We have built-in failover, and the asynchronous message passing is based on an asynchronous, non-blocking development model. Here is how that works. In traditional Java development, when you want to talk to a database, for example, you are stuck: you run your select or your query and you wait for the reply, until the database has computed the result and given it back to you. In a non-blocking approach, it doesn't work like that. We enqueue the request, pass a callback, and the thread is released. As it's released, it can do something else — task B, task C — maybe get some free time, and do some more work. And when we get the response from the database, we continue task A. And this may or may not happen — remember, distributed systems: that doesn't mean it's going to work, it means it may fail at any time. Here is how it looks in terms of code.
Generally, you were doing this: doSomething with parameters a and b, and you get a result back. With an asynchronous development model, you do the same thing — doSomething — you pass the parameters, and the last parameter is a callback. Okay, what about callbacks? Callbacks can become very complicated very quickly. So we have a second way to write exactly the same thing, with futures. Not Java futures — Vert.x futures. Java futures are blocking; Vert.x futures are non-blocking. Here, your operation — you just pass your parameters — returns a future, and when this future is completed or failed, this handler here is called. With futures, you can compose operations quickly and efficiently: do that when this is done, or wait for all these futures to complete successfully, and so on. So let's start to see some code, and we will start with a very simple paradigm, request-reply, since HTTP and REST are ruling, well, mostly all applications right now. The first application you write in Vert.x is this. You get a Vert.x instance here, and you start an HTTP server. As I said, Vert.x provides an HTTP server out of the box, so you just call createHttpServer. When we build an HTTP server, two things are important. The first one is: what do we do when we get a request? So we attach a request handler here, and it says: when I get a request, I just reply "world". Every time I get a request, I execute this. The second thing is this listen call here. Starting an HTTP server takes time: many things need to happen at the operating-system level — negotiating the sockets, making sure the port is available, and so on. So this operation is asynchronous and may fail. The "ar" that we have here is an async result: a structure that will contain the result of an asynchronous operation.
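To make the future composition idea concrete outside of Vert.x, here is a small self-contained sketch using the JDK's CompletableFuture. This is not the Vert.x Future API — Vert.x has its own non-blocking Future type with its own methods — but the "do that when this is done, with a fallback on failure" shape is the same:

```java
import java.util.concurrent.CompletableFuture;

public class ComposeDemo {
    // Simulated async operation: the caller is not blocked while it runs.
    static CompletableFuture<String> fetchGreeting() {
        return CompletableFuture.supplyAsync(() -> "hello");
    }

    // A second async step that depends on the first one's result.
    static CompletableFuture<String> decorate(String s) {
        return CompletableFuture.supplyAsync(() -> s + ", world");
    }

    public static void main(String[] args) {
        // "Do that when this is done": chain the steps without blocking in between,
        // and treat failure as a first-class outcome with a fallback value.
        CompletableFuture<String> result = fetchGreeting()
            .thenCompose(ComposeDemo::decorate)   // sequential composition
            .exceptionally(err -> "fallback");    // failure handling

        // join() only here, at the edge of the demo, to print the value.
        System.out.println(result.join()); // prints "hello, world"
    }
}
```

The point is that each step registers what happens next instead of waiting; no thread sits idle between the two operations.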
So here we can say: did it succeed? Yes — okay, that's fine. Otherwise, here is the stack trace. It's an async result, and as it's an async result, we have to handle the failure here. Already, in this very simple snippet, failure is a first-class citizen. Okay, let's get this running. So this is this code here — mostly the same code, I just extended it a bit. I use a SimpleDateFormat; as a Java developer you should say, "come on, this is not thread-safe, this is going to fail in production". We are going to see about that. I also print the name of the current thread here, and we will look at it. So I'm going to start this. Remember, Vert.x is not a container, not an application server — it's just a toolkit. So starting a Vert.x app is very complicated: Run. No plugin, nothing. You take your IDE and you just start it. You don't need any plugins; you don't need anything. It's just a main method. So it started on port 8081. And if I go back here and invoke it, you see: that's my date. And, well, that's a weird thread name. Look — and it's always the same: always this "eventloop-thread-2". But what is this? What does event loop mean here? An event loop is very simple; it can actually be implemented in four or five lines of code. It has a queue of events somewhere. Events are passed in by event providers — we don't need to know what they are; it's just a queue of events. And the event loop takes the next event, finds the interested handlers, and dispatches the event. Very simple. And as it's a single thread, you don't need any concurrency or synchronization constructs, because there is a single thread. You can't have deadlocks. You don't need synchronized. You don't need volatile, because, well, there is a single thread. That's exactly the same model you have in Node.js, the same model you have in Android, and so on.
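The "four or five lines" claim is barely an exaggeration. Here is a toy single-threaded event loop — a queue of events, registered handlers, and a dispatch loop. This is an illustration of the concept, not Vert.x's actual implementation (which runs each loop on a dedicated Netty thread):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;

public class MiniEventLoop {
    private final Queue<String> events = new ArrayDeque<>();       // the queue of events
    private final List<Consumer<String>> handlers = new ArrayList<>();

    public void emit(String event) { events.add(event); }          // event-provider side
    public void handler(Consumer<String> h) { handlers.add(h); }   // register interest

    // The loop itself: take the next event, find the handlers, dispatch.
    // Single-threaded, so handlers never need synchronization --
    // but one blocking handler stalls every event behind it.
    public void run() {
        String event;
        while ((event = events.poll()) != null) {
            for (Consumer<String> h : handlers) h.accept(event);
        }
    }

    public static void main(String[] args) {
        MiniEventLoop loop = new MiniEventLoop();
        StringBuilder seen = new StringBuilder();
        loop.handler(seen::append);
        loop.emit("a");
        loop.emit("b");
        loop.run();
        System.out.println(seen); // prints "ab": events dispatched in order
    }
}
```

Because only one thread ever touches the handlers, the no-deadlock, no-synchronized, no-volatile properties fall out for free.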
This development model is very powerful but has one main drawback, which leads to the golden rule of Vert.x. If one of your handlers here blocks — it's expensive, it takes time, things like that — what happens? The thread is stuck. And while the thread is stuck, the event providers keep enqueueing events and events and events, and nothing is processed, because you are stuck here. In the end, you run out of memory. So the golden rule is very simple: never, ever block the event loop, or the whole model collapses. If you don't block the event loop, this is going to be pretty fast — I will show you the performance at the end; it's really, really fast. Okay, so we saw how to write a server, but we can also have an HTTP client. Vert.x comes with a built-in asynchronous HTTP client, and we create one like that: vertx.createHttpClient. We set the host, we set the port, and then we say: do a GET request on "/", and here is what happens when I get a response. HTTP is interesting here, because when you get the response, you only get the status line and the headers. You don't actually get the body — the body can arrive later. That's why, if I'm interested in the body, I need to register a body handler that will receive it. Buffer here is just a representation of a byte array — just a byte array, actually. So if I do that, it's going to be this code here, exactly what I've shown. So I just do my GET, then I get the response. I'm interested in the body, so I read the body. And here I'm doing it the second way, without callbacks: I just return a future. When I have the body, I complete the future — I give a value to this future. It can come at any time.
I don't care, because as soon as I call future.complete, this handler here is called. So now, let's look at my browser. This application here calls a server that uses the HTTP client, which calls the server I started in the previous demo. And if I do that... did I start it? I forgot to start it. So if I don't start the application, it doesn't work — that also holds for non-distributed applications, actually. If you don't start the application, it doesn't work. Here we go: as soon as I start the application, things are much better. So we have the hello with the part appended by B; the JavaOne part was coming from A, and now we have this. And obviously, we are still using the same thread. So right now in this system we have my slides, which are a Vert.x application; B, which is a Vert.x application; and A, which is a Vert.x application. We are already a distributed system with three nodes. Let's go a bit further, because that was just request-reply — let's do some messaging. Vert.x comes with a built-in messaging system, named the event bus. It's the nervous system of Vert.x: every Vert.x application is going to use the event bus at some point. It's very simple. It has two concepts: addresses and handlers. Good news — we already saw handlers. Handlers are just callbacks that receive messages. An address is how we address a message. We don't have hierarchies, topics, queues, things like that — it's just an opaque string. If you want to use spaces or tabs, you can. If you want UTF-8 characters in your address, you can. If you use only spaces and tabs, well, you can, but that's going to be hard to debug. Once we have this, the event bus provides three ways to deliver a message. The first one is point-to-point: you send a message to an address, and someone else listening on this address gets the message.
How it works: from the vertx object, you get the event bus, and you send a message on an address — here just a string, but it can be an object. On the consumer side, you register a consumer on the same address, and here is my handler: when I get the message, I do this — here, nothing. A message is very simple: it has a body and a couple of headers. Then, obviously, we have publish/subscribe — same thing, but instead of send I use publish. And we have a third delivery pattern, which is quite interesting: request-response. A sender sends a message to an address; a consumer receives it and has the ability to reply to this message. On the sender side, I send my message to the address and I say: when I have a reply, do this — so I attach a reply handler to my call. And on the consumer side, when I get the message, I can reply. Now imagine the outgoing message says: method "foo", param one "hello", param two "JavaOne", and the consumer replies with "hello, JavaOne". This is RPC. We can implement a full RPC system on top of this asynchronous messaging — an asynchronous RPC, to be precise. Obviously, an event bus that is local to a single node is not very interesting. What's cool is to have this event bus distributed among all the nodes I've started. To do that, instead of calling Vertx.vertx, I call Vertx.clusteredVertx. I pass some options here — never mind those — and a result handler, because this is an asynchronous operation. Joining a cluster takes a while, because there is a lot of negotiation: first we need to check that the cluster exists, then join it, then wait until consensus is computed. It's an asynchronous operation that can take time and can fail. But once it has succeeded, you get your Vert.x instance, like that.
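The three delivery patterns fit in a few lines each. Here is a toy, in-memory event bus — not the Vert.x API and not distributed — just to show send (point-to-point, round-robin), publish (every subscriber gets a copy), and request-response (the consumer gets a way to reply). The class and method names are mine, chosen to echo the concepts:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.Consumer;

public class ToyEventBus {
    // Addresses are opaque strings; handlers are plain callbacks.
    // Each consumer receives (messageBody, replyCallback).
    private final Map<String, List<BiConsumer<String, Consumer<String>>>> consumers = new HashMap<>();
    private final Map<String, Integer> next = new HashMap<>(); // round-robin cursor per address

    public void consumer(String address, BiConsumer<String, Consumer<String>> handler) {
        consumers.computeIfAbsent(address, a -> new ArrayList<>()).add(handler);
    }

    // Point-to-point: exactly one consumer gets the message, chosen round-robin.
    public void send(String address, String body, Consumer<String> replyHandler) {
        List<BiConsumer<String, Consumer<String>>> list = consumers.get(address);
        if (list == null || list.isEmpty()) return; // nobody listening: message is lost
        int i = next.merge(address, 1, Integer::sum) % list.size();
        list.get(i).accept(body, replyHandler);
    }

    // Publish/subscribe: every consumer on the address gets a copy; no replies.
    public void publish(String address, String body) {
        for (BiConsumer<String, Consumer<String>> h : consumers.getOrDefault(address, List.of()))
            h.accept(body, reply -> {});
    }

    public static void main(String[] args) {
        ToyEventBus bus = new ToyEventBus();
        // Request-response: the consumer replies through the callback it was handed.
        bus.consumer("greetings", (msg, reply) -> reply.accept("hello, " + msg));
        bus.send("greetings", "JavaOne", response -> System.out.println(response)); // prints "hello, JavaOne"
    }
}
```

The real event bus adds headers, codecs, and cluster-wide delivery, but the address/handler contract is exactly this simple.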
And all the examples I've shown still work. What is cool is that every time I start a Vert.x node in the cluster, the event bus is distributed. But the event bus is so simple that it's not limited to Vert.x. We have a TCP bridge, so C, C++, and .NET applications can interact over the event bus. We have Node.js — I'm going to show that in a second: Node.js can also be a member of the event bus and send and receive messages. And my browser here — this browser is going to receive and send messages on the event bus. So — oh, there's something running in the background, cool. Here I'm going to start this application, which publishes a message every two seconds on the event bus, on this address here. Okay, so I'm starting it. We see it joining the cluster, and I have a cluster of two nodes: my slides and this new application. And once it has started, we see "hello from" and now a counter: two, three, four, five. Okay. So now we have a Java application sending messages to my slides. That's my browser, seriously — and I'm really not a web developer, so believe me, I'm unable to do anything fancy in web development. But let's go a bit further. If I go to my terminal here and run npm start — look: "Bonjour from node". Now we have a Node application sending messages to my slides through the event bus. I can show you the Node application; it's very simple. You just use this npm module we provide. From this module, you create the event bus — oh, that's an interesting address; where does it come from? We are going to see that; it's actually the bridge address. And once you have opened the event bus, you can send or publish messages — it works with publish too. So how does that work? Like that. This is the Vert.x application that serves my slides, and it has a SockJS bridge.
SockJS is a JavaScript library that negotiates a transport between the client and the server. It starts with WebSockets and degrades to iframes, polling, or SSE, and so on. And this bridge is used by Node and by my browser to connect to the event bus. So my Node publisher connects to this bridge and sends an event. The bridge puts this event on the event bus, and it comes back through the SockJS bridge, because my browser has also registered on this bridge, saying: I'm interested in this address; when you get an event on this address, send it to me. It uses WebSockets in both cases here, because Node.js is really good at WebSockets, and I'm using Chrome, so it has WebSockets too. And my Java publisher here just sends messages on the event bus, as we saw earlier. So, I'm going to stop a couple of things now. It's nice to build distributed applications, but again, they are going to fail. And they are going to fail a lot, believe me. So be prepared: you will have to handle failures yourself. We are not bulletproof; we are humans. We all write bugs, and the hardware, and the physics behind networks, will create failures too. So you need to handle these failures. We already saw one tool for this: the async result. An async result is a structure that contains the result of an asynchronous operation, and an asynchronous operation can fail. That's what we check here — or it can succeed. If it failed, you can get the cause, which is an exception; if it's valid, you can get the result. Another way to handle failure is to set timeouts. It's actually a pretty bad way. I'm a former academic, so I'm going to cite a couple of very old papers and things like that. In 1985, researchers proved that you cannot use a timeout to be sure that an operation has failed. And it makes a lot of sense. Say I have A and B.
A calls B, and if A doesn't get the reply within one second, we call it a failure. But that's not necessarily true. It may mean the message on its way to B was lost. Or the operation on B may have failed. Or the message on its way back may have been lost — which means the operation may actually have succeeded. Still, timeouts remain useful for us, so all operations in Vert.x can have a send timeout or a timeout — sometimes it's configured differently. And when the timeout fires, the async result is treated as a failure. Again, that doesn't mean the operation didn't succeed; it means we didn't get the result within that one second. All our timeouts are in milliseconds. Another approach, very popular in the microservices world, is the circuit breaker. Circuit breakers are old too — the idea was used in the seventies in operating systems. A circuit breaker is just a three-state automaton; it's really very simple. You start in the closed state, and every time one of your interactions fails, you keep track of the number of failures. Once the number of failures reaches a threshold, you go to the open state. In the open state, you don't execute the operation anymore: you go to a fallback immediately. Periodically — and that's the cool part of circuit breakers — it tries to go to a half-open state. In the half-open state, it takes the next request, and one request only, and tries to execute the operation. If the operation succeeds, then, well, everything's fine: you go back to the closed state. If it fails, you go back to the open state and wait for the next attempt at half-open. Why is this important? Because when there is a failure, either the failure can cascade — bubble up until the whole system dies — or you can contain it to a single node. You can do that with async results alone.
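The three-state automaton just described can be sketched in a few dozen lines. This is a toy illustration, not the vertx-circuit-breaker module; names, the failure threshold, and the "retry after N short-circuited calls" policy (real breakers usually use a reset timer instead) are mine:

```java
import java.util.function.Supplier;

// CLOSED -> OPEN after `threshold` failures; OPEN -> HALF_OPEN after `retryAfter`
// calls have been short-circuited; HALF_OPEN lets exactly one trial through and
// goes back to CLOSED on success or OPEN on failure.
public class ToyCircuitBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private State state = State.CLOSED;
    private int failures = 0;
    private int shortCircuited = 0;
    private final int threshold;
    private final int retryAfter;

    ToyCircuitBreaker(int threshold, int retryAfter) {
        this.threshold = threshold;
        this.retryAfter = retryAfter;
    }

    public String call(Supplier<String> operation, Supplier<String> fallback) {
        if (state == State.OPEN) {
            if (++shortCircuited >= retryAfter) { state = State.HALF_OPEN; shortCircuited = 0; }
            else return fallback.get();               // open: don't stress the failing system
        }
        try {
            String result = operation.get();
            state = State.CLOSED;                     // success (re)closes the circuit
            failures = 0;
            return result;
        } catch (RuntimeException e) {
            // A failed half-open trial reopens immediately; otherwise count toward the threshold.
            state = (state == State.HALF_OPEN || ++failures >= threshold) ? State.OPEN : State.CLOSED;
            return fallback.get();
        }
    }

    public State state() { return state; }

    public static void main(String[] args) {
        ToyCircuitBreaker cb = new ToyCircuitBreaker(2, 3);
        Supplier<String> failing = () -> { throw new RuntimeException("boom"); };
        Supplier<String> fallback = () -> "fallback";
        cb.call(failing, fallback);                   // failure 1 of 2: still CLOSED
        cb.call(failing, fallback);                   // failure 2: trips the breaker
        System.out.println(cb.state());               // prints OPEN
        System.out.println(cb.call(() -> "ok", fallback)); // short-circuited: prints fallback
    }
}
```

Note that once open, the breaker answers from the fallback without even touching the operation — that is what contains the failure.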
But what the circuit breaker gives you is time for the failing system to recover. You don't stress it, because as soon as you are in the open state, you execute the fallback. So you don't hammer a system that is in very bad shape and needs to recover — it needs to take a breath, get some fresh air — and once it's back, thanks to the half-open state, you recover. Vert.x has its own circuit breaker. Of course you can use Hystrix, of course you can use any other circuit breaker, but we provide our own, which is integrated with the non-blocking model. Another important question: circuit breakers are for failures, but what happens when a whole node crashes? Segfault, out of memory — all these things happen. We have leaks; we have segfaults because we use JNI. So at some point, something is going to crash. Vert.x has a built-in mechanism to handle this. To introduce it, I need one more concept: the verticle. A verticle is just a chunk of code — just a class — that gets deployed and run by Vert.x. A verticle can deploy other verticles, and so on and so on: it's a kind of component model, a very lightweight one. Verticles can be written in any supported language: Java, Groovy, JavaScript, Ruby, Ceylon, Kotlin, Scala, and so on. When you start Vert.x in high-availability mode and one of the machines in your cluster crashes, Vert.x detects it, takes all the verticles that were deployed there, and redeploys them automatically on another node of the same cluster. You don't have to buy an expensive solution — it's all provided. Generally, your cloud provider can do this; here, you don't even need to rely on your cloud provider — Vert.x does it for you. So here, if this machine dies, this verticle is automatically redeployed on this other node.
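The failover idea itself — a registry of which node runs which verticle, with orphaned deployments moved to a survivor — can be sketched very simply. This toy is in no way Vert.x's HA implementation (which involves cluster membership and quorum); it only illustrates the bookkeeping:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy sketch of HA failover bookkeeping: node -> deployed "verticles".
public class ToyFailover {
    private final Map<String, List<String>> deployments = new LinkedHashMap<>();

    public void addNode(String node) { deployments.put(node, new ArrayList<>()); }
    public void deploy(String node, String verticle) { deployments.get(node).add(verticle); }

    // Simulated crash: everything the dead node was running is redeployed elsewhere.
    public void crash(String node) {
        List<String> orphans = deployments.remove(node);
        if (orphans == null || deployments.isEmpty()) return; // whole cluster gone
        String survivor = deployments.keySet().iterator().next();
        deployments.get(survivor).addAll(orphans);
    }

    public List<String> runningOn(String node) { return deployments.getOrDefault(node, List.of()); }

    public static void main(String[] args) {
        ToyFailover cluster = new ToyFailover();
        cluster.addNode("A");
        cluster.addNode("B");
        cluster.deploy("A", "http-verticle");
        cluster.crash("A");
        System.out.println(cluster.runningOn("B")); // the orphaned verticle moved here
    }
}
```

In the real thing, "detecting the crash" is the hard part; the redeploy step is essentially this.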
There are settings you can configure to control which node it lands on. I won't show you a demo here, but if you're interested, I have a full talk on Vert.x Wednesday morning where I demonstrate all of this — the circuit breakers, the failover — with live demos. Elasticity patterns. The worst thing that can happen to you is that you develop a great app, you start getting successful, and then you fail — because your system becomes slow under the load. So be prepared to be famous; you never know, you may become rich. There are two elasticity patterns. The first is vertical scalability, the second is horizontal scalability. Vertical scalability means that instead of one event loop — remember, I introduced event loops — we don't have one event loop in Vert.x: we have as many event loops as you have CPU cores. But that doesn't break the no-synchronization promise I stated at the beginning — we guarantee that. Your code is still single-threaded from its own point of view; you don't need synchronization constructs. But the load is dispatched across all your CPU cores, so a Vert.x application actually uses all the cores of your CPU. The second thing: thanks to clustering, we also have horizontal elasticity. Vertical scalability is bounded, because you can't add a core to your CPU — or that's really advanced engineering. But with horizontal elasticity, you can increase or decrease the number of nodes in your cluster. So if you have a lot of users, you start new nodes, and once your peak is over, you can reduce again. How does it work? It's very simple: every time you send a message on the event bus, instead of always delivering to the same consumer, it implements a round-robin.
So if you have a lot of messages, they are handled first by A, then B, then C, back to A, B, C, and so on. And when you get less load, you can kill C, and it's just A, B. And if there's really almost no traffic anymore, well, just kill B. Your Amazon bill at the end of the month will be much, much smaller if you run only one node and scale out only when you have a peak of load. Vert.x handles that for you: round-robin is built in. Okay, but that's not the end. What I've presented is almost only Vert.x core — this part here — but there are many, many other things. We have asynchronous RPC. We have a lot of things around TCP. We have Vert.x Web, which was used to build this application. We have clustering. You can use Rx and reactive programming to build your reactive system — two times "reactive", not the same meaning: reactive programming to build your reactive system. We have Docker support. We have an AMQP bridge. We have Mongo. We have JDBC — because yes, you can still deal with JDBC; JDBC is a blocking system, and we provide you an asynchronous client. We have Stomp. We have service discovery. We have integration — we have Camel somewhere... yes, here. So you can integrate a reactive system with all the legacy applications that exist right now, thanks to Camel and so on. We have JCA if you have to plug into an application server. We have HTTP/2, and so on. If you want to go further: first, the Vert.x website, vertx.io. Then there is this blog post series here, which is really for developers and will walk you through building your first Vert.x application, with testing, integration testing, and so on — an application to manage my collection of whisky bottles. Yes, I developed it, and it's actually used by me. And then there is a hands-on lab Thursday noon, and a talk Wednesday morning — so check your agenda.
If you like Vert.x, vote for us for the JAX Award — it would be great if we can win this Innovation Award. I promised some performance numbers, so let's have a quick look. This is TechEmpower. It's preliminary data, but it's almost final. We are here — one, two, three, four... we are sixth, so we are in the top ten. Most of the frameworks above us are pure native; we are based on Java, and we are faster than Netty on this benchmark. All the benchmarks show similar results. That's all for me. If you have questions, I will be around the booth. Thank you very much.