Hello, everyone. So we'll have Hugh McKee presenting Akka and Kubernetes: Building Clustered Systems today. Yeah, you can go ahead. All right, thank you. Hey, everyone. Like she said, my name is Hugh McKee. I'm a developer advocate. I work for a company called Lightbend. Anybody heard of Lightbend? So-so. How about Scala? Anybody heard of Scala? So we're the company behind Scala. One of the co-founders of our company, Martin Odersky, is the creator of Scala. The other co-founder of the company is Jonas Bonér, who's the creator of Akka. Scala is an object-functional language on the JVM. Akka is the actor model implemented on the JVM, with Scala and Java APIs. And today, I'm going to be talking about running Akka on Kubernetes. Just real briefly, though: as I mentioned, my company is the company behind Scala and Akka, which are all open source. But there are two other frameworks as well. One is Play, which is a web services framework. It's all asynchronous, and it's been asynchronous since before asynchronous and reactive programming became cool. And then Lagom, which is a relatively new microservices framework. Again, all open source. But really, I'm here today to talk about running Akka, the actor model, on Kubernetes, because Akka has been a platform that can run in a cluster for a long time. So you have an actor model that runs in a JVM, but it can also run as a single system spanning a cluster of JVMs, a fairly tightly coupled cluster of services where actors are talking to each other across the cluster. And we've needed an orchestration platform for a long time. Akka is 10 years old, and Akka clustering came out around 2013. Since then, we've really been waiting for something like Kubernetes to come along to manage running all the JVMs for us. And that's really what I want to show you here. Part of it, though, is showing you actors running in Kubernetes. So have you heard of the actor model before?
Anybody use any kind of actor model, like Erlang, or Vert.x, which kind of does the actor model a little bit? The actor model has actually been around for a long time. It was originally proposed back in the early 70s by a guy named Carl Hewitt. And it's been in use ever since. Erlang was one of its first big usages, and it was often used in things like phone systems: systems that had to run all the time, had to survive hardware failures and keep running, had to go through software upgrades and keep running, things like that. And the actor model really worked out to be a nice way of doing that. To describe the actor model a little, if you're not familiar with it, I stole the first few sentences from the Wikipedia definition of the actor model, which I thought was very concise and very well written. So I just stole it, and I'm going to walk through it a little bit. The actor model is a mathematical model of concurrent computation, which sounds a little scary, but don't let that scare you off. One of the big things with the actor model is that it's very, very good at concurrent computing, which means it's very good at threading. If you've ever done any kind of thread-based programming, especially in a language like Java, it can become pretty wild, pretty fun, and difficult to do right. The objective is to do a lot of things concurrently, to run multiple threads at the same time. With the actor model, doing things concurrently becomes a lot easier and a lot more intuitive, and hopefully you'll see a little bit of that. Now, I've heard people say that the actor model, or coding with actors, is hard. But I picked up Akka back around 2014 or so, and I've been programming for a long time, working with Java since it came out in the 90s, all that kind of stuff. When I came across the actor model, I thought I'd died and gone to heaven.
I mean, it was the coolest stuff I'd ever worked on as far as building a system. It took me a little while to wrap my head around building systems with actors, but the more I got into it, the more fun it got. And it was awesome: I could solve problems that I just couldn't dream of solving before. One way I like to explain what actors fundamentally are (you write them in software; in Java you just write them as a Java class, and I'll show you some code and how it's done) is as a fundamental building block with very simple characteristics, kind of like a Lego block. Lego blocks are very fundamental, with very simple characteristics, so you can build things with them. And just as you can build really cool things with Lego blocks, like this picture I grabbed of a castle somebody built out of them, you can use actors to build really powerful, sophisticated systems. Now, the only way you talk to an actor is by sending it a message, an asynchronous message. So it's kind of like you and me texting. You send me a text message, which is really an asynchronous message. I get that message; you're free to go on and send other messages to other people or do other things. Well, that's exactly how one actor talking to another actor works: one actor sends a message to another actor, and it's an asynchronous message. Now, when an actor gets that message, it can do things like make local decisions. It's just a class; there's a method where the message comes in, the method's invoked, and that triggers the code in the actor class to start doing stuff. So it's like: what do I do with this message? How do I handle it? Now you're in regular code handling it. So it can make local decisions.
What gets fun, though, is that one actor can create other actors, so you can create a hierarchy of actors. It's also a form of delegation and concurrency. Say one actor gets a message asking it to do something; that actor in turn delegates the work out to other actors. Maybe there's part one, part two, part three, part four of the task to be done, and those parts can run concurrently. So say you send me a message, and I'm an actor. Well, I send a message to four other actors, bang, bang, bang, bang, and they're all running concurrently, working on their part of the solution. And when they're done, they send me a message back. So it's not request-response; it's actors deliberately receiving a message, reacting to it, performing some kind of work like delegating out to other actors, and maybe sending other messages. So an actor, in response to getting a message, can send more messages, which is what I just explained. This one's really fun and powerful: when an actor gets a message, it can decide how it's going to react to the next message it gets. A quick example, and this is just by design, since we're the developers implementing our actors: say an actor gets a message, and by design this actor is in an idle state, meaning it's just sitting there waiting for something to do. As soon as it gets that message, it decides it's no longer in an idle state; it's in an active state. So if it gets another message asking it to do something, the way it reacts to that second message is different from the way it would have reacted to the same type of message when it was idle. When it's idle and it gets a message, hey, can you do this? It goes, okay, yeah, I can do it. If it's in an active state when it gets a message, hey, can you do this? The response is, no, I can't.
So say the scenario is: I get a message asking me to do something, I delegate the work out to a bunch of other actors, they're doing their thing, and I'm just sitting there; this actor's not doing anything. I get another message asking me if I can do something. My response is: nope, I can't, I'm busy. Even though I'm not really busy, it's the delegate actors that are currently busy. So that's a unique characteristic of how you program an actor: how do you react to the next message? The other thing that's really interesting, and this is where the concurrency part comes in, is that the only way the state of an actor can be changed is in response to a message. It's very different from the object model, where one object has a reference to another object and can invoke methods on it, so the state of that object can be changed by multiple other objects, and you've got to use locks and things like that to handle the concurrency. With an actor, there's a really clean break: the only way you talk to an actor is to send it an asynchronous message. So in the case of doing things concurrently, this avoids the need for locks. In thread-based programming, you might have a synchronized block, which is a form of locking: thread one has entered this block of code and is running it; if thread two tries to enter, it can't, it waits, because it's locked out. That's the kind of approach. With actors, there's a fundamentally different approach. So that was a real quick overview of actors. It's very, very cool, and I love it. Kubernetes is the shiny new kid on the block. I go to a lot of conferences, and it's one of the big topics right now. I've already talked about it.
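Those two ideas, that an actor's state only changes in response to a message and that an actor can react differently to its next message, can be sketched in plain Java without any framework. This is a toy, single-mailbox illustration, not Akka; the class and message strings are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A toy actor: an asynchronous mailbox, one thread draining it, and
// state ("busy") that is only ever touched while handling a message.
class ToyActor implements Runnable {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final List<String> replies = new ArrayList<>();
    private boolean busy = false;  // local state; changed only via messages

    void tell(String msg) { mailbox.add(msg); }  // asynchronous send

    public void run() {
        try {
            String msg;
            while (!(msg = mailbox.take()).equals("stop")) {
                if (msg.equals("work")) {
                    // React differently depending on state set by earlier messages
                    replies.add(busy ? "no, I'm busy" : "ok, on it");
                    busy = true;
                } else if (msg.equals("done")) {
                    busy = false;  // a delegate reported back; idle again
                }
            }
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }

    List<String> replies() { return replies; }
}

public class ToyActorDemo {
    public static void main(String[] args) throws Exception {
        ToyActor actor = new ToyActor();
        Thread t = new Thread(actor);
        t.start();
        actor.tell("work");  // idle, so it accepts
        actor.tell("work");  // now active, so it refuses
        actor.tell("done");  // delegates finished; back to idle
        actor.tell("work");  // accepts again
        actor.tell("stop");
        t.join();
        System.out.println(actor.replies());  // [ok, on it, no, I'm busy, ok, on it]
    }
}
```

Akka's real mailboxes and dispatchers do this far more efficiently, and in Akka you'd change behavior with things like getContext().become, but the shape is the same: an asynchronous send, a mailbox, and state touched by exactly one thread at a time.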
Last year I spent a good part of the year working with a customer, a big enterprise insurance company, that has an on-premise Kubernetes environment and wants to move a lot of its legacy applications to it. We were working with them, helping them do that. A very, very cool environment. But for running clustered actor systems, Kubernetes is like the perfect platform for something like Akka. And as I hope I'll be able to show you, there's some real power in Kubernetes. The nice thing, I always say, is that I'm a Java developer: I want to build my code and run it with the least amount of friction possible. In the past, we had all this friction, all these details we had to deal with to get something running on a machine somewhere. Kubernetes is the lowest-friction environment I've personally run into. With Kubernetes, you basically say: here's my application. You define a file (there are other approaches now, but this was one of the original ways) that describes what your application looks like to Kubernetes. And it's a declarative description, not an imperative one. It's not: run this, run that, run this other thing, and so on. It's basically saying: here's my JVM, it's in this Docker image, I want you to run this image for me, as one instance or N instances, and you, Kubernetes, take care of the rest. So when you declare this to Kubernetes, Kubernetes starts running our jar. Our jar is running; say it gets a spike in traffic, and you've declared to Kubernetes that you want this to auto-scale. Kubernetes will scale it up for you, scale it up, scale it down.
Even better, on the resilience side, if the underlying infrastructure where your jar and Docker image are running dies or has a problem, Kubernetes detects that and restarts your jar somewhere else in its cluster, in its environment, without us having to write any code to do that. Which is very, very nice. So it's this very declarative way of describing: here's my stuff, please run it. And Kubernetes really takes over all the heavy lifting. So let me show you what I've got. On my laptop I'm running a Kubernetes environment. I'm using Minishift, which is the local version of OpenShift. A lot of people use Minikube; I just typically run this environment on my laptop, and I like Minishift because it's a little easier to install in many cases. So, just to show you, I've got this environment running; hopefully everything's going. There's a command line interface, a couple actually, that I can use. And what I typed was oc get pods; oc is the OpenShift CLI, its command line interface. A pod is Kubernetes' unit for running something, and a pod can run one or more containers. In the case of my application, I've got three containers running in three pods. Inside each of those containers I'm running a single JVM, so there are three JVMs running here. Now, the nice thing is that I can bring up a console, which looks a lot like the real Kubernetes console running in a real environment, not on my laptop. I've got some projects defined, and the project I want to show you is this one I just called AkkaClusterDemo. And real quick, you can see in this user interface that it's showing some details about my application: it's running three pods, with a nice big circular view of it down below. There's more detailed information I can click into. But what I want to show you is the application itself.
So this is a visualization of the sample application I've got running; it's an Akka application. At the end of the talk, I'll give you the link to the GitHub project that has all the source code. Basically, what I'm trying to do is show a lot of stuff in one single picture. This diagram is done using a really awesome JavaScript library called D3.js. I love it; it's great for visualizations. This whole visualization is done with D3.js, with a little bit of JavaScript programming on my part. Very cool. If you've never used it and you want to do some kind of awesome visualization, it's very powerful. Go look at its examples; it has a ton of examples that are just awesome. In any case, I'm just showing this running environment. You can see there are things happening; it's kind of moving around. I'll be explaining what all this is, and we'll be using it to see what happens as things change in the Kubernetes and Akka environment. Somebody in my company called this visualization a crop circle, and I kind of like that name; it does look like a crop circle. But, as I say, it's an attempt to show what's running here. First off, there's the center circle, which is just the root of this tree. Off of that are three big circles, and off of those there's a whole tree of things. Every one of the green and blue circles represents an instance of an actor, a real actor running in this system. The really interesting ones here are the blue circles on the perimeter. The blue circles are where the business logic lives. I'm calling each one an entity, and really that's what it's used for: it's an instance of an actor that's handling something like the telemetry data coming in from a specific IoT device.
So for device one, device two, device three, there would be three blue circles, one for each of those devices. Or say it's a shopping cart application, so you have N customers, each building their own shopping cart. Each entity actor handles the incoming requests for a specific shopping cart: my shopping cart is one of those blue circles, your shopping cart is another one. The green circles are another type of actor, and they're instances of actors too; I'll show you the Java code behind this in a minute. They're called shard actors, and the job of these actors is to distribute the load over the cluster. Because, again, we're running three JVMs across the network. Think of this as a single microservice; it can scale from one JVM up to N JVMs, and in this example I'm showing it running on three. But it's a single microservice, and we need to distribute the work across the cluster. That's what these green sharding actors are for. Now, in this example application, there are only 15 shards. There will always be 15 shard actors, no matter how many nodes there are in the cluster, and we'll see how that works in a minute. The bigger circles represent multiple things: each one is a pod, it's a JVM, and it's some instances of actors, one of each per JVM, that type of thing. The little pink circles that you see poof out of existence: what's happening there is that these entity actors are written to stay active only while they're receiving messages. If an actor stops receiving messages for a while, it shuts itself down, and the visualization here shows it poofing out of existence.
So again, think of this one circle as representing a single microservice in a system of microservices. I'm not saying every microservice in the system has to be written in Akka; it was just easier for me to make this quick slide by copy-pasting the single circle. So let's go back to the running environment. One thing I can do is simulate losing a node. We're running along here on three JVMs, this microservice is handling all these requests, and say something goes wrong. To simulate this, I set it up so that I can click one of these big circles, and it will cause that JVM to shut down. So the JVM shuts down, and what you'll see pretty quickly is that we're back to having those 15 green circles: there were five of them on the node that went down, and now they've been redistributed onto the other nodes. In the background, that was Akka, actors reacting to the loss of a JVM. Now, Kubernetes also saw that we lost a JVM, and as you can see, it has already started one back up. It started a new JVM pretty quickly, but in the meantime the service had to keep running, it had to keep handling all these requests. So what you see here is that the new JVM started up and joined the cluster, and then the Akka actors saw this and went: ah, we've got more capacity. We lost some capacity a moment ago, but now we've got more capacity back. So it starts shifting work back over to it. You can see visually as this happens: we lost a JVM for a bit, the Akka actors reacted pretty quickly, and Kubernetes also reacted pretty quickly. A new JVM started up, joined the cluster, and things started to migrate back over to it. This is what the sharding actors do: they rebalance. When we went down to two nodes, all 15 shards were on those two nodes. When we go back up to three nodes, the shards redistribute themselves across the cluster.
Another thing I can do is simulate this microservice getting a spike in traffic. I mentioned earlier that you can set up your Kubernetes environment so that when the CPUs of the three JVMs reach a threshold you've configured, say CPU utilization over 60%, that's a trigger for Kubernetes to start up some more pods, which starts up some more JVMs for you. So I'm going to simulate that here; I'm going to use the user interface to scale it up. You can see it goes gray and then it starts up. What's happening behind the scenes is that we now have five pods running, and in the visualization, what we should see in a little bit, hopefully, is two more of these big circles joining in. Now, it takes a little time, because behind the scenes the pod starts up and has to pull a Docker image, the Docker image starts up and has to load the JVM, the JVM starts running my code, and my code runs the code to join the Akka cluster. But that's happened. You can see we're now at five big circles, right? So there are five JVMs running; that's what Kubernetes did. And now the Akka environment sees: oh, we've got extra capacity. This is what the shard actors are for. You can see the green shard actors slowly migrating over, bringing along the entities they're responsible for. The shard actor activity is being stopped on the older nodes and rebalanced, okay? So this is a way of gracefully taking advantage of the increased capacity. Now, the green actors come out of the box with Akka; it's called cluster sharding. The blue actors are code that I wrote as a developer. The mechanics of these green actors, and some other actors involved in cluster sharding, isn't rocket science.
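The 15-shard idea can be sketched very roughly in plain Java. This is not Akka's actual shard-allocation strategy (Akka rebalances shards dynamically, and in practice you'd use its cluster-sharding API rather than write this); it just shows that an entity id always maps to the same one of 15 shards, while which node hosts a given shard depends on how many nodes are up:

```java
public class ShardSketch {
    static final int NUM_SHARDS = 15;  // fixed, no matter how many nodes

    // An entity id (shopping cart id, device id, ...) always hashes to
    // the same shard, so all messages for that entity land in one place.
    static int shardId(String entityId) {
        return Math.abs(entityId.hashCode() % NUM_SHARDS);
    }

    // Naive stand-in for shard allocation: which node hosts a shard
    // depends on the current node count, so losing or adding a node
    // moves some shards around (the shard actors do this rebalancing
    // for real in Akka).
    static int nodeFor(int shard, int nodeCount) {
        return shard % nodeCount;
    }

    public static void main(String[] args) {
        int shard = shardId("cart-42");
        System.out.println("shard: " + shard);
        System.out.println("node with 3 nodes up: " + nodeFor(shard, 3));
        System.out.println("node with 2 nodes up: " + nodeFor(shard, 2));
    }
}
```

The point of the indirection is the same as in the talk: entities map stably to shards, and only the much smaller shard-to-node mapping has to change when the cluster grows or shrinks.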
This is stuff that, once you get comfortable with Akka, and I'll show you a little of the mechanics in a bit, you could write yourself to do these kinds of things, like distributing work across the cluster. The other part is that all these actors are able to message each other across the cluster as well. So say we had a spike in traffic and now the spike is over. Kubernetes goes: all right, I'm going to scale back down, and I want to go to the visualization quickly, boom. Kubernetes killed those two pods, those two JVMs, really quickly, and Akka saw that this happened and reacted very quickly: the 15 shard actors are rebalanced onto the remaining three JVMs. So this combination of Kubernetes and the Akka cluster is a pretty good example of what's called a reactive system. The term reactive has gotten very popular in the last few years, primarily because of reactive programming, which is great: reactive programming is a very good style of programming that a lot of toolsets and frameworks have adopted over the last few years. But there's also the concept of reactive systems, which is pretty intuitive but also important, and historically hasn't been that easy to achieve: you want a system that's always responsive to users. And the only way a system can always be responsive to users is if it's built architecturally, from the ground up, to be able to handle failures and to be able to scale. That's what I was just showing you here, and this combination of Kubernetes and Akka is one approach for doing it. So, the project we're seeing here running is just a Java Maven project, okay? How many Java developers here? Maven? Fair enough, okay, a few. In a Java Maven project, Maven is used for building the code, all right?
And there's this file called a POM that is used to define your application. If you're familiar with Maven, one of the things you do is declare all your dependencies, all the libraries your code uses. The other thing you can do is define plugins. There are two plugins that were added to this project so that it was set up to be deployed to Kubernetes. One is a plugin used to build an über jar, a big, self-contained jar. What this plugin does is, when you build the project, it creates a single jar that contains not only your code but all your dependencies, so it's a single file that can be run independently. The other one is a plugin that builds a Docker image for this project. So basically, when you build this project, it builds the big jar and then plugs that jar into a Docker image and creates the image. And probably one of the most interesting things here, if you've worked with a Dockerfile, is that it's just describing how to run this thing: basically, we're just giving it the Java command to run our jar. Okay, and that's it. Also in this project are some YAML files that are used to declare how to run the application. So it's a type of deployment. You give it a name, a namespace, and a lot of other stuff, but this is the name of the Docker image that's going to be deployed. And then a few things about the network ports this application needs. It's talking on port 8080, which is of course for the web client; 2552, which is what Akka uses to talk to other nodes in the cluster; and 8558, which is an Akka management port. So there are three ports defined here. That's it.
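A trimmed-down sketch of what that deployment YAML looks like. The names and image tag here are placeholders, not the talk's actual file; the three container ports are the ones described above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: akka-cluster-demo        # hypothetical name
  namespace: akka-cluster-demo   # hypothetical namespace
spec:
  replicas: 3                    # start with three pods
  selector:
    matchLabels:
      app: akka-cluster-demo
  template:
    metadata:
      labels:
        app: akka-cluster-demo
    spec:
      containers:
        - name: akka-cluster-demo
          image: akka-cluster-demo:latest  # the Docker image Maven built
          ports:
            - containerPort: 8080   # HTTP, for the web client
            - containerPort: 2552   # Akka remoting, node-to-node traffic
            - containerPort: 8558   # Akka management, used for bootstrap
```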
So you give this file to Kubernetes with a simple command, like kubectl apply -f with this file, and that's the declaration to Kubernetes: hey, take this image and run it. And, by the way, I said to start up three instances of it, so run three pods. Other than that, there's just regular Java code. This is the class with the main method in it, and it does things like this: here's some Java code where we start up Akka. It's like one line of code. This actor system, you can think of it as a glorified thread pool. It's way more than that, but this is the thing you spawn actors from, the thing that handles all the networking, the thing that handles communication between actors. And then there are some examples here where some actors are started. When this JVM starts up, there's a set of actors that are started to get things rolling, and we'll see more about what they do in a minute. The one thing I did want to show you is this Akka Cluster Bootstrap, a method I defined here, which has these two lines of code. What happens is, when this JVM has started up in the Docker container and runs this jar, the main method is invoked, and it calls this Akka Cluster Bootstrap code. This is where the magic sauce happens, where the JVM starts up and announces itself to the cluster. Because when we're running three nodes, there are three JVMs running in the cluster, all talking to each other over the network. Then two more started up when I scaled it to five. When those started, they ran this Akka Cluster Bootstrap code. What it does is use a Kubernetes API to ask Kubernetes: hey, are there any other pods like me running? And Kubernetes answers: oh yeah, there are, here are three. And it basically gives back the network addresses of those other three pods.
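Those two lines of bootstrap code look roughly like this in Akka Management's Java API. This is a sketch: it assumes the akka-management and akka-management-cluster-bootstrap dependencies are on the classpath, with the Kubernetes API discovery method configured, and exact package names vary a bit between versions:

```java
import akka.actor.ActorSystem;
import akka.management.javadsl.AkkaManagement;
import akka.management.cluster.bootstrap.ClusterBootstrap;

class Bootstrap {
    static void akkaClusterBootstrap(ActorSystem actorSystem) {
        // Serves the management HTTP endpoint (port 8558 in this demo)
        AkkaManagement.get(actorSystem).start();
        // Asks Kubernetes for peer pods and joins (or forms) the cluster
        ClusterBootstrap.get(actorSystem).start();
    }
}
```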
So then this new pod, this new JVM, can use that information to announce itself to the other nodes in the cluster and say: hey, I want to join the party. They start talking to each other, and that's how these multiple JVMs form themselves into a cluster. So, not a lot of code for us to write, but as you can imagine, there's some fun stuff that happens under the covers to make this all work. Now, those entity actors, the blue circles: as I mentioned, they represent instances of things like your shopping cart, my shopping cart, somebody else's shopping cart, or an IoT device, whatever this application happens to be handling. In this little diagram, what I'm trying to show is: say you're on a phone client that's hitting this microservice, I'm on another phone client, and somebody else is on a third. Three different users. So you're entity 64, I'm entity 17, and the other is entity 76. What I'm saying is that these are IDs, like a shopping cart ID or a device ID or whatever; that's a design decision we make as developers. What happens is that a request comes in over the wire and hits an HTTP endpoint in this microservice, and that HTTP request is used to build a message that we want to send to the entity actor. Well, in your case, following the blue line, the HTTP request lands on the JVM running at the top left. However, your entity actor is running in a JVM at the bottom left, okay? The messaging that occurs here is all handled through actor messages. The HTTP request handler sends a message to a local actor on its JVM. That local actor looks and sees: oh, this is not for one of my entities, it's for an entity on a different node in the cluster. That's what that actor is responsible for figuring out.
That actor then forwards the message over to an actor on the other JVM, which forwards it to the shard actor, and finally it gets forwarded to your entity actor, okay? The code to do this is dead simple; it's like one line of code, and I'll show you some examples. The mechanics happening here are what Akka does for you. Now, you might look at this and go: oh my gosh, that's a lot of overhead, message, message, message. Well, there was only one network hop, okay? Messaging between actors in the same JVM is basically just method calls, all right? So not a lot of overhead. There's serialization and deserialization happening when a message goes over the wire, but once the message is internal to the JVM, there isn't. And Akka is actually built for very high-performance systems. It's built for systems that handle millions of actors and millions of messages per second. That's what this thing has been built to do; Akka's 10 years old, and it's been beaten up hard and toughened a lot to do these things very, very optimally. Now say, in this example, we're running with four JVMs in four Kubernetes pods, and for some reason we lose one of those JVMs because its pod stopped, either because it was shut down deliberately or because something broke. Kubernetes is able to handle that at its level, and the Akka actors are able to handle it at their level as well. The idea is that the clients, the web clients, just keep chugging away without missing a beat. If we scale up or scale down, or have some kind of failure behind the scenes, from the perspective of the clients the system is still responsive. That's what a reactive system is all about. So let's take a look at the entity actor. This is the code behind those blue circles, and it's just Java. I created a class that extends an Akka base class called AbstractLoggingActor, and it's got some instance variables.
Those are the local state of the actor, in a sense. There's one method that I have to implement, called createReceive, and this is the method you write to handle incoming messages sent to this actor. We're not writing the code for serialization, deserialization, all the mechanics of sending messages; we just write the code for handling incoming messages, and I'll show you an example of sending a message in a minute. This actor is set up so that it can handle a specific set of messages. I've got one message called command, another called query, and another that's an instance of something called a receive timeout. These command and query messages are just objects; when one of them comes into the actor, it invokes a method. That method looks at the object and goes: oh, okay, I'm going to call the command method. So here's the command method. Now, this is just my code, right? And it's a bare-bones, simple implementation; I didn't want a lot of stuff around, other than showing you some of the mechanics of handling a message. The message comes in, and I perform some of my own logic. The way I wrote this actor, if this entity variable is null, that means it's the first time the actor has received a message since it was started. Because the actor could be started, be in some state, and receive more messages after that, which means it would go down to the else branch. But I get this message, and the thing I wanted to show you is this line: this is where this actor sends a message to another actor. In this case, there's a convenience method I inherit called sender, which gives me what's called an actor reference, which is like an actor URL. And this is how you send messages to an actor.
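Pulled together, the shape of that entity actor looks roughly like this, using Akka's classic Java API. This is a sketch, not the demo's actual source: Command, Query, CommandAck, QueryResponse, and Entity are hypothetical stand-ins for the demo's own message and state types, and the receive-timeout handler simply stops the idle actor:

```java
import akka.actor.AbstractLoggingActor;
import akka.actor.ReceiveTimeout;

class EntityActor extends AbstractLoggingActor {
    private Entity entity;  // local state; null until the first message arrives

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(Command.class, this::command)
            .match(Query.class, this::query)
            .match(ReceiveTimeout.class, t -> context().stop(self()))  // idle: shut down
            .build();
    }

    private void command(Command command) {
        if (entity == null) {
            entity = new Entity(command);  // first message since this actor started
        } else {
            entity.update(command);        // already initialized; go down the else branch
        }
        // Reply asynchronously to whoever sent the command
        sender().tell(new CommandAck(command), self());
    }

    private void query(Query query) {
        sender().tell(new QueryResponse(entity), self());
    }
}
```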
You use this actor URL, which the sender method is returning, and then you do a tell. And then I build an object, which is my design as a developer, and say, all right, I've got this command ack object that I wanna send back. And then this last parameter, self, just says who the message is from. So whoever sent me this message, they sent me an asynchronous message, I received that message, I did some stuff, and then I sent a message back to them, another asynchronous message. So it's not a synchronous request-response; this is all asynchronous. Okay, so that's really the flow. So I gotta zoom ahead because we're running out of time. So the idea is that all this stuff keeps running and working regardless of the pods. If we've got one pod, we have one JVM running. If we scale up to three pods, we've got three JVMs running. If you lose a pod, that's okay. Kubernetes is handling it on its end, and Akka's handling it on its end. The movement of the shards is all handled by the shard actors themselves. The last thing I wanted to show you real quick, let me skip ahead, is this tree. This tree thing that I showed you is just something I implemented as a developer; it's not part of Akka or anything. The fun part of this problem, though, is that I've got a webpage that's sending a request to my cluster asking for the tree, which is just sent back as a JSON object. Well, the JSON object has to be composed of the entity actors that are being created all across the cluster. So if the HTTP request comes into one JVM, how does that one JVM know about all the entities in the other JVMs, right? That's the fun trick here. So what happens is that when an entity actor starts up, it sends a message to another actor to say, hey, add me to the tree. That actor gets the message, and then it forwards it to its counterpart actors on the other nodes in the cluster. So let me very quickly show you the code that does this.
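The tell-then-reply-asynchronously flow described above can be sketched like this. The Command/CommandAck types and the replyTo callback are hypothetical stand-ins for the sender/tell mechanics, just to show that the reply is another fire-and-forget message, not a return value:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch of tell-and-reply: the caller sends an asynchronous message carrying
// a way to reach it back, and the entity later replies with another
// asynchronous message. Not Akka's API -- all names here are invented.
class TellSketch {
    record Command(String payload, java.util.function.Consumer<Object> replyTo) {}
    record CommandAck(String payload) {}

    static class Entity {
        // single-threaded mailbox; daemon thread so the JVM can exit
        private final ExecutorService mailbox =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true);
                return t;
            });

        // tell: enqueue the message and return immediately (fire-and-forget)
        void tell(Command cmd) {
            mailbox.submit(() -> {
                // ... perform some business logic here ...
                // reply with another asynchronous message, not a return value
                cmd.replyTo().accept(new CommandAck(cmd.payload()));
            });
        }
    }
}
```

Note that `tell` never blocks and never returns a result; the ack shows up later on the sender's side, which is the "all asynchronous" point being made.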
So I'm back in the entity actor, and there's this method called notifyStart. In notifyStart, I just build a message, and then I do a tell. The httpServer variable is an actor reference; it's the actor URL. And I'm saying, hey, here's the start object. Deal with it. And the entity actor's done. It doesn't expect an acknowledgement back from this other actor. It's just sending a message off to this other actor, which is called the HTTP server actor. And this guy gets that message. Same thing: it's got a createReceive, and it gets that message. It looks at the message and goes, if it says start, add it to the tree. This is, again, code I wrote, but then it checks a flag in this message. If the message has a forward field in it that's true, then I want to forward this on to my counterparts. But when you forward it, you don't want the flag to be true, because you don't want it to forward infinitely; you only want to forward it once, right? So I go to this forwardAction method. And here, I'm looping through the members in the cluster. I can ask Akka for this; I'm saying, all right, what does the cluster look like right now? Oh, there's three nodes running in the cluster. Let me loop through all three of those nodes. If the node that I'm looping through right now isn't my node, and if that node is up, then I'm gonna forward the action. And then down here, I'm just building a URL and forwarding that message off to my counterpart. So that's where the yellow guy, the actor, received that message, the flag was true, and it forwarded the message to its counterparts across the cluster. So this is kind of an example of clustering where the code to do this is pretty simple, pretty powerful. So let me wrap up. There's some other things here we won't have time to go into: cluster singletons, cluster sharding, which we looked at a little bit.
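The forward-once trick, a flag that is cleared before fanning the message out to the other nodes so it never loops, can be sketched in plain Java. The Node and ActionEntity names are made up; in the real system the inner receive call would be a remote tell to the counterpart actor:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the forward-once fan-out described above: the first receiver
// re-sends the message to its counterparts with the forward flag cleared,
// so the message travels exactly one extra hop and never loops.
// All names here are invented, not part of Akka.
class FanOutSketch {
    record ActionEntity(String entityId, boolean forward) {}

    static class Node {
        final String name;
        final List<String> tree = new ArrayList<>();   // local copy of the entity tree
        List<Node> clusterMembers = new ArrayList<>(); // includes this node

        Node(String name) { this.name = name; }

        void receive(ActionEntity msg) {
            tree.add(msg.entityId());          // every node records the entity
            if (msg.forward()) {               // only the first receiver forwards
                ActionEntity once = new ActionEntity(msg.entityId(), false);
                for (Node member : clusterMembers) {
                    if (member != this) {      // skip my own node
                        member.receive(once);  // stands in for a remote tell
                    }
                }
            }
        }
    }
}
```

Because the re-sent copy carries `forward = false`, the counterparts add the entity to their trees but never fan it out again.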
If you've ever heard about or are interested in doing event sourcing, or command query responsibility segregation, CQRS, this is kind of the foundation for implementing that. You know, the whole idea is having this cluster being distributed. But the thing I really wanted to get to is that these are the GitHub projects. I'll post these slides, but the project I was going through was the top one, the Kubernetes one. The intent of these projects is that you can just clone them and build them; there's a readme. I've been trying to keep the readme self-contained so you can take it and use it. I've been using this in some workshops. The other six projects are Akka Maven projects, not Kubernetes, and they don't have the visualization, but the intent is that if you're curious about Akka and you wanna see about running stuff in clusters, which is really very cool, you can take these projects and kind of incrementally start from a very basic project up to a couple of projects that do event sourcing and CQRS; the last two projects do event sourcing and CQRS with persistence and persistence query. I've also written a little O'Reilly book that's free to download if you're interested. It's no code, lots of pictures. It kind of explains about actors if you're interested. And that's it. Thanks very much. And I've got time for questions. Yeah, okay. Thanks for the talk. You mentioned that during the bootstrap phase, it was asking Kubernetes about the state. Do you use the Fabric8 client or just the HTTP REST endpoint, or what was the method for asking Kubernetes? Sorry, could you repeat that? During the bootstrap, you mentioned that it was asking Kubernetes about the state, like how many pods are there, so that it can connect to those pods. How is it talking to the Kubernetes API? Oh, Kubernetes has its own API, and we make RESTful requests back to the Kubernetes environment.
But it's really just the Kubernetes API that we're using that handles it. There is, in Java, the Fabric8 client that's used for talking to Kubernetes. My question was if you are using that client specifically. I know that on the back end, the actual implementation of the bootstrap code is written in Scala. But if it's Scala, it's Java as well, right? So it's a Java API. Okay. Pardon me? Yeah. So it's a REST API with a Java wrapper on top of it. Thank you. So my question is, does Akka cluster provide any benefits for replicating actors? So say if a pod died and all the actors in that pod died, would Akka cluster have replication info or any resiliency info? Yeah, so with replication, say you have an instance of your shopping cart actor on a pod that died. What actually happens is that if another message gets sent to that actor, there's a mechanism built in where it says, oh, well, where it was is gone. So now we have to figure out where we wanna put it, which means where its shard is going to be relocated to. That's all being handled by the out-of-the-box Akka actors, and then your actor is restarted, okay? A new instance of the actor is brought up. Now, in order for it to recover state, the state had to be persisted to some kind of a database. So this is where event sourcing really comes in, where commands are coming into the actor, and you could do a CRUD-based approach as well, but every time a message is coming into that actor and it's performing some business logic, it's not done until the result is actually persisted somewhere. And then, when you lose that actor and the actor is restarted, it can recover its state from some persistent store. Okay. Okay. Thank you. I actually had the same question as him, but I'll ask: does the ID stay the same, or is that up to the... Yeah, the ID is the same.
So yeah, because it's, like, your shopping cart entity 64, and that ID is not going to change. In fact, it's important that it doesn't change, because the ID is used to recover the state. And again, it works really well with event sourcing, where you're not using CRUD, you're just storing events. Like, you add an item to a cart, that's an event that's stored in some kind of a database. Add another one, remove an item, add the shipping, add the billing; each one is just an event. And the aggregate of all those events is the current state of your entity. So when that entity has to be resurrected somewhere else in the cluster, it replays those events to recover its state, if you're using event sourcing. If you're using CRUD, of course, you just do your query to recover it. Okay. Thank you very much.
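The replay idea in this answer is easy to show: the current state is just a fold over the stored events in order. The event types below are invented for illustration; they are not the actual events from the talk's projects:

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of recovering entity state by replaying stored events, as described
// in the answer above: the aggregate of all events is the current state.
class CartRecovery {
    interface Event {}
    record ItemAdded(String item) implements Event {}
    record ItemRemoved(String item) implements Event {}

    // Replay the journal in order; the result is the entity's current state.
    static Set<String> replay(List<Event> journal) {
        Set<String> cart = new LinkedHashSet<>();
        for (Event e : journal) {
            if (e instanceof ItemAdded a) cart.add(a.item());
            else if (e instanceof ItemRemoved r) cart.remove(r.item());
        }
        return cart;
    }
}
```

A restarted actor with the same entity ID would load exactly this journal from the store and run this fold to get back to where it was.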