Okay, well, welcome back everyone, thanks for coming and joining. My name's James Roper, I'm from Australia. I flew in yesterday, and due to flight delays got to my hotel room at about 3am and then didn't sleep at all. I did have a two-hour nap before I came here, so hopefully that will sustain me for this presentation, but if it doesn't, please wake me up. I work for Lightbend as the architect of Kalix. You've probably heard a lot about Kalix so far; I've heard some people saying that Lightbend for some reason isn't here, but this other company called Kalix has come. I've been talking about Kalix a lot these days, but for most of my presentation today I'm not going to talk about Kalix, I'm going to talk about serverless. Obviously Kalix is a serverless product, so it's going to lead to Kalix, but we're going to spend a lot of time looking at serverless and why I believe serverless is the problem that Reactive has been looking for. To start off, we're going to go back in time and look at a brief and very incomplete history of Reactive systems, because I think it's important to understand that none of this stuff is actually new. First, the actor model. For those that aren't aware, the actor model was formalized in 1973 in a paper by Carl Hewitt and colleagues, who came up with it while trying to build AI systems. The idea is that you have this concept of an actor, which is a universal primitive for concurrent programming. Actors can run independently and fail independently of other actors. They communicate via message passing, which is what allows them to fail independently, because they're not dependent on each other running. Over the years there have been a few actor system implementations. One of the major ones was Erlang, which was created in 1986 for systems that must run non-stop, for example telephone exchanges (PBXs).
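To make the actor idea concrete, here's a minimal sketch in TypeScript. The names (`Actor`, `tell`) are illustrative, not any real framework's API: each actor owns its own state and processes messages from its mailbox one at a time.

```typescript
// A minimal sketch of the actor model: an actor owns its state and
// processes mailbox messages one at a time. Illustrative names only.

type Behavior<M> = (msg: M) => void;

class Actor<M> {
  private mailbox: M[] = [];
  private processing = false;

  constructor(private behavior: Behavior<M>) {}

  // One-way message send: the caller never blocks on the receiver,
  // which is what lets actors fail independently of each other.
  tell(msg: M): void {
    this.mailbox.push(msg);
    if (!this.processing) {
      this.processing = true;
      try {
        while (this.mailbox.length > 0) {
          this.behavior(this.mailbox.shift()!);
        }
      } finally {
        this.processing = false;
      }
    }
  }
}

// Usage: a counter actor whose state is only ever touched by the
// actor itself, in response to messages.
let count = 0;
const counter = new Actor<"inc" | "dec">((msg) => {
  count += msg === "inc" ? 1 : -1;
});
counter.tell("inc");
counter.tell("inc");
counter.tell("dec");
// count is now 1
```

Because all state changes go through the mailbox, there is no shared mutable state to coordinate on, and a crash in one actor's behavior doesn't corrupt any other actor.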
So Erlang was built on the actor model, and there's a really great demo that its developers produced back in 1986, where they show fixing a bug and redeploying a system while it was running, without interrupting an existing phone call. Moving on to more modern times: Akka was developed by Jonas Bonér, who we heard from earlier today. It's a modern actor model implementation for the JVM, inspired heavily by Erlang. So we've got these concepts of actors, of message passing, of being able to fail independently, of being able to be deployed independently. But there were also other things brewing over the years. Another, somewhat more modern one, is conflict-free replicated data types (CRDTs). The term was coined, I think, in a paper around 2010, but the ideas have definitely been around for a lot longer than that. They're designed for distributed systems, and the idea is that they're always available: reads and writes can occur concurrently regardless of network partitions, and regardless of what's running and what's not. So again we've got this theme of resilience, of things being able to run independently of failure. They are eventually consistent. No matter what changes you make on what node, in what order (you can make ten different changes on ten different nodes), as long as the nodes all gossip to each other at some point, they will all converge on the same state. And importantly, this requires no coordination. There's no point where they need to synchronize and say, okay, everybody, we all know what the state is, we're all agreed, let's commit. Nothing like that. They're able to run independently and converge independently on the same state. Another concept that has come to prominence is event-driven architectures. Now, events have been around for probably much longer than the actor model.
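Before moving on to events, that convergence-without-coordination property can be sketched with one of the simplest CRDTs, a grow-only counter (G-counter). Each node only ever increments its own slot, and merging takes the per-node maximum, so replicas converge regardless of the order in which they gossip. The code below is an illustrative sketch, not any particular library:

```typescript
// G-counter CRDT sketch: a map of nodeId -> that node's local count.
type GCounter = Map<string, number>;

// Each node increments only its own slot.
function increment(c: GCounter, nodeId: string): GCounter {
  const next = new Map(c);
  next.set(nodeId, (next.get(nodeId) ?? 0) + 1);
  return next;
}

// Merge takes the per-node maximum, which is commutative, associative,
// and idempotent, so gossip order doesn't matter.
function merge(a: GCounter, b: GCounter): GCounter {
  const out = new Map(a);
  for (const [node, n] of b) {
    out.set(node, Math.max(out.get(node) ?? 0, n));
  }
  return out;
}

// The counter's value is the sum of all the per-node counts.
function value(c: GCounter): number {
  let sum = 0;
  for (const n of c.values()) sum += n;
  return sum;
}

// Two replicas take writes concurrently during a partition...
let a: GCounter = new Map();
let b: GCounter = new Map();
a = increment(a, "node-a");
a = increment(a, "node-a");
b = increment(b, "node-b");

// ...then gossip in either order and converge on the same value.
const merged1 = merge(a, b);
const merged2 = merge(b, a);
// value(merged1) === value(merged2) === 3
```

No replica ever had to ask another for permission to accept a write; the merge function alone guarantees convergence.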
And there have definitely been systems built over the years that could be described as event-driven. But the approach became more popular through the 2010s, especially with things like Kafka and similar message brokers. In event-driven architectures, state is communicated by streams of events, in contrast to fetching or updating state synchronously as part of the request that needs it. Again, this allows components to be isolated from each other. If one component needs to update another component and that component is not running, it doesn't matter, because all the first component has to do is publish its event stream; eventually the failed component will come back up and be able to consume it. Event-driven architectures also allow components to scale independently. If one component has a huge amount of load, that's not going to push a huge amount of load onto everything it depends on, because it's consuming their event streams, not calling them synchronously. So against this background of ideas, things that are resilient to failure and that can scale independently, came the Reactive Manifesto, first published in 2013. We've talked about the Reactive Manifesto a lot already today, so I won't go too much into it, but at the top we have responsive. That's the overarching goal of what we're trying to achieve with Reactive: to be responsive to users. We achieve that responsiveness by being resilient in the face of failure, and by being elastic, staying responsive in the face of varying workload, both small and large. And to achieve those two things, to achieve the decoupling necessary to do that, we need to be message-driven. I should point out that an earlier version of the Reactive Manifesto actually said event-driven here.
A system that is event-driven is reactive, but message-driven is a bit more encompassing, because things like CRDTs, which can fail independently, aren't event-based. They just work by gossiping their state ("hey, this is my state", "hey, this is my state") and communicating that way. So after the Reactive Manifesto was published, what was it good for? Who would actually use these reactive patterns to build systems? In the early days, it was mostly web-scale companies. If you remember back in the 2010s, or the late noughties, people from Google would talk about "Google scale" as if Google's problems were at a whole level of transcendence above everyone else's, and nobody else had problems like Google. And Google did use many of these patterns to build their systems, but other companies, like Facebook and Twitter, started facing the same sorts of load that Google faced, so they had to build event-driven systems too, and, maybe without actually using the term reactive, would use these reactive principles to build their systems. There were also niche use cases: things like high-frequency trading (where did my screen just go?) and very large IoT systems. At Lightbend we had a customer, the Dutch border control, that had tens of thousands of sensors all around their border that needed to communicate and work independently of each other, and they were using Akka to do that. But then in the mid-2010s, cloud native started becoming a big thing, and at Lightbend, building the tools to support reactive systems, we were excited, because the cloud offered a great opportunity: it felt like a perfect fit for reactive.
In the cloud, the infrastructure is less reliable and out of your control; you're not actually running things yourself, so you need to make sure your system is resilient, and reactive systems help achieve this. Cloud applications also had to be elastic, because the cloud offers almost unlimited resources at your fingertips, but to make use of those resources your applications need to be elastic, and reactive systems help achieve that too. But the cloud wasn't the perfect use case for reactive, because the old patterns could sort of be made good enough. You add a few constraints, and 12-factor apps are a good example: keep your application nodes stateless, use distributed databases. These practices would sort of work until they didn't, when the database became the bottleneck or you hit caching issues, but people kind of hobbled through using the cloud by applying band-aid fixes to these issues. And of course, when everything falls over, you can always just blame someone else: AWS went down, it's not our fault. Then serverless came along, around 2017 or 2018, and at face value, just like with the cloud, it looked like the old patterns would work, because Lambdas are just stateless services, right? In fact, Lambdas were nothing like stateless services, and people found this out very quickly. You couldn't cache with them. Database access was a problem, because you'd be creating a database connection pool just to handle one request. Performance can be a problem when you link them all up together. Reliability becomes a problem, because you have no control over when these things are started and stopped. And many of the hardest problems were not solved: you still have to manage a database, and that database still becomes the bottleneck. You still have to manage failure. What if the database is down? What if this Lambda isn't functioning properly, and so on?
But at the same time, people started publishing success stories about how they used serverless effectively. And what you saw was that they talked about being event-driven: responding to events and publishing events, rather than having these big chains of Lambda calls. They talked about being able to update state independently, without having to coordinate. And you even started to hear about things that sounded a bit more like actors: durable functions and the like. So we saw this almost-reinvention of reactive that people had to do in order to use serverless successfully. And that's where, at Lightbend, we started talking about serverless 2.0. The idea with serverless 2.0 is that it's reactive from the ground up. We're not just providing abstractions that you then need to layer the reactive principles on top of; it's reactive from the bottom, and you just write your application in this constrained environment and you can be successful doing it. An important thing in serverless 2.0 is that state has to be a first-class concept, because state really is one of the hardest problems to solve in computing: coordinating around it, sharing it between components and so on. In a reactive serverless platform, you think of it as deploying entities, not services. Underneath, you're still deploying something that runs on a server somewhere; serverless doesn't mean there are no servers, it means you're not thinking in terms of servers. And in the same way, you're not thinking in terms of services that are communicating with each other; you're thinking in terms of entities that you are deploying: stateful things that have state. These entities are actors. They communicate with each other via message passing. Those messages might be events, or they might not. But they're not dependent on each other running.
The entities themselves are run, managed, persisted, cached, whatever needs to be done to run them, by the platform. You don't need to worry about where they run, how many of them are running, or how they scale; that's handled by the platform. And, as I said, they communicate via message passing. The second thing about a reactive serverless platform is that it is fundamentally event-driven. One thing that means is that entities themselves have to be sources of events. You don't think of an entity as "here I go and put some data in a database, and then I might go and publish an event somewhere". Anything that you do with an entity is an event: any time you change an entity, that's an event that something else can consume. In order to provide such a platform, we have to have multiple state models, because these reactive patterns I talked about before, actors, conflict-free replicated data types, event-driven applications, all solve different problems in distributed computing, and not every use case requires the same model. An example of a state model is event sourcing. A simple value could be a state model, a CRDT could be a state model. You'll have views that aggregate multiple entities together; there's another state model, and so on. Developers select state models based on requirements, and they're not going to use the same state model across the whole system. For each entity they have, they will choose the state model that best suits what that entity needs. If they have certain latency requirements, they might choose a state model that ensures the state is already at the node that receives the request before it handles it, something like CRDTs. Similarly, whether or not they want concurrent writes, they'll use different state models for that.
And depending on their consistency requirements, whether they want eventual consistency or strong single-writer consistency, they'll select a state model that meets those needs. An important thing here is that these state models give us constrained state patterns, and Jonas mentioned in his keynote the importance of constraint: tight constraints give the platform more power. If the platform understands your state model, it can do a lot more with it, such as scaling horizontally. It knows how it can replicate the state: does it need to replicate in one direction only, or can it replicate in two directions and then merge? What options does it have for replication? When you use a constrained state model, the platform can make those decisions and the developer can just focus on their business logic. The platform can also make and meet consistency guarantees, and it can make decisions to stay available. And there's an unrealized potential here that pushes serverless beyond what I'm talking about with serverless 2.0. For example, replicating all the way to the edge, and I don't just mean to the local region in AWS; I mean to the cell phone tower. We could share state at the edge, without having to go back to the central cloud, if we're using the right state model, one the platform knows how to manage so it can get the state to where it needs to be, when it needs to be there. We could even go all the way to devices. So this is where I come back to Kalix. Kalix is the platform that Lightbend has built that is the embodiment of serverless 2.0. It's hosted by Lightbend. We launched it in May of this year, but we've actually been working on it since 2019; it's gone through various incarnations of names and ways of distributing it.
It's gRPC-based, so we describe the entities using gRPC and declare how their state model works and so on. And it's polyglot: it currently supports Java, Scala, JavaScript, and TypeScript, but it can support any language that supports gRPC. So I'm going to give a demo now, and I'm going to run through it fairly quickly because I don't want to run out of time. What I've built here, if we list the Kalix services, you can see we have a service deployed, the asset service. I'm going to create a proxy to it with a gRPC UI front-end. Kalix supports creating an Internet-exposed ingress, but when you just want to test a service, we offer an authenticated proxy that doesn't require exposing your service to the Internet, and that's the feature I'm using now. This service tracks information about assets, and the only thing it currently tracks is location. So if I get an asset (I'll just make this bigger), let's say I want to check where the hammer is: its location is the toolshed. I can move it: I'll check it out of the toolshed and check it into the yard. And if I go back and invoke the get again, you can see it's out in the yard. In a real asset-tracking system there might be other state for assets, but it's very simple for this demo. I'm not going to go into too much of how this entity is implemented, because I'm going to show implementing a new entity, but the important things you can see here are this CheckedIn event (whenever I check something in, the entity emits an event saying the asset has been checked into a location) and the current state, tracked with this location field. I've implemented this in TypeScript, so you can see that when I get the CheckIn message, the command, I emit a CheckedIn event. I could do any validation here before I do that, and fail the command if I want to.
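The command/event split just described can be sketched in plain TypeScript. To be clear, this is an illustrative sketch of the event sourcing state model, not the actual Kalix SDK API: the command handler validates and emits an event, the event handler is a pure state transition, and current state is just a fold over the event journal.

```typescript
// Event sourcing sketch (illustrative, not the Kalix SDK).

interface CheckedIn { type: "CheckedIn"; location: string }

interface AssetState { location: string }

// Command handler: validate the command against current state, then
// emit an event. It never mutates state directly.
function checkIn(state: AssetState, location: string): CheckedIn {
  if (location === state.location) {
    throw new Error(`asset is already checked in at ${location}`);
  }
  return { type: "CheckedIn", location };
}

// Event handler: a pure state transition, also used when replaying
// the journal to rebuild state.
function onCheckedIn(state: AssetState, event: CheckedIn): AssetState {
  return { ...state, location: event.location };
}

// Moving the hammer from the toolshed to the yard:
const journal: CheckedIn[] = [];
let hammer: AssetState = { location: "toolshed" };

const event = checkIn(hammer, "yard");
journal.push(event); // the platform persists this in the real thing
hammer = onCheckedIn(hammer, event);

// Replaying the journal from the initial state gives the same answer.
const replayed = journal.reduce(onCheckedIn, { location: "toolshed" });
// hammer.location === "yard" and replayed.location === "yard"
```

Because the journal, not the current state, is the source of truth, anything else can consume those events later, which is exactly what the view in the rest of the demo relies on.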
And then in my event handler for the CheckedIn event, I simply update the entity's location. But the important thing I want to show here is this assets publisher service. What I'm doing here is declaring the input for this gRPC service. This is what we call an action in Kalix, which is not an entity; it's just some stateless code. It subscribes to the asset event-sourced entity's stream and publishes it into what we call a direct event stream, which can be consumed by other services. And over here we have the code for translating the internal CheckedIn domain event into the external CheckedIn event that other services can consume, which in this case is actually the same event, just for the simplicity of this demo. So what I'm going to do now is implement a new service, a view service. I've taken the liberty of already setting up the initial project and the initial protobuf here, and I'm going to call it assets-by-location. I'm going to tell the Kalix code generation that it's a view, and I'm going to say that it subscribes to the direct eventing from the assets service, consuming that published event stream ID called assets. Now I'm going to declare an UpdateLocation method to handle the CheckedIn events I'm going to start receiving, and this produces an AssetLocation, which is the schema for my view. That schema is pretty simple: it has the asset ID and it has the location. Then I'm going to tell Kalix that this is a view update method, that it updates the assets-by-location table, and that I'm going to need to transform the events it receives in some code. Now I'll move on to the query method. This is what actually receives the requests; it's called GetAssets. We have a ByLocationRequest, which simply passes in the location we want to get the assets for.
And it returns an AssetsAtLocation response, which has a list of asset IDs that are at that location. Coming back to the query: another Kalix annotation, and this is where we declare the query itself. It's SQL-like, so we want to select the asset ID, which comes from our asset location schema, and alias it (whoops, need to remember my shortcuts) as the asset IDs field, so all the asset IDs it selects go into that list in the response. And, yep, from assets-by-location, saying I want to get them from the table that was updated here, where location equals the location from the ByLocationRequest. That's almost all I need to do. I'm now going to let Kalix generate some code for me, if I can type on my laptop. That's generated this file here, and the first thing I'm going to do is import this AssetLocation protobuf, and then I'm just going to transform the CheckedIn event into the AssetLocation schema, taking the subject of the event (that's the entity ID, the asset ID) and the actual location from the event. And I've now finished implementing the view. Now the slow part: npm run package. I'm going to build a Docker image and publish it. If we copy the tag here that I'm building, we can push that, and then deploy the locations service with the Kalix CLI. Now, I need to keep talking for a little bit, because it's going to take a while to pull the image and create the tables needed for the view and so on. We can see it deployed there, but it's not available yet, so let's just give it some time. Hopefully this is going to work. Going back to this SQL: we don't support the full SQL language. It's quite constrained, because if we supported arbitrary SQL here, the platform wouldn't know how to manage the state.
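While that deploys, here's roughly what the view is doing behind the scenes, sketched in plain TypeScript. Again this is illustrative, not the Kalix SDK: the update side consumes CheckedIn events and maintains an assets-by-location table, and the query side is effectively "select asset IDs from that table where location matches".

```typescript
// View projection sketch (illustrative, not the Kalix SDK).

interface CheckedIn { assetId: string; location: string }

// The "table": location -> set of asset IDs currently there,
// plus a reverse index so we can move an asset between locations.
const assetsByLocation = new Map<string, Set<string>>();
const lastKnown = new Map<string, string>(); // assetId -> location

// Update side: invoked for every CheckedIn event on the stream.
function updateLocation(event: CheckedIn): void {
  const prev = lastKnown.get(event.assetId);
  if (prev !== undefined) {
    assetsByLocation.get(prev)?.delete(event.assetId);
  }
  if (!assetsByLocation.has(event.location)) {
    assetsByLocation.set(event.location, new Set());
  }
  assetsByLocation.get(event.location)!.add(event.assetId);
  lastKnown.set(event.assetId, event.location);
}

// Query side: roughly
// "SELECT asset_id FROM assets_by_location WHERE location = ?".
function getAssets(location: string): string[] {
  return [...(assetsByLocation.get(location) ?? [])];
}

// Replaying the existing event stream populates the view.
updateLocation({ assetId: "hammer", location: "toolshed" });
updateLocation({ assetId: "hammer", location: "yard" });
updateLocation({ assetId: "saw", location: "yard" });
// getAssets("yard") now contains "hammer" and "saw";
// getAssets("toolshed") is empty.
```

Note that the table is built entirely from the event stream, which is why a view deployed after the fact can populate itself with no data migration.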
It wouldn't know how to provide its caching and consistency guarantees. And... okay, it's ready now. So now I'm going to start a new proxy to the locations service, opening a gRPC UI pointing at it, on a different port to the other one. You can see here's my assets-by-location service, GetAssets, and all the assets in the yard. I'm not sure why that's not... did I enter that wrong? Oh, I typed "yard" too many times. And there you can see our asset IDs, including the hammer that we moved before. I'm running out of time, so I'm not going to show more than that. But an important thing to notice: when those assets were loaded, this location service didn't exist. Because this is a fundamentally event-driven system, I didn't have to do a data migration or anything like that when I deployed my location service to populate it with data. It just consumes the event stream that's there, and it can bring itself up to date in its own time, without having to worry about the fact that it didn't exist when those events were created. So that's a very brief demo of basically just one of Kalix's features. If you're interested, I'd strongly recommend signing up for a trial account and trying it out yourself. Thank you. We might have time for... no, we don't have any time for questions. Okay. If you want to ask me questions, I'll be outside afterwards. Thanks.