Thank you, Candace, and welcome everyone. So what's next after Kubernetes? Is that an audacious question? Do we dare ask? Kubernetes has been such a massive force on many of us and has had a massive impact on the industry as a whole. Do we dare ask? But if you're here, you're probably as curious as I am about what comes next after Kubernetes. So in this talk, I'm going to show you an example of something we've been doing at my company that is, in a sense, what comes next after Kubernetes. I'll give you a hint: it's Kubernetes-less. Take that for what it means, and we'll get back to it. Before getting into more of the details, I think it's pretty important to position things. I honestly think of where we are today, the world we've been living in for a few years, as the Kubernetes era. And I'm talking from the perspective of backend systems, typically running in the cloud or some highly virtualized environment. We're in this epoch, this era of Kubernetes, and I think we're well entrenched in it right now. Kubernetes has been around for a while and is highly adopted, as it should be. So there are three eras: pre-Kubernetes, Kubernetes, and post-Kubernetes. I'll be focusing on post-Kubernetes. Pre-Kubernetes was doing things without Kubernetes. I've been in this business a long time, and we did things that evolved at a certain pace for a long time before Kubernetes came around. But wow, when Kubernetes came, it was a fantastic thing to adopt, and many of us have adopted it. Now we're starting to build on top of that. That's where I think things are heading. And much of this comes down to abstraction layers.
So I want to use this visualization to talk a little bit about abstraction layers and the concept of abstracting things away as time goes on. There are six layers here, and I've added some labels. These are just arbitrary labels that I picked; we could label this many, many different ways. But I wanted to walk through some of the layers of infrastructure and technology that we depend on to run our systems, and the abstraction layers we cross as we go up the stack. Starting at the bottom, there's the hardware: the actual physical devices we use to do computing. When I started, and I've been around a bit, that's what we used, the physical hardware. But then along came virtualization, and it was a great abstraction layer. It was what I'll call a hard abstraction layer, meaning that when my teams were allocated virtual systems, and maybe you experienced this as well, we had no idea what physical devices those virtualized servers were running on. And we didn't really care. There was this hard abstraction layer: we lived in the virtual environment, and we were quite happy with that. Now, stepping way up to Kubernetes, that changed the game again, a big jump in abstraction. And again, it's a hard abstraction layer: we run our applications in a Kubernetes environment, and it gives us all the wonderful capabilities and features that come along with Kubernetes, vastly superior to what we had prior. It's a hard abstraction layer, and we're quite happy to be there. We understand how to use Kubernetes to run our applications, and we're good with that. Now, of course, some people have to deal with the inner workings, what's in the black box of the Kubernetes environment.
But there's a clean separation between those who use what's on top and those who maintain what's within. Stepping up another layer: containers. I know Kubernetes is really there to manage containers; that's one way to look at it. And containers preceded Kubernetes to a certain extent. But for the sake of this discussion, it's another layer of abstraction. The container nicely embraces the application code and shelters it from the outside world. It gives the application a facade: these are the libraries you're using, this is the flavor of the operating system, those types of things. So it isolates our application code. Stepping up another layer, we've got services. By services, I mean things like databases, message brokers, security, say single sign-on. There are tons and tons of different services that applications depend on in order to be completely functional. And finally, the top layer is the application layer. Now, what Kubernetes did, like I said, is provide a hard abstraction layer, where the vast majority of people using Kubernetes really don't need to deal with what's below that abstraction. Those lower layers are still there. So this is the "less" thing: it's virtual-system-less, it's hardware-less, it's server-less in a way, because the servers are abstracted away from the users, especially the development teams. And this is my background: the people who are trying to get things running within the Kubernetes environment. What's next, though, I think, is taking another big jump: abstracting away even more complexity.
Because one of the phenomena that has happened recently, especially with DevOps and so on, trimming things down and making things easier to a certain extent, is that a lot of the remaining responsibility for dealing with infrastructure, even with the great abstractions of Kubernetes and these services, fell on the development teams. We're kind of drowning in complexity, is one way to think about it. We have to figure out what database we're going to use, what message broker, how we're going to tweak this, how we're going to use Kubernetes, how we're going to do all these different things. And then, by the way, oh yeah, we've got to write the application; we've got to implement the features that the business and the users are asking for. So a lot of development team time is spent not just cranking out features that matter to your users and to the business, but also dealing with the environment the application is going to run in. This next evolutionary jump, I think, is about abstracting away a lot more of that complexity. Now, this is a big change, and I'm going to go into it in some more detail, but I wanted to bring this up because there's this great video. If you haven't seen it, I highly recommend you watch it. It's on YouTube; the title is down here at the bottom: Bret Victor, The Future of Programming. It's one of the most amazing presentations I've ever seen. It was done in 2013, so it's 10 years old, but I think it's still highly relevant. He's actually presenting in 2013, but he's talking as if he were speaking in the 70s, in 1973. He's dressed as a developer from 1973; you can see he's even got a pocket protector. So he played the role of a developer predicting what computing would be like 40 years later, in 2013.
One of the things he mentioned in his talk that I think is really relevant to this discussion is the early days of programming. First, programmers coded in binary. And they got comfortable with that; they were quite happy with it. Then somebody came along and introduced assembly language, Symbolic Optimal Assembly Program, which is what the acronym SOAP stands for. Anyway, assembly language. And there was a lot of pushback because it was new. The people who were comfortable with binary were like: why do I need this assembly language? I'm losing control. I have total control of the hardware at the binary level. So there was pushback. There's this aversion to change. And then we settled into assembly language, after some whining and crying, and it finally started to get some adoption. But it didn't take too long before yet another wave of programming abstraction came along. The one he shows here is Fortran. So Fortran comes along, and the assembly language programmers go: oh no, I'm losing control. I'm not going to use that. I can do whatever I want in assembly language. His point is that there is this aversion as each new wave of abstraction is introduced. We have to go through change, and change is something we resist and often find painful. If you're here, listening to this talk, you're probably not in that group. But I like the points Mr. Victor made, because I use them as ammunition when I'm talking to other people. Maybe you can use them too when you're trying to convince others to try something new. So before I show you what this abstraction layer is, I want to show you an example of an app that I built using it. It's an app I call Where on Earth. And the way it works is that I wrote it like this.
I wrote the front end in JavaScript, and I wrote the back end on this highly abstracted layer; I'll show you some more details in a minute. It's based on OpenStreetMap. I can zoom into a certain area of the world, and what I can do is create simulated IoT devices. As I zoom in, you can see these circular regions where devices have been created. So let me pick another part of the map; I'll just pick another area here and show you a quick example. You can zoom in on an area of the map; I'm just picking another city here. And I can create what's called a generator. I pick a location, then I pick a radius: this is the area I want to create devices in. Then I pick the number of devices I want to create, say around 1,500 or so. And then I pick a rate, say 500 per second, so it's going to go pretty quick. Then I send a request to the backend service, the backend microservices, which I'll show you in a moment. And this triggers a whole event-driven sequence of events, and you can see the devices start to appear on the screen at the rate that I asked for. So the devices are created, and then there's another process of aggregation occurring that takes a little more time, where it's computing where devices are, how many devices are in a given region on the map. You can see it starting to appear here on the map as things get computed. This aggregation is also happening through an event-driven process, and you can see it all happening in real time. This is the fun part; this was a really fun application to write. Now, the tricky part was that this aggregation has to happen at 20 levels of zoom.
Zoom level 19 is where the devices are, and zoom level zero is the whole world. So as I zoom out, you can see where there are devices across the whole world. The aggregation is happening now, as the computation goes on; you can see more and more of these aggregated regions showing up. Anyway, this is the app. I could spend a whole hour just talking about it. Now, in this case it's written in Java. The environment is polyglot, but I've done it in Java, and it's using Spring-like annotations. The API is quite simple. This is the generator microservice, as I call it, and this is the one that got the request to create a new generator with the location, the radius, the number of devices to be created, and the rate. The generator then starts emitting events, which trigger the actual process of creation. I wish I had more time to explain how it works, because it's really pretty cool. But the point I want to make is that this is running on a highly abstracted layer that is even database-less. Say this command is coming into this microservice: you can see that I'm just producing an event for it. This is just a method I wrote for this command. Given the current state of this device, which in this case is brand new, so it has no state, the create command comes in, and that triggers creating an event, which gets persisted by this return. This effects-emit-event-then-reply is basically handing back to the platform that this is running on: here's an event that I would like you to persist and then process further down the line. So there are no database connections here, no database transactions here, no serialization or deserialization for handling the incoming request or sending back the response. It's simplified. The whole idea of this kind of coding is that the code is fairly simple.
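To make that concrete, here's a minimal, self-contained sketch of the event-sourced command-handler style just described: a command comes in, the handler emits an event, and the platform persists the event and derives the new state from it. This is not the actual Kalix API; the names (CreateGenerator, GeneratorCreated, the simulated journal) and the in-memory persistence are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

public class GeneratorEntity {
    public record CreateGenerator(double lat, double lng, double radiusKm, int deviceCount, int ratePerSec) {}
    public record GeneratorCreated(double lat, double lng, double radiusKm, int deviceCount, int ratePerSec) {}
    public record GeneratorState(double lat, double lng, double radiusKm, int devicesRemaining) {}

    // Current state of the entity; null means the entity is brand new.
    private GeneratorState state;
    // Stand-in for the platform's event journal.
    private final List<Object> journal = new ArrayList<>();

    // Command handler: validates the command and emits an event. The "platform"
    // (simulated inline here) persists the event, then applies it to the state.
    public String create(CreateGenerator cmd) {
        if (state != null) return "already created";
        var event = new GeneratorCreated(cmd.lat(), cmd.lng(), cmd.radiusKm(),
                cmd.deviceCount(), cmd.ratePerSec());
        journal.add(event);        // the platform persists the event...
        state = applyEvent(event); // ...and updates the entity state from it
        return "created";
    }

    // Event handler: derives the next state purely from the event.
    private GeneratorState applyEvent(GeneratorCreated evt) {
        return new GeneratorState(evt.lat(), evt.lng(), evt.radiusKm(), evt.deviceCount());
    }

    public GeneratorState state() { return state; }
    public List<Object> journal() { return journal; }
}
```

Notice there is no database code anywhere: the command handler only decides which event to emit, which is the property the talk is pointing at.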
The only complicated code here is figuring out the actual random locations of devices, doing some latitude and longitude computations, but that's the business logic. The integration with the persistence layer, handling events and so on, is all handled by the platform, and the code is quite simple. I'll give you the link to this project; it's pretty straightforward. The other thing, and I want to move on here, is the high-level design of this application. There's the client UI written in JavaScript, with the OpenStreetMap integration in the JavaScript code. Basically, when I created a generator, I captured all that data in the user interface and sent a single request to a generator microservice, which received the request to create a new generator. That new generator gets created, and it emits quite a few events, repeatedly, triggering the creation of devices. The events coming out of the generator ultimately result in sending individual messages to each of the 1,500 or so devices that I just created. So a lot of eventing happened in the system while it was creating all these devices. And then, as devices get created, they emit events, which get handled by another microservice I call region. Regions are those rectangular areas on the map where the data is aggregated. The reason I have this loopback event is that a device tells the region it's within: hey, I'm here, I'm a new device, you need to add me to your count. That region is at a certain zoom level, and what it then needs to do is propagate a higher-level aggregated event up to its parent region.
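That parent-region propagation can be sketched as a quadtree-style roll-up: a count arriving at a deep zoom level cascades up, parent by parent, to zoom level zero. This is a minimal sketch under assumptions of mine, not the app's actual code; the tile scheme (x and y halved per level, as in slippy-map tiles) is an illustrative choice the talk doesn't specify.

```java
import java.util.HashMap;
import java.util.Map;

public class RegionRollup {
    public record Tile(int zoom, long x, long y) {
        // Standard quadtree parent: one zoom level up, coordinates halved.
        Tile parent() { return new Tile(zoom - 1, x / 2, y / 2); }
    }

    // tile -> number of devices aggregated under it
    private final Map<Tile, Long> deviceCounts = new HashMap<>();

    // A device reports to its deepest-zoom tile; the count then cascades to zoom 0.
    public void addDevice(Tile leaf) {
        for (Tile t = leaf; t.zoom() >= 0; t = t.parent()) {
            deviceCounts.merge(t, 1L, Long::sum);
            if (t.zoom() == 0) break; // the whole-world region is the top
        }
    }

    public long count(Tile t) { return deviceCounts.getOrDefault(t, 0L); }
}
```

In the real app each step is a separate event hop between region entities rather than a loop, but the arithmetic of "which parent do I notify" is the same idea.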
And that propagates from zoom level 19, where the devices are, to 18, where the first real physical region is, and it goes all the way up to zero. So there's eventing that just keeps propagating all the way up. This is the way I made this app work; you could do it differently. But the point is that this is all done through event-driven integration. The client just sent in that single request, say create a generator, and got a response back: yep, the generator is created. And then what the client is doing is just polling views; these little V's represent views. I'll get into a little more detail on what all these symbols mean. But this is basically how the app worked. It's a fun little event-driven application: highly scalable, highly distributed, runs in clusters, all those types of things. As a developer, I don't care about that; what I care about is the design and implementation of my application. Then I deploy it: I basically give my application to the platform and say, platform, run my application. That means it sets up everything I need, like the databases underlying the application; everything gets set up. That's all abstracted away from me as a developer; I'm just concerned about my code. Here's another example app; there's also a link to its GitHub repo for those who are interested. It's another event-driven app, and I'll get into a little more detail on this one as well. But before I do, I want to explain some of the pieces of this abstraction layer. It's quite simple. It starts with projects. You define projects within this abstraction layer; you can have any number of projects, and you can use them in whatever way you want.
Some projects could be just for you, for development, or for your team, for team development; another project could be for testing, or performance testing, or whatever you want to do. And some projects, of course, are for production. For each project, you can control who has access; these things are highly secure, which is of course very important for production. But you have control over what you do with your project; you manage it within this environment. Within each project there are simple deployments. They're called services, but I like to call them deployments, because that's pretty much what they are as far as I'm concerned. You can have any number of services, one or more, within a project. These services are self-contained and independent of each other; they're deployed independently, but they can also interact with each other. And then within the deployments there are three building-block components. There's what's called an entity, but I think of these more as tightly focused microservices. Then there are views, with this V shape; these are queryable views. What's going on here is that entities, like I said, are microservices responsible for handling commands, requests to do some operation, coming in. They perform some kind of state-changing operation. There are two flavors of these: one is event sourcing, where the entity emits events, and the other is more CRUD-like, like a key-value thing, where it just emits the current state change to be persisted into a state store. But I think there's much more value in and usage of the event sourcing ones. So if you're familiar with event sourcing, what typically goes along with it, or what event sourcing is typically part of, is CQRS, or command query responsibility segregation.
What that means is that entities, these microservices, are focused only on writing data; they're not focused on reading data. Now, you can retrieve individual things, like the state of an individual IoT device, or an individual shopping cart, or an order, whatever happens to be in your design. But the views are for reading. This is the segregation piece: writing happens in one place, reading happens in another, and they're separated from each other. The views are there to take the data coming from the microservices and project it into queryable views. So this is the CQRS part, and it's built into the platform. That's two of the three building-block components. The third one is a real workhorse, and it's called an action. You can think of it as a stateless, serverless function, but I think that's almost not giving it enough credit, because you can do so much with these actions; they're like the synaptic connections between event flows. I'll show you how that works in a minute. But these are the three building blocks, and I'm amazed how much I can do with them as a developer. I also like the fact that it keeps me within some guardrails as far as how I develop things, because of the way these things work. You might think, initially, here comes that aversion again: oh, I can't use this; I've done this over and over in my career. I'm now at the point where, whenever that instinct kicks in, I know I need to push it to the opposite end. Instead of just dismissing something because, oh man, I can't use this, I need all the flexibility, it's: no, no, your instincts are wrong. Go with it.
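The view side of that CQRS split can be sketched very simply: events emitted by entities get projected into a read model that clients query, and reads never touch the write side. The names here (DeviceCreated, DeviceCountView) are illustrative assumptions, not a real platform API.

```java
import java.util.HashMap;
import java.util.Map;

public class DeviceCountView {
    public record DeviceCreated(String regionId, String deviceId) {}

    // regionId -> number of devices: the queryable side of the system.
    private final Map<String, Integer> countsByRegion = new HashMap<>();

    // The platform would deliver each event here; we just update the projection.
    public void onEvent(DeviceCreated evt) {
        countsByRegion.merge(evt.regionId(), 1, Integer::sum);
    }

    // Reads are served entirely from the projection -- that's the
    // "segregation" in command query responsibility segregation.
    public int deviceCount(String regionId) {
        return countsByRegion.getOrDefault(regionId, 0);
    }
}
```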
So when I hear something new, instead of immediately dismissing it, I typically try to check it out first. And I think in this case the payoff is pretty high. As a developer advocate, I speak at conferences all over the world, and I've been doing it for some years now. One of the things I've been talking about for a long time is microservice systems. And one of the things I've been saying about microservices is that each microservice should be focused: it should do one thing and do it well. Each microservice should own its own data and not expose that data outside the microservice other than through its API. Each microservice should be loosely coupled. And now I finally feel like I've got a tool that formalizes a lot of these practices. It makes it hard for developers not to do some of the things many of us have been evangelizing to the development community. Some of the community has adopted them, of course, but the microservice environment over the last almost decade has been pretty wild and woolly, and it's been loose, I think, in many cases, how we've been doing things. We've been trying to say: keep a tight focus, own your data, stay loosely coupled, all that good stuff. And this is exactly how this system works. So in the case of this order processing demo, I want to walk through the eventing very quickly. You've got a client interacting with a shopping cart. I've got a shopping cart, you've got a shopping cart, and other users currently active on the system each have their own shopping cart. It's a unit of state being managed by the backend system.
We're sending in commands from the client, via requests to the shopping cart: add an item, change an item, remove an item. We're each building up our shopping carts. Hopefully, from the perspective of the business running this online shopping site, people hit the buy button and check out the order. When an order is checked out, that shopping cart microservice emits an event that's picked up by an action, that stateless, serverless function, which translates that event, say a shopping-cart-checked-out event, transforms it into a create-order command, and fires that command off at the order microservice. And now an order is created. The client sent in that single request and got the response back: yep, your order's checked out, we're working on it. So the client's gone at this point, but this one single checkout event is going to trigger a pretty elaborate cascading sequence of events, which is kind of cool. The order gets created, and when it does, it emits an event that's picked up by two different actions. One is order item, which is just there for queries, so it's not really important; the other is shipping order. This is my design; it doesn't have to be done this way, and I know this isn't a real system, but it was a fun demo to do. It's more than just a simple shopping cart demo, which I was getting sick and tired of doing. I wanted to do something more real, like allocating stock to an order. That's what this is working towards. The shipping order is responsible for getting stock allocated to the order, while the order tracks itself through its full life cycle. That's the difference between what order does and what shipping order does.
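The event-to-command translation that the action performs can be sketched like this: a stateless function receives an event from one microservice and turns it into a command for another. All the names here are illustrative assumptions, not a real platform API.

```java
import java.util.function.Consumer;

public class CheckoutAction {
    public record CartCheckedOut(String cartId, String customerId) {}
    public record CreateOrder(String orderId, String customerId) {}

    // Stand-in for the downstream order microservice.
    private final Consumer<CreateOrder> orderEntity;

    public CheckoutAction(Consumer<CreateOrder> orderEntity) {
        this.orderEntity = orderEntity;
    }

    // Event in, command out: the "synaptic connection" between event flows.
    public void onCartCheckedOut(CartCheckedOut evt) {
        // Derive a deterministic order id from the cart id so redelivery of the
        // same event produces the same command (which helps with idempotency).
        orderEntity.accept(new CreateOrder("order-" + evt.cartId(), evt.customerId()));
    }
}
```

The action holds no state of its own; everything it needs arrives in the event, which is why the platform can run and scale these freely.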
So an order gets created, and it emits an event that causes the creation of a shipping order. When a shipping order gets created, it emits an event that explodes out into creating multiple order SKU items, one for each item in the order. And when each of those individual order SKU items gets created, guess what: they emit events, which get picked up by stock. But in this case the action is doing something a little more interesting. Instead of just transforming an event into a command, it first runs a query against a stock view, looking for available stock. It gets the response back from the query, and if there is stock available, it creates a command to allocate a particular unit of stock to that particular order SKU item. If there isn't sufficient stock available for that SKU, the action, and I don't have an arrow showing it, would send a command back to that order SKU item saying: hey, put yourself in a back-order state, because we don't have sufficient stock right now. So there's a bit of a saga-like pattern going on here. Again, there's a lot of detail that we don't have the luxury of time to cover, but it goes through the processing. All this eventing ultimately ends up notifying the order that either all the stock has been allocated and the order is ready to ship, or that some or all of it is in a back-order state, all done through eventing. All the eventing and all the messaging is being managed by the platform. As developers, we're just writing the microservices, writing the code for the views, writing the code for the actions; that's it. Events are passed to actions by the platform.
You don't have to explicitly write code to retrieve events from the event journal of the upstream service you want; you just declare it in your code via an annotation in Java. For example, the order-to-shipping-order action just declares through an annotation: hey, my event source is order, give me events from there. That's all you have to do; it's a declarative kind of thing. So, back to Where on Earth. I wanted to show you a little more detail, and I'll go through this quickly because we're running out of time, but this is a more detailed breakdown of the processing that happens in the demo I call Where on Earth, W-O-E. The idea is that the client said: create a generator. The generator gets created and emits an event. That event gets picked up and propagated into a view, but it also gets picked up by some actions. You can see one action actually goes back to the generator, so there's a kind of loopback occurring here. This loopback is just telling the generator: all right, you generated some of the devices needed per unit of time; here's your next unit of time, generate the next quantity of devices based on the current time. So it's a loopback that keeps triggering the generator to keep generating until it's done, and then the loopback cycle stops. Meanwhile, the generator is emitting events that ultimately result in an action sending commands to individual devices. For the 1,500 devices I created, there were 1,500 commands sent out to create all those devices. That just loops around until all the devices are created.
As devices are created, they emit events that get picked up and go to a region. The region emits an event that keeps looping back, going up the different zoom levels, up the stack of zoom levels on the map, to do the aggregation of all the arriving data. There's some dampening that occurs, so that zoom level zero isn't getting 1,500 commands; it's getting far less than that. In this aggregation flow, there's a flood of events coming in at the lowest zoom levels, but it tapers down to a trickle by the time it reaches the top-level region for the entire world. And the client is just querying the views, so all the data we're looking at on the map is coming from views that were ultimately updated as a result of all this eventing. So that loops around for a bit, and we're done. Here's another app, and I'm actually working on this one right now. I've got the design worked out, and it's my second iteration on the design and implementation; I'm re-implementing it with this new design, which is a simplification of my prior one. The part I really enjoy as a developer, and as an architect or designer, is thinking through the eventing process. One of the interesting mechanical aspects you have to consider is that all of the eventing, all of the messaging that occurs here, comes with guaranteed at-least-once delivery, which is great. That means every single event that I want to trigger an action, to perform some kind of operation, to send a command downstream to another microservice, I know will happen. At some point it will happen. It might not happen immediately; it might happen very, very quickly.
It typically happens within milliseconds, but it could be seconds later if something slowed down or the network hiccupped, or it might not happen until tomorrow if there was an outage. But all these messages are guaranteed to be delivered, because this is at-least-once delivery. That's typically what you get with any kind of message broker. But that also comes with a consideration: idempotency. Idempotency means that each receiver of these incoming messages has to be able to handle getting the same message more than once. And that will give you pause as you're considering your design. I guarantee you, because I've been dealing with this heads-down for a year now myself, working through designs where you have this at-least-once delivery guarantee, which is great, but you've got to build your services to be idempotent. That's a new thing; typically we haven't had to deal with it as developers. But when you do it, it's a thing of beauty, because you end up with a robust system. Anything can break here, any part of this flow. In this case, what this application is designed for is consuming messages from a high-volume topic flowing into this transaction microservice. Thousands of messages are coming off a Kafka topic per second. The system has to consume all those messages and aggregate the data into what are called merchant payments. So there's a flood of data coming into this system every second, and it's stepping the data down, kind of like what I did with the zoom levels and regions. This microservice that I call interval handles data at time intervals: initially at sub-second intervals, and sub-second intervals feed into second intervals, seconds feed into minutes, minutes into hours, and hours into days.
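The interval step-down just described can be sketched as bucketing each transaction's timestamp at every interval size, sub-second through day, and summing per bucket. In the real system each level feeds the next through events; here it's flattened into one class for brevity, and the interval sizes and names are illustrative assumptions.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class IntervalAggregator {
    // Interval lengths in milliseconds, smallest to largest.
    private static final long[] INTERVALS = {100, 1_000, 60_000, 3_600_000, 86_400_000};
    private static final String[] NAMES = {"subsecond", "second", "minute", "hour", "day"};

    // (interval name + bucket start) -> aggregated amount
    private final Map<String, Double> totals = new LinkedHashMap<>();

    // Each transaction lands in one bucket per interval level.
    public void record(long epochMillis, double amount) {
        for (int i = 0; i < INTERVALS.length; i++) {
            long bucketStart = (epochMillis / INTERVALS[i]) * INTERVALS[i];
            totals.merge(NAMES[i] + "@" + bucketStart, amount, Double::sum);
        }
    }

    public double total(String name, long bucketStart) {
        return totals.getOrDefault(name + "@" + bucketStart, 0.0);
    }
}
```

The dampening effect falls out of the bucketing: thousands of sub-second updates collapse into a handful of minute, hour, and day updates downstream.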
The day intervals' events are being watched for in the action right before the merchants, and as the days get updated with merchant-specific aggregated data, sums of transaction activity, that feeds the payments that need to be sent out to the merchant, or payments the merchant has to make to other entities for service charges and things like that. That flow is quite low at the day end, because it's been stepped down through a series of aggregations. But this is money, right? So no data can be lost, nothing can get corrupted, and it was very important to consider things like idempotency in a system like this. What's happening then is that the merchant is emitting events as days are getting updated, and those are just going into payments, the current payment for the current payment cycle. We're just aggregating continuously, in real time. Payments have a certain time cycle: it could be hours, it could be days, it could be weeks, it could be whatever the receiver of the payments wants it to be. When the payment cycle is done, we just shut off the flow to one payment; the merchant disconnects from that payment and the flow immediately switches over to another one. This kind of system is often done in a batch way, where when you want a payment, you trigger a whole batch of activity in which you try to accumulate all the data that has come in over time and process it as quickly as possible to put out a payment. This system does it in real time. Again, it's rock solid because of the at-least-once delivery and because of being very careful to design the services to be idempotent. So this is an example of this new abstraction layer. 
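The interval step-down itself can be illustrated with a small sketch. This collapses the whole chain into one class for brevity (the level names and widths are assumptions); the design described in the talk would chain separate services, with sub-second intervals feeding seconds, seconds feeding minutes, and so on.

```python
from collections import defaultdict

# Interval widths in milliseconds, from finest to coarsest (assumed values).
WIDTHS_MS = {"subsecond": 100, "second": 1_000, "minute": 60_000,
             "hour": 3_600_000, "day": 86_400_000}

class IntervalAggregator:
    """Steps a flood of transactions down through time intervals, so the
    day-level stream feeding merchant payments is a trickle rather than
    thousands of updates per second. Illustrative sketch only."""

    def __init__(self):
        self.sums = {level: defaultdict(int) for level in WIDTHS_MS}

    def record(self, ts_ms, amount_cents):
        for level, width in WIDTHS_MS.items():
            start = ts_ms // width * width        # truncate to the bucket start
            self.sums[level][start] += amount_cents

    def bucket_count(self, level):
        return len(self.sums[level])
```

Ten thousand transactions spread over an hour land in thousands of sub-second buckets but only sixty minute buckets and a single day bucket, which is exactly the flood-to-trickle stepping the payment flow relies on.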
Again, I wish I had more time, but the main thing is that it's trying to cut out as much complexity as possible and leave us with the really important stuff, which is the design of these applications. That's a really fun intellectual challenge, and one that's great when you're not distracted by all the other technology concerns that we historically have had to deal with. I feel liberated, honestly. When I'm working on this, I tell people I don't want to go back to the old ways of doing things. I love Kubernetes, it's a great invention, but I don't want to go back and start dealing with YAML files again. I'm sick and tired of coding configurations in YAML files. In this system there's zero configuration: my configuration is defined in my code. How I wired these services together is defined in code, and that's it. I'm done. Again, you can tell I'm a big fan of this. But early on in his talk, Brett made a really interesting point: he said it's easy to adopt new technologies, but it's hard to adopt new ways of thinking, and he hammers on this. I just noticed this, but if you look at his picture, he's got that pocket protector and he's got this one pencil sticking up. I don't know if he's giving us a salute or not, because I think he's admonishing the audience: why is it that in 2013, and now in 2023, we aren't doing some of the things that he talked about? It's a really interesting talk, and that's the point I'm trying to make here: we're moving into an era where more solutions are coming out that abstract away complexity and give us these new kinds of platforms. What I've been talking about in this particular case is called Kalix; you can look it up at Kalix.io. But I hope I've also covered things in a somewhat generic way: generic about microservices, generic about event-driven systems, and about the platforms that are coming. 
It's not that we're getting rid of Kubernetes; this platform I'm talking about, Kalix, is built on top of Kubernetes, but it is Kubernetes-less from the perspective of the people using it. It's also database-less and broker-less. All those things are there, but they're abstracted away, and I think they're abstracted away quite nicely. I'm really excited about our future as more and more solutions like this come out. Thank you, and I think we're ready for some questions if there are any. It doesn't look like there are any questions right now. You've got my Twitter handle down here, mckeeh3. If you do have any questions, I'd be happy to try to handle those, especially for people watching the recording. I guess we don't have any questions. I'll turn it back over to you, Candace. Thank you so much, Hugh, and thank you everyone for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day. Actually, Hugh, it looks like we got... I'm going to hold off on sending everyone away. It looks like we might have gotten some Q&A. Oh, yeah, we did. We got a bunch. We will hang out; you can get through these, and once you're done, we can wrap up. So the first question is: do you have plans to add support for Apache Mesos? Not that I know of. Of course, this system has to be able to integrate with things that are outside of this environment, and the two primary ways to do that are either through message buses like Kafka or Google Pub/Sub, or through APIs: either external systems reaching into applications you built here via the APIs provided by the services you built on this platform, or actions within your application on this platform reaching out to external services. 
But no. As for the underlying stack, I deliberately didn't say what is inside the box, this Kalix box, other than that things like Kubernetes are there, and there is a database, and so on. It's pretty technology agnostic. The next question is: do you think Kalix concepts are universally agreed upon? Excellent question, thank you. I think yes, and I kind of touched on that: especially in building event-driven microservice systems, the concepts are grounded in those ideas. Like I said, with microservices we've been trying to evangelize the concepts that services should be tight, focused, own their data, and be loosely coupled. One of the best ways to loosely couple microservices is to do eventing, because one service emits events and it's fire and forget. The producer of messages does not care who the consumers are; the consumers of messages do not care who the producers are. That's loosely coupled. So the concepts, I think, are heavily grounded. And one more thing, real quickly: people that I talk to all over the world are going through a kind of generational shift, people that have built microservice systems that weren't following some of these principles, and now event-driven, for example, is becoming much more popular. Let's see, that looks like it for the questions. Thank you, I'm glad some people enjoyed it. Give it a try; there's a free trial, and you can do it on an individual basis. I've got my examples that I've been trying to push out there, so if you're interested in this kind of stuff, try it for free. It's rocked my world as a developer, I'll say that. Oh, here are some more questions. This is a good question, and we've got a few more minutes. How is this different from Dapr? Excellent question. Dapr is a distributed application runtime; I think that's what it stands for. It's for microservices. It's a really cool solution. 
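The fire-and-forget decoupling described in that answer can be shown with a toy sketch. This is not a real broker (and offers none of the at-least-once guarantees discussed earlier); it just shows the shape of the idea: producers emit, consumers subscribe, and neither side names the other.

```python
class Topic:
    """A toy in-process topic illustrating loose coupling via eventing."""

    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        """Consumers register themselves; producers never see this list."""
        self._handlers.append(handler)

    def emit(self, event):
        """Producers fire and forget; they don't know who is listening."""
        for handler in self._handlers:
            handler(event)

received = []
topic = Topic()
topic.subscribe(received.append)                       # consumer side
topic.emit({"device": "d1", "status": "created"})      # producer side
```

Swapping a consumer in or out touches only the `subscribe` calls, never the producer, which is what makes eventing such an effective decoupling mechanism.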
It's definitely worth taking a look at, because I consider it to be similar in some respects to Kalix. The big difference is that Dapr abstracts things away at the code level, but it allows you, at the level below that code abstraction layer, to configure in specific services. So you are involved in configuring those services, and you, or somebody, are also responsible for running all of them. With Kalix, there's that hard layer. Kalix is a platform as a service: we provide the platform, we manage it for you, we run it. There's nothing to configure; the services within Kalix are defined. Dapr gives you the flexibility to do things the way you want to do them, but I think the two are similar in spirit in a lot of ways. I think we've got a little bit more time. Are there any plans for a community-based version? It is a platform as a service. It's kind of like Amazon serverless on steroids; if you look at some of the other talks and materials we have, we've talked about how we're similar in spirit to serverless, but beyond that in a way, because we're not just abstracting away the functionality, we're abstracting away the data layers and the messaging layers and things like that. But in both cases, you're running this on a platform that you pay for. It does run in a dedicated environment; you can have a dedicated environment running in Amazon or Google right now, with Microsoft coming later this year. But no plans for a community version. And then, let's see, we've got one more. Yeah, somebody wanted contact information; like I said, you can initially contact me on Twitter. Great. And I think we're out of time, but let me finish this one. I'm trying to understand this last question. Okay. 
The last one I wanted to cover is about how much it changes development time. It does, but it's not an increase, it's a decrease. I'm speaking from my experience as a developer writing code on the system, and I've been writing code for decades on many different platforms, mostly Java; that's my background, backend enterprise systems. It decreases the time I spend on development because I'm eliminating the time I used to spend on non-development activities, like configuring services, and like learning how to do new things like Kubernetes and Kafka and all these other technologies. I've done that. When you're new to those, of course, you have to invest the time to learn them, and then invest the time to typically have some involvement in setting them up; somebody somewhere has to do it, and often it falls onto the development team. The thing I like about this is that now, like I was saying earlier, I'm totally focused on design and implementation. It's about really reducing the friction from the initial thoughts of design and the initial lines of code all the way into production, making that entire process much more frictionless. That's the way I view this. And it's all focused on features that matter to the business, features that matter to your users, not all of the other tasks that we typically have to deal with in these highly complex environments where we run our backend applications. With that, I'll turn it back over to you, Candace. I think we've used up enough time for everybody here. Awesome. Thank you so much for your time today, and thank you, everyone, for joining us. A reminder that this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars, and have a wonderful day.