Thank you for joining us for another OpenShift Commons briefing. My name is Michael Waite from Red Hat, and we are bringing you our special Veterans Day edition of the Operator Hours show. Today we have Mark Brewer and Hugh McKee from Lightbend, and Mark and Hugh are going to be talking about fueling digital transformation with cloud-native applications. Mark and Hugh, why don't you give us a little introduction and tell us who you are? Hi everyone, my name is Hugh McKee and I'm a developer advocate for Lightbend on the Akka platform. So my role is pretty enviable: I've been a developer for many decades, but now I get to talk to people about Akka technology, about Lightbend technology. I speak at conferences around the world, do lots of videos, do lots of events like this. And I also get to write a lot of cool demos; my favorite thing is adding some eye candy. What I'm going to be showing you later on in this session is a couple of demos of Akka running in OpenShift environments, in Kubernetes, with some fun eye candy pieces to it. And it looks like you have a bunch of eye candy on the wall in the background there. How does that work? This is my battle station, they call it, you know, some of the stuff I've been working on. We'll be seeing some of it in a little bit. Excellent, we're looking forward to that. Mark, over to you. How are you today? Good. Like Michael said, my name is Mark Brewer, I'm the CEO of Lightbend. I've been in this open source industry for quite a while, actually 20 years now, I think 21 years. My first company in the open source space was Covalent, and Covalent was the company behind the Apache web server and the Tomcat servlet container. I actually had an opportunity to work with Red Hat back then as well. From there I went to SpringSource, so I was part of the Spring team that took the Spring Framework and the Tomcat product and brought it to market, and then of course that company got acquired by VMware back in 2009.
And then a few years later it became a spin-out of VMware, which was Pivotal. Anyway, I stayed with SpringSource, I'm sorry, I stayed with VMware until I came here in 2012. Looking forward to talking about my time. So you've been at Lightbend since 2012? Yep. Okay. The company was founded in 2011; I joined about a year later. Okay. And Lightbend, bending of light, is there a symbolism in the company name, or did you decide to just pick a name that was easy to pronounce and people wouldn't misspell, or how did they come up with the name Lightbend? A lot of our partners have insect names and some of them have farm animal names, so why Lightbend? Yeah, well actually we weren't Lightbend when we were founded. We were Typesafe. And Typesafe, the name refers to type safety, which is one of the things that Scala, the language that we're the vendor behind, provides: type safety. And so the name had a lot of meaning initially, when the company was all about Scala, and early on with the product we focused on Scala developers only. Just as a side note, today, and actually for the last seven years, Java has been the primary language that our customers utilize. But anyway, the company was Typesafe for its first five years of existence. We changed the name in 2016, and went through a long exercise of trying to come up with a trademarkable name, a name that wasn't restrictive like Typesafe was. Typesafe was all about Scala, and the company provides the full platform; as I said, it's targeted at the Java audience more than it is even the Scala audience. Regardless of that, we also struggled with people spelling the name wrong. Typesafe frequently became Typeface, and we never were a font company and never intended to be a font company. Well, Mark, I've been here for 19 years at Red Hat, and I completely get it about the spelling. I mean, Red Hat's two words. It's capital R, capital H, and people make it one word with a lowercase H. So I completely get it.
Matter of fact, we actually went through a huge logo rebranding exercise about two years ago. It was probably an 18-month project to make the logo and the brand a little bit more consistent with modern day. And so I completely get how important the right branding is, and picking a name that people don't misspell. Yeah. Well, we accomplished that with Lightbend. The other thing that we were trying to accomplish was to find a name that was not only easy to remember and easy to spell, but didn't limit us; it wasn't tied to just one piece of technology like Typesafe was. So I'm pleased with the name, but it doesn't have any other specific meaning. Obviously light bending is a real thing. It's difficult to do; it takes a pretty big mass to actually bend light, but it can happen. Mostly it's just a cool name and, like I said, isn't hard to spell. Okay. So that's how you got in the business, by sort of transforming the company and so forth. What about your relationship with Red Hat? We've had a longstanding relationship with Red Hat, and even prior to that with IBM. We've worked with IBM on a number of fronts; in fact, they're actually an investor in our company. So our relationship with Red Hat extends through a number of areas, specifically around OpenShift. You can find our technology on the OpenShift Commons, and find us on the Marketplace as well, the Red Hat Marketplace. We just last year, I guess it's this year, not last year, sorry, won Red Hat's North American Partner Award. And we've certainly found many customers that are using OpenShift with our technology. In fact, I'm going to be talking about that here a little bit later. But we have quite a few customers who have been leveraging the OpenShift technology along with Lightbend's platform. So I'd like to put in a gratuitous plug for Lightbend.
You folks have been a member in good standing of the OpenShift Commons community for quite some time. It's not trivial. You mentioned that you folks have an application that works with OpenShift and you're in the Red Hat Marketplace operated by IBM. I just wanted to make sure that people understood that that's not a trivial statement. That's not a logo swap, putting it on a website and saying I'm affiliated with some program. You folks have been working with our technical teams for quite some time to build and test your containers, and most recently, your Red Hat operator. So you folks have a Red Hat certified operator for OpenShift, and that enables companies to be listed with a commercial offering in the Red Hat Marketplace operated by IBM. So I just wanted to point out that at the end of the day, the story for customers is: people who want to run Lightbend on any of the Red Hat platforms, including OpenShift, can know that supportability is assured because of the engineering integration that we've done together between our companies. So I thought I'd throw that out there. Yeah, I appreciate that, and we appreciate the partnership. Well, it's all about doing it for the end customer. Nobody likes surprises when they try to run something in a production environment, especially now when everything's multi-cloud and distributed everywhere. So it's all about enhancing the customer's value. That's my team's charter anyways. Okay, so talk a little bit more about Lightbend, like what it is, what it does. I know Hugh will be bringing us Akka. You know, is it one product, or are there multiple products, how does it work? Yeah, before we jump into the product, I'll give you a quick history. So the company, as I mentioned, was started in 2011.
The objective of the founders, that's Martin Odersky and Jonas Bonér, was to help companies build distributed systems and bring that to a broader audience, a broader audience than previously could have built these kinds of complex systems. Thinking back to 2011, this predates cloud native as a term, or even something we'd thought about. But that's essentially what they were envisioning: that people are going to build systems that need to run in the cloud, an environment that's obviously distributed and not controlled by the developers or even the ops folks that might run those applications. So let's shift gears and talk about our technology and how it's being used. The use cases that are highlighted here, we'll talk about some real customers, and they'll all be ones that are running on OpenShift. So, real-time financial processes: obviously this can be not only time sensitive, but it also has to be really performant in that it can handle massive volumes of data and process it in near real time. Hyper-personalization: being able to actually deliver contextually aware personalization in near real time. Real-time analytics: similar to the financial processes, being able to process a lot of data and come up with answers in a very short period of time. IoT: interesting use cases in Internet of Things type projects, everything from Tesla with their Powerwall and virtual power plant to companies that are just tracking devices, trying to collect all the information that comes off of a device and predict whether there might be a failure or something needs to be replaced or repaired. Simply application modernization: I'll talk about one customer where it was all about that. They had a very old legacy system that was getting harder and harder to maintain, and they were just looking for a way to bring it to newer technology but make it easier for developers to continue to maintain that application.
And then finally, e-commerce: we see a lot of new e-commerce platforms built using the Lightbend technology. Let's talk about some customers. Actually, let's first talk about the platform. So you asked me about the product. We sell the Akka Platform and Akka Data Pipelines. Essentially, it's one product, the Akka Platform; there are just two personalities, if you will, or personas. One is focused on delivering what you need to build reactive microservices and get them into production. That's the Akka Platform at its core. And then Akka Data Pipelines brings in streaming technologies, both our own streaming technology, Akka Streams, as well as some third-party ones, Spark and Flink and others that are widely adopted in the market. With the Akka Data Pipelines product, or persona, you can build streamlets that become core to, or integrated into, the application that you're creating. And like I said, we'll talk about a couple of use cases there. All of this, of course, runs natively on OpenShift and takes advantage of features and functionality that OpenShift and Kubernetes provide. So what does it deliver? Well, it delivers performance; obviously, with the use cases I highlighted, that should be a foregone conclusion. Reliability, something that you don't find in a lot of platforms. And more importantly, you don't find developers thinking about it as part of their core construct, that being embracing failure: that something might crash, that something might go down, and you need it to be able to heal itself so that users never experience or are exposed to the fact that something has failed. Scalability: one of the things we're really proud of is that Lightbend technology, and specifically Akka, is used to run some of the most highly performant and biggest web applications out there in the market, whether that's Spotify, Shopify, parts of Lyft, Twitter, LinkedIn, all built on top of the Akka platform. So we use these every day.
We don't think about them as being something that ever fails or has issues, but you have to also recognize the amount of processing, the number of users that are accessing these services in real time, and the system just runs, just performs. Next, efficiency. Efficiency is obviously important. You want to find ways to utilize your cloud infrastructure, whether it's running on Kubernetes or not; you want to make sure that it's only utilizing what it needs. And in a Kubernetes, OpenShift world, it's highly efficient. You'll run these things in a very small footprint, with scale when you need it or availability when you need it. Real time: I talked about this already, but streaming of data and processing of data in near real time, where you can actually make business-critical decisions, whether that be an AI or machine learning based system or a personalized customer experience, and deliver that in real time or near real time. Let's talk about some customer cases. First one, Brighthouse. Brighthouse, by the way, is a spin-out; they became a public company a couple of years ago, spun out of one of the large insurance companies, MetLife, there we go, I knew I'd get it. And the problem they were trying to solve was to reduce, not just risk, but the amount of time to process massive amounts of data to come up with answers on risk, whether an insurance policy was a high risk or a lower risk. They were able to accomplish that and reduce it from 70 minutes or more down to less than 10 seconds. Now this doesn't just mean that they can get an answer faster; it also means that they can run these models much more frequently, so they can evaluate risk on a much more rapid and frequent basis. Next, USAA, another Red Hat OpenShift customer, was looking to improve the time it took to send out messages through all their different channels.
If you don't know, USAA is an insurance provider for military families, and I think they have 10 million members or more. One of the things that they were struggling with was being able to interact with their member base in a way in which members could get information when it was relevant to them. So if they got in a car accident and wanted to place a claim or let their insurance agent know, they could take a picture of the accident and send it directly back to the people that would authorize a claim. Well, that used to take days; now you can do it in real time. They replaced their entire core communication system, which allows them to communicate with their members via email, via SMS, and actually via phone and the like. So this application started being developed in 2017 and went live in 2018. Their development team was able to get this into production in less than six months. That was something that they didn't expect; they thought this was going to be a much longer project. I mentioned this earlier: personalization, hyper-personalization in the cruising industry. When cruising happens again, obviously cruise ships haven't been going out anywhere for the last nine months. It's going to take a while for sure. Here it is: Norwegian. And we're actually working with a number of cruise line companies that use our technology, all for the same type of application, where it's about personalizing the experience of somebody who goes on a cruise. And that starts from the time you book the cruise, whether you do it on the web or over the phone or via their app. They actually have an app where you can book your cruise. But more importantly, it's the experience once you're on the cruise: booking your reservations, whether it's for dinner or for an excursion. If you're going to go out on a snorkeling trip and you realize the weather is bad, you need to change that excursion to something else.
Well, this personalized experience happens on the ship for everybody. In the past, before they had this, you used to have to book all of your reservations in advance, and spend days trying to get on the phone to change something if the weather looked like it was going to be bad. Now it all happens in real time via the application. And like I said, this is something that we've seen with a number of the cruise lines. Norwegian has been a customer of ours and a customer of Red Hat's for a number of years. Mark, how does that work? I'm not quite sure I get it. I get it that you don't have to book everything in advance, but how specifically does Lightbend and your product, Akka, actually improve the user experience of people on a cruise ship? I'm not picking it up. So they built an app that you experience via the phone, or a tablet if you've got an iPad or something. And that application allows you to keep track of not only all the things you've booked but anything else that's available that you might want to try out, all the way down to your dinner reservations. You can even find out if there's a long line at a particular restaurant on the ship, so you don't go there for dinner or lunch because the line's too long. So it's all via an application. And you can also set up alerts, so for your dinner reservations, if you want to be notified that there's going to be a long line, it'll alert you so you don't even bother going to that particular restaurant. Does that make sense? Okay, I know that Hugh is going to be giving us a demo later on as well, so based on all the eye candy that he has prepared, I think it will be something pretty interesting. Yep, I've got a couple more use cases I thought I would share, and then we'll turn it over to Hugh and have him give the demo of the product. Rogers Communications, they're a Canadian telecommunications company.
This was all about reducing costs but also delivering a much better experience on their e-commerce site. If you go to Rogers.com, that site and the app where you can buy a phone, order services, change your cell phone coverage, add a new family member, or change something else on your plan, all of that's done via an app that was built using the Lightbend technology. The most important thing they were looking for was to reduce their infrastructure footprint while making sure the system stayed up all the time, and obviously that's a requirement for any e-commerce site, anything that you want to run your business through. They were able to accomplish this by using Lightbend technology, using Akka and our frameworks. The reduced footprint has saved them nearly 40% per year on their bill for infrastructure. So the footprint that they were running their old system on, versus what they're running it on now, has shrunk by 40%. And then lastly, I think I have one more: ING. ING, if you're not familiar, is a large international bank based out of the Netherlands. They have offices all over the world. They were looking to replace a very old COBOL-based system, and some of us have been around long enough to remember programming in COBOL. It's hard to find COBOL programmers; it's hard to maintain COBOL-based systems. So this is a SWIFT payment system. Anybody who's been in the banking industry knows what this is and has experience with SWIFT payment systems. They decided they needed to replace it, and replace it with something that not only was more modern and could be maintained, but would allow them to add functionality in a more rapid fashion. The old COBOL-based system, I'm sure, hadn't changed much over the years. They're not done with this; they're in process. This is also an OpenShift customer. Their plan is to roll this out at the beginning of next year.
We'll see how many COBOL programmers are left in the market. There's probably a few. There's still a few; there's still people who are writing applications for OpenVMS. I know. Hard to believe. You see the beautiful OpenShift console here, right? Wonderful. All right. So I'm going to flip over to an app. I'm going to show you two demo apps, both running Akka, both running in a Kubernetes environment. One's running on my laptop using OpenShift CodeReady Containers. It's a developer tool; for me as a developer, I want some kind of a Kubernetes environment. This first app, though, is actually running in an Amazon environment, and behind the scenes there are two Akka microservices, and they're both running a number of pods in a cluster. For the second demo app, I'll show you the anatomy of the thing, but here I just wanted to show you a little bit more realistic type of an application. So the user interface of this demo app is a map, like a Google map. The concept is that this is simulating an IoT type of application. The markers that are distributed around the world on the map are showing locations of simulated IoT devices that I've created using this demo app. And the real goal of this app is to create some load. We're trying to push the parameters of how hard can we push Akka, how hard can we push an OpenShift Kubernetes environment, and probably the biggest one, how hard can we push the backend databases that this thing interacts with. So just to show you, you navigate around this thing just like you do with Google Maps. I'm going to zoom into the area around London, and as I get closer, you start to see these highlighted regions. What's happening here is this is showing where these hypothetical IoT devices have been placed. Each IoT device is in a certain region on the map, bounded by a rectangular area that falls within a top-left, bottom-right longitude and latitude.
But if you... How does Akka know that? Are the IoT devices phoning home to a central... Yes, exactly. Yeah, great question. So what this demo app is doing is that one service, I'm using that to simulate the outside world, like IoT devices that are sending telemetry messages in to another service. So this is what's happening here. The second service is receiving these telemetry messages saying things like, hey, create a device at this location on the map; is the device happy, is the device sad, kind of a state change; or delete a device. So this is a demo app. It's intended for developers to take and implement or get running on their own, say in CodeReady Containers, play around with it and learn about Akka, but with a more realistic example. And the JavaScript and everything that I'm using to make this map work and so on is something I wrote that's included with the demo app, so a developer can take all this and make it work. But on the backend, Akka is the actor model on the JVM. And actors are kind of glorified objects. If you're a developer of any kind, or if you've heard about software development, commonly you hear about object-oriented programming; well, actors are very much like objects. They're written in the same way that objects are written, but they have one unique characteristic: the only way you interact with an actor is you send it a message, an asynchronous message, versus how you would interact with, say, a normal object written in Python or Java or something like that. So in this system, as you can see as I'm mousing around here, moving the mouse, it's pulling data from the back end. There are 1,024 devices in this region that I'm over right now, so on the backend of the system, there are 1,024 actors that are alive and know the state of each individual device.
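That asynchronous-message-only characteristic can be sketched with plain Java threads and a queue. This is a conceptual illustration only, using hypothetical names; it is not the Akka API, and real Akka actors are far lighter-weight than a thread apiece:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Conceptual sketch of an actor (NOT the Akka API): the actor owns its
// state, and the only way to interact with it is to send it an
// asynchronous message. Messages queue up in a mailbox and are processed
// one at a time, so the state never needs locks.
class DigitalTwin implements Runnable {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private String state = "created";   // owned exclusively by the actor's thread

    // Fire-and-forget: callers never touch the state directly.
    void tell(String message) { mailbox.offer(message); }

    @Override
    public void run() {
        try {
            String msg;
            while (!(msg = mailbox.take()).equals("stop")) {
                state = msg;    // telemetry state change: "happy", "sad", ...
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Safe to read once the actor's thread has been joined.
    String lastState() { return state; }
}

public class ActorSketch {
    public static void main(String[] args) throws Exception {
        DigitalTwin twin = new DigitalTwin();
        Thread actorThread = new Thread(twin);
        actorThread.start();
        twin.tell("happy");     // simulated telemetry messages
        twin.tell("sad");
        twin.tell("stop");
        actorThread.join();
        System.out.println(twin.lastState());   // prints "sad"
    }
}
```

Because the mailbox serializes message processing, thousands of callers can `tell` the same twin concurrently and its state stays consistent without any locking in the business logic.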
Now, on this map, on the bottom right, it's showing that there are about 223,000 devices that have been created. What that means is there are 223,000 live actors, kind of hot in memory, that know the state of every single device on the map worldwide. And in this view here, I'm looking at around 26,000 devices. The concept, and we got this from the folks at Tesla, who are doing some really cool stuff with Akka and batteries, is what they coined the term digital twin for. For every physical device out in the real world, like a battery or a smoke detector or a streetlight or whatever your IoT system is dealing with, on the back end there's a digital twin, which is an actor, which is responsible for echoing the state of that physical device out in the real world, one for one. And that's what the system is doing. Where do the operators that you folks built for OpenShift, where do those operators live? If you think of an operator as like an AI, an artificially intelligent daemon, if you will, that allows apps to self-heal and helps with configuration management and so forth, where do the operators fit into this whole scheme here? With Akka, the main thing we use the operators for is deployment of the applications to the environment. So one of the things that I can do with this app, like I said, it's kind of a test bed for demonstrating the technology, and there's a lot of source code for developers to look at and things like that, but what's fun to play with interactively here is that I can do things at scale. So for example, I'm going to create a bunch of devices, and I can do that through this UI, and it's not like one device at a time. I've backed away from the surface of the earth a little bit; I'm at a level above the earth where this region, just by design of the application, can hold around 4,000 devices.
So I can pick a region, and if I click it, that sends a request to the backend system, and it goes through some gyrations, but then what we see is those individual devices get created. So what happened was that the first service got this request, a single request. It cascaded that into 4,000 gRPC messages over the network to the second service. That second service got those 4,000 requests as if 4,000 devices suddenly came online over the course of about four seconds, did a bunch of database work, did a bunch of actor stuff, wrote a bunch of information to the database, and we saw it happen in real time. So there's a lot going on behind the scenes that made all of that work, but it runs pretty fast. You know, that was 4,000 pretty quickly. I'll do it once more just for fun and create another 4,000. I can zoom in to watch them happen at a little more granular scale. So you can see the devices start to show up, and it's done. There are now another 4,000 devices. I could zoom out more and generate more traffic, but I want to go to the second demo. All right, so this is another demo that's running in OpenShift CodeReady Containers on my laptop. The first one was running in a real cloud environment, in Amazon, in Kubernetes. But this one, structurally, behind the scenes, is doing exactly what that other application was doing, and it's a way to show a cluster, an Akka cluster, in action. This is showing live things happening within this little demo application. The scale is different: on the map I was dealing with hundreds of thousands of things; here I'm dealing with hundreds of things. It's scaled way down because I can't render hundreds of thousands of things like this. Can you help me with something? So Akka is sort of the core brain, if you will, right?
And all the IoT devices that are out there, whatever they might be, there needs to be some kind of a relationship set up between the companies, the people, and the apps that are running on those IoT devices to get them to talk to the Akka infrastructure, correct? Correct. So how does that work? Does your company basically sell your services to all these IoT vendors for them to be able to have all their information phone home into your systems? No, it's more that we're giving them the tools to build those kinds of systems. We're not giving them an out-of-the-box IoT system; we're giving them the core tools to do that. So building things like authenticating new customers, and allowing customers to set up devices that would phone home to the back-end system, those types of things have to be implemented in the application; that's not part of what we provide. We provide the tools to build those kinds of applications. Okay. So in the case that Mark Brewer was showing there, with, I think it was the Norwegian Cruise Line customer story, this infrastructure that we're looking at right here would be the infrastructure of the Norwegian Cruise Line organization, and all the little blue dots, all the IoT devices, would be all of their stuff that they use to run their business. Yes. So in the demo app, the map app, each one of these little blue circles represents an individual physical device. If this was a shopping cart app, if somebody used Akka to build the classic shopping cart app, every one of these blue circles would be somebody actively interacting with a shopping cart: your shopping cart, my shopping cart, somebody else's shopping cart. These little blue circles represent real actors running in the system. In this demo, again, I've got a hundred or so of these little blue circles running right now.
In the map app, I had 225,000 of them. Right. So the idea, though, is that we're making it easy for developers to write the business logic for handling the manipulations of the devices, or the shopping cart, or whatever the application has to do, in a very distributed environment. On the perimeter of this circle, I'm showing a bunch, a hundred or so; the count is down at the bottom here, this entity count, it says 102. It'll change because these things are coming and going, because the system's actually percolating along here. The next level up is another kind of actor: shards, they're called shards. With databases, for example, sharding is a way to delegate work out to multiple places, and that's exactly what the shards are doing here. We're using shards to distribute work across the cluster. Pardon me. A core function of Spanner, right? Sharding was how they grew the Spanner database? Yes. And that's exactly what's going on here. In this case, the number of shards is fixed; here there are 15, and in the map app there are about a thousand shards, enough to scale to hundreds of thousands of actors. But the shards' work is really just to distribute work across the cluster, because the big circles here represent pods running in Kubernetes, in my OpenShift environment on my laptop, or in a real Kubernetes environment running somewhere in the cloud, or in an on-premise OpenShift environment. So right now I'm running a cluster of three pods. Those pods, of course, contain a container, and those containers are running a Java virtual machine. Those Java virtual machines use Akka, and the Akka code allows things to collaborate with each other. But it also gives us other things; I want to show you resiliency and scale.
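The two-level addressing described here, entities hashed to a fixed set of shards, shards placed on whichever pods are currently up, can be sketched roughly as follows. The names and the modulo placement are illustrative only, not the real Akka Cluster Sharding implementation:

```java
import java.util.List;

// Sketch of two-level sharding (illustrative, not the Akka API): an entity
// id hashes to one of a fixed number of shards, and each shard is placed
// on one of the running pods. The fixed shard count decouples the number
// of entities (hundreds of thousands) from the number of pods (a handful).
public class ShardingSketch {
    static final int NUM_SHARDS = 15;   // fixed, as in the demo cluster

    // Level 1: entity -> shard, deterministic for any cluster member.
    static int shardFor(String entityId) {
        return Math.floorMod(entityId.hashCode(), NUM_SHARDS);
    }

    // Level 2: shard -> pod, naive round-robin over the live pods.
    static String podFor(int shard, List<String> pods) {
        return pods.get(shard % pods.size());
    }

    public static void main(String[] args) {
        List<String> pods = List.of("pod-17", "pod-18", "pod-19");
        int shard = shardFor("device-42");
        System.out.println("device-42 -> shard " + shard
                + " -> " + podFor(shard, pods));
    }
}
```

Because every member computes the same entity-to-shard mapping, any pod can route a message for "device-42" to the right place without a central lookup per message; only the shard-to-pod placement has to be coordinated.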
So I'm going to tag one of these entities by clicking it, and it should turn red. All right, so I've tagged it. Now I'm going to force this pod, pod 19, to stop, just by clicking up here. It should shut down in a moment, and what we should see is that this entity that I've tagged, as well as a shard, will jump to one of these other pods. There it goes. Beautiful. So right away, this is where it recovers. For me as a developer writing the code that handles the entity logic, I had nothing to do with the redistribution of my code around the cluster; that was all handled by Akka. One moment the instance for your shopping cart was running on one pod, and the next moment it's running on another pod. It recovers itself. It just brushes failure off. It's like, yep, expected that. We know pods come and go; that's just a fact of life. And it just deals with it. In the meantime, the beauty of running in a Kubernetes environment is that for a moment we were down to two pods, but I told Kubernetes I want three pods in this environment. So Kubernetes saw that there was a pod down, and it started one up, and it came back. So Kubernetes is like the perfect environment for Akka clusters. Akka Cluster has been around for a while, and it predates Kubernetes. I used Akka clusters when I worked in IT at HP, and we just had virtual machines; we really, really wanted something like this, a beautiful orchestration environment. Akka was just waiting for something like Kubernetes. So another thing I want to show you is I'm going to scale up the load a little bit. I can do that by scaling up my load generator, which is just some pods. So what you should see is, on the top right here, these are nodes, or pods, running in another cluster, and all they're doing is generating traffic that's flowing into this service.
It's generating HTTP requests that are flowing in, and I'll show you. So these four more spun up, and now the density of the entities is increasing. So I'm kind of trying to simulate increasing the load on the system. Now the stress on each of the three pods running in this cluster has gone up. Of course, this is at a very small scale, but imagine it at a much higher scale. We went from around 130 or so entities, I think, and now we're up around 180. Of course, with Kubernetes you can set that up to auto-scale, but you can also manually scale, and that's what I'm going to do. I'm going to go back over to the OpenShift console and scale up my Akka cluster here. What we should see is two more pods show up, which contain two more containers, which are running two more JVMs. Those containers will spin up, those JVMs will light up Akka, and they'll reach out and say: hey, I want to join the cluster. There's the first one coming in, and then the second one should come in. What you also see is that some of the shards are automatically getting redistributed to these new pods. So now we have some extra processing capacity, thanks to Kubernetes, and Akka sees this; this is part of the sharding strategy. It goes: ah, I've got more capacity, let me move stuff over there. So Akka recognized that more pods came into the cluster and said, I'm going to go and redistribute the shards for you. Yes, and there's zero code that I wrote that makes that happen. As a developer, I developed this little demo application, the map application, but I don't write any of the code that does all this rocket-science redistribution and so on. That's handled by Akka. I just have to follow a very simple, prescriptive approach for setting up the way my application works, and I get this kind of thing out of the box. Where do the limitations start to come in?
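The redistribution being shown here, shards migrating when pods join the cluster, can be simulated with a toy allocator. This is a hypothetical sketch, not Akka's actual rebalancing algorithm (which weighs factors like which nodes are least loaded); it just demonstrates that recomputing ownership over a changed member list moves some shards to the new pods while leaving others in place.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class Rebalance {
    // Assign each shard to a node by simple round-robin over the member list.
    public static Map<Integer, String> allocate(int shards, List<String> nodes) {
        Map<Integer, String> owners = new HashMap<>();
        for (int s = 0; s < shards; s++) {
            owners.put(s, nodes.get(s % nodes.size()));
        }
        return owners;
    }

    // Shards whose owner differs between two membership snapshots must migrate.
    public static Set<Integer> moved(Map<Integer, String> before, Map<Integer, String> after) {
        Set<Integer> migrating = new TreeSet<>();
        for (Map.Entry<Integer, String> e : before.entrySet()) {
            if (!e.getValue().equals(after.get(e.getKey()))) {
                migrating.add(e.getKey());
            }
        }
        return migrating;
    }

    public static void main(String[] args) {
        // Three pods, then two more join -- the scale-up from the demo.
        Map<Integer, String> before = allocate(15, List.of("pod-a", "pod-b", "pod-c"));
        Map<Integer, String> after =
                allocate(15, List.of("pod-a", "pod-b", "pod-c", "pod-d", "pod-e"));
        System.out.println("shards that migrate: " + moved(before, after));
    }
}
```

The point of the sketch is that the application code never calls `allocate` or `moved` itself; in Akka this bookkeeping happens inside the cluster-sharding machinery when membership changes.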
Meaning, if you went back to the OpenShift console and just cranked it up to be absolutely monstrous, is that something where at some point Akka says enough is enough? No, actually. I just heard, I think it was yesterday at another conference, that somebody recently was playing with an Akka cluster scaling to thousands of nodes, thousands of pods. So the scale can get pretty high. Before this, I'd heard of things like Akka clusters running with, say, 2,000 nodes. More typically, though, you see clusters run with 10, 20, 50, 100 nodes, those kinds of numbers, but it can scale. Think about it: Fortnite, the game, runs on Akka, and when there are all these active players at the same time, it needs to scale to some pretty big numbers. I don't think we ever got a final number from them on how many nodes, but it's in the thousands. And when people aren't playing, it just shrinks down. So it's elastic, and therefore their cost of running the system is only high when they have a lot of people using it. What about application monitoring of the devices all the way out on the outside perimeter? Does Akka help with that, or do you integrate with tools from other APM vendors? Akka is managing that. I think the question was on monitoring, providing metrics on the devices. Oh, yeah, sorry. Yeah, as a commercial offering we've got monitoring that provides instrumentation very specific to Akka, the JVM and so on, and it can be integrated with some of the more common application monitoring tools. So yeah, it does that. But as far as things like when a node fails or a node spins up and we want to redistribute work across it, the decisions to do that, that's all Akka itself; the open source Akka, out of the box, does that. Yes, telemetry is part of our commercial product, and telemetry plugs into APM tools like Datadog and New Relic and others that are out in the market.
The one last thing I want to show you is the flow in the system. These extra lines that appeared when I clicked this one circle: you can think of this top right as a load balancer, just a load balancer in Kubernetes. These HTTP requests are coming in from the load balancer into an HTTP endpoint that's running in this one JVM here, and it's getting requests to send messages to specific entities. Some of those requests are for entities, or entity actors, running within the same JVM where the HTTP request landed, but many others are in JVMs distributed across the cluster. All the routing of these requests is handled by Akka itself. So as a developer, it's really easy: all I have to do is write the code that receives the incoming, say, HTTP request, maybe some JSON or whatever it is, create an object representing the message I want to send off to an entity actor, and identify by some ID the entity I want to send it to. All the routing, whether local or remote off to another node across the network, is handled by Akka itself. It's called location transparency: the location of the entity actors is completely transparent to the code. So when I write my code, I'm writing it as if it were running on a single machine, but it's actually running on a cluster of machines. When I turn them all on, you can see all this flow coming in from the load balancer. It's just kind of a spaghetti bowl, but it's showing the distribution of all these incoming messages, around 20 messages per second in this little demo app, getting distributed all across the cluster. And it doesn't matter where the HTTP request comes in; we'll always route it to the correct entity actor, no matter where that thing is. And that allows us to do something with state, because a very common pattern for people is to develop what's called a stateless type of application.
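The location transparency being described, where the caller names only an entity ID and never a host, can be sketched like this. The class below is a toy stand-in, not Akka's API: in a real Akka system the message would be serialized and shipped over the network to the owning node, but the key point survives in the sketch, namely that the caller's code is identical whether delivery is local or remote.

```java
import java.util.List;

public class EntityRouter {
    private final String localNode;
    private final List<String> members;
    private final int numberOfShards;

    public EntityRouter(String localNode, List<String> members, int numberOfShards) {
        this.localNode = localNode;
        this.members = members;
        this.numberOfShards = numberOfShards;
    }

    // The caller names only the entity ID; the owning node is resolved here.
    public String deliver(String entityId, String message) {
        int shard = Math.abs(entityId.hashCode() % numberOfShards);
        String owner = members.get(shard % members.size());
        if (owner.equals(localNode)) {
            return "handled locally on " + owner;  // same JVM, a plain method call
        }
        return "forwarded to " + owner;            // would go over the network
    }

    public static void main(String[] args) {
        List<String> members = List.of("pod-a", "pod-b", "pod-c");
        EntityRouter router = new EntityRouter("pod-a", members, 1000);
        // The same call works regardless of where cart-42's shard actually lives.
        System.out.println(router.deliver("cart-42", "AddItem(sku-1)"));
    }
}
```

Exactly one node in the cluster will report "handled locally" for any given entity, which is why it doesn't matter which pod the load balancer picks.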
And the reason they do stateless is that it's hard to do stateful in a distributed environment. But this is exactly what Akka excels at, because it has this very powerful mechanism for distributing messages across a distributed environment. That means we can have stateful actors, and with stateful actors we can do things like reduce the load on databases, which means the applications can scale to higher levels of performance. That's really cool. It's been really helpful, too, for me. Excellent. Excellent. Yeah, this visualization has been fun, and again, these are things that we make available, just stuff I wrote for developers to take, play with, maybe show their team, get excited about Akka or see how Akka works. And like I said, have a little bit of eye candy. I love doing this eye candy stuff. That was actually great for me, because when you were talking about Norwegian Cruise Line and the cruise ship, I was kind of struggling to figure out, okay, how exactly does Akka fit in there and tie all this together? But now I get it. When you're running these large, complex businesses that are mobile and moving around the world, I couldn't imagine trying to manage a business like that without something like what you folks provide. You couldn't do it. Well, there's an even more challenging aspect in the Norwegian, or any cruise line, use case: they aren't always connected to the internet. They may be out at sea without good coverage; the satellite coverage isn't good, or they're too far from shore to get the wireless connections. So the ship itself is a data center, and that data center has Akka running, has pods of Akka running. But when it's close enough and has internet connectivity, it expands to the internet. In other words, it uses Amazon, I think, as their provider.
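The database-load argument here can be made concrete with a toy comparison. The names below are hypothetical and this is not Akka's persistence API: a stateless handler must fetch the cart from the database on every request, while a stateful entity actor recovers its state once and then serves updates from memory.

```java
import java.util.HashMap;
import java.util.Map;

public class StatefulCart {
    private final Map<String, Integer> items = new HashMap<>();
    private int dbReads = 0;
    private boolean recovered = false;

    // A stateful entity reads its state from the database once, on first use.
    public void addItem(String sku, int qty) {
        if (!recovered) {         // one-time recovery read, simulated
            dbReads++;
            recovered = true;
        }
        items.merge(sku, qty, Integer::sum);  // purely in-memory after that
    }

    public int dbReads() {
        return dbReads;
    }

    // A stateless handler reloads state on every request: one read per call.
    public static int statelessDbReads(int requests) {
        return requests;
    }

    public static void main(String[] args) {
        StatefulCart cart = new StatefulCart();
        for (int i = 0; i < 10_000; i++) {
            cart.addItem("sku-1", 1);
        }
        System.out.println("stateful DB reads:  " + cart.dbReads());            // 1
        System.out.println("stateless DB reads: " + statelessDbReads(10_000));  // 10000
    }
}
```

Because sharding guarantees each entity lives in exactly one place, it is safe to keep that state in memory, which is the property that stateless designs give up.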
But regardless, it literally behaves just like that visual graphic you were seeing from Hugh, where all of a sudden there's another pod available: now I've got the internet, so I can start using it to process these things. But when the ship is too far from the internet, too far from shore or satellite coverage, it still works. Everything works. We were talking about, I mentioned, people still using OpenVMS. I used to work for Digital; I used to help software vendors port their apps to Alpha, Alpha NT, Digital Unix, OpenVMS. And there were tools out there made by Computer Associates and BMC and others that were kind of doing something, and correct me if I'm wrong, like what Akka is doing now in a distributed, multi-cloud world. Is that a fair analogy, that some of those more legacy vendors were doing it in a data center, but now there are applications like Akka that can take that same type of functionality and have it running on multi-cloud, whether you're in Amazon, GKE, or on-prem? Yes. I don't know if you want to make a comment about that. On steroids. Because I was there: I worked at HP, kind of in technical sales, and Digital was our biggest competitor by far back in those days. And this is just at a new level. One thing I wanted to say is that the big challenge we have for people going to the cloud, just like before, is getting them to unlearn what they know from pre-cloud and learn how to really use the cloud. And I think one of the strongest things about Akka is that you're kind of forced, without a lot of pain and suffering, to unlearn a lot of your old habits and adopt these new habits, and you get to where you're really, really using the cloud. I mean, you're scaling, you're resilient. When Kubernetes auto-scales you, it's elegant auto-scaling, where your actors are redistributed and all this kind of stuff.
It's just massively cloud native compared to what most people seem to be doing, kind of bringing forward: oh yeah, we're going to just build our stateless microservices on Kubernetes and all good, right? It's like, well, no, you're not really taking advantage of all the really awesome things you get with, say, OpenShift bringing the cloud on-premise to users, and allowing them to really take advantage of the power they're getting, not doing the old things anymore, doing the new things. And application modernization is not just buzzword bingo; it's what everybody needs to be doing. Mark, we talked about how old we are, and we talked about present-day things with Norwegian Cruise Line, but if the state of computing has changed so much from back in the day to now, what's it going to be like 24 months from now? Yeah, great question. I don't have a crystal ball, but I will say that there are a number of movements that we're both a part of and watching very carefully, specifically around abstraction, making it easier to build these complicated systems. If you think about all the configurability that frameworks provide a developer, it gives you a lot of power, but it also takes a lot of work for the developer, and even for the ops folks when they put those things into production. We see a world, in 24 months or maybe much sooner, where companies are going to look at serverless, look at ways of abstracting complexity and therefore losing some of the configurability, but giving it up in exchange for rapid development and, honestly, something I don't have to worry about, that literally is operated by the cloud providers: the service itself just runs. I just write the business logic and deploy it, and all the rest of it is handled for me.
We've launched a project called Cloudstate, and we're going to be launching a service called Akka Serverless, and that's all about abstracting that complexity and making it much easier to build these complex distributed systems. And it's not just Lightbend driving toward that; you see other vendors as well, including IBM and Red Hat, providing technologies to make it easier to build cloud-native systems and not put a lot of the work on the developer. And we have an OpenShift Serverless offering as well. Serverless also could really just mean somebody else's servers. Unfortunately, the term is not well defined and overused. You still need operating systems and kernels to make computers run; it's just a question of where the servers are. I guess I didn't say it very clearly, but the simplest way of describing serverless, at least in our context, is abstracting away all that complexity and configuration work you have to do. So there's less work. It makes things easier. Sorry, go ahead. I was just saying: less drama. All the drama of trying to get things working. I kind of look at the world today in three areas: there's pre-Kubernetes, there's Kubernetes, and there's post-Kubernetes, which is serverless. And it's all about abstraction. Kubernetes is a huge abstraction layer that removes so much complexity we had to deal with when we were working with real machines or virtual machines. So now we have this new abstraction in Kubernetes, and then serverless takes it to the next level of abstraction, where I don't even care about things like pods anymore, and I have no idea about machines. All I'm doing is saying: I will spend up to this much; I will pay for this much compute power and I/O operations, and how you do it, I don't care. I just want my app to run. I want it to scale. When things break, I don't even want to hear about it. Don't give me any drama.
I just want to focus on the heart and soul of my application: what's the business logic, what are the features I want to implement? Well, we're coming up on the top of the hour; we have a couple of minutes left here. If I called your director or VP of marketing, would that person be saying, oh my gosh, I can't believe you were on there for an hour, Mark Brewer, and why didn't you talk about the product? Let's stop that call from happening right now. Yeah, so engaging with Lightbend is pretty simple. We obviously provide a lot of open source technology that people take advantage of, but when it comes to engaging with the company, it's all about helping you with your project, making sure you're successful at building those applications and that they meet your business requirements, whatever those business objectives are, whether it's transforming your business into something new or adapting to this new online world where we're not doing as much physically in person as we used to. Those projects, those business objectives, we want to see them succeed. You can find out about Lightbend by going to our website, and find out more about Lightbend and its relationship with Red Hat. Please, please reach out to us; we're happy to help. And by the way, you'll also be able to see all the use cases and more that we talked about today. And just another plug: Lightbend is available in the Red Hat Marketplace, so people can go to marketplace.redhat.com and find the Lightbend offering that's there. And certainly if anyone needs to get in touch with Mark and/or Hugh, you have Mark's email right there on the screen. We're at the top of the hour. I am Michael Waite, and this has been another exciting edition of the OpenShift Commons Briefings with our operator partners from Lightbend. Thanks, everybody, and we'll see you next week.