Well, welcome to this OpenShift Commons briefing today. We're really pleased to have Hazelcast's Chris Engelbert with us to talk about in-memory distributed computing using Hazelcast. And we're streaming live from a couple of hotel rooms, so we're just going to say up front, there may be a few breaks in the Wi-Fi where he is in Spain and where I am in Texas. So today is going to be interesting. We're going to have the session live streaming now on BlueJeans. It'll also be available to be watched afterwards on YouTube. If you have questions, there's a chat box here; I just ask that you type them in there. And we're going to let Chris introduce himself and his topic. And if you have questions, I think there'll be one little pause that he takes when he tries to upload a Docker image over Wi-Fi to Docker Hub or something like that. But we're going to let Chris start. It should be about 20, 30 minutes of talking and demoing, and then we'll have time for Q&A at the end, and I'll open up the mics then. So Chris, take it away and introduce yourself.

Okay. Yeah, thanks Diane. So you already said we're talking about in-memory distributed computing on OpenShift today. The most important part is I will try to show how to run Hazelcast on OpenShift. The in-memory distributed computing part will be a bit less than that. But if you're really interested in that part, just Google for my name. There are quite a few different recordings from different conferences and webinars, and those are all about in-memory distributed computing in general and about how to do that with Hazelcast. So today will be mostly Hazelcast and OpenShift. So normally this is my first slide. I can't ask if everybody read it, but I think you did and you probably know what is in there. It basically says: don't believe me. So, who I am: Diane already said it. I'm Chris Engelbert, coming from Germany, raised in one of the biggest industrial areas in Germany.
Currently I'm working as manager of developer relations at Hazelcast. I actually joined in November 2013 as one of the core developers, then moved on to the cool stuff: doing conferences, traveling the world, all the good stuff you can do around the world. For people that love to use Twitter, you see my Twitter handle down in the slides. I know that you're not supposed to have your Twitter handle on every slide, but I don't care. If you really want to have your Twitter timeline bursted, just follow me and you're good to go. I would say I'm passionate about Java. I'm part of the JCP, the Java Community Process, that actually moves the Java language and the JVM forward. And just today we got the proposal for Java 9, so that will be fun to look through. Apart from that, I'm really interested to see that Java is one of the most used and most interesting languages, and it's still growing, and that is good. Apart from that, well, everything about performance, garbage collection, all those kinds of benchmark fairy tales (I mean, we all know benchmarks are not real), everything in those kinds of things is exactly what I normally do.

So what are we going to talk about today? I already kind of gave the introduction. We're talking about in-memory computing. In-memory means fast, and fast storage. But we do this with Hazelcast, with OpenShift, and I think most of the viewers from the OpenShift blog or the OpenShift briefings probably know how OpenShift 3 works. It actually works on Kubernetes and Docker, so far so good. So Docker, as I said, you probably all know. Docker is meant to just build an image. You ship it somewhere, you run it, and it just magically works. It pretty much works wherever you are. Docker hides all that kind of weird shit that you normally have to do in deployment, and it just magically works for you. Auto-magic.
And Kubernetes on the other side: I stole this from Ray Tsang, one of the evangelists from Google, and I really love that picture. Kubernetes actually manages the cluster for you. You tell Kubernetes, I want to have 20 machines of that image, and Kubernetes, again, automatically makes sure that this all happens somehow, somewhere. And I think that's pretty cool. He actually showed it with this Hello World; that's why this picture has a Hello World. He just started Hello World, I think, 100 times in the Google Cloud. Pretty cool. OpenShift now takes Docker and Kubernetes and some internal parts and just moves all that together. And that is the interesting thing, because OpenShift itself gives you this amazing infrastructure that automatically handles all the different parts of deployment strategies, of how to distribute your images, and how to scale up and scale down. And at Hazelcast, we thought that is cool, and we definitely need to be part of that.

So the interesting thing is: what is Hazelcast? And I think that is the part most people probably don't know yet. So Hazelcast is a so-called in-memory data grid, and I really love the picture, because we're not going to have a laser, but it's pretty close. So in-memory data grid means, first and foremost, we're storing data in memory. I think you got that part. The other part is data grid. That means you can store data, you can compute on data. It's not just caching or those simple solutions; you can really do a lot of things. A lot of people think an in-memory data grid might be something like an in-memory database. It's kind of close: just remove the persistence part and you're pretty close to what it is. So, as I already said, there's one important thing. When you have a cluster running and you store data, there are two different ways. So let's take the other one first. The first one is data replication. I apparently forgot a slide for that. Data replication.
That means you save the same data on all machines. That is nice because you can just route your read requests to every node, and that will give you a speedup, but it means you have the same data everywhere, and that means you're wasting a lot of memory. So the other option is data distribution, and that is what Hazelcast does. That means, if we have this grill and this big pyramid of nuts in the middle, what we actually do is take subsets of those nuts and distribute them in the cluster. That is cool for a couple of reasons. First of all, you have like a huge shared Java heap, because you almost linearly scale up the heap over the different JVMs. The other reason is, if we have a request, a query or whatever you want to send to the data, we actually send it to all the nodes. Everybody's working on its little data set in parallel, and that is the cool part, because it speeds up your queries massively.

So let's go further. As I said, it speeds up data access, and where you start to use caching and in-memory data grids and such is when you want to have fast delivery. Caches are the most typical solution, or the most typical use case, but not the only one. We actually have customers using Hazelcast as their first and foremost, primary data store. They still plug some database behind that, to make sure that if they have to restart the cluster for some reason they can read the data back, but the application only knows about Hazelcast, and Hazelcast knows how to write to this underlying database. So why do we go for in-memory when we want fast delivery? I have these nice numbers here, and I think everybody knows that L1, L2, and these days even L3 and L4 caches are the fastest memory you can get, but unfortunately, those things are still pretty small. Even if you have one of the Xeon CPUs, it's still just like 64 max or anything like that.
I'm not sure what the current value is. The interesting thing is main memory. So we're at 100 nanoseconds access time for main memory, and that is pretty fast. It actually is faster than a gigabit network. Still, a gigabit network is amazingly fast if you look at other things like spinning disk, which is, sorry, a thousand times as slow, or even sending a TCP/IP packet from California to Amsterdam and back to California: that is 150 milliseconds of latency. You really don't want those things in your cluster. True. Even if we compare network to spinning disk: there's 0.01 milliseconds access time over a gigabit network, but 0.15 milliseconds for the disk, so 15 times as slow as the network. That means our gigabit networks are so fast these days that even some of the fastest SSDs can't really keep up on sending data fast enough. That's a very interesting thing. Hazelcast itself has a test lab. We have our own test lab: 10 machines, 768 gigabytes of RAM, 24 cores each, I think, something like that. And we have a gigabit network card, we have 10 gigabit, and we have 40 gigabit. And I promise, whenever you think the network is slow, go for Solarflare 40 gigabit network cards. Those things are so amazingly fast; we got round-trip times in microseconds. That is absolutely amazing. So the network is not the problem these days anymore. And the other thing: the network matters because we have a cluster, right? Main memory matters because we have an in-memory data store. I'm not sure if you can read that, and I think my mouse cursor doesn't work. So this chart actually shows the way the price and the size of memory moved since 1980. I think everybody knows the good old C64 times, and the claim that 640 kilobytes might be enough for everybody, even though I think it's not true that Bill Gates actually said that. But it was amazingly expensive to go over those 640 kilobytes.
So a megabyte of RAM in 1980 was over six grand. Over six grand! Whereas today it's less than half a cent. That's actually pretty impressive. It kind of stabilized at the level where it was in 2013. But the memory size is growing. So these are home computers, and I think 16 gigabytes these days is not uncommon. My home computer has 32. I'm a gamer, so I'll probably need some more. But looking at server hardware: Amazon just yesterday launched, I think, a three-terabyte VM. A three-terabyte VM. That means they are not running a single VM on a single system; they probably run 10 or 20 or maybe 100. That means they must have machines with 30 to 300 terabytes of RAM. I don't know how they do that, but that's cool. But even as a smaller company, as I said, we have 10 machines, 768 gig each. That's close to a terabyte of RAM. And I think on the HP website, at the moment, you can buy two terabytes right from the store without any kind of customization. That's absolutely amazing.

So I already said that two of the main use cases for Hazelcast are caching and primary storage. Caching is interesting because that is where most of our customers start from. They need a distributed cache. Those replicated caches or Ehcache don't work for them anymore because they have too much data, and then they look for distributed caches. And Hazelcast can do that for you, and it does it very well. So that is where most people come in: they need a caching solution, and then they figure out, well, I can do way more with Hazelcast. It's not only a cache, which is what a lot of people think in the first place. So then they start to use it as primary storage. As I said, your application is only working against Hazelcast and doesn't know about anything like a database or anything in between or behind that. So that is the second use case. And then they figure out there's even more.
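Since "the application only talks to Hazelcast" is the heart of both use cases, here is a minimal sketch of the pattern in plain Java. Hazelcast's distributed map implements the standard `ConcurrentMap` interface, so a local `ConcurrentHashMap` stands in for it here; the `hz.getMap(...)` call in the comment is how the distributed version would be obtained, and `loadFromDatabase` is a made-up stand-in for the backing store, not anything from the talk.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Cache-aside sketch: the application only talks to the map; whether the
// data lives on the local heap or in a Hazelcast cluster is hidden behind
// the ConcurrentMap interface.
public class CacheSketch {
    // With Hazelcast this would be something like:
    //   ConcurrentMap<String, String> users = hz.getMap("users");
    // since Hazelcast's distributed map implements ConcurrentMap.
    static final ConcurrentMap<String, String> users = new ConcurrentHashMap<>();

    static String loadFromDatabase(String id) {
        // Stand-in for the slow backing store the talk mentions.
        return "user-record-" + id;
    }

    static String findUser(String id) {
        // computeIfAbsent gives the classic read-through pattern.
        return users.computeIfAbsent(id, CacheSketch::loadFromDatabase);
    }

    public static void main(String[] args) {
        System.out.println(findUser("42"));   // loads on first access
        System.out.println(findUser("42"));   // served from the map afterwards
        System.out.println(users.size());
    }
}
```

The point is that swapping the `ConcurrentHashMap` for a clustered map leaves `findUser` unchanged.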
So Hazelcast also offers things like lists and sets and queues, and also cool things for distributed computing like a distributed executor service. And the interesting thing about Hazelcast, what we always try to do, is to make it as simple for Java developers as possible. That actually means Hazelcast implements those features (list, set, queue, executor service) on top of the Java standard libraries. So we implement the Java collection API and the Java concurrency API, including semaphores and locks and all that stuff, distributed for you, transparently distributed. Your application pretty much doesn't know about it because you're working against the official APIs. And that's the cool part: Hazelcast tries to be as simple as possible. And even though we have a thing, well, I think that will be the next slide. So Hazelcast itself is all about scaling out. That means, instead of always buying a bigger machine (there's always a company that can sell you a machine that is twice as fast but three times the cost), Hazelcast goes the other way, the same way as a lot of other NoSQL solutions out there. That means you take cheap hardware or virtual machines or whatever, and you take a lot of them. And that actually gives you a speed improvement over just scaling up your single machine.

So how does that play together? So Hazelcast, as I said: we have a Hazelcast cluster. For Hazelcast, that actually is a peer-to-peer network, so I found this picture which seems to have every node connected. I like that. And you have your cluster running, you have your application in there. But then you have this problem, right? You either want to update Hazelcast or you want to update your own application. All people know that. What we have to do is roll it out somewhere. And rolling it out actually means we first have to package everything up. That means our application code, Hazelcast, libraries, more libraries, even more libraries.
And when we think we have all the libraries, we have even more libraries. And that is still okay. So now we have like a huge jar, a war file, an ear file, whatever is out there. And we want to deploy this. So how did this work before we had all those cool things? Well, somebody actually had to move your war or your deployment unit to the servers. And that often was a manual step. Not saying that this is still true for all companies, but believe me, you would be surprised: there are still a lot of companies doing this, at least in a semi-automatic way. Pretty scary though. So we try to deploy it. And when you actually manage to deploy, you can see yourself as a deployment hero. And that actually was true after every deployment. And I'm speaking from my own experience and from my own past. We did this at a couple of companies. Every deployment, there was a special stand-up in the morning just to see that everything is good. We had to be early in the company just to do a deployment, or just so that if the deployment went wrong, somebody would be there. Not what we want. So what often happened is some deployment failed. That was why people were there in the morning. And we actually had to either roll back, or fix it really quickly and try another deployment and hope that this one works. And I think everybody knows how those "I need to fix it right now" things work, or rather don't work. So Hazelcast started from a customer's request to do it differently. This customer wanted to deploy on Docker, or as a Docker image. So he asked us and we said, well, you know, that's actually a quite good idea. You're probably right about that. So with 3.6, Hazelcast started to have official Docker images. Those are all on Docker Hub; you can just download them. And what we added now, or what we're adding for 3.7, is automatic deployment, out of the box deployment, for OpenShift.
This talk will actually show a slightly more manual version, because it's not yet completely done, but it will be whenever Hazelcast 3.7 is out. Diane and I just talked about it, and we're trying to get this out as fast as possible, probably by updating blog posts and whatnot. So that is where Hazelcast, Docker and OpenShift come together. And why do we choose OpenShift? It's pretty simple. The main reason is OpenShift is easy for DevOps and for infrastructure teams to manage. And it's actually quite easy, we'll see that in a bit, to run Hazelcast, to scale Hazelcast, right on top of OpenShift. And that is absolutely impressive and amazing. And I think it's good work. So let's have some fun with OpenShift. And I actually was clever enough to prepare slides for the case that my live demo doesn't work, but we're trying it live first. So let's see. So that is the Hazelcast OpenShift project for now. That's on my GitHub account for now. As I said, this is still a draft version. It's not completely done yet, but we're working on that. We're getting it ready whenever it is needed. So I already cloned that repository, and you see, while I was trying to get all this stuff running on OpenShift, I basically collected all the commands I ever found. And it's actually quite interesting, because a lot of those commands you can only find using Stack Overflow. And you wonder where people get these ideas from. So we have this repository right here. As I said, it's just cloned; if you look here, git status. Perfect. Okay, so let's look through some of the files. So I chose the most simplistic way, I think, that is actually possible. I use a Maven POM file to build my Docker image. And in this case, I give my Docker image a name. I call it noctarius. That's my common nickname; you've probably already seen that on the Twitter handle.
I'm not using the official Hazelcast image yet. As I said, this is still a draft. I have a maintainer handle here. And this is quite important. So, the... yeah.

You're not actually sharing your screen right now.

I'm not? No. Okay. Let me see. Whoa, it stopped. Okay. Thanks for saying that.

Thank you for speaking up there. I could see it for a second and then it was gone.

Okay. So now you see it again, right?

Yes, it's there.

Okay. So let's start again. Like I said, it's just a Maven POM file. And I use one of the smallest Java 8 images I could find. And, as I said, this is pretty important: the standard port for Hazelcast is 5701 as the TCP port, and we just tell Docker, hey, we want to use that one. And here's another thing; that is just the way I did it. I give it a Java command to actually load all the libraries and to start my small test application. And this test application does nothing else than creating a Hazelcast client or a Hazelcast member node, depending on my environment variable HAZELCAST_TYPE. So we will see how that comes together. And it actually installs all that stuff to /opt/hazelcast. Nothing important here, I think. So let's build this first. And I actually wonder why that happened, but we'll figure it out. Come on, clear. So what we actually do is we build it first. And that is when the Docker plugin actually builds the Dockerfile itself, and it doesn't work for... Oh, right, it can't work because I have to use the Docker terminal. True, I remember. But you see I use all that Vagrant stuff, which is quite nice. It makes development pretty simple and pretty easy. It just needs to start up. For whatever reason sharing stopped again, I don't know. That's interesting. There we go. Okay, now let's go there again. Still there, right?

You still got it there.

Okay. Live demos are fun.

Yeah, live demos are the best.

Let's try it again. And I'm pretty sure it works this time. And it seems like... well, BUILD SUCCESS. I knew it works.
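The little test application just described, choosing between client and member from the HAZELCAST_TYPE environment variable, can be sketched like this. The class and method names are mine, and the actual Hazelcast startup calls are left as comments so the sketch runs without the Hazelcast jar on the classpath:

```java
// Sketch of the launcher the demo image runs: one environment variable
// decides whether the container joins the cluster as a full member or as
// a client. The real Hazelcast calls are shown only in comments.
public class Launcher {
    static String describe(String type) {
        if ("client".equalsIgnoreCase(type)) {
            // Real app: HazelcastClient.newHazelcastClient(clientConfig);
            return "starting Hazelcast client";
        }
        // Real app: Hazelcast.newHazelcastInstance(config); member is the default.
        return "starting Hazelcast member on TCP port 5701";
    }

    public static void main(String[] args) {
        // Defaults to a member node when the variable is not set.
        String type = System.getenv().getOrDefault("HAZELCAST_TYPE", "member");
        System.out.println(describe(type));
    }
}
```

Running the same image with `HAZELCAST_TYPE=client` or `HAZELCAST_TYPE=member` is what lets one Docker image play both roles.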
So there we go. Once we build a new image, we have a tag for latest. And the next step is actually to tell Maven, or the Docker plugin in Maven, to upload it. Docker push, and that will be interesting because I have no idea if that is going to work. It probably does. Not. Okay. It does not. But I think there's already a latest version on Docker Hub, so we just keep going and I think it still works. So let's see what else. I think there was one more.

You're tethered to your phone. So I'm waiting to see when your phone data runs out.

Well, it looks good so far. So I think we're on the safe side on that front. So what I actually missed: there is a hazelcast.xml, and that is where you configure how Hazelcast will do IP discovery, or in this case member discovery. So what I did for OpenShift: for Kubernetes itself there is a Hazelcast Kubernetes discovery strategy. That means it will ask the Kubernetes DNS service or REST service for other members based on, in this case, a service name, hazelcast-openshift, inside the namespace default. And I think that might be wrong, but we'll figure it out. I think the namespace is actually wrong, but we'll see. So let's keep going here. I downloaded the Vagrantfile for OpenShift, and I know it's not the right one. I think it's OpenShift Webinar. I think that is the right one. Yeah, I know it's probably not the latest version. This is Bootstrap 1.0.6, but we're going to install this anyway because I know that one works. We do a vagrant box add with a name, the OpenShift bootstrap thing. So that will be quite quick. This, I think, just installs this OpenShift bootstrap thingy into the local Docker repository. That's at least what I understand of what it does. But the next step will actually be pretty slow, and that is where we can do something else, like looking at a question, for example. So, come on. You can already search for a question. I'm just going to run the second line whenever that one is finished.
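For reference while that runs: the discovery part of such a hazelcast.xml looks roughly like the fragment below. This is a sketch from memory of the hazelcast-kubernetes plugin's configuration shape, not a copy of the demo's file, so treat the exact property names as illustrative; the service name and namespace are the ones mentioned in the demo.

```xml
<!-- Sketch of the member-discovery part of hazelcast.xml using the
     hazelcast-kubernetes discovery plugin. -->
<hazelcast>
  <properties>
    <!-- The discovery SPI has to be switched on explicitly. -->
    <property name="hazelcast.discovery.enabled">true</property>
  </properties>
  <network>
    <join>
      <!-- Multicast does not work across containers, so disable it. -->
      <multicast enabled="false"/>
      <tcp-ip enabled="false"/>
      <discovery-strategies>
        <discovery-strategy enabled="true"
            class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
          <properties>
            <!-- Members are found by asking the Kubernetes DNS/REST
                 service for this service name in this namespace. -->
            <property name="service-name">hazelcast-openshift</property>
            <property name="namespace">default</property>
          </properties>
        </discovery-strategy>
      </discovery-strategies>
    </join>
  </network>
</hazelcast>
```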
Okay. And we can do multitasking, so that works.

I'm curious if any of the other folks that are on the call right now, if they have a question, could unmute themselves and ask. I know Rich Carpenter is on there. I'm wondering if at their organization they're using any in-memory computing already, if not Hazelcast.

We're not using Hazelcast right now. I know there are different applications that have looked at some different kinds of caching, like Ehcache. I don't know if anything is widely spread.

Okay. I'm curious to find other people who are using in-memory already on OpenShift and to get their feedback on this approach.

So that one is definitely pretty slow.

One of the things I was wondering about: you mentioned that in some cases Hazelcast is being used as a primary data store and they have a database behind it. Does Hazelcast itself take care of persisting updates into the database, or do you need to develop something on your own to do the persistence part? How does that work?

So Hazelcast by itself has some interfaces. We call them MapStore and MapLoader. If you know the JCache specification, they have CacheWriter and CacheLoader, which is the same idea. It gives you a very simple CRUD interface plus some batching methods, and what you do is implement that based on writing to Couchbase, writing to Hadoop, writing to your relational database, Oracle, whatever it is. And pretty much the same way you plug in those discovery strategies, you can configure, for a specific map or a specific JCache instance, that those interface implementations have to be used. So yes, you have to write it on your own. But once you plug it into Hazelcast, once you tell Hazelcast to use it, everything else is completely transparent.
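As a sketch of that interface shape: the interface below is self-defined and only mirrors the load/store/delete-plus-batching idea described in the answer (Hazelcast's real interfaces are MapStore and MapLoader, with more methods), and the "database" here is just an in-memory map standing in for Couchbase, Oracle, or whatever the real backing store would be.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Simplified mirror of the CRUD-plus-batching interface Hazelcast calls
// transparently whenever the distributed map changes.
interface SimpleMapStore<K, V> {
    V load(K key);
    void store(K key, V value);
    void delete(K key);
    void storeAll(Map<K, V> entries);   // batching variant
}

// A fake "database" implementation; in reality this would write to your
// relational database, Couchbase, Hadoop, and so on.
class FakeDatabaseStore implements SimpleMapStore<String, String> {
    final ConcurrentMap<String, String> db = new ConcurrentHashMap<>();
    public String load(String key)               { return db.get(key); }
    public void store(String key, String value)  { db.put(key, value); }
    public void delete(String key)               { db.remove(key); }
    public void storeAll(Map<String, String> e)  { db.putAll(e); }
}

public class MapStoreSketch {
    public static void main(String[] args) {
        FakeDatabaseStore store = new FakeDatabaseStore();
        // In the real thing, Hazelcast invokes these callbacks itself when
        // the distributed map is written to; application code never sees them.
        store.store("user:1", "alice");
        store.storeAll(Map.of("user:2", "bob", "user:3", "carol"));
        store.delete("user:3");
        System.out.println(store.load("user:1")); // alice
        System.out.println(store.db.size());      // 2
    }
}
```

That transparency is the point: the application writes to the map, and the store implementation quietly mirrors the changes to the database.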
That means, in the best case, there's one or two people in the company that actually know that there's some magic going on in the back end. Well, you don't want just one person; you probably want two or three.

So I noticed the discovery method you were showing in there, that Hazelcast Kubernetes discovery strategy. If we have OpenShift installations in two different data centers and they're completely different installs, but a single application might be deploying into each of those environments, sort of for DR purposes, so that if one of the data centers is destroyed, their app is still running in the other data center: being that they're completely independent OpenShift environments, it would seem like that Kubernetes discovery strategy probably isn't going to work.

I think the important question is how you configure your Kubernetes discovery to work, not the Hazelcast discovery: how do you do the service discovery part on Kubernetes? If you have one master for that, it will work. If you have completely independent data centers, it won't work. In that case, you should probably use Hazelcast WAN replication anyway to connect those two independent clusters.

So Hazelcast has a way of connecting the two?

Yeah, we have WAN replication in the enterprise version. You configure the different endpoints, and the important thing is, because Hazelcast does data distribution, or data partitioning, you don't want a high-latency connection between the members of one cluster spanning different data centers. That would mean that statistically half of your data is in the other data center, and your whole cluster would get the penalty of this high-latency connection. But what WAN replication does is add an asynchronous replication channel that can run active-passive or active-active. Obviously, for active-active you might have conflicts you need to resolve.

Thank you.

Okay. Anything else so far? Otherwise we're just...
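The data-partitioning behavior behind that answer can be sketched in a few lines of plain Java: every key hashes to one of a fixed number of partitions, and partitions are spread across the members, which is why a cluster stretched over two data centers would constantly reach across the slow link. The 271 partition count matches Hazelcast's default; the hashing and assignment below are deliberate simplifications I made for illustration, not Hazelcast's actual algorithm.

```java
import java.util.*;

// Toy sketch of data distribution: keys hash to partitions, partitions
// are assigned round-robin to members, so each member owns a subset of
// the data and queries can run on all subsets in parallel.
public class PartitioningSketch {
    static final int PARTITION_COUNT = 271; // Hazelcast's default

    static int partitionId(Object key) {
        // Hazelcast hashes the serialized key; hashCode stands in here.
        return Math.abs(key.hashCode() % PARTITION_COUNT);
    }

    static int ownerMember(int partitionId, int memberCount) {
        // Simplified round-robin assignment of partitions to members.
        return partitionId % memberCount;
    }

    public static void main(String[] args) {
        int members = 3;
        Map<Integer, List<String>> perMember = new TreeMap<>();
        for (String key : List.of("alice", "bob", "carol", "dave", "erin")) {
            int member = ownerMember(partitionId(key), members);
            perMember.computeIfAbsent(member, m -> new ArrayList<>()).add(key);
        }
        // Each member ends up owning a subset of the keys.
        System.out.println(perMember);
    }
}
```

With members split across data centers, roughly half of those partition owners would sit behind the wide-area link, which is exactly the penalty WAN replication avoids by keeping each cluster local and replicating asynchronously between them.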
Let's go back to the demo. So I'm now in the OpenShift Vagrant-based virtual machine. I used vagrant ssh to connect to this internal machine, and I do a sudo su so I get root rights, because I'm now doing some things on OpenShift that you probably all know, and probably better than me. So I log myself in with the OpenShift client, and I'm probably not doing it the right way; it's just the only way I figured out actually works for me. So if you know better, just interrupt me. And I'm creating a new project. I'd be really happy to find out there is a nicer way to create a new project, but I don't think so. So I'm creating a new project called hazelcast-cluster, and I'm actually joining, or switching to, that one. And we're now working on the hazelcast-cluster project. And now comes the longest part, and I think I'm just going to copy that from my slides, because I'm lazy. Where are you... there you go. Perfect. So what I just did: I downloaded the OpenShift Hazelcast deployment template. I'm not a vi guy, as you can probably tell. So the Hazelcast template configures, or defines, how Hazelcast works in OpenShift. Most specifically, it defines which image is deployed, and I think the image is somewhere down here. So there is the hazelcast-openshift latest image, and I hope that will work the other way around, downloading it, in a bit. It also configures, or defines, a couple of environment variables. You see there's the service DNS and service name and namespace, and a couple of things we already configured. By default it will always use IPv4 to connect internally, and I'm honest here, I've never tried IPv6 so far. But I think it will work, at least from the Hazelcast side, with the right configuration. We know that there's only one Hazelcast instance running in each container, so that works. Apart from that, we are configuring a replication controller.
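To make the pieces just listed concrete (image, environment variables, a service for discovery, a replication controller), here is a heavily trimmed sketch of what such a template can look like. The image name and environment variable below are placeholders in the spirit of the demo, not the exact contents of the real template file.

```yaml
# Trimmed sketch of an OpenShift template for a Hazelcast cluster.
apiVersion: v1
kind: Template
metadata:
  name: hazelcast-deployment
objects:
  - kind: Service
    apiVersion: v1
    metadata:
      name: hazelcast-openshift      # the name the discovery strategy looks up
    spec:
      ports:
        - port: 5701                 # Hazelcast's standard TCP port
      selector:
        name: hazelcast-node
  - kind: ReplicationController
    apiVersion: v1
    metadata:
      name: hazelcast-cluster
    spec:
      replicas: 3                    # three members by default, scalable at runtime
      selector:
        name: hazelcast-node
      template:
        metadata:
          labels:
            name: hazelcast-node
        spec:
          containers:
            - name: hazelcast
              image: example/hazelcast-openshift:latest   # placeholder image
              ports:
                - containerPort: 5701
              env:
                - name: HAZELCAST_TYPE   # member vs client, as in the demo app
                  value: member
```

The replication controller at the bottom is the piece that does the scaling.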
So that's the thing that actually makes sure that there are as many replicas, as many instances, of this Hazelcast pod running as we just defined. By default we always deploy three instances; you can scale this up and down at runtime. I think that's basically it. And there's some more configuration, especially to make the names nice. So let's see. We already downloaded it; now we need to import it: the Hazelcast deployment, and we want to import it into the hazelcast-cluster project. There you go. So I think that's it so far, and what I actually like about the Vagrant environment is that it configures proxy setups for the ports. So that is nice. So again, it's admin admin. We're just logging into the OpenShift console, and as I said, it's probably not the latest version. I think so, at least. We select our Hazelcast project. And what we do is we just say we want to add something. We select our template. There you go. And here are the environment variables again that we just saw in the configuration file, and that is one part of the reason why you need those JSON files (or I think you can also use YAML files): they actually say what the parameters are that we want to set when we deploy this application. We say hazelcast-cluster.local, and we want to have members, the member keyword. And we start the deployment. If everything goes right, we get three pods that hopefully connect together. As I said, I'm not 100% sure, because of the namespace, but I think it still works. We'll figure it out. So it pulls down the image over my tethering, over a Europe-based data plan, so that will be interesting.

It works, and if you run out of tethering... your six gig limit.

I think I won't run out of that. So that is good so far. We haven't yet used a single gigabyte, so I think that actually works. But, oh wow, it actually downloaded. Wow. There you go. Let's see. They're running. And now let's look at the log file with vi. /var... Wow.
/var/log/containers, I think. Containers. Hazelcast...

You might have a typo there.

Hazelcast. There you go. And I think it's the pod log. Nope. What is it? Nope, that's the wrong one. Ah, these amazingly long file names; you never have an idea what you're actually doing. There you go. There's some... whoops. There's the socket acceptor. It says stream... established connection. So they're probably still working on it. Yeah. Well, you see that the image is also a bit older; it's a 3.6 early access, and there's 3.6.2 or something out by now. There we go. So here we actually have a cluster of three members. And there is 172.17.0.5, 172.17.0.6 and 172.17.0.7. If we look back here, we see it's 5, 6, 7. So it actually kind of worked. That's unfortunately not very readable in the log file. So we actually got a Hazelcast cluster running, pretty much out of the box, in OpenShift. And I think that's interesting. And it's really nice. I unfortunately don't know the command off the top of my head right now to start another one, to scale up. And I think it's not in this build of the UI yet.

Unfortunately... if you go to deployments, if you go back one screen. Okay. But it's in the... that, browse, deployments.

No, not deployments or builds. This is a local build. I think the problem is the UI changed pretty much right after that version of the image.

All right. Suffice it to say that if you did this live on OpenShift Dedicated or OpenShift Enterprise, or even OpenShift Origin, it would all work pretty much the same.

Right. The UI will also look slightly different, as far as I know right now. I unfortunately don't remember how I made this nice version of the log file, but we saw it actually works. So let's go to the last couple of slides, and I think that was pretty much it. I think it was easy enough, at least. It's not as easy as using Hazelcast standalone, but what I really like here is that it makes deployment easy. Developers often don't think about the deployment part.
That is where we figured out that DevOps really makes sense. In the past we had developers, and we had an infrastructure team or operations team, whatever you call it. And operations didn't want to think about any kind of exception: you get a Java stack trace and they just restart it. And engineering didn't want to think about deployment. That worked fine for some time, until somebody figured out, well, maybe something like DevOps is the cooler thing to do. So I think that this combination of Hazelcast and OpenShift is exactly what is made for DevOps. It's easy to deploy, and Hazelcast is also easy to understand for people that are not working with it every day. So we haven't seen any Hazelcast source code yet, but you can really believe me, it looks like pure Java code. So for me, I think that is what we have to be happy about. I think we're in a good situation, in a good time, these days. And at JAX in Germany (I'm in Spain right now, but the conference is in Germany), one guy from GitHub gave a keynote, and he said we're going forward to a completely new millennium of deployment. His daughter is writing her own Minecraft plugins at the moment; she's running her own server. And he said, when she becomes a full-time engineer or full-time DevOps person at some point, she won't expect to have those long, hard, rule-based processes for deployment. She will expect it to just click and run, which is pretty much what OpenShift gives us, and what is amazing for Hazelcast. One more thing.

Go right ahead.

Okay, one more thing about Hazelcast. The main version, with all of the cool features, is Apache licensed. You can go to GitHub and just download it, send pull requests, file issues. Or hopefully you won't file issues; you'll probably file feature requests.
And there's also an enterprise version, but I'm working on the community side, so I don't care about that one, besides the fact that it pays my salary. But we're not talking about that. So, that's the reason why I normally consider Hazelcast to be a good first place for developers. As a developer you want to get started easily; just look at the examples. It's really as easy as it seems at first glance. And if you want to pay us some money and fund my salary: hazelcast.com, it's already on the slides down there. And I think everybody should be able to just follow the whole presentation today and do it almost out of the box, at least. So, do we have any more questions? We probably have. Perhaps.

We're almost to the end of the hour. One of the things: if you can send me your slide deck or put it up someplace, absolutely do that, because the rest of us are just as lazy about cutting and pasting. And that would be a wonderful thing to share, along with the video that we'll put up. There'll be a blog post on the OpenShift blog at openshift.com with the video links and the slide deck. Rich, or anyone else, if you have questions, please feel free to ask. I think we've really covered quite a lot of territory here this morning, and I'm really grateful for you taking the time to do this, Chris, because I actually learned quite a bit. And I'm looking forward to updating the old blog post about Hazelcast, which I think was in 2013 or '14, with some new information and getting people started on this. So I know you're on the road the next week or so, but when you get back to your home office, wherever that may be, let's sync up and get a follow-on blog post with all of this, because it's really some great stuff, and I'm looking forward to testing it out myself and getting the containers all up and available on wherever the OpenShift registry hub ends up being in the short term.

Well, thank you very much. Thanks for having me.
I think that is the most important thing. All right. Well, you keep doing that happy dance.