for joining us today. It should be a pretty interesting and exciting few minutes here. Mark Hornbeek will be joining us by video. Mark was in a hospital when we recorded this the other day, and you'll probably get a sense of that. He's recovering from COVID, but being such a trooper, he really wanted to make sure that we stuck to the time schedule and got this done. So we certainly thank him for that. So Mark will kick us off. He and I have an interesting conversation about where you might go to find hidden data delays, hidden delays in the overall pipeline. Then we're gonna bring in Barak, and Barak's gonna talk to us about how data and the data services layer itself can actually be a tool for platform engineers, DevOps engineers and others to speed the process of CI/CD pipelines. I know we've spent a lot of time and energy accelerating the CI/CD pipeline and getting all of the burrs and friction out of it that we can, but we've discovered this new kind of layer underneath the ground, if you will, that is slowing things down and has, I guess, sort of always been accepted as just a constraint of the environment. And that's the data layer. We're spending a lot of time waiting for data to be moved, to be reset after destructive testing, to be pulled off of production and made available for testing in the process. And we think we can really speed up the overall process by focusing more on that, on the layers of data and data services underlying the pipeline. So Barak will take us through a demonstration and show you how that specifically plays out and works. But let's get started today with the conversation that I had Friday afternoon with Mark in a Mexican hospital. Thankfully, he is just fine, so nothing to worry about there, but we'll start with that conversation.

All right, well, let me introduce Mark Hornbeek. Mark and I have been developing a relationship through Zoom and through email messages over the last few weeks. Mark is joining us on videotape because at the last minute, Mark got COVID. Not unlike many of us, I suppose, in this time, but he has soldiered on and agreed to do the broadcast with us, but he is in Mexico recovering from COVID. So hello, Mark.

Hey, I really appreciate the opportunity despite the challenges here. I really...

Well, we're thrilled to have you with us. Just by way of background, let me tell you, if you don't know Mark, you really probably should. Mark is fairly well-known, having written the book Engineering DevOps. He's known as DevOps the Gray, although after COVID, it might be DevOps the White pretty soon.

Yes, maybe, yeah, I think so.

But anyway, he is a recognized expert on DevOps and DevOps processes, and who better to help us sort of understand how the data layer and data itself affects the CI/CD pipeline and DevOps in general. So we're really excited about having Mark here. So Mark, let's start out with this, sort of a general question. Let's talk about the key issues that are affecting DevOps as related to storage. How do storage and DevOps sort of intersect?

Yeah, so certainly a lot of people think about DevOps, continuous delivery and pipelines, and they're mostly talking about the application code. But of course, the application code doesn't really have a whole lot of meaning without data. And DevOps is all about trying to accelerate continuous delivery, eliminate bottlenecks along the lines of the concepts of lean engineering and lean manufacturing applied to the software development and delivery process.
One of the largest bottlenecks in the whole continuous delivery process for DevOps is setting up testing data and having data available. And it's not just a matter of having data available one time; the pipeline is in fact a pipeline. So there's a series of stages, starting typically with development prior to continuous integration, which is often not talked about all that well. And the developers struggle to get the right data. You wanna be testing with data that is as realistic as possible. So having a database with good data to perform good testing during development, prior to integration, is really important. And it's often hard to get that in a lot of big organizations where the people responsible for the data are separated from the applications people. And then once you get into the integration phase, which is typically very automated these days with continuous integration, you still have testing to do and you want to test with realistic data. You wanna be able to bring in the most current possible, realistic, production-like data even during the integration phase. So there's yet another bottleneck potentially if you don't have a good connection between your data pipeline and your data capabilities, if it's not well integrated. And of course, after integration, you're really just producing artifacts that are candidates to be released in what's called pre-production, the staging environment. Some people call it the continuous delivery phase prior to deployment. Again, the same thing. You need data, you need current data, you need data that's relevant, and you want it to be reliable. If there are problems, you wanna be able to back up very quickly to different revisions of the data to match the revisions of the pipeline artifacts that you're having to work with at any point in time, depending on the challenges. And this is going on not just once for any one application. Any one application may have many developers, all of them trying to work in parallel. And finally, producing the production data and having data available to deploy, being able to back up and restore things when things go wrong. So these are all just examples of some of the challenges: having the right data when needed, having to wait often long periods of time to get that data from whatever the sources are, having to coordinate development data with other people, and being able to resolve issues quickly. Often there's a lack of automation between the continuous delivery application pipelines and the processes for data. So these are very serious concerns that are slowing down many enterprises and other organizations that are really trying to leverage DevOps as part of the digital transformation.

No, that's helpful, thanks. I think you often hear that the developers sort of don't know, don't have an interest in where the data comes from. Just give me the data, I plug into it. I don't know, it comes out of the wall or something. I don't know what happens on the other side of the plug, but there's supposed to be data there, right? It's never quite that simple. So let's talk about some of the impacts of all this data orchestration on the process.

Yeah, so I'm a consultant. I work with a lot of different firms. A lot of the different companies I work with are large organizations, and also, you know, government agencies and so on.
And there have been plenty of reports written that I can say from my own experience seem to be correct, you know, whether it's a large insurance company trying to put up a policy app with all the different kinds of data that's needed for that. I work with a large global nutrition company that has an agent database with all their agents. I work with a large global storage company as well. And I also work with some military organizations. They all share some similar problems when it comes to data. And the State of DevOps report from Redgate is just one example of a report that echoes what I see in real life with a lot of real-life use cases. Fundamentally, it's all about getting data quickly in order to keep up with the pipeline. The whole point of DevOps is to be able to deliver things quickly through a continuous delivery pipeline. But if you don't have the data to go with it, it's gonna bottleneck things and slow it down. I do a lot of value stream management sessions. And quite often, you know, the number one bottleneck has to do with data, especially data related to testing, and being able to recover that when there are problems as well. This is tying up a lot of time for the developers that need to be able to do other work. And downtime can also be a serious issue. You really don't want your pipeline to be down. Once you get into DevOps, you know, a higher level of performance, you're depending on that pipeline to be working very consistently and constantly. And if there's downtime, it doesn't just affect maybe one developer. I've seen it affect thousands of developers where they're literally waiting to see that pipeline come back up and running again, things of that nature. It's getting especially more important as things get more sophisticated, when you have microservices and an application or service built out of many services, so that every one of those pipelines actually has to have some level of coordination, like a federated set of pipelines, and each of them has data concerns as well. So this data issue really, you know, it needs to be much more integrated along the lines of how we deal with applications than it is today. I think a lot of the application-type problems are being solved with tools like Kubernetes and being able to stand up infrastructure and so on. But the data side really needs some better solutions.

I think I read last week that one of the larger industry analyst firms was talking about how much developer time is spent on non-development activities, essentially infrastructure, trying to find data, get data, manage data, figure out what to do with data. And it was extraordinary, it was 40% or something of their time related to, well, not what we're paying you to do, right? It's all this other stuff in order to be effective. So certainly-

Oh, it's very, very true. People go to engineering school thinking they're gonna be spending all day designing new things. But then you find out that in the real world it's a lot of these more practical issues that you have to deal with.

Yeah. And certainly, yeah. I describe them as plumbing problems.

Yes. Well, that's why some engineers are, in Canada, they call engineers plumbers. I think that may be the reason.

Well, listen, speaking about applications, let's talk about how traditionally data infrastructure and data orchestration have been integrated into the pipeline process. Maybe there's something here which is leading us to why these problems are occurring.
There are, well, this is just one diagram out of my book, in the section where I'm talking about data storage and DevOps, but one of the approaches that people have taken to solve, if you like, the data problem, if you're thinking about an application that typically has more than just one pipeline, even a relatively simple three-tier application, forget about microservices for the moment, but even just a relatively simple application like that, is that often they're dealing with it by setting up a separate, parallel data CI/CD pipeline. There are a number of tools out there that can help you do that, but then you have to coordinate between these different pipelines, and the coordination is particularly challenging. How do you make sure that you've got the right data at the right time for each of these different pipelines?

I guess I didn't know that there was this potential, a possibility for a separate CI/CD pipeline just for data. So that's something I'll have to go learn more about myself. Thank you, I appreciate it. But it definitely appears, to me anyway, that the problem of having data external and not integrated into the rest of the toolkit is certainly contributing to this sort of overall friction, data-related friction. Let's talk about pain points. What do you see as kind of the key pain points that are gonna drive our listeners to wanna go find a better way to manage data in the CI/CD DevOps world?

Yeah, so there are plenty of pain points. I'd mentioned some of them already. Probably the biggest one is to be able to have good data available for each of these pipeline stages when you need it. And bearing in mind again, there's typically multiple pipelines, so there's multiple places where you need data. If you have too many instances of data, now you've got inefficiencies, if you have to replicate data all over the place for each of the different stages of all these different pipelines. So that's a major pain point. You really wanna be able to pool the data together somehow in a better way. And also just getting access, if you've got all of these different sources that are slow to access because of lack of automation, lack of integration between the different data tools and the pipeline tools. This can cause bottlenecks for an individual pipeline but also for an entire release that may consist of multiple pipelines. And in general, when you do have a problem, and of course problems occur, otherwise you wouldn't be doing any work, it can take a long time to restore services and data, all of these things.

Yes. Okay. All right, great. Well, we're coming down on time. Let me just ask you sort of the final couple questions here. If you're gonna whiteboard a solution for us, you know, what does it start to look like? How do we get out of this mess?

Right, so the pipelines themselves I don't think are going to change that much over time. You're still going to have dev and test, continuous integration, pre-production, continuous delivery and production. Those major, major steps are fairly common and they're not gonna go away from an application point of view. What's needed is to have better integration with data, and not to have it as a separate entity. It needs to be more integrated with that pipeline. So Kubernetes has proven to be an excellent approach for orchestrating containers.
So containerizing data using tools like Kubernetes, to really integrate data into the pipeline as a more integral capability, so that you can treat data just as if it's any other artifact that's being handled as part of the pipeline. For that to happen, there needs to be some kind of platform where the data volumes can be managed as a pool for efficiency purposes and be called up ephemerally as needed, and where you're able to share data very quickly across different volumes, because one volume may be more up to date and you need access to that latest volume very quickly. At the same time, abstracted away from physical storage. These days, pretty much all the work is being done in a distributed, remote manner. So this platform needs to be capable of serving data across a very widely distributed, very efficient pool, effectively managing the whole set of data as a pool that can be called up on demand as required, no matter where you are in the development organization or in the testing or production organization.

I agree. And we also have to keep in mind that this is a geographically distributed problem too, right? Because the development teams are no longer in one place. So make the data available where it's needed, when it's needed. Okay, one last question. So for our listeners, where do they start? How do they, where do you put the spade in to start digging into this problem?

You know, as a consultant I deal with a lot of different firms. They often ask me the same question. They're often coming because they have started already and they failed, or they weren't really sure why they're not getting the results that they thought they would. I have what I call a seven-step transformation process that I strongly recommend. It's very simple. You can apply it to anything, including DevOps. You start with leadership. You start by getting a common vision. You have to get the leaders on board to say, yeah, we do need a solution and we need a really good solution. What are the overall values that we're trying to accomplish with our DevOps pipelines? What are the major goals, the major tactics? What are the major technology choices that should be made? That's step one. Step two is more of what I call team alignment, where these senior leaders get their reports to agree as to what the more specific goals are. Don't try to boil the ocean. Don't try to do everything at once. Set out some realistic goals for the first one or two legs of the journey. Of course, I'm biased, I am a consultant, so I always recommend bringing in somebody that's done this a few times before. Step three is something I call the assessment stage, where you do a very thorough current-state assessment. It's a current state discovery which involves survey tools, online discovery tools, value stream management and gap analysis, to really get a thorough understanding of where you are now with your set of issues. That will lead you to what I would call a future state value stream, where you then map out, you know, what are the right components for the solution. In this case, what are the right people and process and technology components that need to go together to make up the solution? Once you have a recommended solution, you then implement an MVP of that solution, instrument it, validate that it's working well, and then you can start doing what I call operationalizing it, expanding it and finally expanding it across the organization. This is not a small journey. DevOps doesn't happen by just buying a tool. These things take time and they do take investment.
The good news is, when the investments are made, it's been proven time and time and time again, not just by the unicorns. If you look at, you know, the DORA State of DevOps reports and others, the results are remarkable. You don't get a few percent improvement, you get a thousand percent improvement. But it does take a very strategic approach and a stepwise, careful approach if you really want to succeed without too much, you know, going forward and then going backwards. That's my recommendation.

Great. Well, listen, I think that's very, very helpful. Really appreciate your coming out of your recovery to spend a few minutes with us. We'll let you get back to your oxygen tank.

I have to tell you a story: while we were talking, there was a nurse in front of me saying, I only have two more minutes and I need to get back on the oxygen.

Oh, my God. Well, you're a trooper and a half, I'll tell you that. And I really appreciate it. I'm sure the listeners do as well. Thank you very much, Mark.

Thank you very much for the opportunity. I really enjoy this topic, and I wish you all the best and success with everybody. The problem is solvable, and I have a feeling you guys have a good solution.

Well, thank you very much. I really appreciate it. OK, take care.

Fine. For the sake of time, let's throw it over to Barak at this point. And he's going to take us through a little bit of a demonstration of how we might build such a Kubernetes container-native storage platform. So, Barak, I'll stop sharing, and if you could jump in, let's go. Over to your side.

Sure. Thanks, I'm going to share my screen quickly. Just a second. OK, so thank you. Thank you, everyone, for joining today. So I'm Barak Seem. I'm part of the Ionir team. And I'm going to walk you through a quick intro about what Ionir is and what our role is in this kind of DevOps cloud, let's call it, or DevOps environment. And I have lined up two basic demos to show you our capabilities and how we can accelerate your data operations in your DevOps pipelines. So let's start with a quick, very quick introduction. So what is Ionir? Ionir is a Kubernetes-native storage solution. That means that we are operated and installed, let's call it, or provisioned on top of your Kubernetes cluster. We are not an external entity to your Kubernetes environment. So at the physical layer, let's call it, we are aggregating and pooling all your physical storage devices, and we are presenting that to your applications and to your pods running on top of Kubernetes. And we are not just doing that in a basic way. We are microservices based, so our solution is very native to Kubernetes. We are using all the capabilities of agility and elasticity on top of Kubernetes, and we are just like any other application that is hosted in your Kubernetes clusters. And of course, we are providing storage in the form of persistent volumes for your applications running on top of Kubernetes. And we are introducing as part of it a lot of enterprise capabilities that used to be in the world outside of Kubernetes in the past, but we are bringing them into your persistent volume environment. So capabilities like deduplication and compression are all inline as part of our solution. We have a unique capability that labels, let's say, every IO operation that happens in your environment with a timestamp.
So we can actually revert and go back and create clones from the past, which is very important for the use case that was mentioned in the video about going back to a specific point in time in your DevOps environment that you knew was operating and working well. And of course, another big and important capability is about moving or accessing data across clouds. So we have two main capabilities. One is to move data across time, which I mentioned before, which means that we're actually giving you the tools to simulate or, let's say, clone your environment at any specific point in time that you decide. And the second capability is to move data across space. So no matter where your persistent volume runs or exists, it can be private cloud or public cloud, it can be in your environment or not, we can teleport it. This is the capability that we call teleport: moving persistent volumes between Kubernetes clusters in under 40 seconds. So no matter where your cloud is, no matter where your deployment of Kubernetes is, and no matter the size of the persistent volume that you are running, in under 40 seconds we can bring that anywhere in your Kubernetes environment. So Kubernetes is the platform for the hybrid cloud and multi-cloud. We're seeing that more and more with customers. And this is making data as agile as your application. We know it's very easy to provision applications on Kubernetes, but data is still not addressed in a very agile way. And this is exactly what we are aiming at with Ionir. So we have these two capabilities.

And this is just to show you a basic, or let's say a common, way of how we see customers running their pipelines. You might recognize that from your environment, or not, specific stages here and there, or you might be doing continuous delivery so you don't have the staging stage, or maybe you do. But in essence, this is just to depict your environment: it could be different Kubernetes clusters, it could be different namespaces in the same Kubernetes cluster, and it can be across your private cloud and public cloud as well. So the arrows that we see on the left side of the slide are what we see without introducing data as part of your pipeline. This takes hours, to copy databases or move code or artifacts around your pipeline. And what you see on the right side is actually a pipeline that is optimized by Ionir. So using our capabilities of cloning, cloning back in time, and teleporting volumes between clusters, we are actually cutting any time in your pipeline that is related to data wait, in the sense of copies or moving data between environments. So we move that to 40 seconds or less.

So this is the high level of the stages themselves, and this is what we're gonna show in the demo in just a few seconds. On the left side, you can see the first basic demo that we have lined up. It is actually using Jenkins inside a Kubernetes environment, and using our APIs and our storage system to rapidly clone our, let's call it, build environment and create multiple pods or multiple environments to build and test that specific code or project in parallel. And this can be dependent on your environment and dependent on your project; the impact is very significant. So that's the first demo that we'll do in a second, and the second one I'll do right after that relates to what we discussed before, which is moving production data between Kubernetes clusters. This is actually using our teleport capabilities.
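Before the demos, a purely illustrative aside: the session doesn't show Ionir's actual storage class or provisioner names, so everything in the sketch below is an invented placeholder. The point is simply that consuming a Kubernetes-native storage layer like this generally looks like any other persistent volume claim made against a storage class.

```bash
# Hypothetical illustration only: an application asks for a persistent volume
# through an ordinary PVC against a storage class. The class name and the
# provisioner below are placeholders, not Ionir's actual identifiers.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ionir-storage            # placeholder storage class name
provisioner: csi.example.ionir   # placeholder CSI driver name
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-data
spec:
  storageClassName: ionir-storage
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```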
So we will teleport the Mongo volume from our production environment to our dev and test cloud, which we will run queries on, and we will see that in under 40 seconds you will get the production data from the point that we started the teleport. So that will be the second demo. But just before I start this one, Kirby, can you maybe start the poll? I just wanna ask around. Actually, I can start it myself, sorry. So I'm gonna launch the poll. Take a few seconds to answer, just three quick questions, to understand your environment today, what you have in your data center today, and I'll use the time to switch between displays. So I'm launching it now. I'm gonna give it a few more seconds. I'll let it get to about a minute to give people some time to answer the questions. Perfect, looks good. I'm gonna end the poll in a couple more seconds. Thank you very much to everyone that answered. So we can see that some of you are using Kubernetes for your DevOps environment, some not, for your Jenkins environment. And we see that a lot of you have described that you have problems with data wait, and you're mostly using private cloud, which is very similar to what we see with our customers as well.

So I'm gonna start the demo. I'm gonna run quickly across the two scenarios that I mentioned before. Hopefully we'll have enough time for questions afterwards and I'll try to give you all the answers. So what you see in front of you, this is the Ionir UI, which is actually managing two Kubernetes clusters that are deployed in a cloud environment. We can see we have our test cloud and we have our production cloud. In our test cloud, the only thing that we have running right now is our Jenkins workload, which has a couple of volumes used for it. And in our production environment, we only have a MongoDB that is running right now. It has a 10 gigabyte database; I'll show you in a second that it has data in it. And these are two clusters. I'm gonna jump quickly to Jenkins. Probably most of you know this UI, so I'm not gonna explain too much of what you're seeing, but we have our two use cases, or our two concepts, here. So this is the parallel build and test demo that we have. What it will do, it has a first step, as we saw in the image, of git pulling a Git project that is about a gig in size, let's say. It can be even more than that, depending on your environment. And then it spins up three jobs that are using clones of that Git repo and running their own test builds on different branches or different components of your environment. Usually what customers are doing is that each worker node is actually pulling its own environment, or pulling its own set of code. But when that happens in parallel, you have bandwidth problems, you have different read and write operations that are colliding, and so on and so on. And of course, most worker nodes are also ephemeral, that's the best practice in Jenkins, as you probably all know, so you're losing that content as part of your environment.

So let's just quickly run this workload. I'm gonna jump into the output in a sec, but this is all parameterized. You can see these are just parameters for our internal product, which is actually all API based. We're passing in some parameters to make it easy to, you know, jump between environments, and we're gonna build this right now. I'm gonna jump to the history and we'll see what actually happened.
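For orientation, here is a rough, hypothetical sketch of what that first git-pull step amounts to in plain Kubernetes terms: a pod that fetches the repository into a PVC-backed workspace which the downstream jobs can later clone. The repository URL, PVC name, and image are placeholders, not the actual demo project.

```bash
# Hypothetical sketch of the upstream step: populate a PVC-backed workspace
# with a git clone. Repo URL, PVC name, and image are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: git-pull-workspace
spec:
  restartPolicy: Never
  containers:
    - name: git
      image: alpine/git:latest
      args: ["clone", "--depth", "1", "https://example.com/team/project.git", "/workspace/project"]
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      persistentVolumeClaim:
        claimName: build-workspace   # source PVC the downstream jobs will clone
EOF
```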
So you'll see that we are creating a pod in our Kubernetes environment that will get assigned our persistent volume, as we decided, and in a second we run the git pull, as we said. So while it's running and downloading and doing the Git fetch, I can show you our Kubernetes cluster. Right now you'll see that we have two pods running: the Jenkins deployment itself and another pod that we created. And of course, you know, the PVC that I showed you before in the Ionir UI, and you can see that our clones are already created automatically in a few seconds. So if I jump back, you'll see that our API call finished correctly and it's already running our downstream jobs. So these jobs, for example, let's dive into, let's say, test job two. We can see its output, and it's also creating a pod to run its own test scenarios. This is specifically based on Java and Maven, which is actually doing some builds as part of it and running a specific test scenario that is defined as part of this build. Yeah, and this is creating the pods. Now it will start to run the Maven flow. So you can see in the background, this is our cluster. Now you can see that we have multiple test pods that have been spun up by our job, and they're all sharing the same content that came from this PVC, which is our git pull PVC, and we created clones based on our definition. So we created three clones. You can create 100 clones, you can add even more than that. And what's important to know is that we are 100% inline dedupe. It means that you are not paying in storage, you're not paying in any resources in your environment. Everything is, of course, thin provisioned and 100% deduplicated. So the impact of having five clones or 10 clones or 100 clones is minimal, close to zero. Only the delta actually makes an impact on your storage layer, let's call it.

So we can see that this build has finished in success. You can see it has a few test scenarios and it gave us a success at the end of the build. So if we go back to our main job, which is our git pull piece, we can see that, for now the, I'm sorry, the latest build is done, finished successfully. We created three pods, and the three jobs have finished successfully. And what's nice about it is that now, in this configuration and in this setup, we still have those clones ready to use. So if I spin this up again, we can connect to the same PVCs, the same volumes, have the same content. And if we have a broken build, we can revert that back to any point in time that we would like and run the job again with a different setup, with a different set of test tools and so on, and get the result that we need. And of course, you can decide if you wanna clean this up or if you wanna keep it. So it's all based on your decision and your environment.

Okay, so that was the build and test scenario. And our second scenario is a little bit similar up to a point, so we have one upstream job that calls three downstream jobs. And instead of doing a build or instead of doing a test scenario, what we're actually doing is teleporting the volume from our production cluster to our test and dev environment. And of course it's under 40 seconds. So you will see that we have a MongoDB, I'll show you that in a second. You see we have a MongoDB for each job, and it's running a CLI pod, and it's something that is very cool and very sleek to show while it's running. So I'm gonna run the same concept.
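Before the second scenario runs, a quick recap of the first one in plain Kubernetes terms: each downstream build-and-test job roughly amounts to a cloned PVC plus a test pod, along the lines of the hypothetical sketch below. The clone here uses the generic Kubernetes dataSource pattern rather than Ionir's own API, which isn't shown in this session, and all names, paths, and image tags are illustrative placeholders.

```bash
# Hypothetical sketch of one downstream job: clone the workspace PVC with the
# generic Kubernetes dataSource pattern, then run the Maven tests against it.
# Names, image tag, and paths are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: build-workspace-clone-2
spec:
  storageClassName: ionir-storage        # placeholder storage class
  dataSource:                            # clone the upstream git-pull PVC
    kind: PersistentVolumeClaim
    name: build-workspace
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-job-2
spec:
  restartPolicy: Never
  containers:
    - name: maven
      image: maven:3.8-openjdk-11
      command: ["mvn", "-f", "/workspace/project/pom.xml", "test"]
      volumeMounts:
        - name: workspace
          mountPath: /workspace
  volumes:
    - name: workspace
      persistentVolumeClaim:
        claimName: build-workspace-clone-2   # the clone created for this job
EOF
```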
The only thing that has changed from our previous deployment parameters is the fact that we are talking to a remote Kubernetes cluster now, to bring the PVC to our test cluster. So I'm gonna run this job. Let's look at what it does. So you will see, again, our environment is pretty clean right now, right? We only have a teleporter pod that we have created. This pod is just running our APIs to make sure we are bringing the right PVC from the production cluster. And you can see that it's running and it's gotten us three volumes, one per job, right? What I created in my setup is a namespace for each job that we have, and you will see that each namespace will have our PVC in it. So if I go back here, you can see our namespaces. We have a namespace per job, right? Let's jump into one of the namespaces. We can see there's nothing here yet, but our PVC is already here. So as I said, this is our target cluster. Now let me show you that I have my MongoDB in my production environment, let's just jump into that. So you can see the production database, perfect. This is our MongoDB that is 11 gigabytes in size, right? So of course I can run a query here, I'm sorry, and you'll see we have some results here from the database. So we'll use that to show you the data actually traveled across space in these under 40 seconds. If we go back to our production environment, or our test cloud, sorry, we see we already have a Mongo pod here and a pod for our test job that is actually already running. So let's jump back to our environment, and we see that the three downstream jobs are already running. We can jump into one of them and see what's actually running in that scenario. So you see we have the teleport that has happened here, and we actually created the MongoDB to attach to that PVC from the production database, and we're running the Mongo query, similar to what I ran in production, and you can see that we're getting the same results. So I can even put it side by side, and this is our production database and this is our test environment, right? So we are getting results in the live build as well. So this has finished with success. Of course you can make this very complex and run different scenarios; we just used something very, very, very basic to show you the capabilities of Ionir that bring production data to your cloud. And of course this is the view of our test cloud. You can see we have a lot of volumes right now. Actually, the teleport is still running in the background. If you're interested, you can spend a couple of minutes on it; we don't have too much time to explain the teleport, but in essence, you can see that the volumes are already accessible and can be consumed by our Mongo test pods while we are actually still teleporting the data in the backend and making everything available for the application. So this is the magic of teleporting data between clusters.

Great. All right, Barak, just for the sake of time, a couple of questions have come in. One, the clones that you're creating, when you do create those clones instantly, they're completely independent, right? You can read and write to each one of them completely independently, right?

Of course, of course. And you can even delete the parent, which is something that doesn't usually happen in other storage solutions. So there's no link between parent and child, and the child is completely independent. You can read, write, delete, whatever you want in that setup.
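A tiny hypothetical illustration of that independence, reusing the placeholder names from the earlier sketches and the generic Kubernetes clone pattern: once the clone is provisioned it is its own PVC, so deleting the parent doesn't touch it.

```bash
# Hypothetical illustration of clone independence: the clone is a standalone PVC,
# so the parent volume can be removed without affecting it. Names are the
# placeholders from the earlier sketches, not the demo's real objects.
kubectl delete pvc build-workspace          # remove the parent workspace volume
kubectl get pvc build-workspace-clone-2     # the clone remains Bound and fully usable
```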
So essentially each developer gets their own set of data in 40 seconds. They can do whatever they wanna do with it, and it won't hurt anybody else or affect anyone else.

Exactly.

Aiman asks a question: for the Kubernetes clusters, are you using a prod cluster or a minikube?

So our requirement is to be production, because we are production storage. We do have a flavor coming out which is developer friendly, let's call it, where you can run that probably on minikube or on different low-end servers, let's call it. But you need to understand that Ionir is a production-grade storage system. So each volume that you create is protected with multiple copies, and of course we have erasure coding coming, and everything is aligned toward being a fully production, fully, let's say, mature storage system. So this is not like an NFS or something that is not very stable, let's call it, or not performant. We really, really insisted on creating a production-grade storage system.

Great. Another question was about differences between public and private. The speed obviously is super quick here, and you're running private. What happens in the public cloud? Does it work the same speed, the same way?

Yeah, that's exactly it. So our proprietary, patent-based solution is making sure that in under 40 seconds you will get your volume wherever it is in the world. And whether it's private or public cloud, the same promise holds. So we will bring you the volume to be accessible and used by the application in under 40 seconds, yes.

Okay, great. If you will throw it back over to me, I'll close it up, and we're almost done.

Yeah, sure.

So I wanted to thank Mark; even though he is not with us, he is with us in spirit and heart. Sorry about the technical difficulties there. Hopefully we got it cleaned up, and when we post this for reuse we'll take care of the audio there. Something happened with Zoom apparently. But I think we captured most of his points. If you have any questions, we have a lot more information available at ionir.com. You can also grab yourself a free trial of the Ionir platform, which will do all the amazing things and more that Barak just showed you. It really does allow you to condense and eliminate many of these data wait gaps that we've identified in the pipeline. Also, we'll just say, if you have any specific requirements that you'd like us to sort of take a look at as they relate to these pipelines, Barak has kindly offered his own personal time to take a look and see if he can help you identify some of these gaps and show you how we might eliminate them. So please, you can get ahold of us through the contact us page or the free Kubernetes trial. Let us know that you're interested in having Barak take a look and we can do that. But overall, thank you very much for joining the podcast. Again, thanks to Mark and Barak. I'm Kirby Wadsworth, and we really also wanna thank the Linux Foundation for allowing us to share this information with you. Hopefully it was helpful. Thank you.

Thank you so much, Kirby. Thank you, everyone.

Thank you, Barak, and thank you to all the attendees who joined us today. As a reminder, this recording will be on the Linux Foundation YouTube page later today, and we hope to see you back at a future webinar. Thanks everyone. Thank you.