Hello and welcome, everybody, once again to this OpenShift Commons briefing. Today we're going to do CI/CD workflows with OpenShift using Diamanti's container converged infrastructure, and we have Mark Balch here from Diamanti. He's going to kick it off, so I'll let Mark begin.

Great, thank you so much, Diane. Good morning, everyone, and thanks for joining our webinar on CI/CD with OpenShift and Diamanti container converged infrastructure. This webinar is being recorded; a link to the recording will be sent to everyone who registered today, so stay tuned for the posting in a few days. Following today's demo of OpenShift and Diamanti container converged infrastructure, we'll discuss the Kubernetes open source project and take questions from the audience. I'll do a quick speaker introduction: the primary speaker is going to be Chakri Nelluri. Chakri is Diamanti's lead Kubernetes contributor and a member of the technical staff. I'm Mark Balch, head of products and marketing.

So here's what we're going to talk about today. We'll do a quick review of some of the CI/CD challenges for IT and infrastructure, Chakri will take us through a demo of CI/CD with OpenShift and Diamanti container converged infrastructure, then we'll have a quick expert-insight chat about the Kubernetes extensions for networking and storage, and then we'll open it up for audience Q&A.

This is a really nice diagram that came out of the OpenShift blog. Businesses are demanding modern applications and workflows to deliver new digital services faster than ever. Container technology is becoming the norm for delivering these services, heavily disrupting development and IT operations organizations. While containers enable faster application development cycles, IT ops is slowed by today's virtualized model, with multiple layers of infrastructure, rigid complexity, and high costs. IT ops requires a robust, proactive approach for containers that aligns development velocity, operational service levels, and budgets.

So CI/CD touches all layers of IT and infrastructure, and there's a tension between faster development cycles and existing infrastructure and overhead. As you move between the different stages in the application lifecycle, how do you maintain the dependencies between them? And that's from a whole-application perspective: not just the top tier of the application, but things like the databases and the backend infrastructure. There are a lot of steps hidden underneath that can slow down the application lifecycle: figuring out how to get the compute, the networking, and the storage all set up, doing the testing and the performance tuning. This is a very time-consuming operation.

Diamanti is the first appliance purpose-built for containers. We're a hyperconverged infrastructure that brings the ease of traditional hyperconvergence together with the unparalleled efficiency and performance of bare-metal containers. The first thing we do is let you accelerate application time to market, down to seconds. We do this by automating the network and storage configuration and management, plugging directly into the Kubernetes stack. We also provide guaranteed, real-time service levels.
We do this by isolating the compute, network, and storage resources for each container, so that we can provide guaranteed throughput for each portion of the application. When you're deploying your application, there's no concern about a conflict or a noisy neighbor with other containers running in the same cluster. And all of this enables you to consolidate as well: we can routinely run at 90 percent utilization, we use your existing network infrastructure, and we use existing open source software. There is no vendor lock-in with proprietary interfaces, and you're actually able to use the infrastructure that you pay for, because of the isolation that we provide.

So let me talk a little bit about how this works before I transition to Chakri's demo. Diamanti automates the deployment of OpenShift apps with guaranteed service levels. The first thing we do is let you use your existing OpenShift and Kubernetes applications; literally, that means you can import your existing apps and containers. The only thing we ask you to do is tell us the service level. Are you looking for high performance, or are you looking for best effort? What do you need? We just need to know what the application's performance requirements are. At that point you can define your network and storage requirements purely through software and through open source interfaces, and we'll show you exactly how that works. Then you use your standard OpenShift and Kubernetes environment to deploy the application. We automatically plug into the backend, by virtue of the upstream contributions we've made, and we automatically configure the network and storage as the application and its containers get deployed. But beyond that initial setup, we guarantee service levels in real time. We make sure that an application that is running well now will continue to run well in the future, even as new workloads and new containers are deployed into the same cluster, into the same shared infrastructure.

Diamanti container converged infrastructure starts with a basic x86 Linux core. There's nothing custom; it's all standard x86 Linux, and we run Red Hat. What we do is attach to that Linux core a high-performance local Ethernet switch as well as local NVMe flash storage. NVMe is an industry standard supported by all the major hardware and software vendors and Linux distributions. We then put a virtualization layer on top of those network and storage resources, but this virtualization layer is at the PCI bus level. It's not a software overlay; it sits physically on the PCI bus in a controller, and it provides each container with its own dedicated virtual network interfaces and virtual storage volumes. So the container sees what look like bare-metal resources that are directly accessible and mapped into that container's namespace. We then round out the appliance with performance queues. Each of the network and storage resources for each container is backed by queues that are unique to that container, so Diamanti can sit in the data path in real time and prioritize workloads to make sure everyone gets the guarantees they've requested. You will never have a noisy neighbor problem; you will never have someone else taking the throughput that you expected for your application.
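As a rough illustration of what "import your app and just tell us the service level" could look like in practice, here is a minimal sketch of a standard Kubernetes pod that declares its desired tier through an annotation. The annotation key `diamanti.com/qos-class` and the tier name are assumptions for illustration only, not Diamanti's documented API; the point is that the service level rides along with the ordinary Kubernetes object rather than living in a separate proprietary tool.

```yaml
# Minimal sketch, assuming a hypothetical annotation key.
# The real key and tier names would come from Diamanti's docs.
apiVersion: v1
kind: Pod
metadata:
  name: postgres-gold
  annotations:
    diamanti.com/qos-class: high   # assumed key; requests the high-performance tier
spec:
  containers:
  - name: postgres
    image: postgres:9.5
    ports:
    - containerPort: 5432
```

Deploying it is unchanged from any other pod: `oc create -f postgres-gold.yaml`, with the backend reading the annotation at admission time.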
We then cluster together multiple appliances, beginning with three nodes for high availability and load balancing, and you go up from there: however many nodes you need to achieve the level of scale and capacity that your applications require. All of this is connected using standard 10-gig Ethernet networking in your existing data center environment. There is no forklift upgrade, and there is no complex network topology to set up. All of the resources, the compute, the network, and the storage, are pooled together and shared across the cluster, so there's workload mobility. A persistent volume can be accessed from node 10, then from node 2, then from node 15, whatever you may want, so you have the flexibility to move workloads around as you require. Network segmentation also applies across the entire cluster. And all of this plugs into a standard OpenShift Kubernetes environment, so application deployment and management is fully automated and fully integrated from the beginning.

What Chakri is going to be talking about now in the demo are the dependencies across the application from a CI/CD perspective. There's a lot more to making an application work successfully than simply pushing out a few microservices. Those microservices have dependencies on a backend stateful workload like a database, a key-value store, or a message queue. We can make sure those backend dependencies are satisfied in a fully automated way, through OpenShift, through Kubernetes, down to the infrastructure level. And therefore, when you deploy your set of microservices, the entire app, from the microservices down through the persistent data tier down to the infrastructure, deploys in an automated fashion and delivers guaranteed results. OK, so with that, I'm going to turn it over to Chakri for the demo.

Hi, guys. Good morning. Demo time; may the demo gods be with me. So what do I have over here? I have a two-node Diamanti demo cluster, and you can see the nodes; they're named webnode1 and webnode2. We have 32 cores available in each of these systems, 64 gig of memory, a bunch of storage, and you can see the IOPS and throughput you can expect from our hardware. We have a set of virtualized network adapters, which we can hand to the containers, and for bandwidth we have two links up on each of our boxes, which is about 20 gig. And we have a set of virtualized storage controllers, which we give to the workloads.

OpenShift can do amazing and cool CI/CD stuff on its own; I'm going to focus more on how we plumb into the OpenShift layer on the back side to pull it all together. I have a small script here, which I put together for this demo, and I'm going to use that. We saw the status of the cluster: we have two nodes, both in Ready state, and they're already part of the OpenShift cluster, so our infrastructure has already been plumbed into OpenShift. The first thing we do is define our own network. This is a Layer 2 network, which we integrate with OpenShift. What this gives you is the ability to assign a separate IP address to each and every container: when you spin up a container, it picks one of the IP addresses from this network, so each container gets its own routable IP address, just like a host would.
So we're creating a network. What we're defining here is that we want the network to be on the subnet 172.16.200.0, and I want to use a small range from that subnet, because not everybody is blessed enough to get a complete subnet all the time. Here we're using the addresses from 100 to 254, and we're using a particular VLAN for that network. Now you can see that we already have a default network, and I already have a pre-configured blue network in the system.

Now let's go look at the volumes. I have a simple MongoDB volume defined here, which you could use for storing any of your database data. I'm going to create one more volume, called test1, with a size of 10 gig. What this does is go to the backend, carve a separate volume out of the storage we listed in the cluster status, and make it available for use by OpenShift deployments, like pods and replication controllers.

The next thing is QoS templates, which we also call perf tiers. These are what let an application request the performance tier it's interested in. Here you can see that we have medium, best-effort, and high performance tiers. What this means is that whenever your application asks for one of these, it is always guaranteed those resources by the backend hardware, so you don't need to worry about somebody stepping in and stealing your resources. So I have high, medium, and best-effort here, and going forward I'm going to map these three performance tiers to three projects: gold, silver, and bronze. Then we'll see how this affects the overall performance of those projects. If you want to add another QoS template to suit your needs, we allow users to create up to eight. Here we created a new QoS template called low, where the user is requesting 1K IOPS and 10 meg of network bandwidth; you can see it over here with those values. For applications like your database and web tiers, you might have unique requirements, and this lets you define your own performance tiers. If you have something network-intensive, you can define a QoS template with high network bandwidth, and if you have a database that needs a lot of IOPS, you can define a QoS template that guarantees you that many IOPS.
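Before following the pods through, here is a hedged recap of the two building blocks just defined, the network and the QoS template, expressed declaratively. The object kinds and field names below are illustrative assumptions built from what was said in the demo (subnet 172.16.200.0/24, host range 100 to 254, a VLAN, and a "low" tier at 1K IOPS and 10 Mbps); they are not Diamanti's actual schema, and the VLAN id is a placeholder since the real value wasn't shown.

```yaml
# Hypothetical declarative equivalents of the demo's CLI steps.
# All kinds and field names are assumptions for illustration only.
kind: Network               # assumed kind
metadata:
  name: blue
spec:
  subnet: 172.16.200.0/24   # subnet from the demo
  rangeStart: 172.16.200.100
  rangeEnd: 172.16.200.254  # a small slice, since not everyone gets a full subnet
  vlan: 200                 # placeholder; the demo used a VLAN but didn't show the id
---
kind: QosTemplate           # assumed kind; Diamanti calls these perf tiers
metadata:
  name: low
spec:
  iops: 1000                # the 1K IOPS the user requested
  networkBandwidth: 10Mbps  # the 10 meg of network bandwidth
```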
Now let's look at the pods. To start with, these are the default pods in the cluster, used for monitoring purposes. Let's go add some more. I have a sample spec here; I don't want to walk through it, since it's a big YAML file, but what it does is create a bunch of Postgres containers along with, if you're familiar with Kubernetes, Postgres services, which are the endpoints you use to communicate with Postgres. As I said earlier, we have high, medium, and best-effort performance tiers already defined, and now I'm going to tie them to new projects in OpenShift. Here we're defining a new project called gold, and we're going to create four Postgres containers in it.

To create each container, we first created a Postgres volume, then created a claim for that volume, and then created a pod that uses the volume. So we define the storage in our layer, claim the storage, and make sure Postgres is actually using it. We've defined four gold pods here, four Postgres containers, and we're going to show the difference in the actual performance they get from our backend, because we guarantee the performance for these different classes of projects. If users have their own high-priority workloads, they can use these performance tiers to define them. So now we have four Postgres pods running, all using the high QoS class, which has guaranteed IOPS and network bandwidth. We also defined four Postgres endpoints, which you can use to talk to the Postgres databases.

The volumes we created use the FlexVolume backend; that's how we plumb the actual storage into Kubernetes and OpenShift. Here we see the four volumes we created, bound in the backend through the FlexVolume framework, which we added to Kubernetes. Then the Kubernetes claims, the traditional way of claiming a volume in Kubernetes and OpenShift, bind those volumes into the Kubernetes infrastructure and the pods.

Now I'm going to create another project, silver. This one uses our medium QoS tier, or perf tier. Again, I'm going to define four more Postgres containers. At the end, we'll run a load against all of these containers and show that the performance varies between them according to the requests we made with our performance tiers. So now I've created four more pods in the silver project, using our medium performance tier; we have four more endpoints for communicating with those Postgres pods, and you can see we created four more volumes and the claims that bind them.

Next we'll create another project, bronze, with four more containers, and this one uses the best-effort QoS class. Best effort is typically used for batch jobs: say you want certain applications to run only when nobody else is running, that's typically when you use best effort. The idea behind this, and where Diamanti provides unique value to the infrastructure, is that we make the network and storage elastic. If you have best-effort batch jobs, you can keep running them; as long as the higher-tier applications require their resources, they will get them, but if those applications are idle, your best-effort batch jobs can go ahead and use the spare resources. We don't inherently do any hard capping. We try to keep the resources elastic, so you can always extract all the juice from your box for your batch jobs. So now you can see we have four more Postgres pods running and the four Postgres volume claims, and next I'm going to create a test project that uses the low performance tier we created earlier.
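Stepping back to the FlexVolume plumbing just mentioned: here is a minimal sketch of the volume, claim, and pod chain the demo walks through for each Postgres instance. The `flexVolume` stanza is the real Kubernetes PersistentVolume API of that era; the driver name `diamanti.com/volume` and its `options` are assumptions standing in for Diamanti's actual out-of-tree driver.

```yaml
# Sketch of the per-instance PV -> PVC -> pod chain, assuming a
# hypothetical FlexVolume driver name "diamanti.com/volume".
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-vol-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  flexVolume:
    driver: "diamanti.com/volume"   # assumed vendor driver name
    fsType: ext4
    options:
      volumeName: postgres-vol-1    # assumed driver-specific option
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-claim-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # binds to the matching PV above
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres-1
spec:
  containers:
  - name: postgres
    image: postgres:9.5
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres-claim-1
```

The claim binds to the pre-provisioned PV by capacity and access mode, and the kubelet then invokes the FlexVolume driver binary to attach and mount the backend volume when the pod is scheduled.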
Back in the test project, I'm going to run a pod just to demonstrate that it gets exactly the resources we allotted to it. So now we have a total of 16 Postgres containers running here, and when we run the workload against them, we'll see that each gets performance according to its performance tier. If you refresh the OpenShift console, you can see those projects: the silver one, the gold one, the test one, and the bronze one may be hiding at the top. If we go into a project, you should see the four Postgres containers we started there, and if you go to the gold project, you'll see the other four. So all of the infrastructure we've plumbed in the backend is completely usable from OpenShift. Again, here are the volumes we created. Now we're back in the default project, and you can see I have a total of 16 Postgres volumes carved out of storage, carved out according to their projects and performance tiers. You can see the 16 pods that are up and running, all the endpoints we defined for those Postgres containers, and all the volume claims that were defined. And this is our cluster.

Now, once we log into our UI and start the workload, you should see a clear distinction. So I'm going to log into our web console. Our cluster, as you see here, is running at the virtual IP 10.100; I'm going to connect to that. This is our dashboard, and we've already started the workload in the backend. You can see that the four pods using the high performance tier are getting around 50K IOPS, the ones using the medium performance tier are lingering in the middle, and the low and best-effort ones are lingering at the bottom. What this means is that if you have high-performance workloads, you can define a high performance tier for them, and they are guaranteed to get the IOPS they requested. If any other workloads run rogue or try to step over the high-performance pods, they will never be allowed to. We do all of this using the performance queues in our hardware, so there's no way any software component can override it. That's the end of the demo, if you have any questions.

Excellent. Hey, thank you so much, Chakri. What we're going to do now is get some of your views, Chakri, since you've really been driving our Kubernetes community work along with some of the other folks on the team. I figured it would be really interesting to get your perspective on how the Kubernetes project has evolved over the past year, year and a half, to become more flexible for networking and storage. I think the first thing probably starts with scheduling, figuring out where to put workloads across the cluster. Can you talk about how Kubernetes extensible scheduling works?
Yes, that's actually one of the nice additions to the existing Kubernetes scheduler. If you go to any data center, they have their own unique requirements when scheduling resources. Some people may want rack awareness, some people may have data gravity, and some people may have other unique requirements defined by their environment. What the scheduler extension allows you to do is write your own plug-in to the existing scheduler, so that when the scheduler is deciding where to deploy a workload, it can ask your plug-in for hints. You participate in the decision-making, and the workload lands on the rack or the system you're actually looking for. At Diamanti, we use local storage, so by using this extension we enable data gravity: if your pod is using a particular volume, you don't want it scheduled on some other system, you want it running on the same node where the volume is available. So we hint to the Kubernetes scheduler and get involved in the decision-making process, and most of the time the workloads land on the same node where their durable storage is. It's also used for other purposes. For example, our network is finite; we have 20 gig of network per node. Say one node's CPU is completely used up but its network is still available. If I want to serve that node's storage over the network, I can place my workload somewhere else and use our remote storage protocol: we extend NVMe over the Ethernet layer, so the volume can be used remotely. Either way, we get involved in the workload placement decision, and that gives you the guaranteed performance tiers you're looking for from your infrastructure.

Got it. And the scheduler extension API, that went upstream as part of 1.2, was it? Yes, it was part of 1.2. All right, Kubernetes 1.2. And so we've implemented a scheduler plug-in based on that API, and other people could do the same thing against a standard open source API. Yes. Got it, okay, cool.
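For readers who want to see the mechanism Chakri describes, here is a minimal sketch of registering a scheduler extender. The `extenders` stanza in the scheduler policy file is the real Kubernetes 1.2-era API (passed via `kube-scheduler --policy-config-file=policy.json`); the extender URL and the particular predicates and priorities listed are illustrative assumptions for a hypothetical data-gravity service, not Diamanti's actual endpoint.

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {"name": "PodFitsResources"},
    {"name": "MatchNodeSelector"}
  ],
  "priorities": [
    {"name": "LeastRequestedPriority", "weight": 1}
  ],
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:8888/scheduler",
      "filterVerb": "filter",
      "prioritizeVerb": "prioritize",
      "weight": 1,
      "enableHttps": false
    }
  ]
}
```

At scheduling time the scheduler POSTs the pod and the candidate node list to the extender's filter endpoint; a data-gravity extender would return only the nodes that hold the pod's volume, and the prioritize endpoint can then rank the survivors, for example by spare network headroom.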
Second one: I wanted to get your views on FlexVolume. Why did you create FlexVolume in the first place, and how has it been received by the Kubernetes community? So when we started looking at writing our own plug-in for Kubernetes, Kubernetes had a nice collection of in-tree plug-ins, but their changeability was always tied to the Kubernetes release cycle. That was the primary problem we had. We're a startup, continuously innovating on our feature list, and we didn't want to be tied to the Kubernetes release cycle; we wanted to get new features to our users faster. That was the rationale, so we decided to do FlexVolume. What FlexVolume provides is an adapter for plug-ins: it enables any vendor to write their own plug-in, and the FlexVolume framework will call it, so whatever features you're looking for, you can enable through it. It gives you an ability that none of the in-tree plug-ins have. And I've seen multiple people start adopting it. Diamanti pioneered it, and we use it extensively for our local storage, the data gravity, and all the other cool features we have. I've seen other folks using it too; if you go through the FlexVolume PRs in GitHub, you'll see there's a lot of interest from a bunch of vendors. And we're making it more extensible in the 1.6 time frame, so look forward to more support in the FlexVolume area.

Got it, right. And I think, whether it's a small company, a big company, or another open source project, the ability to update FlexVolume capabilities without having to rebuild the entire Kubernetes code base and do a release is a real operational benefit, because you can take updates and quickly consume capabilities without a forklift upgrade of the entire system. Yes, yes. Excellent.

Hey, one last thing. We've talked about scheduling and we've talked about storage; what about networking? How does CNI fit into all this? So with CNI, I think CoreOS was the one who pioneered it, and we adopted it early on. We use it for our network backend, and what that gives us is the ability to plumb the hardware performance we talked about earlier directly into the container. I think we're the only ones out there who can take a 10-gig network and connect it directly to the containers. If you want a 10-gig network from your container, we can do it today, and nobody else in the market can; that's a unique value. And we enable all of that using the CNI infrastructure.
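To ground the CNI discussion, here is a minimal CNI network configuration sketch. The file format (a JSON config under `/etc/cni/net.d/` with `type` and `ipam` fields) is the standard CNI contract of that era, and `host-local` with `rangeStart`/`rangeEnd` is the real bundled IPAM plugin; the plugin name `diamanti` and the `vlan` key are assumptions, simply restating the demo's blue network, since the vendor plugin's real schema wasn't shown.

```json
{
  "cniVersion": "0.2.0",
  "name": "blue",
  "type": "diamanti",
  "vlan": 200,
  "ipam": {
    "type": "host-local",
    "subnet": "172.16.200.0/24",
    "rangeStart": "172.16.200.100",
    "rangeEnd": "172.16.200.254"
  }
}
```

The container runtime invokes the named plugin binary with this JSON on stdin for every pod sandbox, which is how each container ends up with its own routable IP on the VLAN-backed Layer 2 network described in the demo.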
Yeah, the way I think about it, and tell me what you think, is that we want to make sure OpenShift and Kubernetes users have open source plugins and APIs they can use to pick their best-of-breed technology choices for infrastructure. By upstreaming those APIs, we create a level playing field where everyone can use the same APIs and semantics, and there's no lock-in. That allows us, and others, to take our network and storage capabilities in our hyperconverged appliance and plug them directly into that OpenShift Kubernetes stack. Yes, our goal has always been, as you know, to be open and integrate with all this existing support, so we never do anything proprietary that's unique to our hardware. We always work with the open source ecosystem and integrate tightly with it.

Yep, that's great. Okay, for anyone who wants to learn more, we'd love you to reach out to us directly, but you can also go to our blog at diamanti.com/blog; we have some really nice posts that Chakri and others have written. And please get involved: join the Kubernetes SIGs. There's a storage SIG, where you can meet Chakri, a network SIG, and a cluster federation SIG, so lots of good things going on.

This has really been a great example of cross-community collaboration, working in the open to benefit just about everybody in the community, and all the different vendors trying to create these plugins as well. So it's wonderful work that you're doing, Chakri, and it's much appreciated. Now I'm wondering if there are any questions online, and I ended up having one question myself. This seems to be a great solution for some of the HPC computing requirements I keep fielding from our financial stock-exchange-type deployments, and I'm wondering if you're already working with people in the finance sector, Mark, and how this is working out compared with other solutions out there.

Yeah, that's a great question, Diane, thanks. So yes, we are working in financial services. We're deployed in financials, in media, and in communications companies, as examples; we've talked about folks like NBC Universal and MemSQL. Specifically around financials, there's obviously a large volume of real-time analytics required. Some of that is required by what I'll call the revenue-generating side of the house, things like trading, but there's also a lot of regulatory overhead that financials have had to deal with over the last few years, as everyone knows, and those regulatory requirements continue to get steeper. There's something called risk analysis, and over time the financials are having to do what's called intraday risk analysis, which means that beyond just checking a bank's positions at the close of trading, they need to be doing it throughout the day, and those requirements are getting more and more real-time. So you have a large mountain of data that has to be sifted through, and you have real-time events, trades and positions, coming in continuously. Some people implement that as part of a real-time data pipeline; there are lots of different implementations, but they all have in common that they need to scale quickly and deliver performance economically. Containers are a really good approach for that, because they're a very effective vehicle for delivering multi-tenant, high-throughput applications with bare-metal efficiency, since containers can give you direct access to infrastructure, and that's something we specialize in. So we're seeing more and more use cases like database services, analytics services, and real-time data pipelines in financial services, deployed in a standard open source manner, where we can provide high density as well as guaranteed throughput and processing for data-intensive apps, on the compute, network, and storage sides. That's turning out, I think, to be a key use case for us. So that's a really timely question.

Yeah, I think it's actually really timely, because we get a lot of questions about whether Kubernetes is ready for that market sector, and I think you answered it: yes, and Diamanti is helping make that a reality. Maybe one of those deployments would be a good topic for a later briefing, from the customer's point of view, on how they're using and deploying it, so we'll look forward to pulling in that use case and exploring it a little deeper. We would definitely love the opportunity to come back and talk about that; it's a really interesting set of applications. Well, we get hit up about that a lot, especially at Red Hat, because with all the RHEL and Linux and cloud work going on today, that's one of the market sectors we cover with OpenShift, and being able to bring Kubernetes to it is really a game changer. I'm happy to do that with you guys in the future. I'm looking in chat to see if there are any other questions. I think you've answered them all, Chakri, in the demo and in the flow, and we'll definitely be looking forward to seeing you... oops, here's one that just popped in: does the tool work with OpenStack? I think the answer is yes, but I'll let Chakri answer that.
Yeah, it definitely does. Basically, we use all standard interfaces, and the questioner may know that OpenStack has very close integration with Kubernetes and the container stack via Magnum, so absolutely, we're able to work in that environment. I mean, really, like I said before, through the contributions that Chakri and the team have made around scheduling, network, and storage extensions, the goal is absolutely to have a standard infrastructure model that works regardless of what type of application you're going to put on it, whether it's OpenStack or OpenShift on OpenStack. We want to give our users that flexibility.

Well, that's one of the big use cases for OpenShift right now, deploying OpenShift on OpenStack, and everybody's over at the OpenStack Summit this week. One of the topics we dove into in a past briefing was deploying a thousand nodes of OpenShift on OpenStack. So if people are looking for examples of doing that, there are some great blog posts; just Google "a thousand nodes of OpenShift" and you'll find them. That deployment was on the CNCF cluster, and we've done some testing of OpenShift there; we're just about to do some benchmarking comparing the OpenShift deployment to bare metal on the same CNCF cluster. There's also some great work Dell has done around an OpenShift on OpenStack reference architecture. So making it all work together seamlessly with Diamanti will be an interesting use case too.

Yeah, thanks. One thing before we close that I wanted to make sure everyone was aware of is Diamanti's Try Before You Buy offer, a limited-term offer for qualified users. Basically, we can help get your application onto Kubernetes and OpenShift with Diamanti container converged infrastructure and let you observe the service level guarantees at your own site for 30 days. This offer is available to users who have an immediate production need, but only for a limited time, so just reach out if you'd like more information. You can write to info@diamanti.com, and we'd be happy to work with you.

Awesome. Well, thank you very much, Mark and Chakri, for today's briefing. We look forward to hearing more, and perhaps to presenting a couple of the financial use cases in the not-too-distant future. This session will be blogged, and the video link will be in the blog early next week; we'll send it out again to all the folks who participated. So thank you very much for today, and we'll look forward to hearing more in the future. Thank you so much for having us, Diane, and thanks, everyone, for joining.