All right, hello, how's everyone doing today? Thank you for joining us. In today's session, we're going to talk about NetApp's experience using OpenStack and Docker to make the development and deployment process faster, easier, and more reliable, and about potential solutions to some common DevOps problems. Now, speaking of DevOps: I know you work with our DevOps teams, so what are some of the requirements you hear from these teams in the field? Yeah, that's a great question. In the interactions I've been having, more and more customers are asking, "I have so many developers and so many different tools in my environment. How do I move into a DevOps process with a CI/CD workflow?" And if you look at those environments, the biggest challenge is usually silos of infrastructure: every time you finish one phase and move on to the next, you spend a lot of time getting to a standard platform that carries you right from development all the way to deployment. And if you look at this picture, and there's a slide over here, there are so many kinds of tools available. Developers write their code on a Mac or on Windows, and then they come back and say, "Hey, my tests passed on my Mac, so I don't know why it's not working on Windows." The code behaves completely differently, and the dependencies may not be available on the other platform. So the big requirement right now, for consolidating onto a standard platform, is making sure that an application developed on one platform works on a different one. And that's where containers come in: the scalability and agility with which you can develop and deploy applications is what makes containers so important.
And then, from an OpenStack perspective, you get a more standard infrastructure where you can go right from development all the way through to deployment. So that's good. So, Bakash, I know you were talking about containers, which means we're really talking my language here. Containers are fantastic for a number of reasons, right? Not least of which, they expedite the process. They abstract the application from the underlying operating system. They remove friction between what's happening with operations and with developers. So, for example, maybe I want to use a public cloud in production when I need additional compute resources, and OpenStack for my development environments when I want cost efficiency, right? Absolutely. What it means is that with that abstraction layer in place, I can deploy on-premises, leveraging OpenStack, leveraging VMware, leveraging whatever cloud or platform I happen to have available. The container provides that abstraction layer, so I can take the exact same image, push it up to Docker Hub, for example, and now I can deploy it onto a public cloud: onto AWS or Azure or any of the other service offerings available out there. I can create a containers-as-a-service offering inside my own organization if I choose to. And from a developer perspective, it also means I eliminate the "it runs on my laptop" syndrome. I don't have to worry about where it's running, how it's running, or whether there are dependency issues across all of these different environments. It just runs. Sounds good. Do you have a demo to show us? We do. Earlier this year at NetApp's conference, Insight, we went through a demo that shows: what is it like to be a developer? What is it like to do these types of things? So I am taking on the role of a developer here. And let's be fair, I am not a very good developer.
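As a minimal sketch of that portability, here is a hypothetical Docker Compose file; the service name, image name, tag, and volume name are all made up for illustration. Because the application is pinned to a specific image tag, the same definition deploys identically on a laptop, on OpenStack, or in a public cloud:

```yaml
# Illustrative compose file; every name here is a placeholder.
version: "2"
services:
  web:
    image: exampleorg/webapp:1.0   # same image, pulled from Docker Hub, everywhere
    ports:
      - "8080:80"
    volumes:
      - appdata:/var/lib/app       # data decoupled from the container itself
volumes:
  appdata:
```

Rolling back to an earlier release, as discussed later in the session, is then just a matter of editing that image tag and redeploying.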
I'm actually an operations person. But what you see here is making some code modifications. We're literally just adding a feature, adding some things to our application. And once we've made those changes, we want to package the application into our Docker container. So the first step is to do a build. This is a pretty standard process: docker build pulls in all of our new code and ensures that all of those resources, libraries, binaries, and tools are available to us. We push it up to a public repository so that it's now available across teams and across infrastructures. And then we can simply leverage Docker Compose, which defines our application, to deploy it. It doesn't matter if I'm deploying on my laptop sitting here on the desk today, and it doesn't matter if we're deploying into a public hyperscale cloud. We can redeploy the application the same way every single time. And the great thing is, it includes storage. Yeah, that's really fast. It's so cool. So, don't you think you guys missed anything? What are you talking about? I told you, I'm not a very good developer. I think you missed the testing part. I just saw you make changes and then go and deploy. We didn't miss that. In this instance, yes, I'm a terrible developer: I actually introduced a bug. And not just any bug, I introduced a bug that causes data corruption. So yes, testing is a critical part. It is something that should be part of any and every development cycle happening inside any organization. Right, but hold on. Because we're using containers, shouldn't we be able to just revert to a previous version of the application? Absolutely, absolutely. Rolling back to a previous version of our code release is extremely simple with containers.
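The build, push, and deploy steps just described can be sketched with ordinary Docker CLI commands. This assumes a working Docker environment and a compose file for the app; the image and registry names are placeholders:

```shell
# Build a new image with the updated code baked in (name and tag are illustrative)
docker build -t exampleorg/webapp:1.1 .

# Push it to a public registry such as Docker Hub, so any team or cloud can pull it
docker push exampleorg/webapp:1.1

# Deploy the application, defined by the compose file, the same way on any host
docker-compose up -d
```

The same three commands work unchanged whether the target is a laptop, an OpenStack tenant, or a public cloud, which is the portability point being made above.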
It's as simple as changing that version number inside the Docker Compose definition. Right, and the same goes for your data. If you do introduce a bug that corrupts all your data, you can do pretty much the same thing with the data as well, and the NetApp Docker Volume Plugin can help you with that. If you're not familiar with it already, it's an open source project that allows you to use NetApp's Data ONTAP, over iSCSI or NFS, as Docker volumes. And the great thing about it is that you don't really need to learn anything new: you can use native Docker commands with it. Now, as far as advanced functionality goes, if you're familiar with the concept of volume types or share types in Cinder and Manila, we have a similar concept here. You can define volume types with Docker and use specific capabilities for your volumes. These volumes, again, will be available across multiple hosts. And if you're already using a volume with your Docker containers, you can import it into the management layer of NDVP. Now, since we're short on time and there's a lot to talk about, you can find more information at www.netapp.io. All right, so to go back to Soli's point, let's say he did introduce a bug that corrupts all of our data. Let's see how we can fix that and how easy it is. We'll start by listing all of our snapshots; all of these are native Docker commands. Then we'll just create a new volume using the latest snapshot. And again, it's super simple: a one-line command. We pull the latest snapshot's ID, hit Enter, and we're good to go. Now, all we have to do is update our compose file, and as soon as that's done, we restart our Docker services and we're good to go. So, to bring all these things together: there was a bug, there was an error, and there has been a fix.
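A rough sketch of that recovery flow with the NetApp Docker Volume Plugin might look like the following. This assumes the plugin is installed under the driver name "netapp"; the volume and snapshot names are invented, and the clone options shown are how cloning from a snapshot is typically expressed with the plugin, hedged here as illustration rather than a verbatim replay of the demo:

```shell
# List the Docker volumes managed by the plugin
docker volume ls

# Create a new volume cloned from the latest snapshot of the corrupted volume
# (volume name, snapshot name, and options are illustrative)
docker volume create -d netapp --name webdata_restore \
    -o from=webdata -o fromSnapshot=hourly.2016-10-25_1105

# Point the compose file at webdata_restore, then restart the services
docker-compose up -d
```

Note that these are all native docker volume commands, which is the point made above: no new tooling to learn.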
Now, all of this is happening on somebody's laptop, a personal computer, and the fix has been made. But ultimately, how do I build a continuous integration process? That's a very critical piece, because the agile workflow matters: every time you introduce a bug or an error, you need to fix it at the very earliest stage of the development process. And that's where people start to move into the CI/CD, continuous integration and continuous delivery, process, which is more agile, much more effective at identifying bugs, and helps developers fix them. Now, if you look here, we have a self-service portal tied to your OpenStack infrastructure for staging, and staging is where you're actually doing all your testing, user acceptance testing, and QA. But prior to that, developers can also run their own tests in their own user workspaces, in the personal and private builds that they're creating, and finally move into the integrated builds that are part of the CI process. All of this can be done through the OpenStack environment. Now, on the left-hand side, you could be using any SCM tool: it could be GitHub, it could be GitLab. But what actually makes this workflow more productive is the CI tool that you use, and one of the most common is Jenkins. And that's where you use NetApp technologies and OpenStack, with the Manila and Cinder drivers, to clone and test some of those environments. This can be done very rapidly and with far less space, because it's all thin provisioned. So this entire integration, from Jenkins to OpenStack to ONTAP, which is NetApp's product, gives you the agility not only to identify bugs but also to fix them a lot faster than you normally would with traditional methods.
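To make that CI stage concrete, here is a minimal, hypothetical Jenkins declarative pipeline. Every name in it, the image, the test script, the stage layout, is a placeholder; it simply illustrates the build-test-push loop described above, where a failing test stops the bad image from ever being pushed:

```groovy
// Illustrative Jenkinsfile; all names are placeholders.
pipeline {
    agent any
    stages {
        stage('Build') {
            // Bake the new code into a uniquely tagged image
            steps { sh 'docker build -t exampleorg/webapp:${BUILD_NUMBER} .' }
        }
        stage('Test') {
            // Run the test suite inside the freshly built image;
            // a non-zero exit fails the build and halts the pipeline
            steps { sh 'docker run --rm exampleorg/webapp:${BUILD_NUMBER} ./run_tests.sh' }
        }
        stage('Push') {
            // Only reached if the tests passed
            steps { sh 'docker push exampleorg/webapp:${BUILD_NUMBER}' }
        }
    }
}
```

In the workflow described in the talk, the Test stage is where thin-provisioned clones of production-like data, via the Cinder and Manila drivers, would be attached to the test environment.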
And just to summarize: with OpenStack specifically, you get a standard platform from the time you start developing your code until you deploy it. Second, you also get security groups for segmenting your workloads. As in the example I gave at the beginning, parallel builds are higher-density workloads with heavy metadata, and running them takes a lot of time. That workload is very different from a single user build, or a QA process, or a UAT process. So these workloads are very different; how do you isolate them? OpenStack is one of the ways you can isolate these workloads. The other thing is, suppose you need more cores on your VMs for your build process. You can automatically spin VMs up based on the policies that you set, and once the work is done, you can spin them down. That way you keep your resource usage much lower and more cost effective; you don't have to keep your VMs running when the job is already done. And if you look at the demo now, we go through the process of setting up the Jenkins environment, creating a VM and then spinning up a Docker container. We're provisioning on an OpenStack environment, selecting all the different components and settings to spin up a Nova instance. Now we start the Jenkins instance through a docker-compose command right here. You log in, and your Jenkins instance is up and running. And then we start all the integration builds, making changes here, going through the CI process.
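The spin-up/spin-down pattern described above can be sketched with the standard OpenStack CLI. The flavor, image, network, and server names here are placeholders; in practice this would usually be driven by a policy or orchestration layer rather than typed by hand:

```shell
# Spin up a larger build VM when a heavy parallel build is queued
# (flavor, image, network, and server names are illustrative)
openstack server create --flavor m1.xlarge --image ubuntu-docker \
    --network build-net jenkins-worker-01

# ... the build job runs on the worker ...

# Tear the VM down when the job completes, so resources are not left idle
openstack server delete jenkins-worker-01
```

This is how the cost argument above plays out in practice: build capacity exists only while a build is running.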
Mistakes and bugs in the code can then be identified at a much faster pace, and at an early stage of your development process. So ultimately, our goal throughout this demo is to show that DevOps isn't a single product. It's a process; it's a goal. It's getting to the point of doing continuous integration and continuous deployment throughout your application lifecycle. And storage is a critical part of that. Storage is, well, where your data lives. Network folks like to say it's a bad day when the network goes down and nothing can communicate, but it's a terrible day when you lose data. Data is money. So if we can leverage that data in more interesting, more critical, more unique ways to better prepare our applications for production, and to enable the business to do what the business needs to do, that's a win for the entire business, not just for IT. What we've shown here today is leveraging native tools, leveraging Docker, leveraging Jenkins, communicating via the APIs and the plugins we have provided, to facilitate those processes: to make it easier, whether you're a developer, an operations person, or an infrastructure administrator, to consume those resources and make them available to the application and ultimately to the business. So in summary, please come by our booth; NetApp is located in booth A30. We would be happy to talk about how storage is critical to the DevOps process. Additionally, in the corner of most of our slides, you saw the logo for thePub, netapp.io. This is our platform for publishing, engineer to engineer, how to accomplish these things, leveraging code, examples, and real-world experience that our employees and our partners want to share with you. So thank you very much for your time today. We greatly appreciate it. Thank you. Thank you.