All right folks, thanks for coming. I'm Bob Calloway, Principal Architect of All Things OpenStack at NetApp. Today we'll be talking about some work NetApp has been doing within the OpenStack community, both to make the operational experience of OpenStack a little bit better and to help people learn about OpenStack. We've talked a lot at this summit about how to get started with OpenStack, but once you get it up and running there's the whole operational side: how do I give it the care and feeding to make it successful as a true platform going forward? So I want to give you a glimpse into what we're doing to focus on that operational dynamic, and then also talk about some work we're doing in the community at large to actually help people learn about OpenStack.

Today we have a couple of speakers. We have Anurag Palsuel, and I'm sorry, I probably screwed that up, Anurag here from NetApp, who's going to talk about some work that we've been doing with Cinder. And then we have Dan Radez from Red Hat, who will talk about the work that we're doing with TryStack. With that, I'll hand it off to Anurag.

Thank you, Bob. Hi, I'm Anurag. Today I'll be talking about some announcements we made around monitoring and operational control of resource pools in Cinder, and we have a small demo at the end that we'll run through.

Before we go ahead, a quick bit of background on what resource pools really are in Cinder. In the Juno timeframe, Cinder introduced the resource pools concept, which allows the backend drivers to expose the functionality of a storage backend in terms of pools. Each pool can have its own capability and capacity attributes, and eventually a Cinder volume lands on a resource pool; the pool really becomes the unit of scheduling for the Cinder scheduler.

All of this is fine, but what does it lead to? Once you have pools, what are the challenges in monitoring them? What if a pool runs out of capacity? You could end up with write failures if your volumes are thin provisioned. What if the storage backend changes over time? That can cause performance issues with your Cinder volumes. So how does an administrator learn about that, and how can we automate administrative actions based on these events?

Here's what we did: we enabled Cinder to send some storage-pool-related metrics over to Ceilometer, which can then be alerted upon. Ceilometer already provides alarming and alerting functionality over the metrics it collects, so by sending these pool metrics to Ceilometer they can be tracked and alarmed on, and an administrator can get automatic alarms on them.
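For context, the numbers behind those pool metrics come from the stats that a pool-aware backend driver reports to the Cinder scheduler. The sketch below shows the general shape of that report; the backend name, pool name, and values are made up for illustration and are not from the demo environment.

```python
# A minimal sketch (illustrative values only) of the pool-aware stats a Cinder
# backend driver returns from get_volume_stats(). The scheduler treats each
# entry under 'pools' as an independent unit of scheduling, and the capacity
# figures here are what the new Ceilometer pool metrics are derived from.
pool_stats = {
    'volume_backend_name': 'netapp1',          # backend name from cinder.conf
    'vendor_name': 'NetApp',
    'storage_protocol': 'NFS',
    'pools': [
        {
            'pool_name': 'aggr1_flexvol1',     # e.g. a FlexVol on the filer
            'total_capacity_gb': 1.0,          # capacity attributes the
            'free_capacity_gb': 0.8,           # scheduler weighs at placement
            'reserved_percentage': 0,
            'thin_provisioning_support': True, # capability attributes that
            'QoS_support': False,              # volume types can filter on
        },
    ],
}
```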
I'll go through a quick demo. What we'll do in the demo is see how we can set up alarms on these new metrics, and how we can use the Cinder, Heat, and Ceilometer integration to take automated actions when an alarm is triggered.

First, we list the existing resource pools in Cinder. We're using the existing get-pools API to show what pools are registered with Cinder today. As you can see, I'm concentrating on one of the pools, which has a total and free capacity of a little under 1 GB. Next, a Ceilometer meter list shows the new metrics that we have added. As you can see, there are storage pool.free and storage pool.total metrics that have been added as part of this functionality.

Next we'll try to set an alarm on one of these metrics. Here I'm setting an alarm on the storage pool.free metric, with a threshold value of 1 GB. Right now the free capacity is less than 1 GB, so the alarm should be triggered almost immediately. It takes a couple of minutes for Cinder to refresh the data, so at the beginning the alarm shows up as insufficient data, and once Cinder has refreshed the data you can see that it triggers and the state goes to alarm.

Next, we'll see how to set up these alarms so that you can take some action when they fire. In this particular demo, when such an alarm is triggered I resize the NetApp FlexVol that backs the resource pool in Cinder. I'm using a custom Heat template, attached as an HTTP callback with Ceilometer: when I create the alarm, I attach an alarm action that points to the custom Heat template I've written to resize the FlexVol. So as soon as the alarm gets triggered, the template runs automatically and the alarm state should go back to OK, because the pool gets resized above the threshold value. As you can see, the alarm is triggered here and the state goes to alarm. If you run the same get-pools API now, you will see that the size of the pool has increased to more than 1 GB, and if you wait a bit and look at the alarm list, the alarm also goes back to OK.

So this gives you an idea of how this enables you to manage your capacity, and it can be extended to do more things, like alerting on capability changes too.
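To make the alarm mechanics concrete, here is a rough Python sketch of creating that kind of threshold alarm with python-ceilometerclient. The meter name, credentials, threshold, and webhook URL are placeholder assumptions for illustration; the exact meter spelling and the Heat callback endpoint used in the demo may differ.

```python
# Rough sketch: create a Ceilometer threshold alarm on the pool free-capacity
# meter and attach a webhook alarm action. All names and URLs are placeholders.
from ceilometerclient import client

ceilo = client.get_client(
    2,
    os_username='admin',
    os_password='secret',
    os_tenant_name='admin',
    os_auth_url='http://controller:5000/v2.0',
)

alarm = ceilo.alarms.create(
    name='pool-free-capacity-low',
    type='threshold',
    threshold_rule={
        'meter_name': 'storage_pool.free',   # assumed spelling of the new pool meter
        'threshold': 1.0,                    # fire when free capacity drops below 1 GB
        'comparison_operator': 'lt',
        'statistic': 'avg',
        'period': 60,
        'evaluation_periods': 1,
    },
    # Webhook invoked when the alarm fires; in the demo this pointed at a
    # custom Heat-driven action that resized the backing NetApp FlexVol.
    alarm_actions=['http://controller:8000/v1/signal/pool-resize'],
    repeat_actions=False,
)
print(alarm.alarm_id, alarm.state)
```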
So that was the demo. Let me switch back to the slides. As for current status, the code has already been submitted upstream, and I think it should be merged upstream soon. In addition to that, we are looking at a holistic storage management story: how we can do complete end-to-end management for the storage controllers too. That's what I had; I'll hand it back over.

Thanks, Anurag. Before Dan comes up, one of the things I'd like to mention, in terms of how we're engaging the community, is that NetApp is working with a large number of other entities out there, including the distribution providers in the market. We work with them to make sure that the out-of-the-box experience for deploying OpenStack is as optimal as possible, considering concerns around high availability and ensuring that you get good performance and good management capability in getting that initial OpenStack deployment to be successful. In that light, we also offer several reference architectures, in two different flavors. The first is a generic best-practice guide on how to deploy the individual OpenStack services, obviously with storage in mind, to ensure success. Then we have specific guides, built off of that generic reference architecture, that help you actually install OpenStack from a specific distribution. So we have documents for Red Hat, SUSE, and several others.

When it comes to OpenStack, another topic that frequently gets raised is how to automate everything in the data center. If you're moving toward the fully self-service infrastructure capability that OpenStack provides, you have to minimize the amount of human interaction it takes to care for and feed a system like that. So one area NetApp has also been focused on is working with automation partners like Puppet Labs and Chef Software to develop automation tooling that assists in managing the lifecycle of the storage itself. Today, if you go to the Puppet Forge site, you can find Puppet modules for managing our clustered Data ONTAP systems as well as our 7-Mode systems, in addition to our E-Series platforms. All of that is available for download today, and you can start to integrate the automation of those platforms into the same processes that folks are using not only to deploy OpenStack but, frankly, to manage their entire data centers. With Chef it's exactly the same story: in the Chef Supermarket you can find cookbooks for both clustered Data ONTAP and E-Series. So we're really trying to make sure that when storage underpins future infrastructure deployments, which is often an OpenStack endeavor, the automation part isn't forgotten.

So in that light, and thinking about how we get people successful with OpenStack, I'll invite Dan to come up and talk about what we're doing with the OpenStack Foundation around trystack.org. Dan?

Thank you. My name is Dan Radez. I'm on the OpenStack team at Red Hat, and one of the projects that we've been working on for the past couple of years is trystack.org. This was a project started by the OpenStack Foundation. I'm not sure exactly when it started, but Red Hat got involved around the Folsom-to-Grizzly transition period, as Grizzly was getting ready to go GA. We came in and have been maintaining RDO, our community release of OpenStack, on trystack.org.

TryStack.org is a publicly accessible, free OpenStack cluster. You can go out there right now and log in. There are restrictions on how long instances can run and how much storage you can have, but it's intended for demonstration or development purposes, things where you don't need to run a production workload long term, because if people did that they'd misuse it horribly. The idea is that if you have a demo you want to do, or some development, or you just want to try it out, you can go and see what OpenStack is all about. What we've committed to doing is keeping trystack.org installed with the latest GA of OpenStack.

I mentioned that it's installed with RDO. RDO is our community release of OpenStack: the upstream community cuts a release of OpenStack, and when those upstream releases are cut, we take that source code, wrap it in RPMs, and distribute it. So whether you pull directly from source and install OpenStack or install through RDO, you're getting the same OpenStack; we're just doing it through packaging. For those of you who are familiar with RPM packaging, that makes it much easier to get it up and running. There are two installers involved in RDO right now: Packstack and RDO Manager.
Packstack is intended for smaller installations: a proof of concept, an all-in-one, "I just want to try it out and install it." RDO Manager is intended for larger installations, multi-node, or something that's going to be maintained longer term. TryStack is using these RDO bits, and as our community builds these RPMs, we pull them down and make sure they're running on TryStack.

Last year we ran into the trouble that the hardware that had originally been donated, before we got involved, was starting to fail. It was many years old, the drives were failing, and we'd lost contact with some of the original donors; we'd actually lost contact with the people who could get us into the data center to maintain the machines, which was troubling. So in the process of people turning over and losing access to much of the original hardware, we began to collect new hardware. This being an OpenStack Foundation project, they wanted to keep it going, so the Foundation bought a new set of servers and put them in some Red Hat racks. Red Hat is donating pipe and power for them, and NetApp has donated a filer. So we're going to do some storage integration with the NetApp and show off some of the features in OpenStack with Glance, Cinder, and Manila: how out of the box you can get generic operation, and then the real, incredible power you get if you use that NetApp filer on the back end to do things like making sure the Glance images are snapshotted or cloned correctly on the filer, thereby reducing the amount of data that gets moved back and forth between your control node and your compute nodes. Those Glance images are available immediately through the filer to the compute nodes; as soon as they're in Glance, they're available across the entire cluster.

TryStack itself is managed by Foreman and Puppet at this point, and our intent is to move to RDO Manager in the future; we're working through that transition right now. But under the covers it's all using the same Puppet modules, the same Stackforge modules. As part of RDO, Red Hat works really hard to take the Stackforge Puppet modules, test them, and vet them, and then we roll them into an RPM we call openstack-puppet-modules. It's a snapshot in time of those Puppet modules, so if you're using the Stackforge Puppet modules and would like a tested collection, you can rely on that set, knowing that RDO's continuous integration has tested those Puppet modules and that they all work together. So we get the benefit of using those Puppet modules, and we're currently using Foreman to do both our provisioning and our configuration management. We have run every major release since Folsom, and that's about where Red Hat came in; I think I've already said that.

As part of this refresh, the Foundation has donated three Dell C6105s. These are what they call high-density cloud boxes: 2U servers, and I've got a picture of them that I'll show you, with four servers in each chassis. Those four servers share power, but all the networking, CPUs, memory, and storage are independent. We've started with three of those, and we're actually adding two more, because RDO CI runs on TryStack. So all of the RDO bits that you're using when you use the RDO distribution have been tested in our continuous integration on TryStack itself.
Those results are publicly available, and they have been tested on a public cloud. Once those new machines get in and we finish racking them, we'll have five of these Dell boxes. Right now we've got two Cisco 4948s to take care of the networking, and we've also got two more switches coming, some Juniper switches that have been purchased so we can test more in-depth networking cases with RDO; we're going to look at doing some bonding and other networking like that. We've got the NetApp FAS2500-series filer, and the pipe, power, and maintenance right now are provided by Red Hat.

Here's the selfie of the gear; it was really excited that day about being in the rack and said, take a picture of me. There's the FAS at the bottom, and then those three Dell servers. You can see the four sets of storage on the front, and on the inside there are four independent server motherboards, each with eight cores and 96 GB of RAM, and I think a terabyte of space per machine; they share power. And then there are the two switches at the top.

As part of our refresh plan, we wanted RDO Kilo on RHEL 7 for Vancouver, and we've done that: we have everything installed and operational. Unfortunately, we haven't yet been given the new IP space that we'll go live with, but the cluster is up and running and happy, and we've been working very closely with NetApp to get Manila integrated. We've got Manila installed by way of the RDO packaging, configured and working with the generic driver, and the NetApp driver is in process right now, so we'll be able to show off both of those back ends. We're doing the same work through Cinder and Glance to get the NetApp fully integrated into that cluster on Kilo, again all done through the RDO packaging, so we're not pulling anything special out of a hat to make this happen. We're trying to do it with the vanilla upstream bits, just wrapped into RPMs and dropped onto this cluster.

Right now the authentication on TryStack is done through Facebook. We've had lots of complaints from folks who don't want to use Facebook or don't want to have an account; we've had lots of complaints about Facebook, basically. The OpenStack Foundation has since come out with OpenStackID.org, which I started hearing about around a year ago, and there's new integration in Keystone that's going to allow us to replace the Facebook authentication with OpenStackID. So instead of logging in with your Facebook credentials, you'll log in with your OpenStackID credentials, the same ID that you use for voting and all the other authenticated things you do through the OpenStack Foundation. It'll be centralized: you'll be able to do all things OpenStack and not have any Facebook involved. You can cheer for that if you like. Yeah, there we go.

We've got Manila added, so that's already taken care of, and we're continuing to expand that capability. And then, most importantly, we're ready to start expanding the community. Over the past couple of releases that Red Hat has been working on this, we've worked really hard to get things stable and have a platform that we can really show off, with vanilla OpenStack running through the RDO distribution. But we've gotten to the point now where, as there are more and more OpenStack projects, we're having trouble keeping enough resources on it, because OpenStack is big and it needs to be taken care of.
And as we add more things to this installation, we need more hands to take care of it. So we're looking for operators to get involved. If you're interested in donating your time to the community and have the availability, we're absolutely looking for folks who can help take care of the monitoring, be available for future upgrades, and be involved in operating this cluster. I think that's all we have. Is there anything else you had, Bob?

No, thanks, Dan. Any questions for Dan or Anurag on the topics today? You guys have been great, thanks for coming, and thanks for sticking it out right to the end. For the folks watching this on YouTube, this is the absolute last session of the last day of the summit, but we've had a nice audience. So thanks for coming. Appreciate the time, and safe travels home.