Right, I think we're good to go. Well, thanks everybody for coming. You are the true die-hards of the summit, coming on a Thursday afternoon. I'm Jeff Applewhite, a technical marketing engineer at NetApp, and I have Arthur Berezin from Red Hat.

Yes, I'm Arthur Berezin, a senior technical product manager with Red Hat, and today we're going to present our joint solution from NetApp and Red Hat.

So this is our agenda. We'll talk a little about OpenStack Platform 6, talk a bit about NetApp's integrations with OpenStack, and then go into some detail on actually deploying Red Hat OpenStack with the NetApp integrations for Cinder. I'll show you some screenshots and a little demo of how you can actually get that done. So I'll turn it over to Arthur for some Red Hat OpenStack Platform 6.

Thank you. So, Red Hat Enterprise Linux OpenStack Platform 6. You all know what OpenStack is; you've been here throughout the week listening to sessions on the various components, going over them and studying each and every one. In a single phrase, OpenStack is basically a platform, or a framework, for building various cloud-based solutions. So why would you choose Red Hat as a distribution for running OpenStack in your environment?
As you've all seen, OpenStack is a framework for running various infrastructure-as-a-service components: you can run storage as a service, compute as a service, and so on. Now, all of these components are basically Linux services that run on the base operating system. The important part is that each and every one of those components uses existing components and subsystems of the bare operating system it runs on, so it's really hard to decouple the OpenStack components from the base operating system, since they're all built on technologies that exist within the base operating system itself. Part of what we do is co-engineer the various components of OpenStack with the Linux kernel and the various subsystems of the bare OS itself. This is really important, because the integration is really tight between the various components. You have the hardware, the bare metal, which runs the actual operating system, and on top of the operating system you have OpenStack running. Now, OpenStack uses those components, for example KVM.
It uses the network stack, for example, to implement various parts of Neutron. It uses namespaces and SELinux, for example, to implement the security components of OpenStack. It's a whole integrated solution in which each component uses another component to implement the various as-a-service pieces of the distribution you use.

So that's one part. The other part is the partner ecosystem around it. OpenStack always comes with various plugins and drivers; you never use vanilla OpenStack by itself. You always use some sort of drivers or extensions to integrate the environment with your real-world applications and the real-world environments you already run. So we run a really extensive partner ecosystem through which we certify various vendors to use the various drivers and run on top of OpenStack. Part of that is testing, and making sure the solutions are fully functional and work as they should, and we also work with the vendors to solve problems if they come along once a customer encounters such issues.

As part of this we also have an installer that makes the whole deployment of OpenStack much easier, and this is really key to deploying OpenStack quickly into an environment. The installer is an intuitive, wizard-based deployment tool for OpenStack Platform 6 called the RHEL OSP installer. It's based on Foreman plus Puppet modules, with a web UI wizard that helps you navigate the deployment process of an environment, and it deploys in a highly available configuration, which I'll go over a bit later. The installer also supports various plugins and extensions, so it obviously supports the NetApp Data ONTAP devices, which Jeff will
cover later on, and this really helps and makes it much easier to deploy a production environment in such configurations.

Red Hat joined the OpenStack community almost at the very beginning of OpenStack, in July 2011, and we've been a top contributor since the Grizzly release. We're very active in the community, as you've all seen throughout the sessions today, and if you had a chance to glimpse at the design sessions, you've also seen Red Hat participants within the various components. We've been a top contributor since the Grizzly release, but I think the important piece is that Red Hat is not contributing to one particular component; we're working across the board, contributing to all of the components relatively equally rather than choosing specific ones. This actually helps us support our customers, since we have relatively good influence over the community and a really good overview of what our customers are doing in their real-world production environments. That helps us influence the way the community implements the various features across the components, and it also helps us drive new features within those components, since we have relatively good influence and contribution across the board.

Red Hat is an open source company, and this is basically part of our DNA. We always contribute to the community first, and from those communities we pick and choose and create our own community distribution.
So for example, we have the OpenStack community, and from that we build the RDO distribution, which is really close to the upstream bits, basically an RPM-based distribution of the upstream OpenStack release. From that we produce an enterprise-grade distribution called Red Hat Enterprise Linux OpenStack Platform.

The difference between RDO and the community version is that RDO is an RPM-based distribution that runs on the major enterprise-grade operating systems: it runs on RHEL, and it also runs on CentOS, Fedora, and other RPM-based operating systems. And it follows the upstream life cycle really closely, so with a major OpenStack release every six months, we'll have an RDO-based version probably a week or a couple of weeks later. Following that, we have a period of about two to three months where we stabilize the release, doing a lot of bug fixes that we contribute back to the community, and two to three months later we release an enterprise-grade stable release, Red Hat Enterprise Linux OpenStack Platform, which has a three-year life cycle. So you get support throughout that time, plus bug fixes and security fixes, and so on.

So at the end of the day, why would you choose Red Hat Enterprise Linux OpenStack Platform? Basically, you get an enterprise-grade, hardened version. It's stable and has been tested by our quality assurance engineers, and we fix a lot of the issues we see coming in each upstream release; we contribute those fixes back, but we also have them in the enterprise version. We also, as I mentioned before, integrate the various components.
This is something we take very seriously: making sure that libvirt talks to KVM, and that the use of libvirt and KVM with nova-compute, for example, is working properly, and we do that across the board for the various components. Now, as I mentioned, we have a three-year production phase, so you know the version is supported for three years. And you also benefit from the large partner ecosystem, for example with NetApp today, which Jeff will talk about in a bit. So I think I'm going to skip the HA part. Jeff?

Thanks, Arthur. I'm going to talk a little now about NetApp's integrations and our history with OpenStack, just to give you some context for the work that's been going on. NetApp has a pretty long involvement with the OpenStack project, going back basically to the Essex release, when we released our first driver; actually, Rob Esker, sitting in the back, has been involved in this effort for quite a long time. We're a charter member and a gold member of the foundation, summit sponsors, and we were the first major storage provider in the community, really in the Cinder project, with upstream contributions and numerous production deployments. In fact, the recent OpenStack Foundation user survey found that NetApp is the number one enterprise storage deployed in OpenStack deployments today, and that continues to be the case and is even growing, so we're quite happy about that. We're also a deployer of OpenStack internally.
We have a 4,000-node deployment running various workloads related to engineering and support, internally, on Red Hat OpenStack, so it's a growing thing for us as a company.

Stepping back to the 10,000-foot view: in my view, every release seems to be getting better. The six-month cadence is always about features, but because of the continuous integration and the automated testing that goes on in OpenStack, you can rely pretty well on the quality being stable, in fact getting better. Certainly from NetApp's perspective, our quality efforts have really grown a lot. We have an internal CI; since April we've spun up maybe 50,000 VMs for various tests. When we submit code, before it even goes upstream, we test it internally against all of our protocol modes, so even if we're only changing something related to NFS, it gets tested against Fibre Channel, iSCSI, and the other modes we operate in.

The other thing is that HA deployments are on the rise. In a lot of the early testing, if you're just pulling the bits down and configuring manually, you're obviously not configuring HA, but the tool sets with Puppet and the Foreman-based installer really make HA quite easy to achieve. We'll get into that in a bit, and I'll show you the integrations that we've done.

At a high level, we participate primarily in two projects, but we have integrations with other projects as well. Our primary efforts are obviously in the Cinder and Manila projects, Cinder being the block storage project and Manila being file sharing. But those have integrations with compute, because you're obviously attaching volumes through Nova, and we have integrations with Glance, which I'll describe in a later slide, where we can do advanced rapid
cloning features using our FlexClone technology, and the real benefit of that is reducing time to boot.

E-Series is also an option for deploying Cinder as well as Swift, and I'll talk a little about our E-Series platform. Even if you're NetApp customers, or know NetApp, you may not be as familiar with E-Series, so I'll talk a bit about that.

As you may know, continuous operations is really FAS's main focus. It has advanced data management features. I spoke about FlexClones, but everybody that knows NetApp is familiar with the ability to do basically zero-performance-impact snapshots, and they happen in a millisecond. So where you have workloads where you need to be able to snap, for instance, a database in hot backup mode, take that instant snapshot, and go right back to operations, that's really where FAS excels. It can also do advanced mirroring, which we've exposed through Cinder. We have ways to filter, through what we call extra specs, on various capabilities that we have: whether we want to enable deduplication on the backend, compression, mirroring. I'll show this a little more clearly in a later slide, but we can also filter on the underlying disk type of the aggregates, so that's a pretty nice feature to have. You can provide what we call the storage service catalog, a sort of listing of the various services you're offering up as a cloud operator.

E-Series excels in performance. E-Series basically grew up in the world of high-performance computing: parallel file systems, really high throughput and bandwidth. Think of really high-performance serial writes; where you have workloads that require that kind of performance, it's a great fit. And I'll talk a bit more about E-Series in a minute.

As I said, we have integration
with Glance. The first thing is that using NetApp as a Glance data store makes a lot of sense when you enable deduplication. I've personally used dedupe on data stores holding multiple OS images and gotten 90 percent deduplication; actually, I got 98 percent in one of my test projects, but that was where all the VMs were the exact same OS. If you have drift from that and you have differences in your operating systems, say different versions of Linux or what have you, you'll still get really good deduplication. You're basically paying for one block out of ten, because you're able to free those blocks up through the deduplication technology.

The other big win with NetApp in OpenStack is FlexClone. At many summits you'll hear there's a big focus on operators wanting to reduce their time to boot, and one way to do that is simply to eliminate the copying that has to happen from Glance over to Cinder. The way we do that is transparent; you don't really have to do anything to enable it. The first time you pull from Glance, we cache that image file in the FlexVol that's hosting your Cinder volumes, and thereafter the second, third, fourth, and all remaining copies from that Glance image to Cinder are FlexClones, and they take milliseconds. So literally you can remove all the pull time from Glance and be off and booting your VM basically immediately once you've done that first copy from Glance.
So That's a big win if you're you know, it's it's people don't like to wait for their vm's to come up they want to see immediate gratification click boot and there you go and so it's a Good way to enable that As a as I mentioned we have what we we call the stored service catalog And this is simply illustrating that you can create them through the sender extra specs here Whether you're talking about, you know, the raid type the disk type, whether it's sass sata disk or you know flash disk or all flash fads We can enable that with an extra spec so you can create a volume type that you can filter on so You know the typical, you know, gold silver bronze scenario, but you don't have you could these are very customizable You can pick out the different attributes that we enable here Also deduplication mirroring if you had, you know high value workloads Databases that you want to mirror that data to remote location. You can create an extra spec based on that mirroring Volume type so and just basically the scheduler will be smart and know where to put it if you select that volume type It's all can also be useful if you know you're trying to show, you know charge back and You know, you basically as an operator, you know, you can you can over provision, you know within provisioning or you can Turn on deduplication and actually It's it's even well beyond in provisioning So now I'll shift a little bit and talk a little about our joint work And this is actually this slide probably needs to be updated because we are even today Working with red hat moving forward on relos p7 and the integration is going forward But we work with them, you know to expose our sender driver as as of relos p5 And the native gui support came out in relos p6 in the a2 release a2 and a3 releases So if you want to know i'll show you in a minute here, we'll have a screenshots of those enablements And we have, you know regular meetings of red hat. 
We're trying to align our product roadmaps, and we're working closely with them on the Manila project to enable that going forward, so there's a lot of tight alignment there. More to come.

I guess the last point would be supportability. As Arthur mentioned, there's a certification process, so our drivers actually get tested with the Red Hat certification suite. When you go to deploy, all the functionality that's expected within Cinder is going to operate, because we pass those certification tests.

This is a bit of an overview of the FAS deployment. I don't want to call your attention to too much; it's kind of busy from a network standpoint. But the main thing I want to draw your attention to is the two different storage networks here. There's a storage management network, which is your administrative connection for making ZAPI calls to the controller, and then there's the actual data network where you're serving data traffic, whether it's NFS or iSCSI or what have you. And all of this is enabled through the Red Hat installer, which discovers the nodes that are on the network: they PXE boot, they come up, they get discovered and recognized, you see them in the console, and then you can perform operations on them from there.

This is an alternative deployment architecture with our E-Series platform. It's a similar thing, except the control path goes through the Web Services Proxy, which proxies the RESTful calls that come into it on to the E-Series array, and the data path, the iSCSI traffic, again happens over a data network.
So, just a variation on the same theme.

Here we're actually looking at the start of a new OpenStack deployment in the Red Hat OpenStack installer. This assumes you've already done your discovery: you already have nodes, hardware that's been discovered, it's come up, it's on the network, it's been registered with Foreman, and so we're ready to go and do a new deployment. I obviously don't have time to cover every single part of this, but I'll give you the highlights. There are choices here around high availability, which obviously you want to enable; choices related to the networking subsystem, whether you want Neutron or Nova networking; whether you want RabbitMQ or Qpid; generating unique passwords, and so on. Then, next, the networks show up here; this is after dragging them down. You predefine your networks, and then you drag them down to the proper location, which gives you a lot of flexibility in your network design depending on your needs.

I'm skipping a little here, but these are the two critical things in the NetApp deployment.
You'll see there's an NFS option. We recommend that you deploy Glance to NFS, given what I said about deduplication and the way FlexClones work from the NFS data store; in this case the NFS server is hosting the Glance volume. Then, moving along to the Cinder configuration: you'll see there's a NetApp checkbox, and it enables those three modes, clustered Data ONTAP, 7-Mode, and the E-Series platform, and the fields auto-update depending on the selection there.

This is a detailed view of just the clustered Data ONTAP mode with NFS. Remember I spoke about the control path versus the data path. The hostname here is the administrative interface of your FAS; that's where you'd actually go to SSH in or run ZAPI calls against your FAS appliance. It obviously needs credentials to get in, and we recommend secure protocols, i.e. HTTPS. Then you give it the NFS shares path, so in this case the Cinder volumes will go to vol_cinder, and you give it the shares; this data is going to get stuffed into the NFS shares config file. And the last option here points it at the particular storage virtual machine that's on the controller.

For those of you that aren't familiar with what an SVM is: NetApp controllers can be clustered from two nodes up to 24 nodes, and you can have multiple storage virtual machines, each a kind of chunk of storage and network space, that can be dynamically moved around between those 24 nodes for non-disruptive upgrades. If you need to do upgrades on one node, you move its services off to another node, keep operating, and then fail back. So there's a lot of advanced features enabled through that.
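The wizard fields just described map onto a backend stanza in cinder.conf. As a hedged sketch, this is roughly what the installer ends up writing, using the NetApp unified driver's documented option names; the addresses, credentials, and SVM name are hypothetical examples:

```ini
# Hypothetical clustered Data ONTAP NFS backend for /etc/cinder/cinder.conf.
# Remember to list it under enabled_backends in [DEFAULT].
[netapp-cdot-nfs]
volume_backend_name = netapp-cdot-nfs
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster      ; or ontap_7mode / eseries
netapp_storage_protocol = nfs
netapp_server_hostname = 192.168.0.50      ; cluster management interface
netapp_server_port = 443
netapp_transport_type = https              ; the recommended secure protocol
netapp_login = admin
netapp_password = secret
netapp_vserver = svm_openstack             ; the SVM on the controller
nfs_shares_config = /etc/cinder/nfs_shares ; one export per line,
                                           ; e.g. 192.168.1.50:/vol_cinder
```

The "shares" field in the wizard is what populates the file named by `nfs_shares_config`: each line is one data-LIF export that Cinder mounts and carves volumes from.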
So we recommend that you deploy through the SVM; in fact we require it for clustered Data ONTAP.

This screen shows some of the nodes that are available to deploy to. You can see here that I've selected all nodes. That works great if you have identical hardware; if you don't, you need to get very specific with each individual piece of hardware, because in the next step we're going to be selecting how to bond our network interfaces. As I mentioned, your network ports are going to be enabled through LACP, so you can do failover between the bonded interfaces, and I'll show you in the next step where we configure networks. Basically, you set your bonding mode; in our deployment guide we recommend LACP, which is the 802.3ad mode. Then you select the interfaces for the bond; you can see there are two physical interfaces in our bond, and there could be two or more. You can drag your particular VLANs to that bonded pair, and you're basically set up for high-availability failover there.

As for things I learned in the process of working with Red Hat: it's very important to measure twice and cut once, as carpenters used to say. Make sure your environment is set up right to begin with; double-check your networking and your switch ports. All of that needs to be set up right for the deployment to be successful, so there's a lot of planning. You really need to think the deployment through; you can't just go willy-nilly clicking next, next, finish.
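Backing up to the bonding step for a moment: on a RHEL node, the 802.3ad configuration the installer produces looks roughly like the sketch below. The device names, bond options, and VLAN ID are hypothetical examples of the standard ifcfg convention, not the installer's literal output:

```ini
# /etc/sysconfig/network-scripts/ifcfg-bond0  (hypothetical example)
DEVICE=bond0
TYPE=Bond
BONDING_OPTS="mode=802.3ad miimon=100"   ; mode=802.3ad is LACP
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-em1  (repeat for the second slave, em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.100  (a VLAN dragged onto the bond)
DEVICE=bond0.100
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
```

The switch ports facing em1 and em2 must be configured as an LACP port channel carrying the same tagged VLANs, which is exactly the kind of thing to double-check before deploying.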
It's not that simple; there's a lot of planning and documenting before you go through it.

As far as the installer itself, you need to make sure the installer host is capable of acting as a NAT router to the internet, or to whatever source you use for your yum repositories; basically, make sure your iptables rules are correct for that.

If you're doing testing, I recommend that you start very simple, maybe even with a single controller node. Deploy a single controller, see what it brings up, see how the services interoperate with each other. And if you want to change things, say you want to play around with your cinder.conf and enable two different backends or something like that, you need to set the immutable flag on it. Otherwise, the next time Puppet runs, it's going to clobber your cinder.conf file and revert everything you've changed. So you'll have to prevent that with this command here if you want to do some testing outside of Puppet, because Puppet basically takes control of the nodes once you've done the installation, for those of you that are familiar with Puppet.

Also, as I said, start simple. Test with a single controller and make sure that works. You can delete deployments, no problem; once they've been deleted in the console, you can reboot your hosts, and they get rediscovered if they're set to boot from the network. PXE will discover them and you can redo your deployment. So it's very easy to iterate from simple to a little more complex, and build up a deployment that's successful. You've got to be patient, because there's a lot going on with this installer; it's performing a lot of tasks as it moves through.
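The "immutable flag" command referenced above isn't shown on the slide; the usual tool for this is chattr, so the following is an assumption about what was meant, sketched as it would be run on a controller node:

```shell
# Assumption: the immutable-flag command is chattr. Marking cinder.conf
# immutable stops the next Puppet run from reverting manual edits
# (e.g. while experimenting with a second backend):
chattr +i /etc/cinder/cinder.conf

# ...edit, restart the Cinder services, test...

# Clear the flag before handing the file back to Puppet:
chattr -i /etc/cinder/cinder.conf
```

Leaving the flag set permanently defeats the installer's configuration management, so treat this strictly as a testing aid.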
I mean, early on you'll see the basic services coming up: the database services, Corosync and Pacemaker; then you'll see Glance come up, and finally you'll see Cinder come up, and Nova and those services. It just takes a while. If you get impatient, you can hop onto a node console and run top to see what's going on, and watch the services coming up, but Puppet is doing a lot of things in the background.

If you have an issue, if something doesn't match up, if you have a network port that's not matching properly or services can't talk, one thing you can do is hop onto the failed node. If, for instance, you have a controller that fails, you can hop on the console and run puppet agent -t with debug, and it'll actually run and show you the output, which helps you debug some of the problems you might be encountering.

These are a couple of docs I'm referring you to. The first one is our HA deployment guide; Bob Callaway wrote that, and he's sitting in the back there. The second one is the deployment guide that I wrote, based on Platform 5. The screenshots that you saw are basically just slight updates from that process. Some of the things that were complicated in RHEL OSP 5 for us are now quite simple with the GUI integration, so some of the material in that doc still applies, and some is just irrelevant because the GUI basically takes care of setting up Cinder for you now.

So, any questions? Yeah, sure. Can we use iSCSI or Fibre Channel for the Cinder backend instead? Yes.
Yeah, so if you're using FAS you can use iSCSI or NFS, and that's with whichever type of controller you have, clustered Data ONTAP or 7-Mode, I should have said. And if you're using E-Series, it's iSCSI. Yeah, I think Fibre Channel is in the pipe; actually, Fibre Channel is on the way.

So yes, and the installer for OSP 7 will not be based on Foreman; it will be based on the TripleO and Tuskar work. That will probably come a bit later on.

Anything else? Thanks, you guys are troopers for being here late on a Thursday afternoon. Thank you. Thank you, guys. All right.