And for that, it's being recorded. I want this to be as interactive as possible, so if you have questions, or you adamantly disagree with something I'm saying, let me know. We're all here to learn; that's what this whole conference this week is about, so please, let's talk about it.

I know people here are talking about cloud and how it looks on different types of infrastructure. I really want to focus in on what's called converged infrastructure: where it fits in the market, how people are using it, and what they're using it for today, because I think there's a really unique use case there that's really important.

While I'm busy doing architecture and design work, I've also been involved with the virtualization community for some time. I was an early, active participant in the VMware community. I actually cut my teeth years ago with Solaris containers; that's how I moved into virtualization, and from there I fell into VMware land ten years ago along with everyone else who happened to go that way. I've been working around, and working with, open source projects and different types of solutions since then, figuring out where they fit and how they fit into different pieces. I work with one of the founders of VMunderground, which is a completely user-driven event that happens every year prior to VMworld; that's the for-fun-and-parties side. You can find me on Freenode, I'm usually there, but mostly I'm making tons of noise on Twitter.

So: if you have a greenfield environment, it's sort of a unique situation, right?
Those don't really exist today in very many places. So as we start trying to figure out how we evolve our infrastructure from the way it's looked for the last, we'll say, 15 years, we really have to take into account all of the things we're trying to do. How do you plan for elastic workloads? That's huge, right?

An interesting anecdote from VMworld this year: one of the VMware guys is on stage talking about all of VMware's products, what's amazing and what's cool, and one of the slides that comes up is about next generation application stacks and why VMware is best for next generation application stacks. All of the next generation application stacks on that slide, a hundred percent of them, were open source projects. They weren't products; they were open source projects. That is where the industry is moving. As we evolve how applications are written, they are being written differently, to be hosted on cloud-style application layers: scale out, no single point of failure, plug-and-play infrastructure.

The problem is, we have existing workloads. Not only do we have physical workloads today that still need to be virtualized, we have what I lovingly refer to as legacy virtualization workloads, which is anything that was migrated to a VM at any time in the last 10 years. Those are legacy virtualization workloads, because now that they're easy to manage, and now that they're in a VM, no one's patching them.
No one's doing anything with them, and no one is encouraged to migrate them. And if you are patching them, I very infrequently see the operating systems underneath being upgraded. At some point in their life cycle, those operating systems turn into appliances, or virtual appliances, or someone tries to harden them into a virtual appliance. But that code base is still fractured, because it's still in a container running on top of a hypervisor. Which is fine; there's nothing wrong with that.

And we can't really do anything about the last one: inertia. You have architecture in place, you have hardware in place, you have people in place who like working with product X or product Y, a network design and architecture that was put in place years ago. All of these things are already there. So how do you tackle them? What is the problem we're trying to solve?

From my perspective, the main thing to start with is this: as we're walking around here, there are a lot of different cloud solutions doing interesting things. And so you start looking at it as, okay, I've got this environment and I want to layer cloud on top to start solving my infrastructure challenges. How do I grow my infrastructure? How do I move these things away? Can't we just use this product? Can't we just use this project?
Can't we just dump this on top? That could be OpenStack, that could be CloudStack, that could be pick anybody; it doesn't matter. There's an assumption that I can layer some sort of cloud application layer onto my existing infrastructure and fix the problems I have going on today: storage IO challenges, network design and architecture challenges. If I just layer this cloud product on top, I can move my workloads into it and it's going to fix things. That's a scary assumption, and it's amplified by the fact that the language we use to describe these products is ambiguous, especially when you use the word cloud. It creates a ton of challenges for us, because these things don't do the same things. Projects like OpenStack and projects like CloudStack, while they appear similar on the surface, actually tackle things in very, very different ways.

So, baselining the conversation, since it is murky: legacy enterprise workloads means lock, stock, and barrel enterprise virtualization. I take a VM; there's the whole pets-versus-cattle thing, which everyone in this room, I'm sure, has heard. If a VM is going to die and you actually care, and you'd spend any amount of time trying to repair that VM, that VM belongs in your enterprise virtualization environment, period. If that VM dies and you just spin up another instance, or even better, it's automated, CI handles it, and your environment spins up another instance for you, that belongs in your cloud infrastructure environment.
They're not the same. On the right is a picture of Red Hat Enterprise Virtualization. It looks like vSphere; it looks like any environment of this type, if you're familiar with what that looks like. It's your traditional hypervisor, attached on the back end to a large storage array, with the networks on the front end, and VMs running. And if this ESX host, I mean KVM host, I mean Xen host dies, it doesn't matter; they're all the same at this point. If that node dies, another hypervisor says, hey, that VM is supposed to be running, and it turns that VM on. Another ESX, I mean KVM, I mean XenServer host. They all spin up the same way; they do the same types of things. And then you have some management portal and things like that.

But even in the definitions we start seeing differences, things that have to be keenly identified: resource pooling, on-demand self-service, and rapid elasticity, versus maximum server utilization and minimum server count. Those are two different ideas. One is really focused on maximizing hardware utilization, and the other is focused on being able to scale out your applications.

So now we have a good idea of what legacy enterprise virtualization workloads look like. Is there anyone that hasn't seen this next slide? I've given a similar talk at different places, and this slide is everywhere now. This is really important: this is elastic cloud. The thing about OpenStack that's most interesting to me is what it's actually there to do well, not what people are trying to shoehorn it into doing. That's very important as you start designing at data center scale: you want to make sure you're leveraging applications to do what they do well, not trying to force them. If anyone's trying to do HA on OpenStack today, and I know this is going to cause heartburn: you're doing it wrong, right?
You shouldn't have to worry about HA for workloads running on OpenStack today, because you shouldn't care where those things live. But that infrastructure at scale facilitates elastic workloads very well.

So everyone here should be at least semi-familiar with this slide. This is from The Godfather, and it's probably one of my favorite quotes. Mario Puzo is the guy that wrote The Godfather; that's not actually who's in the picture, but it's his quote. And I think it's applicable; everyone needs to hammer this home. If it were after lunch, I'd have everyone stand up and say it out loud: elastic cloud and enterprise virtualization are oil and water. They don't belong on the same framework. There's no point in having them on the same framework. You could put them on the same framework, but they don't need the same things. You could put 50 screaming children in a station wagon if you wanted to, but let's put them on a bus. Let's do something different.

So that's something we try to focus in on. Now we have an idea of how we're describing enterprise virtualization, and how we're describing commodity cloud, or elastic cloud, and this is the challenge. If we look at enterprise virtualization, legacy virtualization workloads, and how that looks with converged infrastructure: it hasn't changed since it came out. XenServer, Red Hat Enterprise Virtualization, VMware vSphere: each has looked like this since it was released. None of this has changed. It doesn't matter which vendor this diagram shows; partly that's because I have horrible skills at making graphics, so there's that, but honestly it doesn't matter which of the three I'm talking about. It's applicable to all of them, because this is what enterprise virtualization looks like today.
But we can do cool things, and we get this sort of additional layer. Just the idea of live migration, just the fact that it's tucked into everything now; Hyper-V, everybody's doing it. The idea of vMotion, live migration between commodity compute environments, changed everything. Roll back roughly 10 years, to when vMotion was introduced by VMware, and now everyone's doing it. If I was an operations guy sitting in a data center, I no longer had to get up at 2 o'clock in the morning to go power cycle a server. I could do it in the middle of the day, when I'm fresh, when I have time: I'd vMotion that VM. Maybe I still did it at 2 o'clock in the morning back when the feature was new and I didn't trust it, but now I can do hardware maintenance in the middle of the day. That changed everything for an operations team: just the ability to migrate a workload live. Everything changed. It's pretty cool, right?

But we have a problem, and it's the problem I actually want to talk about as we discuss what the next step is for enterprise virtualization. This layer down here hasn't changed. We're still limited by what's going on down below the commodity compute environment. We did gain all these things above it; how we look at all of this stuff has changed, and that's actually pretty interesting. Again, that same idea of being able to migrate a workload impacted all of these things for customers. But think about what's underneath that vMotion layer, underneath that commodity compute layer: it hasn't changed for a really, really long time.
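A quick aside: the live-migration workflow described a moment ago looks roughly like this with the stock KVM tooling this talk keeps coming back to (virsh). This is a minimal sketch; the guest name and host names are made up:

```shell
# Move a running guest off a host that needs maintenance, with no downtime.
# Requires shared storage that both hypervisors can see.
virsh migrate --live --verbose web01 qemu+ssh://kvm02.example.com/system

# After the maintenance window, migrate it back (or just leave it where it landed).
virsh migrate --live --verbose web01 qemu+ssh://kvm01.example.com/system
```

The shared-storage requirement that makes this work is exactly the bottom layer in question here.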
It doesn't matter how much cool stuff you layer on top, how much functionality you build into your hypervisor and your commodity compute environment. It is important, but you're not maximizing what you can do if your storage architecture still looks and smells like it did 20-plus years ago. If I still have a SAN cable plugged into a two-node head, and I'm worried about how many IOPS I can deliver through a single controller, you're tackling the same problem that's been tackled a thousand different ways. Sometimes you're really limited by what's underneath.

So, coming back to that: suppose you were going to design an infrastructure today, right now, to fix the idea of converged infrastructure. What can I do differently in enterprise virtualization? How can I move the ball forward for legacy enterprise virtualization? Realizing that everyone is focused on the cloud today, this is a problem that isn't terribly exciting, but I think it is exciting, because there are a lot of virtualized workloads that need some TLC. If I was going to tackle that bottom-tier layer, this is what needs to happen: we need a single storage solution that can serve SAN, serve file shares, and serve object storage. We know where things are going; we know where the future is going. So if we're going to fix this problem for converged infrastructure, we need to tackle it with an eye toward the future as well. We need something that's file-system aware. We need to be able to push everything into a single namespace. We understand APIs are important: we want an API we can code against so we can automate this thing and not have to deal with it. If something on my storage breaks, I don't want to have to get up; I don't want to deal with it.
I want it to take care of itself. I want to be programmatically tackling problems instead of turning wrenches. And we need to clearly identify interface points in the virtualization stack; that, to me, is the biggest one.

And that brings us right back around to the start of this thing: converged infrastructure. What is converged infrastructure? That's the Wikipedia definition. Coming back to my lovely PowerPoint skills, this is what converged infrastructure, summed up, is to me. We have this today: a hypervisor inserting this shim above the commodity compute layer. Think of all of the things that changed, and then think about this zoo we have down below that's still the same. It still looks the same, and this isn't about picking on one vendor: NetApp, EMC, IBM, HP, Dell, pick a vendor. Everything looks like this.

What if we just get rid of that? Get rid of those heads that are attached to that disk. We really care about the disk; we don't necessarily care about the way it's attached. What if we just get rid of that layer? Well, we've got to move it somewhere. What if we take that shared storage and shove it up right next to the hypervisor, potentially running alongside the hypervisor? What does that change? What can we do? I don't even know, honestly. I build out infrastructure that does some of this, and I'm seeing some added benefit that I get by deploying this way. By moving our shared storage above the commodity compute layer, and leveraging disk-in-chassis in some sort of file system that is striped or redundant across multiple nodes, you can standardize your hardware platforms, and you can do all sorts of crazy things you couldn't do before, when you had a chain down below attached to your storage in the traditional fashion. This is interesting. What does this change?
That, to me, is what converged infrastructure is. We're not first to market; as the free software community, as the Linux community, we're not coming up with this idea, and no one is going to fall over because it's new. There are other people doing this today already, so we already have a path in front of us showing what we can do differently.

VMware, Cisco, and EMC created a group called VCE; I'm not sure what the technical name of that is, but they're doing some stuff in the converged infrastructure space. They sell giant racks of gear that they push out as a single unit. That's mildly interesting, sort of converged infrastructure; they call it that, but I don't know if I buy it. Dell's doing something in the same space that's actually kind of interesting, more interesting than what VCE is doing, but similar. Right now, if I'm looking at that market, Nutanix is the company that's doing this well. Nutanix is a box, disk-in-chassis: if you want their product, you go buy it, you drop in 20 or 30 of these things, it clusters all of the disks together across all the boxes, and it runs the hypervisor. That's pretty cool. That's pretty awesome. But it's all proprietary, all black box, and they actually go so far as to use that in their marketing material. That is their graphic; that's their stuff. That's Nutanix. This is not what we want. This is not where we need to go. We're not okay with this.

That's a problem, but it's a really big, difficult challenge. I think it's a hard problem, and I think anyone who's worked on it thinks it's a hard problem as well. So what do we have to start tackling this? I'm not saying that what I'm talking about, or what I'm proposing, is the end goal or the end post. I think there's a lot of work that needs to be done.
We need to shift how we think about our infrastructure, drastically, as a developer community: what we're thinking about and how we apply ourselves. But we can start looking at tools that are already available today to start thinking about things differently, and start looking at infrastructure in a converged way. How do we leverage infrastructure in a converged way? There are some tools out there.

So, oVirt. It's interesting: how many people, show of hands, have used oVirt before? Got a couple. Anyone else? No? Okay, so I'll take a couple of minutes and burn through what oVirt is. It's an open source alternative to vCenter and vSphere. Everyone's familiar with Red Hat Enterprise Linux, right? The upstream project for that is Fedora: Fedora feeds down into Red Hat Enterprise Linux. Anyone can go get Fedora for free; they don't care, use it how you want, do what you want with it, contribute, and eventually the cool stuff trickles down into Red Hat Enterprise Linux. oVirt and Red Hat Enterprise Virtualization have the same relationship. oVirt is the upstream: everyone can contribute, everyone's pushing stuff into it, and the good things trickle down into Red Hat Enterprise Virtualization.

oVirt focuses on KVM only, which is interesting. The reason for that, and I could be completely wrong, is that if you only focus on one hypervisor, it allows you to take advantage of all of the features of that hypervisor and do things differently. There are other projects that try to support multiple hypervisors, and if you try to support multiple hypervisors, you have to settle for a very low common denominator of compatibility across all of your stuff, right?
You can't leverage advanced features and go deep; you have to stay thin across the top so that all the functionality is the same everywhere. Focusing only on KVM allows oVirt to do some really interesting things and take care of every new feature that gets released specifically around KVM, which is cool.

It's large-scale, and this is old-school style, right? This is legacy virtualization workloads, which is fine. It looks the same: you could rip this out and throw up XenServer, you could rip this out and throw up vSphere, you could rip this out and throw up whatever you want. I am a user, I'm connecting via HTTP to a web portal, I'm logging in, I'm talking to an engine (the vCenter or XenCenter equivalent), and it is talking to a hypervisor, which is running things; in this case QEMU, KVM, and libvirt. And that talks out to, again, the black box: the old-school storage configuration, which has some sort of disk arrangement that delivers the IOPS performance the VMs need. Everyone knows how that works. This is how oVirt works today. That's cool.
It works for us; we can start with that, at least, and then we're going to change some things from a storage perspective. It's important to understand that oVirt organizes disk, and organizes your IO, the same way that XenServer or vCenter does. If you're used to one of those, this is going to feel very familiar; it's not going to seem all that strange. You have your block, NFS, and POSIX layers; disks roll out on top of that, you get domains built out for storage, and then you can do your live migrations within a storage pool between different hypervisors. All of that is very interesting. So we'll start with that.

The real challenge at this point, I think, for converged infrastructure, is that as a group, we think this is cool, or we think cloud is cool, and we want to work on this project or that project; where we fall over is that we lose track of the fact that file systems, while boring, are ridiculously important, and how we tackle storage is also really, really important. So we need something, if we're going to layer this together and make a converged infrastructure sandwich. Apparently I'm hungry, because I'm thinking about sandwiches.

We can use Gluster today. So, again, a show of hands; that was interesting for oVirt, people weren't familiar with it. How many people have heard of Gluster? Everybody's hand should go up. How many people have actually used Gluster, like, deployed Gluster for real configs? Fewer hands come up, but there's a few. It's interesting: Gluster is changing very rapidly. The latest release, this was kind of cool.
I was at Red Hat at the time, and it was the first time that a major feature release got pushed into Gluster that Red Hat didn't do; it came from a partner, which was awesome. That was the QEMU integration stuff, the libgfapi/QEMU bits, which we'll talk about in a bit. But anyway, that's awesome: people outside of Red Hat are contributing upstream to Gluster. We talked about the upstreams, Fedora and Red Hat Enterprise Linux, and then oVirt and Red Hat Enterprise Virtualization. Gluster and Red Hat Storage have the same relationship: everybody can contribute, do what they want with Gluster, add code, and out the bottom comes Red Hat Storage.

It's a scale-out clustered storage environment with lots and lots of clients. This is an old graphic, but it's my favorite. They don't really talk about this very much anymore, but you can layer Gluster on top of anything. For me, coming from virtualization land and not really dealing with storage that much, the thing that finally clicked is this: it's a storage hypervisor. You can lay it in as a shim between whatever disk you want, anywhere, and whatever you want it to talk to. It does the same thing that a hypervisor does, but it does it for your storage, for your disk IO, which is kind of cool. In this graphic, the far left is direct-attached disk; that's the key point, and that's how it's most commonly used. But sometimes you can throw SAS cables onto a server and drop a JBOD behind it, or if you have a NetApp FAS2020, or a VNX, or an HP LeftHand SAN, you can always attach those and use those disks as well.
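To make that concrete, here's a minimal sketch of laying Gluster over direct-attached disk on a few boxes. All server names, brick paths, and the volume name are hypothetical, and the tuning lines assume the volume will back VM images:

```shell
# From server1: form the trusted pool with two more nodes
gluster peer probe server2
gluster peer probe server3

# Create a 3-way replicated volume from one brick per server, then start it
gluster volume create vmstore replica 3 \
  server1:/bricks/vmstore server2:/bricks/vmstore server3:/bricks/vmstore
gluster volume start vmstore

# Tune for virtualization workloads: the "virt" settings group ships with
# Gluster, and uid/gid 36 matches the vdsm:kvm user that oVirt hosts run as
gluster volume set vmstore group virt
gluster volume set vmstore storage.owner-uid 36
gluster volume set vmstore storage.owner-gid 36
```

From there, `gluster volume info vmstore` shows the resulting layout, and clients or hypervisors can mount the volume.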
Gluster just abstracts that layer away, the way a hypervisor does, with linear performance scaling. Gluster.org.

So we're going to start piecing some things together here for converged infrastructure. I don't like reading to you; you all can read, so I'll leave that slide with you. Those are some of the ideas behind Gluster. It's very important to understand these key differences: where you put the disks, what the disks are, and then the collection of disks. This is GlusterFS at a really high level: I log in, I go to the CLI, I make my configuration changes, and here's how I build out a Gluster environment. Here's my cluster between these three servers; I've got all these GlusterFS daemons running, managing all of the disks across these boxes, and (oh look, my graphic is wonky) you can connect via REST API, NFS, CIFS, all of the different ways you can connect. But we're not going to do all of that. We're specifically going to change things: for our converged infrastructure play, the only way we're connecting is from the hypervisor, and we're going to use the native GlusterFS connector, which is going to be interesting.

Where are we at? The 3.4.2 QA build, I think, just came out; they released it yesterday or the day before. And back in 3.1, so long ago, like a couple of months, features were added to allow Gluster and oVirt to interact, which is kind of interesting. oVirt has always been about how you manage your VMs, right?
That's what oVirt does: you build VMs and manage them. But they added a feature set that allows you to manage your Gluster environments through oVirt as well, which is kind of interesting. Through the same nice, pretty GUI where you're managing all your VMs, you also have the ability to build storage arrays, manage your storage disks, and manage your Gluster environments directly. Single pane of glass, I think, is the architecture term they like to use for that: a single pane of glass to see all of it. It's interesting, but more important is the fact that these two tools are already talking, so we can start leveraging them to move that storage layer north of the commodity compute environment.

Also important: there's a VDSM Gluster plugin that allows this to happen. What does that look like? I have my storage servers, each of them one of the nodes from the previous slide with all of my disks in it, and I drop an agent onto each of them that talks to the oVirt engine, which is the vCenter-ish, XenCenter-ish piece in the middle. And then I can manage it either through a Python SDK, which means it's programmable, which is interesting, or through a CLI, and I can take advantage of that and tie all of this together. Which is great.

That slide is pretty self-explanatory: how can I use more Gluster bits? The key thing for me here, which is also really important to talk about, is that if you're doing oVirt and Gluster in this type of arrangement, where you have your whole commodity compute layer, you can actually directly attach the hypervisors and the storage through one interface, which is kind of cool. There are some challenges in doing that, and you want to make sure you start the volumes and optimize them for virtualization workloads, but it allows you to break this up and push things into that layer. So, we had a show of hands earlier.
For people that have used Gluster before, this is where it gets sort of odd: there's a GUI now. Through oVirt, there's an interface for all the stuff you were used to doing with Gluster via the command line. You actually have a portal you can log into to build out all of your hosts, create clusters for Gluster, create your volumes, add your bricks, all of the things for that file system. It lets you go through and do all of this, and tie it all together through one interface, which is pretty awesome.

So this is where things get really interesting for me, and why it matters. I think it was IBM that did the big code contribution for the QEMU GlusterFS native back end. Before, if you'd used Gluster years ago, there was a problem, and it was a valid concern; let's see how many people I can irritate by saying this: Gluster's performance wasn't great for VMs. I think that's the nicest way to say it. It had some issues. If you were mounting up and using a Gluster cluster over NFS, VM performance was potentially problematic; that's probably a nice way of saying it. However, this large code contribution from IBM was in the latest release, 3.4. I have a home lab, so I can't really speak accurately to what it does at scale, but Red Hat was comfortable saying publicly: a 200% performance increase. Which means it's probably more than that, if that's their conservative estimate. Mine was a lot higher than that. More than 2x, which is terrifying.

[Audience question] Yeah, the QEMU GlusterFS native back end mount. Are you sure? I would double-check these numbers; this is a John Mark slide. [Audience comment] You're thinking of oVirt: oVirt is on a 3.2 release, and 3.3 is coming out. Yes. No worries, that's fine, keep me on my toes.
I actually stopped and thought, really? I've given this presentation three times now, and you're the first person that's caught it. That's terrifying to me.

The important thing is, once this code contribution happened, QEMU (we all know how QEMU/KVM works) can natively talk to GlusterFS. Which means it's not talking through FUSE anymore. It used to be: go through FUSE, through a translator, and then talk to Gluster on the back end. We just took all of that out. QEMU natively talks to GlusterFS now. That's huge. That's a massive performance gain from a hypervisor perspective.

Also, if you're doing data center design: the NFSv3 ACLs. Nobody's really talking about this, but that's huge if you're trying to do multi-tenant solutions. You have to have something like that; otherwise people get very touchy about who can see each other's data. So the NFSv3 ACL support is pretty solid. 3.4 is out; GlusterFS 3.4.2qa1, I think, is the latest beta that just came out (now I'm all over the place with version numbers). Go grab it and start playing with it. It's interesting. The block device support is interesting, and the other translator, libgfapi, is huge.

Okay, so, rolling your mind back to that slide where we took the shared storage and shoved it up next to the hypervisor: I think that actually fixes a lot of the challenges people are having in their enterprises today, in their companies, or with their customers, from a converged infrastructure story. VMware is moving that way. VMware just released VSAN; all of us should be aware of that. That is going to change the storage industry, I firmly believe that. There are other companies that have pushed this idea, but what they released wasn't Gluster, right? It's their own product.
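To ground the libgfapi point for a second: with the native driver, QEMU addresses images by a gluster:// URL instead of going through a FUSE mount. A minimal sketch, with hypothetical server, volume, and image names:

```shell
# Create a disk image directly on the Gluster volume over libgfapi (no FUSE)
qemu-img create -f qcow2 gluster://server1/vmstore/vm01.qcow2 20G

# Boot a guest straight off the Gluster-backed image
qemu-system-x86_64 -enable-kvm -m 2048 \
  -drive file=gluster://server1/vmstore/vm01.qcow2,if=virtio
```

Cutting out the FUSE round trips is where that performance gain comes from.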
They're each doing their own thing, but what they released is a distributed file system that runs on top of a hypervisor. That's what VMware just shipped. The industry is moving that way, and there's value in that. We need to start thinking about things differently and pushing in the same direction.

However, now that we have a converged infrastructure story, or play, for all of our legacy virtualization workloads, we do realize that all of those next generation applications are coming. What do we do? How does this work? I don't actually have to skip this part anymore. When I was at Red Hat, this was my don't-talk-about-things-I-can't-talk-about slide, but I can safely ignore that slide now.

Right now, today, you can go out and grab RDO. RDO does not stand for Red Hat Distribution of OpenStack; it doesn't stand for anything. RDO is the upstream release. We talked about upstream and downstream at Red Hat: RDO is the upstream, very similar to oVirt, for what will be Red Hat Enterprise Linux OpenStack Platform (RHEL OSP, I think, is what they're calling it). If you think about how that process works, it's the same thing: Fedora feeds RHEL, and RDO feeds Red Hat Enterprise Linux OpenStack Platform.

And you can use Gluster with it right now. Okay, we talked about oVirt and Gluster; how many people are familiar with what OpenStack is? Everyone should raise their hand; their marketing is fantastic. How many people actually use OpenStack in deployments, like, have actually rolled it out? I've got one hand in the back. Anyone else? How many people are considering pushing OpenStack out? That's always a good question, too. RDO is actually a really easy way to get moving in that direction if you're interested in OpenStack. There are different chunks of it.
There are the objects from a storage perspective. I don't know why, but I really like boring infrastructure problems like storage. There's the Swift object store, the Cinder block storage, and the Glance image service. These are all different pieces of the storage layer that you need when building out an OpenStack environment, and GlusterFS has different tools to assist in that today. You can deploy Gluster and let it be your object store for Swift; there's a plug-in that allows for that interaction so that you can use a Gluster cluster as your Swift object store. The same goes for Cinder and for Glance as well. So that's all available. There's some stuff that's coming. I don't actually have visibility into this anymore, but that open hybrid cloud is important. I don't know what this is, right? Like, I don't know what "open hybrid cloud" means. I know what I want it to mean, but at some point I think Red Hat's going to get upset. This is what I want it to mean; this is what's important. You can do most of this today. With a converged infrastructure story this is actually something that's very easy to do, and also critically important to start thinking about: although they're oil and water, enterprise legacy virtualization does not mix with cloud application workloads, that doesn't mean you can't use the same tools. It doesn't mean you can't minimize the number of different components in your infrastructure so that you can move forward seamlessly. Right now, today, upstream oVirt.
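As a sketch of the Cinder piece just mentioned: wiring a Gluster volume in as a block-storage backend looked roughly like this in OpenStack releases of that era (the GlusterFS Cinder driver was later removed from newer releases; the host and volume names here are hypothetical):

```shell
# Hypothetical Gluster host/volume names. Point Cinder's GlusterFS driver,
# as shipped in OpenStack releases of this era, at a share-list file:
cat >> /etc/cinder/cinder.conf <<'EOF'
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares
EOF

# The share list holds one "host:/volume" entry per line:
echo 'gluster1:/cinder-vols' > /etc/cinder/glusterfs_shares
```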
I know it's not in the current release; I think it's in the upstream, and it will be in the new 3.3. Glance is the image repository piece for OpenStack, and you can use it today with oVirt to manage your image repository as well. So natively, inside of oVirt, you can just leverage Glance on the back end to manage the images for your legacy virtualization workloads. That's insane, right? That's pretty cool. You can use Gluster on the back end for your storage, for Nova or for oVirt, and if you have it up in the commodity compute layer, you can use the same reference architectures as well, hardware reference architectures, for these types of deployments. We can use Swift on the back end. Again, across the board, standardizing on KVM is something that I've done; it's terribly easy to do. There are different management pieces, the oVirt engine and the Horizon pieces; yeah, there are different portals and different stuff like that. But both of them today, as of the latest upstream releases, can leverage the Neutron networking API as well. So if you're using an Open vSwitch implementation, you can take both of those and plug them right into either your oVirt configuration or your OpenStack configuration. Which is kind of cool, because then your networking layer is the same, your storage layer is the same, your hypervisor configuration is the same, your hardware architecture is the same, and while it is oil and water, you can standardize on as many components as possible. This is really important to me. And again, highlighting my massive skill at putting together graphics for slides: it's fantastic, lots of boxes. So, talking a little bit about Glance. Glance is that component layer specifically around image repositories; it's the way that you get images. Is anybody here using Amazon AWS? Just because I'm curious now, and I feel like raising my hand again. For an image repository, how do you move an image or an AMI in and out of
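For the Gluster-on-the-back-end piece, a replicated volume tuned for hosting VM images can be sketched like this. The hostnames and brick paths are hypothetical; the `virt` option group ships with Gluster as its recommended bundle of settings for virtualization workloads:

```shell
# Hypothetical hosts and brick paths: a two-way replicated volume for VM images.
gluster volume create vmstore replica 2 \
  node1:/bricks/vmstore node2:/bricks/vmstore

# Apply Gluster's bundled "virt" option group (recommended settings for
# hosting VM images, e.g. eager locking and cache behavior), then start it:
gluster volume set vmstore group virt
gluster volume start vmstore
```

The same volume can then back an oVirt storage domain or sit underneath an OpenStack deployment, which is the standardization point being made above.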
either OpenStack, or something like AWS, or, you know, vCloud? They have a portal, or a middle ground, a DMZ if you think about it from that perspective, where you can migrate images in, and they then get imported into their cloud offering, into your catalog, or pulled out the same way if you need to download something. That is that piece. So with OpenStack and oVirt upstream both natively leveraging Glance, we can sort of take advantage of that at the data center layer and standardize images, which is cool. Being able to migrate is pretty important, and that's Gluster functionality too. No one builds a data center and says, this is my one environment in my one office, it exists on its own, and I never need to move it anywhere, right? Everyone worries about DR, replication, business continuity plans, and they start building those things into their environments. You want to be able to leverage the replication pieces of Gluster as well, to move your AMIs or your images around, push your images around, and back them up, those types of things.
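The replication piece referred to here is built into GlusterFS itself. Pushing a volume's data to another data center is roughly this; the volume and remote host names are hypothetical, and the exact slave-URL syntax varies a bit between Gluster releases:

```shell
# Hypothetical names: replicate the local "vmstore" volume to a slave volume
# in another data center, then check the session status.
gluster volume geo-replication vmstore remote-dc::vmstore-dr start
gluster volume geo-replication vmstore remote-dc::vmstore-dr status
```

Geo-replication is asynchronous and master-to-slave, which is why it fits the DR and image-distribution cases described above rather than synchronous multi-site writes.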
And the networking support. I just realized I've been rambling for some time. The networking support, right, like how we leverage it. The OpenDaylight folks are here; go stop by and talk to them, they're doing really interesting things. The important piece here is that it's all north of Neutron. So if you look at Neutron as an API for OpenStack that allows the Open vSwitch implementations to connect in and plug into Neutron, you also have that layer with oVirt as well: oVirt actually comes from the bottom up and plugs into Neutron, and then you can drop in your Open vSwitch implementation, the same one, standardized across both environments. And then your SDN framework can layer in on top of that, which would be, you know, OpenDaylight, or if you're in VMware land, NSX, or whatever they're calling that company they bought. That's pretty important, because again, minimizing pieces means that when things break, people actually know how to fix stuff; you don't have 15 different implementations. So, I'm still really active in some of the underlying storage pieces. I still really like storage, and I think it's one of the biggest challenges today, not only for legacy virtualization workloads but also for fixing cloud moving forward. There are really, really smart people working all over the place, but from a putting-wrenches-together, turning-wrenches, deploying-stuff perspective, we need more people thinking differently about storage. There are tons of people thinking differently industry-wide about networking and handling SDN, right? We need to think differently about storage and what it means to move that storage from a base-layer, JBOD-attached black box down below, up into that commodity hypervisor. How do we move it up next to that? And also thinking about it alongside the hypervisor, that's an interesting challenge. Get involved, right?
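As a sketch of that shared networking layer: the same Open vSwitch bridge layout can sit underneath either a Neutron agent or an oVirt host. The bridge and interface names below are the conventional ones, not requirements:

```shell
# Conventional (not mandatory) names: br-int as the integration bridge,
# br-ex for external traffic, with a physical NIC attached to br-ex.
ovs-vsctl add-br br-int
ovs-vsctl add-br br-ex
ovs-vsctl add-port br-ex eth1

# Inspect the resulting bridge/port layout:
ovs-vsctl show
```

Whichever management plane plugs in on top, Neutron, oVirt, or an SDN controller like OpenDaylight, it is programming the same switch, which is the "minimize the pieces" argument in practice.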
Like, go do something: go download something, complain about something on IRC, break something, fix something, just get involved, either through Gluster, through the oVirt environment, or through Project X of your choice. Storage is something that needs to get fixed, and we need to be moving in the right direction together. Those are the same slides; I don't think I have anything else. Thank you. If anyone has any questions, let me know. This is sort of an interesting problem that we're going to start tackling from a data center perspective. It sort of disrupts everything, but I think that as an industry we're starting to move in the right direction. Do we have any questions? How are we on time? Are we okay? Perfect.

[Audience] A quick question about a comparison of Ceph versus GlusterFS. I know last year there was a lot more momentum behind Ceph, but it sounds like...

I think there still is; I think there's still more momentum. I'm the wrong person to ask, though, because I really like simple stuff. If you ask me whether I want a stick shift or an automatic, in my heart I want a manual, because I like to grind it and just go after it. But for getting around town I want an automatic: it's less work, it's less hassle, I can drink coffee and do totally inappropriate things while I'm driving my car, and I don't have to worry about making it go faster. That's my analogy between Ceph and Gluster today. If I package everything into one daemon that runs without a metadata server, and I don't have all these other services, if I just have one service running and I build the environment out, that's Gluster to me. It's simpler, to me. Again, this is my own personal take; other people will say other things. My take is that Gluster is solid for what Gluster does, and it came out of sort of an operational background. In other words, some guys, guys like everybody sitting in this room,
were like, I have a problem, I need to fix it, I'm going to build a service. And they built something, shoved it into operational support, started fixing it along the way, and it grew out of that. Whereas I think Ceph was sort of, here is this ethereal problem that I'm trying to solve, and I'm going to write a paper on it, which is awesome, and I'm going to solve it in these ways, which is awesome, and then at some point in the future I'm going to push it into production. I want something production-ready today. I know there are multi-petabyte deployments of Gluster up and running. I know that when I read the Ceph documentation, it still says don't use this file system for production environments. I'm sure it's going to get there; it's awesome, it does amazing things. For the guy that wants to grind the gears and go through stuff, it's fantastic. And I'm sure that's a horrible analogy; it's probably not fair. I just really like simplicity, and I like to be able to very quickly and rapidly get stuff up and running. When I'm writing Puppet configurations or Puppet manifests and building stuff out, it's fewer steps for me to integrate and deploy on Gluster and get Gluster up and running than it is for me to get Ceph up and running. I'm not saying one is this or that; it is what it is. And I also like the fact that the geo-replication bits are built right into GlusterFS, so I can do geo-replication between data centers without having to buy an additional service. The last time I looked, and someone's going to yell at me later, I'm sure, I think you have to buy something to do multi-site replication for production environments with Ceph.
I may be wrong; if anyone knows, please tell me, because I don't want to be wrong when I say that, but I believe it to be true. I know with Gluster the geo-replication bits are built in and you have it out of the box, and everything I'm doing is multi-data-center anyway, so that's a requirement for me.

[Audience] Hi, I've just got a couple of quick points. Right now I've got a combination oVirt and OpenStack install, doing a semi-migration from one to the other, and I'm considering Gluster for the OpenStack side of things, and possibly even doing some of the integration you mentioned, with Glance on the back end on oVirt, to kind of make the transition easier, quote unquote, however easy that is.

It's going to be painful, right? Bleeding slightly less than before, but still bleeding.

[Audience] Yes. I should also mention we're running 3.1 on CentOS.

Okay, yeah, that's cool.

[Audience] But what's pertinent is that we're running through a one-gig pipe, 220 VMs in that stack.

Bonded? Bond, bond, bond, bonded all day long. Okay.

[Audience] And one of the things I've been having an issue with is the performance with iSCSI. I was wondering if there's anything, if you even know, that might be better with Gluster?

Yeah, yes, and I would say you'd see an improvement with NFS too. This is, again, a personal opinion: I don't use iSCSI if I can avoid it; I avoid it all the time. It comes back to that complexity issue. Grabbing a SCSI packet and trying to shove it inside of an IP packet to push it over the wire, and then have it undone on the other end, seems, okay, I'm sure it works, I just don't know why you'd do it. If I have a Fibre Channel network, I'm going to use Fibre Channel; if I have an IP network, I'm going to use an IP-based protocol, which is NFS.

[Audience] Yeah, I'm kind of picking up somebody's mess.

Sure, sure.
Yeah, which is the way it usually is, right? No green field. If you have other questions later, I'd love to talk more too. Anyone else have anything else, any other questions? I'm counting in the back of my head the things people are going to light me up for later: there's iSCSI, and there's Ceph, and there's a few others. If you have questions about this, or you have ideas about this, I'll be around; come talk to me. Go talk to Red Hat if you want to talk about downstream stuff; if you're interested in upstream stuff, let me know, or there are other folks here from the upstream Red Hat community as well, so there are plenty of people around. Gluster is ridiculously interesting. There's a Gluster Day event on Thursday as well, so if you're still around hanging out and don't have anything to do on Thursday, go sign up for Gluster Day. There are different workshops. John Mark doesn't know this yet, but I'll probably be talking about how I'm actively using bits of it for geo-replication of enterprise storage arrays today, which is kind of interesting. So if you're around and you want something interesting to do, it'll be entertaining. So thanks.