I think we're gonna start a few minutes early just because we're at a lower capacity, and I actually thought it was gonna be 50 minutes instead of 40 — they were all 50 last year. So I may go through it a little bit faster, because I wanna leave plenty of time for people to ask questions. I think one of the great things about these conferences is being able to ask questions of people who are doing this in the real world, so I wanna make sure to leave a lot of time for that. My name is Ryan Richard, I work for Rackspace on the Private Cloud team. This talk is about some of the things to think about as you're designing a private cloud. If we could hold off the questions till the end, that'd be great — like I said, I wanna leave a lot of time for that.

So the first question might be: why Folsom? The title of this talk is a Folsom update. Last year I gave one that really focused on Essex, because we'd been running it for customers for six months; now we've been running Folsom for customers for six months, so I wanted to provide some best practices from that. At the next design summit we'll do the same exact talk, except with the best practices we've built up for Grizzly. I think Grizzly's gonna be a huge one — there are a lot of really interesting things coming in Grizzly that I'm sure you guys have heard about. There's a slide at the end where I'll touch a little bit on the Grizzly features.

All right, so just to get started, I wanna define what a private cloud is. One of the things I feel like we could focus a lot of time on is the goal of your private cloud. Are you actually building a private cloud that's gonna be elastic in nature, where you're trying to mimic one of the public cloud models? Or are you really just looking for a cheaper version of traditional virtualization? In reality, I think you need to pick one. OpenStack does both — it probably does the elastic one a lot better — and it's certainly not a drop-in replacement for, say, VMware. If you think you're gonna be able to simply install OpenStack, move your VMs over, and everything's gonna be the same, most people in here probably know that's simply not the case. So certainly the first thing to figure out is what you're actually looking for: elastic, or traditional virtualization?

The next thing: a private cloud is most likely multi-tenant, but not necessarily at the organization level — it's multi-application. Each application stack might be a different tenant, whereas when we think about tenancy in a public cloud, it's X number of customers or X number of different organizations. Size: for this talk I'm gonna keep it limited to about 100 nodes and under. 100 nodes feels like a breakpoint where architecturally things get a lot different; somewhere around 50 nodes it changes a little bit, but after 100 nodes it definitely starts to feel bigger and there are more considerations beyond that size. The endpoints for a private cloud may or may not be public. It's certainly acceptable that the endpoints for all of your Nova services and all of your OpenStack services are in private address space where the public internet can't reach them. Limited inbound connectivity falls along the same line of thinking: instances might only be accessible within your organization.
They have connectivity out to the internet, but perhaps there's literally no connectivity in — and this is where floating IPs come into play. And then the last piece is customizing for specific workloads. I feel like that's probably the biggest selling point for a private cloud over any public cloud: the ability to customize it for your needs. That could be things like specific images, specific flavors, specific hardware — and the hardware is probably one of the biggest selling points.

So I wanted to think about what it means when you start sizing a private cloud, and we started with the resources that every compute node has. Ultimately we look at vCPUs, RAM, and hard disk space. There are more, of course — there's network utilization, network throughput — but these are really the big three. So if we take the default flavors and start with the smallest one available, m1.tiny — these are just the out-of-the-box OpenStack flavors — that's 512 megs of RAM, one vCPU, and a disk size of zero, which is essentially the size of the image you're booting from. If we were to fill up one entire compute node, this is what it would look like with that number of instances. The RAM-to-CPU ratio is way off: I'm consuming all my CPUs, I'm using almost no RAM, and I actually don't know how much disk I'm using, because with a disk size of zero I can't plan for that. Total instances on something like this is about 48. Now, realistically we're probably not gonna be running a lot of 512 meg instances, but there are certainly occasions where you might. Again, the big issue here is that the CPU-to-RAM ratio is just way off.

So let's take the medium, for example. If we think about the number of instances with the m1.medium flavor — four gigs of RAM, two vCPUs, and about 50 gigs of disk — this is a much more plausible workload; you'll actually see this. Once I've consumed all the CPUs, I'm using most of my RAM, which is good, and I'm using about half my disk space — not bad. That's a lot closer to what we wanna strive for. And if we think about something on the other end of the spectrum, a RAM-heavy flavor — maybe 64 gigs of RAM, eight vCPUs, and a 100 gig disk — here I'm consuming all my RAM, but I'm not really consuming my cores or my disk space efficiently, and I'm only running two instances on the node.

That leads into some of my capacity thoughts. First: don't allow a disk size of zero. You have to start thinking of yourself as a service provider — you're actually acting as a service provider for your internal teams and your internal applications. If you allow a disk size of zero, you're not gonna be able to plan for capacity at all, because how big is image X or image Y? You don't actually know. Most people try to keep them as small as possible, maybe 700 or 800 megs, but you don't really know how big they are. Think about a public cloud model, where they limit flavors based on resources. I work for Rackspace, so I'm pretty familiar with our public cloud model, and if you look at the flavors we use, the resources scale together: the more CPUs, the more RAM and the more disk space you get. For capacity planning, that makes life very easy — they know exactly how many instances fit on a given piece of hardware.
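Just to make that packing math concrete, here's a rough back-of-the-envelope sketch. The node specs below are hypothetical numbers picked for illustration (roughly a 24-core box at 2:1 CPU overcommit with 128 gigs of RAM and 2.4 TB of disk), not anything prescriptive, and the RAM-heavy flavor is a made-up example:

    # Rough back-of-the-envelope flavor packing math (hypothetical node specs).
    node = {"vcpus": 48, "ram_mb": 128 * 1024, "disk_gb": 2400}  # e.g. 24 cores at 2:1 overcommit

    flavors = {
        "m1.tiny":    {"vcpus": 1, "ram_mb": 512,       "disk_gb": 0},
        "m1.medium":  {"vcpus": 2, "ram_mb": 4 * 1024,  "disk_gb": 50},
        "custom.ram": {"vcpus": 8, "ram_mb": 64 * 1024, "disk_gb": 100},
    }

    for name, f in flavors.items():
        # A disk size of 0 means "size of the image", which you can't plan for.
        fits = []
        for res in ("vcpus", "ram_mb", "disk_gb"):
            if f[res] > 0:
                fits.append(node[res] // f[res])
        count = min(fits)   # the first resource you exhaust caps the node
        print("%-11s %3d instances/node (vCPU used: %3d%%  RAM used: %3d%%)" % (
            name, count,
            100 * count * f["vcpus"] // node["vcpus"],
            100 * count * f["ram_mb"] // node["ram_mb"]))

Whichever resource you exhaust first caps the node, which is exactly the ratio problem the m1.tiny and RAM-heavy examples show.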
The trick in a private cloud world is that everybody ends up doing the second thing, which is adding flavors for application workloads. You could think of a RAM-heavy application workload like an indexing service or some sort of caching service. If you run a lot of those, your capacity might end up underutilized in one way or another. So I think there's a middle ground we have to figure out: do you act like a public cloud provider and only give your internal teams access to very prescriptive flavor sizes, so you can understand capacity and trending really well? Or do you go more flexible and allow them to create flavors that match their workloads? I don't know the answer — I think that's specific to every company, and everyone has to make that decision. Lastly, don't forget about network utilization. That's often overlooked — people kind of forget that you only have X gigabits per second on a compute node. It's certainly not something we wanna forget. With the private cloud model, the trick is that if you are gonna go that way, watch trending very closely so you understand where your capacity is going. And if you find that one application workload dominates, plan your next set of hardware to better fit that application — if you're very RAM-heavy, go up on RAM and down on disk.

The saving grace there is that I can always add more machines — I didn't realize this is cut off at the edge — we can always add more compute nodes. Compute nodes are extremely easy to add; hopefully everyone in here knows that. If you haven't stood up your own OpenStack cluster, compute nodes are probably the easiest thing you can add. With our Chef recipes, deploying a compute node — assuming we've already set up the operating system and networking — takes about two minutes. It doesn't take very long at all.

The big catch, without Quantum, is that you can't change the fixed network once instances are already running. That's something I talk a lot about in this discussion, because pre-Quantum it's extremely, extremely prescriptive with the fixed IPs. VLAN mode opens that up a little bit, but you still have to define a fixed VLAN — excuse me, a fixed range — and once you've defined it, you can't really change it without destroying all of your VMs and recreating them, which is not something anyone's gonna do in production. There actually is some ability to define multiple networks in Folsom, even in flat DHCP without VLANs. You can do it — I don't necessarily recommend it, but it can be done. The dashboard doesn't respect it, but if you use the CLI or API, you can do it. So it's something worth looking into if you're gonna be running Nova Network and you wanna break up network space. Along those lines, if you have two networks on the same interface and you just boot an instance, you're gonna get an IP out of both networks; if you specify which network you want, you'll get an IP out of only that network. And this is something I had last year: when you think about resource consumption or utilization, you basically take a resource on the node and divide it by that resource in your smallest flavor. That gives you the maximum number of instances per machine. Again, that's something you need to think about when you're defining the fixed range.
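As a rough illustration of sizing that fixed range ahead of time, here's a small sketch — the reserved-address count and the growth multiplier are just assumptions, pick your own:

    import math

    def fixed_range_prefix(expected_instances, growth_factor=4, reserved=5):
        """Smallest CIDR prefix whose address count covers expected growth.

        reserved roughly accounts for the network, broadcast, gateway and
        dnsmasq addresses that can't be handed to instances.
        """
        needed = expected_instances * growth_factor + reserved
        return 32 - int(math.ceil(math.log(needed, 2)))

    # e.g. 300 concurrent instances today, quadrupled for growth -> /21 (2048 addresses)
    print("/%d" % fixed_range_prefix(300))

The point is just that the prefix you pick up front is the hard ceiling on instances, so err on the large side.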
And if I define a fixed range of 512 IPs, that's the most instances I'm ever gonna get in my one private cloud. So you really need to think about what flavors you're gonna offer and how many of those can be running, and then you probably wanna double or quadruple that just for growth. For most of our customers, I think we start somewhere around 1,000 or 2,000 just out of the gate, even if it's a very small private cloud, just to get going.

So just to keep going on Nova Network for a minute: there are really two networks in play, and three networks in play if you're going to use floating IPs. Certainly not everybody actually needs floating IPs, but they do come in very handy. The host network is extremely easy to deal with — it's how you access the machines, and changing or adding hosts is really not too big of a deal; even growing that network isn't that hard. The fixed network, like I said, is really the important piece. You really don't wanna try to change this once you're in production. That will not be a fun experience — if anyone's had to go through it, you probably know. And the last one is the floating IPs. They're extremely easy to add. There's a concept of pools, which is more analogous to how Quantum does networking going forward, and you can essentially add a pool of floating IPs; if you need more, you create another pool and those become available to your users. Also, assigning a floating IP completely changes the way connectivity to and from the instance happens. I'm actually giving a talk tomorrow at 11:50 just on Nova Network — not Quantum — to go into some of the details of what's actually happening behind the scenes. So if you guys want more information on that, I'm giving another talk on Nova Network tomorrow. There are also a number of Quantum talks this week; I highly suggest going to those as well.

All right, so I'm gonna switch gears a little bit and talk about images and storage. We've started making some images that we can give our customers, and as the bottom says, one of my team members is actually giving a talk on this tomorrow at 1:50 in room C123. But this is essentially what we've chosen, which I think is probably what most people have chosen as well. As far as drivers go, we're sticking with VirtIO for now. We've gone with the QCOW2 format and a bare container — I think people are split between bare and AMI right now. For cloud-init and partitioning, we try to be as dynamic as possible. There was a talk in here earlier about how to deal with images: basically, you need to be able to move the partition table around. If you're using AMI, you can put everything on a full block device and you don't have to move a partition table — when you boot, you can just resize the file system. With the other models you really can't do that, so your operating system needs to know how to move that partition table. We've done that in our images — that's what I'm calling dynamic — so the partition table moves out, and then when the instance boots, cloud-init resizes the file system for you and you're able to make use of that extra space. The other thing with QCOW2, which was touched on earlier as well, is that there is a bit of a performance hit when you're using it. I don't think it's too much — I heard 3% earlier, I've heard 7% before — and you certainly gain a lot of space savings.
So I would highly suggest you investigate QCOW2 for your images if you're gonna be making your own versus taking someone else's. As for the other formats that have been added: I only show three up there, but I think Glance supports seven or eight now, including things like VMDKs and ISOs. There's actually a significant number of formats that Glance supports.

Speaking of Glance, there are a couple of options for where you're going to store your images. Glance is essentially the service that provides the images that compute nodes boot instances from. File-backed is probably gonna be the most common until you reach a size where that becomes unrealistic. You have some alternatives: you have Swift — if you wanna run Swift internally, you can point Glance to Swift — you can point it at Rackspace Cloud Files, or at S3 if you wanted. You could also do something like NFS mounted locally at /var/lib/glance so you have all that space available to you; that's certainly an option as well. As far as performance goes, local file-backed is gonna be your best bet, but you're limited by how much space you have in that one physical server that's backing Glance.

As far as snapshots go, it's realistically hard to guess what your users are gonna be doing. Are they gonna be snapshotting or not? That starts to become a sizing consideration. All of a sudden your two terabytes of disk on the controller matters: if I've got a thousand instances running, any of them could be snapshotting at any given point in time — how do you deal with that space? I know this is a Folsom talk, but I believe Grizzly brings some features where you can actually pick where the snapshots go, which doesn't necessarily have to be the same place as your Glance backend. So it's worth looking into, and definitely take it into consideration when you're building out the disk space on that Glance server.

As far as QCOW2 goes — I heard a lot of the stuff I've been talking about in the last talk about images — one of the best benefits of using QCOW2 right now is that the base image gets copied over to the compute node when you're spinning up an instance, a QCOW2 file gets created on top of it, and that's used as the local disk for the instance. So if I only have, say, one base image, then as soon as I have an instance on every compute node, my base image has already been cached on each compute node, and each subsequent instance takes about a second to get going. The flip side is if I have a very large image — a very large raw file — I have to copy all that data over every time an image boots, and if I've got 20 different images, my local cache kind of becomes pointless. So my whole point there is that it's simpler to standardize on fewer images and leverage automation and orchestration techniques to actually build the stack that's gonna go on them. This is one of those things I don't think a lot of people necessarily think about, but with Glance performance, network throughput becomes really interesting. You could easily consume an entire gig of throughput just copying an image over, especially if it's a very large image.
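Just as a wire-speed illustration of that — the sizes and the 1 gig link below are assumptions, and real copies pay extra for disk IO and protocol overhead on top of this:

    # Rough transfer-time estimate for pulling an uncached image from Glance,
    # purely at wire speed. Sizes and link speed are illustrative assumptions.
    link_gbps = 1.0
    images_gb = {"small qcow2": 0.8, "typical Linux": 1.4, "large Windows": 16.0}

    for name, size_gb in images_gb.items():
        seconds = size_gb * 8 / link_gbps   # GB -> gigabits, divided by link speed
        print("%-14s ~%5.1f s to copy uncached (vs ~1 s once cached on the node)" % (name, seconds))

That's roughly consistent with the 20-second uncached boot I mention in a minute, once you add the disk and boot overhead.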
If you're sharing Glance with the rest of your services — like on our controllers, where glance-registry and glance-api run alongside the backing store — you could easily consume your entire network just with the image copy that happens every time an instance boots. So certainly take network throughput into account. You may wanna look at RAID 5, just due to the large sequential reads and writes that Glance performs. We tend to go RAID 10 as much as possible, but RAID 5 is certainly doable, especially if you want more disk space. You may also wanna prefer disk bandwidth over raw IOPS: you're not doing a lot of small reads and writes, you're consuming very large sequential blocks, so disk bandwidth may be worth prioritizing. And improve the cache hit rate — reduce the number of images you're letting people use, which is basically what I talked about earlier. If I have an uncached 1.4 gigabyte image, it takes me about 20 seconds to copy it over and boot it. That's not too bad, but once it's cached it takes a second — all of a sudden that's quite a big difference. And it especially matters if you start looking at larger images, like Windows images that are 16, 17 gigs — into the teens of gigs. That takes time to copy over, and it will consume your network while it's doing it. It will also consume IO on both your controller and your compute node, and that's IO you're competing with your instances for. So the more cache hits you have on your compute nodes, the better off everyone's gonna be. That's just how I measured the time.

The last thing I wanna touch on for images and storage: there are basically four focus points for storage — Glance, compute, Cinder, and Swift. For Glance, you're really focused on space and, as I was saying earlier, sequential reads and writes. For compute, you're really gonna be focused on random IO, because that's where all of your instances are running and they're all competing for that IO time. A few hundred IOPS across 20 instances all of a sudden maybe isn't so good (there's a quick sketch of that math below). So really look at building for random IO — probably RAID 10, or SSDs if you've got them. Cinder: you're really looking for performance, including network performance, because that's iSCSI, and also density — making full use of your head unit. If your head unit has a lot of cores, you wanna make sure you're building the right density of Cinder disks with it; if you've got a really beefy Cinder node, you might be wasting a lot of your CPU resources. Also don't forget about things like interrupts: if I'm on a 10 gig network, I now have to worry about how many cores my interrupts are consuming at 10 gigs if my Cinder nodes are being used heavily. And then Swift is basically JBODs, and you're mostly worried about density there; I'm not really going into any of the network considerations for Swift.

All right, so just some architecture examples and thoughts — these are mostly the same ones I used last year. One to 20 physical servers is relatively simple to build. We use a single controller, or now that we have HA you may have two controllers; a single API endpoint, which again with HA may be load-balanced API endpoints; and you may have a single network, so everything's running on one network. One gigabit, or two gigabit with some aggregate bonding, is probably gonna be fine for that size environment.
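To put some hypothetical numbers on the per-instance math from the compute and network points above — these node figures are assumptions for illustration, not a recommendation:

    # Rough sketch of how thin per-instance disk and network resources get on a
    # busy compute node. Node numbers are illustrative assumptions only.
    node_iops = 600        # e.g. a modest RAID 10 of spinning disks
    node_gbps = 1.0        # a single shared 1 GbE uplink
    instances = 20

    print("per-instance random IOPS: ~%d" % (node_iops // instances))
    print("per-instance bandwidth:   ~%d Mbps" % int(node_gbps * 1000 / instances))

Thirty-ish IOPS and 50 Mbps per instance is the kind of number that makes block storage and fatter uplinks start to look attractive.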
Just so you guys know, a controller to us is basically MySQL, RabbitMQ, Keystone, all the APIs — so Glance, Nova, the VNC proxy, Horizon — and, what else is on there, the scheduler. Basically all the processes besides nova-compute, nova-network, and the metadata service; everything else is running on the controller in our world. On network utilization, I say one or two gigabits is fine, but obviously your workloads are gonna be completely different from everyone else's workloads, so just plan accordingly for the network throughput piece.

As far as 20 to 100 servers go, it's probably not a whole lot different. Obviously I'd recommend looking at HA controllers, load-balanced APIs, and Swift or Cloud Files as the backend for Glance to deal with snapshots and the additional images your users might be consuming. The major thing there, as I talked about earlier, is that you really might wanna look at limiting the images and the flavors that your users can consume; if everybody can upload images, you're gonna consume that space real fast. You may wanna look at availability zones to separate out hardware based on some characteristic. Availability zones, though, are essentially going away in Grizzly — or they might still be there, but the concept of cells is coming in, which is really interesting. You may also wanna consider front-end and back-end networks, like in this model here, where basically I've got my fixed IPs on a completely different interface. So my VM-to-VM traffic is happening on, say, eth1 of my compute node, but nova-compute talking to Rabbit, all my Nova services, Glance — all of that is happening on eth0, so I've separated out those network workloads. I do have Cinder in the same network space here, but you can have a dedicated Cinder network as well; certainly if you have a large block storage installation, you'll probably wanna do that. As far as metrics go, collecting metrics from compute nodes at this many servers, you're probably gonna want dedicated machines for that. We've seen that at somewhere around 40 or 50 physical servers, the IO for collecting all the metrics possible from the compute nodes starts to become pretty heavy.

So, some performance considerations and bottlenecks. I think IO is most likely gonna be your problem. Like I was saying earlier, random IO becomes quite a big deal on the compute nodes depending on how many instances you're running, so you're gonna wanna try your best to reduce IO per instance overall. Block storage helps a lot there: your major IO operations, like databases, are pretty heavy, and for anything that's reading a lot from disk you'll probably wanna look at block storage — just get the IO off your local disks and throw it in Cinder. But that does create more networking load, so it's certainly something worth thinking about. Review hypervisor best practices, that's the other thing. Depending on what hypervisor you're using, there's probably a best practice for how to build those images. We chose VirtIO for now, but the vhost-net module looks pretty promising for performance. So certainly make sure you're building things correctly for your selected hypervisor. I guess I should have put that on the first slide — that's the first thing you have to figure out: what hypervisor are you gonna pick? I don't know how many we're up to now, five or six hypervisors.
That's probably the first thing you should figure out when you're building one of these.

Some lessons learned. Floating IPs must be associated with the interface set by the public_interface flag. That's been an interesting one that we've been fighting somewhat. The public_interface flag sets up a number of iptables rules — I'm gonna go into that a little bit in my Nova Network talk tomorrow — but basically your floating IP network needs to be available on the interface and VLAN that public_interface is set to. Each piece of OpenStack has its own architecture. People like to show that really complicated diagram from Ken Pepple's site; it was in the images talk earlier, or one of the talks earlier. OpenStack's still pretty complicated, and each piece has its own architecture: Swift has a completely separate architecture, Cinder basically has its own, Nova has its own. So make sure you focus on all of them and not just one. Folsom is stable. Like I said, we've been running it for customers for the last five or six months, and since Essex, OpenStack's been getting more and more stable. So if you're still wondering whether OpenStack's stable or not, I wouldn't even question it at this point, and I'm really looking forward to seeing what Grizzly brings. Migration — either live or block — works, but there are certainly situations where it doesn't. I would suggest, if at all possible, don't rely on these mechanisms. I know some of the other private cloud vendors are doing some stuff to make live migration work really well, but I think that comes back to the discussion of whether you're building traditional virtualization, like VMware, or an elastic cloud. In an elastic cloud world, live migration shouldn't matter to you — but that's kind of a hard place to get to. OpenStack keeps changing, obviously; that's why we're all here. So keep up to date on all the current projects — that would be the other thing. We're up to five or six core projects, but there are another 10 or 12 smaller projects that people are talking about pretty heavily now: Heat and Ceilometer, Oslo, this TripleO — OpenStack on OpenStack. There's all kinds of stuff going on now, so keep up to date with the community. And the other thing — I added this really recently — is don't do heterogeneous nodes. That means from a network standpoint; it's actually okay from a compute standpoint, because the scheduler is gonna deal with that — it's gonna put instances where you have resources. But from a networking standpoint... I'm not sure if anybody actually is doing heterogeneous nodes — does anyone? Does anyone have networking that looks different from node to node? Yeah, all right. So pretty much every slide here could probably be its own talk. You could have a whole talk about networking, a whole talk about Glance, an entire talk about Keystone and everything else. Trying to cram it all into one is kind of difficult, but certainly think about every project when you're building your private cloud. And this is pretty much it.

So, some operational updates from Folsom. Last year I gave a talk about operating OpenStack; I'm not giving one this year, but there were a number of new Nova calls added — things like hypervisor-list and hypervisor-stats. There's a number of calls that were added to Nova so that you can actually get statistics from your hypervisors about utilization and the instances running there.
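If you'd rather hit those from code than from the nova CLI, a minimal sketch with python-novaclient might look like this — the credentials and endpoint are placeholders, the attribute names are what I'd expect from the os-hypervisors extension of that era, and the calls need an admin role:

    # Minimal sketch: pulling the new hypervisor statistics via python-novaclient.
    # Placeholder credentials/endpoint; requires an admin user.
    from novaclient.v1_1 import client

    nova = client.Client("admin", "password", "admin-tenant",
                         "http://keystone.example.com:5000/v2.0/")

    # Per-hypervisor view (roughly what `nova hypervisor-list` / `nova hypervisor-show` expose)
    for hv in nova.hypervisors.list():
        print(hv.hypervisor_hostname, hv.running_vms, hv.vcpus_used, hv.memory_mb_used)

    # Cloud-wide rollup (`nova hypervisor-stats`)
    stats = nova.hypervisors.statistics()
    print(stats.vcpus, stats.vcpus_used, stats.memory_mb, stats.memory_mb_used)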
So make sure to use that. Image types in Glance — I talked about that earlier as well — we have new format types. And policy.json: I highly recommend investigating that if you're looking at limiting API calls based on role; you may wanna look at the policy.json files. And then what's coming in Grizzly — the main things, at least in my opinion, are the cells concept and Quantum for networking. Quantum was here in Folsom, but I think it's actually probably ready in Grizzly. And then better LDAP and AD support is one of the big things we're hoping for. And that's pretty much it.

So, this is also a design summit, so I'm completely open — we have about 10 minutes left. Questions, discussions, thoughts. Anyone?

Yep, I've actually wondered that myself; I haven't looked into it. I believe — I don't know, does anyone know the answer? So the question is: if you have Swift backing Glance, when you're booting an instance does the image get delivered from Swift to the compute node directly, or does it go through the controller and get delivered by the Glance process? Okay, in Folsom it goes through Glance, but it sounds like in Grizzly it does not — the compute host will actually pull it. So in Folsom it sounds like it does, which is what we're talking about, but that's changed since Grizzly's been released. It's a great question; I've actually wondered that myself, I just haven't investigated it.

Yep, yeah. So how do you know your utilization ahead of time? He's basically wondering: how do you find all the information about utilization and performance before moving over to build a private cloud — how do you get this from your existing world? Yeah, so it's probably a pretty traditional requirements-gathering effort. If they already have that workload somewhere, then you can obviously gather the performance of what it requires. It becomes very application-centric too: you wanna focus on one application and find its IOPS and network throughput. If it's not already running, that becomes a conversation with the customer — what do they think they're actually gonna be consuming? I don't think there's a magic bullet there. But obviously, if it's a workload that's already running, you can gather those metrics through any number of systems that exist; I just don't think there's a simple magic answer. Obviously, if you're designing as a service provider, like a public cloud model, that's a little different — you're basically saying, this is what we're gonna put out, you have to use this. That's a little different story. So are you building to customize for their application, or are you building to be a service provider, which isn't gonna be customized for anybody — it's just gonna be a model they have to consume? I think that's a decision you have to make.

Sorry, I didn't follow — what was that? So the question is: once you have the numbers, how do you build for that? Again, I think that's traditional architecture; we're still running Linux infrastructure. If I know my IOPS and my throughput requirements, I can certainly still build for that. That's pretty standard — I don't think that really changes, to be honest. Now, there may be performance considerations based on what hypervisor you're using and what kind of performance you're gonna lose.
What percentage of performance are you gonna lose? But I don't think that changes too much from traditional Linux architecture.

So I've got a couple of questions — right, Eric? Okay. Yeah, so you have customers that have requirements to use different hypervisors. Is that because they have existing hypervisors, or they want Hyper-V because they have a bunch of Windows and they want KVM because it's free? So, we don't run multiple hypervisors in our private clouds. I probably wouldn't suggest doing it right now, but theoretically it can be done. I have no experience with it, so I really can't say one way or another — I wouldn't recommend it, though.

So in our HA setup — a couple of these guys might actually be able to answer this better — but I believe for those services that you can't run multiple of, we're restarting them on a failover. Is that pretty much it? I don't know — do you guys have anything to add there? Yeah, we didn't use Pacemaker and Corosync; we're leveraging keepalived. The team that's really worked a lot on HA is here, so if we have more questions afterwards, we can certainly pick their brains on how they're pulling that off.

Where? So again, I'm not overly experienced with the VMware use case; I haven't built a VMware private cloud with OpenStack. Okay, so to use floating IPs or to not use floating IPs? Yeah, that's certainly tricky. That becomes one of those questions of: should you even try to forklift an application that's not cloud-aware and put it on the cloud, on an elastic cloud? That, I think, is the million dollar question. That's a problem a lot of people are having right now — they're trying to do that and they're running into problems. It's like trying to do traditional HA on top of OpenStack — you can't really do that. I don't know if there are any good fencing mechanisms for moving a floating IP. I really think that just forklifting a traditional application over and throwing it on top of OpenStack is a bad practice, but it's certainly one that a lot of people are gonna be doing.

So, absolutely. Yeah, absolutely — that was a two-to-one oversubscription. We let customers decide what they want their oversubscription to be, and since it's a flag and a process restart, it's pretty simple to increase later. We start at two to one, and we don't oversubscribe memory currently. If a customer wanted four to one, eight to one, twelve to one, we could certainly increase it for them; we want them to establish their workloads, then look at utilization and bump it up from there.

More questions here? No. I think it's RAM or disk space, to be honest. What's that? Disk IO can certainly be a problem — I actually think it's a bigger issue than RAM or disk space as far as performance goes, so I would probably agree with you. RAM, when you boil it down, is really gonna be your limit. vCPUs can be overcommitted a lot easier — that's a pretty easy process to deal with — but RAM is pretty much the gating factor. I would agree with that.

It depends on what you're trying to accomplish. So he's asking about network performance: is it more latency-focused or more throughput-focused? Again, things don't really change from a traditional architecture there.
If I'm storing stuff in, say, Rackspace Cloud Files, I now have a latency problem, and I also have a throughput problem — it's not gonna be nearly as fast as my local disks a hop away. I'm now going out to a public cloud provider, so it's naturally gonna be a lot slower, but I have a practically infinite amount of space I can use. If I start thinking about instance-to-instance communication, latency might become an issue, but throughput is certainly gonna be a concern, because that's a traditional virtualization problem: if I've got 10 VMs running on a hypervisor with a one gigabit pipe, that's all I'm gonna get out of it — I'll never get any more. So again, I don't think that changes too much from the traditional virtualization model. I would say that a lot of building an OpenStack cloud doesn't differ too far from a traditional model at the infrastructure layer, but there are things you have to account for, like Glance, and Swift is a different animal as well. I would argue that it's not too far different. Right.

So yeah, he brings up the shared storage piece, but I would argue that a lot of people in here are probably using shared storage for their instances. My thought on that is that you should really be focused on building for the elastic model and not dealing with shared storage at all, but there are certainly a lot of people who build it that way. That touches back to my first point: you kind of need to make a decision — are you building traditional virtualization, or are you trying to build for elasticity?

There was a question over here. So the question is basically: if I have multiple data centers that are geographically separated, what about the performance of Glance copying between one and the other? I would probably put my Glance images in a public cloud provider and use the URL, and have the systems pull down the Glance images — that would probably be the model I'd go with instead of storing them locally, at least in Grizzly, as we learned earlier; in Folsom you might not have the option. I didn't talk about that: in Glance, you don't actually have to be file-backed. You can store a URL location for an image, so the image could be sitting on, say, Ubuntu's website, or it could be in a private container in Cloud Files. You don't actually have to store the image locally, but obviously going out and making that request is gonna take additional time, so that becomes a trade-off of convenience versus speed. I would say strive for fewer images and standardize if possible, but certainly, as one of the earlier questions brought up, people are trying to move workloads over without thinking about them from the ground up — how do I build the stack on this instance, versus just having this one very important, specific application image? Those are the questions people keep having. It's hard to force a business to try and change — I get that — but I think you wanna try, though, yeah.

That's pretty much time, I believe — is that correct? I'll be around, yeah. There are a number of other private cloud guys from Rackspace in the room; we'd be glad to answer any questions you guys have.