Hey, thank you. Thank you for being here this morning to learn about how we as a support team at Rackspace take care of over 100 private cloud customers every day. There are four of us talking today. Behind me is Darren. He was born and raised in San Diego by a pack of wolves, and he's busy raising his kids now up in Dallas-Fort Worth with the same pack of wolves. He's the second-shift guy on Rackspace's private cloud engineering support team, here to solve all of your cloud problems. I am Mark DeVerter. I've been at Rackspace for a little over five years. The one thing I've really learned at Rackspace is to hang on and get ready for what's coming next, because sometimes it's not a smooth ride. The big guy over here is Chris Woodard, and he's here to prove that somebody, even somebody with a Louisiana public school education, can work on OpenStack. Berto behind me hails from North Austin. His full name is Alberto Rivera Laporte, and North Austin is where people with names of that complexity come from. He lives with his wife, Diana. He's got two dogs, Leo and Ryder, and a cat, Daisy. He didn't give me a picture of the cat. This is the internet; he failed us by not giving me a picture of the cat. Anyway, to move on, I'm going to hand it over to Berto, and he's going to talk to you about OpenStack support, one year later.

Hey, greetings, guys. So what Mark actually failed to tell you is that at the end of this presentation, there's going to be a real nice cat waiting for you out there. So congratulations, you're all going to be cat parents. I wanted to ask everybody a question here. How many of you are OpenStack operators in the room? Oh, man, this is sweet. OK, how many of you are actually new to OpenStack in general, have very little experience, haven't worked on it before? OK, fantastic. This presentation is for you guys.
So this month actually marks my first year working in, or supporting, OpenStack in a professional capacity. So I've survived my first year. It's been really interesting and very exciting, and I'm happy to share a lot of my perspectives on what goes into supporting this type of environment. When I started working with OpenStack, this was my perception of what the average environment would look like: very simple. But that's not always the case. The diagram that you see here does a good job of giving you an overall perspective of what an OpenStack environment looks like, but when you actually start digging into it, you start uncovering the fun stuff. Seeing that for the first time, my mind just went absolutely blank. So this is what a logical architecture in an OpenStack environment looks like, keeping in mind that it does not take into consideration any of your routing, switching, hosts, or any of your physical layout. It does do a good job of conveying the complexity that we're involved with on a day-to-day basis. To the eyes of a guy that just started doing this, a novice, you feel like you've just been entirely trolled by OpenStack. But it's like that with any type of technology, right? It seems a little bit insurmountable at first to become intimately familiar with a lot of this technology, especially as a new guy. But like everything, it just takes time, patience, and determination, and then you should be able to get a decent idea of what it is, even a year later. Now let's explore some of the differences between traditional Linux support and what you do as an OpenStack administrator, for those that have never supported OpenStack before. A lot of what I'm going to present next is very subjective to the size of your organization, right?
There are small shops or small organizations where you see the Linux team pretty much handling everything that you see there and more, and there are also large organizations where there are multiple Linux teams, each focused on different disciplines. So I just wanted to point out the difference there. Now, as far as how different this is from what you do as an OpenStack administrator: really, not a lot changes. You just get to wear a lot of different hats at the same time. You get to do everything that you see there in a traditional Linux support environment, and you might be exposed to a lot of different technologies that some of you may or may not have any experience working with. I mentioned containers there because, as Rackspace support engineers, we utilize OpenStack-Ansible as the deployment method for our OpenStack deployments, which uses Ansible as well as containers. We use those two technologies to deliver the installation, maintenance, and upgrades of the different planes. Networking configuration is still one of the real challenges of understanding OpenStack, as far as the deployment methods go. It uses a lot of different technologies that, a lot of the time, I hadn't seen. I've been doing Linux for close to 10 years now, and it wasn't until I started working with OpenStack that I had to learn about Linux network namespaces. My mind was entirely blown by the concept at first, because you typically don't see it in a day-to-day environment. And then you have to deal with Linux bridges, Open vSwitch if your Neutron deployment uses it, and the same thing with iptables. There are lots and lots of iptables rules that you have to familiarize yourself with.
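To give you a feel for it, on a Neutron network node those namespaces are right there to look at. A few inspection commands might look like the following; this is a hedged sketch, and the qdhcp/qrouter suffixes are per-network and per-router UUIDs, so the `aaaa-bbbb` below is a placeholder you'd replace with a real UUID from the listing:

```shell
# List the network namespaces Neutron has created on this host.
# DHCP namespaces are named qdhcp-<network-uuid>, router
# namespaces qrouter-<router-uuid>.
ip netns list

# Look at the interfaces and addresses inside one of them
# (substitute a real UUID from the listing above).
ip netns exec qrouter-aaaa-bbbb ip addr show

# On a Linux bridge deployment, see which bridges map to which
# tap and VLAN/VXLAN interfaces.
brctl show

# And the iptables rules: NAT for floating IPs lives inside the
# router namespace, security group rules on the compute hosts.
ip netns exec qrouter-aaaa-bbbb iptables -t nat -S
```

None of this changes anything; it's all read-only inspection, which makes it a safe way to start connecting that spaghetti diagram to what's actually running on the box.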
In addition to all of the challenges you're seeing in having to figure out the physical components and all the other infrastructure, you're also responsible for supporting all of the different components; I'll point back to the logical diagram that we were looking at. We have to dig very deep into a lot of these Python messages or stack traces and identify what is a normal condition. Is this what it's supposed to be doing? Is this a bug? And given the interdependencies amongst all the services, a lot of the errors are very obvious, but a lot of them require a lot of digging. But like everything, with some patience and determination, you're rewarded with a greater understanding of the whole picture. Even a year later, my picture is still only almost there, but now it's great. In addition to this, you have to educate and help the consumers of your cloud, meaning your tenants, your projects, your users, at all levels of experience, on top of your day-to-day system operation responsibilities: break-fixes, continuous improvement, infrastructure life cycle, backups, security, and all that stuff. Now, some of the strategies that have been useful to me as a new OpenStacker, things I've discovered along the way that have helped me understand this: divide and conquer, right? And here the term divide and conquer means dividing all of the different components into what they call planes. You're going to be hearing these terms throughout the conference: control plane, network plane, maybe compute or storage plane. It's identifying all those different components and their level of criticality, right? So the control plane, which consists of all of the services that are shared among all of the OpenStack services, is what I think is one of the most critical, right?
So having a great understanding of how it performs and how well it scales is very critical to any OpenStack infrastructure. Followed by, and I can't believe I didn't put it here right underneath the control plane, the authentication plane: your Keystone. You have to understand it because, as a service that everything depends on, all of the other projects utilize it. So it's good to have a great understanding of what that entails, how to troubleshoot it, and so on, to get a better idea of the entire picture. If you are a new user, or someone that's still a little bit on the fence or still looking to explore a little bit more: start small. Even within the past year, there have been great changes in the number of all-in-one distributions out there that you can use to familiarize yourself with the different projects. And if you were like me, intimidated by that crazy spaghetti diagram we saw at first: it's really rare to see all the projects implemented at the same time. So that's one thing. With that in mind, take time to explore your core projects, right? Your Keystone, Nova, Neutron, Cinder, Glance, and Swift. And join the different mailing lists that we have, the IRC channels, your OpenStack meetups. I'm blessed that here in Austin we have a really big OpenStack presence, and there are a lot of meetups that happen on a monthly basis. Because of my schedule I'm not able to attend them all, but man, it's great. It's a rich resource to have, to interact with other OpenStack users. And of course, events like the Summit here. That's pretty much it for me this morning. I'm going to go ahead and pass it to Darren Carpenter here.

How you doing, everyone? I'm going to be talking a little about public cloud versus private cloud: what's different, what I've seen. Still kind of a high-level overview, but let's keep going.
So things changed quite a bit when I went from supporting public clouds to supporting private clouds. Regardless of which cloud you're supporting, one key thing always stands at the forefront of my mind: I'm here to serve. I don't want customers using services that they're unfamiliar with, or making simple mistakes that I could easily help them avoid. In the public cloud, imagine each support team being compartmentalized from a support aspect. Everyone tends to have a specialty or focus that they're assigned, and your success is measured on how deep your understanding is in that focus and how that translates to a happy customer. We had folks specifically working with Nova, others with Cinder, others tailoring images for consumption, and so on. In the private cloud, just imagine being thrown into a canyon, left to fend for yourself, wearing a tunic and wielding a crudely made spear. The environment around you changes every six months, so the little shelter that you built to get yourself out of the elements needs to be in a constant state of growth. With that in mind, from the viewpoint of a support manager, the type of folks that you bring into your fold is a very important decision; not that it ever isn't an important decision, it's just that you have to consider that not every great thinker is a great adapter. I realized shortly after transitioning to a private cloud role that attempting to sit pretty as a master of one or two things simply was not going to be acceptable. Support in a private cloud is like being a Swiss army knife, where every tool that you have at your disposal is constantly being used to the point of going dull. With that, the requirement to constantly sharpen your skills becomes paramount to the future success that you're going to achieve.
Outside of simply having a support team where members of the unit come from various backgrounds and have various levels of familiarity, it's benefited us to have a small team of folks dedicated to research and development of a training regimen. These folks could be chosen simply due to their tenure, but in my opinion it works better to find folks with a training background and meld those two groups together; we've just had the best results with that. Not to say that in the public cloud you should not also receive training, but from what I've noticed in more generalized roles, the will and desire of the individual determine how much training they're going to pursue. Whereas in the private cloud, it needs to be part of the actual job description to receive frequent training on new and upcoming technologies. Working on the public side, I've run into a very wide range of customers. These customers can be mom-and-pop startups, small development groups testing something out, and a variety of other types. The other end of the spectrum can be large companies that simply don't want to invest in the hardware and expertise required to operate things privately. The trend I've noticed with these large companies in the public realm is that they're testing the waters to see if this is indeed something that they want to do. The problem I saw with that was the fact that resources on these hosts are shared, and the environments created tend to be deployed similar to a dedicated offering, in that redundancy and re-creatability are sometimes limited, and this sometimes left them with a bad impression. Let me stop there to make a very important point, in my mind. Regardless of the route you choose, public or private, the architecture of your deployment needs to utilize the strength of the cloud. That strength, in my opinion, is the ability to deploy quickly, without the overhead and delays encountered in procuring and bootstrapping new resources.
Far too many times in the public cloud, I had customers running into issues where they took the dedicated mindset of a single machine, or set of machines, that must stay up. What I found in all these environments is that the data, of course, needed to be accessible, but it wasn't placed on a medium that could easily be transferred to another host in the case of a down event. Ideally, recovering from a down event in the cloud should be as simple as bringing up a new host via configuration management, potentially even automated against a monitoring failure, and associating the required data with that host. Afterwards, you pull the problematic device out of the equation, and once the swap is complete, throw it away. If you were using Rackspace's public cloud, this would typically involve a cloud load balancer in front of the respective service with various nodes behind it. RCAs would be limited to repeat issues with hosts of a similar type, saving time and effort for your staff in the end, but requiring a little bit more preparation on the front end. Now please understand, my experiences do not define every workload or data set; this is just kind of what I've run into in my time amongst the clouds. On the private side, the customers begin to look a little bit more similar. They're typically larger, and have a legacy dedicated environment that they're attempting to shift onto their cloud platform. These companies typically have a little bit more experience with cloud, though they haven't necessarily graduated from some of the mistakes they made in the public cloud. One key thing that they do understand is that owning the entire cloud presents some new problems, but it prevents other people's problems from becoming issues that they have to solve for within their deployment. Supporting the public cloud revolved more around knowledge of the APIs and the ecosystem that was built around the environment.
That compartmentalized type of support was kind of the norm once I moved into an operations role, since the environment gets so large at that point that it's no longer feasible to have a single individual or a group of individuals supporting it. Specialist-minded folks seem to thrive here, as they can pick a particular project they're interested in and dig deep. Personally, I've always been more of a jack-of-all-trades, which is why I feel very comfortable working in a private cloud support role. Rarely, if at all as far as I can remember, did I touch Keystone in the public cloud, whereas in the private cloud, manipulation of users and tenants is rather commonplace. Upgrades definitely become more of a job in the private cloud. With the frequency of releases, you're typically in discussion with one or another customer about the version of OpenStack they're running and how they can get up to the latest and greatest, with all the fancy bells and whistles. With that in mind, it would benefit you to have a drafted plan in place for environment upgrades and how your instances will be handled during these events. Do you shut all the instances down at once? Do you shut them down per compute host? Do you restart instances in a particular order? Maybe migrate the instances to available hosts? That would probably be the preferred option, in my opinion. Another benefit of private clouds is that they provide you with greater flexibility in terms of what modules are utilized, for example, if you want to use Neutron with OVS versus Neutron with Linux bridge. In the public cloud, you're pretty much stuck with whatever the vendor decided is going to work best for them and their environment. Multi-region deployments are another thing that needs to be considered, as many companies have environments, locations, and customers scattered across the world.
From the public cloud standpoint, when dealing with a relatively large provider, it's much easier to get a mirrored environment in multiple locations, as part of their selling point is going to be that they already have a presence. Contrasting that against the private cloud, this would typically be limited to wherever your company itself has a presence, assuming you were going to host it in your own data center. If you were using a cloud provider, you could, of course, utilize their data centers. Public cloud, in my opinion, gives you greater agility in this area, simply because the hardware is already in place. It just becomes a matter of determining what is needed, spinning it up, running some configuration management against it, and then sending it out into the wild. Whereas in the private cloud, you may need to order the hardware, do your initial kick, add it to the environment, boot your instances, run configuration management, and so on. Another thing I noticed is that there seems to be a lack of a deep talent pool when it comes to cloud computing, both for the public cloud and the private cloud. The benefit the public cloud has is that a solid system administrator can usually pick up the nuances of supporting a public cloud environment relatively quickly, while in the private cloud, they'll need to get their feet a little bit more wet, just because you have a tendency to deal with quite a bit more variety on a daily basis. But with that in mind, utilizing the expertise of cloud providers and their support teams becomes beneficial to organizations that don't want to hire new staff or invest heavily in training their current staff on cloud concepts. One thing I can definitely say benefits both public and private cloud consumers is to lean heavily on configuration management. And if you're an administrator, it would behoove you to become familiar with one or more of the configuration management tools.
If you're logging into every set of hosts whenever a piece of your application stack goes down, in my opinion, you're doing it wrong. In the public cloud, security and privacy are a slightly bigger issue, considering, again, that your instances are running on shared resources. I would suggest that you pay close attention to the news regarding the hypervisor and tools being utilized by your provider, along with obviously keeping up to date with the same things for your application stack. The last piece of advice I'd like to leave you with is, regardless of which type of cloud you're using, public or private, ensure your environment is mirrored in at least two different data centers, because you never know when some poor man in his pickup truck is going to plow into an electrical box outside the data center where your data is hosted and cause a hard-down event. That's all I've got for you guys today. My colleague, Mr. DeVerter, behind me is going to be speaking to you all a little bit about escalations.

Thanks, Darren. So we all want our clouds to work, and they should work, because they're redundant, right? We've got hypervisors everywhere. We have control planes that are multifaceted. So why do we have so many escalations? And it does happen; escalations come to us almost on a daily basis at Rackspace. In my experience, it's always Rabbit. Something else could fail, but that's going to take Rabbit down. Rabbit doesn't like disruptive network traffic. It starts to partition. It'll lose your queues. You're going to end up with backed-up queues, like the Neutron workers'. It's really a bad situation. You really have to care for Rabbit. That's your pet. You really have to pet your rabbit. Look at this, the same diagram that Alberto showed you. Over there in the center, bottom sort of center-left, that's Rabbit, in the middle of all these API queues. Every API message goes through Rabbit.
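Since every message passes through those queues, it's worth being able to look inside them. As a sketch (this is not the script from the slides; it talks to the RabbitMQ management HTTP API, which the rabbitmq_management plugin exposes on port 15672, and the host, vhost, queue, and credential values are all placeholders you'd substitute for your own), pulling a message out of a queue and re-injecting it might look like:

```python
import base64
import json
import urllib.request
from urllib.parse import quote


def queue_get_url(host, port, vhost, queue):
    """URL for popping messages off a queue (the '/' vhost encodes to %2F)."""
    return "http://{}:{}/api/queues/{}/{}/get".format(
        host, port, quote(vhost, safe=""), quote(queue, safe=""))


def publish_url(host, port, vhost):
    """URL for publishing; amq.default routes by routing_key to the queue."""
    return "http://{}:{}/api/exchanges/{}/amq.default/publish".format(
        host, port, quote(vhost, safe=""))


def basic_auth_header(user, password):
    """HTTP Basic auth headers for the management API."""
    token = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    return {"Authorization": "Basic " + token,
            "Content-Type": "application/json"}


def pull_messages(host, port, vhost, queue, user, password, count=1):
    """Pop up to `count` messages (acked, so they leave the queue)."""
    body = json.dumps({"count": count, "ackmode": "ack_requeue_false",
                       "encoding": "auto"}).encode()
    req = urllib.request.Request(queue_get_url(host, port, vhost, queue),
                                 data=body,
                                 headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())


def reinject(host, port, vhost, queue, message, user, password):
    """Publish an inspected message back so it can be processed normally."""
    body = json.dumps({"properties": message.get("properties") or {},
                       "routing_key": queue,
                       "payload": message["payload"],
                       "payload_encoding": "string"}).encode()
    req = urllib.request.Request(publish_url(host, port, vhost),
                                 data=body,
                                 headers=basic_auth_header(user, password))
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())
```

Against a live broker you'd do something like `msgs = pull_messages("rabbit1", 15672, "/", "q-agent-notifier", "user", "pw")`, eyeball `msgs[0]["payload"]`, then `reinject(...)` with that message so it still gets processed. Remember that pulling with `ack_requeue_false` removes the message, so re-injection is not optional if you want it handled.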
Now, if you're talking about large Neutron Heat builds, the Neutron workers go out of their minds because there are so many requests coming in for ports, but I'm going to get to that in a minute. Here's the easy thing to do. In the Rabbit documentation, take a look at the link; you can also pull this down from our SlideShare. Rabbit says the default is 30 Erlang IO thread workers, but they recommend 128. In the OSIC cluster, which is 1,000 nodes, I was working with about 512 of those servers, building 800 instances. It took me about 17 minutes, without changing anything, to build 800 servers, with about a 25% failure rate. Most of those were due to Neutron failures and some Keystone stuff, but that was also related to Neutron. I'll get to that in just a moment. So look at your IO workers. This could be an all-in-one deployment, or however big your private cloud is. You've got to care for your Rabbit; bring those threads up so everything can work properly together. When you're seeing some failures in Rabbit, here's a simple script; again, you can pull this down from our SlideShare. This will pull any message out of a queue that you want, if you have the username and password. It'll pull the queued message, let you look at it, and then re-inject it so it can be processed properly. It gives you an idea of what your Rabbit is doing at this point. So what are the other issues? Well, Heat builds. We see this a lot with our customers that are bursting up and bursting down. Every four hours, they might bring up a few hundred instances; they'll tear them back down; another hour, they're bringing them back up. Those bursts are overrunning your Erlang message queues, but your Neutron workers really suffer. The default here, I think in neutron.conf, is 16 workers. If you're bursting that big, you've got to really increase those Neutron workers. I brought them up to 128 in our OSIC environment.
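To make those knobs concrete, here is a hedged sketch of what the two changes might look like on disk. Exact file locations and option names vary by distribution and release, and the values are simply the ones from this talk, not universal recommendations:

```ini
# /etc/rabbitmq/rabbitmq-env.conf
# Raise the Erlang async IO thread pool from the default to the
# 128 threads the RabbitMQ documentation recommends.
RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+A 128"

# /etc/neutron/neutron.conf
[DEFAULT]
# More API workers to absorb bursts of port requests, and more
# RPC workers to drain the Rabbit queues.
api_workers = 128
rpc_workers = 128
```

Both services need a restart to pick these up, and worker counts trade RAM and CPU for throughput, so gauge them against your control plane hardware rather than copying these numbers blindly.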
I didn't see any detriment to load, so it seems pretty safe to me. But without increasing that, if I monitored the Neutron queues in Rabbit, the Neutron queues were bursting to 7,000 messages. That's all these compute hosts requesting ports, ports trying to get sent back, all of this going through Rabbit, and all of this having to go through Keystone. And that's the other part: your Keystone workers. Once Neutron starts bursting, Neutron's going to start hitting authentication failures: too many connections, I won't do it anymore. So look at your Keystone workers. You have to gauge that against your environment, but this is how you start preventing your end users from coming to you and saying, why is everything failing when I try to do my Heat build? So put your focus on your infrastructure. Whether it's an AIO or, like at Rackspace, where our builds are in quorum, so we have to have three, five, or seven infrastructure nodes (that keeps Galera happy, that keeps Rabbit happy), look at what you're going to have to do. Look at your CPU power, look at your RAM, and gauge your infrastructure accordingly; that keeps down your escalations. Other stuff: Neutron. All of this is networking, whether it's VXLAN, VLAN, or GRE; it's all networking at the bottom end. I'm going to throw a shameless plug in right here for my friend James Denton: buy this book, Learning OpenStack Networking. The second edition just came out, but you should know this backwards and forwards if you're a senior engineer on your support team, or expect your support teams to know what they're doing with Neutron. You have to know it; this is the best book there is. There's a new one that just came out, OpenStack Networking Essentials. It's a little bit lighter, more topical; it'd be good for your junior admins to learn from. But you have to know Neutron, because if your routers start failing, your instances go offline and they don't have IP addresses anymore. How are you going to troubleshoot that?
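A first pass at that kind of router troubleshooting might look something like this; a hedged sketch only, where the router UUID is a placeholder and the instance IP is an example address:

```shell
# Is the L3 agent hosting the router actually alive and checking in?
neutron agent-list

# Find the router's namespace on the network node and confirm its
# interfaces, addresses, and routes are still in place.
ip netns list | grep qrouter
ip netns exec qrouter-aaaa-bbbb ip addr show
ip netns exec qrouter-aaaa-bbbb ip route

# Can the router still reach an instance's fixed IP on the tenant
# network? Ping from inside the namespace, not from the host.
ip netns exec qrouter-aaaa-bbbb ping -c 3 192.0.2.10
```

If the namespace or its qg-/qr- interfaces are missing, you're looking at the L3 agent; if the namespace is fine but the ping fails, you start working down the bridges and security group rules toward the compute host.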
By the way, he'll be selling the book tomorrow. We missed it today; it was earlier this morning. And what? It's free. What's free? The book is free. The book is free? Oh, OK, the book is free. Tomorrow you can get that at the Rackspace Cantina at three o'clock. What else? Our move to MariaDB and Galera, over MySQL: master-slave replication and master-master replication had been a problem for us at Rackspace. We moved to MariaDB with Galera, and I've seen maybe one replication failure with Galera doing all the work. And when it failed, you just restarted Galera. It's an amazing product. So that was another big source of escalations, replication failures, and it was a great move for us. I recommend it for your private clouds; go with that. And with that, I'm going to give it over to Chris Woodard.

Thanks, Mark. OK, so you've heard about one year later being in OpenStack support, coming from public cloud to private cloud, and then escalations. So I'm going to do a very, very high-level overview of what architecture is and how we design for customer-specific needs. Architecture comes from a Greek root, to build or to create. And remember, we're doing a really high-level overview here. So if the internet is the information superhighway, architecture is the roads, and we would be the civil engineers. Typically, when people are talking about architecture, they're thinking about a product, so it'd be some sort of reference architecture, and really that's just a set of standards for your product. RPC is Rackspace Private Cloud, and this is the reference architecture for it; sorry, this one is on Icehouse. You're going to hear a lot of the same themes here; some of these guys were covering some of this before. In our earlier deployments, like Havana, we did a Chef-based deployment. We had two controller nodes, used HAProxy and OVS, and were doing active-active replication.
So a lot of the things that we learned, and some of the pain points that we experienced there, we rolled into Icehouse, and we made some design decisions to have a production-ready cloud. We went with containers for seamless upgrades. We did Linux bridge because we experienced issues with OVS. Well, this template has been changing as we've learned other things: Cinder is now on bare metal, and we have a dedicated logging server. And as we add additional projects or features, these little asterisks on storage will continue to grow as we support additional items. But essentially you have your compute, logging, and infrastructure, and those are your core components that we can just add on to. As we grow to regions or anything else, this will adapt and grow for that. So really, learning what the customer needs and how we design for that is about asking the right questions. Who are you? What do you do? Who's your customer? What do they do? And what does your workload look like? Is it bursty? Is it sustained? Do you have high IO? What are you doing there? And what is your environment? Are you doing straight L2? Are you doing L3? Do you have a ton of Neutron routers that are using floating IPs? Do you have multi-region? Are we looking at a DR solution here? And then we also talk about your roadmap. What are your immediate goals? Long term, what's on fire? What can we immediately do to make your life better? And so we'll take that information and process it. What will benefit you? What will give you the most quality of life? Say you're doing a lot of one-to-one virtualization, or you're doing bare metal, or you're spinning up a lot of Hadoop nodes; how about we try Ironic for that? The cloud is really about consolidation, bringing all these things together, trying to make your life easier. And possibly, does it make sense to bring a product in early for you?
Is it production-ready? Are you going to have a bad day moving to this product? So really, at the end of the fun portion of the job is setting expectations. We talk about the KISS method: keep it simple. You've got this million-dollar SAN that goes down twice a year versus some compute nodes we're just spinning up that stay up for three years; what makes sense there? Every summit, people are trying to change design direction; the new hotness might be lukewarm, you never know. Supportable and repeatable: 3 a.m. calls happen. You don't want this to be a snowflake that only you know about. They talk about being the hero; you don't want to be the hero every Saturday. So have it so everyone on your team can support it. Have a solid plan; you don't want to test it in prod. You need people; this is kind of the same thing there. You don't want to be the only person who knows about this, and you also don't want to be looking things up as you're troubleshooting. And I think I talked a little fast. Really wrapping it up: we want to take a base template, receive our customer input, build the solution, enable features, and really just plan for tomorrow. How are we going to grow this? And so our plug is: go check out the Rackspace Cantina. And questions?

So I know that Mark earlier was explaining that it took him about 17 minutes to build a bunch of instances on about 500 compute hosts, with about a 25% failure rate. But we never heard back: after making the Neutron modifications, the API workers and so on, did that make a difference in your build success rates as well as the overall speed of the deploy? Thanks for bringing that up, because I never looked at my cards once, and that was in there. So yeah, it took 17 minutes, before any of the tuning, to build about 800 instances, with about a 25% failure rate.
After making the tunings to the Erlang workers and bringing up my Neutron API workers and the Keystone workers, I was building 800 instances in four and a half minutes. Somewhere around there; it was four and a half to five minutes, depending on each individual build, with zero failures. All of it, everything going to status ACTIVE. I could ping everything over the script with no issues. Yes. Well, so it started off with tracking Rabbit logs, looking in Rabbit and seeing AMQP connections failing, then going to the specific compute hosts where there were failed instances that went to error state, looking in nova-compute.log, and seeing that the basic failure was AMQP. So I went back to AMQP and started looking at the documentation, and said, let me bring those workers up. So I brought the workers up. Now, when I did my next builds, the AMQP connection errors were gone, but I was still having failed builds. When I dug deeper, I saw that they were really related to Neutron issues. Neutron was just timing out. I dug into Neutron and found out that I just didn't have enough API workers to deal with all these port requests going back and forth. So I brought that up. That reduced it down to about a 12% failure rate. At that point, I was a little bit at a loss, but I just started looking through my infrastructure. I went to Keystone, looked at my Keystone logs eventually, and found too many connections. I didn't have enough pool workers, and they were timing out. So I brought my Keystone pool workers up and increased my timeout to 30 seconds. And at that point, it all cleared up. It was really just Linux troubleshooting. We have it in the OpenStack product, our RPC product; it's published in there. I do need to put it out to the public; I'm just trying to find the right avenue for it. Yeah, I'm on freenode. My last name is DeVerter, but if you spell it backwards, it's reved.
You can get me at reved anytime in IRC on Freenode. Yep, and I'll give you a business card if you want one. Absolutely, thank you for your questions. Anything else? Gotcha. Tell me to what extent you agree or disagree with the statement: when you're on the well-worn path, OpenStack is great; take two steps off that well-worn path and you're in for a world of hurt. And to give you some context, at NVIDIA we're doing game streaming. So we want to virtualize the GPU. We want to run on Citrix Xen, because that's where vGPU works the best. We want super low latency networking, because latency is king when you're streaming your game. And we also want high-performance disk, because loading textures is very IO intensive. We've barely gotten this working on AWS. And so I'm wondering, are we in for a world of pain if we think about OpenStack? So with OpenStack, in our reference architecture, everything on the compute plane, or the instance plane, is running at 10G. So we try to take away the network latency there. We have one customer that we're working with right now on installing actual GPUs and bringing those in through the KVM hypervisor to do the rendering inside of an instance. There's a limitation there: you can only pass one GPU through to a single instance. So KVM is extremely limited there. I haven't worked in Xen for a number of years, so I can't speak to the point that you brought up right now. But... But in general, so vGPU, it sounds like there's no support at all. It's only pass-through. It's pass-through. We've tried Xen with the latest OpenStack and it just flat-out has bugs. And Citrix acknowledges this and they want to work to fix them. So right there, there are two things. We're in the weeds already. And I worry that that's just the tip of the iceberg. And so for example, we recycle our VMs every 15 minutes. So if someone plays a game, we kill the VM, we recreate it to re-init the state. I like that. 
That sort of thing is a big deal for the duty cycle: if it takes 10 minutes to get the VM back up, you're only at 50% utilization. So I'm worried that there are gonna be those sorts of gotchas that we're gonna run into if we go down this road. Well, honestly, there probably are gotchas that I don't even know about at this point, because these are things that I just haven't worked with with our particular customers. The rendering aspect, that's something that we're just starting on with one of our customers. They're gonna be bringing GPUs into their environment very soon. That's something that I've been tasked with working on with them. So once I work on that and look into it, maybe I can publish something on it, bring a little bit of enlightenment. But it is OpenStack. I mean, as far as virtualization, bringing up VMs, running things, it's very, very cloudy. For some of the really specific use cases, I don't know if this is the number one product for it. We're on video, so the man just wants me to say this, but it's trial and learn. Okay, cool, thanks. Let me check our time. Yeah, we're out. So I was just gonna ask, from a staffing perspective, one of you mentioned that jack-of-all-trades is more or less your approach, but coming into a legacy enterprise IT shop, is it more common that you've seen where they try to take somebody that's working on a legacy component, like networking or storage, and map them over to that component inside of OpenStack, or is it more common to have somebody be the jack-of-all-trades and address whatever project is being used under the OpenStack umbrella? Thanks. So from what I've noticed, there's kind of a mixture of both. If that individual doesn't want to stray from what they're used to, then pushing them into that role works. 
If you have the staffing, obviously you can put more people into specialized positions, whereas if you have smaller teams, then you end up having to be a jack-of-all-trades, just because that's the way the world works sometimes. But there hasn't been any kind of definitive mapping of, hey, if you're coming in as this... I think Berto was networking at some point, and now he's basically doing all the same stuff. So jack-of-all-trades seems to work, and then if you happen to have the staffing available to specialize individuals, then by all means, put them in with what they're used to. If I can, I'll just add on to that for a second. It depends on the shop. If you saw the keynote yesterday morning, they talked about having to change processes and procedures. Companies coming into the cloud aren't changing; they're coming into the cloud thinking they're gonna save some money, but they're just moving old stuff over to new stuff. They're not building out a new design. They're not training their people to do things a different way. It's really a total change of your thinking, of your enterprise thinking. We just can't rely on a single thing, like Darren said. We need to use a load balancer. We need to put services behind the load balancer, so that if something falls over, there are other things to run it. So I think it's really a change of process and procedure for companies. We have some that are very stringent. They rely on single instances. They can't live without them. If they go down, our phones are ringing off the hook. We have other companies where things fall over and they build a new one and they don't really care. Sir? I have one question about the numbers. How long will it take you to onboard a new customer to your private cloud, how long will it take you to deploy a typical private cloud architecture, and how many staff will actively participate in the deployment? 
Staff-wise, I don't know how many people it takes in the data center to get everything cabled and racked and all that, so I can't answer that. We have a deployment team of five people who will deploy the cloud on the hardware once it's up. Then it comes to my team, or our team actually, to do the QC and make sure that it's all set. We'll onboard the customer in about a week. So is that starting from the requirements, or from when the deployment has to go live? I'm not on the deployment side. I would say it's probably, I think, 30-ish days. So I guess it really depends. We do Rackspace DC deployments and also customer DC deployments. Typically what we quote for deployment time, if we have access, is five days just to cover all our bases, do QC, and ensure everything's up. Hardware, typically they say about 20 days, and that's just for procurement, setup, everything like that. And then we turn it over to the customer, and from when we turn it over to when they're actually using it, it really depends on the customer. So. Okay, thank you. Any other questions? Sweet. Thank you, bye. Thanks guys, really appreciate it. Thanks for coming out.
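For reference, the single-GPU KVM pass-through mentioned in the Q&A above is typically wired up along these lines. A sketch only: the PCI vendor/product IDs and the alias name "gpu" are illustrative assumptions, and option names vary by nova release.

```ini
# nova.conf on the compute node -- whitelist the physical GPU and alias it
# (the vendor/product IDs and the "gpu" alias name are illustrative)
[DEFAULT]
pci_passthrough_whitelist = {"vendor_id": "10de", "product_id": "11b4"}
pci_alias = {"name": "gpu", "vendor_id": "10de", "product_id": "11b4"}
```

A flavor then requests the device with an extra spec such as `pci_passthrough:alias=gpu:1`. As noted in the answer, this hands a whole physical GPU to one instance, which is the limitation the speakers describe.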