Ready to go? Good morning, everybody. Thank you all for coming out. This is the OpenStack A-to-Z talk, or panel. We're here from Comcast, and we're extremely excited to be here. We thought we'd take this session to share a little bit about how we do OpenStack at Comcast: specifically, how we plan, build, deploy, and manage the platform. We have some of our awesome engineers here to share that with you. So we're going to go around in a panel format here at the beginning, go through some quick questions that we prepared, and then we'll also have time for Q&A at the end. So I think we'll just go ahead and kick it right off.

So the first question is: can you provide an overview of the environment? Maybe Sheila, you can take that for us.

Sure. The Comcast Cloud team was formed in July of 2012. We released our first proof of concept that July, and then we went into production in November. We were running Essex at the time. Since then, we've expanded all over the nation in several data centers. We are running Havana now, and we are moving to Icehouse. One more thing I should add: we have multiple customer-facing apps, internal and external, running on the cloud today. Parts of X1. We have Xfinity Share. We have our internal conferencing system. We have tools that we use, such as Etherpad, IRC, a ZNC bouncer. Everything's on the cloud, and much more that I just haven't listed.

Awesome. Thank you, Sheila. So Scott, could you tell us how we have expanded our footprint and upgraded from Essex to Havana?

Yes. Can you repeat that question one more time?

Sure. How did you expand your footprint and upgrade from Essex to Havana?

So we've had a lot of challenges with upgrading and expanding, because customer demand has outpaced us in a lot of ways. We still have some Essex environments deployed, and we've been moving people to Havana; most of our environments are Havana now. The challenge we've had with upgrading an Essex environment is that you can't just do a live upgrade; you have to nuke and pave it. So we're in the process of expanding our footprint to get those users off Essex and absorb them into the Havana environments. We're also rolling out Icehouse, and we'll have to do something similar. But between Havana and Icehouse, we're experimenting with live upgrade options so it won't be nearly as disruptive.

Great, thank you. Warren or Anton, can you maybe share a little bit about the other critical piece of the cloud, which is storage, and what back ends we have implemented?

Sure. Back when we started our journey in the Essex days, we had this idea that it was just going to be ephemeral and object storage, and there'd be no block storage. Looking back, that was probably a little bit naive. We had more and more customers ask about block storage, and people, for better or worse, also found nova-volume. So we started looking into Ceph once Cinder materialized. We launched our first Ceph cluster somewhere around the Hong Kong summit, and since then, I think we've got over a dozen Ceph clusters. It's really been exploding in size.

Awesome. Maybe Anton, you can share some of the challenges we have of running Ceph at that scale?
Sure. So we definitely, like most people that run Ceph, at least the people that we've talked to at this conference, started out with poor configuration choices, both with hardware and with some of the software-side Ceph configuration. Just to give an example, we started out with no SSDs for journals. Then we went to really crappy SSDs for journals, then to better SSDs for journals. Now we're switching to PCIe SSDs, and we're starting to look at NVMe SSDs for journals.

That said, Ceph is incredibly resilient. We've had massive failures, and Ceph just takes it and keeps going. We had one cluster of six nodes, and we lost one node, then another node, then another. So we went from six nodes to three nodes, and the cluster actually kept going. Obviously, as it rebuilds, it's a little bit slower, but it just kept going. That's really cool about Ceph.

Awesome. Sheila, can you maybe describe some of the maintenance of the environment: the upgrades, the migrations, the nitty-gritty that maybe we don't often get into here at the summit?

Sure, absolutely. We go through maintenance and break-fixes, and we follow a strict change management process that we put together. We make sure that our customers are fully notified prior to the maintenance, we discuss it with the team, and we create tickets internally so that we can keep track of what's going on. That includes break-fixes with incident management: if there is an incident and we need to change something immediately, we will. And planned, scheduled maintenance as well.

On the storage side of things, particularly with Ceph, it's a little bit easier. We're relatively up to date on Ceph; we're running the latest Firefly, basically. I think we've learned to hang back a little bit on new Ceph releases. But we've been able to do things like add hosts, remove hosts, re-image them, and update the kernels, largely without much user impact.

Great. What happens if our cloud goes down, Sheila?

The cloud as a whole has not gone down as of today, thankfully, but we have had individual data center outages. We encourage our customers to build in multiple data centers throughout the region and the nation, and to make their deployments as redundant as possible.

Yeah, we encourage pushing resiliency up to the application layer, so that applications can survive any single data center outage. Maybe Chuck, you can share about how we support multiple locations.

Sure. We have multiple data centers that we implement as regions. Depending on the type of the region, whether it's a national data center or a more local, regional data center, we encourage people to bring up their applications in more than one site, because the local data centers might actually have a problem every now and then; they're not as redundant as the national ones are. Of course, this also brings a little complexity to the users. They have to deal with things like inconsistent IDs for images across regions, and so forth. So we try to make that as painless as possible for people by synchronizing all of the IDs and things like that. Or at least that's the plan. So yeah, there we go.

Anton, can you share with us what are some of the default settings we had to change to make things work better in our environment, like our lessons learned and best practices?

Sure. I would say the biggest thing we went through was changing the RAM overcommit from 150% down to 95%, because in some of the more heavily loaded environments, as VMs actually start to use the RAM on the compute node, you run out of RAM, and then a random VM gets killed by the kernel. We were seeing strange behavior like that, and we quickly figured out what it was. So we went from 150% to 95%. Actually, now we're completely out of capacity in some locations, so we're looking to change that a little bit; I think we're looking at something like 115%. But we're going to look individually at each environment and see what the actual utilization is, and we're going to choose that setting per environment.
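For anyone following along at home, the overcommit knob being described is Nova's standard RAM allocation ratio. A minimal sketch of the change, assuming a Havana-era nova.conf (not Comcast's actual configuration):

```ini
# /etc/nova/nova.conf -- illustrative sketch, not the actual deployed config
[DEFAULT]
# Default is 1.5 (150% overcommit). Dropping below 1.0 stops the
# scheduler from promising more guest RAM than the host physically
# has, which avoids the kernel OOM-killing a random VM under load.
ram_allocation_ratio = 0.95
```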
On the Ceph side: if you saw the Time Warner Cable talk on Ceph, it's basically exactly the same set of problems; we're having the exact same problems, collectively. For example, some of the default settings for recovery, osd max backfills and osd recovery max active, we set both of those to one. Because by default, if you have a failure, either a node fails or an OSD fails, the Ceph cluster will basically DoS itself trying to shuffle all the data around. So that was a huge thing.

On some of our larger Ceph clusters, we had an interesting issue where memory utilization spikes with Ceph and the kernel isn't able to keep up with allocating the RAM, so we had to change the min_free_kbytes setting in the kernel. We went from the default, which is something like 90 megabytes, to one gigabyte. We still had some issues, so now we're at two gigabytes, and that's helped us with that. There are obviously some other tuning things, like max PIDs and max port counts and so on, that we had to change.
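To make that tuning concrete, here is roughly what those changes look like. This is a sketch assuming Firefly-era defaults; the exact values and file layout are illustrative, not Comcast's:

```ini
# /etc/ceph/ceph.conf -- illustrative sketch
[osd]
# Defaults (10 backfills, 15 active recovery ops per OSD) let a
# failed node's re-replication starve client I/O; throttling both
# to 1 keeps the cluster usable while it heals.
osd max backfills = 1
osd recovery max active = 1

# /etc/sysctl.conf -- illustrative sketch
# Keep a larger pool of free pages so the kernel can absorb Ceph's
# allocation spikes; raised from the ~90 MB default to 2 GB
# (2 GB = 2097152 kB).
vm.min_free_kbytes = 2097152
```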
Awesome, thank you, Anton. Chuck, maybe you can share with us how we add new customers to the cloud and how we support them?

Okay. We have an in-house tool to do all of the heavy lifting of adding users to the environment. Shout out to Tim Millick, our coworker here, for putting that together. Thanks, Tim. Now, as far as support goes, that actually begins before people even have access to the cloud, because we'll offer to meet with them and do what we call an intake call, where we discuss the cloud. A lot of these people have never even heard about cloud before, but they find themselves being urged to go take a look and investigate. So we try to inform them of the ways of the cloud and get them to cross over. Once they actually have access to the cloud, we send out what we call a welcome letter, and it has lots of good information: internal and external pointers to resources, including some videos and other very cool resources we've developed, as well as ones we've found out on the net. And then, after they have access to the cloud, we provide support via an internal IRC channel. It is, I have to admit, very disruptive sometimes during our day, but it is such a benefit to our users that we can't imagine not providing it now.

Actually, I'll set a little bit of context. I know we mentioned this when we were on the couch show, but our cloud serves internal customers within Comcast, so things like our products and services run on our cloud. We have approximately 600 different projects, or tenants, running on our cloud today, and about 1,500 different users. But we are internally focused. I just wanted to set that context for the room.

Anton, maybe you can share with us how we support custom code changes within the environment.

Sure. We keep our configuration consistent with tools like Puppet and GitLab; we have our own internal GitLab. As somebody on the team makes a change, we go through a peer review process: we need two or three plus-ones from people on the team, and then those changes get merged. As we're ready for a new release, we bundle those changes into a tag, and the operations team takes that, schedules a maintenance, and we provide documentation on what the changes are. For example, after we launched Havana, we added live migration after the fact, and some other things like that. We use Ansible to pull the changes down into each individual environment, and then Puppet pushes those changes out to all the compute nodes in the environment.
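As a rough illustration of that flow, here is a hypothetical sketch of an Ansible playbook that checks out a tagged release and lets Puppet fan it out. The repo URL, host groups, paths, and tag variable are all made up for the example:

```yaml
# deploy_release.yml -- hypothetical sketch of the release flow described above
- hosts: puppet_masters
  tasks:
    - name: Check out the tagged release of the Puppet config from GitLab
      git:
        repo: git@gitlab.example.com:cloud/puppet-config.git
        dest: /etc/puppet/environments/production
        version: "{{ release_tag }}"

- hosts: compute_nodes
  serial: 10   # roll through compute nodes in small batches
  tasks:
    - name: Apply the new configuration with Puppet
      command: puppet agent --test --onetime
      register: run
      # puppet exits 2 when it successfully applied changes
      failed_when: run.rc not in [0, 2]
```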
Awesome. Scott, can you maybe share how our maintenance processes are evolving and changing to adapt to an ever-growing environment?

Yeah. When we first started, we had a small enough cloud that we could just do a maintenance and get through it. But we're running into the size issue: the bigger the environment gets, the more time it takes to get through a maintenance. With any of these vulnerability maintenances that come out these days, in a very large environment you could be spending weeks and weeks trying to push changes through. Our policy has always been one environment per day, no more, because doing more than one environment introduces risk: if you didn't foresee that the changes you're making were going to cause some kind of issue for the customers on that cloud, you've just amplified it by propagating it across multiple environments. We have recently improved our process for automating our maintenance, trying to be as hands-off as possible. We're using Ansible quite a bit more to create the playbooks necessary to make all the changes end to end, as well as the verification and validation of the cloud after the maintenance is done. So what I foresee happening is that, because automation has improved for doing the maintenance, we will now have to go back the other way and start doing multiple environments per day, just in order to get through something in a relatively short period of time. We've doubled our environment in the last six months, and we're doubling it again by the end of the year. Next year looks like it's going to be a pretty big year, so maintenance, from an operations standpoint, is pretty much at the forefront of our minds.

Yeah, he mentioned basically doubling the compute environment; our storage environment has been growing extremely rapidly over the last year, and even more so over the last three months. We're exploding in size. I put up a slide here just so you can get an idea. Trying to manage that capacity growth is really difficult. So one of the things we've done is look into how we're going to build out hardware to manage some of this capacity need, and we diverged a little bit from typical recommendations: we're doing 4U boxes that have a lot of drives in them. I know the Ceph community, even Sage himself, has kind of warned us about using really big boxes for Ceph. But we do make it work; it works properly. I think even the Red Hat guys are coming around a little bit on it, and they realize it works better than expected. It's definitely not insignificant to tune and run these boxes, but it's also easier to build a house with bricks than it is with Legos.

So we didn't want to monopolize the entire time with prepared questions, so I think we're going to open it up to the audience. We've set a little bit of context about what we do at Comcast. The one thing I ask is that you either come up to the Q&A mic here, or, if you're on the other side of the room, I can share my mic with you, so that the questions are captured on video. So I think we're going to go ahead and open it up to questions.

Hi. How big is the team that you have supporting your cloud? And are they dedicated to that team?

So I'll take that one. I don't know that we can share exact numbers of how many we are on the team right now, but they are dedicated. We have three teams today: Dev, Engineering, and Operations. I would say they're relatively small for the size of infrastructure we support. We also support, which might be different from some clouds today, a lot of diversity in applications. We're not just running a bunch of web apps. We run video apps; we run voice apps that are sensitive to jitter and latency; we run video apps that have high bandwidth demands; apps that have high packet-per-second rates. Now we're looking at doing some of the NFV stuff, and when you look at NFV, it's got all sorts of crazy requirements. So a lot of our engineering focus goes into how we support these things at scale. Part of our team led the effort to upstream IPv6 support in OpenStack. So we're doing some cool stuff and pushing the envelope, and that also might add some dynamics to the size of our team. We also work with a lot of other teams at Comcast that support us. For example, we have good representation from our network engineering group here at the summit, so a good portion of what you saw on stage was from network engineering and other groups as well. Next question. We had a question over here. One sec.

Other than your fame at the OpenStack Summit since 2012, what were really some of the value propositions you had beginning this journey? And as you've had the successes listed up there, what additional value propositions did you encounter that you maybe didn't see before?

So I'll try to take this, and anyone can jump in as needed.

And part two is: what are some of the process and role changes you had to adapt to in making this move?

So I'll answer the value proposition part, at least, or try to. When we started this, there were a few options at the time in terms of what you were going to do with cloud: you were going to go to a public cloud, or you were going to pay some vendor to try and do it for you. I think one of the nice things about OpenStack is that you can bring it in-house and be part of a vibrant community. When we were looking at this, there were a couple of choices available for running your own cloud, and we were seeing increased velocity toward OpenStack, so that at least put us with the crowd. We realized we could build a community and build internal knowledge, as opposed to paying someone else to do that. And personally, for me, it's great to be in an environment like this; it forces me to reach out to the community and talk to folks, as opposed to just being siloed. So can you ask the process part of your question again?
How did our processes evolve over time? So that has been kind of challenging, because, as with most people here at the summit, you have to start somewhere, and usually that's with practically no knowledge about OpenStack and how to put together a cloud and run it. We had to figure that out as we moved along. But the other piece of it, which has affected us quite a bit, is how you get the user base there. If you build it, will they come? You've got to really market it, and you've got to market it in the right way. That is, pets versus cattle: how are these different from typical virtual machines that may run on, say, VMware? How do you take a physical application and move it into the virtual world so that it works in the OpenStack environment? And how do you take an application that typically relies on infrastructure to provide HA, and cook that HA aspect into the application itself, so that you're not relying on the hardware so much for your HA? It's been an interesting challenge, and we've just had to grow that along the way. Looking back, I'm amazed at how far we've made it with as much as has been going on. Next question?

My question is about how you implement your environment when, I assume, there are still some gaps between what OpenStack provides today and what you want to do. Can you give us a list of those things and explain how you cover those gaps?

So I think one great example of that is IPv6. We had to carry that code internally for a while, but now it's actually upstream. So when you launch an instance in our cloud, you actually get a dual-stacked instance: you get an IPv4 and an IPv6 address. That's an example of how we handle it: we either carry the code internally, or we have to set expectations with customers, and hopefully with that expectation comes some sort of timeline for when we may eventually get that feature.
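To make the dual-stack point concrete: with the upstreamed Neutron IPv6 support, a single tenant network can carry both an IPv4 and an IPv6 subnet. A hypothetical sketch using the neutron CLI of that era; the network names and addresses are purely illustrative:

```sh
# Illustrative sketch: one network, two subnets, dual-stacked guests.
neutron net-create app-net
neutron subnet-create --name app-v4 app-net 10.20.30.0/24
neutron subnet-create --name app-v6 --ip-version 6 \
    --ipv6-ra-mode slaac --ipv6-address-mode slaac \
    app-net 2001:db8:100::/64
# Instances booted on app-net then come up with both an IPv4
# and an IPv6 address.
```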
Maybe we could also share, maybe Anton, about how we had a gap by not having block storage for a long time, and then we actually worked with the community, found what others were doing, and brought in a solution.

Well, you just answered the question. Thanks. Fail as moderator.

I was just going to add: we don't offer Load Balancing as a Service, for example, but we do provide good instructions for people on how to deal with that, and we have some external tools for it. There are a couple of different ways you can manage that. As somebody said earlier, we encourage people to deploy their resiliency at the application layer; on basically every call, we repeat that with customers, and it's still not enough. But you can do that with DNS load balancing. We do have some hardware load balancers, but that's not part of our OpenStack offering right now.

To add on a little bit, and it goes back to the whole process thing, one of the other things we've encouraged all of our internal customers to do is to rely on each other as a source of information. There are places within Comcast where users can ask questions: how do I do software load balancing? How do I do particular networking things? And our user community has actually been pretty active in helping answer those questions. Sometimes they have answers that we don't have, which is also very good. If you remember that IRC channel we mentioned, too, it's getting to the point now where frequently other users will answer questions from newer users. It's working out very well as far as we're concerned.

Yeah, I was just going to add the same thing Chuck said. We have an internal forum as well that we push people to participate in, and it's just awesome seeing customers giving feedback and advice to other customers about how they're doing things. Cool.

Yes, I'd like to hear some examples of your onboarding process and how much freedom you give to the tenant or project admin. And how do you control cloud sprawl?

Okay, what was the last part? How do we control cloud sprawl? Well, everybody has sprawl. All right. As far as how we deal with people and how we get them on board in the first place, generally a lot of the questions we get are related to how our environment is set up. For example, they want to do a particular thing in networking, and they need to know: how do I make that happen in one of the clouds? Or is it even possible? Generally it goes along those lines. There are also questions where we have to defer to other groups, because we're learning as our customers are learning, too. So we'll frequently refer people to other teams and people who have done things before.

It's interesting how users actually approach us to get on the cloud. A lot of times we'll see them come into our IRC channel and ask: how do I get on your cloud? I've talked to so-and-so, and they told me about it; I want to be on there. And very often we'll see requests come down the management chain, which then get passed off to us. But in all cases, we generally push everybody to the ticketing system and say, this is where you start. The whole ticket process that drives the intake asks various questions: what is it you're trying to do? How big is it? What kind of resources are you looking at? Are there any special needs we need to be aware of? And more importantly, do you want more information about the cloud? Do you want us to come talk to you about how to get on our cloud? So that's a very important aspect of the process.

One thing we do have is a goal to automate a lot of that onboarding this year, so that it's not actually a ticket process just to get set up on the cloud. Part of that whole process thing. Yeah. Do you want to address the part about sprawl?

Yeah, so we're still in our infancy for dealing with sprawl in general, and I'm sure that doesn't help our capacity issues. It's so easy to get on the cloud, and sprawl is typical of any virtual environment; I'm sure there are quite a number of VMs sitting idle and not being addressed. But we do have quite a few things on the roadmap to try to address this, like looking at Janitor Monkey, and using Ceilometer to help identify the various VMs that may be dead weight or just flatlined and not doing anything. And showback models will help encourage people to make better use of their resources, because they may eventually get charged back for that kind of stuff. Maybe we'll actually implement something like Cloud Minion; there was a talk yesterday on that, too, and it was a very good talk. It's definitely something we need to look into.
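As a sketch of the Ceilometer idea just mentioned (the instance ID and dates below are made up), one way to spot flatlined VMs is to pull per-day CPU statistics and look for averages pinned near zero:

```sh
# Hypothetical sketch: daily cpu_util statistics for one instance.
ceilometer statistics -m cpu_util \
    -q 'resource_id=INSTANCE_UUID;timestamp>2014-10-01T00:00:00' \
    -p 86400
# An instance whose daily average cpu_util sits near zero for weeks
# is a candidate for the "dead weight" list.
```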
Have you noticed vendors approaching you to start using your cloud?

To start using our cloud, or to provide their cloud offerings? Because we did get that.

Well, give us an example. Instead of selling you a bunch of servers, which you probably already have, to operate their own internal cloud, have you had vendors approach you to run on your internal cloud?

Right. So our cloud is completely private to us. We don't advertise anywhere externally, so people wouldn't get the impression they could come to our cloud. I don't really think so.

We do something a little bit different, though. We have a lot of vendors in general at Comcast supporting applications and whatnot, and what we do do is encourage them to either build their own internal OpenStack clouds or use a public OpenStack cloud to vet their application on OpenStack ahead of time, so that when they come to run it on OpenStack at Comcast, it's already working. So we do do that. But our security model and everything are really focused around internal use today; we don't extend that outside of Comcast and Comcast NBCUniversal properties.

Hold on, I'll bring the mic over. Okay, sorry.

Hi. This has been very interesting; I'm enjoying it. Could you speak to your capacity planning process for onboarding new apps and growth in your existing environment? Who owns the capacity budget, and things like that?

So I'll start off with the storage capacity side. Like Scott was mentioning, people will put in a ticket and say, oh, I need half a petabyte of storage. And we'll all freak out and say, wait, wait, wait, let's talk about what it is you're doing, so we can get more information. We have a fairly lengthy procurement process, so it takes us a while to get things ordered, delivered, racked, cabled, and networked. So we have to plan very carefully, and we have to set expectations very carefully to let people know. If a large project comes down the pipe and says, hey, I need a bunch of stuff (compute, memory, storage), we'll ask: are you already on our cloud? What are your I/O requirements? Sometimes that's easy to get from people; sometimes it's very difficult. And usually, larger projects are better managed at upper management levels, so we'll know well in advance that something large is coming our way, and we'll be able to order things in time.

One of the things we're actually just kicking off, because we do struggle with this issue, is building out an internal CRM, customer relationship management. We're reaching out to all our customers, trying to understand their capacity needs. We're almost trying to operate a little bit like an internal business unit: hey, what are your projections? Where are you? What issues are you seeing? What features do you want? We're starting to build that data so that it can better drive our decisions.

I've really enjoyed this discussion, and you guys have really been a great success story for OpenStack. Some of the operational processes you've clearly thought through and have in place are really well done. So thank you.
I had a question about, as the enthusiasm for OpenStack grows at Comcast, how you evaluate OpenStack projects and new functionality that OpenStack is releasing, and how you determine when, or if, you'll incorporate those new projects into your cloud.

So that's a really good question, and I don't have a really great answer for it. What we really try to focus on are the core projects. Beyond the core projects, if people are looking into or asking us specifically about a project, we have a lot of folks here, and we keep our ear to the ground on the current state of OpenStack projects, all the way from incubation through eventually being promoted. So we do try to lay out a timeline, or a goal, for those customers if we're ever going to accept the project into our cloud.

One of the cool things is the ecosystem that's bubbling up at Comcast. For example, we actually had a database team approach us and say, can we build and run Trove on top of your cloud and work with you on this effort? And we're like, hey, that's great, because we don't have the operational capability to maintain a database-as-a-service platform; this specific team is more an infrastructure-as-a-service team. But this team that happens to be near us sees this as a natural evolution of where data persistence is going. So they're working hard, testing out Trove, doing all the proofs of concept, and we're trying to figure out what this ecosystem model looks like. We have other teams doing similar things, maybe up the stack or parallel to the stack, and working to integrate within the ecosystem to be able to deliver a service.

We also have a feature request page that we send our customers to. If they want to run something they think is really cool, and there are a bunch of them, we just send them to the page, and then our architecture team takes a look at it, works with development, and eventually rolls it out.

Hi, I'd like to know how your team built up its OpenStack expertise. It seems you run very mission-critical applications, so obviously there has to be some systematic way of developing that skill set, including, by the way, for new hires as well.

I think being picky was one of them. The process of picking talent over the last several years has changed. We started off with just a few of us on the core team, and we've had to grow out of sheer necessity in order to grow our cloud. At some point we got big enough that, as one team, we had to split; that's where operations and engineering and architecture diverged a little bit. But despite that, we all work extremely closely. We're all in the same area, we talk to each other every day, and we still act as one big team, even though some of what we do differs.

From a talent acquisition side, I think being picky about who we bring onto our team has been very important. And it's not necessarily about expertise, because OpenStack expertise is actually very difficult to acquire as a general rule. All around us, we have lots and lots of Linux engineers; a small fraction of them have heard of OpenStack, and an even smaller fraction have played with it, or have much knowledge about how to install it or do anything with it. So we do a lot of training internally.
But we've had a lot of success stories bringing on Linux engineers from within the company, and network engineers. And, maybe tooting our own horn a little bit, our operations team has had great success hiring green engineers, young engineers coming out of college or with just one or two years of experience, and training them up on the operations side. So we've pulled from all directions.

Yeah, so it's kind of an evolving process. We tried to find experts; then we came around to getting more junior people in and training them up; and now we're about to look at the other side again and try to get some more experts if we can. But to restate what Andrew said: as our user base has grown within Comcast, and we've got over 1,500 users now using our cloud, we've been able to pull from that pool of users who have experience as users of our cloud, know the technology, and know how to deploy their applications. We've brought a number of them onto our team, and they've been assets to us as we continue to grow.

And I think there's one last source for us, which is the community itself. While it is competitive, the meetups we run, the operators meetups, and the local meetups have been a great source of talent for us. I also think there's an attractiveness, for a lot of engineers, to working for somebody who actually operates OpenStack, versus being a vendor of OpenStack with maybe a slightly different agenda. We actually are an exciting place to work, because we really do care about OpenStack. We care about open source, we care about upstreaming the work we do, we care about our community participation. And I think those values have helped draw in a wonderful team. We've got three minutes left.

So, a question about release scheduling. I understand not instantly upgrading what you've got running, but as you're planning out your next version, why are you staying so far behind?

So we have internal patches that we have to carry and vet against the next version, and I think that's the primary reason we get dragged down. Even after an OpenStack release comes out, we've got a dev cycle, which we've evaluated to be on the order of months, to integrate our code into the new release. It kind of depends on where we are, but primarily it's Neutron: we have a patch for Neutron that's been pretty painful for us to carry along. We do try to upstream our patches. But one of the things, too, is that we didn't focus on building out CI/CD, continuous integration and continuous delivery, early on, and we do carry custom patches; not everybody needs to carry them. We had some stuff that we were doing with QIS that we hadn't been able to get upstream yet; we're working on that. But we've also built out a good CI/CD pipeline now, which will hopefully enable us to do those things faster.
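For anyone unfamiliar with what "carrying patches" costs, the cycle looks roughly like the sketch below. The internal remote and branch names are hypothetical:

```sh
# Hypothetical sketch of re-vetting internal patches against a new release.
git clone https://git.openstack.org/openstack/neutron && cd neutron
git remote add internal git@gitlab.example.com:cloud/neutron.git
git fetch internal
# Start a new internal branch from the upstream stable release...
git checkout -b internal/icehouse origin/stable/icehouse
# ...and replay each internal patch on top of it. Every conflict
# here is part of the months-long integration cycle mentioned above.
git cherry-pick <sha-of-internal-patch>
```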
I think maybe time for one last question, and then we have to wrap up. Anybody else have a question?

Hi, guys. It sounds like you're well on your way toward hyperscaling OpenStack. If you could give one piece of advice to some of the people here who are just starting their journey, what would it be?

Just what we were talking about earlier: don't write custom code. No, let me rephrase that a little bit. Upstream your code. Work with the community and upstream it. We did that with IPv6; we no longer carry anything for IPv6, and it works great. So I think that's a great example. There is an investment in working with the community, but we feel it's well worth it.

And on the community point: one, don't give up, because OpenStack is an extremely challenging thing to set up, especially if you're coming into it new and you don't understand what OpenStack is or how it all works. It takes a lot of effort to get it up and going. Don't give up, and rely on the community. Come to the summits, come to the meetups, and take advantage of the vast amount of knowledge out there; everybody else has already had that kind of pain.

Yeah, and I think the community is key; you can get that help and leverage it. I'm going to use this moment to pitch our favorite event, which is the OpenStack Operators mid-cycle meetup. It's in between the big summits, which happen every six months, so it's three months from now, probably the first week of August or late July. Unfortunately for our international folks, it's usually in the US. We get together with just operators: no salespeople, no vendors, that type of stuff. It's people who operate OpenStack clouds, everything from a small 20-node cluster up to close to hyperscale-sized OpenStack. And we sit there and talk about our best practices: what things work, what equipment works well, what approaches work well, how we solve Keystone scaling, all these types of things we learn there. So we're not in this on our own. There's the mailing list, there's IRC, but the meetups have also been great for us.

So I think we're out of time. Thank you all, and we're going to hang around here for a few minutes for questions. Thank you, guys. Take a bow.