All right, let's get started. Thanks for coming to our talk. My name is Will Foster. I work for Red Hat on the scale and performance team, mostly on the DevOps and systems administration side. My name is Kambiz Aghaiepour. I work with Will, also in DevOps on the performance and scale team at Red Hat for OpenStack.

So today we're going to talk to you about trystack.org. TryStack is a community OpenStack sandbox that we run in our free time; it's a volunteer effort. We're just going to dive into some of the details and hope our slides behave here.

So what is TryStack? TryStack is a free, publicly available OpenStack cloud, open to the general public. It's absolutely free to use, and it exists purely as advocacy for the OpenStack platform. It was created in 2011 by some folks at the OpenStack Foundation, and since then we've done a significant overhaul of the environment; we've revamped it quite a bit, and it's also changed hands a few times. But it's an OpenStack Foundation project. All of the hardware and resources, both the expertise to run the environment and all of the server and network infrastructure, come from corporate sponsors, and it's volunteer and community-operated as well. So we spend a portion of our time on the development and operations side doing upgrades, keeping things running, and fielding requests for feature enhancements.

In summary, it's really just a place to kick the tires on OpenStack. OpenStack is a very complex project; there are a lot of parts to it, and it's very modular. That's one of its strengths, but it's not trivial to set up and operate yourself. There's always a hardware requirement, and there are lots of different components to pick and choose from. So the goal of TryStack is to make this as simple as possible for the average person: simply log in, poke around, spin up some instances, kick the tires, and see what's changed from the former release to the current one. I just want to look at the slides up here; the exact same thing happened at our previous talk.

So I want to talk a little bit about the roadmap we have for TryStack. We're currently on the Liberty release, and we have a parallel deployment of Newton as well that's not quite publicly available yet. We're going to go through some of the steps we're working on to open that up and cut everybody over to the Newton platform. As of 2016, we've tripled the environment's original 2011 hardware footprint, and we've also doubled the public IP address space. That's a /23 public network that provides Neutron floating IP addresses for the tenants and people who use the environment.

So let's get into some of the details of what comprises the hardware and what it looks like. You're at a bit of a disadvantage because I can see what's on the slide, but I'm just going to explain it to you. The presence is in an East Coast US data center, with that /23 public IP address space, and we have a mix of Dell hardware: some 1U R620 machines that were donated recently by Red Hat, some FX2 blades, as well as some C6105s. So it's really a hodgepodge of Dell gear; some of it was originally donated by Dell for the project. On the switching end, we have a couple of Juniper switches, and we've just upgraded the back-end network to 10 gig, so we have a mix of Dell Force10 and Juniper switches on the back end.
But we take hardware donations, and we're happy with whatever we can get. So this might be more effective if all of you close your eyes and just imagine what's happening. What we might try to do is turn the laptop around and have everyone huddle around us; that might be more effective. So imagine you're on a mystical journey through the halls of data center excellence, and we're going to talk about this free, awesome platform that you can use but can't see. What we were going to show you was what the TryStack data center looks like, and we had some pictures as well, but it seems the 16:9 aspect ratio slides flicker too much in presentation mode. We had the same thing happen off another Linux box in another presentation, so apologies for that.

All right, now we're back in business here. So this is the TryStack data center. There's absolutely nothing out of the ordinary with this picture at all; it's just a typical corporate data center. What you see here in the picture are lovingly referred to as the data center cats. They've been there ever since I've been involved with the project. No one's really sure where they came from; they just hang out and chill. I think they're multiplying, because more of them show up every time we go there. They are completely benevolent, though. At a certain point this one fixed Neutron, which was really nice: we were stumped on a Linux network namespace issue, and, let's call him Chester, Chester popped in there and fixed it for us. It was wonderful.

So moving on to some of the use cases for TryStack. TryStack is really for anyone. It's aimed towards developers, because it's a place you can go to spin up an application and test how it performs on an IaaS platform like OpenStack. You can spin up a DevStack environment or a blog or whatever you want to do within the very loose rules we have for operating the environment. But it's really about kicking the tires, and about taking away the complexity of deploying OpenStack, managing it, and, heaven forbid, shaking down your CIO for extra hardware to test out a cloud implementation. We've taken all of that complexity out of the way for you, so you can simply log in and get to work.

OK, so a couple more use cases. This is the one I'm most excited about: academia. We've been approached by a few universities that are teaching classes on Python development, computer science, and interacting with APIs. OpenStack provides a very robust set of APIs, and it's perfect for collegiate-level computer science courses: learning how to program, learning how to interact with web services, things of that nature. So we're ecstatic to have use cases like this. There's a university in Slovenia that had a Python course; about 50 to 60 students signed up, they had their own dedicated accounts, and they ran through their whole course using the TryStack environment. And most recently in Cork, Ireland, we had a computer science course with about 100 people using the Python API against Neutron and some other services. We welcome more universities and academic institutions to approach us if they'd like to use TryStack as a learning tool.
One more thing I want to point out about the academia angle: it's extremely important to expose people to open source software early on. When someone is learning about technology in school, it's very important that they're not myopic about the technologies they're exposed to. OpenStack is a very good sandbox for getting exposed to interacting with APIs and a greater development community, and I'm really excited about that part.

Lastly, one of the other use cases is Manila development. I'm not going to go too much into this slide, but if you're curious, you want to ping Ben Swartzlander, who's the PTL of the Manila project. We have a dedicated NetApp FAS system where some of that development goes on, and there are a couple of milestones hit every release using the gear on TryStack.

So we talked about the use cases; let's look at the metrics. All right, look at these big numbers here. 26,500: that's the number of people, all time since 2011, who have created an account on TryStack, logged in, and used the service. It's important to note that this other number, around 4,800 (it usually floats between 4,800 and about 5,500), is the count of active users on the current release running on TryStack. Every six months we upgrade to the newest release of OpenStack, at least that's our goal, and we reset the environment completely, so those user statistics are reset and counted again. And then you have some other metrics here: about 300 instances a day, 7,500 a month, and around 800 to 1,000 active Neutron networks. So there's quite a bit of demand for a rather small amount of hardware.

If you look at those same metrics in graph form, this is a Grafana dashboard that we use. Grafana is one of the operations tools we make liberal use of to help illustrate the environment and show its health and status; any kind of trending information is really useful for us, and you're probably going to see a few more of these throughout the talk. This is just average usage; we see dips and spikes here. One thing I did want to point out: in the right-hand corner you see instances over time, and you'll see this healthy trending slope of resources going up and down. The reason for that is that in order to serve a very large user base and give everyone enough time on the platform, we have to pull back resources occasionally. When you spin up an instance, it's good for one day, then it gets reclaimed. But you're free to use the API (we have all the APIs open, and you only need to log in once to get your API keys) and re-spin your instance back up; there's a small sketch of that right after this section. We have a set of tools and automated scripts that go in and make sure we don't overrun the resource allocation that we have.

Going into tooling a little bit, I don't want to spend too much time on these, but these are some of the tools we use. We use a combination of Puppet and Ansible; it really depends on what we're doing. I'm of the notion that there isn't one configuration management platform that solves every problem. So we might typically use Puppet for the OpenStack deployment, but we'd use Ansible to set up some of our infrastructure services and do routine runs to bring a machine from its current state to its intended state. We balance out what makes sense for us at the time; these tools roughly split into a few categories.
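Before moving on to the graphing side, here's that aside on the "log in once, then drive everything from the API" workflow mentioned a moment ago. This is a minimal sketch of what re-spinning a reclaimed instance could look like, assuming Liberty-era python-novaclient and the usual OS_* environment variables from a downloaded RC file; the image and flavor names are placeholders, not TryStack's actual catalog:

```python
#!/usr/bin/env python
# Hypothetical sketch: re-create a sandbox instance after the daily
# culling has reclaimed it. Assumes OS_* credentials are exported and
# python-novaclient is installed; image/flavor names are placeholders.
import os

from novaclient import client

nova = client.Client(
    '2',
    os.environ['OS_USERNAME'], os.environ['OS_PASSWORD'],
    os.environ['OS_TENANT_NAME'], os.environ['OS_AUTH_URL'])

# The culling only deletes the VM itself, so the same public image and
# flavor are still there to boot from.
server = nova.servers.create(
    name='my-sandbox',
    image=nova.images.find(name='Fedora-Cloud'),   # placeholder image
    flavor=nova.flavors.find(name='m1.small'))     # placeholder flavor
print('rebuilding %s, status: %s' % (server.id, server.status))
```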
So we talked about Grafana; Graphite is the graphing and trending arm alongside it. We also make heavy use of the Elasticsearch stack. We tend to use Logstash, but that's interchangeable with Fluentd as well. The idea is that we want to aggregate all of our logs into one location; we don't want to be in the business of going to individual machines to check on events. OpenStack produces a lot of logging, some of it useful and some of it not, so this also gives us the ability to filter out the noise from the things that actually matter to us, the things we'd need to take action on.

Then there's the monitoring bit, and there are a lot of great open source monitoring tools out there. We happen to stick with Nagios because it's what we know; we've been using it for about 10 years now. It's not going to win any website awards, it's extremely ugly, but it gets the job done. We're going to dive into some of the monitoring checks we put in place to fill the operational gaps between deploying OpenStack and actually managing it at scale.

And lastly, there's a newer tool called Browbeat. I don't know if any of you were just at the Browbeat talk we gave; the projector's being a little nicer than it was there. If you thought you had to use your imagination now, you could have blindfolded yourself and gotten more from that talk. Browbeat is a performance and scale testing tool for OpenStack. We try to treat performance and scale like CI: any time we do a new deployment of OpenStack, we run it through various tests, Rally workloads, and other scenarios to make sure it's performing at an acceptable level before we open it up to the general public.

So this is everybody's favorite monitoring system here. Web pages were mastered in the 1990s, so Nagios needs no further work whatsoever; it is perfect. What I want to bring your attention to is a custom check that runs through pretty much the whole gamut of the life of an OpenStack instance. It starts by spinning up an instance, creates a router, sets a gateway, attaches a floating IP address, and pings the floating IP address. Then it SSHes into the instance, runs an arbitrary command, collects the results, and spins everything down. It does this every 15 minutes. The reason for this exhaustive test is that if any one of these actions fails at any point, we get alerted, because that means something is wrong on the back end. Getting a public IP address up and being able to hit it with ICMP doesn't mean everything works; in lots of cases you can ping an instance and still not be able to SSH into it. The metadata service could be down, or there could be any number of issues on the back end that you're not aware of. The point of this is to illustrate that out of the box, you're not done; the fun just begins once you get OpenStack deployed. There are significant gaps between the stock tooling and real operational needs, and this will differ depending on your cloud. Every deployment is different; the needs of TryStack are going to be significantly different from, say, a public cloud that your company might run.
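To make that lifecycle check concrete, here's a minimal sketch of its shape, assuming Liberty-era python-novaclient (which still proxied floating IP operations) and standard Nagios exit codes. The router and gateway setup steps are elided, and the image, flavor, and keypair names are hypothetical placeholders, not the actual check's configuration:

```python
#!/usr/bin/env python
# Hypothetical sketch of an end-to-end instance-lifecycle Nagios check.
# Boot a canary, attach a floating IP, ping it, SSH in, run a command,
# tear it all down; any failure exits CRITICAL so Nagios alerts.
import os
import subprocess
import sys
import time

from novaclient import client

OK, CRITICAL = 0, 2  # standard Nagios exit codes

def main():
    nova = client.Client(
        '2',
        os.environ['OS_USERNAME'], os.environ['OS_PASSWORD'],
        os.environ['OS_TENANT_NAME'], os.environ['OS_AUTH_URL'])

    server = nova.servers.create(
        'nagios-canary',
        nova.images.find(name='cirros'),      # placeholder image
        nova.flavors.find(name='m1.tiny'),    # placeholder flavor
        key_name='nagios')                    # placeholder keypair
    fip = None
    try:
        # Wait (bounded) for the instance to go ACTIVE.
        for _ in range(30):
            if nova.servers.get(server.id).status == 'ACTIVE':
                break
            time.sleep(10)
        else:
            print('CRITICAL: canary never went ACTIVE')
            return CRITICAL

        # Attach a floating IP, then verify ICMP *and* SSH: a pingable
        # instance with a broken metadata service still fails the check.
        fip = nova.floating_ips.create()
        server.add_floating_ip(fip.ip)
        if subprocess.call(['ping', '-c', '3', '-W', '5', fip.ip]) != 0:
            print('CRITICAL: %s not pingable' % fip.ip)
            return CRITICAL
        rc = subprocess.call(['ssh', '-o', 'StrictHostKeyChecking=no',
                              'cirros@%s' % fip.ip, 'uptime'])
        if rc != 0:
            print('CRITICAL: SSH/command failed on %s' % fip.ip)
            return CRITICAL
        print('OK: full instance lifecycle succeeded')
        return OK
    finally:
        # Always spin the canary back down.
        if fip is not None:
            nova.floating_ips.delete(fip)
        server.delete()

if __name__ == '__main__':
    sys.exit(main())
```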
You know, the fight that we have is around a huge amount of demand and very finite resources, so we do our best to bridge the gap with tools. On that theme, there are four major areas where we do cleanup. These are all custom tools we've written to rein in the resources of the environment and ensure there's an optimal level of capacity for new users to kick the tires. It's really just what it says: floating IPs get pulled back every half day, network gateways get cleared every day, Cinder volumes get deleted every 48 hours, and instances get deleted every day. We may go back and revisit these retention rates, but right now this is the sweet spot that balances user demand against having the capacity to serve the general public. And again, all the tools we reference today are open source and on GitHub, or they're well-known components or ancillary pieces of infrastructure used with OpenStack.

So this is the Grafana view of that culling activity. You can see the instance count at the very top here; these are VMs being spawned and spun down, and you can see the reclamation is fairly smooth across the board. There is one interesting part here. If you see this plummet, this drop-off, it looks like the US stock market: the routers-with-gateways count plummeted. That's a very abnormal thing to see in what is normally a very smooth, consistent graph, and it indicated an abnormality to us. Next slide. I know you guys are probably helping us out with the display here.

Yeah, so as Will said, when we looked at this, we realized that the way we were doing housekeeping in TryStack was not relinquishing some of the resources. At one point recently, and you can see what the date line looks like, this was in mid-September, we started seeing posts from folks who were unable to allocate floating IPs to their tenants. And we knew we should have been relinquishing those, as mentioned on one of the slides: the floating IPs were supposed to be relinquished every 12 hours and the gateways cleared every 24. So we started to look at why that wasn't happening. As we dug deeper into our house cleaning, we actually found a bug in the Neutron CLI tooling, which we submitted upstream: it was causing the CLI tools to fail, and we were relying on those CLI tools to work correctly. By making use of visualization tools like Grafana, we were able to see that something was definitely a little odd with the way the relinquishing was happening; as we dug deeper, we fixed our scripts and filed the bug upstream. And as you can see from that point forward, there are a lot more cleared gateways, which gave the IP addresses back to TryStack so users could allocate them for their application testing.

And just some further automation pieces. The upstream infrastructure folks do a very good job of automating all of their systems, with very intelligent build processes and peer review built into their infrastructure. So one of our goals is to align, wherever it makes sense and wherever we can, with how they're doing things.
So one example of this: if you go to trystack.org and look at the website content, the CSS and everything else on the site, that's managed in Gerrit and Git, exactly how OpenStack code is managed when you submit a patch upstream for review and ultimately inclusion in the OpenStack code base. We do the same thing with the web content. So we're trying to mirror our activities, pipelines, and workflow processes to be near the same as upstream infrastructure, if not actually consuming some of the same resources. Another bit of automation is that all service alerting goes to IRC bots as well, just to make sure we're aware of problems and can take action if something breaks. Cool, next slide.

Yep, so there are quite a few challenges we've had to overcome running a big service, especially a free one. The main one is this overarching theme of demand and growth. We're always going to be fighting this battle of surging demand against not enough resources, trying to deliver longer-running SLAs for people's instances. So we're constantly tuning and fighting this.

Security is another one, and it's probably the most interesting. When you run a big public service out on the internet, you get some very, very interesting use cases; I hesitate to even call them use cases. We had one gentleman who was hell-bent on torrenting Justin Bieber albums. Now, I believe anyone should be able to listen to whatever music they like, but they shouldn't use a free public cloud service for it; they should do that by whatever other means they need to. So we've had to develop some rather creative tooling to track usage of floating IP addresses, timings, and other things that, again, don't ship out of the box with OpenStack, or aren't recorded in the Neutron database or anywhere else. We've had to bridge that gap to have a good paper trail and a good audit trail, so that when we can pinpoint someone abusing the service and being a bad neighbor, we can get that person off the project before they impact other people who come behind them.

So, a little more on that record keeping and auditing Will is talking about. The resource we have the toughest time with is the floating IPs and the gateways within Neutron. One of the things that's true about TryStack is that it's a global resource: you've got people coming in from all over the world, folks in Australia, folks in Africa, folks in North America. So as far as housekeeping goes, in a traditional production environment you might do housekeeping during a designated maintenance window, but we couldn't do that with TryStack without impacting somebody somewhere in the world. The fairest way we could come up with to reclaim resources, specifically the public IP addresses we have a shortage of, was to track the time of allocation. And within Neutron there's actually no field in the database, nothing the Neutron CLI will show you when listing floating IPs, that tells you a given floating IP was allocated at a specific time. That's available for VMs (you can query launch times out of Nova), but with Neutron there was no easy way to do it, so we had to come up with our own creative tracking mechanism.
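A minimal sketch of what that kind of bookkeeping could look like, assuming python-neutronclient and a cron-driven run: since Neutron stores no allocation timestamp, we can only record the first time we ourselves observe each floating IP, and reclaim it once it exceeds the retention window. The state-file path and the 12-hour window are illustrative, and the real tooling is more involved (it also feeds tenant-level decisions, like which idle tenants to hand to ospurge); this is just the shape of the idea:

```python
#!/usr/bin/env python
# Hypothetical sketch: record the first time each floating IP is seen
# allocated (Neutron keeps no allocation timestamp), then reclaim IPs
# older than the retention window. State-file path is illustrative.
import datetime
import json
import os

from neutronclient.v2_0 import client

STATE = '/var/lib/trystack/fip-first-seen.json'   # hypothetical path
MAX_AGE = datetime.timedelta(hours=12)
FMT = '%Y-%m-%dT%H:%M:%S'

neutron = client.Client(
    username=os.environ['OS_USERNAME'],
    password=os.environ['OS_PASSWORD'],
    tenant_name=os.environ['OS_TENANT_NAME'],
    auth_url=os.environ['OS_AUTH_URL'])

seen = json.load(open(STATE)) if os.path.exists(STATE) else {}
now = datetime.datetime.utcnow()

live = neutron.list_floatingips()['floatingips']
for fip in live:
    # The first-seen timestamp doubles as our audit trail: it ties an
    # IP to a tenant for a specific time window.
    first = seen.setdefault(fip['id'], now.strftime(FMT))
    age = now - datetime.datetime.strptime(first, FMT)
    if age > MAX_AGE:
        print('reclaiming %s from tenant %s (first seen %s)'
              % (fip['floating_ip_address'], fip['tenant_id'], first))
        neutron.delete_floatingip(fip['id'])
        del seen[fip['id']]

# Drop records for IPs users released themselves, then persist state.
live_ids = set(f['id'] for f in live)
seen = dict((k, v) for k, v in seen.items() if k in live_ids)
with open(STATE, 'w') as f:
    json.dump(seen, f)
```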
So when tenants spin up a router and then allocate a gateway or a floating IP into their project, we start tracking it: we record the time of allocation, so that when we go back and reclaim those resources, we can give people the appropriate 12 or 24 hours of time with them. The other thing is that you have bad players out there, like the Bieber Bandit Will mentioned. When they start running torrents, we might get a cease-and-desist order from the data center saying, hey, you've got somebody torrenting content they cannot legally download, so we need you to stop. Well, that IP address gets reclaimed by us, and the VM that was running a BitTorrent client is destroyed, but the next user who comes along might be doing something absolutely legitimate, just testing an application, and they happen to get that same floating IP. So when we go back through our records, we need to make sure the bad users are the ones who actually get removed. Currently, access to TryStack is managed by way of a Facebook group that you can join, and if somebody does something nefarious, like the Bieber Bandit, we just remove them from the TryStack group. We know exactly when the cease-and-desist order came in because the reports tell us, and we go back through our records to make sure the right person gets dinged for that behavior.

Other than that, the other thing I wanted to point out is that we don't want to reinvent wheels; there are tools out there you can use against your OpenStack deployment. One particularly powerful tool is ospurge, which you can use to clean up resource allocations. But it's a heavy-handed hammer: when you point ospurge at a tenant, you can tell it to clean that tenant out entirely. It doesn't disallow the user from using your OpenStack deployment, but it does clean up all of their VMs and all of their Neutron networks, routers, and so forth. If you have an active use case going against TryStack and you've just launched a VM, we don't necessarily want to clean out your tenant. So we use our own record tracking to make sure that when we run ospurge, it's against the users who came in, tried TryStack, got a feel for what OpenStack is like, and just aren't coming back. Those are the ones we clean out. By combining ospurge with our own record keeping, we can keep resource utilization under control and let everyone have a user-friendly experience on TryStack.

The other thing I wanted to talk about is that we're constantly revamping the environment. Next slide, please. We do refreshes of the environment: as demand grows, we may need to expand the hardware footprint. The latest round of hardware refreshes came as donations from Red Hat's Scale and Performance Lab, where we had some deprecated hardware that was no longer under warranty, so we donated it into TryStack. And we're always keeping up with the newest releases, so we actually have a Newton release on deck.
In this particular case we did skip Mitaka, though typically we don't do that. Newton is what's on deck because of its better integration with federated identity, which we'll talk about in a little bit, and because we're looking to phase out Facebook authentication. Currently, when you log in, you click a Facebook login button; that's all custom code integrated into Horizon, and it's not the direction OpenStack is going for federated identity, so we want to get away from that in TryStack. We talked a little bit about the newest hardware expansion, so I won't say much more about it; there's also some 10 gig networking that was donated for back-end connectivity, as the current Liberty release only has one gig on the back end. Next slide.

So as mentioned, what's on deck is a Newton-based deployment. After Summit, we're going to spend a significant amount of time getting the federated identity piece working. What we're envisioning is a FreeIPA instance under the TryStack domain acting as a self-service portal, so users can come in and create an identity for themselves. Requests would go through an approval queue, and once we activate those FreeIPA accounts, there would be an option on TryStack's login page to use that federated identity. Another option we're looking into is the Foundation's OpenID Connect endpoints, so if you have a Foundation login via Launchpad, you could just log in to TryStack with it. And lastly, we also want to reintegrate the NetApp storage back end for storage testing and Manila testing.

Here are some details about the OpenID Connect work. In previous talks about TryStack we've talked about moving off of Facebook auth; this is in progress now, and this is how we're doing it. We obviously can't deprecate a federated login service without having a replacement, so our goal is to have several choices. As Kambiz pointed out, there's the Launchpad login: if you're already logged in to Gerrit review or openstack.org or any of that, the end goal is that your credentials carry over to TryStack, with no further authentication needed, so we get you using TryStack faster, kicking the tires, and on your merry way. There's a small sketch after this section of what that client-side login could look like.

As far as deployment details, I think it's been mentioned a few times now: we're still actively on the Liberty release, but we've essentially cut the number of compute nodes in half in order to stage the next release, Newton, which is on deck. Once we have Newton fully integrated with the federated identity login working, we'll consume the Nova compute nodes from the Liberty release, basically deprecating that environment. And as we get ready for the release after that, closer to release time, we'll split the environment back up again so we can stage the next release without impacting users' ability to log in. It just means the overall capacity of the environment is diminished while we're deploying the next release. Using this staged approach there's minimal downtime, really no downtime at all, because at that point we're just flipping DNS records.
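As a taste of where that's headed, here's a minimal sketch of what a client-side federated login could look like once the OpenID Connect pieces land, using keystoneauth1's OIDC plugin as it existed in the Newton era. Every endpoint, identity provider name, and client ID below is a hypothetical placeholder; none of this infrastructure was live at the time of the talk:

```python
#!/usr/bin/env python
# Hypothetical sketch of an OpenID Connect federated login against a
# Newton-era Keystone, via keystoneauth1's OIDC password plugin. All
# URLs, IdP names, and client IDs here are made-up placeholders.
from keystoneauth1 import session
from keystoneauth1.identity.v3 import oidc

auth = oidc.OidcPassword(
    auth_url='https://keystone.trystack.example:5000/v3',       # placeholder
    identity_provider='trystack-idp',                           # placeholder
    protocol='oidc',
    client_id='trystack-client',                                # placeholder
    client_secret='not-a-real-secret',                          # placeholder
    access_token_endpoint='https://idp.trystack.example/token', # placeholder
    username='alice',
    password='...')

# Exchanging the IdP-issued token for a Keystone token happens behind
# the scenes; from here on it's a normal authenticated session.
sess = session.Session(auth=auth)
print(sess.get_token())
```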
So the Newton deployment is already up, it already works, and we're making serious progress getting the OpenID Connect piece working with Apache as well.

Now a bit of a plug for our benchmark orchestration and workload tool called Browbeat. Some of you may have been at the previous talk, but basically what Browbeat lets us do is automate workloads in order to validate that an environment is ready for use. When we put the next release of OpenStack on deck for TryStack, prior to turning it over to end users, we run through a battery of workloads that we orchestrate with Browbeat. Specifically, we're interested in some of the Rally-driven workloads that test things like how OpenStack performs when you launch a large number of VMs simultaneously, which is very typical of the TryStack use case. Sometimes you see things falling over when there's a surge of usage and folks launch too many networks simultaneously; that can cause problems too. So if you haven't looked at Browbeat, I encourage you to download the slides and take a look. There's a whole slew of Ansible playbooks that make it easy to put workloads against your OpenStack deployment and visualize the results. I'm not going to go too deeply into the technical aspects, but I will show you on the next slide what a typical Rally run may look like. Next slide, please.

So in this case we ran a Rally workload to launch a large number of VMs, and if you look at the individual Rally iterations, you can see where the spikes are. Initially, when a large number of VMs were launched, you can see where the performance numbers differ, but overall you should see a fairly steady performance number. And while this Rally run was in play, we were also watching Grafana, the other tool we use extensively to visualize performance data coming back from our various environments. Next slide, please. In this case, what we're seeing is a very healthy sawtooth pattern, which gave us a comfortable indication that the environment was in fact functioning within our expectations and that there wasn't anything out of the ordinary going on. There are different metrics being visualized here: bandwidth utilization, system load, number of VMs being launched. Everything was within standards, so we were quite happy with that.

As far as how we use the tools we've talked about so far, there's a Git repository with all of the tooling that helps automate this; it's on the last slide, so you can all go check it out. I encourage you to do so, and if you have any improvement suggestions, I'd welcome suggestions and feedback. We're on IRC as well, externally on Freenode, so you'll have that information too. Next slide, please.

All right, so this is a list of... well, when it shows up... what's going on here? Anyway, what you're supposed to see is a slide listing all of the original founders of TryStack. I don't know if any of the original folks are here in the audience; Nachi was at our last one. It's the folks who founded the project, and then the current team that works on it as well. I also wanted to give a shout-out to our Facebook moderators who jump in, help answer questions, and keep order. Rain Leander, thank you.
And Matias Range; they've been instrumental in helping us answer questions on the forums and really reinforcing that community feel. So thank you for your time. And again, TryStack is for humans on Earth. Next slide, please, the last slide. It's limited only to humans and only to Earth. When we do make contact with extraterrestrials, we will probably change the policy, but again, it depends on how aggressive they are. Everyone's seen Independence Day, probably both of them. I'm at odds with my colleague here; I think aliens should be allowed to use TryStack. We'll just have to cross that bridge when we get there. We're going to have to agree to disagree. Anyway, that's another topic of discussion, but it's open for anyone on the planet. It's purely an advocacy platform; we just want you to check out OpenStack without all the complexity involved in getting it installed, managed, and running. So we'll take questions. We've got a few minutes here, and thank you for attending our talk. Appreciate it, guys. I don't see any mics, but... there's a mic here. OK, any questions? Any comments, questions? Suggestions. "Better support for Linux." With the display, maybe?

Right, so the question was, which OpenStack components do we provide? Everything out of the box, anything you would get in a vanilla deployment. Currently, on Liberty, we do limit people's ability to upload their own Glance images. We've had some contention on the storage side with just not having enough space, and we've unfortunately had people upload many, many copies of Windows Server, which we don't officially support there. But we will preload and update the latest version of any cloud OS that's available, including CoreOS and any of the other derivatives. Our only rule there is that it can't be copyrighted: it needs to be freely available, and it needs to come from a verified source with checksums so we can make sure it's a secure image. Other than that, we're happy to put anything else up there. I think currently we also disable Ceilometer; we've had some performance issues with it, but on the current Newton deployment everything is enabled across the board, so you should have all the functionality available there. I don't believe we include Sahara yet; I think that's still an optional component. If there's enough demand, we're happy to deploy anything optional: VPN-as-a-Service, Firewall-as-a-Service, whatever doesn't come enabled with Newton or Ocata, we're happy to set up if there's enough demand for it.

Yeah, I think the biggest challenge really is that we're doing this on our own time. Our full-time job is performance and scale at Red Hat; beyond that, whatever time we can find to contribute to TryStack is what Will and I put in. And the more open we make the platform, the more likely it is to be abused to some extent. As far as copyrighted content being on there, the more open it is, the more verification we have to do that people aren't putting Windows images out there when they shouldn't be, or whatever. So that's the tough part. But it's also a lot of fun, and we've learned a lot as well. We've hit a lot of OpenStack bugs we normally would not have encountered and gotten fixes for them. We've hit a lot of stuff around Neutron.
Before we had a lot of the culling scripts fine-tuned, we would hit all sorts of Linux network namespace issues and scalability problems. So for us it's also been a really good learning experience, and our goal is also to take what we learn from this platform and push that knowledge upstream: file bugs where appropriate and submit patches where appropriate. As for the resources we mentioned, by the way, we're going to put those up, maybe on browbeatproject.org as a blog post that can point to all of this, because there are slide decks you might be interested in, as well as the GitHub repository references on the slide I'm looking at, which you cannot see. It's a beautiful slide; I wish you could see it. Did you have a question, sir?

No, not really. I'm aware of a few other clouds, specifically around CI. We are working a little more closely with upstream infrastructure than we were before, but this is really purely an advocacy platform. There's no SLA on the environment: every 24 hours instances are pulled back, and it's a best-effort endeavor, but it's that way for a reason. Our goal is to accommodate a large swath of use cases and to tear things down and deploy the newest version as soon as possible, even going pre-production if we need to. So I do think they fit a somewhat different use case: if someone's running CI on a cloud, they want permanency on their tenants, and they don't want floating IP addresses pulled out from under them and things of that nature. So while there are a lot of parallels, what we're doing with TryStack is really more on the advocacy end and less about any kind of permanent use case.

We did work with the RDO CI group on temporarily moving their CI into TryStack, and we exempted them from the housekeeping that gets rid of instances while they were in transit, because part of their environment had to go offline and they needed a publicly accessible cloud to run CI on. So we do collaborate closely with the various teams, as well as the infra team. But like Will said, it's more of a sandbox environment: if people want to see what OpenStack looks like when it's stood up right, they can come into TryStack and have a look, because it is a daunting task to deploy OpenStack, no matter what method you use to deploy it.

It's a blessing and a curse, because the modularity, I think, is the biggest benefit of OpenStack: you pick the components you need. You don't get a boxed solution dropped on your doorstep that you have to fit your infrastructure and tools around; instead, you can mold OpenStack to fit your needs. So it's kind of the opposite of a lot of the vendor offerings in the proprietary cloud space. But the modularity also comes with responsibility, and it usually manifests itself in complexity. Our goal is to remove that complexity for people.

All right, well, if you have any more questions or you want to talk, we'll be hanging out around here. Thanks for joining us today. Thanks, guys.