There are a lot more people in here than I thought there would be. I guess you guys want to see OpenStack in production. So I'm Robbie Williamson. I'm the Vice President of Cloud Development and Operations at Canonical. What the hell does that mean? I'll get to that in a minute. I'm here to basically just talk about how we use OpenStack in production within Canonical, to support both the internal services across the company globally and the Ubuntu project itself. We set a mission, I guess, a few years back to really practice what we preach, and this is just the high-level overview of what we went through. It's kind of an evolving talk. I found that a lot of people don't know that we actually use this stuff internally, and that they are probably using it as well if they're using any of the newer Ubuntu releases. I'll cover that too. And I think that plays really well into our story of how we develop and integrate OpenStack into Ubuntu and provide services to deploy and manage Ubuntu: it's because we have to do it ourselves. I force my IS team to do it. And they hate me.

Really quickly about Canonical: look, we're global. These are our big offices. We have a small office on the Isle of Man where a certain person lives. It's a very big and important office. We were established in 2004. We have over 500 people, close to 600. And the mission is to bring Ubuntu to the world across a whole suite of devices, including the cloud. Right now we have close partnerships with Amazon Web Services, HP Cloud, Rackspace, and Microsoft Azure, believe it or not. And of course, we work across a wide range of flavors of OpenStack.

Important note: we've been doing cloud for a while, since 2008. I joined the company in 2008, previously at IBM's Linux Technology Center for about 10 years. At that time, my first UDS — I don't know if any of you have ever been to UDS, the Ubuntu Developer Summit; much the same format as this, smaller now, virtual now — I remember we were in Mountain View, and the entire server team at the time, which was very small, led by Rick Clark, one of the founding members of OpenStack, they were all holed up in his room. I was like, what the hell are those guys doing off in his room? What are they doing at UDS? Oh, no, we're doing this. We're looking at private clouds. Private cloud? Get out of here. I'd heard that at IBM forever. It's like, that's vendor BS. No, no, no, there's this university we're talking to. They've got this thing called Eucalyptus. We don't know, we're looking at it. And a few months later, six months later, we were producing a Eucalyptus installer. Now look, it wasn't top notch. It was Eucalyptus. I'm not even going to mess with that or go into that area. But ever since then, we've been focused on cloud and scale-out. From a server perspective, that was when Ubuntu Server really became something more than just Debian on our release cadence, and Canonical as a whole really split into a client focus and a server focus. So I try to remind people, when they talk to us about cloud and a lot of other vendors are coming along with cloud: look, it's not a buzzword for us, it's our business. We've been doing it for a long time. And it's ingrained in how we approach our engineering, how we approach our deployment, and even how we choose the features we focus on. So around 2011 — maybe 2010, it kind of blurs together — the question was really put forward: can we really practice what we preach?
Can we transition our traditional IT infrastructure to a really cloud-centric workflow across the entire business — running finance systems on it where we can, and so forth? I'm going to tell you right now: it's not all there yet. But we've made a lot of good progress. Another question was: can we support not just Canonical's infrastructure, but the Ubuntu project too? There are a lot of websites in the ubuntu.com domain, from your traditional web pages to the archives to errors.ubuntu.com, which tracks bugs and gives you basically a heat map of where the high-hitting errors are in the current releases. Then there's Launchpad, there's Landscape — there are a lot of different services hosted between Ubuntu and Canonical. Can we move those over to the cloud as well? And can we move internally from a traditional setup — where the IS department is just some guys we open a bunch of RT tickets against and curse at, and development is a bunch of idiots who don't know how to deploy things into production — to a real, proper merged team? Can we do that as well?

So the answer is: yes, we can. But it was damn hard. That's a picture of Mark Shuttleworth playing in his garage with some servers and MAAS. I love this picture because it really shows who he is deep down. Yeah, he's the owner of the company, he's incredibly smart and visionary, but he loves this stuff. He gets into it and drives it. And yeah, he was basically deploying some servers in his garage. We have that — it's called the garage lab — and we can actually check it out remotely and use it.

But there were a lot of different expectations and different things we had to meet, right? There are organizational expectations. There's the heterogeneous hardware we had, in the sense that there's hardware all over — different types of hardware. It's not like you just point a wand at it and all of a sudden it's a cloud. And then there are the software decisions we had to make. I'm just going to go through these high-level areas. There are other things we had to do in terms of processes — how developers interact with IS when you have a whole DevOps formation — but I really want to just cover these three right now.

So, organizational expectations: management. Management wants to know, well, how much faster will it be? How much more efficient? And efficient always means how much money can we save. You're not going to save a lot of money initially when you go to cloud, I'm going to tell you that right now. First of all, unless you have a lot of spare hardware just lying around, or you can afford to bring your entire IS down for weeks — and we couldn't do either — you have to buy additional hardware. And that's the first thing they freak out about. What the hell do you mean, buy extra hardware? I thought we were doing cloud. We're pooling this stuff together. Yeah, but you can't pool the machines you already depend on — unless you want me to shut down ubuntu.com. Oh, okay, okay, okay. And how efficient will we be? We'll have a cloud, and that means you can scale on demand automatically. If we happen to do, I don't know, maybe a countdown that sends the entire world to my website at the same second to announce a phone, can it stay up? Well, yeah, the cloud can scale, but it's whether the software is written to scale as well. Our web team — there are a lot of assumptions built in. You can scale compute and storage and network all you want.
But if the actual code isn't built that way, it doesn't matter. Developers want to know: well, man, does that mean we can deploy, test, develop, and get to production fast? No more opening RTs — I can get whatever I want live right away? No, you can't. Listen, this is a production-level thing. We still have to have protections in place. You can't have people running rogue DNS servers. I got an email today that someone had deployed a rogue DNS server in one of our clouds, and that cloud was open to the public. That's not good. So yes, you're going to have a lot more freedom, but there are still going to be controls. There are certain things you're still going to have to wait on, and I think some people think, well, it's a cloud, they can just sandbox it off — no, it doesn't work that easily. And operations: Jesus, what do you mean we're going to a cloud? How can we do our jobs? How can we track who's using what? You're going to give developers access to deploy their own services? Are you kidding me? What are we going to do? So I said, look, calm down. First of all, we're going to do it internally, then we'll move externally. We'll learn from our mistakes early, and hopefully correct them when we move externally. And we'll take baby steps. It's not going to be all at once. It'll be okay.

So there was a lot to do in all these different areas. One of the ways we did it was to reorganize things. Underneath me, you'll see typical engineering, solutions, yada yada yada. And then you'll see Infrastructure Systems, right? IS. Within Canonical, all these teams are peers. Obviously some teams are bigger than others, but they're all peers. So if I have my Juju manager saying, oh man, that works, you can deploy this, this, this, no problem, you can scale, no problem — my infrastructure guy will say, no, that's bullshit. I just tried it, it didn't work. So there's a lot of calling each other out back and forth. I mean, we are proper DevOps in the most dysfunctional way, like a family that's just arguing all the time, but it's good in the sense that we call each other out. We're friends, but it keeps you honest, right? And it goes both ways: when IS says, oh no, that'll take forever, my Juju server guy can say, no, no, no — MAAS will help you do that. Just do this and this and that. Oh, okay, okay, okay. Another cool thing is that when IS is underneath you, you get your tickets handled a lot faster.

Outside of my organization, we have Online Services, which is your Ubuntu One services — honestly, hosted largely on Amazon's cloud, because it's a far larger service serving far more people. We have Ubuntu Engineering, which does a lot of the client side and the core of Ubuntu, as well as the phone. And then your traditional things: OEM relations, operations, legal, all those types of things. Those are all outside of my organization, obviously. But they all depend on IS systems in one way or another.

Heterogeneous hardware. We have different vendors and different architectures in our labs, and OpenStack didn't really work right off the bat with all of them. I think there was a talk earlier about the different IPMI implementations — how some vendors like to tweak it this way or that way, or they like to change something.
And when you just want to pull a machine into a cloud, it's not as straightforward as you might think. So when you're trying to explain this to folks who don't understand, they get a little frustrated. They say, well, we have all this damn hardware, what do you mean you can only put half of it in there? Well, because some of it's ARM — we can't put a bunch of PandaBoards in there; it doesn't work the way we want it to. Or some of it has different capacities. Not just processing speed, which does make a difference, but also storage. We have some SANs; some of the SANs are backed by SSDs, some are backed by rotational disks. Depending on how fast you need the data for a given service, that determines where we put the data. And then when you have a cloud, okay, now I've got to figure out where I'm going to put the storage, and I can't just pool it all together because it's all different types of hardware underneath. Networking: just the fact that some of the machines have a lot of cards and some don't. Quantum helped us a lot toward the end on that one, but it is a consideration.

And then finally — this is hell — locations. Our primary data centers are in Boston and London, but we have another one growing in Taipei, and then there's the one in the garage on the Isle of Man. How do you pull that together? We want one big cloud. What do you mean, one big cloud? I'm already paying for a huge pipe between Boston and London. I get yelled at every year when people ask me, why are we paying so much money for this? What the hell is this thing for? It's so your data centers can talk fast. But can you make that one big cloud? Can you make it a heterogeneous cloud — one cloud to the user, but underneath it's two? How do we do that? And then you talk about China. Whew. We are looking at expanding to China — I don't know if you saw the news — and there are a lot of different challenges in terms of IT when you work with China. Just try to use Google when you get there. Amazingly, side note, you can use Skype. Now, I'm not going to say that conversation is going to be private, but if it's public — if you're talking about a rolling release or something like that, hey, fair enough, they may want to know — I find that interesting. Skype works like a charm. Hangouts, not so much.

One of the tools we use — and ironically, this is recent — is Landscape, which is a product we've been pushing for a while now. But just like any other IS team, they get crotchety, they get comfortable with their own tools, they never want to change. We basically said, look, get your shit into Landscape. And that's good, because if you have a lot of machines running Ubuntu, Landscape is pretty damn good at managing them. You can manage your upgrades, it alerts you, it tracks capacity — it can even notify you when you're about to run out of memory or storage on certain machines. It's a really good tool for managing a bunch of Ubuntu machines. So now we use it. We have about 522 machines in it, give or take — there's a mixture in there, with virtualization and Xen machines as well.
But we leverage that, and later on I can talk about how we're expanding Landscape to allow us to do more things within a production-level environment. Where was I? Ah, software decisions.

So, which cloud platform do we use? This was the early question, when it was first proposed: Eucalyptus? CloudStack? OpenStack? Yeah, right — so it really came down to which OpenStack we use: Essex, Folsom, Grizzly? Well, we started out with Essex, because that's what was available then. But a lot of the conversation around this, and conversation with our cloud partners at the time, is why there is the Ubuntu Cloud Archive. It became very clear: look, this stuff is moving too fast right now to just say, stick on the LTS and live with Essex for five years. That's not going to happen. So that's when we came out with the Cloud Archive, which is a supported archive you overlay on top of the LTS that lets you run newer versions of OpenStack. Grizzly's already in there — actually, I think Chuck Short had the packages up and running maybe within a week of the Grizzly release. And of course Grizzly will be in 13.04 when it comes out, next week, I believe. Right now, I think a lot of our clouds are running Folsom, actually. And with Grizzly out, we'll be looking at which ones to upgrade and which ones we want to wait on. I'll talk a little bit about how we do that as well.

How do we manage the hardware pool? Not all of our machines went into the cloud. We have certain machines that we don't need in the cloud. Some are just storage. Something like archive.ubuntu.com we don't need in the cloud; we just need it over here, pushed out to our mirrors. But for the machines we do have: how can we easily pull them in and out? How can we easily install and upgrade en masse — that's a play on words — how can we do it all at once? How can we make it easy to slide these machines in and out? What tools are we going to use? Are we going to use Cobbler? Are we going to use FAI? What? These were decisions and discussions we had actively internally, and they drove some of the decisions and things we did in our products.

How do we manage the cloud? Managing access, tracking zones, measuring resources. Now, we don't do internal billing. In bigger companies you have all that — I remember when I was at IBM, it used to drive me crazy that I had to pay all this money to some internal team to run the firewall for the internal lab, and they'd claim it as profit when it was all blue dollars. Drove me crazy. We don't do that. But it's still useful to know who's sucking up all your resources in the cloud — is one of our online services guys doing some development that's sucking up all the CPU? How do you track that? Those discussions early on kind of led to the whole Ceilometer project. Nick Barcet was with the company at the time, and there was a lot of discussion around resources and metering and, eventually, billing — but metering was the real key. Billing gets really complicated, but we wanted to be able to track usage.
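Going back to the Cloud Archive for a second: enabling it on a 12.04 LTS box looks roughly like the sketch below. The repository line follows the precise-updates/grizzly layout; the exact pocket names and the package at the end are illustrative, so check the Cloud Archive documentation for your release.

```bash
# Minimal sketch: overlay the Ubuntu Cloud Archive (Grizzly) on 12.04 LTS.
sudo apt-get install -y ubuntu-cloud-keyring
echo "deb http://ubuntu-cloud.archive.canonical.com/ubuntu precise-updates/grizzly main" | \
    sudo tee /etc/apt/sources.list.d/cloud-archive-grizzly.list
sudo apt-get update

# OpenStack packages now resolve to the Grizzly versions, e.g.:
sudo apt-get install -y nova-compute
```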
Back to metering: yes, you can track resources on the hardware itself, but also who's using it, how much they're using — are they abusing their free privilege of using this cloud within the company? And finally, how do we manage the services in the cloud? How do we deploy them? How do we manage them? How do we scale them? We'd been thinking about this problem since, what, 2010, when we were looking at cloud in general: you're going from hundreds of machines to thousands of instances, and you can't scale your IT department that way — it's inefficient. How do you approach management of services at that scale?

Before I get there, one small clarification. I added this slide because it drives me crazy when people think configuration management and service orchestration are the same thing. Configuration management, I think of like the old universal remote: I can control my TV, I can control my satellite, I can control the stereo. I push one button, it turns on, it's all pre-programmed, it's pretty easy — but I still have to do it in the right order and make sure I don't screw it up, or I start all over. My mom could never figure it out. Service orchestration operates at a higher level. It's all about what you really want to do at the end. It's like the new remotes: I just want to watch TV. So I hit "watch TV" — the TV turns on, the cable box turns on, it switches to the right input, all automatically. I don't have to care, I just want to watch TV; I don't want to fiddle with all the details. Or "play a game", and so forth. That's the difference I see between configuration management and service orchestration. I would claim this as my own example, but I stole it from one of my managers, Mark Ramm — I'm giving him credit right now.

So moving forward, these are some of the software decisions we made. Obviously OpenStack — duh, easy decision. Ubuntu, come on. The other three: MAAS wasn't the first choice. We were looking hard at Cobbler for provisioning machines, but the project started kind of tailing off a little bit — I think Red Hat moved to a new project, I can't remember what it was. And then our security team looked into it and kind of freaked out; the bugs are out there, you can look at them. And Cobbler was a little heavier than what we needed. It had a lot of cool functions that we just didn't need. We just needed something to PXE-boot machines, deploy them, and track them, and maybe do a couple of things like firmware updates and inventory management. So we started MAAS. MAAS was originally wrapped around Cobbler — it was a great start — but as of maybe a couple of releases ago, we removed Cobbler and put in our own stuff. It does the same basic things: PXE boot, DHCP, DNS. It's just Metal as a Service — a way of controlling hardware at scale. You don't have to use Juju or anything else with it. If you just want to deploy a bunch of Ubuntu servers, you can use MAAS. Landscape uses MAAS when you're remotely managing machines. It's just a conduit into managing machines. And Landscape, obviously — we started looking at it in terms of managing the hardware, and now we're expanding on that in terms of how Landscape can manage a cloud, which I'll get into later.
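As a sketch of how MAAS slots in underneath Juju as the bare-metal provider: you point a Juju environment at the MAAS server and bootstrap. The server URL, OAuth key, and secret below are placeholders, not real values, and the exact config keys may vary between Juju versions.

```bash
# Hypothetical Juju environment backed by MAAS (placeholder URL and key).
cat >> ~/.juju/environments.yaml <<'EOF'
environments:
  maas:
    type: maas
    maas-server: 'http://maas.example.com/MAAS'
    maas-oauth: '<MAAS-API-KEY>'
    admin-secret: '<choose-a-secret>'
    default-series: precise
EOF

# Bootstrapping enlists a machine from the MAAS pool as the environment's
# control node; subsequent deploys pull further machines from the pool.
juju bootstrap -e maas
```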
On the Landscape side, there's a lot of cool stuff about to be available. And then Juju, our service orchestration tool, supports various back ends. We started out with EC2, Amazon. Then we quickly moved to OpenStack. Now we have what we call a local provider, which lets you do dev and test deployments on your laptop, as well as MAAS, which is our bare-metal back-end provider. Juju is very pluggable on the back end, so we can write providers in various ways so that you can deploy services anywhere. To us — no offense — OpenStack is no different than any other service. Hadoop, hell, WordPress, MySQL: they're all just services to us. Whether we deploy them on bare metal or in the cloud, it's the same, so we use the same tooling. So why do we use Juju and MAAS to deploy OpenStack? Because we use Juju to deploy services, and we deploy services on bare metal with MAAS. We can deploy Ceph storage onto bare metal using Juju and MAAS. We can deploy Hadoop onto bare metal using Juju and MAAS. We can deploy Hadoop into the cloud using Juju. We use those tools because they work for us, and they cover more than just the cloud. The cloud is very important, but we have to run other services as well. So when people ask why we chose those tools: it's because our focus is on the services, not one specific service.

So, Canonistack was born. It's an internal cloud that we use. There is no SLA on this thing — it can go down two minutes from now. It's really just for our own use; we encourage our developers to use it to deploy services. It has external IPs — someone deployed a DNS server on it recently, so now we're blocking certain ports. Our developers can use it to prototype services, websites. errors.ubuntu.com was originally prototyped on there. And it's pretty nice. It's not always up, right? We're always playing with bits, always testing new things out, because that was the original intent and we kept it that way. It has two regions, so we can always upgrade one before we upgrade the other. It's not huge, but it does enough and suits our needs for what we use it for. Again, it's mostly dev and test.

After Canonistack had been up for a while and we were comfortable with it — I think maybe Folsom had released by then — we made ProdStack. ProdStack has been up and running just since 12.10. It's 12.04 LTS running Folsom from our Cloud Archive; the details are up there. It's running a number of services now. errors.ubuntu.com is a service, again, that tracks bugs in the background — how many there are, the heat with which they're filed — so we can get a better sense of where we need to focus our development, mostly on the client. If you've ever looked at a recent Ubuntu installation and wondered what the hell whoopsie-daisy was: whoopsie-daisy is collecting that data and sending it there. And it's anonymized — we don't care who you are, we just care about the bug. The certification website: if you want to certify your hardware, or look up whether your hardware is certified, that service is hosted on our cloud. And product search, which everyone loves, I know — Amazon search on your desktop. When we first released the product search, it was wide open. You could type something in there and you might get some crazy stuff back from Amazon.com. So we had to put some filters in place.
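Back to the Juju workflow for a moment — where OpenStack, Hadoop, WordPress, and MySQL are all just services. In practice it looks like the minimal sketch below, using stock charms; the same commands apply whether the back end is EC2, an OpenStack cloud, or MAAS bare metal.

```bash
# Deploy two services, relate them, expose one, then scale it out.
juju deploy wordpress
juju deploy mysql
juju add-relation wordpress mysql   # orchestration: wire the services together
juju expose wordpress               # open it up to the outside world
juju add-unit wordpress -n 2        # scale the service out with two more units
```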
As for the product search: so now, every time you run a search through it, it hits our filters. We anonymize — we don't care who you are. We want to make sure you don't get back things you really shouldn't be getting when it's wide open and anyone can use your desktop. It could be kind of embarrassing. So that runs now. Those three services, plus some back-end CSS stuff for our website, are currently running in our ProdStack, which is OpenStack based.

Moving forward, we have plans to put more services on there. Music search, video search — and again, those are just filters to make sure the appropriate data gets sent back. Again, we're not tracking people. I know people always freak out about that. We don't want to track that. If we wanted to track you, we could track all your downloads from our download servers or whatever — it's mirrored, but we could track that. Come on, people. The full ubuntu.com: we want the full website there. A lot of the back end is already there, and we can scale it to meet demand. We have a pretty important release coming up next week; we'll see what happens. We got to test it with the phone and tablet announcements, and if any of you went there on the day, you know that on that test — I won't say we got an A — but we recovered really quickly. A lot of that goes back to the web developers not designing for scale, not the compute, storage, and networking. Obviously, it's easy for me to point the finger when I know I don't manage those guys, but that's just the fact. And Launchpad PPA builders: eventually we want to move those into the cloud, and that'll be pretty cool. We do a lot of virtualization on the ARM side; ideally we're going to move to real hardware on that, hopefully later this year. That'll be a pretty big test. Maybe even Launchpad itself running in the cloud, I don't know. It's a pretty hairy beast. And I'd like to keep the ability to create and build our daily images as well.

Some other things we're doing. Right now, as you saw on the previous slides, our clouds are each deployed in single data centers. They're big data centers — a lot of other big companies are in there, and I'm not too afraid — but still, it's not the best redundancy model, and we like to stay up. So we want cross-environment deployment. That means I can have OpenStack deployed in my London data center and OpenStack deployed in my Boston data center, and it's the same cloud — you'd never know the difference. That depends on some of our tooling as well, so it's not really just an OpenStack problem. There are certain services that could be a little better at it, but it's also some of our own stuff that we need to solve in the coming months. HA plus one: right now our cloud isn't even HA — yay. But we've spent the last six, eight months working on the ability to deploy OpenStack in HA — HA plus one. Why do I say HA plus one? It means you can always upgrade your servers and still stay HA. So it's three machines per service. We can do that now. Live host machine upgrades: can we upgrade the cloud without bringing it down? It's one thing to say the lab is going to be down over the weekend, sorry. I can't tell you that ubuntu.com will be down over the weekend.
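The "HA plus one" arithmetic — three units per service, so any one can be taken down for a host upgrade while the remaining two still form an HA pair — maps naturally onto Juju's unit model. A sketch, assuming a charm whose peers cluster themselves (the rabbitmq-server charm is one example of that pattern):

```bash
# Sketch of "HA plus one": run three units of a clustering service so one
# host can be pulled for upgrades while two units keep the service HA.
juju deploy rabbitmq-server
juju add-unit rabbitmq-server -n 2   # two more units: three in total
juju status rabbitmq-server          # verify all three before touching a host
```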
So how do we upgrade the cloud — not just the OpenStack release, but kernel updates — and keep the cloud up? How do you manage that upgrade across the entire cloud? We've been working on that solution, and we have it down and working now. Chaos Monkey and Mayhem Badger. Chaos Monkey you've all heard of, right? From Netflix. They have a daemon that goes around just killing random services, to see if the system is HA and will respond. Mayhem Badger — I coined that — is where we just go around and start unplugging random machines to see if the cloud responds. That tests our HA plus one. It's been a directive that we have to make sure we can really deploy these as production-level clouds. I mean, it's just ubuntu.com — we're not a telco — but we are talking to telcos, we do business with telcos, and these are the things they need to be able to depend on. All of these things, to be honest with you, are either in deployment or coming now.

So tomorrow, there's a keynote, and Mark will be up there doing another demo that stresses me out. I highly urge you to attend. I can say — I guess this is our fourth time doing this, and I've been involved at various levels in every single one, from packing up HP MicroServers to flashing disks to pushing servers up a ramp — this is a really good one. We're really able to show what a real production cloud looks like. Ideally we wanted to have thirty-something machines on stage, but we're using just two virtual machines, because we only have about 15 to 20 minutes and I don't think the power constraints of that stage would allow it. But it's all the same tools. There's no magic, no man behind the curtain. There's not even going to be Davey up front like you've always seen in the past, hacking away to make sure we don't lose the network connection. This is live and real, and I encourage you to show up for it.

And with that — sweet, I've got time for questions. If you have a question, I'm told you're supposed to come up to the microphone, or I'll just repeat it. Yeah. You're running Swift and Ceph — why? One is block storage; we basically mimic what we have in AWS, right? Object storage, block storage — that's why. And hey, Ceph is cool. We like Ceph a lot. We're growing that, and Swift came with OpenStack, so why not run it? But we don't really choose sides. I guess technically, if you're running Ceph in your cloud, it's not pure OpenStack, but we don't care — we're running in production, we need the stuff to work.

Is there any sense of how your personnel scaled? I can tell you — you can ask Andrew — I haven't scaled our personnel at all. Actually, if you have the right tools in place, once the lab is up and running, adding more hardware doesn't really add any problems. It's more about getting it up and running the first time — getting it all configured, understanding what you're doing. Once it's up and running and you're just adding more servers in, if you have the space in your data center, it's okay. Those are the constraints you deal with more: power, data center stuff. But if this cloud had to grow by a bunch of nodes, I know I would not scale the IS team to do it.
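On the Mayhem Badger idea from a moment ago: the tool itself isn't public, but the concept is simple enough to sketch — pick a random host, cut its power out from under the cloud, and watch whether services recover. Everything here (the host list, the IPMI credentials, the recovery check) is hypothetical:

```bash
#!/bin/bash
# Hypothetical "Mayhem Badger" sketch: hard-kill a random host over IPMI,
# then check whether the HA+1 deployment kept its services up.
HOSTS=(node01-bmc node02-bmc node03-bmc)         # placeholder BMC addresses
VICTIM=${HOSTS[$RANDOM % ${#HOSTS[@]}]}
echo "Badgering ${VICTIM} ..."
ipmitool -I lanplus -H "${VICTIM}" -U admin -P secret chassis power off
sleep 300                                        # give the cloud time to react
juju status | grep -i -E 'error|down' || echo "services recovered"
```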
Back to the personnel question: we did have a bigger focus, I think, on web ops, in the sense of training those guys up to understand how to use our tools so they can help others deploy services. But that's the beauty — the draw — of the cloud and of service orchestration tools: the tight coupling between headcount and infrastructure goes away, and you're decoupled in a way that lets you grow. You can grow your infrastructure without necessarily saying, okay, I've got to hire 20 more people — because quite honestly, that just doesn't work for me budget-wise.

Question — I can't see very well. Custom changes to the cloud? No, there are no custom changes. Well, we have the luxury that those are our packages. So if we have to make a custom change, you benefit, because we're going to push it out; we don't want to maintain a separate fork. We did have to make charms, which are what Juju deploys. Initially we had custom charms, until our IS team got access to upload the charms back into the community. And the nice thing is, they're improving them — they might patch a charm and make it more robust, more secure than the vanilla one we had out there. But no: any change, we would never keep private, because it's the product. We're running our own product, so we just push it out of the data center.

One thing is that our tool Juju needs to be able to support what we call cross-environment relations. I could stand up here and talk a lot about Juju, but the ability to have a service in this region connected to one in another region, and have it seem seamless — that is a big one for us. The data-center-to-data-center thing we kind of solved in terms of logistics, because we pay for a big fat data pipe between the two, so that the archives — and, say, our OEM team in Boston — don't have to wait forever. I know the folks in Australia — we had one down there — but I think the infrastructure itself shouldn't know, right? The networking is going to be a challenge — Quantum, making sure it's all on the same shared network. Storage is going to be a challenge — do you have your Swift spanning data centers, or are you doing something special and magical on top of it? I think they're the same traditional challenges you have with multiple data centers anyway. Like I said, our challenge is additional because we're trying to make it really easy and transparent from a deployment standpoint. But it's not my problem — I just tell jokes.

Our build and test — is that on Launchpad in the cloud? A lot of it, not yet. When I mentioned the PPA builders, the Launchpad builders — we're going to slowly transition those over. Right now for Ubuntu, when you have PPAs, they're just shared machines in labs that get randomly assigned. Canonical does reserve some: we have our own internal ones, so we can always build Ubuntu no matter who's hammering on the PPAs. Sometimes we pull some of those PPA builders over toward a release so we can make sure we get the release out. But ideally, yeah, there's no reason why it shouldn't be in the cloud. It's just a matter of getting it done. There's a lot of old stuff we have to undo — just time and effort.
The fact that we release every six months makes it really hard for us to do long projects, because as we get closer to release, the IS team has to start gearing up — moving things around, buying additional bandwidth so we can meet the demands of the release. We're getting better at it. Our IS team is reorganizing into more of a squad approach, where they can rotate a little faster. But eventually, yeah — like I said, it's all about dogfooding, practicing what you preach.

[Question about getting ops on board with the new tools.] Yes, I can talk about that. When we first started talking about Juju, our guys were like, whatever, I'm using Puppet, I'm not changing. Ops never want to change — I wouldn't want to either. When you're in ops, any downtime is bad, so if it ain't broke, don't fix it, right? And then our developers are always on the cutting edge: come on, man, it's going to work, don't worry about it. So honestly — and as weird as it sounds — when we merged the organizations, it kind of broke down some barriers, because it wasn't us and them anymore. That does help. And when we have development sprints, we include IS. They're there, they provide input into design and development. They're included, so they accept it more. They're not scared of it if they were involved in helping design it. Making sure IS isn't just the doers, but is also involved in planning the technology — that helps a lot as well. And I'll say, because of our company, a lot of our IS guys are developers — some of them are Debian guys — so I don't know if they're your traditional ops; at least within our company, they're more similar to developers than at your bigger companies, maybe. But again, it was initially very hard: a lot of pushback, a lot of "we're not going to do it", or "this is crap", or "I can't believe you're making me do this". It just takes time. But you start breaking down those barriers, really having the teams work together and feel like they're all one team — and ops knows that if there's a problem, development's not going to push them away. That helps a lot.

Questions. Well, we have a single RT system for various things, but I'd be lying if I said, oh, our tickets have gone down and it's all magical now, you know what I mean? With the RT system, it's still beneficial to know somebody to get your RT responded to a little faster, or to jump in the IRC channel and be like, hey, what the hell's going on with ticket four-seven-five-seven? But at the same time, I think developers feel a little more empowered to get some things done. Ironically, though, because you give them a little power, they want more power: well, why can't I do this too? Why can't I do this? What's slowing you down? Because then you turn around and have someone, again, deploying DNS on a public damn thing, today. That just happened. Like — am I out of time? I'm getting the wrap-up signal, so I'm wrapping it up. But you can email me: robbie at canonical.com, robbie at ubuntu.com — same thing, same email. I'm an Ubuntu member.
I earned it, I didn't just get it. Yeah, if you have questions, feel free to email me. Thanks.