We're close enough to the top of the hour. Yay, we're live. Hello, everybody. My name is Rob Hirschfeld, and we're going to have a really fun discussion today about open infrastructure and how it's any infrastructure. I've been in OpenStack for a long, long time. I was one of the founding board members, I served four years on the board, and I've been to every summit, so I've seen OpenStack for a long time, and I really embraced fully the idea that we needed open source infrastructure and open cloud. And then as I progressed through this journey, I started seeing that when people outside of our community are consuming cloud infrastructure, they're not as concerned about whether you used open source software or not. They consider Amazon just as open as OpenStack. They consider Google just as open as OpenStack, because they have APIs and they can consume it. And there's actually a lot more open source activity around Amazon than there is around OpenStack. That might be heresy in this crowd. It hopefully isn't, because I think we need to understand the communities that we're involved in and how they want to consume what we're doing. That's more of a preamble. My background: I'm the CEO of a startup called RackN, and we run a project called Digital Rebar. Two generations ago it was called Crowbar, if you know some of our history. We really focus on helping people run cloud and physical infrastructure in a hybrid way. For this talk, I had a very straightforward goal. I'm going to launch into demos, so things are a little bit out of order, because I want to kick the demos off. Our goal was to run a reference workload. We're doing a lot with Kubernetes right now because we think it's interesting, and there's a really strong community forming around it. We love these open community processes. But we want to be able to run Kubernetes on any infrastructure with the same operational process.
So I want to get out of this whole idea that it matters if I run on Google or Amazon or OpenStack or metal or Packet or any of these other places. I want to be able to run Kubernetes and have a conversation with somebody who's running on a different infrastructure and not have that get in the way of the real operational concerns: is it scalable and upgradeable and secure? So what I want to be able to do, and what I'll show right now, is run Kubernetes on three different infrastructures. I'll explain how I'm using metal. And I'm going to have three different software-defined networking choices. I'm not going to switch the operating systems in this case, just for simplicity, but this gives us a three-by-three test grid of nine options. And I'm also going to demonstrate something else which I'll have as sort of a research point; if you were in the last session, this will be a really nice tie-in. I actually did work in advance of this running on multiple OpenStacks. My goal originally was to run on six different OpenStack clouds, deploying Kubernetes across all six. So that's my demo one. I'm actually going to jump into that and show you. Here I have a command line, and I'll show you the UI behind all this stuff, but I have just an OpenStack script. Those errors are expected; I've already run this script to create the providers. What it's going to do is tell our infrastructure to, well, let me show you what it looks like. Digital Rebar is the project that's doing this coordination, rebar.digital if you want more information. On this system, what it's done is gone in and spun up requests to build a server from each of five different OpenStack clouds. It uploaded my credentials; those 404s were from trying to upload credentials that were already there. Five different infrastructures.
I have hidden the names to protect the innocent, or rather to protect the guilty, so you won't know which cloud infrastructures we have. Because I'll tell you, four of the five aren't going to actually complete the deployment, and we'll talk about why. So I'm literally using the tool to spin up these five VMs on five different infrastructures, and Digital Rebar is going to take those and send them out to five different infrastructure implementations. So that's goal one, and that's demo number one, and we're going to let that spin a little bit. The one I do identify is DreamHost, because what I found was that DreamHost worked incredibly consistently in their new DreamCompute, and we'll talk about why. I call them out because I think what they've done is really important; the reasons will become very clear as I go through the talk. So this is actually going through and spinning up these cloud infrastructures. And then back on the talk side, goal number two is this multi-Kubernetes deployment. We're going to take the same Kubernetes automation, based on an Ansible playbook called Kubespray, one of the community-maintained Ansible playbooks, and we're going to deploy that against three different targets. And what that looks like is this. I'm not going to show you all the parameters, but there's help output that would show you tons and tons of options and variables in these Kubernetes playbooks. In this one, I have an AWS provider. So here's my Kubernetes workload; I'm just using my administrative API. I'm saying: use AWS, name the deployment AWS, and use Flannel as my networking. So that's my AWS infrastructure going up. In this line, I have the exact same command, except instead of using AWS, I've told it to use Google. I'm going to name that deployment Google, and I'm going to use Calico for my networking. So these are different SDN capabilities. Calico's here at the summit. Juniper's Contrail I'm going to show next.
OpenContrail I'm going to run on the OpenStack deployment. OpenContrail's a little bit more sophisticated; it's supported by Juniper, who are also here. It'll actually set up a dedicated node to run the Contrail workload, so it brings in an extra machine for that. All right. So basically what's happening in this case is that it's going to talk to our infrastructure, request machines from these three different cloud providers, and then build all of the roles and associated pieces and permutations to build a Kubernetes deployment. And I'll go through that. I'm going to check back in and out as we go. But I love doing a demo for this, and then we'll talk about what the demo means from that perspective. So if I go to demo server one, you'll see there's a lot of activity going on in here. It's not as simple as just the five nodes I built. In this deployment request, there are four nodes in each Kubernetes deployment, and five in the one that uses Contrail, so we just spun up thirteen nodes. And I already had a working Kubernetes deployment going. So we're watching this stuff come in. It's going to automatically allocate off of these different cloud providers. So in my Google instance (cover your eyes, these are competitive cloud infrastructures) I'm spinning up machines in Google. I'm spinning up machines in Amazon; here are my instances coming up in Amazon. And here are my machines coming up in DreamHost. So down here it's actually bringing up these different deployments, and it'll come through, actually allocate the nodes, and bring them through the whole process. And we'll jump back into this, and I can talk more about how this works and things like that. Makes sense so far? I know I'm going super fast because I'm trying to kick off the demos, but each is a one-line command demo, so there's not much to do. Back to the talk.
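The pattern behind these demo commands is just a provider-by-SDN test matrix. Here is a rough sketch of that idea in Python; the function, spec fields, and names are hypothetical illustrations, not the actual Digital Rebar CLI or API.

```python
from itertools import product

# Hypothetical test matrix: the same Kubernetes automation, varied only by
# infrastructure provider and software-defined networking (SDN) plugin.
PROVIDERS = ["aws", "google", "openstack"]
NETWORK_PLUGINS = ["flannel", "calico", "opencontrail"]

def deployment_specs(providers, plugins):
    """Build one deployment spec per provider/plugin combination."""
    specs = []
    for provider, plugin in product(providers, plugins):
        specs.append({
            "name": f"{provider}-{plugin}",
            "provider": provider,
            "network_plugin": plugin,
            # OpenContrail gets a dedicated node for its controller,
            # so that deployment brings in one extra machine.
            "node_count": 5 if plugin == "opencontrail" else 4,
        })
    return specs

specs = deployment_specs(PROVIDERS, NETWORK_PLUGINS)
print(len(specs))  # 9 combinations in the three-by-three grid
```

In the live demo only three of the nine combinations are run (AWS with Flannel, Google with Calico, OpenStack with OpenContrail), but the full grid is what this kind of parameterized automation makes cheap to test.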
So what I'm demonstrating for you, and the purpose of demonstrating this, is hybrid. The business value, the thing that people should care about here, is not that I'm installing Kubernetes and doing all this stuff. You can take your time; you can see demos of how this product works and watch how Digital Rebar actually orchestrates, and composable ops, things like that. We'll talk about the values. But what we're really trying to say here is that we're talking about hybrid. And it's not just one thing for hybrid. Hybrid is about multiple cloud platforms. It's about multiple dev tools. We did surveys before: everybody uses Amazon, but some people use other things. We have Chef, Puppet, Ansible. We have different operating systems. We have a lot of different things that we have to consider when we talk about hybrid. So it's not just, hey, I want to use two clouds. I actually want people to mix and match tools, because what we really want to do is drive a community. It's very disruptive to operational improvement if I get mad because you're using Ubuntu and I'm using CentOS and now we can't actually compare operational things. So we have to find ways to abstract those differences. And that makes us use infrastructure in a change-tolerant way, because you might decide to switch from Ubuntu to CentOS and you don't want that to break everything you've done. Or you might want to move from Amazon to Google, and that shouldn't break everything you've done, or you might use them both. And the reality is (this is an IDC number, which I think is wildly low) that two-thirds of enterprises are hybrid. And per a Gartner number, most enterprises are using two dozen cloud services from nine different providers. So this is what people's IT environments look like. And I don't think they even factored in their own internal infrastructure as hybrid. And this is the use case that we're hearing from people. They want to be able to move back and forth.
They don't want to be locked into VMware. They don't want to be locked into OpenStack. They don't want to be locked into anything. And the extent to which OpenStack will succeed is the extent to which we create portability between cloud providers: between OpenStack cloud providers, and between competitive cloud providers. Portability is the most important thing from an OpenStack adoption perspective, to me. That's why I was chair of the DefCore committee for the whole time; I just gave up that long-suffering task to two very qualified individuals. So obviously, the answer is one ring. We need one API to rule them all. No. Yes? No. I'm not making a Randy Bias suggestion here about EC2 APIs. Actually, my experience tells me that the API is really not that much of a factor. We use different APIs. I don't expect Google and Amazon to converge their APIs; not going to happen. And so no single API wins. And I use this slide a lot when I talk about what we do. I think it's very helpful. We have orchestration at the top. But the thing that I want to call your attention to here is that it's a decomposed architecture. Every time we hit a new infrastructure, there are different operational needs at each one. There's a lot of overlap. I like to describe it as an 80/20 rule: 80% of the things are the same, but those 20% of differences don't all happen in one nice little layer at the end. They're spread throughout. You do a couple of steps, and then you stop, and you have to do something that's unique to that infrastructure. On physical, that happens all the time; that's why we started on physical. But it still happens on Amazon, and Google, and OpenStack. You have to be able to have your deployment, which is 80% the same as one on another platform, account for the 20% of differences that are interspersed, like raisins in your pudding. So this is what the world looks like. Contrary to Boris's perception, when I'm talking to people, when people are using cloud, it's Amazon.
In this room, everybody is multi-cloud. It's Amazon and something else. And so when you look at the market, if you are not working with Amazon, you are not working in cloud. Now, that might sound harsh, but if you're a product company (and it says vendors at the top of this slide), if you're a vendor and you're not selling into Amazon, you have limited your market to effectively zero. I know the OpenStack vendors on the floor here might disagree with that, because they're happily selling into this market. But if they were OpenStack and Amazon, their market would be 99 times larger. That's the way cloud is right now. That's the way we approach things. So you can't overlook the statistics. You can't ignore that Amazon is the gorilla. And if you look at this from the IT user's perspective, they have a very different view. They don't see Amazon as huge, but they do see it as the main thing that's eating their lunch. When they're dealing with shadow IT and IT transformation, most of it goes to Amazon. Most of the vendors they're talking to are Amazon vendors, and that's where those vendors are going to direct them. But they also look at it and say, well, what are my alternatives? Enterprises want multiple vendors in their mix. They want to have alternatives. They want to be able to have business continuity plans. And so they're looking for alternate public vendors. They're looking at the vendor relationships that they have. I'll spotlight IBM here. IBM has deep relationships with a lot of people in the market, and they have a lot of different cloud offerings. So they can help people get to Amazon, but then they can offer them off-ramps from Amazon based on their relationship. So relationships are important. And of course, with OpenStack, private cloud is a very serious thing. So Amazon's already there. People are looking for ways to have internal control, or cost management, or security. And so they want to come back into private cloud and make that an option.
And the highlight here is that all of these things are really about portability, choice, moving things back and forth. If we don't do this, it just becomes an Amazon-eats-the-world market. And you wouldn't be sitting in this room if you thought that was the right exit for data center IT. What that means is we need beneficial diversity. Hybrid is hard because everything is a little bit different. And the thing that I want to convey here is that that's not bad; it's actually very good. You're an open source community. Diversity is key to strengthening that community. It's a strength in operations too, it just makes operations harder. We have to deal with beneficial diversity. And the number one thing to remember when you consider that is that these choices aren't wrong. One of the things that we do as a company when we talk to people is that we don't start with: we have the right way to do this, and your way is wrong. Most people I know who are running data centers, those data centers run. They deploy servers. They get things done. They don't do it as efficiently as they want. They might not love all the choices they've made, or what they're doing, but it works, and they do it that way for a reason. So walking in and telling them that they're deploying OpenStack wrong, or that they're idiots because they're not using cloud-native applications, those are the wrong messages. What we really want to be able to do, especially in hybrid, is work with an acknowledgement that the different choices people have made are that way for a reason, and work with them on that. And then the beauty of hybrid is that now you're giving people an exit path. That same portability that enables us to talk about hybrid gives us an exit path, so that people can give up their bad habits. If you want to stop smoking, you want to start chewing the gum a little bit and get off of the habit before you give up nicotine altogether.
Cloud-native applications are like going cold turkey, and we already know it's very hard for our companies to do that. So I'm going to pause again for a second and jump back to the demo, because I'm going to talk about what happened when I started doing some of this hybrid work, and it'll be helpful to show you what that looks like. All right. So this is the Kubernetes multi-deployment that's been going on. And what you'll see here is that I have a whole bunch of nodes from a whole bunch of different providers. You can see the providers down over here. As for the ones that came in from other infrastructures, I might have gone over my quota on this second OpenStack deployment. So my OpenStack ones are here. My Google ones are here. When each came into the system, it got an IP address, and then what we've done is build all of these different deployments. And so if I want to pick one, I'll pick demo one. This one's already completed. All right, hold on, let me step back. This screen shows you basically how we've brought systems up into a common workable state before we deploy Kubernetes. In this case, I have all the nodes I've brought into the system. What's gone on here is that on all of these infrastructures, there are certain steps that I take to make the node what we call ready state, or available for the next level of deployment. If I'd done this on metal, I would have added about 15 extra steps. So it takes each node through this process of inventorying it, getting it on the system, and things like that. And then it puts them into scope boundaries called deployments. For those deployments (actually, I'm going to switch and show you the Amazon one), what it does is put them into a scope boundary and say, well, each one of these systems has different roles. In this case, we're just using the Ansible playbook. So we fed a whole bunch of data into the Ansible playbook, and then we're running this playbook.
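That "whole bunch of data" fed into the playbook is essentially a dynamically assembled inventory. Here is a minimal sketch of what such an assembly might look like; the node records and group names are illustrative, not Kubespray's actual group layout or Digital Rebar's real data model.

```python
import json

def build_inventory(nodes):
    """Assemble an Ansible-style dynamic inventory from node records,
    instead of hand-writing an inventory file per deployment."""
    inventory = {
        "masters": {"hosts": []},
        "workers": {"hosts": []},
        "_meta": {"hostvars": {}},
    }
    for node in nodes:
        group = "masters" if node["role"] == "master" else "workers"
        inventory[group]["hosts"].append(node["name"])
        inventory["_meta"]["hostvars"][node["name"]] = {
            "ansible_host": node["address"],
            "ansible_user": node["ssh_user"],  # varies per cloud and image
        }
    return inventory

# Node records as an orchestrator might discover them from the providers.
nodes = [
    {"name": "k8s-0", "role": "master", "address": "10.0.0.4", "ssh_user": "centos"},
    {"name": "k8s-1", "role": "worker", "address": "10.0.0.5", "ssh_user": "centos"},
]
print(json.dumps(build_inventory(nodes), indent=2))
```

The value of building this at run time is that the same playbook run works no matter which provider handed you the machines.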
If you're used to something like Ansible Tower, we're just running a multi-node playbook to accomplish this. The thing that's really fun to me is that I didn't have to build an inventory file. It dynamically assembled all the information that I needed to make things go. I'm happy to talk, especially in Q&A (I'm powering through this, so we'll have plenty of time for Q&A), about how we would actually take this community playbook and break it into pieces to make it more sustainable and upgradeable. Those are really fun Q&A conversations. But this is the key idea. We're taking this infrastructure and actually running it through a common set of steps, even though it's come from different places. And so when you think about how to make a hybrid operational stack work, what you want to do is create isolation between the layers. I'm going to talk about that a little bit more and why we do it like that. But this is that demo proceeding. If I jump over to my OpenStack demo, what you'll see in this case is that the clouds are in different states. It looks like they definitely exhausted my quota at DreamHost. But what's happened is that some of my cloud providers got to different levels through the implementation. In this case, I actually got an external address, but I wasn't able to complete my automation for it. And I can look at those things, see what the deployment issues were, and come through and troubleshoot what's going on. In this case, there's a permission-denied issue; I couldn't actually log into the system. I'm actually going to go through some detail on this. The reason I want to show you what's going on is that, for me, especially with this type of talk, it's important for you to see that this is the type of work you can repeat yourself. You can go through this process. You can say, look, I actually want to be able to try and replicate hybrid operations.
I want to use Amazon and Google together. I want to try different OpenStack environments. And then ultimately, I want to find ways that I can help my OpenStack environments create consistency, which brings back this question. In each of the five OpenStack deployments, the APIs worked and I created VMs; otherwise they wouldn't have even shown up on that list. The VMs were created. The networks were attached. My keys were uploaded. So what happened? Why did I not succeed in finishing those deployments? It looks like this. And it's worth noting: no OpenStack cloud that I tested was successful in deployment. Even with my favorite here, DreamHost, I needed information that was not included in the OpenStack RC file, that file you download that has your configuration settings in it. And none of them was perfectly consistent with the others. Some of the differences I could cope with really easily; we'll walk through that. Here's the number one challenge that OpenStack clouds have from a cloud consumption perspective. Remember, my benchmark is Amazon: if it works like that in Amazon (and remember the big graph, everybody's going Amazon first and then coming back), we need to be very Amazon compatible. Amazon gives me an external IP address. Only one of the clouds I tested gave me an external IP address. And I'm going to come back and stress: it's not wrong. It's different from Amazon. And that difference from Amazon is frustrating from a user experience perspective. Complete configurations: I had systems delivered to me in a state where I couldn't actually sudo into the system. I have a blog post I published today about exactly what our steps are for this automation and what we expect to be able to do, but sudo is definitely one of the things I expect to be able to do. We then create a root account and disable password logins. And I want to be able to get a standard operating system.
I expect CentOS to be on a cloud that I consume. We need to have certain safe defaults. I should always be able to get Ubuntu and CentOS as consensus OSes. And you could make the argument, hey, I can upload anything I want. But I'll tell you, all the public clouds offer a significant library of default operating systems. Very easy. I don't have to think about it in order to start using their cloud. Network names: this was significantly problematic. I have examples where the external network is called private. A network named external is actually not allowed for me to use; it's a protected network, but I can see it. And so we end up with these very frustrating experiences where, since I don't know what networks to use by default, there's no easy way for me to figure that out short of experimentation. It's a very frustrating experience. And then logins are very different. This is actually true of Amazon too: Amazon creates its own user account, and everybody knows that. But it's not root, it's not centos or ubuntu, it's ec2-user. Needing to understand what account I use to log in, and having to log into a machine before I find that out, is frustrating. Those details really break hybridization. That means every time you step up to a new OpenStack cloud and try to run the automation (say, if you're going to use Digital Rebar to automate against OpenStack), you're going to have to come in and figure out the answers to these questions on the cloud that you're going to use. So why is this a problem? This goes back to a lesson that my team learned at Dell when we were first building Crowbar, and even before that. When you're going to set up a cloud (this is really a cloud user audience, but bear with me), you have OpenStack software. It works. There should be no doubt in your mind that the OpenStack software works.
Hardware is pretty easy to acquire, and it's pretty standard. OpenStack reference architectures are well enough understood; you can go buy reasonable gear for OpenStack and be 100% confident it's going to work. The reason we have these challenges is that a significant component of the work is ops. It's not the OpenStack software. It's not the hardware. It's the ops. And the choices that people make in ops end up having dramatic impacts on their ability to be successful. And the thing to realize, going back to the earlier point that your ops is not wrong, is that there is no one ops. Oh boy, that was bad. There is no one way to do ops and be successful. There's no cookie-cutter recipe, because everybody's operational environment already exists. It's already working. And somebody's correct monitoring solution, or IP ranges, or network topology is not going to match somebody else's. Now we're back to our 20% differences. The snowflakes really cost a lot of time and create challenges. This is where OpenStack doesn't address this problem; it walks into this problem and then suffers as a consequence. And so when we look at this, we're starting to talk about something called hybrid DevOps. Our focus, what we've been thinking a lot about, is how you create operational constructs that span infrastructures. For us, originally, that was: how do we span data centers? How do we help 10 customers with 10 totally different data centers, with different vendors, with different networks? But take that a level up: if you're going to use Amazon and Google and OpenStack and OpenStack and OpenStack, those are all different environments, too. How do we create a hybrid experience against that? And I add the word DevOps here because it's not just operating it, it's automating it. To me, DevOps is a lot of things, but it's taking us from an ops perspective to an automated-ops perspective.
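Those snowflake differences show up in very small, concrete ways, like the login-account problem mentioned earlier. One defensive pattern is to derive a list of candidate login accounts and try them in order. A hedged sketch follows; the user mappings are common image conventions (for example, ec2-user on Amazon Linux), not guarantees, and the function name is my own.

```python
# Default SSH login accounts differ per image and per cloud; these mappings
# are common conventions, not guarantees. Always check your provider's docs.
DEFAULT_USERS = {
    "amazon": "ec2-user",
    "ubuntu": "ubuntu",
    "centos": "centos",
    "debian": "admin",
}

def candidate_logins(image_name):
    """Return login accounts to try, most likely first, with fallbacks."""
    candidates = []
    for key, user in DEFAULT_USERS.items():
        if key in image_name.lower():
            candidates.append(user)
    # Generic fallbacks when the image name tells us nothing.
    for fallback in ("root", "cloud-user"):
        if fallback not in candidates:
            candidates.append(fallback)
    return candidates

print(candidate_logins("CentOS-7-x86_64-GenericCloud")[0])  # centos
```

Real automation would attempt an SSH handshake with each candidate and cache the first success; the point is that this function only has to exist because the clouds never agreed on a default.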
And so when you think about what it takes to create a hybrid DevOps environment, I've come to this sort of simple Venn diagram. We have configuration management: Puppet, Chef, Ansible, Salt. They do a great job configuring systems. They don't really do a good job configuring services. And I would encourage you to think about this, especially with hybrid APIs and cloud services: you actually have to configure systems and services, right? DNS and networking; we're creating services every day. So configuration is configuration plus service management. To make that work in a portable way, you have to have composability. I've been talking about composability; some of what I showed you in the demo is composable. You have to be able to say: I don't have one long stretch of automation, I actually have a lot of little pieces of automation. If you think back to the UIs I was showing you, those are little pieces of automation. And the reason you do that is because if you want to make a choice, say, I want to use Calico instead of OpenContrail, then those changes are going to have ripple effects, but they can be isolated into pretty narrow pieces. What you don't want is automation that does both, because changes to one are going to break the other and you're going to run into trouble. The same thing is true with operating system installs and components like that. The more composable we can get, the better. The reason people aren't that composable and haven't really been working on composability is that it brings in a requirement to chain the components together. You can't decompose these configuration steps without some type of orchestration, so they really have to be done together. You can't break things into a lot of pieces without having a way to execute them as a chain of logic.
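That chain-of-logic idea can be sketched in a few lines. The step functions below are toys standing in for real automation units, not Digital Rebar's actual model; the point is that swapping one SDN choice for another replaces one step instead of rewriting the whole pipeline.

```python
# Each step is a small, single-purpose unit that reads and extends shared state.
def install_os(state):
    state["os"] = "installed"
    return state

def ready_state(state):
    state["ready"] = True
    return state

def flannel(state):
    state["sdn"] = "flannel"
    return state

def calico(state):
    state["sdn"] = "calico"
    return state

def opencontrail(state):
    state["sdn"] = "opencontrail"
    return state

def kubernetes(state):
    state["k8s"] = "deployed"
    return state

def run_chain(steps):
    """Orchestrate: execute steps in order, threading state through the chain."""
    state = {}
    for step in steps:
        state = step(state)
    return state

base = [install_os, ready_state]
# Same pipeline shape, one interchangeable step per deployment:
aws_deploy = base + [flannel, kubernetes]
google_deploy = base + [calico, kubernetes]
openstack_deploy = base + [opencontrail, kubernetes]
print(run_chain(openstack_deploy)["sdn"])  # opencontrail
```

A change to the Calico step can't break the Flannel pipeline, and a fix to the shared ready-state step benefits all three; that isolation is what the monolithic big-red-button approach gives up.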
So next time you're looking at a really deep Puppet manifest or a long Chef run list or something like that, and you realize that all these things are chained together and very brittle because of it, that's where I'm talking about looking at things in a much more composable way. And I will tell you, because I feel like we're doing very hybrid-type work and we're very flexible from that perspective: without composability, we couldn't have that discussion at all. The demo I'm doing right now is based strictly on these three principles intersecting. This is what it looks like, sort of the A/B pattern. The fragile monolithic integration says: I'm gonna take all these pieces and ram them together. Usually that looks like I have one big file that describes my whole infrastructure, and then I have a big red button and I click it. And if it's a good day, with a full moon and high tide and sun, then everything's gonna run through and it'll be great. Those stars don't line up that often, and so a lot of times we'll find that those changes are very hard to put together and pull off, for exactly these reasons. What we really find very effective is looking at this as an interchangeable composition, where I can take different pieces of this puzzle and stitch them together, and then start changing things out. It's still gonna break, but it's gonna break in smaller steps, and it's going to allow you to start layering things on top and not have to reinvent the world. So if I'm doing an OpenContrail network, I can reuse the logic I wrote for OpenContrail in OpenStack and Mesos and Kubernetes and Docker Swarm. I can actually reuse that component in multiple ways. It could still break when it breaks, but I'm gonna be much better able to cope with that change and continue to reuse it. So I'm gonna jump back to the demo in a second, but first I'm gonna go ahead and summarize where we are. There are five real takeaways from this.
One is that hybrid infrastructure is the new normal. If you are not building your infrastructure with the assumption that it's hybrid, boy, good luck, because you're gonna really be stuck in that decision. You're gonna be taking away freedom from your business to make operational decisions, right? You're driving a technical decision for your business; you're making infrastructure the lead instead of the caboose. And you really don't want that to happen. It translates into poor business practice. And it's actually the real reason why people are abandoning private infrastructures: they got locked into private infrastructure, and the only way they feel like they can improve their operational capabilities is by burning down their own infrastructure and moving it all to the cloud, the lift-and-shift strategy. Two: operations can work hybrid. It is possible to write operational automation with hybrid DevOps capabilities, but you have to be able to isolate those changes. You have to be able to work that in. And I really suggest you think about how this translates, because it's gonna make all of your infrastructure and your automation more resilient. I have a long blog post about the next point: Amazon is the operational benchmark. Even if you are OpenStack only, the closer you are to Amazon's practices and patterns, the faster your team is gonna go, the more resilient you're gonna be, the better options you'll have, and the more easily you're gonna take in technologies from the community, because the community activity is centered around Amazon. There's a funny anecdote. Somebody was arguing about Cloud Foundry and PaaS; people love to argue about who's gonna win the container orchestration game. And I saw a tweet where somebody said: Amazon is 20 times, 100 times all of you put together in their ability to do container orchestration. It's just the rules of gravity right now.
So we end up, in our own communities, very siloed. We have blinders on about what's going on. If you start with an Amazon assumption, then you won't have those blinders on; you're gonna be able to pick up technologies a lot more effectively. Next: implementation choices do matter. As we go through these pieces, they do have an impact on how portable you are. The OpenStack choices that we've made, like whether we have floating IPs, or name your networks whatever you want, it doesn't matter what the login account is, those choices have big impacts on usability for people coming into your infrastructures. And finally, on top of all that, let's make things more composable. The reason why I suggest you do that is because it's going to improve your ability to reuse other people's work. When I started all this stuff, Chef and Puppet were really hot, and there were big libraries of Chef and Puppet routines, big libraries of Ansible routines. And what people would do is take a big playbook or a big set of roles and fork them, put them together, and use them for their piece. So basically, from day one, they immediately drifted off the community. If you look at what's happened in the OpenStack community (you don't have to go very far to see this), we have a whole bunch of vendors with forked operational techniques that are 80% the same and 20% different. And so they can't come back and collaborate the way we'd like. And it's a real frustration, because it means the users, the operators, the customers here are really stuck with individual vendor stovepipes rather than being able to share operational best practices. And it has slowed us down as a community. All right, pause for a second. I'm gonna jump back to the demo, but I'm doing really well on time. Are there questions before I start showing more in the demo? Come to the microphone.
Excellent, thank you. I'm not sure this is so much a question as a comment, but first I wanna thank you for your very thoughtful presentation. I think we don't have to go too far afield to see some of the challenges we're gonna be facing in the future. Look at things like what Google did with Angular going from one to two, or what just happened in the Node community, where you remove one small bit of software from the repos and everything breaks. So I think we're going to face challenges in this environment as Amazon grows up, as Google grows up, as Microsoft grows up, because they can pull the rug out at any point in time, right? And it will affect everybody who's doing automation in this space. So I'm obviously concerned about that happening. You should be, that's a very reasonable concern. So I invite your thoughts on that. So, this is really interesting. This year, 2016, the level of interest in hybrid has skyrocketed. In 2015 it was sort of considered a unicorn; people were like, ah, hybrid cloud, blah. And then this year people woke up and were like, oh, we're hybrid, we gotta be hybrid. So there's a business continuity issue. The thing I add into that, and I'm starting to see a trend line on this word too, is composability: if you want hybrid, you have to be composable. And that composability story ends up being a really important one. The challenge has been, for something like what we're doing with Digital Rebar, I love being able to show these one-click installs because it's like, ah, okay, I can do it fast. But to build that, you have to be willing to say, all right, how do I decompose my work into smaller units? So, for people who don't know the NPM story, they're going through some really good community process where they're rationalizing and making sure licenses are correct for the shared libraries for JavaScript.
NPM is the package manager for Node.js. Somebody didn't like the process and yanked all the permissions off of certain libraries. And it turns out that the left-pad library, which does nothing but left-pad strings, was used in, I think, 80% of other people's libraries. So that one library went offline and everybody's Node applications broke. It's that simple. This is where composability becomes really important. And granted, in some ways I'm talking out of both sides of my mouth, because that is actually a composability story: you're using other people's things. What happens in ops is that we don't isolate those changes. In this case, somebody could write a new left-pad, you could swap it in, and now you're back in business. In ops, finding that error and fixing it is much, much harder because we've built these vertical silos. I can't go out and get somebody else's left-pad equivalent of a Docker install. And if y'all haven't been trying to install Docker in production, it breaks all the time. They're changing the APIs, they're changing the repos, they're changing the way you get it. It's not just Docker. Things are constantly in flux in the data center. So you build this big stack of logic. And what's really sad to me is, you know when somebody changes an RPM or an apt-get install instruction and it breaks in a certain way? There's no one place that people go to fix their operational logic. Everybody's discovering it independently, everybody's digging into their fork of that script, and they're all trying to fix it. That's what makes it really sad to me. We don't have as good constructs for ops as we have for programming. But the charge here is that if you're doing ops, and pretty much everybody has to do ops at some point, you have to take time to invest in composability.
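For scale, the entire left-pad unit that broke the Node ecosystem was roughly this much logic. A Python rendering of the idea (Python's built-in `str.rjust` already does this; the sketch is only to underline how tiny the removed unit was relative to the breakage it caused):

```python
# Roughly the whole "left-pad" dependency: pad a string on the left
# until it reaches the requested width.
def left_pad(s, width, ch=" "):
    s = str(s)
    while len(s) < width:
        s = ch + s  # prepend the fill character one at a time
    return s

left_pad("42", 5, "0")   # "00042"
left_pad("abc", 2)       # "abc" (already wide enough, unchanged)
```

The point isn't the function; it's that when it disappeared, thousands of builds had no isolated place to swap in a replacement.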
What I see people do is jump in, grab Ansible or Terraform or something like that, build this really big stack of vertical stuff, and then it starts growing horizontally, and it's very hard to go back after the fact and decompose it into better logic. All right, I had another question. Yeah, I think this is in line with the hybrid DevOps part of your slide, the composability. The automation, we know that's mostly figured out, and as you've shown, Kubernetes is a way to really normalize across clouds. But isn't the real challenge a lot of the tooling just to get to that point? That's what we've hit in our journey: how do we consume it? And the other piece within AWS is all the big data, so that sometimes drives the decision-making. So what's your perspective on that, on how you bring in the different bits and pieces? What are you seeing? What are you recommending to address that gap? So I want to make sure I understand, because there are definitely different services in AWS. AWS in my mind is two very different things. One, it's an infrastructure capability, and that's relatively straightforward to make hybrid because it's relatively portable. But if you turn around and start consuming services in AWS, like a big data service or an analytics service, something like that, it's a classic use case for us, then you end up much more tied into what their service is capable of doing. And so for the composability piece, you'd end up wrapping that service into a composability layer.
And to the extent that you have alternatives, you could actually do a composable infrastructure for that and say, all right, big data is a composable unit, and I'll just tell you how to consume it. So it's entirely reasonable to do that. You're about three steps beyond where we usually get from a composability perspective, but that's where it would go logically. And that's what drives a lot of our challenges. That's why we would choose CloudFormation, because we're trying to access all these resources. We don't live in just one place; there's all this other legacy, all this stuff we've built up over time, so it's hard for us to neglect that piece. I feel like, if anything, there should be a focus on a lot of that infrastructure to give you some of that extensibility to get to those resources, and then you start getting into true hybrid DevOps. And you're entirely right. So the balance is, if you do CloudFormation, which is Amazon's tool for doing this orchestration, you're gonna be successful really fast in Amazon as they make their services available. But the challenge is that takes you down a mono-cloud path. What we really need to do is start building the tooling that lets us be more composable at every level as we go. And we're just starting to get there. Part of the challenge is that the nice thing about Terraform or CloudFormation or a Heat template is that everything looks like it's in one nice little place and I can check it into Git. And you're like, hey, look, now I can repeat my infrastructure. What you haven't done is protect yourself from all of the variations and orchestration needs and all the pieces. So what I'm seeing is that those capabilities, while very fast, become brittle over time, just like big Chef and Puppet cookbooks and playbooks became over time.
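The composability layer I keep describing can be sketched very simply. This is a toy illustration of the pattern, not Digital Rebar's implementation, and the provider classes and calls here are made-up stand-ins, not real cloud SDK APIs: the workload asks one interface for machines, and everything provider-specific stays isolated behind it.

```python
# Hypothetical sketch: one operational flow, provider details isolated
# behind a single interface so swapping clouds doesn't touch the workflow.
from abc import ABC, abstractmethod

class Provider(ABC):
    @abstractmethod
    def create_machine(self, name: str) -> dict:
        """Provision one machine and return its description."""

class AwsProvider(Provider):
    def create_machine(self, name):
        # a real version would call the AWS API here; stubbed for illustration
        return {"name": name, "provider": "aws"}

class OpenStackProvider(Provider):
    def create_machine(self, name):
        # a real version would call the OpenStack API here; stubbed too
        return {"name": name, "provider": "openstack"}

def deploy_kubernetes(provider: Provider, count: int):
    """Same operational process regardless of which cloud is underneath."""
    return [provider.create_machine(f"node-{i}") for i in range(count)]

nodes = deploy_kubernetes(OpenStackProvider(), 3)
```

Contrast this with a CloudFormation template: fast inside Amazon, but the isolation boundary never gets drawn, so moving the workflow means rewriting it.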
And so this is my concern. We're getting faster, but those vertically integrated stacks become very fragile. And from a business perspective, they're going to limit you in a lot of choices, right? You don't want to be told, oh, we can't move out of Amazon even though it's costing us $100,000 a month for this application, because all of our automation scripts and our CI/CD pipeline target Amazon. It's very easy to build those deep hooks into the APIs and pipelines and things like that and not think about how you isolate them, right? Cool. Good questions. Thank you very much. If you have more, I'd be happy to keep talking, but I do need to exit the stage. So thank you all for coming. I hope this was helpful. Thank you.