You're going to hear a little bit about resource management in the enterprise data center, and we'll find out exactly what I mean by that title. My name is Ragwinder Arney. Most people call me Arney because they can't figure out what to do with the first name. So, a quick survey to find out who you people are. I won't judge; it's just to figure out how the talk should go. How many of you are part of enterprise IT, doing something with enterprise IT? A good chunk, I would imagine. How many of you are on the infrastructure ops side, responsible for standing up infrastructure? How many of you are dev people, writing code? And how many of you think you could do all of the above?

Resource management has been part of computing for a long, long time, and it's a really simple problem. On one side you have resources, expressed as how much CPU, how much memory, how much disk, how much network, et cetera. On the other side you have people asking to get work done: it could be a live, interactive user session, or it could be a batch job. The tussle is always the same. Who gets what resources? How do you balance it so you're maximizing your resources while keeping your clients happy? And the clients could be internal or external. Over the decades, if you're in operations research or production engineering or a related field, this has usually been expressed as a mathematical optimization problem. If you've done anything in this space, say at a logistics management firm, you've seen it expressed as some form of optimization function. That picture there is a paraboloid, if you've not seen one before, and the peak is where you're using your resources to the fullest. The good resource schedulers out in the industry try to sit at that peak. We'll talk about the state of the art, what has been done before, what's happening right now, and what the Cloud Foundry team has done to get to the top of that curve.
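To make that concrete, the classic formulation is an assignment problem. The notation here is my own toy version, not any particular scheduler's model:

\[
\max_{x} \sum_{i,j} u_i \, x_{ij}
\quad \text{s.t.} \quad
\sum_{i} r_i^{(d)} \, x_{ij} \le C_j^{(d)} \;\; \forall j,d,
\qquad
\sum_{j} x_{ij} \le 1 \;\; \forall i,
\qquad
x_{ij} \in \{0,1\}
\]

where \(x_{ij}\) says whether workload \(i\) lands on machine \(j\), \(r_i^{(d)}\) is its demand for resource \(d\) (CPU, memory, disk, network), \(C_j^{(d)}\) is machine \(j\)'s capacity in that dimension, and \(u_i\) is the value of getting workload \(i\) placed. Everything we'll look at today is some practical approximation of a program like this.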
But things have been changing, on both sides. We'll look at the evolution of resources, the evolution of demands, and how the optimization models have changed over the years. First, resources: CPU, network, memory, et cetera. Initially, with the mainframes, it was big, monolithic, expensive boxes. Client-server brought slightly cheaper, somewhat distributed boxes. Then VMware and firms like it came along and heavily virtualized everything. But the last big change, over just the last few years, is that everything is distributed. No more big, expensive, centralized boxes; lots of cheap, distributed boxes instead. That has caused a huge change in the way you design your optimization functions, and we'll get to it in a little bit. That's number one: the resources are changing. On the other side, the demands are changing. If anyone's seen Office Space, that's Bill Lumbergh: "I'm going to need that TPS report," a testing procedure specification or some such. IT, which used to be a backwater that just ran the accounting systems, has become front and center. You've heard the phrase quite a few times: software is eating the world. So you're going from IT as a nice-to-have, whatever it is they do, to IT that is becoming really important.

So as your resources change, as your demands change, as your clients become more demanding and create new workloads, you've got to rethink the way resource management is done in the enterprise. Let's take a quick look at history; I'm a huge history buff, so for me it's always good to understand how we got here. The first official resource manager, more or less, was created by IBM: CP/CMS, the Control Program and Cambridge Monitor System. It really came out of a challenge posed by MIT, who said they needed a time-sharing system. Before each person could get his or her own computer, they wanted to time-share, which again is the classic problem: a big pool of resources and a finite set of people to manage across it. And the folks who built CP/CMS built the first virtualization platform. If you thought VMware pioneered virtualization, no: IBM built the first one. Second came UNIX. I'm an old Bell Labs head, so it's great to see what UNIX has done in the enterprise. UNIX brought user-level sharing. It's different from virtualization, which happens down at the kernel or hypervisor level, but you don't always realize that when you log into a UNIX shell, you're getting your own view into one big shared system. That's another way of sharing resources. The third wave, and this is when I did grad school, much later (I didn't do grad school in 1991, I'm not that old), was grid computing and HPC, where you submit jobs. It was heavily used in the computational space; we used to do a lot of grid computing for graphics rendering, and Pixar does a ton of that. Cluster and grid computing were really big. Remember SETI@home, decades back, and how common it was? And Maui was one of the first and most popular cluster schedulers to come out of that world. A very powerful platform, but it never saw mainstream adoption; it stayed with a small set of scientific workloads.

The big shift happened in 2001, when VMware came out with ESX and the ESX family of platforms. It was initially a desktop virtualization product, but they very quickly figured out the money is in server virtualization, where you can look at the data center as one single big resource, and they went after that really hard. But for the most part at VMware (and I worked at VMware before I joined Pivotal) the real abstraction is the virtual machine. No matter what you want, even if you want coffee, we'll give you a VM. It's always a VM we operate on. Which is fine, and was actually really powerful back in 2001, but you want to find the right abstraction: if I want to stand up an app, do I want a VM, or do I want an app? The abstractions have evolved over the years, and even VMware has started to revisit theirs; they do more than VM abstractions now, but the VM is where they started.

So how is resource management done with VMs? It's really simple. You define your org, you define your departments, and you define resource pools with VMs beneath them. Anyone who has used resource pools and DRS within VMware will be very familiar with this, and you'll see how it blends into Cloud Foundry very quickly. First, you define reservations: no matter what, I'm important, give me at least 1,000 MHz, and you can define that across the entire cluster for your organization, no matter where the VMs sit. Second, you set limits: this department is important, but I don't want it to go beyond a specific point. So you define the upper and the lower bounds. And finally, you set relative weights: prod is at least four times more important than dev, so when there's contention, ESX automatically balances resources accordingly.
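As a toy illustration of those three knobs, here's a sketch in Go of how contention resolution behaves. To be clear, this is my own rendering of the semantics, not VMware's DRS algorithm, which is far more sophisticated:

```go
package main

import "fmt"

// Pool mirrors the three knobs just described: a guaranteed floor
// (reservation), a hard ceiling (limit), and a relative weight (shares).
// The field names are mine, for illustration; this is not VMware's API.
type Pool struct {
	Name        string
	Reservation float64 // MHz guaranteed no matter what
	Limit       float64 // MHz never exceeded
	Shares      float64 // relative weight under contention
}

// allocate hands out capacity: reservations first, then the remaining
// slack proportional to shares, clamped at each pool's limit. It loops
// because a pool hitting its limit frees slack for the others.
func allocate(capacity float64, pools []Pool) map[string]float64 {
	out := map[string]float64{}
	remaining := capacity
	for _, p := range pools {
		out[p.Name] = p.Reservation
		remaining -= p.Reservation
	}
	active := append([]Pool(nil), pools...)
	for remaining > 1e-9 && len(active) > 0 {
		total := 0.0
		for _, p := range active {
			total += p.Shares
		}
		if total == 0 {
			break
		}
		slack := remaining
		var next []Pool
		for _, p := range active {
			grant := slack * p.Shares / total
			if room := p.Limit - out[p.Name]; grant >= room {
				grant = room // clamped: this pool is done
			} else {
				next = append(next, p)
			}
			out[p.Name] += grant
			remaining -= grant
		}
		active = next
	}
	return out
}

func main() {
	pools := []Pool{
		{"prod", 1000, 6000, 4000}, // prod weighted 4x dev
		{"dev", 500, 2000, 1000},
	}
	for name, mhz := range allocate(6000, pools) {
		fmt.Printf("%s gets %.0f MHz\n", name, mhz)
	}
}
```

Run it and the 4,500 MHz of slack left after reservations splits 4:1 between prod and dev, which is exactly the "prod is four times more important than dev" policy.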
This is extremely important if you're trying to manage resources within the data center. If you don't understand resource pools and DRS, that's a big problem, and as you start to look at running a PaaS on top of this, you need to know these concepts. We'll come back to them when we talk about Cloud Foundry. The way you define resources has almost always been CPU, memory, and disk; that's the most common set. But the workloads you run can be very different, and we'll get to the varied workloads as we get more advanced. VMware's model itself is fairly simple.

Now, the IT landscape that has existed from the mid-90s through today: most IT firms love to buy, not build. Talk to the average CIO and say, hey, let's build an application to solve this problem, and you'll see their face turn white. They're afraid to build software, because software for them is hard. So what do they do? They buy it. It could be SAP, it could be whatever: BPM, ERP, CRM, and eighty more acronyms like that. But what that really does is push you into environments where your resources are at the mercy of the vendor, and it's very hard to break those big blocks down and manage resources easily. And for the few who have been bold and brave enough to build applications quickly (that's probably a real quote from an executive), it becomes a monolith, a big, massive monolith that can't be broken down. Anyone with kids who has played with Legos knows there's a reason Lego bricks start out small: you can build more flexible models out of smaller pieces. The whole rise of microservices is primarily about that. You don't want to start with something really big, because it's hard to build on top of and hard to move; you start to feel like that poor overloaded truck. When your VMs are big, it's hard to move stuff around.

So the mindset at VMware and firms like it (and again, this isn't VMware bashing; I'm a VMware guy) was the classic pets-versus-cattle notion, where we always treated a VM as holy. Whenever work had to be done on a host, we would very carefully move VMs from one host to another using technologies like vMotion. And for VMware it worked out great: collectively they made $22 billion over the last seven years going after this problem. The other thing about IT is that it loves organizational silos. You have an apps team responsible for building the apps, with no control over the database team, which is managed by someone else. You have hundreds of packaged apps managed by completely separate teams.
You have an ETL team, which hopefully is looking at Hadoop or something like it right now, but today all they do is Informatica. You have an EDW team, and so on and so forth. This picture captures probably 95% of what goes on inside the enterprise. The problem is that you cannot abstract resources across all these tiers. Each of them is a silo that its owners like to hoard and keep. So how do you maximize resources across them? You have 20% CPU utilization here, 20 there, 20 there, 20 there. The CIO comes back and says, I want to maximize utilization, let's bring them all together. It's simply not possible in the existing enterprise because of these silos.

While all this was going on, something interesting happened. Going back to history: around 1999, Google started. Google had a very different problem to begin with. They didn't have the luxuries of the average enterprise. I remember them as a scrappy startup out of Stanford with big visions; you can see the founders' ambition in the name they chose. They knew what they wanted to be. The sheer scale of the problem they were solving meant they simply could not go after monoliths; they couldn't afford to build them. I've very rarely seen an enterprise software vendor walking around the Google campus. Google wouldn't care for them at all, despite the steaks and the scotches. And Google never really cared about VMs either. They knew the added abstraction of a VM would slow them down significantly. So what did they do? They built cgroups: containers on top of Linux. Rohit Seth and those folks built it around 2006, and they did something cool: they built it, they used it, and they pushed it back into the kernel community. I forget which kernel version it landed in, but cgroups has been around since Rohit and team built it. That gave them much, much smaller Lego blocks than VMs. Going much smaller meant the code being written at Google, which started with Python and expanded to many other languages, was much easier to construct, move, and resource-manage. And that paved the way for a lot of what Google did next; there was an explosion of innovation after that. I don't know the exact years, because Google doesn't announce internal products, but Google came out with the Borg cluster scheduler roughly in 2004, and with Omega around 2010. We'll talk a little about what Borg and Omega are. In 2011, an open source project called Mesos came out of Berkeley; it eventually became really popular, was used heavily by the Twitter team, and now there's a separate firm, Mesosphere, going after that space. Google also created GFS and MapReduce, and Hadoop grew out of those papers. The Hortonworks team, Arun Murthy and those folks, created YARN, a very interesting scheduler, but one heavily focused on big-data workloads. So you saw an explosion of schedulers across the ecosystem, the bulk of them open source; Borg and Omega, of course, remain internal to Google. And in 2014 Google released an open source blend of Borg and Omega, which we'll get to in a second, called Kubernetes. One of the weirdest names I've seen for a product, but it's still a pretty compelling product.
If you've not played with it, it's a nice thing. You can get Kubernetes up and running in about ten minutes; it's fairly straightforward. What you do with it after that is something we'll get to. So that's the ecosystem that evolved outside the enterprise: no VMs, all containers, and fairly advanced cluster managers, because keep in mind, these folks operate at a scale that's almost unheard of, hundreds of thousands of machines. What Google really pioneered, like I said, is that the abstraction changed from a VM to a container. As a Google dev, you would always ask for a container. Look at Kubernetes: the abstraction is all about the container. Give me a container, give me a set of containers. There are a couple of nice abstractions on top, like replication controllers and services, but the container is the abstraction.

Now, look at the classification of schedulers that has emerged over the last several years; it's a pretty common taxonomy today. Leftmost is the monolithic scheduler, and Borg is the canonical example: one big scheduler, all requests come to me, I know exactly where all my resources are, and I say, all right, you go on that host with this much CPU and this much RAM. Fully centralized. Borg served Google well, but over the years they found that centralized scheduling wasn't ideal for them. As their devs created more and more frameworks across a range of paradigms, MapReduce being one and graph processing another, they wanted a way to decouple scheduling. So they created Omega, a shared-state scheduler, which I'll get to in a second, but let me explain Mesos first. The folks who built Mesos had actually worked at Google, and they partnered with their friends at UC Berkeley to build something that could be used outside of Google. They created Mesos in the open; it's part of the Apache Foundation now. Mesos is actually pretty interesting: it's a two-level, or two-stage, scheduler. If I'm building an application or a framework, I can go to Mesos and say, hey, I need resources, can you find them for me? You get two layers of indirection, and what that enables is a growing ecosystem of schedulers on top, with a central component in the middle that does no real scheduling itself; all it does is make resource offers and hand the scheduling decisions up to the framework schedulers. That abstraction is nice, because you can create a lot of schedulers on top of the central one, but it has a significant limitation, and there's a bunch of papers on it. The diagram you see here is not mine; I can't draw that nicely. It's taken directly from the Google paper published a little while back. The problem with the two-level scheduler in Mesos is that a scheduler on top cannot see all the resources being parceled out. It doesn't have a full view of the data center. As a result you're not maximizing your resources; your optimization functions aren't operating at their fullest. It's a big problem.
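Here's a deliberately tiny Go sketch of that two-level shape, just to show why the framework's view is partial. The names and types are mine for illustration, not Mesos's actual API:

```go
package main

import "fmt"

// Offer is a parcel of resources the central master hands to one
// framework. Shapes here are illustrative, not Mesos's real protocol.
type Offer struct {
	Agent string
	CPUs  float64
	MemGB float64
}

// greedyFramework is the "second level": it decides using only the
// offers it was given, never the whole data center.
type greedyFramework struct{ needCPUs float64 }

func (g *greedyFramework) accept(offers []Offer) []Offer {
	var taken []Offer
	for _, o := range offers {
		if g.needCPUs <= 0 {
			break
		}
		taken = append(taken, o)
		g.needCPUs -= o.CPUs
	}
	return taken
}

func main() {
	// The master holds the global view of free resources...
	free := []Offer{{"agent-1", 4, 16}, {"agent-2", 8, 32}, {"agent-3", 16, 64}}
	fw := &greedyFramework{needCPUs: 10}

	// ...but only offers the framework a slice of it. The framework
	// takes agent-1 plus agent-2, even though agent-3 alone would have
	// been the tighter fit: the visibility problem described above.
	fmt.Println("accepted:", fw.accept(free[:2]))
}
```

The master decides what to offer, and the framework can only optimize over that slice; a globally better placement (agent-3 by itself, say) is simply invisible to it.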
Mesos and the frameworks on top of it try to work around this, and you'll see combinations like YARN on Mesos, but they needed a fundamentally better answer. Google looked at the Mesos model and decided it was just not going to work for them, so they went and created Omega. Omega is actually a very interesting paradigm: they share the state. There are multiple schedulers, but the cluster state is not held in just one place; every scheduler can see all of it. So you avoid the blind spots of a two-level scheduler without falling back into a single monolithic scheduler. That's the evolution: from monolithic Borg, to two-level Mesos, to shared-state Omega.
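The mechanics boil down to optimistic concurrency against a shared ledger of the cell. Here's a toy Go version; it's my own simplification of the idea described in the Omega paper, not Google's code:

```go
package main

import (
	"fmt"
	"sync"
)

// Cell is the shared, whole-cluster state that every scheduler can see:
// plan against a snapshot, then try to commit optimistically.
type Cell struct {
	mu      sync.Mutex
	version int
	freeCPU map[string]float64 // free CPUs per machine
}

// Snapshot returns the full state plus a version for conflict detection.
func (c *Cell) Snapshot() (int, map[string]float64) {
	c.mu.Lock()
	defer c.mu.Unlock()
	snap := make(map[string]float64, len(c.freeCPU))
	for m, v := range c.freeCPU {
		snap[m] = v
	}
	return c.version, snap
}

// Commit applies a claim only if nothing changed since the snapshot;
// otherwise the caller lost the race and must re-plan.
func (c *Cell) Commit(seen int, machine string, cpus float64) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	if seen != c.version || c.freeCPU[machine] < cpus {
		return false
	}
	c.freeCPU[machine] -= cpus
	c.version++
	return true
}

func schedule(c *Cell, name string, cpus float64) {
	for {
		ver, snap := c.Snapshot()
		target := ""
		for m, free := range snap { // plan against the FULL view
			if free >= cpus {
				target = m
				break
			}
		}
		if target == "" {
			fmt.Printf("%s: no capacity anywhere\n", name)
			return
		}
		if c.Commit(ver, target, cpus) {
			fmt.Printf("%s placed %.0f CPUs on %s\n", name, cpus, target)
			return
		}
		// Optimistic-concurrency conflict: snapshot again and retry.
	}
}

func main() {
	cell := &Cell{freeCPU: map[string]float64{"m1": 8, "m2": 8}}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			schedule(cell, fmt.Sprintf("sched-%d", i), 4)
		}(i)
	}
	wg.Wait()
}
```

Each scheduler plans against the full view, and when two of them race for the same machine, the version check makes one of them lose and re-plan, rather than hiding part of the cluster from it up front.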
The big-data ecosystem started looking at the same ideas. Now, YARN was initially modeled after the two-level scheduler, but the design drifted along the way. I've had this conversation with the YARN team: the application master that sits in YARN was supposed to be decoupled from the resource manager, but over the last couple of years no implementation has actually emerged that makes it behave like a two-level scheduler. So YARN, as of today, is really a monolithic scheduler. We'll talk in a bit about what we're doing in Cloud Foundry and where it fits in this classification. So that's the scheduler story, and if you're a computer scientist, there's a lot of fun research going on in this space.

All right, so while that was going on, along came another firm, and we now know exactly how much it's worth: five-some billion dollars. Amazon. Now, Amazon brought a bunch of innovations, but none of them really came from this space. What they really did (and this is not an insult to Amazon) is recreate VMware in the public cloud. They innovated on the consumption model, but the core technology remained the same, which means the abstraction is a VM again. If you want an app, I'll give you a VM. If you want a coffee, I'll give you a VM. So if you're an app dev, Amazon gives you the same problem: I don't want VMs, I want a better abstraction to write my code against. That's not to dismiss what Amazon has done; they've recently embraced Docker with a new container management service, and we'll see how that pans out. So that was the Amazon story.

And along came OpenStack. What has OpenStack done? Not a lot, aside from causing confusion in the marketplace. OpenStack has been one of the most painful things for enterprise IT. They feel they have to try it (the CIO says, we have to do it), and very quickly they figure out that the damn thing doesn't work, no matter what they do with it. Some service providers have put blood, sweat, and tears into making it work. Here's a great example; unfortunately I didn't put a link on the slide, but you can Google the words and find it. And this is a startup, by the way, not even an enterprise: they tried to do something with it, and it did not work. I didn't want to do this talk and not speak about OpenStack, because that would make me lose credibility almost instantly. So I'll say it this way: if you have to work with OpenStack, leave where you're working. I'm just kidding. Speak to your CIO, have a conversation. Some of our savvy clients have put OpenStack in dev and something else in prod; that way you're minimizing your loss and the threat to your job. What do you really think, right? Sorry, it's the people I hang out with.

All right. The exact date here is fuzzy; it's between 2011 and 2012. The first code for Cloud Foundry was actually committed way back, but the real spurt started between 2011 and 2012. So Cloud Foundry came along, and it has a very interesting birth. It was born in the enterprise, but the people who built it came from the Google world, and like all people do, they brought ideas and concepts from Google with them. Number one: the container is the right abstraction. VMs are nice, but containers are what you want to operate in. Number two: resource scheduling is a fundamental aspect of a PaaS. Without a resource scheduler, you really can't manage your workloads, so you need a robust scheduler within Cloud Foundry. We'll talk about what Cloud Foundry has done, but keep in mind that Cloud Foundry brings a completely different way of looking at things: the abstraction is now the application. When you want an app, you ask for an app, and Cloud Foundry figures out what needs to be done. No VMs, no containers; whatever happens behind the scenes, you ask for an app and you get an app back. This whole cf push thing seems very simple now, but four years back it was a hard decision for the Cloud Foundry team: throw my app at the platform, and stuff magically gets provisioned for me.

So let's look at Cloud Foundry. It's an interesting beast: born in the enterprise, at VMware initially, but with ideas from the Google ecosystem. So what do you end up with? VMs and containers, both. There are very few deployments, if any, that run Cloud Foundry containers directly on bare metal today. It's a commonly asked question, but enterprise workloads are very different from startup workloads. We speak to a bank and they say, we don't want to put our PCI DSS-compliant apps in containers while the security model and everything around it is still being worked out. VMs still offer a level of assurance that containers on bare metal cannot. My strong belief is that at some point the VM layer will probably go away; that's my guess. But for now we have hosts, we have VMs, and we have containers.

So where do you start? Go back to the first resource manager we spoke of: resource pools. If you're deploying Cloud Foundry on vSphere, or even OpenStack, and you're not thinking about resource pools, you're not maximizing your resources, unless you're doing standalone Cloud Foundry deploys. I've seen a ton of clients deploy dev, pre-stage, and QA environments all in a single vCenter install, with resource pools carving out and managing resources across the different installations. That's number one. Second, the containers inside Cloud Foundry are standard cgroups. Nothing proprietary, nothing crazy; it's all open source, and if you don't believe me, go take a look at Garden and what we've done there. A bunch of innovation has gone into Garden. When you cf push an application with some amount of memory (say, cf push myapp -m 1G), it gets resource-scheduled on the back end and lands in a container, and the cgroup controller enforces the memory you asked for: you requested one gig, you get one gig, as a cgroup memory limit. But at the same time, you're also getting CPU scheduled for you. We use the kernel's fair scheduler for that, and the amount of CPU you get today is proportional to the amount of memory you asked for. If one app asks for one gig and another asks for two, the two-gig app gets twice the CPU of the one-gig app. That's a default we chose; you can potentially tweak it, and eventually we'll have a way to specify actual CPU shares so you can control the resources you're asking for more finely.
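Underneath, that enforcement is just a couple of files in the cgroup filesystem. Here's a bare-bones Go sketch of the mechanism (cgroup v1 paths, needs root); the one-gig-equals-1024-shares ratio is my assumption for illustration, and Garden's real implementation does considerably more:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strconv"
)

// The memory you ask for becomes a hard limit, and the CPU shares are
// derived from that same number, as described above.
func makeContainerCgroup(name string, memLimitBytes int64) error {
	const sharesPerGB = 1024 // assumed ratio, for illustration only

	memDir := filepath.Join("/sys/fs/cgroup/memory", name)
	cpuDir := filepath.Join("/sys/fs/cgroup/cpu", name)
	for _, d := range []string{memDir, cpuDir} {
		if err := os.MkdirAll(d, 0755); err != nil {
			return err
		}
	}

	// Hard memory ceiling: exceed it and the OOM killer pays a visit.
	if err := os.WriteFile(filepath.Join(memDir, "memory.limit_in_bytes"),
		[]byte(strconv.FormatInt(memLimitBytes, 10)), 0644); err != nil {
		return err
	}

	// CPU weight proportional to requested memory: a 2 GB app ends up
	// with twice the shares, and thus roughly twice the CPU under
	// contention, of a 1 GB app.
	shares := memLimitBytes / (1 << 30) * sharesPerGB
	return os.WriteFile(filepath.Join(cpuDir, "cpu.shares"),
		[]byte(strconv.FormatInt(shares, 10)), 0644)
}

func main() {
	if err := makeContainerCgroup("demo-app", 1<<30); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```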
That's one interesting piece right there. The second is that the team didn't sit back. Keep in mind, we started this whole talk with optimization algorithms: you cannot have true resource management in the data center without optimization baked in. If you're afraid of linear programming, do not look at this source code; if you like linear programming, it's actually really fun. The team coded the resource management for Diego, the whole auction, in two Go files. There's a great blog post by Amit, the engineer who actually wrote the code. He's a math major, and he goes into a fair amount of detail. What it does, essentially, is this: when I submit my workload, Diego runs an auction behind the scenes. It says, here's my job, who can handle it? An optimization pass runs on the fly, and the right host, the right cell, is picked for the job, with two simple goals. First, keep the user happy. Second, keep the operator happy, because the operator cares about total cost of ownership. You can have a happy user and still have a total cost of ownership that's completely out of line with what you're trying to do in the data center. So you truly need an optimization algorithm.
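To give a feel for the shape of such an auction, here's a toy scorer in Go. To be clear, this is my sketch, not Diego's actual scoring function; read Amit's post and the Diego source for the real thing:

```go
package main

import (
	"fmt"
	"math"
)

// Cell is a candidate host bidding for work. The 0.5 same-app penalty
// weight below is an arbitrary assumption for this sketch.
type Cell struct {
	Name           string
	FreeMemMB      float64
	TotalMemMB     float64
	InstancesOfApp int // already-running instances of this app
}

type Work struct {
	App   string
	MemMB float64
}

// score: lower is better. Prefer emptier cells (balanced utilization)
// and penalize cells already running the same app (availability).
func score(c Cell, w Work) (float64, bool) {
	if c.FreeMemMB < w.MemMB {
		return 0, false // cannot host it at all
	}
	used := (c.TotalMemMB - c.FreeMemMB + w.MemMB) / c.TotalMemMB
	return used + 0.5*float64(c.InstancesOfApp), true
}

// auction picks the best-scoring cell that can fit the work.
func auction(cells []Cell, w Work) (winner string, ok bool) {
	best := math.Inf(1)
	for _, c := range cells {
		if s, fits := score(c, w); fits && s < best {
			best, winner, ok = s, c.Name, true
		}
	}
	return winner, ok
}

func main() {
	cells := []Cell{
		{"cell-1", 2048, 8192, 1}, // busier, already runs the app
		{"cell-2", 6144, 8192, 0},
	}
	w, ok := auction(cells, Work{App: "web", MemMB: 1024})
	fmt.Println(w, ok)
}
```

Here cell-2 wins: it's emptier and doesn't already run an instance of the app, which captures the keep-the-user-happy (availability) and keep-the-operator-happy (utilization) goals in miniature.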
But guess what? Kubernetes, Mesos: none of them really has that yet. They'll eventually get it. YARN doesn't have it either; if you want it, you need to plug in your own algorithm, because out of the box it uses standard fair scheduling. None of them do the kind of thing Diego is doing, so think about that when you start to plan out your data center.

So, having heard all this, if you're a data center operator, what should you do? That's an actual picture from one of my clients, by the way; he has no idea what to make of all the different pieces. What should you do? One, most importantly, no matter what: many of us here are computer scientists, so read the papers. Look at YARN, look at Omega, look at Borg. Look at the source code for Diego, and look at the blog post by Amit. I'll tweet out all these resources. The important thing to understand is that a scheduler is not a PaaS; it's a very, very small subset of a PaaS. There's a lot you need to add on top of a scheduler to make it usable within the enterprise, because when I do a cf push, I want to get something back. In fact, this next image is not mine either; it's Gartner's, which is actually pretty funny: "one does not simply build a PaaS." That came out just a couple of weeks back. So understand the core concepts, but don't go around building a PaaS unless you have a lot of time on your hands and you don't care for your job. But what should you really be doing, then?

Go back to the data flow, the different parts of the enterprise, and put your operator's hat on. You're at CF Summit, so if you expected to hear something different here, something's wrong. First, apps and databases: your custom apps should be managed by Cloud Foundry. Diego is available in beta (you must have heard about it in the Diego talk) and it's coming out fairly shortly. There's existing resource management in Cloud Foundry right now, but Diego takes it to the next level. So think very hard before you use something outside Cloud Foundry for your custom apps. Second, what do you do with packaged apps? I'm an SAP shop, I'm an XYZ shop. You can't put those systems in containers; they're big monoliths. Stick with VMs for those, for now, but challenge your packaged-software provider. Tell them, I'm going to kick you out and build my own thing, because you add no value for me, and ask them to move to a PaaS model. Some of them are smart. I keep making fun of SAP, but they have seen the light: Hybris, one of their digital commerce systems, has been rebuilt on Cloud Foundry, and there's a talk René gave a little earlier today about how he's using Cloud Foundry for it. At the same time, SAP's HANA Cloud team is looking to run it all on Cloud Foundry. So some of the ISVs are starting to see the light; Mendix is another good vendor doing this. Going back to Sam Ramji's point about why he joined the Cloud Foundry Foundation: he likes cf push, he's an ecosystem guy, and what he really sees is hundreds of these players getting certified to run on Cloud Foundry. That's where the real value is for Cloud Foundry, the platform. So push your vendor to do that. Third, if you're doing any Hadoop, do not do it without YARN. YARN brings a lot of value to the table; it opens up the workloads. And you'll see more collaboration between the YARN community and the Diego community, toward a way to manage resources across the two. Fourth, for your data warehouses and data marts, which are usually appliances: tell your vendor to stop shipping appliances. They add no value. Ten years back, maybe, but Moore's Law has changed things dramatically, and whitebox x86 with smart distributed software is going to be ten times faster than your data warehouse appliance. So speak to your vendors and pick an open source EDW or data mart. And finally, the same goes for BI. Again, this isn't a prescription that applies to everybody, but it's one way I frame the conversation when I talk to clients about managing resources within the enterprise.

Last thirty seconds: what is the future going to be? Look at the disclaimer below and read it very loudly: none of this is official, just personal observations based on client conversations. One, you'll see Cloud Foundry on bare metal, somehow or other, within, I'd say, one year tops; some version of it will be available. Two, you may want a global resource scheduler, one to rule them all, able to manage across everything. It's nice in theory, right?
If you're Apple, you can pull that off when you're building your own cloud, but it's very hard to pull off in the enterprise. Very hard, because even if you can bring the infrastructure together, the political silos are not going to let it happen. But what will absolutely accelerate change is new app workloads. Going back to the whole build point: you're going from TPS reports to IT enabling you to do some amazing stuff, and those new app workloads are going to accelerate change much faster than you think, so be prepared for them. And the last one: I think within the next year you'll see at least a couple of dozen ISVs get certified on Cloud Foundry, so you can go to the marketplace, click, click, and there it is. All of that has a direct impact on resource management. But again, this is very early; we're still trying to see what exactly the future will look like. Those are some of my perspectives. So that was it. I'm open for questions. I know I ran over by a few minutes.

Audience: The constraint optimization equation you showed, does that actually exist currently? It absolutely exists. That was essentially real code, not fancy pseudocode; if you go to GitHub, it all exists in real code right now.

Audience: The reason I ask is the availability-zone behavior. If I have an application with three instances, I want them spread out, because that's what the equation shows. Correct, that's modeled. And do you currently do that? It absolutely does, and I can't lie: the code is there.

Audience: Is the GitHub repo public, then? Absolutely, yes. I can send you the blog post, which goes into a little more detail and references the source code. Amit's a smart guy; I don't know if he's here at the session, but he's a sharp guy. He should be around. He's one of our best engineers. Any more questions?

Audience: Mesos versus the Diego auction, is it the same thing? Say it again? Mesos versus the Diego auction. That's a good point. Diego is a monolithic scheduler; it's not a two-stage scheduler. Onsi brought this out specifically in the Diego talk. We may see a Diego and Mesos plug-in, a kind of integration, but Diego has more control of the resources: it knows exactly what's happening, and it's actually running the auctions. Mesos, think of it as a much lighter-weight component in the middle. Mesos on its own is useless; you need frameworks like Marathon or something else on top to do the full end-to-end work.

Audience: So we'd expect to see Diego on top of Mesos as a viable combination? Correct. If you want Mesos and you want Diego, that would be the most logical way to go. YARN on Mesos is similar; if you Google for it, YARN on Mesos, Kubernetes on Mesos, it's the same idea. But it goes back to the same point: in the enterprise, building that one massive global resource scheduler is going to be tricky. Any more questions?

Audience: [inaudible question about networking] Say it again? Yeah, those are famous last words, when you say that's a layer we don't want to be in. SDN and PaaS together are still an emerging topic, irrespective of what we spoke about today. Right now, most of CF assumes a reasonably static network, like a pre-provisioned VLAN. Ideally, yes; this is a common request.
Clients are asking, as part of my app provisioning, can I provision my network on the fly too? We haven't seen that in the product yet, but when it happens, we'll probably see Diego updated to include the network as one of the constraints in the optimization. Good question. Any more? All right, thank you.