So thank you, everybody; I think we need to get the session started. My name is Cliff Grossner, and I'm responsible for data center, cloud, and SDN research at IHS Technology. I have with me a very esteemed panel to debate a topic that we're all probably opinionated on, and maybe even already think we know the answer to, but the devil's in the details, and that's what we're trying to get at with the panel today. So with that, I'm going to ask the panel to introduce themselves. Maybe we can start over on this side.

Hey, I'm Tobi Knaup. I'm one of the founders and the CTO of Mesosphere.

I'm Tim Hockin. I'm one of the founders and technical leads of the Kubernetes project.

Christopher Liljenstolpe, chief architect for Project Calico.

Hi, my name is Mike Goelzer. I'm the product lead for Docker's runtime. That includes the Docker Engine, which is the component most people know, some of our orchestration technologies like Docker Swarm, and some of the technologies that glue things together: Compose, Machine (our provisioning tool), et cetera.

Hi, folks. I'm Pino de Candia, CTO at Midokura, the maker of MidoNet network virtualization.

Well, thank you. As you can see, we're very well represented here today. What I'm going to do is lead off the discussion with a few minutes sharing some research we've done recently that I think might be of interest to everyone in the audience. It's something I refer to as the MetaCloud, and I believe containers are a very important technology for making it happen. So with that, let's take a look at our view of how the market has been unfolding. We all know the history around moving to off-premise clouds, with computing on bare-metal servers. Then we went through a server virtualization phase with on-demand computing, and that brought in our ability to be very agile. That's what many of the enterprises I work with have been working on, and it has meant they needed to deploy orchestration in their data centers. Some of the statistics we heard earlier today in the keynote certainly bear that out, as do our research surveys. But I think we're now entering a very rich period, from 2016 to 2019, that I call the containerized-server period, coupled with the MetaCloud. When I think of servers, I think of them being multi-tenant, be they virtual machines or containers, and the question we're asking is: how is that going to work together? There are still some pretty important details that have not been worked out. As we get past 2020, I think we'll reach a point where we see cloud brokerage, with very large compute farms exchanged daily on the spot market, just as we trade copper and other commodities today. For where we go after that stage, I left a question mark, because I honestly don't know the answer yet. So with that, what I want to share with you is some research where we look at cloud services.
And before I do that, I just want to share our segmentation for the off-premise cloud services market, where we separate out something we call cloud as a service, which includes orchestrated platforms with containers, such as Kubernetes and competing platforms that do the same thing, and covers services that provide orchestration as well as infrastructure as a service. The reason we split it out in our market research is that I think that's where the real innovation is going to happen in the next few years, and that's where we want to understand the growth. So with that, I'll share this chart. The cloud-as-a-service portion is the dark blue one, the gray one is infrastructure as a service, and the light blue one is platform as a service, where developers are provided pre-built application building blocks. And of course we have software as a service on top. We project the entire market to be just over $275 billion by 2020, where cloud as a service, as you can see, starts to become a very sizable proportion. I don't break it out yet, but I believe containers will be a strong driving element of that segment. So from our research, I believe there's a good amount of growth coming. Another element that I think will drive growth in container usage: when we asked service providers and North American enterprises what kind of cloud architecture they want to use (public cloud, private cloud, or hybrid cloud, where workloads move between on-premise and off-premise data centers), it turned out that hybrid cloud is the architecture that respondents to our survey expect to use the most over the next couple of years. I'm not sure if any of you caught it, but I saw a demonstration from Google of their latest visualization tool to go alongside Kubernetes, which allows an enterprise not only to move workloads from one data center to another, be it on-prem, off-prem, or third-party data centers, but also to visualize the workloads. For me, that's the first step in providing the tools to really move to the multi-cloud, or MetaCloud, depending on how you want to refer to it. And with that, my last data point: I asked North American enterprises, on average, how many different cloud service providers do you use? In 2015, the response came back as 10. That includes SaaS providers, but before doing this survey, if you had asked me, I would have said three. So adoption is much further along than I thought, and respondents expect the number to grow to 14. Let me leave you with a couple of final words: I think MetaClouds are here and we're well on the road to building them, I think containers are going to be a key enabling technology for this, and it's up to the panel now to answer the question: how far can we go?
So with that, I'm going to ask a question that the panel has actually been eager to respond to, and it has to do with one of my favorite topics: power struggles. At the end of the day, there's not only the technology but also the human side of any technology adoption curve, and the panel said they'd like to speculate a little on who's going to control what going forward. Is it going to be the ops people, the DevOps people, or the dev people? So with that, I'll pass it over to the panel; please go ahead and take your turn.

Okay, I'll start here. This is a subject I'm interested in. I'm a Linux nerd at heart, but I'm also interested in how power works in organizations and how technology changes that. At Docker, we're a very developer-focused company, and one of the things we've seen over the past couple of years of the Docker revolution, or the container revolution, is that the balance of power is shifting more and more away from ops and toward developers. I'll give you an example relating to networks. Networks used to be purely an infrastructure concept, and now, with containers, a network is something an application developer actually defines in his or her deployment manifest: a Docker Compose manifest, or one of the other approaches to orchestration that the other panelists represent. Either way, these are increasingly software-defined entities that are not being defined by ops; they're being defined by the application developer and tailored to that developer's application. That has huge implications for ops, because if you're in an ops role, you want to enforce certain policies and you're going to be held responsible for security. But at the same time, you've got developers operating at a higher level of abstraction, with applications communicating in ways you can't control, and with TLS you may not even be able to see exactly what they're doing. So I think that's a really interesting area where containers are changing the game.

So I mostly agree, though I'll put a different color on it. I've had the experience of both operating infrastructure and vending into infrastructure, and of being in a company that writes applications, so I've seen the whole spectrum here. I wouldn't necessarily say it was a power-balance problem before; I'd say it was an impedance-mismatch problem. The problem was that what infrastructure exposed to developers was a very infrastructure-centric thing. In the bad old days that meant tickets; in more modern days, in infrastructure like the OpenStack we're talking about here, we make developers think in terms of: what segment, what network am I going to put this application in versus that application? How do I connect those two networks together? Do I create an L2 segment or an L3 router? We're making devs think in the very twisted mindset of network operators, and I'm a network operator, so I can say that. On the other side, the operations guys would see these artifacts from the developer and not know what the developer was trying to achieve. He created an L3 router: was that a firewall? Was that a...

There's a question. Can you hold till...
We'll have 10 minutes for questions at the end; please hold your questions till then. You can be first up.

So, as the operator, I end up looking at this asking: was he trying to create a firewall? Was he trying to create a router? What was he trying to do? I'm lost, because I don't have the developer's intent. The developer's annoyed, because he can't work the way he normally thinks. What we've done with Docker Compose, with Kubernetes' fine-grained policy, with net-modules, is allow the developers to say: things A need to talk to things B with this protocol, or with these characteristics. That's the only thing the developer really cares about. He doesn't care how the infrastructure is put together, so he can just define what he wants, and the infrastructure renders it. The infrastructure operator can look at that and say, I know exactly what the developer was trying to do, rather than trying to guess. So I don't think this is a power-balance thing. What we've done is solve the impedance mismatch, and now both sides are more empowered than they were previously. It's a net gain for both sides, not a power shift.

Anyone else?

So I think the future of this landscape continues to have two well-defined hats: the operator hat and the developer hat. Sometimes the same people wear the two different hats, and sometimes different people wear them. But to latch onto the example of networks: for you as an application developer, I think the network is the wrong abstraction. You want to talk about what can talk to what. Draw the graph of your application, let me enforce the graph, and don't tell me how to do it. Then, as the operator, it's my job to figure out how to implement that: how to take my network layer and implement the policies you've described. Because now I know the intentions, I have the freedom to change the implementation without breaking you, the user, because I didn't expose you to constructs that are a little too brittle. In small organizations, even medium organizations, these may be the same people; the folks who set up the network may be the same folks writing the applications. That's totally okay, but as you scale and move into larger and larger enterprises, I think the two have to be different, at least in some regards. When you start talking about certifications and compliance, it's a big deal, and you can't just give the developers the keys. So I think the split is here to stay, at least at the large end of the scale. And I agree strongly that policy and description of intent are very, very powerful, and that's the direction things need to move.

Yeah, I agree with that point. The two roles are going to stay. A developer cares about: I want this thing to just work; I have my architecture, just take this blueprint I created and get it out there. As an operator, you want to make sure things stay up and running, and that there's enough capacity: enough machines, enough network bandwidth, and so on. Those things are not going to change. But as Chris was saying, we now have much better tools available, so the impedance mismatch isn't there anymore, and dev and ops can actually work together better and speak a common language. We fixed the interface; I think that's the main thing that happened here.
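To make that "things A need to talk to things B with this protocol" idea concrete, here is a minimal sketch of what such an intent declaration can look like, written as a Python structure that mirrors the shape of a Kubernetes NetworkPolicy manifest. The namespace and the app/tier labels are hypothetical, purely for illustration:

```python
# A minimal sketch of application-level network intent, shaped like a
# Kubernetes NetworkPolicy. The developer states only WHO may talk to WHOM
# and on WHAT port; how the network renders that is the operator's problem.
# The "shop" namespace and the app/tier labels below are hypothetical.
backend_policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "frontend-to-backend", "namespace": "shop"},
    "spec": {
        # Which workloads this intent protects ("things B").
        "podSelector": {"matchLabels": {"app": "shop", "tier": "backend"}},
        "ingress": [{
            # Who is allowed in ("things A") ...
            "from": [{"podSelector": {"matchLabels": {"tier": "frontend"}}}],
            # ... and with which protocol/port characteristics.
            "ports": [{"protocol": "TCP", "port": 8080}],
        }],
    },
}
```

Notice that nothing in it names subnets, VLANs, or routers: the developer declares the graph, and the operator or the network plugin is free to render it however the infrastructure allows.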
Real quick, on that impedance mismatch: in old conversations, you'd see the dev guys and the ops guys on opposite sides of the table. Now, when we start talking about this stuff, they all actually seem to like it, and they're having constructive conversations rather than throwing flameballs back and forth. So, yeah.

I guess I have to have the last word. From my point of view, there is a power struggle, because although we're going in the right direction with intent expressed in code, the fact is that the technology isn't quite there yet. For example, we're using technology that cannot fully express security policy yet. That means companies are rushing to deploy microservices on technology that doesn't quite express all the concepts the security team or the infrastructure team needs. And that's okay, because they're moving fast and they're going to do it anyway. I think the power struggle we'll see with container technologies is similar to what we saw with VMs. With virtual machines, we virtualized, and some of the traffic the security team could see before could not be seen once we had virtual machines. Containers do the same thing. You might say that in a monolithic application you weren't seeing the process interactions anyway, but now you've split that monolithic application into microservices, and there are more risks, because those services are now exposed, can be attacked, and an attack can expand laterally. So I do think there is a struggle, and a fear about what you put into microservices, based on the maturity of the technology for expressing some of these concepts.

Well, thank you. I think that covered a wide range of things. I'm going to ask a question now that is dear to my heart, and one I've actually changed my opinion on over the last few months. If you'd asked me a year ago, I'd have said there's no room for innovation in hardware: it's all going to be done in software, and hardware is going to be vanilla, especially merchant silicon for switching. But I've now been convinced otherwise, that there is room for innovation in hardware. So I want to turn this over to the panel: what does this mean for containers, in terms of where we might go with innovation in hardware?

We certainly see innovation in hardware helping on the networking side. We're seeing network chips in white-box switches now that let us look at packet queues, congestion, and drop events, so we can see exactly what's happening in the underlay for flows between applications, identify those flows, and trace them back to the services they belong to. Even on the server we see that happening. Look at DPDK; maybe that's really an interaction between software and hardware. But we see chipsets coming out from various vendors, both for network switches in the underlay and for the servers themselves, to go a lot faster. Certainly we still have a lot of work to do in the software, as that community has shown, to enable the hardware acceleration.

Yeah, at the container infrastructure level, there are also some interesting things going on.
I'm an open-source PM, so I don't want to come across as plugging particular vendors, but Intel has an initiative they call Clear Containers, where basically they'll spin up a hypervisor and run your container inside that. If you're concerned about containers not being fully isolated, that's a technology you can look into for hypervisor-level isolation. I think that trend is only going to accelerate, and other chip vendors are going to get into it and look more closely as production workloads move more and more toward containers.

So I think that interface between software and hardware is interesting, things like DPDK. I mean, the FD.io project just got kicked off, not in the CNCF but in the Linux Foundation. It's a high-performance vector packet processing forwarding path that's tied partially to DPDK. That's an interesting data path on the networking side; I'm a networking guy. I also think there are some folks noodling around with TPM chains, where even containers can authenticate the underlying OS, which can authenticate via the TPM, so you have a trusted chain all the way from the hardware up through the actual application, potentially even tying that into network policy: if you don't have a trusted authentication chain, you can't talk on the network. That's interesting as well. One thing I do hear from a lot of customers, though, is that they don't want to tie their infrastructure to a specific proprietary implementation; they've been burned too many times. So hardware innovation is interesting, and people might take advantage of it if it's there, but I don't see people building infrastructure that bets on a particular piece of proprietary hardware innovation as its basis, because at that point they've walked away from all the open-source flexibility of movement they've been driving toward. They might take advantage of it, but are they going to key their design off some proprietary piece of hardware? I'm not seeing that going forward, but I may be wrong.

You know, let me rewind a little bit to the advent of VMs. At first, VMs were a pure software construct, and all the hardware vendors sort of said: that's silly, go away. Then, as VMs gained momentum, you saw guys like Intel and AMD come in with instruction-set extensions and new platform extensions to give virtualization harder boundaries and make it faster and more reliable. I would never bet against Intel on this. I'm sure they're over there dreaming up ways to make container isolation more secure. Clear Containers: they've got a bunch of brains over there working on fun new ways to engage hardware to make this solution better, because they understand that, fundamentally, the easier it is to use something, the more of it we use. And what does Intel want? They want us to use more CPUs. I also think there's a lot of room for cool stuff around authorization and authentication, expanding the role of the TPM. There's going to be neat stuff. In some sense, containers take machines that people had scoped to run two or three applications, and we're saying they're going to run 200 applications instead, and some of the platforms have to adapt to that orders-of-magnitude scale change. So I think we're going to see some growth there.
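As an aside on the mechanics of those TPM trust chains: the core idea is a measured-boot hash chain, where each stage extends a platform register with the hash of the next stage, so no later stage can hide an earlier tampered one. Here is a toy sketch of just that extend-and-verify step; the stage names and values are hypothetical, and real attestation involves TPM-signed quotes and much more care:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-256(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_chain(stages: list[bytes]) -> bytes:
    """Fold a boot chain (firmware -> kernel -> runtime -> image) into a
    single register value, the way measured boot does."""
    pcr = b"\x00" * 32  # PCRs start zeroed at power-on
    for stage in stages:
        pcr = extend(pcr, hashlib.sha256(stage).digest())
    return pcr

# Hypothetical chain; each element stands in for the bytes of a boot stage.
chain = [b"firmware-v1", b"kernel-4.4", b"container-runtime", b"app-image"]

# A verifier compares the measured value against a known-good one before,
# say, letting the workload talk on the network (the policy tie-in above).
# A real TPM would sign this value in a quote; here we just compare.
expected = measure_chain([b"firmware-v1", b"kernel-4.4",
                          b"container-runtime", b"app-image"])
assert measure_chain(chain) == expected
print(measure_chain(chain).hex())
```

The property this construction buys is that the ordering of measurements is baked into the register value, so a compromised later stage cannot retroactively forge an earlier, clean-looking measurement.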
I think there's a ton of innovation happening in hardware, and if you talk to any of the hardware guys, they're actually really happy about containers and cluster schedulers, because what virtualization did to most of their products is kind of just hide them away. You couldn't expose the special characteristics of a machine, because all it was was virtual cores and so on. These guys are super happy about containers, because with containers and schedulers you can say: all right, this machine has a certain GPU on it, this other machine has an FPGA or a TPM, or whatever the hardware characteristics are, and you can then use that to place your applications, or even to architect your applications. So the hardware guys are really excited about this; I think they're all looking at what they can do to enable containers. And on Tim's last point: containers put a lot of strain on the network, and on the NICs, just because we're running so many containers on a host, so the NIC vendors are looking into what they can do there. So yeah, we're going to see a lot of hardware innovation, and I think we can surface it much better with a container-based infrastructure.

Okay, I actually have one thrust on the hardware innovation side that I don't think anybody mentioned, and I'm asking one panel member to be brave enough to talk to it: the P4 initiative out of Stanford, and whether it has any relevance here. And maybe I'm off base on that.

I don't know about it.

Okay, well, then it's something to look at. Effectively, it's a project out of Stanford.

I know what it is, but I'm not qualified to comment on it.

No worries. For those of you who haven't heard of P4, basically the way I think of it is a compiler for silicon, to indicate how the silicon should process packets, making the silicon very field-programmable. It's kind of like microcode for CPUs, but for network chips. Okay, let's move on. One of the questions I think we should really touch on, because I've heard people ask me this, and maybe the answer is easy: do virtual machines disappear? And if so, why? And if not... All right, I get that; there's enthusiasm on this one, that's great. If they do stay around, how do they get integrated to work together with containers?

All right, I'll answer first. VMs will be around for a while. If you look at any of the public cloud providers, obviously it's all VM-based. There are tons and tons of workloads where VMs will always make sense, where there's a very heterogeneous OS environment and so on. But we're also seeing more and more workloads moving to containers. A lot of greenfield development happens in containers. They're a really great fit for microservice architectures, which a lot of people are doing. They're also a great fit for fairly short-lived workloads, where the virtualization overhead, just the overhead of booting a VM, is too large. So VMs are going to be around, but we're going to see more and more workloads shifting to containers. And we're probably also going to see VMs actually running in containers, which is already happening at places like Google.

I agree. VMs are here for the foreseeable future. At least for the next decade, they are an important piece of the ecosystem. They solve problems in certain ways that are better than, and different from, what containers can do.
And I think there are problems that demand VM solutions for quite some time. I do think the balance will shift toward containers as people go for the efficiencies and the density that make these things more affordable, but I don't think VMs are going anywhere anytime soon.

So, yeah. I do wonder, though, whether over time the VMs that exist in the infrastructure will stay what I would call heavy VMs, full-blown OS images, or whether we start seeing more micro or runt VMs that surface just the bits of the virtual machine that are unique to this application and aren't generally available.

I'll put on my large-infrastructure-provider hat; I used to wear it, so I think I can still put it on. One of the things you don't want to do when you're building an infrastructure is say: okay, I'm going to have X percent of my infrastructure dedicated to X and the other part dedicated to Y, because forecasts will always be wrong and you end up with stranded assets. So at some point you have to decide what is going to run on what. You also don't necessarily want two schedulers fighting over the same set of assets; as the ski instructor in that South Park episode says, you're going to have a bad time. At some point, something has got to own the resources in the infrastructure. I look back to the late '90s, when we were building internet backbones: 90% of your traffic is IP, growing 100% a year, and 5% of your traffic is ATM, growing 5% a year. For the folks here who don't know what ATM was, I'm showing my age; it's not a cash machine, it was actually a protocol. Do you build an ATM network and put IP on it, or do you build an IP network and put ATM on it? So as things move more and more into containers, at some point there's going to be an inflection point where you stop running containers on VMs and start running VMs on containers, because, if you believe in this, the volume of containers and the percentage of applications running in them keep increasing. That should eventually become your native infrastructure and your native scheduler, and then VMs become an application on top of it. If you believe VMs will continue to dominate, you do it the reverse way.

Yeah, I don't have a whole lot to add to what's already been said; I basically agree. I would point out that, especially in the early days of containers, there was a lot of confusion. They were often compared to VMs, described as a lightweight VM, whereas I think they solve a somewhat different problem. Fundamentally, a container is an isolated process that is still running on the host kernel, whereas a VM is a totally hard-isolated, even hardware-isolated, separate machine that just happens to be using the same physical hardware. There's a big difference between the situations where you want one and the situations where you want the other. So I think VMs will continue to live on for a long time. It's also a conceptual model that's very easy for people to understand, whereas containers are a little more confusing: what is it? Is it a process? Is it a lightweight VM? I think that confusion could persist for some time.

I don't think I have a lot of wisdom to add, but I guess I have a slightly nuanced view, which is that I think containers will run on VMs for quite a while. I think the VMs will guarantee the portability.
As we see hardware tricks that can make VMs faster, VMs perhaps come back, because they might be cheaper to run than they are today, so the trade-off of VM versus container might not be quite as severe in terms of lost performance. Certainly, all the public clouds are going to be running containers in VMs for a very long time, and if you're architecting something where you want to easily move workloads back and forth between public clouds, or in a hybrid cloud, you're going to want something similar. But nor do I confine the role of VMs to acting as homes for containers. Just as we still have mainframes, we're going to have applications that are quite complex and perhaps not so easy to containerize, and we'll see those for a very long time.

Okay, well, we're into the question-and-answer period. There was a gentleman over there; I don't know if you still have your question. Okay, so you're off the hook. But you raised your hand, so now you have to ask a question. If someone does have a question, please come up. So why don't you get in line and be next then? Okay, so we'll have time for at least two questions. If you have one, go ahead.

Great. So we've been talking about how Kubernetes is trying to capture the developer's intent in a policy language, decoupling it so the operator can go implement it with whatever abstractions he has. But only a portion of the intent really comes from the developer; a lot of it comes from other people, and this artifact needs to be portable across all of them. It could be a load-balancing guy, a security guy, or someone doing operations who wants a bunch of Chef and Puppet agents. So how do we make this artifact truly portable across the different personas, and then have the final infrastructure implement it? Are you guys thinking about that? I don't see it talked about a lot, but I think it's really required.

Maybe I'll take the first shot at this question. I'm in networking, so part of the way I think of this is: once we can do the same things for containers, in terms of networking, that we do for VMs, the container-focused part of this discussion goes away. You have workloads, you have endpoints; does it matter whether they're containers or not? From a network standpoint, from a security standpoint, it really doesn't. You're right that the application owner only expresses part of the intent; other parts of the organization can impose policy on the workloads. So you do need policy management. You need ways to overlay advanced security, for example, transparently to the application, so service chaining becomes really important. But the management piece is crucial: the templates or APIs that allow you to put multiple layers of intent on top. Just to finish before I hand it off, are people thinking about this? Absolutely. For example, once the trunk port in Neutron ("VLAN-aware VMs" is the original name of the blueprint) gives a Neutron port to every container, all of the advanced services offered by Neutron, in particular service chaining, will be available for containers.
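Since several answers here come back to layered, multi-persona intent, here is a minimal sketch of how a policy engine can combine those layers, with higher-authority layers able to trump the developer's application graph. The personas, priorities, addresses, and workload labels are all hypothetical, and real policy engines are far richer than a first-match loop:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    priority: int                    # lower number = higher authority
    persona: str                     # who contributed this intent
    matches: Callable[[dict], bool]  # predicate over a connection attempt
    allow: bool

def decide(rules: list[Rule], conn: dict) -> tuple[bool, str]:
    """First matching rule wins; ops and security layers sit above the
    developer's application graph, so they can override it."""
    for rule in sorted(rules, key=lambda r: r.priority):
        if rule.matches(conn):
            return rule.allow, rule.persona
    return False, "default-deny"

rules = [
    # Ops: block an SSH door-knocking attack from one block, everywhere.
    Rule(0, "ops", lambda c: c["dst_port"] == 22
         and c["src"].startswith("203.0.113."), False),
    # Security: non-PCI workloads may not reach PCI-tagged endpoints.
    Rule(10, "security", lambda c: c["dst_pci"] and not c["src_pci"], False),
    # Developer: frontend may talk to backend on 8080.
    Rule(100, "developer", lambda c: c["src_tier"] == "frontend"
         and c["dst_tier"] == "backend" and c["dst_port"] == 8080, True),
]

conn = {"src": "10.0.0.5", "src_tier": "frontend", "src_pci": False,
        "dst_tier": "backend", "dst_pci": False, "dst_port": 8080}
print(decide(rules, conn))  # -> (True, 'developer')
```

The first-match-by-priority loop is a deliberate simplification; the point is only that each persona writes intent at their own level, and the combination order, not access to each other's constructs, is what resolves conflicts.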
At Docker, our orchestration system is called Docker Swarm, and this portability issue comes up with a lot of customers; it's something we're very actively working on, specifically portability between different developers or different development teams. So I absolutely agree that it's an important area.

Yeah, we're definitely looking at this. Again, I have an operator hat on. The idea is that it's still an intent model, and the way to think about it is that there are different layers of policy, where higher levels can trump or override lower levels. The highest level might be the operations guys, whose intent is: we're getting an SSH door-knocking attack from this block of addresses, so on every single endpoint, be it a VM or a container, I need to block inbound port 22 from these addresses, and I need to deploy that everywhere, no matter what the developer's intent was, because I need to save my infrastructure. Or it could be the security guys below that saying: I don't care whether this workload says it's PCI or not, but it has to pick one of the two. If it's PCI, it ends up in the PCI walled garden; if it says it's not PCI, it doesn't have access to PCI-enabled resources; and if it doesn't say either, it can't talk to anyone on the network until it identifies which camp it belongs to. But it's the same kind of intent. The security guy doesn't have to think about this workload and that workload individually; he says, things that have identified as PCI get this policy versus another policy. It's the same intent model; the difference is who's applying it and what priority they end up at.

I don't have a whole lot to add to that, so I want to refocus a little on this idea of layering abstractions. People built out VMs, and we built OpenStack as a way of managing, at least in part, applications on top of metal in a way that was more manageable than metal. Now we have containers, and people in some ways like containers better than VMs. Do we build containers on top of VMs, alongside VMs, or perhaps instead of VMs? Some of the infrastructure we've built around VMs, maybe we don't need it anymore, right? Maybe we can throw some of it out, because it was useful for VMs; maybe it's time for a little bit of a technological contraction, a complexity contraction. That's not universally true; there are going to be plenty of places where virtual networks continue to be really important. But I think there are a lot of places where people did them because that was how they thought they had to, and maybe they don't have to.

Yeah, nothing much to add. I agree. Everything is intent-driven, no matter which persona it is, and depending on where you live in the stack, you get the right abstractions to express your intent.

Okay, we have two questions and we have three minutes. So what I'm going to suggest is: go ahead and ask your question, you're next, but only the one or two panelists who feel really motivated should answer.

It's a real quick question: can containers be first-class citizens without IP addresses?

Yeah, absolutely.

I don't understand the question. I feel like IP is such a fundamental part of what we've built that it becomes...

Actually, I'm surprised you came down on that side, considering pods versus containers.

I'm making a distinction there.

Oh, okay, sorry.
So, to make that distinction: Kubernetes has an abstraction that's one very thin step above containers, and I think IP addresses are an important part of the identity at that level. Of course, there are jobs that don't need networking and don't need IP addresses, and that's fine, right? But in the abstract, for containers as a concept, the network is the computer, to borrow a phrase.

I don't need to add anything.

Well, those are good questions; you almost stumped the panel. We've got one more question and a couple of minutes to answer it.

Hey, guys. My question is about impedance mismatch. In a declarative model like Heat, VMs have a certain prescriptive way of being described, and in the container space, in Mesos or Kubernetes, they have a different way of being declaratively addressed. How do these come together in a container-first model?

Take one for the team.

I think of it as two different platforms. We have certain tools for VMs; we have certain tools for containers. Why can't we do the same kind of templates for VMs that we do for containers? Maybe because they're too slow to spin up. Can't we all just get along? Do we want to get along?

I guess my view, not Heat-specific but about OpenStack in general, is that OpenStack tends to make people think about the underlying infrastructure. We phrase our questions, we define things, in terms of the underlying network: the developer thinks about L3 segments or networks or VLANs. We make them think about the underlying infrastructure, and that might not be the right abstraction going forward. It should be more: I need this much compute, I need cycles of type X, I need this amount of memory or storage space, without thinking about how that gets rendered. We get into trouble when we make developers think in terms of the infrastructure, because then we lose the actual developer intent; we get what the developer thought they needed to say to get what they intended. I think that's the wrong construct.

I've been hearing this a lot, that OpenStack makes developers think about low-level details. It's certainly true that, as a young technology, OpenStack made people do that; you can see that Horizon is structured around having to choose all these things, and you have to understand IP prefixes and so on. But there are certainly layers in OpenStack that are starting to abstract all of that away. You can use catalogs now to launch applications, and I think we'll see more and more of that. Heat is a template model for launching things; maybe Heat isn't quite as convenient, or as much to your taste, as Kubernetes templates or Mesos and Marathon templates, but there's no reason why we can't build the same kind of templating on top of VMs.

Sure, there isn't. There isn't a reason why not.

Okay. I think we're at the end of our time slot, and I just want to thank the panel for the brilliant discussion and the thought leadership they provided here today. So thanks again, everybody, and on to the next session.