My name is Dan Kohn. I'm the Executive Director of the Cloud Native Computing Foundation, and I'm thrilled to welcome our first-ever public cloud panel, where we have representatives from almost all the biggest public clouds around the world. I'll start out by asking each of them to introduce themselves: how you joined your company, where you're from, how far you came to get here, that kind of thing.

Sure. I'm Gabe Monroy. I'm the Lead Program Manager for Containers in Microsoft Azure. I came to Microsoft about seven months ago by way of acquisition. And yeah, I'm always in airplanes, so it wasn't much of a trip.

My name is Hong Tang. I'm the Chief Architect of Alibaba Cloud. I've been with Alibaba Cloud for more than seven years, so I've witnessed the growth from a couple of hundred developers to now more than 7,000. Thanks.

I'm Jon Mittelhauser. I run the Container Native Development Group at Oracle. I've been there about six months.

Todd Moore, a longtime IBMer and a local, so thank you all for coming to Austin. I really enjoy it when we do these events here and appreciate it. I handle open source for IBM, and in particular I work with the CNCF as the governing board chair. I'm also the chairman of the board for the Node.js Foundation.

Hi, I'm Aparna Sinha. I'm with Google Cloud. I lead product management for Kubernetes, the open source project, as well as for Google Container Engine, which is our hosted offering. I've been with Google for a little over four years, and I've spent most of my life in enterprise IT. I'm originally from Silicon Valley, or that's where I'm traveling from.

So this is a little bit of a recreation of our CNCF governing board meeting that we had yesterday, where you five are among the Platinum members who support CNCF and all of our projects at the highest level, and I appreciate that.
I guess I would just start: maybe we could begin with Kubernetes, but then, since this is also the CloudNativeCon day, I'd love to go on and talk about some of the other 13 projects. I'm curious which ones you're using today in production in your clouds, and then which ones, either current CNCF projects or prospective CNCF projects, you're particularly excited about. I think there's a lot of opportunity there. I'll just go down; maybe we could start with Aparna.

Sure. So obviously I'm very involved in Kubernetes, which takes up all of my time, and I love Kubernetes and the community. But in terms of other projects, Istio is a huge effort for us at Google, and I participate in that, and we're very, very excited about it. I think actually a decent number of talks here are about Istio, and it'll be part of my talk as well. Also the Open Service Broker, which is a collaboration with Cloud Foundry; that's an important one. And then Grafeas, which was something that we open sourced recently. Google has kind of a great open source history, and so there are many that we are involved with, but those are two that I'm particularly excited about.

Yeah, obviously Kubernetes is really important to all of us; we wouldn't be here without it. But containerd is very, very important to the things that we're doing. Istio as well, working closely with Google on Istio, and you'll see all the tracks and all the talks that are happening here. Having that mesh layer that we can depend on really is super important to all of us, I think, as we build really complex applications and want to manage policy disconnected from the application level, not having to modify applications to change policy. So really super important to us.

As everybody said, Kubernetes is sort of the heart. So in terms of released products that we're working with customers on, our core product is a managed Kubernetes service underneath. We obviously use Prometheus and offer ways for customers to do that as well.
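As a sketch of what wiring Prometheus into a Kubernetes-based offering typically looks like, here is a minimal scrape configuration using Prometheus's built-in Kubernetes service discovery. This is illustrative only, not any particular cloud's setup; the job name and annotation convention are assumptions:

```yaml
# prometheus.yml (fragment): discover pods via the Kubernetes API and
# scrape only those annotated prometheus.io/scrape: "true".
scrape_configs:
  - job_name: kubernetes-pods          # illustrative job name
    kubernetes_sd_configs:
      - role: pod                      # discover every pod in the cluster
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

The appeal for managed offerings is that the discovery side tracks the cluster automatically, so operators only opt workloads in via annotations.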
Looking forward, we also are looking at microservices and service mesh, Istio being one. We're active in the serverless working group; we announced our Fn serverless cross-cloud product. And then just today we announced an open-source multi-cluster management platform that's built on top of Kubernetes Federation, called Navarkos. It's Greek for admiral, following the same theme. So we just basically published that, and we're going to talk with a bunch of the folks here in terms of where it should fit into the Kubernetes ecosystem.

At Alibaba Cloud, our work mainly involves Kubernetes: the integration of Kubernetes as a managed service on our cloud, and also enabling people to use Kubernetes to run machine learning and HPC kinds of workloads. We're also looking into integrating with containerd, and we're looking into some of the other CNCF projects and how they can help us. Inside Alibaba Group, obviously, we also use Kubernetes in various departments, particularly in AI and training kinds of workloads.

Yeah, at Microsoft we're very focused on delivering what I like to refer to as sort of the CNCF stack of stuff and making that available to customers: things like our Kubernetes service, which we just launched in preview a little while ago. Also technology like the Open Service Broker. We actually just today announced the new Open Service Broker for Azure, which is sort of a ground-up rewrite that conforms to the new Open Service Broker spec, to sort of glue Kubernetes to other outside services. But I think what's more interesting is the degree to which the CNCF technologies have started to penetrate inside of Microsoft. We're starting to see a lot of teams; Microsoft's a big company, right? And so people are just popping up kind of everywhere who are using, hey, I'm using gRPC, or hey, I'm using OpenTracing.
And I think that's really just a good barometer of how healthy the CNCF community is, that we're starting to see that kind of adoption inside of a company like Microsoft.

Great, okay, let's see if we can make things a tiny bit more controversial. Let's talk container runtimes. So there was a very fair tweet from Vincent Batts pointing out that Michelle Noorali congratulated containerd and rkt on hitting 1.0, but CRI-O also just hit 1.0 and, as a Kubernetes incubator project, is also a CNCF project. So is it the case that each of you are using Docker today, not containerd? And then can you make a prediction, a year from now, on runtime adoption in your cloud? And actually, since it was just announced, I'll go ahead and reference Kata Containers. So I'll say Docker, containerd, rkt, CRI-O, Kata, or something else. And maybe we could start with Hong.

Sure, yes, currently we're mainly using Docker as the underlying runtime. We don't pick battles. To me, I think that the container runtime, even the orchestration, would eventually be standardized. And we think that going forward, likely we are gonna go with containerd, but we are open, really open, to whatever is available there. And also we think what's really important is, from the control plane side, API consistency. From the runtime side, I would say it's performance, stability, and cross-platform compatibility. So that's my answer.

Jon? It's basically the same answer, right? We try and be agnostic. I mean, my group within Oracle is all built on open-source, non-forked technologies. I mean, same thing Gabe basically said, which is we're building the CNCF offering for our customers. But just to be clear, all three, containerd, rkt, and CRI-O, are all open source and OCI compliant. Exactly. And we've worked on the OCI; we actually wrote a Rust implementation of OCI to demonstrate and to improve that format. So it's not something we have particular religion on or are gonna make a call on.
It's not our place to do that.

Yeah, I don't think it has to be a religious argument. So, Phil Estes from our team, you'll see a lovely talk on containerd from Phil, showing something, so look up Phil's talk. We're gonna provide what people want to have, and our current IBM Cloud is Docker and Kubernetes. And it won't be a religious argument for us; it'll be what is being asked for by the folks at large, the end-user base.

Yeah, so I think one of the best things that we've done in the Kubernetes architecture is to develop the CRI, the Container Runtime Interface. It has taken us more than a year, I think. It's the interface which allows you to plug in multiple different runtimes and switch them out depending on which one is better for your application, and we started that process in May of last year, so that there could be many different runtimes. And I think after that, we've been using Docker in Google Container Engine, because historically, we're now in our third year of the service, it's been a while. But we also have contributed to containerd right from the beginning, and so Lantao on our team is an essential part of that project. And we collaborate with the other runtimes. I'm very excited about Kata Containers; I think it actually opens up a new range of applications. And so I don't expect the future to look like the past. I don't necessarily think that it'll be one runtime; it may be different runtimes for different applications. But again, the product manager in me says that, as far as Google Container Engine goes, it's going to depend on what customers want.

Anything to add, Gabe? Yeah, I mean, you wanted a controversial question. Yeah, go for it. Who cares? Seriously, because the customers I talk to, they don't care what container runtime they're using. They want to deliver code. They want to deliver applications. I mean, we use Docker today. I think there's a lot of options going forward.
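Mechanically, the pluggability Aparna describes comes down to pointing the kubelet at a runtime's CRI socket rather than at Docker directly. A minimal sketch; the socket paths are assumptions and vary by runtime, distribution, and Kubernetes version:

```shell
# Run the kubelet against a CRI-compatible runtime over its gRPC socket.
# Socket paths below are illustrative; check your runtime's packaging.
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# Swapping to CRI-O is the same flag with a different endpoint;
# the workloads themselves are unchanged:
#   --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Because applications only ever see the Kubernetes API, the runtime behind the socket can change without workloads noticing, which is what makes the "who cares" position tenable.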
But at the end of the day, this is a commodity component in the stack. It shouldn't be doing too much. It should be boring infrastructure; we hear that term thrown around a lot. So for me, I value mileage. I value how much production use does this have, how hardened is it, that sort of thing. And as for whether customers are aware of what container runtime they're using, beyond things like billing models, and we'll talk a lot about serverless containers in a bit, hopefully: if customers care what the runtime is, we're doing something wrong. We've failed if that happened.

And I really do just, he's not here, but want to call out for a second the Open Container Initiative, which is a sister project of CNCF that's been led by Chris Aniszczyk. And the sea change here, versus two years ago, where we would get a panel up and they would argue constantly about what the future's going to be. And yeah, back to the Tim Hockin quote, this is just a fantastic situation, that this is boring infrastructure. Those projects can go compete for mind share and market share and technical improvements, but it's no longer the same kind of political battle it was.

What has mattered as far as the runtime is stability and responsiveness and inclusiveness, as far as the community that is around that project. And maybe that matters more from an engineering perspective, but it does manifest itself to customers if there are bugs and lots of patches and it's not evolving. Enterprise customers like stability. They want to see the thing that's well maintained, that's stable, that has a roadmap that they can buy into, and where they see a large developer base that's excited about working on it.

There's a metaphor in the Linux world, that the kernel developers run a Linux Plumbers Conference. And like plumbing, it's kind of boring until it doesn't work, and then you get very, very upset. So, one more try for controversy.
Austen Collins of serverless.com claimed to me last week that within two years, 75% of the applications deployed in public clouds would be serverless. So, A, give me your own percentage, and then B, to make it a tiny bit more realistic, maybe you could give some examples of decomposing what it means to be serverless, so that folks can get some of the functionality that they're excited about with Lambda in a more generic infrastructure.

Yeah, so just to answer the question: I don't know, 20%, something like that. I don't actually think event-based programming is suited to every workload. So to me, serverless is just a terrible term; it just conflates a bunch of things. I think we all can agree on that. But the best definition I've heard is three things. One is you don't see the infrastructure. The second is micro-billing, right, people want micro-billing. And the third is the event-based programming model typically associated with functions as a service. Now, what we've done at Azure is we've actually said, well, that last one, that event-based programming model, that's a little restrictive. So why don't we focus on delivering something that's micro-billed and invisible infrastructure, but use containers as the form factor, because this gives a lot more flexibility. So we released back in July this thing called Azure Container Instances; we were the first major cloud to come out with what I call a serverless container runtime. And I think this is a really interesting space. In fact, today, just at 11 o'clock, we announced this new thing called the virtual kubelet, and this is pretty fascinating, because what the virtual kubelet does is it allows you to take Kubernetes and basically have a virtual node with unlimited capacity, backed by one of these serverless container runtimes. So you get the benefits of serverless, you know, in terms of micro-billing and invisible infrastructure, you're not restricted by the event-based programming model, and you get to use the Kubernetes API
to drive it. So we're excited. This is now sort of a community effort; we've teamed up with Hyper, who is delivering one of the other serverless container runtimes. The repo is open source and available today; we're blogging about it and talking about it actively, so I definitely recommend checking it out.

I also want to argue that serverless is probably a misleading word. Now, people started using serverless as a substitute for function compute or Lambda, but I would argue, you know, the first serverless platform is really Google App Engine: you don't know anything about the server. So if you really take serverless literally, it really means cloud native; you really want to take advantage of the elasticity and on-demand billing of the cloud services. We can ask Obeda, who created Google App Engine. But let me just rephrase the question. If you're really asking about event-based computing, I would say it really depends on how you define this percentage. I would argue that most likely 75% of applications use more or less event triggering to glue things together; maybe that's a realistic number. But in terms of the computing resources being consumed, I doubt that's going to consume a lot of resources. I would still argue there would be a diversity of computing paradigms, including people still deploying to virtual machines and using containers. Even for the data-analytical workloads, which are a bulk of the computation being used, people want to use managed big data services. So in terms of computing resources being used, I doubt it's going to be a big percentage.

I think you guys see different customers than I do. One thing about my group is we're really targeted, not surprisingly, at the Oracle customers. So what we are looking at is the large enterprise customers who, frankly, are still running Java WebLogic on-prem. The transition those customers are looking for is what we call modern application development, and it's the stack we're talking about. It's CNCF-based:
Kubernetes, transition, etc. Frankly, Docker containers are pretty new for a lot of those customers. 75% of applications in the cloud being serverless is ridiculously high. It depends on what you mean by in the cloud, which I think you have to define first. If you're saying new applications being written by the guy at Stanford pulling out a credit card and getting an Azure account or an AWS account, I think a lot of those things will be architected, parts of them, in an event-based architecture like Gabe talked about. But applications in general, to start with all the massive applications out there: are we talking quantity of applications or quantity of users? My first instinct was 20%, but again, without defining the terms, it's hard to say. One of the other things Oracle thinks is important, or my group thinks is important, is that serverless, which, I agree with the consensus, is a horrible term, shouldn't be a vendor lock-in type thing. And the way it started, obviously, it became that: the primary implementation people think of is Lambda, and that is really tied in deeply with AWS. So Fn, one of the reasons we open sourced it and announced it, was as a framework by which you as a developer could create serverless, function-based programming that was cloud agnostic, just the way Kubernetes is. And you should be able to move those applications between any Kubernetes cluster: on-prem, on any public cloud, on your laptop. And we think that's a significant benefit to the end developer, which is really kind of what my group is focusing on.

So I'm the open source guy at IBM, and open is what I believe in. And for this world, I believe, again, as Jon has talked about, that we should have an open alternative that we can all get behind. To that end, we took our OpenWhisk infrastructure, we brought it out to the Apache Software Foundation, and we're actively working with the likes of Adobe and Red Hat and others to build up an essentially ubiquitous, widely available open source project for serverless. It really comes back to
the eventing, though, in the end. We all have this multitude of serverless platforms that we'll have, but what is important is to get to the eventing model and some specifications and things that allow for events to happen, triggers to happen, policies to be followed, and work to be done, and that people will use that technology as appropriate to orchestrate and run the applications that they're building and running. But it won't be the thing that everybody does; it'll be a big part of their portfolio of how they go about doing work. And with some success, and of course Lambda, I get to plug Node.js now. So Lambda and others and ourselves really came together around Node.js as the way of going and doing that. So anybody who's looking at serverless also then needs to go look at Node.js as how they're going to put that together and use it, and I think there's a great synergy there: small, tight, quick startup, run, do things. And that's what we see. And you know, currently our world with OpenWhisk is based on Docker, but we're working quite nicely with the Kubernetes team now to look at moving over onto Kubernetes, and I think the future is bright for serverless, and there'll be many light bulbs.

Excuse me, I think I didn't get Jon and Todd pinned down on that percentage number on their clouds in two years. I gave you my answer, didn't I? I mean, I said 20%, depending on how you define a lot of terms. In terms of what he said, though, I think it is worth pointing out, in the context of CNCF, that the serverless working group is working on the open events specification; I believe all of us are taking part in that.

I'll give you a counter prediction: I think that 80% of public cloud will run containers in three to five years, and I hope that the majority of that will be running Kubernetes. I think the reason for that is because Kubernetes is open and it runs anywhere, and I think that you can apply that principle to serverless as well.
I think Hong mentioned that App Engine is the first serverless offering in the cloud, and that's true. The benefits of serverless are really when you don't get charged for when you're not running, and you can quickly scale to zero and quickly scale up. Those benefits are difficult to realize on-premise, and I think that's a limiting factor: a lot of enterprise IT is on-premise, and so if you have a paradigm which doesn't work in a hybrid environment, then that can limit its adoption. The other piece is that much of serverless today isn't open, and I think this is where OpenWhisk and Kubeless and some of the other visions and frameworks that are coming up in the community are going to fill that gap of being open. I think if serverless can be open and have an analog on-premise, then it has a higher probability of adoption in public cloud. I think 75% is aggressive because of that.

One other thing I'd point out is that the openness is not just so that we can all feel good that we're using open source. Part of the reason here is that functions are going to live in an ecosystem that's heterogeneous. Functions are going to be talking to containers, talking to VMs, talking to legacy stuff, and ideally, if we can get it all running on Kubernetes, as Aparna is suggesting, we can allow for coherent network policy management and things like that to actually work across a common compute substrate. I think that's really important to realizing the vision of serverless going forward.

The network side is the hard part on this. I think functions is very compelling for IoT, and we have to see kind of how that evolves as well. App Engine is very compelling for a lot of different types of applications; we know that. Serverless is a category of making infrastructure boring. So is microservices, and Istio and service mesh and some of the things built on top of that. A lot of what we're worrying about is how you abstract that away. Developers don't care what cloud they're running on.
They don't care what infrastructure they're running on. What they care about is what gives them the characteristics they need, whether that's data locality or security or performance or any of that. They're writing applications, and then obviously we as cloud providers compete on those other factors, but the end developer is specifying the characteristics their applications need.

All five of you offer a managed Kubernetes service in the cloud. With Todd there's a sister service, IBM Cloud Private, as he was saying, on bare metal. And so, I guess, although as a cloud service you might prefer enterprises to just move everything up, could you talk for a second about your hybrid cloud strategy when an enterprise says, I'm only willing to move some of my workloads into the cloud?

It's more than that. Enterprises are also not willing to go and re-engineer legacy applications, and that thing that lives in the corner that is ancient, that IBM probably helped you go and build and define a long time ago, right? But with the technologies that we have, we're able to front-end those and turn them into useful services now that they can depend on, and you have to have that available to the enterprise.
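One small, concrete way to front-end a legacy system like that from Kubernetes is an ExternalName Service, which gives in-cluster applications a stable, cloud-native name for something running outside the cluster. A sketch only; the service name and hostname here are made up for illustration:

```yaml
# In-cluster apps resolve "legacy-billing" like any other Service,
# while cluster DNS returns a CNAME to the legacy host outside it.
apiVersion: v1
kind: Service
metadata:
  name: legacy-billing                     # illustrative name
spec:
  type: ExternalName
  externalName: billing.corp.example.com   # illustrative legacy endpoint
```

If the legacy system is later re-platformed into the cluster, the Service can be swapped for an ordinary one without clients changing their configuration.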
It's just too difficult to go re-engineer everything cloud native. So our strategy is to enable the enterprise user base to continue supporting the things that they have to do, put the APIs in front of what they have, but then also have a way to develop in your own private cloud with the exact same set of things that you can find in the public cloud, and be able to migrate workloads as you need to. So in this problem space, where you don't necessarily have the ability on-premise to take advantage of what's there, you can take that workload and move it over into a public cloud environment, using Kubernetes, using containers, and keep on going. And this is something that we really embraced ourselves internally as well. Even Watson, everybody hears about Watson, right? Watson is all, guess what, running in containers on Kubernetes now. The Watson services are turned over every 24 hours, and they pick up the very latest updates out of all the open source code that's out there, and the code clearance is continuous integration, continuous deployment: 2,000 packages, 95% automated, just clearing them as they go through. And it runs and keeps going, and every seven days those servers just get restarted and continue, right? It's a lovely way of doing things, lovely.

Can we go the other way, to Aparna, please? Sure, yeah. So I actually have a talk on hybrid on Friday at 11, so please come to that. I think that our open source initiatives, obviously Kubernetes, Istio, the Open Service Broker, Grafeas, they're essential to enabling hybrid. The customers that I talk to who want hybrid tend to be really large enterprises; they have large IT teams; they have very much a DIY kind of operation. And so the more we can give them to run open source software on their premises, in a way that is consistent with Google Cloud and with other clouds, the more we enable hybrid. Gabe, do you want to go ahead?
Yeah, sure. Microsoft has a lot of experience dealing with enterprises, been around for 20-plus years doing that, and one of the things we heard when we were planning out our cloud development strategy was that things like Azure Stack, and the ability to have a cloud in a box that you can run in an environment, are important to folks. Folks run in a disconnected environment like a cruise ship, or have a retail operation where there's lots of different branches. And so we're committed to having the same set of cloud APIs you can use up in the public cloud, the hyper-scale environment, and getting those same APIs sort of in an edge form factor, running inside of a branch location. It's part of our strategy, customers love it, and we're actively working with folks using Kubernetes in those environments.

So to us, we do believe that in the longer term, the majority of computing will be happening in the public cloud data centers, just like today the majority of the electricity is generated by those big power plants. But we do recognize that hybrid cloud is here to stay for probably quite a foreseeable future, and we think it's not all or nothing; it's really different shades of how much you mix the on-premise cloud and the public cloud. It could be 0% to 100%; it's a long way. And also, we think that cloud native is not a religion we want to force upon the customers. We really think it's a choice; people only want to pick that choice when it helps them improve their development efficiency, save costs, and improve their agility. And in terms of the offerings, we actually offer a multitude of solutions to help them through that journey. So on the public cloud, we want to embrace open source as much as possible, so that we run as much open source software as managed services as possible, so that when customers already have their applications using open source in the on-premise environment, when they move to the public cloud they don't see much friction. So those open source workloads could
run. And in fact, some we developed on our own. Secondly, we also provide a product called Express Connect, which essentially links your on-premise network with the VPC, so that for those services that cannot be exposed to the public network, you still can run those components together as a single application. And thirdly, we also provide a bare metal service, I think a lot of providers also do that now, so that it's easy for you to take your applications currently running on physical machines and run them on bare metal in the cloud. And also we provide a shrink-wrapped version of our Apsara software; we call it Apsara Stack. It's essentially the same code base, only tailored for a smaller-scale deployment. And we also work with some of those enterprise solution vendors, so that we can integrate our storage gateways with some of those storage appliances, so that they can use the cloud as a backup, and sometimes when disaster happens, they can use the cloud as a fallback. And gradually they would realize more about the value of the public cloud, and then they would obviously move into the public cloud.

And Jon, can you finish up for us quickly? Yeah, I mean, similar to everybody else, Oracle has a cloud system offering that is basically... I think we probably have the most on-prem software still running of this group, right? I don't think so.
Windows Server. Windows. So the advantage of Kubernetes is that it is that abstraction layer. Applications written for Oracle's Kubernetes engine also work on on-prem Kubernetes, and also work on the other public clouds. You know, I mentioned the multi-cluster management platform; that is one of the ways we believe in being application aware: hey, I need this to run on-prem, I can scale it up as load demands, I can run it across regions. You know, again, sort of abstracting away the application knowledge from the infrastructure. On-prem is just a set of characteristics. It may be caused by security or data locality or cost, or, you know, because it's already a sunk cost. It's a characteristic that I use to decide where I'm running a particular application I developed, a particular workload. We should support that as well as everything else.

Well, and I do just want to mention, several of you referred to it, but the Certified Kubernetes program, which all of you were launch partners on, all five of your clouds, and have also been supporting as Platinum members of CNCF, I think is really a core part of ensuring that cloud portability. It's a core value for the foundation, right, that we ensure portability, and that's our mechanism to work that process: certified, conformant Kubernetes allows you to run your applications anywhere. Okay, well, that's our time. Thank you all so much for coming out. Thanks, everyone here.