All right, good afternoon, everybody. Thanks for coming to our session. This panel is about enterprise adoption of Kubernetes and containers. My name is Chris Hodge. I work for the OpenStack Foundation, on the marketing and trademark program as well as community relations.

We have a fantastic panel here today, and we'll just start from here and work down. Our first panelist is Adrian Otto. Adrian's been part of OpenStack since the very beginning. He spent 10 years at Rackspace and has been to every single OpenStack design summit and conference. In that time he's been the PTL of several projects, including Magnum, the container orchestration project for OpenStack. Everyone, please welcome Adrian.

Our next panelist is Jonathan Cheng. He leads a team of cloud solution architects at Comcast. Comcast was one of the early adopters of OpenStack, deploying it at real scale — if you're a Comcast viewer, you've probably seen content delivered over their infrastructure. He's definitely not an expert on containers or orchestration, but he is really excited about sharing his company's experiences and direction, and about helping you make decisions on how you want to deploy your infrastructure. So everyone, give Jonathan a welcome.

Tony Campbell is the director of educational services at CoreOS, where he's responsible for training, certification, and general education around Kubernetes and CoreOS. He's been a member of the OpenStack community since 2011, when he attended his first OpenStack summit in Boston — so this is a nice homecoming for him, and he hasn't missed a single summit since. Before CoreOS he was at Rackspace, the company that co-founded OpenStack with NASA. He's excited to be here to talk about enterprise Kubernetes and enterprise OpenStack.

And our final panelist is Bich Le. Bich is a co-founder of Platform9, an OpenStack cloud provider in our ecosystem. Not only do they provide OpenStack cloud services, they also provide an integrated Kubernetes service, so Bich will be able to share his experience providing public cloud services on both OpenStack and Kubernetes. Everyone, welcome Bich.

OK, one last thing before we begin the panel and start asking the panelists some questions. We've set up an Etherpad for this session. If you're new to OpenStack, Etherpad is a way to take collaborative notes on a session. If you go to the URL at the top of the screen, you can add your own questions, comments, and notes, and they'll be available after the panel and the conference are over too. So without much further ado, let's begin.

Let's start by setting the stage a little bit. I'd like all the panelists to share their experiences with OpenStack and Kubernetes adoption so far. I guess we'll start with Adrian.

Hello, everyone. Thanks, Chris, for the great introductions. Kubernetes has got to be the hottest buzzword since OpenStack. I could go on, but there's a reason for it: people are really excited about the innovation around containers and the related technologies. And once you start adopting a microservice architecture for your applications, you start to discover that there's something missing
if you keep thinking the way you've been thinking about cloud for the last seven years. What you'll find is that there needs to be something that coordinates all of those microservices together, and Kubernetes is really good at doing that, which is why it's so popular.

So Comcast has been doing OpenStack since the early years. We're one of the larger deployments in the United States. Part of the reason we deploy OpenStack is that it takes really good advantage of one of our biggest strengths, and that strength is our network. Now, when you put containers and orchestration on top of a very robust network and potentially virtual or bare metal infrastructure, it starts making sense to deploy both. So that's our story. Our video IP team has a fairly large Kubernetes deployment on bare metal servers, using Docker containers, for our cloud digital video recording service and encoding service. We'll probably talk a little more about that later.

Got the mic. So I'm with CoreOS, and for those of you who may not know about CoreOS, we provide self-driving infrastructure for Kubernetes. If you're running a Kubernetes cluster, things like etcd, rkt, and Flannel were invented at CoreOS. We've begun to see a lot of enterprise customers coming to us who are interested in Kubernetes, but interested in running it on top of OpenStack, so we've begun development on installers and products to allow it to be run on top of OpenStack. Before CoreOS I was at Rackspace, so I breathed OpenStack all day every day. It's nice to be able to keep working with this awesome community in my new role.

Hello, everyone. I'm Bich Le. Platform9 started as an OpenStack company. We provide a managed offering for OpenStack — we call it SaaS-based — and we were one of the first to do this. For some reason, it seems to be a very popular delivery model now; I think this morning you heard about a remotely managed OpenStack, and that's what Platform9 is all about. We started our Kubernetes journey based on pure customer demand. A large customer came to us one day and told us: we've decided to write all of our new apps against the Kubernetes API, we're betting the company's future on Kubernetes, what do you have to offer? That's when we decided we'd better get into this space. Today we provide OpenStack and Kubernetes side by side. Customers can run them together, or just one of the two stacks. So we provide choice, and I can get into later why we offer them side by side as opposed to one on top of the other. Thank you.

And I actually think that brings us to a really good question, one I know I get a lot. I'm very involved with the Kubernetes community, and oftentimes people will ask me: why would I want to run Kubernetes on OpenStack? Why wouldn't I just run Kubernetes on bare metal? With the wealth of experience on this panel, can everyone share their thoughts on Kubernetes on OpenStack versus Kubernetes on bare metal?

Yes. When you have a toy application, you can run it on anything you want. When you have a real application that needs to manage infrastructure, you realize that container orchestration software like Swarm or Kubernetes or Mesos is not designed to manage infrastructure to the extent that a system like OpenStack is. These are complements, not substitutes.
I think of them as a set that overlaps in the middle. You've got infrastructure as a service over here and container orchestration over here; there's a bit that overlaps in the middle, but they support each other. For example, if you want to dynamically create storage volumes or networks, connect them up to your applications, and have that all work well, you can have some trouble using Kubernetes by itself, because the driver ecosystem hasn't developed yet to the point where you have lots of supported choices for getting those things done. OpenStack in combination with Kubernetes gives you the capability to do both of those things.

Yeah, I totally agree. Today, Kubernetes is optimized to run on public clouds. If you try to run Kubernetes on-prem and deploy one of the examples in the Kubernetes repo, a lot of those examples will fail, because they'll try to allocate a persistent volume, or they'll expose a service that needs to be reached externally through a load balancer. When you're doing Kubernetes on-prem and you're missing all of those pieces, the app just isn't going to run. So I agree with what Adrian just said: those missing pieces — the infrastructure, the storage, the load balancers — are where I think OpenStack can fill the gaps.

It's definitely a common question that comes up, and I think you have to look at the application you're trying to deploy in this environment. If you're super sensitive to performance, you might say, hey, I'm going to go bare metal and get rid of the VM overhead. But if your app has some tolerance on performance, there are a lot of great benefits to running on top of a cloud, particularly when it comes to adding nodes and other things that are simply automated. We're all used to working with infrastructure through cloud APIs and whatnot, so being able to do that on a cloud provider is very compelling. If you drop down to bare metal, yeah, it's possible — you've just got to do some more plumbing.

So we do run it on bare metal, for the reasons Tony brought up. Encoding is a very CPU-intensive process, and we run very specific hardware to do it — imagine multiple nodes inside a 2U form factor. It wasn't a simple task, being on the leading edge of that kind of thing. The VIPER team started this effort more than two years ago.

Can you describe some of the challenges you faced? Are they what the others have described?

Essentially all the things that Tony and Bich talked about — load balancing and networking — and, as Adrian said, the general immaturity of those layers in Kubernetes today, and having to invent those things and learn by doing in production.

So building on bare metal, did you have to build those solutions yourself, or did you wind up drawing from other open source projects?

Definitely drawing from other open source projects, but at the state it's in, you're right, there's a maturity issue. There's also a benefit to be gleaned from running on bare metal, so that was the trade-off.

So what if there were an OpenStack project that allowed you to run Kubernetes on top of bare metal infrastructure? Would that be something you'd be interested in? Magnum 101 is this afternoon.

Yeah, so what Adrian is talking about: Magnum is an OpenStack project designed to run Kubernetes on top of an existing OpenStack infrastructure.
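To ground the storage point from a moment ago: when Kubernetes runs on OpenStack with the (at the time, in-tree) OpenStack cloud provider enabled, volumes can be provisioned dynamically from Cinder. Here is a minimal sketch using the official Kubernetes Python client — the class name and Cinder volume type are illustrative assumptions, not anything the panel named:

```python
# Sketch: a StorageClass backed by Cinder, plus a claim that triggers
# dynamic provisioning. Assumes the cluster was brought up with the
# OpenStack cloud provider configured.
from kubernetes import client, config

config.load_kube_config()  # uses your local kubeconfig

client.StorageV1Api().create_storage_class(
    client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="cinder-gold"),
        provisioner="kubernetes.io/cinder",  # in-tree Cinder provisioner
        parameters={"type": "gold"},         # maps to a Cinder volume type
    )
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default",
    body=client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="cinder-gold",
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    ),
)
```

The load-balancer gap closes the same way: a Service of type LoadBalancer can be satisfied by the OpenStack provider's Neutron LBaaS integration rather than failing on-prem.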
Traditionally Magnum has run Kubernetes on top of virtual machines, but the newest releases are also tuned to run on top of bare metal.

Yeah, it's agnostic to the actual Nova instance type — by default it's whatever your Nova driver gives you.

Right, but it will still take advantage of the network infrastructure that OpenStack provides, and your storage infrastructure, and those sorts of things.

So having said all of this, now that all of you have run Kubernetes in production, what is that experience like? What is it like to run Kubernetes in an enterprise environment where you're providing services, in terms of scaling, stability, and even security?

Yeah, I'll tell you — I've told this to my team — the thing I love about Kubernetes, and I'm simplifying, is that a lot of it just works, which I thought was really amazing. A lot of times you grab some open source software and you have to do a lot of hacking to get it to do what you want. Kubernetes has its issues, but the vast majority of it works. And I'll confess I am a software developer by trade, so I hack on software, and Kubernetes allows me to focus on my application and use the APIs. I don't have to get waist-deep in infrastructure, where I'm not as strong; it lets me do what I need in a very simple way. So I love it for that reason, and I'm super excited about seeing how it works with OpenStack, and maybe Magnum, and pulling that all off.

As far as enterprises go — sorry, I'll be quick — Kubernetes is not for every application. In the case of what we're doing for encoding and DVR, it makes sense: it's a single tenant in 10 different data centers. We're not trying to deploy a variety of applications; it's a very simple, one-dimensional application. But then you start thinking about other workloads, like people wanting to do Hadoop or YARN, and you get these inception problems where you're trying to build a scheduler on top of a scheduler. So it's not quite there for everything yet. That's my enterprise take on it.

Yeah, so the question was about running applications on Kubernetes in production, and there are several topics and angles we can look at. Let me bring up one angle, and that is what it takes just to operate a Kubernetes cloud in production. Our experience with OpenStack was that it's pretty hard to keep a cloud running, right? And for some reason we thought Kubernetes would be simpler, more lightweight, easier — but it turns out it's just as hard. Kubernetes is a deeply layered stack: storage, network, the Docker engine, and then the Kubernetes layers themselves. Things can break, and they do break, and when they do, pinpointing the problem is actually pretty challenging. So maintaining the health of a Kubernetes cloud poses challenges just like it does on the OpenStack side.

Yeah, complex systems are complex.

No doubt. One of the cool things coming out of the Kubernetes community, though, is operators: taking the operational expertise needed to maintain, scale, and upgrade a system, and building it into code, so you can treat that expertise just like you treat your applications in Kubernetes. I know that's still early on, but I love the way this community is pushing the envelope in that direction.
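For a feel of what that means in practice, here is a toy sketch of the watch-and-reconcile loop at the heart of the operator pattern, using the official Kubernetes Python client. The namespace and label are illustrative assumptions; a real operator, like CoreOS's etcd operator, watches a custom resource and drives the cluster toward its declared state rather than just printing events:

```python
# Toy operator loop: watch labeled ConfigMaps and react to changes.
# A real operator would compare observed state with a desired spec here
# and create/update/delete resources to converge (resize a cluster,
# roll an upgrade, take a backup, and so on).
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

for event in watch.Watch().stream(
    v1.list_namespaced_config_map,
    namespace="default",
    label_selector="app=demo",   # illustrative label
):
    kind = event["type"]         # ADDED / MODIFIED / DELETED
    obj = event["object"]
    print(kind, "configmap/" + obj.metadata.name)
```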
But I think this also brings up an interesting idea. In the last year or so — with Stackanetes being one of the leaders here — we've seen more and more projects and more and more enterprises starting to use containers as a deployment strategy for OpenStack. With that in mind, what can the Kubernetes and OpenStack communities learn and leverage from one another? It seems like there are opportunities to learn from each other and use each other's technologies to ease these burdens of upgrades and management, and to keep services alive and available so that all of your users get a seamless experience.

That's a tough one. When you have a cloud-native application and you want to upgrade it, you can do a rolling upgrade across the application. And if your infrastructure is arranged in a distributed way and has versioning support built in, you can do the same thing there. But it turns out it's not as easy as it sounds. The Magnum team has been trying to implement in-place upgrades for existing clusters for over a full release cycle, and we'll be going into the next release cycle still figuring that out for all the different drivers Magnum supports. What works really well now is this: if you take the YAML descriptor — the description of your application, the pod file — and use it to deploy your application, you can create a new cluster, deploy your app into the new cluster, and that can be your upgrade strategy. But if you don't have the capacity to move from one cluster to another, and you need to upgrade in place, that requires a more complex upgrade strategy. The good news is that recent Kubernetes versions have added support for doing that as a built-in feature, so this problem is getting less and less dramatic over time. But it's still not snap-your-fingers-and-you're-upgraded. You need to do it in a certain order, and you need to understand what you're doing in order to successfully upgrade a cluster.

To piggyback on that: upgrades are really important, particularly from a security perspective. Keeping things updated is really important for keeping your clusters secure, so we have to streamline that process, and there's a ton of work being done in the community. I know for CoreOS, one of the things we're mainly focused on is being able to push updates out to these clusters to keep them patched and secure.

Yeah, I agree with all of those points. Today the best-practice way to upgrade a cluster, I think, is to do a rolling upgrade of the nodes. That performs a clean drain of each node, so that pods get terminated and rescheduled somewhere else — which is great for applications designed to tolerate node failure. I don't know if we're going to get into application compatibility and what kind of stuff runs well on Kubernetes, but not all applications are designed like that, so depending on how you've built your application, it could experience some downtime. I believe Kubernetes and the kubelet are being evolved to allow pods and containers to continue running across an upgrade, so perhaps that will result in better uptime for applications.

Well, it seems like running OpenStack on top of Kubernetes is almost a perfect stress test, because it's highly stateful.
You can't get around it if you're managing machines and data and databases — there's so much state there that, to me, you can't run these services successfully without managing that state. And yet somehow there are projects out there running OpenStack on top of Kubernetes, finding ways to use Kubernetes strategies for managing and upgrading OpenStack services. And this morning we saw another project attempting to manage Kubernetes clusters across many different cloud providers.

In terms of running OpenStack on Kubernetes, there are a lot of projects that attempt to do that — Kolla, Stackanetes — and it's harder than it sounds. We've also been on a journey to migrate our OpenStack deployment process to run on Kubernetes. We've started, but we're nowhere near the end, and one of the challenges is that when people think about an application, sometimes they think about just the program that runs. Where we found the real challenge is in how the application is configured. If you've invested a lot in custom tools, scripts, or Ansible playbooks to configure your application, containerizing it requires you to rethink how you do that configuration, and that turns out to be one of the biggest obstacles we've faced.

It might be worth it, but I want to revisit a point you made earlier, Bich: when things go wrong in a very complicated system, you might have only one or two human beings in your organization who understand it well enough to diagnose and successfully solve the problem. So I would be cautious about recommending that somebody run OpenStack on Kubernetes — this Kubernetes sandwich thing.

I think it's interesting, because one of the premier Kubernetes providers out there would be Google — Google Container Engine — which runs on top of a virtualized infrastructure, which runs on top of a containerized infrastructure. One of the questions that came in from the audience here is: what is the VM overhead? But more generally, what is the complexity overhead? One answer is that you can wind up with competing schedulers. So for stacks that are meant to help us manage complexity and simplify it, how do operators manage the complexity of merging these two systems together?

Well, what I've seen is that they don't attempt to autoscale the infrastructure layer, so they're not exercising both schedulers simultaneously. They pre-allocate resources and then let Kubernetes manage the static resources that have been allocated to it. They don't try to bin-pack and schedule the application layer and the infrastructure layer at the same time, because the two aren't coordinated tightly enough. A lot of people want to do that, because there's a potential cost savings, especially if you want to do this on top of a third-party cloud. But I just don't see it in production today. Have you seen it?

No, I agree. The infrastructure layer is still manual, including upgrading the infrastructure layer. The application layer, yeah, it's totally API-driven and everything, but when it comes to adding nodes to my cluster, that's still manual.

Personally, Comcast has had enough challenges with OpenStack that running a sandwich is not a good premise for us.

Yeah, and it's tricky, I think, because Kubernetes has nailed stateless apps, right? Got that, easy peasy. And now we're working on stateful apps.
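For concreteness, the stateful primitive in question is the StatefulSet: each replica gets a stable identity (web-0, web-1, ...) and its own PersistentVolumeClaim, which is what makes running databases on Kubernetes plausible. A minimal sketch with the official Python client — the names, image, and sizes are illustrative:

```python
# Sketch: a 3-replica StatefulSet where every pod gets a stable name and
# its own volume, provisioned from a volume claim template.
from kubernetes import client, config

config.load_kube_config()

client.AppsV1Api().create_namespaced_stateful_set(
    namespace="default",
    body=client.V1StatefulSet(
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1StatefulSetSpec(
            service_name="web",  # headless Service providing stable DNS
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.21",
                        volume_mounts=[client.V1VolumeMount(
                            name="data",
                            mount_path="/usr/share/nginx/html",
                        )],
                    )
                ]),
            ),
            volume_claim_templates=[client.V1PersistentVolumeClaim(
                metadata=client.V1ObjectMeta(name="data"),
                spec=client.V1PersistentVolumeClaimSpec(
                    access_modes=["ReadWriteOnce"],
                    resources=client.V1ResourceRequirements(
                        requests={"storage": "1Gi"}),
                ),
            )],
        ),
    ),
)
```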
You have things like StatefulSets, but it's still early days with those technologies, so there's still work to be done. It is doable — there are people doing it, running databases and whatnot — but it's not for everyone.

So with StatefulSets we're building state into Kubernetes, breaking some of our cloud-native principles in doing it, but it's part of where the future is going. And as we get closer to the end, I want to get a sense of everybody's thoughts on what direction the future is heading, and where you see the relationship between these sometimes competing, sometimes complementary technologies.

I think if you've got a cloud-native application that's greenfield, you're going to put it on Kubernetes and it's going to be great. The trick comes once you've got all of your greenfield applications happily running cloud-natively, stateless or not. Then you run into this giant long tail of traditional applications in your infrastructure, you're tempted to bring them into this new world, and you start to question whether they fit or don't fit. I think there's a huge opportunity to make it easy to lift those old applications in — and when you do, you shouldn't expect them to behave like cloud-native applications. They're still going to behave like legacy applications; they'll just be slightly more portable.

Looking to the future, one interesting trend I'm observing involves those legacy virtualized applications. Some people are asking: hey, we love the Kubernetes API — would it be possible to use the Kubernetes API to manage my old VMs? We were joking about it a few weeks ago, but then we started looking around, and there are several open source projects that have started on this. At the last KubeCon, Red Hat gave a very intriguing presentation and demo about how to model virtual machines in Kubernetes using things like third-party resources. That's a potentially very interesting development for bridging the legacy apps and the new stuff.

Yeah, for us, our adoption of Kubernetes is going to be very tightly coupled to its integration of IPv6, and right now we feel that's fairly nascent. As far as greenfield goes, that's about the extent of it for us: there will be greenfield, and there will be IPv6 — not in the sense of IPv6 merely replacing v4, but really taking advantage of all the capabilities of v6 in a very v6-native way.

That's interesting, because sometimes it feels like Kubernetes is very IPv4-centric — the idea that you have your local host networking and your NATing and so on. Do you see improvements in the technology there, between sort of shuffling data packets around versus directly addressing your application containers?

We hope so. We see our network becoming v6-only in the next few years; we're making extreme strides toward that. Right now the community is accelerating on a lot of fronts, but this is an area where we think there's a lot to be done and a lot of benefit to glean. And the reason I bring it up is that there are a lot of nodding heads out there — maybe we can join forces and accelerate that aspect of it as well.

So for me, I think the future looks a lot like operational expertise being captured in code. That's a lot of what we're doing around operators.
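As one concrete flavor of operational expertise captured in code, the "clean drain" from the upgrade discussion earlier can itself be automated. A minimal sketch with the official Kubernetes Python client, assuming a local kubeconfig; the node name is illustrative, and on older client versions the eviction body class is V1beta1Eviction rather than V1Eviction:

```python
# Sketch: cordon a node, then evict its pods so the scheduler moves them
# elsewhere -- the programmatic equivalent of `kubectl drain` before a
# node upgrade. Node name "worker-1" is illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = "worker-1"
# 1. Cordon: mark the node unschedulable so no new pods land on it.
v1.patch_node(node, {"spec": {"unschedulable": True}})

# 2. Evict each pod on the node. Evictions (unlike plain deletes) respect
#    PodDisruptionBudgets; a production drain would also skip
#    DaemonSet-managed and mirror pods the way kubectl does.
pods = v1.list_pod_for_all_namespaces(field_selector="spec.nodeName=" + node)
for pod in pods.items:
    v1.create_namespaced_pod_eviction(
        name=pod.metadata.name,
        namespace=pod.metadata.namespace,
        body=client.V1Eviction(
            metadata=client.V1ObjectMeta(
                name=pod.metadata.name,
                namespace=pod.metadata.namespace,
            )
        ),
    )
# 3. Upgrade the node out of band, then uncordon:
#    v1.patch_node(node, {"spec": {"unschedulable": False}})
```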
The idea is to make it so an operator doesn't have to get up in the middle of the night for a maintenance window or an upgrade — it's all done in code. One of the steps we're taking down that path is how we install Kubernetes itself: Kubernetes on Kubernetes. We're actually running the control plane within Kubernetes, which is this crazy inception sort of deal, but there's a—

Not the kubelet?

Not the kubelet. Yeah, so it's the control plane, and we use bootkube to pull that off. That allows us to scale API nodes and schedulers just like we would scale other apps, and it lets us walk toward building out these operators — for things like Prometheus or etcd — so we can programmatically upgrade them and push updates to them. So I think that's where the future is heading: more and more apps like that, up the stack.

So in the last few minutes we have, a few more questions from the audience. Someone from The New Stack asked: is there a sense of relief that Kubernetes is filling an application role that maybe OpenStack isn't? Any thoughts on that — on these as complementary technologies, and on how they compete?

OK, I'll bite. I do love the way the OpenStack community and the Kubernetes community are collaborating with each other. I know there's a lot of talk out there about them competing, and one destroying the other, and things like that, but that's not the approach I see the two communities taking, which I think is extremely healthy. You kind of alluded to it earlier, Chris: there are a lot of things the Kubernetes community can learn from OpenStack, and a lot of things OpenStack can learn from the Kubernetes community. I do know, as a software developer, that Kubernetes makes things very easy and the interface is very simple — but I don't have to take care of it at night, and I don't have to maintain it. So I'm sure someone coming from an operator perspective has a lot more to say about OpenStack and the strengths it brings to the picture.

I, for one, am relieved. I founded an OpenStack project called Solum, which was pre-Kubernetes and was designed to try to address this gap — and it's a tough gap to fill. I think that if OpenStack expanded its ambitions to try to solve all of those issues on top of all of the infrastructure-related ones, its focus would be diverted too much. I didn't always feel that way, but I feel that way now, and I'm glad.

And I think you see it in the architecture of Kubernetes too, where so much of it depends on providers — on clouds — to provide services: recognizing that you have strengths, that you can play to those strengths, and that you can hand things off to other experts who are also providing value to the community. Having grown a lot of my career in the OpenStack community, and now beginning to work more and more with the Kubernetes community, I'm excited about the possibilities of moving forward together. And with that, I think we've reached the end of our panel discussion. Are there any final words from our panelists as we close out this session?

Yeah — if you'd like to know more about the Magnum project, there's a Magnum 101 session later today.

No, I think this was a great discussion.
Many years ago at these Kubernetes conferences, there was always a joke that we don't want Kubernetes to turn into OpenStack — referring, I think, to the breadth and depth of the ecosystem and just the size of it. But guess what: the Kubernetes ecosystem today is evolving into something just as big and complex. So the two ecosystems are overlapping, and that truly is happening. It's unclear how it's all going to end, but that's what I'm observing.

Kube-cuddle or kube-C-T-L?

C-T-L. I say kube-control.

Yeah, there are a lot of parallels you can draw — agreeing with Bich — between how OpenStack started and how Kubernetes is being adopted right now. I think I heard Amit Tank call it developer hubris: we see a lot of decisions being made based on the new hotness, and not so much on whether one technology fulfills capabilities that another doesn't.

All right, well, thank you everybody for coming to this session, and a big round of applause for our panelists. We hope you have a fantastic week. Thanks for coming.