Oh, it's on now. Hello! Wow, oh my gosh, I can't really see you, these are really bright lights. I can see the first two rows. You're very beautiful, thank you for sitting up front, appreciate that. I'm Lisa. I met a few of you; I always like to know who's in the room and who we're speaking to. Come on in, there are some seats in the middle and in the front. It sounds to me like we have a mix of people in the room, but I'm just going to ask, because I didn't get to talk to everybody. Show of hands: who here is a software developer? Okay, how about operator? Ooh, good. And architect? Good. And what did I forget? Goat herder? No one? Oh, one, excellent. Okay, cat herder? No, no, that's my job, and I herded some awesome cats for this panel. Look at me grabbing all the keynote people, oh my gosh. I mean, a main stage isn't big enough... our little main stage, I guess. I don't know what I'm trying to say; my brain is really in some other time zone.

Like I was saying, I'm Lisa, and I have been with OpenStack for many years and also Kubernetes from the beginning. I live in the San Francisco Bay Area and I run the OpenStack user group there, and now the Cloud Native Computing Foundation's largest user group as well. Having done 19 meetups on Kubernetes in the last two years, I've gathered a lot of information from the community and from a lot of customers. I work at Portworx; we also have a lot of customers. We help people run stateful applications on Kubernetes, which is a really fun set of day-two challenges, fun problems to solve. So I've been in the Kubernetes world for a while and I've been in the OpenStack world even longer, and I have gathered some of my absolute favorite people on the planet. Those of you who looked at the program and saw Tony Campbell: Tony sends his apologies, he could not make it to Berlin. So it's not a backup plan, it's probably an upgrade, and Tony's going to watch this and he's going to agree with me. But Joseph wore a CoreOS shirt just in Tony's honor, I noticed. I had to recognize him. Yeah, I saw that. Joseph's actually with Adobe, not CoreOS, but we all love that shirt. That was what, three companies ago? Cross that out, put Red Hat; cross it out, put IBM. Okay, all right, I'm going to let these guys introduce themselves. Joseph, how do you start?

Sure, I'm Joseph Sandoval and I am with the Adobe Advertising Cloud. Everything I'm telling you today is my own opinions and experience. This is actually my second go-round with the company I'm currently at. Previously I was at a company called Lithium where we did OpenStack, but if you were at the Tokyo Summit, you probably saw us in our early, nascent attempts to run Kubernetes on top of OpenStack, and so I had a lot of lessons learned from that, as well as currently at the Advertising Cloud. We're also now deploying Kubernetes along with OpenStack.

Hi everyone, I'm Mohamed. I'm a big part of the OpenStack community. I'm the vice chair for the OpenStack Technical Committee, and community-wise, I'm the project team lead for the OpenStack-Ansible project, a project that allows you to deploy OpenStack. On the commercial side, I'm the CEO of VEXXHOST, a company that provides public and private clouds and consulting services, and we've had a lot of experience in helping our customers deliver Kubernetes on top of OpenStack. So really, how to bridge the gap and make it easier to get Kubernetes deployed on OpenStack using things like Magnum, but also making Kubernetes better integrated with OpenStack, such as with the OpenStack cloud provider, which allows you to create things like Cinder volumes for your persistent volumes, and to use load balancer resources by integrating them with Octavia. So a lot of my experience comes in how to move away from having two systems working independently, and toward making them work together, so that it makes operators' lives a lot easier and makes deploying Kubernetes a lot easier for them, because on its own, sometimes that's quite the task.
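To make that integration concrete, here is a minimal sketch of what it looks like from the Kubernetes side, assuming a cluster wired up with the OpenStack cloud provider. The resource names are illustrative, and newer deployments would use the Cinder CSI driver rather than the in-tree provisioner shown here.

cat <<'EOF' | kubectl apply -f -
# StorageClass backed by Cinder (in-tree provisioner of that era;
# newer clusters use the cinder.csi.openstack.org CSI driver instead)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-default
provisioner: kubernetes.io/cinder
---
# Claiming a volume here creates a Cinder volume behind the scenes
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-default
  resources:
    requests:
      storage: 10Gi
---
# A LoadBalancer Service asks the cloud provider to put an
# Octavia load balancer in front of the matching pods
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF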
Great. I'm Robert Starmer. I run Kumulus Technologies. We're a technology consulting company that helps people figure out how to get the right cloud put in place. So whether that's something like OpenStack or Kubernetes or a combination of the two, we look at making sure that you're getting the right solution. And obviously the more important part, even beyond that, is figuring out how to make the best use of those resources. Getting the system in place and getting it operable is part one, but then actually being able to consume it from a development perspective is really the other aspect that we help people with.

Okay. And there are still a couple of open chairs if you guys don't want to sit on the floor. Now would be the time to crawl over your fellow colleagues and grab one of those open chairs in the middle of some of these rows. Okay. So we have a nice mix of people in the room, and we will have something for everybody here. The questions will go from here to here, so just hang with us. We also want you to ask questions yourselves, if you can crawl back over your colleagues and get to one of those microphones. And if you can't, let me know; I'll hop down and pass this one around. These are our Twitter handles, so you can tweet us questions now. Some of you have already started that. Thank you very much. That was very funny, by the way. I appreciate the jokes you guys are throwing at me as well; they come through on my watch. But anyway, we'll keep the conversation going. I actually opened my DMs. Very courageous move. I will keep them open, at least throughout the conference, or at least until I get spammed by people and that starts driving me nuts, which will probably be about 24 hours. But anyway, if you follow me, I'll follow you back and we can keep this conversation going. It doesn't have to end 40 minutes from now.

Well, let's start by talking about Kubernetes. Let's talk about just getting started, you know, some of the installers. And I'm going to ask this one to Robert, because he's out there in the real world putting this into customer sites all over the world. So he probably has, of everybody, the most experience. We live in the fake world; we work for companies and we just get to play with software all day long. But Robert's in the real world kicking the tires. So can you talk about getting started?

Well, yeah, I think there are a couple of different approaches. I think the first question you have to ask yourself is: do you want to do this yourself? Do you want to operate the Kubernetes infrastructure directly, or do you want to work with a cloud provider and have them provide you with a Kubernetes service that you can consume? How you make that decision is usually driven by the technical competence of your team, and not that they're incompetent.
But is this really something that you want your team focusing on? That's sort of the first question that you have to ask. Because the installation process, if you've already gone through the pain, for example, of working through and getting your team up to speed on OpenStack, adding Kubernetes is actually fairly straightforward compared to that. But it is still more operational load that you have to manage, especially if those Kubernetes environments start to scale and start to grow. So I think there are times where it makes sense to step back and say: is this something that I want to run? Or do I want to use a tool like Magnum to help me deploy the core resources? Or do I even want to go to a cloud provider and just say, look, provide me Kubernetes as a service, let me leverage it at that level, and then let me focus on the applications and how the applications are going to integrate into that environment. So we see all those different requirements coming forward as a part of that decision-making process. And I think, you know, I mean, Joe, in your environment, you have very specific requirements that are driving you to do something specific, right?

Yeah. First of all, can you guys hear him, or do you need them to lean into the mics? It's okay? You can hear? Good. Thank you.

Yeah. I think, you know, for us, the OpenStack journey taught us a lot about installers. At my previous company, I definitely had some challenges with them. We used to have the distros where, yes, you can get it installed and it gets up and running, but then you have to inject your own personality into these clouds. And then we all have these standards and requirements that we have to follow, existing systems. And so that first generation, I guess, of what we call distributions, that was always a challenge, because then I started seeing patterns where we were running automation on top of automation, because the distro maybe had some Puppet installer and then we were using Chef. And currently, here we are using Puppet, and I've seen OSA, which is a great community. But, you know, we have to live in this world where all of a sudden we have this duality of automation tooling. I like to say that we have 99 problems and an installer's not one, only because we have really learned how to build and run our software. And I just feel like, for the type of teams that I have (and I think sometimes it's a downside of when you work for a software company), you just tend to be like, well, I don't want to pay for something, we could just build it. But there is a cost, and I think it's in two areas. Obviously, there's just the financial cost of getting the right team and staff, because we're getting into a really, really complex environment of challenges that is putting pressure on my teams to keep up. You know, I'm here envisioning all these things, and I'm bringing requirements back and demanding best of breed. So even though there are things like Magnum out there, which just got to conformance, for us that was non-negotiable: we needed to know that the things we're running in public cloud, we could run back in our data center and feel reasonably confident that those workloads were going to perform consistently, or at least close to it. But then there are other things we have to consider, like the fact that I'm running a platform that requires very low latency.
And so we also want all the other things that come along with it, like observability. We want to know, because we're getting billions of requests per day, that things like Ingress are able to keep up. And so a lot of times we'll pick and choose best of breed; we'll evaluate things like Linkerd or other tools to handle some of the load. But it definitely comes at a cost. It is complexity. I'm always concerned about whether I'm up-leveling my team, and it brings a lot of tension and challenges.

Yeah, I think there are kind of two problems that you have to look at, which are: how do I get it deployed, and then how do I operate it? And I think those are two distinct things a lot of the time. A lot of times it's really easy to deploy Kubernetes; there are like 70 million tools to do it. But like you were saying, that's the easy part. The harder part is day two: how do we operate it, how do we monitor it, how do we make sure that if something breaks, we can get it fixed? I'm sure many people here in the room have businesses that run on top of these clusters, and if these clusters aren't operating, their businesses are not making any money. So really, one of the important things to keep in mind is: how critical is this infrastructure to your business? And then, do you really have the resources to put at it? And do you have the resources that will be able to resolve your issues in a timely manner?

So I'm a huge proponent of increasing the adoption of Magnum, because it provides a lot of aspects that integrate with all the different infrastructure that OpenStack provides as well. It sits in with that same open API. And then I would encourage people, once they have that environment deployed using Magnum, to start bringing in all the other monitoring tools and infrastructure, because once you have that Kubernetes cluster, it can really be treated just like any other Kubernetes cluster. Now that it's conformant, it really is no different from clusters deployed by any other tools. But I think it's nice that, if you have that OpenStack expertise in-house, you're working within an OpenStack project; if you want to make a fix, you're already familiar with the contribution process. And Magnum has really been tested at scale. CERN is one of the biggest contributors to and users of it, and they do all sorts of crazy stuff that is beyond my understanding in science, but they create thousands of clusters, both on bare metal and on VMs, which I think is also interesting, because if you're going to deploy Kubernetes clusters, you need underlying infrastructure to do it on. And a lot of times, getting that infrastructure and managing it is a problem on its own. The nice thing about working with something like Magnum is that if you already have Ironic doing bare metal for you and Nova doing VMs for you, it's literally a matter of just telling it to deploy on that environment. So I think Magnum has a little bit of an advantage in terms of facilitating everything underneath that stack. But if you're a large company and you already have your own big internal hardware or VM management infrastructure, it might not be the answer for you.
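For a sense of how little ceremony Magnum asks for, here is a rough sketch of creating a cluster with the Magnum CLI. The image, keypair, flavor, and network names are placeholder lab values, not anything from the panelists' environments.

# Define a template once: which COE, image, flavors, and network driver to use
openstack coe cluster template create k8s-template \
  --coe kubernetes \
  --image fedora-atomic-latest \
  --keypair mykey \
  --external-network public \
  --flavor m1.medium \
  --master-flavor m1.medium \
  --network-driver flannel

# Stamp out a cluster from the template; Magnum drives Nova or Ironic underneath
openstack coe cluster create demo-cluster \
  --cluster-template k8s-template \
  --master-count 1 \
  --node-count 3

# Fetch a kubeconfig so the cluster can be used like any other Kubernetes
openstack coe cluster config demo-cluster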
Okay, but since we're talking about infrastructure and Magnum and bare metal and Ironic: there is, by the way, no one way to do this, and we've had a lot of fun over the last couple of days talking about the many, many ways to do it. But let's talk about Kubernetes on bare metal, because this is what a lot of our really large customers are doing, but they're not using OpenStack; they're not OpenStack shops. So, Robert, why don't you talk about Kubernetes on bare metal and what you've experienced?

Yeah, I find it actually kind of interesting. In much the same way that you have to make a decision as to whether you want to operate the resource or just consume the resource, the same thing applies, I think, when you start thinking about whether you want bare metal: do you want, and or need, a virtual machine management solution as well? Is that a part of your current application stack? In which case, I think it drives you to definitely saying, yes, I'm going to put in an OpenStack, I'm going to run my own infrastructure. But there's basically that level of hierarchy. If you're starting your application environment from scratch and you have enough money to build your own environment out, or need to, there isn't necessarily a requirement anymore to have a virtual machine layer underneath a Kubernetes environment. At the same time, it's, again, a question of operations. Having a VM layer really simplifies the operations: the deployment, the redeployment, the recovery-from-failure types of operations that you might otherwise have. And it potentially gives you the ability to migrate workloads instead of rebuilding things from scratch. If you haven't completely operationalized your Kubernetes environment, you can leverage that VM infrastructure that we now have to move workloads around. And I think that's a large part of what you have to look at when you decide whether you're going to go all bare metal and take back some of the pain points that metal had in the past, depending mostly on how your application is going to consume those resources.

You know, we often forget, I think, that many of us are living in this layer of infrastructure and infrastructure operations only; the application is the thing that's really got to continue to run. That's what's going to drive our businesses forward. And if you're operating at the level of just looking at the application, if your entire development staff is only application developers and not really operators or SREs or whatever it is you want to call them these days, then there is potentially a benefit to saying: let me find the simplest solution to get access to Kubernetes. And that goes back to the previous discussion: do I have somebody else operate that infrastructure for me while I just focus on the application? If I need something more detailed, if I need appropriate GPUs, or I have latency concerns I have to deal with, that changes things. I worked with one customer that actually does a lot of telco-related services. They have latency issues. They have their own data centers where they've already dealt with incoming and outgoing packet latency. They're not going to a public cloud where they can't control that anymore. But they're still looking at a virtual machine layer, and then a Kubernetes layer on top of that, because they're looking at continuously changing the scale and size of those individual clusters. That's how they're dealing with the tenancy, or multi-user, sort of aspect on the cluster. So again, it really comes down to how the operations are going to tie into your business, and how close you can get to that metal level of performance for some applications, as to whether you want to go into that bare metal model or whether you keep the virtualization layer.

I think it ties really interestingly into what I was talking about yesterday, which is how everyone has historically thought of OpenStack: oh, it's just VMs, it's just VMs. We're going through this transformation where you don't have to use VMs at all. You can just use Ironic, which is just a bare metal management infrastructure. What we find is that at some point you're going to have to manage all that infrastructure. Even if you're using bare metal machines, you will have some sort of tool that does imaging, or does deployment and management of all of these physical servers, because I hope none of you are still going through the pain of running through installers, hitting Next, and writing IPs down; we've all moved on from that. To run at scale, you don't want to be doing that. You want something that manages it all, and Ironic provides some of that, which I feel is really useful, and you can run it without running any of the other OpenStack components. You can run it to get your bare metal and then install your Kubernetes on top of it. So that is one of the views. And actually, there is a project, if you don't want to touch the rest of OpenStack at all, that has taken Ironic and extracted it into its own individual component. It's just a bare metal provisioning tool, period. You don't have to have Keystone, you don't have to have Nova, you don't have to have Glance. It's just an individual tool, and so that is kind of an interesting thing to look into. Or you could use MAAS or... That's Bifrost that you're talking about, right? And there's actually another one called Metalsmith, I think. I remember the governance change came in; I haven't followed it too much since then. But yeah, Bifrost is one, and Metalsmith is one that is actually aiming to build an API as well. Bifrost never had an API. Right, because it was never intended to have one.
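For a feel of what driving Ironic standalone looks like, here is a minimal sketch of enrolling and deploying one node. The addresses, credentials, and image URL are invented, and a real deployment also needs details like image checksums and deploy kernel/ramdisk settings.

# Enroll a physical server with its management (IPMI) credentials
openstack baremetal node create --name node-0 \
  --driver ipmi \
  --driver-info ipmi_address=10.0.0.10 \
  --driver-info ipmi_username=admin \
  --driver-info ipmi_password=secret

# Point it at the image to lay down on disk (placeholder URL)
openstack baremetal node set node-0 \
  --instance-info image_source=http://images.example.com/host.qcow2

# Walk it through the state machine: manage -> available -> deployed
openstack baremetal node manage node-0
openstack baremetal node provide node-0
openstack baremetal node deploy node-0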
Okay guys, we're getting into the weeds here. And I too love project Ironic, and Dave and Julia and all the wonderful people who have worked on it over the years. But yeah, I have a lot of questions for you. Let's talk about more buzzwords: multi-cloud, hybrid cloud. Joseph, we had a great discussion yesterday about public cloud, private cloud, what you're running where, why you're moving things, where you're moving them. So let's talk about hybrid cloud. Kubernetes is finally making this a real thing, right?

You know, that's probably been one of the things I was early on, and previously at this company, I lived there. That was a lot of my driver: we had public cloud, but we also had things in the data center, and I wanted to normalize it with infrastructure as a service, and OpenStack was really that answer. But there's a cost. I mean, honestly, we had certain drivers that moved us back into the data center. We looked at a lot of our ROI, as well as the workload that we had. It required close proximity. So we had to really think about, as we're evolving these things to be more cloud native, the really tight requirements we had to meet the service levels that our customers are looking for. That was another reason why. But now we live in this world where we run in Amazon and we have stuff in the data center as well. Kubernetes is, to me, the closest to getting into that model of being able to run across clouds, because with OpenStack, we were trying to do that; we were trying to use tools, different ways to provide that functionality. But like I said, if you don't have to, I honestly wouldn't recommend it, because you're absorbing a lot of overhead. I do like what we're seeing now, because a lot of the tooling to enable this with Kubernetes is getting much better. So if you have to, it can do it. I'm really excited about some of the things that are happening with public clouds, where control planes actually let you burst across, and I'm like, okay, this truly looks like what I envisioned five years ago. So it's going to be interesting to see how things turn out in the next year or so.

Yeah, and we'll talk about some of those tools in a bit. But talk about private cloud a little, because this seems to be a trend where everybody moved everything to the public cloud a while ago and now everybody's moving everything back to the private cloud. And oh my gosh, I'm oversimplifying. Ridiculously. Some of the stuff we were talking about earlier, about why you're moving back to private cloud at Adobe, yeah.

And this is specifically for the Advertising Cloud, because the majority of the company runs in public cloud and we have different requirements. Like I said, first off, we did a lot of the analysis, a lot of the costing, and the types of instances we had to choose to meet our requirements were cost prohibitive. It's the one thing in public cloud: when you have network requirements, you start getting into really edge-case instances that are definitely pricey. So for us, that was one of the big drivers. As well as, you know, for a lot of customers, with the rise of privacy and things like that: data gravity is real. So oftentimes it's less about all the stateless, microservices things; those are pretty easy to move around and refactor if you need to. But wherever your data is at, that's where you're going to live. And so that became a challenge. We're definitely a big data company, and that was another big factor for us moving back into the data center. We still use public clouds at the edge for reach, but for our core processing areas, we find that the data center better serves our use case.

Okay. Robert, do you want to add to this?

No, I think actually a couple of things that Joe touched on are really important, right? And it comes back again to the question in my mind, which is: do you really want to run this yourself? Is there benefit in you running it yourself? Or do you want to have somebody else take on a portion of the operations and then scale it across a much larger set of customers? And the public cloud versus private cloud question always comes back to that.
Most small companies do not start out buying their own servers these days, because it's more cost effective, and a lower overhead on the development side of the house, to borrow them from somebody. At some point, though, it might make sense to move back into a private environment because of some specific requirements. Again: application latency, infrastructure latency, the whole question of managing your data, the fact that once your data is in a cloud it tends to stay in that cloud, which makes it much harder to move from one cloud to another. Those sorts of aspects are the ones that I think are driving the decision-making process on whether you want to go back and forth. But I think there's another part of it which is getting kind of interesting, and that is that people are starting to see that when they have a consistent workload, or a consistent load on a system, it pays to pull that back into a more centrally managed set of resources. Again, whether that's still hosted by somebody or run in your own data center really comes down to the scale of your organization. But owning those resources more consistently: this is why there are spot instances and there are also long-term purchased instances. How much of the data plane do you actually want to run? All these things have to be operated at some point, and there's a level of scale that some customers get to (folks like Adobe obviously hit that scale very easily), but even for small companies, once you start getting into hundreds of servers consumed full-time, I think it does start to make sense to look at these multiple options.

I think the other aspect, though, is the question about hybrid cloud. And I think a tool like Kubernetes has provided yet another application interaction layer that simplifies the consumption of hybrid resources; in other words, resources of the same nature in multiple different cloud locations. It might still be one provider, it might be your own data center and a public data center, but you have one interface that supports all of it. And as Joe was saying, the ability to make many clouds look like a single cloud rather than eight different clouds further simplifies the process of making that hybrid decision. So I can select the lowest-cost consumption for any one compute resource. And I think that's the other thing that's really changing the discussion about whether you want to run it yourself, whether you're going to continue to run it in the cloud, or whether you have the flexibility of moving from or across multiple clouds. All right, so now I can start thinking about whether I put my data into a private location where I have tighter control over it, and, you know, I'm actually paying less money pulling data into a public cloud than I am trying to pull it back out, right? So there are some ways of thinking about it that way as well.

All right, and we'll talk about the data in a minute, one of my favorite topics. But just real quickly, a show of hands: who here is running Kubernetes? All right, who's running Kubernetes in production? Who's running stateful applications on Kubernetes? You guys are good. You guys are better than my meetup group; that's more hands than we usually see go up at these things. So before we get into tools and data and all of that, I want to ask a question about distros. This has come up a lot as well: rolling your own versus using someone's distro. Yeah, Robert, take that.

So the question of distros is always an interesting one.
And especially with things like, you know, now Cloud Foundry is basically a Kubernetes distro, except it's not, but it really is. You have OpenShift, which is a Kubernetes distro, except that it's not, because many of these services also want to put their own platform layer on top. And I think that's the thing you really have to look at: is the installation process really so hard that you then want that whole additional set of services? Or is what you're really after the additional set of services, and you don't actually care that it's running on top of Kubernetes? Which is also possible. But for me, it's always been: let me see if we can't get this done in the simplest fashion possible. And again, this is where tools like Magnum on top of OpenStack, I think, make a lot of sense, because they are basically pulling from the standard upstream repositories rather than any one particular distribution of something like a Kubernetes deployment.

Yeah, but Joseph, you have particular reasons for doing it the way you're doing it.

Well, you know, I think for us, as I mentioned earlier, oftentimes we get into this thing that, because we can, we do it ourselves. And it has been a great teaching tool. I look back at where Kubernetes was just three years ago, at getting things up and running, and we had to use things like OpenStack to fill in the gaps, whether that was providing security isolation or other things that Kubernetes didn't have. Now, all of a sudden, we're in a world where we have all the things around it that support its lifecycle management. You've got package management like Helm. You have operators that can ease things like getting Prometheus in, all the things that need to exist once you get Kubernetes up and running, and that makes it easier. For our use case, we tend to be very modular, very pragmatic: only use what we really need to use. And the great thing about Kubernetes is that the community is so strong. Support is always a big challenge when you jump into distributions, and I know a lot of these companies do a really good job of trying to tackle it. But we've had times where things were very tactical, where I could jump into Slack when we had some deep questions about pod security policies and how we were trying to apply them, and right away we could get in and get answers. So I think that really drove us to say: this actually works for us. But we did go in with a certain bar. Some of the challenge we had with OpenStack was really maintaining the upgrade lifecycle and staying current, and a lot of operators had those challenges. With Kubernetes, with our own approach to taking it, we're able to stay current. We're probably always about a release behind. The challenge I keep seeing down the line, as our clusters start to get bigger, is: can we still maintain that same cadence? But up to this point, I feel that because we had such rigidity about how we build our software, we were pretty comfortable with it.
And it falls in line with the fact that we needed to stay very lean in regards to how we build our infrastructure underlay.

This is one of the things I hear all the time: people are getting so far behind on releases. Yesterday, somebody was on the main stage during the keynotes talking about how quickly they're getting to the next release, and I was sitting next to one of the rather large telcos, and she was like: slow down. No. These releases, I can't keep up with them. She wanted everybody to go a lot slower, not release every six months. And we hear that a lot. But with Kubernetes, it's not as complicated; it's not as hard when the new releases come out. How do you handle this, Robert?

I was actually going to say, one of the things I've seen is that the companies out there currently offering sort of a bundled Kubernetes for you to deploy, Kubernetes for private cloud, many of them actually focus on that upgrade process and on simplifying it. But also on adding all the other services, right? Like Joe was saying, installing it is one thing. Maintaining it, operating it, capturing the right metrics, understanding what to do with those metrics: I think that's where some of the distribution players really do come into their own. But when the distribution is just "turn on Kubernetes," that's not really where the value is, right? And so I think that's one of the key things people need to be aware of. But I also think that's where, again, there are tools out there that simplify this process. If you have an OpenStack environment, Magnum makes it really easy, right? So why wouldn't you use that? Now, does that also install Prometheus? Last I checked it doesn't, does it? It turns on a Kubernetes cluster, right? And then you're thinking about the fact that there is a day-two set of operations that you also want to keep track of. It's easier now, and this was another one of Joe's points, that there are package managers like Helm, or application managers like Skaffold, out there to help with that application management aspect, and they're using Kubernetes as an interface for it. So you're going to end up wanting to layer Kubernetes on top of however you've deployed your actual physical infrastructure. But do you need a distribution for that? It really depends, again, on how much you want to operate those resources, how much of that you want to own, and how much of your engineering budget you want to apply to continuing to manage and maintain it.

I think it also depends on the type of organization you are. Some organizations are more focused on being upstream first and want the ability to do a lot of customization in what they do. Using a distribution will be a bit limiting for that, because most distributions will have maybe a few use cases or a few ways of deploying, and they're probably not going to support you outside of those few very specific use cases. Whereas working with more upstream-based environments really does give you fewer features, we can say that, but it gives you the ability to iterate and customize and carry the improvements forward. So actually, interestingly, you mentioned it, but a few releases ago, I think it was a few weeks ago, Magnum now deploys Prometheus if you would like it to, right? Excellent. The thing is, it's progressive, right? It didn't do it right from the start. It took some time. It took an operator who had to come in and step in and do it. And then things like the dashboard took a while to come in. But at the same time, it also depends, again, on the type of company you are. If you're like, okay, we're open source first, you know there's going to be great support, but you're never going to get an SLA. You have the advantage of being able to go read the code and fix it yourself if need be, versus having a more Kubernetes-in-a-box solution where it just works one way, but you don't have that ability to do a lot of customization in how you want it to work.
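As a small example of that layering, this is roughly what bolting monitoring onto a freshly created cluster looked like with the Helm of that era (v2 syntax with Tiller, charts from the then-standard stable repository). The release and namespace names are arbitrary.

# Helm 2: install the cluster-side component (Tiller) once per cluster
helm init

# Then day-two add-ons become one-liners pulled from the stable chart repo
helm repo update
helm install stable/prometheus --name prometheus --namespace monitoring
helm install stable/grafana --name grafana --namespace monitoring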
I threw Prometheus a little graduation party a few months ago. Congratulations. Yeah, these are new projects, new tools that are coming out, and they're not always, you know, ready, but they have strong communities behind them. I'm going to make a plug for one of my favorite new projects, Istio. I threw the first-ever Istio meetup a couple of years ago. It has come a long way. If you open source contributors are looking for another side hustle, I'm going to throw my hat in the ring for Istio; it's a really cool project with a great community, so join us and contribute. But what are some of the other really cool projects out there that you want to make people aware of, not just to contribute to, but to add to their ecosystem?

Well, I mean, when I look at the spectrum, using Kubernetes as a base, obviously Prometheus is an important aspect of that. I also look at Istio as a way of building your application connectivity, and also potentially dealing with some of the customer VPN-related issues that might otherwise exist; I think there are some interesting ways of bypassing those by using tools in that service mesh class of technology. But then there's also, as a part of the continuous integration domain, I think tools like Spinnaker are interesting, and they also have a really nice connection back into the kinds of environments that Kubernetes tends to favor in terms of development process.

Spinnaker, oh, that's a good idea. Maybe I should throw a Spinnaker meetup. Who would come to that? Ooh, not many.

On the more OpenStack-influenced side, Kuryr is an interesting one that's actually working on bridging your Kubernetes networks and your physical infrastructure networks using Neutron and some other stuff that is a bit harder for me to understand, but I know they've done a lot of good work and they're increasing adoption, so I think that's a really interesting one to have a look at. But indirectly, also Zuul. Zuul recently added a lot of features to help with things like building whatever images you need, or actually doing deployments. And now, with what was updated today, Monty talked about how you can link your Kubernetes cluster to Zuul, and it can give you a namespace to run all your jobs in, bring up a deployment, do some testing, and then destroy that namespace afterwards. So I think it's a really interesting thing to hack around with.

Zuul is an awesome project. I also threw the first-ever Zuul meetup. Monty says he's going to come back and we're going to do another one. That is a really cool project. If you guys don't know about Zuul, you should definitely take a look at it.
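For the curious, a Zuul job is just YAML checked into the repository it tests. A minimal sketch follows, assuming the conventional check and gate pipelines; the job name and playbook path are invented for illustration.

# Drop a .zuul.yaml at the root of the repository
cat > .zuul.yaml <<'EOF'
- job:
    name: myapp-unit-tests
    parent: base
    description: Run the unit test suite on every proposed change.
    run: playbooks/unit-tests.yaml

- project:
    check:
      jobs:
        - myapp-unit-tests
    gate:
      jobs:
        - myapp-unit-tests
EOF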
I promised you guys could ask questions too, and I've left you about two minutes to do so. Does anyone have questions? Yes, please take the microphone so that everyone can hear the question, including the recording.

I was wondering if any of you guys have any experience with running sort of soft and hard multi-tenancy in Kubernetes, and any of the pains that come along with that?

I've been hacking around a lot with Kata Containers. I think Kata Containers is a really, really interesting aspect, and I think it's probably the best way to go about it. I think Kubernetes has a pretty reasonable amount of multi-tenancy separation at this point, but the problem is it's all kind of soft separation, and I think a project like Kata Containers brings that hard cut, which is: you cannot get out of this. We can have someone talk to you a little bit more about Kata Containers if you want. Eric?

I think there's another approach, which comes down to what tenancy you're really looking for. If you have completely separate organizations that need that complete level of segregation, it might be easier to deploy Kubernetes per customer, and that's one way of approaching it. And at least within the namespace that you're working in, additional role-based access and those sorts of credentials can be layered on top, but at least you've provided that level of tenancy. So from a cloud operator perspective, breaking it up that way is one approach.

That could be challenging, though, because as you scale, the management becomes the problem.

That's where I think I would probably go along with the Kata possibility. It just took some time, with all the container interfaces and the runtimes and how that got sorted out; it took a little bit of time to get there, but we've been talking about it ourselves, looking at that for security. I think the role-based access interfaces within Kubernetes have gotten a lot stronger as well, so you do have decent segregation at that point. I still think there's a lot of confusion around exactly how to set those roles up appropriately so that you really are providing true segregation. But again, it depends on how you're integrating, and I will say that I've deployed the Kata Containers solution underneath Kubernetes, and it is the simplest container runtime swap, for something that is now an actual VM-based image, that I've ever gone through. So another plug: Kata is a great solution.

Another cool project; I threw the first-ever Kata meetup. It's very popular and it's really coming into its own now, so if you haven't checked out Kata Containers, another plug for that one. Go ahead, Joseph. Oh, I thought you were leaning in to say something.

Okay, so we're wrapping up. Do you guys have anything? I've got some of your questions on Twitter; they're too long to answer right now, but we will answer them. I promise, if you tweeted a question at us, we'll get to it. We are around all day, and this is how you can find us. Any last words? Thank you very much, everyone. Have a great rest of the summit.
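To close the loop on that multi-tenancy question, the soft, namespace-per-tenant approach described above boils down to a handful of objects like these. The tenant names are illustrative, and the RuntimeClass at the end is the mechanism newer Kubernetes releases use to opt a pod into the Kata Containers runtime for the hard boundary.

cat <<'EOF' | kubectl apply -f -
# One namespace per tenant gives the soft boundary...
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
# ...and role-based access keeps the tenant's users inside it
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-edit
  namespace: tenant-a
rules:
- apiGroups: [""]
  resources: ["pods", "services", "configmaps"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-devs
  namespace: tenant-a
subjects:
- kind: Group
  name: tenant-a-developers   # placeholder group from your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: tenant-a-edit
  apiGroup: rbac.authorization.k8s.io
---
# For the hard cut: pods that set runtimeClassName: kata run in a
# lightweight VM via Kata Containers (requires the runtime on the nodes)
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
handler: kata
EOF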