I'm going to go ahead and get started. My name is Dan Kohn. I'm the executive director of the Cloud Native Computing Foundation. I want to absolutely thank you from the bottom of my heart for staying so late on a Friday afternoon and not rushing out to try and catch your airplane or your train home, or more time with the kids, or whatever. It is very nice of you to come and see this talk, and hopefully I will not bore you or waste your time. For what it's worth, the Cloud Native Computing Foundation hosts Kubernetes and a number of other projects. As executive director, I sort of have to come up with something new and hopefully interesting to say in this space every six months or so, and this is my talk for the next six months. So you guys are my virgin audience getting to try it out, and I'll see whether I'm on track or not. Just as a quick reminder: we are about 18 months old, and we're now hosting nine projects. The best known is Kubernetes, but I think a lot of folks have also heard of Prometheus. So when you have a Kubernetes or a Swarm or Mesos cluster, you often need monitoring and tracing and logging; Prometheus, OpenTracing, and Fluentd are great tools for those. Linkerd and CoreDNS are our two earlier, inception-level projects. gRPC is a really amazing remote procedure call framework, a replacement for JSON REST that's much higher performance. And then just a month ago, we added our two newest projects: containerd, which is the core upstream runtime from Docker, and rkt, which came in from CoreOS. And we are part of the Linux Foundation. We're a non-profit, just like OpenStack is, and we have a number of Platinum members, including some overlaps with OpenStack and a lot of other similar members. So the Linux Foundation, you probably know it because we host Linus Torvalds and Greg Kroah-Hartman, the Linux kernel maintainers.
But we've actually expanded dramatically in the last decade into a lot of new areas. So Let's Encrypt is the largest certificate authority in the world; it gives out free HTTPS TLS certificates. ONAP is a major new networking initiative. CNCF is in the cloud. We have an amazing Automotive Grade Linux program that's putting Linux into tons of cars and accessories. And then Hyperledger is a blockchain system that's gaining a ton of traction. OK, so this is the summary from my last six months of talks, which is sort of how I envisioned CNCF. This is my very brief history of the cloud: you had the application building block back in 2000, with Sun selling you server hardware. Then VMware came along and said, no, you can use these virtual machines and share multiple applications all on the same server. Then you had AWS popularize the concept of infrastructure as a service. And then Heroku came along with platform as a service and the magic of being able to git push to Heroku and deploy a new version. All four of those steps were closed-source, proprietary companies. Then the four new steps: OpenStack, which is of course why we're here, which was essentially an open source version of both AWS and VMware; Cloud Foundry, an open source version of Heroku. Then Docker came along, with the fastest uptake of a developer technology ever, to popularize and create an easy user interface around containers. And then in 2015, with Kubernetes, we launched the Cloud Native Computing Foundation. Okay, this is a totally insane document that I'm not gonna quiz you on or anything. There are 448 different projects and companies on here, but at the very bottom of it is github.com/cncf/landscape. And if you're curious, I encourage you to download and take a look at the higher-res version. And if your project or company is missing, please feel free to open an issue and we'll add it to the next version.
But here, in green, are the nine CNCF projects, and in blue are some additional projects we're talking to and considering hosting. And again, all of these are at that same URL, github.com/cncf/landscape. Okay, so just a quick reminder: I keep saying cloud native, and what do I mean by that? We think of it as having three main components: microservices, where you divide up your application into multiple pieces; you wrap each of those in its own container; and then you orchestrate those containers dynamically in order to optimize your resource utilization. So why are businesses going cloud native? Obviously, for an OpenStack conference, it's amazing that there are like 30 different Kubernetes talks here. And the big ones are: avoiding vendor lock-in, which is the same value proposition as OpenStack, obviously to have the value of open source. Enabling unlimited scalability: there's a statistic that Google starts two billion containers per week. This is actually as of three years ago, and it works out to about 3,300 a second on average, but of course their peak is much higher than that. And increasing agility and maintainability: I have so much respect for these pirates who are able to get the containers onto these rickety boats, but this is the concept of microservices, that you have different containers that are all talking to each other. Improving efficiency and resource utilization, so here's our orchestration slide. And resiliency, so that an individual container can fail, or a machine, or even an entire data center, and you can adjust dynamically to varying levels of demand. So what are the results that folks are seeing from this?
This is a statistic from Puppet talking about high-performance teams, and what they've been able to show is that those teams can deploy 200 times more frequently, have over 2,000 times shorter lead time, and have fewer failures and recover faster from them, with a three times lower failure rate. So it's just this idea of going from organizations that are maybe on a one-month or every-two-weeks deployment model and trying to get into a model where they can be doing dozens of deployments per day, with that kind of modern continuous integration and continuous deployment. Okay, so the message from all of this is: cloud native application architectures are the default choice for new greenfield applications. If you're building something up from scratch, this is a fantastic architecture to go with. And I'll just mention that the leading choice for cloud native orchestration is a project that we host called Kubernetes, and so we have a lot of partners and members and folks we work with, like Philips, Ticketmaster, Box, Zalando, Nordstrom, eBay, the New York Times. I mean, there's a list of hundreds and hundreds that have chosen this architecture. And when you look at Kubernetes, it's one of the highest-development-velocity projects in the history of open source; it has just an incredible collection of about 1,300 developers who've worked on it in the last year, an amazing group of technology giants and startups that are working together. And so with all that, you can say... except for one thing, and this brings me to one of my favorite quotes, from John Maynard Keynes: "In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again." And the phrase I love from that is "too easy, too useless a task."
And so I wanna make an argument to you that the story I just told a second ago about greenfield applications is not remotely an interesting enough story. And the reason is that the real world consists of brownfield applications. So if you look at the gross world product, it's about $100 trillion, and well more than 99% of that flows through a set of brownfield applications. And even if you look at some of the most forward-looking companies, like a Google or Azure or some of these other folks, if you actually looked at where a lot of their money flows: hey, they're probably still using ADP for their payroll, and they're still doing wire transfers through a bank, and that money is still in a large way going through a ton of legacy brownfield applications out there. Those applications are generally monoliths. They're the exact opposite of everything that I've been talking about here. So cloud native hasn't been around that long. This is the 2001 image; folks have hopefully seen the movie. Monoliths are large, integrated units; they're not containerized, they're not microservices. So the question is: you have a monolith, what do you do with that? Are you just stuck with it? And one answer that a lot of companies try is: oh, well, we'll just rewrite it. Maybe it's in COBOL, maybe it's in Java, but it's not in one of these new trendy languages like Go, or Node.js with JavaScript. So we'll rewrite it. And the answer is that that almost always fails, and there's a specific term for it, second-system syndrome, which was described by Frederick Brooks about 40 years ago. And the idea is that when you try to rewrite a system that's in production, you have real needs in production that have to get met. So you continue adding features and fixing bugs and making enhancements. And what you find is that that second system almost never catches up.
It just requires an insane level of resources to try and both maintain the primary system and rewrite it. And I've actually suffered through this, and I will definitely say: never again. So the argument is that monoliths are the antithesis of cloud native. They're inflexible, they're tightly coupled, they're brittle. How do you evolve? And the main idea that I wanna get across to you is a metaphor. All metaphors are a little limited, but I want you to think about this ice block and say that the first step is lifting and shifting your monolith. And so this is the idea that, however archaic your existing system is, maybe it's a Java virtual machine that needs eight gigabytes of RAM, whatever it looks like, you absolutely can run it inside of a container. People think of containers as these very svelte, easy-to-manipulate little objects, and that's the goal, that's what you're trying to get to. But to start out with, there is an immediate value in containerizing your application, and in particular in trying to set up a CI/CD process around it, so that you can make changes to it, have immutable infrastructure, and have some of the other advantages there. We've even seen folks on Kubernetes who have a mainframe emulator running inside of a container, running old mainframe code, or a PDP emulator running old PDP code, et cetera. So you start with that monolith, and then you can put it up either in the public cloud or on your own private cloud. And if you decide to use Kubernetes as the platform for that, there's a very useful tool called StatefulSets, which essentially can pin that container to a specific machine with sufficient RAM and sufficient capabilities to meet your needs. That's been available since 1.5; in the version before, it was called PetSets, but StatefulSets is the official name. And now the concept is that you start chipping away at the monolith.
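To make that lift-and-shift step concrete, here is a minimal sketch, in Python, of the kind of StatefulSet manifest that pins a big monolith to a suitably large machine. All the names, labels, image, and sizes here are hypothetical, and building the manifest as a plain dict just keeps the example self-contained; in practice you would write YAML or use a Kubernetes client library.

```python
# A minimal sketch (hypothetical names and sizes) of a StatefulSet manifest
# that pins a lifted-and-shifted monolith onto a node big enough to host it.
def monolith_statefulset(name="legacy-monolith",
                         image="registry.example.com/legacy:1.0",
                         memory="8Gi", node_label="monolith-host"):
    """Build a Kubernetes StatefulSet manifest as a plain dict."""
    return {
        "apiVersion": "apps/v1beta1",  # the API group StatefulSets used in 1.5
        "kind": "StatefulSet",
        "metadata": {"name": name},
        "spec": {
            "serviceName": name,
            "replicas": 1,  # the monolith runs as a single pinned instance
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {
                    # only schedule onto machines labeled as monolith-capable
                    "nodeSelector": {"role": node_label},
                    "containers": [{
                        "name": name,
                        "image": image,
                        # e.g. a JVM that needs eight gigabytes of RAM
                        "resources": {"requests": {"memory": memory},
                                      "limits": {"memory": memory}},
                    }],
                },
            },
        },
    }

manifest = monolith_statefulset()
print(manifest["kind"], manifest["spec"]["replicas"])
```

The point is not the exact fields; it is that even an eight-gigabyte legacy JVM can be described declaratively and handed to the orchestrator.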
And so your product people, your sales people come in and say, hey, we need to add OAuth to this application, we need to add SAML authentication to the application. Instead of continuing to enhance the monolith, you add an API to it, which you probably already have, even if it's just emulating a web kind of request. And now you can put that new functionality into a separate application. And that separate application, that separate microservice, can be optimized for your team and for the language. So maybe you decide that Node.js is a really great tool because there are wonderful OAuth libraries for it. You write that separately and have it communicate back to the monolith. The same thing if you have a higher performance requirement and wanna use Go, or if you have a set of teams that have a library or a framework that they prefer. Now you're no longer coupled to all be working on the monolith simultaneously, and you can start getting the benefits of having your different teams evolve at different rates. Sometimes, though, you can't just chip away at it and you need a chainsaw. And so it's absolutely the case, as you go forward, that you may decide there are certain parts of your monolith that are so painful or causing so many problems that you reimplement aspects of them, say, the performance-sensitive tasks, without trying to rewrite the whole thing. And then you keep progressing, hopefully. And part of the lesson is to start with stateless services. The argument is that Kubernetes in particular delivers the largest immediate value today for those front-end application servers that need the resiliency, the load balancing, the auto-scaling, et cetera. Storing a database, having all those services with stateful backing, is still a challenging problem. There are a ton of solutions for it: you can look at a distributed file system like a Ceph or a Gluster, or newer ones like Rook.
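As a toy illustration of that chipping-away idea, here is a hedged Python sketch of the routing decision: requests for the new functionality (a hypothetical OAuth/SAML service) go to a separate microservice, while everything else still hits the monolith's existing API. The path prefixes and service names are invented for the example.

```python
# Sketch of the "chip away" pattern: new functionality lives in a separate
# microservice, everything not yet carved out still goes to the monolith.
# The prefixes and backend names below are hypothetical.
NEW_SERVICE_PREFIXES = {
    "/oauth": "auth-service",   # e.g. a new Node.js or Go microservice
    "/saml":  "auth-service",
}

def route(path):
    """Decide which backend should serve a given request path."""
    for prefix, backend in NEW_SERVICE_PREFIXES.items():
        if path.startswith(prefix):
            return backend
    return "monolith"  # the default: the legacy app keeps serving it

print(route("/oauth/callback"))  # -> auth-service
print(route("/orders/123"))      # -> monolith
```

Over time, more prefixes migrate out of the default branch, which is exactly the chiseling that shrinks the ice block.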
You can look at these new cloud databases. If you're in a public cloud, you can make use of their services, like RDS from Amazon, Spanner from Google, or the new distributed database that Azure just announced this week. But the idea is that there's real value in starting with stateless services, and then you transition your data stores last. So an example here is MediaWiki, the software behind Wikipedia, where they are putting all of their PHP MediaWiki application servers onto Kubernetes and running that as a cluster. But the MySQL backend that hosts their database, they're still just keeping on bare metal servers that they administer, and they expect to for the continuing future. And there are interesting projects out there. In fact, there's one talking to CNCF right now called Vitess, a MySQL scaling system that was created by YouTube and is used at scale. There are interesting things along those lines to look at, but for today, for production services, you definitely want to transition those data stores last. And then the other message I would offer is to consider complementary projects. So when you look at Kubernetes, it solves a lot of issues around orchestration, around scheduling, around resiliency, authentication. But as I mentioned, for monitoring, tracing, and logging there are Prometheus, OpenTracing, and Fluentd. And then when you have more complex applications, Linkerd is a really interesting one: it surfaces errors and information about servers that are overloaded, so that you can dynamically route your traffic to the nodes that are best able to handle it. CoreDNS is another very powerful project for service discovery. And finally, when you've carved enough, hopefully you have created your beautiful microservice. Now, probably a better image here would be a series of different little ice sculptures next to each other.
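One way to see why starting with stateless services pays off: if session state lives in an external store, any replica of the application server can serve any request, so the orchestrator is free to scale, reschedule, and load-balance it. Here is a minimal Python sketch of that idea; the in-memory dict stands in for an external store like a managed cloud database or Redis, and all the names are made up for illustration.

```python
# Sketch of the "stateless first" principle: the app server holds no state
# of its own, so any replica can pick up any request. The shared dict here
# is a stand-in for an external data store (e.g. a managed database).
class AppServer:
    def __init__(self, session_store):
        self.sessions = session_store  # external and shared, not per-replica

    def login(self, user):
        self.sessions[user] = {"user": user, "cart": []}

    def add_to_cart(self, user, item):
        self.sessions[user]["cart"].append(item)

shared_store = {}                        # the external store both replicas use
replica_a = AppServer(shared_store)
replica_b = AppServer(shared_store)

replica_a.login("alice")                 # one request lands on replica A...
replica_b.add_to_cart("alice", "book")   # ...the next on replica B, and it works
print(shared_store["alice"]["cart"])     # -> ['book']
```

If the session lived inside replica A instead, killing or rescheduling that container would lose it, which is exactly why the stateful backend is the part you transition last.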
But the main message that I want to get across to you is that thinking of Kubernetes and cloud native technologies as suitable only for greenfield is what's been referred to as the soft bigotry of low expectations: to say that you need a greenfield rewrite in order to get the benefits of cloud native. I would definitely say that Kubernetes loves brownfield applications. And when you look at the majority of them out there, I think the vast majority of folks running on Kubernetes are not doing it with completely new apps. So that is the main story that I wanted to tell you and talk about. I would be very happy, though, to answer a range of questions about CNCF, about Kubernetes, about some of our other projects. And I would also encourage you to follow up with me; here's my email address and my Twitter, and you're very welcome to reach out. I'm happy to talk to you about any of these spaces. So, any questions? The other piece that I will mention is that, if you're in this talk, you should please definitely consider coming to KubeCon + CloudNativeCon. CNCF just had our European one in Berlin a month ago, but our biggest event will be in Austin, December 6th through 8th. So there's still plenty of time left, five months away, but it's really gonna be an extraordinary event. We're gonna have probably triple the attendance that we had in Seattle six months ago. Kelsey Hightower is one of our co-chairs and will be keynoting. We're gonna be bringing together all the core developers within the community, developers from a lot of end users, and others, and we're really pretty excited about it. So: Austin, December 6th through 8th. Sorry, I left that out. Any thoughts or questions? Yeah, if you don't mind walking up to the mic though. Hi, my name is Fabian Salamanca. I'm a cloud architect for Huawei. Oh, great.
Do you have any real-life examples, with lessons learned, of transitioning a monolithic application to microservices? Sure, I'm sorry, I sort of slid by a few of those. One of the great ones is KeyBank, and if you just Google "KeyBank monolith" there's a nice write-up from Red Hat with OpenShift where they walk through their experience with it. That was a very nice one where it was a huge monolithic Java app, and in order to get the performance, and in particular just the CI and CD experience that they wanted, they added a Node.js layer as a front end and moved their existing Java app to be more of a back-end app. They walk through some of the experiences there. Right, no, thanks. Sure, absolutely. You had any experience... sorry, could you speak right into the microphone? And sorry, also, who are you? Christine Olefrangues, I'm a systems administrator for DataDirect Networks. Have you had any experience working with containerizing and splitting up Windows workloads? So, going from a Windows environment to a more containerized environment, for Windows on Kubernetes? The answer is that Kubernetes now definitely does support Windows workloads. Microsoft has made a huge investment in this, and one of the three co-founders of Kubernetes, Brendan Burns, left Google nine months ago and is now a top architect at Azure. And so Windows capabilities have been in since 1.5, and they're supported by Docker. I would love to find a great case study for it, because I don't have as clear a one. Although I would say that all of the same messages apply: I would much rather see a .NET or an ASP application move first, and SQL Server be the very last thing that you would consider moving over, or maybe move it to a cloud SQL kind of capability.
Now, the other option, of course, since Microsoft made this enormous investment in open sourcing .NET, is that you can run a lot of those applications on Linux. But that actually shouldn't be necessary: if you're able to containerize it as a Windows app, as a Windows Server app, which you can today with Docker, then you absolutely should be able to administer it with Kubernetes. And I think a key message is the heterogeneous concept, and I will make a quick pitch here again for gRPC. As things become more performance sensitive, instead of having JSON REST as the API connecting together the different parts of your containers: let's say you started with ASP running on Windows Server as your front end, and then you wanted to add in some extra piece of functionality, maybe like a React kind of front end to it. You could definitely look at doing that in Node.js on Linux, and there are great, very well-debugged gRPC libraries for every major language and framework that work on both Linux and Windows, and it would definitely be worth looking into. Thanks. So it's down to me. Hi, Dan, Scott Fulton from The New Stack. Hi. Good to see you again. When the concept of cloud native software architecture was first brought up, it was described to me like this: an application that is designed for the cloud, on a cloud platform, will not only work better on that cloud platform, but will work better than the applications that were not designed for that cloud platform. And that was the distinction between, hey, let's build things in the greenfield, versus everything that was still stuck on the brownfield. Besides the observation that you made, to paraphrase Paul Simon, look around, fields are brown, there's a patch of snow on the ground. We're gonna add that quote into the next version. Yeah.
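To give a rough feel for why a binary wire format like gRPC's protocol buffers beats JSON REST on size, here is a toy Python comparison. It uses the standard struct module as a stand-in for the protobuf encoding; it is not real gRPC or protobuf, just an illustration of text versus binary framing, with a made-up message.

```python
import json
import struct

# A hypothetical message a front end might send to a backing microservice.
message = {"user_id": 12345, "latitude": 48.8566, "longitude": 2.3522}

# JSON REST: field names and numbers travel as text on every request.
as_json = json.dumps(message).encode("utf-8")

# Binary framing: one 32-bit int plus two 64-bit doubles in a fixed layout
# both sides agree on up front, the way a protobuf schema fixes field types.
as_binary = struct.pack("<idd", message["user_id"],
                        message["latitude"], message["longitude"])

print(len(as_json), "bytes as JSON vs", len(as_binary), "bytes as binary")
```

The binary form is a fraction of the JSON size (20 bytes here), and that, plus HTTP/2 multiplexing, is a large part of why gRPC is the higher-performance choice for chatty inter-service traffic.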
What is the rationale for transfiguring and moving and migrating an entire planet full of very old applications onto the same platform where stuff was born to work better, and then, if I may say so, pretending that they're equals? So the idea is that you are chiseling away at the old one until it does get small enough. And maybe someday, as Grover Norquist says about the federal government, you shrink it down to the size where you can drown it in the bathtub. But I absolutely stand by the idea that you can evolve monoliths to cloud native, and that saying, oh, you need to go rewrite it, is just going to marginalize cloud native into such a small market. I mean, it's wonderful, obviously, to have all the startups out there super excited about Kubernetes, super excited about these technologies, and The New Stack writes an article on Kubernetes, and we post it to Hacker News, and it pops right up to the top. That's great, but I really do stand by this concept that we need to be bringing these advantages to big enterprises as well, or we're just wasting our time. Those startups are such a small percentage of the total IT market, and of the number of folks that we can be helping. So, I would say, when Craig McLuckie decided to bring Kubernetes outside of Google and brought it to the Linux Foundation, he was very clear on not calling it the Kubernetes Foundation, on having it be the Cloud Native Computing Foundation, with these three definitions of cloud native: microservices, containerization, and orchestration. And my argument to you is that it just doesn't take that much work to get those three things. It takes work, but it is a completely doable task to apply those three core components of cloud native to legacy monolithic apps. Containerization, although we think of it as working best with tiny apps, and in a lot of ways it does, you absolutely can containerize huge ones. And all of these apps already have APIs.
I mean, maybe it's like a 50-year-old mainframe app and the API is a text interface to it. Like, I think Amadeus is an example in Europe with the airplane ticket network; they still have a bunch of mainframe stuff. Same with Ticketmaster, where they have 50 years of systems, including a bunch of mainframe code, that they work with. But as long as there's any API to it, you absolutely can build a microservice that connects to it. And then with containerization, microservices, and orchestration, you're gonna have all three of those pieces. So I do agree with you that cutting it down to size, trying not to dig your hole deeper by making your monolith bigger and bigger, has a huge advantage. But I think one of the most important parts of the story is that the cloud native architecture enables, and in a lot of ways forces, you to have a continuous integration and continuous deployment story. And for a lot of organizations, that CI/CD is the most magical part, because it helps fix all of these bureaucratic processes of saying, oh, I need a two-month cycle and I need to have all this change control and everything else. And then it starts encouraging you to say, no, for this little piece here, like I said, maybe it's the OAuth segment, I have tests for that. I now feel confident that I can roll out new versions of it constantly and can evolve from there. And so, yeah, I've been through this process myself in a startup and essentially went the wrong way, trying to do the second system, and I guess the other part of my message is that that doesn't work. Yeah, you have another one? Uh-huh. OK, but I'm only saying lift and shift is the first step. My argument is that they're going to wind up in the same place, because what's going to happen is that at each point along the way, they're going to look at their monolith, and it's going to be wrapped in a container, and they're going to say, this aspect of it needs to be rewritten.
Should we rewrite it in the monolith, or should we rewrite it outside? And the answer, once it's lifted and shifted, is almost always to do it outside. I will also quote Joel Spolsky, who has a great series of essays about trying to do rewrites. He talks about how you look at this code and there are all of these if statements, all these strange little conditional loops, and you're not really sure why they're there. And the answer is: every line of that code was answering someone's bug over the last 20 years. And yes, it's probably not documented well. I mean, it would be great if it said, this is to fix regression bug number so-and-so, with a link back into GitHub, but you probably don't have that in your legacy system. But the idea of, oh, I'm going to throw that out, and now I'm going to have this nice new clean code base, and now I need to go find all of those bugs again and upset all of those customers and deal with it again, and essentially re-implement a ton of business logic that's never been documented anywhere other than the code, is super problematic. As opposed to saying, OK, over time. And I mean, there's a ton of examples of this. So another one would be Twitter, which started out as a monolithic Ruby on Rails application, and at last count they were able to evolve it to over 1,000 services. They invented Mesos and then open sourced it, but it's exactly the same thing. Another one, a statistic I love, is Uber, where I think the claim is that they have 1,200 engineers and 4,000 microservices. But again, they started with a really very simple app. And just over time, I think one of the biggest benefits is the bureaucratic one: there's a limit to how big a team can cooperate on the same code base. So just that ability to not have people step on each other's toes, and to go off and have somewhat of an API and a guarantee of how the different teams can interact, I think has a massive benefit.
Could I answer anyone else's? I'm happy to get into a longer debate with Scott here about... I'd love to see The New Stack argument for why we should all stick with our monoliths. But no, you're arguing for the rewrite. Yeah, and my point is, it's not economic. By the time you've completely rewritten your application, your competitors have come along and just eaten your lunch. Yeah, essentially that's right. But if you can whittle it down, then hopefully you can do that. I mean, there's a ton of subtleties here as well, where if you're on a really archaic data store, eventually you'd like to evolve onto a modern one. But the principle is that most monoliths probably can evolve, and then maybe someday you can actually shut the monolith down. But maybe not; maybe it just needs to continue forever. Well, I think I'm going to end it there, then. I really appreciate everyone's time. And please do consider reaching out to me if you'd like to hear more about CNCF, want to tell your story, or have comments or suggestions on improving this deck, because I am hoping to get a few more months out of it. I have presentations coming up in Tokyo and Beijing and a few other places over the next several months. Thank you all very much.