Hi everyone. My name is Dan Kohn. I live in New York City and run the Cloud Native Computing Foundation. I'd like to especially thank Nori and the events team here for doing an extraordinary job putting together this... I've actually been coming to Japan with the Linux Foundation on and off for almost a decade, and it's just amazing to see the growth and level of interest. So I wanted to dive in and tell you a little bit about CNCF, how it matters, and why it may affect you. CNCF is part of the Linux Foundation. We're a little less than 18 months old. We now have 10 projects in the foundation, of which Kubernetes is the best known. It was the anchor tenant originally, but we've since added a number of new ones, including the Container Network Interface, CNI, which just came on last week. And I want to give a shout-out to our Platinum members, who provide the majority of the funding, especially Fujitsu, which was a founding member of the Cloud Native Computing Foundation, and NEC, which has also been very supportive. And as you've seen from the last few days here, the Linux Foundation is much more than Linux. Later today there's a talk on Let's Encrypt, which is providing free certificates to the world's websites; more than 50% of page loads on the web are now over HTTPS. Arpit just spoke about ONAP. I talked about CNCF. Dan Cauchy has a whole parallel track on Automotive Grade Linux. And Brian yesterday had a great conversation on blockchain. But there are actually several dozen more projects within the Linux Foundation, and this URL brings you to them. Okay, and then I just want to give a very abbreviated version of the talk I gave yesterday about how cloud native fits into the history of application development. If you go back to 2000, at that point when you wanted to launch a new application, you needed to buy a physical server, or often a rack of servers, and that was very expensive.
It led to a high stock price for Sun. But then in 2001, VMware came out with technology for virtual machines, so that you could put an application in a VM and have multiple applications on one box. Then in 2006, Amazon Web Services popularized the idea of infrastructure as a service: rather than having to spend capex on your hardware, it could become an operating expense that you rented by the hour. And in 2009, Heroku popularized the idea of a platform as a service, and the magic of being able to type git push heroku to have the new version of your application deployed. What's interesting is that all four of those steps came from proprietary companies. The next four on our little brief history here are all open source offerings. So in 2010, you had OpenStack, which took the technologies from VMware and Amazon Web Services and made them available in an open source platform. Then Cloud Foundry, which Abby spoke about two days ago, is an open source platform as a service similar to Heroku. In 2013, Docker came along and popularized the concept of containers: you can wrap your application in a container, making it much easier for it to move around. And the final stop, in 2015, was the creation of CNCF, where we started out hosting Kubernetes. We define cloud native here as having three key parts: you divide your application up into microservices, you package each part in its own container, and you dynamically orchestrate those containers. So let's talk about why people are so excited about these changes. This is kind of a crazy chart with 448 projects and companies in this space. I will point out that it's on GitHub in a high-resolution version, and if your company or project is missing here, please open an issue and we'll add it in the next version.
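The three-part definition of cloud native above (microservices, one container per service, dynamic orchestration) can be sketched as a minimal Kubernetes Deployment. This is an illustrative example, not from the talk; the service name and image are hypothetical:

```yaml
# Hypothetical manifest: one microservice, packaged in its own container,
# handed to Kubernetes to orchestrate (scheduling, scaling, self-healing).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: catalog                # illustrative microservice name
spec:
  replicas: 3                  # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: catalog
  template:
    metadata:
      labels:
        app: catalog
    spec:
      containers:
      - name: catalog
        image: example.com/catalog:1.0   # the service in its own container
        ports:
        - containerPort: 8080
```

If a container or node fails, Kubernetes notices the replica count has dropped and starts a replacement, which is the self-healing behavior discussed below.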
What we're trying to do here is build a map of cloud native and say that there is a destination we're all trying to get to, but there are actually multiple paths that take you there. The projects in green here, so Kubernetes, rkt, containerd, CNI, CoreDNS, and others, are the ones that we're hosting at CNCF. We're not saying these are the only way to be cloud native, but we are saying that we've picked a very good group of technologies that we're sure will work well together and can meet the needs of enterprises and startups. Okay, so why are people making this journey? One of the biggest reasons is to avoid vendor lock-in: open source software enables deployment on any public, private, or hybrid cloud. You can download the software, you can use it yourself, and there are multiple vendors that can support you, so you can switch from one to another if you want to. There's also a really extraordinary level of scalability. Kubernetes evolved from Google's experiences with their internal container system, Borg. That system launches more than 2 billion containers per week. That's 3,300 per second on average, and of course at peak it's much, much higher. So the system is designed to support thousands of nodes and to be self-healing. My favorite example of this, and what finally got my sons to take my job more seriously, was that Pokemon Go runs on Kubernetes, and I've heard that it took up to 60,000 nodes at peak demand. Cloud native is also about increasing agility and maintainability: the idea that you split up your application, and each of the parts is separately described, in a way that lets your team scale much better. And there's the concept of orchestration, that you can improve efficiency and resource utilization by dynamically managing and scheduling these microservices. That also allows you to have a really extraordinary level of resiliency.
An individual container can fail, or a machine, or even an entire data center, and as your demand goes up and down, you can adjust dynamically to that. So how does that impact companies? This is a statistic from Puppet's State of DevOps Report: high-performing cloud native architectures tend to have 200 times more frequent deployments, more than 2,000 times shorter lead times, a lower failure rate, and much faster recovery from failures. So I think the takeaway from all this is that if you're building a new application from scratch, a greenfield application, cloud native application architecture is the way to go. And in particular, I would say the leading choice for cloud native orchestration is Kubernetes, which has been selected by a large number of companies, is backed by this extraordinary group of members, including many here, and is one of the highest-velocity development projects in the history of open source. So we should be done, but now I'm going to read one of my favorite quotes, from John Maynard Keynes: "In the long run we are all dead. Economists set themselves too easy, too useless a task if in tempestuous seasons they can only tell us that when the storm is long past the ocean is flat again." And I just love that phrase, too easy, too useless a task. Because I want to make the argument that there actually aren't that many greenfield applications out there; the real world consists of brownfield applications. If you look at the gross world product, that is, all the GDPs from all the countries added together, it's $100 trillion, and essentially all of that flows through brownfield applications. And we use this term monolith to talk about a big, consolidated application, often a large Java or other kind of legacy application, that's very hard to evolve. So just a quick question for the audience: how many folks have seen the movie 2001?
Yeah, see, with the younger crowd I need to do more of a Star Wars reference or something, but it popularized this concept of a monolith, this big imposing shape. And the idea is that nearly all production applications in use today are monoliths, so they're really the opposite of cloud native. That raises the question: well, let's just rewrite it. There's a famous book from more than 40 years ago, The Mythical Man-Month, that shows that that almost never works. It coined the term second-system effect, and it says that many rewrites end in failure because the first system keeps evolving even as you're trying to replace it, and sometimes that first system evolves faster and you can never catch up. Okay, so monoliths are the antithesis of cloud native: they're inflexible, they're tightly coupled, they're brittle. So how can we evolve them? Step one is, if you're in a hole, stop digging. The idea is to try to stop adding significant new functionality to your existing monolith. Then there's this concept of lift and shift. The idea is that whatever your legacy application is, you actually can containerize it. People think of containers as these very small, agile kinds of things, but you can take a Java application that requires 8 gigabytes of RAM and wrap a container around it. Another fascinating example is Ticketmaster, where they have code that still runs on a PDP-11, and they were able to get a PDP-11 emulator running inside a Docker container in order to containerize that legacy application. With Kubernetes, there's a specific technology, StatefulSets, which were formerly known as PetSets, that give a container a stable identity and storage and let you pin it to a particular piece of hardware to make sure it has adequate performance. Okay, and now, and this is really the key thought of the talk, you start chipping away at the monolith.
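As a hedged sketch of that lift-and-shift idea, a StatefulSet can give a containerized legacy service a stable identity, persistent storage, and placement on a particular class of hardware. All names, sizes, and labels here are hypothetical, not from the talk:

```yaml
# Illustrative sketch: running a containerized monolith as a StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: legacy-app             # hypothetical name for the containerized monolith
spec:
  serviceName: legacy-app
  replicas: 1                  # a monolith typically runs as a single instance
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      nodeSelector:
        disktype: ssd          # pin to a class of hardware for performance
      containers:
      - name: legacy-app
        image: example.com/legacy-java:1.0   # hypothetical image
        resources:
          requests:
            memory: "8Gi"      # the talk's example: a Java app needing 8 GB of RAM
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi       # stable per-replica storage survives rescheduling
```

The volumeClaimTemplates section is what distinguishes this from a plain Deployment: each replica gets its own persistent volume and a stable network identity.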
So Ticketmaster, as I mentioned, has this challenge where essentially every time they put tickets on sale, they're launching a distributed denial-of-service attack against themselves, because so many people are coming in. What they needed was a set of front-end servers that could scale up and handle that demand. Rather than trying to write that in their legacy application, they put that new technology in a new piece. Now, sometimes chipping isn't enough; you actually need a chainsaw to cut away some of the original functionality, or for the new pieces that you're going to write. For example, if you want some new user-facing functionality, maybe that's a Node.js application that you put in front of it. If you have a particularly performance-sensitive task, maybe you write that in Go. You're still making API calls back to your existing legacy monolith, but the new functionality can be written in more modern languages by different teams that can work with their own sets of libraries and dependencies, and that starts splitting up the monolith. KeyBank had very good success putting Node.js application servers in front of their legacy Java application in order to handle mobile clients. Now, a key thought is that the highest value today from cloud native is with stateless services: application front-end servers where you need resiliency, load balancing, and auto-scaling. An interesting example here is Wikipedia, where they are taking their MediaWiki PHP application servers and putting those into Kubernetes, but their data store, which is a massive MySQL database, is remaining on a bare-metal server, because there isn't a good enough story today to justify moving it to cloud native. And the idea is that when you are eventually ready to transition your data stores, it's still challenging today to run traditional systems like Postgres or Redis in a cloud native way.
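For the stateless front-end case, where demand spikes like Ticketmaster's on-sale events need automatic scaling, Kubernetes can adjust the replica count for you. The talk doesn't show configuration, so this is a hypothetical HorizontalPodAutoscaler sketch with illustrative names and thresholds:

```yaml
# Illustrative sketch: autoscaling a stateless front-end tier on CPU load.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ticket-frontend        # hypothetical front-end service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ticket-frontend      # the Deployment whose replicas are scaled
  minReplicas: 5               # baseline capacity
  maxReplicas: 500             # burst capacity for on-sale spikes
  targetCPUUtilizationPercentage: 70   # add replicas when average CPU exceeds 70%
```

Because the front ends are stateless, any replica can serve any request, which is exactly what makes this tier the easiest first target for cloud native, while the stateful database stays behind.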
There are some interesting cloud native database solutions, like CockroachDB and Vitess, which came out of YouTube. If you're in the public cloud, there are also a number of hosted data services, like Amazon RDS and Google Cloud Spanner and others. But the idea is that this really should be the last thing you transition to a cloud native architecture. And then I would make the argument that you should consider a constellation of complementary projects, such as the ones within CNCF. When you're in a cloud native environment, some of the biggest priorities are monitoring, tracing, and logging: Prometheus for monitoring, OpenTracing for tracing, and Fluentd for logging, which was developed here in Japan, the majority of its developers are Japanese, and we had the Fluentd talks yesterday. Then I'll just mention a few others. Linkerd is a service mesh to support more complicated kinds of routing. gRPC is an extremely high-performance API system that can replace JSON/REST. CoreDNS is a service discovery platform. containerd and rkt are both container runtimes that were recently added; containerd is the upstream runtime that's used in Docker. And finally, the Container Network Interface, CNI, is an architecture for network plugins to support more complicated network architectures. And so eventually, when you've chipped away enough, you can evolve your monolith into a beautiful microservice. Now, Grover Norquist in the United States, who's always trying to lower taxes, has a phrase that he wants to get government small enough that he can drown it in a bathtub. Maybe your goal is to eventually kill off your monolith; more realistically, it's going to stay around forever, but hopefully you can evolve pieces off of it and even have a beautiful collection of different microservices that are all connecting together and being orchestrated in a single system. So when you think of Kubernetes and the kinds of architectures it can work with, I really want to emphasize this concept.
There's a term, the soft bigotry of low expectations: you shouldn't think, oh, I need to do a greenfield rewrite in order to get the benefits of cloud native. The big message here is that Kubernetes loves brownfield applications, and there is an evolution path that almost every enterprise and company out there should be on. So if you download this presentation later or take a picture of this, these are some detailed articles on a few of the companies I mentioned, and there are a lot more case studies on kubernetes.io that talk about these same themes. And finally, I wanted to invite everyone to the big event that we're going to be having in Austin. This is actually going to be one of the largest events of the Linux Foundation.