So hello, everyone. It's good to see folks out in the audience, and good to have folks make it to a fairly early keynote. Today I'm going to talk a little bit about some updates from the Cloud Native Computing Foundation, which I helped form a couple of years ago when I formally joined the Linux Foundation. To give you a little history, the CNCF was formed in mid-2015 with the simple mission of bringing a form of computing called cloud native computing to the world. When I say cloud native, I'm referring to the type of computing pioneered by internet-scale companies such as Google, Facebook, and Twitter: microservices-based architectures running in containers, orchestrated by some central process over hundreds of thousands of machines. That's the type of computing I mean when I say cloud native computing. When we were founded in mid-2015, we started with a humble set of 22 members and a very basic mission statement. At the time, Google also seeded the foundation with Kubernetes, which I'm sure some of you may have heard of. Yeah, everyone knows what Kubernetes is, I'm assuming. You'd be surprised: a couple of years ago, not too many people knew what Kubernetes was. So we started with 22 members, and I'm happy to announce that roughly two years later, we've had some great growth, basically over 5x growth in membership within CNCF. Today, we have about 118 members, and I'm super stoked to have all these companies involved in trying to move the cloud native mission forward.
And if you step back a little, I think this is really the first time in the history of open source, especially in the infrastructure space, that we have the top five leading public cloud providers in the world sitting at the same foundation table, driving cloud native forward. So I'm super stoked to have that happen. For those of you who were paying attention to the previous slide, we do have new members that occasionally come and join the CNCF family, and today I would like to announce one of those. I'd like to welcome Jon Mittelhauser from Oracle to talk a little bit about what's going on on their side of the world in cloud native land. So welcome, Jon.

Thanks, Chris. As Chris said, we have joined CNCF, so the announcement is fairly self-explanatory. I am VP of engineering of a group within Oracle whose charter is to drive cloud native development and all the developer tools. So we're very active; we've been active for a while, we've been kind of unofficial members, and we're proud to announce that we are now officially a Platinum member. And I am on the governing board, along with everybody else who was up on that stage. We have a couple of announcements today of specific products, and more coming that we'll announce soon. I'll go through these here. I think the obvious question is why we joined, and I think the answer is fairly obvious: we believe strongly that developers need an open, cloud-neutral, community-driven, container-native technology. Avoiding cloud lock-in and enabling hybrid deployments between cloud and on-prem obviously gives you the technical and business flexibility you're looking for. We've been embracing this for a while, and CNCF is clearly the main hub for it. Kubernetes is an anchor of some of the work we're doing, along with a bunch of the other technologies, coming soon. So today, specifically, we're announcing two products, or two releases.
The first is a Kubernetes installer: Terraform scripts for Oracle Cloud Infrastructure. What we've been calling bare metal cloud has been rebranded as Oracle Cloud Infrastructure. It's fairly self-explanatory: Terraform scripts that create highly available Kubernetes control planes, clusters, et cetera, on bare metal. So for existing customers, it's an easy way to get up and running with Kubernetes on Oracle's bare metal cloud. Secondly, we're announcing Kubernetes on Oracle Linux for public cloud, private cloud, and on-prem. For customers running Oracle Linux, it makes Kubernetes installation trivial; it's freely available to download, and we specifically give premier support when you deploy it on our cloud infrastructure. These are mostly teasers, though. We've been active in a bunch of these communities; those of you who've been in the working groups have seen us start to show up. We have a lot of contributions in the pipeline around security, federation, networking, and conformance; I was in the Kubernetes conformance working group yesterday. As many of you know, we have our big conference coming up in three weeks, at the beginning of October, so look for some major announcements at Oracle OpenWorld and JavaOne. We'll also be at KubeCon as sponsors. We're very happy to be part of this community, and it's not just Kubernetes for us: my group is also leveraging a bunch of other technologies, such as Prometheus, OpenTracing, gRPC, and some of the things that are going to get announced right after this. So, happy to be part of the community. If you have questions, follow up with me afterwards. And I'll hand it back to Chris.

Thank you, Jon. All right. So for those of you who are counting, with Oracle now officially part of CNCF, that makes the top six leading public cloud providers in the world. Members are great, but the real lifeblood of CNCF is its projects, and honestly, they're my favorite part of CNCF.
So today we have 10 amazing projects in CNCF. Kubernetes, our seed project, is obviously one of our most popular, but we also have projects in the spaces of monitoring with Prometheus, tracing with OpenTracing, logging with Fluentd, container runtimes with containerd and rkt, and so on. We have a lot of these awesome projects that we've mapped out and brought into CNCF over the last couple of years, and to me it's really amazing to see how things have grown. As I mentioned earlier, two years ago not many people really knew what Kubernetes was, and the amount of adoption Kubernetes has seen in the industry over the last couple of years is phenomenal. I'd like to point out one crazy thing; I don't know if people saw this. A couple of weeks ago, GitHub, which is where the Kubernetes source code lives, announced that GitHub itself is now running on Kubernetes. So Kubernetes is, honestly, effectively hosting itself, which to me is absolutely amazing. They also wrote a great blog post on how they moved their infrastructure and how they got there, which I highly recommend you read. It's super, super amazing, in my opinion. Another thing we've done at CNCF, since we're all about our projects and cloud native, is this landscape diagram that people either love or hate. But I think it's really important to show that there are many aspects of cloud native, and many ways for your company to get to cloud native. Obviously, we think that CNCF projects are battle-tested and generally well-proven, but those are essentially a set of recommendations: a guide for how you might get to cloud native, with a list of projects sponsored and blessed by CNCF. For those of you who look a little deeper at the diagram, there are sections where projects are missing that aren't necessarily part of CNCF.
So I'm happy to announce today that we're going to fill the gap in some of these areas, and we're going to hear from some new projects in a little bit. First, I'd like to announce that the Envoy project will be coming into CNCF as our 11th project. And to learn a little more about exactly what Envoy is, I'd like to invite Chris Lambert, the CTO of Lyft, onto the stage to tell us more.

Thank you, Chris. Thanks for having me; it's great to be here. I'm here to talk about Envoy, but first I want to talk a little about the origin story: how we started development of Envoy and the problem it was solving for us. Going back to 2012, Lyft was just a small startup, a handful of engineers all working together on the same code base, the same service. It was a relatively straightforward system: you hit a button, someone comes and picks you up, and takes you wherever you want to go. That worked great in 2012, but our growth was pretty crazy, in the number of engineers, the number of rides, and the number of markets, and we quickly outgrew this single-service model and started moving away from our monolith to a microservice architecture. Initially, this worked out really well, but we started to run into some pain points. In 2012 we were doing a few thousand rides a week; in 2017 we're doing over a million rides per day. So pretty massive growth, and along with it we've seen pretty massive growth in our service infrastructure as well. What started as one service, where everything lived in the same code base and was deployed together, is now a set of 200-plus microservices, all communicating with each other over the network. And one of the pain points we encountered was that when something goes wrong, it's not obvious where the problem is. It could be the physical network, the virtual network, your upstream service, or your downstream service.
We spent a fair amount of time trying to debug problems in the microservice architecture without knowing where the failure points were. We ended up in a scenario where there were certain key pieces of functionality we wanted to keep in the monolith, simply because we couldn't reliably trust the network as we moved functionality into these microservices. That's obviously not ideal, and pretty prohibitive to what we were trying to do. So we set out to solve this problem, but it wasn't straightforward. We looked at the open source community and asked: what are the options out there that could work for us? And we didn't really see a great fit. One thing that's unique about Lyft is that when you set your pickup, request a ride, and go through that process, you're actually talking to 100 different services under the hood, all written in different languages and run by different teams. We really wanted a consistent set of metrics and reliability monitoring across all these systems as we moved things out of the monolith. So our fundamental guiding philosophy, as we set out to solve the problem, is that the network should be transparent to applications. We don't want you to worry about what's happening in the network; we want it to just work, and if it doesn't, we want to make it really obvious to the developer where the problem is. So with that, we built Envoy. This is a really lightweight architecture diagram just to show how it works: we built Envoy as a sidecar that sits alongside all of your application servers. For us, it's also our front proxy, handling all incoming connections from the mobile clients. But it also sits alongside every upstream and downstream service and handles all inter-service communication. And what that does is give us incredible insight into how services are performing and where problems are.
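The sidecar model described here can be sketched in a few lines. The following is a toy Python illustration, not Envoy itself (Envoy is a standalone C++ proxy that runs out-of-process and is driven by configuration); all class and field names are made up. The point it demonstrates is that because every outbound call flows through the local sidecar, request volume, latency, and error accounting are uniform no matter what language the upstream service is written in:

```python
import time

class Sidecar:
    """Toy stand-in for a sidecar proxy like Envoy (illustrative only).

    The application hands every outbound request to its local sidecar,
    so latency, volume, and error stats are collected in one place,
    uniformly across upstreams written in any language."""

    def __init__(self, upstream):
        self.upstream = upstream  # callable standing in for a remote service
        self.stats = {"requests": 0, "errors": 0, "total_ms": 0.0}

    def call(self, request):
        self.stats["requests"] += 1
        start = time.monotonic()
        try:
            return self.upstream(request)
        except Exception:
            self.stats["errors"] += 1  # failures surface at the proxy layer
            raise
        finally:
            self.stats["total_ms"] += (time.monotonic() - start) * 1000

# The app only ever talks to its local sidecar, never the network directly.
sidecar = Sidecar(lambda rider: {"ride": "assigned", "rider": rider})
response = sidecar.call("alice")
print(response, sidecar.stats["requests"])
```

Because the stats live in the proxy layer rather than in each application, every service gets the same observability for free, which is the insight the diagram conveys.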
It's also given us a powerful platform to build on top of, beyond just service-to-service communication. We've been able to instrument our data stores, whether it's DynamoDB or MongoDB or Redis, using the same infrastructure, to get the same insights on performance, which is pretty amazing. There's a lot more to say about Envoy, but briefly: we started work on it in 2015, and about a year ago today, we open sourced it. It's been a pretty amazing year. We open sourced it thinking maybe we'd get some other startups excited about it and they'd help us maintain the project, and within a year, we've seen some pretty massive adoption. Everyone here is actively using or contributing to Envoy, and we now have more people at companies like Google actively developing Envoy than we do in-house at Lyft. So, pretty amazing, and definitely not something we take for granted. We've realized that Envoy is now much bigger than Lyft, and there's a lot at stake here. So we're really excited to announce our donation to CNCF today, to make Envoy part of this community and really help move the industry forward. We're really excited to make that official, grateful for all the support CNCF has given us to date, and really looking forward to partnering with our peer projects over the next few years. One last thing I'll leave you with: Envoy has been really instrumental in our work on the cloud to date, but we have big plans at Lyft. We just started a new Level 5 engineering center in Palo Alto, focused on autonomous vehicle development, and Envoy has already proven pretty helpful there. We've got big plans, so if that's interesting to you, definitely come talk to us. We'd love to have your help, and we're hiring all across the country. So thanks again for having me, and I look forward to the next few years.

Awesome. Thank you, Chris. Super exciting; I'm a huge fan of Envoy. So, I mentioned that we're going to have a couple of projects.
First, we had Envoy come up, and now we have another major announcement today. I would like to announce another project coming into CNCF, called Jaeger. It has a very cute logo; I'm a huge fan of that logo, and we'll see how it looks on the CNCF website. I would like to have Yuri from Uber come up on stage and talk a little bit about Jaeger and what it's all about. So please welcome Yuri Shkuro from Uber.

Hi, everyone. Thank you for having me. I'm very excited to present Jaeger. I'm a staff engineer at Uber; I've been there about two years, which is about when we started Jaeger. My story will be very similar to Lyft's in terms of how the project came to be; it just solves a slightly different problem, but came from the same kind of background. As you probably all know, Uber has seen insane growth in the past two years. I remember that not long ago we hit a milestone of 1 billion completed trips on Uber; a few months ago we hit 5 billion, and now we're heading towards 10 billion trips. And it's not just trips; there are other business lines, like Uber Eats for food delivery and UberRUSH for package delivery. All that growth meant building a lot of new features in the system, which created a lot of complexity, and at the same time we needed the system to be scalable and reliable, and we needed to keep developer velocity up. Fortunately, containers and microservices were invented, our engineering organization embraced that paradigm, and we moved into that space. As a result, today Uber's architecture contains several thousand microservices. This is just a segment of the dependency graph, generated by Jaeger. Every time you take a trip, similar to what you saw on the Lyft slide, and in fact even while you're on the ride, the application is constantly talking to the backend.
Every few seconds there is communication going on, with transactions happening across a whole bunch of microservices, whether it's to get a new route update or a new ETA, or because you're picking up another rider on a Pool ride. All those transactions hit a whole bunch of different services, and this happens billions of times a day. To keep that system working despite its complexity, we have to monitor it in a very robust way. At Uber, when we started doing very intensive monitoring, we had to build our own metrics system, because nothing open source or commercial could scale. But while metrics are great for monitoring, if you think about it, each metric tells a small story about one single node in the architecture, one of those nodes in the diagram. They don't give you the whole picture; they don't tell you anything about the transactions happening across the architecture. And that's where Jaeger comes in. Jaeger is a distributed tracing system, which essentially traces transactions, and it lets us solve other problems as well. When something goes wrong, we can look at traces and deep dive, following the path of the transaction, and usually find the root cause of what's wrong with a particular request. Or, if things are slow, we can analyze the critical path and find out why; sometimes the system can point you to the root cause directly. The service dependency diagram I showed before is very useful for developers just to understand what this whole architecture is doing; no one can keep several thousand microservices and all their interactions in their head. So it's very important. And all of those functions are actually made possible by the generic context propagation layer that the Jaeger client libraries provide.
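The context propagation mentioned here is the heart of any Dapper-style tracer. The following is a minimal sketch of the idea in Python; the class, method, and header names are invented for illustration and are not the Jaeger client API (real Jaeger clients implement the OpenTracing interfaces):

```python
import uuid

class Span:
    """One operation within a distributed trace (illustrative, not Jaeger's API).

    Every span in one transaction shares a trace_id; parent_id links the
    spans into a tree, which is what a trace view renders."""

    def __init__(self, operation, trace_id=None, parent_id=None):
        self.operation = operation
        self.trace_id = trace_id or uuid.uuid4().hex
        self.span_id = uuid.uuid4().hex
        self.parent_id = parent_id

    def inject(self, carrier):
        # Serialize the trace context into a carrier, e.g. outgoing HTTP headers.
        carrier["x-trace-id"] = self.trace_id
        carrier["x-span-id"] = self.span_id

    @classmethod
    def extract(cls, operation, carrier):
        # The receiving service continues the same trace as a child span.
        return cls(operation, trace_id=carrier["x-trace-id"],
                   parent_id=carrier["x-span-id"])

# A ride request fans out: the ETA call carries the rider's trace context.
root = Span("request-ride")
headers = {}
root.inject(headers)
child = Span.extract("compute-eta", headers)
assert child.trace_id == root.trace_id and child.parent_id == root.span_id
```

Because every service in the call chain injects and extracts the same context, all the spans of one transaction can later be stitched back together, which is what makes cross-service root cause analysis possible.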
This is a feature that has always been in tracing, but somewhere in the background; we've built a lot of features on top of it. A bit of history: Jaeger was inspired by Google's Dapper paper and the OpenZipkin project, which came out of Twitter. We started about two years ago, and we open sourced it in the spring, so it's a fairly young project. We built it with OpenTracing support from the beginning; we were actually one of the founding members of OpenTracing, another CNCF project. In terms of technology, we're a Go shop in New York City, so most of the Jaeger backend is written in Go. We support pluggable storage: we've implemented Cassandra and Elasticsearch backends, and we're working on others if people are interested, such as InfluxDB and maybe more. The frontend is in React. The instrumentation libraries are what most people are actually exposed to, because they go into your applications, and OpenTracing is the standard we embraced early on. Our community has been growing since we open sourced. We were lucky to get a partner in Red Hat; they've already contributed a lot of features to Jaeger, and between us and Red Hat we have about 10 full-time developers working on Jaeger all the time. We also have a number of contributors on GitHub from other companies, and many other companies are already using Jaeger today. This is a screenshot of the Jaeger frontend, showing a sample trace view: on the left you see the hierarchy of the calls, the call graph; the time sequence diagram at the bottom covers one of the segments and shows details about what a particular service, or a particular operation within a service, is doing. These are the kinds of things that help you with root cause analysis, by drilling down into the execution of transactions across the stack. At Uber, Jaeger has been integrated with over a thousand services today.
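The pluggable storage mentioned above boils down to coding the collector against a small interface. Here is a hypothetical sketch of that design in Python; Jaeger's actual storage interfaces are written in Go and differ in detail, though it really does ship Cassandra and Elasticsearch implementations behind a similar abstraction:

```python
from abc import ABC, abstractmethod
from collections import defaultdict

class SpanStore(ABC):
    """Hypothetical span-storage interface (names invented for illustration).
    Real Jaeger backends such as Cassandra and Elasticsearch plug in behind
    a comparable abstraction in Go."""

    @abstractmethod
    def write_span(self, span):
        ...

    @abstractmethod
    def find_by_service(self, service):
        ...

class InMemorySpanStore(SpanStore):
    """Trivial backend, the kind used for tests or local development."""

    def __init__(self):
        self._by_service = defaultdict(list)

    def write_span(self, span):
        self._by_service[span["service"]].append(span)

    def find_by_service(self, service):
        return list(self._by_service[service])

store = InMemorySpanStore()
store.write_span({"service": "eta", "operation": "compute", "duration_ms": 12})
print(store.find_by_service("eta"))
```

Keeping the storage behind an interface like this is what lets a deployment swap Cassandra for Elasticsearch, or add a new backend such as InfluxDB, without touching the rest of the pipeline.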
We use it for the obvious tracing features, like root cause analysis and dependency analysis, but as I mentioned, we've also built a lot of functionality on the general context propagation facility Jaeger provides. We're now also focusing a lot on data mining: actually getting insight from tracing data in aggregate, rather than just looking at individual traces, because that tells you a lot about how your architecture is behaving when you have so many services. The Jaeger roadmap you can find online, so I'm not going to go through it; some of these things are already in progress, some will be coming up, and we welcome help and suggestions for other things people want to see in the project. We're definitely very excited to join CNCF. The CNCF mission is to provide a set of standard tools for people deploying cloud infrastructure, and we want Jaeger to become one of those standard tools. I firmly believe that in any moderately complex microservices architecture, you cannot survive without distributed tracing; you simply don't have the visibility into what your application is doing. We're also looking forward to integrating with other CNCF projects: we already run on Kubernetes, we get traces from Envoy, and we can send metrics to Prometheus, but we're looking for more integrations as well. And as I mentioned, all contributions are welcome. Uber is still growing, and our engineering team in New York is happy to see more people, so if you're interested in what we're doing, come talk to me. Thank you.

All right, thank you, Yuri. Distributed tracing is amazing, and to me it's really great to now have 12 projects as part of the CNCF family. Just now, live, we went back and updated that wonderful landscape diagram, so Envoy and Jaeger are now part of it. I'm so glad to have these projects in CNCF. I love all of our projects equally; I've actually been taking an Uber and a Lyft on a rotating basis all week.
So it's been great to have these two projects involved with us. I want to end by mentioning that we have a big conference coming up in Austin in December, which is my hometown, which is great; it's great to not have to travel for an event. Woo-hoo! So hopefully some of you can come to Austin, attend KubeCon + CloudNativeCon, learn a little more about the different CNCF projects we have out there, and come meet people and learn about the future of cloud infrastructure. Thank you very much for your time, and I'm going to hand it back to Jim to move us forward through the day. So thank you very much. Thank you.