Hi, everyone. I'm Constance Caramanolis, and I'll be doing your CNCF project updates today. Before we get started, I want to talk a little bit about something we're changing this time; I'm trying to see how it goes. As you'll notice during this KubeCon + CloudNativeCon, there's an increased focus on telling end-user stories, and we're going to continue that theme here today during project updates.

Let's get started. The first project we're going to talk about is Falco. Falco is a runtime security project. It consumes signals from the Linux kernel and from container management tools such as Docker and Kubernetes. Falco parses these signals and asserts them against security rules; if a rule is violated, an alert is triggered.

Now let's look at some of the cool statistics from the past year. One: there have been over 700,000 downloads of their kernel module in 10 days. That's an amazing number. Sixty new contributors this year, and these contributors come from companies like IBM, Covio, Shopify, AWS, AppDynamics, and InnoTeam. A really wide variety of contributors; that's amazing. There are 12 adopters and many independent users.

Now, what have been some updates to the project? One is user-space driver additions, such as support for Fargate workloads using pdig. Support for ARM64 and ARMv7; yes, that means lower-energy processors. Stability and performance improvements to the Falco rules engine. Additional community integrations such as Falcosidekick, and a growing number of others. And there have been user adoption improvements, such as improved Helm charts, falcoctl (I'm going to need someone to correct my pronunciation afterwards), the Falco exporter for Prometheus, and improved event generators. I really like the call-out here about user adoption improvements. It's always about making sure that it's easy for people to get on board, and this is really wonderful to see.
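Falco's actual rules are YAML documents with a filter-expression condition language; purely as an illustration of the flow just described (parse a signal, assert it against rules, trigger an alert), here is a toy Python sketch. The event fields, rule, and matching logic are invented for this example and are not Falco's real engine:

```python
# Toy illustration of the Falco idea: match incoming runtime "events"
# (syscall-like signals) against security rules and raise alerts.
# Real Falco rules are YAML with a filter condition language; this
# Python version is a simplified stand-in for one well-known rule.

def shell_in_container(event):
    """Rule condition: a shell process spawned inside a container."""
    return event.get("container_id") != "host" and event.get("proc_name") in ("bash", "sh")

RULES = [
    {"name": "Terminal shell in container", "priority": "WARNING",
     "condition": shell_in_container},
]

def evaluate(event):
    """Return an alert message for every rule the event violates."""
    alerts = []
    for rule in RULES:
        if rule["condition"](event):
            alerts.append(f"[{rule['priority']}] {rule['name']}: {event}")
    return alerts

# A shell on the host is fine; the same shell inside a container alerts.
print(evaluate({"container_id": "host", "proc_name": "bash"}))    # no alerts
print(evaluate({"container_id": "c1a2b3", "proc_name": "bash"}))  # one alert
```

The real engine evaluates conditions over kernel-sourced fields (process, file, network, container metadata) rather than plain dictionaries, but the shape of the loop is the same: every signal is checked against every loaded rule.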
In terms of end users, I will not be able to do justice to Shane Lawrence and Kris Nova. If you didn't have a chance, I highly suggest you watch their keynote from KubeCon EU. Shane Lawrence comes from Shopify, and he describes how Shopify uses Falco. To give you the TL;DW (too long, didn't watch): Shopify uses Falco and its rule sets in production for detection and prevention. It does things such as detecting data egress (for example, coin mining), metadata server abuse, and remote code execution. Please check out their keynote; they give a much better description of how things are done. Thank you Shopify, Shane Lawrence, and Kris Nova for the KubeCon EU keynote, and thank you, Falco.

On to our next project: Thanos. Thanos is a Prometheus-based scalable time series database that provides a global query view, high availability, and data backup with cheap historical access as its core features, all in a single binary. It is three years old. They have 281 contributors, and, a direct quote from the maintainers, they are very grateful for that. Thank you, contributors. There are 6,500 stargazers. In 2020 they focused on mentoring: they had nine students, from programs like CNCF Community Bridge and Google Summer of Code, and they had a really positive impact on the project. I'll mention one of their additions in a minute. And they have high commit velocity: over the past year they've been averaging six commits per day. This is amazing. There's a chart here on the right that I want everyone to see. It's just great work, everyone.

What about some updates? We need to give Thanos a round of applause, that virtual round of applause in Slack, everyone: it reached incubation status in April 2020. Congratulations, Thanos community. That is great work. They've added support for native response caching in collaboration with the Cortex project. I want to highlight collaboration with other projects.
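One way to picture the global query view mentioned above: Thanos fans a query out to many Prometheus instances and deduplicates the answers coming back from HA replica pairs. The following toy Python sketch shows only that merge-and-fill-gaps idea; it is not Thanos' actual penalty-based deduplication algorithm, and the sample data is invented:

```python
# Toy sketch of merging one series scraped by two HA Prometheus
# replicas into a single "global" view. Gaps in one replica (for
# example, during a restart) are filled from the other replica.
# Thanos' real deduplication is more sophisticated than this.

def merge_replicas(replica_a, replica_b):
    """Each argument maps timestamp -> sample value for the same series.
    Replica A's sample wins wherever both replicas scraped a timestamp."""
    merged = dict(replica_b)   # start from B's samples
    merged.update(merged_a := replica_a)  # overlay A on top
    return dict(sorted(merged.items()))

# Replica A missed the scrape at t=20; replica B fills that gap.
a = {10: 1.0, 30: 3.0}
b = {10: 1.0, 20: 2.0, 30: 3.1}
print(merge_replicas(a, b))  # {10: 1.0, 20: 2.0, 30: 3.0}
```

The point of running the pair at all is high availability: either replica can disappear and the merged view stays complete, which is exactly the property the global query view preserves across the whole fleet.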
This makes me really happy to see. You know, we are all providing solutions for the cloud native world, and seeing this collaboration just makes me really happy. They've added new React UIs to all components; this is also a really big thank-you to the community and to the mentees. They focused on analytics and OLAP integrations through the Obslytics community subproject. They offer full multi-tenancy, which allows for flexible deployment of various read, storage, and write tenancies. And they removed the limitation, inherited from upstream Prometheus, of single-CPU PromQL concurrency; queries are now multi-core, which has sped up querying. Thank you, Thanos.

Let's talk about Rook. Now we're going to have to give Rook a round of applause: they graduated earlier this year. Rook provides an open source storage solution for Kubernetes through operators and CRDs. It automates management of storage providers including Ceph, NFS, Cassandra, CockroachDB, and YugabyteDB. There are over 7,800 stargazers and 295 contributors; I want to say again, thank you to the Rook community and maintainers. Over 180 million downloads. That's amazing. And most recently there's the release of v1.5, which contains improvements such as support for Ceph stretch clusters, encryption with KMS, and mirroring block data across clusters.

Now, the next end user I'm really excited about, mostly because we don't really hear so much about research and education use cases. The Pacific Research Platform project is a partnership across many, many institutions, both at the government level, with the National Science Foundation and the Department of Energy, and across multiple research universities in the United States and around the world. PRP aims to create a seamless research platform that encourages collaboration on a broad range of data-intensive fields and projects, naturally using Kubernetes; it was a good fit for that. So they run a Kubernetes cluster.
It's mostly located in California, but also in other parts of the country, and they have some nodes in the Asia-Pacific region and in Europe. All of this is connected with a fast 10-100G research network. Now, this cluster incorporates four Ceph clusters managed by Rook: two petabytes in the US Western region, 500 terabytes in the US Eastern region, an 8-terabyte SSD-only local cluster for a video wall, a 130-terabyte NVMe-only local cluster, and a pool soon to be added in the Asia-Pacific region. The following is a direct quote from PRP about how Rook has enabled them to be successful: "Rook is tremendously helping us to deploy reliable, fast storage for scientific users. Rook makes administration of large Ceph installations easier and removes multiple node-management tasks from personnel, which helps reduce costs and increase efficiency. Data storage is a key component of scientific computations, and Rook does it perfectly for us." Thank you PRP, and thank you Rook, for sharing.

On to the last project we're going to talk about today: Vitess. Vitess is a database solution for deploying, scaling, and managing large clusters of open source database instances. It supports MySQL and MariaDB. It is architected to run as effectively on public or private cloud architecture as it does on dedicated hardware, and it combines and extends many important SQL features with the scalability of a NoSQL database. Some of the problems it tries to solve are scaling a SQL database by sharding it, migrating from bare metal to a private or public cloud, and deploying and managing a large number of SQL database instances. The last fact, which I knew about before: Vitess was actually running on Kubernetes at YouTube before Kubernetes 1.0. It's been around for a while; it's awesome. In terms of some updates, there have been four major releases this year.
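To picture the sharding approach mentioned above: Vitess routes each row to a shard via a keyspace ID computed from a sharding key (its "vindex" mechanism). Here is a toy Python sketch of hash-based shard routing; the hash function and shard count are illustrative stand-ins, not Vitess' actual implementation:

```python
import hashlib

# Toy sketch of hash-based shard routing, the core idea behind scaling
# a SQL database by sharding. Vitess maps rows to shards through a
# "vindex" over a keyspace ID; the hash and shard layout here are
# simplified stand-ins for illustration only.

NUM_SHARDS = 4

def keyspace_id(user_id: int) -> bytes:
    """Derive a stable keyspace ID from the sharding key."""
    return hashlib.sha256(str(user_id).encode()).digest()

def shard_for(user_id: int) -> int:
    """Map a row's sharding key onto one of NUM_SHARDS shards."""
    return keyspace_id(user_id)[0] % NUM_SHARDS

# Every query for a given user_id lands on the same shard, so a
# "SELECT ... WHERE user_id = ?" needs to touch only one backend.
print({uid: shard_for(uid) for uid in range(5)})
```

Because the mapping is deterministic, the query-routing layer can answer single-key queries from one shard while fanning scatter queries out to all of them, which is how a sharded fleet still presents itself to applications as one logical database.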
And the latest release, 8.0, is compatible with over 10 modern SQL frameworks, including WordPress and Ruby on Rails Active Record.

And the last end-user story: Slack. At the beginning of 2017, Slack began adoption of Vitess, and by August of 2017 it started serving production traffic. After completing a rearchitecture and migration of Slack's core product features by early 2020, roughly 70% of traffic was passing through Vitess. They had designed a framework to migrate one table at a time. And they were wondering: what about the remaining 30%? That was the crux of the problem. They had done the calculation that it would take three to five years for at least four engineers to migrate the remaining 30% of the legacy system, and that's a really long time. Now, they knew it was going to take a while, and so in 2019 they spent effort investigating alternative strategies for the migration. What they found was that by leveraging the replication functionality, they could migrate entire clusters to Vitess. This has resulted in, as of now, 99% of traffic at Slack using Vitess, and the migration will be complete by end of year. And just to share a direct quote: "We moved hundreds of terabytes of data over 1,000 MySQL shards with zero downtime, not a single outage. Equally important, it was transparent to our applications." Thank you Slack for sharing, and thank you, Vitess.

Now, before we cut off for a moment, I want to call out some other graduations: TiKV, TUF (The Update Framework), Harbor, and Helm. These are other projects that have graduated since we all met together in San Diego last year. Please give them a virtual round of applause; use those Slack emojis, please. Congratulations to the community, contributors, and maintainers: a lot of hard work, and it's well earned. And our last message of the day: please go to cncf.io/projects to get a full list of all the different projects that are supported.
And also, please leverage the Maintainer Track sessions to meet the maintainers. I'm sure you probably have a lot of questions, so please use this time to reach out, find out what's going on with the projects, and get answers to your questions. Thank you very much, everyone, and have a good KubeCon. Bye.