Okay. Thank you, everyone, for joining us today. Welcome to today's CNCF webinar: Welcome to Cloudland, an illustrated intro to the cloud native landscape. My name's Ariel Jitib. I'm a business development manager for cloud native at NetApp and also a CNCF ambassador. I'll be moderating today's webinar. And we'd love for you to welcome our presenter today, Kazlin Fields, developer advocate at Google. Before we get started, a few housekeeping rules. During the webinar, you're not going to be able to speak as an attendee. There's a Q&A box right at the bottom of your screen. Please feel free to drop your questions in there, and we'll get to as many of those as we can at the end. And a reminder: this is an official webinar of the CNCF, and as such it is subject to the CNCF's Code of Conduct. Please do not add anything to the chat or questions that would be a violation of the Code of Conduct. Basically, be respectful of all your fellow participants and presenters. Please note that a recording of this talk and the slides will be posted later today on the CNCF webinar page, at cncf.io/webinars. And with that, I'll hand it over to Kazlin to kick off today's presentation.

Hello everyone, and welcome. Welcome to Cloudland. So I suppose we'll jump right in. If you were on right at the beginning of the call, I had to change this slide to April 3rd from April 2nd. Time has no meaning anymore, so I hope that you all will have some fun with this. Things have been stressful, but this is going to be a fun presentation, and I hope you'll learn something new. So once again, I'm Kazlin Fields. I'm a developer advocate at Google, and I'm also a CNCF ambassador. I focus on cloud native, DevOps, and Kubernetes things. And I also create little comics and illustrations about these topics, which I post on my blog at kazlin.rocks, because I'm terribly humble. Kazlin.com was taken, so, you know.
So for the last several years, I think like five years, I've been the container person wherever I've worked. And for the last three or so, I have been the cloud native person wherever I've worked. Oh yes, and all of the illustrations you'll see in this, including the PowerPoint template, were drawn by me. Anyway, so I've been the container person, and now I'm the cloud native person. To give you a little background on why I'm giving this talk, or how this talk came about: when people talk to me about talking with customers, or they just want to learn more about what I do, a question I get asked is, "Tell me about cloud native." And when I first started getting this question, I would go, "Great, what part?" And for the most part, people wouldn't really have a part; they just wanted to know about cloud native. And I want to specify that this is a fantastic question. It really made me think about where the people asking me this question are coming from and what they need to know. When you're just starting out learning about cloud native, there's this huge world in front of you, and you need somewhere to start. So that's the goal of this presentation: I want to give you an introduction to what cloud native is, what some of the technologies are that you're going to need to learn as you start looking into adopting cloud native for yourself, and also why you would want to do that. And that story starts by talking about the cloud. If you're really familiar with the cloud, you may have heard some of this kind of thing before, but stick with me; it's all part of the story. So once upon a time, everybody had data centers, and a data center is a big box in the middle of nowhere that's filled with computers. I imagine it having lots of AC units on the outside of it, because having all those computers in one space gets kind of hot. So it takes up a lot of power to do this, and it takes up a lot of space to do this, but modern companies need a lot of computers to do business.
And that's not just tech companies; it's other companies too. A company that makes toilet paper is going to need these data centers. (Don't worry, we'll go deeper on the toilet paper later.) They need to use these computers to do things like payroll, manage inventory, manage logistics, all of that kind of stuff. So, like I said, this takes up a lot of power and a lot of space, and it all costs a lot of money. If you are a toilet paper company CEO, when you look at your budget and you see all of this stuff on it, it makes you angry. You feel more like a big-box-in-the-middle-of-nowhere CEO rather than a toilet paper company CEO. So that's where the cloud comes in. It allows these companies to have one kind of source for all of these computing needs and consolidates it into one box: the cloud. Make it somebody else's problem, basically. And it's important for people who have been in the space for a long time to realize that this is still happening today. I still meet with companies who are not in the cloud yet, who are just now considering it, who need to know why it's important. So, still relevant. And there are two main ways that companies move into the cloud: lift and shift, and then we start talking about cloud native. Lift and shift is where you take everything that was running in your data center and you just move it straight into the cloud. It's a really easy way to move things at the beginning. But the concept of cloud native is that the cloud is a unique environment, and it provides you unique capabilities and unique benefits. In order to use those benefits to your advantage, you need to adopt this new way of thinking about things, which we call cloud native. So that's an introduction to the term. If you think about it, in-house is running things in your data center, versus cloud native, which is running things using the best capabilities of the cloud. Running in-house is like making microwave popcorn.
Sure, it's great. It's popcorn. You can make it at home. It's easy. But it's not the same as movie theater popcorn, or popcorn you would get at a festival, like kettle corn, because you probably don't have a giant kettle sitting around that you can make kettle corn in. The cloud might have a giant kettle sitting around that you can just try out and use to do important things. So you're making use of different capabilities in the cloud versus in-house. Another important thing here: I'm going to be talking about a lot of open source software that you could run in a data center, and some people do. But a lot of it works really well with the unique capabilities and the unique environment of the cloud, and that's why we talk about it being cloud native. So if you're new to all of this, I want to suggest an exercise that you might try. You've probably done it before: just installing something on a local machine. How many steps does that take? And then think about if you had to do that 500 times. How annoying would that be? And how would you do it? Would you automate it all? Would you use certain tools to do it? So we've learned what cloud native is. And from that point, in comes the Cloud Native Computing Foundation. You're obviously at a CNCF webinar, so you probably already know about the CNCF, but it is an independent nonprofit organization whose mission is to promote the growth and adoption of cloud native technologies. Great. So I'm getting all these people asking me what cloud native is, and here's a group that tells you all about cloud native technologies; I will just show them what the CNCF is doing. They maintain this cloud native landscape to help companies understand the breadth of cloud native software and kind of help guide them towards the software that might be useful to them. And here it is. If you haven't looked at it before, it is a bit of an eye chart, so it can be a bit intimidating for new users.
So the goal of the presentation today is to break this down into usable chunks and help you understand what all the chunks of the landscape are and how they might be useful to you. These are the parts I'm going to cover in this presentation today. Like I said, it's very high level. We are going to talk about what they are and why they might be important to you, and then you'll want to look more into those and see what exact projects within those categories might be useful for you. You'll see as we get more into it. Last night Kelsey Hightower posted a tweet that I thought was great for illustrating this: you want to roll your own application platform, or do cloud native yourself? All you need is... all of these things, which can be a little overwhelming if you're just getting started. So we're going to break it down and talk about each one of these. And I love that he added yet another one right after that. Yeah, so now we get to the meat of the presentation. Welcome to Cloudland. I'm going to use an amusement park analogy to talk to you about all those different categories of the cloud native landscape. And to start off with, what are some of your favorite parts about an amusement park or festival? Post in the chat. I know for me, I love rides. I love the environment of excitement and all the colors and all the decor. My favorite thing is the food. Yeah. Cotton candy. I don't have cotton candy in here; maybe I should. I originally made this talk, by the way, for a meetup in Tokyo, so I use some Japanese foods. Don't worry, I will explain them if you're not familiar. So we're going to start off by using festival foods as an analogy, starting with containers. Yes, I know, this is still very 101, but if you're not very familiar with containers, they're kind of at the core of this cloud native way of running things. The benefit of popcorn, and why it's very popular in an amusement park or festival space, is that you can store it very efficiently.
You can store it in little kernels, but then it pops up into a larger amount of food. That also makes it very portable: it stores small, but it's able to feed a large crowd. It's repeatable; you get pretty much the same experience every time with popcorn. And it's quick and easy to make, especially at scale. Containers have a lot of the same benefits, and this is why people are excited about them. They can be stored very efficiently as container images. They're portable: you can run those container images on pretty much any type of Linux system, since they depend on the Linux kernel. You can build once and then spin up a whole bunch of containers of the same type en masse. They're repeatable; they spin up the same way each time. And they are very quick to start up. You can do a lot of these things with virtual machines as well, but virtual machines are a bit slower to spin up and they take up more space. I have a whole other talk about that. So the first part of the CNCF landscape I want to talk about is container runtimes. You've probably heard of Docker; Docker is probably still the most common way to run containers. Interestingly, Docker itself is not on the landscape as a container runtime. I like to talk about Docker sometimes as more of a usability company: they made containers really easy to run. A couple of the components of Docker are in the landscape, which are containerd and runc, I think. You'll see that the ones in blue boxes are CNCF projects; the ones that are not in blue boxes are not CNCF projects. (Yes, I will be providing a copy of the slides after this.) And I believe the shading of the boxes indicates project maturity: the CNCF has tiers for the projects that it takes under its wing, from sandbox up to graduated, that describe how mature those projects are. containerd is a mature project. And container runtimes, I think, are an area that people don't talk about enough and really should talk about more.
Because the type of container runtime you use has a lot of implications. One common question I would get is, how many containers can I fit on a machine? It depends on your application, and it also depends on your container runtime and how it does things. The type of security that you'll need for the containers that you're running also depends on the container runtime. For example, gVisor and Kata Containers have ways to further isolate your containers. A container is a form of isolating a process so that it won't interact with other things on the box. Kata uses a kind of lightweight VM approach, so that it's really isolated but still spins up pretty quickly. So there are different levels of isolation that these container runtimes use, and different ways that they do that. This is a very interesting area to look into. And it's kind of like popcorn kernels: popcorn can come in a couple of different types, like snowflake or mushroom. Like I was saying, these container runtimes do things a little bit differently, but they're all containers. So the next thing is container registries. A lot of people have probably heard of Docker Hub, a very common one online, super easy to use. But if you wanted to run your own container registry, there are tools for doing that. Harbor is a CNCF project in the space that I hear talked about periodically. And this also involves those popcorn kernels: a container registry is where you store the container image, which you can then spin up on all sorts of different machines. Where you go to find those images is your container registry, so it's about storage and retrieval. If you want an exercise to learn more about containers, you could take that same application from exercise one, the one you installed on your machine, and try installing it in a container. NGINX is a great example: they have an image on Docker Hub, and it's one command to spin it up. It's pretty cool. It spins up super fast.
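If you want to go one step beyond running the prebuilt image, a container image is described by a Dockerfile. This is a minimal, hypothetical sketch layered on the official nginx image from Docker Hub; the file names are made up for illustration:

```dockerfile
# Start from the official nginx image and add one static page of our own.
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
```

Building this (for example with `docker build -t my-nginx .`) produces an image you can push to a registry and then spin up anywhere with a single `docker run`.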
And then think about if you had to do that 500 times. It would be pretty easy, right? So that's a fun exercise. Now we start getting into the things that people really want to talk about these days, which is Kubernetes. Container orchestration is the area that Kubernetes fits under. There are other solutions, but Kubernetes is really far and away the leader in a lot of ways. Mesos is an interesting one that also works for VMs and other things, so it has some other use cases. The goal of Kubernetes and these other container orchestration tools is to help you operate at cloud scale. A great thing about the cloud is that you don't have to put in an order and wait a month or more to get new hardware into your data center. You just say, hey, cloud, I want more infrastructure. And when you talk about containers, you can spin up more and more containers really, really easily. But when you're running 10,000 or 100,000 containers, how do you manage that? That's where tools like Kubernetes come in. They help you manage all of your containers and how they're working together. So I use popcorn as the analogy for this once again. It's about the actual containers that are already spun up; it's not talking about images. It's about how you manage a huge amount of containers. If you want to try that out for yourself, take that same application that you just ran in a container and try running it on Kubernetes. You could either use a free trial on a cloud (GKE has a free trial, I looked it up), or you could use Minikube, a great tool that you can install to run a little cluster on your machine, super easy. If you haven't done it yet, I highly recommend Kubernetes the Hard Way. This gives you a view into how Kubernetes actually does what it does in Linux under the hood. It's made by Kelsey Hightower. A while ago, he changed it to use CRI-O, I think, as the container runtime rather than Docker, which is pretty cool and interesting.
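To make the "run it on Kubernetes" exercise above concrete, here is a minimal, illustrative Deployment manifest; the names are made up, and the image is the same NGINX one from the container exercise:

```yaml
# A minimal Deployment: ask Kubernetes to keep three copies of one
# container image running, and replace any that die.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: popcorn-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: popcorn
  template:
    metadata:
      labels:
        app: popcorn
    spec:
      containers:
      - name: web
        image: nginx:alpine
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` is the whole interaction: you declare how many containers you want, and the orchestrator does the managing.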
So you can see runtimes other than Docker in action. That's a great way to learn about Kubernetes if you haven't done it yet; highly recommend. And think about: what tools does Kubernetes provide, as I'm spinning up these containers, that would help me run hundreds of thousands of containers? Serverless. So serverless is something that's been getting a lot of attention lately. Serverless is basically the concept that you can focus on your code instead of focusing on your infrastructure. It's basically taking a piece of code and saying "hey, go run this" to a cloud provider that's running functions as a service, and they will do that. The great thing about this is that it's pre-packaged. I use caramel corn as the analogy here. It's got all of the stuff already done for you, right? It's pre-popped, pre-candied, pre-packaged. All that stuff is really hard to do on your own, but if you consume it through serverless, it's really easy and already done for you. And so you give them this code and say, I need to run this for like five minutes at a time, periodically, when this new piece of data comes in or something. Serverless is a great fit for that, because you only pay for it while it's running. So if you have a piece of code that you only need to run sometimes, serverless might be a good option. Here's an example of a use case. Say you have an app on your phone that is like a photo booth app. It does some photo processing and makes a funny picture for you. You take a picture of yourself and your friends, and that picture gets uploaded into object storage. Then that image is processed using a serverless function, which spins up, processes the image, and then goes away, so that you're not paying for it anymore while it's not running. And then it spits out a processed image, which is then stored elsewhere.
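That photo-booth flow can be sketched as a tiny, hypothetical handler. The event shape and the helper name here are made up for illustration and don't match any particular provider's API:

```python
# Hypothetical serverless handler: triggered when a new image lands in
# object storage, it processes the image and says where the result goes.
def handle_upload(event):
    image = event["object"]                # pretend: the uploaded image
    processed = apply_funny_filter(image)  # the only code you really own
    return {"bucket": "processed-images", "object": processed}

def apply_funny_filter(image):
    # Stand-in for the actual image processing.
    return f"funny({image})"

result = handle_upload({"bucket": "uploads", "object": "selfie.jpg"})
print(result)  # → {'bucket': 'processed-images', 'object': 'funny(selfie.jpg)'}
```

The point is what's missing: no servers and no scaling logic. The platform spins the function up per event and tears it down afterwards.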
That's a good use case example for a serverless function. It allows you to focus on just the code of the image processing, rather than on all the infrastructure it would take to do that piece. You do still have to worry about storage, though, which we'll talk about more in a bit. So if you want to try out serverless functions, you can try it using a cloud provider's free trial or something, and think about what use cases serverless would be good for. Infrastructure as code. When I talk about DevOps, I always like to mention infrastructure as code, because I think it's kind of the culmination of the two disciplines. DevOps is a combination of the disciplines of development and operations. Developers obviously usually have expertise in coding and app development, things like that. Operations is more on the infrastructure side; it's about how to set up the computers that the code runs on. Bringing those two together is something like infrastructure as code. Terraform is a very common example of this. What Terraform does is let you define what you want your infrastructure to be. Say: I want this many VMs of this size, I want this much storage, I want it all configured in this way with this network. You describe all of that in code and then say, Terraform, go make this happen. And it does. So I used the Japanese festival food okonomiyaki for this, which literally translated means "your preferences, grilled." Things that you like, grilled up into one convenient package. This is all of the cloud resources that you need for your infrastructure, all in the convenient package of a piece of code. A related concept, I think, is GitOps, which is the idea of using source control to manage your infrastructure. When you've got it as code, you can manage it as code if you want to. So as your infrastructure changes, you have a record of what happened before, and you can go back and manage your infrastructure as code. It's a very interesting concept.
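As a sketch of what "your preferences, grilled" looks like in practice, here is a minimal, hypothetical Terraform fragment; the provider, names, and machine sizes are illustrative, not from the talk:

```hcl
# Ask for three identical VMs; Terraform works out how to make it so.
resource "google_compute_instance" "web" {
  count        = 3
  name         = "web-${count.index}"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}
```

Running `terraform apply` compares this description against what actually exists and creates, changes, or deletes resources to match, which is also what makes the GitOps record-of-changes idea work: the desired state lives in source control.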
So you might try deploying that application from before with Terraform; sometimes there are existing templates on GitHub. (I think I forgot to change the "think about" slide; I'll fix that later.) But think about how Terraform would help you spin up a whole bunch of machines at scale. The next one, surprise, is service mesh. Service mesh is kind of beyond the basics of containers, Kubernetes, and how we spin up infrastructure. Once you've got some of that stuff down, then you start talking about service mesh. I would refer to it as "day two: Kubernetes boogaloo." Service mesh is often talked about as the day-two Kubernetes thing you need to learn. There are several options for this. The most common ones I hear talked about are Istio and Linkerd, Linkerd being a CNCF project. For this, the food that I use is hanami dango, which is another Japanese festival food, especially relevant at this time of year: three different flavored rice cakes on a stick. Service meshes bundle a bunch of different tools that you're going to need once you're running Kubernetes, which is usually the context they're used in: monitoring tools and all sorts of other tools that you'll need once you get started. So it's a combination of different things, all in one package, that you're going to need on day two of using Kubernetes. And that's kind of like hanami dango, which is three different flavors of things on one stick. So if you want to try out service meshes: if you've been running through all of these exercises, you should already have a Kubernetes cluster, so try installing a service mesh on it. A great thing that people like about service meshes is that they use sidecars, sidecar containers, to monitor your applications, so that you don't have to add extra code into your applications to understand what they're doing. It provides some ability to monitor your applications without having to change the application itself.
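With Linkerd, for example, opting a workload into the mesh is, to the best of my knowledge, just an annotation on the pod template; this is a fragment, with the surrounding Deployment omitted:

```yaml
# Fragment of a Deployment's pod template: this annotation asks Linkerd to
# inject its sidecar proxy next to your container. The app code is unchanged.
template:
  metadata:
    annotations:
      linkerd.io/inject: enabled
```

The injected proxy sits in the same pod and sees the traffic going in and out, which is where the "monitoring without changing the app" property comes from.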
So I highly recommend trying that out if you haven't yet. Spin up an application on a Kubernetes cluster, install a service mesh of some sort, and just learn about how you would use the tools in that service mesh to monitor, or do other things with, that application you're running on Kubernetes. And how would that help you at scale? Always think about scale when we're talking about cloud native. Now we're going to move into things that are not food, because I ran out of ideas for how to talk about cloud native technology with food. I know, shocking. So I was just talking a lot about observability and monitoring with service meshes, but you can also run observability tools on their own, of course. You probably won't get the cool sidecar piece that service meshes give you, but they're important tools to know about and understand how to use. An interesting thing in the CNCF landscape: observability is this huge subsection of projects and concepts that includes monitoring, logging, and tracing, and they include chaos engineering, which is cool and interesting. The concept of observability is that you know your system, and therefore know how your business is running and the health of it. In an amusement park context, this is kind of like that screen that you go to that shows you all of the different rides and the wait time or the status of the rides: whether they're closed, or how long it's going to take you to get on the ride. Think of this as like your applications and how they're doing: whether they have significant latency spikes, whether they're using too much memory or storage. Observability is the set of tools, and the concept, of making sure that you understand what's happening with your applications. I have a few more slides digging into monitoring, logging, and tracing. I'm not going to touch on chaos engineering today, but it is very interesting.
So, observability and monitoring. The example I'm using here is an operator of a ride or something at an amusement park, who is at a terminal running the ride, and there is a light that lights up indicating that there is some problem. Monitoring is putting tools in place to understand when things are going wrong and when things are going right. It can be a lot of different things, so I actually looked up a definition to give you, because "monitoring" can be applied to tracing and logging as well. But it can also be more specific, and the description given in this random blog that I found is: instrumenting an application, then collecting, aggregating, and analyzing the metrics to improve your understanding of how the system behaves. So monitoring can include a lot of different types of tools. I see Datadog at every KubeCon. Prometheus is the CNCF one that I've used before, and it works quite well. Grafana is also great for visualizing monitoring. There are a lot of different tools in this space that you could learn about, and I highly recommend looking into some observability tools if it's not something that you've specifically dug into. I also apologize: some of the illustrations for this I didn't quite get done in time. But let's say that the thing the operator in the last slide was monitoring was not a ride; perhaps it was something a bit more dangerous. Arguable. But anyway. Now we're talking about logging, and I'm giving the example of some movie that we've probably all seen: a dinosaur escaping, and someone saying, who unlocked the velociraptor paddock? (By the way, velociraptors are tall. Technically.) And I've got a caretaker sign-in sheet here that you would look at to see who was last in the velociraptor paddock. So logging is about making sure that you know what happened before in your application; you can go back and look through your logs and see what was happening. Fluentd is the CNCF tool for that. I believe it is graduated, so it's a pretty mature project.
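The sign-in-sheet idea maps naturally onto structured logs. Here's a toy sketch with a made-up event format; real pipelines would ship JSON lines like these through something like Fluentd rather than keep them in a list:

```python
# Toy structured logging: append JSON events, then ask "who unlocked it?"
import json

log = []

def log_event(actor, action, target):
    log.append(json.dumps({"actor": actor, "action": action, "target": target}))

log_event("alice", "unlock", "velociraptor-paddock")
log_event("bob", "feed", "velociraptor-paddock")
log_event("alice", "lock", "velociraptor-paddock")

# Who last unlocked the paddock? Filter the log like a sign-in sheet.
culprit = [json.loads(line)["actor"]
           for line in log
           if json.loads(line)["action"] == "unlock"][-1]
print(culprit)  # → alice
```

The structure is what matters: because every event has the same fields, questions like this become simple filters instead of text archaeology.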
If you're interested in a tool for logging, that's the CNCF one. Tracing. So, we talked about monitoring, which is understanding what your system is doing at any given time, and we talked about logging, what happened before. Tracing is specifically tracing one action back through the flow. Something went wrong: follow the trail and see where it leads. Fluentd also has tracing capabilities; it's interesting how in the observability area a lot of the tools hit a lot of the checkboxes. So we've got observability, which is the overall concept of all this stuff; monitoring, which includes a lot of things and is generally making sure that you understand what's going on in your system; logging, what happened before; and tracing, follow the trail and see where it leads. Security and compliance is one that's been a huge topic, especially lately; I've been going to more security conferences in the cloud native space. And the concept that I think about with the CNCF security tools, or general cloud native security tools, is: if you want people to do things right, make it hard for them to do it wrong. Here I have a security guard standing next to some park rules in front of a fence. If you're at an amusement park, they put up fences and things so that if you're a park goer and you're not supposed to go somewhere, you know that, because it's fenced off and you just won't go there. They also have rules posted around, probably, that you can read so that you understand what you're supposed to do. And if something goes wrong, they've got security to make sure that they fix it. So the tools in security and compliance kind of fall into this space. A common one that I've been talking about a lot lately is Open Policy Agent. I keep seeing it at KubeCons.
I'm hearing it talked about in all sorts of different contexts, and with policy we're talking about the rules section of things: making sure that policies are in place for how things should work. It also enforces those rules in a cloud native environment. So these are tools that can help you implement good security practices in your cloud native environment. This is really important, because when you move from doing things in a data center to doing things in a cloud native way, the way that you think about security can change, because you're generally running your applications in a more distributed way. Especially with containers, you can have pieces of one application spread out, even across different machines. So how do you manage security for these huge distributed systems in the cloud? That's where these security tools come in. The CNCF actually has a bunch of them. Like I said, OPA is a common one I've been talking about lately; it's still incubating. Notary also gets mentioned sometimes. With Falco, Sysdig does a lot of stuff talking about how Falco works. I saw a great demo of it by Kris Nova at the last KubeCon; she gives lots of great talks about it. So if you're interested in Falco, definitely check that out. It is open source and CNCF incubating. And then TUF is a CNCF graduated project, so that one is more mature. So if you're interested in the security tools available to you in a cloud native space, this is where you look on the landscape. Streaming and messaging. Streaming and messaging is about consuming data at scale. I used to work at a storage company, and we'd always talk about how the data load is just increasing. You're just getting more and more data in. There's this huge flow of data that's coming into your company all the time, and you need some way to manage all of that data. That is where this section of the cloud native landscape comes in. It's interesting that they put messaging in there too.
Okay. So among the CNCF tools in this space, they point out CloudEvents and NATS. CloudEvents is an interesting new project in the CNCF that is more about standardizing messaging between cloud providers and things like that, making sure that events have the same format everywhere. I talked earlier about the use case for serverless where you take a picture, that goes into object storage, and that spins up a serverless application to do the processing on that image. The image going into object storage would be an event that then triggers the serverless application to run. That's the area where CloudEvents is focusing. Streaming: everybody always talks about Kafka, which is somewhere in here. There it is. I hear Kafka come up a whole lot when people talk about managing the data streams coming into a company, so that is a very common one. Spark I also hear about quite a bit. Oops, thanks for that: I just noticed someone mentioning that on the tracing page I had some information from logging, so I will fix that later. Remote procedure call is another one that I want to point out here. When you've got these huge distributed systems, sometimes you need to call a procedure from somewhere else in your system, away from where the initial trigger takes place. So these are tools that help you do remote procedure calls. gRPC is one that I hear talked about all the time. I've been in several groups where people have asked for use cases for gRPC and want to learn more about it, so it's a really good one to get familiar with if you are interested in learning more about remote procedure calls. Distributed systems mean distributed communication, and that's where remote procedure calls come in. For this one I put in the roller coaster again, because you don't see a driver on that roller coaster. There's someone back where you took off on the roller coaster who pushed a button that made the roller coaster go and also made the harness come down over you.
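That push-a-button-somewhere-else idea can be sketched with Python's standard-library XML-RPC modules. This is just a stand-in to show the shape of a remote call, not how gRPC itself works (gRPC uses protocol buffers over HTTP/2):

```python
# A remote procedure call: the "button" is pressed here, but the harness
# logic runs in a server that could be on a completely different machine.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def lower_harness(car_id):
    # Server-side procedure; the caller only ever sees the result.
    return f"harness locked on car {car_id}"

server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lower_harness)
port = server.server_address[1]  # ask the OS for a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: call lower_harness as if it were a local function.
ride_control = ServerProxy(f"http://localhost:{port}")
result = ride_control.lower_harness(3)
print(result)  # → harness locked on car 3
```

The proxy object makes the network hop look like an ordinary function call, which is the core convenience that RPC frameworks provide.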
All of those were triggered remotely and then happened in the car. So, remote procedure call. I said earlier I was going to come back to the topic of storage. In these distributed systems, storage gets very interesting. In an amusement park, you have these storage lockers at each ride, right? And you might see them in main areas of the park as well. It might be useful to think about cloud native storage a bit like that. Traditionally, you would have these big storage boxes in your data center where all of your data would live. But in these distributed systems, and with the latency needs of different applications within the system, sometimes you need some data to be closer to the application that's using it than being in one collective space. So cloud native storage tends to consist of a combination of approaches, including the more traditional one big area of data, plus ways to get that data closer to the various applications that need it. This includes tools like Rook, which is from the CNCF and still incubating. NetApp has solutions in this space, as do EMC and Hewlett Packard Enterprise; all the major storage companies are trying to help solve this problem. It's very interesting, especially with containers, because containers are really focused on an application and all of the things that that application needs to run and do its job. That does not include the storage: if you put the data that the application runs on inside the container, and then you kill that container, all that data is gone. So you've got to store it separately. You might want it to be close; you might want it to be collected in one place. These are tools that can help you figure that out. So that was my last one for now. That was all very fast and covered a lot of stuff at a very high level. These are all of the different areas on the CNCF landscape that we covered today. Hopefully you've got a better idea now of what information you can find on the CNCF landscape.
Another thing that I didn't mention earlier about serverless is that it has its own whole box down here at the bottom right, with a bunch of different concepts within it. Yes, you can run serverless in your own data center, but then you are running all of the infrastructure, which takes away the part I was talking about earlier, where serverless is encapsulated and you pay for what you need. So serverless is a very interesting space in open source. Anyway, I hope that you learned something new about the CNCF landscape and some tools that you can use. And thanks for visiting Cloudland. Hey, thank you so much. I'm glad I ate before coming on this, because of that first section on food. I have a suggestion for you on chaos engineering: a haunted house, or a house of mirrors. Oh, I love it. We do have one question. It feels kind of broad, but I'll pose it anyway. It comes from an anonymous attendee: what do you expect to be the near- and medium-term impact of the COVID-19 crisis on cloud native transformation and Kubernetes adoption? Interesting. I haven't thought about this much, so I will kind of think out loud. There are some really interesting efforts going on right now to try to help with the crisis using cloud native technologies. For example, there's a project where, if you have a Kubernetes cluster that you're not using all the time, you can donate some of that cluster's cycles to projects that are trying to find ways to fight COVID-19. So these technologies play a role in helping to fight the crisis. The economic impacts on the country and the industry are going to be more telling for how this affects cloud native and Kubernetes adoption. But an interesting thing about this is that everyone's home right now, right?
So all of these retailers, toilet paper manufacturers, who normally would get a lot of their business through stores, are now getting all of that traffic online, and they are getting hammered. A lot of companies, especially if they're still in data centers, can't scale to meet this brand new demand. So they're talking about cloud providers, and they're talking about cloud native even more right now, because they need the scaling capabilities that cloud native technologies provide to handle this new influx of orders from people staying at home. There's a lot of adoption going on with the companies that are dealing with these huge web influxes, and then we'll see how the economic impacts turn out. But like I said, the scaling of cloud native is getting a lot of attention right now, so it's doing quite well. Yeah, yeah. One of the interesting ways I've seen this framed on Twitter is that graphic that circulated with the three checkboxes: what drove your digital transformation? CEO, CIO, and COVID-19, with the check mark next to COVID-19. We're all virtual now, so with VDI and all these things, I think I've heard estimates of 40 percent traffic increases across networks overall. And to piggyback a little bit on your toilet paper metaphor: if you're a business that has adopted cloud native technologies, one of the ways you can conserve cash, if you're an e-commerce operation and you're not seeing a lot of business right now for whatever reason, is that you can more easily scale down the resources that you're consuming. If you happen to be in the toilet paper business, conversely, you can more easily scale up as demand has increased. So that kind of flexibility will, I think, benefit those who have already adopted cloud native technologies, and also help further promote adoption as some of these companies look for that flexibility around spend and some efficiencies there.
Richard Simon asks: where does AIOps, which I imagine is somewhat analogous to MLOps, fit into the observability section you talked about? I'm not sure it fits in there, but I'll throw it over to you and see what you think. I think I've heard of AIOps before, but I haven't had many conversations about it, so I'm just thinking out loud again. AIOps brings together the concepts of artificial intelligence and operations, and then there's monitoring and observability on top of that. Observability fits with everything, I think; I couldn't come up with a situation off the top of my head where observability didn't fit. Depending on how you're doing your AI applications and what that means for AIOps, I imagine some of these tools will be useful one way or another. You want to understand what's happening with your AI applications at any given time, so you're going to want to find tools that can give you useful information about whether or not your AI system is healthy. You will probably find some tools within the observability section of the landscape that can give you some insights. You may want more specific insights than they provide out of the box, and it's important to note that a lot of observability tools are very flexible. In Kubernetes, for example, we talk about health checks and liveness checks. You can check that an application on Kubernetes is alive, and then you can create your own health check to make sure that the application is doing what you want it to do. So in observability, you can use all these different tools to make sure it's doing the right thing. That was a very generic answer, but yeah. Yeah, I think it happens at layers. When I think about MLOps, I'm thinking about things like Kubeflow that target data engineers and data scientists and those kinds of platforms, and ultimately those things can reside on a Kubernetes cluster.
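To make that liveness-versus-custom-health-check distinction concrete, here is a rough sketch. The endpoint paths (`/healthz`, `/readyz`) follow a common Kubernetes convention, but the logic is purely illustrative, not from any particular project.

```python
def health_status(path: str, dependencies_ok: bool = True) -> int:
    """Return the HTTP status a probe endpoint would serve.

    A liveness probe only asks "is the process up?", while a custom
    readiness/health check can verify app-specific conditions, for
    example that a database the app depends on is reachable
    (modeled here by the dependencies_ok flag).
    """
    if path == "/healthz":
        # Liveness: if we can answer at all, the process is alive.
        return 200
    if path == "/readyz":
        # Readiness: app-specific check decides whether to serve traffic.
        return 200 if dependencies_ok else 503
    return 404

# Kubernetes polls paths like these, restarting the container on failed
# liveness checks and withholding traffic on failed readiness checks:
print(health_status("/healthz"))                        # 200
print(health_status("/readyz", dependencies_ok=False))  # 503
```

The point is that the platform gives you the probing machinery, but you decide what "healthy" means for your application, which is exactly why observability tooling has to stay flexible.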
Right, and that's going to be the IT folks. They're going to need the observability, because ultimately what they're looking at is the systems, making sure that these ML pipelines are running efficiently and making good use of the resources, and so that's where I think the observability and all that comes in. But yeah, where AIOps fits is something I've been thinking about, because I think the Linux Foundation, the CNCF's parent organization, has an AI group or foundation of some kind. Payment Parsi asks about the presentation deck; that's going to be posted later tonight on the CNCF website, at cncf.io slash webinars. And then Richard Simon asks another question: why is everyone getting worked up about how Kubernetes is provisioned, for example with Cluster API, rather than focusing on how to optimize, manage, and patch it? So I think we're looking for your opinion here. That's a really fun question. I really like it. Thanks, Richard; that's fun to think about. The way I think about this: we talked about lift and shift versus cloud native earlier. At the beginning, you need to make sure that you have enough capacity to handle what you need to do. When you lift and shift into the cloud, it's generally a bigger footprint than when you go cloud native, and then you start optimizing and making everything work as best it can within the infrastructure that you have. Like I said with the toilet paper analogy, and I know it seemed kind of strange that I was starting off so basic, there are a huge number of companies that are just starting with this stuff, especially with Kubernetes. There are so many companies that are now going, oh, maybe I should think about using this. So a lot of these companies are just asking, okay, how am I going to get enough Kubernetes to do the thing that I need to do?
So we're talking about things like Cluster API and making sure that you can run multiple Kubernetes clusters, depending on how you're sectioning out the work that your company needs to do. How many clusters are you going to have? How are those clusters going to be managed? That's all stuff that people have to address first, and then they're going to start drilling more into how to optimize and manage those clusters in the most efficient way. That's how I see it. Do you have anything to add, Ariel? Yeah, I think Kelsey speaks about this all the time: we should be working toward that place where installing it is not the focus. And that's a work in progress when you think about how long the project has been around, four or five years at this point. Gosh, it's like dog years; it all happened so quickly. Yeah. Fun fact on that, by the way: I once created a guide for how to deploy Kubernetes on a cloud from scratch, even more basic than using kubeadm, actually deploying each piece of Kubernetes from scratch. Super difficult. We're getting more into companies using these managed services, which makes deployment a lot easier, so I can totally understand the focus on installation. Yeah, absolutely. I think that's part of it. There are distributions, as with Linux, and ultimately we will focus on these other things. Just because you get it installed doesn't mean you're done; there's optimizing, managing, patching, and securing it, all those same things. And I think that harkens back to that tweet of Kelsey's you mentioned earlier: you have Kubernetes, and, well, that's only a fraction of the way. Yeah, you've got to start somewhere, but you're going to build and build and build over time. Also, on security: everybody, start learning about security now.
Ian Coldwater has been posting a bunch of great stuff about security lately; they're always posting great stuff about security. But Kubernetes is not secure by default. You have to learn a little bit about how Kubernetes works to make it secure, so if you're starting to learn about Kubernetes, put security on the list of things you're going to have to learn about real soon. That's one of the advantages of the distributions, like GKE or AKS or any of the other ones: you get a lot of this kind of out of the box. It's still not necessarily secure by default, though, because there are so many options with security. A lot of those managed services take a very broad approach, which may not be as tailored to your use case as you want, so even if you're using a managed service, you're going to have to learn about this. Absolutely great advice. One final one from Richard. Yeah, thank you, Richard. Can you give some examples of messaging and streaming? What is the difference? I find it really interesting that they combine these two concepts in the CNCF landscape. Like I mentioned, I've talked a lot with people about CloudEvents, because I knew some people working on that effort in my previous job. That's more about how even different clouds can talk to each other, and that's messaging: sending events in a way that everybody can understand, standardizing. So messaging is about sending the right messages, right? Whereas streaming is about this huge flow of data that you have coming in and how you're going to manage it. I don't know if I have a really good example for that. I used to use an example in this talk of ticket sales: you're constantly selling tickets to your amusement park, and you're getting in all of this data about the people buying these tickets, name, address, phone number, personal information. It's the same kind of thing.
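As a loose sketch of that ticket-sales stream (the records, field names, and counts below are all invented for illustration), a streaming consumer processes each record incrementally as it arrives and keeps running results, rather than handling discrete one-off messages:

```python
from collections import Counter

def ticket_stream():
    """Stand-in for a continuous stream of ticket-sale events,
    e.g. records arriving from a system like Kafka."""
    yield {"name": "Ada",   "ticket": "day-pass"}
    yield {"name": "Grace", "ticket": "season"}
    yield {"name": "Alan",  "ticket": "day-pass"}

def process(stream):
    """Consume the stream one record at a time, updating a running
    aggregate without ever holding the whole stream in memory."""
    sales_by_type = Counter()
    for record in stream:
        sales_by_type[record["ticket"]] += 1
        # ...each record would also be routed to storage and any
        # other processing steps here...
    return sales_by_type

totals = process(ticket_stream())
print(totals)  # Counter({'day-pass': 2, 'season': 1})
```

A messaging system, by contrast, is concerned with the envelope and delivery of each individual event; streaming tools are concerned with managing and processing the ongoing flow.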
But anyway, you're getting all this data in, and you have to manage this stream of data: make sure all of it is being stored in the correct ways, and make sure you're using it to do any processing that you have to do. Streaming tools are about managing that stream of data and making sure that everything goes into the right places and is processed in the right way. That's kind of how I think about it. Do you have any additional insights there? I think that's great. I love going back to the park metaphor. Yes, I used to have the ticket booth in there, but I decided I like the log flume better, so I'm going to replace that. Well, thank you so much for taking us on a tour of Cloudland. We shall connect later, because I'm still curious about the roots of the illustrations. Wonderful. Somebody earlier mentioned that their 13-year-old was really enjoying those. So thank you for sharing your art with us today. And again, the webinar recording and slides will be available later today online at cncf.io slash webinars. We look forward to seeing you at a future CNCF webinar. Have a great day. We'll see you next time. Thanks.