Hello, and welcome to a flight over the cloud native landscape. My name is Carson Anderson, and I work for Weave. Not that Weave, not the one you're thinking of, but we'll get there in a second. You can find me on Twitter at carson_ops and on GitHub at carsonoid. I actually have the distinct honor of having more followers on GitHub than on Twitter, so feel free to follow me on one or both of those platforms if you want to see what I'm doing. In fact, every presentation I've ever made, including all the artwork for this presentation and my others, is open source on GitHub, so follow me there.

Now, before I go any further, I want to explain what I meant by "not that Weave": the Weave I work for is not the cloud native vendor you might be thinking of. My Weave is an end-user company, and if you're curious about what we do, you can find us at getweave.com. I'm telling you this because I'm going to teach you about a lot of different projects, and I want you to know that we have no stake in any of them.
I have no personal or professional stake in any of these projects, so you're not getting any sort of vendor pitch here.

What I want to do in this presentation is cover the 12 graduated and 21 incubating projects in the CNCF. That's 33 projects total, and I have less than 35 minutes to teach them all to you. Obviously I'm not going to be able to teach you everything about a project in that amount of time, or cover all the features and nuances of any given project. But I want you to leave this presentation with a base understanding of what these projects do and how they relate to each other, so you know which ones you're curious about and might want to learn more about, or might want to use yourself.

Before we go further, let's say a big thanks to phippy.io for these characters. I'm going to use them pretty liberally in this presentation, because I find them a lot more interesting and fun than boxes labeled "app 1" and "app 2", so you're going to see them show up quite a bit. Specifically, you're going to see Phippy, and when you see Phippy, I'm referring to a legacy application written in an unspecified language; it could be any language. When you see Goldie, it's specifically a newer application written in Go.
Let's talk about what it means to be a graduated project in the CNCF. If I've got a project and I want it to graduate, it has to pass a few checks. First, it has to be receiving consistent contributions from at least two different organizations, which means that if you use a graduated project, you're not dependent on a single organization to maintain it. It also has to certify that it passes the Core Infrastructure Initiative best practices for open source software, pass a security audit, and publish metadata about who governs the project, who's in charge of the code, and who's using it. Once it's done all that and passed a supermajority vote in the CNCF, it becomes a graduated project.

Notice that nowhere in that list did I talk about being "production ready". Graduated status in the CNCF is not a production-readiness stamp, and you don't have to be graduated to be production ready. There are plenty of incubating and even sandbox projects that might be production ready for you, depending on what you're trying to do. So don't be afraid to use incubating or sandbox projects. Graduation is a great thing, but it's a measure of openness and transparency more than a measure of production readiness.

Now let's dig into the projects; we've got a lot to cover. The first one I want to talk about is containerd. It's a daemon, as you might expect from the "d": a process that runs on your systems to help you manage containers. You can use containerd directly in your code to build container images and manage containers, but most of us won't use it directly; most of us will use containerd as part of something like Docker or another project. So when most of us build a container image today with Docker, we're doing it with containerd.
In fact, this project is so useful that it's built into a lot of the public cloud Kubernetes offerings, and the k3s project (currently in the sandbox stage), which lets you run a full, or mostly full-featured, Kubernetes cluster in a single binary, does it using containerd too. So if you're curious about low-level container operations, check out containerd.

Next is TUF. TUF stands for The Update Framework, and it's all about managing updates. TUF is a series of standards and tools that we can build into our code to deal with updates. I'm being intentionally vague here because the TUF standards don't prescribe any one specific kind of update: they might be package updates or image updates, or anything else you might want to receive regular updates for and verify the source of. That's TUF. I won't cover it much, because most of us won't use TUF directly; we'll use its reference implementation, Notary. Notary takes the TUF ideas and builds actual tools most of us can use, rather than making us write our own code for update management and verification. Most of us will probably use Notary as part of image signing and verification: ensuring that when we pull an image, it came from somebody we trust and hasn't been modified along the way, so that when we pull images and code into our infrastructure, we know we can trust where they came from. Again, TUF and Notary have a tight relationship, where TUF is the standard and Notary is an implementation of it, but you should look into both if you're curious about this space.

Next is Harbor. Harbor is a private image registry, and it has all the features you might expect from one: you can upload OCI-compliant images and get things like signature validation through something like Notary, image inspection, and vulnerability scanning, along with a bunch of other features. It also has a really cool feature in that it can be a pull-through cache: you can run Harbor in your infrastructure, hook it up to public image registries, and point your clients at Harbor. When they need an image that exists in a public registry, Harbor pulls it through and caches it locally, so your clients pull from Harbor instead of always going over the internet. If you need to reduce your overall image pull bandwidth, you might check out Harbor for that. Either way, if you're curious about running a private image registry, check out Harbor.

Next is Kubernetes. Of course, we can't talk cloud native without talking Kubernetes. Kubernetes was the first CNCF project to reach graduated status, and it is, at its core, a container orchestration engine. What I mean is this: we've got a suite of back-end machines, which Kubernetes calls nodes, and we want to run workloads on those nodes. We can tell Kubernetes, "I want you to run a workload, and here's what it should look like," and Kubernetes will put that workload somewhere. We can then scale up and run multiple copies of our workload. In fact, Kubernetes can handle tons of different workloads with different configurations. And because it's an orchestrator, it automatically deals with things like nodes going down: if a node goes down, Kubernetes sees that and redistributes the declared workloads, and it can automatically scale up and distribute workloads too. It really lets us stop worrying about nodes. In fact, most of the time when we talk Kubernetes, we don't talk about nodes at all; we think of our Kubernetes cluster as one cohesive unit, run our workloads in it, and let it deal with the underlying infrastructure for us. And of course I have to bring in Captain Kube here, because they're amazing, and why wouldn't you bring in Captain Kube when you get the chance?
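A declared workload like the one I've been describing is usually written down as a manifest. Here's a minimal sketch of a Deployment that asks Kubernetes to keep three copies of an app running; the name and image are placeholders I've made up, not anything from the talk:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phippy              # hypothetical app name
spec:
  replicas: 3               # Kubernetes keeps three copies scheduled across nodes
  selector:
    matchLabels:
      app: phippy
  template:
    metadata:
      labels:
        app: phippy
    spec:
      containers:
      - name: phippy
        image: registry.example.com/phippy:1.0   # placeholder image reference
```

If a node running one of these pods dies, Kubernetes notices that the replica count has dropped and reschedules the missing copy somewhere else.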
Kubernetes is really the heart of a lot of the other things I'm going to talk about, and it is that way because it provides a lot of touch points, a lot of integration points, for other systems to build on Kubernetes and provide more value with Kubernetes at the core. Kubernetes has the ability to add storage and networking layers, custom resources (which we'll talk about in a second), and even brand-new APIs in the Kubernetes API ecosystem. So Kubernetes really is the heart of cloud native in a lot of ways. Now, you don't have to run Kubernetes to be cloud native, but a lot of us do.

Speaking of Kubernetes: Helm is a package manager for Kubernetes. Helm is something we actually do run against Kubernetes, because it's specifically about creating and maintaining applications in Kubernetes. We know we can run our workloads in Kubernetes, and we call those pods, but it turns out there are a lot of other things we can create in Kubernetes: configuration, services, routing information, all to describe our application to Kubernetes. Helm lets us take all that configuration and put it into one thing called a Helm chart. A chart is sort of like a package that says, "Here's what my application looks like, here are all the things you need to create, and here's how they relate to each other." We hand that to Helm, and Helm creates it all in our Kubernetes cluster as we expect. The great thing about a Helm chart is that it's a redistributable recipe: we can use our chart internally, but we can also give it to other users or distribute it out into the world to help others install our application into their Kubernetes clusters. You'll very often see the projects I talk about today be installable into your cluster using something like Helm.

Another way you might manage your applications in Kubernetes is with Argo. Argo is also an application manager, but it takes a different tack than Helm: Argo is a GitOps-based system for Kubernetes application management. That means we set up one or more Git repositories, and in those repositories we describe what we want our application to look like in Kubernetes. We then hook Argo up between the repositories and our Kubernetes cluster or clusters, and it takes the described application and makes it true in Kubernetes. Because it's doing GitOps, it's always watching the repository, and as the repository changes, it syncs those changes into Kubernetes as they happen. That lets us take all the Git tools we know and love and use them to manage our Kubernetes applications. So if you're interested in GitOps and Kubernetes, check out Argo. I'll also say you don't have to choose between just Argo or just Helm: Argo actually knows how to leverage Helm, Kustomize, and some other Kubernetes deployment mechanisms, so they're not mutually exclusive. Using Argo and Helm together is absolutely fine.

One other way we might manage applications, or rather a way applications may be created and managed for us, is something called an operator, and the Operator Framework is a set of tools and libraries that help us build operators. So here's my thirty-second "what is an operator" talk.
An operator can be thought of as an engine: a process we run inside our Kubernetes cluster that knows how to create applications for us. We create something Kubernetes calls a custom resource, and that custom resource describes just the minimum amount of configuration needed to describe our application. The engine we've written, the operator, knows how to take that resource and make the application real in Kubernetes based on it. And if we make another resource with different configuration, the operator acts on that one and makes that copy of the application. The focus here is really these custom resources: rather than building a Helm chart or a Git repository, we put a resource into the Kubernetes API describing our application, and we use the Operator Framework or other tools to create an operator that knows how to turn those custom resources into applications in our cluster. So if you're curious about a more advanced way to manage your applications through their entire lifecycle inside Kubernetes, check out the Operator Framework. And as before, operators, Helm, and Argo are not mutually exclusive; a lot of them work together very well.

Next is Contour. Contour fills another gap in the Kubernetes ecosystem. We've talked about how Kubernetes can run our workloads.
It can also manage these things called Ingresses. An Ingress is really just a set of configuration, host- and path-based rules that say, "If a request comes in for this host and this path, send it to this workload." But something's missing: an Ingress is just configuration. Something has to exist in the Kubernetes cluster to make that configuration real; something has to run in the cluster, receive user traffic, read the Ingress configuration, and route to the right place based on that config. That something is called an ingress controller. There are a lot of ingress controllers you can run in Kubernetes, and many of them are legacy web servers sort of jammed into the ingress controller role. Contour, though, is built from the ground up to be an ingress controller for Kubernetes. So if you're looking for a cloud native ingress controller, one built specifically for this job that tries to do the right things for you from the start, check out Contour.

Next is KubeEdge. KubeEdge is interesting because it's orchestration built on Kubernetes. We already know Kubernetes can do container orchestration, but KubeEdge is a platform that leverages the Kubernetes APIs and extension points to let us do edge compute management through the Kubernetes APIs. So if you're curious about managing your compute at the edge, and you want to use the Kubernetes tools and API to do it, check out KubeEdge.

Next is Rook. Rook is also orchestration that runs in Kubernetes, but instead of managing devices or containers, Rook is about managing storage inside Kubernetes. Rook runs in Kubernetes and lets you deal with block storage or object storage. It can do things like provide persistent volumes for your workloads, so they have a volume that follows them around your cluster, as well as object storage and other kinds of storage extensions built on top of Kubernetes. So if you're curious about storage in Kubernetes, check out Rook.

Next is CRI-O. CRI stands for Container Runtime Interface; it's a layer we define to help Kubernetes run containers, and the O stands for OCI-compliant. Every node in a Kubernetes cluster runs a thing called the kubelet, and it's the kubelet's job to create and manage containers over their lifecycle on that node. But there's this kind of squishy blue layer I've drawn between the kubelet and actually doing things with containers, and that's where the CRI, the Container Runtime Interface, lives. It's definitely where CRI-O lives, because CRI-O is a container runtime built specifically for Kubernetes to be simple, fast, and efficient. So if you're curious about a container runtime built for Kubernetes, check out CRI-O.

Next is CNI. CNI stands for Container Network Interface. We've got workloads running on multiple nodes, and they often talk to each other over the network, sometimes on the same node, but very often across nodes. Something needs to exist to define and implement standards for how we set up networking between our workloads in a cluster or in the cloud, and that's what CNI seeks to do. It's a set of standards, tools, and low-level helpers for building tools that deal with container-to-container networking in the cloud or in Kubernetes. So if you're interested in the low-level operations of networks, check out CNI.

Next is gRPC. Our applications need to talk to each other, and one way they might do that is over something like HTTP, which has been around for a long time.
HTTP is kind of the universal baseline, but for all its strengths it has problems, primarily overhead: because it's connectionless and stateless, there's a lot of overhead in every single HTTP request just to describe what's going on. gRPC exists as an alternative, or can run alongside HTTP, as another way for our applications to communicate with each other over a network. It's stateful and has less per-request overhead, so it can be a lot faster than HTTP. gRPC also has cool features like bidirectional streaming, where applications can stream both ways over a single connection, and if you leverage protocol buffers, you also get things like type safety. So if you're looking to do application-to-application communication over the network and want to go above and beyond what you get from HTTP, check out gRPC.

Next is CoreDNS. CoreDNS is, as you might expect, a DNS server built for the cloud. If we're all honest with each other, DNS is the oldest form of what we call service discovery, right?
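Backing up to gRPC for one beat: a gRPC contract is defined in a .proto file and compiled into typed client and server code. A minimal sketch, with service and message names I've invented purely for illustration:

```protobuf
syntax = "proto3";

package demo;

service Greeter {
  // Plain request/response call.
  rpc SayHello (HelloRequest) returns (HelloReply);

  // Bidirectional streaming: both sides send messages
  // over one long-lived connection.
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}

message HelloRequest { string name = 1; }
message HelloReply   { string message = 1; }
message ChatMessage  { string text = 1; }
```

Because the messages are typed here, the generated code carries that type safety into whatever language you generate for.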
We ask for something by name, and DNS gives us back where it lives. And even though we've got all these cool new ways to do service discovery in the cloud, we still tend to use DNS a lot. CoreDNS exists to be a brand-new DNS server built for the cloud. The picture I'm showing you seems empty, like there's a lot missing, and that's because CoreDNS really sits at the core of a big ecosystem of plugins. CoreDNS has plugins for multiple ways to serve DNS traffic, whether that's traditional UDP or newer transports like DNS over HTTPS or gRPC. It also has the ability to take in both initial configuration and constant, active reconfiguration from multiple sources, including Kubernetes and etcd (which we'll talk about), or even public clouds, where CoreDNS can privately serve the records you define in your public cloud DNS systems. It also has plugins for things like rewrites, tracing, and metrics on your DNS. It really brings DNS into the modern age. In fact, all these features have made CoreDNS the recommended, go-to DNS solution inside Kubernetes, and it's been that way for quite a while now. So if you're curious about a modern DNS implementation, check out CoreDNS.

Now, before I talk about the next two projects, I want to briefly describe what a service mesh is. We've got our applications, and they're talking to each other. Say we want to add features to that network communication, features like end-to-end encryption, transparency, load balancing, or tracing; name any networking feature you want. In the past, we had to code that feature into every application, and we had to ensure all those applications supported the same set of features and that their features worked the same way. That gets very onerous, and sometimes it's simply not possible to change the code of a specific application. A service mesh says: what if we write a proxy process instead? This proxy can be thought of as living around our application, although technically it lives just between the application and the network, and the proxy is responsible for implementing all the things I just talked about: transparency, encryption, that kind of thing. Once we've got all these proxies distributed and running in our ecosystem, we'll want a control plane that can manage them, give us a way to view what's happening with them, and control them. Combine a proxy and a control plane and you get a service mesh, and you get powerful features like metrics, load balancing, encryption, transparency, and tracing, all from the proxy, without ever having to change your service code. That is very, very powerful, and there are two projects I want to talk about that fit into a service mesh.

The first is Linkerd. Linkerd is a CNCF project that is a complete service mesh offering. Our applications are talking to each other, and Linkerd comes with the linkerd2-proxy, a proxy process written from the ground up for Linkerd to implement these service mesh ideas. It also comes with the Linkerd control plane, which lets you manage all the proxies. So it really is a full, complete, end-to-end service mesh solution, and it's very easy to get up and going with Linkerd. If you're looking to implement a service mesh and you want a complete solution that's easy to start with, check out Linkerd.

Another alternative for a service mesh is Envoy. Envoy is a bit different in that it focuses on being just the proxy process for the service mesh. If you're asking where the control plane is: Envoy doesn't provide one, and doesn't prescribe one; it leaves that open to the implementer. Compared to Linkerd, that blank spot may initially seem like a downside, but it's actually a great power. The fact that the Envoy
project focuses entirely on being a service proxy means it can really concentrate on that and provide the best possible proxy for your needs. And in fact, Envoy is the backing proxy behind a lot of other cloud native projects. So if you're curious about just running a service proxy, check out Envoy.

Next is OpenTracing. As you might guess, it's all about traces. We know we can use something like a service mesh to get automatic tracing between a user request and the applications it bounces through. But if we want to know what that request does inside each application it visits, as it goes from method to method and spends different amounts of time doing different things, we need to instrument our application; we need to write code to create those traces. That's what OpenTracing exists for. As you might have guessed from the name, OpenTracing is provider agnostic. The great thing about it is that you instrument your applications, you write code to generate trace data, and OpenTracing works with a multitude of providers. It doesn't care who you use: you instrument once and never have to do it again.

One place you might export that trace data, from OpenTracing or elsewhere, is Jaeger. Jaeger is a trace aggregation and trace management platform. We've got trace data from our service mesh or from something like OpenTracing, and we need to send it somewhere so we can aggregate it, view it, and manage it. That's what Jaeger does. Jaeger provides a UI where you can dig into your traces and see exactly how they break down, you can search through your traces, and it has API extensions that let Jaeger be the core of tracing in your cloud native system. So that's Jaeger: if you're looking for somewhere to send and view your traces in the cloud, check out Jaeger.

Next is Prometheus. Prometheus is all about metrics in the cloud. We run applications all over the place, especially in the cloud, and those applications generate data: how many requests they're serving, how many requests are succeeding and failing, that kind of stuff. What we used to do when we wanted application metrics was instrument our applications to take those metrics and push them to a specific provider, and if we ever wanted to switch metrics providers, we couldn't, because every application would have to be retooled. Prometheus turns that on its head. Part of the Prometheus approach is saying: here's a standard way you're all going to serve up metrics; you expose a web page with your metrics on it, and Prometheus goes to each application individually, pulls that metric data down, and aggregates it. And it knows where your applications live, because in the cloud things are coming and going all the time, by having cloud and Kubernetes integrations. As your applications come up, go down, and move around, Prometheus always knows where to go to get their metric data; it flips the whole push model of metrics on its head. Once Prometheus has scraped that metric data, it pulls it into its own internal time series database and gives you features like charting, alerts, metric search, and other API integrations that, much like Jaeger was for traces, let Prometheus be the core of metrics in the cloud. So if you're interested in an open source metrics system, check out Prometheus.

Now, Thanos exists alongside and with Prometheus to solve some specific problems. It's very easy to get up and going with a single Prometheus instance in the cloud; no big deal, you can be running very quickly. But
if you're going to run multiple Prometheus instances, distributed across geographic regions, or you want fault tolerance, the Prometheus project isn't really focused on solving that right now. The Thanos project is. You can think of Thanos as a wrapper around one or more Prometheus instances: you go to Thanos, run a metrics query, and it fans that query out to all your Prometheus instances for you. Thanos also has the ability to export that data into multiple cloud storage backends, so you can have long-term Prometheus storage, because Prometheus itself doesn't tend to keep data for very long. So if you're curious about distributed Prometheus or long-term Prometheus metrics, check out Thanos.

In that same vein, there's the Cortex project. Cortex also exists to solve the multiple-Prometheus problem, but it works a bit differently: rather than wrapping your instances, it's designed to ingest all of the data from all your Prometheus instances and store it in its own internal architecture. That way, when you query your metrics, you don't even touch your Prometheus instances; they just act as data sources, and Cortex really sits at the heart of your metrics at that point. So if you're interested in long-term, aggregated Prometheus metrics, you can also check out Cortex. I know Thanos and Cortex seem very similar, and that's because they are; you'll have to do your own research to figure out which of the two might be right for you.

Next is Fluentd. Fluentd is all about streaming text processing, and very often log processing, in the cloud and elsewhere. At its core, Fluentd can be set up to take in multiple streams of text, read them as they come in, process them internally, and send them out to other places: to clouds, to files, to other integrations.
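A minimal Fluentd pipeline of the shape just described might look like this; the paths and tags here are illustrative, not a production config:

```
<source>
  @type tail                              # follow files as they grow
  path /var/log/containers/*.log          # e.g. container logs on a node
  pos_file /var/log/fluentd-containers.pos
  tag kube.*
</source>

<match kube.**>
  @type stdout                            # swap for a cloud or storage output plugin
</match>
```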
It really exists as an integration and glue layer between your data sources and wherever you might want to put that data. All of this makes Fluentd a really great way to handle logs from Kubernetes. We've got Kubernetes, we've got our workloads, and those workloads are generating log data, which is text data, all the time, coming and going. Very often Fluentd is the engine behind the log pipeline in the Kubernetes installations you run: it reads logs as they're generated, reformats them, and ships them to some cloud integration, so that even though your containers run across many back-end machines, you can view all their logs in one place, thanks to the aggregation, manipulation, and export provided by something like Fluentd. So whether you're already using Fluentd without knowing it because it's in your Kubernetes setup, or you're interested in doing log processing with it directly, you should check it out.

Next is Vitess. Vitess is all about relational databases in the cloud. It's easy to run a relational database in the cloud or in Kubernetes, but the problem with these databases is that they tend to scale vertically, and that's vulnerable and brittle in the cloud. We don't want things to scale vertically; we want to scale horizontally, and be fault tolerant and distributed. So we want to split that big, vertically scaled database into multiple smaller databases, and Vitess exists to help us do that. Vitess is a layer that runs on top of the MySQL or MariaDB engine you already know and trust, but adds powerful features like replication and sharding. You can increase replicas, reshard, and do all sorts of database manipulation using Vitess, all while using the standard MySQL engine underneath. Vitess also has a proxy process you can run that takes in SQL or gRPC traffic and distributes it across these more dynamic, sharded, replicated, changing database instances. All this makes Vitess a really great solution for running relational databases in Kubernetes: you can scale, distribute, and be fault tolerant, all while still using the kind of database interface you know and trust. So that's Vitess.

Next is TiKV. TiKV is a key-value store, so it does the things you might expect from one: adds, updates, deletes. But the cool thing about TiKV is that, like Vitess, it scales horizontally very, very well. In fact, according to the TiKV page, it scales up to petabytes of key-value data.
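Beyond scale, TiKV's other headline feature is atomic multi-key transactions, which I'll get to in a second. As a toy illustration of what all-or-nothing semantics buy you, here's an in-memory sketch in Go; this is purely illustrative and is not the TiKV API:

```go
package main

import (
	"errors"
	"fmt"
)

// store is a toy in-memory key-value map standing in for a real
// distributed store; it only exists to illustrate atomicity.
type store struct {
	data map[string]string
}

// op is a single write or delete in a transaction.
type op struct {
	key    string
	value  string
	delete bool
}

// txn applies every operation or none of them: it validates all
// delete targets first, so a failure leaves the store untouched.
func (s *store) txn(ops []op) error {
	for _, o := range ops {
		if o.delete {
			if _, ok := s.data[o.key]; !ok {
				return errors.New("missing key: " + o.key)
			}
		}
	}
	for _, o := range ops {
		if o.delete {
			delete(s.data, o.key)
		} else {
			s.data[o.key] = o.value
		}
	}
	return nil
}

func main() {
	s := &store{data: map[string]string{"a": "1"}}

	// One atomic batch: update "a", create "b".
	err := s.txn([]op{{key: "a", value: "2"}, {key: "b", value: "3"}})
	fmt.Println(err, s.data["a"], s.data["b"]) // <nil> 2 3

	// A batch containing a bad delete fails as a unit; "a" is unchanged.
	err = s.txn([]op{{key: "a", value: "9"}, {key: "zzz", delete: true}})
	fmt.Println(err != nil, s.data["a"]) // true 2
}
```

A real TiKV client does this over the network with distributed locks, but the contract is the same: no halfway states.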
That's a huge scale. Another really great thing about TiKV is that it supports distributed, ACID-compliant transactions. You can tell a TiKV installation, "Update this key, delete these three keys, and change those two keys, and do all of those operations at once or not at all." That ensures that when you do multi-key operations in your key-value store, you never get stuck in a halfway state where one operation worked, another failed, and you don't know how to recover. So if you're interested in a really high-scale or transactional key-value store, check out TiKV.

etcd is also a cloud native key-value store, so it does the same basic things I just described: adds, updates, deletes. But rather than focusing on sheer scale, etcd has focused on simplicity. It's very, very easy to get going with an etcd installation or cluster; you're often just a few commands or a single file away from a fully functional etcd cluster. It still has all the things you might expect, leader election, fault tolerance, distributed load, but it's much, much simpler to run than some of the other offerings. In fact, that combination of features and simplicity has made etcd the go-to backend for the Kubernetes API for a long time. For a while it was the only backend; there are a few others now, but odds are really good that if you're using Kubernetes, you're using etcd behind the scenes to store all the data you send into the Kubernetes API. And you can absolutely use etcd directly yourself as a key-value store.

Next is Dragonfly. Dragonfly is all about peer-to-peer file transmission: we've got multiple peers, and they can send files to each other. Dragonfly is agnostic about file content, but it does have first-class integrations for container images, with native support for peer-to-peer image transmission. That alone isn't the interesting part, though; what's cool about Dragonfly is the distributed peer-to-peer transmission itself. You set up Dragonfly nodes throughout your system, and when anybody wants a specific image, rather than going out to a single place to download the whole thing, they can download chunks of that image, or of any file, from the peers that already have them, instead of always going out over the network. So if you're curious about a better way to do peer-to-peer file or image distribution, check out Dragonfly.

Next is CloudEvents. CloudEvents, as you might guess, is all about event infrastructure in the cloud. Say we decide we want to do event-based infrastructure. One thing we might not agree on is the exact format of our events: we may want different structures and different terms, and it makes things really hard when we all want to do events but can't agree on what an event should look like. CloudEvents exists as a series of standards and SDKs for working with event-based infrastructure: we all agree to use CloudEvents, and then we can interoperate with each other easily and efficiently, because we all share the same underlying event structure. So if you're curious about event-based infrastructure,
In that same vein, let's talk about NATS. NATS is, at its heart, a message bus: you have producers and consumers, and you can put messages into NATS and get them out from other processes. NATS can of course run distributed and deal with lots and lots of producers and consumers, and it supports several messaging patterns. You can do pub/sub, where you publish a message and multiple subscribers receive it; request-reply, where you send to someone specific and get a specific answer back; or topic-based and streaming event processing, all using NATS. NATS also scales dynamically and very efficiently, so not only is it fast and flexible, it scales really well, which makes it a great solution for event-based infrastructure on Kubernetes. So that's NATS; if you're curious about building an event-based system, check it out.

Next is SPIFFE, the Secure Production Identity Framework for Everyone, and as you might guess, it's all about identity. SPIFFE is a set of standards and tools for dealing with identity in the cloud. When I say identity, I don't just mean users: I mean identity at the node level, at the workload level, and for processes inside the workload. SPIFFE is really about taking identity as deep as we need to go, and being more dynamic, more fluid, and more granular when we need to be, to help us deal with identity in the cloud. SPIFFE is another case where you probably won't use it directly. You'll more likely use the SPIRE project, which takes the standards and concepts of SPIFFE and builds tooling around them, the SPIRE server and the SPIRE agent, so you can implement the SPIFFE concepts and get the kind of identity handling I've been talking about without writing your own code. So if you're interested in identity and want to write your own code, check out SPIFFE; or if you just want to leverage the SPIFFE concepts, check out SPIRE.
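A small sketch of the workload-identity idea: SPIFFE names workloads with URIs of the form `spiffe://<trust-domain>/<path>`, identifying a workload rather than a human user. The parser below is a hand-rolled illustration, not part of any SPIFFE SDK, and the trust domain and path in the example are hypothetical.

```python
from urllib.parse import urlparse

def parse_spiffe_id(spiffe_id: str):
    """Split a SPIFFE ID into its trust domain and workload path.

    A SPIFFE ID is a URI of the form spiffe://<trust-domain>/<path>.
    This is a simplified illustration, not a full spec validator.
    """
    parsed = urlparse(spiffe_id)
    if parsed.scheme != "spiffe" or not parsed.netloc:
        raise ValueError(f"not a valid SPIFFE ID: {spiffe_id!r}")
    return parsed.netloc, parsed.path

# Hypothetical trust domain and workload path:
domain, path = parse_spiffe_id("spiffe://example.org/ns/prod/sa/billing")
```

In a SPIRE deployment, the SPIRE agent attests a workload and hands it a cryptographic document bound to an ID like this one, so services can authenticate each other by workload identity instead of by IP address or shared secret.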
Next is Open Policy Agent, which is all about policy enforcement. We feed OPA policy documents saying, "Here's what we do and do not want to allow into a given system," and then as objects are fed into that system, OPA either accepts or rejects them based on the policy it's been given. I'm being intentionally vague because OPA doesn't prescribe a specific thing it enforces; it has been used to enforce policy for a ton of things. One place OPA fits really well, though, is Kubernetes: OPA can run on top of and in front of your Kubernetes API, so you can control what you allow into your cluster and what you don't, based on the policy you give OPA. So if you're interested in policy definition or policy enforcement, check out OPA.

Last but not least is Falco. Falco is all about container runtime security. We've got our images, and we can use things like Notary to validate that we're running images we trust, but we might still want to watch those containers and processes the entire time they're running. Falco does exactly that: it runs in our infrastructure and watches our processes all the time, with a set of rules describing what it expects them to do and not do. If any of our processes does something unexpected, like accessing a database we didn't expect it to reach, Falco can see that and send an alert when it happens. So you get always-on, active security for your workloads; if you're curious about that, check out Falco.

So that's it: I've covered all of the projects in this amount of time. Hopefully I've whetted your appetite for a lot of these by giving you a basic idea of what each project does and how they fit together, so you can go forth and learn more about them as you see fit. One last thanks to phippy.io for these great characters.
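To illustrate the accept/reject flow described for OPA, here is a toy policy check. Real OPA policies are written in Rego and evaluated by the OPA engine; this Python stand-in, with made-up rules, only mimics the idea of feeding a policy plus an object in and getting an allow/deny decision out.

```python
# A toy admission check in the spirit of OPA: policy in, object in,
# allow/deny out. Real OPA policies are written in Rego; this only
# illustrates the flow.
def check(policy, obj):
    """Return (allowed, reasons) for obj under a list of policy rules."""
    reasons = [rule["message"] for rule in policy if rule["deny_if"](obj)]
    return (not reasons, reasons)

# Hypothetical policy: no containers running as root, image tags pinned.
policy = [
    {"deny_if": lambda o: o.get("runAsRoot", False),
     "message": "containers must not run as root"},
    {"deny_if": lambda o: ":" not in o.get("image", ""),
     "message": "image tag must be pinned"},
]

allowed, why = check(policy, {"image": "nginx:1.25", "runAsRoot": False})
```

The key design point, and the one OPA embodies, is that the rules live in data handed to a general-purpose enforcer rather than being hard-coded into the system being protected, so policy can change without redeploying the application.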
I absolutely love these characters. I use Inkscape to create my presentations, Sozi to animate them, and Openclipart for artwork when I can't draw things myself. And that's it. Again, my name is Carson Anderson; I work for Weave, not that Weave, and you can find out what we do at getweave.com. You can follow me on Twitter at carson_ops and on GitHub at carsonoid. Thank you so much for your time, and I hope to see you again soon.