So, hi everybody. Thank you so much for joining us today, and many thanks to KubeCon and CloudNativeCon for having us. We're really excited to be here. In our session today, we're going to be covering an introduction to Kubernetes, GitOps, and observability. We're going to start out with a few introductory slides before moving on to the hands-on tutorial portion of our session. I'd like to kindly ask you to ask questions in chat, and we will be having a dedicated Q&A session following the workshop. To get us started, I'd like to invite Joaquin to introduce himself briefly.

Hey, everybody. My name is Joaquin Rodriguez. I work for Microsoft on their Commercial Software Engineering team, where I help customers with Kubernetes and open source. Happy to be here; I'm based in Austin, Texas. Again, thanks for having us.

And I'm Tiffany Wang. I'm a solutions architect on the customer success team at Weaveworks. I'm originally from Southern California and now based in London, and like Joaquin, I work with customers to streamline delivery and deployment to Kubernetes clusters, both on-premises and in public cloud, following GitOps.

So before we continue with the introductory slides, I'd like to invite you all to register for the hands-on section of our session today. If you could please navigate to https://kube101.dev; the username and password are displayed on the screen, and hopefully they'll also be available somewhere in chat. If it's helpful, the username is "kube" and the password is a capital V, an at sign, "lencia", 22, and an exclamation point. Once you've completed the basic auth to get to the registration page,
you'll need to register with your GitHub username. Once you've registered, check your email for an invitation to the Kubernetes 101 GitHub org. We'll be using the kubecon-2022 repository for our GitHub Codespaces, which is where we're going to be conducting the workshop today. So take a minute to note the username and password, and then I'll move on from this slide. We have a wonderful moderator, so hopefully you've all got the username and password. Awesome, great.

While I give you a few minutes to do that, I'll begin with an introduction to Kubernetes. Since you're all here at KubeCon, this is information you probably already know, but we can start from the very beginning. Kubernetes is an open source Cloud Native Computing Foundation project for container orchestration. It was initially created at Google and is now maintained by the CNCF. Kubernetes allows you to define declarative configuration to manage containerized workloads and services. Kubernetes is cloud native: it's highly distributed, resilient to infrastructure failures and outages, and enables frequent releases. It provides automation and observability, self-healing and horizontal scaling, and service discovery and load balancing. And it is scalable: it's capable of running on-premises, in public cloud, or a mix of both, so you can have a similar deployment experience regardless of the target cloud provider.

On this slide there are a bunch of components; these components are what allow your Kubernetes clusters to work. On the left-hand side, you'll see the control plane elements, which typically run on control plane nodes. The control plane components include the API server, which allows users to interact with the Kubernetes API and validates and configures data for Kubernetes objects. We also have the controller manager, which manages controller processes for nodes, jobs, endpoints, and many others. We also have etcd, which is the backing store for your cluster's state. We have the scheduler, which determines where workloads should run in your cluster. And we also have an additional cloud controller manager, which handles any cloud provider-specific logic.

On the right, we have three worker nodes, and worker nodes are typically where your workloads get scheduled to run. Some key components of the nodes include the kubelet, which manages containers created by Kubernetes, making sure that your containers run in pods and that they come up successfully. We also have kube-proxy, which manages the network rules for internal and external communication in your cluster.

We'll go into a little more detail about Kubernetes resources and the Kubernetes API server to understand how Kubernetes resources are grouped and managed. There are a lot of Kubernetes resources, and they're grouped by their primary function. As an example, we have groups that include role-based access control resources, scheduling, admission registration, autoscaling, events, and many more. But today in our workshop, we're going to be focusing on the core API group objects, that is, resources in the core and apps API groups. Some of these resources include namespaces, deployments, services, and secrets, and the API server allows us to create, read, update, and delete these resources. You can extend the Kubernetes API by defining custom resource definitions, which Kubernetes controllers know how to interact with. We're going to be using Flux in our workshop today, and that's a great way to explain and see one in action. It's really important to understand that all Kubernetes resources are declaratively defined using YAML. The declarative definition of resources helps to simplify some of the complex processes that happen within the cluster.

So how do all of these resources work together and eventually express an application that end users will use? This slide highlights some of the components, and we can actually start from the innermost part, which is the container. A container is an immutable copy of your application code and all of its dependencies, and it runs within a pod. A pod is the smallest deployable unit in Kubernetes, and it can include one or multiple containers. Containers within a pod share a network namespace, and the pod is ephemeral. It's assigned an IP, and you can add metadata to it like labels. While you can technically deploy a pod by itself, it's recommended that you instead define a deployment, or, if you're running stateful workloads, a StatefulSet. A deployment allows you to specify the number of replicas that you'd like to have running in your cluster. Now, if you have a deployment with n pods, each with its own IP, you can define a service that maps to the deployment. The service then gets a virtual IP, it's mapped to endpoints via labels, and it is also named in DNS. The service and the deployment both run within a namespace, which itself is another Kubernetes resource. Namespaces provide an opportunity to logically group your resources.

So Kubernetes does a lot of the heavy lifting when it comes to creating your resources and scheduling them, but there are a lot of resources to manage, especially in a microservice architecture. We need a way to reproduce what we've deployed to our clusters, and we need to understand conclusively what should be running in our cluster. For this, we look to GitOps. On this page are the OpenGitOps principles, that is, the GitOps principles that are managed by the OpenGitOps group. This is an open group.
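To make the container, pod, deployment, service, and namespace relationship concrete, here is a minimal sketch of what those declaratively defined resources look like in YAML. The names and image here are hypothetical, for illustration only, not taken from the workshop repository:

```yaml
# A hypothetical app: namespace -> deployment (pods) -> service.
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: demo
spec:
  replicas: 2                # desired number of pods
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello           # the service finds pods via this label
    spec:
      containers:
        - name: hello
          image: nginx:1.25  # an immutable container image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello
  namespace: demo
spec:
  selector:
    app: hello               # endpoints selected by label
  ports:
    - port: 80
      targetPort: 80
```

If a pod dies, the deployment replaces it; the service keeps routing to whichever pods currently match the label.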
So if you're interested in becoming more involved, we invite you to join. GitOps builds on DevOps and infrastructure as code, and by adhering to these principles you're able to empower developers by improving productivity, improving stability and reliability, and enforcing consistency and security. The GitOps principles require that: one, your entire system state is declaratively defined; two, your desired state is stored, versioned, and immutable (Git lends itself really well to this, and also allows developers to stay in the workflows they know best); three, software agents like Flux continuously pull the desired state; and four, software agents continuously reconcile the desired state to your running cluster.

So if you're new to GitOps and you're thinking about all the benefits it can bring, you might be wondering if you have to scrap everything you've built so far, and the answer is absolutely not. On the left-hand side, you'll see a continuous integration workflow that starts with a developer writing application code, then building and testing that code; the CI workflow culminates in an immutable artifact, whether that's an image or a Helm chart. Then on the right-hand side, we have the Kubernetes and GitOps workflow. This is also centered around Git, but this time for your declaratively defined desired state. GitOps allows you to streamline deployments to your clusters, observability within your cluster, and operations for your cluster. If you treat Git as your single source of truth, then any changes that you seek to make to your cluster should be made via pull request. This allows you to easily see the difference between what's running in your cluster and what you're defining as desired state, and software agents like Flux automatically reconcile the two. This gives you an inherent audit trail: you'll know exactly who made what change and when, and, if the commit messages are good, also why.

GitOps also allows you to easily roll back to the last known good state; it's as easy as a revert or a fix-forward commit. We at Weaveworks actually coined the term GitOps. Today, the technology has advanced to the point where it's not required that you use Git; you could use some other source control versioning system, but that is where the term came from. GitOps is the practice of using Git to store declaratively defined desired state, and continuous delivery agents like Flux to automate the reconciliation of current state to desired state. With GitOps, CI and CD are effectively decoupled.

GitOps itself is agnostic to tooling, but in today's workshop we're going to be using Flux, which is an open source CNCF project created at Weaveworks. Flux's runtime is comprised of several Kubernetes controllers as well as their corresponding CRDs. Today, we're going to be focusing on the source controller and the kustomize controller. The Flux source controller interacts with custom resources like GitRepositories, Buckets, HelmRepositories, and HelmCharts, and in today's workshop,
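As a rough illustration of the source-controller resources just mentioned, a Flux GitRepository looks something like this sketch. The repository URL and names are hypothetical, and the exact `apiVersion` depends on your Flux release:

```yaml
# Sketch of a Flux source-controller GitRepository resource.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: example-repo
  namespace: flux-system
spec:
  interval: 1m               # how often Flux polls the repository
  url: https://github.com/example-org/example-repo
  ref:
    branch: main             # the branch Flux monitors and pulls from
```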
we're going to be focusing on the GitRepository. A Flux GitRepository lets Flux know which repository you want it to monitor and pull from, as well as the branch. There are some additional fields that we'll be covering in the hands-on section of the workshop as well. We're also going to be using the kustomize controller, whose corresponding CRD is the Kustomization. The Kustomization custom resource allows you to tell Flux the path within the specified Git repository that Flux should pull resources from and reconcile to your running cluster. Flux is able to keep your Kubernetes cluster in sync with what you've defined in Git, and it automatically and continuously reconciles running state to desired state.

So now we've got Kubernetes, which easily allows us to declaratively define the resources that comprise our applications, and we've got GitOps to streamline the operations by which they get deployed to the cluster. We need a way to easily understand the goings-on within the cluster, and we can accomplish this with observability. Monitoring and observability go hand in hand, where observability is arguably the superset. Observability allows you to inspect, observe, explore, trace, and create custom queries to understand how your system is performing. With monitoring, the metrics, alerts, and dashboards that you set up should be actionable. There are a lot of tools that accomplish things like metrics, logging, and tracing, but today we're going to be using Prometheus for metrics collection. Prometheus is an open source CNCF project created at SoundCloud, and it stores your metrics as time series data. We're going to be using Grafana for visualizing our metrics; Grafana was created by Grafana Labs. Finally, we're going to be using Fluent Bit for log and metrics processing and forwarding. Fluent Bit is under the Fluentd umbrella created by Treasure Data, and it is also a CNCF project.

So now that we've covered Kubernetes, GitOps, and observability, let's dive into the hands-on tutorial and see it all in action. I will hand over to Joaquin to get us started.

So, yeah, let's get started. Like we mentioned before, if you choose to participate in the hands-on tutorial, you need to join the GitHub org. Today we'll be using GitHub Codespaces as our platform for development, so if you wish to participate, make sure you're part of the org. For those who joined late, the username and password are right here. You have to go to kube101.dev to register; you should get an email with an invitation, and once you join, you'll have access to Codespaces.

Okay, so let's get started. The first thing that we're gonna do: you're gonna see a little green button up here called Code, and then you're gonna click on "Create codespace on main". Hopefully the Wi-Fi is nice right now; it's been really poor. How many people have registered to the org successfully, at least? Okay, once this starts, we should be good to go; it's just this first piece, since the Wi-Fi is a little poor. Just by a show of hands, was anybody able to create a codespace? Okay, we have a few. That's awesome. Well, I think Tiffany had a backup codespace already open. And by the way, if you fall behind or if it takes a little while, all the instructions are in the README, so don't worry about it; everything that we're gonna be showing is all documented. So don't stress if your codespace doesn't load yet. Okay, so this one is coming up. Essentially, we have a pre-built image for Codespaces that has everything we need in there. It's pretty cool, because once you create your codespace, we're gonna have a k3d cluster.
We have all the deployment demos that we need, and we have some CLI tools already pre-installed, so it's kind of nice as a development environment. This is not meant for production; we mostly use it for inner-loop development, testing, and demos like this. Okay, so it seems like it's coming up, so that's great. Show of hands now: how many people have their codespaces up? We're right there with the rest of you who haven't raised your hands yet, because the demo gods are not being friendly today. The Wi-Fi gods, right? What about the other one? See which one comes up first. Maybe while we wait for this, we can start by going through the README, or I was thinking I could connect my computer to a hotspot. Yeah, let me see if connecting my computer via the hotspot helps a little bit. Let me do that really quick; give me one second. I apologize for the inconvenience. Were you able to get it? No, the Wi-Fi is gone; I can't connect here either. Okay, so should we go with plan B? Yeah. Okay, so we have somewhat of a contingency plan. Thank you all very much for your patience; we'll keep an eye on our codespaces periodically, but we have a screen recording of what it should look like if we had Wi-Fi. Once you get back to your hotel and log in, you should be able to do all of this; we tested it so many times, it's just the Wi-Fi. All of the commands are included in the README along with some descriptions of the significance of each step, so bear with us.
We're going to be playing the recording and pausing as required to explain some of what's happening. Okay, so like I was saying, I created the codespace. It typically takes about 45 seconds for the environment to get up and ready. It's pulling an image, and with this image, like I said, it creates a Kubernetes cluster for us and also gives us all the tools that we need out of the box. We have the YAMLs for deploying the applications that we need, and we have some CLI tools like flux, for example. So yes, it takes about 45 seconds to get started. You can see the container image getting built, and now it got created.

Okay, so now, yeah, we have internet now, so this will work. Let's test this really quick. Okay, looks like we're in business. Thank you guys, thank you so much. And again, you'll be able to run this for yourselves back at your hotels, when the Wi-Fi is a little more stable. Thanks again for your patience.

Okay, so now let's get started. Like I was saying, we created the codespace, and the first thing that I wanted to show you is that here, out of the box, we have a k3d cluster. k3d is a lightweight single-node Kubernetes cluster; essentially, it's a wrapper for k3s, and it runs as a Docker container. That's how we're running it. And just out of the box, if you run kubectl get all -A, we get all the resources, where the dash capital A means that we want to look across all namespaces. As you can see, we have different resources that are already created by default. We don't have any of our own resources, which involve observability, Flux, and our custom application; those aren't deployed yet, and that's what we're going to be doing first. Likewise, if I run kubectl get pods, I'm able to see all the pods available on the cluster. Is that too small?
Should I zoom in a little bit? Better, right? Okay. So something that I want to note is that out of the box, we have some port forwards defined. If you click on the Ports tab, you can see them here; we have Prometheus on port 30000, the IMDb app, heartbeat, Grafana, etc. We have these out of the box so that whenever we want to connect to these services, we're able to. But right now, since we have not deployed the application yet, we're not able to connect to them. I just wanted to note that really quick.

Okay, so let's get started. The first thing that we're going to do is deploy an application called IMDb. IMDb is essentially an application written in .NET; it's been containerized into an image, and we're going to be deploying it to our Kubernetes cluster. This application runs an in-memory movie database, and it accepts different requests, like GETs and POSTs, etc. We're going to be using that as our sample application for today's workshop.

The first thing that we want to do to get this deployed: we created a workshop manifests folder, so we're going to cd into it, and the first thing we're going to do is create a namespace. Our resources need a place to live, and for that we have a namespace. Like I noted here in the notes, in Kubernetes, a namespace provides a mechanism for isolating groups of resources within a single cluster, and names of resources need to be unique within a namespace, but not across different namespaces. Oh, yes, thank you, are we gonna go to IMDb? Okay, so in order to create the namespace, we're gonna do kubectl apply -f, where the f is for file, and I'm gonna apply the 01-namespace file. Just like that, you can see that our namespace was created. And if I just want to take a look at that YAML, under the namespace YAML file, it's very simple.
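The namespace manifest being walked through here is roughly this sketch (the filename and exact contents in the workshop repo may differ slightly):

```yaml
# 01-namespace.yaml (sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: imdb
```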
All it is, is: hey Kubernetes, I want you to create something of kind Namespace, and I want you to name it imdb. Pretty straightforward. And as you can see, I can do kubectl get ns. Oh, by the way, for kubectl there are a lot of different pronunciations; some people say "kube-cuddle" or "kube-control", so you might hear a little bit of everything. I think Tiffany says "kube-cuddle". There's a YouTube video out there that talks about all the pronunciations, so it's kind of funny. Anyway, I just wanted to mention that.

Okay, so now, to deploy our application, we're gonna use a deployment file. Let me show you what this deployment file looks like. Like I said, it's of kind Deployment, and we want to create this deployment in our imdb namespace. This deployment references this image; right now we're using GitHub Container Registry, but you can reference any container registry that you like; another popular one is Docker Hub. We're gonna pass a few arguments to our application: we're saying that we want it to run in memory, and we're passing in the zone and the region, which in this case is dev. We're also passing in the port we want the container to run on, in this case 8080, some probes to make sure that the application is up and healthy, and also some resource limits for that container.

To deploy it, we do it exactly the same way: kubectl apply -f on the deployment file. As you can see, right away our deployment was created, and we can verify that by doing kubectl get pods -n imdb, where -n means which namespace we are referencing, in this case imdb. And as you can see, 20 seconds ago, a pod with the imdb name was created. Okay, cool. So our app is up and running. Well then, let's access it, right?
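Based on the fields described above, the deployment manifest looks something like this sketch. The image path, argument names, and probe paths here are assumptions for illustration, not copied from the workshop repo:

```yaml
# 02-deployment.yaml (sketch; image and args are hypothetical)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imdb
  namespace: imdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: imdb
  template:
    metadata:
      labels:
        app: imdb
    spec:
      containers:
        - name: imdb
          image: ghcr.io/example-org/imdb-app:latest  # hypothetical registry path
          args: ["--in-memory", "--zone", "dev", "--region", "dev"]  # assumed flags
          ports:
            - containerPort: 8080
          readinessProbe:           # is the app ready to receive traffic?
            httpGet:
              path: /healthz        # assumed health endpoint
              port: 8080
          livenessProbe:            # is the app still alive?
            httpGet:
              path: /healthz
              port: 8080
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
```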
So let's call this endpoint. I'm using HTTPie, which is an HTTP client; you can use curl if you choose to. We chose this one because it's pretty easy to use and very human-readable, but if you'd rather use curl, you can do that as well. Okay, so let me copy-paste this. Oh, by the way, if you get this message right here, you say allow; that way you can copy-paste into the terminal.

Okay, so, oh no, it failed; we cannot access our application. This is expected to fail. The reason is that we don't have a service. In Kubernetes, if you want to access your application, you need to define a service, and there are different service types you can create: a ClusterIP, a NodePort, a LoadBalancer. In this case, we want to create a NodePort, meaning that we want the endpoint to be exposed on our current node's IP address. In this case, we only have one node, so we have nothing to worry about. Let me show you what that looks like really quick. Again, we have a service, it references the imdb namespace, and we're targeting port 8080 on the container, while on the node we're exposing node port 30080. Likewise, we do kubectl apply -f 03-service, and our service was created. So now, testing the endpoint the same way, we get a 200, so that's good; we have access to our application. Now, if I go to this link, it opens an HTTP client, essentially just a VS Code extension that allows you to hit different endpoints, which is kind of useful if you want to test things. So in this case, for example, I want to see my Prometheus metrics.
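The NodePort service just described looks roughly like this sketch (the node port value is an assumption; NodePorts must fall in the 30000-32767 range):

```yaml
# 03-service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: imdb
  namespace: imdb
spec:
  type: NodePort
  selector:
    app: imdb           # routes to pods carrying this label
  ports:
    - port: 8080        # service port inside the cluster
      targetPort: 8080  # container port
      nodePort: 30080   # exposed on the node's IP address
```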
So if I just click on send request, you can see the response here on the right. Or, for example, we can query all our actors; you can see 50 Cent is there. Or movies, for example. Okay, so it's working; everything is good, everything is great.

Oh, and one last thing: earlier I talked about port forwarding. We do have port forwarding by default, so if I go under Ports and click on the IMDb app, you see this little globe icon; it creates a port forward for me automatically, and I can access the Swagger docs for that application. I'm just gonna try it out. And it's all good. Thank you.

Okay, so again, this is a very, very basic Kubernetes deployment. Obviously there's a lot more to Kubernetes than this; I mean, we're at a Kubernetes conference. But the goal here is to show you the basics of how you do a deployment. In a little bit, Tiffany is gonna be talking about GitOps and how we can scale this up. Imagine you don't have one app; what if you have a hundred apps? What if you don't have one environment; what if you have pre-prod and prod and test? What if you have a hundred clusters? How can we scale this up? Tiffany will be showing that in a little bit.

So the last thing I want to do is clean up the deployment that I just did. The first thing is, I'm gonna delete the service: kubectl delete service imdb -n imdb. Oh, did you mean delete? Yes, I meant delete. Okay. But even though I delete the service, the pod is still there, so you might think, okay, well, I need to delete the pod, right? Let me show you something really quick. If I do kubectl delete pod, and then I copy-paste my pod name, -n imdb. Again, I cannot type today, sorry about that. Okay, so great, our pod is deleted, right? So we might think, okay,
we're done, our application has been deleted. Well, that's not the case: if I do kubectl get pods, you can see that a new pod was created 11 seconds ago. This is because the pods are managed by the deployment. The deployment is making sure that you have a pod based on whatever you have defined in your deployment spec, so if you delete the pod, a new one is gonna be created for you automatically. If you really want to delete the application, then you have to delete the deployment. You can do this with kubectl delete -f, referencing the file name that you used to deploy it, so I'm gonna do that. Our application is deleted; if I do kubectl get pods, you're gonna see that the pod is being terminated, and now it's gone. The same way, I'm gonna delete the namespace.

Okay, so that's the first portion of this demo. Like I said, I walked you through how to do a manual deployment of a Kubernetes application, but now it gets pretty interesting: Tiffany is gonna show you how we can achieve something very similar using GitOps. Tiffany, it's all yours.

Thank you very much, Joaquin. Before we get started, if you wouldn't mind creating a branch: for today's workshop, we're each going to be creating a branch that is unique to our username, appended with a random set of characters at the end. We're going to check out that branch and push it up, and then change directory back to the root of our repository. Now, Flux has already been installed on your cluster, so you can do a check of the Flux resources by running flux check. The Flux CLI has been included within the codespace; that's how we're able to use Flux CLI commands. Great.
We're looking for all checks passing successfully. As Joaquin mentioned, you can absolutely deploy things manually to a Kubernetes cluster, but when it comes to being able to reproduce the state of your cluster, you need a way to easily get back to your desired state. And as Joaquin mentioned, we have some manifests that are already included in this repository.

Now, Flux has already been installed in the cluster, and what this means is that the Flux runtime components, the controllers and the CRDs, are all up and running in the cluster. However, we want to point Flux to our Git repository, our branch, and a path within that Git repository. We can do that very simply by running the flux bootstrap git command; if you just copy and paste this block, we'll give that a minute to run, and I'll go through the significance of each of the arguments you pass in. The URL signifies the URL of the Git repository, which is the kubecon-2022 repository in the Kubernetes 101 organization, and we're passing in the branch variable that we just exported in the previous set of commands. We're specifying token auth, which tells Flux that we want to use a GitHub token and conduct basic auth, to allow Flux to read from the specified Git repository. We're also passing in a path, and this path is used in the Flux Kustomization, telling Flux to add its installation manifests and the sync file to the designated path.

So, great. If you've run the bootstrap command successfully, we can next run git pull to get the latest commits that Flux made to the repository on our behalf, and you'll see a few files that got added. They include the gotk-components YAML, the gotk-sync YAML, and a kustomization YAML; gotk stands for GitOps Toolkit, and we'll go into more depth about each of those files in just a minute. Aha, so there's lots that has been going on, but the latest two commits on my branch that you can see are Flux adding the component manifests as well as the Flux sync manifests.

So now what we can do is take a look at the Flux resources that got created for us, and we can do that with flux get all; by default, the namespace is flux-system, and for resources in other namespaces you can pass in the namespace argument. We see that we have two Flux resources: the GitRepository and the Kustomization. You'll see that the revision includes the name of your branch as well as the latest commit on your branch, so 3b5c and so on. And we can see that both the GitRepository and the Kustomization are reporting as ready. In the README, I've added some comments to explain how the values got passed in to create the GitRepository and Kustomization pair that lets Flux know: look in this repository, and at this path within this repository.

So that's great: we have Flux set up, and we've pulled down the latest commits that Flux made to our repository. The gotk-components file includes all of the Flux runtime resources, and the gotk-sync file includes the GitRepository and the Kustomization. There is also a kustomization.yaml, and I want to point out that this is not a Flux Kustomization resource; this is a Kustomize overlay. As its name suggests, the kustomize controller works with Kustomize, which is a configuration customization tool built into kubectl as of Kubernetes 1.14, and it allows you to keep your repositories very DRY. If you're just using plain Kubernetes manifests, like we are today, the kustomize controller will add a kustomization.yaml by default. And this is slow to load, yeah, so I'm just going to use this one.

So now that we have Flux pointed at this repository, you might have noticed that the path was very specific to the deploy/bootstrap directory. This means that Flux is only monitoring that path right now, but we have other resources that we'd like to deploy to our cluster, including the contents of the application directory and the contents of the observability
directory. So what we can do is create a Flux Kustomization to tell Flux to monitor, pull, and reconcile resources from our deploy/application and deploy/observability directories. We could very easily just run this command and Flux would create the Kustomization and begin the reconciliation process, but we want to make sure that we're adhering to GitOps principles. So instead, what we're going to do is export the contents of that Kustomization, and, in a second, here we go: we've now created a Kustomization resource and placed it in the repo, in the location that Flux is already monitoring. And so what we can do is simply add, commit, and push our latest change. Great.

So right now, the sync interval that we've set on the GitRepository is one minute, and the Kustomization sync interval is set to ten minutes. We can trigger a reconciliation manually using the Flux CLI: we can do something like flux reconcile source git, and subsequently run a flux reconcile on the Kustomization. Great, so these are showing as successful, and what we can do is take a look at how the Kustomization that we've just added is doing. This unready Kustomization is actually an indication of success on our part, for this purpose: it's telling you that the application Kustomization is failing because the observability Kustomization is not ready. If you look closely, in the application Kustomization we've added a field called dependsOn. The dependsOn field tells the kustomize controller not to reconcile the contents of this Kustomization until the dependency is ready; in this way, Flux natively supports ordered installation of manifests. So let's go ahead and add the observability Kustomization in the same exact way that we did for the application one. This time, we don't have a dependsOn; we're just going to add, commit, and push the observability Kustomization. Great.

Now, previously we did separate reconciliations of both the
source — sorry, the Git repository — and the Kustomization. But you can do both in one command by passing the --with-source argument. What this does is first reconcile the source — that tells Flux to pull the latest contents from the source repository — and then subsequently reconcile the Kustomization. I might have taken too long, and Flux might have beaten me to it. If I'd been a little quicker, you would have seen, for about a minute or so, the application Kustomization report as unready, because the observability one takes a little while to come up. But we can see now that the observability and application Kustomizations have both successfully reconciled. So it's great that these Flux resources are reporting as ready, but what does this actually look like within the cluster, and what have we deployed? In the first part of this workshop, Joaquin deployed a Deployment for the IMDb application, and he also deployed a Service — the imdb YAML includes both the Deployment and the Service. We also have webv and heartbeat Deployments, which allow us to run load tests later in the workshop. We've also deployed the observability stack, including Fluent Bit, Grafana, and Prometheus, and all of the resources it takes to deploy and configure your observability stack are expressed declaratively — including your Grafana dashboards, which are stored in ConfigMaps. So now what we can do is take a look at all of the resources within the cluster. We'd expect to see resources running in the logging namespace, the monitoring namespace, the imdb namespace, and the heartbeat namespace as well.
So all of our resources are up and running, and we've verified that all of our pods are up and running. Flux is also capable of detecting drift between what you've defined as your desired state in Git and the running cluster. We saw earlier that if you deleted a pod, the Deployment would bring it back — Kubernetes does that by default. We also saw that if you deleted the Deployment, the pod would no longer come back. But now we've pointed Flux to a repository that includes that Deployment definition. So what we can do is run a kubectl delete deployment for our IMDb Deployment, and put a watch on the Deployment. This can take a few seconds, but effectively what we'll see is Flux bringing the Deployment back on its next reconciliation loop. Great, so we see it coming up now — it's there. We see that one out of one containers are ready for the IMDb application, and we can further verify by taking a look at the pods. Flux also detects drift within the resources you've defined in Git. This command here describes the Deployment that we've defined in Git — just to show you the replicas field. We've specified that we want one pod running for the IMDb Deployment, and I can show you that here as well. So what we can do is manually edit this Deployment, and there are several ways to do that. If you want to follow along with the workshop, you can export a KUBE_EDITOR environment variable — by pasting that in, we specify that we want to edit in VS Code. We can then run a kubectl edit command, which brings up the existing resource, and you can edit the file; as long as the YAML is valid, it should save and automatically apply that change. I'll be quick here, because I'm competing with Flux now.
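For reference, the Git-defined desired state being edited here is an ordinary Deployment manifest. This is a minimal sketch, not the workshop's actual file — the image and labels are placeholders:

```yaml
# Sketch of the desired state Flux keeps enforcing; image/labels assumed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: imdb
  namespace: imdb
spec:
  replicas: 1        # manual edits to this field are reverted by Flux
  selector:
    matchLabels:
      app: imdb
  template:
    metadata:
      labels:
        app: imdb
    spec:
      containers:
        - name: imdb
          image: ghcr.io/example/imdb:latest   # placeholder image
```

Because this manifest lives in a path a Kustomization watches, any in-cluster edit that disagrees with it only survives until the next reconciliation.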
So we see that the replicas count has been updated to two, and we can see that we have two pods running, which is what we manually edited the Deployment to show. Once we wait about a minute or so, Flux should realize that the replica count in the Deployment no longer matches what we've defined as our desired state. You can see now that the second pod is being terminated — has been terminated — and if we take a look at the pods within the imdb namespace, we see one. If we hit the up arrow a few times, we can see that the replicas field within the Deployment has also been changed back by Flux. So with Flux we've now deployed the sample application and the observability stack, and checked out some of Flux's drift-detection and reconciliation capabilities. I'll hand over to Joaquin for the observability section. Thank you.

Okay, so Tiffany already got the applications deployed. One of my favorite tools to verify deployments and everything else is k9s. Essentially, k9s is a UI tool that runs in the terminal and allows you to inspect and see what's going on with different Kubernetes resources. To get started with k9s, all you have to do is type k9s on your command line, and if you press 0 you'll see all the namespaces. As you can see here, we have different pods across all namespaces. The first thing I'm going to do is check this webv-heartbeat pod: if you press l, it pulls up the logs, and you can see it's making a request around every five seconds. Likewise, I can open up my webv-imdb pod, press l, and see that it's sending around ten requests per second. And in k9s you can see different types of resources, not just pods.
If I press shift-colon and type, for example, deployment, you can see all the Deployments already in the cluster. I can do secret, for example, and see the different Secrets defined, and so on. So it's a pretty useful tool — highly recommended, especially if you're new to Kubernetes. Now, the first component I want to show is Fluent Bit. Fluent Bit is a log processor and forwarder that allows you to collect log events from different sources and deliver them to different backends. For this example, we're collecting logs from our application, but we're not forwarding them to any cloud provider right now — we're just pushing them to standard out for the sake of simplicity. But should you choose to, there are different connectors in Fluent Bit you can use to push your logs to Azure, Google, and so on. So all I really want to show from Fluent Bit: if I go to pods, open the Fluent Bit pod, press l, and filter for the app, you can see that all these logs are the ones being printed to standard out. Like I said, you can configure it for whatever provider you want to use.

The next thing I want to show, which is actually pretty cool, is Prometheus. Like Tiffany mentioned on the slides, Prometheus is a metrics collection and alerting tool. It records real-time metrics in a time-series database, and it's built around an HTTP pull model. As part of Tiffany's demo, she already deployed Prometheus, so I can show you the pod running right here, and I'm able to access the Prometheus UI. If I click on Ports — Prometheus, you see the little globe icon here — we're going to port-forward to the Prometheus UI. Coming up... let me try again. Great. Oh, we saw that last time. Yeah, let's refresh really quick.
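The Fluent Bit setup described — collect container logs, write them to standard out rather than a cloud backend — can be sketched in Fluent Bit's classic config format. The input path and tag here are typical defaults, not the workshop's exact configuration:

```ini
# Minimal Fluent Bit sketch: tail container logs and print them to stdout.
# Path/tag are assumptions; real deployments usually add a kubernetes filter.
[INPUT]
    Name    tail
    Path    /var/log/containers/*.log
    Tag     kube.*

[OUTPUT]
    Name    stdout
    Match   *
```

Forwarding to a real backend is just a matter of swapping the [OUTPUT] plugin — for example es for Elasticsearch, azure for Azure Log Analytics, or stackdriver for Google Cloud Logging.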
Oh, not this — the Prometheus tab. Interesting. Are we losing Wi-Fi? We're plugged in now. Let me see. Okay, I'm going to refresh my codespace really quick. Prometheus... okay, it's not showing. Out of curiosity, was anybody able to start up their codespace? Cool, okay, that's awesome — it's the same program, so it must be something on my end. I don't know why it's timing out; let me try one more time. Okay, there you go — Prometheus is here. Awesome.

So this is what the Prometheus UI looks like. I'm not going to go into too much detail given the time, but there are a few things I want to show. If you go under Status and then Targets, it shows you what targets Prometheus is currently scraping. For example, our IMDb app that we showed before has a metrics endpoint, so that's one of the targets Prometheus is scraping, and you can see its status is "up". We also have targets for our load testing, like the webv applications, so everything seems to be good. If I go back to the main UI, you see this little icon right here: if you press it, it shows you the metrics that Prometheus has been able to scrape and store in its database — for example, this one here. So we have the metrics — cool — but we want to make them pretty, we want to make them more user-friendly. For that we have Grafana. Grafana is an open-source observability platform that allows you to visualize metrics, logs, and traces from your applications. So you open the Grafana port; the username and password are admin and kubecon101 — it's also in the README. I'm already logged in.
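A target like the one shown under Status → Targets comes from a scrape job in Prometheus's configuration. This is a simplified static sketch — the service name, port, and job name are assumptions, and the workshop likely uses Kubernetes service discovery rather than static targets:

```yaml
# Sketch of a Prometheus scrape job for the app's /metrics endpoint.
# Target address and job name are assumptions.
scrape_configs:
  - job_name: imdb
    metrics_path: /metrics
    static_configs:
      - targets: ['imdb.imdb.svc.cluster.local:8080']
```

Once a target is up, the scraped series can be queried in the UI with PromQL — for instance a request-rate expression such as rate(http_requests_total[1m]), assuming the app exposes a counter by that name.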
This is Grafana. The first thing I want to show you: if you go under the little gear icon and press Data sources — Grafana can connect to many different data sources. In this case we're just connecting to Prometheus, but I wanted to show you that you have the capability of connecting to so many things: Elasticsearch, Grafana Tempo, Azure Monitor, Grafana Cloud, and so on. There are so many options and plugins out there you can use. For today's demo we're focusing on Prometheus, but like I mentioned, this exists and you can play with it should you choose to. Okay, so we have a Prometheus data source pointed at our Prometheus service, and if I click Save & test, we see that everything is connected as expected. Now if I go here, you can see that we have a dashboard for our IMDb application, which keeps track of how many requests per second are coming into the application, how long the requests are taking, and whether there are any errors at the moment. Right now we have no errors, and the requests seem pretty constant.

Okay, great. So now we're going to run a load test so we can see some action here in Grafana. To do that we have some pre-made tooling — you don't have to worry too much about what it does in the background. All you have to do is run this kic command, which stands for Kubernetes in Codespaces: kic test load. Let me quit from here. And I want to run a few integration tests — actually, I'm going to run them all at once, and the reason is that I want things to fail on purpose, so we can see some of those errors in Grafana. I'm going to run it a few times — I'll hit it a few more times. Okay, so now if I go back to Grafana, you can see that our load starts to go up.
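Because the workshop keeps everything declarative, a data source like the one configured above can also be provisioned from a file instead of clicked together in the UI. This is a sketch of Grafana's standard provisioning format; the in-cluster service name and port are assumptions:

```yaml
# Grafana data source provisioning sketch, pointing at the in-cluster
# Prometheus service (service DNS name and port are assumptions).
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus.monitoring.svc.cluster.local:9090
    isDefault: true
```

Dropped into Grafana's provisioning/datasources directory (for example via a ConfigMap, as the dashboards are here), this gives the same result as Save & test in the UI.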
We have more requests coming in per second, and in a little bit — give it a few seconds — there you go: you start seeing that we have some errors in our endpoints, as expected. You can see that the requests per second also went up; now we're in the orange. So it's very useful, very valuable — you can do so many things with it. If you haven't used Grafana, we highly recommend it.

Okay, I think that's it in terms of the demo. If you have a codespace still running, you can stop it by pressing this little codespace button down here and then Stop current codespace. We're going to leave this access open for maybe another few days, because not everybody was able to get access today — that way you can try this back at your hotel or back at home. With that, I want to turn it back to Tiffany, who's going to show some slides.

Right — so, unfortunately, only a handful of you were able to do the hands-on portion alongside us, but today we've covered Kubernetes, GitOps, and observability. We've deployed a sample application via Flux into our Kubernetes cluster, as well as the observability stack, following a GitOps workflow. We've also monitored that sample application and even ran some integration and load tests to simulate application use, with both successful and failed traffic. We have just a few minutes, so I'll mention very briefly that Joaquin and I will be here for the rest of KubeCon + CloudNativeCon, and you can find us at several booths, including the Microsoft Azure booth, the Flux booth, and the Weaveworks booth. In this deck we've included some additional resources for the topics we've covered today. I just wanted to get that out of the way so we can use the rest of the time for questions. Yeah, sounds good. Can we take a selfie really quick? Are you okay if we take a selfie? Yeah, let's do it really quick. Okay, cool.
So let's do the Q&A then. I think a microphone will be placed in the center aisle if you have any questions. And thank you all again so much for your patience — really appreciate it. Yes, there's a microphone back there.

Shall I stay here? Okay. I have a question regarding Flux: what's the relationship between Helm and Flux?

That's a great question. Helm is a great templating tool — it's cloud native and allows you to package up all of your resources. Flux and Helm are different: Helm includes CLI commands that let you do things like helm install, helm upgrade, and so on, but what Flux adds is the ability to declaratively define Helm resources that live in Git and that the Flux helm-controller manages. I might have mentioned during the talk that the kustomize controller is one of Flux's reconciler controllers; the helm-controller is the other. It works with a HelmRelease, which defines a Helm chart from a Helm repository that Flux then uses to do installations and upgrades. But instead of you having to manually call helm install or helm upgrade, you just add it to Git, make sure it's in a repository Flux is monitoring, and Flux performs the installations and upgrades on your behalf. You can define values either within the HelmRelease itself — similarly to passing a values YAML to a Helm command or setting individual values — or externally in a ConfigMap that you feed into the HelmRelease. Does that answer your question?

Yes. So in other words, it's the link between the Git repository and the deployment process — another layer over Helm, right? Yep. Okay, thank you very much.
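The HelmRelease pattern described in that answer can be sketched like this. The chart here is the public podinfo example rather than anything from the workshop, and the apiVersions depend on your Flux version — treat it as a sketch:

```yaml
# Sketch of a declarative Helm install via Flux; chart/repo are stand-ins
# (podinfo is Flux's common example chart). apiVersions vary by Flux version.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  url: https://stefanprodan.github.io/podinfo
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: podinfo
      sourceRef:
        kind: HelmRepository
        name: podinfo
  values:                # inline values, like a values.yaml
    replicaCount: 2
  valuesFrom:            # or values fed in from a ConfigMap
    - kind: ConfigMap
      name: podinfo-values
      optional: true
```

Committing a change to values (or to the chart version) is what triggers helm-controller to run the equivalent of a helm upgrade, so releases are driven entirely from Git.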
Thank you. Would you mind passing the microphone?

Thank you so much for a great demo, and for all the patience with the Wi-Fi issues. I have two questions. One: how do you manage your secrets? We have an issue with continuous cycling of tokens and managing them. Second: what other competitive or equivalent products exist in the same space as Flux? Thank you.

Yep, so secrets is a great question. One of the GitOps principles is that everything you intend to deploy to your cluster should be defined in Git. However, Kubernetes Secrets are only base64-encoded — effectively plain text. So there are a lot of different options for secrets management. If you're looking at open-source tooling, there's the sealed-secrets controller, which uses an encryption key that lives only within the sealed-secrets controller, so if you store the secret in Git it's effectively no longer a plain-text secret. Flux also natively supports Vault integration as well as SOPS, so those are some other options. And I believe your second question was about options apart from Flux. I did mention that GitOps itself is agnostic to tooling, so there are a few others available, and I think you'll find talks on those here at KubeCon as well — I'd invite you to check those out.

I'd appreciate it if you could name a few, so I can go and read about them.

Got it — Argo comes to mind. What other ones are out there? Sorry, I didn't come prepared with a list of alternatives — apologies.

Thank you so much. Thank you.

I'll just briefly ask: since you mentioned Argo, which is another continuous delivery tool for GitOps as well — although I'm not experienced with it —
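The point that Kubernetes Secrets are "effectively plain text" is easy to demonstrate: base64 is an encoding, not encryption, so anyone with read access to the repo (or the Secret object) can decode the value. A minimal sketch with a dummy value:

```shell
# A Secret's data is only base64-encoded, not encrypted; 'hunter2' is a
# dummy value. Anyone can reverse the encoding with base64 -d.
encoded=$(printf 'hunter2' | base64)
echo "$encoded"                       # aHVudGVyMg==
printf '%s' "$encoded" | base64 -d    # hunter2
```

This is exactly why tools like sealed-secrets, SOPS, or Vault are needed before secret material can safely live in Git.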
I want to ask: what do you find are the key differences between Flux and Argo, or what strengths in Flux might make you choose it over Argo CD?

Yeah, that's a great question, and I'll be the first to admit that I'm not supremely well-versed in Argo. But from Flux's perspective, some of the benefits include the fact that it leverages a lot of native Kubernetes functionality, including things like role-based access control, and Flux natively supports multi-tenancy within a cluster. While Flux itself might have widespread access within your cluster, you can specify that when Flux reconciles the contents of a Kustomization — perhaps from an application team's repository — it is locked down, by defining Roles and RoleBindings attached to the service account for that application team. That allows Flux to deploy and reconcile only the resources permitted by the RBAC rules. You can similarly define a ClusterRole and ClusterRoleBinding if there are cluster-scoped resources that the application team needs to deploy. Flux also scales really well. And there will be vendors for both Flux and Argo at the booths, so you can always ask more questions there about the similarities and differences — please feel free to come find us, and we're happy to chat further. Thank you. Are there any other questions? In the back there — thank you.

Hi, thank you so much for the talk. My quick question was around monitoring and observability — I know that's not the point of this talk, but I wanted to get your thoughts on using open-source tools like Prometheus versus third-party tools, maybe like Datadog and others. I'm wondering what you see as the pros of using the native open-source tools versus the rest.

Well — how deep are your pockets?
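The multi-tenancy lock-down described in that answer looks roughly like the following: a restricted ServiceAccount plus Role/RoleBinding, and a Kustomization that impersonates that account via serviceAccountName. All names here are assumptions, and the apiVersion depends on your Flux version:

```yaml
# Sketch of per-tenant RBAC for Flux; names are assumptions.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-team
  namespace: app-team
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-team-deployer
  namespace: app-team
rules:
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-team-deployer
  namespace: app-team
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: app-team-deployer
subjects:
  - kind: ServiceAccount
    name: app-team
    namespace: app-team
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: app-team
  namespace: app-team
spec:
  interval: 10m
  serviceAccountName: app-team   # reconcile as the restricted account
  sourceRef:
    kind: GitRepository
    name: app-team
  path: ./deploy
  prune: true
```

Anything in the tenant's repo that falls outside those RBAC rules fails to apply, which is how Flux confines each team to its own slice of the cluster.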
I only say that as a joke, but I would say that if you're just beginning — testing things out, figuring out which metrics you're interested in collecting, creating alerting around specific metrics, and gathering information about the general performance of your cluster — open source is a really great way to start. I think you mentioned Datadog; there are a lot of additional capabilities you can get vendor support for. But I would say the open-source community is quite generous with its knowledge, and you'll find a lot of examples and resources online as well. So these tools are battle-tested, if that makes sense.

Thank you. Thank you.

A quick one: that kic load test tool — is that something you created yourselves? Yeah, it's a little tool written in Go that essentially lets you automate this; in the background it just runs a bunch of bash scripts to make it a little easier for us. Thanks. And you can actually see the contents of the load tests and the integration tests in the repository itself — I think it's under the .kic directory. I think that got refactored, actually — if you contact me, I can point you in the right direction.

I guess this isn't the last opportunity to chat with us. Thank you again so much for your patience and for joining our session today. Thank you.