Alrighty, we're going to go ahead and get started now. I'd like to thank everyone who is joining us today, and welcome to today's CNCF webinar, Kubernetes Zero to Hero: Deployments and Management. My name is Daniel O. I work for Red Hat as a technical marketing manager, specializing in cloud and application development, and I'm also a CNCF Ambassador. I will be moderating today's webinar. We'd like to welcome our presenter today, Anthony Ramirez, a director at Nebula Works. There are a couple of housekeeping items before we get started. During the webinar, you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there, and we will get to as many as we can at the end. This is an official webinar of the CNCF, and as such, it is subject to the CNCF Code of Conduct, so please do not add anything to the chat or questions that would be a violation of the code of conduct. Basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at www.cncf.io/webinars. With all that, I'm gonna hand it over to Anthony to kick off today's presentation. Anthony, take it away.

Thank you so much, Daniel. And thanks, everybody, for attending today. Great to see the participant list filling up here. I'd like to take a quick moment to thank Kim, Christie, and Daniel, who are part of the CNCF and helped put all of this together, and Anne Lin, the marketing director at Nebula Works, who assisted me in getting all of this put together as well. Thank you all; I appreciate all the time and effort put into this. And hopefully everybody is in good health and staying safe. Today I'm gonna be talking about Kubernetes: how teams can start to bootstrap themselves into leveraging Kube, leveraging a tool called Helm, and how Terraform, or infrastructure as code, fits into that vision. My name is Anthony Ramirez, as Daniel mentioned. I've been working in the container space for about four and a half, five years now, working at Nebula Works for about five years. Before that, I was doing a short work assignment at NASA JPL, and I've also done work in systems integration. So the cloud and Kubernetes have been part of my duties for a few years now.

In this talk, I hope to share a few things. This talk was designed for full-stack engineers, DevOps engineers, or generally anybody that's working on managing infrastructure. Nowadays, with responsibilities shifting way left, we're finding development teams having to manage infrastructure more commonly, and the entire stack, from infrastructure provisioning to application configuration and deployment, is now the responsibility of maybe one team versus siloed teams. So this talk is for that persona, and specifically for people that are trying to understand how to get started with Kubernetes, and some cloud native and open source models they can use to start taking advantage of container orchestration platforms. In this talk, I'm talking about open source tools. I'm talking about deploying EKS clusters, or Kubernetes clusters, to Amazon. I'll talk a little bit about how containers provide productivity for developers, and discuss infrastructure as code, some of those concepts, and why it's advantageous.
And I'll show a demo of an EKS cluster I have provisioned in Amazon using Terraform, as well as a demo of deploying an application with Helm. This is supposed to provide a cohesive understanding of how all of these tools fit together, and it comes from my experience working with very large organizations adopting containers and deploying applications and systems to the cloud. You might be familiar with some of these concepts; you might be doing things in a similar way or a different way, but generally the goal is to put all of these tools together and show how they fit. As we've seen in the last couple of decades, there's been a huge shift in how teams are managing applications and infrastructure; monolithic application patterns are now transitioning to microservices, thanks to technologies like containers and to the contributions Google made to the Linux kernel, which include cgroups and namespaces. Generally, I'd like to address those things, talk about how we can increase developer productivity, and, since there are so many tools out there, distill down the tool set that you would need to get started with Kubernetes.

So Docker has become a very popular open source container runtime. There's an enterprise wing to it; however, the open source tool itself has gained a lot of popularity over the years. Other runtimes exist and have become useful for teams, but Docker seems to have a large community of people using it. There are other runtimes that are compatible with Kubernetes, like containerd and CRI-O, but my experience over the past few years is primarily using Docker as the runtime and leveraging the Dockerfile as the method of container image creation. Working for Nebula Works, which is a consultancy, over the past few years I've found myself in meetings with teams trying to justify the use of container technology, or building a business case to potentially adopt these patterns and technologies and propagate them across teams. This persona may use on-premise hardware; they may have strict silos in their team structures, and they're trying to figure out how to develop containers and how to secure them. So I believe there are some benefits of using containers; those are matter-of-fact type things. And I'm not saying containers are silver bullets, either: they have their weaknesses, their vulnerabilities, their quirks, and their exploits, so they may not be right for every use case. However, they have advantages that we should always keep in the back of our heads. First and foremost, one of the things I enjoy about containers is that they're based on Linux technologies. As I mentioned, they're a result of Google's contributions to the Linux kernel about a decade ago, or a little less than that, which included cgroups and namespaces, which provide the ability to create isolation for services running on the same host. These containers operate similarly to things like Solaris Zones or BSD jails; if you're familiar with those, the container concept will be very familiar, though the way you actually use the interface or the APIs is different. Docker itself is pretty easy to use; it has a very streamlined developer workflow, which I'll be talking about in a second. But like other server templating tools, containers allow us to package our apps and our dependencies into a container image using a copy-on-write file system.
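To make that image-packaging idea concrete, here's a minimal Dockerfile sketch — the base image, file names, and application are hypothetical, not from the talk:

    FROM python:3.8-slim                                 # small base image keeps the artifact lightweight
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt   # bake the dependencies into the image
    COPY . .
    CMD ["python", "app.py"]                             # one service per container

Each instruction adds a layer on that copy-on-write file system, which is what makes rebuilds and pulls cheap.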
So the process of building containers results in artifacts that are, number one, lightweight — more lightweight than VMs. You package a single service or application into a container, maybe with some sidecar for logging or metrics collection or something like that, but the idea is that you want one application running in a container. You want to avoid noisy-neighbor syndrome; you want separation of concerns, and services that are discrete and can be scaled independently and horizontally, versus vertically, which is what you'd have to do with a monolithic application structure. Second, they're portable. We know the classic, constant reminder of why we use containers: to avoid the "it works on my machine" dilemma. If a development team was building something and an operations team or individual was supporting the infrastructure, sometimes those devs would throw applications over the wall and just have the operations team figure out how to deploy them. With containers, we can shift left the creation, testing, and build of these containers with a Docker runtime on your workstation, or with a CI system that has the Docker runtime; a lot of CI systems nowadays leverage containers as their runners. That makes it very easy to run unit tests and to have consistency from the build stage all the way through testing, development, staging, and eventually production. So packaging is streamlined across this workflow, and once we build these images, we can place these minted images into a container registry, resulting in a consistent experience for everybody that's pulling and deploying them. And since containers are inherently smaller in size, as I mentioned, they can be scaled horizontally, which is very advantageous for us: it takes less hardware, we can achieve higher density on our servers, and it allows — and encourages — us to use microservice-based architecture patterns.

On microservices themselves, there's a lot of content out there; I recommend reading about it through some blogs — ThoughtWorks has some great stuff. But essentially, microservices allow us to create discrete services exposed via some standardized API, like a REST API, with separation of concerns between different services. So if there are many discrete teams, they can iterate on their services independently without affecting each other. This is great because we get higher velocity in feature creation, there aren't that many dependencies, and since everything's exposed through a single API, there's not much that changes for consumers of that service — anything happening on the backend behind that API is abstracted away from other services. So there are a few advantages to microservices; they can get overly complicated, and they're not, again, a silver bullet, but they do promote advantageous patterns for dev teams.

So containers are very useful; they've shown themselves to be very useful, in my experience, for teams. And a very common pattern that I have seen over the years is the way that teams develop containers. The first step would be a developer that has a container runtime on their workstation. They're building, testing, and breaking their containers; they're baking their applications, written in general-purpose programming languages, into these containers. And since containers were intended to be a developer-centric, developer-focused tool, they are really easy to create.
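A sketch of that local inner loop — the image name, tag, and port are hypothetical:

    # build and tag an image from the Dockerfile in the current directory
    docker build -t myapp:feature-x .
    # run it locally and poke at it; nothing here touches production
    docker run --rm -p 8080:8080 myapp:feature-x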
Containers allow developers to create different versions of their images without affecting any production environment, so developers now have the ability to be a part of that deployment and delivery process. Just to give you some context: the code that this developer is writing would probably be stored in a place like GitHub or GitLab, and teams are most likely following a standardized branching strategy and a versioning strategy, such as trunk-based development or GitHub flow. So this happy developer gets some warm tea or some strong coffee, iterates on their application, builds an image, and pushes their code up to a repository on their feature branch. There's some pull request review process that happens, and once that happens, a CI job typically runs: it builds the Dockerfile and runs a set of unit tests for the general-purpose language, plus any other tests the team finds relevant. Typically, in the baking process, there are tools like Twistlock, or native features in the Elastic Container Registry in Amazon, that provide image scanning, so you can run vulnerability scans against the images you create. And after the approval process happens, the container is ready to be pushed to a registry. So a container registry holds a production-ready image that's versioned and tagged, and we understand that it has been tested on different environments and works across a series of different environments. Eventually, when it runs in production, the experience of deploying it should be very similar to what we have done in staging, development, and on the developer workstation. And I'll share a little bit about how to create consistency across these environments when we introduce Terraform to manage our clusters. But generally, this workflow is very common, and when it comes to building Kubernetes applications, containers are wrapped up in a pod, so containers have to exist in this lifecycle. So it's worth understanding that this is a common workflow, and that within it there are things like branching strategies that must be taken into consideration, versioning, and generally release engineering practices that relate specifically to image creation.

At one point, a few years back, there weren't that many orchestration tools: Docker Swarm was barely coming out, there wasn't Docker Compose, there weren't any Docker stacks available. So running these containers and building these containers was pretty much something you had to have a really good handle on. Once Kubernetes became more popular, and once Docker Swarm had more features to deploy multi-container applications, those types of patterns started to arise and have their own testing related to them. So if you're deploying services, you want to be able to test that when you deploy a stack, the three instances of some application can connect to each other — but that's kind of a different problem set; this one is specifically around image creation and minting.

So working for Nebula Works, I've had the privilege to work with some very large brands, providing build engineering services, training, and consulting. And there's a continuum that we have found to be somewhat consistent across teams attempting to adopt containers. The initial step is to build an orchestration platform for a team to use. In the past, I've worked on bootstrapping or automating the deployment of open source Docker Swarm clusters.
I've used Ansible to bootstrap Kubernetes onto on-prem nodes as well as Raspberry Pis, and I've used managed services like EKS or AKS. But the idea is that you need a cluster up and running to start the journey. Obviously, this is not gonna be a production-grade cluster, but it's something to get you going, something that allows teams to start experimenting. And once you have this process baked for deploying one, hopefully you have some automation, or are using something like infrastructure as code, that allows you to duplicate these environments very easily. So the first step is to get started there.

The second is to identify the domains to test and secure: for example, testing the general-purpose programming language that you're writing in, as I mentioned before; Dockerfile linting; the image build and testing process; container deployment; and so forth. Different teams may have different needs when it comes to securing and testing their applications. So it's about understanding what those domains are, getting the team together, understanding what the developers' requirements are, and helping create some alignment and level-setting across the team to make sure that all use cases are accounted for. Then you can set up the appropriate guardrails for development teams so they can focus more on their application.

The next step, after identification of these domains, is to actually execute on securing them. As I mentioned, there are image vulnerability scanning solutions that can get you some very easy wins. There are open source solutions like Clair or Anchore that you can use and integrate into your CI process, and there are also native cloud image vulnerability scanning solutions for your containers. So understanding that those tools exist, and understanding how to use them, is very important.

And finally, telemetry and security. Monitoring and logging containers, versus virtual machines or bare metal, is slightly different; there are more layers to analyze here. First there's the container level — container logs and metrics, application tracing — then machine metrics for the nodes that are running as part of a cluster, as well as Kubernetes, the container orchestration platform, itself. All of these different systems need to be monitored and logged. This may take a little while; your organization may have some standard tools for logging and metrics collection, and integrating those into the container solution is something that I find takes a little bit more time. Then there's being able to consolidate that data, whether it's machine metrics or logs, and perform some analysis on it in order to extract relevant information — setting up alerting, things like that.

So over time, once the team understands the domains that exist in this kind of factory, this workflow, their skills with containers, with Kubernetes, and with the tooling around testing, securing, and telemetry begin to increase, and they can start driving business value much faster. As you can see, it's a progressive journey; it's not one state that you get to and you're done. It's understanding that it is a journey, and that teams that may just be getting started sometimes need a path forward that's simple, that is transparent, and that gets them value fast.
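On that image-scanning point, as one example of an easy win, Amazon ECR's native scanning can be driven from the CLI — the repository name and tag here are hypothetical:

    # trigger a vulnerability scan of an image already pushed to ECR
    aws ecr start-image-scan --repository-name myapp --image-id imageTag=1.0.0
    # pull back the findings once the scan completes
    aws ecr describe-image-scan-findings --repository-name myapp --image-id imageTag=1.0.0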
There have been times where I've worked with teams that are building POCs, but there's no attempt to standardize on the continuous integration or continuous delivery workflows, and no intention to standardize on branching strategies. So keep in the back of your head that when you do have standards, and when you enforce those standards, it typically becomes easier to automate things; if development teams are doing whatever they want to do, it's a little more difficult to understand which tools can help them achieve what they want. Having a baseline standard and going from there has, in my experience, helped teams really take advantage of these technologies.

And now, Kubernetes. Containers provide some isolation for us, and they provide a streamlined workflow for packaging up our images from a development perspective. After a few iterations of building containers and running through the pull request approval process, this becomes second nature. But now, if we wanted to deploy hundreds or thousands or tens of thousands of containers, it would be much too cumbersome to do it manually, or even with a script. So in order to solve this problem of massively scaled container deployments, the community — not myself — introduced container orchestration platforms, and Kubernetes is one of those. Kubernetes makes it easy to deploy and manage container-based applications; it's like an operating system for a cluster. Developers don't have to include infrastructure-related components or services within their application definitions — infrastructure is abstracted away from developers. There are a lot of reasons why Kubernetes is a great tool to use if you're not already using it. It exposes compute resources as a single deployment platform: you define a cluster, and if you post a manifest to the Kubernetes API, it will go ahead and deploy that container application on your behalf, and you don't really have to worry about where it's being deployed to. You can even provide specific selectors or options, so that if you have a requirement for an application to run on a specific type of hardware, Kubernetes will go ahead and figure that out for you. Generally, it's a scalable platform, it's flexible, and it's a platform for building platforms.

So how did we get to Kubernetes? Well, about a decade ago, Google had an internal orchestration platform called Borg. If you're familiar with Borg, it was kind of the first container orchestration platform — it may not have been the very first, but it was widely used at Google, and it began a long line of other orchestration platforms that got us to where we are today. Borg is an internal clustering platform that is similar to Kubernetes, but the interface and the API looked quite different. Over the years, Google understood that there were some inefficiencies in the Borg architecture, so they created a tool called Omega. Omega was intended to improve on those design decisions: the way that developers were submitting jobs and the general internal architecture were improved upon in Omega, and those changes folded back into Borg. So over the years, Google was consistently improving their clustering software. And last but not least, and most recently, Kubernetes came out of that type of work.
So this container orchestration platform was designed based on everything Google learned building Omega and Borg, and they wanted to make a very developer-focused platform: abstract away all the infrastructure, make everything a REST API, simplify the architecture, and make it easy for developers to consume. This is an architecture diagram that you all might be familiar with; it's borrowed from the kubernetes.io website. On the left side, we have the Kubernetes control plane, which consists of a series of services that essentially provide the ability to create applications on the Kubernetes cluster. This includes etcd, the API server, a bunch of controllers that are in charge of creating specific objects, and the scheduler. On the right side, we have the Kubernetes nodes themselves — the machines that are actually running the workloads. And it's best practice not to run workloads on the control plane, so everything runs on the right.

One thing to note is that if we were to self-host this, it's good to have an HA setup. You might need to back up etcd, have a three-master-node minimum for the control plane, and have automation to easily deploy, update, and manage this control plane, and so forth. Managing this on your own might be a little bit cumbersome. It's been done in the past — I've done it personally about a dozen times — and as long as you have the automation and the scripts, once you build them, it's downhill from there. However, having a team or an individual manage that control plane creates unnecessary overhead, which is why, if you're getting started, I would recommend using a managed Kubernetes service. One that I'm very comfortable with is the Elastic Kubernetes Service, the managed Kubernetes platform available in Amazon. The control plane is abstracted away, so you can basically focus on provisioning the nodes themselves, then connect to your EKS cluster with a certificate that's provided to you and run the Kubernetes applications that you like. So the EKS service, or any other service similar to it, makes it very easy to get bootstrapped. And today, if we look at the three big clouds, they all have a managed Kubernetes service: they're all generally available, they're all maybe one or two minor versions behind the latest Kubernetes release — there's always some version lag there — they have RBAC, and they have multi-AZ support. About a year ago, this table would not have been true: AKS wasn't GA, and some of them did not support multi-AZ. So just to give you a comparison — as I said, I'm most comfortable with Amazon, so I would use EKS — but the service models for these platforms (platforms as a service, or really more infrastructure as a service) are very similar: they abstract away the control plane, you manage the nodes that you want to be the worker nodes, and you begin to distribute applications to that cluster.

So imagine you're gonna take the plunge and start using Kubernetes, in the cloud or on-premise — it doesn't really matter. How would you manage that infrastructure? Would it be manually? Maybe using a configuration management tool, or bash scripts, or any other method of configuring services onto hosts? The way that we manage infrastructure has evolved over the years.
Instead of manually configuring and installing servers and networks, we can represent infrastructure virtually, as source code. So why is that advantageous? Why should we use infrastructure as code? Well, for starters, since it's code, we can apply software conventions and standards to how we build something. We can add comments into what we're doing. We have the ability to take advantage of declarative languages — and I'll get into what declarative means in a second, but it allows us to define the desired state of something and let that state be reconciled for us. We can encourage self-service: if we have an infrastructure-as-code base and a development team is leveraging it, this encourages everybody to participate. If there's a single repository where the code lives, we can make pull requests, and we can create a backlog of issues and let a team or the community help burn that down. These types of patterns that exist in software engineering and development can be applied to our infrastructure today. Another great reason to use infrastructure as code is that you can move faster and safer with automation: if you have CI/CD workflows, you can add automation to test the infrastructure code that you're building, and you can have release engineering processes around it. So there are a lot of great reasons why infrastructure as code is advantageous.

At Nebula Works, where I work, we use Terraform. Terraform is an open source tool that is cloud agnostic and allows you to deploy and provision resources using a declarative language. There are other tools, like Pulumi — also a great tool for infrastructure as code if you haven't used it — that allow you to use a general-purpose programming language to provision and manage your infrastructure. The idea here is that if we have an agnostic tool, we're able to pivot from cloud to cloud as we deem necessary. Tools that are very specific to one cloud platform, like ARM templates or CloudFormation, are good tools and they work well; however, they don't really provide transferable skills. With Terraform, for example, you learn one domain-specific language, called HCL, and you're able to transfer those skills and that knowledge across multiple cloud platforms.

Declarative languages operate differently than imperative languages. The main difference is that with imperative languages or imperative tools, you have to provide a procedural definition of how to execute some program, step after step, whereas declarative means providing an end state, a desired state, and allowing the tool to figure out how to actually reach that end state. Take the difference between, for example, Ansible, which would be imperative, and Terraform. Say you provisioned 10 instances — you can do that with either tool — and you want to scale up to 15. If you set that count in the Ansible manifest, it can create 15 more, because it doesn't really understand that something already exists; with Terraform, you update that node count, and Terraform understands that 10 already exist, so it only needs to add five more, versus 15. So there are some advantages to that. Another one is that since we're using repositories to hold all this source code, we can treat the source code as a source of truth.
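To make that declarative-versus-imperative point concrete, a Terraform sketch — the resource name and variables are hypothetical:

    resource "aws_instance" "app" {
      count         = 10           # bump this to 15 and `terraform plan` reports
      ami           = var.ami_id   # 5 instances to add, not 15, because state
      instance_type = "t3.micro"   # tracks the 10 that already exist
    }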
There's a popular term going around called GitOps, which essentially means driving all operations through source code, through the source code management tool. So we can apply these software development and engineering practices — continuous integration, review and approval processes, all the things we do with general-purpose languages — to infrastructure as code. We can add rigor to it; we can add a layer of automation, security, and so on. Another great feature of Terraform, and generally of these infrastructure-as-code tools, is desired state management. State is basically information about the infrastructure that you have deployed. If you wanted to make changes to an existing deployment, Terraform is able to reconcile what exists in reality against what your manifest on your local workstation defines. So if you wanted to make an update, you could do that transparently with terraform plan and apply.

So why am I talking about Terraform so much? What's the point of infrastructure as code, and how does it relate to Kubernetes? Well, here's an example, and I'll jump into a demo really quick. This is Terraform: resource is a keyword here, and the second value is the resource type. In this case, I'm deploying an EKS cluster and an EKS node group — the control plane and the worker nodes — and I'm naming them something identifiable for myself. Since I had a pre-provisioned VPC that I was using in the sandbox environment provided by my organization, I just referenced it; I could have used a data source here to pull down data from that VPC, but for the sake of simplicity, I just added in a few private subnets that I'd like to deploy my node group's instances to. So here are the two resources that allow me to create and manage an EKS cluster. There are a couple more that I'm not showing — role policies and an IAM role — which essentially give the EC2 instances that are part of the cluster specific permissions to query metadata and so on.

So I'm gonna pivot over to my terminal; hopefully everybody sees this okay — I think the font is big enough. This directory has a few resources. I just showed you the EKS file; again, there are two resources here, the cluster and the node group. And in the variables section, I have a cluster name with a default, and since the sandbox environment I'm testing this in has a VPC with subnets that have pre-existing Kubernetes-related tags, I just had to match this up to what was already pre-provisioned for me. And this deployment is actually already done — I did it earlier because it takes about 15 minutes. So just to prove to you that this deployment exists, I'm gonna run the terraform show command, which shows me the infrastructure that I've already provisioned. The EKS cluster is providing me the certificate authority; there's information about the VPC where the node groups are gonna live, the node group itself, the AMI type that I'm using — all this stuff is provisioned in reality. I'm just gonna make sure my AWS profile is set here. And there's an aws eks command that I can run to update my kubeconfig in order to authenticate to the cluster.
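Condensed, the two resources just described look roughly like this — a sketch with hypothetical names, not the exact file from the demo, and with the IAM roles and policies omitted as mentioned:

    resource "aws_eks_cluster" "demo" {
      name     = var.cluster_name
      role_arn = aws_iam_role.cluster.arn    # cluster IAM role, not shown here

      vpc_config {
        subnet_ids = var.private_subnet_ids  # the pre-provisioned private subnets
      }
    }

    resource "aws_eks_node_group" "demo" {
      cluster_name    = aws_eks_cluster.demo.name
      node_group_name = "demo-nodes"
      node_role_arn   = aws_iam_role.nodes.arn   # node IAM role, not shown here
      subnet_ids      = var.private_subnet_ids
      instance_types  = ["m4.xlarge"]

      scaling_config {
        desired_size = 2
        min_size     = 1
        max_size     = 2
      }
    }

And that authentication step is a one-liner along the lines of aws eks update-kubeconfig --name <cluster-name>, which merges the cluster endpoint and certificate into your local kubeconfig.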
Quickly, about Terraform: there are a few nifty things you can do to validate that the files you're building are built correctly. I just ran terraform help here, and there are a couple of commands I'd like to show you. One of them is terraform fmt. terraform fmt can be hooked into your Vim setup so that when you save, it runs automatically, or you can add fmt to a CI process. Essentially, it applies spacing standards to all of your Terraform files so there's consistency in the spacing and in how a resource is laid out. Another subcommand I'd like to share with you is validate. Say I accidentally made a typo, referencing a variable name that doesn't exist. If I run terraform validate, without having to plan or apply anything, it'll spit back at me: hey, this variable doesn't exist — did you mean cluster_name? Then I can go back into that file, fix the error, and run the same command, and it shows that the configuration is valid. This is just a very basic example to show that you can add Terraform linting to a CI process.

To show you that the cluster is up, I'll run a few kubectl commands. I provisioned two nodes here — the scaling configuration associated with the node group shows a desired size of two and a minimum size of one — and my nodes are running m4.xlarge instances. The reason I chose m4.xlarge was basically that I was running the Kubeflow platform, which is a machine learning, data science platform that you can run on top of Kubernetes, and the minimum requirements were a single node with 12 gigabytes of memory and two vCPUs, so I just chose an appropriate size. One thing I learned, though, is that instance types vary in terms of their compatibility with EKS, so make sure to double-check that the instance type is compatible for use as a worker node in your Kubernetes cluster. There was one — the A1, the A-series instance types — that was cheaper than the m4.xlarge with similar specs, but I tried to use that instance type without checking the compatibility table, and it didn't work. I was trying to figure out what was going on, and it was the instance type that was not compatible. There are also some AMI types that are not compatible, though you can also pass in your own AMIs. And, just to show you some more information about the Kube cluster: this is working well; it's allowing me to make queries against it.

So that is basically how you can manage a Kubernetes cluster with Terraform. In a production setting, what we have done is use a structure similar to this — and this is not a full working example of that structure — but typically, when we're working on engagements building out Terraform code bases for our clients, we have discrete directories: development, production, and staging, as you can see in this tree at this level, plus a modules directory. I'm not gonna get into modules, but basically, if I pulled this tree into the modules directory, I could reference it from the dev, prod, and stage directories with a module reference, and that allows me to have discrete state management for three discrete environments. So this example was just to show you a very simple way that you can get started with Kubernetes.
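A sketch of that layout — the directory names are as described in the talk; the module contents and inputs are hypothetical:

    # modules/eks/          the cluster and node group resources shown earlier
    # dev/  stage/  prod/   one directory, and one state file, per environment

    # dev/main.tf
    module "eks" {
      source       = "../modules/eks"
      cluster_name = "dev-cluster"   # per-environment inputs
    }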
So, back to the deck: there's one more tool I'd like to share with everybody, and that's Helm. If you're not familiar with Helm, Helm, according to its website, is the best way to find, share, and use software built for Kubernetes. This tool has evolved a bit: at the last KubeCon I went to, in San Diego, they announced Helm 3, which basically removed the server-side component, Tiller, that existed in Helm 2. It's a great tool to package and deploy applications. So I wanted to show a quick demo of running a Helm chart on my cluster. I'm gonna deploy a tool called Prometheus; Prometheus is basically gonna help us extract metrics about our containers, our nodes, and so on. Helm is a CLI tool, and this is a preloaded command I have that installs Prometheus. One thing it does require is a namespace called prometheus, which I actually already created. So just to double-check, I ran kubectl get namespaces and was able to see that the prometheus namespace was there. When I run the helm install command, there are a few options I pass in: it's creating a persistent volume, it's using the namespace called prometheus, and it installs this Prometheus deployment to the cluster. As you can see, it sent back some information telling us how we can access the different endpoints made available by Prometheus. So, for example, this endpoint on 9090 provides access to the dashboard. If I run this command, it exports the pod name and then runs the kubectl port-forward command, which allows me to access the service on my localhost. So — localhost:9090 — here's the Prometheus dashboard. If I wanted to get information about container metrics or container memory, I could select an option here; generally, you have a selection of different stats that you can monitor. For example, just grabbing a random one here — Go metrics: you can select any of the Go metrics, execute, and build graphs. There are also some other endpoints made available, such as the push gateway, for getting metrics into Prometheus. But the purpose of this demo is to show you how easy it was to deploy a Helm chart: you just leverage a CLI tool with a pre-existing cluster, and you're able to consume pre-built applications such as Prometheus. And I'm gonna cancel that port-forward.

There was one other thing I'd like to share with you, and it's what I mentioned earlier: a deployment of Kubeflow. This is a machine learning platform that I've deployed to this Kubernetes cluster, and I used a tool called kfctl. If you go to the Kubeflow website, you can download this binary, set some environment variables — like which manifests to deploy — and run kfctl apply; with a previously created cluster, it will deploy a bootstrapped version of Kubeflow for you. So I just ran kubectl get pods -n kubeflow — the kubeflow namespace — and all of these services are related to Kubeflow. We see Argo CD, some pipeline services, and I am running some Jupyter notebooks on this. An interesting note about Kubeflow is that this platform runs Istio in the backend for all traffic management. It's a very interesting tool; it's composed of many different services, so Istio was a great option for them to build on.
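To recap the Helm portion of that demo in one place — a sketch; the talk used a preloaded install command, so the repo-add step, chart source, and label selector here are my assumptions and vary by chart version:

    # add a chart repository and install Prometheus into its namespace
    helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
    helm install prometheus prometheus-community/prometheus --namespace prometheus

    # port-forward the Prometheus server pod to localhost:9090, as in the demo
    export POD_NAME=$(kubectl get pods -n prometheus \
      -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward -n prometheus $POD_NAME 9090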
So, back to Kubeflow: I just wanted to quickly share that if, for example, there's a data science team that you're supporting and they want to run something like this, an infrastructure team that is using Terraform could easily provision that Kubernetes cluster for them, and there could be some automation involved in bootstrapping the Kubeflow platform using kfctl. And from this platform — I'm doing a port-forward on another terminal off-screen here, to localhost:8081, so this is running in my Kubernetes cluster — I can provision what are called notebook servers. A new server can be as simple as selecting an image; these are all pre-built TensorFlow images that Google provides. If you're familiar with Jupyter notebooks, this will look very familiar: basically, you get an interactive development environment in Python, and for any other application dependencies you want to inject into the container, you can build that container and build the notebook from it. This one is just running an experiment importing the TensorFlow library in Python; it's pulling images from a public database and running some algorithms against those images. So just to recap: this is an entire platform, Kubeflow, that provides a very niche set of libraries and tools for data scientists. Kubernetes is, as I mentioned earlier, a platform for platforms, and if you want to get bootstrapped quickly and be able to deploy sophisticated tools like Kubeflow, Kubernetes makes it pretty simple. As I showed you in the history here — I just ran history and piped it to grep for kfctl — this is basically what it took to deploy that Kubeflow environment, plus running a port-forward against the Istio ingress gateway. And it allows me to start running experiments with the models or algorithms that I'd like. So that concludes the demo section.

To recap, I want to share what I'd like the main takeaways to be. Containers enable developer productivity and portability — a seamless transition from development, way on the left, to production on the right. Kubernetes, the container orchestration platform, provides the ability to deploy and manage container-based apps, and the cloud offers great options for managed services. Infrastructure as code provides a very sane way to build repeatable and transparent infrastructure. And Helm is a great tool for deploying applications onto Kubernetes. All of these tools put together can really help bootstrap your application teams, and putting standards around these tools and processes has, in my experience, given teams much higher velocity than the methods they were using before. So that concludes this presentation. Thank you so much, everybody, for attending. And at this point, I'll hand it back to Daniel.

Awesome — thanks, Anthony, for the great presentation and the really practical demos. I love that. So now we have some time for questions. If you have a question you'd like to ask, please drop it in the Q&A tab at the bottom of your screen and we'll get to as many as we have time for. It looks like we have time for two questions, actually. We just got one question here: would we use Helm plus infrastructure as code, or just one? That's from Naja. So, could we use Helm plus IaC?

Oh yeah, that's a good question. So, there are providers in Terraform to manage Kubernetes applications.
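For instance, a sketch using the community Helm provider — the release and chart names are illustrative:

    resource "helm_release" "prometheus" {
      name       = "prometheus"
      repository = "https://prometheus-community.github.io/helm-charts"
      chart      = "prometheus"
      namespace  = "prometheus"
    }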
However, I think there needs to be a discussion around the scope of each of these tools. An infrastructure-as-code tool like Terraform is great at deploying raw infrastructure, or what I like to call infrastructure scaffolding: setting up the raw resources like the EC2 instances, the load balancers, your VPCs, the subnets — all those components are created through infrastructure as code. Helm, on the other hand, is a tool specifically for deploying applications to Kubernetes. It has lifecycle management features: you can do rolling updates against your charts, you can update these applications in real time, and you can manage all your deployments with Helm, whereas Terraform is more of an infrastructure-focused tool. There is a blurry line between the two, but I would say use Helm for Kubernetes application lifecycles, because you don't want to tie the lifecycle of your Kubernetes applications to your infrastructure lifecycle. Having those tightly coupled, I can see creating some problems in terms of updating and releasing your applications, because if it's all tied to the infrastructure, then they're very coupled there.

Nice. And another question just came up, from Anthony — another Anthony: what would be your recommendation on deploying mutual TLS within the cluster to secure the paths? What do you commonly see within the mesh?

Yeah, so I've used tools like Consul Connect. That's a good question, and it's a broad one, but I'll try to distill from my experience. If you're focused on understanding and operationalizing service meshes, technologies like Consul Connect or Istio, or even things like Envoy, provide mTLS out of the box. A specific project that I worked on that I can speak to was deploying Kubernetes running Consul Connect, and additionally, in that same environment, deploying Vault Enterprise. Vault is a secrets engine that was being used to distribute secrets and for some other encryption-based initiatives. But basically, we wanted to use a tool like Consul Connect to control traffic not only between the applications running in Kubernetes, but — since we had a heterogeneous workload with both VMs and Kubernetes applications — between those two as well. So there are a few options out there. Istio is also one that provides the ability to set controls over which services can connect to others.

Cool. Yeah, so another question just came up: what are the changes required in Terraform, apart from the change in provider, when you spin up a cluster in EKS, AKS, GKE, or anywhere else?

Yeah, that's a good question — it's a common thing I run into when talking about Terraform. The answer is that there's no transparent portability in Terraform, which means each cloud platform has similar services, but the naming conventions, the nomenclature, are different. What you provision in AWS versus what you provision in Google will look different. What you can do to understand what you'd need is a quick search — say, "Terraform GKE cluster" — and you'll find what it takes to actually go ahead and provision that. Obviously, these providers use the native authentication mechanism for each cloud, but in this case, compared to what I had — let me move this over really quick.
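Roughly, the comparison on screen is between resources like these — sketches with hypothetical names, not the exact code from the demo:

    # GKE
    resource "google_container_cluster" "demo" {
      name               = "demo-gke"
      location           = "us-central1"
      initial_node_count = 2
    }

    # EKS
    resource "aws_eks_cluster" "demo" {
      name     = "demo-eks"
      role_arn = aws_iam_role.cluster.arn    # IAM role, not shown
      vpc_config {
        subnet_ids = var.private_subnet_ids
      }
    }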
This cluster versus this cluster — the GKE cluster versus the AWS EKS cluster. There's the name, the location, the node count. They look a little bit different, but that can come from an easy, quick Google search, and obviously it takes some understanding of Google Cloud — you have to know the basics of each cloud platform you intend to use. But generally, the way that you use Terraform is the same: a resource definition, and inside of the code block, the stanza, some attributes that you're passing values to. This is a basic example to get you going, an example usage. If you want to understand the specific differences, I'd recommend you go to the Terraform website and search for the specific resource you'd like to create.

Cool, thanks for answering. Another question: can you talk about ownership of the tools you mentioned? Based on your experience, which team is supposed to take ownership of Helm, Kubernetes, and Terraform?

Yeah, that's a good question. Typically, there's a separation of concerns between developers and operations or infrastructure teams, networking teams, and so on. In my experience, there's an operations or infrastructure team that manages the Terraform code bases, and they work very closely with their customers, which are the development teams. Typically, if Kubernetes is a component of a workflow within an organization or business unit, the developers have an understanding of Kubernetes: they may not be managing the cluster themselves, but they're building the Kubernetes manifests, they're building CRDs, they're building the container images. So the Helm and Kubernetes side is shifted more to the left, onto the developers — though it really depends on how the team is structured. And the infrastructure management side — the people burning down the backlog of infrastructure requests and all the Terraform-related initiatives — is typically the operations or infrastructure team. That's from my experience.

Yeah, maybe we can call it the DevOps team.

Yeah, DevOps. It's like everybody's kind of responsible for more; people are more aware of what each other is working on.

Yeah, that's cool. All right, I think that's gonna be it for questions — that is all the questions we have time for today. Thanks again, Anthony, for the great presentation and the really lovely demos. Thanks, everyone, for joining us today; the webinar recording and slides will be online later today, as I mentioned earlier. We're looking forward to seeing you at a future CNCF webinar. Have a good rest of your day. Thank you.

Thanks, Daniel. Thank you, everybody.

Thanks, Anthony.