Hey, we're going to go ahead and get started. I'd like to thank everybody for joining us today. Welcome to today's CNCF webinar, Getting Started with Containers and Kubernetes. I'm Taylor Wagner, the Operations Manager here at CNCF, and I'll be moderating today's webinar. We'd like to welcome our presenter today, Wayne Warren, who's a software engineer at DigitalOcean.

Before we get going, a few housekeeping items. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your Zoom screen; please drop your questions in the Q&A rather than the chat, and we'll get to as many as we can at the end. I'd like to remind you that this is an official webinar of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of that Code of Conduct; basically, please be respectful of all of your fellow participants and presenters. Also a reminder that we will be posting the recording and slides to the CNCF webinar page later today. With that, I'm going to hand it over to Wayne to kick off today's presentation.

Thanks, Taylor. So first off, thanks everyone for joining. Today I'd like to present an introduction to modern distributed application design, containers, and Kubernetes. Throughout the presentation I'll be threading in a demo of building a Flask-based web app and deploying it onto a DigitalOcean Kubernetes cluster. As Taylor mentioned, my name is Wayne. I'm a software engineer based out of Chicago, and I work at DigitalOcean on the Kubernetes product.

Our goals today: we'll discuss trends in application design and deployment, get a high-level overview of and motivation for containers as a technology, and learn about Kubernetes architecture and objects. We'll do much of this through a demo where we build a container image for a demo Flask app, deploy that Flask app to a Kubernetes cluster, and then make it publicly available using a load balancer.

So when we talk about application modernization, we're talking about transitioning from a legacy monolithic architecture toward a microservices architecture. Microservices are a core concept in cloud-native apps and infrastructure. We'll start by considering what a monolithic application is. Here on the slide you can see a Flickr-like app that includes user management, photo management, a database adapter, and a front end. What makes this monolithic is that all of these components are intertwined in a single large code base, which presents a number of challenges. All of the components must be deployed as a whole, and they must be scaled together, even if only one of the components, say photo management, is overloaded for the resources available to it. And speaking of resources, each component may have distinctly different resource needs, and deploying them together makes it impossible to optimize each one according to those needs in isolation from the others. Releasing a monolithic application can also be tedious and error-prone, because if there's a bug in, say, the photo management component, you have to roll back all of the components to a previous version, as opposed to rolling back just the buggy component.
And finally, we have lots of code handling different pieces of logic, intertwined and dependent on one another, which makes refactoring or swapping out chunks of functionality difficult and inconvenient.

So what's the alternative? The alternative is a microservices-based architecture, where we split up the app into microservices: a collection of loosely-coupled service apps that each handle a domain-specific subset of the overall system functionality. Here in our picture you can see we have our front-end web UI, our photo management component, and our user management component. Each of these components is free to use the appropriate data store for the kind of data it'll manage: here we have a relational database management system for user management, a relational database management system for photo management, and Spaces for object storage. So you can imagine storing the actual photos in the object storage system and the metadata about users and photos in the relational databases. One of the advantages here is that each of these components can be scaled in isolation from the others, allowing for more flexible and efficient use of resources. One of the considerations you have to keep in mind when building a microservices-based architecture is that, within a given engineering organization, microservice teams will typically have to agree on protocols or APIs for inter-service communication, as opposed to making direct calls to functions or libraries within a monolithic architecture. So it does introduce some complexity in that sense.

But why is this relevant to containers and Kubernetes? The microservices architecture lends itself especially well to Kubernetes because Kubernetes has built-in abstractions that parallel this design pattern: for example, services to expose groups of identical containers as a single endpoint, and deployments to manage groups of identical workloads in order to scale them quickly up or down. We'll learn more about these shortly. But first, now that we've discussed some advantages of the microservices architecture, let's introduce a method of packaging and running these smaller self-contained applications: containers.

To understand the motivation for containers, it's helpful to know what came before. Early on, infrastructure consisted of a one-to-one relationship between an operating system and a hardware computing platform, which could lead to resource inefficiencies: you have to figure out how to pack all of your applications onto a single hardware host in a way that they all work together without interfering with each other, in terms of dependencies or shared resources like memory or the ports they depend on to provide their services. The next step in the evolution from dedicated hardware was virtual machines. Virtual machines introduced the ability to run multiple full operating systems on a single physical host, from virtual images containing all the software necessary for the application in question. These multiple full operating systems are managed by a low-level hypervisor that allocates the physical system's resources to the virtual hosts. This allows for more granular application sandboxing and versioning, and it increases efficiency compared to physical hosts because it allows otherwise underutilized compute resources to be shared between virtualized applications.
However, there are still some inefficiencies here, because each virtual host comes with the overhead of its own full operating system. So the next step beyond virtual machines is containers, which are essentially lightweight virtual machines that accomplish the goals of sandboxing apps and providing a consistent, reproducible runtime with less infrastructure overhead. Some advantages of containers over virtual machines: they don't require their own full operating system, just a container runtime; container image files are generally much smaller than virtual machine images; they generally start up much quicker than virtual machines; and there's an ecosystem of pre-built, pre-configured images available for use, for example images that provide specific versions of Go, Nginx, Python, Node.js, et cetera.

But what are containers, really? We've discussed how containers are kind of like VMs but more lightweight and portable; how are they implemented, and what do they look like? At their core, containers are an abstraction built on top of two Linux kernel features that allow you to isolate and contain processes: namespaces and cgroups. We won't go into these concepts in detail since this is a beginner-oriented webinar, but they're worth reading up on if you're curious. The important thing to know is that they help accomplish the goals of sandboxing apps and providing a consistent, reproducible runtime environment much more efficiently than full-on virtual machines.

Now we'll take a look at some of the more practical terminology surrounding the container ecosystem. First off, a container is one or more sandboxed processes running within their own root file system, managed by a container runtime like Docker. The container runtime allows you to run containers on a host operating system, and oftentimes, as in the case of Docker, the runtime also lets you build images and push them to or pull them from a registry. A container image is a set of file system layers and metadata with all of an application's dependencies: libraries, system utilities, et cetera. In the Docker ecosystem, we often define and create images using a Dockerfile, as shown briefly here; we'll take a more in-depth look at a Dockerfile in an upcoming slide. And then there's the container registry I mentioned, which the runtime lets you push images to and pull images from. This is where the ecosystem of pre-existing Docker images comes in: the container registry is a packaging system that makes container images available for download by the runtime. You can think of registries as being similar to code repositories, except they're geared toward container images. Examples include Docker Hub, Quay.io, Google Container Registry, and DigitalOcean Container Registry. And examples of container runtimes include Docker, as I've already mentioned, containerd, and CRI-O.
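To make that terminology concrete, here's roughly what pulling and running a pre-built image from a registry looks like with Docker; the image tag is one of the stock images mentioned above, and the push-side registry hostname is just an example:

```sh
# Pull a pre-built image from the default registry (Docker Hub)
docker pull python:3-alpine

# Ask the runtime to start a sandboxed process from that image
docker run --rm python:3-alpine python --version

# Pushing works the same way in reverse, against whatever
# registry you tag the image for (hostname here is made up)
# docker tag python:3-alpine registry.example.com/team/python:3-alpine
# docker push registry.example.com/team/python:3-alpine
```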
All right, so the next thing I want to do is introduce you to how to build and run a container locally. We're going to do that using a bare-bones Flask app that we'll eventually, but not right away, deploy onto Kubernetes. First we'll take a look at the Flask app, then at the Dockerfile we use to build a Docker image from the app, and later in the presentation we'll look at how to deploy the Docker image we build to Kubernetes.

For those of you who are curious, Flask is just a lightweight Python web application framework designed to make it easy to get up and running. The code you see here we won't talk about in depth, but it's all we need to get a web server up and running on port 5000 that returns an HTTP body that just says Hello World.
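For those following along at home, a bare-bones app.py in the spirit of the one on the slide would look something like this; a sketch assuming the stock Flask hello-world pattern, not necessarily the repo's exact code:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello_world():
    # The whole app: return a plain "Hello World" body
    return "Hello World!"

if __name__ == "__main__":
    # Bind to all interfaces so the server is reachable from
    # outside the container, on the port the Dockerfile exposes
    app.run(host="0.0.0.0", port=5000)
```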
All right, so how do we get it running in Docker? For that we need a Dockerfile, where we define the series of steps involved in building the layers of the Docker container image. I'll describe each of these steps and what they do.

The FROM step says our image isn't starting from an empty file system; it's starting from a pre-existing container image. This is a Python-specific container image, specifically one that's built on the Alpine Linux distribution and geared toward Python 3. What we have here is an image name, then a colon, then the image tag, which specifies the version of the image we'll pull in order to begin building our container image. The next step is WORKDIR, which just sets the working directory for all the subsequent steps in the Dockerfile. And then we have a COPY step. Let me switch over to my terminal and show you where I'm working from: the k8s intro meetup kit, which is used by the DigitalOcean community folks to present this material and demo at meetups. We have the app directory here, which contains all the files we'll be talking about while building this container: app.py, which we already looked at; the Dockerfile, which we're looking at right now; and requirements.txt, which we don't need to look at, that's just part of building a Python application. So we'll be copying requirements.txt from the working directory context into the container image at this layer.

I want to pause here for a second and talk about layers. When I say a Docker image consists of a set of layers, what I mean is that each of these steps creates a new layer in the file system. The advantage of a layered approach is that, say you have five different applications that you want to build containers for, and they all have a requirements.txt file and an app.py file, and the only difference is that each exposes a different TCP port. Even if each application had a different requirements file, everything up to the point of copying in the requirements file would reuse the previous layers. So all the layers that go into the base Python image, the layer that sets the WORKDIR, and any other common content shared between those applications would be deduplicated through the way container images are described at the metadata level. We won't go further into the details there; suffice it to say it's a way to efficiently store lots of similar images in a given registry.

Moving on: the last step I described was copying requirements.txt into the container. The next step type, RUN, lets you run arbitrary commands within the build context of the container. Here we run pip install; for those of you who aren't familiar with Python packaging, pip is just a Python packaging tool that lets you reference a set of dependencies and install them into the local file system. Then we copy the rest of the files from the current directory into the container image, we EXPOSE port 5000, and we say that the default command that runs when we run this image, when we do docker run, will be python app.py in the working directory, /app.

Now that I've described the Dockerfile, I'll switch over to my terminal and run docker build. What this command does is build the image I just described; the -t flag specifies the name we're giving our image so we can reference it later, and the dot at the end of the docker build command line says that the context for the build is the current directory. You can actually specify an arbitrary directory here, and the directory you specify is what Docker will see when it's building your image. So without further ado, let's do that. I've got some extra arguments here to make this work on my machine, and I'm also passing --no-cache so it doesn't use the build cache, so you can see it actually running the build steps, because I've previously built this image. So again: we're pulling from python:3-alpine, setting the work directory, copying in requirements.txt, and then running pip install, so you can see all of the packages that get installed into the image. Then we copy app.py into the image, expose port 5000, and set the command we want to run.

Let's look at the images we've got. I'll run docker image ls, and I'll only look at the top ten lines of output because I have a lot of images on my laptop. Here you can see we just created this image 27 seconds ago; it's 118 megabytes in size, the repository name is flask, the tag is latest, and our image ID, which is a SHA-256 digest of the image manifest, is shown here. To go off on a little tangent: the image ID and the name plus tag are alternate ways of specifying which version of an image you want. So if I want to docker run flask, we'll map port 5000 on localhost to port 5000 in the container, and we'll say latest here. That's one way to specify the image you want to run; another way would be to replace the tag with the image ID. They're just alternate ways of referencing the same image. One thing to note about tags is that they're mutable, so you can overwrite a given tag: if we rebuild this image, we'll end up with a different image ID, but since we specified the flask tag here, we'll overwrite the existing tag with the new image ID. So let's show that our app is actually running in the container by running flask:latest and then curling localhost at port 5000... which may not be working. We'll skip this because it's not that important.
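Putting the steps we just walked through together, the Dockerfile looks roughly like this; it's a reconstruction from the narration, so treat the details as approximate:

```dockerfile
# Start from a pre-built Python 3 image based on Alpine Linux
FROM python:3-alpine

# Set the working directory for all subsequent steps
WORKDIR /app

# Copy in the dependency list first, so this layer can be
# cached and reused across rebuilds that only change app code
COPY requirements.txt .

# Install the Python dependencies into the image
RUN pip install -r requirements.txt

# Copy the rest of the application files (app.py et al.)
COPY . .

# Document the TCP port the app listens on
EXPOSE 5000

# Default command when a container starts from this image
CMD ["python", "app.py"]
```

Building and running it locally is then docker build -t flask . followed by docker run -p 5000:5000 flask:latest, as in the demo.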
The important thing is container clusters. So now that we've discussed containers, built our first container, and shown it working, let's talk about how we move from this sandboxed application running on a local development machine to a production deployment running in the cloud. We've introduced containers using Docker and a bare-bones web app, but now say you're running multiple copies of this container and you want them to scale across multiple physical or virtual machines. How would you manage the lifecycle of these containers, roll them out as blue-green deploys, or perform other kinds of distributed-system management techniques? That's where container clusters come in. Examples of container cluster systems are Mesos, Docker Swarm, and Kubernetes. This talk focuses on Kubernetes, which consists of a set of master nodes that manage the cluster (scheduling, health checking, maintaining state) and worker nodes that actually run the containers and communicate with the masters.

I'll give a brief overview of Kubernetes' history. It's often abbreviated as K8s, which is a play on the number of letters between the K and the S: one, two, three, four, five, six, seven, eight. Kubernetes is an open source project that came out of Google's internal cluster management system, and it is now the most popular container cluster management system; most cloud platforms have some sort of managed Kubernetes offering. Features are added regularly, bugs are fixed up to three minor versions back, and the community is facilitated by the Cloud Native Computing Foundation, which also hosts other projects such as Prometheus, Fluentd, and others I don't have listed here.

Let's talk a little about Kubernetes architecture, since we've covered containers and made a case for the need to manage them. Kubernetes has a client-server architecture. I mentioned previously that the server side manages the cluster; we often refer to it as the control plane. Then we have the clients, which are the nodes that actually run the workloads you deploy to your Kubernetes cluster, and they're managed by the control plane.

Here we'll talk briefly about the control plane. It's broken down into the API server, scheduler, controllers, and etcd. The API server is essentially the front end for Kubernetes: it's where all of the API operations land, and it exposes a REST API over HTTP. It stores all of these API objects in the persistent storage backend, etcd, and communicates with nodes through a component that sits on the nodes called the kubelet. The scheduler is what decides where to run pods; it schedules them onto worker nodes based on resource availability and other constraints. The controllers you can think of as loops that maintain a desired cluster state. For example, if you have block volumes or load balancers or virtual machines that need to be managed in the cloud, there are cloud-specific controller managers that perform all of that management; and then you have the kube-controller-manager, which manages Kubernetes resources like groups of pods, endpoints, deployments, et cetera. And then there's etcd, which I mentioned earlier: it's basically a persistent data store for Kubernetes cluster data, which can be deployed in a highly available, distributed manner to be a reliable key-value store. It's also a CNCF project.
Together, these form the control plane that manages the operations of a Kubernetes cluster. And in a managed Kubernetes offering, the Kubernetes API is often the only thing that's really exposed to the user; all the other components are typically hidden and can't be customized, modified, or interacted with other than through the Kubernetes API.

Now let's take a look at worker nodes. The central component of a worker node is the kubelet, an agent process that manages the containers running on the node. It communicates with the control plane's API server, receives pod specs, and performs all of the interactions with the container runtime, which, like I mentioned earlier, could be Docker, CRI-O, containerd, or some other runtime. I won't talk about kube-proxy or cAdvisor too much: kube-proxy is basically a network proxy that runs on each node and enables inter-node communication between pods, and cAdvisor is a container metrics component that reports metrics back up through the Kubernetes masters.

I mentioned that the API server is how users interact with Kubernetes clusters, but it's actually simpler than hitting the REST API directly, which you can do if you want. There's a command-line tool called kubectl that interacts with the control plane via the API server and abstracts away the REST API details that most users shouldn't have to care about. It provides functionality for mutating your cluster, like creating resources, as well as listing resources and filtering them.

For this presentation, I've pre-created a Kubernetes cluster on DigitalOcean. So I'll go ahead and download the Kubernetes config for my cluster so I'm ready to create resources in subsequent steps; it'll also be a chance to show you some kubectl commands. Like I said, I already created my cluster, so the next thing I'll do is get my kubectl config. That's basically doctl kubernetes cluster kubeconfig save and then the name of the cluster. What that does is add the cluster credentials to my default kubeconfig file and then set the default context in that file to the new cluster. The reason it works that way is that the kubeconfig data structure allows you to reference multiple clusters and swap between them quickly and easily.

So that's downloading the kubeconfig. Now we can type kubectl cluster-info just to show that the config is active for the cluster I created before this presentation. It tells me where the Kubernetes master can be reached; when we were talking about the API server earlier, that's what this address refers to. It also mentions CoreDNS, which we don't need to worry about for this presentation. You can also run commands like kubectl get namespaces, kubectl get nodes, or kubectl get all -n kube-system. Basically, I'm showing you that kubectl provides a really user-friendly way to view the system resources being managed by your Kubernetes cluster. Here's a list of other kubectl commands: you can explicitly create or delete resources, you can expose services running in your cluster, and more, but we'll move on. So we've covered how Kubernetes is implemented and designed.
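For those following along, the setup commands from this part of the demo look roughly like the following; assume doctl is installed and authenticated, and that the cluster name is a placeholder:

```sh
# Merge the new cluster's credentials into ~/.kube/config and
# make it the current context ("my-cluster" is a placeholder)
doctl kubernetes cluster kubeconfig save my-cluster

# Confirm kubectl now points at the cluster; this prints the
# API server (control plane) endpoint
kubectl cluster-info

# Poke around at cluster-level resources
kubectl get namespaces
kubectl get nodes
kubectl get all -n kube-system
```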
And now let's talk about how to actually create Kubernetes objects. The first type of object we'll consider, and create in our cluster, is a namespace. To set up the motivation for why you'd want one: imagine you have 100 people working against a single Kubernetes cluster, and you want to limit their access and organize their workloads so they're not stepping on each other's toes by creating resources with the same names that override each other. A namespace is what allows you to provide this logical separation and access control. Workloads get launched into the default namespace unless you specify a namespace in a manifest or on the command line. I already showed in the terminal how to list resources in a specific namespace: I ran kubectl -n kube-system get all. I could also get a specific type of resource, say just the pods in that namespace.

The next thing we'll do is create a namespace for our Flask app, which we'll call flask: kubectl create namespace flask. Now that we've created the flask namespace, let's show that it's empty, that it doesn't have any resources in it yet. We'll type kubectl -n flask get all, and kubectl tells us it didn't find any resources in the flask namespace.

Moving on. The first type of resource we'll add to our namespace is a pod, which is the fundamental unit of work, or workload, in a Kubernetes cluster. It differs from a container in the sense that a pod can run multiple containers and can also attach volumes to those containers. A pod models a logical host, in the sense that it provides everything you need to run an instance of an application. For example, if you have an app that serves files, consisting of a container that does the serving and a container that fetches the files and does some processing, these two tightly-coupled containers can run as a single pod: they share storage, they talk over localhost, and they're guaranteed to run on the same physical node. Most pods, however, consist of a single container. Pods tend to be ephemeral, and when they die, new pods must be started. And finally, just to reiterate: you don't explicitly run containers on Kubernetes, you run pods.

To show you an example, here's what's called a pod manifest. In Kubernetes, you define and create objects using manifest files, typically in YAML, though you can also use JSON if you want. I'll quickly step through the fields here. We've got apiVersion, kind, and metadata. It's the labels within the metadata that contain arbitrary key-values, which let you associate different parts of your cluster with a key-value approach; other metadata includes the name of your pod. Finally, the last top-level item here is the spec, a pod specification, which lets you define the containers and volumes that make up your pod. Here we have an image called digitalocean/flask-helloworld:latest, so we're not actually going to use the container image we built earlier; we're going to use one that has already been pushed up to Docker Hub, which is implicit in the name here.
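Here's a sketch of the pod manifest being described; the field values are reconstructed from the narration, so the exact names in the meetup kit may differ slightly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flask-pod
  labels:
    # Arbitrary key/values used to associate objects with each other
    app: flask-helloworld
spec:
  containers:
    - name: flask
      # Pulled from Docker Hub, which is implied when no registry
      # hostname prefixes the image name
      image: digitalocean/flask-helloworld:latest
      ports:
        - containerPort: 5000
```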
There's another part of an image name you could add, which is the hostname of the container registry, but by default Docker implicitly assumes you're referring to Docker Hub for your image registry. And at the bottom of the spec, you can see we're exposing container port 5000 within our cluster.

So let's create the pod. What we're running here is kubectl -n flask, so we're running in the flask namespace we just created, and we're going to apply the manifest pointed to by the -f argument, which is flask-pod.yaml. Let's also take a look at the contents of the k8s directory we're working in: it contains flask-pod.yaml, and later we'll look at flask-deployment.yaml and flask-service.yaml. All right, we've created our pod; apologies for that, I lost my tmux session there, flask app demo, here we go. Let's take a look with get all in the flask namespace. We can see that our pod is now running in the cluster: it's been up for 20 seconds, status Running, zero restarts, one out of one containers ready within that pod.

To show that our app is actually running in the pod, let's create a port forward to that pod; it's flask-pod here. Unable to listen: it's probably unable to listen because I'm still running the app locally from earlier. Let's retry, and this time the curl should work. Yep, here we go. You can see "Handling connection for 5000"; that's a message from kubectl indicating that it's handling the port-forward connection. And below, our curl command shows that we actually did get a Hello World body when we hit localhost at port 5000. That proves our Flask app is running in the cluster, so we can move on.

Actually, let's first delete that pod. I mentioned earlier that you can both create and delete Kubernetes resources using kubectl. Here we're running kubectl -n flask with the delete command, deleting a pod specifically, and the name of that pod is flask-pod. We're deleting it because we're about to create a different type of resource that creates its own flask-pod instances.
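Recapping the pod's lifecycle at the command line, with the namespace and file names as used in the demo:

```sh
# Create the pod from its manifest, in the flask namespace
kubectl -n flask apply -f flask-pod.yaml

# Confirm the pod is running
kubectl -n flask get all

# Forward local port 5000 to the pod (run in a second terminal,
# or background it), then hit the app
kubectl -n flask port-forward pod/flask-pod 5000:5000 &
curl http://localhost:5000

# Clean up before switching to a deployment-managed pod
kubectl -n flask delete pod flask-pod
```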
We're getting short on time, so I'm going to skip over discussing labels; suffice it to say labels allow you to associate resource types, such as services, with other resource types, such as deployments and pods.

Now let's talk about workloads. We've covered the core Kubernetes unit, pods, but we don't always work with pods directly when building a full application, because we often want more abstraction on top of them. Deployments are probably the most common workload controller. Deployments are used for stateless applications and allow you to run several replicas of a given pod without having to create each of those pods manually. Through a deployment, you can update the pod image and scale the number of replicas up and down. It's the only workload controller we'll cover in this presentation, but also worthy of mention are StatefulSets, DaemonSets, Jobs, and CronJobs; they're worth looking up in the upstream Kubernetes documentation if you're interested, but we won't cover them since we're running short on time.

So, we previously covered pods, and deployments allow us to run multiple copies of a given pod. Within a deployment, there's another level of abstraction between the deployment and the pods, called a replica set, which handles a lot of the details of managing the number of pods running at a given time; but you typically won't have to worry about replica sets, you'll most likely be working with deployments directly. As I mentioned, deployments are used to run stateless apps, and they're stateless because when a pod gets destroyed, none of the data it created locally within its file system gets preserved. Deployments also allow you to control rollout rates, which is the rate at which the number of pods scales up and down, and they allow you to roll back to a specific release, as in a release of your container image.

Moving on to a deployment example. Similar to our pod manifest, here we have a deployment manifest, and you can see the similarities: apiVersion, kind, metadata, and then a spec at the top level. But within that spec is where the deployment begins to differ from a pod. Here we've introduced replicas and a selector; later, when we're working with a load balancer service, we'll be using this selector to tell the load balancer which deployment its traffic should be forwarded to. The final part of the deployment spec is the template, and this is specifically a pod template, so it looks very similar to the pod manifest itself, just nested deeper within the data structure hierarchy. I won't walk through that again since we already covered the pod manifest.

I'll go straight to creating the deployment. I'll apply flask-deployment.yaml, and you see we get deployment.apps/flask-dep created. Let's take a look at what resources that created within our namespace: first off the deployment, and then a replica set that manages two pods. So this time we have two pods automatically created by our single deployment, which gives you some beginning of insight into how a deployment is a higher-level abstraction over a pod. Again we can port forward here, but instead of port forwarding directly to the pod, we'll port forward to the deployment itself, and then we'll curl to show it working. Similar to the pod, it works fine.
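And here's a sketch of the deployment manifest we just walked through, with the pod template nested under the spec; again reconstructed from the narration, so take the names as approximate:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-dep
spec:
  # Run two identical copies of the pod
  replicas: 2
  # Which pods this deployment (and later, the service) targets
  selector:
    matchLabels:
      app: flask-helloworld
  # The pod template: same shape as the standalone pod manifest
  template:
    metadata:
      labels:
        app: flask-helloworld
    spec:
      containers:
        - name: flask
          image: digitalocean/flask-helloworld:latest
          ports:
            - containerPort: 5000
```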
Let's move on. Up to this point, we've talked about objects and components internal to the cluster, but we also want to be able to expose those workloads to the outside world and provide a stable endpoint for a set of running pods. Services provide this external exposure. I'm going to skip over a discussion of the different types of services, because the one we're most interested in is the LoadBalancer type, since that's the one we're going to create. What it provides is an external IP that traffic from outside our cluster can hit; that external IP gets translated into addresses and ports internal to the cluster, and traffic ultimately gets distributed to instances of our application running on different nodes, depending on the load balancing algorithm, whose details we won't go into either.

We'll just go ahead and create our service. Again, I'll briefly point out the similarities the service manifest shares with the deployment and the pod: we have apiVersion, kind, metadata, and then a spec. Inside the spec is where the service differs from the pod and the deployment. We're saying we want a LoadBalancer-type service, forwarding traffic from port 80 on our external IP to port 5000 on our internal applications, and the internal applications are specified by the selector at the bottom of the manifest, which says app: flask-helloworld; that's what I pointed out in the deployment manifest earlier.

Let's go ahead and start creating that service, because it takes a few minutes for the external IP address to get assigned and for the service to become available, so I want to get this going. We'll also run kubectl -n flask get service, passing a -w flag so that kubectl doesn't exit right away but instead continuously shows updates to the object as they happen. The thing we're waiting for is for this EXTERNAL-IP field to change from pending to a specific IP address. Once that happens, we'll curl that IP address at port 80 and observe, from outside the cluster, access to the application we deployed inside the cluster. For those of you following along at home, if you have a terminal open and you're comfortable writing a curl command line, you can take this public IP address, run curl http:// against it, and access the application running in the cluster I created for this presentation. Pretty cool.
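For reference, the service manifest described above is roughly this; the object name is approximate:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: flask-svc
spec:
  # Ask the cloud provider for an external load balancer
  type: LoadBalancer
  ports:
    # External port 80 forwards to port 5000 on the pods
    - port: 80
      targetPort: 5000
      protocol: TCP
  # Route traffic to pods carrying this label, i.e. the
  # deployment's replicas
  selector:
    app: flask-helloworld
```

Watching kubectl -n flask get service -w until EXTERNAL-IP flips from pending to a real address, then curling that address on port 80, is exactly what we just did.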
Let me very briefly talk about other types of Kubernetes resources. I won't put too much time into this because I want to save a couple of minutes to maybe get a question in. Other resource types include config maps, secrets, volumes, persistent volumes, and persistent volume claims. Config maps and secrets are ways you can pass configuration into your application without storing it in the application itself. Volumes and persistent volumes are a way to share data between the containers in a pod; they get mounted just like a normal block storage volume on your file system, but inside the container file systems. Other features include resource requests and limits, autoscaling, node affinity, taints and tolerations, the dashboard, and the metrics server. And then there are third-party open source applications that provide some of the previously mentioned features, like Helm, which is a kind of Kubernetes package manager for common applications such as Nginx, Apache, MySQL, Postgres, and others I can't think of off the top of my head.

If you want to learn more, these slides will be made available after the presentation, and you can click through some of these links: the DigitalOcean Kubernetes community tutorials; the Kubernetes white paper, which goes more in depth into Kubernetes architecture; the history of Kubernetes and the community behind it; the project's GitHub organization itself; and the official documentation, which is spectacular, I can't recommend it highly enough. All right, does anybody have any questions?

Yes, we have quite a few questions in the Q&A box. Makes me sorry I didn't stop you earlier. They range all the way back from 10:15 to just now. I don't know if you want to go far back; I can ask you a couple of quick questions, or if you pop open the Q&A you can take a look as well.

I've got the Q&A open here, so why don't I skim through it for a few seconds and see what I can answer in a minute or two. Let's see. Somebody asked: running the container runtime without a hypervisor is possible, but how many users actually deploy on bare metal compared to VMs? That's not very common in my experience. Typically you have virtual machines running the container runtime as another layer of abstraction within the cloud provider, which allows the provider to make the most efficient use of their hardware while still giving you a cluster that's separate from other users' clusters. But there are some use cases, like on-prem, where people will deploy a container runtime on a bare-metal operating system.

Somebody asked: wouldn't the build process overwrite files in previous layers? The way the layered file system works is that, yes, if you create a file at the exact same path as a file created in a previous layer, the new layer is authoritative: users of that layer will only see the new file, which essentially masks out identically named files in previous layers.

Somebody asked: are commands such as FROM, WORKDIR, COPY, RUN, EXPOSE, and CMD case sensitive? No, they're not, although the convention is to write them in uppercase.

Can you explain how the port mapping in Docker containers actually works, like when we go to http://localhost:port? I can't really explain that here, I'm sorry. I have some inkling of how it works, but it would be a big rabbit hole to try to explain now.

Should we use Docker Compose for prod? Can we have pros and cons? For those of you who don't know, Docker Compose is a command-line tool, written in Python, that lets you bring up a bunch of Docker containers together at the same time within your local development environment. Say you're working on a stateless app that requires a MySQL service, and you want to run integration tests against MySQL: the Compose use case for local development would be to create a Compose file that describes each of the Docker images you want to build and run, and then run a docker compose subcommand that brings it all up on your local system. And to answer the question, should we use Docker Compose for prod? I don't think so. I would stick with Kubernetes, though I'm biased toward Kubernetes, since I work on the DigitalOcean Kubernetes product and have a few years of experience with it now. Also, Docker Compose doesn't actually interact with Kubernetes; it's a Docker-specific tool.
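To make that local-development use case concrete, a minimal docker-compose.yml for the scenario just described might look like this; the service names and credentials are made up for illustration:

```yaml
services:
  web:
    # Build the Flask image from the local Dockerfile and
    # expose it on the host
    build: .
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    # A throwaway MySQL instance for local integration tests
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
```

Running docker compose up then builds and starts both containers together on your local machine.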
A question around the shell: how can you delete a block of a command at once? For example, when you typed docker run flask, how did you manage to delete "flask" in one go? That's kind of an off-topic question, but those are control-plus-letter keys. If I want to delete one word back in my shell, I type Ctrl-W, and that deletes one word back; that should work in most shells. The way I call up old command lines I've already run is hitting Ctrl-R. There's a whole category of control codes that work in most shells and let you do things more quickly at the command line. That was a fun question.

What does bootstrapping of K8s mean? Does it include creation of the Kubernetes master and worker nodes? Yeah, bootstrapping a Kubernetes cluster is exactly that: creating the Kubernetes master and creating the worker nodes that then attach to the master. That bootstrapping process is what cloud providers like GCP, AWS, DigitalOcean, and Azure abstract away from you, the user, so that you don't have to worry about managing the cluster itself; you just get to worry about managing your applications running in the cluster.

Somebody asked: what is a Kubernetes object? When I talk about a Kubernetes object, let's go back for a second to our service. I'm talking about the data structure you see here in this manifest file, which, like I've mentioned before, consists of all the metadata about your Kubernetes object that Kubernetes needs in order to continuously assert its state in the controller manager's loops.

Let's see: today we manage a lot of application-specific configuration, unique to the servers we deploy; using Docker, how would we manage this application configuration? So, if you have application-specific configuration that's unique to your servers, assuming you're running virtual servers in a cloud somewhere, you probably have virtual machine images; you create a new instance of your server from those images, and then you might deploy your application and its configuration to that server through a configuration management tool such as Puppet, Chef, or Ansible, using configuration described in the languages of those tools to specify what applications and configuration need to be on your server. With Docker, the alternative is not to use those configuration management tools, but to build images that contain your applications, and then have a cluster management system, specifically Kubernetes in the context we're talking about today, and that's where your configuration would live. If we scroll back a little, I talked about config maps and secrets earlier. A config map is basically a Kubernetes resource object that allows you to store configuration files in the cluster's etcd database and have them mounted into your applications' pod containers at runtime. Secrets are similar to config maps, except they're typically encrypted, and when you view a secret on the command line, say using kubectl, the output is obscured so that you don't accidentally reveal secret information.
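As a small illustration of that last point, a config map plus the volume mount that surfaces it inside a pod might look like this; the names here are invented for the example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: flask-config
data:
  # Stored by the API server in etcd; mounted below as a file
  settings.ini: |
    [app]
    greeting = Hello World
---
apiVersion: v1
kind: Pod
metadata:
  name: flask-with-config
spec:
  containers:
    - name: flask
      image: digitalocean/flask-helloworld:latest
      volumeMounts:
        # The config appears at /etc/flask/settings.ini
        - name: config
          mountPath: /etc/flask
  volumes:
    - name: config
      configMap:
        name: flask-config
```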
Wayne, I think we need to wrap up, if you're almost done with this question.

Yep, yep, I was just going to reiterate what the question was, because I kind of went off on a big tangent: I was answering what the alternative is, using Docker containers and cluster management systems, as opposed to a traditional virtual or hardware server where you use some kind of configuration management tool to push application and configuration code to your server.

Awesome. Well, like Wayne said earlier, we have links to the DigitalOcean community page that we'll make available in the slides you can get from the webinar page later today. That's all the time we have, obviously, and we'd like to thank Wayne for such a great presentation today, and thanks everybody for joining. We'll have those slides and the recording up later today for your viewing pleasure, and we will see you again at a future CNCF webinar. Thanks everybody so much.

Thanks, Taylor, and thanks everyone for listening to me talk.

Yes, great presentation, great questions everybody. Thanks so much.