Hello everyone. I want to welcome you all to my talk today. The title of the talk is Lowering the Barrier to Kubernetes Proficiency, and what I'm going to touch on today are some topics that should help folks understand a little bit about Kubernetes, what it takes to get initially involved with it, and how the system is architected. The intent here is, again, to help folks onboard with their Kubernetes journey. So here's a quick agenda. I'm going to cover why Kubernetes, jump into a quick intro of Kubernetes, and then talk about some of the key elements to implementing Kubernetes. First, I want to introduce myself. My name is Angel Rivera. I'm a developer advocate for CircleCI. At CircleCI, my main focus is to engage the developer community, to engage technologists, to engage anyone who's interested in technology for the most part. The key role there is for me to go out and understand how folks are using technology, understand what new technologies are out there, and maybe also look at some of the patterns for how folks implement technology. All of that is useful information that I bring back to my team at CircleCI so that we can discuss how to build bigger, better, faster, more beneficial products and features for our customers, so that they can get their jobs done quicker and more reliably. I just want you all to know that if you ever want to discuss anything with me or reach out to me, I can easily be found on Twitter. My Twitter handle is punkdata, and definitely reach out. I'm open to all kinds of conversations, so I'm interested in hearing from you all if you decide to do so. So let's start with the question, why Kubernetes, right? Is your boss telling you to deploy Kubernetes? Or are you just one of these techno junkies like me that likes to try out new technologies?
There's a lot of reasons why folks look at Kubernetes, research it, and try to understand what it is and how to use it. But I wanted to share some common reasons. These are just reasons that I hear all the time from folks in the community about why they're using Kubernetes. So if you want to improve your software development practices or the frequency of your software releases, Kubernetes is a great tool and platform to help facilitate that for you. Kubernetes is obviously more of a modern-day platform for application hosting and management of services. So it's definitely one of the more modern platforms that actually implements current software development practices. One of the nice things about Kubernetes is that it lends itself to more of the DevOps, continuous integration, continuous delivery type of software practices, right? So it's definitely a platform to help you advance your team's velocity in software releases. Another reason why folks look at Kubernetes is that it's able to do a lot more with less. And what I mean by that is the Kubernetes platform, deployed appropriately, can optimize the performance of applications and services. It also has the ability to manage resources on the underlying hardware a lot better than more traditional platforms, right? So if you're looking to cut some costs and gain some efficiencies in managing resources and infrastructure, Kubernetes is definitely a platform that you should look at to help with that mission. And finally, one of the top answers folks give me when I ask them why they would look at Kubernetes or why they want to implement it is that they're looking for solutions to help keep their services available as much as possible. They're also looking for platforms and systems to help scale their applications and services when they're under heavy requests or heavy demand.
And Kubernetes is great at being able to automatically provide these two features, right? Obviously, you have to architect and engineer your Kubernetes infrastructure for that. But at the end of the day, these are native concepts in Kubernetes. And these are true values that come from deploying Kubernetes: you definitely get that high availability of services and applications, as well as the elasticity for when you have demand spikes on those services and applications. So it helps you easily scale up when you need to and also scale down the resources when they're not in such high demand. So let's jump into a really quick introduction to Kubernetes. I really want to just cover what Kubernetes is, some of the architecture pieces to it, and also some of the key elements and services within the Kubernetes platform. It's just an intro to kind of help folks understand how Kubernetes is basically architected. The first thing I want to cover is the acronym K8s. When I first started learning about Kubernetes, I saw this acronym quite a bit. I didn't understand what it meant until I went to some conference — I think it was Tectonic back in the day — and they started talking about what K8s stood for. So I'll be using this acronym quite a bit in my presentation. And for those of you that don't know, the eight in K8s stands for the eight characters in the word Kubernetes between the K and the S. Really simple, right? Kind of magic there. But I did want to point that out because I'll be using the acronym K8s throughout my presentation, and I want everyone to have an understanding of what it means. So let's talk about the history of deployments. Back in the day when I started, if you look to the left at the traditional deployment diagram there, you can see, right, we just have three layers to the traditional deployment.
We deployed our applications to an operating system that then ran on top of some hardware. Fast forward a bit, and virtualization came on the scene to help manage the resources. When we were deploying in the traditional sense, it was really, really hard to scale applications. And the reason was, if you wanted to add more resources so your application could perform better under huge demand, you would need to actually add more bare-metal iron to that infrastructure, right? So imagine you have racks and racks of servers and you need to add more horsepower so that your application can easily serve up information under high demand. You had to add more and more bare-metal resources, right? Servers. Now, if you fast forward to the virtualization era, that kind of changed the game for hosting applications and infrastructure, because if you look at the virtualized deployment diagram there, you can see that there's a little bit more to the stack now, right? You still have that hardware layer, you have the operating system layer, but now you have the inclusion of this hypervisor, which is, again, the game changer in my mind for how to manage system resources. The hypervisor is just a software service layer that runs on top of the operating system and knows how to manage the underlying hardware resources a lot more efficiently than the traditional deployment. On top of that hypervisor, you were able to now isolate your application in what we call a virtual machine, and that virtual machine had its own operating system within that whole packaged system. So now, right, you get a lot of flexibility because everything is isolated in its own little bubble, so to speak, and you can run your application a lot more efficiently.
Now, if you fast forward to today, where there were some gains in the virtualization world, we've actually made even more gains in the container deployment era. And if you look at the diagrams between virtualized deployments and container deployments, you can see that the hardware layer doesn't change and the operating system layer hasn't changed, but now we've swapped out the hypervisor for a container runtime. And all that is, again, is just a software service layer that understands how to speak to the operating system and the hardware and manage the resources of those two things a lot better — a hell of a lot better — than the traditional sense. So what we did was add that layer of containers on top. Instead of having a virtual machine, you have what we call a container image. And the container image holds the application and all the required software libraries and dependencies that the application needs to run. So now you're creating a way more lightweight footprint for your application to run in this container environment. And that's what we're talking about today with Kubernetes: we're running things in this container runtime environment. Again, you have a lightweight footprint for your application, and these containers can run anywhere that container runtime is available to the operating system. So let's talk about Kubernetes. This is a definition I found of Kubernetes, but basically the thing to take away from it is that it's a system to help you run your applications in containers, right? And it also helps you with automatically deploying, scaling, and managing the system. So let's take a look at decomposing Kubernetes. I really like to go through these exercises of talking about Kubernetes and how the platform works as a whole, and then tearing it down a little bit more so that you can understand the critical elements of a Kubernetes platform.
So the first element of a Kubernetes system is a cluster. I'm sure you've all heard this term before, but if you haven't, a cluster is basically a bunch of disparate servers or machines that are serving a purpose, right? And in this case, they're working in unison, or in aggregation, to serve a service or application. In this case, we're serving up Kubernetes. So again, Kubernetes is made of a cluster. And then within those clusters, if you go down a layer, you can see that you'll have what we call nodes. Nodes, again, are just the individual computers, machines, servers, whatever you want to call them, that combine to create this cluster. And in Kubernetes, you have two different kinds of nodes: what I like to call the primary and the worker nodes. So you have a primary node and a worker node, and I'll get into the details of the differences. But if you look at the primary node, all it really does is host all of the Kubernetes-centric services, whereas the worker node actually has the responsibility of providing, for the most part, the majority of the work, right? Processing. It's where we host our containers. And when we host our containers, we're hosting them in — this is some terminology that I want to share — a Kubernetes pod. And a pod is basically a grouping of containers. So that translates into multiple instances of your application or service running simultaneously, right? And they're running in what they call a pod. So a pod is a grouping, again, of these things on a worker node. So let's talk about the control plane. The control plane in Kubernetes usually sits in the primary nodes, right? And this control plane is basically the brains of the Kubernetes services in your cluster.
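To make the pod idea above concrete, here's a minimal sketch of a pod manifest. The names and images are just placeholders, not anything from the talk — the point is simply that a pod groups one or more containers that get scheduled together onto a worker node:

```yaml
# A minimal Pod manifest: one pod grouping two containers
# that run together on the same worker node.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  containers:
    - name: web             # main application container
      image: nginx:1.25     # any container image works here
      ports:
        - containerPort: 80
    - name: sidecar         # second container in the same pod
      image: busybox:1.36
      command: ["sh", "-c", "sleep infinity"]
```

Applying this with `kubectl apply -f pod.yaml` asks the control plane to schedule the pod onto a worker node, where the kubelet starts both containers.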
So, you know, we're going to talk about Kubernetes and break that down a little bit into the different components in the architecture of the Kubernetes platform. All right, so let's talk about the Kubernetes kube-apiserver. Kubernetes is built with an API-first mentality. What that means is it's really easy to build services and programs that communicate with the Kubernetes platform. Now, this is very useful because you have the ability to very easily integrate with the Kubernetes platform and control things programmatically. That's the key component here, and the key thing I want you to take away from this: it's really easy to control Kubernetes via programmatic services or applications. You don't have to go in and manage things manually for the most part; you can write services and applications to actually do this. Having that capability is really, really important because Kubernetes as a platform is essentially running applications, right? So it's kind of like machine-on-machine action. Another component of Kubernetes that's really important to understand is etcd. etcd is basically a distributed persistence component, or system, that enables a cluster to communicate amongst all the computers in the cluster — all the nodes in the cluster — and basically ensures that they all have the same information on them, right? And this all pertains to the Kubernetes-level services, right? Again, you have your applications and pods running elsewhere, but etcd is managing the information and configurations for the whole Kubernetes cluster. So again, it's a persistence layer that's distributed, and it's distributed among all the nodes in your cluster. Next, the kube-scheduler. The kube-scheduler basically enables you to manage all the processes that are executed on the Kubernetes platform.
So what this thing does is keep everything in line according to whatever deployments or configurations you have on the cluster. Now, the kube-controller-manager. The kube-controller-manager is basically a service that runs other sub-components, the controllers. They keep track of things like the state of the nodes, have the ability to control the replication of things going on in the platform, and also control things like service accounts and token controllers, right? Kubernetes has a role-based access control security architecture, and again, the kube-controller-manager manages a bunch of services specific to things that are running on nodes. So let's talk about some of the components that actually run on the node systems outside of the Kubernetes-specific services, right? These are the non-control-plane components that I'm going to be describing here in a moment. Every node in the Kubernetes cluster has what we call a kubelet, and the kubelet is basically an agent, right? It's just software that runs on every node, and it's the way the nodes within the cluster are synced, right? So the kubelet agent runs on all the nodes and also ensures that containers are running in a pod, right? So that's, again, a binary or an agent on the system. Next, the kube-proxy. Basically, Kubernetes has a bunch of networking requirements inside the platform. This service runs on each node, and it maintains all the network rules, essentially, for enabling the different services to communicate with pods, right? Again, the pods are where your containers and applications are running, right? So kube-proxy is a way for those communications to occur within the cluster, and it also enables you to have traffic coming in and out of your cluster accordingly.
And finally, the other service that runs on all the nodes is the container runtime, right? And this is the big daddy, in my opinion. This is the big dog. This is the most important component, because remember, Kubernetes is a container orchestration service or platform, and without containers or the container runtime, you pretty much have nothing, right? I will say that currently, the container runtime du jour — the default container runtime in Kubernetes today — is Docker. But you can also run containerd, which is an open source project and is actually at the core of Docker as well. So again, the container runtime is a component that runs on all the nodes and provides the ability to actually run things, right? Run containers. So let's talk about some of the benefits of Kubernetes. Service discovery, right? Once you deploy containers, Kubernetes has this concept of service discovery, and it basically means that you're able to route traffic to the appropriate containers. And it does it easily, right? It manages things like IP addresses and DNS entries, and it distributes that traffic across all the containers. You can also mount storage within Kubernetes, right? There are going to be occasions where you'll probably need to have some information that's shared with all the containers that are running an application for you, and Kubernetes has some really good mechanisms for doing that. So again, a benefit of Kubernetes is being able to understand the different kinds of storage mechanisms, especially when you're talking about deploying Kubernetes on cloud providers. It's a really important and useful feature. So let's talk about rollouts and rollbacks. Basically, Kubernetes enables you to automate these things.
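The service discovery described above is usually expressed with a Service object. Here's a rough sketch — the name, label, and port numbers are illustrative placeholders:

```yaml
# A Service gives a stable DNS name and virtual IP to a set of pods,
# selected by label; kube-proxy load-balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: web-service        # becomes a DNS name inside the cluster
spec:
  selector:
    app: web               # routes to any pod labeled app=web
  ports:
    - port: 80             # port the service exposes
      targetPort: 8080     # port the containers actually listen on
```

Other pods in the cluster can then reach the application at `web-service:80` regardless of which pods are currently backing it.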
And when you can automate these things, you can define desired states for your applications and how they're being deployed into Kubernetes, right? It's very useful for things like canary deployments or blue-green deployments, right? It gives you a lot of ease with those concepts. If you're trying to implement a canary-type rollout, or you're testing the waters with these types of deployments, being able to do this automatically actually adds a lot of benefit to your learning experience as well — not only with Kubernetes, but with the new ways that you're going to be deploying software. Next, automatic bin packing. This is pretty much a benefit that, again, helps manage resources. With Kubernetes, right, you have this concept of running things on clusters, which, if you break that apart, means you have a node, and that underlying node has resources like CPU, RAM, disk, and networking, right? Bin packing helps you manage the allocation of CPU and RAM for those containers, and it basically slots them appropriately. So you would define a deployment, and that deployment might say, let's run every instance of a container with four gigs of RAM. Kubernetes ensures that that happens. Now, one of my other favorite Kubernetes capabilities is the ability to self-heal. Self-healing comes in the form of containers being restarted or replaced, right? It's almost like Wolverine, that character in the X-Men movies — whenever he's injured or damaged in any way, his body immediately heals. Kubernetes has kind of the same feature for containers.
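The rollout, bin-packing, and self-healing ideas above all come together in a Deployment manifest. This is a hedged sketch with placeholder names and values: the four-gigs-of-RAM example maps to the `resources` section (the scheduler uses requests when bin-packing pods onto nodes), the `strategy` section automates rolling updates, and the liveness probe is one way failed containers get restarted automatically:

```yaml
# A Deployment declaring a desired state: three replicas, a rolling
# update strategy, resource requests for bin packing, and a liveness
# probe so unhealthy containers are restarted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy
spec:
  replicas: 3                  # desired number of pod instances
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # replace pods one at a time on rollout
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myorg/web:1.0   # placeholder image
          resources:
            requests:
              cpu: "500m"        # scheduler bin-packs using these
              memory: "4Gi"      # the "four gigs of RAM" example
            limits:
              memory: "4Gi"
          livenessProbe:         # self-healing: restart on failure
            httpGet:
              path: /healthz     # assumed health endpoint
              port: 8080
```

Rolling back is then a single command (`kubectl rollout undo deployment/web-deploy`), since Kubernetes keeps the previous desired state around.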
So again, if you have any containers that are not performing as designed — maybe taking up too many resources, or whatever kinds of issues come up where a container is not in its desired state — Kubernetes knows this, and Kubernetes actually goes ahead and terminates those containers and then provides new containers to replace them. It helps with maintaining that high availability we talked about earlier in the presentation. That's one of the benefits of Kubernetes. So let's talk about Kubernetes' capabilities for secrets management and configuration. You can manage and store sensitive information for your application configurations without having to rebuild container images — and storing secrets in container images is not a good practice anyway. So Kubernetes gives you the ability to define these secrets and then provide them to the containers, when they're needed, through the platform. Very powerful stuff, very enabling from a security standpoint, because prior to this, as you all can probably attest, managing application secrets was a pain — a real pain. So let's talk about implementing Kubernetes. I just spoke to all the components within Kubernetes and talked about the architecture of Kubernetes to give you a sense of what that looks like inside the platform. Now I want to talk about a couple of things that are not specifically related to the platform, but more along the lines of the proficiency part of this talk. When we're implementing Kubernetes, you definitely need, as an individual or as a team, a set of skill sets. These are not directly related to Kubernetes in a sense; they're more skill sets that human beings should learn or develop while on their Kubernetes journey. So the first skill I would recommend folks develop is a sense of security.
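A sketch of the secrets capability just described — the secret name, key, and value here are purely illustrative (never commit real credentials), but the shape is the standard one: the sensitive value lives in a Secret object, and the pod references it at runtime instead of baking it into the image:

```yaml
# A Secret holds sensitive values outside the container image;
# the pod below consumes it as environment variables at runtime.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: "example-only"   # illustrative value only
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myorg/app:1.0      # placeholder image
      envFrom:
        - secretRef:
            name: db-credentials  # injected without rebuilding the image
```

Rotating the password then means updating the Secret and restarting the pods — the image itself never changes.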
Not that you're all insecure, but by the same token, Kubernetes deals with role-based access controls, and I definitely recommend folks have a firm understanding — again, you don't have to be a master, but just a foundational knowledge — of role-based access controls and security concepts in general. That helps with troubleshooting problems that are security-related. If you don't have that background, it'll take you a much longer time, and actually cause a lot more frustration, when you're resolving issues that are security-related. You definitely want to jump into learning YAML. YAML is a declarative data structure similar to JSON, but laid out in a different manner. It's actually designed to be a more human-readable syntax, but it is kind of the de facto data structure used throughout the cloud-native genre. So definitely have a foundational knowledge of YAML — what it is and how it's structured — so that you can easily implement Kubernetes. And since Kubernetes is a container orchestration system, I highly recommend, if you do anything with any of these skill-set recommendations I'm giving you, definitely dig deep into understanding what containers are, the container runtime, and basically the nuts and bolts of the Docker and containerd engines. Again, you're orchestrating containers within Kubernetes, and a lot of folks don't focus much on this, and that leads to deep, deep frustration, because, again, they don't have a fundamental understanding of containers. And Kubernetes is one of these container orchestration tools — if you don't understand containers, you're definitely not going to understand Kubernetes. So again, please have a foundational knowledge of containers before even jumping into the Kubernetes sphere.
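As a quick illustration of the YAML point above, here's a small made-up data structure in YAML with its JSON equivalent in a comment — indentation marks nesting, dashes mark list items, and unlike JSON, comments are allowed:

```yaml
# The same data in YAML: nesting via indentation, lists via dashes.
service:
  name: web
  replicas: 3
  ports:
    - 80
    - 443
# Equivalent JSON:
# {"service": {"name": "web", "replicas": 3, "ports": [80, 443]}}
```

Every Kubernetes manifest you write — pods, deployments, services — is built from exactly these two shapes: mappings and sequences.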
Also, this will be very, very useful to you: since Kubernetes is managing traffic to your containers and your applications, having a firm understanding of networking fundamentals is very, very helpful. I know having a background in networking helped me understand Kubernetes a little quicker than some of my peers. So definitely focus in on that if you don't have a firm understanding of, or experience in, networking. You don't have to be a network engineer per se, but definitely understand things like domain name services like DNS, IP addressing, routing, and network address translation, and then of course ports and firewalls. Next, APIs. I did mention that Kubernetes took kind of an API-first approach when they designed and developed the platform. So definitely have an understanding of APIs. That will help you understand the whole application interface for Kubernetes. Most of you are probably developers anyway, so this shouldn't be foreign to you at all. But for those who are not developers — folks jumping into the tech and development space who are looking to work with Kubernetes — definitely learn APIs. I talked about this earlier: just have a fundamental knowledge of persistence layers and storage in general. Different cloud providers have different ways of implementing block storage in their systems. So just have a fundamental knowledge of storage. Monitoring and logging are very important. This is the way that you're going to understand how your applications, pods, and containers are running within Kubernetes, and it's also a way for you to understand how Kubernetes itself is functioning. So again, as developers, right, we're used to logging.
It's safe to say that as developers, we're not that familiar with monitoring, but it's no different than perusing logs and then understanding the patterns within those logs, right, to find out what's actually occurring within your application. By monitoring, what I'm trying to denote here is: understand the observability tools that are out there. There are plenty of them, and you might want to implement a few of them inside your Kubernetes infrastructure so that you get some good telemetry. Then, when you have issues, you'll be able to troubleshoot faster and more easily. Infrastructure as code is really, really important. I would put this on par with Docker, in a sense. The reason I say this is that it's one of those skills that will translate outside of Kubernetes, across the whole cloud-native sphere. Once you learn the fundamentals of infrastructure as code, depending on what tool you pick, you'll have a new skill in your tool belt, so that you can implement infrastructure in a codified manner. Infrastructure-as-code tools like Pulumi or HashiCorp Terraform — both open source, both phenomenal — are really becoming the industry standard in technology. So have a firm understanding, because when you're building those compute nodes and service nodes in AWS or some other cloud provider — GCP, whatever — you definitely want to bring that consistency and repeatability. And that's what infrastructure as code does: you define all those resources in code, and then you execute that code to build infrastructure. So it's a really, really awesome genre, and a skill that you will definitely benefit from in the future, as well as today.
And CI/CD and DevOps, right — continuous integration and continuous delivery practices, along with DevOps culture, will definitely be really important moving forward, I believe, in this industry. What you get from CI/CD and DevOps cultures is the ability to securely test applications, build them, deploy them, and monitor them, and then have that repeatability factor where you iterate over and over again. So build out these practices amongst your teams and your organizations, so that you can build your software faster, more reliably, and more consistently, and then repeat that process all over again, right? That also helps you leverage automation, like CircleCI — the company I work for — and the platforms and services we offer. So, again, with continuous integration, continuous delivery, and DevOps, I recommend that you automate everything, right? Make sure that you're using infrastructure as code and you're packaging your applications into containers. Using automation is pretty much how you're going to gain reliability, consistency, and also velocity, right? If you're automating things and having machines build them, it's way quicker than if a human being were partaking in these manual transactions. Now, with Kubernetes, there are a couple of flavors — actually, there are two flavors of Kubernetes that I've boiled it down to. The first one I'm going to talk about is the self-hosted version. That pertains to Kubernetes when, let's say, you have a data center, right? Your infrastructure and resources are not in the cloud, but you're deploying Kubernetes to something that's on-premises, in a data center somewhere. Some companies — some government agencies, some financial companies or organizations — are not allowed, because of regulation, to partake in certain cloud infrastructure or put their infrastructure in the cloud.
So they're bound to deploying systems and services to those resources within their data centers. That's what I mean by self-hosted — again, an on-prem type of scenario. These are more difficult, and the reason is you have to manage everything, right? You have to manage networking all the way through to hardware, all the way through to memory on the system — you physically have to manage that, whereas with a cloud provider, you don't have to do that. So the self-hosted option is definitely possible, and if you have to go that route, just know that it's going to be a little bit more difficult. And also, to be honest, it'll be a little more expensive, because you have to throw some bodies at it as well, right? You're going to need skilled humans that understand Kubernetes to implement it, maintain it, and configure it for you. The other flavor of Kubernetes is basically what I call managed services. These are services that are kind of pre-canned and managed in the cloud by the cloud providers. Amazon has its version, Google Cloud has its version — pretty much all the popular cloud providers out there have some version of Kubernetes services that they offer. What this does is enable you to build these clusters out while they manage, or lightly manage, the underlying hardware for you, right? So you gain the benefits of utilizing compute nodes on these cloud providers — same kind of concept, but they're giving you an easier way to manage that Kubernetes cluster overall. Some providers give you a dashboard, right, where you can see what's going on and manage it from a web page or a GUI. Others just provide you access to the kubectl command line. So again, those are the two different flavors: basically self-hosted or cloud-provider-managed Kubernetes services. And this is what I wanted to share with you as well.
Some resources that I use and find useful. One of the most useful ones here is the Kubernetes community link at the bottom there. And if you're a developer, right, and you want to run Kubernetes locally without having to pay for a cloud provider — which doesn't make any sense if you're just experimenting — MicroK8s from Canonical is a really, really good option. You can install that locally on your laptop or local development machine, and you don't have to spend money or set any accounts up. You can just run it locally. And by the way, standard Kubernetes is pretty much the same whether it's a self-hosted installation or a managed installation on a cloud provider — the engine is pretty much the same. So you can do some development locally using MicroK8s, or Minikube is another one. There are a ton of other development tools out there, but these are the two I'm most familiar with, and I really like them compared to some of the others. And by the way, we're in technology — they're always releasing new development tools that we can leverage. So, that concludes my presentation today. I hope I was helpful in giving you some advice and guiding you through tackling Kubernetes. Again, it's a very robust and complex platform, and it's going to take some time for you to understand all the moving parts. Like I showed you, there are a ton of moving parts, and that was just scratching the surface. But at the end of the day, have some patience, and definitely focus on some of those skill sets that I mentioned earlier. Those will definitely help you in your journey to Kubernetes. So without further ado, if there are any questions, I'll be around to answer them. And if you have any questions beyond this presentation, please reach out to me at punkdata.
And I wish you all the best. Thank you. All right. That was an amazing talk. Thank you so much, Angel. If anyone in the audience has any questions, feel free to put them in the chat and I can read them out. I think there was one for video. But yeah, so a good starting point would just be to go to the Kubernetes community page right off their site. That's generally where I direct people. Other than that, I really haven't seen a complete kind of tutorial on it, right? Because it's a complex system. But like I said in my talk, start with Docker, right? Get to know Docker really, really well — understand what it is and how to use it. And then I would recommend you go into looking at Kubernetes, because that's what it's doing, right? It's orchestrating all of your awesome Docker containers for you. So, yeah. But, yeah, it's kind of all over the place, to be fair, right now. More questions? I can hang in a little bit. Yeah, we can wait for a little while. So how was your day today? It's been interesting for sure. Cool, cool. It's actually my first time attending something that's fully virtual — like a conference that's fully virtual. Yeah, yeah, yeah. Well, I mean, this is good. I wish it was in person, but, well, next year hopefully, right? Yeah, hopefully. And also, if there are any more questions, you can head over to the breakout room — I just put the link in the chat. Maybe you can roam around there for a while and answer questions. Yeah, I have about 10 minutes. I could do that. Nice. Awesome. Thanks. Thanks for having me. And nice to meet you. Thank you so much. All right. Take care, everybody. Thanks.