Okay, thanks everyone for joining, and welcome to today's CNCF webinar. The topic is lowering the barrier to Kubernetes proficiency: navigating the stormy seas of information overload. My name is Sanjeev Rampal. I'm a principal engineer at Cisco and a CNCF ambassador, and I'll be moderating today's webinar. We'd like to welcome our presenter today, Angel Rivera, developer advocate at CircleCI. Before we get started, a few housekeeping items. During the webinar you won't be able to talk as an attendee, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end. Please note the Q&A box is different from the chat box, so please use the Q&A box for questions. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please don't add anything to the chat or the Q&A panel that would violate the Code of Conduct; basically, be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. With that, I'm going to hand it over to Angel to kick off today's presentation.

Thank you, Sanjeev, I appreciate it. Welcome, everyone. I hope everyone's staying safe and healthy during these hard times with COVID-19, and hopefully this presentation will educate you a little bit on the barriers of entry to Kubernetes and how you can navigate the stormy seas. In full disclosure, I was thrown into this a little bit: a colleague of mine was supposed to present today, but he's unable to due to the crisis. So we're going to have some fun here. All right, here's a quick agenda. First I'm going to cover why Kubernetes: the reasons, really the business reasons, why you would use it.
Then we'll jump into a very high-level introduction to Kubernetes, where I'll do a teardown, or decomposition, and discuss the different components that comprise Kubernetes. The last topic will be implementing Kubernetes, where I'll give you some advice on the things you'll need in terms of human resources and assets, and on adopting certain modern practices and principles in software development to help you with your Kubernetes migration.

So, my name is Angel Rivera. I'm a developer advocate for CircleCI. As for my background, I started my career in the United States Air Force, working in Air Force Space Command, which I believe is now US Space Command; it's really hard for me not to say the Air Force part. That's where I started programming professionally, and I've been doing it for a very long time. As for my experience with Kubernetes: before working for CircleCI I was a DevOps engineer and manager at my previous company, and that's where I started working with Kubernetes professionally. In full disclosure, I haven't been running full-scale production Kubernetes much lately, but I do touch on it here and there during my tenure at CircleCI, because I do a lot of demonstrations and talks about DevOps and tooling.

So, let's get started. Why would you use Kubernetes? I know it's a fancy new buzzword. I speak to a lot of folks in the community, which is part of my job as a developer advocate, and I learn how folks are using technology and why, and I often help them with the problems and struggles they have with it.
When Kubernetes comes up in those conversations, I ask them why: why are you interested in it, and what problems do you have? Anecdotally, a lot of the responses I get go like this, and I thought I'd share them with you. The first is "it's the new hotness": I'm a tech junkie, and Kubernetes appears to be the new cutting-edge technology for running applications. As we all do when something is new, folks sometimes get it wrong, but that's one common, anecdotal reason people are interested in Kubernetes. Another is keeping up with the Joneses: these folks are part of teams that consider themselves very innovative and cutting-edge, and they need to stay competitive because other folks in their industry are competitive and cutting-edge too. And finally, one I think we've probably all seen: the boss, our leadership, tells us we need to be innovative. They probably sat in on a webinar just like this one and decided, "That Kubernetes stuff, we need to put that in our environment and make it run." Again, these are all anecdotal, and by the way, no judgment there; those reasons are totally fine. But here are the more serious conversations I've had. Teams want to work faster: they want their software delivery processes to run at a higher velocity and to shorten their release cycles.
That's a really good reason to look at Kubernetes. Cutting infrastructure costs is another good reason. Back in the day, if we wanted to scale out an infrastructure, we had to add physical servers, which I'll talk about a little later; by deploying Kubernetes you can definitely save some money, especially if you're running things at huge scale. And then one of the most popular reasons: improving the availability of your applications and the scalability of the infrastructure that runs them.

So now I'm going to dive into Kubernetes, and by the way, this portion is going to be a little long, so if anyone has any questions, please drop them in the Q&A; if one of the moderators could call out when there's a question, I'll stop and answer briefly if I can.

The first thing I want to talk about is the acronym K8s. I started using Kubernetes years ago, back when it was still in a developmental phase, and I always wondered what this K8s thing stood for. I was at a Tectonic conference in New York, put on by the CoreOS folks years ago, and a presentation there explained it, which was awesome, so I want to share it with anyone who doesn't know. It's simply the letter K at the start of "Kubernetes," the s at the end, and the number 8 for the eight characters in between: K-ubernete-s. So for those of you who didn't know, that's what K8s means.
Before I deep-dive into the Kubernetes world, let me talk about how we used to deploy software. Back in the 90s, when I started in this industry, we deployed software in the traditional sense; if you look to your left on the slide (I know the letters are partially covered), in traditional deployments we would literally have multiple applications running on a rack of servers, or on a single server asset. That was fine until you needed to scale: if the workload on an application increased, the only way to scale was literally to buy new servers, new storage, new memory, a new rack, new network switches, install all of that, and then wire up all the networking. It was a very complex, tough provisioning process to get that hardware up and running.

Then along came virtualization, which helped us tremendously because we could now run these applications in isolation, thanks to the introduction of the hypervisor: a layer that sits between the hardware and the operating systems running on it. The hypervisor orchestrates all the resources the server has, so you could squeeze a lot more performance out of a single unit. When a certain application came under load, the virtual machines could do a much better job of handling the resources and managing them across the application workloads.
Fast-forward to today, and we're in the container deployment era. It's very similar in spirit: if I could move this letter out of the way, you'd see that in the container deployment block of the diagram, the container runtime, which in essence is Docker or containerd, is what actually manages the containers while they're running. Containers are really lightweight, and in my opinion they handle the resources on the system better than a hypervisor does, because applications run as individual processes, which gives you the ability to manage your resources even better than in virtualized, hypervisor-type scenarios.

Here's a quick definition I got off Wikipedia regarding Kubernetes. One of the cool things about Kubernetes is that it's open source, and it's a container orchestration platform: it helps you deploy your applications, scale them, and manage them, easing that whole process from one platform. Before, you would have to set up a bunch of different systems to do all of this if you wanted to avoid manual touch points, and even then you'd still have them; a lot of times, even with deployment software, I'd have to click a button or do something like that. Kubernetes takes all of that out of the way and handles it for you once you have everything set up and configured. That's the whole point of Kubernetes: to ease the operation of your applications.

So let's talk about the critical, core components of Kubernetes. I want to do a little teardown and walk through the components that compose it.
Kubernetes operates under the cluster concept, meaning you have an asset that controls some sub-assets, which we'll get into a little later; I'm pretty confident most people here know what a cluster is, so we'll skip that one. Looking at the diagram in the back: within the Kubernetes cluster you have what we call master nodes, or control plane nodes, and then you have worker nodes. Within those worker nodes you have pods. The worker nodes are where all the main work happens: they run the containers that run your applications, along with any process-intensive cluster operations. The worker nodes are the muscle of Kubernetes, to put it in layman's terms. Pods are the lowest-level, simplest objects within Kubernetes, and they're what orchestrate and manage your containers while they're running workloads. So keep that perspective: at the lowest level within Kubernetes is the pod.

The control plane, like I said, is pretty much the brains of Kubernetes: all the services that actually run your cluster live in the control plane, and it has components, the services within the control plane that I'll be discussing over the next few slides. The first is the Kubernetes API server. One awesome thing I'm really happy about with Kubernetes, as a developer, is that they took an API-first approach to developing the system. What that gets you is a lot of capability: everything you do can be accessed, and controlled, through the API. Having that capability in the API server is really awesome.
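Going back to the point that the pod is the smallest object for a moment, here's a minimal pod manifest as a sketch; the names and the image are just illustrative placeholders, not anything from the talk:

```yaml
# pod.yaml - a minimal Pod, the smallest deployable unit in Kubernetes.
# Names and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
    - name: hello
      image: nginx:1.19      # any container image works here
      ports:
        - containerPort: 80
```

You'd apply it with `kubectl apply -f pod.yaml`, though in practice pods are usually created indirectly through a higher-level controller rather than deployed by hand.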
Again, that's useful both for normal operation and for extending what Kubernetes does. So that's one component, one service, in the control plane.

Another component is etcd, which is basically Kubernetes' persistence layer. When you have a cluster with all these assets working together, you need to share information, and that information has to persist. The way that happens is through this service, etcd, which is another open source project, under the CNCF, and if you've never heard of it, check it out; it's really cool. I believe it was started by the folks at CoreOS before they got acquired. It's a distributed key-value store service that enables the cluster to pass bits of data between your assets and persist that information, so everything stays coordinated.

Another component is the kube-scheduler, which basically ensures that your pods actually get placed and are alive and well. It schedules the work: if you have a worker node that isn't being heavily used, or doesn't have any tasks assigned to it, and your load goes up, the kube-scheduler identifies the nodes with available resources and starts scheduling jobs or tasks there, meaning standing up pods, so your containers run on those assets. The kube-scheduler is definitely really cool and helpful in giving you that "set it and forget it" mindset, in the sense that you don't have to worry about the system not functioning properly, because the scheduler will initiate jobs and pods on the worker nodes whenever it needs to.
Then there's the kube-controller-manager. This one's a little more complex: it rolls up a lot of services, which are basically control loops, applications running in a loop and constantly monitoring for different things. One of them is the node controller, a service that watches for nodes that aren't responding or have gone down and lets the system know what's going on. Another is the replication controller, which maintains the correct number of pods for every replication controller object in the system. And finally, there are the service account and token controllers, which create default accounts and API access tokens for new namespaces. These are all subsystems within the controller manager, and they're really important in keeping the system operating automatically behind the scenes. I wanted to highlight this because folks hear about Kubernetes but don't always realize what's going on under the hood, which, by the way, I recommend you learn anyway. So that was the kube-controller-manager piece. Next I'll talk about the Kubernetes node components, the services that run on every node.

Before you go ahead, there have been a few questions. Do you want to take them?

Oh yeah, sure, that's great. Can you read me a question?

Sure. The first one was: can you have multiple pods per worker node?

I've answered that already: yes, you can.

That's correct. The second question was: does the etcd database hold all of the deployment YAMLs that we enter into the API?

Good question. Do you want to take that, Sanjeev?
Yes: basically, the resources that the deployment YAML creates are what's persisted in the etcd database, and the deployment itself is a resource as well.

Right, that's what I thought; like I said, I haven't touched that part much, so I just wanted to confirm. So you feed it YAML as the data structure, but when it gets imported into etcd, it doesn't stay in that format, does it, Sanjeev?

Correct: the deployment resource is archived as JSON in the etcd database.

Right, that's what I thought: it gets converted to JSON, serialized.

And Stefan has another question: what happens when the master node goes down?

Well, that's where you'd want multiples. If you're deploying to production, I'd obviously recommend having multiple control planes, but at that point that's also where etcd kicks in. Correct me if I'm wrong, Sanjeev, but if you bring that node back up, even without a failover, etcd should repopulate most of that information.

Yes. If the master node is not running in HA, meaning it's a single master node, the worker nodes will continue executing the pods that are already scheduled. At least the workloads that are already scheduled will continue to run in a best-effort manner, very often completely transparent to the master node going down. Of course, you won't be able to schedule new pods or take new API requests. You typically want to run the master in an HA configuration, but at a minimum the worker nodes will continue running the pods even if the master goes down.

Let's take just one more question, and then I think we'll get back to the presentation. If the pods are deleted, is the pod information deleted in etcd? Sorry, I'm not quite able to follow the question.
I think they're asking, and correct me if I'm wrong, whether the information persists on the system if the container is terminated. If that's the question: once the container is terminated, unless you have some persistent layer mounted into that container, the data is gone. Containers generally run stateless, unless you have a requirement to mount a volume for state.

The other thing to note is that very often you wouldn't deploy a pod manifest directly; you would deploy something like a Deployment or another application controller. In that case, deleting an individual pod would simply cause that pod to get recreated. If you create a pod resource by itself and delete it in the API server, then yes, it does get deleted in etcd as well, and eventually cleaned up by the kubelet too. Let's continue; it looks like there are a bunch of questions and I'm sure we'll get to more.

Yeah, thanks, Sanjeev, for helping me out on that one; I appreciate it. Okay, so let's talk about the node components. Sanjeev just mentioned kubelets: kubelets are the agents that run on every node in the Kubernetes cluster, and they ensure that your containers are running in pods. Next, kube-proxy is basically the network fabric within Kubernetes: a service that maintains your network rules and allows communication to and from pods, both inside the cluster and out. So kube-proxy is the networking layer within Kubernetes, and if you remember what I was describing earlier about traditional deployments, wiring up new network switches and all that good stuff.
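Picking up the earlier point about deploying a Deployment rather than a bare pod, here's a hedged sketch of what that manifest looks like; the names, labels, and image are illustrative placeholders:

```yaml
# deployment.yaml - a Deployment that keeps 3 replicas of a pod running.
# If you delete one of its pods, the Deployment recreates it automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 3                # desired number of pod replicas
  selector:
    matchLabels:
      app: hello
  template:                  # the pod template the Deployment stamps out
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.19
          ports:
            - containerPort: 80
```

After `kubectl apply -f deployment.yaml`, deleting any one of the three pods just causes the replication machinery to bring a replacement up, which is the self-healing behavior discussed in this talk.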
This is where Kubernetes does a really good job of taking care of all of that for you, right out of the box. To a degree you don't even have to think about it; you just have to configure it and make sure those configurations stay intact. And finally, the last node component is the container runtime: again, the software that runs your containers, usually Docker or containerd.

Before I go into the benefits of Kubernetes, are there any other questions we can hit real quick?

Yes. Ramazan asks: what is the best practice for production apps? Would it be to run one master and multiple workers for multiple apps, or separate master and worker nodes for each app?

I think it depends; everyone's situation is different. In my previous life I actually ran both configurations, depending on where we were. In production, I ran multiple HA clusters, or rather multiple control planes, to make sure we had the fault tolerance we needed and that the system was available at all times. So I guess my answer is: go with the option that gives you the highest availability, especially in production. I don't know if I answered that fully; Sanjeev, do you have anything to add?

No, I think you got it right, Angel. The only thing I would add is that some deployment practices use one app per cluster, if you want that app to be totally isolated from other applications, and create multiple clusters. But very often you may want to choose, say, an application per namespace within Kubernetes, and I think you may have some slides on namespaces later. That way you can support multiple apps within a single cluster.
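For reference, a namespace like the one just described is itself just a small manifest; the name here is an illustrative placeholder:

```yaml
# namespace.yaml - a namespace to isolate one application's resources
# within a shared cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: team-hello    # illustrative name; e.g. one app per namespace
```

Once applied, other manifests can target it by setting `metadata.namespace: team-hello`, giving each application its own slice of the cluster.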
So there are various degrees of sharing within a Kubernetes cluster, and depending on the scenario, each of those can make sense. I don't have that detailed in this presentation, but there's tons of documentation on how to run that, and I'll share some resources later on.

So, let's jump into the benefits of Kubernetes. What you get out of the box is automatic service discovery and load balancing. The service discovery piece enables you to expose containers via DNS or IP address, and the load balancing capability distributes your network traffic evenly, so you're not pounding on one particular resource: when the load gets heavy, the system knows it needs to distribute that traffic equally so it can maintain efficiency and optimize performance. Another benefit of Kubernetes is storage orchestration, like I mentioned earlier. You can mount different types of storage, like Elastic Block Store from Amazon or the equivalent from any other cloud provider, and if you have local storage you want to use, you can do that in a very easy manner too; Kubernetes will orchestrate that for you as well.

Then there's one of my favorite features: rollouts and rollbacks. Say your application in a container needs to reach a certain state, meaning you have a new release of your application. Generally you would create a new container image, a new Docker image, a new application image, for that release.
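Circling back to the service discovery and load balancing piece, here's a sketch of a Service manifest; the names and ports are illustrative. The Service gets a stable, DNS-resolvable name and spreads traffic across every pod matching its selector:

```yaml
# service.yaml - exposes pods labeled app=hello behind one stable address.
# Inside the cluster this becomes resolvable as "hello-service" via DNS,
# and kube-proxy balances traffic across the matching pods.
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello          # route to any pod carrying this label
  ports:
    - port: 80          # port the Service listens on
      targetPort: 80    # port on the pods themselves
```

Pods come and go, but clients keep talking to `hello-service`, which is exactly the decoupling that makes the automated rollouts described next possible.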
Then you roll that out. If you're using a canary-style deployment approach, where you update or roll out new releases to a specific portion at a time, say you have 10 worker nodes in your cluster and you want to roll out in 10% increments, then you'd do one node at a time, because with 10 nodes, one node is 10%. You schedule the new deployment, and the system starts updating: it drains a node, and once the load on it is pretty much zero, Kubernetes starts deploying the new release's containers to that node and progressively shifts traffic over to the updated version. Now, if you have a failure during one of these rollouts, say you get halfway through your canary deployment and realize there's a problem with the application, you can just as easily roll back: you redeploy the old version of the application, and Kubernetes handles that for you in a nice automated manner. That's one of my favorites; it actually saved my butt quite a few times in production when I was running Kubernetes.

Next is automatic bin packing. You set thresholds for resources, meaning CPU and memory, for your containers, and Kubernetes automatically figures out the allocation and tracks it. If you need to stand up a new pod, it finds an open slot for you and starts deploying the application within it, managing it against CPU and memory. It's a really nice feature.
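As a minimal sketch of what those two features look like in a manifest (the numbers here are illustrative assumptions, not anything prescribed in the talk): the `strategy` block controls how a rollout proceeds, and the `resources` block supplies the CPU/memory figures the scheduler bin-packs against.

```yaml
# Fragment of a Deployment spec (illustrative values only).
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # update roughly 10% of 10 replicas at a time
      maxSurge: 1
  template:
    spec:
      containers:
        - name: hello
          image: nginx:1.19
          resources:
            requests:          # what the scheduler bin-packs against
              cpu: "250m"
              memory: "128Mi"
            limits:            # hard ceiling for the container
              cpu: "500m"
              memory: "256Mi"
```

Rolling back is then a one-liner, `kubectl rollout undo deployment/<name>`, which redeploys the previous revision.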
Before, I used to have to keep track of this manually, with a system that wasn't great at reporting back, and there was a lot of manual handling of this type of work. With a system like Kubernetes, it's all handled for you.

The next piece is self-healing. If for any reason a container runs out of memory or CPU, or there's just a bug in the software and it terminates or dies, Kubernetes will automatically replace or restart it. It also handles unhealthy containers: you can set parameters saying that if a container is performing below a certain metric, at diminished capacity, then at a certain threshold Kubernetes should terminate it and replace it with a healthy container.

The other piece, again one of my favorites, is the secrets management in Kubernetes. You can configure Kubernetes to store secrets on the platform itself without exposing them. Say you have a token for a third-party integration in your application: you insert that secret into the system, and then your pod's deployment manifest references the secret, rather than containing the value itself, so the secret data gets populated when the pod is deployed and the resource is persisted into etcd. So those are the benefits of Kubernetes.
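To make the self-healing and secrets pieces concrete, here's a hedged sketch combining both in one manifest; the secret name, key, and probe endpoint are illustrative placeholders:

```yaml
# A Secret plus a container that consumes it, with a liveness probe
# so Kubernetes restarts the container when its health check fails.
apiVersion: v1
kind: Secret
metadata:
  name: api-credentials
type: Opaque
stringData:
  apiToken: "replace-me"           # illustrative placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-with-secret
spec:
  containers:
    - name: hello
      image: nginx:1.19
      env:
        - name: API_TOKEN          # injected at runtime, not baked into the image
          valueFrom:
            secretKeyRef:
              name: api-credentials
              key: apiToken
      livenessProbe:               # self-healing: restart on repeated failure
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
```

The manifest never contains the real token in application code or images; the kubelet pulls it from the cluster's secret store at pod startup.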
Now I'm going to talk about implementing Kubernetes: some of the things you'll really need to think about before you even start trying to go to production with this. Let's talk about skill sets first, because let's face it, without people, and without people with the right skills, you're not deploying anything to production as far as Kubernetes goes.

First things first: I'm a big fan of security, so definitely get some background or knowledge in security and role-based access control; that's the way Kubernetes operates. Having knowledge in that space is something I'd highly recommend, so you understand how the system protects itself and doles out permissions and privileges.

You'll definitely have to learn YAML; it's a big part of this. I know there are other mechanisms, but YAML is a data structure, not a programming language, and it's very declarative, so you can define things in a human-readable manner. These configuration files can get a bit verbose if you're trying to do many things in one YAML file. I've seen a lot of different approaches, but I prefer separate files for the different things I'm doing: if I want to create a new namespace, like Sanjeev talked about, I do it with a separate YAML file, and if I want to deploy a new pod or something like that, I'd use another separate YAML file for that transaction within Kubernetes. That's one of the recommendations I'd make.

And this is really important: if you think about what Kubernetes is doing, it's orchestrating containers. So you need to have a deep understanding of Docker.
You need to understand how to build images, how the networking works, things like port forwarding commands, and how to mount volumes, if that's something you're going to be doing. All of these things are definitely required to accomplish what you want to accomplish within Kubernetes. I can't tell you how many times I speak to people who have tried many, many times to deploy Kubernetes, and it becomes abundantly clear within a few minutes that they have no clue about containers and don't really understand them, which is a shame. I tell them, hey, you might want to spend a little time learning containers and that technology so you have a better understanding, and sure enough, I see them a few months or a year later and they commend me for that advice; it helped them in their Kubernetes journey.

I would also recommend knowing at least the fundamentals of networking, because that's going to be a lot of what you're doing within the system. Again, Kubernetes handles it for you for the most part, but there are fundamentals you need to understand: IP addresses, firewall rules, load balancing. You don't have to be an expert, but you need to understand the basics and how all of those things interact so that you're well prepared for handling Kubernetes.

APIs: if you're not familiar with application programming interfaces, definitely get familiar with them, because like I said, Kubernetes takes an API-first approach, and it's something you should have a good grasp on if you're going to operate a Kubernetes cluster. And understand storage. This one is maybe less critical.
But at the end of the day, if you're going to do anything where you want to save data, or your application has some state to it, you're definitely going to want to understand how it all fits together. Monitoring and logging are really important as well; I can't stress how important. If you stand up a cluster and you don't have proper monitoring or logging, you're not going to understand what's happening with your system when things go wrong. So definitely look into the options. It's been a minute since I actually used it, and Sanjeev, maybe you can speak to this, but they have improved some of the monitoring tools and features in Kubernetes, and you can use third-party tooling as well, which I also recommend if you need a lot more telemetry or specifics. Definitely get familiar with monitoring and logging, because this is where the rubber meets the road when you have issues with your Kubernetes cluster. And I think this is the last skill set: definitely look toward infrastructure as code. Please, please, please look into codifying all of your infrastructure. This will save you so much time. It's a little bit of work up front, but at the end of the day, once you have the infrastructure codified, you save a lot of time, and you can rest well at night knowing that if you have an issue related to provisioning, or a mistake you made, maybe you used the wrong resource class in the assets for your cluster, you can fix it super simply just by changing a few characters in your code and submitting that change to bring the infrastructure back to the appropriate level.
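To illustrate that "fix it by changing a few characters" point, here's a sketch of what a codified workload might look like, where an undersized memory request is the kind of mistake you correct in code, commit, and re-apply. Every name and image here is a hypothetical placeholder:

```yaml
# deployment.yaml -- infrastructure as code: this file lives in
# version control, so a provisioning mistake is a one-line diff
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-demo               # hypothetical workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api-demo
  template:
    metadata:
      labels:
        app: api-demo
    spec:
      containers:
      - name: api
        image: registry.example.com/api:1.0   # placeholder image reference
        resources:
          requests:
            memory: "256Mi"    # was "64Mi" -- the wrong-resource-class mistake, fixed in code
            cpu: "250m"
```

Re-running `kubectl apply -f deployment.yaml` after that one-line change is what brings the infrastructure back to the appropriate level.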
One of the things I definitely want folks to understand is that when you're using Kubernetes and you want to gain velocity, you're also going to have to adopt modern software development practices like continuous integration, continuous delivery, and DevOps. With CI/CD, you're enabling your developers to confidently secure, test, build, deploy, and monitor all the code and all the releases they build, and then iterate on that. Meaning: if you're security-scanning and testing the release while you're building it, per continuous integration principles, then you have a feedback loop; you'll know when your software is broken and needs to be fixed, and once you have the fix in place, it runs through the whole process again. When I say security, I mean running something like Snyk, which will scan your application and check for any dependencies that have vulnerabilities; it'll stop your build and tell you, hey, you have security vulnerabilities, before you even get to the testing phase. These are all critical components of continuous integration and continuous delivery, and if you want to gain velocity and develop software faster, you're going to have to understand and adopt these CI/CD principles. And when it comes to DevOps, you just want to automate everything to gain that velocity; that's what Kubernetes does, it automates a lot of the work that we used to do manually. So have that automate-everything mentality when you're trying to innovate and bring velocity to your operation. The last bit I want to talk about is the different flavors of Kubernetes. When I started, I tried to do this myself.
I tried to self-host Kubernetes, and when I started, Kubernetes was, I won't say in its infancy, but close to it. There weren't a lot of resources out there, and it wasn't as widely supported as it is today, because it was so new; folks were still juggling between Mesosphere and Kubernetes, which were the two top dogs at the time I started using it. Self-hosting is very difficult, though. If your first run at Kubernetes is a self-hosted solution going straight into production, I would rethink that strategy, and this is just coming from experience. Not that you couldn't accomplish it, but when you do a self-hosted Kubernetes implementation, you're responsible for everything, soup to nuts: the hardware, or if you're in the cloud, the resources. You're going to have to wire up all of the networking, make sure you have fast enough disks, fast enough processors, enough memory, all of these things, like running a physical data center. I speak to a lot of folks who still do that, especially in corporations, and they have their reasons, but at the end of the day they're moving away from physical data centers and into cloud-native, cloud-provider scenarios. So if you're thinking about self-hosted Kubernetes, especially for your production environments, do your due diligence, do your homework on what it takes to actually do that. There are tons of blog posts from folks in the community and the industry who have done this and actually operate like this, and you can get a real good sense of what it takes. Now, your other option is what I call a managed Kubernetes service: things like Google Kubernetes Engine.
Then there's Elastic Kubernetes Service from Amazon, Azure Kubernetes Service, LKE, which is Linode Kubernetes Engine, and DOKS, which I'm assuming is DigitalOcean's; I could be wrong, but I threw it in there anyway, so if you're a DigitalOcean fan, they have offerings too. What these gain you is the ability to easily use the provider's services. They abstract the control plane for you, which makes it a little bit easier: you still manage the cluster, but at the end of the day you don't have to provision it yourself, and you can easily bring Kubernetes infrastructure up and down using these services. That's one of the nice things about it; it just takes away all that manual process. With the networking, you still have to manage the rules and such within the services, but you don't have to put a lot of brain power behind it. Those are the benefits you get from a managed service, so if you're making your first foray into Kubernetes, I would definitely, highly recommend that you look into these services, build your skills on them, and then, if you're going to do a self-hosted environment, you'll have the experience to build on before going into a self-hosted deployment. And one of the things I learned early on, too, and these resources didn't exist when I was deploying Kubernetes with my team and operating it: again, you have to understand containers to really appreciate what Kubernetes is and actually use it effectively.
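One small example of what a managed service abstracts away: a Service of type LoadBalancer, which on offerings like GKE, EKS, or AKS provisions the provider's load balancer for you automatically, no manual wiring. This is a hedged sketch with placeholder names, not a production config:

```yaml
# service.yaml -- on a managed cluster, the cloud integration
# behind "type: LoadBalancer" is handled for you
apiVersion: v1
kind: Service
metadata:
  name: web-demo-svc           # hypothetical service name
spec:
  type: LoadBalancer           # on a purely local cluster this typically stays "pending"
  selector:
    app: web-demo              # routes to pods carrying this (hypothetical) label
  ports:
  - port: 80                   # port the load balancer exposes
    targetPort: 8080           # port the application listens on
```

The same manifest on a self-hosted cluster is where you'd have to supply that load-balancing layer yourself, which is exactly the kind of work the managed services take off your plate.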
Ubuntu has come out with MicroK8s, which is designed to be a single-install type of Kubernetes environment for developers, so along with tools like Minikube you can actually develop and run Kubernetes locally before you go into any kind of production scenario or staging environment. They're all designed to be local. I actually heard of someone deploying a service in production using Minikube a couple of years back, and I can tell you it did not end well for that person, so don't take these tools and try to use them in production. That's a big risk in my opinion, and the person who did this paid a deep price for it. It was a mistake he learned from, but I'm just letting you know: these tools are here for learning, for understanding the system, and for developing, so you can use them to develop against a future Kubernetes infrastructure or something you're going to deploy, but refrain from using them in production. I think that's it for me. I don't know whether we have some time or we're over, but I can take some questions if people are still around. Thanks, Angel. That was great. I think we definitely need lots of these presentations to share knowledge about Kubernetes. We've had a few more questions come in; I'm not sure how much time we have to get through those, but let's try. Jonathan asks: it would be nice to know a battle-tested storage solution that is ACID-compliant and responds well to scaling. I think he's asking both about storage and about a database solution that is ACID-compliant, and what would be recommended for Kubernetes deployments. Yeah, databases: I can tell you from my experience, and I think I checked on this a couple of months ago.
I tend to warn people against trying to deploy databases in Kubernetes because of the persistence aspect, even though it does have great features for storage orchestration. For a database there are some impediments because things are virtualized. It may work for you up to a certain point, but once you start getting heavy load, huge queries, and a lot of data pushing through that database, you're going to see big problems with performance. I don't know, Sanjeev, have you had that same experience? I would say that, and also, I think the question was about both general block storage and databases; it wasn't particularly clear. There are various block storage solutions, some of which are scale-out, hyperconverged-type solutions, as well as network-attached solutions. So there's a wide range of options, some of which you would want to run in the Kubernetes cluster and some of which you may want to run outside of the Kubernetes cluster for performance. And if you're doing the latter, running them outside your Kubernetes cluster, you have to make sure you have the network throughput you need. These are the stumbling blocks I encountered: back in the day when I was running Kubernetes, there were only two or three vendors doing block-storage-type orchestration, and it wasn't that great. Things have obviously progressed, and since there's huge adoption, vendors are making huge strides to offer really performant storage solutions. But everything has a price, and you have to constantly monitor bandwidth and your data, how many bits you're pushing through per transaction, because that stuff adds up, and you can clog your plumbing, basically, if you're not careful. Another question is K8s versus serverless: when or why would we prefer K8s or serverless?
Yes, you can run serverless frameworks on Kubernetes now. I'm not too familiar with the serverless world, but I believe there's one called OpenFaaS, and I believe you can run that serverless framework inside of Kubernetes. I've read a couple of blog articles about it, though I haven't had any experience with it myself. But if you want, you see my Twitter handle; just tweet at me, and I know tons of people who do this, so I can point you in the right direction to get proper answers to that question. Next question: what was the name of the security code scanner that you mentioned, Angel? Oh, Snyk, spelled S-N-Y-K, at snyk.io. Those guys are pretty cool, and they just hired Patrick Debois, the man who coined the term DevOps, so he's now working over there, which is pretty cool. I've met Patrick a few times; nice guy. The product is pretty solid, it's the one I've been using lately, so check it out; it covers your open source dependencies as well. Let's see, there were a few other questions which were slightly generic: yes, the slides and the session recording will be made available. What certifications might be available for people to obtain? Oh, Sanjeev, I think you're more qualified than me; I don't have any certifications, so I can't answer that one, to be honest. The CNCF has a couple of certifications, Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD). I think those would be very worthwhile to pursue, and you can look them up on the CNCF website. Another question: what are the best options for beginners to get started, pointers to tutorials and blogs? I guess kubernetes.io is a good starting point, and there are lots of blogs and YouTube videos. Okay, so if you go here to kubernetes.io/community, this is where I go to look for resources; not only do they have their own material there.
They also refer to other folks who are active in the community, and there are a ton of resources out there that'll give you an even better explanation than I did, I'm sure, on Kubernetes. If you look at any of these, like Minikube and MicroK8s, you'll see it just cascades into different resources as well. I'll leave that up for folks if they want to screenshot it or whatever. Do we have time for one or two more? All right, cool. Is Kubernetes HIPAA-compliant, or are the Kubernetes services HIPAA-compliant? Well, I think the way HIPAA works is that you have to implement the compliance yourself. What I'm trying to say is, the owner of the system is responsible for making the system compliant. I used to work in the federal government, and I dealt with this HIPAA process all the time. When vendors say they're HIPAA-compliant, what that means is that their software operates at a certain level, so everything's encrypted at rest and in flight, or it has role-based access controls. So I don't know the definitive answer, to be fair, but if you're really trying to get HIPAA-compliant, the owner of the system has to make sure the system is deployed in a manner that's compliant with HIPAA, from the way I used to operate. I believe there are a ton of healthcare systems already using Kubernetes, and the federal government is using it as well, and there are a ton of resources, from the DoD as well, that you can look up. If anybody has any questions about that, I would refer you to NIST.
I believe it's nist.gov, that's N-I-S-T dot gov, and they'll have all of the regulations, all the HIPAA compliance material, and they'll have recommendations as well for locking down systems like Kubernetes, Linux, and Windows. These are the standards the government operates on. And again, if you tweet at me at punkdata, I'll at least send you the links to some of that material so people can have it. Okay, great. That's all the time we have, I think. Thanks, Angel, for a great presentation, and thank you everyone for joining us today. The recording and slides will be online later today. We look forward to seeing you at a future CNCF webinar. Thanks from all of us at the CNCF, and have a great day. Bye.