Hello everybody, thank you for joining us. We have Karthik with us for this talk on Kubernetes best practices. I think, Karthik, you're good to go. Yep, sounds good. Thank you, and welcome everybody. Thank you for staying late today. We're gonna talk about some Kubernetes stuff. This is gonna be a fast kind of session, because I was trying to cut down on points, but then I thought it would be important to actually have them in there. So I won't touch on the presentation bullet by bullet; I'll put the presentation in the link for download and you can take a look at all the points specifically from there. So welcome, my name is Karthik Gaekwad. You can find me on Twitter at @iteration1. I used to work at Oracle until a few months ago, on their Kubernetes and cloud side of the house, but I recently joined a company called Verica, and I'm the head of cloud native engineering, so I'm still doing a lot of Kubernetes stuff. As I mentioned, I was on the managed Kubernetes team at Oracle, then developer relations for Oracle Cloud. I've done a ton of stuff with respect to DevOps and Kubernetes, I have a course on Kubernetes and cloud native topics on LinkedIn Learning, and all of this got started because I had a very popular hello-world Docker container on Docker Hub. So I've been in the ecosystem for a really long time. One quick thing: if you wanna learn more about chaos engineering, the folks at Verica wrote the O'Reilly book Chaos Engineering, by Casey and Nora, and you can actually get the PDF version of the book for free, because everyone keeps asking us, what is chaos engineering? You can get it at verica.io/book.
So today we're gonna talk about Kubernetes. I get beginner questions about Kubernetes a lot, and I think the best way to frame this conversation is to do it around three things: development and the architectural portion of Kubernetes and how you deploy applications; DevOps; and then, if there are agile folks or management folks in the talk, enterprise transformation and how that plays a role. I already see questions, okay, great. If you have questions, you can put them in the discuss section, I'll get to them at the end, and if not, just find me in the lounge and we can talk about stuff. So let's take a look at development and architecture specifically. Whenever we talk about Kubernetes and building applications, we also talk about microservices, and if you don't know what microservices are, or you're not sure where to start with them, one thing I would recommend is to take a look at the 12-factor app design. This has been around for eight to twelve years at this point, it's at 12factor.net, and it's built on principles from when Heroku was doing development for software design and deployment. They said, hey, we should follow some of these principles to make our development and deployment lives a lot easier, not just for development engineers, but also for the DevOps side of the house. So if you're working in really large companies, it actually really helps, because you can bring the whole team together. I had a couple more slides on that, but I cut them; they'll be in the slide notes, specifically about what 12-factor is, et cetera. So let's move on to design patterns. When people talk about Kubernetes, and if you've never used Kubernetes before, probably the place to start is the Kubernetes deployment. There are many constructs in Kubernetes.
There are deployments and pods and services, and when you look at it the first time this can be really overwhelming, but I think like 70 to 80% of everything that people deploy in clusters that I've seen has been built around deployments. So it's the most common Kubernetes object used for applications that run inside of Kubernetes. A deployment is basically a specification, you can think of it as a specification, and that specification is used to create a replica set, and then the pods associated with that. The pods hold the containers that package your application, and those are the things that actually run. But the question that I get most often is: hey, I have this monolith application, I'm trying to run it in Kubernetes and I don't know how to do that, can you tell me more about the architecture? With that specific use case in mind, you have two choices. A good way to think of this is: either you can take your big application and use a single-deployment model, or, if you have developers that can help, you can break it up and do a multi-deployment model. So what does this mean? Single means one deployment object for your whole application, backed by a bunch of pods behind the scenes. So if you have one big Java artifact, like a war file that has all the different portions of your application, you can take that and deploy it as a single unit. The multi idea is to break the application up before you bring it into Kubernetes, and then run those pieces as separate applications that work together, via HTTP or some other kind of communication between the components. The analogy in the Java world is: if you have many war files for your app, you would deploy each of them as its own deployment in Kubernetes. When you start running Kubernetes in production, it gets confusing.
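As a sketch, the single-deployment model I just described is an ordinary Deployment manifest; everything here (the names, image, and replica count) is a made-up placeholder:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-monolith            # hypothetical monolith app
spec:
  replicas: 3                     # the Deployment creates a ReplicaSet that keeps 3 pods alive
  selector:
    matchLabels:
      app: store-monolith
  template:
    metadata:
      labels:
        app: store-monolith
    spec:
      containers:
        - name: store
          image: registry.example.com/store:1.4.2   # placeholder image, pinned to a version
          ports:
            - containerPort: 8080
```

The multi-deployment model is just several of these, one per component, usually with a Service in front of each so the pieces can talk to each other over HTTP.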
Multiple deployments means multiple pods, and when you're trying to deploy everything at once, one common state that I've seen a lot is that pods take a while to actually come online, and folks are like, wait, I thought everything was supposed to run in a timely manner, what happened? You end up in this weird state of things not working or not running, et cetera. So what do you do? When you're building your deployment YAML, there are a couple of probes you can use for this. One is called the liveness probe and the other is called the readiness probe; I think of them more as health checks. The readiness probe is where the user defines health checks that tell Kubernetes when the container is ready to serve requests. So the container can do all the startup stuff it needs, like configuring itself, talking to the database, making its state ready, et cetera, and the readiness probe will wait on the container and not route traffic to the pod until all of those checks pass. A liveness probe is a health check that indicates whether a container is still running. So over time you can check an endpoint or run a script, et cetera, to make sure the container is still okay; if that fails, Kubernetes will kill the container and spawn a brand new one. I put a link to the docs, but this is something that people forget a lot initially, so it's kind of a best practice to take a look at it. And then, when you're working with multiple deployments, it's better to have a version endpoint for your pods or containers, so that you know what's actually running. One strategy I've been following for the past five years or so is to tie whatever's running in production back to your source control. So I recommend, if you're using GitHub or something, have the git hash deployed in your version endpoint.
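The two probes I described might look like this inside the container spec of a deployment YAML; the paths, port, and timings here are assumptions you'd tune for your own app:

```yaml
containers:
  - name: app
    image: registry.example.com/app:2.0.1   # placeholder image
    readinessProbe:          # don't route traffic to the pod until this passes
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    livenessProbe:           # kill and respawn the container if this keeps failing
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 15
      failureThreshold: 3
```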
That way you can say: okay, I'm sure I have this version of my code running in my production system. You have some tie back to whatever you have in source control, because typically what happens is you'll deploy multiple times and then you won't really know what you're actually running in production. So it's an important thing from that standpoint. Let's move on to a different section that I also get a lot of questions on, which is authentication and authorization. Basically there are two pieces in Kubernetes: the authentication portion and the authorization portion. If you've never dealt with this in Kubernetes, probably the most important thing to realize is that the idea of a user doesn't really exist in Kubernetes. So if you're in AWS or something like that, and you're trying to use your AWS user inside the Kubernetes ecosystem, there's no easy way to plug those two things together; you kind of have to do all that management yourself. That being said, there are actually many ways to authenticate, which is nice. Take a look at the docs for which way might suit your specific use case the most, because enterprises have different ways of doing this. The most important thing is to make sure you actually pick an authentication strategy for your cluster, because if you don't, you can't pick an authorization strategy, and that ends up being even more important. A lot of applications today actually expect you to have an authorization strategy for your cluster, so if you don't have anything, things start to fall apart. In terms of what to pick for authorization, there are different modules for it, and probably the only thing to talk about here is the two big ones in the ecosystem: there's attribute-based and there's role-based.
Attribute-based (ABAC) was the first model that was created, so you end up finding a lot of documents, et cetera, for it. But the truth is everyone has evolved and actually ends up using RBAC, so use that as your standard versus something like ABAC. You can also use multiple authorizers together, but there's a very small subset of use cases for that; if you want to learn more, come chat with me in the lounge afterwards and I can tell you some stories about using webhook and RBAC together. Next, logging and monitoring. This topic ends up being very important to the DevOps folks, so how do you go about it? You can run a kubectl command, kubectl logs, and follow logs, et cetera. But we all know from experience that you never catch issues at the time they happen; you're actually trying to find past issues. So how do you look at log files after the fact? Think about logging and monitoring early on, before you go into production with Kubernetes. More importantly, the essential thing is to teach your engineers how to actually use the tooling, how to actually debug and monitor, because otherwise there's this time gap of: okay, we're in production, oh, something went wrong, hey, there's a problem with your code, and then nobody knows how to use EFK or Prometheus or Datadog or whatever tool they're using. So basically, more time spent up front playing with the tooling means less time spent debugging production. In terms of specific recommendations and tools: if you're going the open source route, there's the EFK stack, Elasticsearch, Fluentd and Kibana, for logging. If you're in an enterprise and you already use Splunk or Sumo Logic or something similar, those have adapters for Kubernetes. Same thing for monitoring and observability.
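Going back to RBAC for a second, a minimal sketch of a Role plus RoleBinding looks like this; the namespace and group name are made up, and the group has to match whatever your chosen authenticator reports:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]                       # "" is the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: Group
    name: dev-team                        # hypothetical group from your authenticator
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```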
From an open source point of view, Prometheus and Grafana are the two major options for monitoring, but if you use something like Datadog or other commercial tooling, those have Kubernetes plugins as well. Now, all of this stuff wouldn't really exist without containers, so what are some best practices there? Containers are based on images, and from an image perspective, the smaller the image, the better. There are fewer things for an attacker to exploit, and also, on a practical basis, if you have a one-gig image that has the whole Java JDK and everything else in it, and you don't need all of it, why push and pull that every single time? Also, and this is more specific: don't rely on the latest tag. A latest image yesterday might be different from what it is today, which might be different from tomorrow, so you might not know what version you're actually running. It's better to tag with the specific version you're operating with. And consider using a private registry. It's a lot of company IP that you're storing, and enterprises have different concerns around data storage; when I worked at Oracle, most of our clients ended up using a private registry for exactly that reason. If you're in the cloud, every cloud provider has their own registry that's a little more locked down than Docker Hub, so consider something like that. Moving on to the DevOps side of the house, the most common question is: should I install my own Kubernetes or should I use a managed service? What do you do? Pros and cons. From a cons perspective, a managed service is not 100% customizable, and if you're using EKS or Azure Kubernetes Service, there might be hidden costs. Load balancers, for example: if you're provisioning load balancers inside a cloud provider, those actually cost money.
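On the image points from a moment ago, one common way to keep images small is a multi-stage build. This is just a sketch with made-up paths, assuming a Gradle-built Java app; the idea is that the JDK and build tooling never ship in the final image, and the pushed tag is a version, never latest:

```dockerfile
# Build stage: the full JDK and build tooling live only here.
FROM eclipse-temurin:17-jdk AS build
WORKDIR /src
COPY . .
RUN ./gradlew --no-daemon bootJar

# Runtime stage: JRE only, so the image you ship stays small.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /src/build/libs/app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```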
So load balancers are an example of the hidden costs you can run into. But from a pro perspective, you don't have to care about control plane management; the provider handles that for you, there's a lot less maintenance your DevOps team has to handle, and you actually spend more time on your applications and the nodes that support them, versus, say, figuring out why the kubelet is down on a specific node. Moving on to strategies for dev, test, and production. I think I have two more minutes to go, so I'll go a little faster, but there are two primary strategies: either use different namespaces in a single cluster, or use different clusters for dev, test, and production. Most folks I talked to a couple of years ago would use different namespaces for dev, test, and production inside one cluster, because back then it was actually really hard to create a Kubernetes cluster. But I think most people have evolved to using multiple clusters, one for dev, one for test, one for production, et cetera. You get separation of concerns, and it's become really easy to create clusters, especially if you're using a cloud. So that's my recommended approach for enterprises, especially if you're on any of the big four cloud providers. The con is that you have more environments to manage, but I think the benefit outweighs that: you avoid the situation of, oh, I broke development, and as a result I also broke production. Next, tagging nodes. When you're running your workloads, you run them on nodes, and if you have multiple clusters, say five three-node clusters, you already have 15 nodes. Kubernetes has a concept of labels, so use those to tag your nodes.
That way, when you look at your AWS screen, you can at least know which nodes came from which cluster, et cetera. I have a link to the docs here, but, more importantly, just remember to tag your nodes. Then, from a pipeline perspective: consider using a pipeline. I can talk about this in more detail in the lounge with folks that are interested, but you can accomplish all these four things using CI/CD tooling; at Verica we use CI/CD tooling behind the scenes to do a lot of this work, so you can follow a similar infrastructure. Then, from a transformation point of view, the big question is: where do we start? How do I do this? My recommended approach for folks that are new to the Kubernetes ecosystem is: get some experience with Kubernetes first. Don't commit to saying, hey, we are going to do Kubernetes; make sure it actually works with your workloads. Take an application, either split it apart or run it as a single thing, convert it to Kubernetes, then promote it to run in a production setting and understand how to do the DevOps work behind the scenes, because it's easy to build something for Kubernetes, but it's harder to figure out how to manage it when things go wrong. Once you're comfortable with one app, you can add more things in there; that's how most folks approach this. Also, at a high level, know your team; every organization is a little different, so build your cloud native transformation around your teams. If you're very DevOps-heavy, you can do a lot more things from that standpoint. Organize into development, DevOps, and SRE teams to cover the different facets.
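As one sketch of what such a pipeline can look like, here's a hypothetical GitHub Actions workflow; the registry, app, and deployment names are all placeholders, and I'm assuming cluster credentials are already configured on the runner:

```yaml
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push an image tagged with the git SHA
        run: |
          docker build -t registry.example.com/app:${GITHUB_SHA} .
          docker push registry.example.com/app:${GITHUB_SHA}
      - name: Roll the deployment to the new image
        run: kubectl set image deployment/app app=registry.example.com/app:${GITHUB_SHA}
```

Tagging the image with the commit SHA is also what gives you the version-endpoint tie-back to source control that I mentioned earlier.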
Also, leverage the community; Kubernetes and cloud native is really big. So leverage open source technologies where they make sense, and you can take a look at the landscape on cncf.io. And with that, I guess I am out of time. Thank you so much, Karthik, for being with us.