Hello and welcome to our next talk. Our speaker is dysphoric unicorn. She's a web developer and also runs infrastructure, and she will tell us what to do when your personal infrastructure isn't quite enterprise enough, and how to change that with Kubernetes. A warm welcome to dysphoric unicorn.

Yes, thank you very much. I hope everyone can hear me correctly. It would probably be best if we could see my slides. Did I not connect the cable correctly? Ah, perfect. So yeah, slides, that's a good thing to see.

All right. Yes, I'm going to tell you about running personal infrastructure in Kubernetes: how to do it, whether it's a good idea, and why one would even have that idea. Let's get started.

So first of all, here's my infrastructure. I guess that explains everything. Do I even have to hold a talk anymore? Obviously, no, that does not explain anything. The graph is actually still missing some connections, but while it is unclear on purpose, it would have been even more unclear with them. I just wanted to poke fun at these startup websites that just throw a bunch of logos at you.

So, the agenda. The "what", part one: first of all, what is Kubernetes? Then why would you do this, and how would you do this? Then the "what", part two, where I'm going to explain my stack a bit, the difficulties that I encountered, lessons learned, and at the end: should you do this?

What, part one: what is Kubernetes? It's a tool for container orchestration. What the hell does that mean? Basically, it chains together a bunch of very specialized software to distribute workloads across multiple servers. It's enterprise software, developed internally at Google for their infrastructure, but open source. So yeah, it runs Google. It's also very good for your CV, and maybe it's the future; some people are definitely going to be telling you that it is. I'm not an enterprise, as you could probably tell. So why would I do that?
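To make "container orchestration" a bit more concrete first: Kubernetes is driven by declarative manifests. Here is a minimal, hypothetical Deployment sketch (the name, image and port are invented for illustration); you declare the desired state, and the cluster decides which servers actually run the containers:

```yaml
# A minimal Deployment: "keep two copies of this container running,
# wherever in the cluster there is room for them."
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-blog                 # hypothetical name
spec:
  replicas: 2                   # desired number of running copies
  selector:
    matchLabels:
      app: my-blog
  template:
    metadata:
      labels:
        app: my-blog
    spec:
      containers:
        - name: blog
          image: registry.example.com/my-blog:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Once applied with kubectl apply, Kubernetes keeps restarting and rescheduling containers until reality matches this declaration.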
Well, for the memes, of course. No, seriously: the community, and in part their memes. I was following a couple of people because I liked them, and they kept posting Kubernetes-adjacent memes, and I wanted to understand those. So it's a bit for the memes.

It's also a great learning experience. Some people might say "upskilling", so you could find better jobs, but I just enjoy learning stuff. Curiosity: what are those enterprises doing internally that is not Java code? Because I'm not going there. My old infrastructure was already running on containers, so it was a pretty easy move. And maybe a bit of system stability, because now I've got three servers instead of one, but that's not too important to me.

How did I do this? Basically, I developed my infrastructure in four phases. In the first phase, I had a local minikube installed on my laptop, just to test the viability and try stuff out; a weekend or two of time was what I used for that. Then I switched to local VMs, which was about a month's worth of weekends, and then I deployed to production, which was only a couple of hours of time. It was dangerous, since it was not a zero-downtime deployment: I killed my old server before I deployed my Kubernetes cluster. And then there's also ongoing maintenance, ever since I set up this cluster.

So, phase one: local minikube. First of all, is this viable for me? Can I do this? Can I learn this quickly enough that I don't get bored before I finish? Can I even afford the servers necessary? Because servers are pretty cheap in Germany compared to other places, but they're still expensive. Then I also made the first infrastructure decisions: which ingress controller should I use?
Which Kubernetes distribution should I set up? (Distribution, disruption, that would be funny. No, distribution.) And I wrote the first of many, many manifests. They're not scripts, but they described my deployments, and I got my blog running inside my local cluster.

Phase two, the longest phase, was local VMs. I just set up three different VMs on my local host and started writing Ansible playbooks. First of all, I had to learn what Ansible was. Well, I knew what it is, but I didn't know how to use it. So I wrote roles for control and worker nodes. I made even more infrastructure decisions. Which CRI, so basically which container runtime? There's containerd, which came out of Docker. There used to be support for Docker directly, but that isn't supported anymore. And there's also CRI-O, which is another one. So yeah, that was a decision I had to make. How am I going to do persistent storage? Because Kubernetes is great for stateless applications; once you introduce state, it's a lot more difficult, and I do have state, so I needed persistent storage. How do I automate TLS certificate renewal? I used Traefik for that before, and while Traefik is available for Kubernetes, the community edition is not very good in my opinion. And how am I going to do networking between my nodes, so which container network interface?

I also wrote a lot more manifests. I already had my old infrastructure in Docker Compose, and I basically just translated those Docker Compose deployments manually to Kubernetes manifests. There's a tool that can automate that, but I didn't want to use it.

All right, phase three: production deployment. I only had two servers at the time.
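As a rough sketch of what such playbooks for control and worker nodes can look like, assuming a kubeadm-based setup (host group names and the join-token handling are invented and simplified for illustration; a real playbook needs more configuration):

```yaml
# Hypothetical Ansible sketch: initialize the control plane with
# kubeadm, then have the workers join. Token handling is simplified.
- hosts: control
  become: true
  tasks:
    - name: Initialize the control plane
      command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf    # skip if already initialized

    - name: Generate a join command for the workers
      command: kubeadm token create --print-join-command
      register: join_command

- hosts: workers
  become: true
  tasks:
    - name: Join the cluster
      command: "{{ hostvars[groups['control'][0]].join_command.stdout }}"
      args:
        creates: /etc/kubernetes/kubelet.conf  # skip if already joined
```

The nice property of this shape is that the same playbooks run unchanged against local VMs and against real servers.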
So I needed a third one, because three servers is the minimum. I reset two of my existing servers to become the worker nodes and ran the same playbooks that I had already used locally. Then I debugged my persistent storage for an hour or so, because, well, partition labels do matter, even though you don't really see them that often. Success.

Phase four: ongoing maintenance, because IT projects are just never truly done. First of all, deploying new apps, whatever stuff I wanted to throw in there. Improving monitoring. Installing minor updates, upgrading to a new Debian version: when I started it was still Debian 10, now we're on 11. I also added Tailscale, which is like a VPN but much cooler and with less config; I hope that's a decent explanation. I added that so I could also add nodes at different providers, because at this point all of my cluster nodes were at the same server provider, and I had a vLAN between them. With Tailscale, hopefully at some point I can also have nodes hosted at different providers, so when one provider goes down, my infrastructure does not go down with it. That's not entirely completed yet.

So, what, part two: what are the answers to all those questions I had? First of all, storage. I decided that getting a whole second cluster just for storage wasn't going to be financially viable, so I use a Ceph cluster within Kubernetes, managed by Rook. It's very resource hungry; it's the most resource hungry thing I've got. It's a bit difficult to debug, but it's super easy to use once it's up and running. I'm also planning S3 storage based on MinIO. Ceph also comes with an S3 gateway, but I want MinIO for this, to back up my PVs, because at this point
I don't really have automated backups, because every backup solution you can find for Kubernetes assumes you've got some S3 storage, and I don't at this point, because S3 storage is expensive. And then I want to back up to my storage box, which hosts my backups for other stuff as well, but that's not done yet.

Network infrastructure. I didn't use one of the usual CNI plugins; I used a replacement tool that is really cool and not as resource hungry. I don't actually understand how it works, but it's great. The person who wrote it is in this room, so it's really great. It fits really well into my initial vLAN setup, so yeah, that's good. I've also got Tailscale in addition to the vLAN for communication within the cluster, as I said; I'm currently building this up a bit more. I've also got Tailscale for running the kubectl command: to tell my cluster what to do from my local device, I also connect via Tailscale.

I've got the NGINX ingress controller running. That is the tool that decides where requests are going: if you enter one of my domains, which container is actually going to handle that request. And there is also, I think, only one viable option for TLS cert creation and renewal, which is cert-manager. I procrastinated setting that up for a long time, and then it was super easy; it was the last thing I made before going to production.

Monitoring and observability. Prometheus collects metrics and triggers alerts. I've got an Alertmanager, which messages me via Telegram, usually in the middle of the night, when something breaks. I've also got a Grafana with super fancy dashboards that I barely ever use, but they look nice. Then I've got Loki, which collects all of my logs to put them in one place. And I've also got Falco, which monitors my kernel for potential security issues.
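As a sketch of what such a Prometheus alerting rule can look like: the two metric names below are real kubelet metrics, but the threshold, timing and labels are invented for illustration. Alertmanager then routes the firing alert onwards, for example to Telegram:

```yaml
# Hypothetical Prometheus alerting rule: warn when a persistent
# volume is close to full. Threshold and labels are made up.
groups:
  - name: storage
    rules:
      - alert: PersistentVolumeAlmostFull
        expr: |
          kubelet_volume_stats_available_bytes
            / kubelet_volume_stats_capacity_bytes < 0.10
        for: 15m                  # must hold for 15 minutes before firing
        labels:
          severity: warning
        annotations:
          summary: "PV {{ $labels.persistentvolumeclaim }} has less than 10% space left"
```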
So when someone hacks me, I hopefully find out pretty quickly.

Continuous integration, another important part. I've got my projects all hosted on GitHub; I just need those tiny green squares that tell me I've done good. I even got more than a thousand contributions this year, yay. Container images are hosted on a Harbor instance in my cluster. Harbor is a container registry that also does security scanning and a bunch of other cool stuff. Commits to the main branch of my blog, for example, trigger a build and push the container image with a canary tag; releases also build a new image and tag that with the current version number. I used to have Drone CI to do that, but it broke after I upgraded to Debian 11, and the stack I used with Drone CI actually hadn't been updated in multiple years. So: either write a whole new one, or just switch to something else for now. Now I've got GitHub Actions, which is free to use, like 2,000 free minutes a month; that's enough for me. But it also goes down all the time. If you've got an outage that you can fix yourself, that's okay: yeah, I fucked up. But if GitHub has an outage and you can't do anything about it, that's much more frustrating. So yeah, I'm going to switch back to self-hosted CI at some point.

What difficulties did I encounter during this whole project? Yeah, storage did not work at the beginning: I forgot to set the correct label on my partitions and thought they didn't exist. Alertmanager did not send any alerts after I restarted it. Well, stuff generally tends to work better when you actually store the configuration and don't delete it after every start. And then there's Helm, basically a Kubernetes package manager. In my opinion it's the Kubernetes equivalent of running curl domain | sudo bash. Most people don't know what they're doing when they're deploying Helm charts, and honestly, I don't either in many cases, because those things are huge, and understanding them is just a lot of work.
So it's always a bit tempting to just run this one command from their docs, and suddenly it works, but it's much better to write your own manifests. The only applications that I really had issues with at runtime were those that I deployed via Helm.

So what did I learn? Because learning was the main motivation for me. Even complex-looking architecture, or infrastructure, can become super easy if you take it one step at a time. I had this layered approach where I would only do one thing and then move on to the next, and it just worked; it seemed easy to me. Also: always read the manifests, just as I said with the Helm stuff, because you have to understand what you're actually deploying if you want to be able to debug it well. Better still, write them yourself; in many cases that's possible. They're probably going to be a lot shorter, because they're for your infrastructure, and not for every infrastructure imaginable out there.

Monitoring is easy, observability is hard. I've got this Prometheus and it sends me an alert when something breaks, but it would be much better to know before something breaks. That's why I got Loki and a bunch of different tools, but it's really hard to know when something is going to break. So that's an ongoing project; right now I'm mostly reactive in my monitoring.

Also, another thing: during my initial setup, I ran into some things that were not documented. So once I found out how they worked, I just created a pull request, and it was accepted, and people were thankful.
So you don't have to understand an application completely to make valuable contributions to open source software.

Another thing: issues don't show up even when they should. Maybe you've done something that broke an application, but for some reason it only crashes a week later, and now you don't know why. That happened a lot. And also: even enterprise software can actually be very fun. Many of us have probably written Java and don't think that's fun, but yeah, Kubernetes can be very fun.

So now that I've walked you through it quickly: should you do this? Is this a good idea for you? Yeah, probably not, unless you're similar to me; this just doesn't make much sense otherwise. Pros: it can be very fun. DevOps specialists are really well paid. The community is great. And you can do zero-downtime system upgrades, which I did when I upgraded to Debian 11: I had zero downtime for my important services, which is neat. Cons: even if you go with low-cost vServers, you're going to be paying 45 euros a month or more. My cluster is not highly available at this point; I only have one control node. If I wanted to go HA with that, that would be at least two more servers, like 15 bucks each or something in that range. It's really expensive. You're also going to be needing a lot of knowledge that's not going to be very useful for other things. Like, yeah, I'm a web developer, but I'm also at a very small company.
So we all basically can do anything if that is needed, and the stuff I learned is also valuable for my work. But if I was a woodworker in my job, I probably wouldn't be using my Kubernetes knowledge for anything, and my time would probably be better spent learning something else. And another thing: you need to constantly keep an eye on security and upgrade everything promptly, because you can very quickly get attacked and breached, and you really don't want that, obviously.

Secret additional point: tell us more about the memes. Jokes generally do not get funnier when explained, but an explained joke is still better than feeling like you don't belong because you don't understand the in-jokes. So I thought I'd explain some of those memes that got me into this whole thing.

First of all, what is Kubernetes? There was this wonderful exchange on Twitter. If you think long enough about it, that image is actually a good representation of what Kubernetes is; you just have to wrap your mind around it. Kubernetes is just extremely difficult to explain; I hope I did a half-decent job at doing so. Also, these logo clouds are everywhere, and they do not help. Every damn blog post that tells you how to do something in Kubernetes has to explain what Kubernetes is for some reason, probably SEO. It's annoying, and those explanations usually are also pretty terrible.

Then we've got these geese, like on my shirt, and generally honking. It doesn't really have much of a connection to Kubernetes in itself; it's just that the Untitled Goose Game was very popular among Kubernetes professionals. Geese are terrifying. I was bitten by three animals in my life: one fish, which barely hurt, a dog, which also barely hurt, and a goose, and that hurt like hell. So geese are terrifying, but they're also cute.
They're associated with mischief, and therefore also kind of with hacking, because we're a mischievous bunch, at least a bit. And it was just picked up by some very popular people, and everyone went with it. Also, we've got the saying, which exists outside of the Kubernetes community but very much inside it as well: fuck around and find out. Does this count as a meme? I don't know, but it's my approach to learning something. I just try doing it. It's going to break, but then I find out why it broke, and then I'll try again until I understand. So yeah, it's a very nice approach to doing things.

Thank you for listening. I also like to end my talks by telling people to join a union, even though it doesn't really have anything to do with anything here, but yeah.

Thank you, dysphoric unicorn, for this really cool talk. We have a few minutes for Q&A, so if you have a question, please raise your hand and I will give you the microphone.

Are you happy with Harbor? Yes, so far, very much.

Setting up a Kubernetes cluster: is it easy with tools like kubeadm, Kubespray or the Ansible playbooks? What's your experience, and what can you suggest, maybe? I mostly set up the Kubernetes clusters using, like I said, Ansible playbooks, and those Ansible playbooks ran kubeadm commands. So kubeadm makes it super easy to administer, install and upgrade the cluster. Okay, thank you.

So say I want to try this: how brutal do I have to get on my existing infrastructure?
Do I have to wipe everything clean or get new servers, or is there some way I can play around on my existing stuff without wiping everything I already have? Well, that depends on what stuff you've got. If you're just playing around, the documentation for Kubernetes actually has some interactive tutorials that let you set up a cluster that runs somewhere on Google's cloud infrastructure, for free, while you're just figuring out how stuff works. That's a great first start. Then you can also just figure stuff out using minikube or VMs. But if you're going to migrate, you're probably going to have to wipe everything and restore from backups. So my blog posts are still online, but I had to do an SQL backup and play that back in later.

Do we have any more questions? Oh, yes. I imagine that you encountered some so-called chicken-and-egg situations. For example, how did you do monitoring? Because I imagine that you host your monitoring solution inside your Kubernetes, and if you have an intruder, then maybe that's not that obvious, because he could compromise your monitoring. Yes, that is true, and the best practice would always be to host this stuff outside of Kubernetes. So like Falco, the security monitoring, that should be installed next to your cluster and not within your cluster.

So do you host this at your place at home, on a Pi or similar? What do you do with that special part of your system? I do not follow these best practices at all times. My stuff is not that interesting to hackers; I don't have that many compute resources to mine shitcoins. So yeah, I do not have the resources. Another thing you can do is mirror your Prometheus, at least, to another Prometheus.
So if you've got a friend who's also running a cluster, you could kind of monitor each other, with the monitoring in between, and at least when your Prometheus goes down, you will know.

Okay, it looks like we don't have any more questions. Then another big round of applause for dysphoric unicorn.