Welcome, everybody. Thank you for coming out. Today we're going to do a little bit of playing with some technologies, but we're going to try to do it in a very approachable way. We're going to be talking about BuzzCrate, iteration number two: the web-scaled shoebox. So, some brief introductions. Myself, I am Jason Plum. I'm a developer with Arch Linux ARM. I currently work at GitLab doing cloud-native deployment, making some crazy things a reality. I'm an open-source developer in my spare time, and I work primarily under OSI-compliant licenses. A random tidbit: I'm responsible for the inadvertent addition of the bridge IP flag to Docker some years ago, never knowing what the people at Flannel and Kelsey Hightower would eventually do with it. My name is William Christensen. I'm presently working at Change Healthcare as a cloud engineer. I have a background in Windows software development, mostly applications for manufacturing or for ASIC development. Slowly but surely, the community has turned me into a DevOps engineer as I started playing with Arch Linux ARM. After meeting Jason in IRC, I've become a product of our local Linux community, working with these technologies on the way to becoming an open-source contributor. So why BuzzCrate? We get asked this a lot. The important thing is, it's really about making it easy for developers to get hands-on with their own systems, to really understand how they work, and not have them live off somewhere in a cloud, or somewhere IT keeps you from ever actually touching where your applications will run. For sysadmins, we want to enable you to continuously spin up, test, spin down, and replicate these environments, especially with Kubernetes and how complex applications can end up being in that scenario. We want you to be able to test it, play with it, break it, and prove that it's going to work well in production.
In the end, for DevOps, we want you to be able to do the exact same thing. We want you to be able to test it, to approach it, and to not just say, "it works for me." We want you to know that it not only works for you, but it works for anybody who's going to run it, because they can spin up the exact same environment you can. Then we have established businesses. What if I told you that I can give you a proof of concept without weeks of training and massive capital input? Just, literally: okay, here's a small budget; you can spin up a full proof of concept and demo everything very, very easily. How about students? Here's the one that everybody seems to not take into account. We're working with open-source technologies. Why should it cost money to try them out? We want to make it approachable, whether the barrier was cost, locale, or experience. You should be able to play with these things, get your fingers in there, see how they work, and not be restricted by economics. We also want to enable people to grow their own skills. These are complex systems, but there are tools out there, such as the ones we'll use today, that make everything easy to use, and they do it in a very understandable way. So let's have an easy entry point to these complex systems so you can go explore and learn on your own time. We want it to be fun. We're talking about enterprise technologies, but they're entirely usable by enthusiasts. You just have to have the ability to touch them, feel them, and work through them so you can learn. It doesn't matter if you're doing this in the office with a small cluster on your desk, or if it's a small cluster in the cloud; it's even home-lab friendly. It doesn't really matter.
We want you to be able to do it as fast as possible, or as slow as possible, to meet your own learning goals. And it needs to be simple. It's got to be as simple as we can possibly make it. In two hours, you can take this, read through everything, and walk all the steps from scratch to completely new and working. Once you've done it once or twice, you can get this well below 10 minutes. The biggest thing is that we're doing this for y'all. We want this project to be put together and organized, because this is a complex set of technologies, and we want them to be simple for you. We started approaching them when they were in alphas and betas, and they were horribly complex, and we had to go collate: okay, here's a guide here, here's a guide there. We've even worked with ARM; we've worked with the kernel, we've done libraries, we've done dockerization and all this stuff, but now let's try to make this a consumable amount of material. That is what this project is meant to be for you. So really, BuzzCrate started out as a project where Jason heard about this new beta thing that Rancher Labs had talked about, which is K3s. And he said, hey, it's supposed to support ARM. I'm like, cool, how do we install it? And that is where we started going down the rabbit hole. Well, as you can see with the little icon in the corner: Arch Linux. We're not usually the first to be thought of when people release new software. And as we were trying to learn Kubernetes ourselves, we started noticing the steep learning curves, just as Jason mentioned. One of the alternatives is Minikube, and it doesn't really scratch the itch for exactly what you want to do when you want to play with a cluster. Especially if you're more admin or infrastructure focused, playing around with one node is not exactly the most appealing. But unfortunately, the most appealing way to get an environment to play in is normally AWS.
And we discovered that the break-even point for buying this hardware versus the cloud at the time was about three and a half months. It was actually the same for GCP. Another factor is the fishbowl effect. When you're working on Kubernetes, it's nice to actually see, touch, and hold the cluster. There's a reason why a lot of homelabs have a lot of home servers. This fits on your desk. It doesn't matter if you live in the city in a tiny apartment or if you have a nice giant home; it's there, and it's energy efficient. It's air-gap capable, so if you need to bring it to work somewhere with strict security requirements, there's a possibility of it running there without any question. And most importantly, we got into it because it was fun for us, because of the memes that came along with it. First you're like, oh, it's gonna be like Loot Crate. And then the buzz just started flowing. For instance: any time you can work on a project that is replicable without a PhD, with hybrid network conversion to cluster-logical automated cloud development operations with angelic ROI, that's awesome. And I can't believe I just nailed that. But more important is the meme it all led to, which is: it doesn't fit in a shoebox. Yes, it will. Jason Plum can attest to the fact that it has fit in a shoebox. It will travel to KubeCon, it costs less than a pair of Air Jordans that would normally fit in said shoebox, and it allows the run-not-walk approach. More importantly, you can actually play with an entire cluster. Now, let me click that button. All right, so as we said, this is the second generation. So, with everyone introduced to it: hi, we did not meet all of our OKRs from the last one, so, more buzzwords. But the major one that we hit was that we have a cheap proof of concept deployable in less than 10 minutes.
That is very key for us: for less than $500 you can have hardware in hand, and we want to open up the concept of Kubernetes to everyone with the lowest barriers that we knew of at the time. But unfortunately, there were three failures, really markers that were missed, for the community and the people we need to reach out to. One of them is CI/CD. We wanted to introduce the concept of CI/CD, since we're saying, look at all these things we're deploying, because the original BuzzCrate was a lot of Ansible. Well, that's nice, but how do you get the applications on there? How do you build them? We do address it this time, but it took some effort. The IoT platform: the IoT platform at home is something I personally care about, because it's a lot of technology and embedded work that I get to do again, but tied into enterprise clusters. And finally, creating an ecosystem. I would love to be able to order parts off of SparkFun or Adafruit and just have them tie into an app that I deploy, let's say via Helm, directly onto my home cluster; this enables what I want in the future. Along with that, we discovered some additional failures since the first version. One is cost of entry: cloud has gotten a lot cheaper and a lot more accessible, and there are a lot more vendors allowing for tools such as CloudForms, Terraform, that kind of stuff. So we need to address that. The last one was interesting to Jason and me, but it didn't reach the community as much as we wanted: it was only focused on edge cases at the time. So we had some issues from focusing more on IoT than on people working on the home lab. And more importantly, we didn't have enough buzzwords, and I'm just noticing now that we're missing some emojis. So that is a conversion failure I own up to. All right, next slide. So, tools used.
Jason, feel free to tie into this, because I have to go kick off the Terraform, if you want to start talking about it. Sure, so I'll just randomly pick throughout here. First off: yes, we're running Arch Linux. One of the reasons we're doing this is because we get consistent platform behavior between the ARM boards that we have and the VMs that we're running in Linode. We're using Linode in this particular case because they actually do have Arch images available for us, but also because they happen to be a local business that runs fully on open source, and we really appreciate that. So we're going to be using that. One, it's very accessible. Two, it's cloudy. And three, why not? It's buzzwords. When it comes to our choice for actually deploying Kubernetes, we've chosen Rancher's K3s. It's a distribution of Kubernetes, one that is designed to run in more constrained environments. They've stripped out some of the things that aren't needed. The agent can now run in sub-100 megabytes instead of closer to a gig, and we can run it on the ARM boards and on small nodes in Linode. So we're keeping costs down as well as making it approachable for people who may not be able to make use of the cloud all the time. We're using GitLab for the ability to do CI/CD. It's got a built-in container registry, and we're using it for source control. So all the things that lead up to this project, the building of the containers, both for the arm64 arch as well as the x86 arch, we're using it for those things, so we can push them into the registry and then pull one image to all of these. We're using Terraform for the deployments into Linode, to go and create the VMs and resources that we'll be using, and Ansible to do all the configuration on both sides of these clusters. Jason, quick pause.
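As a rough sketch of what that Terraform step looks like, something along these lines stands up a controller and a couple of workers on Linode. The labels, region, and instance type here are assumptions for illustration, not the project's actual config; that lives in the buzzcrate repo.

```hcl
# Hypothetical sketch, not the project's real Terraform.
provider "linode" {
  # API token supplied via the LINODE_TOKEN environment variable
}

resource "linode_instance" "controller" {
  label  = "buzz-controller"      # assumed name
  region = "us-east"
  type   = "g6-standard-1"        # the "small node", slightly oversized
  image  = "linode/arch"          # the Arch image Linode provides
  # root_pass or authorized_keys would also be required here
}

resource "linode_instance" "worker" {
  count  = 2
  label  = "buzz-worker-${count.index}"
  region = "us-east"
  type   = "g6-standard-1"
  image  = "linode/arch"
}
```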
The Terraform just completed, which further emphasizes that this project is made to be done at home as quickly as you can, with the least amount of effort you can. All right, anyway, Jason, back to Ansible. And of course we have some demo apps written in a mix of Go and Python with Flask. The reason we chose these two particular languages is that we know for a fact they are highly portable, so we can containerize them and not worry about hitting some odd compatibility issue due to a library on the destinations. Mostly, they're easy to learn. And for someone who wants to go from "I don't know anything" to playing with a cluster and deploying an app, you can get your hands dirty with any part of it. Feel free to take it, mess with it, expand it. It's there for you to have fun with. Now, how did we start on this project? As Jason told me, hey, Rancher just announced it. It wasn't even 0.10 at the time, or was it 0.12? I think 0.12? It was definitely alpha. I was like, ooh, ooh, we can use this. Yeah, and we did. And there were some painful things, and it was a learning process on Rancher's side too, but in the end they made a better project. More importantly, they engaged the community, which I love seeing and being a part of. But the biggest thing about K3s is that they removed a lot of the alpha and beta features, and a lot of the stuff that has kind of passed its prime for a Kubernetes distribution, and they just made it work. So it ends up becoming a much smaller base, about 40 megabytes, and it takes about 40 megs of RAM. Jumping through here, the biggest thing is that no etcd means simpler to deploy. It means you don't need as many resources. Etcd is a hog; it's very good at what it does, it's just not efficient when you're stuck with two gigs of RAM on ARM boards.
Right, and that's an environment thing. Etcd works perfectly in enterprise, but when you've got one, maybe two, if you're lucky four Raspberry Pis, for example, not so much. But the biggest thing we found is that without etcd, and without needing multiple masters, the simplicity means we can start having a discussion about ephemeral clusters. You can start deploying Kubernetes in the same amount of time it would take to actually build the containers and publish them, which means ephemeral clusters. It means that if you need to update to a new version of Kubernetes, you can just tie it in with your deployment and test it. You can test your infrastructure anywhere. So it makes a better, nicer, more friendly DevOps platform for that. All right, we're going to kick through because we're a little short on time right now. So, Jason, no talking. GitLab: the reason I started using GitLab is not just because Jason's working there, it's because I really enjoy using it for source control management. And getting involved with CI/CD platforms, I found the GitLab Runner to be the easiest way for me to start playing around with builders. It was very simple, no Java, and it was so lean on resources I could throw it on an ARM target, and it supported ARM targets out of the box, so it wasn't a problem to build the builders. By the way, you can go to gitlab.com/buzzcrate/details for all the details and the slides that we have right now. In the buzzcrate project we have all the code that we will be running, including the pipelines that we were working on this past weekend, and a little bit last night, for this presentation. The container registry: we discovered, or I discovered, a feature that GitLab's UI doesn't even know is supported yet, and supports actually very poorly, but the registry itself functions properly: supporting multi-arch containers listed under a single manifest.
So when you just say, hey, I want to grab the go-echo container, as long as you have the latest tag, it will automatically figure out whether you're on an arm64 ARMv8 box or on an amd64 box. We could support more architectures, but we were limiting it because we're using our own hardware and simplifying the container build. And we're doing the entire build without Docker, which is also an interesting exercise. For me, this is a learning platform: canary deployments, automated provisioning, direct Kubernetes support. This is stuff that I've planned to play around with for my own learning, with the GitLab tools, with BuzzCrate. And I invite anyone else to do the same. Rancher Labs does support some other things to actually tie in multiple clusters. I have not played with that yet, but I do plan on playing around with this platform as well. So, that aside. Jason, you want to take this one? I had to unmute. I had to resist talking there because I figured I'd hardware-mute. Okay, so we have both kinds of projects here. The last time we did a talk in regards to BuzzCrate, we only had the ARM boards. So this time around, we've got two. On the one side, we've got Linode's cloud. We're using it as infrastructure as a service. We've got Arch Linux images, thank you for those. We're going to start ourselves off with a small node for the controller, which is actually, believe it or not, slightly oversized. But it's easy for everybody to use; that's the important part. And we've got a couple of worker nodes. Then we've got Terraform, which Will has already gone ahead and run. As he said, it ran that quickly. We'll show the screen in a little bit. And then we're using the Linode CLI to actually pull the information we need out of Terraform and Linode itself, to turn around and generate the Ansible inventory. Then we'll use Ansible to actually deploy and join all the K3s nodes together into a usable cluster.
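To give a feel for that inventory-generation step, here's a minimal sketch. The group names and the `linode-cli` flags are assumptions for illustration; the real scripts in the repo do more than this.

```shell
#!/bin/sh
# Sketch: turn "<label> <ipv4>" lines (as linode-cli can emit) into a tiny
# Ansible inventory. First line is treated as the controller, the rest as
# workers. Group names are illustrative, not the project's own.

make_inventory() {
  awk 'NR==1 { print "[controller]"; print $2; print ""; print "[workers]"; next }
       { print $2 }'
}

# The real flow would look something like (flags assumed):
#   linode-cli linodes list --format 'label,ipv4' --text --no-headers \
#     | make_inventory > hosts
```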
On the on-prem ARM hardware side, we're using Arch Linux ARM, and we have a couple of choices of boards. In particular, we're using HardCrate. Yes. Yes. So we have HardCrate; if you go look up the details, HardCrate is a bunch of boards from Odroid. My cluster, LibreCrate, is obviously Libre Computer boards. We've got two gigs of RAM, 32 gigs of storage on the SD card, gigabit ethernet, and ARMv8 64-bit boards. We're going to be using Bash through the image deployment, and then Ansible, once all of that is ready, to actually do a full installation on those as well, to bring up a full cluster. All right, so there has been some talk, even today, about ARM and how much ARM support there is, for, let's say, the future MacBooks. Well, I can say that as of 2019, all these things that we tried: Debian, Alpine, Fedora, GitLab Runner, Python, Buildah, Podman, Nginx, Docker works too, thank you, Jason, and Postgres, all worked without thinking about it. The multi-arch support was there. This presentation is more about trying to get caught up with the manifests, what they did that made my life easier last year, so we can introduce the concept to you this year, also using GitLab's registry. So if you ever wonder: everything in our project is built with Buildah from scratch. We're not using Docker, we're not using FROM alpine or anything; it is always from scratch, and we build it from native containers on both of our platforms. You can see it in our build pipelines at gitlab.com/buzzcrate. Next slide. Jason, this one is all you. Sure thing. So before anybody asks: why would we choose Arch versus Alpine for the containers that we run? Let me give you a quick rundown. First off, we did this all by hand, from scratch, in the first place, without necessarily having it in containers.
So it's all about the organic growth of the project as a whole, from the pet project that it was, as well as using it as a learning curve for Will to bring in some of the knowledge that I was able to teach. We're familiar with how it works, and with replicating those behaviors, whether in or out of a container. And we really appreciate the KISS philosophy of the Arch way. Now, those are items you can easily look up, but we'll have them in the slide links later. The nice thing was, we obviously are very familiar with doing images for ARM boards, and the availability of an image on Linode made us pretty happy. And it's turtles all the way down, but we lost some of the bogeys, so apologies for the gap there. Now, for Alpine: yes, it's built around musl for the libc, and BusyBox, so it is definitely smaller. It is generally more efficient than many other distributions. However, due to it being built on musl, there are several surprising hiccups due to the differences between glibc and musl libc, several of which can cause some pretty mysterious failures inside of Kubernetes, and we wanted to save you from those, because hunting them down is not fun. So we made one that works with the standard glibc behaviors that everybody will be familiar with, and you won't have to go hunting down the weird problems in DNS, because nobody wants to do that. More importantly, I wanted to make sure that sysadmins had a way of learning containers that was more natural to them, with package managers. Building from scratch, I think, is easier if you have any sysadmin background or if you've played a lot with distributions. This way, you don't have to worry about who's making the image and what's in it; you can put it all there. The nice part about doing it the KISS way, the Arch way: replace pacstrap with debootstrap, or a remote install with yum, and you're supported with any of the major distros you want right there.
You just have to have a remote install process. All right, so, hardware. These are the cool pictures. We have HardCrate. Yes, that giant sprawl of wires is all you need to have a running, working cluster with HardCrate, or as I like to call it, the OG BuzzCrate. LibreCrate, this is Jason's cluster that tends to go travel, but LibreCrate transforms into KubeConCrate when you mysteriously throw it into a shoebox, check it into your luggage, and trip it over to KubeCon. So if anyone wants to go have fun, go check out the GitLab booth at any future conferences, once we get them back. And I'm sure KubeConCrate will have a few additions. If you want to watch Jason be challenged, you can bring up an ARM board and have it added to his cluster right there on the spot. But one thing at a time; no challenges, Jason. And finally, we're going to share the screen, because we've been talking about ARM and cloud. Well, we're going to have them race, because you know what, we're bored and that's what we do. So I'm going to go kick off, I'm going to share on my side. Jason, I want you to get the web-scaled stuff. We'll do a countdown when you're in the folder. So start getting that set up and share. Minimize. All right, Jason, also, can you check to make sure that I'm sharing my screen properly? I can indeed see your screen just fine. Okay, cool. All right, assuming this is right. So first of all, Jason's not the most comfortable with Ansible, so it's going to be web-crate, Ansible, web-scaled. You're in the right one, config buzz. I'm in the right one. Ansible. What is in there, okay. All right, and then for you, you just have to run the k3install.sh. All right, on three. One, two, three, let's go. By the way, if you haven't used Ansible with cowsay, you aren't living. So what this is doing is it's actually going into all of the items. On each side, it's actually building and packaging K3s into a proper package, as well as pre-configured server and agent services.
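For a picture of what a "pre-configured agent service" means here, a systemd unit in roughly this spirit would do it. The file paths are assumptions, not the project's actual unit, though k3s really does read K3S_URL and K3S_TOKEN from the environment to join a server.

```ini
# Hypothetical k3s agent unit, as an Ansible role might install it.
[Unit]
Description=k3s agent
After=network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=/etc/k3s/agent.env   # assumed path; defines K3S_URL, K3S_TOKEN
ExecStart=/usr/bin/k3s agent
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```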
And then it's going to deploy that onto all the nodes, and then join them up into a complete cluster. Feel free to start asking questions; Jason will try to pluck them as we go through. And it looks like Linode has gotten its first node built, yep, just built. All right, I swear this ran faster on ARM before, Jason. I'm jealous. So right now, it's just waiting to get the node file so we can automatically grab all the agents and create the cluster. Come on, ARM, I believe in you. I'll let you... it's actually the SD card. It's doing the tar and retar. Yeah, that's got to be it. All I know is it works pretty fast after that, but it copies, and the Ansible playbooks are essentially the same between the two. Oh, yep, they're starting to call in now. Now it's doing the install. And the nice part is that this is simple enough: you edit the PKGBUILD, you update the SHA-256 sum on it, or is it MD5? I can't remember; it says in the PKGBUILD. But you update that, you update the Ansible host file for either ARM or x86, and you have updated to the latest version of K3s. We're about two months behind, so there will be an update shortly. It's just one last headache we wanted to deal with before doing a live demo, because the demo gods sometimes demand blood, and we don't want that this time around. Yeah, we may be Arch users, but we're still not silly enough to do a live update in the middle of the demo. Yeah, we're not Linus Torvalds running an rc of the latest kernel, which, by the way, excellent keynote, if anyone watched it; well, one of many. Another thing that I definitely want to address is that we made this as a teaching platform. So, what Heather Miller was talking about, with the influx of new software and the need for new software engineers coming in: hopefully this platform will help facilitate that and make it accessible to people who aren't just in the classroom, who don't have access to this stuff at work.
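That version-bump step can be sketched like this. The helper name is ours, and `updpkgsums` (from pacman-contrib) is one way to refresh the checksum line without pasting hashes by hand; the real PKGBUILD lives in the buzzcrate repo.

```shell
#!/bin/sh
# Sketch: rewrite the pkgver= line (and reset pkgrel=) of a PKGBUILD read
# from stdin.

bump_pkgver() {
  # usage: bump_pkgver <new-version> < PKGBUILD
  sed -e "s/^pkgver=.*/pkgver=$1/" -e 's/^pkgrel=.*/pkgrel=1/'
}

# The full flow would be roughly:
#   bump_pkgver 1.0.0 < PKGBUILD > PKGBUILD.new && mv PKGBUILD.new PKGBUILD
#   updpkgsums   # recompute sha256sums=() for the new source tarball
#   makepkg -s   # rebuild the package
```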
We want to make this so anyone can get involved, because, well, communities need to grow. And if we don't focus on growing, we're shrinking. And there we go, we have two up and running. And so now, let's see, I've got one last thing. Jason, if you want, yeah, oh, the kubeconfig. I'm not going to do it on mine, you do it on yours. Okay. Yeah, because we'll focus on the web-scale one. Do you want me to shut down? You know, let me, I need to pop back to config buzz, right? Yep. Let's see here. So we need to run go-echo. So go back again. Yep. And then K3s has got that deployment thing that we worked on a little bit. Yep. Yep. What we're going to do here is actually add in Linode's, I can't remember what CCM is short for, the cloud controller manager, that's it, so that we can actually have it deploy a load balancer from the Linode platform. That way we don't have to somehow expose our own with MetalLB; that's not something that's exactly doable when you're in a cloud provider, unless you have full VPC subnetworking and things like that, which is not a particular feature of Linode at this time. So we're going to deploy that. And now, as we go and deploy resources, when we set up a service and a load balancer, we'll actually get a load balancer from Linode. So, just to let you know, for the ARM deployment we do have k9s installed on the master node, or the controller node; we're changing the name to controller node when we get time in the next week here. So the controller node will have all this information for you via k9s, so you can have a little ncurses GUI to run around in. It's really handy. All right, which window are you in now, Jason? So, Jason, let's see here. Do you want me to... I'll deploy go-echo? Yep, go ahead and do that. And literally, once you run the kubeconfig script, which is already done with Ansible, so you don't have to worry about where it's at...
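To make that cloud controller manager point concrete: once the CCM is running, a plain Service of type LoadBalancer is all it takes to get a NodeBalancer from Linode. The names and ports below are illustrative, not the project's actual manifest.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: go-echo        # assumed name
spec:
  type: LoadBalancer   # the Linode CCM fulfills this with a NodeBalancer
  selector:
    app: go-echo
  ports:
    - port: 81         # the demo was reachable on port 81
      targetPort: 8080 # assumed container port
```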
Some people have used this thing called k3sup, pronounced "ketchup." It's just a matter of SSHing into a box, grabbing a file, and replacing localhost with the right IP address. That's already taken care of for you with my scripts. So we're just going to do a kubectl apply -f on the go-echo deployment, and up. Of course, I fat-fingered that one. Live demos, everybody. There we go. If you didn't see at least one of those, you didn't have a live demo. Exactly. So we've already deployed this. And yeah, you're welcome to view all these deployment files, because this is where the difference between running with containers and playing with Kubernetes starts, if anyone's brand new to Kubernetes. All right, so that's up and running. Jason, if you want to, you can pull up the namespace. And if you give the IP address, people can actually go check it out themselves. And from here, we want to do the book demo. There it goes. So if you were all to open up 45.79.246.29/anything: the server is a very basic HTTP server. It's literally an echo server. So you do have to give it some sort of query string, and then it will tell you what your query string was. It's just a very simple example. What's your first name? It'll respond. Oh, we should point out that it's actually on port 81; you have to put the port 81 in. We'll have that set up in just a second. I need to get the service file in, and there we go. So what we've actually done in this particular namespace is deploy the basic CRUD, right? Create, read, update, delete. We've got a Flask application, and what we've done is deploy a database behind it, a Python container running a web server, and a very, very basic web interface. But in addition to that, we've created the pods, we've created the services, and we've also created an Ingress. Oops. No, we did not add that. I did not apply the Ingress on the go-echo. Give me a second. We've got to update that quick, because we have to add the icon to Flask.
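For reference, the books Ingress being applied here looks roughly like this, using a nip.io hostname so no DNS setup is needed. The service name and port are assumptions for illustration.

```yaml
apiVersion: networking.k8s.io/v1beta1   # the Ingress API current at the time
kind: Ingress
metadata:
  name: books                           # assumed name
spec:
  rules:
    - host: books.45.79.247.34.nip.io   # nip.io resolves this back to the IP in the name
      http:
        paths:
          - backend:
              serviceName: books        # assumed Service
              servicePort: 80
```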
Oh, that's right, it's on Flask. Where do you want it, please? All right, so what's the IP address? The IP address of our Traefik is 45.79.247.34. Now, if you open the URL that you see there, and I'll repeat it here, books.45.79.247.34.nip.io, you should actually get our little... Can you check for questions? I am on that. We have a comment pointing out that live demos are awesome. So thank you for that, yes. I honestly feel like every tech talk should have some form of demo, and if not, the demo gods deserve some blood. So, nothing like seeing a little failed demo for a little extra live debugging, but hey, we're up and running. It works; it's functional. Yeah, we can replicate the... we'll just keep the screen shared. Can you go ahead and actually show them in the web browser? I don't think I have that IP ready. Give me a... It's just a little quick on it right there. You're good. I don't have my settings set up for that quite right. Yeah, but yeah, we're up, Arch FTW, yeah. By the way, the "very wow and much wow," much appreciated. Demo gods smiling on you. Yes, they have smiled upon us today. Yes, if you refresh that page one quick second. Yeah, yeah, yeah, we use Arch. We use Arch like crazy, by the way, BTDubs, Doge. I mean, yeah. But anyway, so yeah, we're up and running. We can also replicate this on ARM, but I'd rather take questions at this time, if that's more everyone else's pace. Let's stop the share for now. Okay, so there's pretty much that. We've got one last slide, which is: hey, questions, anybody, somehow, some way? And also we have some questions for you to take home and think about. One that we already posed, and one that I really like to talk about, is ephemeral clusters. Have we been using Kubernetes with the wrong design pattern for everything?
And can we, since this is so quick and easy to deploy, and since the uptime with cloud is so nice, just deploy clusters for what we need, per business unit? I work in a large company, and by large company I mean we're a large company with many products and applications. Having one size fits all does not happen; we do not communicate that well. But what we do do is have really targeted business units that know how to handle themselves pretty well overall. So, this is one approach. There's a question that I want to interject: any stats on ARM versus Intel? I'm not really sure what stats in particular you want; are we talking cost, are we talking performance, I don't know. So I will wait for the author of that question to pose a new, more detailed question for us. So, the other thing is security. I work at a healthcare company right now; security's a big deal. So, my question: there's Twistlock and Aqua for testing out containers and inspecting them. My question is about base containers. If you've already certified your base image and you use the same packages to build your containers, do you need to scan them again? Next little question: how should popular projects support multi-arch containers? Should they do it at all? Are most people not going to touch ARM? Now, this question was written before Apple announced their switchover to ARM. Now that a lot of people are going to be ARM users in the future, should projects support ARM containers? Do people know how? Should we, as a community, advocate for it? I don't know; that's something for us to discuss. Microservices: we've been doing a lot of talking about microservices, having a pod and that kind of stuff. But now that we can discuss ephemeral clusters, can we just give microservices their own clusters, for load balancing as needed? And, little questions.
We do have a clarification on that earlier question: it's definitely about the performance. Okay, so I can tell you right now, it's always going to depend on what kind of CPU you're actually running on, whether we're talking a high-performance CPU, a data center CPU, or a low-energy CPU. We can say the same thing about Intel as we can about ARM. The cluster that we're actually making use of is built out of Odroid C2s; those are Amlogic S905s, I think. And they are effectively cell phone processors. That's originally what that board was built around: that chip was originally designed to be put into a cell phone. So that performance is not the best you're possibly gonna get out of ARM. If you were to compare it to, say, the Xeon processor I'm running in Linode right now, that's going to win in those terms. But if you were to compare that ARMv8 CPU to a Xeon from eight years ago, they're actually not as far apart on a single-threaded workload as you might think. The biggest hindrance that you might run into is if you decide to go reading things from disk. But you're gonna hit that limitation no matter what you do if you have a difference in disk speed. So when it comes to raw CPU capability, working with just the CPU and the RAM, the boards that we're using are comparable to an old box from Intel. If you were to go with the latest stuff and compare, say, a Graviton against what we have going on in the Intel world right now, you actually would be pretty amazed if you start trying to calculate the Fibonacci sequence. That's probably not what you wanna do in a container, but it is something to take into account. Right now, the balance is: how much raw performance do you need on a single thread versus how much do you actually need when it comes to parallel processing? Yeah, a follow-up to that point is whether or not QuickPath on the Xeon versus ARM and the memory path matters there.
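The Fibonacci remark above refers to a classic single-core micro-benchmark. As a hedged sketch, here is a tiny iterative version in plain shell arithmetic; it only exercises one core's integer math, so it says nothing about memory bandwidth or real workloads, but it is the kind of thing you can run identically on an ARM board and an old Xeon to compare:

```shell
# Iterative Fibonacci using only POSIX shell arithmetic. Shell integers
# are 64-bit on common platforms, so keep N <= 92 to avoid overflow.
fib() {
  n=$1; a=0; b=1; i=0
  while [ "$i" -lt "$n" ]; do
    t=$((a + b)); a=$b; b=$t; i=$((i + 1))
  done
  echo "$a"        # after i iterations, a holds F(i)
}

fib 20             # prints 6765
fib 90             # a bigger run; wrap it in `time` to use as a benchmark
```

Comparing the wall-clock time of the same loop on both machines gives a crude single-thread number without installing anything.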
In this particular case, I don't have enough details to go into that one. I don't really wanna go too low-level into particular CPU details, because we've derailed a little bit from the original topic of the presentation. The truth is we haven't really played around with many ARM-based or ARM-focused data center CPUs on our desk. The truth is a lot of the ARM offerings just haven't been there, but if you wanna look at overall performance, on the latest TOP500 data, the top supercomputer is ARM-based, not Intel-based. So can ARM perform that way? Yes, and I would actually recommend you look at the trade-offs of RISC architecture versus CISC architecture. The complex instruction set versus reduced instruction set debate has been going on for a while. x86 just had familiarity going for it, which is one thing that Linus brought up just today: familiarity was what pushed the Linux kernel in his own development. So yeah, it's availability of hardware. What's on the desktop is going to really drive the performance of the ecosystem. And that's probably part of the reason why we're approaching this: ARM's coming up, and we're showing you how to support it right out of the box if you're just curious. Jason, I have to admit, I'm not seeing any of the questions on my dashboard, but what I am doing in the background is getting the installs for the original BuzzCrate on ARM, and I will share those as soon as that's ready, so everyone can play with ARM from their browser. Okay, we have a question on whether or not BuzzCrate can be run on a Raspberry Pi. The answer is yes. However, I will point out that the containers we are building are strictly for ARM64, that is ARMv8. So you need to make use of at least a Pi 3 or a Pi 4. And while the hardware is actually pretty solid, if you can get the 4GB version, I would go for that. But it is designed to run effectively on any platform that you can run these binaries on.
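Since the images are arm64-only, a quick way to check whether a given board can run them is the kernel's reported machine type. This is just a sketch; the strings matched below are the common values Linux reports, and a Pi 3 or 4 only reports `aarch64` when it is running a 64-bit OS:

```shell
# uname -m reports the kernel's machine architecture. Containers built
# for ARM64/ARMv8 need an aarch64 kernel to run natively.
arch="$(uname -m)"
case "$arch" in
  aarch64|arm64) msg="arm64 kernel: the ARM64 images should run" ;;
  armv6l|armv7l) msg="32-bit ARM: install a 64-bit OS to use these images" ;;
  x86_64)        msg="x86_64: use the x86 images instead" ;;
  *)             msg="$arch: unrecognized, check the image architecture" ;;
esac
echo "$msg"
```

The same check is what lets one playbook decide which image tag to pull on a mixed cluster.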
The automation for Ansible is written specifically for Arch Linux, as opposed to Ubuntu or Debian or Raspbian. So you may have to individually set up the nodes in that regard. We haven't modularized that part yet: the install playbook needs to be rewritten, but the rest of the playbooks may be usable as-is. So it's only one spot, where we do the actual build and install, that is Arch Linux specific, either ARM or x86. Right. Okay, we have another question here: given your wealth of knowledge, any advice for engineers switching careers from corporate or enterprise IT to the DevSecOps side of the house, on getting started with containers and K8s? So one, congrats on being open to expanding and learning new things, first of all. Second, I would say the very first thing you wanna do in approaching DevSecOps, and understanding containers and how platforms such as Kubernetes orchestrate those containers, is first things first: understand what a container is and how it actually functions. People talk about Docker. Docker is one tool to create and automate containers, but it isn't actually the only way to do containers. So look into what a container is and how it actually functions. On Linux, that means it's using cgroups, it's using namespacing, and it's effectively the same type of thing as a BSD jail. So when you look into these behaviors, you're effectively saying: I'm going to cordon something off and let it run on its own. And this is not just "I'm gonna run it in a chroot." This is a highly constrained, kernel-jailed set of processes. Okay. Once you understand basically how a container operates, understand a few of the runtimes, whether that's Docker or rkt or containerd. And from there, you can start poking your nose up the ladder of technologies and really understanding how the orchestration platforms work, whether that's Docker Compose or Kubernetes or OpenShift. All right.
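The cgroups-and-namespaces point is easy to see firsthand: every Linux process already runs inside a set of namespaces, and a runtime like Docker just creates fresh ones for the contained process. A small peek, no container runtime required (Linux-only):

```shell
# /proc/self/ns lists the namespaces the current shell belongs to.
# A "container" is a process whose entries here differ from the host's.
ls /proc/self/ns

# Each entry is a symlink encoding the namespace type and its inode:
pid_ns="$(readlink /proc/self/ns/pid)"
echo "$pid_ns"   # e.g. pid:[4026531836] (the inode number varies)
```

Two processes share a namespace exactly when these links point at the same inode, which is how tools decide whether they are "inside" the same container.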
So we're launching the last of the containers right now. Go Echo should be up. So if you go to staticserve.dds.org, port 8001, slash test, that's the Echo container running on ARM. So you can play around with that. I'm still launching the book demo. The only difference for the book demo, which will be running on my ARM cluster long after the conference is completed, is that it's going to be at staticserve... I'm sorry, go to staticserve.dds.net, port 8002, and it'll just show up. It's the same book demo, the same thing you've seen on the Linode cluster here. So if I do that here, I should be able to do a test. So: curl staticserve.dds.net, port 8001, slash test. Come on. And it failed to connect. Awesome. Did my MetalLB get installed properly? That is the question. Wow, Jason. We'll try to figure it out. We got a failure, woo. If we can pop the slides back for a quick second, I want to pass our thanks along to the partners in the community that did all, or part, of the work here. First off, GitLab, for the tools they have available. Yes, they existed long before I was there; no, I don't work on that particular portion. Rancher, for their work on K3s and in general throughout the greater Kubernetes and Docker ecosystem. Linode, for their support in everything that we've done in this presentation and training. And the rest of the team at Arch Linux ARM for their support. Anyone specific you want to call out, Jason? I'll do that: Tim Bach and Eric Wilson, for all the help over the time that we've done this. Yes, definitely Tim Bach, thanks to you. If it wasn't for the late nights that we had working on OpenShift, I wouldn't have learned half of what I have right now. And Eric Wilson has been reaching out to the community for K3s for quite some time; if you have questions, I really do recommend reaching out to him. And let's see if I can get...
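When a demo curl "fails to connect" like that, a short timeout plus an explicit fallback message makes the failure obvious instead of hanging. This sketch uses the hostname as transcribed from the demo, which may no longer resolve; the message about MetalLB is just the first thing to check when a LoadBalancer service has no reachable address:

```shell
# --max-time bounds the whole request; --fail turns HTTP errors into a
# nonzero exit, so both DNS/connect failures and 4xx/5xx hit the else.
url="http://staticserve.dds.net:8001/test"
if curl --silent --fail --max-time 5 "$url" >/dev/null 2>&1; then
  status="service up"
else
  status="service unreachable: check that MetalLB assigned an external IP"
fi
echo "$status"
```

On a bare-metal cluster there is no cloud load balancer, so without MetalLB (or similar) a `type: LoadBalancer` service simply never gets an external address, and every curl to it fails exactly like this.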
That should be about it. Jason, are there any more questions? I am not seeing any more specific questions, and yes, I am doing my best to make sure I read them all, everybody. I'm seeing a lot of appreciation that it was a live tech demo and not marketing, that it's really interesting stuff, and all the blood to the demo gods. Also, you can reach out to Jason on Twitter, and I will be making other avenues available if you have more interest in playing with this cluster or in learning in general. There is a community I am trying to build on Discord, but I don't know how much that is going; I've got to take a look to see how well it's growing and how much more effort to put into it. As a last item, do remember gitlab.com/buscrate. The project is open, it does have issues, and we can take further questions or improvements from any community member who might be interested. So with that, our time has run out. Thanks, everybody, for coming. Thank you.