Hello and welcome to the Infrastructure as Code session about edgelab.digital, where we're going to be talking about simple, repeatable, and portable edge Kubernetes. My name is Rob Hirschfeld with RackN, and I'm your guide on how to set up and run Edge Lab, including a demo. RackN specializes in distributed Infrastructure as Code data center automation, and Edge Lab is something we've put together to create a repeatable experience for edge communities. The mission for Edge Lab is simple: it's about collaboration. We're trying to unify edge builders (developers, platform providers, and infrastructure providers) who all want to come together around both open source and commercial solutions. I'm a board member for LF Edge, and that's exactly its mission: to create a unified place where applications, platforms, and infrastructure all come together in a continuous feedback loop. The problem we have is that there's a high risk of proof-of-concept failures, and of distributed edge failures in general. There's no shared architecture for the edge, and design paralysis wastes resources as everybody reinvents how to do this work. That means we're building components without end-to-end integration, and without a real plan for how things are going to scale. And the solution is simple.
We need shared resources. We need a vendor-neutral edge platform that allows a portable application architecture: something that's not just about the hardware, but about the software and the platforms you run on it. And it's not sufficient for it to just be Kubernetes; the whole environment that runs at the edge has to be designed to be repeatable and scalable for edge environments, whether it's a single box, tens of boxes, or potentially thousands of environments as we get into larger and larger co-located edge infrastructures. Most importantly, these things together allow us to simplify collaboration, so that if you do work that does something amazing at one layer of the stack, that work should be portable so other innovators can enjoy and leverage it. What we know is really straightforward. There's a quote from Matt Breeze with Gartner about the challenges of these proofs of concept: the failure rate of a proof of concept moving into a scaled, distributed infrastructure is incredibly high, because demonstrating that something works and integrates on one site is a very different problem than a distributed management plane.
And so this is where Edge Lab becomes a pathway to production. It's not acceptable to just have throwaway environments and throwaway efforts. We want to make sure that when you're doing work on a proof of concept, a desktop lab, or any layer of the automation, you can take that work, using Infrastructure as Code principles, and scale it into a distributed architecture. For that reason, Edge Lab uses the same software that's in production in globe-spanning enterprise infrastructures, shrunk down, fully featured, to run on Raspberry Pis or whatever infrastructure you need. We chose Raspberry Pis as the initial deployment model because the environment is incredibly cost-effective: you can buy an entire Edge Lab, including the switch, the power supplies, the SD cards, everything you need, for under $500. That runs into a zero-touch install that lets you reboot and reapply. We keep things network-isolated so that chatter between the Raspberry Pis isn't bleeding into all the other Wi-Fi environments, which is much more like what you would want in a real edge environment. And then we preconfigured Edge Lab with K3s embedded, so you can run a Kubernetes workload right out of the box on the infrastructure. We also included OpenFaaS, so you can do functions-as-a-service too. And the whole system is designed for rapid redeployment: a lot of times with Pis you have to re-burn SD cards and you end up with a lot of moving parts to make things go. With Edge Lab it's an automated infrastructure, like a data center would be, so it's designed for a rapid reset where you cycle the power and you're back in business in less than 30 seconds. If you look at the environment we're going to walk through in the demo, it's four Raspberry Pis; of course you could have more or fewer. The first Pi, which we call Pi Zero, is the management node: it provides DHCP and PXE for the rest of the environment, and we PXE boot
the Raspberry Pis; the SD cards are just used for storage in this environment. The Wi-Fi on that first Raspberry Pi provides the internet gateway, so all the systems have access to the internet without having to activate Wi-Fi on each of them. That makes it very convenient to get everything up and running quickly. From there, the Pis are given static addresses out of the DHCP range, so we have known-good addresses, which makes things very easy to set up. For the demos I do, I actually use a wired interface back into the switch: I set my PC to 10.3.14.2 and then I can directly attach to Pi Zero from a blank setup using its initial SD card provisioning. Then we have SD cards that enable PXE booting on the Raspberry Pis, and a specialized configuration built into Digital Rebar that enables that PXE boot configuration. This is really significant: we can PXE boot pretty much any infrastructure you have, including the Pis. What that means is that today we're showing you Raspberry Pis for Edge Lab because of the low cost to get started, but you could easily be doing this against any hardware. If you wanted to use NUCs, full enterprise servers, Supermicros, any edge-enabled device will work with this infrastructure. It doesn't require out-of-band management; it doesn't require specific hardware or instruction sets. Even VMs work just fine, and we test against virtual machines all the time. So while we're showing you Edge Lab today on Raspberry Pis, the system is already capable of much broader hardware automation and of being used in very distributed ways. So let's get into some demos and I'll show you exactly what it's like to bring up an Edge Lab. Welcome to the Edge Lab environment. This is a
perfectly normal Digital Rebar install. I've already run through the initial setup phases, and I encourage you to go to edgelab.digital to review the documentation, where you can see the bill of materials, bootstrapping videos, and visual build instructions that take you through the entire environment. Once you've run that initial SD card and gotten things running, you come into our regular UX, and this is provided to create the Infrastructure as Code environment. It's very important to understand how deep we've gone with Infrastructure as Code here: even the bootstrapping of the system itself runs out of our Infrastructure as Code libraries. That means that if we make improvements in the catalog, even though Edge Lab is running completely isolated behind your firewall, it can pull in the latest Infrastructure as Code automation to build itself in comprehensive ways. As we make improvements to the Kubernetes installs or the lab automation, those can be brought in through the Digital Rebar catalog, which is completely open source. So for everything I'm showing you out of these catalog entries, you can go to the Digital Rebar repositories (if I can spell it correctly), look at the Edge Lab code, modify it, make improvements, contribute back to the community, and enhance this environment for your environment in ways that let other people leverage that work, just like we've done with the K3s and OpenFaaS installs. Over here is our catalog, and you can see the bootstrapping has come through and already set those up. It has also
It's also defined our edge lab infrastructure So that we can net boot those raspberry pies and it's to find itself Pi zero as this first machine so that it can complete the birth the work bootstrapping workflow That's done completely automatically for you after you start the infrastructure and we have other demo videos So I'm not going to show you the actual bootstrapping typing that initial command and watching it log in What I'm going to do instead is I'm going to go into the boot process for the other machines I have an edge lab on my desk and Because I'm using power over ethernet all I have to do is plug them in one at a time And they will go through and boot you can hear the clicks as the network cards start That bootstrap looks like this if I was to attach an HDMI capture card I would produce another video like this and watch the machine's net booting So it's going through a sample test infrastructure phase And then it'll very shortly actually boot the discovery operating system So we use an in-memory OS. This is why we prefer the four and eight gig RAM raspberry pies although other ones would work That will then attach the local storage and we will have the system running in an immutable state So we can net boot very quickly and do these resets And that's exactly what you're watching here is the pixie boot of those raspberry pies Coming into the system. So that's what's happening in the background. 
Otherwise, I'm waiting a little bit for the machines to come in and be created, and for the leases to come through. You can see the live event stream coming in and watching those. So if I refresh here, what you'll see is that I have the systems and they're available; they're being discovered. If I click down into one, you'll see that we've gone through our normal discovery process, which includes a very deep scan of all of the capabilities of the system, and inventoried it so that you can use that for downstream automation. So now that we have all three systems registered and ready to go, it's time to install Kubernetes. Digital Rebar does this as a workflow system, and it's worth showing you some of the swim lanes for that, because the actual install process only takes a couple of seconds, and I want to show you how that process works. What we've already gone through is bootstrapping the Infrastructure as Code system and downloading content; all of that happens automatically through our bootstrapping. When we kick in the next process, we'll take the discovered nodes, which we have now (I've shown you how they're netbooted, discovered, and registered in the system). Our next step goes through the Kubernetes install process: all of the machines start at the same time and work to elect a leader. Once that leader is elected, which only takes microseconds, the leader will proceed to download the Kubernetes (K3s, in this case) binaries while the other ones wait for the system to have that leader completely built.
It'll build Kubernetes, generate the credentials, pass them back to the rest of the cluster, and allow the systems to join Kubernetes. That whole process, once the binaries are downloaded, usually takes about 20 seconds. From there, we continue using the freshly installed Kubernetes: we'll install the dashboard, install Helm, and run any Helm charts that are set. And if we've set, say, OpenFaaS for our Helm charts, it'll complete that process and install OpenFaaS too, which I will demonstrate as part of this run. All of that only takes a few clicks to set up and get running. So let's get going. In this case, I'm selecting my three machines and I'm going to put the OpenFaaS profile on them; that profile has the Helm charts that are necessary to install OpenFaaS. We won't have time to get that fully running, but I want to show you what that looks like just to get it installed. What we're going to do is select the K3s workflow, install it on the systems, and then let that process begin. What you'll see happening here is the machines all start the Kubernetes workflow and go through exactly what I was showing you in the slides. If I jump into the shared profile that's part of this, you'll see that the leader of this cluster is machine 102. So if I come back here, look at the machine with IP address 102, and click into its K3s install, we'll see a live log of the system actually doing that install process. It's mounted the SD cards, and then it's running through building the K3s service and getting that turned on. It takes a little while for that K3s service to come up on these Raspberry Pis, and once it does, it will be able to generate all of the downstream components that we need to make things go. So now we're waiting for some settling time, and the cluster is now starting. So that's about it.
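The coordination flow I just described can be sketched in a few lines of shell. This is a simulation of the logic, not the actual Digital Rebar task: the node addresses, the lowest-address election rule, and the token format are all assumptions for illustration (in the real system, election happens through the shared cluster profile, and in the demo above the leader happened to be 102).

```shell
#!/usr/bin/env bash
# Simulated k3s cluster bring-up flow (illustrative only).
NODES="10.3.14.101 10.3.14.102 10.3.14.103"

# 1. All nodes start the workflow together and elect a leader.
#    For this sketch, assume the lowest address wins.
LEADER=$(echo "$NODES" | tr ' ' '\n' | sort | head -n1)

# 2. The leader downloads the k3s binary and generates a join token
#    while the other nodes wait. Hypothetical token format:
TOKEN="K10-$(echo "$LEADER" | tr -d .)"

# 3. The token is shared back through the cluster profile and the
#    remaining nodes join as agents.
for n in $NODES; do
  if [ "$n" = "$LEADER" ]; then
    echo "$n: k3s server (leader), token=$TOKEN"
  else
    echo "$n: k3s agent joining https://$LEADER:6443"
  fi
done
```

The key design point is that only the leader touches the internet-facing download and credential generation; everyone else blocks until those artifacts exist, which is why the whole join completes in seconds once the binaries are down.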
It's generating secure tokens and admin config files, and we can watch the system go. But hold on, I'm going to interrupt this presentation. I've realized it was 20 minutes long and I have a 30-minute slot, so here is some bonus material for you. In going through this demo really quickly, one of the things we missed is the Infrastructure as Code components, so that you can be part of the community and extend and add to this infrastructure. I'm going to take a couple of minutes, since we have them, to show you exactly what that looks like, because how you add to and extend the system is critically important: you want to build on top, not reset or recalibrate. So let me show you how this looks, and everything I'm going to show you is in GitHub. If I jump over to my GitHub repository, as I showed you before in Edge Lab, the parameters, profiles, tasks, and stages are all part of this. In Digital Rebar, the overall intent is called a workflow. Workflows are made up of stages; stages are made up of tasks. Tasks are where most of the work is done, and tasks can have subcomponents called templates. So if you have to build a configuration file out of YAML, or a bash script that you use in a whole bunch of places, or a Terraform plan or an Ansible playbook, all of those things can be created as templates, stored, and then accessible as files when you run a task. Pretty straightforward when you think about it.
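As a sketch of that hierarchy, here is roughly what those objects look like as content. This is a simplified, hypothetical rendering for illustration, not the actual catalog entries; the names, field layout, and the parameter's placeholder URL are assumptions.

```yaml
# Hypothetical, simplified sketch of the Digital Rebar object hierarchy
# described above; not the real catalog entries.
Workflow:
  Name: edge-k3s
  Stages: [k3s-install]          # workflows are made up of stages
Stage:
  Name: k3s-install
  Tasks: [k3s-setup]             # stages are made up of tasks
Task:
  Name: k3s-setup
  Templates:
    - Name: k3s-install.sh.tmpl  # most of the real work lives in templates
Param:                           # typed parameter with a safe default
  Name: k3s/download-url
  Schema:
    type: string
    default: https://example.invalid/k3s  # placeholder, not the real URL
```

Because each layer only references the layer below by name, any piece can be swapped or reused independently, which is the source-controlled composition that makes it Infrastructure as Code.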
This is really just building up an application, or a workflow automation, out of components, and doing it in a source-controlled way is what makes it Infrastructure as Code. From there, this workflow is composed of multiple stages. The modular aspect is essential to creating repeatable work, because you don't want to have to figure out how to image a server, or set its BIOS, or install an operating system, or install Kubernetes. We have a huge catalog of standard components that are proven at enterprise scale and can be applied even on small form factor devices. As you build those things up, you can extend and add yourself, and it becomes just a process of building up what those pieces look like. So let me go into one as an example. Here's our K3s install process: the k3s-install stage has a single task that installs K3s. (We're actually in the process of refactoring this to break it into even smaller units so that it's more composable.) That one task decomposes into a template. Now, you might be thinking, that's a lot of decomposition; why do I need all those pieces? It's about code reuse. The example I'm showing you, the deployed system, is entirely read-only. If I want to make changes to it, there's a way I can go through, build this content pack, and upload it as a unit. I don't edit things on an endpoint at all; I do it all as code, upload, and then test, and we have videos explaining exactly how to do this. This is essential for Infrastructure as Code. So if I look at my K3s task, this is the one I'm looking at, and you'll see it's very short.
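The task is short because the logic lives in the template. To give a flavor of the kind of step such a template starts with, here is a minimal sketch of an architecture check and binary selection. The version string and the decision to echo rather than download are illustrative; the real task pulls its download location from a typed parameter.

```shell
#!/usr/bin/env bash
# Sketch of an architecture-check step like the one described in the demo.
# The version is an illustrative placeholder.
K3S_VERSION="v1.21.0+k3s1"
case "$(uname -m)" in
  x86_64)        ARCH="" ;;        # k3s ships the amd64 build unsuffixed
  aarch64|arm64) ARCH="-arm64" ;;  # 64-bit Raspberry Pi OS
  armv7l)        ARCH="-armhf" ;;  # 32-bit Raspberry Pi OS
  *) echo "unsupported architecture" >&2; exit 1 ;;
esac
BINARY="k3s${ARCH}"
echo "would download ${BINARY} for ${K3S_VERSION}"
```

Putting a check like this first is what lets the same template serve the Raspberry Pis, NUCs, and full x86 servers without branching the automation.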
It is basically telling me to use this template, so I can come back up and look at the template, k3s-install, right here. This is exactly the same thing I'm showing you in the platform itself. All of this code here is what runs that Infrastructure as Code process, and it's actually pretty straightforward stuff. It checks the architecture. It opens up the firewall to allow Kubernetes to work correctly. It downloads Kubernetes. In this case, this part mounts the drives so that they're available. This code handles containerd; this sets up the storage platform. This code actually starts the download, and this part, if it's the leader, will start the bind process; otherwise it will wait. So all of the steps that I described are very clearly delineated in code, and because that code is out in the open, if you find a bug or an enhancement, you can extend it. There's another important point I want to show while we have a moment, and that's the concept of parameters. Some of the items used in that code are inferred from the system, but a lot of it actually comes out of the parameter system. Parameters in our system can be defined completely ad hoc, but you can also define them in advance, type them, control them, and provide safe defaults. So if I look for our K3s components here, I can come in and see that I have a download URL with a safe default in it; if you don't provide an override, it will use that default. That allows us to have this very great out-of-box experience without taking any control away from an operator. We do this consistently, over and over again, by having parameters that are secure, so that you can store secure data and not have it generally available, and also well defined and typed. So even if I come in and look at my cluster definition, the cluster leader spec here actually has a JSON schema, and that schema is enforced down to the properties, and the types on the properties. This type of careful
parameterization of your Infrastructure as Code ensures that the inputs into that workflow are consistent and vetted, and these things together create a very safe environment, so you can confidently extend and add without worrying that you're going to break some upstream or downstream dependency. And finally, how do you access all of this content? We've put together a catalog and made it very easy to extend and bring new components in. From the UX there's a catalog view; there's a CLI that lets you make catalog requests and request specific versions. This is backed by a giant JSON file, so you can look things up completely offline, and we have people in the community doing exactly that. So I can pick things that I'm interested in, say burn-in processes; I can pick a version, going back several versions or taking the latest; and then I can simply install them. If I click a button here, it will pull in the relevant content pack and add it into my system at the correct version. In multi-site, we actually have version sets that can synchronize this across a distributed fleet. There are some amazing consequences of having clearly delineated content and a catalog system baked into how you do and manage these deployments. And these aren't just content: they can also be system extensions that we call plugins. You can see them if you come back up into the repository here; those same library items that I added are listed for you to explore and extend. Thank you, I hope this little introduction to Infrastructure as Code was helpful.
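Since the catalog is just a big JSON file, offline lookups really are as simple as searching it. Here is a tiny sketch of that idea; the catalog entry shape, pack names, and versions below are hypothetical placeholders, not the real catalog format (a real setup would use jq or the CLI against the actual file).

```shell
#!/usr/bin/env bash
# Sketch of an offline catalog lookup, as described above.
# The JSON shape and names are hypothetical placeholders.
CATALOG='{"edge-lab":{"versions":["v1.0.0","v1.1.0"]},"burnin":{"versions":["v0.9.0"]}}'

pick="edge-lab"
case "$CATALOG" in
  *"\"$pick\""*) echo "found $pick in the offline catalog" ;;
  *)             echo "$pick not in catalog" ;;
esac
```

The practical consequence is the one mentioned in the talk: an Edge Lab isolated behind a firewall can still browse and pin exact content versions without reaching out to the internet.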
I have much longer Infrastructure as Code tutorials, and now, back to your regularly scheduled programming. Now, in this case, what you'll see is that Kubernetes on the workers is already finished; the leader is going through and doing additional work, in this case installing the dashboard and eventually installing OpenFaaS as part of its instruction set. So what I want to do here is jump into that profile again. You remember we have the leader, but now that the cluster has been built, we've collected additional data about what the system is doing. That means we have a token to log into the dashboard that's been generated for us, and we also have our admin.conf file. This file includes everything I need to access the cluster, and I can download it. I have pre-wired my environment, so from there, what I'm able to do is use kubectl, and I can just get the nodes. Since it already knows the admin.conf file from a known location, it's able to log in and show exactly what I was hoping for: the systems in my cluster configuration. I can also do things like get namespaces and see all of the infrastructure I've built. You'll notice we've already started building OpenFaaS in the background, along with the Kubernetes dashboard. So with no additional work on my part, I've built a very sophisticated platform on Raspberry Pis in a completely automated, repeatable way. What I'd really like to do now is log into that dashboard. The CLI is great, but some people prefer a dashboard that lets you see a little bit of what's going on. To do that, I'm just turning on kubectl proxy, and then I'm going to go in and find the dashboard. Now, the dashboard is actually nested down in the API namespaces, so I'm going to cut and paste; this is out of the docs for that parameter. And if I bring it up, you can see I've done this before in a previous run, and the system is no longer responding because we have
It's still downloading the containers. So this is a Container container based solution and it does have to download and pull things off the internet And so we do need to wait just a little bit for the Kubernetes dashboard to get up and running While we wait for that to happen I'm going to come back over and I'm going to get the there's my open FAS password I'm going to get the dashboard token. It's encrypted in the system. So I have to go collect that handy-cutten paste over here and so Now that the container downloaded I can get the Kubernetes dashboard prompt the token is exactly what I need to log in So I pulled that from cut and paste and now you can see that I actually have the dashboard ready to roll So I can check out the namespaces exactly what I saw in the CLI and if I drill down and look at Different components I can see exactly what's been going on in my cluster using the Kubernetes dashboard So even though this is k3s, it's kubernetes And so we can use whatever tools and applications and programming that we would normally not do Overall what I've just demonstrated is a five hundred dollar cluster that you do not require additional command lines and installations any additional work or learning to boot provision and get running and then of course you can go to the repos and See exactly what's going on because all of the infrastructures code components here are open source You can extend and watch and check and play and that's exactly what we want to do because the goal here Is that we're building a community of people at the edge who are able to share and leverage and collaborate both on hardware architecture Automation and platforms. 
That's the essence of building an open community, and we hope you will come along, check out Digital Rebar and edgelab.digital, and let us know what you think. This is Rob Hirschfeld with RackN. Thank you. Hello, that's me, live for questions, and I'm happy to engage with chat if people have interactive components or things they want to know. I do have a couple of questions, so I'm happy to go through those, and this is your time to go ahead and get that done. We've already had somebody ask where they can get more information about Digital Rebar outside of Edge Lab: rebar.digital is the place to visit for information about Digital Rebar, and edgelab.digital (we like that .digital top-level domain) is the place to go for all Edge Lab considerations, including detailed instructions and a bill of materials. Some of the questions we have been accumulating: are there hardware limitations? This session was very much about Raspberry Pis, and Raspberry Pis are in some ways the hardest thing to automate in this type of system, because they have very little out-of-band control. So no: pretty much every system and server that we put Digital Rebar under is available, and the system being used to drive Edge Lab on the Raspberry Pis also has all the software needed to run all the other types of servers. So you could boot NUCs, you could boot Fitlets, you could boot desktop servers, pretty much whatever you want from that perspective. So there is no hardware limitation. If you have out-of-band management, that's great.
We support enterprise-grade servers with out-of-band management; that's actually a lot easier than a Raspberry Pi, where there's nothing at all from that perspective. Speaking of out-of-band management, we had another question about that, and the answer is: pretty much whatever out-of-band management you want works, Redfish, IPMI, vendor tools. It's not required, so the systems will work if you don't have out-of-band management. We have a lot of testing and a lot of use cases with VMs, using the VM managers as the out-of-band manager; that also works. It's a really dynamic, flexible environment from that perspective. In this demo we showed K3s and a little bit of OpenFaaS. Those are not the only platforms; that was a good question. There are Kubernetes patterns in the Digital Rebar family, not in Edge Lab specifically, but we are actually bringing all of those things back in and should be able to do full Kubernetes installs. The goal for Edge Lab is to have the community participate, and everything that's done in Edge Lab generally translates a hundred percent into the Digital Rebar community, which is much bigger. Our goal is to have them feed off each other; the edge use cases sometimes have some additional flavor. Additional questions: I do speak dog, and my dog wanted to know about operating systems. Oh, and immutable boot; both things to consider together. One of the things we did that was actually really tricky here is that we got the Raspberry Pis to PXE boot, and we use an immutable OS. So the OS is actually running in memory, which means you're not burning SD cards. We do use the SD card for storage of containers and things like that, so you get the best of both worlds in this case. That is a really powerful thing.
We do it quite a bit with Digital Rebar, also with what we call Sledgehammer, our discovery OS, which is based on CentOS 7 or 8. But you can install Windows, ESXi, and Linux of course, depending on where you want to go. You can't do that on the Pis, clearly, but if you want to try, you're welcome to give it a shot. And the last question I'm seeing (let me see if anything new came in) is about open source. edgelab.digital is open source; please make pull requests and patches. The content packs and all of the libraries and content are open, so you can go and review how we do these installs. The Digital Rebar platform itself is a RackN product; it is licensed for this type of use without any charge at all, so you can run home labs and go crazy on your own infrastructure, up to 20 machines, with a self-trial license. That is a perfectly legitimate use, and you should be able to have a good time. There are tons more features in Digital Rebar, and this isn't about that, so please feel free to look at it. I think I am out of time; if you have any other questions, please feel free to contact me. I'm @zehicle on Twitter, or you can reach me through rackn.com. And we are giving away an Edge Lab kit as an unboxing community experience, so if you go to the RackN website, in our blog we have a link to how to register for a kit. Please do that; we'd love to put this in somebody's hands and have them get that unboxing experience and share it with everybody else. Thank you.