Hello, and welcome to this ONE Summit presentation on the LF Edge Community Lab. My name is Lincoln Lavoie, and I'm very excited to be here with you today, albeit in a virtual session instead of in person sharing a beer as we discuss the lab or dive into more of the details. I'm looking forward to getting together again soon in the future, but until then, I hope you'll stick with me for this tour through the Community Lab: some of its purposes, how you can interact with it, a demo of how you utilize the resources, and a brand new feature that will be rolling out to the industry and to you all in October. As I said, my name is Lincoln Lavoie. I'm a principal engineer here at the University of New Hampshire InterOperability Laboratory, I'm currently serving as the LF Edge Community Lab TAC subcommittee chair, and I also serve as the technical chair of the Broadband Forum. The InterOperability Laboratory is an independent lab hosted here at the University of New Hampshire, where we've been working closely with LF Edge and other open source communities to offer these types of resources to support the development efforts out there in the world as you all make your contributions to open source. So, in short, the question is: what is the LF Edge shared lab? Really, it's a way for the LF Edge open source community to have access to hardware resources needed for development and testing. Those resources might support efforts where your project needs specific hardware, where you need to work on systems that are slightly larger than what you might have available in your home lab, or where you need to collaborate with peers from another company, and it makes it easy to work together. In detail, what the lab really is, obviously, is a set of many servers; we're currently able to provide both Intel and Arm architectures.
There are IoT gateways, virtual machines, and other networking resources available. LF Edge community members such as yourselves reserve and book these resources through an online portal; we'll show that in the demo towards the end of this session. The lab has a number of automated systems that deal with the back end: providing you with a VPN credential so you can reach the lab and those resources once you've booked them, setting up those resources, making them available and ready for you, and getting operating systems installed. Then when the booking completes and you're done with your work, that hardware automatically gets recycled back into the pool so it's ready for the next user. We do this to encourage good turnover and to make sure the community always has access to resources. Some common activities might be somebody doing blueprint validation for the Akraino project, or building up a demo in the EVE or ELIOT projects, just as a couple of examples of how the lab resources have been utilized. Governance-wise, the lab really grew out of what was the Akraino project and has opened up to the LF Edge community as a whole. With that, the lab is now organized directly underneath the LF Edge TAC, and we host a regular bi-weekly meeting, essentially off-cycle from the bi-weekly TAC meetings. LF Edge projects would typically have one representative participating in that community meeting to make sure your needs are being met by the lab, and the committee is responsible for overseeing requests and approvals, such as changes to bookings.
For example, you might want to extend your usage past the normal booking time, or proceed with hardware purchases or additions to the lab. The common question, obviously, given the number, size, and scale of the hardware that's available: who can use it? It's really a community resource for LF Edge as a whole. Anybody who has a Linux Foundation login and is actively working on LF Edge projects is entitled to schedule and book resources within the lab. There is an acceptable usage policy that governs the usage of the lab; it does mention not doing negative things to the lab or misusing it for other purposes. If you're doing cryptocurrency mining or something like that, that would be frowned upon. It also currently restricts the work to LF Edge projects, compared to other open source projects that you might also be participating in, just because that's tied to where the funding of the lab comes from. On that funding model, the lab is supported by the LF Edge community and LF Edge at large, so there aren't individual costs to the projects or community members when you're utilizing the lab. This actually provides a really good alternative for getting access to the types of compute resources that you might not have, or that might incur a cost to the projects, compared to using a public cloud resource. Additionally, if projects need additional hardware, that can be organized and presented to the TAC subcommittee, as I mentioned, and there are some budgets available each year to add to the lab or increase the resources within the lab as needed by the projects. One of the common questions, since as I described this is a booking process where people come into the lab through a web portal, is what happens if I need the resources longer, and the answer is pretty simple and straightforward.
Obviously, just ask for an extension. The lab is incredibly flexible; we're here to support the LF Edge community. But we do need to limit the bookings initially, just because we are hardware constrained and these can be a scarce resource, and we want to make sure the resources are available for community members when you all need them to facilitate work on your projects. So we limit the initial bookings you can make directly through the website to a three-week time window. Obviously, that might not fit exactly with how you're utilizing the lab, whether you're doing a longer development story with the community or you need a little more time to complete a blueprint validation or something. So there's a lot of flexibility where we can actually extend a booking out and allow things to go longer; all you need to do is work directly through that TAC subcommittee. That's one of the reasons we recommend that one community member from each of the projects participate in that TAC subcommittee process. Inside the lab, users really have complete control over their allocated system. This is one of the things that might be a little different from a traditional cloud environment, where you might be limited to the operating systems or dependencies that are available. Here, at a fundamental level, you're essentially booking bare metal access into the lab. That means you can install your own specific operating systems, or you can pick from operating systems that are already available in the lab, and you can make networking configurations or changes as needed.
So if you need specific networking between a couple of your different bare metal servers, or between your server and an edge gateway or something like that, that can all be facilitated either directly through the dashboard or by working with the lab staff to make the changes within the lab. This works really well for middleware-type projects that need to customize their underlying environment. You can get a couple of different environments stood up if you need, get your ideas working, and have a recipe to do exactly that; and we're working to make those recipes even easier, as we'll see in the demo later in this session. Talking through the lab structure and how this all works, the 50,000-foot view of the lab is really this: if you're a user out on one side of the internet, a VPN tunnel gets established into the lab, with credentials provided as part of that booking process. Inside the lab, there's the core switching infrastructure, then a number of jump and virtual servers that you're able to get access to, and then the bare metal resources that we essentially call a pod within the lab. Those pods, as I mentioned, can be either x86-based or arm64-based. There are also some other miscellaneous resources within the lab, such as IoT gateways and things that are slightly smaller and more purposeful; things like cameras or other sensors can be added to the lab as needed to facilitate projects that might depend on those types of resources. So once you're connected over that OpenVPN tunnel, you have credentials, also provided through that booking process, for your specific server or pod, where you're then able to access those resources to facilitate the development or the work you're doing on your project.
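As a rough sketch of that access flow, the connection sequence from a user's machine looks something like the following; note that the `.ovpn` file name, username, and address here are placeholders of my own, not actual lab values.

```shell
# Illustrative only: configuration file, username, and address below
# are placeholders, not real credentials or hosts from the lab.
sudo openvpn --config my-booking.ovpn --daemon   # bring up the VPN tunnel
ssh myuser@10.10.0.5                             # then SSH to the booked pod
```

The VPN configuration and the SSH access both come out of the booking process itself, so there is nothing extra to request once the booking is confirmed.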
Looking just at a high level, and I know this is a little bit of an eye chart on the screen, I do want to point out some of the diversity of the hardware and resources available in the lab. As I mentioned, we have both x86- and arm64-based systems, and some of those systems actually come with different generations of chips; for example, both older and newer Ampere systems are available within the lab. You also have different server vendors, with servers from Dell, Intel, HPE, Gigabyte, and Lenovo making up the lab overall. That really helps us facilitate the different BMC interfaces you may need to work with in your projects, and it provides you with the ability to pick the resources best suited to the work you're undertaking. All of these are documented within the wiki, with a lot more detail than just what you see in the table here, so you know how to select the specific pod or resources you need to complete your work. If you haven't already done so, I'd recommend taking a trip over to the LF Edge wiki; you can browse through the lab resources and see what's available for booking. As I mentioned, a common question is: what if something's missing? My project needs X; can I get X added to the lab? Quite possibly, yes. Requests for hardware donations or purchases are part of that subcommittee's job; those are forwarded to the TAC, once approved by the subcommittee, for the final budget approval and sign-off. In general, the resources should be usable by other projects; they shouldn't be something so specific as to support only one project or niche, but that can obviously be a little more generalized depending on the size and scale of the resource being proposed. So how does this actually work?
We'll go through this in detail in the actual booking process, so don't worry. Essentially, you log into the Lab-as-a-Service portal for LF Edge, which utilizes your LF ID credential to sign in, and then you click Book a Pod; it'll be pretty obvious, as you'll see when we get to the website. From there, you select the currently available community lab, and that highlights the available resources that can be booked from that lab. Some of the resources might show as zero, meaning they're already booked; some might show as one or more resources available for your booking. From there, you're able to provide some info on your project: what specific project are you working on, what's the purpose, as well as setting the length of your booking. Then, as an optional step, you can add some collaborators or other users that should have access to your booked resources. What that does is have the system automatically add their SSH keys into the system, as well as email them their VPN credentials to get them access. So that's it from your side. On the lab side, there's a little bit more that happens in the back end, obviously, with respect to setting up the systems and getting everything ready for you; that takes a little bit of time, as we'll see in the demo here. So without further ado, why don't we jump over to our demo, get into the dashboard and some of the ways you can utilize this, and then access the servers. Bear with me; I'm going to jump out of presentation mode here for just a second, and we'll pull up my screen with the demo. This is the main portal system that you have here. If you're not logged in already, there would just be the typical login button in the top right that drops you over to the LF ID system and then routes you back into the portal after you enter your username and password.
From there, the system gives you a fair amount of status: what's going on in the lab, what systems are available, user lists for users that have made themselves public (i.e., people you could collaborate with), as well as some info and booking statistics. But today, we're really going to focus on booking a pod directly. If I click over to Book a Pod, as I mentioned, it's kind of a two-step process. You pick the lab; right now there's just UNH-IOL in here, but the dashboard is set up to support multiple labs, so if we have peers or partners around the world that want to come online, we can work with them. Then you have the available resources within that lab. We're going to focus today on the x86 demo pod and book that. From there, we just go down and set some details: the ONE Summit demo as our purpose, the project as "demo" for the time being, and a length of two days for the booking, since this is a pretty temporary thing. From there, we go through and set the host name, like "demohost" for the host name that's going to get pushed onto the system, as well as pick the operating system we want to utilize, like Ubuntu Server in this case. These are just the operating systems that are available and can get auto-installed onto the hardware; obviously, you're free to use the virtual media on the hardware if you want to push a different OS or a different configuration out there, no problem whatsoever. There's also this collaborators field, where you can type in the email addresses of your peers and other portal users. Essentially, when the system is spinning up your infrastructure, those users will get added to your booking, so they'll get their own VPN credentials sent to them.
If they don't already have them, their user accounts and SSH keys will get added into the system as well, so they have access to those same resources. That makes it very easy for you to collaborate with peers on the project around the world. And then we're going to talk a little bit about this cloud config. This is the newest piece that we're rolling out here at ONE Summit to the community. One of the things that a lot of folks need is the ability to take these servers from the proverbial zero to 60, and we can do that with cloud-init now. Cloud-init has become a pretty typical thing. We're going to run through adding a user, specifically my user and my SSH key, as well as one of my collaborators here in the lab, and setting some sudo privileges and things like that, nothing too specific. And then we're running some commands, probably pretty common to a lot of y'all, to basically get this ready to be a worker node in a Kubernetes cluster: getting the upstream mainline Kubernetes installed, installing the packages and so on, as well as a couple of other spare packages I wanted on the system to help with some of my debugging, vim and tcpdump. So you just copy in that cloud-init file, drop that in there, and that's it. You can do any number of things in those cloud config files; the only thing we recommend you don't do, obviously, is change any of the networking in terms of the IP addressing assigned to the server, because you can essentially cut the server off from yourself and your access to it. From there, you can see the booking process through. While we're waiting for that booking to come up, let's take a quick scroll through what you see on the booking details; you can always get back to this page by looking at the booking details from your account.
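To make that cloud-config step concrete, here is a minimal sketch of the kind of file described above, written out via a shell heredoc. The user name, SSH key, and the exact bootstrap command are placeholders of my own, not the actual file from the demo; only the overall shape (a user with sudo rights, extra debug packages, and a run command to prepare the node) follows the description.

```shell
# Write an illustrative cloud-config; all values are placeholders.
cat > user-data <<'EOF'
#cloud-config
users:
  - name: demo-user                         # placeholder account name
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... demo-user       # placeholder public key
    sudo: ALL=(ALL) NOPASSWD:ALL            # grant passwordless sudo
    shell: /bin/bash
packages:                                   # extra debugging tools
  - vim
  - tcpdump
runcmd:
  # Placeholder for the Kubernetes bootstrap described in the talk,
  # e.g. installing kubeadm/kubelet/kubectl to prepare a worker node.
  - echo "bootstrap kubernetes worker here"
EOF
echo "wrote user-data with $(wc -l < user-data) lines"
```

As noted in the talk, anything goes in this file except reconfiguring the server's assigned IP addressing, which would cut off your own access.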
Essentially, you can see a pod descriptor file. This is a YAML that describes all the hardware allocated to you, as well as the addressing for that hardware, like the NICs; it gives you some information on the interfaces you have. One of the interesting things with respect to this demo is that if you come down and look at the diagnostic information, you've got some config files. There are actually two cloud-init files there. One is the system-generated cloud config; that's the one pulled in from our system automatically, and it's where we set the IP addressing information for the server so that you can talk to it, as well as the user names and such from the portal. The other cloud config file is the one that we provided directly, so you can see the users that are in there, as well as all the commands we asked it to run. Those are both of the cloud config files you've got on that system. So please stand by; through the magic of video editing and demos, we'll make this go a little bit faster, and once your server is all provisioned, we'll circle back around to show you what that access looks like. Okay, welcome back. As I said, that was quick; in reality, it's about a 10 to 15 minute process while the back end systems get the OS installed on the servers and ready for you. So let me come over here and grab my VPN settings window. In your email, you would have a notification from the system with your VPN configuration and credentials, if you didn't already have that from a previous setup. I've already set this up on my system, so I'm going to start my VPN connection to the lab, and from there I'm going to grab a terminal window; hopefully this is big enough for you all to see well on your screen. The VPN's connected up, so I'll just SSH over to that system at demohost. This is coming from the days of Akraino.
The domain name there still picks up a little bit of legacy from that. You can see I can SSH to the server; that just works, because my SSH keys were already pushed onto the system. So I'm logged into the demo host. Let's make sure we've got sudo privileges on here; sudo works, awesome. Let's just check that this is what I wanted. Yep, we're on Ubuntu Server there. And let's make sure we got the software I wanted on there. Good. Yep, kubelet is all installed. So as you can see, the system is basically up and ready to go to continue our work. From here, I would just move on to whatever the next steps are in my development process and keep working on that. So let me flip us back over to our slides here, or to our demo piece of it; sorry, I need to grab the right screen. Okay. That is our lab demo, and I'll leave you at the end of that with the links and resources in the slides, which you'll be able to download, on how to get started and get connected into the LF Edge Community Lab. From there, I wish everybody a great ONE Summit event and a great Kubernetes event following that, and I hope to grab a beer with you in the future someday and discuss more about the resources from the LF Edge Community Lab. So have a great day, everyone.
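For reference, the checks walked through in that terminal session amount to something like the following; the host and user names are placeholders, and the last two checks assume the Kubernetes and debug packages from the example cloud-config were installed.

```shell
# Illustrative session; 'demohost' and 'myuser' are placeholders.
ssh myuser@demohost       # key-based login; no password prompt expected
sudo -v                   # confirm sudo privileges were applied
cat /etc/os-release       # confirm the requested OS (e.g. Ubuntu Server)
kubelet --version         # confirm the Kubernetes bootstrap ran
dpkg -l vim tcpdump       # confirm the extra debugging packages landed
```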