So today I'll be talking about StarlingX edge deployment of Kubernetes clusters. I'll introduce StarlingX a little bit first, to make sure people understand what it is, and then I'll give a short demo of us remotely deploying what we call a subcloud on a distributed cloud. StarlingX is an edge cloud technology: it's basically a Kubernetes cluster built around the CentOS operating system. We can optionally run OpenStack on top, but it's a cloud deployment. We can deploy it in a number of configurations, as you can see here. For a typical edge configuration we can go all the way down to one or two nodes: a single node in a hyperconverged configuration if you don't need high availability, or a duplex (two-node) configuration if you need redundancy or higher capacity. One of the other configurations we have is what we call distributed cloud. In distributed cloud, we take a central cloud and run it as a control plane: monitoring, deploying, and updating all of the subclouds, and also acting as a container registry for them. The subclouds themselves are all autonomous; each one is its own Kubernetes cluster, typically targeted at high-availability, low-latency applications. With our distributed cloud configuration we can deploy out to a thousand sites today, so we can have a thousand sites all controlled, monitored, and installed from a central site. And as I mentioned, they're all basically Kubernetes, but we can also run OpenStack on them if you need virtual machine support alongside container support. Container support is what we get out of the box, and then you can run virtual machines. Although you can, we're not running Kubernetes on virtual machines.
We're running Kubernetes next to virtual machines. So this is a little bit about what I'll show you, and the steps involved. We have a system controller installed, and we want to deploy a single subcloud, remotely. The assumption is that the server is out of the box: nothing has been installed on it, no operating system, and it has a baseboard management controller (BMC) in it. The phasing is that we initially do the remote install using Redfish from the system controller, installing the operating system and Kubernetes. We can do that in two ways. One is that we can pre-stage the files, that is, put the files locally on the machine. This helps in a scenario where you want to just put the machines out there and you don't want to download over the network, because downloading an entire image, plus a lot of containers from a remote registry, can take a lot of bandwidth on what might be a spotty network; it can take hours just to transfer all those files. So you can pre-stage them, and that's the primary path we'll show here. Alternatively, if that path isn't available, or if the pre-staged files are stale because there's been an update since you deployed them, then it can transfer the packages as well, and that happens automatically; you don't need to make any compensation for it. In the next phase, once we've rebooted and installed the software, we'll pull the images, because there are containers that are part of our platform, then configure it, and at the end we'll manage it. What I'll do now is run through an eight or nine minute demo of us doing a remote deploy, in this case from a command line. So the first thing we'll do is just list the current number: there are no subclouds.
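To put a rough number on the "hours to transfer" point: the figure mentioned later in the Q&A is about four hours for a two-gig file, which works out to a link sustaining only a bit over a megabit per second, plausible for a low-priority management path. A quick sanity check on that arithmetic:

```shell
# Effective throughput implied by "2 GiB in 4 hours" (figures from the talk).
bytes=$((2 * 1024 * 1024 * 1024))   # 2 GiB ISO
seconds=$((4 * 3600))               # 4 hours
awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.2f Mbit/s\n", b*8/s/1e6 }'
# prints "1.19 Mbit/s"
```

At that rate, pre-staging the image (or factory-installing it) is clearly attractive compared to pulling everything over the WAN at deploy time.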
We currently have a system controller with no subclouds underneath it. Then it's basically one command, and all we need to pass in is some YAML files that give it the information about how to bootstrap: where the registries are, the credentials for those registries, things like that. Also what the IP address is; once it boots up it needs to come up with an IP address so we can manage it from there. In this case, we're prompting for the passwords. So now the system controller is kicking off in the background. It's creating an ISO, and it's going to download that ISO to a virtual media device on the baseboard management controller at the remote site. That's what's happening during the pre-deploy phase: it creates the ISO, downloads the ISO, and then it will boot the remote device off of it. In this case we're just going to do one node; we could end up installing an entire cloud. When we do that, it installs from the first node: the only node we have to do all of this on is the first node; from then on everything downloads locally. Then here on the UI you can see the details. Right now it's in an offline, disabled state and everything's unknown, because we're currently in the process of rebooting, and in a second we'll show it doing the pre-install process.
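The "one command plus some YAML files" might look roughly like the sketch below. The field names follow StarlingX bootstrap-values conventions, but treat the exact schema, file names, addresses, and registry URL as illustrative assumptions rather than the demo's actual inputs:

```shell
# Hypothetical minimal bootstrap values for a one-node subcloud.
# All addresses and the registry URL are made up for illustration.
cat > bootstrap-values.yaml <<'EOF'
system_mode: simplex                    # one-node subcloud, as in the demo
name: subcloud1
management_subnet: 192.168.101.0/24
management_gateway_address: 192.168.101.1
external_oam_subnet: 10.10.10.0/24      # the OAM address it boots up with
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.2
docker_registries:                      # where platform images come from
  defaults:
    url: registry.central:9001          # hypothetical central registry
EOF

# The single command that kicks everything off from the system controller
# (commented out here; it prompts for BMC and sysadmin passwords):
#   dcmanager subcloud add --bootstrap-address 10.10.10.2 \
#       --bootstrap-values bootstrap-values.yaml \
#       --install-values install-values.yaml
grep -c 'external_oam' bootstrap-values.yaml   # sanity check: prints 3
```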
So now it's going through things like partitioning the disks, getting ready for our root partition and the other partitions we need, and then it goes through and installs all of our packages. There you can see things like the Kubernetes packages being installed. We cut some out of the middle; there are about 1200 packages total being installed there. When the ISO boots, it's actually doing an auto-install, so we're watching the auto-install here. The files are in the ISO, so it's pulling them out and installing them. And now it's setting up some of the apps; you can see we're tailing the Ansible log file, so we're basically running an Ansible playbook here to do the install. Then it comes back, finishes up, runs the post-install scripts, and now it's rebooting. So this is the reboot step; we'll see the HP splash screens in a second. This is it booting up with the operating system we've just installed; Kubernetes, all of our services, and StarlingX have been installed. And the next thing we'll do is run another Ansible playbook to do the next phase: a lot of the configuration. We've installed the operating system and the tools, but we still need to configure them. So here we're going to boot into Linux and we'll see the login prompt in a second. Back on the system controller we're just waiting for it to come up: we've given it the commands to boot, configure, and install, and now we're waiting for it to come back up and respond over the network. All of this was initiated by that first dcmanager subcloud add command, so we haven't had to do anything manually.
We're just waiting for things to happen. We're still offline and disabled; you can see up at the top we're still in a disabled state for that single subcloud. That dashboard, by the way, is showing the system controller; if we had a thousand subclouds, you'd see all of them. So now we're running the Ansible playbook that really does all the configuration, and it's doing a lot of things. It's configuring Keystone, since we use Keystone for identity; we use some of the OpenStack services, without running OpenStack, inside Kubernetes, like Barbican and Keystone. Now it's restarting some of the services after configuring them and waiting for that to happen. (Yes, that's part of the platform; Keystone and Horizon are what we use as parts of our distribution.) Now it's doing things with the registry. There's a central registry, and now it's also creating a local registry on the subcloud, so from now on, for instance when our platform reboots, it can get the container images from the local registry, not from a remote registry. And now it's waiting for controller-0, which is always our first node in a subcloud, to come up, and then waiting for the Kubernetes pods that are part of our platform. At this point we're still getting the platform up and running, with Ansible bringing it to the point where it's ready to run workloads. There's also some Armada and Helm work going on; that's what's happening here. We run through all of that, the configuration, and a reboot, and then when we run a dcmanager subcloud list command we'll see that we're now unmanaged, but in the bootstrapping state.
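The point of the local registry step is that platform pods can restart without ever touching the WAN. As a rough sketch of what pointing a container runtime at such a local mirror looks like, here is a hypothetical containerd-style fragment; the registry addresses, file path, and schema here are assumptions for illustration, not StarlingX's actual generated configuration:

```shell
# Hypothetical registry-mirror fragment: image pulls that would go to the
# central registry get served by the subcloud's own local registry instead.
# Addresses, path, and TOML schema are illustrative assumptions.
mkdir -p etc-containerd
cat > etc-containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.central:9001"]
  endpoint = ["https://registry.local:9001"]
EOF
grep -q 'registry.local:9001' etc-containerd/config.toml && echo mirror-configured
```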
So it's still not quite done, but now it's getting to the end of the actual bootstrapping; a few more seconds here. And it actually does another reboot, so this is it shutting down again; we'll see the splash screen come back up. When it comes back up, basically everything is installed and everything is configured. The only thing left is that we haven't closed the loop to say we actually want the system managed. So it'll be out of sync and unmanaged initially, but it should be in a complete state. It's ready, and we'll do one more manual action to say: I want to manage this. So now it's unmanaged but complete, and it's out of sync, and it won't be in sync until we actually manage it. If we go back up here, that's telling us basically the same thing; the only thing in sync is the certificates. And this is the step, the one manual thing: now I'm going to say it's ready, it's up, it's in an unmanaged state, and I want to manage it. So I go back to the subcloud list: it is now managed, complete, and out of sync. What's happening right now is that it's going to synchronize everything, because there's an inventory of alarms, and that all gets synchronized. You can see there's one major alarm that will actually go away in a second, and you can see the services getting in sync. We check one more time here, and now everything is green: managed, online, install complete, in sync. Now we can SSH in.
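The closing-the-loop step above is that single manage action. A sketch of the CLI sequence (dcmanager subcloud list/manage are the distributed-cloud CLI verbs from the demo; the subcloud name and the exact output columns shown in the comments are assumptions):

```shell
# Sequence from the demo, as commented CLI calls (not runnable without a
# system controller):
#   dcmanager subcloud list      # -> unmanaged | online | complete | out-of-sync
#   dcmanager subcloud manage subcloud1
#   dcmanager subcloud list      # -> managed   | online | complete | in-sync
#
# The states the subcloud walks through in the demo, in order:
states="installing bootstrapping complete managed in-sync"
echo "$states"
```

Until the manage step runs, the subcloud sits at complete/unmanaged/out-of-sync; managing it triggers the synchronization of inventory and alarms described above.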
So this is SSH into the subcloud itself. We're just going to check one thing there, to see if it's happy: we run an alarm command to see if there are any open alarms from the subcloud's own perspective, not from the system controller's perspective, and there are none. If we go back here, everything's green. So we've basically done remote, zero-touch provisioning: we've deployed this entire subcloud with StarlingX. And if we wanted to add more nodes, we could do that as well. All right, we're about done; if you have any interest, we're upstairs if you want to talk to us in detail. If you're interested, there are some links to the StarlingX site. It is an Open Infrastructure project, so we're up there and there's a lot of detail there. And I guess, any questions?

[Question about IP addressing.] Yes, typically what we're doing is installing something like a duplex system. We actually have floating IPs, and we have to be able to set all of those IPs statically. That happens when we create the ISO on the system controller: one of those YAML files says what the IPs are, what we call the OAM address and the management IP addresses, and it will automatically install those IPs. So you burn the ISO and then push it down.

[And is that cached, if you have a bunch of nodes that you're going to install at site X?] Yeah, if we install a bunch of them, we'll actually create that ISO file uniquely for each of the nodes, because they might have other things that differ besides the IPs. The ISO is about two gigabytes; that's why there's the option of being able to pre-stage those files, or factory-install them; even second-touch is better, right?
So you don't have to do that transfer, but you lose the ability to put a completely empty, uninitialized server out there. What you save is the network: these networks you're putting edge clouds on can be very touchy, with very limited bandwidth and low priority for management traffic. So it helps you there, because you don't have to deal with, say, a four-hour transfer time for a two-gig file. And you had a question?

[Question about discovery.] As part of the provisioning, what we would see first is that we're told what the baseboard management controller (BMC) address is, and that's all we need. If that IP address is reachable, then from there we can do everything we need to do, which is to push down that virtual media image and reboot the machine through the BMC. Right, there's no actual discovery. Correct, no actual discovery of the BMC: we're told the BMC's address and credentials. For that we're using Redfish.

[Question about PXE boot.] Well, we're remote, so we can't really PXE boot remotely over a layer 3 network back to a system controller. What we do, after we install that first node, called controller-0, is PXE boot any future nodes, like a secondary controller or worker nodes, from the controller at the site. So there's no going back over the network for those, and that's typically over a private network, like a private 10-gig network, so you don't have to go over the customer's network. No, you get one ISO pushed down to the site, and everything else is PXE booted from the controller at that site.

Right. Yes, yes. Yeah, we don't have the idea of a seed node or anything like that: once we install that first controller, it becomes the source for everything locally from that point on, and the only reason to go back to the system controller is if we're pushing down upgrades, or pushing up alarms or inventory, things like that. It is, but it's also a full StarlingX node, hyperconverged,
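The BMC interaction described in that answer, inserting a virtual-media ISO over Redfish and rebooting, can be sketched roughly as below. The BMC address, credentials, and exact resource paths vary by vendor; the ones here are assumptions based on the standard Redfish VirtualMedia and ComputerSystem schemas, so treat this as a shape, not a working recipe:

```shell
# Hypothetical Redfish virtual-media flow; BMC address and credentials are
# placeholders, and resource paths differ between BMC vendors.
BMC=https://10.0.0.50
AUTH='-u admin:password -k'

# 1. Attach the generated ISO as virtual media (standard Redfish action name):
#    curl $AUTH -X POST "$BMC/redfish/v1/Managers/1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia" \
#         -H 'Content-Type: application/json' \
#         -d '{"Image": "https://sysctl.example/subcloud1.iso"}'
# 2. Boot from that media once, then force a restart:
#    curl $AUTH -X PATCH "$BMC/redfish/v1/Systems/1" \
#         -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideEnabled": "Once"}}'
#    curl $AUTH -X POST "$BMC/redfish/v1/Systems/1/Actions/ComputerSystem.Reset" \
#         -d '{"ResetType": "ForceRestart"}'

# Build the reset payload locally so its shape can be checked:
payload='{"ResetType": "ForceRestart"}'
echo "$payload"
```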
so it can be running workloads as well. Yeah, we don't reserve a node for that purpose. I'm sorry, I'm not sure I understand; you're pushing... You could, through Redfish. If you want to take this offline, we can talk in more detail; I'd like to understand your use case. I think you might be getting close to some other use cases we're looking at, because when you get into industrial use cases you start to have a lot more devices that aren't cloud-enabled devices, right? So how do you talk to them? Yes, okay. All right, thank you.