Okay, I guess I'm just going to start. Hello, everyone. I'm Babak Sourashki, and I'll be speaking to you today about StarlingX, just an overview of how to get started with it. First, here's the agenda I will cover: StarlingX Kubernetes, what is it and what does it do? I will share with you a starting point, where to start playing with StarlingX if you just want to experiment with it, how you would go about it. And lastly, I will ask you to consider joining the community and contributing, and answer questions.

First of all, what is the StarlingX Kubernetes project? It's a private cloud software project that deploys Kubernetes across dedicated servers distributed across geographic locations. It scales from a single server, the simplex model, to a distributed cloud architecture that also includes duplex systems at remote locations along with storage and worker nodes. It provides you with management and orchestration tools for distributed deployments and is, in effect, a single solution that scales very well. That, along with its performance and HA characteristics, makes it very well suited for distributed cloud deployments as well as edge deployments.

It provides you essentially with both day-one and day-two operations: day one being the day when you receive the order to go ahead and do the installation and configuration of your infrastructure, and day two being the operation and fault management of all components, including the server hardware, operating system, and kernel, as well as the components that are included, such as Horizon and Barbican, and the services you might have running on top.

This slide is a little bit outdated, but it nevertheless shows what is involved with StarlingX. The purple side is where StarlingX adds value. It integrates very well with Kubernetes.
It sits on top of a hardened Linux, along with a number of open source and OpenStack projects such as Horizon, Ceph for storage, and Calico. On top we have Kubernetes, with Helm as the client. As an application manager we have Armada, but Armada is being phased out of StarlingX; it's going to be FluxCD. And there is a private Docker registry that sits on the controller, which in the case of a distributed cloud would be the system controller, and in the case of a duplex would be just the controller itself.

Scalability of StarlingX: as I said, it scales from a single server, where you have a controller that can function as both storage and worker, to a dual or duplex architecture where you have two controllers and may choose to add storage and worker nodes, up to multiple servers. So basically, as I mentioned, the deployment models scale from one to hundreds of servers for a wide array of use cases. In telco it would be vRAN; or industrial IoT, autonomous vehicles, as well as pure computation. If you want to do augmented reality or virtual reality, StarlingX does support the inclusion of GPUs.

Its aim is to minimize the infrastructure footprint. So in the case of StarlingX, let's say you end up with an Intel processor that has 64 cores: we use only two physical cores for the platform, for managing the platform and the devices. With hyperthreading, those two full cores are four logical CPUs. As such, you would have about 60 cores dedicated to your applications.

As I mentioned, its models include the simplex, which is a single server; the duplex model, a minimum of two servers, with optional worker and storage nodes; and all of the above in a distributed cloud environment as well. Okay, that's basically a two-minute introduction to StarlingX, but where do you start with it?
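The core budgeting above can be written out as simple arithmetic. This is a sketch of the numbers from the talk; reading the 64-core figure as 64 logical CPUs with two hardware threads per core is my assumption, since that is what makes the stated totals line up:

```shell
# Platform core budget on a hypothetical 64-CPU StarlingX server.
# Assumption: "64 cores" is read as 64 logical CPUs (hyperthreading on).
TOTAL_LOGICAL=64          # logical CPUs on the server
PLATFORM_CORES=2          # physical cores reserved for the platform
THREADS_PER_CORE=2        # hyperthreading: 2 hardware threads per core
PLATFORM_LOGICAL=$((PLATFORM_CORES * THREADS_PER_CORE))
APP_LOGICAL=$((TOTAL_LOGICAL - PLATFORM_LOGICAL))
echo "Reserved for platform: $PLATFORM_LOGICAL logical CPUs"
echo "Available for applications: $APP_LOGICAL logical CPUs"   # → 60
```

The point of the design is that the platform overhead stays fixed at two physical cores regardless of how large the server is.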
And I've been talking about distributed cloud, with a central controller or central cloud. On top, that is where we would start with our central cloud, or system controller; the installation and provisioning of the edge clouds would then be instantiated from the central cloud, or system controller as we call it. And if you count the number of servers we have here, it's about 20 servers. So the question is, how do you experiment with StarlingX if you need 20 servers in a distributed environment? Right, not many of us have 20 servers handy.

So StarlingX has a tools project that gives you the build tools, a consistent software development environment for building, as well as scripts for virtualization. By default it supports virtualization of the simplex and duplex models, but as I mentioned, a distributed cloud basically consists of simplex and duplex systems, where the duplex has to be in the central location and the sub-clouds can be simplex. So theoretically we can build a distributed cloud with these components. What we do here is instantiate the duplex on the right and call it the central controller; on the left we instantiate the simplex and call it a sub-cloud; and what's in between them is basically plumbing for the connection between the central cloud and the sub-cloud. Between them there is an OAM connection and there is a management interface. The management interface has to be routable, so the plumbing here basically gives us the routing.

I have this set up on one of my laptops here, and later this afternoon I have another session where I will demo instantiating the distributed cloud on my laptop. I also have a set of scripts, at the end of this presentation, which you can download and use.
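The "plumbing" between the two VMs can be sketched as a routed libvirt network, so that the sub-cloud's management subnet is reachable from the system controller VM. This is a minimal sketch, not the project's actual scripts; the network name, bridge name, and subnet are illustrative assumptions:

```shell
# Sketch: a routed libvirt network for the sub-cloud management interface.
# All names and addresses below are illustrative assumptions.
cat > subcloud-mgmt-net.xml <<'EOF'
<network>
  <name>subcloud-mgmt</name>
  <forward mode='route'/>
  <bridge name='virbr-scmgmt' stp='on' delay='0'/>
  <ip address='192.168.102.1' netmask='255.255.255.0'/>
</network>
EOF
# On a host with libvirt installed, the network would be created with:
#   virsh net-define subcloud-mgmt-net.xml
#   virsh net-start subcloud-mgmt
echo "wrote network definition to subcloud-mgmt-net.xml"
```

With `forward mode='route'`, the host routes traffic between this network and its other interfaces instead of NATing it, which is what makes the management subnet routable from the central cloud.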
And once we create and instantiate the VMs, we go through the process of installing and configuring the system controller. After we install and provision the sub-cloud, once everything is done, we can log on to the system controller's Horizon interface and see our sub-cloud. We can dive into our sub-cloud and see that it is online, its deployment status is complete, and, more than anything, its sync status says it's in sync. "In sync" means that it is at the same patch level as the system controller; the two are at the same patch level. If you want to experiment with StarlingX and have not been exposed to it, that's one path to take. At the top you see a critical alarm. That critical alarm is there because I only have one controller; I don't have a standby controller right now. And again, this afternoon I will have a demo illustrating this right on the laptop.

Community and contribution. StarlingX, as I said, is a fully integrated project that delivers Kubernetes to you, and we would like to ask anyone who wants to get involved: please do get involved. I have a number of links here. Bugs are tracked via Launchpad; we follow the same OpenStack development process as everyone else; ideas are introduced via specs as well as StoryBoard; and we have weekly and biweekly meetings. Thank you. Any questions?

Yes. So it actually supports more than hundreds if we count what's inside the sub-clouds, right? Because from the system controller's perspective, the sub-cloud it is connected to is basically the main controller, right? That main controller within the sub-cloud can then go ahead and provision other units within its own environment, right?
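On a real system controller, the same status would be checked from the CLI with `dcmanager subcloud list`. The snippet below runs a quick filter over sample output in that style; the column layout and the rows are fabricated for illustration, not captured from a live system:

```shell
# Flag any sub-cloud whose sync status is not 'in-sync' with the system
# controller's patch level. Sample table is fabricated for illustration;
# columns: name, management, availability, deploy status, sync status.
sample='subcloud1 managed online complete in-sync
subcloud2 managed online complete out-of-sync'
echo "$sample" | awk '$5 != "in-sync" { print $1 " needs patch sync" }'
# → subcloud2 needs patch sync
```

A sub-cloud that is online, deploy-complete, and in-sync is exactly the healthy state shown in the Horizon screenshot described above.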
So it can be a duplex and can go ahead and add as many worker and storage nodes as needed. So the hundreds figure is just for the system controller. And if I'm not mistaken, I think the last time we tried, it was about a thousand. A thousand sub-clouds, yeah, I believe, pretty much. Yes.

Is it possible to have more than two controllers for each sub-cloud? Within the sub-cloud, yes. Because the way the system controller connects to the sub-cloud is via the BMC, via the Redfish API, it does need to know one Redfish API, one server. But theoretically, you could extend it to have more than one controller in that case. Yes.

Well, okay. So the number of interfaces that you could get away with: in the case of distributed cloud, we need at least two interfaces. Oh, processors, yes. So we use two full cores for the platform. That means the device drivers, the root file system, and any daemons that run on the system run on those two full cores. Pardon me? Yes, right. And we assign the rest of the cores to the applications, for the pods to run on. Yes. Give me an example of when you would need that. I mean, I can't, because the purpose of the system is to run applications, right? I don't know if there is any supported way of increasing the number of cores used by the platform. It certainly is possible to do that, but we recommend essentially two full cores for the platform. Okay.

Can you say something about the Kubernetes version you're using? Is it a recent version, and did you need some changes, or is it just upstream? Yeah, so we have done some changes on the Kubernetes side. We can support multiple versions of Kubernetes. I believe, if I'm not mistaken, the latest one we are supporting is 1.21.
But as I said, we can change the Kubernetes version. Have there been changes? One of the changes StarlingX has is the concept of isolated CPUs, isolated cores, and core pinning. Essentially, when you go with vRAN and you have a DPDK application running, you need to make sure that nothing else runs on that application's cores, right? So that's one of the changes we had to apply in Kubernetes. Yes.

Okay, let me see if I understand the question correctly. How do we track and make sure that the node that wants to connect... So the sub-cloud is essentially an independent unit. Think about it this way: the sub-cloud is an independent unit, from the system controller's perspective and from its own perspective, and its management and configuration are done through the system controller. That is a good question. I don't think that has been addressed, or I'm not 100% sure it has been addressed, but I would think that is more the application's concern than the platform's. Yes.

Okay, so question one is how do we instantiate the sub-cloud, correct? From the system controller, we have a command called dcmanager, the distributed cloud manager, that we invoke with a set of inputs. One of them is the OAM IP; another is the BMC address of the device we want to connect to; and another is the set of variables and configurations the sub-cloud has to be instantiated with. So there is basically an API and a tool that we use from the system controller to connect to the BMC via Redfish, mount the installation image, reboot the system, and instruct it to do the installation. Once the installation is complete, the tool then says, okay, now go ahead and run the Ansible bootstrap playbook. That will take the input we have supplied and instantiate the cloud. So that is the first question. What was your second question? Yes. Yes. Correct.
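The dcmanager flow just described can be sketched as follows. The bootstrap-values fields and the exact `dcmanager` flags shown in the comments are assumptions reconstructed from the talk; consult the StarlingX documentation for the authoritative schema:

```shell
# Sketch: minimal bootstrap values for a simplex sub-cloud.
# Field names and addresses are illustrative assumptions.
cat > subcloud1-bootstrap.yaml <<'EOF'
name: subcloud1
system_mode: simplex
management_subnet: 192.168.101.0/24
management_gateway_address: 192.168.101.1
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.12
EOF
# On the system controller, the sub-cloud would then be added with
# something like (requires a live system controller; shown for illustration):
#   dcmanager subcloud add \
#     --bootstrap-address 10.10.10.12 \
#     --bootstrap-values subcloud1-bootstrap.yaml \
#     --install-values subcloud1-install.yaml
echo "bootstrap values written for $(awk '/^name:/{print $2}' subcloud1-bootstrap.yaml)"
```

The install values would carry the BMC address and credentials used for the Redfish-driven install, while the bootstrap values feed the Ansible playbook that configures the new cloud.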
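Returning to the isolated-CPU change mentioned above, a workload would request isolated cores through an extended resource in its pod spec. This is a hedged sketch: the resource name `windriver.com/isolcpus` and the image are assumptions based on how StarlingX is commonly described as exposing isolated cores, so verify both against your release:

```shell
# Sketch: a pod requesting StarlingX isolated cores for a DPDK workload.
# 'windriver.com/isolcpus' and the image name are assumptions, not
# confirmed by the talk; check your StarlingX release documentation.
cat > dpdk-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
spec:
  containers:
  - name: dpdk-app
    image: registry.local/dpdk-app:latest   # hypothetical image
    resources:
      requests:
        windriver.com/isolcpus: "4"
      limits:
        windriver.com/isolcpus: "4"
EOF
# On a StarlingX cluster this would be applied with:
#   kubectl apply -f dpdk-pod.yaml
echo "pod spec written to dpdk-pod.yaml"
```

Requests and limits are set equal so the pod lands in the guaranteed QoS class, which is what makes exclusive core assignment possible.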
No, there isn't. So the idea is to keep everything at the same level, right? Put differently, if there is a security bug, or any bug, that bug applies to everyone. Yes.

Can you see the storage from the center? Not directly, but via Horizon, if you come to the session this afternoon where we have this running in a virtualized environment, yes, you can see all the way down to the processor and storage nodes within the sub-cloud, or the storage devices of the sub-cloud controller.

Do we rely on Kubernetes solely, or do we also use StarlingX additions? Oh, removing a sub-cloud. Adding or removing. So for adding a sub-cloud, do we rely on Kubernetes? Yeah, we do use Kubernetes in order to add a sub-cloud: the dcmanager eventually goes ahead and, with the kubeconfig, does a kubectl apply of a deployment config file. So yes, Kubernetes is used in that instance. When we want to remove a sub-cloud, no, I don't see where we would use Kubernetes in that instance; it's just an entry that we remove in the system controller.

More detail about the hardened Linux? Well, there are a number of patches that we apply on top of Linux, right? Again, take one of the constraints in vRAN. I'm sure you're familiar with FlexRAN; one of the constraints with FlexRAN is latency, system latency, system jitter. We have to fix that in the kernel. A standard kernel will not achieve that; it will not give you consistent 10-microsecond or sub-10-microsecond latencies. So that's one example of what we have done.
Another example would be core isolation and pinning, to make sure that by default, when you instruct Kubernetes to give you an isolated core, you really get an isolated core and don't end up getting a bunch of IRQs on your core. Those are a couple of examples.

This slide here, I just put up to illustrate that. On top we have the system controller, just to show the distributed cloud. From the system controller we reach out to the small edge clouds and we say, okay, this is a single server, go ahead and instantiate that, go ahead and put the cloud in there. And on the large-to-medium edge side, we instantiate the first controller and then the second controller, and the controllers we have instantiated are then responsible for instantiating the rest of it. Any questions on this? Yes.

Right now it runs on Intel architecture or AMD, the x86 architecture. It's not yet on ARM; that is your question, yes. As of right now, the system is solely on x86. But, and that's something we are working on, it does come with built-in support for a number of accelerators, both look-aside and inline accelerators. Some of those inline accelerators, in fact many of them, use ARM processors, where they basically offload the L1 and L2 of a vRAN onto the accelerator. So from that angle, yes, there is some ARM. No, we don't. It's Kubernetes, K8s.

Okay, so again, I will have another session this afternoon. My intention is not to give you a full hour-long "what StarlingX is and is not," but to give you the tools to experiment with it, play with it, see how it works, and see if it fits your projects or if you want to contribute. Again, we welcome any contribution. Thank you.