All right, welcome to the StarlingX project onboarding session. My name is Greg Waines, from Wind River. I'm Brent Rowsell from the StarlingX TSC. And I'm Bruce Jones from Intel. We're going to tag-team through this onboarding presentation with you, and I'll start off with a high-level project overview. StarlingX is a new OpenStack pilot project under the OpenStack Foundation, licensed under Apache 2.0. We're basically six months old; we were announced back at the Vancouver summit, and we were formed with seed code from Wind River's Titanium Cloud product portfolio. At a high level, StarlingX is a deployment-ready, fully integrated, full-stack OpenStack solution, with a lot of features that enable it for edge deployments. We focus on high availability, high performance, scalability, and ease of use in providing that fully integrated solution. Our first community release occurred on October 24th, so we've got a first release out after six months. I also want to point everybody at the starlingx.io website. It's a good website with pointers to the code in Git, and documentation for both new contributors and end users, for example installation guides for the various deployment configurations. And this session is all about encouraging more people to join the StarlingX community and contribute. First, a couple of context slides around edge computing. I think everybody knows edge computing is about moving the typically centrally deployed cloud closer to the edge. There are a number of drivers for that, but the absolute number one is latency. There's a new genre of applications that need to be real-time sensitive, and they're running on devices that just don't have enough compute themselves.
So they need cloud compute, storage, and networking services, and they need them with low latency to provide real-time service. But there are other drivers too, like reducing bandwidth: a lot of these new apps generate a lot of traffic, and you don't want to be backhauling all of that across your network to a central cloud. And for security reasons, you may not want that data crossing your network at all. So there are a number of drivers, but number one is latency. Now, some of the edge computing challenges that face any solution designed for the edge, and areas we've looked at in StarlingX. Zero-touch provisioning is one. Installing a cloud is not a do-it-once-and-forget task when you've got 100 or 1,000 edge clouds; you might be installing one every month, so reducing the effort of doing installs is an important challenge. Central management: managing those 100 or 1,000 edge clouds is a challenge, and things like rolling out software patches for the platform cloud infrastructure across that many edge clouds should be easy to manage centrally. Single pane of glass: all of these centralized functions, like aggregating fault data and telemetry data across the edge clouds, are things you want to do without having to log in individually to each edge site. You want a single pane of glass for these orchestrated functions across the edge clouds. Scaling both large and small is also a challenge.
Large is usually easier, because we're coming from a data center type environment; small is actually the harder one, being able to shrink your edge cloud solution down onto a single server and that sort of thing. Edge cloud availability and autonomy: these edge clouds are out in the middle of nowhere, so they have to be highly available; you can't be sending somebody out to the site every time something fails. Autonomy matters for the same reason: connectivity to some of these edge sites isn't great, so the edge cloud has to keep operating autonomously when it loses connectivity to the central site. And then security: again, an edge site is out in the middle of nowhere, not in a locked data center, so it comes with different security challenges. Those are some of the key challenges we've looked at with StarlingX. As for use cases: a lot of edge computing began in the telco market, and we've certainly seen 5G use cases around edge computing. But we've also seen a fair number of use cases in industrial markets. Energy and control systems, power plants, and the like want to leverage cloud technology, and they have much the same requirements at the edge as telco: small devices, but still with full cloud capabilities. We're also seeing it in healthcare, where complicated machines like MRIs want to leverage cloud technology, again with very similar edge requirements. All right, I'll hand it off to Brent to go through some of the details. Thanks, Greg.
So StarlingX is a complete edge cloud software stack. We package all this up, starting with the OS, which is based on CentOS; then various third-party components such as libvirt, QEMU, OVS, and DPDK; and of course the various OpenStack services. On top of that, there are a number of StarlingX-specific services that are part of the stack. First, configuration management, which provides host installation, inventory discovery, and host configuration, as well as system-level configuration and configuration of the various platform services. Host management manages the life cycle of the hosts: it provides host fault monitoring, alarming, and recovery from faults. Service management is high-availability cluster management for the OpenStack services as well as the other platform services. Software management encompasses the software patching framework, for the management and deployment of patches, as well as a hitless upgrade framework for the system. Infrastructure orchestration provides high-availability management of virtual machines, along with an automated deployment mechanism for the patches and upgrades I touched on above. And lastly, fault management: alarm and log reporting for all the various StarlingX services. The StarlingX solution is scalable. You can start as low as one server, which combines the control, compute, and storage functions and can run on lower-end hardware such as a Xeon D.
Then we've got the two-node version of that, which is a highly available configuration, moving up to a frame-level system with separate control, storage, and compute nodes; we support an integrated Ceph cluster as part of the frame-level solution. And at the top, we've got the large-scale data center or distributed edge computing solution, which is a multi-region model. Now a little more detail on some of the StarlingX services. Configuration management manages installation, including auto-discovery of new nodes, management of installation parameters, and bulk provisioning of nodes through a configuration file. It handles nodal configuration: the role the node is going to assume (is it a controller, a compute, a storage node?) and the attributes of that node, including core assignments, memory assignments including huge pages, network interfaces, and the various storage assignments. It also does inventory discovery: number of CPUs, amount of memory, ports, GPUs, and so on. Host management manages the full life cycle of the host. It detects and automatically handles host failures and initiates recovery; it does monitoring and alarming for cluster connectivity, resource utilization, and hardware faults; and it interfaces with board management for out-of-band control. This is all available via REST API. Software management: with the StarlingX services, we're able to deploy software updates to fix bugs, deliver security fixes, and even deliver new functionality, with an integrated rolling upgrade solution. We support multiple types of patching: in-service patching, and out-of-service patching for patches that require the node to be taken down.
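To make the rolling, out-of-service patch flow concrete, here is a minimal Python sketch of the idea: migrate a node's VM workloads to a peer, apply the patch, then move on to the next node. This is purely illustrative; the node and VM structures and function names are hypothetical, not StarlingX's actual patch orchestrator.

```python
# Toy simulation of rolling out-of-service patching: for each node,
# live-migrate its VMs to a peer, then apply the patch to the now-empty
# node. Illustrative only -- not StarlingX code.

def rolling_patch(nodes, vms, apply_patch):
    """nodes: list of node names; vms: dict of vm -> hosting node;
    apply_patch: callable invoked once per node, in order."""
    patched = []
    for node in nodes:
        peers = [n for n in nodes if n != node]
        for vm, host in vms.items():
            if host == node:
                vms[vm] = peers[0]   # "live-migrate" the VM off the node
        apply_patch(node)            # node is empty, safe to take down
        patched.append(node)
    return patched

if __name__ == "__main__":
    vms = {"vm1": "compute-0", "vm2": "compute-1"}
    log = []
    order = rolling_patch(["compute-0", "compute-1"], vms, log.append)
    print(order)   # nodes are patched one at a time, in order
    print(vms)     # every VM was moved off its host before patching
```

The key property the real orchestrator provides, and this sketch mimics, is that workloads are never on a node while it is being patched; only one node is out of service at a time.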
If I need to replace a kernel, for instance, the orchestration framework will automatically migrate VM workloads off that node and do a rolling update through the cluster. And upgrades: managed upgrades of all the software, a full-stack upgrade rather than just an OpenStack upgrade. The next thing I wanted to talk about is a major initiative we're working on for our next release: moving our infrastructure to a containerized implementation. In this implementation, StarlingX will run a bare-metal Kubernetes cluster supporting a Docker runtime, with the Calico CNI plugin, Ceph as the persistent storage backend, authentication and authorization of the Kubernetes API through Keystone, and a local Docker image registry, again with authentication from Keystone. We're pulling in Helm as the package manager and Airship Armada for the orchestration and management of multi-chart Helm applications. Once we've got that in place, we're going to containerize the infrastructure, including OpenStack; the deployment and life cycle management of that will leverage OpenStack-Helm and, as I mentioned, Armada as well. Once this is in place, we've got a platform that can support containerization of the infrastructure as well as application workloads, so we can support both containers and VMs. Another thing I wanted to highlight for the next release is CI/CD enhancements. StarlingX is a full source distribution: it currently requires the end user to take the source code and the build tools that we provide and build a full ISO for deployment. We're partnering with a Canadian non-profit called CENGN to provide a public repository for build artifacts as well as pre-built ISOs.
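Going back to the containerized platform for a moment: the core idea behind Armada-style orchestration is that an application is a sequence of chart groups, and later groups only deploy once earlier ones are in place (databases and message queues before the services that depend on them). Here is a toy Python sketch of that ordering; the group contents and function names are hypothetical, not Armada's actual API or manifest format.

```python
# Toy sketch of ordered chart-group deployment, in the spirit of Armada:
# charts are applied group by group, and a group only starts once the
# previous group has been applied. Hypothetical structures, not Armada's API.

def deploy(chart_groups, install):
    """chart_groups: list of lists of chart names, in dependency order;
    install: callable invoked once per chart."""
    for group in chart_groups:
        for chart in group:
            install(chart)
        # a real orchestrator would also wait here for the group's
        # workloads to report ready before starting the next group

deployed = []
deploy([["mariadb", "rabbitmq"],   # shared infrastructure first
        ["keystone"],              # then identity
        ["glance", "nova"]],       # then the services that need both
       deployed.append)
print(deployed)   # infrastructure charts land before the services that use them
```

The readiness wait between groups is the part that makes this safe in practice; the sketch only preserves the ordering.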
This will aid in onboarding new users to the community. On top of that, for our next release we've got over 40 initiatives currently being prioritized, and we're certainly looking for people to join the community to help drive these forward. With that, I'll turn it over to Bruce. I'm going to talk a little bit about the community. We are an OpenStack pilot project, and we firmly believe in, and model, the Four Opens; I think we've all heard about those earlier today. We have a Technical Steering Committee that is responsible for overall architectural direction; Ian, Dean, Brent, Miguel, and Saul are all members of our Technical Steering Committee. We're completely committed to diversity, openness, and encouraging new contributors. It's a very large project, and a couple of things we've done are a little bit different from OpenStack. We've split the project up into a set of sub-projects, so each of the services Brent was just describing is its own project within the community. We also have a number of horizontal projects, like documentation, release, build, and security. We're following the standard OpenStack security practices, with a dedicated security team dealing with security issues, and we have a dedicated release team. We have a couple of other initiatives: we're working to enable these services in DevStack right now, which should complete before the end of the year, and we probably won't need that project much beyond that. We're also working on getting rid of our Python 2 code and converting it to Python 3. The other thing we've done that's a little different from standard OpenStack projects is we've split the role of the PTL: each of these sub-projects has a technical lead and a project lead. And we've done that for a couple of reasons.
One is, as I think Dean said earlier today, technical people are not always good at project management, and project management people aren't always good at the technical part. We're blessed to have an abundance of both kinds of people in our community right now, and this gives us the chance to leverage them. We have core reviewers, selected by the technical steering committee with input from the technical leads and project leads, and then we have contributors. Contributor is very simple; a lot of this governance document came from Kata Containers, so if you're familiar with their governance, it's a lot of the same words. If you've made a contribution to the project in the last 12 months, you're a contributor. That allows you to serve in a leadership role, to run for the elected positions, and to vote in those elections. If you're a core reviewer, you have not only the authority but the responsibility to review the code changes coming in, making sure they meet the standards of the project and are technically correct; core reviewers can cause code to be merged. Briefly on the technical leads: a technical lead in one of the projects is a core reviewer, but they're also responsible for helping set the technical direction of the project under the guidance of the technical steering committee, while the project lead does the coordination, communication, tracking, and all the typical project management kinds of work. The TSC, most of whom are in this room with you today, is responsible for the overall technical direction of the project. This slide is a little dated; we actually have eight people on the TSC right now: Brent, Ian, Dean, Saul, Miguel, Anna from Ericsson, Shuquan from 99Cloud, and Curtis. We'll be moving to a nine-member TSC in April with our first election.
In that election we'll elect five positions, and then it will alternate five and four every six months. We welcome the involvement of the community; we're actively looking for help from users and contributors in any way, shape, or form, and all the standard ways of finding out about us are out there. We have our web page, and we have IRC, though we're not always watching IRC closely. We tend to be an email-based project; maybe Dean's watching IRC, but most of the activity happens on the mailing list. We have a number of calls: a weekly community call, a TSC call every week, and many of the sub-projects have weekly calls. Anyone's welcome to join those. If you want to contribute: our bugs are in Launchpad; we're putting our new specs into a dedicated specs repository, and you're welcome to look at those documents and contribute to the reviews going on there; and we're using StoryBoard, with the mystical number 86 as our project group. We have twenty-some repositories in the OpenStack infrastructure right now. The Foundation asked us to remind everyone that everyone is welcome to contribute to the project whether you are a member of the Foundation or not. But we certainly encourage you to join if you're not already a member, and if you're joining on behalf of an employer, please have them sign the contributor license agreement. Thank you very much for coming to our session today; we're happy to take any questions you have. So, a question: is there a quick start guide, an easy way to get a minimal configuration up and running on a laptop or something? Yeah, on our wiki there's a quick start guide. And as of yesterday, there are also pre-built ISO images, so you don't need to build it yourself; you can just download one and take it for a first spin.
You can run it on a VM on a fairly beefy workstation, and you can run it on Intel NUCs; we have a number of people internally with stacks of NUCs on their desks, and I can run it on my laptop. Other questions? Cool. Thank you very much. Thank you. Appreciate it.