Good afternoon. Thanks for joining us for the StarlingX project update. First, we'll just introduce ourselves. My name is Brent Rowsell. I'm a member of the Technical Steering Committee for StarlingX.

So what is StarlingX? StarlingX is an OpenStack pilot project. It was launched at the Vancouver Summit, and it's under the governance of the foundation with the Apache 2.0 license. In short, what StarlingX provides is a high-performance, low-latency, high-availability platform for edge applications. We just recently published our first community release, on October 24th, and of course we're looking for new contributors and for operators to leverage our platform for edge computing.

So just a little bit about edge computing. What's driving edge computing? A new genre of applications requiring low latency, high bandwidth, security and connectivity. It's all about pushing the compute to the edge to address these requirements. Some of the use cases here: ultra-low latency for 5G telco and industrial IoT, such as automated vehicles and cloud/virtual RAN; high-bandwidth, large-volume applications, such as mobile HD video; and in the mobile edge computing, or multi-access edge computing, space, use cases such as augmented reality.

So the intent of the StarlingX project is to take existing, proven cloud technologies and reconfigure them for the edge use case: orchestrate system-wide, deploy and manage edge clouds, share configurations, and simplify deployment to geographically dispersed remote edge regions.

Now we're going to talk a little bit about the technology. We have a number of sessions coming up later this week where we'll deep-dive on this in more detail. StarlingX is an edge virtualization platform. It's deployment-ready, scalable, highly reliable, and provides low latency. It's a complete stack, starting with an OS layer; it's currently based on CentOS, with plans in the future to move to multi-OS.
On top of that we leverage a number of other open source building blocks, and of course OpenStack. Then we've got a set of StarlingX services that are provided as part of the virtualization platform: configuration management, fault management, host management, service management, software management, and a subsystem that we call infrastructure orchestration. Together, all of these give us easy deployment, low-touch manageability, rapid response to fault events, and fast recovery from fault events.

As I mentioned earlier, this is a scalable platform. It can go as small as one node, which would combine control, compute and storage in a single box. There's a redundant version of that configuration, and then there's the multi-server configuration that can scale up to 100 compute nodes.

As part of StarlingX, we also have a configuration for distributed edge computing. It's based on OpenStack regions. There's a central cloud region that would host shared services as well as provide system-wide infrastructure orchestration functions, such as deployment of the edge clouds, configuration of the edge clouds, fault aggregation across all the edge clouds, and the ability to perform software update orchestration across all the edge clouds from a single pane of glass. The remote edge clouds can be geographically dispersed and, as we pointed out on one of the previous charts, scalable: an edge cloud can be as small as one node and scale all the way up to 100 servers. It's connected to the central cloud over L3, and the edge clouds would run a reduced control plane.

I'm going to turn it over to Dean to talk a bit about the community.

Thanks, Brent. StarlingX is interesting in that it's a little bit different from the way OpenStack grew. We're taking an existing base of code and creating a new community around it. In some ways it's a little bit inverted, but we started with a big pile of code that's essentially those flock services.
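The scaling options just described can be sketched as a small validation helper. This is purely illustrative Python, not StarlingX code: the one-node, redundant two-node, and up-to-100-compute figures come from the talk, while the configuration names and the function itself are invented for the example.

```python
# Illustrative sketch (not StarlingX code): the deployment configurations
# described in the talk, and a check that a proposed cluster size fits one.

CONFIGS = {
    # all-in-one: control, compute and storage combined in a single box
    "simplex": {"controllers": 1, "max_computes": 0},
    # redundant version of the all-in-one configuration
    "duplex": {"controllers": 2, "max_computes": 0},
    # dedicated controllers plus a pool of up to 100 compute nodes
    "standard": {"controllers": 2, "max_computes": 100},
}

def validate(config: str, computes: int) -> bool:
    """Return True if `computes` separate compute nodes fit the config."""
    limits = CONFIGS[config]
    return 0 <= computes <= limits["max_computes"]

print(validate("standard", 100))  # True: at the 100-compute ceiling
print(validate("simplex", 1))     # False: simplex has no separate computes
```

The same shape would apply per edge cloud in the distributed configuration, since each edge cloud spans the same one-node-to-100-server range.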
It's what we have been calling the row in the middle of that diagram Brent showed earlier. We released that, and we're building the community around it. We're an OpenStack Foundation pilot project. We've established the governance; that's the TSC that we're both members of. We're still forming that: we've got eight members on it right now, and in April we'll be doing our first set of elections, where we will re-elect half of those seats and then add a ninth seat, directly elected.

There's a little irony in this, but we are very much following the OpenStack principles. We do the Four Opens. Granted, we started with some seed code that wasn't open, but from that point forward we're following this as much as we can. The Technical Steering Committee is a little bit different from OpenStack in that this is the group that's making the architectural decisions and has the final say on technical things.

We think of StarlingX as a single project, and we've broken up the flock services into actual sub-projects. We have split the leadership duties: instead of a traditional PTL, we've got a technical lead and a project lead, which lets some of our folks who are experts in project management help us out quite a bit, because a lot of us technical guys are not good at that. But from there down it's pretty much the same: we'll have a core reviewer list, the folks responsible for reviewing and merging the code, and then contributors. We have eight primary sub-projects that are the flock services, and we've got a couple of horizontal projects. Some of these we're also calling sub-projects, and a few of them are very task-specific, like, down at the bottom, the Python 2-to-3 transition.
All of the existing seed code base is Python 2, so we've got to get caught up to the rest of OpenStack with regards to being Python 3 compatible first, and then Python 3 exclusive. The multi-OS effort is another one of those that we see as a relatively short-term thing: once we achieve that goal, it should not need to continue to exist as an independent entity.

We did our first open release in October. That was essentially taking the original seed code (some things got removed, some things got changed), preparing it to be opened, and then going through everything; even having to learn how to build this outside of its original home was a bit of a challenge. And so the first release is the culmination of getting it back to a runnable, usable state, a recognizable state as something that operates, instead of a bunch of things that got broken up. The original flock code, for example, was all in a single mono-repo, and we split that up into 10 (I think we're now at 13 or 14) code repositories, trying to break things down a little bit and make it feel a little bit more like OpenStack. We had to set up all of the Gerrit work, again, creating a community from scratch; we had to do all of this over the summer. We're still working on getting things set up with Zuul. We've got the basic structure in place; that's relatively straightforward. Converting the testing into a mode that we can run in CI with Zuul is an ongoing process. And the docs: the entire docs project surprised me with how much work it was actually going to be to publish to docs.starlingx.io. All told, it took about eight weeks to get rolling.

Do you want to tell us more about what's next? Sure. I want to spend a few minutes and talk about some of the major initiatives that we've got on deck for our next release. For our next release, we're moving to containerize our infrastructure services.
We're evolving to containerize the OpenStack services, in addition to some of those flock services that Dean just talked about, running on top of a bare-metal Kubernetes cluster, with the lifecycle of those containerized applications managed by OpenStack-Helm and Airship Armada. As part of this, we're introducing the bare-metal Kubernetes cluster as well. Initial support for that would include the Docker runtime, the Calico CNI plug-in, leveraging Ceph as the persistent storage backend, and leveraging Keystone for the authentication and authorization of the Kubernetes API. We'll also have an on-board, synchronized local Docker image registry, again authenticated with Keystone. And as I mentioned above, it would also leverage Helm and Airship for orchestration. What this gives us is a platform that we're initially going to use to containerize our infrastructure, but in addition to that, it's also a Kubernetes-ready platform for applications.

Another major initiative for the next release is improvements to our CI/CD process. With the first release, StarlingX is a source distribution: it requires the end user to take the source and the build infrastructure that we provide and produce a full ISO for deployment. What we have in progress is a partnership with a Canadian non-profit, CENGN, to provide a public RPM repository to help streamline the build process for end users. We'll also be publishing pre-built images that the community can download to install StarlingX.

In addition to these two initiatives, we've currently got 40-plus other initiatives that we're looking at and prioritizing for the next release. So there's a significant amount of activity that's going to be going on in our next release. So, a good segue from that.
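The containerized-application lifecycle just described (charts made available, then applied to the bare-metal Kubernetes cluster by the orchestrator) can be pictured as a small state machine. This is a toy sketch only, assuming an upload/apply/remove flow; the class, the state names, and the application name are invented for illustration and are not the real orchestrator API.

```python
# Toy sketch (not the real orchestrator): the lifecycle of a containerized
# application -- a set of Helm charts that is first made available
# ("uploaded"), then installed on the Kubernetes cluster ("applied"),
# and later removed, leaving the charts available for a future apply.

class ContainerizedApp:
    def __init__(self, name):
        self.name = name
        self.state = "uploaded"   # charts are available but not running

    def apply(self):
        if self.state != "uploaded":
            raise RuntimeError("can only apply an uploaded application")
        self.state = "applied"    # orchestrator has installed the charts

    def remove(self):
        if self.state != "applied":
            raise RuntimeError("can only remove an applied application")
        self.state = "uploaded"   # workloads gone, charts still available

app = ContainerizedApp("openstack")  # hypothetical application name
app.apply()
print(app.state)  # applied
```

The point of the state machine is that the platform, not the operator, tracks where each application is in its lifecycle, which is what makes orchestrated updates across many edge clouds tractable.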
We certainly would invite people to join the community, try out the code, and contribute to some of the initiatives that we have in our next release and going forward. Documentation is at docs.starlingx.io. We have a mailing list and regular community meetings. As well, we have additional summit sessions coming up: there's a keynote tomorrow morning, an Ask Me Anything session tomorrow afternoon, and a project onboarding session where we'll go through the architecture in more detail. With that, I'll open it up to questions.

I have a question about your storage backend in the next release. You mentioned Ceph; how does that fit into the single-node device?

We would run Ceph on the single node, so we'd combine the monitor and OSD functionality on that single node. And part of these flock services, as we call them, is a bare metal management infrastructure; that was part of the seed code that was contributed.

Which images are you using to deploy? Existing images?

Yeah, we're building our own images.

Is there any communication between these regions, given that every edge site has a region? Network communication, for example, between a VM on a network in one region and another VM, on maybe the same or another network, in another region?

Currently, no. That is something that we are considering, but at this time, no, we do not.

We also have stickers and a nice little one-page overview that's got our very popular architecture diagram on it.

I have one question; I'm not familiar with the project. Are you also addressing what's running in the central region as part of your project architecture and development? What you mentioned is what's running on the edge, but obviously there needs to be management and communication and so on between the central region and the edge. Is that provided as part of the project as well?

Yes, it would be part of the project as well.
And is it already part of, let's say, the initial release? Are there any artifacts that are already available?

Yes, as part of the initial release, there is a version of this provided. A pre-built image? Is that your question?

So you asked where you can get the name of a company that is using StarlingX in production. And the answer is: we did our first release about a week and a half ago, and most companies don't pick up new releases that fast. So we're here this week to look for people to take that first step. Any other questions? Okay. Thank you. Thank you very much for your time.