Good afternoon. My name is Mark McClain, and I'm the CTO of Akanda, Inc. We're going to spend the next few minutes talking about Akanda, and if you were in the keynote either yesterday or this morning, you heard a little bit about the newest project in the OpenStack Big Tent. We'll talk about that as well, since Akanda is one of the primary contributors to the Astara project.

When we set out to create the project, we wanted to tackle some operational challenges we were encountering with Neutron: managing multiple advanced services is hard, some of those services have their own orchestrators, and some have different interfaces. We actually began the work at DreamHost back in 2012, and because we were rolling out multiple clouds, we wanted the ability to stand up different deployments over time with a variety of vendors. Deployments change: some may have one particular SDN vendor, some may have none, so we wanted the ultimate in flexibility. Out of that we founded what is now the Astara project, the newest member of the OpenStack Big Tent. Since it was only briefly mentioned in the keynote, a little background: Astara is designed to simplify deployment and to be compatible with the existing OpenStack ecosystem. It doesn't replace components, it complements them, and it is intended to be open and to follow the four opens of the OpenStack community.

In terms of how Astara works: if you look at reference Neutron, you'll notice on the right-hand side of the screen a fleet of services and agents. With Astara, we simplify things by running a single service that is responsible for orchestrating the network functions. Central to that is a process we've nicknamed "the rug." It's one of those prototype names that escaped and stuck with us: if you've ever seen the movie The Big Lebowski, there's a line about the rug tying the room together, and "Astara" loosely translates to "carpet," which is the genesis of the name if you've ever wondered.

The Astara orchestrator does control-plane orchestration only; no component of Astara sits in the data path. The main reason is that we wanted to take the best network functions available, whether open source or proprietary, and orchestrate them via a pluggable driver interface. A pluggable driver is nice, but how do you keep those functions alive and well? We wanted the control plane itself to be highly available, so we designed it to be multi-process and multi-threaded. From a deployment perspective you can configure it however you need: make a single process as big as necessary, or run as many processes as you need to manage the network service orchestration. At the same time, we wanted to maintain the same API footprint from the tenant's perspective, so we use the standard APIs for Nova, Neutron, Glance, and Ceilometer. When the orchestrator is running, it watches for changes to the logical model and renders those changes down, via the drivers, onto the particular service.
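To make the driver idea concrete, here is a minimal sketch of what a pluggable network-function driver and the render step could look like. The class, method names, and signatures are hypothetical and simplified for illustration; they are not Astara's actual driver API.

```python
# Sketch only: hypothetical driver interface, not Astara's actual API.
import abc


class NetworkFunctionDriver(abc.ABC):
    """One driver per network function type (router, load balancer, ...)."""

    @abc.abstractmethod
    def build_config(self, logical_resource):
        """Translate the Neutron logical model into appliance configuration."""

    @abc.abstractmethod
    def deploy(self, config):
        """Boot a service VM or container and apply the configuration."""

    @abc.abstractmethod
    def update_config(self, instance, config):
        """Push updated configuration to an already-running appliance."""

    @abc.abstractmethod
    def is_alive(self, instance):
        """Health-check the appliance so a dead one can be replaced."""


def handle_resource_change(driver, logical_resource, instance=None):
    """Roughly what the orchestrator does when the logical model changes."""
    config = driver.build_config(logical_resource)
    if instance is None or not driver.is_alive(instance):
        return driver.deploy(config)            # (re)create the appliance
    driver.update_config(instance, config)      # otherwise just update it
    return instance
```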
We've touched on this in a number of different contexts: the orchestrator also communicates with its workers, because in some instances, say when you're deploying a service in a service VM, it's very easy to do a Nova boot, but it's much harder to make sure that service stays up, keeps functioning, and continues to work over time.

So, the Astara architecture at the base of the open source project: if you look on the left-hand side, you'll see that Nova, Neutron, and Astara run within the control plane. On the right-hand side of the screen you'll see a more traditional network stack, where you have your physical network and your overlays; if you're using overlays, they might be managed by OVS, Linux Bridge, or something proprietary. Astara has an agnostic layer there: from Astara's perspective, we don't really care what the L2 system is, because Astara comes back to a networking principle we've used for a very long time, which is to honor the different layers of the networking stack. That makes it very easy to mix and match and swap out components. Above that you'll find the OpenStack APIs, and then the advanced services in terms of routing, load balancing, and firewall, once the Neutron team finishes rebooting firewall-as-a-service in Mitaka.

Looking at reference Neutron from a data-path perspective, you typically run a network node; you may have one of these, or you may have ten, but the challenge is that those nodes become single points of failure and points of congestion. With Astara we work around that, because we spread the services out, either as VMs on the individual hypervisors or as containers. That's where the driver model I mentioned earlier comes in: it makes it very easy to say, "I want to orchestrate this particular network function in a container," or "orchestrate it in a service VM." You can also change how a function is deployed through its driver, so you can offer multiple different configurations, and Neutron's new flavor framework gives you a way to expose those options as well (there's a small sketch of that idea below).

One difference between Astara and standard Neutron is that Astara was designed for IPv6 from the ground up. When we began the project, that was one of the things we were committed to, knowing that the world is basically running out of v4 addresses, and we wanted to be ready for that. We also wanted to support dynamic routing. One of the remaining gaps with v6, even in Neutron today, is that dynamic routing isn't supported; Astara supports BGP and OSPF, whether your flavor is Quagga, BIRD, or perhaps something proprietary. It also provides a fast path for rolling out advanced services: if there's a particular load-balancing vendor you want, writing a driver is fairly easy to do.

So that's a little bit about Astara. What has the community been working on upstream? Akanda, as a company, provides services and support around Astara and is also one of the major contributors to the project.
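As a rough illustration of the "container versus service VM" choice and the flavor-based differentiation just described, here is a hypothetical sketch of how a deployment policy could pick a backend per network function. The names and the policy table are invented for illustration; they are not Astara configuration.

```python
# Sketch only: a hypothetical deployment policy, not Astara configuration.
from enum import Enum


class Backend(Enum):
    SERVICE_VM = "vm"          # booted through Nova
    CONTAINER = "container"    # launched through a container runtime


# Hypothetical operator policy: e.g. production load balancers as service VMs,
# dev/test load balancers in cheaper containers.
DEPLOYMENT_POLICY = {
    ("loadbalancer", "production"): Backend.SERVICE_VM,
    ("loadbalancer", "devtest"): Backend.CONTAINER,
    ("router", "production"): Backend.SERVICE_VM,
}


def pick_backend(function_type: str, flavor: str) -> Backend:
    """Choose where to run a network function based on its type and flavor."""
    return DEPLOYMENT_POLICY.get((function_type, flavor), Backend.SERVICE_VM)


print(pick_backend("loadbalancer", "devtest"))   # Backend.CONTAINER
```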
In the last six months we've been improving the HA story; I touched on that a little in terms of scale-out and scale-up. You can start multiple orchestrator processes, and they will all communicate among themselves and automatically divide up the number of network functions each one is managing. If you add or remove processes, the rebalancing happens automatically; there's no operator involvement required to trigger a reconfiguration. The rug processes talk to each other and figure out the appropriate thing to do (a simple sketch of that idea appears below).

We've also been improving configurability. I touched on that with the drivers: you can offer different implementations of a service so that you can provide them on a cost-differentiated basis. If, for instance, you want to provide load balancing on hardware for production workloads and in software for dev/test workloads, you can do that.

Then there's quicker provisioning. One thing that's always challenging with service VMs is that they take a while to spin up, so in Liberty we introduced a pool manager that keeps warm spares ready, which really cuts down the time to provision, especially for workloads that are required to run within a VM. We also rolled out support for Neutron load balancing v2; now that the API is final in Neutron, we want to make sure we support it, and we have integrations with a number of different partners.

Speaking of partners: with Akanda we want a growing ecosystem. We provide services and support, and we want you to be able to mix and match what's available to fit your deployment needs. One of our earliest partnerships is with Cumulus Networks, providing access to their dynamic lightweight network virtualization, which essentially gives you hardware-accelerated VXLAN. It's a very simple deployment model, and again it uses standard OpenStack tooling, so it's easy to maintain and deploy; you're not writing special tooling for it. This is just a quick overview; I'm not going to dive in too deeply, because this afternoon there's a much longer session on what that integration looks like.

Another partner is NGINX, on the load-balancing side. The integration gives you the ability to offer self-service NGINX to the tenants in your deployment. It's fairly simple, again using standard OpenStack tooling, because it supports the LBaaS v2 API. Specifically, we support both NGINX and NGINX Plus: provisioning is simple, and if you integrate the NGINX Plus product you get access to its dashboard. NGINX is something a lot of us have been running for a long, long time.

As for Akanda's bona fides as a company: we developed this for OpenStack, it was the first thing we integrated with, and it has been running in production for years. It may seem new because it just joined the Big Tent, but it isn't at all; on a daily basis it's managing thousands of virtual network functions. Akanda provides support for Juno, Kilo, and Liberty, making Astara compatible with all three releases, so depending on your deployment you don't have to be on the latest and greatest. And lastly, it's compatible with a number of different L2 orchestration systems, whether based on OVS or Linux Bridge; we've even been testing it with the in-development OVN project, as well as NSX.

Thank you very much.
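For anyone curious how the automatic division of work among orchestrator processes described above could behave, here is a minimal illustration of one possible hash-based sharding scheme. This is an assumption made purely for explanation; it is not Astara's actual coordination mechanism.

```python
# Sketch only: hypothetical hash-based sharding, not Astara's actual mechanism.
import hashlib


def owns_resource(resource_id: str, my_index: int, live_workers: int) -> bool:
    """Decide whether this orchestrator process manages a given resource.

    Every process applies the same deterministic rule, so when workers are
    added or removed (changing live_workers), responsibility rebalances
    automatically with no operator involvement.
    """
    digest = hashlib.sha256(resource_id.encode()).hexdigest()
    return int(digest, 16) % live_workers == my_index


# Example: three orchestrator processes dividing up four routers.
routers = ["router-a", "router-b", "router-c", "router-d"]
for worker in range(3):
    mine = [r for r in routers if owns_resource(r, worker, 3)]
    print(f"worker {worker} manages {mine}")
```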