Good morning. Let me take a moment to introduce Akanda, both the company and the Astara open-source project, and to say a little bit about me. My name is Mark McClain. I'm co-founder and CTO of Akanda; I previously served on the OpenStack Technical Committee, and I'm a former Neutron PTL.

When we start thinking about Akanda and the Astara open-source project behind it, we really start with Neutron's operational challenges: how do we manage multiple services, whether it's routing, load balancing, or firewall? Are we using SDN? Are we using multiple different L2 backends, which are very hard to change? And how do we handle multi-vendor deployments over time? When we started this project, we were all working at a hosting provider, working in public cloud, so day-two operations were very important to us. A lot of the solutions you'll see are very easy to stand up on day one, but on day two they become very difficult to operate, and that was the motivating challenge for creating Akanda. The other thing we wanted to do was capture the transformation we've seen over the last twenty years in over-the-top network services: it started in the '90s with voice over IP, rolled through video, then public cloud and hybrid cloud. We want to capture that and provide different services over the top.

Out of that we birthed the Astara project, with three key goals. First, be hyperscalable: provide a control plane that can support a large number of endpoints, whether that's public cloud or containerized services. Second, provide over-the-top network services: we focused on layers three through seven, and I'll touch in a minute on why we didn't go all the way down to layer two. And
then additionally, third, we want to maintain the open-source APIs you expect to find in OpenStack. Mainly we want to enable users to keep using the existing tooling they have, not write special one-off scripts or one-off management and monitoring; we want it to be very easy to use.

At the heart of the Akanda solution is the Astara orchestrator. It has the affectionate nickname "the rug," mainly from the line in the movie The Big Lebowski, because the rug really pulls and ties the room together.

I think we've all seen this slide of what reference Neutron looks like: you have a fleet of microservice agents that talk via message queue to the Neutron server and its database. Operationally it can be a bit challenging: you have a lot of agents running, speaking different protocols; the protocols evolved at different times, they're not always the same, and the payloads can be vastly different. With Astara we simplified this a little. The L2 agent is still there, because we wanted to remain layer-2 agnostic and provide layer three and above. Astara communicates directly with the Neutron server; Astara itself is a plugin into Neutron. So you have the standard Neutron API, but behind it you have Astara managing layer three.

An alternate way of looking at it: in the Neutron reference architecture, you have your hypervisors and your network nodes (in this case we've got one up there). Typically the network nodes can become points of saturation, points of failure, points of congestion. With Astara it's a little different: since we're orchestrating the network functions within the deployment, we can actually run them throughout it. In this case we're showing them run as service VMs, and if you notice, they're actually spread throughout the deployment. One benefit is that if a network function fails, the impact to
tenants is going to be mitigated a little, because it's localized; the impact is a lot less.

Astara itself supports routing as a service and load balancing as a service. A couple of benefits you'll find over stock Neutron: Astara supports dynamic routing, both OSPF and BGP. More recent releases of OpenStack have added support for a BGP speaker, but we've been supporting BGP almost all the way back since Folsom. It was designed from the ground up for IPv6; in fact, if you install Astara, the management plane is almost always v6. It can run on IPv4, but we decided to be more forward-looking. We also wanted to be layer-2 agnostic. There are a number of layer-2 technologies: your virtual switch might be based on OVS, you might run Linux bridge on the host, or you could be running, say, hierarchical port binding with some of the top-of-rack switching that's available. Being layer-2 agnostic lets you keep different data center deployments consistent, and as you roll out new deployments over time, you can change your hardware mix without having to worry that you suddenly need a different layer-3 solution.

Another way of looking at the architecture: on the left-hand side you'll see the orange boxes for Nova, Neutron, and Astara; those all run in the control plane. The important thing about Astara is that it's an orchestrator, so no elements of Astara actually run within the data path. On the right-hand side you'll see the physical network, with OVS or Linux bridge or some proprietary solution managing layer two. Astara has a small shim which really just talks to the layer-2 interface layer, and in most cases it's actually a no-op. So you get the standard OpenStack APIs, and
then advanced services for routing, load balancing, firewall, and VPN.

The real benefit of Astara's pluggable architecture is that we can support new services easily. In the Mitaka release, we were able to rapidly add VPN as a service. One of the other features we enabled is what we've nicknamed "bring your own network function." One of the challenges with a lot of OpenStack deployments is that once the infrastructure is set up, you're locked into your particular solution for load balancing and your particular solution for routing. But what if tenants need a different provider, a different appliance? With this, an operator can easily enable bring-your-own-network-function: a tenant can say, "I have my own router that I want to run," upload it into the cloud via Glance, and Astara will orchestrate it for them, just as it orchestrates routing everywhere else.

And it's all driver-based. Currently, within the open-source tree, we have support for HAProxy; we have support for nginx, both flavors, regular nginx and NGINX Plus; we support the VPN-as-a-service API via Neutron; and we provide support for routing, with images based on Linux or BSD, or you could have something like a Cisco CSR. It's very easy to plug these things in, and that pluggability is really key: today we have those network functions available, but what if you want to add a new network function? Because of the way the orchestrator is written, it's very easy to write a small driver that teaches the orchestrator how to manage that resource. It's all there; it's all open APIs.
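To give a sense of what such a small driver might involve, here is a minimal sketch of a pluggable driver surface in Python. All class names, method names, and image names here are illustrative assumptions, not the actual Astara driver API:

```python
from abc import ABC, abstractmethod


class ApplianceDriver(ABC):
    """Illustrative driver contract: each network function (router,
    load balancer, VPN appliance, ...) implements the same small
    surface, so the orchestrator can manage any of them uniformly."""

    # Glance image and Nova flavor the orchestrator would boot
    # (hypothetical attribute names).
    image_name = None
    flavor_name = None

    @abstractmethod
    def build_config(self, neutron_resource):
        """Translate a Neutron resource (a dict here) into appliance config."""

    @abstractmethod
    def is_alive(self, management_address):
        """Health check used to decide whether to replace the instance."""


class HAProxyDriver(ApplianceDriver):
    image_name = "astara-haproxy"  # assumed image name
    flavor_name = "m1.small"

    def build_config(self, neutron_resource):
        # Render an haproxy frontend/backend from a load balancer definition.
        lines = [
            "frontend %s" % neutron_resource["name"],
            "    bind *:%d" % neutron_resource["port"],
            "    default_backend pool",
            "backend pool",
        ]
        for member in neutron_resource["members"]:
            lines.append("    server %s %s:%d" % member)
        return "\n".join(lines)

    def is_alive(self, management_address):
        # Real code would probe the appliance over the v6 management net.
        return management_address is not None


# The orchestrator only ever sees the abstract ApplianceDriver surface.
driver = HAProxyDriver()
cfg = driver.build_config({
    "name": "web", "port": 80,
    "members": [("web1", "10.0.0.5", 8080), ("web2", "10.0.0.6", 8080)],
})
print(cfg)
```

The point of the shape: because the orchestrator programs against the abstract surface only, adding, say, a Cisco CSR means implementing the same few methods in a new driver.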
It's all integrated very well with Neutron, Nova, Glance, and Cinder.

Another interesting feature of the Astara control plane, since we designed it for the public cloud use case, is that it was designed to scale both up and out. From an operational standpoint, when we talk about day-two operations, you have the ability to scale up: as you run more network functions, or optionally VNFs, you can add more threads, and since the Astara process is multi-process on a single host, you can add more processes. One of the newer features, started in the Liberty release and rolled out in Mitaka, is that you can also stand up separate Astara instances on different nodes. They're both multi-process and multi-threaded, so you can scale both up and out. And if you notice, the set of VNFs being managed is automatically repartitioned as you add orchestrators; if I were to add a third one, it would get partitioned again. As the set of orchestrators expands and contracts, Astara handles this for you. There's no active involvement from the operator beyond configuring the service and turning it on: the orchestrators talk with each other and partition the set of VNFs among themselves, and similarly, if I contract the set, it repartitions.

Also in the Mitaka release, alongside bring your own network function, which I touched on earlier and which we think is very powerful because it enables operators to provide some really interesting services to their end users, we added active-active appliances (similar to the HA routing pair you'd find in reference Neutron, with VRRP support), VPN as a service, and further refinements to instance pooling. So here's one of the challenges.
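As an aside, the automatic repartitioning just described, where orchestrators divide the set of VNFs among themselves as nodes join and leave, can be illustrated with rendezvous (highest-random-weight) hashing. This is a minimal sketch of the idea only, not Astara's actual implementation:

```python
import hashlib


def owner(vnf_id, orchestrators):
    """Map a VNF to exactly one orchestrator. With rendezvous hashing,
    adding or removing an orchestrator moves only the VNFs whose
    highest-scoring node changed, so a join or leave triggers a
    partial repartition rather than a full reshuffle."""
    def score(node):
        digest = hashlib.sha256(("%s:%s" % (node, vnf_id)).encode()).hexdigest()
        return int(digest, 16)
    return max(orchestrators, key=score)


vnfs = ["router-%d" % i for i in range(12)]

# Partition across two orchestrators, then add a third.
two = {v: owner(v, ["orch-a", "orch-b"]) for v in vnfs}
three = {v: owner(v, ["orch-a", "orch-b", "orch-c"]) for v in vnfs}

# Only the VNFs claimed by the new orchestrator moved; every other
# assignment is unchanged.
moved = [v for v in vnfs if two[v] != three[v]]
assert all(three[v] == "orch-c" for v in moved)
```

The same property works in reverse: removing an orchestrator reassigns only the VNFs it owned, which matches the contract-and-repartition behavior described above.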
Any time you're providing services via VMs, there's always a bit of time between when you spin up a VM and when it's usable. What we're able to do with instance pooling is keep a set of hot spares. In the background, the Astara orchestrator is constantly checking the health of each network function; if a network function degrades, it can spin down that instance, grab a new one out of the pool, and immediately configure it. Your only loss is the time it takes to configure it, and if you're running active-active, the tenant is likely not to see any impact at all.

So, as for Akanda: when you take open-source Astara and pair it with Akanda, the company, for services and support, what we've found over time is that Astara plus Akanda is significantly faster, around 90 percent, as far as setting things up; it scales significantly further; and the cost is significantly lower than some of the other solutions on the market. And with that, since my PowerPoint keeps misbehaving, let's jump to the demo.

What I've got up here, if you look on the right-hand side, is that I'm just tailing the orchestrator logs. That gives you a good idea of the motion and flow of what the orchestrator is doing in the background, a little bit of telemetry. What I've done here is a standard vanilla OpenStack install with Astara enabled. If I go in and look at the network topology, it's pretty simple: we have one router. We have one network.
We have one VM. If I go in and create a new network (got to make sure I give it a name), in the background you'll see Astara working; you'll see the log files scroll. If I click on the router, add an interface to it, and attach the other network, you'll see Astara watching these changes, watching telemetry coming out of Neutron, and proactively configuring the routing instance and attaching the network. Now if I launch a new instance, I can attach it to the network I just created. That's still building, and from there you'll notice the motion: Astara is constantly watching, constantly monitoring, waiting for the VM to be created.

That's how it works: from a user standpoint, you're not necessarily noticing anything different. From an operator standpoint, you can go in, and if we take a look underneath the hood, you'll see the instance is available. Now I'm viewing it more from an admin perspective. You'll see the Astara service appliance running; in this case, because it's running on my laptop, I'm just running one, not an HA configuration. You'll see the management network is IPv6, and it has the private and public networks. That's how we provide routing as a service; for testing purposes, we created a simple Linux-based image for doing routing.

Now, one of the other interesting things, let me switch over to this deployment, is, let's say I've got two deployments here.
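Before moving on to the two-deployment VPN setup, here's a toy illustration of the event-driven behavior the demo just showed: the orchestrator consumes Neutron-style notifications and drives each router toward the state Neutron describes, with no user involvement. Every name here is invented for illustration; this is not Astara's code:

```python
import queue


class RouterStateMachine:
    """Toy per-resource state machine: each Neutron-style event moves
    the appliance toward the state the API database describes."""

    def __init__(self, router_id):
        self.router_id = router_id
        self.interfaces = set()
        self.log = []

    def handle(self, event):
        if event["type"] == "interface.create":
            self.interfaces.add(event["subnet"])
            self.log.append("configured %s on %s" % (event["subnet"], self.router_id))
        elif event["type"] == "router.delete":
            self.log.append("tearing down %s" % self.router_id)


def run(events):
    """Dispatch a stream of notifications to per-router state machines,
    roughly the way an orchestrator worker might drain its queue."""
    machines = {}
    work = queue.Queue()
    for event in events:
        work.put(event)
    while not work.empty():
        event = work.get()
        sm = machines.setdefault(event["router"], RouterStateMachine(event["router"]))
        sm.handle(event)
    return machines


machines = run([
    {"router": "r1", "type": "interface.create", "subnet": "10.0.1.0/24"},
    {"router": "r1", "type": "interface.create", "subnet": "10.0.2.0/24"},
])
print(machines["r1"].log)
```

This is why, in the demo, attaching a network to the router in Horizon is enough: the resulting notifications are what the orchestrator reacts to.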
I have this one here, which again is a very simple network topology with VPN support, and then I have another cloud deployment. Maybe they're two different clouds in different regions, or in different offices or data centers, all connected with each other. If I open them up: if I click here, you'll see this is 157.2, and if I click here, this is 155.3. Via the VPN that I've set up, if I open a console, I can go across the VPN. What's interesting as well is that, because of the way the orchestrator works underneath the hood, as an operator, if I ever want to poke around at what's going on underneath, I can get access to it. So if I take a look, you'll see it's configured a strongSwan IPsec VPN and it's doing a key exchange; these are connecting via a link we have within our lab that replicates a wider network. And just to show you how I set it all up: to provision it, I created policies, service groups, and endpoints.

What's interesting about this feature is that you can use it to bridge an OpenStack cloud into, say, a Google Compute VPC or an Amazon AWS VPC. The reason you can do that with Astara, where you may not be able to with the default reference implementation, is that Astara supports BGP underneath the hood: you can create the IPsec peering and then additionally create the BGP peering session, so you can exchange routes and get full connectivity.

With that, just to wrap up: Astara was designed for hyperscale. It provides a control plane for clouds with a lot of endpoints, whether that's containers or VMs; the use case is very similar when you
have lots of endpoints. It's over-the-top network services: today we have routing, we have load balancing, we have VPN, and in the upcoming cycles we're talking about adding support for additional network functions. The nice thing is you don't have to wait for us: you can write a plugin, and Astara can suddenly become aware of a new service and network function, and it's all open-source APIs. Akanda itself, the company, provides services and support, and we're happy to talk to you about how we can customize, work with you, and provide plugins. Thank you very much.