Good afternoon, folks. My name is Eric Lopez, and this is the OpenStack Astara hands-on installation and configuration workshop. I have two partners, Phil Hopkins and Shashank, who will be helping around as you go through the labs. There's a handout that has all the access information. You'll be accessing a remote lab, so please boot up your computers, get PuTTY or SSH depending on what type of platform you're on, and we'll log into those systems.

I'm a solution architect at Akanda. We are the main supporters of project Astara. It's a community project; we just got accepted as part of the OpenStack Big Tent at the last summit. Phil Hopkins is with Rackspace and Shashank is from Arista, so we've all worked in this environment and we believe in this solution. It's a great option if you're looking at possible ways to simplify and extend Neutron.

Just a quick question: how many people actually knew about the OpenStack Astara project prior to this? Have people installed their own compute clusters, and have they used any SDN solution other than the reference architecture? Has anybody done NSX, Contrail, ODL? So this is, I think, a more simplistic viewpoint; it's a lot more lightweight than those other solutions. Hopefully you'll get a lot out of this particular environment.

I mentioned who we are: Phil Hopkins is a principal engineer at Rackspace and Shashank is a software engineer at Arista. If you have any questions, raise your hand and they'll help you out during the lab.

Logistics-wise, the slide deck is available as a PDF document. If you can, download it right now if you want to follow along with me. There's also the tutorial, a README file on GitHub, from which you can cut and paste some of the commands we'll be using to install the labs. And for the hands-on lab we distributed a flyer that has all the access information for getting into the environment.

This is our high-level agenda for today; we have a lot to cover. I'll do a high-level architectural overview of Astara, then we'll go over the tutorial, which is installing and configuring Astara, and then the next steps: what we can do in the future, adding more information to the tutorial, and how to contribute to this project. It's all open source and part of OpenStack, and that's key. At the very end we'll have a Q&A session if you have any questions about the overall architecture or about the tutorial.

Our core principles: from the get-go we wanted to simplify how Neutron is run. The reference architecture has all these agents running in different places in your environment. We wanted to stay compatible with Neutron, so we didn't want to replace Neutron or replace the whole networking stack the way some of these SDN solutions do. And we're a fully open development environment: it's the Apache 2 license, so you can modify the code, put it back up on GitHub, or keep it.

One of the key aspects of this platform is our orchestration platform. Like I said, we extend and simplify the environment. Our orchestrator is the control plane orchestrator, and it's logically centralized.
What we mean by that is that we can create a cluster of orchestrators to provide control plane high availability. It's a pluggable driver model, so as new services come along, or however you want to extend it, you can plug in additional features through that driver model. From the get-go this has been a Python process; we use threads, so it's multi-process as well as multi-threaded. And it utilizes the standard OpenStack APIs. We don't create additional APIs: it interfaces with Nova, it interfaces with Neutron, and it interfaces with Glance. So we keep cross-project interaction the way it's meant to be, through the APIs.

If you look at it from a graphical viewpoint, you have the Astara management and orchestration platform, Neutron, Nova, and your physical L2 network. It doesn't matter what model you use to plug the VMs into your physical infrastructure: we can work with OVS, we can work with Linux bridge, or any other proprietary SDN solution that provides L2 connectivity from the instances to your physical environment. That's one of the key parts of the simplicity: it doesn't matter what L2 we leverage. I think Linux bridge is a lot easier than any other solution, so it makes things easier to operate in the long run. If you're running an organization, you have to deal with learning OVS if you're going to use some type of OVS platform, or platforms that build on OVS, and OVS is not as easy as what we've been used to with Linux bridge in the Linux environment as server admins. Or, if you're using a proprietary solution, you have to learn a whole new tool set as well. This is not a black box; it's a white box, so everyone can learn and know what's going on in the environment.

Since we're agnostic to which overlay we support, we utilize the OpenStack APIs, and all the other features we provide are done through service VMs that we spin up in the environment. That's the key aspect of this, because those service VMs are the data plane element in your environment. We don't modify the control plane, unlike some other solutions.

So, your basic Neutron reference architecture: you have your neutron service, which talks to the message queue, your message bus, which in turn talks to all these different agents in your environment. You have the L2 agent, the L3 agent, the DHCP agent, and whatever other advanced service agents you want to run, like VPN, load balancer, and such. In terms of the data path, east-west traffic between L2 domains goes through the network node that gets created in the reference architecture, north-south traffic goes through that network node as well, and the metadata and DHCP services are located on the network node too. So now you have to monitor and capacity-plan for this particular device, and now you have to cluster this particular node.
So you have to figure out how to monitor and maintain it. With Neutron utilizing Astara, we do away with all those additional agents; we replace them with a service appliance that provides those advanced services for Neutron. The orchestrator listens to the Neutron server over RPC on the message queue, and whenever something requests an advanced feature, we intercept that and provision a service VM for that particular service. So the only thing still running is your L2 agents.

If you look at that, you no longer have a network node: you've automatically simplified your architecture. And now you can increase your throughput; your high availability is the service VM utilizing your compute infrastructure. If you need more performance you add more capacity to your compute infrastructure, which also gives you more capacity for your instances, for your customers. East-west traffic between L2 domains goes through the Astara service appliance, and north-south traffic always goes through the service appliance; that's where the routing occurs.

The service appliance is a per-project resource, so each tenant has their own service VMs. One service VM failing would not necessarily affect a whole range of tenants, unlike losing a network node, where your whole environment, or whatever portion of it, gets disrupted. Our service appliance is a white-box VM: it has all the standard open Linux tools that are currently available, it can be extended as you add more features to the environment, and, as you'll notice, the service appliances end up on different hypervisors at different points in your environment.

We're also designed for scale: hundreds of compute nodes, thousands of projects, tens of thousands of NICs. We built in clustering for the control plane, so when an orchestrator goes down there's a cluster of orchestrators still working and providing the resources, and it's all multi-threaded and multi-process. The Neutron resources are run as an HA pair as well, so the advanced services know what to do when a particular service appliance VM dies, or maybe its compute node dies. The orchestrator will know that that node is down, or that that service VM is down because its health checks are no longer seen, and it will spin up a new instance of that service appliance and reprogram it.

Any quick questions about the overall architecture of Astara? Yes: it's a per-tenant resource. So if you spin up a load balancer, technically you can spin up a separate service appliance VM for that particular feature, and in some cases, if you wanted multiple routers for a particular tenant, there will be a service VM for each of those router instances.

Question: if that's for the tenant, and it sits underneath the tenant, the actual tenant itself can't see it, so where would that sit in Horizon? Is it under the cloud? Correct. When you go through the labs, at the very end we'll show you where that lives; it actually lives in the service tenant. The tenant that spun up that router won't see that instance in their particular viewpoint. Any other quick questions?
Question: this is just controlling the resources that we're spinning up for the environment, the control plane orchestration; can we also control the data plane aspect of it? That's the general gist of the question. We don't modify the data plane, since we're L2 agnostic. The only way we touch the data path is that these service appliances get plugged in in the appropriate way in the environment; we're not programming flows on the vSwitches or redirecting traffic, other than through the typical L2 through L7 mechanisms, like ARP resolution and IP.

Question: what plugins do we have in Astara for the data plane, or for the flows just mentioned? Like I said, we don't program flows. The features we provide right now are L3 routers, load balancing as a service, and VPN as a service. And we have a framework, if you come to a talk tomorrow, for providing a sort of per-tenant provisioned NFV environment: bring your own networking elements. Are we going to support firewall as a service? Yes, once the API matures; that's also why we support load balancing as a service v2 and not v1. Now that VPN as a service matured in the Mitaka release, we support that feature as well.

So right now, log into your systems. We'll take a quick look at what is in your deployment. There's a jump host that everyone is going to SSH into. There's one OpenStack controller currently running Keystone, Glance, Horizon, Nova, and Neutron; this is the Neutron reference architecture. We have three NICs in use: the management NIC, the tunnel NIC, and external connectivity. Then we have a compute node configured the same way: management, tunnel, and external. This is one of the differences, too: in the reference architecture, the network node is the only one with external connectivity; in our case all your compute nodes have external connectivity.

Like I said, we'll take a quick five minutes and let everyone access the system. We'll SSH to the jump host. The cheat sheets are missing one particular field: you have to pass -p to tell SSH which port to use, since it isn't the default port 22. astara is the user name, and everyone goes to the same IP address. We're also going to redirect ports 80 and 6080, for when we eventually want to jump onto the nested VM nodes' console page as well as Horizon. That's what the -L options do: they enable port forwarding to the Horizon UI and to the VNC proxy. The password is "Austin summit" to get into that first level. Then, per your sheet, we'll SSH to the controller, root at the IP address on your sheet, and the password is astara to get into that control node. Raise your hand if you're having problems accessing the system and the helpers will work on getting you there. So I'll show you logging into the system.
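For reference, the login steps look roughly like the following. The jump-host IP, SSH port, and forwarded local ports come from your flyer, so treat the values below as placeholders rather than the exact lab settings:

    # from your laptop: SSH to the jump host as the astara user,
    # forwarding Horizon (80) and the noVNC console (6080) back to your machine
    ssh -p <jump-port> -L 8080:<controller-ip>:80 -L 6080:<controller-ip>:6080 astara@<jump-host-ip>
    # password on the jump host: "Austin summit"

    # from the jump host: SSH to your controller (last octet comes from your sheet)
    ssh root@192.168.200.<NN>
    # password on the controller: astara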
Mark, our CTO, is here; I'm not familiar with that particular problem. The question was: will this solve the SR-IOV problem with security profiles, with security groups? Because if you use SR-IOV today you bypass security groups, and now we're moving this outside the compute node. Mark is more familiar with this, and he says it doesn't change the model on that; we don't solve that problem.

Any other questions while people are accessing the documents and the website? The top link should work; or if you go to GitHub you can just go into the file directory and pull it down. So if you go to the tutorial, go to the files, and pull it down from there; the files are located in the same tree.

Has everyone got this information? Can I move on to the next slides? I'll go and show you what we're going to do for logging in. I don't have SSH here, but PuTTY should work the same way; in PuTTY you can also do port forwarding. So, SSH to port 443; I'm not going to do the port forwarding on this particular machine. "Austin summit" gets you to the jump host, and from there, like I said, SSH as root to 192.168.200.x, where the last octet is whatever is on your sheet; in my case it's 188. Then astara is the password to get into that system, and if you do ifconfig you can see we have three interfaces.

The next thing we're going to do is go back to the slide deck, because we're going to verify that we actually have a working OpenStack environment. Has everyone got into their controller with no issues? The first thing we want to do is source the admin rc file under /root, because we're going to validate that all the services are running before we run any of these commands. So I'll show you what's next: source that file, then run nova service-list to make sure that all the Nova services are up and at least one compute is running. These are all nested environments, so they're a little slower. The next command is neutron agent-list, to make sure all the agents are running. You can see this is working; there's a second compute node that's down, but it doesn't matter at this point. We can see that the Linux bridge, DHCP, L3, and metadata agents are all running.

The first thing we do, if you look in the slides, is a neutron net-list, to see that we currently have a blank system. Yes, we're just validating that we actually have a working OpenStack environment. This is the reference architecture, except that instead of having an additional node for the network role, we're running it on the controller. So the first step is neutron net-create demo-net; like I said, we just want to validate that we have a working environment. That created the network, and now we can do a subnet-create, which creates the subnet.

Say that again? Yes, if you look at the README file, that's the tutorial; these slides are just a distillation of it. So what we're doing is verifying our OpenStack deployment. Next we create a router. I'm going to quickly do this first slide and then we'll do it in steps; I just want to show you what we're doing. That creates the router.
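Roughly, the verification steps look like this; the command names match the standard Kilo/Liberty-era clients, and the subnet CIDR and resource names below are just examples rather than the lab's exact values:

    source /root/adminrc           # load admin credentials (path per the handout)
    nova service-list              # all Nova services up, at least one compute running
    neutron agent-list             # Linux bridge, DHCP, L3 and metadata agents alive
    neutron net-list               # should be empty on a fresh system

    # quick smoke test of the reference architecture
    neutron net-create demo-net
    neutron subnet-create demo-net 10.10.10.0/24 --name demo-subnet   # example CIDR
    neutron router-create demo-router
    neutron router-interface-add demo-router demo-subnet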
So we actually have a working Neutron environment. Then we'll boot up a quick CirrOS image; we need the net ID so we know which network to attach that interface port to. These are just the typical Neutron and Nova commands you'd run anyway. We just want to validate that this is an actual working environment, no smoke and mirrors. This is how things work: if you go back you can see that you have tons of these agents, and this shows, on a working environment, how easy it is to convert over to using Neutron with Astara.

From there, like I said, everyone should validate that their environment is working, because in the next couple of steps, as we start configuring Astara for Neutron, if you don't have the daemons running it will just cause problems. So take a few seconds. Any other quick questions, anything I can help with?

Let me switch over to the slide deck so you can see what we're doing in terms of verifying the environment. Like I said, we're just validating it. If you look at the next step, we're going to clean it up: we remove what we just created, and then we stop and disable all the different agents in the environment. We create an override file so that on reboot the agent won't start up, and then we stop it completely. In the step after that we remove those agents from the Neutron database, so they're not running and Neutron is unaware of them. I'll give you a few more minutes to verify the environment and then we'll go through the next slides.

Anyone else having problems? If you do a nova list, do you see a VM instance? When you ran the nova boot command, did you get a success? No? OK, it should be the demo user, and I think demo is the password, if you look in the user rc credential file. Yes, that's fine. Typically this environment has two compute nodes, but since we're resource-limited we only have one compute node per environment. The password is "secret" with an e on the end.

Once everyone has the environment, I'll quickly go over this and then you can follow along afterwards. We're going to clean up our existing resources and then remove from Neutron the agents that we stopped. We'll go back to our controller and stop the running VM. What was the issue? OK: if you do a neutron agent-list, do you see all your agents running? If not, log into the compute node and restart the agent that is down; most likely it's the Linux bridge agent. The IP address for the compute node is 10.0.1.4. What was that? Yes, sorry, that's the other thing: you have to be admin. So like I said, log into the compute node and either reboot it or start the Linux bridge agent.
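Putting the verification boot and the cleanup plan just described into concrete commands, this is roughly what it looks like. The image and flavor names are whatever the lab provides, and the service names plus the upstart override trick assume the Ubuntu 14.04 packages used in this lab, so adjust for your distro:

    # boot a CirrOS test instance on the demo network
    nova boot --image cirros --flavor m1.tiny --nic net-id=<demo-net-uuid> demo-vm

    # then remove the throwaway resources
    nova delete demo-vm
    neutron router-interface-delete demo-router demo-subnet
    neutron net-delete demo-net
    neutron router-delete demo-router

    # stop the advanced-service agents and keep them from starting on reboot
    for svc in neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent; do
        service $svc stop
        echo manual > /etc/init/$svc.override
    done

    # deregister them from the Neutron database (UUIDs from `neutron agent-list`)
    neutron agent-list
    neutron agent-delete <l3-agent-uuid>
    neutron agent-delete <dhcp-agent-uuid>
    neutron agent-delete <metadata-agent-uuid>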
So: delete the interface from the router that we attached, then delete the network, and then delete the router. Once all the Neutron resources are deleted, we stop the advanced service agents, the L3, metadata, and DHCP agents, then do a neutron agent-list and remove those agents from the database. For the L3 agent we need to know the UUID; you have to do this by the UUID of the particular agent you want to delete. Correct, from the Neutron database; like I said, they're not used in the environment. Say that again? Just the Linux bridge agent: like I said, the L2 agent is the only agent we require in our environment. Just a second, let me finish this up and then I'll answer your question: all the advanced service agents, L3, metadata, and DHCP. Quick question: it's 10.0.1.4.

If we go back and do the agent-list, you'll see that we're only running the L2 agents in the environment. You'll notice that the controller also has an L2 agent on it, because one of the functions of the orchestrator is managing the service appliance: we connect an IPv6-addressed NIC to the service appliance, and the controller has a service that manages that element. So the compute nodes and the controller both have the L2 agent; wherever the orchestrator runs in your environment, there will be an L2 agent alongside it. Any other questions? Need any help? Yes, I'll come down. Can you see the slide? No agents showing? Any other quick questions on what we've done so far? Like I said, we're just tearing down the environment, removing things; we're simplifying Neutron at this point. We'll give you a few more minutes. Questions?

So we'll continue with bringing down Neutron, and now we do the Neutron configuration for Astara. This is where we actually start modifying Neutron to interact with Astara. We're going to edit neutron.conf: we change the core plugin that's going to be used to the Python library that we provide for ML2. Then we change the service plugin to tell it what type of resources we're going to provide; this is the service_plugins value, in the astara_neutron namespace, pointing at where that library is located. Then, for all the additional API extensions that we provide, we tell it where the API extension path is. And we tell Neutron to emit notifications on the bus: we set a notification driver and tell it to emit those RPC notifications. Then we edit the ml2_conf.ini file and enable port security as an extension driver.

Going to our system, I'll quickly show what the edits are. vi the file and change the core plugin: we have a lightweight ML2 plugin, but underneath we still use whatever mechanism driver is there. We set the L3 service plugin, we tell it where the API extensions are located, we add that into the DEFAULT section, and then we add the notification driver as well. That tells Neutron how to interact with Astara.
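Roughly, the neutron.conf and ml2_conf.ini edits being described look like the following. The astara_neutron class paths and extension path are from memory of the project docs, not from this lab, so treat them as assumptions and check the tutorial README for the exact values:

    # /etc/neutron/neutron.conf  [DEFAULT]  (class paths are assumptions; verify in the README)
    #   core_plugin = astara_neutron.plugins.ml2_neutron_plugin.Ml2Plugin
    #   service_plugins = astara_neutron.plugins.ml2_neutron_plugin.L3RouterPlugin
    #   api_extensions_path = <path to the installed astara_neutron/extensions directory>
    #   notification_driver = messaging

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    #   [ml2]
    #   extension_drivers = port_security
    #   [vxlan]
    #   l2_population = True    # when tunneling with the Linux bridge agent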
Then we go to the ML2 config, ml2_conf.ini, and the extension driver for port security is there already. Depending on the type of ML2 mechanism driver you can have Linux bridge or OVS; we work with both. And when you're doing tunneling, we want to enable l2_population in the environment. The one thing we want to verify is the Linux bridge section. Yes, we use Keystone; like I said, we're an OpenStack cross-project citizen, we do all of that interaction. In this case the agent information is actually in the ML2 file, it doesn't have a separate file, so we just check that l2_population is enabled. That's all we need to know; that's the Neutron configuration we have to do.

Then we have a Nova configuration, to tell Nova how to plug things into the Neutron resources. Take a minute or so, do those quick edits, and I'll switch over to the slides and show where l2_population is enabled.

For the Nova configuration, we tell it to use IPv6; this is for the management network. It's kind of pointless to use IPv4 for the management port, because as you get into multi-tenancy you're looking at thousands of tenants, so you'd be burning all these IPv4 addresses and restricting yourself; you might as well use IPv6. You could use v4, but we wouldn't recommend it. Then, if you're using the metadata service, you tell Nova to use the service metadata proxy.

The one thing that does change is the policy.json. Currently, attaching to an external network in Nova can only be done through the admin API. We're telling it that the service role can also plug an interface into the external network; that's what the role:service addition to that policy.json rule does. Otherwise Nova is going to say it can't connect to the external network. That's what allows the service VM, when it gets spun up by the service user, to connect to the external networks. At that point we just restart the nova-api server and the Linux bridge agent.

There are a few more things we have to do. We'll create some networks; we just want to make sure Nova gets connected. We'll define the Astara management network, we'll create an external network that we'll be attaching things to, and then we'll download the source. The next step is installing it, which we'll do through pip. Like I said, we're probably halfway done at this point, once we restart the services.

I'll go through the tutorial quickly so people understand what we're doing, and then we'll go through the commands. We restart the services. We create the management network, which is an IPv6 network. We create an external network that all the service appliances will be attached to; you can see it's a 172 address that's going to be local to that system. Then we clone the Git repositories for astara, astara-neutron, astara-horizon, and astara-appliance; those are already located in your root directory.
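A minimal sketch of the Nova-side changes just described; the section placement of service_metadata_proxy and the exact policy rule text vary by release, and the service names assume Ubuntu 14.04 packages, so use the README values where they differ:

    # /etc/nova/nova.conf
    #   [DEFAULT]
    #   use_ipv6 = True
    #   [neutron]
    #   service_metadata_proxy = True

    # /etc/nova/policy.json: let the service role attach to external networks
    #   "network:attach_external_network": "rule:admin_api or role:service"

    # restart the affected services
    service nova-api restart
    service neutron-plugin-linuxbridge-agent restart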
We'll create an astara user and the required service directories: useradd, mkdir, and change the ownership of those directories to astara. Then we go to the code base, /root/astara, and do a pip install . ; that installs the software for astara and astara-neutron.

Then we configure Astara to connect to all the different projects. We tell Astara how to connect to the oslo.messaging bus: where the RabbitMQ host is located, the Rabbit user ID and password. Then the database connection and the Keystone authentication. Then, in that configuration file, we set the management network that it's going to be connecting to, so when the orchestrator spins up it knows which management network to assign to its management namespace, and where the management subnet is located. We define the external network that we're going to attach the VMs to, and then we tell it what type of interface driver to use; in this environment we're using Linux bridge, or you can use OVS. This is the point of configuration; and yes, the controller is the one that talks to the service VM management port. From there we tell the orchestrator where the provider rules for the services are, and if you're doing metadata, how to talk to the Nova metadata service and what the shared secret between the two systems is.

Then we install the appliance, the service VM. It's already been built for you, so there's a qcow2 image ready; otherwise that would take about half an hour to build from source. The flavor: we create a specific flavor for the service appliance. You give it an ID, you upload the service appliance image to Glance with image-create, and then we create a specific flavor for Astara to use when spinning up these service VMs: 512 MB of RAM, a three-gigabyte disk, and the number of vCPUs.

OK, sorry about that. One of the key things about the service appliance, and I'll address that question in a second, is that it uses config drive: we configure the interfaces through it, and it injects a service SSH key into the system. That SSH key lets you SSH directly into the appliance without a password, if you have the key we created. And no, it's not in the admin project: if we do a nova list --all-tenants it shows up there; it's actually under the service tenant. So if you can log into the service tenant, you'll see it in that environment.

In the orchestrator config file you set the UUID of the image you want to spin up as the Astara appliance. Since we uploaded the image, we take the UUID of that particular image and tell it which image to spin up whenever it gets a router call. This is the router section; you'll notice that there are separate sections for the load balancer, and there's a flag you set on the router if you want to enable VPN. Then you set which flavor to use when spinning up that VM. That's just the service provider configuration.

Then we create a dedicated database: you create the database, we populate the tables, and give it access. And then we create a Keystone service for Astara.
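A sketch of the install steps just described; the directory paths, image file name, flavor name and ID are illustrative, and the glance flags assume the v1/v2 client of that era, so follow the README where it differs:

    # service user and directories
    useradd --system astara
    mkdir -p /etc/astara /var/log/astara /var/lib/astara
    chown -R astara:astara /etc/astara /var/log/astara /var/lib/astara

    # install astara and astara-neutron from the already-cloned repos
    cd /root/astara && pip install .
    cd /root/astara-neutron && pip install .

    # upload the pre-built appliance image and create the appliance flavor
    glance image-create --name astara-appliance --disk-format qcow2 \
        --container-format bare --file astara_appliance.qcow2
    nova flavor-create astara 6 512 3 1    # name, ID, 512 MB RAM, 3 GB disk, 1 vCPU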
We create it because, when you start doing the high availability clustering, or when Horizon needs to talk to the service, it has to know where to talk to. So we create the service as well as the appropriate endpoints for admin, internal, and public. Then we create the upstart script for starting it, the logrotate and sudoers files, and we start the Astara process. That's pretty much it; at this point you can start utilizing it. Yes, technically Kilo; Liberty has been out for six months, and Mitaka has the VPN as a service feature and the clustering feature with VRRP, so Mitaka has some good features in it.

Then, under the hood: when you do a nova list --all-tenants you can see the service VM, and if you have that key you can SSH as astara to the IPv6 address and get straight into the appliance and play around with it. We also have a command line: if you do astara-ctl ssh with the router ID, it takes you to that particular router. There's a command line for rebuilding the images as well. I'm sorry to go so quickly, but like I said, we'll go back and help you out through the whole lab and make it a little easier for you.

One of the interesting features: if there's a new service VM image that you want to roll out, you can load it into Glance and do a gradual rollout. You change the orchestrator config file, so any new routers that get spun up use that new image, and then you can use the astara-ctl command to rebuild particular routers: you do astara-ctl resource rebuild and give it the router ID and the new image UUID, and it rebuilds that router instance, that service instance, with the new Glance image. This is where you can see some of the more straightforward operational aspects. So that's under the hood.

I'll quickly go through the deck. For the Nova piece, you go to /etc/nova/nova.conf and add use_ipv6 = True; you can see that the Neutron service metadata proxy is set to true, so the only thing we needed to add was the IPv6 setting. And this, like I said, is the important piece: the policy.json, where the network attach external network rule is admin API only at this point; you replace that with the service role. The Neutron services we mentioned won't be restarted yet, because we've already defined in the file that they should use the astara namespace for those extended services, so we won't start those particular services. We'll kind of skip this section, sorry about that.

We'll do the pip install. Generally you would download the software, like I said, with git clone. We do have packages available: there's a package repository, and we have a PPA for Mitaka. There's the Fuel plugin that we've developed, which is under testing; we have Juju as well, and we're working on an OpenStack-Ansible deployment too. So we quickly add the user; you see that the Git repository is already downloaded, so we can just do the pip install, which provides that astara namespace library, and then we restart the Neutron services. Then we can go back.

Taking those configuration changes that we made to Neutron, now we can actually create the Astara management network and create the public network. If we now do a neutron net-list, you'll see that we have two networks defined, and you can see that they have IPv6 subnets: we have a public network and we have the Astara management network.
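As a rough sketch of those two networks; the names, the IPv6 management prefix, and the 172.16.0.0/24 range below are only examples (the orchestrator expects specific values that come from the tutorial README and its defaults):

    # Astara management network (IPv6); the orchestrator and appliances plug in here
    neutron net-create mgmt
    neutron subnet-create mgmt fdca:3ba5:a17a:acda::/64 --ip-version 6 --name mgmt-subnet   # example prefix

    # external/public network that the service appliances use for their gateway
    neutron net-create public --router:external=True
    neutron subnet-create public 172.16.0.0/24 --name public-subnet                         # example range

    neutron net-list   # should now show both networks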
Yes, this is an admin-defined network, so you do this as admin, because it's not a tenant network. Question: what connects to that management network? The orchestrator and the service appliance. Yes: wherever the Astara orchestrator is located, wherever the orchestrator or the cluster members are running, there's also an L2 agent running in the background, because the orchestrator plugs a management port into the management network, over the Neutron tunnel network. That could be VLAN tagging, it could be VXLAN or GRE, and so on. The one thing it does not need is connectivity to the external network. The compute nodes need connections to the external network, but the controller does not; it just has the overlay network and the management network, that's it, it doesn't need external anymore. So like you said, we're simplifying it.

These are the UUIDs that you'll need when you configure the Astara orchestrator file. So, when you go into the orchestrator config; yes, I missed one step, you actually have to copy the orchestrator config over to the /etc directory, I'll fix that. You'll see there's a public key setting; these are all the default-defined variables. Then you'll see the management network ID that we'll configure, the external network ID, the subnet IDs, the management prefix, and the external prefix, using the values we got earlier. So here's the management network UUID; the external network ID, which is the public network that we defined; and the subnet ID. Yes, that's the default one; if you look at what we defined, that's the default we use in all our configuration.

So we have those prefixes, and we tell it where RabbitMQ is, in the oslo.messaging section; we can just cut and paste what we have there. We go to the database connection in the database section, then to the Keystone authentication part; there's nothing to change there, those should all be default values. We tell it what interface driver to use, which goes in the DEFAULT section. Then we can create an SSH key and upload the appliance image to the system. Once we have the UUID of that image, we edit the orchestrator file, go to the router section, set the image UUID, and define the instance flavor that we created. Now we can move on.

We create the astara database and give the astara user access to it. Then we run the astara db sync, using the config file that we're going to use for the system, and that creates the tables in the astara database. Then we create the Astara service in Keystone.
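Very roughly, those last steps look like this. The database password, config file path, service type, and description are placeholders, and the exact astara-dbsync and service-create arguments should be taken from the README:

    # create the astara database and grant access (placeholder password)
    mysql -u root -p -e "CREATE DATABASE astara;"
    mysql -u root -p -e "GRANT ALL PRIVILEGES ON astara.* TO 'astara'@'localhost' IDENTIFIED BY 'ASTARA_DB_PASS';"

    # populate the tables using the orchestrator config we just edited
    astara-dbsync --config-file /etc/astara/orchestrator.ini upgrade

    # register the service in Keystone (type/description here are illustrative)
    openstack service create --name astara --description "Astara Network Orchestration" astara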
What was that? It worked on yours? OK, that's weird that it fails for some of you. Well, technically we don't need it in this particular lab, because we're not using Horizon, so Horizon isn't going to need to know where the endpoints or the service catalog are located. But generally, like I said, once you start doing clustering you'll need the service catalog, and you'll also need the service endpoints.

Question about the traffic path: it's going to go up to the gateway on that external network and come back down. Like I said, with router advertisements, and since BIRD is actually running in the service VM, we can do some BGP-type things; that will be on the roadmap. But generally, to go externally you need a default route, and it should be there.

So now we have the upstart file and the other configuration files for starting it, and the orchestrator is started up with the changed configuration file. From there we can create Neutron elements. I'll do this as the admin user, or it could be the demo user, it doesn't really matter. We create a private network and the private subnet for it; in some sense we're not doing anything different. We create the router, add the interface for that private network to the router, and set the router gateway to the external network. Then we boot a VM on that private network, create a floating IP, and associate that floating IP with the VM's port. You'll notice the ports that Astara created for the VRRP and the rug service; you associate the floating IP to the VM's port.

If we do a nova list --all-tenants, you can see there are two VMs that spun up: the demo VM, and the ak- router, which is the router instance as it was created. If we wanted, we could SSH as astara to the management IP address, the IPv6 one, using the key that we created, because it got injected into the appliance when it booted up. And it looks like it's not injecting the key on this one.

This is one of the interesting features I talked about, the health checks. If we do a nova list --all-tenants, we can take that UUID and say nova delete. Eventually the health check from the orchestrator will see that there's no VM instance for that router and will restart the Astara router for you; you'll notice that after a few checks the router has been recreated in the environment. So we have these health checks to see: hey, that resource is not available, that service VM is not there, we need to start something up.

That's generally showing, under the hood, how things work: where that service VM lives and how you can SSH into it. So if I needed to rebuild the routers for a particular tenant, it's really easy to kill them and they get restarted, with a new image if you've changed it. Or, if you wanted to do a live migration, you can live migrate people off in pairs.
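To tie the demo together, here is a sketch of the tenant-side workflow and the health-check experiment. Names, CIDR, image, and flavor are illustrative; the floating-IP association needs the instance's port ID, and the UUIDs come from the list commands:

    # as the demo (or admin) user: a normal tenant topology, nothing Astara-specific
    neutron net-create private
    neutron subnet-create private 10.20.0.0/24 --name private-subnet
    neutron router-create demo-router
    neutron router-interface-add demo-router private-subnet
    neutron router-gateway-set demo-router public
    nova boot --image cirros --flavor m1.tiny --nic net-id=<private-net-uuid> demo-vm
    neutron floatingip-create public
    neutron floatingip-associate <floatingip-id> <demo-vm-port-id>

    # under the hood: the router appliance lives in the service tenant
    nova list --all-tenants            # shows demo-vm plus the ak-... router instance
    astara-ctl ssh <router-id>         # drop into that appliance over the IPv6 mgmt net

    # health-check demo: kill the appliance and watch the orchestrator rebuild it
    nova delete <ak-router-instance-uuid>
    nova list --all-tenants            # after a few health checks the router is recreated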
Any other questions? Do you need help? We can show you, if you're interested, how to contribute: where the source code is located, the project status, the documentation, and the weekly IRC meeting we hold for the group. We're all on the IRC channel, so if you want to talk to us it's #openstack-astara. Like I said, we're working on different integration points: we do have a PPA, we're working on the Fuel plugin, there's Juju, and I'm working on the Ansible aspect.

We do work with Arista and Cumulus as partners in this environment. If you really want to get into it, we have a very interesting fishbowl session tomorrow at 9 a.m. on extending the Neutron advanced services. This is where a tenant could bring in their own network function service VM and not necessarily have to rely on the cloud network admin to provide those additional services; there's a framework that we've developed, and are still developing in Mitaka and Newton, to extend those advanced Neutron services for the tenant. Mark McClain, our CTO, will be doing a deep dive on project Astara tomorrow at 11 a.m., and then we have a meet-up on Friday.

I'll be around for a bit, since this is the very end of the day, so if you want to continue on and need help, we're here for you.