Yeah, I think we're still trying to get the display working. So let me do a little housekeeping first. How many of you saw that the web page for the talk had instructions for setting up your laptop for this workshop? How many were able to do that? Didn't work. Uh-oh. OK. Was the link broken? That's my mistake, sorry. Apparently I didn't check it. I wrote a book that has all this content in it, so if you just want to leave now and buy the book instead, that works too. All right, so a lot of what we'll do here is conceptual, about the components of OpenStack and how they fit together. So even without being directly hands-on, I think you'll still get a lot out of this. And hopefully we can figure out how to correct the link; I'll also show you a link you can go to that has the information on it. Really, the link in the description was just pointing to the RDO project website, and there are two different installation methods, so we can talk about those, and then hopefully a lot of the information here you can try on your own later. It looks like there's a video camera in the back, so hopefully you can watch back through it again and walk through it. And we'll hand out our cards to everyone, so if you have questions you can get personal support. I mean, you can have Ken's card. All right, so I apologize; this could end up being a little more of a show and tell. The way we'll try to do this, once we have the audio-visual issues worked out, is I'm going to start us off with a brief conceptual overview of what OpenStack is and what it's intended to be. Then Dan is going to walk through the various components that make up OpenStack, show you what it looks like, and talk a bit about what each one's intention is.
And since Dan can't be as hands-on as we'd planned, the way we want to make this work is: if you have questions as we're walking through these projects, whether architectural questions or implementation questions, go ahead and just ask them, so we can address them and make sure that everyone in the room understands what we mean when we talk about Nova, or what we mean when we talk about Neutron provider networks, for example. OK, while we're waiting, it's probably a good idea to do some quick intros so you know who's up here trying to get this to work. So my name is Kenneth Hui. I do technical marketing for a startup in the OpenStack ecosystem called Platform9, which just came out of stealth last year, so I don't expect many of you have heard of us. Prior to that, I worked at EMC, doing OpenStack strategy for EMC. And before that, I was at Rackspace as the OpenStack evangelist, and also as a cloud architect. So I've done both talking to people about OpenStack and OpenStack production design and implementation. So, Dan, do you want to do a quick intro? Yeah, my name is Dan Radez, and I'm a senior software engineer at Red Hat on the OpenStack engineering team, currently working in the OPNFV community to help take OpenStack into the NFV world and have an open platform for telcos and those that want to run NFV-like workloads using OpenStack. So kind of taking it out of its generic cloud capabilities into a very specific use case. And previous to that, I ran a website called TryStack.org for about three or so years, which is a free place where you can go and use OpenStack. It's already installed, it's already running. Right now it uses Facebook authentication, and we're working to move over to OpenStackID.org authentication.
It's a free place to go and spin up instances and work with OpenStack. If you have a Facebook account right now, you can log in through that. And once we move over to the OpenStackID stuff, if you have an OpenStack Foundation account, you'll be able to use that to log in. How long does it take from the time they register before they can actually use TryStack? There's an approval process where we go in and just make sure that you're not a robot creating a bunch of Facebook accounts, and usually it takes 24 hours or so. It's unfortunately a manual process where my teammates and I have to go in and click OK on people. So about a day or so after you register, you should be able to get in. You guys can see that now? OK, let me get OpenStack installing so we can get started. Sorry, I had all this set up before the issues, and when we rebooted, it reset; the demo isn't so happy about that, unfortunately. Before the presentation, do you want to show them the link to RDO in case they want to try? If you'd like to try and follow the quickstart for RDO, it's rdoproject.org, and at the top there's a Quickstart link. So if you have a VM or something running and you want to give that a whirl, I don't know how well the wireless will hold out, but you're welcome to give it a try if you'd like. So if some of you want to try that, and the Wi-Fi holds up, you may be able to get an RDO instance running on your laptop while I'm walking through an overview of OpenStack, so that when Dan starts the hands-on piece, you'll be able to follow along. If not, again, it'll end up being more of a show and tell. I apologize for that, but you should still be able to look back over the video later.
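For anyone following along later, the RDO quickstart mentioned above boils down to a few commands on a fresh CentOS 7 VM. This is a sketch of the Liberty-era flow; the release RPM URL and package names here are assumptions from memory, so check the Quickstart page on rdoproject.org for the current instructions.

```shell
# RDO quickstart sketch (run on a fresh CentOS 7 VM with a few GB of RAM).
# The repo RPM URL below is an assumption -- the rdoproject.org Quickstart
# page has the current one.
sudo yum install -y https://www.rdoproject.org/repos/rdo-release.rpm

# Install the Packstack proof-of-concept installer.
sudo yum install -y openstack-packstack

# Stand up a single-node, all-in-one OpenStack on this machine.
packstack --allinone
```

When it finishes, Packstack drops a keystonerc_admin file in your home directory with the admin credentials for logging into Horizon.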
And hopefully, we'll still be able to explain enough of the concepts and the implementation details that you get a good understanding of what OpenStack does in a production environment. I'm going to roll forward to it. OK, I think we already did this. Yeah, OK, story of OpenStack, there you go. All right, so a little bit of background: do you all know which two companies started OpenStack? Rackspace and NASA, right? I put this up because it's important for setting some context about what OpenStack is, because OpenStack has morphed in some cases, and in some cases there's been confusion about what exactly OpenStack is trying to do. Essentially, this is an email that was sent by an executive from Rackspace. At the time, Rackspace had a public cloud that they were using to try to challenge AWS; this was before AWS was the behemoth that it is today. And they had reached a point where they were having problems scaling that public cloud out, so they decided they would start from scratch and build something new using Python. So Rackspace was building OpenStack to create a public cloud, an AWS-type public cloud, for cloud native applications. At the same time, a group at NASA decided to build a private cloud, because they weren't allowed to use AWS, and they also decided to build it using Python. So Rackspace found out about it and contacted one of the CTOs at NASA, and basically this email started a discussion where they agreed they would create this new open source cloud platform and then essentially give it to the community. And that's how OpenStack started. So, I'll just come over so I can do it. Okay. So essentially the history of OpenStack is that it was an effort by Rackspace, who was doing public cloud, and NASA, who was doing private cloud, to build an open source alternative to AWS.
Which is really important when we later talk about some of the things that OpenStack can and can't do, right? Because sometimes OpenStack gets faulted with, why can't you do something that I can do in VMware? And it's fairly important to understand the context: OpenStack initially was never created to be a challenger to VMware. It was created to be an alternative to Amazon Web Services, and so it made certain design decisions about what it should support and how it should be architected to do that. And now, from those two companies and maybe a few thousand lines of code, OpenStack is obviously one of the largest open source projects in the world. At one time it was the fastest growing open source project; I'm not sure if another project might have surpassed it for that title by now. And it's a continually growing project, with more and more companies and individuals getting involved. So let's talk a bit about what OpenStack is, especially since many of you are still new to OpenStack today. When OpenStack was first released, there was some confusion, in that a lot of people thought it was a hypervisor, essentially a way to run virtual machines in a private or public cloud. And that's actually not what OpenStack is. OpenStack at heart is really an orchestration platform that sits on top of a number of virtualization resources and technologies. It doesn't have a hypervisor of its own; it actually lets you manage any number of hypervisors. What I mean by that is you can use OpenStack today to manage KVM, but you could also use it to manage Hyper-V, or you can use it to manage vSphere. As long as a hypervisor has a driver for OpenStack Nova, which is the compute project, OpenStack can manage it. Now, most implementations today are running KVM, but Rackspace, which has the largest OpenStack implementation in the world,
is running Xen as the hypervisor, which has been kept for some legacy reasons. In any case, what's important is that OpenStack is basically orchestrating and managing a pool of compute, pools of networks, or pools of storage. It takes all of that and, very importantly, gives the end user the ability to do on-demand, self-service provisioning of those resources. So think about what was going on before OpenStack and before AWS, which is what OpenStack is trying to emulate, right? Actually, let me ask: how many of you here are developers or do development? Okay, and how many of you are operators? Okay, so if you've been in this industry long enough, you probably remember the time where, if you were a developer and you needed a VM, you had to send an email to an operator and wait for the operator to get a machine up and running for you. What AWS enabled was to say: operators don't have to be involved anymore from the provisioning standpoint; developers can do it themselves. And that's what OpenStack is trying to do: basically saying, let me take the resources you have in your data center and present them in such a way that developers can actually access them and provision their own resources, using either the dashboard or a set of open APIs. So that's, again, a very important differentiation between what OpenStack can do and what a typical virtualization platform can do. And it does it in a loosely coupled architecture. You see this eye chart here, right? One of the complaints that people sometimes have is that OpenStack looks like spaghetti of various components. But rather than being a weakness, this is actually a strength, right? Because again, OpenStack is trying to be a platform for cloud native applications, which means it needs to be scalable.
And you can't scale a cloud platform if the cloud platform is monolithic. It too has to be distributed and cloud-native-like. So when OpenStack built its architecture, it tried to have this concept of loosely coupled components, so that you can scale the compute independently of the network, and you can scale the storage independently of the compute. And really, the goal of what OpenStack is trying to do with your data center resources is to provide self-service, to do it in a very fast way, and to do it at large scale, right? If you think about why that's valuable, it's the same reason that people find AWS valuable in many ways. Which is, again, in the old days, when you couldn't do self-service and you had to get an operator to spin up a VM for you, it probably meant you could do maybe 10 projects a year at a given cost, right? And maybe three of them are successful, and the other seven fail. Well now, if you can do self-service rapidly and at scale, instead of 10 projects maybe you can do 30 projects for the cost of 10. And if six of those projects are successful, your rate of success is lower, but you've actually created more revenue-generating applications. So that's the reason OpenStack was built: to give developers and businesses the ability to do projects very quickly, and, if you've heard the term fail fast, right? The idea is, start something new; if it doesn't work, just kill it and start over again. That was something that was really hard to do through the traditional data center IT way. With this new way of doing it, since developers can spin up resources themselves, they can do it much more rapidly. And today, if you're a consumer of OpenStack, there are actually three different ways to do that.
One is obviously the public cloud, which is what Rackspace was interested in. Then there is the private cloud distribution model, which is what NASA was interested in. And a third way has emerged, called private cloud as a service, or managed private cloud. So think about public clouds as being multi-tenant, running somewhere else, not in your own data center. Whereas a private cloud is something you install in your own data center and operate yourself, right? Using your own data center resources. Private cloud as a service is actually a middle ground, right? It's single tenant, it can actually be running in your own data center, but you don't actually operate OpenStack; someone else operates it for you. So it's kind of a hybrid model between public and private. And this is from the OpenStack Marketplace; these are the various companies that are offering these different consumption models. So it's really important, if you're a consumer of OpenStack, or a decision maker for OpenStack, to ask which model is most important for me as an end user, right? Is it better for me if I have no capital expenditure at all and do everything off-prem on a public cloud? Or you may be someone who wants to do a private cloud, but you don't have the engineering resources to do it yourself, in which case you can actually pay someone to operate OpenStack on your behalf, while still having the security of having everything running in your own data center. All right, before I go into this last section, before I hand over to Dan, any questions so far about what OpenStack is, what it's intended to be? Okay, no? One thing I do want to say, and I don't know if any of you were at my panel yesterday, is that there is an open debate going on right now in the OpenStack community about what OpenStack should be designed for.
And what I mean by that is, as I said in the beginning, OpenStack was designed to be an alternative to AWS, which means it's really designed for cloud native type applications; it assumes you have commodity hardware, it assumes the infrastructure fails and that your applications are going to handle all the failures. And now there's a definite segment of the OpenStack community that says, well, that's not what we really want OpenStack to be. What we want OpenStack to be is an open source version of VMware. We want it to be able to do automatic failover, we want it to be able to run legacy applications like Oracle and Exchange, versus cloud native applications like MongoDB or other NoSQL databases. So sometimes when you talk to people in the community about OpenStack, some are talking about it as a cloud native platform, and then you talk to other people and they talk about OpenStack as, again, an open source version of VMware. And at this point, it's not clear to me yet which one it will become, or whether it will try to be both at the same time. I have all kinds of opinions about what we should actually do that I won't share today, but I did share at the panel. But if you're deciding whether you want to use OpenStack, that's something to keep track of, right? To see where OpenStack is today, but also where it's likely to be in the future. If you're new to OpenStack, some ways to learn about it: obviously you can go to the OpenStack Foundation website, where there are all kinds of documentation and learning resources. As well, there are a couple of books that walk you through how to use OpenStack. There's a cookbook that a couple of Rackspace guys did, and then there's the OpenStack Essentials book that Dan did, which again walks through how to set up an OpenStack test environment.
And a lot of the material in the OpenStack Essentials book is written directly from this presentation, so they map to each other very closely. There are some differences as OpenStack has evolved release to release, but the core concepts presented here are also presented in that book: walking through setting things up, knowing what the components are, and how they interact with one another. Good, thanks. I'm trying to think of the one before Kilo, whatever that was. Juno? Juno, right? Yeah, Juno. I do the same thing; it's like doing the alphabet backwards, right? Yeah, I believe it's written on Juno, so obviously in two releases there are things that have evolved. For instance, the converged CLI is not in there; back then each of the components had its own CLI, and now there's the converged one, so that's not in there. It's based largely on Packstack, which I'll talk about in a minute, and there are other installation methods available now too. So there are a few updates that could be done, and there are discussions with the publisher about whether or not to do a revised edition to update these things. But the core concepts in there are very much the same, and the same things are presented that I'll get into. Okay, good, thanks. And then another way to learn OpenStack is through the number of user groups that exist. These user groups typically run once a month or once every few months, depending on the group, and they're basically a way to bring together users and vendors who can talk about various technologies related to OpenStack. The OpenStack.org website has a community page; I encourage you to go to that page and find out if there's an OpenStack user group in your area that you can attend. If there isn't one, but you think there's a good core group of people who would want to have one, you can certainly reach out to the Foundation, and there are people like myself, and others all over the world, whose specific job is to help new user groups get started.
So I encourage you to do that. And with that, I want to hand over to Dan. So, this is based on Liberty. The current release that just went GA is Liberty, so what I've got installed here, what I'm going to show you, is Liberty, whereas the book is two releases old, as are the pictures that are in there. This is the current stuff, fresh out of the gate. So, as I talked about before, what we're going to install here is RDO. RDO is Red Hat's community distribution of OpenStack, and what that means is that Red Hat takes all the upstream bits from the OpenStack Foundation on release, brings them down, puts them into RPMs, and then distributes them. So there's nothing specific to Red Hat about RDO at all. It's upstream vanilla OpenStack that gets distributed, and all we're doing is packaging it in RPMs, which is what we know and what we're good at. So we take that packaging expertise and distribute it so that you can get OpenStack through RPM installation instead of through source code. And it runs on Red Hat Enterprise Linux or CentOS, right? Anything RPM based, yeah. And so one is supported and one's not; that's the primary difference between them. The pace that OpenStack moves at is quick enough that Red Hat doesn't carry many patches that we distribute to our customers before a new release comes out. So the delta between the RDO project and RHEL OSP, the Red Hat Enterprise Linux OpenStack Platform, and man, that's a mouthful, is very little. There's some Red Hat branding, and there's support on OSP. But if you install RDO, you're getting an almost identical experience. It's not that there's no support, though; it's community supported. Right, you're right. Yeah, RDO is community supported, and RDO has a very vibrant community around it. There's IRC, there are mailing lists, there's the website full of wikis. So there's a lot of community support. Thank you. It's kind of like Red Hat Enterprise Linux versus CentOS, right?
With Red Hat's OpenStack, you have one throat to choke. If you get RDO, you have too many throats to choke. Is there a Fedora equivalent, where everything's up to date? RDO is the Fedora of OpenStack for Red Hat. So if you're familiar with Red Hat's model, every product that we have also has a community project that we sponsor and help to cultivate. For Red Hat Enterprise Linux, the community project is Fedora; we take snapshots of the Fedora project, and that's what becomes RHEL and what we distribute. CentOS is a little bit of a different case, because it existed outside of the Red Hat ecosystem before Red Hat began to help fund it and keep it alive. What CentOS does is take all of the source code from RHEL and rebrand it as CentOS. So that's another way, if you want to run something as close to RHEL as possible and give it a try: run CentOS. There's very little difference between CentOS and RHEL as far as the bits that come down; it's the difference of community support versus enterprise support. And how close is RDO to trunk? RDO is stable. If you're installing RDO, you're going to be installing the Liberty GA bits, and then when the first stable update comes out, you'll get the security and bug-fix updates. But we do have packages that are built from master; that effort is called Delorean. So within the RDO project, you can get both the stable release and the master release in RPM form. And since these are new terms: what that means is that if you get that master release, you're getting bleeding edge code; someone could literally have put a new patch in an hour ago. Right now, I don't know of any company that's using trunk, the master. Rackspace comes close; their public cloud is two weeks behind master at any one time. But most people are several months behind. Yeah.
Okay, so this was a plug for Red Hat, because I work for them and they pay my salary. So my family really appreciates that, and that's why the slide is here. Now, I talked earlier about Packstack, which is kind of referred to as a test case or proof-of-concept type installer. You can go in and do a Packstack all-in-one install, and it'll take a single machine and stand it up. You can do it inside of a VM, and then you have OpenStack running all inside that VM. What I've used to install here is RDO Manager, and this is an installer in the RDO community that is based on the TripleO project. Instead of the traditional approach of installing the operating system and then installing OpenStack on top of it, it does an image-based deployment. So it will pre-build the images that are run on the OpenStack cloud being deployed, and then push them out. And what this does is mirror the way that OpenStack works. So as we move through here, that will hopefully make more sense, but ask me later if we need to connect some dots there, and I can expand more on RDO Manager. RDO Manager is the community project for the supported Red Hat installer in our OSP release. The architecture of what I've installed, and what we're going to use, is based on three nodes: there's an instack installer node, there's a control node, and there's a compute node. instack is just that: it's the node that handles the installation for us. The control node is where all of your APIs are going to live, and what you're going to connect to to interact with OpenStack. And the compute node is the hypervisor; it's where the VMs that get spawned will actually live and actually be running. So let's walk through the components of OpenStack that we're going to look at. Authentication is handled by Keystone, or Identity.
This is where users and tenants and roles within OpenStack exist. We'll get into all of these components in more detail one by one as we walk through OpenStack; I'm going to run through them quickly here. Then we're going to hit Cinder, which does block storage, block volume management. We'll go to Neutron, which is the OpenStack networking component; it handles the virtual networking within the OpenStack cloud. We'll hit Nova, which is compute, or hypervisor management and scheduling. After that we'll look at, I think I misspoke earlier: this one back here was Glance, which is image management. Those are the images the instances run off of. Then Cinder is block storage, volume management. And then Swift is object storage. So those are two different types of storage that OpenStack can use. And then the way that we're going to look through all of it is in Horizon. Horizon is the dashboard, the web UI to OpenStack. So we're going to interact with all of this through it, so it's more eye candy and you're not just looking at gobs of text floating by on a screen, which is how some people like to work in the dark, I suppose. But in a setting like this, it's more fun to see the web UI. Ha ha ha, that was funny. Come on, guys. There we go. So let's start with the dashboard. It's a web-based interface for managing OpenStack resources. The team that builds the dashboard had a commitment that any core project, well, I guess core is no longer a thing now; I don't know what their commitment is anymore. It used to be that there was this thing called core, and a project would be accepted as part of the core set of OpenStack, and the dashboard team would work to make sure there was support in the web UI for all of those core components. With the new governance model that OpenStack has, the big tent model, this idea of core has gone away, and I haven't actually learned what the new way is that the dashboard team commits to having parts in there.
It's a modular plugin design, so it's intended to make it easy that when new projects do need web interface support, they can drop in a module and it'll magically appear within the framework. They've just rewritten it from Django to Angular, so there's kind of a big change that's happened under the covers to the dashboard in Liberty, this release. And I've already discussed that last point; it's moot at this point. So let's see if we can log into this. I had my web browser up earlier, and I'm going to stop making excuses about that now, I hope. Can you guys see that? Yeah, this is Horizon, the dashboard. So this is our initial login here, and when instack, the installer, does the install, it puts a couple of files on that instack machine that have the password for the admin user. So what I'm pasting here is a pre-generated admin password. And so now I'm logged into OpenStack as the admin user, and you can see there's information about usage; there's an interface. So let's move forward with what we can actually do with this. We'll start with identity management in Keystone. This is where our users are going to live, and how we're going to delegate roles to projects within OpenStack. Keystone is a centralized identity service: across your cloud, you're going to check into Keystone for identity management. You can do multiple forms of authentication, so you don't just have to do the username and password in Keystone's database; you can plug in OAuth or AD, and I think there's Kerberos support at this point. They're forever adding more authentication support. So whatever kind of authentication you want to plug into Keystone, if it's not there now, it's probably in the works, at least the most popular ones. It's also a centralized catalog of services. What this means is that all of these components that we're going to look at have to be registered with Keystone so that they can communicate with one another.
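You can inspect that catalog directly with the converged CLI mentioned earlier. A minimal sketch, assuming you've sourced the keystonerc_admin credentials file that the installer generates:

```shell
# Load the admin credentials (sets OS_USERNAME, OS_PASSWORD, OS_AUTH_URL, ...).
source ~/keystonerc_admin

# List every service registered in Keystone's catalog (nova, neutron, glance, ...).
openstack catalog list

# Ask Keystone for a token -- this is what components use to
# authenticate requests to one another.
openstack token issue
```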
So each of the components is actually authenticating tokens from Keystone to be able to talk to one another. When you put in a request to OpenStack to do whatever it is you're going to do, the component you're talking to will probably have to interact with other components, and to do that they have to know about each other and be authenticated with each other. So Keystone is kind of the glue that puts all that together. So let's use Keystone to add a user. I'm actually going to start by changing the password for my admin user, because we'll have to log back in later. So here's my admin user; I'm going to jump in and change the password to "openstack" so you guys can all hack my cloud. Ha ha ha, come on, that was funny. Just to make something clear, too: remember I said that the origin of OpenStack was Rackspace wanting to create a public cloud. So think about what a public cloud needs. It needs an admin user who's a superuser for the entire cloud, but the assumption is you're going to have multiple tenants, which means you need a way to group resources such that only certain users can access those resources, without impacting other users who are using the same cloud. So right now Dan is logged in as a superuser. He's going to create a subordinate user who has rights to certain cloud resources, and typically you're going to group them by tenant. In the case of OpenStack, though, for some reason we don't call them tenants anymore; we call them projects. So when you see a project, think tenant. They're essentially the same word; a tenant and a project are just a grouping of resources in OpenStack. Tenant would have been a better word to keep using. So, I've logged back in as the admin user now, with a password that I can actually remember and don't have to copy from somewhere.
I went back to my Identity panel, which was over here on the left-hand side, and clicked my Create User button, which was right here in the top right. I filled in a username, I've put my email in there so you all can tell me how much you love me later, and put in a password that's the same, so you can hack both users. And then here's the project here. This user has to be a member of a project to create resources. So generally speaking, if there's one user in that project, you create the project with the same name as the user; if there are multiple users, you can create a project name that's relevant to those users. So I've added the name for the project there, you can see OpenStack populated the project name, and Create User. Now we have a new user. I can log out as the admin and log back in as the user that I just created. The thing that you'll notice is that over on the left-hand side of the screen now, the admin panel has gone away, and there's a bunch of panels for managing all the different kinds of resources in OpenStack. So let's move forward into more of those and keep looking at them. When an instance launches, it needs a disk. It needs something to run off of, an OS. The way OpenStack does this is with images. Glance is our image management system, and it's just a registry of disk images for VMs. What you do is pre-bake these images and then stick them into Glance, so that when you launch a VM, you can pull out one of these images and launch the VM directly off something that's pre-baked. They're all over the internet, so if you search for an OpenStack image of the particular flavor of what you want to launch, you'll find them; most of the popular distros have them. I've downloaded one, which is a testing type image.
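The same user-and-project creation just done in Horizon can also be done from the converged CLI. A sketch, with the user name, password, and email as placeholder assumptions; the `_member_` role name is the default member role in RDO installs of this era:

```shell
source ~/keystonerc_admin

# Create the project (tenant) first, then a user who belongs to it.
openstack project create danradez
openstack user create --project danradez --password openstack \
    --email dan@example.com danradez

# Grant the user the standard member role on the project
# (role name is an assumption -- check `openstack role list` on your cloud).
openstack role add --project danradez --user danradez _member_
```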
The backing store is also configurable. Out of the box it just puts images on the local disk, but if you want to put them on shared storage or somewhere else — if you have a lot of images, or some big ones — you can configure that. So as my non-privileged, non-admin user, I'm in the Compute section here, and this link here is Images, so I'm going to click on Images and say Create Image. Create Image really means I'm creating a record of an image in OpenStack; I'm not actually building the image at this point. CirrOS is the name of the image, and like I said, it's a testing image. It's not really intended to be used for production. It's very small, very lightweight, and very insecure, so it's good for demonstrations like this, or to test a cloud you're trying to build, but it's not good for much else. I've got a copy of it on my laptop here, and I'm just going to upload it into this cloud. The format is a qcow2 image, so I've given it a name, selected the file I'm uploading, and selected its format. If you download an image off the internet, it should tell you what format it is. So it's going to upload this image, and it'll be registered in OpenStack so that we can then launch an instance off of it. The next thing you need to launch an instance, after an image, is a network to put it on. So the next thing we're going to look at is Neutron, which is OpenStack networking. It's networking as a service: it builds virtual networks, and the idea is that you can isolate a network from project to project. My user will have its own network space, another project will have its own network space, and how you route traffic in and out of those is what Neutron allows you to do and allows you to configure.
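Registering the same image from the CLI looks roughly like this — a sketch, with the CirrOS version, download URL, and image name as assumptions:

```shell
# Download a CirrOS test image (version and URL are illustrative).
curl -O http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

# Register it in Glance; qcow2 is the disk format of this file,
# and "bare" means there is no outer container format.
openstack image create --disk-format qcow2 --container-format bare \
    --file cirros-0.3.4-x86_64-disk.img cirros
```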
Again, it's modular, so if you don't want to use Open vSwitch, which is what's under the covers by default, you can plug vendor X into Neutron and use their integration. And I just talked about the tenant isolation there. So, adding a network for this instance now — is this big enough? Can you guys see this? The screen got kind of squashed. Networks, Create Network; for the network name I'm going to call this "internal," which will become clear in a minute, and I'm going to give it a 172 address, showing that it's a private network. Then, within the network we're creating, we want a DHCP agent, so at the top of the screen it shows Enable DHCP, and then a name server. Because DHCP is supplying the IP address, you also give it a name server, so DHCP can tell the instance where to resolve its DNS once it comes up. So now I have a network called internal. Yay! You guys say that with me. Yay! There we go, much better, thank you. All right, we've got a network. So now we have an image we can launch and a network we can launch the instance on, which brings us to launching an instance, and that's handled by Nova, our hypervisor management. Nova manages compute resources: you can have lots of compute nodes, Nova knows about those nodes, they all check in to Nova, and when you launch your instances they get scheduled across them, so it's providing virtual machines on demand. It's kind of like Ken was talking about earlier, how Amazon changed that game: if you needed compute resources, it used to be you'd send an email and wait a couple of months while they ordered the server and racked it and whatever else they had to do. Now you push a button and get an instance on demand, right?
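The same network can be created with the `neutron` client. A sketch, with the subnet range and DNS server as assumptions (DHCP is enabled on subnets by default):

```shell
# Create the tenant network...
neutron net-create internal

# ...and a subnet on it; DHCP is on by default, and the name server
# is handed to instances via DHCP when they boot.
neutron subnet-create internal 172.16.1.0/24 \
    --name internal_subnet --dns-nameserver 8.8.8.8
```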
It gets spun up in the cloud, and it's designed for horizontal scaling: if you need more compute power, you put more commodity servers in the rack alongside the ones you have, tell them where Nova is so they can check in, and you have more compute resources using simple, off-the-shelf servers. So let's boot an instance. I'm going to go back into the Compute panel and select Instances, then Launch Instance, and it gives us this dialog with a ton of information. First of all, we need to give it a name; I'm going to call it "first-instance." For the flavor, you can see on the right-hand side how the flavor details change; all that's showing is how many vCPUs, how much memory, and how much disk get allocated to a particular instance when it's launched. You can launch more than one if you want with the instance count. Then there's the boot source; there are a few options, and we're going to use Boot from Image, which pulls an image from Glance. When I select Boot from Image, my CirrOS image is sitting there, and I can specify that the instance boots off the image I imported, so that when it comes up, it's running off the image I expect. Next we'll move to Access & Security. The paradigm in cloud computing for getting into an instance, a virtual machine like this, is to use SSH keys. Right now I have no key pairs available. I could generate one, but I'm just going to add my own from my machine so I can get into the instance later. Not right now. (We did put the instructions for how to install this on your laptop, but we didn't realize the link was broken. We'll go back and fix the link. Yeah, and I'll probably put the slides on SlideShare too, so if you search my name on SlideShare, I'll have them up there. Maybe we'll put them on Ken's too, so search for either of us. Sure.) All right. And then you also need a security group.
Security groups are built-in virtual firewalls for your compute, so I'm just going to leave it on default. You can have multiple, differently named security groups if you want; we're just going to operate out of default for simplicity's sake at this point. And if we look at the Networking tab, you'll see that the internal network I created is already selected, because there's only one. If I had multiple networks in my tenant, I'd have to specifically choose a network, but since there's just one, it's selected automatically. So if I launch this, what's happening is that Nova goes out and communicates with the hypervisor and spawns a VM; it goes back to Glance and gets the image we uploaded, pulls it over to the hypervisor, and puts it in place so the instance can launch; it creates the virtual ports in Neutron so all the networking gets set up; and it launches the instance. So the instance will come up and be running, and hopefully OpenStack won't make a liar out of me, right? And just so you can see, this is actually fairly simple. A couple of things to keep in mind. One is that this is supposed to be an interface for a user, not the operator, right? A developer can really use this interface. It was actually a little more complicated here because Dan was showing you how to create the networks and the images, but in a real production environment, the operator — the superuser admin — might have created all the images already and probably all the networks, and as an end user, you're not creating those things. You're basically just saying: I want an instance, I'll choose this size, this network, this image, and then you let it go. So it's actually even simpler than what Dan was demonstrating.
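The whole launch dialog collapses to a couple of CLI calls. A sketch, assuming the image and network created earlier, a public key at the conventional path, and the stock m1.tiny flavor (the key pair and instance names are assumptions):

```shell
# Import an existing public key so we can SSH in later.
nova keypair-add --pub-key ~/.ssh/id_rsa.pub mykey

# Boot the instance; with a single tenant network, Nova attaches it
# automatically, just like the dashboard did.
nova boot --flavor m1.tiny --image cirros \
    --key-name mykey first-instance

# Watch it go from BUILD to ACTIVE.
nova list
```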
Yeah, and if you go to TryStack.org, you get a real end-user experience, because a lot of the things the operator or administrator would have done are already done there. If you log in and launch instances there, you'll find some of the things we're going to get to already created, some of the images already loaded. So this is kind of going over what each of the components is, but you're right: we're basically showing what an operator would do as well as what the end user would do. Now, the network we created is an internal network, and to be able to get into this instance, we need to create external access to it, because that internal network is a little island. It's an isolated network that other people can't connect to. Your instance is on it, but it's not a routable network. So we're going to add an external network, and this, as Ken was just saying, is something done by the administrator. I'm going to sign out of my non-privileged user and back into the administrator, and we're going to manage networks. You can see that, as the administrator, you can see my non-privileged user's internal network, but we have to create that external access. So I'm going to create this network and call it "external." This is kind of a general-purpose network, so I like to put it in the service project. The service project is one that all the components of OpenStack are members of, and it's not a project any end user has access to. The way this external network works, putting it in this project means end users can't connect directly to it; they have to go through a router, and we're going to create that in just a minute. So trust me for a sec on why it goes in the service tenant.
By default, RDO comes out of the box using VXLAN, so I'm selecting that, and then the network has to be marked as an external network. That bottom checkbox there says this is an external network, not intended for VMs to connect to directly; it provides external access that needs to be routed in. So we create this network, and you'll see right away that the internal network I created already has a subnet on it, but the external one doesn't. You would work with your network administrators to get the information about the subnet. So I'm going to create this subnet, and the numbers I'm putting in here are specific to a vanilla installation of RDO Manager. These aren't numbers you'd conjure out of nowhere; they'd be provided by the network administrator who configures the underlying physical network your machines run on. The thing that's really important about an external network is to disable DHCP, so I've unchecked Enable DHCP. Then you'll notice that for the network address I put in the whole /24 block — the whole block of IPs that the addresses you want to associate with instances will be part of. An allocation pool is a subset of that. Say I only have a hundred IPs I can give to my instances for external access: they're part of a larger network, and you have to specify that whole network, but then you tell it which addresses within that larger network you're specifically allowed to use. So in my allocation pool I'm going to put a range of IPs that, again, would be provided by the network administrator, who would say: you're part of this larger network, but use only this subset. These become a pool of static IPs that will be assigned to OpenStack instances.
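The external network setup maps onto the `neutron` client roughly like this — a sketch where the subnet and allocation-pool numbers are illustrative stand-ins for whatever your network administrator provides:

```shell
# Create the external network in the service project and mark it
# external so only routers, not VMs, attach to it directly.
neutron net-create external --router:external=True \
    --tenant-id $(openstack project show service -f value -c id)

# Subnet with DHCP disabled, handing out only a subset of the /24
# as the allocation pool (numbers are illustrative).
neutron subnet-create external 192.168.122.0/24 --name external_subnet \
    --disable-dhcp \
    --allocation-pool start=192.168.122.100,end=192.168.122.199
```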
So now we have the external network and an internal network; let's see what that looks like to an end user. I'm going to log back in as the user I created, and there's this kind of cool tool, the network topology view, where you can visualize the networks you have. You can see here in blue my first instance — this is a representation of my instance — and my internal network, the first network I created, and the line where I've attached that instance to the internal network. Then there's the external network I just created as the administrator, which specifies the block of IPs a network administrator has given us to allocate to our instances. But there's no line between the external and internal networks. We have to connect those with a virtual router. Here you'll see there's a tab for Routers. I don't have any created, so I'm going to create a router. Routers are specific to your project, so I'm naming it the same as my project. There are two connections we have to make on that router. If we look back at the topology, you'll see that now there's the external network, there's a router that's just not being used at all, and the instance is still connected to the internal network. On that router, we make two connections: one to the external network, which is called a gateway, and one to the internal network, which is called an interface. So on my router, I can go over and say Set Gateway, select my external network, and that creates the connection between the router and the external network. Then on the router there's an Interfaces tab, so we select Add Interface, I select the internal network, and that creates the connection from the router to the internal network. So now, if we go back to our topology, you'll see that the lines connect all of the pieces.
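The router steps are three commands in the `neutron` client — a sketch, with the router name and subnet name as assumptions carried over from earlier examples:

```shell
# Create the project's router...
neutron router-create danradez

# ...hook its gateway to the external network...
neutron router-gateway-set danradez external

# ...and plug an interface into the internal subnet.
neutron router-interface-add danradez internal_subnet
```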
So we have first-instance, which has a 172 address, a private, non-routable address. It's connected to the internal network and got its IP from the internal network. Right above it is the router, which is connected to the internal network, creating a connection from the internal network out to the external network. What we're going to do next is take one of those external IPs and map it to the internal IP, so that a connection coming in on the external IP gets mapped through to the internal one. That's called a floating IP. So we're going to go back to Instances, and I'm going to select Associate Floating IP. There are two steps here. One is allocating a floating IP: you first have to request one. I need a floating IP I can use, so that's the allocation, and if you can read this, it says no floating IP addresses are allocated. So I go into my allocation box, say I want a floating IP from my external network, allocate one, and now it says: you have this particular IP address. The port to be associated is the port on my first instance, so this is the mapping: taking the external IP I just requested and allocated to my project, and mapping it onto that internal IP. We do Associate, and then on first-instance you'll see there are now two IPs associated with it: the 172 address, which is the internal IP that's on it, and a 192 address, which is the quote-unquote external one in our demo here. So at this point there is an external connection, and you would think you could take this IP and connect to it. I don't know what happened to my terminal; let me open another one. Oh, it's over here, there's my terminal.
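The two floating-IP steps, allocation and association, look roughly like this from the CLI (a sketch; the address shown is a placeholder for whatever the allocation actually returns):

```shell
# Step 1: allocate a floating IP from the external network's pool.
neutron floatingip-create external

# Step 2: map it onto the instance's internal IP
# (use the address the allocation above returned).
nova floating-ip-associate first-instance 192.168.122.101
```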
So here, if I try to ping my address, we actually can't yet, and that leads back to our security group. By default, a security group lets nothing through. Remember, when we launched the instance, we had to specify a security group for it, and it went into default, and a new security group has rules that say nothing gets in — I think it's nothing gets in, but things can get out. So we go back to the web interface, where we can look at Access & Security, and look at Security Groups. Here's our default security group, where we can manage the rules on it. What I'm going to do is add a rule to this security group and select All ICMP, so ping can get in, and this is an instant change: when I add this, we can go back to the terminal, and you see it starts pinging the instance right away. I can do the same thing for SSH and allow all SSH traffic into this tenant, and now I should be able to SSH as the cirros user to this IP address. Is that text big enough? Can you see that? There we go.
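The same two rules can be added with the era's `nova` client — a sketch, with the floating IP address as a placeholder:

```shell
# Open ICMP (ping) and SSH in the default security group;
# rule changes take effect immediately on running instances.
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

# Then log in as the cirros user via the floating IP.
ssh cirros@192.168.122.101
```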
CirrOS has a standard password, "cubswin:)" — Cubs win, smiley face. In general, you don't get a password for instance images you download off the web, but since this one is for testing and super insecure, we can all celebrate with the Cubs; I'm not sure what "Cubs win" means. So there we are, we're in an instance we've launched, and we've created this external access. The external access is complicated; there are a lot of moving parts, and that's what everybody who starts with OpenStack says. I went through it, and people we've done this presentation with say the same thing: it's a lot of pieces, and it's a complicated thing to start with. And if you're not a networking person — I've been telling people networking is hard, and I really struggled to learn it, specifically because of starting to work on OpenStack — so if it didn't all connect the first time, go back and read through this again, read some docs online. I have a whole chapter dedicated to this in my book, specifically because it's a complicated topic, and it's not something that developers or operators just know by default. So if that went by too quickly or didn't connect right away, I'm happy to answer questions; let's try to get through at least the rest of these components, and then we'll take questions. One thing I would say — do you want to go back to the topology for a second? Oh yeah, sure, topology. So one way that may help you grasp what's going on: think about each tenant as a town, right? The town has private roadways, which are great for getting from house to house within the town, but you can't get to another town, because the private roadways are only internal to that town. So your internal tenant networks are essentially those private roadways. So what did they do, in the history of any country, in order to make it
possible to go from one town to another? You built a highway, and that highway spans multiple towns, and then you connect the private roadways through an on-ramp to that highway, and that's how you get from place to place. So the external network is essentially a highway you've built that goes across all the towns you have, and the router is the on-ramp from your private roadways in any given town onto that highway. Does that make sense, analogy-wise? That's one way to help think about it. How's everybody doing? Anybody still alive? Anybody need a snack? I'd love one too; go get me one. Just kidding. (Question about the old topology view.) Oh, you mean the line topology? Yeah, I miss it too. This one is fancy, but it's more complicated to understand, I think. I agree. As far as I know, there's no way to go back; I think Liberty switched over to this style, and I think we're all just going to have to take it. OK, so we've got an instance.
One of the next things you'll probably want to do with this instance, which has a little bit of virtual space, is attach some kind of storage that is persistent, because OpenStack in general is presented as ephemeral storage, an elastic cloud: instances are intended to be able to disappear and reappear, so if you have something you want to save, you need to save it somewhere other than on the instance, because the instance could be terminated and disappear in the blink of an eye. Of course, as Ken mentioned, there's a lot of controversy around elastic cloud versus enterprise virtualization, the paradigms between the two, and whether instances should be long-lived or quick to die; you can argue about that with somebody else, I guess, or afterwards. So, Cinder is block storage. We can create these virtual, persistent block storage devices on demand; they get stored in a larger pool of storage, and we attach them to instances, so that if instances need to change, or the storage needs to persist over a longer time frame, it gets stored on the Cinder volumes. You have the ability to do snapshots, so if you need some kind of snapshotting, you can do that. It also has a pluggable architecture for its backing store: by default, out of the box, it's going to use LVM and create an LV for each volume you create, but if you want to attach some kind of storage appliance into OpenStack, lots of vendors have drivers you can use for their particular flavor of appliance, and I think NFS is supported as well, so if you had an NFS server — there are lots of different backing stores that can be used. So let me mention one thing a lot of people get confused about. If you work with enterprise storage arrays in particular, you probably think of
a block storage device as a volume on a SAN that multiple servers can access. That's not what Cinder is. The best way to think about Cinder is that it's essentially a USB drive that you plug into an instance and later unplug. The key there is that while it is persistent — if you pull a USB drive out of a machine, the data still sits there — that USB drive can't be shared between two machines. Cinder volumes cannot be shared between two different instances. What you can do is plug one into an instance, save stuff, and then, if you either intentionally or unintentionally destroy that VM, take the Cinder volume and plug it into another instance, like you would with a USB drive. OK, make sense? I bring that up because sometimes people say: well, now I have block storage, which is like shared storage, therefore I should be able to do something like vMotion of instances. But you can't do that; Cinder is not a shareable storage volume. There is, as I mentioned, a debate going on: some people want to push Cinder toward a more traditional SAN-type volume, but that's not there yet. And there's another project in the OpenStack ecosystem that does shared storage, so while Cinder doesn't do it, there is capability within OpenStack at large for shared storage on demand, in a project called Manila. (Question about live migration.) There is some shared storage on the back end within Nova; it has to do with where the instance disks get stored when you configure Nova, and if you configure shared storage across Nova, then you can do those live migrations, but it's separate from Cinder. Yes, it's a little confusing. There are two types of live migration in OpenStack with KVM: one where the data stays on shared storage and you just kind of
move it to another VM, and one where all the data also has to get copied over. So if you use Cinder, you're essentially copying the data over; if you want to do live migration where you don't have to move the data, you have to use something like NFS or some kind of shared file system, potentially. Now, the issue with using NFS is that for certain types of workloads the latency may be too high, or it may not give you the bandwidth, the speed, the I/O requirements that you need. So it depends, and again, that's why all these debates happen and new blueprints are going in all the time. Ceph can be used as a backing store, but if you use Ceph behind Cinder, it inherits all the "limitations" of Cinder — and I put limitations in quotes, because if you're on the cloud-native side of things, you would say that's actually a good thing: you shouldn't need shared storage in a cloud; that's actually anti-cloud. All right, so let's create a Cinder volume here. I'll call it "first-vol"; you guys see how I'm being very creative with the names here.
It's got a size of one gig, and it goes out and creates an LV in the LVM storage on the control node, for this particular configuration. Then what we can do is Manage Attachments: if we select an instance to attach it to — the first-instance we created — it takes this volume we've created and attaches it to that instance, so the instance has access to it. So if we go back out to our CirrOS instance and look at the block devices, you'll see there are two: there's vda, the device CirrOS booted off of, and then there's vdb, and that's the Cinder volume that's been attached. At that point you can treat it as a normal block device: you can go put a partition table on it, put a file system on it, mount it. Then, in Cinder, if you need to detach it, we can go back into Manage Attachments and say: let's detach this volume, and if we go back and look at the instance, now only vda exists. So when we attached it, there was vdb, and after I detached it, vdb disappeared. It's really exciting, right? Should we go through and create a file system on it too? You guys want to see that? Oh, come on. (Question about whether the volume gets mounted automatically.) No, it's just presenting it.
If you're familiar with libvirt, it's just like creating another virtual disk and presenting it to the instance. It'll show up as a block device, but what you then do with it is up to you: if you have a computer and you stick a new hard drive in it, it's not going to get automatically mounted; you have to go put a partition table and a file system on it and put it in fstab or something like that. It's an extra hard drive being attached, essentially. There are a lot of different levels of abstraction going on in Cinder: the storage back end creates a virtual volume; the Linux node — the KVM hypervisor — actually attaches that virtual volume; and then you create file systems and files on top of that, which the VM can mount. So it's different levels of abstraction, and that's why it's not automatically mounted by default. It's using iSCSI to present the volume from the control node to the instance: the hypervisor takes that iSCSI target, attaches it to the instance, and presents it as a block device, so from the instance's point of view, it's a local disk. But if you use a different backing store, it may not necessarily work like that — different hypervisors handle things a little differently — but it's still an iSCSI target in this particular setup with LVM and KVM. VMware does something entirely different. Oh yeah, don't listen to me about VMware. VMware uses a VMDK, and that VMDK is actually an extra level of abstraction: behind the VMDK, it could be iSCSI, it could be NFS, it could be Fibre Channel. So that's a completely different model. All right, so our last component we're going to look at here is Swift, and Swift is object storage. We just looked at block storage, which is
the equivalent of presenting a new hard drive or a USB stick to a machine, where you have to interact with it as a block device. Swift is object storage: it's an API where you have a very plain object — it's file content, and it has a name — and you push it out to your object store and pull it back. We can connect to it through the dashboard and interact with an object store, or we can connect to an object store from an instance, but wherever you're connecting from, it's still just file content; it doesn't keep any extra metadata about it. Swift was the original object store in OpenStack, and it is itself a distributed, software-defined storage solution. Similar to Cinder, you can use a different backing store: instead of the Swift object store, you could use Ceph, or you could use GlusterFS; there are multiple different backing stores you can use behind Swift. So in general, we look at Swift as the API to whatever backing store is used: you can use the Swift object store, which is part of the Swift project, but you have different options as well. Software-defined storage is a distributed, software-based solution for storage, and that's exactly what Swift is trying to do: take these objects and distribute them horizontally across the storage system. There's redundancy and failure-proofing, not just with the Swift object store but with the other backing stores you can use, as well as data replication. Here I'm just going to show you the interface. So in Object Store, we select Containers, Create Container; this would be "first-container." You guys see the theme? Come on, that was funny. Next you're going to tell me I'm funny-looking, right? So we have a container, first-container, and in that container we can upload an object, so I'm going to choose a file.
How about I upload my CirrOS image, just for fun. Whatever file got uploaded could have been anything; I used that image, but it could have been an SSH public key or a text file. The point is that it's just file content, just bits that get put up there with an arbitrary name. The fact that it's 0.3.4-whatever is kind of a moot point; we could have called it "123," or "first-file" if we wanted to be really consistent. And then what you could do on an instance is install the Swift client, connect to Swift, and say: list out my objects in the object store, pull this file down, or add one back in. So it's very simple file movement, without the overhead of block storage. If block storage gets disconnected, there's the possibility of corruption, whereas with object storage, you're looking more at: did the complete file get put into the API or not, did the connection get severed? The block-storage level, where the data is actually written to disk, is handled by the object store, not by the operation you're performing. To give a good example of that: think of it as ideal for when you have a lot of small files that always need to be there, with a lot of concurrent access. The classic example is iTunes. iTunes doesn't use Swift, but it does use an object store. iTunes is a lot of small MP3 files that Apple can't afford to suddenly have unavailable; millions of people could be accessing or downloading that same MP3 at the same time. So that's the use case: typically thousands to millions of small files — image files, music files, whatever it may be — that need to be always available
and accessed by maybe hundreds of thousands of users at the same time. To be honest, I think most OpenStack implementations today do not include Swift. That may change over time, but there just aren't that many applications in the world with those specific requirements; your home drives for your Word documents aren't going to need that level of protection or concurrency. Is that helpful? OK, let's finish up here; we have about ten minutes left. There is a command line, and I talked about the unified command-line project that's been happening. If you go to the command line in OpenStack, you can ask for help, and ask for help on specific commands, and it's all in this OpenStack CLI. We don't really have enough time to get into it, but it's all out there, and all the stuff we've done point-and-click can be automated through CLI commands. There are even libraries where you can hit the OpenStack APIs directly and not even have to use these clients. I talked about instack having the file with the password in it: if you're going to use the CLI, you'd use a keystonerc file, and this is an example of one. You can see the username, the tenant, and the password, and the auth URL connects to Keystone. When you source that file, the bottom line shows that I've got all these environment variables in my environment, and when I connect to OpenStack, it's going to use that information. So, the components we've looked at: we've done all this through the Horizon dashboard; we looked at Keystone for identity management; Glance for image management; OpenStack networking, creating a network for an instance to be on;
then using Nova to launch an instance with the image and the network that were created; and then we attached block storage to it and looked at how we could put data into object storage, so Cinder for block storage and Swift for object storage. These are a few of the components in OpenStack, and there are a bunch. This is the official list from the Liberty release documentation of the components that are part of OpenStack. I'm not going to go through all of these now, but you can see there are a lot of them, and if you have questions about any of them, what they do or how they fit into OpenStack, I'm happy to chat about it later.

You want to go back one second? A couple of things to take note of here. One is, if you go through them, you'll notice that for almost all of them there's an analogous technology within Amazon Web Services. So again, it gets back to the idea that OpenStack is really trying to be an open-source alternative to AWS. And that reminds me: obviously Dan has been trying to show you how things work internally and how they're set up, but the way OpenStack really designs all these things, once they are set up by the admin, there should be very little configuration work that has to be done by the end user. Again, like AWS, it should be a service that an end user just selects and attaches to. Do you have anything to add to that?

Thanks, everybody, for coming; hopefully you enjoyed the snacks and the OpenStack. If you have questions, I'm happy to answer them. I guess we have a few minutes if you want to use the mic, but you're also welcome to just come up and ask questions. So thanks for your time today. You've already had your quota of questions, sorry; somebody else. I'm sorry, go ahead. If you go to rdoproject.org,
there's a big icon that says RDO Manager there. The steps are a little more complicated than just Packstack; Packstack is more of a proof-of-concept, really quick installation, but RDO Manager is much more full-featured, with capabilities to do HA deployments and plug things in for you. So there are a few more steps to get it done, but it's all documented out there, and the script that I used to set up my demo literally just walks through the steps on that wiki page. Ask Packt Publishing if they'll do it, because I would like to have a chapter in there about RDO Manager. He says the book is good; appreciate it. Oh, I actually have a copy of it, so if any of you are interested in looking through it, you're welcome to thumb through it and see if you like what's in it. Yeah, it's a quick read.

As for the Glance image: it's a disk image. Are you familiar with libvirt or Parallels or something like that? When you launch a VM, there's a virtual disk underneath that's backing that image, that it boots off of and that the operating system is installed in. When you write something to disk in that VM, it's writing to that virtual disk image. That's what Glance is holding:
just disk images that you can launch VMs off of. The difference is that in, say, libvirt or some other hypervisor, it gives you this VM and you usually give it an installation method: the first thing you do is bring it up and install something on that VM, and then when you power it off and power it back on, you have something already installed on it. In OpenStack, instead of being given this VM and then having to do the installation, we pre-bake the images so that they're already installed. When you launch one, there are a few things that get generated on boot, like SSH keys and making sure the MACs don't conflict, and a few other specific things, but those images in Glance are generic, so when a VM launches it can use the same image over and over for each of the instances. And your boot time is quick, because you don't have to do the OS installation; it was done ahead of time. So the images registered in Glance are just virtual disk images like you would use in any virtualization system; they're just pre-installed with the OS for you.

In that respect, it is a template. The image that you're booting from has been made generic enough that if you launch it multiple times, it doesn't conflict with the other ones. There's VM-specific information that gets generated on boot, so you can take this Glance image, which is essentially a template of the disk image, make copies of it, and launch instances, and those instances, when they boot, will do what they need to make that their own copy of the Glance image. Yeah, it does; it'll have a different UUID, exactly. It'll be on the hypervisor by default. Glance has a backing store, and when you upload your image into Glance, it writes it there, and then when the hypervisor launches a VM, it goes to Glance and says "hey, give me a copy of this disk image," and it puts it in its storage.
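The flow just described, registering a pre-baked disk image and then booting instances off it, looks like this with the unified CLI. This is a sketch: the image file name, flavor, and network ID are placeholders for illustration, not values from the demo.

```shell
# Register a pre-baked CirrOS disk image with Glance
openstack image create cirros \
  --disk-format qcow2 --container-format bare \
  --file cirros-0.3.4-x86_64-disk.img

# Launch an instance: Nova pulls a copy of that disk image from Glance,
# and per-VM details (SSH host keys, MACs, UUID) are generated on boot
openstack server create my-instance \
  --image cirros --flavor m1.tiny \
  --nic net-id=<private-net-uuid>
```

Launching a second instance from the same image reuses the generic template; only the per-VM data differs.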
And there's actually caching in between that as well. It caches it, makes a copy of it, and then launches the instance off the copy it made. If that instance goes away, it throws away that disk image, but the Glance image that was originally uploaded, and the cached copy on the hypervisor, still exist. So on the same hypervisor, if you launch subsequent instances, it uses that cached copy; but if you launch on a different hypervisor that hasn't cached a copy, it'll go back to Glance, pull it, cache it, copy it, launch it. So it's copying these, like you're saying, kind of templatized disk images, if you will, within the cloud system.

It doesn't get quite that granular, but it does allow you to split some of that stuff out, and I believe that granularity is something that has been asked for and will probably be allowed in future versions. Oh, no, not at all. Yeah, you can, and it'll support bare metal as well, so if you've got bare-metal machines and you want, say, three controllers and ten compute nodes in a cluster, it can do that. The configuration I used here was very simplistic, for demonstration purposes, so it would run on my laptop. Oh, it did? For Packstack? For RDO Manager? Oh, great. You want me to start over? Well, hopefully the video will come up; it was recorded. If not, you've got my card. It's an OpenStack project.
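For the quick proof-of-concept path mentioned earlier, a Packstack all-in-one install comes down to a few commands on a fresh CentOS box. The release-RPM URL below is an assumption that may differ by release, so check rdoproject.org for the current instructions first.

```shell
# Point yum at the RDO repositories (URL is illustrative; verify on rdoproject.org)
sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm

# Install the Packstack installer itself
sudo yum install -y openstack-packstack

# All-in-one: controller, network, and compute roles on this single machine
sudo packstack --allinone
```

RDO Manager is the fuller-featured path for HA and multi-node deployments; Packstack is for trying things out.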
It's faster because you pre-build everything. You know how we loaded an image into Glance, that image got copied out to the VM, and the VM launched? It's very similar to the way TripleO works. The instack machine, that installation machine, is an all-in-one OpenStack installation, and you load Glance images into there. What it does is discover bare-metal machines in a way that it can then write a Glance image out to a machine. So instead of having to install the operating system, then install the OpenStack packages, then run the configuration, you have the OS and all the OpenStack packages in a Glance image; that Glance image gets written directly, block-written, to the disk of the bare-metal machine; it boots; and then it does the configuration after that.

There's an active debate; there's not a consensus. Some people think TripleO is actually not a good idea. Why is it not a good idea? For the same reasons people debate elastic cloud versus enterprise virtualization; there are multiple views. No one's ever accused OpenStack of being too easy, but some people are saying, and I'm not saying it's a good argument: OpenStack is hard to install as it is, so why would I want to use something hard to install in order to install something that's hard to install? Right, and it's a fair argument: OpenStack is really hard, so it doesn't seem to make logical sense to take the very thing you said is hard to install and use it to make installation simple. Does that make sense?
That's all I'm saying: with everything, there are people on both sides of the fence, so it's what works for you.

First I want to ask: we are considering whether we should use OpenStack to support NFV. From what I see, OpenStack is not by any means fully mature, so the question that comes up is: is it mature enough to support NFV, a pretty critical application? I think the bigger question there is whether or not the SDN controllers are mature enough to support it. That being said, OpenStack has become mature to the point that it's running in production all over; it does very well at running instances. Typically NFV is one of the cases that's not yet in production, but not yet; it's the networking technology that sits underneath. And two, what do you mean by a mature, critical application? Telcos haven't deployed it because the networking support for NFV isn't there; that's my view of why telcos haven't adopted it. The OPNFV project is basically a bunch of telcos that got together and said, let's create an NFV platform. So at this point we're working with the controller projects to try to help get the NFV capabilities in. Something like service chaining is not anywhere close to being in OpenStack, so having that type of functionality for OpenStack means plugging in a more complicated networking system, like an SDN controller, than what Neutron can do. That's a lot of the work we're doing now: working with those projects to try to add NFV networking capabilities, so that when you launch an instance, when you launch a VNF, it has the underlying network plumbing to pass the NSH protocol properly, to get your service chaining, whatever it is that you need to do in the telco space. I think that's the separation between OpenStack and telcos: not that OpenStack can't run VNFs. OpenStack can run a VNF all day long, but is the network plumbing underneath there to support the networking functions that telcos need to operate properly? That's what's immature, and that's where OPNFV is trying to evolve the market to support telcos.

Is there a difference between OpenStack service function chaining and the service function chaining for SDN? Say that one more time. Is there a difference between OpenStack and SDN? Oh, SDN. Is there a difference? Well, OpenStack doesn't have service function chaining; that's what OPNFV is trying to provide, that networking capability. OPNFV is working with ETSI to follow their standards, so that when OPNFV delivers something, it follows the ETSI standards. So what comes out of OPNFV, and what OPNFV is attempting to accomplish with the SDN controllers, is guided by ETSI.

It looks like you saw the presentation about service function chaining, but you're saying it's more like port chaining or something like that? In the virtual world it is. Each instance attaches into one of those virtual networks that we created; there's another layer of abstraction there, a Neutron port, and the port is the connection between the instance and the network. So in virtual service function chaining, you're taking these ports that Neutron creates and building chaining rules around them, so that when the packets are passed, they go through the classification of the chain via the ports, because all the packets that go between the VNFs are going to pass through the Neutron ports. Yeah, so Neutron ports.

[inaudible] and it's about performance as well: we keep talking about SR-IOV and DPDK, and there's always a concern about the environment, but with DPDK I don't think we can do live migration, right? Yes. I think the NFV standard is still so new that, unless you have a lot of engineering resources, I would guess you have to wait. But this is the thing we have to do; we have to get better at helping with that. So with OPNFV, hopefully, if you run OpenStack you have the community behind it, but OpenStack wasn't originally designed for this. [inaudible closing conversation] Thank you so much; it's a very good thing, and I'm happy to be here. The NFV community is moving in one direction, and the community is very good. I'll ask you for some recommendations when I do a revised edition. I'm David, nice to meet you. Nice to meet you.

I did TripleO; I understand the TripleO challenges. It's a nightmare. It's a different mindset, not traditional installation and system administration as the data center knows it, so to come in and say we're going to do image-based installation now, that's a big shift for the general ops population. It has its advantages and it has its disadvantages versus traditional installation. It's again one of those things: what works right for you is what you need to do. Give me a few seconds. I understand that. If I move away from TripleO, do you guys have another product or tool? It's called, he said, a lifecycle manager or something like that; or we have a deployment manager, something similar to that.
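For reference, the image-based TripleO/RDO Manager flow discussed above boils down to a few steps run on the instack (undercloud) machine. The command names are from the tripleoclient of that era and the JSON file name is an assumption, so treat this as a sketch rather than a recipe.

```shell
# Turn the instack machine into an all-in-one "undercloud" OpenStack
openstack undercloud install

# Load the pre-built OS + OpenStack disk images into the undercloud's Glance
openstack overcloud image upload

# Register the bare-metal nodes that will have those images block-written to disk
openstack baremetal import --json instackenv.json

# Deploy: write the images out, boot the nodes, then run configuration
openstack overcloud deploy --templates
```

This is the trade-off debated above: the installer is itself an OpenStack, which makes it powerful but also a big mindset shift from traditional package-by-package installation.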