All right, just so everyone knows, this is a very 101 talk. So if you're expecting something a little more technical in depth, this isn't it. If there was something else you were trying to decide between, this is 101; I just want to be upfront with everybody. And we do have another minute, so we'll give folks another minute to arrive. I do tend to talk fast. I have left time at the end for question and answer, and I will try to go slow. I don't think I've ever had this packed a room except for the Git and Gerrit lunch and learn. This is cool.

All right, for those of you who don't know me, my name is Amy Marich. I am currently a principal technical marketing manager at Red Hat. A lot of you may know me as Spotz from IRC. I'm core on a couple different OpenStack projects, and I currently sit on the OpenStack board of directors as an individual board member. I didn't even put a slide for myself up here.

So, agenda, because it's important. We're going to start off with an overview of OpenStack and how the different projects work together. I do tend to slip between project names and service names; I will try to be good about that. So first off, a little bit of history of the community, then the history of the project itself, how it works, and what services you need in a basic deployment. We're not going to go into every service available in OpenStack, just the most commonly used ones. And then, as I said, we'll have some question and answer.

So we're going to start off with an overview of OpenStack. Now, this is the same image that you're going to see on the OpenStack website. And basically, the definition of OpenStack is that it's infrastructure as a service. It has common APIs; we try to develop everything with the same API, so you send the same exact type of information whether you're talking to compute or Neutron or even bare metal. And more recently, we've integrated some Kubernetes capabilities, and Cloud Foundry.
And we really rely on the OpenStack SDK, which backs our CLI, as well as the Horizon dashboard. And you can use these to manage your bare metal, your containers, or your VMs.

Some statistics from Yoga. This is our 25th release for the community. Now, not all projects are on the regular release cycle, but overall, this is the 25th release. The development cycle was 25 weeks long; we tend to do six months, give or take a little bit, depending on schedules. The number of code changes: there were 13,500 within Yoga. The number of contributors was 680, from 125 different companies and 44 different countries. So our developer base is spread throughout the world, across different companies. And that's kind of what makes us strong: it's not just one company who contributes, it's multiple companies. And even if your company contributes one thing, we consider you a contributor.

So, the history of OpenStack. And I made a timeline, because timelines are cool. OpenStack officially came into being in July of 2010. NASA had a compute project, which is now known as Nova, and it was combined with Rackspace's Cloud Files, which became Swift. So it was originally just compute and object storage. Now, in October of 2010, we had the first release, which was Austin. And in 2011, there was the first OpenStack Summit; it was actually called the Design Summit. And just a fun fact: if you see something that looks like B-E-X-A-R, that is not "Bexar." It's pronounced "Bear," and it's named after the county that San Antonio, Texas, is in.

So the OpenStack Foundation wasn't even formed until two years after OpenStack started; it was formed in 2012. In 2016, we were powering 10 million cores. In 2018, we became one of the top three open source projects. In 2019, which was Denver, I believe, we went from OpenStack Summit to OpenInfra Summit, even though the foundation had not changed its name yet.
And in 2020, for anyone who was around, we celebrated 10 years. We had 10-year parties around the world, because it was pre-COVID. In 2021, as you've seen on some of the signage, we hit 25 million cores. And as I've already mentioned, 2022, with Yoga, was the 25th release.

So, some of the services. Again, this is an image you can see on the site. OpenStack is made up of various projects, and they comprise different areas. The ones we are going to be talking about: Horizon for the dashboard. Then we have workload provisioning, which we're not really going to get into, but if you do do Kubernetes, you might be interested in Magnum, and Trove is databases. There's application lifecycle; we're not going into those. Orchestration, where the most commonly used one is Heat. Compute: Nova, and Zun is for containers. We're going to concentrate on compute, which is VMs. Storage: we are going to mention Swift, which I said was object storage, and Cinder, which is block storage. But there's also Manila, if you want to have shared file systems. Networking: we're focusing today on Neutron, but there's Octavia for load balancing and Designate for DNS. Again, we're not covering Ironic; there are probably some really good talks this week on Ironic. Cyborg is accelerators; I do not know if there are any talks this week on that. The shared services we are going to talk about: Keystone. And Placement is an interesting project because it came out of Nova, and for a while it was separate, and now it's back within the Nova community. It is required to be installed, but we're not going to really talk about it separately, just as part of Nova. Glance we will talk about, which is your images for creating your VMs. And Barbican, which is secrets, we're not going to discuss.

So this is kind of how it all comes together, just concentrating on the couple projects that we're going to discuss today. The Dashboard provides the UI for everything else. Neutron provides connectivity for Nova.
Glance provides the images that Nova is going to use to create your VMs. And then you can store your images in Swift object storage (I've got a typo in "Swift Object Storage" there), or you can use different types of storage for your Glance images; a lot of people use local storage. Keystone, identity, is going to provide the authentication for everything. And Cinder provides the volumes for Nova. And of course, as the arrows point up, it all gets its UI from the Dashboard. And I'm actually going to have to speed up; I'm at eight minutes.

So we're going to go to the Identity service. All right, so the Identity service is the first service you install when installing manually. I'm kind of covering these in the order you would install them in a manual installation, because they all build on each other. If you don't have your authentication, you can't build anything else. The definition of it is that it provides API client authentication, authorization, and service discovery. So if you're looking for your catalog, you're going to actually be querying Keystone.

Just some quick terminology, and I'm going to be really quick on these so we keep moving forward. Your credentials: who you are, and that can be a service or a person. Roles determine what authority you have within your OpenStack cluster; if you've heard the RBAC discussions, roles are very important there. A token is what you get back from the system and what you then authenticate with. The user can be a service, or it can be a person. Service discovery: endpoints. You reach out to your endpoint, which can be an administrative, public, or internal endpoint. And the service itself: compute, Glance for images, and so on and so forth.

Multi-tenant authorization. As OpenStack has become more complex, there are different ways of limiting people to different areas, and that's where domains come in. A domain limits where you have access. It's an optional configuration, so you don't necessarily have to configure a domain.
But once you do, your users and your tenants are within that domain. Now, groups are a great way, if you have different people who do the same function, to put them in a group; when you assign roles and permissions to the group, they apply to everyone within that group. Now, projects: you'll often hear us go back and forth between "project" and "tenant." Same thing. It's the base unit of ownership, so everything is within a project, or tenant. And lastly, regions: for bigger clouds, you might have things in different regions. Again, there is a default region that you get with your configuration.

So this is kind of how you go about getting your token. The user sends their credentials to Identity and receives a token back. They then send the request and the token to the service that they want to utilize. The service then checks back with Identity to make sure you have access to it. Once the service says, OK, they're good to use Glance, it goes back to the user, and then you'll see, like in the dashboard, that you now have access to Glance.

So this is a little more in-depth: the user provided their credentials to Keystone, and then, based upon the type of access they have, their role, they're going to be sent to one of those three endpoints. So if they have administrative access, they're actually going to get a separate section of the dashboard that gives them admin access. And if they don't, they just see the typical user access.

Images. Oh, a question, yeah? Yeah, but they would like you to use the microphone. Is that you, Nils? I love it.

I always wondered, why is both service discovery and authentication in the same service? To me, it appears that those are pretty distinct functionality. Is there a reason behind this?

I think because originally, don't forget, we only had a few different services when we started rolling out. I did have a slide at one point that showed when the different services showed up in the timeline.
So identity was added pretty quickly, because you needed authentication, and you need to know where you're going within the services. So I think that's the basis of why the catalog is attached to the authentication. Because, also, don't forget, we're asking Identity: can we use this service? What is the list of services that are available within my OpenStack cluster? It's only going to report back the ones you have installed. And I think that's part of it, because this is also, as I said, the first thing you're going to install. So then you tell Keystone which endpoints to add and where they are. Does that make sense? (Yeah, okay, thanks.) And for the record, I am not on the Keystone team.

So let's look at Glance. Again, we're communicating through the API. It then checks with the Glance registry to see what's available, checks the database for where it's available, and then it goes out to one of the stores to pull your image.

Compute. Again, we're communicating through the API, and that can be through the dashboard, an API call directly, or the CLI, because they're all going to the same APIs. It communicates with the database. The conductor is actually the main place where you're going to talk to Nova, because from the conductor it goes to the database. So even though we kind of have API calls listed here, the conductor is the gateway, so that not everything has direct access to the database. Now, the scheduler is going to determine where, and on what compute node, you land. And that kind of relates back to Placement, which I said we weren't going to discuss, because Placement keeps track of the resources. It places the VMs, in conjunction with Nova, where they should go based on availability. And then the conductor and the API talk to compute, which talks to the hypervisor to create your VM.

So, networking. Everyone loves networking, right? And we do have some Neutron team in the room.
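Before we dig into networking, here is a quick sketch that ties together the token flow and the catalog question from a moment ago. In the Identity v3 API, the service catalog rides along inside the token response, which is part of why one service answers both questions. The response body below is illustrative: the URLs, IDs, and project name are made up, but the shape follows the v3 token format.

```python
# Illustrative Keystone v3 token response body. In a real cloud the
# token itself comes back in the X-Subject-Token response header;
# the body carries the project, roles, and the service catalog.
SAMPLE_TOKEN_RESPONSE = {
    "token": {
        "project": {"id": "abc123", "name": "demo"},
        "roles": [{"name": "member"}],
        "catalog": [
            {
                "type": "compute",
                "name": "nova",
                "endpoints": [
                    {"interface": "public", "region": "RegionOne",
                     "url": "https://cloud.example.com:8774/v2.1"},
                    {"interface": "admin", "region": "RegionOne",
                     "url": "https://cloud.example.com:8774/v2.1"},
                ],
            },
            {
                "type": "image",
                "name": "glance",
                "endpoints": [
                    {"interface": "public", "region": "RegionOne",
                     "url": "https://cloud.example.com:9292"},
                ],
            },
        ],
    }
}

def find_endpoint(token_body, service_type, interface="public"):
    """Walk the catalog embedded in the token and return a matching URL."""
    for service in token_body["token"]["catalog"]:
        if service["type"] != service_type:
            continue
        for endpoint in service["endpoints"]:
            if endpoint["interface"] == interface:
                return endpoint["url"]
    return None  # service not installed in this cloud

print(find_endpoint(SAMPLE_TOKEN_RESPONSE, "compute"))
# prints https://cloud.example.com:8774/v2.1 for this sample body
```

This is also why the catalog only lists what you have installed: asking for a service type that was never registered simply finds nothing.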
So basically, with the architecture, there are two ways of installing OpenStack networking: there's self-service and there's provider networking. Now, your self-service networking is going to give you more functionality; you're going to be able to do load balancers and other features and be more granular than with the provider network. But the provider network is really easy to install, and it gives more direct access to your VMs, versus having to assign floating IPs to everything.

We've got some time. Okay, so networking actually was originally part of Nova, and it was called nova-network. And then, as networking got more complex, it was broken out into a project called Quantum. Has anyone heard of Quantum? And why do you know of Quantum? So with Quantum there was a trademark problem. So it became Neutron, because we fix problems by renaming things. Quantum was around for maybe six months before we realized we couldn't keep Quantum as a name. So then it was moved to Neutron, and Neutron has been a separate project ever since. Now, Neutron also originally had load balancing as part of it. Over time, as load balancing got more complex, it became a separate project, which I mentioned was Octavia.

So we have our networks, subnets, and routers, and they're all object abstractions; they're all virtual. But we do have different ways of connecting to the hardware itself. ML2 is a really great one. OVS, and we're now moving more into OVN, for more complex networking that works better with the hardware. So as things change, the Neutron systems and subsystems change with them to keep up with the newer hardware.

So this is the architecture. You have your software-defined networking, which is talked to through your API, because again, everything goes through the API. The Neutron server and the plugins, like the ML2, OVS, and OVN I mentioned, talk to the database. Now, there's a message queue with this that actually talks to all the different services.
The most commonly used one is RabbitMQ. And then we have the plugin agents, the layer 3 agents, and the DHCP agent. And that's the whole Neutron networking architecture.

So the dashboard is most commonly what everyone knows. The different public clouds have their own versions that are customized for them, but this is the basic overview of what you're going to see on the dashboard. It's based on Django, if you're talking Horizon. There is a new project, Skyline I believe it's called, that is making its own dashboard, which will bring us to a more recent architecture and more recent programming languages than Django. But the project has just been accepted by the TC, and they still have some things to work on. The interesting thing to note about the dashboard is that all the projects create their own plugins. So if you don't install, say, the Manila plugin, you will not see Manila within your dashboard.

Block storage: Cinder. So again, we're talking through the API. We have the scheduler, the database, and the message queue. But an interesting thing about Cinder is that here you can see the Cinder volumes have different drivers that go to the different storage. So say you have some Dell hardware and you have some HP hardware: you're going to have different drivers, which connect you from your basic Cinder to those storage capabilities. And we also have backup, and backup has a driver and its own storage system, so it is separate from the regular volumes.

Object storage, which, again, originally existed at Rackspace, is Swift. The overall view of Swift is that you have your account, and within your account you have your containers (you can have more than one container), and you have your objects within your containers. And access is totally determined by your account: within the account, you're assigned permissions to the containers and the objects. So, a little bit about the architecture there. Swift can get confusing, because it's distributed into rings.
And then the proxy server is responsible for tying everything together. And a ring represents a mapping between the names of entities stored on disk and their physical location. So think of Swift as: this is my location, but I write everything three times to different locations. So it is backed up, but if you connect to location A and then location C, and they haven't replicated yet, they may not have the exact same information. But you know you're not going to lose it, because it is going to be written three times. That's the advantage of object storage over, say, block storage, which makes sure all the information is there, but it may not be in multiple locations. And as part of the rings, you have your account, your container, and your objects, and everything is broken up like that to make sure everything is replicated three times. And I know I'm going fast, folks.

So, the hardware projects. We have Ironic, which is bare metal and allows you to configure bare metal servers as part of your architecture. Cyborg, which is your accelerators, which is really important especially for the scientific community. For people who are doing networking: yes, you can have your load balancers on top of your infrastructure, but say you have two websites within your infrastructure that you want load balancing for. You're going to use Octavia, and you're going to have one VIP that points to your two VMs, and you get your load balancing without having any extra hardware. And if you don't want to run your own DNS servers or rely on something else, we have Designate. There was actually supposed to be a Designate talk today, and it ended up having to be canceled.

Deployments. Now, there are different ways you can deploy. TripleO, which is also known as OpenStack on OpenStack, hence the "triple O," is one method, and that has an undercloud and an overcloud for your deployment. So that's where we get OpenStack on OpenStack.
So your undercloud is an OpenStack deployment, which then controls your overcloud, which is another OpenStack deployment. Kolla Ansible, which is containerized: very popular. And OpenStack-Ansible. Now, OpenStack-Ansible originally deployed everything in containers by default, and it has now switched back to being more bare metal by default. But with all these deployment projects, part of their complexity is the fact that we tried to make them configurable for everyone's needs. So they are very robust. They can be a little confusing, because, say, you want one type of networking or another type of networking, you want Swift, you want Cinder. All that configurability is within the projects, but it adds some complexity to them.

And that's how you reach me. So we have about eight minutes, give or take, if anyone else besides Nils has questions. Triva, you especially know to go to the microphone.

I noticed that Packstack was missing from the deployment options. Is that because it's more of a POC-focused deployment method, or is there anything happening with Packstack?

So Packstack is more of a proof of concept. You can do development on it. Packstack traditionally comes out of RDO, which is Red Hat's packaging system and project. It's very similar to DevStack, which is more for the development side of things, hence "DevStack," but DevStack can be installed on CentOS or Ubuntu. So they're more proof-of-concept things. For example, TripleO now has an all-in-one, and OpenStack-Ansible has an all-in-one. So if you want to try to run things and see if they're going to work in your environment, or if you just want to learn how to use the CLI or the dashboard, these standalones, these all-in-ones, are great for doing that. But keep in mind, when you're talking DevStack, that is where we're doing development. It can change frequently, so it may work one day and not another day, because we're adding new things in to test. Any other questions?
I know I went fast. Yes?

So regarding Swift, the object storage: you were mentioning that the data was written three times, in multiple locations. Can you elaborate on that just a little bit? Is that like different data centers, different geographical locations?

It can be. And realistically, if you really want your data safe, it should be. But it can be this rack, this rack, and that rack over there, all on separate networking and all on separate power. The idea being that the multiple writes make it safer should there be an issue.

Okay. And it's up to the cloud provider themselves to decide how best to do that?

Yeah. I mean, the minimum is three; you could have ten. You should probably keep an odd number, but the idea being that the multiple writes make it more safe and secure. And I'm going to say the bad word: it's similar to S3, if you think of S3. I try not to use the Amazon words, even though they're more common, because we're OpenStack and we should be thinking in terms of our own services and projects. But yeah, think of S3. You've got your music here; it happens to be in Washington, and then it's down here in Miami, and then it's somewhere else again. So that's an example where you're having them in three different data centers, but it could be row one, rack one; row five, rack three; and so on and so forth. Okay, perfect.

So, this is my email, that's my Twitter, and my IRC handle: I am Spotz, as I mentioned originally. If you are interested in becoming a contributor, during lunch we have the Git and Gerrit lunch and learn. Fungi and I from the foundation will actually be helping people set up their systems so that you can become a contributor to OpenStack. Thank you, everyone, for coming. I will hang out for a few minutes if anyone has any questions.
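One footnote on that Swift replication question. Swift's real placement logic lives in its ring files, but the core idea, hash the object's path and map it onto a fixed number of distinct devices, can be sketched in a few lines. This is a toy stand-in, not Swift's actual ring builder: the device names and partition count here are made up, and the real ring also balances by device weight and spreads replicas across zones and regions.

```python
import hashlib

# Toy stand-in for a Swift ring: five devices, in what you can
# imagine as five different racks. Real deployments have many more.
DEVICES = ["rack1-disk1", "rack2-disk1", "rack3-disk1",
           "rack4-disk1", "rack5-disk1"]
PART_POWER = 8  # 2**8 partitions; tiny, just for the example

def placement(account, container, obj, replicas=3):
    """Map an object path to `replicas` distinct devices, deterministically."""
    path = f"/{account}/{container}/{obj}"
    # Hash the full /account/container/object path to a partition,
    # the same basic move Swift's ring makes.
    digest = hashlib.md5(path.encode()).hexdigest()
    part = int(digest, 16) % (2 ** PART_POWER)
    # Walk the device list from a partition-determined offset, wrapping
    # around, so each replica lands on a different device.
    start = part % len(DEVICES)
    return [DEVICES[(start + i) % len(DEVICES)] for i in range(replicas)]

print(placement("AUTH_demo", "photos", "cat.jpg"))
```

Because the mapping is deterministic, any proxy server can recompute where an object's three copies live without asking a central database, which is what makes the ring approach scale.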