All right, we're gonna go ahead and get started. Thank you everybody for coming. I hope you enjoyed the keynote. I recognize many of you, so it looks like many people have traveled a long way. Today we're gonna be talking about what it is like to be an OpenStack architect, so a day in the life of an OpenStack or cloud architect. My name is Vinny Valdez. And I'm Vijay Chebolu. We're part of a group within Red Hat Consulting called the Emerging Technology Practice, and we're in a sub-practice called the Cloud Practice. We focus on OpenStack, containers generally, and any other platform technologies that help support those sorts of clouds. I'm a senior principal cloud architect, a lead technical architect within the group, and we help design clouds and implement them for customers. And I'm the practice lead in the same group. I have a group of SMEs who work with me helping customers build clouds. Okay. So I'd like to make this a little bit interactive if we can. Just raise your hand and call out: what comes to mind when you hear "cloud"? What are the industry buzzwords? Everybody have enough coffee yet this morning? What is cloud? Infrastructure as a service, very good. Anyone else? Microsoft? No? "To the cloud"? Is that what you're saying? No. I got you. Anybody else? Scalable, like it, yeah. Delivery? Agility? Okay, we like it, yeah. Self-service? Cost, maybe reduced cost? Security? Three years ago? Pay as you go, okay. Either way. Okay, so there are a lot of ideas. We have Internet of Things, NFV. Everything you called out probably falls into one of those categories, right? You talk about agility and self-service; that's your infrastructure as a service and DevOps.
DevOps leads to discussions around containers, because everybody you talk to these days wants containers in some way, shape or form. That leads into the as-a-service models, like platform as a service and software as a service, and everything is kind of associated with cloud in some way or another. And then you have this buzz about cloud-native applications: how do I build applications that can actually run on the cloud? We also have hybrid cloud, right? What does that mean? Well, maybe you don't have enough hardware capacity within your environment and you need to burst out to public providers. And if you wanna do all of that, the answer you hear these days, certainly for the audience that's here, is OpenStack, right? OpenStack is geared towards solving all those problems for you. So we know, we just do a quick DevStack install and you get all of this out of the box, right? Optimally configured, performance tuned? No. The idea here is that you certainly can start with a very simple install, work with the business, iterate, improve, rework. But people looking at that, business owners and C-level executives, are gonna look at the amount of time you spend. And if you haven't properly planned this out, it can be very expensive and very time consuming to arrive at the correct platform. But if we do build a cloud, what does OpenStack give you? The promise that OpenStack brings to enterprises today is this: it was not enterprise-ready three years ago, but today everybody is claiming that it's enterprise-ready, that it's ready for enterprises to adopt. It is part of the mainstream. It's no longer just a pipe dream; it is real and viable in enterprises today. It's supposed to be agile. It is software defined, which means API first. It allows you to reduce cost because it eliminates vendor lock-in. It's all open source, it's open, and it's a scalable platform. Yeah, so very good. We heard most of these ideas from everybody.
So when you think about what an OpenStack or cloud architect is, what is that? Anybody have quick ideas? What does it mean to be an OpenStack architect? Vijay, in your mind? I'll give you the 50,000-foot view. An OpenStack architect is somebody who wants to build a cloud platform to solve everything we saw on the previous slide, right? It's about enabling an infrastructure platform that's geared towards providing agile, scalable infrastructure. That's what an OpenStack cloud architect is. But if we look at the reality on the ground, what does that actually mean? In my mind, an OpenStack or cloud architect is someone who needs proficient, hands-on experience and skills in at least four major areas: Linux system administration, automation, networking, and storage. And you need to not only understand these areas, you need to be able to implement them hands-on. I see people struggle, whether it's consultants or people we work with, in various areas. In almost every engagement, and I'm sure this is true for anybody who's done this hands-on with OpenStack, you bang your head on something; you bang your head on Neutron, right? You try to figure out what the problem is, and in the end it turns out to be something in the environment. It's networking; the switch isn't configured. So you have to be able to troubleshoot that. You have to have that experience. You can't be learning how to use tcpdump and packet captures, even as an architect in my mind, while you're trying to address a higher-level technology when you don't have the skills for the underlying one. So in my mind, it's somebody who is hands-on, not simply a hand-waver or a crayon-wielding, box-drawing architect at the whiteboard, right? That's not an OpenStack or cloud architect in our minds. But how do they actually differ from what came before?
We've had architects in our enterprises for the last 15 years. The way I would describe the difference is this: if you look at a traditional architect, before the word cloud came in, before enterprises started adopting cloud, traditional architects were all siloed. You had a compute architect focused on compute infrastructure. You had a storage admin or storage architect focused on just making sure the storage was right. And you had the networking architect, focused on routing and switching and making sure the network within the environment was good. But they were always siloed. They had different lines of bosses, different owners, and there was always a big wall or fence between them, which made it very difficult for enterprises to work. It didn't give you a scalable, agile environment. A cloud or OpenStack architect needs to be flexible across all of these functional areas. You need to understand the complex integrations of the various storage and networking vendors, how they interact with each other, how to increase performance, and how to meet the actual business needs, which we'll talk about in a minute. A traditional architect didn't have to worry about all of that, because he was only expected to build something to last three years. He would patch once a year. He was supporting a business where the product would only get released once a year, so he wasn't worried about agility and scale; everything was pre-planned. You build, and then somebody comes and uses it. Yeah, so a cloud or OpenStack architect is gonna look at way more frequent builds, maybe daily or weekly. They're gonna enable their users to be completely self-serviceable. They want to develop iteratively and, as I said, daily, and treat infrastructure as code. As a traditional architect, you're mired in business processes and ITIL processes.
You had change management and release management systems. You'd take weeks and weeks to get anything approved, and that's what a traditional architect always had to work with. Cloud architects instead encode process and methodology in code, right? And, back to the fundamentals, everything should be API first. You wanna make it repeatable, automatable, and be able to improve on it very quickly. So, Vijay, understanding this as a cloud architect, how would you go about building a cloud? For me, cloud is very personal to every enterprise. There's no one cloud solution that solves everybody's problems, right? So one of the first things you do is discover what the business goals are. You wanna discover the objectives you're trying to solve for. Once you understand the requirements, both on the business side and the technology side, then you go about designing the blueprint you wanna use. But keep in mind that the blueprint you design is not carved in stone forever. It's something that needs to be iteratively developed and changed over time. You want to deploy in small sprints, because you wanna build the cloud the same way you're building applications or products. It has to be agile. You have to start small and add features and functionality gradually to evolve your cloud, because that not only lets you build it faster, it also enables operations to operate it better. If, instead of building something with the old waterfall methodology, you use standard agile tools, build small, and iteratively develop and operate, the cloud you build tracks towards your business goals. So once that's done, once you have a deployment, you wanna validate it. You wanna make sure it matches the business needs or technology needs you're trying to solve for.
If it doesn't, or even if it solves things nicely but you wanna improve on it, you go refine the process. And again, very quick iterative deployment cycles. Once you do that, you can mentor your users and your operators. Make sure they understand the entire process: how the design fed into what was deployed, and how to iterate, refine, automate and build. Then you wanna collaborate with all of the users and all of the operators. You want to build out best practices and make sure things are documented. You wanna make sure there are no one-offs, no special cases, no Brents from The Phoenix Project. We wanna make sure everything is completely and fully automated and self-contained. So one of the first steps we mentioned was discovery, right? What do you actually do in a discovery process? Some of the objectives or goals you wanna set yourself when you're doing a discovery session are: one, you wanna get all the stakeholders together. You wanna make sure you have full buy-in from all the different stakeholders in your enterprise. You wanna collect the business objectives, to make sure the cloud you're building is geared towards them. You wanna then collect the technical objectives that are geared towards solving those business problems. And that should lead into your technical architecture. What's very important, as Vijay said, is to get all of the different stakeholders or product owners, anybody who has a stake in the business, into the room. A lot of times that's very difficult; people are very busy. But you wanna make sure you have representation from the different areas. And it doesn't matter how you run the discovery session we're about to go through; what matters is the process you follow.
It could be on a whiteboard, it could be with post-it notes, it could be with a tool like JIRA or Trello, anything like that. What you wanna do is collect all of these objectives, whether they come from the business or from the technical audience, and prioritize them. So we're gonna go through a couple of questions first. As a technical architect or a technical person, a lot of times I hear this: why do I care about the business objectives, right? I was told to implement something, I'm gonna do it. Gartner told me I need OpenStack, so now I need to go figure out how to do it. So Vijay, why should I care about the business objectives? Well, one reason is that as an architect in an enterprise, you're not building clouds just to play with them. You're building them to solve a business problem, right? You need funding from the business to build a cloud; the money isn't going to appear from nowhere. Even as a CTO, you need to justify the spend. You're geared towards reducing costs, towards building clouds that are scalable and solve business needs. So we have to get the business involved in the decision-making process. But then, if I were a business-focused guy, why would I care about the technical objectives? I need something done, just get it done. The reality is that the technical implementation details are ultimately driven by the business needs. We can put something together quickly. We can throw DevStack down, or Packstack or whatever, but it's not gonna be an optimal solution to the business needs, so we need to figure out what those specific business needs are so that we solve the right problem. Examples would be SLAs and performance. We may have workloads with certain IOPS requirements, or we need to be able to deploy to multiple sites, multiple regions, and have availability, or maybe data protection or isolation requirements, or regulatory needs that we have to meet.
So all of those tie back in. Now we're gonna walk you through a mock discovery session. This is something we do with almost all our customers, every time we talk to them about cloud, whether it's OpenStack or a general-purpose cloud or a hybrid cloud. We'll walk you through a mock session and how it leads into a design and operations. And again, just to save time here, rather than going through the whiteboard exercise and calling on people: normally we'd have a conversation with the customer, and as I said, there'll be different representatives from different areas. You'll have business guys, you'll have technical guys, and what's really good about that is that certain people will raise concerns or priorities or business objectives that are important to their part of the business, but maybe not to others, or other people may say, hey, wait a minute, that's not a priority to me. And so a conversation starts between everybody, you begin to understand what the real priorities are, and everyone should ultimately agree on what they are. So we'll start with the business goals. When we start a discovery session, we have to have firm goals in mind. What are you trying to achieve out of that session? Because it's difficult to get all the stakeholders together, you want to plan your session before you start collecting requirements. Our proposal, the one that has worked for us: pick the top three business objectives you're going to solve. Get your business buy-in, prioritize those business goals, and identify them clearly. You would then pick the top five or six technical objectives that are geared towards solving those business goals. If you find a technical objective that's not aligned to a business goal, take it out. It's not meant to be there.
That should lead into a technical architecture that supports those goals. You always start small, like you said; use an iterative approach, build an MVP first, a minimum viable platform, and then iteratively grow it to cover more use cases as you collect them. You also want to consider something: OpenStack releases every six months. So you don't want to plan for something that's required two years from now. You're not sure what OpenStack is going to evolve into in two years; you'll have four releases between now and then. So you want to plan your cloud accordingly. You want to decide whether an upgrade is going to be a forklift, a rebuild of your existing cloud, or whether you want to build parallel clouds to support different functions. All right, so let's go through the mock customer environment we've put together. First we're going to start with the business use cases. Typically, once these have been discussed, refined and prioritized, they're written as statements. So Vijay, go ahead and start with these. The common goal we keep hearing all the time is: as a line-of-business owner, I want to build new products and roll them out quickly, because my competition is coming out with releases every month. I want to come out with releases every week to stay ahead of the competition. I want to reduce total cost of ownership. I want to increase my margins; my shareholders are sitting behind me, asking me to make more money. So I want to reduce the total cost of building my product or solution and reduce the total cost of ownership. And I don't want the environment to fail when I have peak demand. I want a scalable infrastructure that can meet capacity. This is a common use case for most retail customers; it's seasonal.
Come Christmas, the storefront goes heavy, it goes berserk. We want to be able to scale out to manage those workloads and meet that demand. Okay. So let's talk about some technology use cases. I would say something like: as a technology owner, I want to enable my users to be self-sufficient. Pretty common. I want to avoid vendor lock-in; that's gonna help me reduce costs. I want to enable elastic infrastructure to meet capacity on demand; again, something we talked about with OpenStack. I need infrastructure as code to enable DevOps. I need a platform that is secure at all layers of the infrastructure. Very important. I would like to deploy across multiple data centers for resiliency and isolation; back to the use case I mentioned earlier, where you may want a set of workload applications that span data centers. So now you have the objectives, that's great, right? We have the IT owners telling you what they want. We have the business owners telling you what they want. But what does the application owner want? What does the developer want? As an application owner, I want stateless, elastic environments, where if I have an application server, I can spin up thousands of application servers on demand and they scale out automatically based on usage. I want the environment to support specific requirements for specific workloads. There are some workloads that require near-bare-metal performance, like Hadoop as a service, or NFV workloads for telcos. I should be able to have a cloud that supports that. For databases, whatever. And you also want security to be part of the initial deployment, part of the design; it should not be an afterthought. Usually what happens, at least what we've seen over the last two or three years, is that security is always an afterthought.
You build a cloud in a POC environment, you test it, validate it, you try to push it into production, and the security guys come in and put their hammer down, saying: it hasn't been tested, it hasn't been certified, it's not going into production. Then you have issues with compliance or audits, then you have to go back and do a bunch of rework, which is gonna make it look like a very complex solution, and you're gonna be over budget. Then, as an IT operations owner, I need environments to be isolated within projects. I would like workloads to be able to attach to physical networks directly, as an example. And as a developer, I would like to build my own application environment. That makes sense. So now that we've captured all of this, what does it mean? You wanna take those objectives you heard and see how they feed into the technical requirements that will help you design the cloud. Think about the requirements around workloads and performance. We might propose host aggregates, for example, for different types of hardware. We might have a general pool available, a general host aggregate, and expose it to users via flavors, or we could even use availability zones to expose them that way instead. SR-IOV or PCI passthrough for increased performance; that might match one of the business goals. Or secure virtualization for the security requirement; that's sVirt. Obviously we want resiliency, so we have multiple NICs and we bond them together. We also heard about networking requirements in some of these use cases, right? We want tenant network isolation, so you'd pick VXLAN or standard VLAN-based networks for tenant isolation. There were also requirements that you should be able to connect to external systems.
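To make the host-aggregate-plus-flavors idea concrete, here's a toy Python sketch of the kind of matching Nova's scheduler performs between a flavor's extra specs and aggregate metadata. This is an illustration of the concept, not Nova's actual code; the aggregate names, hosts, and the `pool` metadata key are all invented.

```python
# Toy model of aggregate-based scheduling: a flavor carries extra_specs,
# each host aggregate carries metadata, and an instance may only land on
# hosts whose aggregate satisfies every key/value the flavor demands.

def hosts_for_flavor(extra_specs, aggregates):
    """Return the hosts in aggregates whose metadata matches extra_specs."""
    matched = []
    for agg in aggregates:
        if all(agg["metadata"].get(k) == v for k, v in extra_specs.items()):
            matched.extend(agg["hosts"])
    return matched

aggregates = [
    {"name": "general", "metadata": {"pool": "general"},
     "hosts": ["compute-0", "compute-1"]},
    {"name": "nfv", "metadata": {"pool": "nfv"},
     "hosts": ["compute-2", "compute-3"]},
]

# A general flavor lands on the general pool; an NFV flavor on the NFV pool.
print(hosts_for_flavor({"pool": "general"}, aggregates))  # ['compute-0', 'compute-1']
print(hosts_for_flavor({"pool": "nfv"}, aggregates))      # ['compute-2', 'compute-3']
```

In a real cloud the same pairing is done by setting metadata on the aggregate and matching `extra_specs` on the flavor, so users just pick a flavor and never see the hardware pools behind it.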
So you probably wanna enable some kind of floating IP within the environment, or use provider networks. There were requirements for bare-metal performance; NFV has specific requirements for near-bare-metal performance, so you wanna enable some of the NFV features in networking, in Neutron, to make sure those workloads actually work. Okay, and then we have storage requirements. Listening to the requirements, in this case we would say Ceph storage sounds like a good solution. There are multiple tiers of storage available for high performance or general performance, so we can enable multiple back ends. We could use storage replication for the multi-site requirements. And we could use Cinder volumes: even though the applications themselves will be stateless, the data can live on block storage that we can recover by attaching it to a new instance. Okay, so now that we have all the requirements, the technical requirements, what we can do is propose different architectures. This is a conversation that occurs with our customers: we provide them a proposal for a solution to the different needs and priorities they've given us. And we may propose several different architectures. As Vijay said, we wanna start small, so we're gonna start with a minimum viable product first. This is probably the smallest production cloud you'd wanna deploy with OpenStack. If you see here, you have highly available controllers that manage your control plane for all the API services, you have a bunch of compute nodes, and you have Ceph OSDs for your Ceph storage cluster. That's probably the smallest OpenStack environment you would build. And the reason you wanna do that is, one, to understand whether the enterprise can adopt OpenStack, learn from it, and make it work.
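As a sketch of the "multiple back ends" point above: Cinder can expose several storage tiers from one service through `enabled_backends` in cinder.conf. Something like the following, where the section names, pool names, and the use of the RBD (Ceph) driver for both tiers are assumptions for illustration:

```ini
[DEFAULT]
enabled_backends = ceph-general,ceph-fast

[ceph-general]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
volume_backend_name = general

[ceph-fast]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes-ssd
volume_backend_name = fast
```

You'd then create volume types that map to each `volume_backend_name`, so a user requesting a volume picks a tier without ever knowing which back end serves it.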
Once you have this up and running and you've understood what OpenStack is, you've designed for the bare-minimum use cases. Then you wanna expand upon it to add more functionality. One of the requirements we saw was that the OpenStack deployment needed to be dual purpose: general-purpose compute, but also bare-metal performance for NFV workloads. So how do you take this and extend it into the next architecture? Yeah, so as I mentioned, using either host aggregates or availability zones, we could expose specific compute nodes to the users. In this case, if we look on the left, we could have a general user, a user who wants just a general flavor, a general instance, nothing very specific, maybe just for testing. They'd request that via the Horizon dashboard or via the APIs directly, and they'd be given the host aggregate that exposes the general pool of compute nodes. On the other hand, if we have a user who needs NFV or something closer to bare metal for some reason, they would be exposed to the host aggregate that gives them that. So next we could talk about automated deployment and extending to multiple sites. How many here have used TripleO to deploy? Okay. So this slide shows the idea of the overcloud and the undercloud; you may have heard those terms before. TripleO stands for OpenStack on OpenStack. What you have here in the outer box is what's known as the undercloud. This is the initial OpenStack environment that's deployed. The idea here is, you see these various lines, one labeled Ironic. Heat, for example, rather than deploying a set of instances and networks representing a virtual environment, interacts directly with bare metal using the Nova Ironic driver. So Nova typically drives your hypervisor via a driver to libvirt; well, in this case, using TripleO, it's gonna manage bare metal via Ironic.
You have Neutron, which is gonna control the networking for your bare-metal systems rather than your virtual environments. Glance is gonna provide the OS images that get laid down onto bare metal. So that's the undercloud, and you have various compute nodes, all bare metal, which make up the inner box here, the overcloud. This is the actual production environment. Here you have standard OpenStack with the high-availability controllers, as we saw in the single site, and the various OpenStack components with their own set of compute nodes, also bare metal. And then Ceph, potentially. So in this architecture, that's what we recommend. That takes care of building a cloud at one site. But what about multi-site? How do you go about that? Do you wanna talk about this? Sure. This example here is actually a real customer deployment; I'll talk about where it came from, if anybody's read the OpenStack Architecture Design Guide. This represents four edge sites that possibly host some sort of service for the customers; could be content, could be whatever. And within each site, we have a failover region or environment of OpenStack. Now, in this case, they're actually sharing management services: shared Keystone, shared Horizon, shared Swift, for example. That's one way to do it; it adds a little bit of complexity, though. But how do you go and upgrade, right? I see here eight clouds all tied to a common management pool. You'll want to upgrade, since OpenStack is releasing every six months. How do you upgrade this without causing an impact, without causing downtime? That's a real risk. So one thing we can do is bring in some sort of cloud management platform. In this case, we have two independent regions. This again is just a copy of the single site we saw earlier.
These are operating independently. With something like a cloud management platform, you can expose that to your end users; it has its own API, its own UI. And in this case, you can define sets of instances that could deploy to either region, among other things. But you have a problem here, right? A tenant in site one is not the same tenant in site two. They have different UUIDs; they have different tenant IDs. So you probably have to build some kind of common mapping or orchestration to ensure that the tenants line up. So it really comes back to the enterprise and your use cases: how you're mapping your Active Directory, how you're mapping your identity systems, to see which of these architectures suits your business needs. The goal here is to take the use cases we derived, come up with the four proposed architectures, and see what makes sense and how you wanna deploy. Okay, so now that we have them deployed, the next step is operations. How many here have to deal with operations in your environments? Okay, everything just runs on its own. That's good to hear. So, Vijay. One of the common problems we've seen is: how do you debug OpenStack, right? So, OpenStack is up and running. I'm a traditional virtualization guy. When there's a problem, I get an alert. With OpenStack, I have to go and debug 1,500 logs, thousands of lines of output, and build my own correlation metrics around them. So how do you go about doing that? Wouldn't you just SSH into every node and just tail -f star? Yeah, if it's a small cloud. But if it's a big cloud, you'd need everybody in this room to SSH in and help you out. Okay. So one of the common things you wanna know is when an OpenStack service fails. You wanna know when a service is misconfigured, when it's not optimal.
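The tenant-mapping problem just mentioned, the same logical tenant having a different UUID in each region, is often handled by a thin directory layer in the cloud management platform. Here's a hedged Python sketch of that idea; the class, the region names, and the use of a locally generated UUID as a stand-in for a per-region Keystone project ID are all invented for illustration:

```python
# Sketch: keep one logical tenant mapped to its per-region project UUIDs,
# since "dev" in site one is a different UUID than "dev" in site two.

import uuid

class TenantDirectory:
    def __init__(self, regions):
        self.regions = regions
        self._map = {}  # logical tenant name -> {region: project_uuid}

    def ensure_tenant(self, name):
        """Create (or look up) the tenant in every region, recording UUIDs.
        A real implementation would call each region's Keystone here."""
        entry = self._map.setdefault(name, {})
        for region in self.regions:
            entry.setdefault(region, str(uuid.uuid4()))  # stand-in for Keystone
        return entry

    def project_id(self, name, region):
        return self._map[name][region]

directory = TenantDirectory(["site-1", "site-2"])
directory.ensure_tenant("dev")
# Same logical tenant, different project UUID in each region:
print(directory.project_id("dev", "site-1"))
print(directory.project_id("dev", "site-2"))
```

The orchestration layer then always translates the logical name to the right regional UUID before calling a region's APIs, which is exactly the kind of glue the talk says you have to build or buy.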
You also wanna know when a particular service is overloaded and not functioning properly. For instance, you request a Nova compute instance that's supposed to spin up a VM in three minutes, and it takes ten. A volume is supposed to spin up in three seconds, and Cinder doesn't respond because it's looping. How do you actually know that? What strategies would you deploy to debug OpenStack environments? And so those are good problems, absolutely. What technical options do we have available for our OpenStack clouds? There are a lot of tools available in the industry, mostly open source. So this is one way of solving it: we wanna build a log collection process. The stack we've been using quite a bit is Fluentd to collect and aggregate all the logs, Elasticsearch to search through the logs, and Kibana as the dashboard to give the end user a visual representation. Is anybody using any of these tools? Okay. All right, good to hear. So the difference from the previous slide, Vinny, is that we're using Fluentd instead of Logstash. It's just by choice, because we feel Fluentd is more easily scalable than Logstash. It's easier to build in a highly available fashion, so you can eliminate single points of failure. So how do you set it up? This is how a logging structure would look: you have Fluentd running on all your control nodes and your compute nodes, forwarding all the information to your logging node. And on the logging node, you have Elasticsearch and Kibana to help you search and provide user dashboards. Okay. Next, we want to look at performance and availability, and we want to do that through monitoring tools. So there are quite a few different tools here.
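As a rough sketch of that layout, a Fluentd forwarder on each control and compute node might tail the OpenStack service logs and ship them to the logging node. The paths, tags, and hostname below are assumptions for illustration, and the exact directive syntax varies between Fluentd versions, so treat this as a shape rather than a drop-in config:

```conf
# Tail a service's logs on each node (one <source> per service in practice)
<source>
  @type tail
  path /var/log/nova/*.log
  pos_file /var/lib/fluentd/nova.pos
  tag openstack.nova
  format none
</source>

# Forward everything tagged openstack.* to the central logging node
<match openstack.**>
  @type forward
  <server>
    host logging.example.com
    port 24224
  </server>
</match>
```

On the logging node, a matching `forward` source receives these events and an Elasticsearch output plugin indexes them, which is what Kibana then queries for the dashboards.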
collectd, Graphite, Grafana, Sensu, Uchiwa; is anybody using any of these? Okay, some of the same people. Okay, good. All right, so it's a very similar architecture here. We have an agent running on our various OpenStack nodes, communicating over the AMQP bus, RabbitMQ here, and forwarding to a Sensu server, which exposes it via an API, with Uchiwa as the UI. So you can take a look at that. And do you wanna talk a little bit about performance? So yeah, you can collect the performance metrics using collectd, gather all those metrics, and view them through Grafana and Graphite. So you see a lot of different tools coming in. You'll probably see a lot of these tools come in through Red Hat's distribution soon; it's in tech preview right now and will probably be GA around the next release. Some of the same tools, because our goal, as we're building clouds — one of the biggest challenges we see is that it's difficult to operate an OpenStack environment. Our hope is that you build on these open source tools and make it easier, as an operator, to run these clouds. Okay. So once you build a cloud, some of the other things you probably need to look at as an architect: you need to see where your enterprise is on the IT maturity curve. This is a slide we borrowed from Gartner. Some of the other things you'll need to look at as an OpenStack architect: image management. You need to look at patching strategies for your enterprise. You have to standardize the build process, standardize the templates you want to build from. You want to screen and scan your images for vulnerabilities ahead of time, so that as end users, the application developers are not responsible for vetting the images.
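A minimal collectd sketch for the metrics side might look like the following, shipping CPU and memory samples to Graphite for Grafana to chart. The hostname is an assumption; 2003 is Graphite's usual plaintext listener port, but check your deployment:

```conf
# Sample system metrics on each OpenStack node
LoadPlugin cpu
LoadPlugin memory
LoadPlugin write_graphite

# Ship the samples to the central Graphite server
<Plugin write_graphite>
  <Node "graphite">
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
  </Node>
</Plugin>
```

Grafana then points at Graphite as a data source, and Sensu handles the alerting side of the same picture over RabbitMQ.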
As the cloud architect, you are responsible for making sure that images brought into the cloud are scanned and cleaned. So as part of a consulting engagement, we can come in, work with your business lines, interview people, and understand what your maturity level is. And that's what this is about. Are you at level zero, or are you at the other end of the spectrum at level four? And this will help us work with you to develop a strategy for moving forward, so that you understand whether you truly are ready for OpenStack or whether you need to take some baby steps first. So that is the bulk of our content. Do we have any questions? Yes. Yeah, come up here please. We'll go back to the diagrams. Which one? Go ahead and state your question. I'm sorry. Just turn it on. Yeah, you had an example where there were, I think, three independent geos, and each geo had two regions. Right, so yes, in that case, is each rectangle an independent OpenStack region? Yes. And so each has its own API endpoint and so on? So in this case, you'd have eight OpenStack deployments managed by that central management layer here. Are they independent? So eight OpenStack regions, yes. But not totally independent, because Keystone is common. Right, that's correct, yes. So as an application developer, if I've got to develop a redundant application, does the application developer then get exposed to the different region endpoints, with some instances on region one and some on region two, or do you have availability zones within each region? So it depends on what exactly you're trying to do. It could be that, in this example, these edge sites needed to be isolated to the customers for locality reasons. So there was some other intelligence that happened before the application was deployed to determine which site it went to.
Within the site you then had multiple regions available for high availability, right? So in the case of what you're asking — where does it go — it depends on what the use case is for that application. Does it need to be close to the customers, like this example here, or does it just need to be available in multiple regions in case one goes down? So let's say the use case is where, as an application developer or a business user, I need to develop a fault tolerant application, and therefore I need to be able to spin up or distribute my instances over different regions using the same API, so that if one particular location goes down, maybe from a power fault or something, my application continues to run. In that case, how would you architect it? Would you use availability zones? You can — it depends on what your failure domain is, right? Say you're tying your failure domain to a rack of compute. If it's a rack of compute, then yes, you can have multiple availability zones or host aggregates across multiple racks and make sure that when you're deploying an application, you deploy it across multiple availability zones. But if your failure domain is an OpenStack cloud itself, or a region itself, then you want your application spread across multiple OpenStack regions. So in the classic case, you might have, say, a data center in Tokyo and a data center in the US, and as an application you want to be fault tolerant, so that if something happens in the US the application can still perform — you want to spread across both. But in that case it's geographically dispersed, and you have some kind of load balancer at the top, an LTM or a GTM, spreading the load so that you can still access the application and it still works.
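The failure-domain reasoning above — spread across availability zones when the failure domain is a rack, across regions when it's a whole cloud — can be sketched as simple round-robin placement (the zone names here are hypothetical; with Nova you'd pass the chosen zone on each boot request):

```python
from itertools import cycle

def place_instances(count, availability_zones):
    """Round-robin placement across availability zones, so losing one
    zone (one rack, in this failure-domain example) takes out only a
    fraction of the instances rather than all of them."""
    zones = cycle(availability_zones)
    return [next(zones) for _ in range(count)]

print(place_instances(5, ["az-rack1", "az-rack2", "az-rack3"]))
# → ['az-rack1', 'az-rack2', 'az-rack3', 'az-rack1', 'az-rack2']
```

The same pattern applies one level up: swap the zone list for a region list and the placement logic moves from surviving a rack failure to surviving a site failure.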
But if the requirement is localized — you just want to make sure that if a rack of compute goes down the application is still available — then you can have the application spin up across multiple availability zones. That might be where something like this is more appropriate for your use case. So you expose the cloud management platform itself rather than the underlying OpenStack, so your application doesn't have to make a decision as to where it's going. It talks to, in this case, sort of a broker, right? The cloud management platform, which will then decide which site it should go to. Thank you. Thank you. Anyone else? Yes. Hi, maybe my question is pretty detailed. I work at an OpenStack company and our biggest problem is fast deployment. Since OpenStack is a big system, we may face many, many questions, and an engineer may ask me why a virtual machine can't ping. The problem may be an iptables problem, or Open vSwitch, or even RabbitMQ and so on. We have talked about debugging and operations — do we have a well-built, automated deployment and debugging tool, beyond the logging and other things we've talked about? So this is the tooling where today we tell you how you should do it, but it's something we're working on productizing, so that as an operator you don't have to stitch this together yourself — it comes pre-built for you. All you have to do is know how to look at your Elasticsearch and know the keywords to search for. For the problem you described: if you had this logging framework built, and you knew you were having RabbitMQ issues or iptables issues, all of that is already logged, and you can build your own dashboards through Kibana, so that as an operator you just look at your dashboards and see what's happening in your underlying cloud, because the logging is instantaneous. It's real time.
It's not after the fact, where there's a problem and you go look and collect the logs. It's happening in real time, so you can look at it as it happens. One of the challenges that we've seen in the field, in all our experience over the last two years, is that this is very primitive today, right? So we're working, as the community — and I think there's a lot of work happening on this front — to make sure that as an operator you get the best practices built into the standard tooling. Whichever distribution it comes from, right? You need something that's more standard, so we're working towards making it a more standard platform. But a little bit more to your question: if you are experiencing, like you said, iptables or other specific technical issues, then that sounds to me like it hasn't been fully deployed yet. You're still in the process of testing it out. So what you want to do — my recommendation would be to start small. Start with a smaller POC-type environment. As we talked about earlier, the minimum viable product. Make sure that it's working. Make sure your use cases are executable. Make sure the technical issues have been resolved. Then you can expand out to a full deployment. And if you are seeing performance issues, that's where something like this can help you. And from a testing standpoint, you can look at a project like Rally to build your baselines and do your performance testing on your cloud, so that you catch a problem before your customer comes to you about it. Okay, thank you. Okay, thank you. Anyone else? Yes. So this logging and monitoring — is there a way to integrate it with the physical infrastructure itself? I mean, is this actually done at the component level? So this is at the component level, right? If you go back to our TripleO slide with Ironic — Ironic is a bare metal driver.
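To make the Rally suggestion concrete: the idea is to record a baseline duration for each operation (booting a server, creating a volume, and so on) and then flag regressions against it on later runs. A toy sketch of that comparison — this is not Rally's actual format, and the operation names and 20% tolerance are arbitrary choices:

```python
def regressions(baseline, current, tolerance=0.2):
    """Return operations whose current mean duration (seconds) exceeds
    the recorded baseline by more than `tolerance` (20% by default)."""
    return {
        op: (baseline[op], current[op])
        for op in baseline
        if op in current and current[op] > baseline[op] * (1 + tolerance)
    }

baseline = {"nova.boot_server": 3.0, "cinder.create_volume": 2.0}
current  = {"nova.boot_server": 9.5, "cinder.create_volume": 2.1}
print(regressions(baseline, current))
# boot time tripled → flagged; volume create is within tolerance → not
```

Run as a periodic job, a check like this surfaces the "VM takes 10 minutes instead of 3" problem from earlier before an end user ever reports it.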
So it will be able to monitor your infrastructure, right? It will tell you the health of your underlying physical infrastructure. You'll be able to get some of your operations dashboards, and they'll give you the health of the underlying platform — not just the virtual instances, but the actual underlying infrastructure. So it's integrated with — Fluentd is also integrated with Ironic? So Ironic is running in the undercloud. It has the Horizon dashboard, which gives you the Ironic dashboard and the health of the underlying compute. Anyone else? Yes. You only get one question per session. I'm just kidding, you're nice guys. How do you handle — let's say, as an application developer or a business owner, I don't want to run into disruptions when you guys upgrade every six months. How do you get around that problem? So there's a magic bullet: upgrade in place. Sorry, I'm kidding. I wish. It's a work in progress. OpenStack as a community has been evolving, and our hope is that it's getting better. I think we are working towards making upgrades in place more feasible. So our goal is to use the deployment tools, basically using configuration management, to be able to upgrade and make that easier. And one of the common problems is when you have multiple API services with different versions, right? The way we're trying to do it is, instead of treating the OpenStack deployment as physical pieces of code, we're turning it into an image and using image-based deployment. That will make your upgrades a lot easier: you upgrade the image, push it through Glance and Ironic, and the OpenStack compute node comes up with the new version of your image. But also, the multiple-regions availability is something else that you can use to mitigate that.
So let's say there's one region and I want to upgrade. Are you saying you first upgrade the management plane and then upgrade your compute nodes one at a time? Because you have to have backward compatibility of your APIs. So there's work happening in the community upstream to make sure that the next release is backward compatible with the previous release. And when you have that API compatibility, then you can upgrade, run two versions side by side, and bring them together. So is there a way to solve it right now? No, that's not solved. It's a work in progress. You can do compute right now; there are certain projects that are not fully baked, and it's a work in progress to make it happen. Thank you. Okay, so just real quick: there are a lot of great resources out there. Some of the architecture diagrams that I pulled in here — I was actually part of a group of 13 people who wrote the OpenStack Architecture Design Guide, and we had a fantastic time. It was a five day sprint. We worked about 15 hours a day. They didn't let us leave the room at all. It was really fun. We have other guides that were written in the same manner. Before you leave — those of you who are leaving, I can see your backs, but that's okay. Thank you for coming. Appreciate it. Just want to let you know we do have a bunch of other sessions available from Red Hat. Obviously you're in this session, so I can't recommend that you go see the other sessions at the same time — we're gonna X those out. But we do have others available. Some great talks. Please come and see them. Thank you for coming. We appreciate your time. Thank you very much. One more.