Okay, hello everyone. Thank you for being here. I hope you're having a good time. My name is Jira Chung, I'm from Orchestra, and today I'll be giving you a brief presentation about our automation process for OpenStack deployment. Today's presentation will be about 15 minutes long. We're going to go through the different automation features that we provide in our tool, and we'll also give you an overview of the possible benefits and insights we can offer. After the presentation, if you want to ask any questions or give any feedback, please feel free to visit our booth during the marketplace mixer. We're also providing beers and beverages, so please come see us there. So, let's dive in.

Before going in, I'd like to give you an overview of the table of contents. First, we'll talk about the background, to give you a better understanding of what we're doing and of the challenges we've been facing. Second, we'll talk about why automated deployment is necessary, from the perspective of optimization and efficiency. And lastly, we'll talk about the different areas of automation that we provide, and how we provide them. The technical level of the presentation won't be very deep, but hopefully it gives you a good overall insight into how we provide these services and which automation tools we use.

So first, who are we? Orchestra is a software company based in Seoul, South Korea. We provide a full stack of cloud solutions to our clients so that they can deploy their own cloud data centers. We don't have our own data center, nor are we a cloud service provider; our clients are mostly major data centers within conglomerates.
So they're IT managers, and using our cloud software, they provide IT services within their conglomerates. To go briefly over our full stack of solutions: at the bottom layer we have our OpenStack-based IaaS platform, and we also provide a Kubernetes-native PaaS platform. In the middle we have Orchestra CMP, our multi-hybrid cloud management platform, which manages a wide range of cloud environments: from VMware and Red Hat OpenStack to Mirantis, Nutanix, AWS, and GCP, so it covers quite a large range. The purpose of the CMP is to provide integrated management and a unified GUI for simpler, more efficient management of these various cloud environments. I think it's a very unique need that we face in the Korean market, but it is one of the needs we face. The CMP has its own broker API management system that provides a unified way of managing the workflow. Above that, we have cloud services to optimize your cloud environment, from DataOps and DevOps for automated CI/CD to MLOps and AIOps. So these are the full stack of solutions that we provide, and among these we're going to talk specifically about Contrabase, our IaaS solution.

So first, let's talk about why automated deployment is necessary. In this diagram, on the left side, you can see the manual processes that have to take place before automation. If there's a request for deployment, the infrastructure admin has to communicate with administrators from various areas, from server, network, and storage to security administrators. You have to coordinate with all of them, and there's a complex, manual process involved. And then, to actually deploy those systems, you need to rely on an OpenStack technology expert.
So these processes may produce bottlenecks, especially for clients performing repetitive manual tasks. Instead of leaving them with that burden, we want to provide automation, so that we can lessen the burden and provide more efficiency.

So let's talk about the benefits of automated deployment. First, it increases efficiency by eliminating the need for manual, repetitive tasks, as I mentioned before, and it also reduces the chance of human error in the configurations. Second, it increases consistency and standardization: it decreases the chance of misconfiguration and of errors in the OpenStack components, in the way you deploy them and structure them in the environment. This simplification and standardization of the architecture and configuration decreases the operational burden. Third, it increases scalability and flexibility in your environment: if your client wants to scale up certain services, it's very simple for them to do. It also enhances version control: using IaC, infrastructure as code, improves efficiency in tracking your code and how you've been updating it. OpenStack also requires continuous management and updates to address security patches, bug fixes, and feature enhancements. With these automation tools and deployments, we provide a structured approach for managing all of that, and it simplifies the process of applying updates and patches across the whole set of OpenStack components, even within the provisioning stage. And then lastly, it helps mitigate dependency on technical personnel: even if you have a hard time recruiting OpenStack expertise, it decreases the burden in that area as well.
So overall, automated OpenStack deployment streamlines the initial deployment process, improves efficiency, ensures consistency, and simplifies ongoing operations within your cloud environment.

So in which areas do we provide automation? There are three: first deployment, second operation, and third security. You may be very familiar with the deployment part: we install and configure the host OS, deploy and provision OpenStack, integrate OpenStack with the wide range of hardware, networks, and storage that you need, and provide high availability and clustering of the OpenStack components. Second, at the operation level, we provide self-service provisioning and virtual machine management through our internal portal; we'll talk later about why we use an internal portal here. We also provide live migration, version and patch upgrades, and monitoring as well. And then lastly, security.
We provide automated patching for CCEs and CVEs, self-signed certificate renewal for TLS communication, password changes, and automated fault detection and fault recovery. We'll go a bit deeper into each area later on, but not too technically, so if you have any questions, please come visit us afterwards.

So this is an overview of our automation. If there's a request for a certain OpenStack deployment, or for automation in other areas, we use MAAS, Ansible, and Terraform as our basic engines, and we use Jenkins to drive the deployments. We also use GitHub to manage the updates and the configurations. This slide shows the automation layers, just to give you an idea of which automation tools we use at each layer of the deployment. At the bottom layer, we use Canonical's MAAS for hardware and OS deployment. In the middle layer, we use Ansible for configuration of the hardware, and for software deployment and configuration of OpenStack. Above that, in the virtualized layer, we use Terraform and Ansible together. We'll go a little deeper on this later in the slides.

All right, so this is the overall architecture of our deployment system. Through our deployment tool, as you can see, we provide automated deployment not only for OpenStack but for Kubernetes-native environments as well.
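As a rough illustration of the tool-per-layer orchestration just described, here is a minimal, hypothetical Python sketch. The stage functions are invented stand-ins for the real Jenkins-driven jobs; in practice MAAS, Ansible, and Terraform would each do the actual work.

```python
# Hypothetical sketch of the layered deployment pipeline: MAAS for bare
# metal, Ansible for host/OpenStack configuration, Terraform + Ansible for
# the virtualized layer. Each stage is a stub that only records what the
# real tool would do.

def provision_bare_metal(nodes):
    # MAAS would enlist, commission, and deploy the host OS here.
    return [f"maas:deployed:{n}" for n in nodes]

def configure_hosts(nodes):
    # Ansible playbooks would configure hardware and install OpenStack.
    return [f"ansible:configured:{n}" for n in nodes]

def provision_virtual_layer(nodes):
    # Terraform (with Ansible) would create networks, images, and VMs.
    return [f"terraform:provisioned:{n}" for n in nodes]

PIPELINE = [provision_bare_metal, configure_hosts, provision_virtual_layer]

def run_pipeline(nodes):
    """Run each layer in order, collecting a deployment log."""
    log = []
    for stage in PIPELINE:
        log.extend(stage(nodes))
    return log
```

The point of the sketch is only the ordering: each layer completes for the whole node set before the next layer begins, which mirrors the bottom-to-top flow on the slide.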
You can deploy them either way: you can deploy OpenStack on top of Docker, or you can deploy it on bare metal, and you can also deploy Kubernetes clusters directly on bare metal through our automation. We also provide OpenStack configuration file encryption for security, and scaling of the infrastructure up and down. If you look at the middle layer, we basically use a repository to manage these systems and the metadata of all these automations, and to provide version upgrades and patches as well. And as you can see, the results of the tests go through the reporting tool, and then we automatically adjust the test variables and reconfigure them accordingly, to deploy them the way we originally set them up. So this is basically an overview of what we do, and in the following slides I want to discuss some specific areas rather than covering the entire automation process.

The OS deployment phase is fairly simple, and probably familiar to many of you: it's Canonical.
We use Canonical MAAS to provision the OS. It goes through four phases: the enlistment phase, the commissioning phase, the deployment phase, and the release phase. To put it simply, in the enlistment phase we identify the nodes that you want to deploy the OpenStack components to; during this stage you are identifying which nodes serve which functions and purposes, differentiating compute nodes from control nodes. In commissioning, once you've identified the nodes, they go through the commissioning process, including network configuration, ensuring that the nodes are ready for deployment. In the deployment stage, you actually deploy the OS onto the nodes. And in the release stage, once the OS has been deployed, the nodes are released from MAAS control, indicating that they are ready to be used. So this part is fairly straightforward.

Next is our OpenStack deployment stage. Each layer of automation is a separate function in itself, so you don't have to go through the whole process from bottom to top; you can apply the automated functions where and when you need them. Since I've already covered host OS provisioning with MAAS, I'll go over the layer above. First, we automate deployment of the host OS. Once verification is complete, we use cloud-init for automated OS tuning, to enhance host OS and OpenStack performance. We then deploy the OpenStack components to the designated nodes, whether compute or control, as I mentioned before. And lastly, after the OpenStack components are deployed, our automation performs testing to ensure that the cloud is fully functional. That happens in the OpenStack deployment layer.
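The per-node flow just described, from the MAAS lifecycle through cloud-init tuning to role-based component deployment and the final functional test, might be sketched like this. This is a hypothetical illustration only: the function names, the role-to-component mapping, and the step labels are all invented, not our tool's actual API.

```python
# Hypothetical sketch of the per-node deployment flow: walk a node through
# the MAAS lifecycle, tune the host OS with cloud-init, deploy the OpenStack
# components for its role, then run a functional smoke test. Every step is
# a stub that just records what the real tooling would do.

MAAS_PHASES = ["enlist", "commission", "deploy", "release"]

ROLE_COMPONENTS = {
    # Which OpenStack components land on which node role (illustrative only).
    "control": ["keystone", "glance", "nova-api", "neutron-server"],
    "compute": ["nova-compute", "neutron-agent"],
}

def prepare_node(name, role):
    """Return the ordered steps applied to one node."""
    steps = [f"maas:{phase}:{name}" for phase in MAAS_PHASES]
    steps.append(f"cloud-init:tune:{name}")          # automated OS tuning
    for component in ROLE_COMPONENTS[role]:
        steps.append(f"install:{component}:{name}")  # Ansible would do this
    steps.append(f"smoke-test:{name}")  # boot a template guest VM, then release it
    return steps
```

Because each step is independent, the same structure also lets you run only a slice of the flow (say, just the cloud-init tuning) on a node that is already deployed, which matches the point above that each layer of automation is usable on its own.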
After your OpenStack components are deployed onto the nodes you designated, the tool uploads template guest images and applications to test whether the virtual machines function the way you want, and then it releases them again, just to check whether there was any misconfiguration during the process.

And this is the process for our Orchestra automation for operation. We're going to focus our presentation more on this aspect. From the bottom, you can see that we provide automation for emergency failover. For security purposes, we also provide configuration encryption. We provide monitoring based on Prometheus and Grafana for the OpenStack host OS environment, to ensure that it is running properly. As I mentioned before, we provide CVE and CCE patching, and we provide virtual machine management through our internal portal. Today we're going to focus on three areas: fault detection and fault recovery of the cloud services and of our database service.

So first, let's talk about fault detection and recovery. It provides automation for emergency failover.
In case there is a failure within the nodes, our OpenStack automation tool will find an idle host and go through the same phases as before for OpenStack deployment: it deploys the OpenStack components onto the idle node, then live-migrates the virtual machines that were running on the failed system to the idle host, and then, using MAAS, we delist and release the host OS from the failed node. This is a function that was really necessary in some of our clients' environments, where they need service continuity, and for security as well.

Next, we're going to talk about fault detection and recovery for the cluster services, focused on RabbitMQ, Percona, and Pacemaker. Together, these services basically provide a single database service, giving high availability and scalable database services within OpenStack. But as you may already know, RabbitMQ is one of the services that produces a lot of errors and faults, so it was very important for us to focus on providing automated fault detection and recovery in this area, and we do. Our automation runs in the background; we use IaC tools for it.
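The emergency failover sequence described above can be condensed into a short, hypothetical sketch. The host names, data structures, and step labels are invented for illustration; in practice the deployment, live migration, and release steps would be carried out by the automation tool, Nova, and MAAS respectively.

```python
# Hypothetical sketch of the emergency failover flow: on a host failure,
# pick an idle host, deploy OpenStack to it, live-migrate the failed host's
# VMs to it, then release the failed host back through MAAS. All actions
# are recorded in a log instead of calling real tooling.

def failover(failed_host, idle_hosts, vms_by_host):
    """Return the ordered recovery actions for one failed host."""
    if not idle_hosts:
        raise RuntimeError("no idle host available for failover")
    target = idle_hosts.pop(0)                       # claim an idle node
    log = [f"deploy-openstack:{target}"]             # same phases as initial deploy
    for vm in vms_by_host.pop(failed_host, []):
        log.append(f"live-migrate:{vm}->{target}")   # Nova live migration step
    log.append(f"maas-release:{failed_host}")        # delist the failed node
    return log
```

Note the ordering: the replacement host must be fully deployed before any VM is migrated onto it, and the failed host is only released after its VMs are gone, which is what makes the recovery non-disruptive for running services.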
So yeah, basically we provide that area, and we also provide management through our internal portal. The reason we use an internal portal is to decrease the unnecessary use of resources that comes from putting the management on a separate virtual machine. Also, and this is very specific to the needs we face in the Korean market: because of government regulations and policies, clients have to manage their IPs regularly, but as the number of IPs they have to manage expands, operation and administration become more difficult. So to enhance their efficiency, we decided to use the internal portal to manage those systems instead. And it's not just from an IP-management perspective; we also provide management for deploying and provisioning the services. As I mentioned before, all of our services are preconfigured: the virtual machine sizes and the applications to be deployed and provisioned within the virtual machines are preconfigured ahead of time in our automation tool. So through the internal portal, you're simply provisioning these images into virtual machines as requested by the users.

And lastly, I wanted to touch on why we use a repository rather than an image-based or other approach. It's because it provides a wide range of compatibility for hardware, SDNs, and storage. Our clients are major providers serving a vast range of services to their conglomerates, so they have a lot of different hardware, different SDN environments, and a lot of different storage systems that they need to plug into the automation.
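The portal-driven, preconfigured provisioning described a moment ago can be sketched as a catalog lookup. This is a hypothetical illustration: the catalog entries, field names, and function are invented, but they show the design point that sizes and bundled applications come from the preconfigured catalog, not from the user.

```python
# Hypothetical sketch of portal-driven provisioning from a preconfigured
# catalog: VM sizes and bundled applications are fixed ahead of time in the
# automation tool, so a portal request only selects a catalog entry.

CATALOG = {
    # Invented example offerings; real entries would be defined per client.
    "db-small": {"vcpus": 2, "ram_gb": 8,  "apps": ["percona"]},
    "db-large": {"vcpus": 8, "ram_gb": 32, "apps": ["percona", "rabbitmq"]},
}

def provision_from_portal(request):
    """Resolve a portal request against the preconfigured catalog."""
    entry = CATALOG.get(request["offering"])
    if entry is None:
        raise KeyError(f"unknown offering: {request['offering']}")
    return {
        "name": request["name"],
        **entry,   # sizes and apps come from the catalog, not the user,
    }              # which is what keeps deployments consistent
```

Because users can only pick from vetted entries, every VM the portal creates is a known, tested configuration, which is the consistency argument made earlier about automated deployment in general.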
So we use plugins to automate compatibility within the environment. When you're deploying the OpenStack components through our IaC code, the back end recognizes the different hardware, and through the plugins and the repository everything is automatically synced into the system as a whole, which lessens the burden of manually integrating the different systems. This slide shows the three major compatibility maps for our servers and the different functionalities that are automated behind the scenes in the back end. Dell, Fujitsu, HP, and others, covering OS provisioning, server virtualization, GPU, and SR-IOV, are all pretty much preconfigured in our back end, so the automation is already integrated with these hardware platforms. We also support a vast range of storage hardware that is automatically configured and integrated through plugins; you can look it over, and if you have any questions, please feel free to ask.

And then lastly, SDN. Networking has been a big issue in Korea as well, so we have integrated heavily with various SDNs through our portal. Our portal enables management of these SDN environments, integrated with our OpenStack platform, and these are the functionalities that we provide through the portal as well.

So with this, I'll wrap up my presentation. If you have any questions or feedback regarding the presentation or the content, please feel free to visit us at our booth. Thank you so much. Oh, also, it's our first time at a global Summit, so if possible, would you like to take a picture with us in the front real quick before we go? Would that be okay? All right, if you're willing. Oh, thank you so much.
Thank you. Um, would it be possible to have the screen up, our front screen? Thank you so much. Thank you. Thank you, guys.