Hi, this is Jaesuk Ahn from SK Telecom. So this talk is about Airship; you have heard about Airship a lot through this summit. AT&T uses the full Airship toolchain for their production deployments, and SK Telecom also uses Airship for our production deployments, but we use it very differently. Today we are going to talk about how we use the Airship project, specifically a sub-project called Armada. SK Telecom has been participating in the OpenStack-Helm project and the Airship project from the beginning, but we had different constraints in our environment and different requirements, so we have different use cases from AT&T's. I hope this presentation gives some hints to anyone who wants to try Airship, not the full Airship toolchain, but a selection of Airship projects based on your own needs; that's our case, so I hope we can help with that. Before going into detail, I will briefly tell you what we are doing currently. We are using the OpenStack-Helm project, Airship, Kubernetes, Kolla, and lots of other open source tools to build an infrastructure delivery system we call TACO. I like tacos, I studied in Texas, so we have this name, but it stands for "SKT All Container OpenStack"; we came up with that later. TACO is our internal product name, but it's basically a combination of Airship, OpenStack-Helm, and Kubernetes. And we have an ultimate plan beyond this stage: extending and advancing this infrastructure delivery system so that we can deliver whatever infrastructure is required. It can be OpenStack or Kubernetes; it can be delivered on top of on-premise infrastructure like we do now, or on top of a private or public cloud if it's a Kubernetes deployment. And you can add any necessary tools on top; if it's for a machine-learning job, you can add machine-learning support tools on top of Kubernetes.
So basically, SKT wants a simple, very flexible delivery system to deliver whatever infrastructure we need for our future requirements. This slide is just a conceptual picture: you can have bare metal, Kubernetes, and OpenStack, which is what we do now, but you can also have Kubernetes on top of a public cloud or on top of on-premise, all in a standard and simple way. That's our ultimate goal. To do that, we think the four concepts on this slide are absolutely necessary to have, and they are exactly the same characteristics the Airship project defines; this slide is from an Airship deck. In other words, SKT completely agrees with what Airship aims for. That is the main reason SKT is actively participating in Airship and OpenStack-Helm, although we approach it slightly differently, which I'll talk about in a later slide. If you have seen any Airship presentation, you will be familiar with these concepts. It should be declarative: you need just one declarative set of documents to do everything you want in your infrastructure. We use containers as the basic unit of software delivery. We also aim for one simple workflow, the same workflow for every lifecycle management task: whether you are doing a new deployment, an update, or an upgrade, it should be the same simple way, so it is very predictable. And it should be very flexible for the different architectures and infrastructure software we are delivering. With that background, let me start our story: how we started, how we are currently using it, and what we want to do from now on. There are two main projects for us. The first one is OpenStack-Helm. Do you know what OpenStack-Helm is? OK, so I will not describe it in detail. It is a collection of Helm charts to deploy OpenStack, and not only OpenStack but related services like the LMA stack: logging, monitoring, alerting.
OpenStack-Helm does have those LMA charts. So with OpenStack-Helm and Kolla container images, you can actually deploy OpenStack in containers on top of Kubernetes. And then there's Airship. For me, it's more like an Airship fleet, because Airship consists of several toolchains. What Airship does is let you define and generate declarative documents. These documents include everything you want to provision in your site: bare metal provisioning, Kubernetes deployment, Ceph deployment, and finally OpenStack deployment. You define those documents and throw them into the Airship toolchain, which automatically renders the documents, configures the parameters, and provisions Kubernetes and OpenStack for you. It's a very simple way to manage your infrastructure. But we only use one piece of Airship, not the whole fleet: the project called Armada. Armada is a tool for managing multiple Helm charts, with dependencies, by centralizing all configuration in a single Armada YAML manifest, and it provides lifecycle hooks for all the Helm releases. Basically, Armada provides a rich set of features missing from the basic Helm client, features that are really necessary for managing multiple charts together. We started with OpenStack-Helm, then we ran into the problem of managing those multiple charts, so it was natural for us to adopt Armada, because it solved the problem we were facing. From the start, we only used Armada among the Airship projects. Why was that? There were some constraints at the beginning; by the beginning, I mean when we started looking into the possibility of putting OpenStack on Kubernetes in late 2016. My team was in charge of whatever we wanted to do on OpenStack itself, so we had full control over OpenStack and could do anything there.
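As a rough illustration of what such a manifest looks like, here is a minimal sketch using Armada's documented document kinds (armada/Chart, armada/ChartGroup, armada/Manifest); the chart names, source locations, and values are placeholders, not our actual manifest:

```yaml
---
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: mariadb
data:
  chart_name: mariadb
  release: mariadb
  namespace: openstack
  source:
    type: git
    location: https://opendev.org/openstack/openstack-helm-infra
    subpath: mariadb
  values: {}          # site-specific value overrides go here
  dependencies: []
---
schema: armada/ChartGroup/v1
metadata:
  schema: metadata/Document/v1
  name: openstack-infra
data:
  chart_group:
    - mariadb
---
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: taco-manifest
data:
  release_prefix: taco
  chart_groups:
    - openstack-infra
```

One `armada apply` of a manifest like this walks the chart groups in order, so dependencies between charts live in one document instead of a series of hand-ordered `helm install` calls.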
But we didn't have control over bare metal provisioning, because the operation team has their own way of provisioning bare metal. So that was out of scope for us at the beginning; we just had to use theirs. Also, Ceph was a very successful independent R&D effort inside SKT. There was a separate R&D team developing Ceph for all-flash drives and making an appliance out of it. We had to use their solution and cooperate with them to deploy Ceph, so Ceph provisioning was out of scope for us too. And even though SKT is a telco, the first demand came from the IT department: a private cloud, infrastructure for their VDI (virtual desktop infrastructure) system, or the OSS and BSS systems. So we had to deal with those IT requirements first, and the telco side came along afterward. Compared to the telco cloud, the IT cloud has a wider variety of requirements, so we had to deal with very different types of requirements. Those were our constraints. If you look at the timeline: in 2016, when we started looking into putting OpenStack on Kubernetes, we luckily found like-minded people at an early stage, the AT&T dev team, who had also started looking into Helm charts. So we were able to collaborate, and SKT was able to participate fully in the OpenStack-Helm project from the beginning. But we had those constraints, and we had already made an initial decision on how to provision Kubernetes: we decided to use Kubespray, which is an Ansible playbook for deploying Kubernetes. Then in 2017, AT&T initiated the upstream effort that became Airship, but by then we were already using Kubespray, and bare metal provisioning was out of our scope, so we were not able to use the whole Airship toolset. But we found out that, as I said previously, Armada was very helpful in solving the problems we were facing, so we decided to leverage Armada and contribute to the Armada project together.
So in 2018, Airship, as you know, became a pilot project, and this year, in October (1810), SKT used Airship and OpenStack-Helm for a production deployment. At this stage, SKT is navigating again how we can align with Kubernetes more. So what do we end up with right now? We actually ended up developing an Ansible playbook, which we call TACO Play, to do everything before the OpenStack deployment. Bare metal provisioning is not automated yet; we are working on Ironic for that. But with the TACO Play Ansible playbooks, there is one ansible-playbook command to do everything I list here. We do host configuration with Ansible; we do Docker and Docker registry installation; we use ceph-ansible from the Ceph community for the Ceph installation; and we use Kubespray for Kubernetes. We also recently started looking into kubeadm: we use Kubespray, but Kubespray uses kubeadm underneath to actually deploy and manage Kubernetes. After we have Kubernetes, we use Armada and the Helm charts to do everything we put on top of Kubernetes. So on one side we have Ansible to do everything up to Kubernetes, and then we have Armada (part of Airship) and the OpenStack-Helm charts to do everything on top of Kubernetes. Those are our current tools. We also have some ongoing efforts. We decided to leverage one more project from Airship based on what we really need, and that's Deckhand. There was a very good presentation on Deckhand yesterday, so you can look into what Deckhand is. Deckhand is simply a tool to manage manifests. Since we have several different deployments right now, there are several different Armada manifests we use, and we started feeling we really needed to manage all those different Armada manifests better. Deckhand provides that capability, so we decided to extend one more step into the Airship project and use Deckhand together with Armada for our needs.
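The single-command idea behind TACO Play could be sketched as a top-level playbook that chains the stages above; the file and playbook names here are assumptions for illustration, not the actual TACO Play layout:

```yaml
# taco-play.yml -- illustrative sketch only: one entry point that runs
# every pre-OpenStack stage in order, as described in the talk.
- import_playbook: playbooks/host-config.yml    # OS prep on all hosts
- import_playbook: playbooks/docker.yml         # Docker + local registry
- import_playbook: ceph-ansible/site.yml        # Ceph via ceph-ansible
- import_playbook: kubespray/cluster.yml        # Kubernetes via Kubespray
```

The point of wrapping everything this way is the "one simple workflow" principle from earlier: the same single command covers a fresh deployment and later lifecycle runs.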
We are looking into a few more areas as well. For Ceph installation, we are using ceph-ansible, so we deploy Ceph directly on bare metal, but we have started looking into containerized Ceph deployment options. OpenStack-Helm has Ceph charts that deploy Ceph on Kubernetes; we are not using them yet. So we are looking into Rook, the OpenStack-Helm Ceph charts, or ceph-ansible with containers. Basically, we are following how the Ceph community is evolving in this space, and we will convert to container-based deployment by following what the Ceph community does. There are more topics. Since our ultimate goal is deploying infrastructure on top of on-premise, public cloud, or private cloud, the Kubernetes Cluster API work happening in the Kubernetes community is quite important for us. And I heard a very good story during the summit that Ironic is being integrated with the Cluster API, becoming the first bare metal provisioning backend for it; that would be fantastic for us. Then there is container security: especially when you deploy OpenStack in containers, there are lots of security concerns, so we want to focus more on that, and on LMA. Beyond just setting up the tools, we are interested in integrating real operational knowledge into the LMA stack, so that out of the box, from community software, you get real operational knowledge in the system. We are also looking into more components like Istio and Kubeflow. I have to say that although we are using just a part of Airship, the Airship community is very supportive and very open to adopting different use cases like ours. So we started discussing the "bring your own" concept. Airship is looking into bring-your-own bare metal provisioning: if you have your own bare metal provisioning mechanism, you can just plug it into Airship and use it as it is. Or you can bring your own Ceph cluster.
Or you can bring your own Kubernetes deployment method, plug it into Airship, and it will work well. That's a concept the Airship community is aiming for as well, and it is very beneficial for us, because we need those concepts if we want to fully align with the Airship community. We are also open to collaborating with anyone who has requirements similar to ours. This is one ongoing effort: documenting how to integrate an existing Ceph cluster into OpenStack-Helm, which is not the usual way OpenStack-Helm does it, but it is our use case. So we are working together to make those different use cases available to the community, and we are very happy to collaborate with anyone who has similar requirements. That's the basic story of us using just part of Airship and extending it more and more based on our needs. From now on, Robert Choi, my colleague, will talk about how we leverage Armada in our CI/CD pipeline, which is very important for us. It will be interesting.

Hi, my name is Robert Choi, and I'll briefly explain our CI/CD pipeline and how Armada is used in it. Here is our full deployment pipeline for TACO; it's actually used in our production deployments. Among these steps, as Jaesuk mentioned before, bare metal provisioning is done manually now. We are going to automate that soon, and Ironic is one of the candidate tools. Once the OS installation is done, we install Ceph using Ansible, we install Docker and Kubernetes using Kubespray, which is also an Ansible playbook, and then we deploy OpenStack and the rest using Armada and the OpenStack-Helm charts. All of these tasks are wrapped into one single playbook, which we named TACO Play, and they are spread across many jobs in our Jenkins. I'll show you some detail about our OpenStack deployment pipeline. It's a typical CI pipeline comprising unit tests, integration tests, and a promotion stage.
On the left side, we have mirror repositories for OpenStack upstream source code and other upstream code, and wrapper repositories that hold our custom configurations. Those two are merged and then go through the pipelines on the right side. The actual pipeline consists of two sub-pipelines. The first is the image build pipeline, which builds container images of all OpenStack services: it fetches OpenStack source and Kolla source from upstream, builds container images using the Kolla build tool, and pushes the built images into our internal Docker registry once the unit tests pass. The other is the actual deployment pipeline for each OpenStack service: it deploys each service with the built container images to a Kubernetes cluster using the OpenStack-Helm charts and Armada, and then performs scenario tests using the OpenStack Rally project. In this picture, the blue star marks indicate the stages in which Armada is used. So what is Armada and what does it do? It simply deploys a collection of charts at once from a single manifest. And what does the manifest contain? All the values. As you know, you can't use the original charts directly, because you need to customize values so that they are relevant to your environment; the Armada manifest contains all those necessary values for all the Helm charts. There are two use cases of Armada in our CI pipeline. We had to cover multiple use cases with one unified deployment tool and as few manifests as possible, for simplicity. One is full OpenStack deployment for integration tests; the other is deployment of individual services for scenario tests, as I mentioned. The first one, deploying all services, is as simple as running the armada apply command with some parameters such as the Tiller endpoint and timeout values. It's pretty simple. But for the second one, deploying a subset of the charts, you need a little more work. The manifest hierarchy looks like the diagram at the bottom.
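The full-deployment case boils down to a single command along these lines; the manifest path, host, and port below are placeholders, not our actual environment:

```shell
# Deploy every chart group in the manifest in one shot.
# --tiller-host/--tiller-port point Armada at the Helm server (Tiller);
# --timeout bounds how long Armada waits for the releases to come up.
armada apply manifests/taco-armada.yaml \
    --tiller-host 10.0.0.10 --tiller-port 44134 \
    --timeout 3600
```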
The manifest contains many chart groups, each of which contains many charts in turn. So what you need to do is specify the chart groups you want to deploy, and also a list of charts for each chart group. Of course, it's not easy to do this manually every time, so we needed to automate that process. Here's the actual code used in our Jenkins job for automatically composing the necessary parameters for the Armada command shown before; it's Groovy code, and very simple. From now on, I'll briefly explain some issues I have gone through. While using Armada, there have been some issues; I'll show you one of them and how we fixed it. Actually, it's a corner case rather than a bug. The problem was that the Armada process hangs for the whole timeout period; in other words, it doesn't finish until it times out, even though all the necessary pods are already deployed. For example, if you specify a timeout of 3,600 seconds, which is one hour, the process doesn't finish for an hour. The weird thing is that it happened only in our CI/CD environment; it worked fine in other members' local development environments. We spent many days trying to figure out why, and it turned out to be related to how Armada communicates with Tiller, the Helm server, and the Kubernetes API. Armada loads the Kubernetes endpoint configuration in two different ways. If Armada runs in a container inside some Kubernetes cluster, it loads the in-cluster config and connects directly to Tiller and the Kubernetes API in that same cluster. On the other hand, if Armada runs outside a Kubernetes cluster, it tries to load a kubeconfig file from the default location to find the proper Kubernetes API endpoint. Having said that, here's our CI/CD environment: we use the Jenkins Kubernetes plugin, which deploys each Jenkins slave as a Kubernetes pod. We pre-built various slave images, including the Armada one, so when a job runs, an Armada slave pod is created.
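The Groovy snippet itself isn't reproduced here, so as a hedged illustration of the same idea, here is a small Python sketch that picks chart groups (and the charts they reference) out of a manifest, using Armada's documented document kinds; the sample documents and the helper name are invented for illustration, not SKT's code:

```python
# Sample documents shaped like Armada's schema:
# one Manifest, some ChartGroups, and the Charts they reference.
docs = [
    {"schema": "armada/Manifest/v1",
     "metadata": {"name": "taco-manifest"},
     "data": {"chart_groups": ["openstack-infra", "openstack-services",
                               "monitoring-infra", "logging-infra"]}},
    {"schema": "armada/ChartGroup/v1",
     "metadata": {"name": "openstack-infra"},
     "data": {"chart_group": ["mariadb", "rabbitmq"]}},
    {"schema": "armada/ChartGroup/v1",
     "metadata": {"name": "monitoring-infra"},
     "data": {"chart_group": ["prometheus", "grafana"]}},
    {"schema": "armada/Chart/v1", "metadata": {"name": "mariadb"}, "data": {}},
    {"schema": "armada/Chart/v1", "metadata": {"name": "rabbitmq"}, "data": {}},
    {"schema": "armada/Chart/v1", "metadata": {"name": "prometheus"}, "data": {}},
    {"schema": "armada/Chart/v1", "metadata": {"name": "grafana"}, "data": {}},
]

def filter_manifest(docs, wanted_groups):
    """Keep only the wanted chart groups and the charts they reference."""
    groups = {d["metadata"]["name"]: d for d in docs
              if d["schema"] == "armada/ChartGroup/v1"}
    charts = {d["metadata"]["name"]: d for d in docs
              if d["schema"] == "armada/Chart/v1"}
    manifest = next(d for d in docs if d["schema"] == "armada/Manifest/v1")

    selected = [groups[g] for g in wanted_groups]
    needed = {c for g in selected for c in g["data"]["chart_group"]}

    # Rebuild the document list: pruned manifest, selected groups,
    # and only the charts those groups actually reference.
    pruned = {**manifest, "data": {**manifest["data"],
                                   "chart_groups": list(wanted_groups)}}
    return [pruned] + selected + [charts[c] for c in sorted(needed)]
```

For the scenario-test case, a CI job would run something like `filter_manifest(docs, ["monitoring-infra"])` and feed the reduced document set to Armada instead of the full manifest.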
That pod then deploys an OpenStack cluster into another Kubernetes cluster in a new VM, shown on the right side; we use an all-in-one Kubernetes VM for each deployment job, for isolation purposes. In this case, the Armada on the left side should communicate with Tiller and the Kubernetes API in the VM on the right side. But since it's running inside a Kubernetes cluster, it uses the in-cluster config, so it asks the wrong Kubernetes API about the status of all the OpenStack service pods, and it keeps thinking that none of those pods are ready yet, because it cannot see any of them in the left cluster. So it was really a corner case for us. As a short-term solution, we added a parameter, --kube-config, to specify the kubeconfig file location to load, even if Armada runs inside a Kubernetes cluster. It has not been submitted upstream yet, but it will be proposed soon by a team member. Likewise, we have gone through many issues, and we are helping to make Armada better; in fact, it's really getting better compared to the initial stage. It's quite stable now, and it covers many use cases. Besides Armada, among the Airship projects we are also considering introducing Deckhand. It manages the various manifest files for each site, so it really helps in managing our per-site manifests. That's it for now, and I'll pass it back to Jaesuk. Thank you.

Yeah, so this year, using Armada and OpenStack-Helm, we did, or are doing, five production deployments. And we are not only delivering OpenStack, we are also delivering Kubernetes. If you look at the list: for the ML infrastructure, which needs to leverage GPU resources, we deploy Kubernetes, then use Armada to deploy all the LMA tools needed to operate that Kubernetes on top of it. We also deployed Kubernetes in production to be used as a big data platform, running components like Druid in containers.
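The two loading paths, plus the proposed override, can be sketched as follows; this helper and the `--kube-config` flag name are illustrative of the behavior described, not Armada's actual code:

```python
import os

def pick_kube_config(in_cluster, kubeconfig_flag=None,
                     default_path="~/.kube/config"):
    """Decide where Kubernetes API credentials come from.

    Returns ("in-cluster", None) for the pod's service-account
    credentials, or ("file", path) for a kubeconfig file on disk.
    """
    if kubeconfig_flag:
        # The proposed --kube-config flag: force a kubeconfig file even
        # when running inside a cluster (the Jenkins-slave-pod case,
        # where the target cluster is a different one).
        return ("file", kubeconfig_flag)
    if in_cluster:
        # Default behavior: a pod talks to its own cluster's API --
        # exactly what caused the hang-until-timeout corner case.
        return ("in-cluster", None)
    # Outside any cluster: fall back to the default kubeconfig location.
    return ("file", os.path.expanduser(default_path))
```

With the override in place, the Armada slave pod loads a kubeconfig pointing at the all-in-one VM instead of silently watching its own cluster for pods that will never appear there.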
Then we did a production deployment for the SKT private cloud and for the virtual desktop, a B2B service infrastructure. And we are also working on putting our OpenStack into our 5G telco environment as the VIM. So Armada, even though it comes from the OpenStack-Helm and Airship projects for deploying OpenStack, is very beneficial for us for Kubernetes deployments too. If you look at this slide, these are chart groups. When we do an OpenStack deployment, we define four chart groups in our actual Armada manifest. The first one is OpenStack Infra, which includes MariaDB, RabbitMQ, and the other system components needed to run OpenStack. The second one is OpenStack Services, which includes all the OpenStack services: libvirt, OVS, Keystone, Glance, Cinder, whatever you need for OpenStack. The third one is Monitoring Infra, which includes Prometheus and Grafana. And the last one is Logging Infra, which includes the EFK stack (Elasticsearch, Fluent Bit, and Kibana) and LDAP. So we have those four chart groups, and when you deploy your infrastructure, you can select which groups you want. For a full OpenStack deployment, we do all four. But in this case, we commented out Logging and Monitoring, because this environment had a legacy monitoring system, so we didn't need to install our own; we installed just OpenStack. So for different requirements and different deployment architectures, you just select which groups to deploy. And when you do a Kubernetes-only deployment, you don't see any OpenStack-related groups; instead you deploy the Logging Infra and Monitoring Infra on top of Kubernetes, so that you have operational monitoring tools for your Kubernetes. The Armada manifest makes our life very simple on the delivery side; we do both development and delivery together, and we just need to manage one document per site.
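The group-selection idea above lives in the Manifest document itself; in this hedged sketch (the names are illustrative, not our real manifest), the site with legacy monitoring simply comments out two groups:

```yaml
schema: armada/Manifest/v1
metadata:
  schema: metadata/Document/v1
  name: taco-manifest
data:
  release_prefix: taco
  chart_groups:
    - openstack-infra        # MariaDB, RabbitMQ, ...
    - openstack-services     # Keystone, Glance, Cinder, libvirt, OVS, ...
    # - monitoring-infra     # Prometheus, Grafana (site has legacy monitoring)
    # - logging-infra        # Elasticsearch, Fluent Bit, Kibana, LDAP
```

A Kubernetes-only site would be the mirror image: keep the monitoring and logging groups and drop the two OpenStack groups, all in the same one document per site.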
And this document is very similar across all those deployment sites; we just need to tweak it based on the different requirements. So managing those different OpenStack deployment sites came down to simply managing a document per site. In this way, Armada helped us a lot in deploying various different kinds of infrastructure. That's everything we have right now. If you have any questions, you can contact the email addresses or IRC handles I put up there. And if you are in the Asia region, I'm the one who is on IRC in those hours, because most of the OpenStack-Helm development happens in the USA, so if you're in Asia, they are sleeping while you have questions; I can help with that. Any questions?

[Answering an inaudible question] Sorry; for right now, we are just using it as is. We are looking into putting in some more behavior.

[Audience] It's very nice to see that you can deploy all the services on Kubernetes with a single Armada command. But what if you also need to apply some bootstrapping scripts for the database, to define the databases or access control and things like that? Is that possible with Armada?

[Jaesuk] I'm sorry, I cannot hear you well; a bootstrapping script? So the question was, can you apply a bootstrap script with Armada? What kind of bootstrap script are you thinking of? [Robert] I guess that kind of bootstrapping can be done by the Helm chart itself, not by Armada. But if there is some task that cannot be handled by the Helm chart, you can specify pre-installation and post-installation scripts as well. [Jaesuk] Usually, with a Helm chart, you can define any job; a job will run during setup to apply those settings, and the pod disappears after the job is done. There is the concept of a Job in Kubernetes, and you can define that in a Helm chart. Any other questions? All right, thank you.