Hi everyone and welcome to Open Infra Live. This is the weekly show hosted by the Open Infra Foundation to talk about all things open infrastructure. We have production use cases, open source demos, industry conversations and more. Every week you can tune in on Thursdays at 1500 UTC to stream on YouTube, LinkedIn or Facebook. Please note, if you are following the UTC time, we did have a change starting this week, so 1500 UTC is when you can continue to watch Open Infra Live with the rest of the global community. My name is Allison Price and I'm really excited to be your host today. Like I mentioned, we are streaming live, so if you have questions throughout the show, please feel free to drop them into the chat wherever you're streaming and we'll pass those to the panelists throughout the show and also answer some questions at the end. Today I am really excited about this episode, because there are two things that I love, data and production open source users, and right now we have both. And we have something that is also really cool about the Open Infra community: a global representation of users. We have five users that we're gonna be talking to today, and they're coming to us from four different countries, which is really cool and shows the power of open infrastructure all around the world. Like I said, we have Kakao from Korea, LINE from Japan, Schwarz from Germany, the Australian Research Data Commons from Australia, and T-Systems from Germany as well. So to kick things off, I'm gonna bring on Andrew Kong from Kakao, and he's gonna talk about what Kakao is and what they do as an organization, but also how they're deploying open infrastructure in production. Welcome, Andrew. Hello everyone. Hello Allison. Yeah, I'm Andrew Kong from Kakao, and Kakao is the number one mobile platform in Korea. Kakao is very popular for its messenger service, called KakaoTalk.
It handles 11 billion messages per month around the world, and there are a lot of applications we develop at the company level. It has games and payments and banking and mobility, which we call the taxi service, for your convenience. So basically we try to connect everything and every application with our product. Yeah, next slide, thank you. We developed the Kakao Cloud platform based on the open source product, which is OpenStack, like this. We defined the Kakao Cloud platform around Keystone tokens and the OpenStack APIs. These are necessary for the Kakao Cloud platform, and based on the OpenStack APIs we developed a lot of services, like a Kubernetes service and a container registry and load balancers and deployment as a service, things like that. These services are based on OpenStack and use Keystone as the token provider. Yeah, next slide, thank you. This is Krane, which is the basic infrastructure as a service for our company. It has a typical deployment: we share Keystone across the regions, with a separate database and message queue for the shared part, and we have separate regions right now, each with two controllers for HA purposes. The controllers have the APIs like Nova and Neutron and Octavia and things like that, and each region has its own separate RabbitMQ and MySQL too. The thing is that we don't actually have separate network nodes, because we developed a special network model for our OpenStack service. So the compute node itself has an L2 agent and an L3 agent at the same time. We actually run the router at the compute node level, so we try to steer the packets at the compute node level. Yeah, so we make the SDN span the compute nodes themselves. Yeah, thank you. On the transformation: we are trying to transform our infrastructure to the OpenStack-based cloud, and this is the result.
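Running L2 and L3 agents together on every compute node, as Kakao describes, is similar in spirit to upstream Neutron's Distributed Virtual Router (DVR) mode, where east-west routing happens on the hypervisors themselves rather than on dedicated network nodes. Kakao's model is their own custom SDN, so this is only an illustrative sketch of the analogous stock Neutron configuration, not their actual setup:

```ini
; neutron.conf on the controllers: create routers as distributed by default
[DEFAULT]
router_distributed = true

; l3_agent.ini on each compute node: run the L3 agent alongside the L2 agent
[DEFAULT]
agent_mode = dvr

; l3_agent.ini on the nodes that still handle centralized SNAT
[DEFAULT]
agent_mode = dvr_snat
```

With a layout like this, traffic between VMs on different subnets is routed directly on the compute nodes instead of hairpinning through a central router.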
At the start of the year, like 2020, 30% of our product was based on the cloud, which means it's using Krane and the container orchestrator. Now it covers more than 71% at the product level. When you think about the server count, it was 25%; now it's 56%. During that one year, it increased by more than 50,000 at the server count level. And then we built data services on top of our infrastructure, our cloud. It indexes logging data, like 53 terabytes per day. And we actually have a central messaging connector for our services and the logging and the telemetry. We handle more than 360,000 messages per second. Yeah, that was maybe twice as large as last year, and we are expecting this number to grow faster next year. Thank you. Awesome, it's really interesting to see that last slide in particular, how you're moving more and more workloads to the cloud because you're having more and more messages go through the platform. So currently, what is the scale of your OpenStack environment in production today? Yeah, we have more than 10,000 compute nodes, and we actually never count the VM numbers, but when I checked this morning, it was 150,000 VMs on our platform. Wow, that is incredible. And it's cool to see that kind of growth, just even like you said, from 2020 to 2021, to go from 30% of certain workloads in the cloud to 70%. That is what I like to call explosive growth, which is the focus of this episode. Do you see that continuing to grow over the coming years with your OpenStack environment specifically? Yeah, yeah, because we have a new data center coming next year, so we're trying to move all our VMs and hypervisors to that data center. So at that time, maybe actually every resource of ours will be covered by OpenStack and its APIs; that's what I'm expecting right now. Wow, no, that's incredible. I know you mentioned Kubernetes briefly on your second slide.
What other open source technologies are you deploying with OpenStack in production? Actually, we utilize every open source project, I guess. What we are trying to focus on these days is OpenTelemetry, because it's trying to define the standards for telemetry itself. And we are focusing on Open Policy Agent right now, because we're trying to build up standardized access control and authorization for our cloud platform. I think OPA is the best candidate for that. Awesome, well, thank you, Andrew. And I think we're gonna bring you back at the very end for audience questions, but I appreciate your presentation a lot. Thank you. So one of the things that I'm really excited about, and I think Andrew's presentation totally set us on the right foot, is that this year, even compared to last year, we've seen explosive growth among OpenStack deployments in production. Last November, when we held the virtual Open Infra Summit, we had 15 million cores in production, which was really exciting to celebrate with the community. But just this year, this November, one year later, we're at 25 million cores in production. So 66% more cores than we knew about last year, which is just incredible. And this is why I'm so excited to be here with these users who are seeing such incredible growth year over year and planning to see even more growth in the future. So one of these users that has really accelerated theirs, and has actually crossed the one million core threshold this year, is LINE. To learn more about their organization and how they're using open infrastructure in production, I'd like to welcome Yushiro Furukawa and Radip. Welcome to the show. Hi. Hi. So yeah, let us begin with LINE. We introduced OpenStack at LINE in 2016. And LINE is not just a messaging app, it's actually a complete platform of communication and infrastructure.
And we actually have around 189 million total users of LINE, and the total number of daily messages is around five billion. And we have around 89 million active users in Japan itself. LINE's workloads include not only LINE messenger, LINE payments, games, music, live streaming and other services, but also several ways of how we actually deal with the open infrastructure. And it includes, we actually have Kubernetes workloads as well running at LINE. And for the scale, I would like Yushiro-san to talk about the next slide. Yes. Actually, we have more than 40,000 hypervisors. Oh, sorry, this is the data from June or July, so actually more than 50,000 hypervisors, and more than 74,000 virtual machines, and more than 30,000 physical servers running in our private cloud, called Verda. I would just like to add, from LINE's perspective, we have also initiated work for the upstream. We introduced oslo.metrics recently, which was led by Jean-san for the upstream part. And we are also trying to contribute as much as we can on that level. Wow, I can't... seeing the millions and billions on the slides, and then seeing that since July you've added so many more hypervisors just to accommodate this traffic that y'all have. So what is driving this growth behind y'all's open infrastructure scale? Yeah, so LINE services, for example, banking, delivery, digital comics and a bunch of other services are running, and our end user accounts are growing day by day. We are supporting the company's growth from the infrastructure layer. And so what do you think is next for your OpenStack deployment? As you see the LINE customer base continue to grow, what additional workloads do you foresee being run on your OpenStack cloud environment? Yeah, in the next few years, we will expand some availability zones and add a new region in order to support LINE's workload growth. This will significantly increase the size of our OpenStack deployment in the next few years.
Awesome, and I know I asked this of Andrew as well, but one of the things that I find really interesting is that in addition to OpenStack, there are a lot of open infrastructure projects that add different capabilities and integrate really well with OpenStack. So what other open source technologies are y'all integrating into your OpenStack production environment? Radip-san? Apart from the OpenStack production environment, we actually run complete Kubernetes clusters as well. We are also using Rancher in our department and other work. Not only oslo.metrics, which we are currently using, but also on the monitoring side, which Yushiro-san and I recently presented at the virtual Summit, we are using Prometheus, Grafana and everything else to actually monitor how our clusters are performing, identifying SLOs and everything else. Awesome, well, it's definitely great to see all of the data and all of the growth behind LINE's production deployment. I'm sure we'll have more questions at the end of the episode, but thank you for that presentation, and we look forward to hearing more from you in a bit. Thank you. Thank you. So like I mentioned earlier in the episode, these users are coming from all over the world. I know for the LINE and Kakao teams it's very late in the evening, but last week I was able to sit down with the Australian Research Data Commons team, who are behind the Nectar Cloud. And because it's in the very early morning hours for them right now, we actually pre-recorded a segment where I asked them these same questions and learned how they're using OpenStack and what's driving the growth behind their OpenStack production deployment. So now we're gonna go back to last week and hear what they have to say and what's driving the growth in Australia. Excited to be sitting down today with Carmel Walsh and Paul Coddington to talk about the growth that they're seeing with their OpenStack deployment at ARDC Nectar.
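The SLO work the LINE team mentions usually boils down to tracking an error budget: how much unreliability a service is allowed within a window before its availability target is broken. As a hedged illustration of that arithmetic (not LINE's actual tooling, which they present separately):

```python
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability SLO over a window."""
    return (1 - slo) * window_days * 24 * 60


def budget_remaining(slo: float, downtime_minutes: float, window_days: int = 30) -> float:
    """Fraction of the error budget still unspent (negative means SLO breached)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget


# A 99.9% monthly availability target allows 43.2 minutes of downtime;
# 10 minutes of outage would leave roughly 77% of the budget unspent.
print(error_budget_minutes(0.999))
print(round(budget_remaining(0.999, 10), 2))
```

In practice the measured downtime would come from a Prometheus query over failed requests, but the budget calculation itself is just this.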
Before we kick off, I first wanna welcome you both to Open Infra Live, and see if y'all can give us a little bit of background on what ARDC Nectar is, and what background you can share about the organization for our audience. So the ARDC is the Australian Research Data Commons. It was established back in 2018, and it's part of the National Collaborative Research Infrastructure Strategy that is supported by the federal government. The ARDC is made up of three heritage organizations. One of those was the Nectar Research Cloud; we also had research data storage, and we also had ANDS, which looked at research data management. In relation to the Nectar Research Cloud, that actually started back in 2012, and it's a federation. We run it across our node partners across Australia and New Zealand, and our node partners are Monash University, University of Melbourne, University of Tasmania, QCIF and Intersect. And we also have partners in Swinburne University and the University of Auckland. Awesome, so you mentioned the Nectar Cloud has been around since 2012, and I believe that's when you started with OpenStack. So how has OpenStack transformed your organization since then? The Nectar Research Cloud was set up to encourage cross collaboration across research institutions in Australia. Similar to America, Australia has states and state governments and a federal government, and the laws in relation to data transfer can be a little bit different in those states. So we were keen to set up a research cloud that would encourage and allow that cross collaboration between different institutions, because people access their data and their data collections at the institution, or at the node. I'll hand over to Paul, just to give a little bit more background, because he's been at Nectar for quite some time. Thanks, Carmel.
So back in, actually it was 2011, we started thinking about the Nectar Cloud, and we wanted to put up a research cloud at that point, specifically targeted at researchers. And at that time there were several different types of cloud infrastructure or cloud software that could be used, and it wasn't entirely clear which one we should go with. We went with OpenStack, and it seemed to be a very good choice, and we haven't looked back. So yeah, we started on the Cactus release of OpenStack, so we've been doing this for a long time. Yeah, nine years. I mean, it feels like so long ago, and now we're at Xena and almost at the end of the alphabet. So you started at the beginning, and here we are. So what workloads are you actually running on your OpenStack cloud? So we support, as I said, research with the research cloud. We have about 1,800 different research projects, so it really covers the whole gamut of different researchers. In Australia, the federal government has categories of research, there are 22 top level categories, and we cover all of them. So it really is a broad spectrum of all sorts of research. And in terms of the type of things people do, it's everything from really just using the cloud as a compute resource. You know, I have my sort of grunty personal computer, almost, that I can go and run, or my research group can have several large virtual machines running where I'm doing simulation modeling, data analysis or whatever. Or I'm hosting databases and datasets that I can share with my colleagues, or I'm hosting research support services. So one of the main things we support is what we call research platforms, sometimes known as virtual research environments. These are online systems, portals, that are targeted towards specific research use cases. So we have things like ones that are supporting genomics research; we run the Galaxy platform, a common platform for genomics analysis.
Ecological studies: there's a biodiversity and climate change virtual laboratory that looks at the impacts of different climate change scenarios on the distribution of different species across the country. We've also got a machine learning platform, one that supports drones and analysis of drone data, and one for analysis of what we call characterization data, data from microscopes or that sort of imaging data that involves significant processing. So there are a lot of these customized platforms that people use for different research fields that run on the Nectar cloud as well. And that's where we've seen large growth in the last couple of years. The Australian Research Data Commons mission is to provide Australian researchers with competitive advantage through data. So what we did a couple of years ago, with my colleagues, was launch the ARDC platforms program. We have about 27 platforms being developed, and previously on Nectar, I think over those five to six years we developed about 16 virtual labs. So it's quite a large amount of growth. And to enable that growth, what we have been doing is investing in leading edge infrastructure. We've been investing in GPUs at scale and large memory machines at scale, and we're also looking at how we deal with allocation of those resources, so looking at having GPU services and stuff like that. So it's giving us that opportunity to invest more, but also to scale these services for our national research advantage. Well, Carmel, I'm glad you brought up that growth and how you're having to increase your scale, because all of those different activities that y'all both just went over must require a massive cloud. One of the reasons we're talking in this episode about explosive growth with OpenStack deployments is y'all's OpenStack deployment just compared to last year. So you've been in production for nine years.
And just in the last year, it's grown 146%, which is incredible. We have several organizations that are investing more, but this amount of scale increase is really incredible. So where are you in terms of scale, and what's actually causing this growth in your organization, besides the workloads that you've just described? Well, we've invested in doing a refresh of our core infrastructure over the last couple of years, and basically that has allowed us to more than double our national capacity. I'm just looking at the stats here so I remember them all. In essence, what we have is 150,000-plus virtual CPUs now, 8,000 virtual machines, two petabytes of object storage and five petabytes of file storage. And investing in that new infrastructure and refreshing has allowed us to do a lot more, I suppose. So the refresh was one element, and that finished in July this year, and we worked with our node partners on that. But as I said, a large element was with the platforms, so understanding what kind of technology they needed, and what technology they needed at scale. Paul mentioned some of the platforms there. But another large element of it was being able to be agile and grow with the changing needs of research. Another platform that we're working on at the moment is the Secure eResearch Platform, SeRP. It's a product that was developed in Wales in the UK. It has been running on our Monash node for some time, but we're looking to expand it to be a national service. To be able to expand it, we've had to invest significantly, and we've had to change our architecture a little bit in our OpenStack cloud. So I'll hand over to Paul just to give a quick summary of what we're doing there. Yeah, so for that, because it's sensitive data, we need to have it essentially separate from the rest of the cloud infrastructure and users.
So essentially we're setting up a separate OpenStack region in the particular nodes that are running that: a new control plane, new separate infrastructure and networking and so on. As Carmel said, we're just expanding that now, and we're looking at procuring additional infrastructure for it at the moment. That's incredible. I mean, the amount of different workloads you're running and the investment that you're making in open infrastructure is something I'm really happy to be talking to y'all about today. And in addition to these kinds of growth opportunities in the future with the new regions, are there any other growth plans for the Nectar cloud, or for what you're doing with open infrastructure? We're also looking at where we can use Nectar as an innovation platform, and innovation at scale. As I said, we talked about the GPU things, but we're also looking at DPUs. And we're working with one of our node partners, Monash University, and NVIDIA as well, to see how we could use DPUs for micro-segmentation for security, cyber security. So actually customizing the cyber security per environment and per workload. That's quite a bit of work that we're doing at the moment as well. We also have been looking at, oh God, I can't remember it all, but because we are a national research cloud, we have been looking more broadly at how we can leverage, at a national level, commercial cloud for national research, and how that would work with OpenStack. Another area that we've been looking at is defining the workflows and understanding where you can get optimum compute. Anything else? I know I'm missing something, Paul. Yeah, I think, well, the other thing is more around the higher level services that we support. There's huge use in research of Jupyter Notebooks, so we wanna be able to support those kinds of things, and make it easier for people to use the cloud using a virtual desktop sort of interface and so on.
So we're building those sorts of services to make it easier for people to use the cloud to do their research when they may not have much IT expertise, for example. Well, research is an area within the global community where there's been a lot of momentum around OpenStack, and I know we're gonna be tuning into an Open Infra Live episode on it in the coming days. Where can people actually learn how to collaborate with you, and more about your use case with OpenStack and other open infrastructure projects? So you can access more information about the Nectar Research Cloud on the ARDC website. That's ardc.edu.au, and under our services tab there's more information on the Nectar Research Cloud. You can also ask questions, and also see the kinds of platforms that we talked about earlier that are running on Nectar, and learn a little bit more. One area that we have been looking at quite a lot with new technology is containers and orchestration, and how to develop those for research. At Nectar we are developing those services for our national research workloads, but also, as ARDC, we are leading the national coordination to develop best practice and policies for how to adopt projects like Kubernetes for research. Awesome, well, that's incredible. Thank you both for sitting down with me today. I'm really excited for the global community to learn more about your use case, and we hope to have you on a future episode of Open Infra Live. Brilliant, thank you for the opportunity. Thanks, Allison. Thank you. Awesome, well, it was definitely great to hear how a research organization like the Australian Research Data Commons is using OpenStack to power their different research workloads. One of the things that I really wanna emphasize to those who are watching, and anyone who is running OpenStack in production, or even if you're just evaluating it or running a proof of concept: please take the OpenStack user survey. It's openstack.org slash user survey.
We have the link here on the screen. It's a great way for the upstream community to understand your technical requirements, but it's also great for other operators to learn how you're using OpenStack in production, and what they may be able to learn from you around challenges that they may be having themselves. So for our next user, we're gonna travel from Australia to Germany, and I'd like to welcome Adrian and Marvin from Schwarz to talk about what they're doing with OpenStack and what their organization is all about. Hi, Allison, thank you. Awesome. Yeah, let's find out what the Schwarz Group really is. I'm Marvin, I'm from StackIT, and we're part of Schwarz IT, which you see in the middle of the screen here. I think most of you know Lidl and Kaufland, mostly the European people. We are there with the two retail brands, Lidl and Kaufland; we have 12,000 stores in 32 countries. Lidl is also in the USA. And we are Europe's largest trading company. Beside the retail part, we also have recycling businesses, which pretty much close the circle: you come to the retail store, buy something, bring back your old bottles, and we recycle them in the Schwarz production facilities. We also have production facilities for things like refilling soft drinks; we also have bakery goods, we have our own ice cream facilities and stuff like this. And then StackIT comes in. So we can go to the next slide, and if we click again, we see the rest of it. We saw the need to build something of our own to get the transformation of the Schwarz Group more in progress, and therefore we created StackIT as a brand within Schwarz IT to start the transformation of the Schwarz Group. Currently we are on the left side, and there are some workloads on StackIT, but the target picture is on the right side: we want 70% of our infrastructure to be moved to StackIT.
And then we have the other 30%, which is mixed between multi-cloud environments and the enterprise IT. To give you some examples of what is currently running on StackIT: we have a car sharing app called 2Go, we have different parts of Lidl's in-store software which run on StackIT, and we also have logistics software for PreZero, yeah, also on StackIT. And yeah, Adrian will now tell you something about the in-depth technology in StackIT. Yeah, hi everyone, I'm Adrian. So to power the growth of StackIT and everything we're doing on it, there's some technical backbone needed, and some human backbone. We currently have four data centers, located in Germany and in Austria. We are already more than 100 colleagues here at StackIT, spread over Germany, Bulgaria and Spain, and we're growing even larger every day. To give you a little bit of insight into how large our infrastructure is, or how tiny, at the moment: we currently have 255 compute nodes, running 5,600 VMs on them, with 6,800 vCPUs, 127 terabytes of RAM and two petabytes of storage. And as you can see in the slides, we have expanded rapidly since last September, so I think the numbers speak for themselves. Yeah, next slide, please. So to power this even more, and to grow even faster, we partnered up with Cloud&Heat on Yaook. Yaook, spoken out, is Yet Another OpenStack On Kubernetes. It's a lifecycle management tool for OpenStack. It is currently split into three parts. There's Yaook bare metal, which is currently Ironic and some OS image creation, which helps us a lot. Then we have Yaook Kubernetes, which sets up for us the Kubernetes clusters where the Yaook operators, and with that OpenStack, will run. Yaook Kubernetes is mostly a collection of Ansible playbooks. Yeah, and after these have been set up, the Yaook operators take over, and based on the YAML configuration files, they will set up your OpenStack clusters.
They will set up databases, secrets and user certificates, load balancers, SSL terminators, everything you need for a large OpenStack environment. So the hope is to grow even faster and more rapidly in the future. Yes, to give you maybe some insight into what services are currently running on our infrastructure, just a few of them: some platform as a service offerings are Kubernetes for our users, or Cloud Foundry, and some other, smaller services are databases like MySQL, Postgres and Redis. We offer OpenSearch and other monitoring and backup solutions right now. We are not public yet, but more will come, hopefully soon. Awesome, well, like you said, Adrian, those numbers speak for themselves. I mean, you had over 200% growth in a lot of areas of your infrastructure. And one of the things that I love to learn about is the why: why are you seeing this increased need for more and more infrastructure powered by open source technologies, as well as some other technologies like AWS and GCE as well? Maybe I will answer. As you could see, we are a really wide range of companies in the Schwarz Group, and they all have, let's say, different needs. Some of them just need fast acceleration of software productivity; for this, we have had a look at Cloud Foundry, and it fits really well the needs of a part of our developers. And we also have a Kubernetes runtime, which we empower a bit with Gardener underneath. And in the Schwarz Group we also have OpenShift, which is not yet part of StackIT, but could maybe also be in the future. So there is quite a range. There is, yes. Well, and one of the interesting things was the last slide around, and I'm gonna potentially mispronounce this and I apologize, Yaook, I think. Yes. And like Mohammed said in the comments, it seems like integrating OpenStack and Kubernetes together is really the way forward for open infrastructure deployments.
So is this something that, I know I said I just learned about it today, but is there a place where people can learn more about this, maybe even get involved with what you all are doing with Yaook? Yaook is open source. You can go to yaook.cloud, and there you can find all the information you need to contribute, take a look, and read more about what we have done in the past and are currently working on. Awesome. Well, I actually look forward to learning more about that. It's probably one of my new favorite acronyms; it seems like there are more every day, but it is definitely fun to say. But thank you all for that overview, and we're gonna bring you all back after our last presentation to answer some more questions. Thank you. Thank you. And don't forget, if you're watching live in the audience, we will answer those questions live here on air. We have one more user that I'm really excited to introduce. We have Nils from the Open Telekom Cloud, powered by T-Systems, who is an OpenInfra Foundation Gold member. He's gonna talk about their OpenStack-powered public cloud and why they're seeing some significant growth in OpenStack as well. Welcome, Nils. And nice sunglasses. Yeah, well, those sunglasses have had some use in the past days. Yes. Yeah, thanks for having me. Let me briefly introduce you to the Open Telekom Cloud. You have all probably heard of Deutsche Telekom, the premier telco provider here in Germany. But we are present in several other countries as well, in the U.S. as T-Mobile US, as an example. And as such, we have not only telephony services, but also public cloud services. And the public cloud of Deutsche Telekom is the Open Telekom Cloud. Technically, it is operated by T-Systems. T-Systems is a subsidiary of Deutsche Telekom and takes care of the big installations when it comes to IT setups. Well, by the way, we have grown into one of the largest public clouds in Europe that are actually based on OpenStack, and to get an idea of the dimensions...
I'd suggest we turn to the next slide, please. And I suggest that we start right at the top. So we've built quite some hardware in the meantime: something like 18,000 rack assets, including servers, switches, security devices and other stuff as well. We are almost touching half a million virtual CPUs at the moment, and we should be a good bit over three and a half thousand terabytes of RAM in our systems. And well, all the other more or less boring stuff, if you are familiar with OpenStack based clouds. But to name a few things: well, we run OpenStack, and we make a great effort in being and remaining DefCore compliant. We have quite good connectivity, as we are a telco provider anyway, with about 300 gigabits of internet uplink. We have the typical infrastructure as well as platform services. And all this is split across two major regions, which does not sound like many, but we spent considerable work in setting up our systems to be able to scale our services to such a massive scale within a single region. Each region is split again into three AZs, and we have two major data center sites: one is in Germany and another one is in the Netherlands. That is very helpful for being compliant with the European GDPR, which is probably also one of the major drivers of our massive growth over the past couple of years. One number we are particularly proud of is our storage capacity, which must by now have surpassed 500 petabytes of block and object storage combined. For example, just to give you an idea of what kind of workloads we are running: we are storing satellite imagery that gets recorded directly from space, sent to the ground stations, and then stored and also processed in the Open Telekom Cloud. At the moment we are a team of about 350 colleagues, out of the I don't know how many tens of thousands of Deutsche Telekom colleagues, and we are distributed at the moment across four major hubs, but quite distributed. That is Germany, as I'm from Berlin.
We have a lot of colleagues, especially in operations, in Hungary, but as well in Slovakia and in Russia. Well, as you can see, what I just skipped was the servers. We have lots of different sizes, which is important if you are acting as a public cloud provider, since we cater to so many different needs. And that's why we have a great assortment of very small as well as rather big machines, including bare metal and some special stuff like GPUs, FPGAs and, coming up next, also some special chips and CPUs for powering artificial intelligence. Next slide, please. But it's not only the massive scale that makes up the Open Telekom Cloud; it's also our connection to the community. As an example, I brought you a number of projects that we participate in and contribute to. One is the OpenStack SDK and CLI, the OpenStackClient, where we are major contributors. One of my colleagues, Artem, is even the project team lead for this OpenStack project. We are also engaged in the Ansible Collections, making it easier to manage and maintain your OpenStack workloads directly with Ansible, and the same is true for the Terraform provider. We also invest quite some time into making a great user experience and contributing back to the awesome community. So as you can see here, that was a small example of how you can use the SDK to automate your workloads. That's a very useful thing for our customers, for our users, and we're using the same stuff ourselves. Well, as you can see, we are a great and happy team. We are hiring, and that's it for now. Back to you, Allison. Nils, I love it. Definitely a lot of great growth and it's really cool to see all of the different things that are happening there. I know you mentioned that, of course, GDPR is definitely a driving force for public clouds, particularly in Europe. This year we've actually expanded to 175 public cloud data centers and it's still growing.
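Nils mentions using the OpenStack SDK to automate workloads, and that API compatibility is what makes multi-cloud setups practical. As a rough illustration (not the actual code shown on his slide), here is a minimal sketch using the Python openstacksdk; the cloud name, flavor, image and network names are hypothetical placeholders that would come from your own clouds.yaml and cloud catalog:

```python
def launch_demo_server(cloud_name="otc"):
    """Boot a server on whichever cloud `cloud_name` points to in clouds.yaml.

    A minimal sketch, not the presenter's actual example; the flavor,
    image and network names below are hypothetical placeholders.
    Because the same OpenStack APIs work across providers, pointing
    `cloud_name` at a different clouds.yaml entry runs identical code
    against another cloud. Requires `pip install openstacksdk`.
    """
    # Imported inside the function so the sketch can be read (and the
    # module loaded) without the SDK installed.
    import openstack

    conn = openstack.connect(cloud=cloud_name)
    image = conn.image.find_image("Standard_Ubuntu_20.04")
    flavor = conn.compute.find_flavor("s3.medium.1")
    network = conn.network.find_network("demo-net")
    server = conn.compute.create_server(
        name="demo-server",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    # Block until the server reaches ACTIVE, or raise on error/timeout.
    return conn.compute.wait_for_server(server)
```

Calling `launch_demo_server("some-other-cloud")` against a second clouds.yaml entry is, in essence, the kind of API-compliant portability Nils describes below.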
And I know that y'all's team is a big part of that. Do you see that there are other things beyond GDPR that are driving that growth around public cloud? Yeah, it's not only the GDPR, but the greater theme of sovereignty. Sovereignty is something that is heavily discussed in Europe at the moment: to stay and become independent, both on a technology as well as on a data level of IT. So that means for our users, it's very important to know that they can trust us with the workloads that they send to us. But if for whatever reason, I cannot imagine it right now, but you never know, there is no Deutsche Telekom anymore, there are several other options as well and you can easily switch over, or even combine those services, which is a somewhat more probable situation. So, setting up real, working, API-compliant multi-cloud scenarios is certainly a big driver as well. Awesome. Well, I know we only have a few minutes left, so I wanna actually bring back all of our panelists. I have one question and of course, if you're in the audience and you'd like to ask anything of any of our panelists here today who are running OpenStack in production, please feel free to drop that into the chat. But for now, let's bring back Schwarz, Kakao and LINE. All right. So, one of the things that I find interesting, and you all really embody this, is that OpenStack is really growing at your organizations and you're continuing to invest in it. But one of the questions I have, and I'll start with you, Andrew from Kakao, is how has deploying OpenStack and growing your environment affected or transformed your organization? Oh, yeah. Maybe two or three years ago, we had about 10,000 VMs across our infrastructure. And then we tried to persuade our developers not to just use VMs through the API, but to use an orchestrator on top of the VM and infrastructure APIs, like Heat or CloudFormation. But the thing is, it was really difficult, right?
So, when we tried to use the VM-based orchestrator, we had to think about the VM image itself and the working code in the VM image. So it was really hard. So we moved away from that. Actually, Allison, you asked me earlier which open source projects we are interested in, and I skipped the important one. Actually, we have, how do I say, completed the integration of Kubernetes with our Crane APIs. After we did that, about one and a half years ago, the developers self-serve; they are using Kubernetes really easily and simply. After that, our development environment really changed; it increased the agility and the efficiency of our infrastructure for the developers. So, a couple of weeks ago, we actually had to build the no-show COVID-19 vaccine reservation system. With our cloud platform, that was done in 10 days. Yeah, that was great. Yeah, no, that's really awesome and it's cool to see that you're deploying OpenStack with Kubernetes. I have one follow-up question at the end that I'll ask everyone, but for now I wanna go to LINE. How has OpenStack transformed your organization? Oh, yeah, sorry, yeah. So, from our side, OpenStack has really changed how we are working and operating within our organization. OpenStack was introduced around five years back, as I said earlier. With the introduction of OpenStack, there has been a lot of acceleration, and it has actually supported LINE's service growth a lot. OpenStack has helped us introduce an abstraction layer between the actual data center infrastructure and how the applications are developed by the developers themselves. Previously we had the legacy systems, and the legacy systems had a lot of work to be handled, which everybody already knows, and that's why we moved to a cloud-based production system.
This abstraction layer, which OpenStack introduced, changed how LINE is actually working. Earlier it was more like doing something, and now it has become more like developing something. That difference comes into the picture because, with the introduction of OpenStack, the developers don't have to worry about what's happening underneath in the infrastructure; they just have to worry about how their applications are working and how everything is operating. This culture has reduced a lot of extra work, and the developers can just focus on service development right now. That's so massive. Awesome. And what about you, Schwarz? Since we are like the youngest of them all here, let's say it like this: we started our first OpenStack cluster in early 2019. It's not so long ago. And I think OpenStack puts us in the situation that we can use it as a base layer. Or to be honest, we have Yaook, then we have Kubernetes, we have OpenStack, but to be clear, yes, it's OpenStack. And this is the base for all of our product teams to build on. So, besides the infrastructure as a service, we also have Kubernetes, we have Cloud Foundry, and all the product teams are currently building their products on OpenStack, like a managed database service, and they all use OpenStack for it. And therefore, we are working pretty much under the hood to give the product teams the possibility to create products for our end customer, which is mainly the Schwarz Group. But we also want to offer this mostly to European customers, where we also have topics like the GDPR, which Nils told us about. And we also want to offer STACKIT to the public. There are already some, let's say, friendly customers who are working on our cloud together with us to make it better and to test it. And we want to go soon to a more public audience, and OpenStack helps us with this. Awesome, yeah.
And like you said, you might be younger, or your deployment might be younger, than some of the other organizations, but that's actually one of the powerful things. I feel like we're seeing more and more organizations come online with OpenStack year over year. So it's interesting to hear your perspective having started in 2019, and hearing how you're integrating OpenStack and Kubernetes together. So it's exciting to hear your perspective on the show today. And Nils, last but not least, how has OpenStack transformed T-Systems? So when we started a little bit more than five years ago with our OpenStack installation, we decided as a company as a whole to undergo a massive transformation to become much more agile, much more open, much more transparent. And setting up and developing the Open Telekom Cloud and leveraging OpenStack was just the right environment in which to actually implement this path. Ever since then, we have more or less nearly doubled all our figures: our revenues, our number of servers, the capacities and so on. Give or take a little bit, depending on how you count. So that was a major growth in scale. And this scale can only be achieved if the people who work around this have the right mindset and the right methodology to leverage this growth. And that was made possible by taking over responsibilities within small teams, moving quickly, trying out things, failing, but if we fail, then failing early. And on the other hand, making sure that our customers get a reliable service. For example, for more than a year now we haven't had a single major outage. And that's something we are really proud of. So that's how we transformed here. Yes, no, that's fantastic. And it's great to hear not only the impact on your revenue, but the impact on your end users, without having outages and things like that.
One commonality, and I know we only have a few minutes left, but one that I think is really interesting, is that not only are you all using OpenStack in production, but you're also using Kubernetes. It's been, you know, over ten years since OpenStack came around, and it's become the standard for open source cloud. And Kubernetes has become the standard for container orchestration. So next week, we're gonna be talking to more users who are following this open infrastructure standard, where they're running OpenStack and Kubernetes, and also the de facto standard for operating systems, which is Linux. So we're gonna hear from some Linux, OpenStack, Kubernetes infrastructure users like y'all that are running in production, growing, and seeing how these technologies are impacting their organizations. So I personally wanna say thank you to all of you. I know we're out of time; I could probably fill another hour with questions that I personally have. But we're gonna be publishing your case studies individually on superuser.openstack.org, so if you wanna learn more about what these organizations are doing, please go visit and learn more there. Next, I wanna actually thank the OpenInfra Foundation members who fund the foundation and make shows like OpenInfra Live possible. So thank you to everyone who continues to support the OpenInfra Foundation and makes this show possible. And like I said, I thought it was a few weeks from now, but it's actually next week, on Wednesday and Thursday at 1500 UTC: we have the very special OpenInfra Live: Keynotes. This is going to be a two-hour, two-day event where we are going to have users and developers and all of the open infrastructure experts in one place to talk about what's going on in the open infrastructure community, but also to answer your questions live. So please join us. Registration is free and open right now.
And of course, I'd like to thank our sponsors for the OpenInfra Live: Keynotes as well, including our headline sponsors, Red Hat and Wind River. In a few weeks, we're gonna resume our weekly show, OpenInfra Live, but don't forget to tune in next week, and have a great time with OpenInfra.