Hello, everyone. Thank you for joining us and welcome to OpenInfra Live, the Open Infrastructure Foundation's weekly show sharing production case studies, open source demos, industry conversations, and the latest updates from the global OpenInfra community. This is our 22nd episode, and we're doing this live every Thursday at 1400 UTC, streaming on YouTube, Facebook, and LinkedIn. My name is Sunny Tsai and I'm going to be your host today. I'm very excited for today's show because we have some really amazing guests who have won past Super User Awards, and we're going to talk about how they are making a difference with open infrastructure and what's next in their OpenInfra journey. Today on the show we are joined by some of the previous winners of the Super User Awards, including CERN, China Mobile, the Ontario Institute for Cancer Research, and VEXXHOST. For people who might not be familiar with the Super User Awards, the awards were launched at the Paris Summit in 2014. Since then, the OpenInfra Foundation has hosted the annual Super User Awards to recognize organizations that have used open infrastructure to improve their business while contributing back to the community. Before I introduce our speakers today, I would like to remind all of our audience that this is a live show, so feel free to drop your questions into the comments section throughout the whole show and we'll answer as many of them as possible. So joining us today, we have Belmiro Moreira from CERN, Xiao Guangzhang and Zhiqiang Yu from China Mobile, Jared Baker from the Ontario Institute for Cancer Research, and Mohammed Naser from VEXXHOST. To kick this off, let's hear from Belmiro, who is going to walk us through how CERN's open infrastructure environment has evolved since they won the first Super User Award at the Paris Summit about seven years ago. Take it away, Belmiro.

Thank you, Sunny. Hello, everyone. It's a pleasure to be here. So CERN was the first Super User Award winner, back in 2014 at the Paris OpenStack Summit. I have such good memories from that event, and it was already seven years ago. So let's talk a little bit about the current CERN infrastructure. You see these dashboards; this is from our live monitoring, and it was taken yesterday, so you have very recent data from our infrastructure. You can see the size of the infrastructure today: we have around 200,000 cores available, around 3,500 users, more than 400 projects, and more than 20,000 virtual machines. In terms of hypervisors, you probably saw in previous talks that we were reaching 10,000 hypervisors, and now we have a little more than 4,000 hypervisors, and you see a big drop in the number of hypervisors at the beginning of September. This shows that the infrastructure continues to evolve. We are not removing or shrinking the cloud infrastructure; actually, we are adapting it to the workloads that we have. I will touch on this later during the stream. So moving forward, what we have today in terms of OpenStack projects. We started the cloud infrastructure at CERN in 2013, initially with only four projects: the main core projects Keystone, Glance, and Nova, plus Horizon, so users can interact with the infrastructure in an easy way. Today we have 15 projects that we offer to our users, basically to fulfill the different use cases that they have.
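To picture what that self-service interaction with the core services looks like in practice, here is a minimal, illustrative sketch using the openstacksdk Python client. The cloud name and the reliance on a clouds.yaml entry or OS_* environment variables are assumptions for the example, not CERN's actual tooling.

```python
# Illustrative only: how a project member might talk to the core services
# (Glance for images, Nova for servers) through the openstacksdk library.
import openstack

# Reads credentials from OS_* environment variables; a named clouds.yaml
# entry would work the same way (assumption for this sketch).
conn = openstack.connect(cloud="envvars")

# Glance: list the images available to the project.
for image in conn.image.images():
    print("image:", image.name)

# Nova: list the servers (VMs) the project is running.
for server in conn.compute.servers():
    print("server:", server.name, server.status)
```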
And you can see that, because of the kind of architecture we have in our infrastructure, we are able to run these different projects on different releases, and this is due to different constraints. So we upgrade the projects independently. For example, Nova, Neutron, and Barbican are running on the Stein release, which came out a few years ago, while other projects, such as Keystone, are on more recent releases. So you see this mix of OpenStack projects on different releases. It is a challenge for us to manage, but it also allows us to be very flexible in the kind of infrastructure that we offer. And to support the cloud infrastructure, we rely on many other open source projects. For example, CentOS; we rely on the RDO distribution of OpenStack. On top of the OpenStack infrastructure, we also run Kubernetes clusters that are deployed by Magnum, with the applications configured by Helm. We configure all the infrastructure using Puppet and Foreman. For monitoring, we use collectd, Prometheus, and many other tools. For storage, we rely on Ceph, and on MySQL and PostgreSQL for databases. You can see that we run a lot of open source projects to support all this infrastructure, and I probably missed a lot of them on this slide. So the point is that since 2013 we have run our private cloud infrastructure, and we rely on a lot of open source tools to offer self-service infrastructure to our users. Thank you.

Thank you, Belmiro. It's really awesome to see all the open source projects and tools that CERN is using. Next, we have Xiao Guangzhang and Zhiqiang Yu from China Mobile, who won the Super User Award at the Barcelona Summit in 2016. They will introduce how their team carried out in-depth practice of OpenStack as the cloud infrastructure to build their NFV network. Take it away, Xiao Guangzhang and Zhiqiang Yu.

Thank you. Next slide. Hello, everyone. I come from China Mobile. My name is Xiao Guangzhang. Next, I will give you some introduction to the in-depth practice at China Mobile. As you know, at present we have a network cloud, a private cloud, and a public cloud based on OpenStack. As the picture on the right side shows, China Mobile's OpenStack-based infrastructure supports all three kinds of cloud. Above that, it supports 4G and 5G business applications, edge computing, and other IT services for internal use. We also support external business with our public cloud. Now we have scaled to about 300,000 physical servers, with 6 million CPU cores in total. China Mobile's NFV SDN network cloud scales to more than 100,000 physical servers in eight regions across the whole country. In the network cloud we use Mitaka, of course with some enhancements from the Pike and Queens versions brought in. In the future, we may move to the Queens version. Okay, next slide. Over these years, we have done some in-depth practice. On the left side, you can see the architecture of our NFV. At the bottom is the hardware layer. The middle layer is the OpenStack-based virtualization layer. Above that, on the left side, we have the VNF business applications. Of course, these applications are orchestrated by the MANO. So we believe China Mobile is now building the biggest NFV network among the world's leading operators based on OpenStack. Each OpenStack instance manages a resource pool of roughly 500 to 1,500 servers.
The virtual network functions, the VNFs, for example the 4G and 5G applications, run on top of the virtualization layer. Our team, the team I work on, mainly researches NFV system integration. We do automation for the integration, both for the hardware and the software. We have built up a CI/CD process to carry out automatic software integration and testing. That means the different virtual infrastructure managers, of course OpenStack-based platforms, the distributed storage, the NFV orchestrator, and the VNFs are automatically deployed and tested through a unified platform. Currently, the CI/CD platform uses Docker technologies. That's all. Thank you.

Okay, I would like to add something more. We are really super users now. In China Mobile, we have a network cloud, a private cloud, and a public cloud. And as you may know, there are three telecom operators in China: China Mobile, China Telecom, and China Unicom. All of them are active members of the OpenStack Foundation, and all of us are using OpenStack as our cloud platform infrastructure. So we're really looking forward to the next 10 years of OpenStack, the next 10 years of open infrastructure. Thank you.

Thank you so much. It's really impressive to see that China Mobile has over 6 million cores, and I can definitely tell that China Mobile's team has grown its open infrastructure environment a lot since winning the Super User Award. So thank you. And next up, we have Jared Baker, who will give us an overview of what the OpenStack environment looks like at the Ontario Institute for Cancer Research since the Vancouver Summit in 2018.

Thanks, Sunny. Yeah, I'm really happy to be here. I'm in some pretty prestigious company, I would say. We are by no means the size of China Mobile, VEXXHOST, or CERN, but we try to do good things with what we've been given, and OpenStack has kind of helped make that all happen. So yeah, I work for the Ontario Institute for Cancer Research. We're located, obviously, in Ontario, specifically Toronto. We have a cloud platform that we were awarded money for back in around 2014, approximately $2 million, and the goal was to build a public cloud so that researchers around the world could come and use our cloud and basically do cancer research. The goal was to provide a large number of whole genomes of tumor-normal pairs, so that people could do their own analysis on them. Our cluster, which is called the Cancer Genome Collaboratory, is currently running on Ussuri; it's always a constant battle to keep that up to date, but we'll talk about that more later. We're at around 3,500 cores and 84 compute nodes. We have just over nine petabytes of raw data across 39 nodes, and that would be our Ceph cluster. When I created this slide just a few days ago, we were at around 240 instances and 367 volumes, 121 users, and about 60 active projects, and 38 of those projects are from external cancer research organizations across the world. Many of them are in academia; some of them are other institutions like OICR. And the team that runs this cloud, let's call it a DevOps team, would be myself, my colleague Henrik Vetter, and another colleague, Yelazar. So it's just a three-person team that keeps the infrastructure online. And some of the projects that we're running on the cloud here are sort of our main thing that we've been developing. So, as I said, we have a three-person DevOps team.
We also have about a dozen or so hardworking developers, very bright minds, who are contributing to an open source platform called Overture. Basically, it is a collection of open source tools catered to bioinformatics, and with those tools we've developed several national and international projects. The first one I'll mention is VirusSeq, given the current landscape of the pandemic, where we're trying to sequence up to 150,000 virus samples from Canadians who have tested positive for COVID-19. And back to cancer: we've got ICGC, the International Cancer Genome Consortium, where we analyzed whole genomes from 25,000 cancer patients, and that was a successful project. Expanding on that, we recently launched ICGC ARGO, which is a much more global, widespread project whose goal is to analyze over 100,000 cancer samples. Yeah, anyways, I'm happy to be here, so thanks for having me.

Thank you, Jared. I love the international reach of the Cancer Genome Collaboratory, and thank you for your contribution to everyone in the community. Next up, we have Mohammed Naser, who will tell us what teams at VEXXHOST have been up to with open infrastructure since the Denver Summit in 2019.

Hi, everyone. So it's crazy to think that Denver was only two years ago; I guess time has really gone by quickly. But I've got a couple of slides to talk about what we've been up to. So starting off, we won the Super User Award in 2019, which was really exciting for us because we've been members of the community for a long time, and contributing. We also announced a service for folks that want their OpenStack clouds upgraded, where we can help them upgrade their clouds. Last year, we announced our new cloud region, and I believe we did the announcement during the virtual Open Infrastructure Summit. The region launched sometime this year, and it's been awesome to see users continue to pick it up. As of now, we're running Wallaby for the majority of our public and private clouds in terms of services. We're pushing really hard to get back to our theme of running releases before they're even released, so we'll see what we're going to do for the upcoming OpenStack release. And like I mentioned, our public cloud presence is Montreal; Santa Clara, which has recently undergone a whole refresh with new end-to-end storage and is now on par with all the other facilities that we have; and Amsterdam, plus some new locations that are currently in the works. When it comes to private cloud, we continue to deploy and run private clouds for customers all over the world, anywhere from North America to the Middle East to countries like China, where we operate private clouds for users out there. Moving forward, we are super involved in open source, and I wanted to talk a little bit about the ecosystem around it, because everybody knows that we do a lot of OpenStack, but it's interesting to go beyond that. So when it comes to internal management and code review and everything like that, we're using GitLab; again, everything I'm going over here is open source. Then Netbox, which is a data center infrastructure management tool; that's how we know where our hardware is.
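As a small, hypothetical illustration of the kind of lookup a DCIM tool like NetBox enables, here is a sketch using the pynetbox client; the URL, token, device name, and site name are all invented for the example.

```python
# Illustrative only: asking NetBox where a given machine physically lives.
import pynetbox

nb = pynetbox.api("https://netbox.example.com", token="REDACTED")

# Look up one hypervisor by its (hypothetical) name.
device = nb.dcim.devices.get(name="compute-001")
if device:
    print(device.name, "is at site", device.site,
          "rack", device.rack, "position", device.position)

# Or list everything racked at one (hypothetical) site.
for dev in nb.dcim.devices.filter(site="yul1"):
    print(dev.name, dev.rack, dev.position)
```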
Because believe it or not, even though the cloud is really nice and you just make API calls to get VMs, someone has to physically install the systems and know where they physically are. For our monitoring, we are big users of Prometheus. We also contribute to the OpenStack exporter, which I think is used across a lot of deployments now, to check for utilization and things like that. For CI/CD we have Zuul, and that's kind of a no-brainer given our involvement with the Zuul project; that's what we use to do all of our internal testing and gating. You see a lot of icons under infrastructure automation, and that's because that's what we're specialized in the most. So we use Helm to deploy all of our OpenStack services, more specifically with the upstream openstack-helm project. Terraform is what drives our Helm deployments, and we use it for a lot of really interesting things, like automatically generating secrets and maintaining the state of the clouds. Ansible is a useful tool that we use to bootstrap things before we have an accessible API; it's powered by Kubernetes and whatnot. And then obviously OpenStack is at the core of all of this. For storage, Ceph is what we do, and I believe it's what a lot of the large deployments do; it's really impressive. I tweeted about this a couple of days ago: we moved tens of terabytes of data, and tens is probably an understatement, I'd say thousands of terabytes of data, without anyone even noticing anything. It's kind of crazy to think that we've moved so much data across so many systems and no one really notices a thing. Now for container runtimes, we use containerd, because we run our OpenStack inside of Kubernetes, and we're looking into Kata Containers in combination with OpenStack Zun, so we can maybe look into providing a containers-as-a-service platform for our customers. And then for container orchestration it's obviously Kubernetes; that's what we use internally to deploy OpenStack, and that's also the service we provide for our customers for container orchestration, so they can deploy Kubernetes clusters directly in our public cloud. I'd say that's a summary of our involvement in open source and an update on where we're at, so thanks for the opportunity to share it.

Thank you, Mohammed. I love seeing the full deck of open source at VEXXHOST and all the other open source tools that you are using, and it's very impressive that you're moving so much data across different systems without anyone noticing. We would love to hear more about it, definitely. So now let's bring everyone back in, and here I have a few questions for everyone. Before my questions, I would like to remind all of our audience that this is a live show, so feel free to drop any questions you have for our speakers in the comments section throughout the show, and we'll try to answer as many of them as possible. So, as everyone has gone through the current size of their OpenStack environments, I'm wondering: what has changed in your OpenStack environment since you won the Super User Awards? Maybe we can start with the most recent Super User winner, Mohammed from VEXXHOST.

Sure. Yeah, so we had a lot of changes as we were going through our deployment. I think what's happened is that, over time, the way that you deploy an application has changed.
People used to deploy applications in different ways, and now we've got things like Kubernetes and containers, which are a really, really good way of deploying applications. It's a great pattern even for something like running a cloud, or for running any application. So a lot of what we've done is move away from manually deployed packages driven by Ansible playbooks or things like that, to an API-driven deployment where we are using Kubernetes and orchestrating containers through it. That has been a huge change in how we do infrastructure, and it's actually allowed us to do upgrades at a much faster pace and make sure our clouds are set up more reliably. It also helps set a very, very good baseline across all the clouds, with the same packages and the same software everywhere, because if anyone's ever done live migrations, you know the pain of matching libvirt, QEMU, the kernel, and everything across hosts. By using these sorts of things, we're able to easily match all of this, and so our environments have become a lot more intelligent, using things like auto-scaling, auto-healing, and all the other cool stuff that we get natively out of Kubernetes.

Awesome. That's great. Thanks, Mohammed. And maybe we can go to Jared Baker from OICR.

Sure. So we won the Super User Award in 2018, and I guess what's changed since then: we've gone through six major OpenStack upgrades, so everything from Pike to Ussuri and everything in between. We did a modest expansion of compute and storage nodes back in 2019; we added just under 1,000 cores and about two petabytes of raw data to the Ceph cluster. We've also done three major upgrades of the Ceph cluster: we went from Luminous to Mimic and on to Nautilus, and we actually tried to go to Octopus about two weeks ago but ran into some issues, so we're not there yet, but that will come. And over the past three years there have been lots of improvements and refactoring of monitoring and logging.

Thanks, Jared. Yeah, we have done a couple of episodes about upgrading, so for folks who want to learn more about the challenges people are facing and the solutions they have found for upgrading OpenStack, definitely check out those past episodes. And what do you think, Xiao Guangzhang and Zhiqiang from China Mobile?

After we won the Super User Award, I think in China Mobile we had some great changes around the convergence of communications and IT, that is, ICT convergence. After that, we built out our public cloud, private cloud, and network cloud. Of course, I think we carried out in-depth practice of OpenStack as the cloud infrastructure. Because I now focus on the network cloud, I will give you some detailed information about it. We believe that through these years of practice, we directly promoted the maturity of the industry and the maturity of our OpenStack-based solutions, especially in the multi-vendor environment of our NFV network. As you know, in our network cloud we involve lots of vendors, from the bottom hardware layer, including servers, switches, and such kinds of hardware. In the middle layer, we have different VIM providers from different vendors, for example Huawei, ZTE, and Ericsson. Above that are the virtual network functions: IMS, EPC, and similar 4G and 5G applications.
So the major problem for us is how to deal with the many vendors in the environment and how to integrate them. Our team is focused on how to fix those problems and how to build automation projects to deal with them. For example, we use technologies from open source projects such as the Ironic project and Airship, and also Ansible, such kinds of open source technologies, to help us build our own software to deal with the problems we faced in this large-scale cloud infrastructure based on OpenStack. Okay, so our network cloud went into commercial operation in the second half of 2020, and this year we are continuously enhancing it. We just want to tell the community and the vendors: let's work together on OpenStack. We will really use it.

Yes, I love that. It's great to hear that you have been using Airship, Ansible, and other open source projects to integrate and deal with multi-vendor environments. That's really great. And Belmiro from CERN, what do you think?

Well, your simple question gets a big, big answer from us, because we won the Super User Award seven years ago. Looking back, I don't recognize the infrastructure of seven years ago. I went back to the presentations we gave at the Paris Summit just to get a glimpse of what we were running at that time. At that time we had eight compute cells, we had just migrated to the Icehouse release, and we had around 65,000 cores in the infrastructure, which was a lot at that time, but nothing compared with today. So looking back, I can see three different phases in the evolution of our infrastructure. At that time we were focused on basically converting all the physical nodes in our data center into OpenStack compute nodes, and that drove a very rapid growth in the number of resources we put into our private cloud infrastructure. It was also a lot of work for our users, because at the same time as we migrated the physical machines to compute nodes, they were migrating their workloads from physical nodes to virtual machines. An interesting point is that at that time we were running both KVM and Hyper-V at the same time. It was a huge challenge; at some point we removed Hyper-V from the infrastructure. But while we were doing this, we started looking at all the OpenStack projects that were popping up, and there are a lot of projects, to see which ones we could offer to our users. Today we have ended up with 15; of course, these were added over the years. For example, we started to investigate containers back in 2014. Who remembers nova-docker, for example? We were trying those things at that time. Finally, I believe in 2016, we deployed Magnum, and that was another shift in the infrastructure, because when we offered Magnum, our users started to move workloads that in the past were running in VMs to a containerized environment, deploying Kubernetes clusters. Today we have more than 600 Kubernetes clusters in the infrastructure running multiple workloads. And more recently, with the introduction of Ironic, we see another shift in the infrastructure. Ironic allowed our users to deploy bare metal resources using exactly the same APIs they learned over these seven years for deploying VMs.
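To make Belmiro's point concrete, that Ironic exposes bare metal through the same server APIs as VMs, here is a hedged openstacksdk sketch; the image, network, and flavor names are hypothetical, and it assumes the cloud offers an Ironic-backed bare metal flavor alongside a regular VM flavor.

```python
# Illustrative only: the create-server call is identical for a VM and for
# an Ironic-managed bare metal node; only the flavor differs.
import openstack

conn = openstack.connect(cloud="envvars")
image = conn.compute.find_image("my-image")        # hypothetical name
network = conn.network.find_network("my-network")  # hypothetical name

# "m1.large" stands in for a VM flavor, "bm.general" for a flavor mapped
# to a bare metal resource class (both names are assumptions).
for flavor_name in ("m1.large", "bm.general"):
    flavor = conn.compute.find_flavor(flavor_name)
    server = conn.compute.create_server(
        name=f"demo-{flavor_name}",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    conn.compute.wait_for_server(server)
    print("built", server.name)
```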
And that is another big shift, because now applications with huge workloads that need to squeeze all the performance from the physical nodes can run again on bare metal, and we can still track those resources. So what we see now is that those applications are moving back to bare metal, and, as you saw in the initial slides that I showed, the number of compute nodes is going down a little bit, because some of these applications are moving to bare metal and we are now converting compute nodes the other way around, into bare metal nodes managed by Ironic. Well, this is a very, very small summary of what happened during the last seven years.

Thank you for the summary, Belmiro. Yeah, I can definitely tell that several years ago CERN's infrastructure environment must have been very different from now. And since last year we celebrated 10 years of OpenStack, we will probably ask you the same question after another 10 years and see what has changed by then. So I see an audience member has just dropped a question on YouTube, and we'll go through that before proceeding with the rest of my questions. We have Allison Price, and she is wondering: when integrating all of these open source components together, how do you do it? Do you have a big internal team, or are there vendors you rely on for support? That's absolutely a great question. Maybe I will go back to Belmiro from CERN first, and then the rest.

So, we look at all those open source projects and they give us a lot of value; we see that they can be used in the infrastructure and give a lot of value to our users. At CERN we collaborate with all these open source communities, some more than others. For example, we are really involved with a lot of OpenStack projects, and there are other projects that we are heavily involved in as well. When we identify a need, we try to find out which project could solve it and integrate it into the infrastructure. Currently it's our internal teams, not only the OpenStack team but also the monitoring team and the configuration management team, that look at these different needs and use cases and try to integrate these projects in a way that fulfills our users' use cases.

Awesome. Jared, Xiao Guangzhang, and Mohammed, do you have any responses to that?

Hello, I have some response to this question. We not only do integration of open source components, but we also do integration of commercial products from different vendors. So I will share something about our procedure, because different vendors and different components have different interfaces and different conventions in their software. When we do integration, first we standardize the data and standardize the interfaces. Of course, open source components basically have standard interfaces, so with open source we get some benefit there. But for commercial products, different vendors may have different interfaces, so we have to do some standardization of the interfaces and the data. After that, inside China Mobile we have, I don't think it is a big internal team, about 30 people on our team now, and we do automated integration. That means we have our CI/CD pipelines, and in these pipelines we can invoke automation capabilities from our vendors. So I think we have this pipeline.
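Purely as an illustration of the standardization idea Xiao Guangzhang describes, and not China Mobile's actual code, a pipeline can hide each vendor's automation behind one uniform interface; every class, method, and release name in this Python sketch is invented.

```python
# Hypothetical sketch: the CI/CD pipeline only speaks one standardized
# interface; each vendor supplies an adapter that maps it onto their own
# automation APIs.
from abc import ABC, abstractmethod


class VendorAdapter(ABC):
    """Uniform contract every vendor integration must implement."""

    @abstractmethod
    def deploy(self, release: str) -> None:
        """Install the vendor's VIM/VNF software for the given release."""

    @abstractmethod
    def healthcheck(self) -> bool:
        """Return True if the deployed components pass their tests."""


class VendorA(VendorAdapter):
    def deploy(self, release: str) -> None:
        print(f"calling vendor A's automation for release {release}")

    def healthcheck(self) -> bool:
        return True


def pipeline_stage(adapters: list[VendorAdapter], release: str) -> None:
    # The pipeline never sees vendor-specific details.
    for adapter in adapters:
        adapter.deploy(release)
        assert adapter.healthcheck(), "vendor component failed its tests"


pipeline_stage([VendorA()], "queens")
```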
So all the vendors connect with our pipeline, and after that we can do the integration smoothly. Okay, that's all of our experience.

Yeah, I like this question, because it was one of the more exciting things I was looking forward to dealing with when I joined OICR in 2016: working in an environment where you don't pay a single license fee for anything. Everything is open source, and your support is essentially your colleagues, if you have any that specialize in the technology you're working with, or the community. Since we're a pretty small shop, like I said, there are only three of us, and really only two of us work with OpenStack on a day-to-day basis, we've had to be very careful in terms of what features we want to integrate into our cloud. I know OpenStack has dozens of projects and some really cool feature sets, but if they're not going to be heavily used, it would obviously be a disservice to the ops team to integrate them and then have to keep them updated and maintained. So yeah, we have to be very careful with what we integrate into the cloud, and we have to be heavily involved with the community to get technical support and bugs fixed. I'd also suggest having some strong development skills within your team, so they can help you look into certain bugs and analyze code from Ceph and OpenStack and understand why you may be having a particular issue. But yeah, it's been fun so far.

And on our side, the way we see open source is as a project that we own directly. I think when you view an open source project as if you had developed it internally and own it as an internal product, that really helps you visualize it in a different way. You change from the idea of being a consumer of the project to being a contributor and a developer of the project. And when you make that shift, it helps a lot, because, approaching this from a business point of view, if you had to write all of OpenStack, it would be a very pricey thing to do. But most of it's already done, so you might just have to cover the couple of things that you need. And when you contribute those changes upstream as well, it means you don't have to maintain a local fork, and you don't have all the headaches that come along with maintaining it locally. So for us, we are maintaining and owning everything; we are on the receiving end, in a weird way, because we are a vendor for the companies that we support. And it always makes sense, right? Different companies have different priorities. Our focus is delivering OpenStack to users, so it makes sense that we build a lot of stuff for that. Some of our customers that are doing AI or things like that are not in the business of OpenStack delivery; they're in the business of consuming infrastructure, so it makes sense for them to rely on a vendor. So I think it really depends on your scale, and on whether a company is vendor-heavy or in-house-focused.

Yeah, that makes sense. And thanks, everyone, for jumping in. Sweet.
Well, since we're talking about open source components, I'm wondering: what open source technologies does your team integrate with OpenStack? Maybe we can jump back to Mohammed.

Sure, sure. So I guess when it comes to that, a lot of what we do is try to make sure that the ecosystem is as accessible as possible. Given that we're a public cloud, we don't consume our cloud so much as we make sure it's easily consumable, and the way we go about doing that is by trying to support things in the ecosystem. So we provide resources for something like Gophercloud, which is the Go client for OpenStack, so they can do their testing, which indirectly helps something like the Terraform provider for OpenStack. That way, users can leverage the ecosystem, and it becomes a lot easier to integrate with OpenStack and the ecosystem around it. On our side, obviously, we've worked on the OpenStack exporter, which is something we've built out for users. Well, built out and contributed to; we don't own the project, so to speak, we've just contributed to it. As well as things like using Prometheus to integrate with all of our internal monitoring tools, and Elasticsearch for centralized search. And like I mentioned, Kubernetes is really at the heart of all of our deployment, so we rely on it a lot now, and we're trying to move towards some of the operator-based tooling, something like the Percona operator or the RabbitMQ operator, to deploy some of the core infrastructure services. The idea is, I wouldn't use the word eliminate, but to integrate with as many things from other projects as possible, so that we avoid writing as much as we can ourselves. I think that's the way to go in open source.

Awesome. I believe Jared mentioned a few open source projects they are using earlier as well. So Jared, what other projects is your team using with OpenStack?

Yeah, I can probably say that I don't think we use anything that's not open source, so there's obviously a lot that could be listed here. But the notable ones that integrate with OpenStack would be Ceph; that's been very important for the success of the Cancer Genome Collaboratory. More recently it's been Kubernetes, which has allowed us to run some really cool workflow execution systems on top of OpenStack. And for monitoring, we're using Grafana and Zabbix, Elasticsearch, Logstash, Filebeat, and many other open source tools.

Sweet. And for China Mobile and CERN, do you have anything you would like to add?

I agree with what Jared said. Actually, at the platform layer, nothing is not open source; it's all open source technology. But as you may know, the telecom components, the NFV components, involve some proprietary source, not open source. The platform we use, though, is totally open source. I'm sorry.

Sorry, please go on.

Well, I was just about to say that you saw the slide I presented at the beginning with all the open source projects that we run at CERN. It's a huge stack, and I probably missed a lot of them. So we are really involved in running these open source projects to support the cloud infrastructure.
And there are those very important ones that were mentioned as well, like Ceph, which basically manages the storage for the cloud infrastructure, and also Kubernetes, which has had this huge impact on the way people deploy their workloads.

I can share some experience from our products. For example, in our network cloud, we involve some vendor products, but I think that even when a product is commercialized, it is often still based on open source technologies. For example, we have our virtualization layer product based on OpenStack, and I think some vendors provide distributed storage based on Ceph. That is one side. The other side is that, as I mentioned, we build our CI/CD pipelines, and these pipelines are based on open source tools and Docker technologies. We also do some monitoring based on Zabbix, and we do some automated testing with open source technologies as well. I think open source technologies are now the basis of our cloud infrastructure. Okay, thank you.

Thank you. Thank you all. And a couple of you also mentioned Ceph, and we have a Ceph episode coming up too, so for anyone who is interested, stay tuned over the next few weeks. So I will move on to my next question. I'm wondering: what workloads are you running on open infrastructure for your organization? I'm also very impressed by Jared's team, with only two people for ops, so maybe we can go to Jared first.

Yeah, sure. I would say at least the longest-running and most impactful workload at the Collaboratory would be the fact that, in order for cancer researchers to do their analysis, they need the data. So part of that infrastructure is providing a fast and secure way to facilitate the retrieval of whole genomes, so that bioinformaticians can come in and run their analysis. That's been a big part of it. More recently, leveraging Kubernetes on OpenStack, it has been the workflow execution system that we previously mentioned, which tries to be the easiest possible way for a bioinformatician to do their job with as few manual steps as possible. There's a lot of data curation and things like that associated with bioinformatics work, and the more of those manual steps we can eliminate, the more data we can analyze, and that benefits everyone. So I would say those are the two main workloads we've been running on our cloud.

Awesome. Anyone else?

Yeah, I always find this question interesting, because when you're running a public cloud, it's a whole mixed bag, and nobody really knows what customers are actually doing on it. We've got customers doing anything from CI/CD, to running software-as-a-service companies, to POCs for internal projects; there are all sorts of things. I think it's always an interesting problem when you don't have a defined user, because I'm sure some of us have sat down with hardware vendors and they ask: what does your user look like? What are you doing? Are you going to use this server for a database, or for something else? And the answer is, it's going to do a whole bunch of stuff. So we've got to find the most common ground that supports as much as possible.

Well, I can relate a little bit to that, in different percentages.
So at CERN, more than 80% of our resources are dedicated to processing physics data, mainly from the LHC experiments. But in the remaining resources we have a zoo of workloads that support the organization, ranging from administrative services, mail services, databases, and software build systems, to even virtual desktops that our users deploy and use as their personal machines. So you see this zoo of different workloads that we can have, and it's a big challenge to manage all of them and their different requirements.

Sweet. Xiao Guangzhang, anything to add?

Okay, yes. Because as you know, we have three kinds of cloud: our public cloud, our private cloud, and our network cloud. In the private cloud, we have workloads that support our internal business. In the public cloud, we provide services for our external users, for example some computing and storage, such kinds of IaaS capabilities, and also some SaaS capabilities. But now I will give you some detailed introduction to our network cloud. Because China Mobile is an operator, it provides communication services. For example, our network cloud now includes 18 core network elements and service platforms, such as the virtualized IMS, EPC, intelligent network, the short message service, and the multimedia message service. So we support all kinds of our communication services and software, and in the next stage we will fully support our 5G services. Okay.

Thank you, Xiao Guangzhang. Awesome. So I have one last question left, and that is: what kind of challenges has your team overcome using OpenStack? I know that for China Mobile, as a user, you still have quite a lot of challenges, so maybe I'll go back to you, Xiao Guangzhang.

Yeah, I will answer this question. Thank you. Okay. As you know, we are a telecom operator, and now we are using OpenStack as the network cloud infrastructure. There is a five-nines availability requirement in the telecom industry, so we need to improve the OpenStack platform to meet that high availability. That's our big challenge. And now, as you know, we are trying to introduce a new concept in this industry; as you may know, it's DevOps. With this deeper integration of DevOps, we focus on integrating the OpenStack platform to support the network cloud of the future. It's a big challenge, and we hope more people from the community can work together with us on it. Yeah. Thank you.

Thank you. What about VEXXHOST?

Yeah. So I think the biggest challenge we've overcome with OpenStack is really the ability to have rapid growth within an ecosystem that already exists. You know, a very long time ago, back when OpenStack was probably in its first year, we started looking into it. There was a talk at, I think, a PyCon by Vish, introducing the initial version of what OpenStack was. We had our own little cloud orchestration thing that we had built out, and I started looking at the code and thought, this is exactly what we're doing. And I remember having a conversation very early on with the OpenStack Foundation team at the time, and when they asked why we were getting involved in OpenStack, it was because this is exactly what we're doing; somebody's already done it, and we can just help out and improve the other bits of it.
And here we are, many years later. I think that was a huge advantage for us, because we get to be part of an existing, huge ecosystem rather than having to build out our own ecosystem from scratch. You know, if all of us here had to sit and implement our own APIs, that in turn means we'd have to implement our own Ansible modules that can talk to those APIs, our own Terraform modules that can talk to those APIs. Companies like China Mobile would have to develop their own NFV orchestrators that talk to some arbitrary API they built, and then they couldn't share that with another company, because they wouldn't be using the same API. So being part of that ecosystem and having that standardized API has given us a huge boost, because whenever someone comes to our public cloud, or wants to use our private cloud, they've got a whole realm of services and options that they can just grab and start consuming right away, rather than having to integrate with some proprietary thing that nothing else supports.

Yes, exactly. Maybe we can quickly wrap up with Jared and Belmiro.

Sure. I would say there are two key things that OpenStack has allowed us to overcome. One, being a government-funded organization, we've been able to be fiscally responsible with the money awarded to us in grants. Two, it's provided a good generic platform to develop on and to use cross-platform tools, and that's also meant we don't have to train someone specifically for OpenStack, because the APIs are familiar or cross-compatible. If we're hiring a developer who has AWS and Terraform experience, they can usually fit right in just fine without any extra overhead or training.

Well, in the case of CERN, there are so many challenges that OpenStack allowed us to overcome. Starting in 2013, when we first launched our private cloud, it helped us change people's mindset. At that time, when people wanted compute resources, it took weeks for them to get their hands on those resources. With OpenStack, they could provision resources, virtual machines, in minutes or even seconds, and that had a tremendous impact on the organization, in my opinion. Another point is the scale. If we look back over all these years, at that time there was not a very good project that could handle the scale we envisioned for our infrastructure. OpenStack allowed us to do that, and even when we had problems reaching the numbers we envisioned, we collaborated with the community; OpenStack was from the beginning an open space for collaboration. So we collaborated with the community on the development of Cells v2, for example, and on many other problems that we had, or that we thought others might also have, and together we were able to solve them and come up with solutions: for example, preemptible instances, the collaboration on identity federation, or even container orchestration with Magnum. Other aspects, like the OpenStack SIGs, are also very important for us, because they are a platform, a framework, for discussion, debate, and sharing ideas; the Scientific SIG and the Large Scale SIG, for example, are very good frameworks for debate. And finally, running OpenStack basically allowed us to increase the compute resources we offer to our users without increasing the number of engineers we needed to manage those resources.
And that was critical, especially in 2013 when the project to run the cloud infrastructure at CERN was launched.

Awesome. Thank you, Belmiro. Well, I think we're coming up on time here. Thanks to all of our awesome speakers today; I appreciate you all joining us. And thank you to everyone in the audience who has been participating and asking questions. As we celebrate the success that past super users have had with their open infrastructure, I would like to let everyone know that nominations for the 2021 Super User Awards are also open. If your organization, or another organization you know, has used open infrastructure to meaningfully improve its business, please fill out the nomination form linked below by October 15th. And speaking of this year's Super User Awards, the winner will be announced at an event we just announced last week, the OpenInfra Live: Keynotes, taking place November 17th and 18th. This will be the best opportunity for the global community to get together this year to hear about all things OpenInfra. One of the best things is that registration is free, and it is now live as well, so sign up today and join us for the keynotes. Next week we have a great episode lined up that we are super excited about: OpenInfra Days organizers will be joining us to share some highlights from their recent events. That will definitely be one not to miss. Also remember that if you have any ideas for a future episode, we want to hear from you; definitely submit your ideas to us at openinfra.dev. And finally, I want to give a quick shout-out to all of our member companies, including Microsoft, which recently joined as our newest Platinum Member, for making OpenInfra Live possible. If you're interested in joining as an OpenInfra Foundation member, learn more at openinfra.dev. So mark your calendars, and I hope you'll all be able to join us next Thursday at 1400 UTC. Thanks again to Belmiro, Jared, Xiao Guangzhang, Zhiqiang, and Mohammed. See you all on OpenInfra Live next Thursday. See you. Bye.