We are now just over 30 years old. We've been in the open source business since the start, and it's our job really to make open source, enterprise-grade solutions. Traditionally, it started with our Linux distribution, SLES, but as time's gone on, we've invested more in software-defined infrastructure, and with our recent acquisition of Rancher Labs, we're now in the container space and Kubernetes management as well. I do want to highlight that everything we do at SUSE is open source. We don't have any proprietary code at all. Even our internal systems and our internal documentation are all open source as well. The only thing that our customers pay for is support, or consulting and professional services if they need that. The reason I'm bringing that up now is because I'm going to be talking about some of our open source projects, and I want you to consider them from the open source project point of view; even though some of those projects do have a support offering available as well, I'm approaching this from the open source aspect. So, just very briefly: we're all hearing about how we want to shift workloads to the cloud and how the cloud is connecting all of us today. One of the common things we'll often hear about when talking about the cloud, and especially when talking about cloud-native technologies, is containers. Now, as I'm sure many of you will know already, containers are almost our next evolution, our step up from virtual machines. 
It's where the world is moving, and now we're starting to see the entire financial industry start researching containers and container orchestration technologies, to see how they can fit containers into their organisation and get the benefits that come along with them. However, when it comes to actually running containers at scale and in production, they come with a lot of baggage. There are a lot of extra dependencies that are needed. We need ways to scale containers. We need ways to orchestrate them, to ensure that the right container is running on the right node at the right time. Because simply running containers doesn't really get us much business value. We need to be able to orchestrate them, and that's where Kubernetes comes in. Now, I'm sure many of you have heard of Kubernetes before. Put your hand up if you have. Hand up if you are using Kubernetes. Oh, cool. Is that in production? That's quite common, actually. A lot of the people I talk to are either experimenting with it or seeing what they can get from it, but we're starting to see more and more people move to production every day. I'm not going to teach you to suck eggs. We know what Kubernetes does: container management, container orchestration. One of the more exciting things is that we're starting to see Kubernetes orchestrate more and more things, such as virtual machines and storage. We're even working on projects to help Kubernetes orchestrate data centre appliances such as network switches and routers. Really exciting stuff. When it comes to why we feel the finance industry should be looking into Kubernetes and containers, the key reason really is enabling innovation and modernisation. Obviously, there are loads of buzzwords that come along with that, things like hybrid cloud, multi-cloud, all of that fun stuff, but the main thing is enabling innovation, I would say. 
But additionally, when you start embracing Kubernetes, you can start being much more API-native, and you can see how that ties in with initiatives such as open banking. One of the other main reasons is that the competition already is. If I'm not mistaken, Monzo runs most of their infrastructure on Kubernetes and AWS. So in order to compete with the fintech startups, we need to be able to embrace the same technology. But there are opportunities for cost reduction as well. When we start embracing new technologies, the compliance cost can often increase, as I'm sure many of you can appreciate. But if we're able to embrace that technology in a more optimal way, we can reduce the infrastructure cost and reduce the hours that go into maintaining that infrastructure, to help offset that compliance cost. Very, very briefly, when it comes to Kubernetes architecture, we have a series of nodes. Some of them are running a database, or rather a data store, known as etcd. Some of them are master nodes running the control plane. And then we've got worker nodes, which is where the actual workloads run. And this is going to become important later on. As I mentioned earlier, we've seen Kubernetes adoption really explode, especially in recent years. However, one thing that hasn't yet really taken off is the concept of Kubernetes management. And that's really what I want to talk to you about today. When organisations first started deploying Kubernetes, early adopters built infrastructure in a similar way to how they'd built it previously. They would build large, monolithic Kubernetes clusters. They soon learned from those experiences, though, and started to realise that when you adopt Kubernetes at scale, single large clusters don't really work. It's almost taking a monolithic architecture approach and then applying it to cloud-native infrastructure. When you build monolithic Kubernetes clusters, you can't adapt them as easily to your needs. 
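To make that node split concrete, here's a minimal Deployment manifest; the names and image are just placeholders. You submit this to the control plane on the master nodes, the desired state is recorded in etcd, and the scheduler decides which worker node each replica actually runs on.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # placeholder name
spec:
  replicas: 3             # the control plane schedules these across worker nodes
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.21  # any container image
```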
Different technologies and different projects are going to require completely different underlying infrastructure, and if you've got one large Kubernetes cluster, or a few large ones, you're just not able to adapt to those different needs. As time has gone on, we're seeing companies and customers start to embrace more distributed Kubernetes clusters. I'm talking about more, smaller clusters; more agile clusters; clusters that are serving a specific use case. For example, if there's an AI/ML project, a dedicated cluster just for that. And this is really where Kubernetes management comes into play. Now, as I said, it's a multi-cluster world. As we start building more and more clusters, we need a way to manage all of them. We shouldn't have to incur extra operational overhead purely because we want to be more agile. So that's why we say it's a multi-cluster world. When it comes to different Kubernetes distributions, there are hundreds out there in the market now, from open source ones to ones backed by vendors; so many different distributions. But the value of Kubernetes isn't in the distribution itself. Because there are so many now, for all sorts of different use cases, we can't really say this Kubernetes distribution is better than that one anymore. They all offer pretty much similar value. And that's why we say the value is in Kubernetes management: how we orchestrate those different distributions, how we're able to manage that infrastructure, whether we're able to do things like bring centralised authentication to hundreds of clusters, wherever they're deployed, for example. But I also feel that a good Kubernetes management project really needs to be open source, and it needs to have an open approach to open source. The reason I say that is there are Kubernetes distributions and management tools out there whose sole purpose is to lock you into their ecosystem. A good example here would be the cloud providers. 
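To give a feel for why that centralisation matters, here's a rough Python sketch, purely illustrative and not Rancher code, of the bookkeeping you'd otherwise do by hand: merging per-cluster endpoints and credentials into one kubeconfig-style structure so a single client can address every cluster by context name. The input shape is a hypothetical one I've made up for the example.

```python
# Illustrative only: the kind of bookkeeping a management layer does for you.
# Merges per-cluster endpoints/credentials into one kubeconfig-style dict so a
# single tool can target any cluster by context name.

def merge_kubeconfigs(clusters):
    """clusters: list of dicts with 'name', 'server', 'token' keys (hypothetical shape)."""
    config = {"apiVersion": "v1", "kind": "Config",
              "clusters": [], "users": [], "contexts": []}
    for c in clusters:
        config["clusters"].append({"name": c["name"],
                                   "cluster": {"server": c["server"]}})
        config["users"].append({"name": c["name"],
                                "user": {"token": c["token"]}})
        config["contexts"].append({"name": c["name"],
                                   "context": {"cluster": c["name"],
                                               "user": c["name"]}})
    return config

merged = merge_kubeconfigs([
    {"name": "edge-01", "server": "https://10.0.0.1:6443", "token": "aaa"},
    {"name": "aws-prod", "server": "https://eks.example:443", "token": "bbb"},
])
```

With two clusters this is trivial; with hundreds of clusters and rotating credentials, it's exactly the kind of thing you want a management plane, not a script, to own.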
Now, obviously, if you go all in on a specific cloud provider and deploy their Kubernetes distribution on their infrastructure as a service, you'll soon find that migrating workloads to other Kubernetes distributions, perhaps from other vendors, or even to vanilla open source Kubernetes, can be quite challenging, because there are lots of subtle differences between all of these distributions. So a good Kubernetes management tool should be able to abstract all of those differences away, so that you don't need to learn all of the different clouds and all of the different APIs for those clouds. Now, I want to quickly take those four concepts and apply them to some of the different Kubernetes distributions out there. I pulled this list up because I just googled "best Kubernetes platform" and took the top few results, to offer my thoughts, or rather what I've seen in the market, on these. To start with OpenShift: I think it's fair to say it's one of the leading Kubernetes distributions, especially in the finance sector. However, OpenShift doesn't really have Kubernetes management. We've seen recently that Red Hat has launched ACM for OpenShift to start tackling that exact problem. What we often see at SUSE is customers deploying OpenShift clusters in that large, monolithic style that I talked about earlier today, and it can make it harder for them to migrate those workloads out. The same can also be said, in a way, of VMware Tanzu. Now, they do offer good multi-cluster management within the VMware ecosystem. But say you don't want to run VMware Kubernetes clusters everywhere. Perhaps there's a distribution optimised for an edge use case and you'd rather use that. If you want to be able to manage that through Tanzu, that can be much more challenging. 
EKS Anywhere is quite an interesting one, because when it was announced, it almost went against what I just said about the cloud providers trying to lock you into the cloud, because you can now deploy EKS on-premise. But you're still deploying Amazon's EKS. With so many distributions out there, you should have the right to manage them all in the same way and not be tied into those platforms. Now, Canonical Kubernetes: a great Kubernetes distribution, but it's not management. And that's the difference I'm trying to convey: you have distributions, and then a management layer around those different distributions. And this is where I'm going to introduce Rancher. Some of you may have heard of it before, the open source project Rancher that we recently acquired at SUSE. We've tried to take those four principles that I talked about earlier and embody them. Although we have two or three, I think, different Kubernetes distributions of our own now, it doesn't matter what Kubernetes you deploy; you're able to manage it through Rancher. And when it comes to the cloud, if it's on Azure, Google, or AWS, for example, you can even use Rancher to provision those clusters and then manage their lifecycle. And I say that's important because it then becomes the Kubernetes management tool that's abstracting the underlying cloud. Your engineers don't need to learn how to work on all the different clouds. They just need to learn how to work on the management platform. The key thing as well, as I mentioned earlier, is an open approach to open source. As long as a Kubernetes distribution is CNCF-certified, you're able to manage it in exactly the same way. And that's really key, because how you configure, for example, Active Directory authentication differs between distributions and can quickly become a mess, especially if you're doing it manually or building that automation in-house. Something like SUSE Rancher abstracts that away. 
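As a sketch of what "the same clicks on every cloud" means underneath: a management tool accepts one cluster description and translates it per provider. The field names below are assumptions loosely modelled on the shape of Rancher's v3 REST API, not a verified schema, so treat this as illustration only.

```python
# Hypothetical sketch of a provider-agnostic cluster request. A management
# layer takes one description like this and handles the per-cloud translation,
# so engineers never touch the individual cloud APIs directly.
# Field names are assumptions, not a verified Rancher schema.

def cluster_create_payload(name, provider, node_count, k8s_version):
    return {
        "type": "cluster",
        "name": name,
        "provider": provider,              # e.g. "aks", "gke", "eks"
        "kubernetesVersion": k8s_version,
        "nodePools": [
            {"quantity": node_count, "controlPlane": False, "worker": True},
        ],
    }

# The same call shape regardless of which cloud ends up hosting the cluster.
payload = cluster_create_payload("payments-dev", "aks", 3, "v1.21.5")
```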
You apply those central policies to all clusters, wherever they're deployed: thousands of different clusters, all running different distributions. You can even have some OpenShift in there as well, and you can manage it all in exactly the same way. Now, I think that's quite powerful. I myself come from an ops background, so I'm used to managing lots of different servers, and when Kubernetes came in, the management overhead just exploded. By having a single control plane to manage different downstream clusters, you're able to do a lot of exciting things. To try and visualise what I've just explained when it comes to the SUSE Rancher open source project: initially, you have the management server. Now, this itself is deployed on a Kubernetes cluster of your choice. It doesn't have to be a SUSE distribution. It could be in the cloud, even on the cloud provider's native Kubernetes distribution. Once that's deployed, you're then able to either connect that management server to existing clusters that you've deployed, or use it to deploy new clusters. Those can be on-prem or in any cloud; even a cluster that you initially deployed four years ago using kops, you're able to import and manage in the same way. Once all of that has been deployed, that's when the value of management comes in. We're able to centrally manage all of those clusters and get those central insights: centralised monitoring, centralised logging, again, regardless of what distributions are running under the hood. Then we can apply consistent operations to those, things like cluster upgrades, for example. If a cluster is running on a supported cloud, we're able to upgrade it automatically. We're also able to interconnect our different clusters by adding an additional network layer on top that allows cross-cluster and cross-workload management super easily. I think it's also key to mention existing systems. 
We often hear people talking about how everything is wonderful in the cloud. I'm sure it is if you're doing greenfield projects, but I'm sure no one in here really is these days. You have a lot of legacy infrastructure to integrate, and having a management tool that can integrate with that legacy infrastructure across all of your Kubernetes clusters and distributions can be really powerful. When we start looking at operations and management, one of the key things we hear from our customers is that Kubernetes can be hard. That initial learning curve can be quite challenging, especially for people from an ops background. So an easy UI to manage clusters and the lifecycle of clusters is really key. But it's not just UI-focused; it's API-focused as well, so you can integrate it with your existing DevOps or GitOps automation tooling. As I said earlier, managing the complete lifecycle, from cluster upgrades, potentially cluster downgrades depending on the Kubernetes version, to things like disaster recovery and backup and restore, all becomes centralised. Especially important for the finance sector: unified access control, security, and policy management. As you saw on the slide before, your developers don't connect directly to the downstream clusters. They connect through the Rancher management server. Of course, you can enable direct connections to the downstream clusters if you want, but the main benefit of doing it this way is that we can enforce central security and policy. We can do authentication with LDAP, AD, Okta, et cetera. We can also enforce centralised policies through technologies like OPA and Gatekeeper; all of that's built in. We can run CIS scans on all of our downstream clusters, regardless of what vendor they're from, and bring those reports up as well. But it doesn't just stop at managing clusters. As time's gone on, we've branched more into workload management as well. Having a good UI to visualise and manage workloads is quite important. 
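To give a feel for what a centralised Gatekeeper policy looks like in practice, here's a constraint using the `K8sRequiredLabels` constraint template from the Gatekeeper policy library, requiring an owner label on every namespace. The label name is just an example; it assumes the template is already installed on the cluster.

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels          # template from the Gatekeeper library
metadata:
  name: ns-must-have-owner
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels: ["owner"]            # example label; pick your own
```

Pushing a constraint like this from the management layer to every downstream cluster is what turns per-cluster policy work into a single central operation.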
That's why we're seeing most of the other distributions out there bringing in their own dashboards, for example. When it comes to workload management, this is when we start thinking about workflows such as GitOps, and that's why we have GitOps integration as well, regardless of what downstream cluster is down there; it's a project called Fleet. There are hundreds of different integrations, right? All of the common ones that you would expect across the CNCF landscape. Now, the reason I mention this: one example would be Istio, right? It's a common service mesh that people deploy on Kubernetes. Import 100, 200, 500 different clusters into the SUSE Rancher open source management platform, and with a couple of clicks, you can deploy Istio to all of them. That can potentially save hours and hours of time. Trust me, I've been there before. The same is true for monitoring and logging, but also custom catalogs for your own line-of-business applications. Just to wrap up on that bit: what I'm trying to visualise here is that the management platform sits above Kubernetes. It is not Kubernetes itself. Once you've got that in place, you're able to integrate it with your existing DevOps automation and your existing security tools, and then get all of the benefits that I've just talked about. But it doesn't stop there. To my understanding, today there are only a couple of different distributions that run on IBM Z or LinuxONE. OpenShift is one of those, for example. We're going to be bringing this to Rancher management as well. One of the reasons we're doing this is because there are so many workloads that should still be run on the mainframe; there's a lot of value, rather, in running them on the mainframe. Bringing those into the Kubernetes fold and then managing them in exactly the same way as you manage your cloud-native infrastructure makes management of those workloads easier. It makes migrating them easier as well, when that time comes. 
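Roughly, Fleet works by pointing at a Git repository: a GitRepo resource like the sketch below (the repo URL, paths, and labels are placeholders) tells it what to deploy and which downstream clusters to target, which is how one commit can roll something like Istio out to hundreds of clusters.

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: istio-rollout                            # placeholder name
  namespace: fleet-default
spec:
  repo: https://example.com/platform/manifests   # placeholder repository
  paths:
  - istio                                        # directory of manifests to deploy
  targets:
  - name: all-prod
    clusterSelector:
      matchLabels:
        env: prod                                # every cluster labelled env=prod
```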
Now, some of you may recognise this slide. This one was taken from Andrew from BCG earlier this morning. We took this photo, and I brought it here because, if we look at that top point: for each provider, you have to adapt to their provisioning tools. That's exactly what I was talking about earlier today. I largely agree with him on the concept of getting one cloud working well, then looking into multi-cloud. With a good management solution, a good open source management solution, we've done all of that abstraction for you. Your engineers don't need to learn the differences between the clouds. It's exactly the same clicks, exactly the same method to deploy Kubernetes on Azure as it is on Google, as it is on Linode, for example, all through the same management platform. They never need to worry about the underlying APIs. But I also want to talk about that second point: open source reducing concerns about vendor lock-in. I touched on this earlier. I talk to customers daily who have embraced open source; they've gone all in, they've taken advice from one specific vendor, and they've now found themselves locked into that vendor. Now, although I can't promise you anything in that regard, the entire ethos of the Rancher project is to avoid vendor lock-in. As I say, it works with any distribution from any vendor, so this can help insulate you from some of that risk. But where would I be without talking about a couple of customer examples? The first one is Cardano Group. For those who are unaware, they have, in the last couple of years, acquired a large pensions company and are now, I believe, the third largest retail pension provider; that's in the UK, sorry. As part of that, they've had to scale out their systems massively. Traditionally, their legacy infrastructure was designed for their financial analysts and was all made in-house. But that just doesn't scale. 
So they started looking at what options were out there on the market. They realised that they needed to give their developers the freedom to innovate. They wanted to give their developers self-service access. And it was about that time, in 2017, 2018, when containers started to reach the enterprise-readiness scale, enterprise-readiness level, sorry, for them to look into adopting them. The benefits they achieved are on the slide there, but when it comes to time saved, I think that's the key one: 20 to 30 dev days saved per quarter. Their devs were able to focus more on the code rather than on managing the infrastructure. Now, I know some of this kind of goes against the DevOps paradigm that we've all been moving towards over the last few years, but you can still embrace those methodologies; you're just doing it through a central management platform instead. Another case study to talk about here is ABSA, a large South African bank and investment firm. They're one of the success stories we're most proud of, purely because they used OpenShift before. In fact, at one point in time, they were OpenShift's largest customer in the financial services sector. But when their contract came up for renewal in 2017, they started looking at other options. They had gone down that route of building the large, monolithic clusters that I talked about, and it was starting to impede their ability to innovate. With that approach, they were still having developers log tickets for the IT ops team to go and deploy a new cluster, for example. So they wanted a management tool that allowed them to build self-service and empower developers to deploy and manage their own clusters. That's when they stumbled across Rancher. Now, although they are obviously a customer of ours now, initially they started 100% on the open source project. They just went ahead and deployed it; that was their proof of concept. Then they came to us for official support. 
And I think the achievements they've been able to make by embracing this technology are quite significant. They still do have an OpenShift estate; they still run OpenShift, but they're just managing it through the Rancher technology, and over time, they're migrating those workloads from OpenShift onto the more vanilla Kubernetes distributions that we offer. But when it comes to open source, we do a lot more than just Kubernetes and Linux. We've got lots of new projects coming out at the moment. A good one is Rancher Desktop. Now, everyone put their hand up when they said they'd heard of Kubernetes. Who has used Docker Desktop before? Cool. Do you still use it now? Or has your company recently been hit by the overnight licence shift that Docker... We're getting nods, right? This is what Rancher Desktop is for. It's designed to be an open source alternative to Docker Desktop, but a true open source one, rather than Docker's approach to open source. The difference is that rather than deploying Docker, it uses containerd, and then it deploys Kubernetes on top using k3s. But you can still run docker build, right? You can still do all of that same workflow; it's just on top of Kubernetes. And the great thing as well is you can choose any Kubernetes distribution to run, any Kubernetes version, sorry, so when it comes to testing your workloads on different Kubernetes versions, it can be quite valuable. The other project I want to briefly touch on is Epinio. Now, Cloud Foundry used to be quite big in a lot of traditional enterprises, but as time has gone on, it's kind of fallen out of fashion, and people want to move away from Cloud Foundry onto Kubernetes. What Epinio does is give that PaaS experience, in a lightweight way, on top of any Kubernetes cluster. So you can do an epinio push; it works with exactly the same artifacts as it did in the Cloud Foundry days. 
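To give a feel for the day-to-day workflow with Rancher Desktop installed, it looks something like the sketch below. The image tag is a placeholder; the `rancher-desktop` context name and the bundled `nerdctl` CLI are based on Rancher Desktop's defaults, so check your own install.

```
# Build an image with nerdctl, the containerd CLI that Rancher Desktop bundles
# (or use `docker build` if you've enabled the dockerd/moby runtime instead)
nerdctl build -t demo-app:dev .

# The bundled k3s cluster shows up as a normal kubectl context
kubectl --context rancher-desktop get nodes
```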
And so that can be a stepping stone for people to migrate from Cloud Foundry to Kubernetes. Plus, it just gives a really good dev experience; it's a lot easier for those with the old-school cf push mentality. Obviously, that comes with less flexibility, because it's a PaaS and it's opinionated. That's why it's called Epinio. But it means you can get the best of both worlds from the same cluster. Finally, I want to talk about Harvester. Now, Harvester is SUSE's approach to hyper-converged infrastructure; again, an open source project that you can go and deploy today. It combines Kubernetes with KubeVirt and the Longhorn storage project to provide an appliance for virtualisation, with Kubernetes management on top. While it's relatively early days for Harvester, we are directly going after Nutanix with this in the mid to longer term, because we feel there needs to be a true open source HCI solution. The other one that springs to mind is Proxmox, but that hasn't gone down the cloud-native route. By doing this, we can bring your legacy workloads that are running in VMs closer to the containerised infrastructure. We can manage VMs in the same way that we manage containers. We can bring DevOps to those VMs in the same way that we brought it to containers. Just before I close up for the day: I've mentioned a few times now that Kubernetes has a huge learning curve, but that doesn't mean it needs to be hard, and it doesn't mean you need to fork out a ton of money to upskill your teams. Both ourselves and the Linux Foundation have a lot of free resources available. As I said, go to rancher.com/training; there are always new training courses being run. Some examples are there on the right. They're all completely free to attend. 
We also have the Rancher Rodeos, which take you from building your first-ever container, through deploying Kubernetes using the Rancher management platform, to then managing those workloads on top, all within the space of four to six hours. It's like an all-day event, again, completely free. Find out when the next one is running near you; we've also got some recordings of the virtual ones that have been done. As I touched on earlier, all our documentation is available. It's not locked behind any kind of paywall, so that's another great resource to use, because that documentation will often apply to any Kubernetes distribution, as I mentioned earlier on. For those who have never touched Kubernetes, I do recommend the Linux Foundation's introduction course as well. When it comes to what's next and what you want to do next, feel free to reach out to any of us on the team. I'm here today; you can come up and speak to me. I've got a demo available if you do want to see Rancher in action; we can go and deploy some Kubernetes to the different clouds. We've also got an email address, kevin.smith, if perhaps you want to have a further discussion that's not technical, perhaps a bit more business-orientated or sales-orientated. As I said just now, check out the free training that's available. The QR code there is for Kev Smith's email, in case anyone wants that. And sign up and spin up a lab. If you've got some spare infrastructure, you can run it even just on a MacBook Pro in a VM. Give it a try. Follow through the documentation. You might be impressed with what you see. And I'd also encourage everyone to start thinking about how you're going to build the platform for your companies for the future. Kubernetes isn't going away. These technologies are here to stay for quite a while. 
You want to get that initial architecture correct so that you're not doing an ABSA and calling someone else three years down the line because your renewal price has skyrocketed and you can't migrate your workloads easily. So I'd like to end it there and thank everyone. I think there's a minute before we're meant to end, but if you do have any questions, feel free to just come up and chat, or you can ask them now. We've got a minute. That's me done.