My name's Ed. I work with our customers across Europe and beyond to help with container adoption, covering similar ground to what Jeremy was talking about: feeding in from a cultural perspective into things like innovation labs, helping you find the right services teams to help you build platforms, and building out an approach for containerizing existing and legacy applications. What I'm going to talk about today is containerization itself: taking existing apps and getting them into containers. So it's probably a good idea to walk back to what a container actually is, and what we expect to see when we run an application in one. If I run an application traditionally on a Linux operating system, I have an application, it may have a number of dependencies, its libraries, and it talks to the Linux kernel. When I containerize it, I do exactly the same thing. The only difference is that we tell the kernel to put that application process into a set of namespaces, and we move those dependencies, those libraries, into a namespaced file system for that application, so it has all of its dependencies bundled together as part of an image. Essentially, the application runs the same inside the container, but it is still talking to that underlying Linux kernel. So the Linux operating system you have is still very important: we're not virtualizing your application, it's still running on that kernel, and that does have a bearing on what types of applications you can and can't containerize.
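You can see that a container is "just" a namespaced process on a shared kernel from any Linux shell; every process, containerized or not, exposes handles to its namespaces under /proc:

```shell
# Every Linux process runs inside namespaces; a container is simply a process
# whose namespaces differ from the host's. List your own shell's namespaces:
ls /proc/self/ns/
# mnt, net, pid, uts, ipc, user, ... - each entry is a handle to one namespace.

# Inside a container these same files exist, but (for example) the pid
# namespace inode differs from the host's, while uname still reports the
# host kernel version - because there is only one kernel.
readlink /proc/self/ns/pid
uname -r
```

Comparing the output of these commands on the host and inside a running container makes the point: the namespace inodes differ, the kernel version does not.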
So why is putting stuff in a container a good thing? First of all, we can distribute our applications much more easily. Before containers, you'd have lots of different ways of packaging up your applications, or of explaining how to install them in an environment. Now we have a container image, and if your ops team knows how to run containers, we're able to distribute it. And because container images are built from layers, I only need to distribute the layers that have changed: if I've only changed the layer that has the app in it, and the layers with the dependencies have stayed the same, I don't need to send those out again. So distribution is really straightforward. It also means we're consistent. The idea with a container is that we don't rebuild our application for every different environment: we build one container image and distribute it across all the environments we're going to run it in. And it's the same container image running through the software development lifecycle: you start in dev, move into test, and then ultimately into production, so what you've got is consistent across those different areas. It also means you can choose different software technologies and languages, because you're creating a nice abstraction between yourselves and the people who need to run the containers. If your developers want to use Go, or Rust, or Java, or whatever it is, they're free to put that into the container images, and from a runtime perspective we're able to run them irrespective of the technology and language inside the container. It also means your application is encapsulated and protected from other applications and from that underlying operating system.
And because its dependencies are inside the container, if the ops team wants to patch the operating system, or another application running on the same host needs to be patched, all of your dependencies travel with your container. So we don't have the issue of "if I patch this, it's going to have a knock-on effect on my application", because everything the application needs is carried with it in that container. We talked about that consistency: if something's gone wrong in production, how do I test it? Well, at least we know that container images are immutable. When I'm testing in development, I can link to exactly the same image that's running in production, so it should be a lot easier for me to reproduce the way the application runs in production. The other thing we see a lot of, and Jeremy touched on security as well, is that your security teams may be interested in what you're actually running. Can you audit what's running in production and show that it's gone through all of your governance checks, that it's passed the various tests? Well, with a container we can sign the image so we know exactly what it is, and we also have a unique ID for it. If you have the process set up within your organization, you can reference that ID across all of the steps, all of the gateways it's passed through on its way into production. But a single container is a trivial example. In real-world systems you're not just running one thing; you're running a collection of applications that talk to each other. So how do we do that? Well, that's what Kubernetes is for: it'll take your applications, orchestrate them, and enable them to talk to each other in a non-trivial architecture. OK, so that's great.
So how do we take all those existing applications that we've built using traditional methods and get them into containers? Well, first of all, how do we tell whether an application can run in a container at all? The rule of thumb is: if it runs on Linux, you should be able to put it in a container. Well, kind of. There are a couple of red flags to look for. You generally want something that runs in user space, not something tied to the kernel, like a kernel module. So a typical red flag might be code with assembler in it. What's better is applications using higher-level libraries like glibc, because those libraries do a lot of work to map you across different kernel versions, and they'll stop you falling foul of moving between systems with different kernels. You may have specialized hardware, technical, or networking requirements; not all of those will work in a containerized environment, a token ring network being the example I've got there. It may be a mainframe application that's difficult to move to an x86 architecture. Those sorts of things can also be red flags. It's also important to remember where you're getting that software from. Are you getting it from a vendor, and does that vendor support running it in a container? There may be licensing, support, or maintenance contracts that prohibit you from running that application in a container. Those red flags don't mean it won't containerize, and often you just need to have a conversation with the development team. So recently we had a customer asking: "this application is a RHEL 5 application; we can't run it because you don't do a RHEL 5 base image." And so, well, why is that?
So why do you think you can't move your application? Actually, it was that the ops team for that application didn't support anything other than RHEL 5; they only supported a RHEL 5 operating system. But if you put the application into a container, we're really not worried about the operating system; we're worried about the user space, the libraries, and the application. So if we can take those out, and we have a supported platform, then we can do that work. Sometimes you just need a bit of a discovery session: talk to the teams and understand what the actual problems are. So, if it's in a container, will it run in Kubernetes? Again, we can hit some other problems there. If you look at systems like OpenShift, where we promote OpenShift as a common, multi-tenanted platform, with lots of teams using it rather than one cluster per development team, you need to start adding additional capabilities and security policies to make that viable, particularly to mitigate any risks around vulnerabilities. And so different policies and ways of working can constrain what you're able to do in a container. A very common one you may have seen if you've used OpenShift is that we don't let you run as root, and not only that, we randomize the user ID that your process runs as. Now, many applications are quite happy with that, but you'll often find the odd application that hits some roadblocks because of it. Generally we'd say: why can't you change your app to work this way and be more secure? And generally we can work a way around it. But you do need to be aware of these things going in. The other question is: is it optimized for Kubernetes?
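A common pattern for coping with that randomized user ID is to rely on the fact that the arbitrary UID OpenShift assigns always belongs to the root group (GID 0), so you make the directories the app writes to group-owned and group-writable. A minimal sketch, with illustrative image name and paths:

```dockerfile
FROM registry.access.redhat.com/ubi9/ubi-minimal

# The arbitrary UID is always in GID 0, so give the group the same rights
# as the owner on anything the application needs to write to.
RUN mkdir -p /opt/app/data && \
    chgrp -R 0 /opt/app && \
    chmod -R g=u /opt/app

COPY app /opt/app/app

# Declare a non-root user; OpenShift will substitute its own random UID,
# but this keeps the image runnable as non-root elsewhere too.
USER 1001
CMD ["/opt/app/app"]
```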
So you may be able to get it running on Kubernetes, but maybe we're not getting the best out of the platform. That means things like being able to horizontally scale your application: making your applications more stateless, more aware of each other, more cloud native, essentially. These are things you can consider. But actually, if you can get your application onto Kubernetes, and get a workflow and delivery pipeline through a Kubernetes-type environment, particularly on OpenShift, it gives you the opportunity to then modernize your application in situ. So a good idea is to start that containerization journey with Kubernetes, and not start straight with just doing Dockerfiles. Because if you start with Kubernetes in mind, it changes the way you will architect your application and the way you containerize it. What I hope to show you now is that there are different decisions you make during that development and containerization process, and Kubernetes takes you down a different road. In particular, it gives you opportunities to drive towards things like microservice architectures, to use things like sidecars, which I'll explain in a minute, and to take the logical design of your application and transpose it into a running operational environment. If you contrast that with a traditional approach: you typically have a logical design for your application, and then a physical realization of it, a number of servers, each with its own manifest and different things installed on it. So what you actually deploy into production doesn't really look like your original design. Kubernetes actually gives you the opportunity to essentially implement your logical design within the platform.
So where does OpenShift fit into all this? I'm sure someone's mentioned this already today. The upstream community, OKD, derives from Kubernetes and a whole bunch of other upstream projects; it's the upstream OpenShift community. Feeding from that are the various versions of OpenShift that you can consume: OpenShift Container Platform is the one you build and manage yourself, on premises or in the cloud, whereas Dedicated is a managed service that we offer, and Online is a managed service shared with lots of different organizations. OpenShift Container Platform is certified Kubernetes, so you can use Kubernetes constructs and objects within it. OpenShift does add some additional extensions, but that's part of our work with the community. So why use that instead of doing it yourself? Well, really, the most important thing in your business is the business outcomes, and those outcomes are delivered by the applications you're going to be running on this platform. And whilst it can be very interesting to build your own Kubernetes and all of the services that go with it, like registries and logging and metrics and service meshes and things like this, it's actually better to focus on your applications; OpenShift basically builds that platform for you, and then you can focus on those apps. So, okay, using OpenShift. Maybe you're already building containers, so you don't need OpenShift to build your containers for you, right? You've already got your Dockerfiles and things like that. Well, that's great, actually, because you can point OpenShift at your Dockerfiles and it then becomes basically a build farm for your containers. Not only will it build those containers, it has a registry where it'll store them, nice and secure.
So you can manage the security of and access to those containers. We also wrap a whole bunch of metadata around them: if that Dockerfile is in, say, a Git repo, you'll have the commit reference and things like this in the metadata, all bound into that container and stored for you. And instead of having three or four servers sitting there whose job is to build containers, you can now just fire that into a Kubernetes cluster and it will build those containers for you. It also gives you an opportunity to increase reuse with those Dockerfiles. The traditional approach with a Dockerfile is to say: start from some OS base like Alpine or something like that, add the application dependencies and everything I need for my app, and off I go. Whereas we can now start looking at a kind of build pipeline for your containers, going to vendors to get third-party containers that you maybe add your own specializations to, and then, again, focus on your app. All your Dockerfile needs to do is take the binary for your application and drop it into a base image. An example of that would be this Tomcat one, based on a Tomcat example that someone had put on GitHub. It's typical of what you would see in a Dockerfile: start from a JDK base, install lots of dependencies, lots of making directories and making sure permissions are correct, download binaries for the application, put them in the right place, apply any static config, add maybe init scripts and stuff like this, and then explain how the application is going to run. But if you take an upstream Tomcat vendor image, we can basically simplify that into three steps: get the binary and drop it into the deployment directory for your Tomcat, and maybe add some standard libraries and things like that.
If you needed to connect yours to a Microsoft SQL Server database, say, so you need to pop in the driver, you just add that as an extra step. So again, we're focusing on the application with that approach. Designing for Kubernetes means we have some other choices too. Instead of trying to treat your container as a mini VM and bundle in everything we'd traditionally have put on a VM to run and access the application, we can now start splitting things out and having a logical design. So, rules of thumb: one process per container, for example. You don't need an init-style process that launches your process within the container and makes sure it's always running, because Kubernetes is essentially going to do that job for you. And split things out into sidecars if you have tightly coupled things that need to talk to each other locally. Maybe you're building from binary: you've got a CI process, for example building Java applications using Maven, with a deploy step at the end that takes the Maven artifacts and chucks them into, I don't know, a Nexus or an Artifactory repository. That's great: we can basically do a binary build from that approach, take the binaries and the libraries resulting from that existing CI, and build new images for you. So that's really easy as well. Or maybe you're starting a new project, you've written some source code and you just want to check that it works nicely in a Kubernetes environment. We can do that as well: source-to-image enables you to just point us at your source code; we will build your source code for you, then build the container, then run your container. So it's all about accelerating and getting you onto the platform as quickly as possible.
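The simplified, vendor-image-based Tomcat Dockerfile described above might look something like this (the image tag, artifact names, and paths are illustrative, not taken from the original example):

```dockerfile
# Start from an upstream vendor image instead of building Tomcat yourself
FROM tomcat:9-jdk11

# 1. Drop the application binary into Tomcat's deployment directory
COPY target/myapp.war /usr/local/tomcat/webapps/

# 2. Add any standard libraries, e.g. the Microsoft SQL Server JDBC driver
COPY lib/mssql-jdbc.jar /usr/local/tomcat/lib/

# 3. Apply any static configuration
COPY conf/context.xml /usr/local/tomcat/conf/context.xml
```

Everything about installing a JDK, creating directories, fixing permissions, and wiring up init scripts is inherited from the vendor image; the Dockerfile only carries what is specific to your application.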
So what do I do about all of these Kubernetes resources that describe my application? Well, these are essentially just like the code for your application, so we can treat them as source code: use your source code repository to manage and maintain those Kubernetes resources as well. OpenShift does a load of work to help create those so you don't have to type in these YAML documents from scratch; once you've created them, it's very easy to export them and have them as resources that go through that software development lifecycle as well. There are different types of resources. There's deployment information, which describes how to deploy your application and how to transition from version A to version B. There are services, which are the stable endpoints: when your applications talk to each other, you typically talk through those services. There's ingress: how we talk to these applications from the outside. And there's how we provide specific runtime context. We talked about immutable images going through the software development lifecycle: how do I take that immutable image and make it look like a dev environment, a test environment, a production environment? We use things like ConfigMaps and Secrets, mounted within the container, to provide that context throughout the lifecycle. Okay, so let's look at a particular example: can I containerize this web app? This was taken from an actual example with a customer. It's a Tomcat app, which was great, because we've got a Tomcat image, so that's cool. And they wanted it to talk to an external Microsoft SQL Server database. That's cool, absolutely fine. They were using the Microsoft JDBC driver for Java. Okay, so that makes it a little bit fiddly.
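As a sketch of that last point, the same immutable image can be given per-environment context by referencing a ConfigMap; every name, key, and image reference below is made up for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config          # one of these exists per environment (dev/test/prod)
data:
  DB_HOST: db.test.example.com
  LOG_LEVEL: debug
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0   # the same image in every environment
        envFrom:
        - configMapRef:
            name: myapp-config                  # only this context differs per environment
```

The Deployment and image are identical everywhere; only the ConfigMap (and any Secrets) change as the image moves from dev to test to production.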
We need to make sure we build them a custom image with that driver built in, but that's actually super easy, so we can do that, not a problem. But they also wanted the JDBC driver to use a valid Kerberos token to authenticate to the database; it didn't use a username and password. And that's where it became a little bit tricky: okay, so we'll need to get some kind of token somehow. The first thing I asked was: well, how do you do this today? And they said: what we do is install a Tomcat server and then run a cron job, which does a kinit with a service account and just makes sure the token is always up to date. I think their tokens expired after about an hour. And you don't want to rewrite your code? No, they didn't want to rewrite the code. Right, so that's what they were doing today. So we said: look, what you could do, and this is where they were kind of heading, is create a custom container, write essentially that cron job to sit inside the container and do the kinit for us, and provide the initial credentials through a Secret: we get the keytab and do it that way. And I said: well, you could do it that way, but then basically that application has got the Kerberos machinery built into it. And they said: well, maybe we could put that into another container which sits outside and somehow inject the token into the app in some way. And I said: actually, I think the preferred option would be the sidecar approach. What you do in this case is you have your app, which expects a valid Kerberos token to exist, and we write another container whose job is simply to get that Kerberos token. And they said: well, how do the two containers share that information?
So, when you run sidecar containers, we're able to share different capabilities within a Kubernetes pod. One of them is shared memory: we can have a shared directory backed by a temporary, in-memory file system. If I put a file into that directory from one container, the other container can see it, and so we can share information that way. What that means is we can create one container that just does the initialization, and you can have a standard container running the application. In order to test this out, we did an architectural spike. With an architectural spike, we have one thing to test: that we can get a valid Kerberos token using one container and then show that we've got a valid token in the other container. In order to do that, we needed some sort of tame Kerberos KDC we could talk to, we needed a test application, and we needed to build our sidecar. So we actually created a test Kerberos server to help do this, and that again used a sidecar approach as well: we ran the KDC process and the kadmin process in two separate containers, using shared memory to talk to each other. That's a bit like how you would install it normally, because both of those things run as separate services if you install them on a Linux server. So what we ended up with was this stack here: two pods. There's a KDC pod, which is our test server, with a service for it, and then the application pod, which had our application in it. Because it was an architectural spike, our application simply did a klist: if you do klist, it tells you whether you've got an authenticated token.
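A minimal sketch of that application pod, assuming a memory-backed emptyDir volume as the shared credential cache (all names and images here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: krb5-app
spec:
  volumes:
  - name: krb5-cache
    emptyDir:
      medium: Memory          # tmpfs shared between the two containers
  containers:
  - name: app                 # the unmodified application; it just expects a ticket
    image: registry.example.com/myapp:latest
    env:
    - name: KRB5CCNAME        # point the Kerberos libraries at the shared cache
      value: /tmp/krb5/ccache
    volumeMounts:
    - name: krb5-cache
      mountPath: /tmp/krb5
  - name: kinit-sidecar       # periodically runs kinit from a keytab Secret
    image: registry.example.com/kinit-sidecar:latest
    volumeMounts:
    - name: krb5-cache
      mountPath: /tmp/krb5
```

The sidecar writes the ticket cache into the shared volume and refreshes it before expiry; the application container simply reads it, with no Kerberos logic of its own.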
And then we did the kinit sidecar, which did the initialization. It worked like this; there's actually a test script you can run if you've got an OpenShift environment. The test script basically provisions the KDC server, provisions the test app with the sidecar, runs kadmin to create a new account in the KDC server, and then passes that to the application. I do actually have a little video of it running, if you wanted to see that. So this was the example: there's the KDC server, just starting up, and here's our test application. If you've not seen this in OpenShift: if you look at the pod and then look at the logs, you'll see that we've got both containers running, and in the logs view you get a dropdown, so you can look at the console from one container or the other. This one here is showing the console from the kinit sidecar; it's just waiting to get the credentials. You'll see that it gets credentials in a minute. There we go: we've actually logged on and we've got a valid token. And if we swap to the example application, which was basically looping and waiting until we got credentials, we can see the credentials then coming through in that test application. Okay, so what I wanted to show here was really that Kubernetes changed the way we solved this problem, and it's given us something with very clear separation of concerns. We've got something that's reusable: if someone came along and said, right, I've got a Go application which needs to do the same thing, we can basically use the same sidecar. We don't need to rewrite it for that Go application.
And it can have its own release cadence: we've basically got a team that looks after this as a feature, and they can roll it out to all of the developers using it, on their own development cadence. If you look at how this pattern has been rolled out across the Kubernetes ecosystem, Istio is a good example. Istio provides a service mesh so you can control and manage the traffic between your services within the platform, and it's essentially using this same technique: sidecars running alongside your applications, supporting any type of technology your applications are written in. So the old approach, where all of that logic was built into your application, goes away: you take the logic out of the application and have it defined by a common service and configuration, which gives you very clear separation. So, to explain it like I'm five: it does look a little bit more complicated, because if you look at the system, we've got lots more moving parts. But if you look at your monoliths, with all of these things in them, and imagine lots of monoliths all doing these things, all of that logic is traditionally bundled into each one. Now we're separating those things out and managing them separately, which gives us the ability to tune each capability we're looking for in our environments, and to optimize and innovate around it. And ultimately, what this means is you get to focus on your application, and not on all of the infrastructure we need in order to get your application running. Thank you.