Very, very fast talk today; it's a lightning talk. My name is Jay, and this talk is all about how we're bringing Kubernetes to the edge for people who don't do Kubernetes, people who might not even have heard the word Kubernetes.

A little bit about us: we're the Centre for Parallel Computing, based in beautiful, sunny London, England (if you catch it on a good day) at the University of Westminster. In our research group we do a lot of work trying to bring cloud native technologies to users that don't normally take advantage of such technologies. The fields we're currently working in are healthcare, space, science, and, as you're going to hear me talk about today, manufacturing, especially at the SME level: small to medium enterprise rather than large enterprise.

Today I'm talking about the DigitBrain project. This is an EU Horizon 2020 project, and it's all about bringing digital twins to small manufacturers via the cloud. What do we mean by a digital twin in manufacturing, in this specific sense? Well, picture your favorite manufacturing line. I've got a canning line up here, which is my favorite. Now think about the life cycle of that canning line. Maybe we start off in an engineering and design phase. It goes into production, and we distribute these canning lines all around the world and all around the country. They go into operation, filling cans with delicious beverages. There's a maintenance and repair cycle that they go through. Eventually, sadly, the machine dies, and we try to recycle as many parts as we can and re-engineer them into a new canning line. So that's the classic life cycle.

Where's the digital twin here? Well, hopefully, if we're doing our job right, it's in as many of those different stages of the life cycle as we can possibly imagine. The way we do that is we feed in data from sensors covering different aspects of the machine. We get lots of data points, and we can train some models.
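The model side mentioned above can be sketched very minimally. Here a threshold on a rolling average of sensor readings stands in for the trained models; this is a toy illustration, not anything from the actual DigitBrain twins:

```python
from collections import deque

def failure_imminent(readings, window=5, threshold=0.8):
    """Flag an imminent failure when the rolling mean of the last
    `window` sensor readings exceeds `threshold`.

    A toy stand-in for a trained model: a real digital twin would
    feed these readings into something learned from historical data.
    """
    recent = deque(maxlen=window)
    for r in readings:
        recent.append(r)
        if len(recent) == window and sum(recent) / window > threshold:
            return True
    return False
```

In practice the interesting part is not this check but where it runs, which is what the rest of the talk is about.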
We can write really complex algorithms to do things like derive a really nice preventative maintenance schedule based on machine learning, or predict when failures might be imminent. So that's the background.

What our group is really interested in is this section down here: the deployment of these digital twins. Digital twins, or at least most of the ones we're working with, are really just lots of microservices working together to fulfill some higher functionality. I'm sure we're all familiar with that. We want to deploy these microservices, one, to the cloud, but also, two, because there's so much compute now on the factory floor, to some edge, wherever it may be.

The problem is that these small manufacturers don't have big teams of DevOps engineers, site reliability engineers, or software engineering teams. They might have outsourced some work and got a little bit of development done to come up with a nice digital twin that meets whatever needs they have for their machinery, and now they have maybe one engineer on staff who can try to deploy it.

So here's how we see this coming together. We've got a user there; that's someone in the manufacturing plant. They've got a container, or many containers, that they know how to describe. So we ask them some simple things. We ask for some information on how they currently deploy it. Remember, these aren't people who have ever dealt with Kubernetes, so usually it's a Docker Compose file, or sometimes even a docker run command. We ask them what the hardware requirements are; they're sometimes more or less familiar with that. And we ask them if they've got any devices on site that they'd like us to connect to, and to give us access to those devices. We then send this off to DigitBrain. I heard someone mention TOSCA over there; this actually gets generated into TOSCA and fed into our little platform here.
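The simple metadata the manufacturer supplies might look something like this. This is a hypothetical sketch; the field names are illustrative, not DigitBrain's actual schema:

```yaml
# Illustrative only: the kind of information we ask the manufacturer for.
deployment:                  # how they deploy today, typically a Compose file
  compose_file: docker-compose.yml
requirements:                # hardware the twin's services need
  cpu_cores: 4
  memory: 8Gi
  gpu: false
edge_devices:                # on-site devices to bring into the cluster
  - name: line-gateway-01
    address: 192.168.1.50
    ssh_user: operator
```

From something at roughly this level of detail, the platform can generate the TOSCA description mentioned above without the user ever seeing Kubernetes.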
And then a number of different things happen. First of all, we use something like Terraform to provision a bunch of nodes in the cloud. We're using EGI cloud, which is the European Grid Infrastructure; it's used heavily across a lot of European projects. Then we run a little Ansible playbook to deploy a Kubernetes cluster across those nodes; it's K3s at the moment. And lastly, we use another Ansible playbook to connect to those edge nodes that were described by the manufacturer over there in the corner, and we use KubeEdge to bring those edge nodes into one big, beautiful cluster.

At that point, we can schedule whatever containers and microservices have been described, and the user can finally interact with whatever interface this particular digital twin exposes and keep track of whatever they want to keep track of. The user is oblivious to all of this happening. All they know is that they provide some very simple metadata. All of this action here is wrapped up into a higher-level orchestrator that we've developed at the University of Westminster, called MiCADO, and it's used to do those things I mentioned right at the top of the talk: wrapping tools to bring them to users in fields and domains that may not use them every day.

So that's a little bit about how we're using these cloud native technologies. We're really enjoying our time with them. I know I don't get any questions in this particular session, but maybe you'll see me wandering around in the days to come and you can ask me a question, because obviously this doesn't explain all of it. I'd love to answer them; I'd love to have a chat. So thanks very much for your time.
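The two playbook steps described above can be sketched roughly as follows. The K3s install script and the KubeEdge `keadm join` command are real, documented tools, but this playbook is an illustrative sketch, not the project's actual automation, and the variable names are made up:

```yaml
# Illustrative sketch of the cluster-building steps MiCADO automates.
- name: Install K3s on the provisioned cloud nodes
  hosts: cloud
  tasks:
    - name: Run the upstream K3s install script
      ansible.builtin.shell: curl -sfL https://get.k3s.io | sh -

- name: Join the manufacturer's edge devices with KubeEdge
  hosts: edge
  tasks:
    - name: Join the cluster via keadm
      ansible.builtin.command: >
        keadm join
        --cloudcore-ipport={{ cloudcore_ip }}:10000
        --token={{ edge_token }}
```

The point is that none of this is visible to the manufacturer: the playbooks run inside the orchestrator, driven by the metadata they provided.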