So welcome, and thanks to everybody for actually being here. I was almost expecting I would be speaking in front of a completely empty room; lots of people certainly had to catch flights. The topic that I'm talking about is up here. It's probably the most descriptive talk title that you will see at this conference, so it's pretty easy to understand what it is about.

First, a few words about the company I'm representing and the product portfolio that we have. We're not your usual startup that you find here; we've been around for quite some time, and we have had products that do workload management for a living for many, many years. We serve lots of really big companies, typically Fortune 100 companies with some of the biggest clusters that you can find in the industry; something like 300,000 cores is not rare. We have built scheduling solutions for those types of customers for a long time, and the part of our product portfolio that is facing Kubernetes brings some of the knowledge that we have gathered over the years into the Kubernetes space.

Just by way of further reinforcing that point: we have customers in pretty much all industries, and as I said, pretty big names doing quite interesting things. Usually their value generation chain starts with our product, where we manage the workloads, and it's absolutely critical that our stuff runs their stuff. They have pretty complex requirements, so we have lots of experience with scheduling, workload management, and so on.

Now, switching over to the product that we provide for Kubernetes. The product family is called NavOps. It's a number of things, but the key point is that we're trying to help you make use of your Kubernetes cluster as well as possible, meaning getting the most out of it. That entails things like managing resource scarcity, allowing you to run multiple tenants on the same Kubernetes cluster without them stepping on each other's toes, and running mixed workloads, that is, containerized and non-containerized, and, as we will see in this talk, Mesos frameworks, all on the same Kubernetes cluster. We also support things like application workflows, where you have dependencies within your application, so that you run phase one of your application, then phase two. Things like that we all provide solutions for.

One of our core products, NavOps Command, provides this virtual multi-tenancy that I've mentioned. From an architecture point of view, the way it works is that we have implemented a scheduler that is compliant with the Kubernetes scheduler interface. It can be used as a second scheduler next to the Kubernetes stock scheduler, workloads can opt into it, and it provides a lot of functionality on top of what the stock scheduler provides. To give you a little bit more insight into that architecture: it's actually running as a pod itself on top of Kubernetes, and it talks directly to the Kubernetes API server. From an end user point of view, you see no difference; you just use kubectl to submit your work. Our system gets to know which pods it is supposed to be scheduling, has its own internal persistence and so on, and of course the scheduling component that decides where pods run. It updates the API server just like the stock scheduler does.
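To make that concrete, here is a minimal sketch, using the Kubernetes Python client, of how a pod opts into an alternative scheduler through the schedulerName field of its spec. The scheduler name navops-command is a placeholder I'm assuming for illustration; the actual name depends on how the NavOps scheduler is deployed.

```python
# Minimal sketch: submitting a pod to a second scheduler.
# "navops-command" is a placeholder scheduler name, not necessarily
# the name used by a real NavOps deployment.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-sleeper"),
    spec=client.V1PodSpec(
        scheduler_name="navops-command",  # route this pod past the stock scheduler
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="sleeper",
                image="busybox",
                command=["sleep", "30"],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "100m", "memory": "32Mi"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Pods without that field keep going to the stock scheduler, which is what makes running both schedulers side by side safe.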
For the administrator, there is an additional set of interfaces for configuring policy, but that's only for policy configuration. We have a REST API, a web UI, and of course also a CLI.

The types of policies that we provide you can see here on this picture. There are things like workload isolation, for instance runtime quotas or access restrictions, and workflow management, which I have mentioned already. We can affiliate workloads with things like owners, projects, and application profiles, and apply policies to workloads that have a certain affiliation. Then we have node selection criteria, for instance maximizing utilization: as opposed to just doing spread or pack, we actually look at resource consumption and place workloads where the resources are put to the best use. And then the most important thing is workload priority. We provide many ways of putting priority on your workloads and then make decisions on where to put them based on that priority.

Pretty important, one of those is what we call proportional shares; I have a picture of that here. With proportional shares, you can subdivide your Kubernetes cluster into different partitions, and each of those partitions represents a percentage of the resources it should consume. Here, for instance, I have production, development, and some batch workloads. You can break that up hierarchically, so for instance under development you have back-end and front-end development, which share the slice that the development department has received. Our policy automatically makes sure that you get that amount of resources over a certain period of time. If one of the departments doesn't use its resources, then the others can share them; but if it comes back, then of course it gets its fair share of what it requires.
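To illustrate the redistribution behavior just described, here is a small sketch of weight-based sharing with spill-over to busy siblings. This is not the actual NavOps Command algorithm, just a toy model of the idea that idle partitions lend their slice to active ones until they come back.

```python
# Toy model of proportional shares (not the NavOps Command implementation):
# split capacity among partitions by weight, capped by each partition's
# demand; capacity unused by idle partitions is re-shared among the rest.

def allocate(capacity, weights, demand):
    remaining = dict(demand)
    alloc = {name: 0.0 for name in weights}
    active = set(weights)
    left = capacity
    while left > 1e-9 and active:
        total_w = sum(weights[n] for n in active)
        handed_out = 0.0
        for n in list(active):
            share = left * weights[n] / total_w   # weighted slice of what's left
            take = min(share, remaining[n])       # never exceed actual demand
            alloc[n] += take
            remaining[n] -= take
            handed_out += take
            if remaining[n] <= 1e-9:              # satisfied partitions drop out
                active.discard(n)
        if handed_out <= 1e-9:
            break
        left -= handed_out
    return alloc

# Example: prod 50%, dev 30%, batch 20% of 100 cores; batch is idle, so
# its slice flows to prod and dev in proportion to their weights.
print(allocate(100, {"prod": 50, "dev": 30, "batch": 20},
               {"prod": 80, "dev": 60, "batch": 0}))
# {'prod': 62.5, 'dev': 37.5, 'batch': 0.0}
```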
Finally, in that context, regarding the capabilities that we bring to Kubernetes in general: I've talked about a couple of them already, like the advanced multi-tenancy from the previous slide, or the best-fit scheduling. Some more things that we do are, for instance, automatic eviction. If some of the departments that I've mentioned don't get their fair share of resources and others get too much, then we automatically evict replicas to restore the right balance. Things like that.

One more thing that I've mentioned already is mixed workloads. What we can provide there is, for instance, that you run non-containerized workloads together with containerized workloads in the same Kubernetes cluster. We do that by deploying a workload management system that is part of our product portfolio and handles non-containerized workloads; we have containerized it as basically a workload management service inside of Kubernetes. You see it here on the picture to the left; it is called Univa Grid Engine.

Now switching over to the Universal Resource Broker, the real topic of this talk. What is the Universal Resource Broker? First of all, it's open source. It is an implementation of a Mesos-compatible resource broker, and it allows you to run Mesos frameworks really seamlessly on a Kubernetes cluster.

So what can you do with it? First of all, you can share resources between Mesos frameworks and standard Kubernetes applications. Obviously that helps you reduce cost, because you do not need to run multiple clusters, say a Mesos cluster next to a Kubernetes cluster. It also simplifies cluster administration and management: if you had two types of environments, you would need two types of knowledge to run your clusters, two types of monitoring, and so on. If you used, for instance, the Mesosphere Kubernetes implementation, where you run Mesos as your basic infrastructure and then Kubernetes as a framework inside it, then you still need to know how to manage Kubernetes and how to manage Mesos. That's not necessary in this context; your single pane of glass is Kubernetes. And it obviously helps if you want to transition from Mesos to Kubernetes. Let's say you have some frameworks that you still want to use for at least some time; it's very easy to do that with URB.

From an overall architecture point of view, it looks like this: you have your shared Kubernetes cluster, and on some part of it you run your standard Kubernetes workloads, while on another part you run your Mesos frameworks. And the instances of the Mesos frameworks that run there, the replicas, just show up as regular Kubernetes pods. So from a management point of view, from a monitoring and diagnostics point of view and so on, you can use all the tooling that is available in Kubernetes; you really don't need to worry about doing something different there. And the Mesos frameworks don't even notice that they are not running under Mesos; they just think they are running under Mesos control.

A few technical details on how it is actually done. We have implemented a shared C++ library that pretty much re-implements the C++ library that Mesos comes with, so it implements the Mesos binary interface. We also provide a JNI wrapper so that your Java-based frameworks can talk to it, and a Python wrapper for Python-based frameworks. We're currently working on support for the HTTP-based frameworks. The system itself has a master broker service, which is a Python daemon, event-driven through gevent; behind it there is a Redis-based message bus. There are actually multiple backend implementations for URB: one is for Kubernetes, and the other one is for the other product that we sell, Univa Grid Engine. These are just two possible implementations; if you had yet another orchestration system, it would be possible to add a backend for that as well.

If you want to get started with URB, take a look at how it works, or test it, it's freely available on GitHub under an Apache license. If you look at github.com/UnivaCorporation, what you will see is basically three repositories: one is called URB Core, which contains the core implementation of URB itself, and then the adapters, one being for Kubernetes and the other for Grid Engine. And it comes, of course, with READMEs and with some examples like Marathon, Spark, and so on.
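Because the wrappers re-implement the interface that frameworks already program against, a framework written for Mesos should, in principle, run unchanged. As a reference point, here is what a minimal Python framework looks like against the classic Mesos bindings; the urb:// master address and port are assumptions for illustration, as the actual endpoint depends on how URB is deployed.

```python
# Sketch of a minimal Mesos framework using the classic Python bindings.
# Under URB, the same scheduler-driver interface is provided by URB's
# Python wrapper; the master URL ("urb://...") below is a placeholder.
from mesos.interface import Scheduler, mesos_pb2
from mesos.native import MesosSchedulerDriver

class SleepScheduler(Scheduler):
    def resourceOffers(self, driver, offers):
        # Launch one short sleep task per resource offer we receive.
        for offer in offers:
            task = mesos_pb2.TaskInfo()
            task.task_id.value = "sleep-" + offer.id.value
            task.slave_id.value = offer.slave_id.value
            task.name = "sleeper"
            task.command.value = "sleep 30"
            cpus = task.resources.add()
            cpus.name = "cpus"
            cpus.type = mesos_pb2.Value.SCALAR
            cpus.scalar.value = 0.1
            driver.launchTasks(offer.id, [task])

framework = mesos_pb2.FrameworkInfo()
framework.user = ""          # let the system fill in the current user
framework.name = "sleep-demo"

# Against stock Mesos this would be e.g. "zk://..." or "host:5050";
# against URB it is the URB master service address (placeholder here).
driver = MesosSchedulerDriver(SleepScheduler(), framework, "urb://urb-master:6379")
driver.run()
```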
Now, over to a little demo. I do not believe in reliable networks at conferences, so apologies that I have canned this demo and am just playing it back. It's really a live demo that I recorded, and in some cases I've sped it up a little bit to reduce the waiting time. What I've set up here is just a Kubernetes cluster on GKE, simply two nodes; that's all that you see running here. And then I have a number of things running.

If you look down, you see this watch command, which just filters out the kube-system stuff. I have test and dev jobs for a back-end and a front-end project, and then I have some production jobs; again, I have back-end and front-end controllers running. And down at the bottom you see several pods in the URB namespace: URB itself, then Marathon, Spark, and also Chronos.

So you see this proportional share thing again. I'll quickly show you how that part of our product portfolio works. As I've already explained, I've split the total Kubernetes resource pool here into Mesos, CI, and Prod, and now I'm shifting the shares between the back-end and front-end test jobs. Basically, I'm giving the front-end project more resources, and you can actually see here how more and more front-end test jobs come to run. Previously it was five to five; I've set a maximum of ten, so it was five to five. Now it is more or less eight to two.

Now switching to Kubernetes, sorry, to Mesos. Here I have the standard Marathon interface, and I'm creating an application. A pretty simple application, in this case just a sleeper: I give it some CPU fraction and five instances, and the earth-shattering command sleep 30. So I'm creating this here; you see the system is functional. I'll speed this up a little bit, as I remember this takes some time. What you can already see down here, if you look at the URB pods, is the executor, the Marathon executor. Now if we look over, we see five tasks staged, and switching back, you see the five pods running; they switched from staged to running. So you saw those Marathon jobs come to run; I'm just deleting them now.

The next thing that I'm doing here is Chronos. Again, a simple Chronos test job. So, creating a new job: again some sleeper job, nothing exciting, just a sleep command, in this case sleep 20 seconds, and I'm going to start that job every 30 seconds. The intended behavior is: when it has started, it runs for 20 seconds, then it finishes, and after 10 seconds we should see another job being started. Looking over at the kubectl side, you can already see a Chronos job running, which I've highlighted here. If you watch for a while, after roughly 20 seconds it should terminate. Now it's gone. And when we wait 10 seconds, the next job should show up. Here it is. And from the Chronos side, if you look at the status information that Chronos provides, you can see the schedule and the runtime statistics.

So that's it for this simple demo. I could also have run Spark, but Spark takes relatively long to launch jobs, and I didn't want you to watch paint drying on a wall until that happens.
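For completeness, the same two demo submissions could also be done against the frameworks' REST APIs rather than their web UIs. Here is a sketch with Python's requests library; the hostnames and ports are placeholders for wherever the Marathon and Chronos services are exposed in the cluster.

```python
# Sketch: submitting the demo workloads via REST instead of the web UIs.
# Hostnames/ports are placeholders; adjust to your service endpoints.
import requests

# Marathon: five sleeper instances with a fraction of a CPU each.
requests.post("http://marathon.urb.svc:8080/v2/apps", json={
    "id": "/sleeper",
    "cmd": "sleep 30",
    "cpus": 0.1,
    "mem": 32,
    "instances": 5,
})

# Chronos: run `sleep 20` every 30 seconds, expressed as an
# ISO 8601 repeating interval.
requests.post("http://chronos.urb.svc:4400/scheduler/iso8601", json={
    "name": "sleep-test",
    "command": "sleep 20",
    "schedule": "R/2017-01-01T00:00:00Z/PT30S",
    "cpus": 0.1,
    "mem": 32,
})
```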
So, to wrap things up: in general, what we try to do with NavOps is simply allow you to run more things on a Kubernetes environment, and specifically also mixed workloads, that is, workloads that are non-containerized or workloads like the Mesos frameworks, and to do that in a fashion where they don't step on each other's toes, so that you don't get conflicts that you then have to go in and resolve manually by rescaling things. Instead, you can use high-level policies that set the goals for how you want the resources to be utilized, and NavOps Command automatically implements them. If you have questions, feel free to ask now or find us at our booth.

You can also visit two websites: for the NavOps product suite it's navops.io, and for the company in general univa.com. And here is my email address. Any questions?

So, first of all: does the Mesos framework have to be containerized in this case? Yes, in this case we support containerized frameworks only, not non-containerized ones. And we currently do not support the HTTP API; we're working on that, so HTTP frameworks wouldn't work right now.

Good question regarding GPUs. As a company we have lots of experience with GPUs; we actually run some of the most advanced GPU and machine learning environments in the world with our other, more technical-computing-facing products. But it would be interesting to see how that works in the Kubernetes context; that's actually something that we have not tried yet.

The question was what the migration process would look like if you have, say, a Mesos cluster and you wanted to migrate things over to Kubernetes. One option is obviously that you shut things down and then start them up in Kubernetes, but that would involve some downtime. In theory, it should be possible, although I will admit we haven't tried it, to create your framework in Kubernetes in parallel to the Mesos framework, put a load balancer in front, and just point it over; as soon as you're happy with what's going on on the Kubernetes side, you shut down the Mesos cluster. I see no reason why that shouldn't work.

Any more questions? Doesn't seem to be the case. Okay, thanks for coming out here, and again, if you have follow-up questions, find us at our booth or drop me an email. Thanks again.