classics. My name is Ian Lawson. I'm a solutions architect, I think, for Red Hat. In reality I'm a software developer who does AI, just hiding out in Red Hat. I've been working in the industry for around about 35 years, and I did a lot of work involving AI and massive data analytics, including a long stretch working with Hadoop. [A section of the recording is unintelligible here.] ...and a number of cluster operators. Cluster operators are the little objects that sit on top of Kubernetes and own an individual object type, so you can extend the object taxonomy of Kubernetes without having to actually change the Kubernetes core. There's a little sketch of what that extension looks like just after this introduction. What this means in real terms, what this means in English, is that when you do an installation of OpenShift, or an upgrade, it's basically zero downtime, because you're updating the individual applications that in combination make up one big application. And this ties in a lot to the AI side, but we'll get on to that once I start the slides and the presentation. I'm kind of hoping it won't be as hot today. I'd like this talk to be longer than 25 minutes, because this is the coolest room in the entire venue, but I will get dragged off the stage if I go over 25 minutes. Do I start now, or do we just waffle for a couple of minutes? Right, cool. So today's talk is going to be very, very quick. I've got around about eight to nine hours of information I'm going to try and get across in 25 minutes, so I'm going to talk very, very quickly. My voice has gone because I was talking to lots of people yesterday and I was drinking last night, so I apologise if I actually fade out. If I fall down, someone come and give me some water, or even better some beer, and I'll be fine. So, we're going to talk about the next generation of artificial intelligence and machine learning workloads from the perspective of application development. The Red Hat approach to AI and ML: we've come late to the party, I'm going to be honest about that. Red Hat is basically a provider of facilitation software. We provide software on which people can build things. You know, we've had RHEL, sort of the best version of Linux, for a long time.
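To make the operator idea concrete, here's a rough sketch of that extension mechanism using the fabric8 Java client. Everything here is illustrative: the group, version, kind and fields are made up, and a real operator would also watch and reconcile these objects.

```java
import io.fabric8.kubernetes.api.model.Namespaced;
import io.fabric8.kubernetes.client.CustomResource;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;
import io.fabric8.kubernetes.model.annotation.Group;
import io.fabric8.kubernetes.model.annotation.Version;

// A hypothetical custom resource: once its CRD is registered, "Experiment"
// becomes a first-class object type in the cluster, alongside Pods and
// Deployments, with no change to the Kubernetes core itself.
@Group("demo.example.com")
@Version("v1alpha1")
class Experiment extends CustomResource<ExperimentSpec, ExperimentStatus> implements Namespaced {}

class ExperimentSpec {
    public int phases;          // illustrative field the owning operator acts on
}

class ExperimentStatus {
    public String state;        // illustrative status the operator reports back
}

public class ListExperiments {
    public static void main(String[] args) {
        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            // The API server now serves these like any built-in type.
            client.resources(Experiment.class)
                  .inNamespace("lab")
                  .list()
                  .getItems()
                  .forEach(e -> System.out.println(e.getMetadata().getName()));
        }
    }
}
```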
We've had OpenShift for about six, seven years now. Our job is to produce basically massive boxes of technical Lego that people can use to build systems. But we are quite late to the party on the AI and ML side, and that comes down a little bit to the mechanics of artificial intelligence and machine learning: it's all about big data, huge amounts of data, and repetitive tasks. The Red Hat approach to AI and ML can be summed up in four basic clauses. The first one is hybrid cloud, and that's incredibly important, immensely important. The point behind it is that OpenShift runs the same everywhere. If you install OpenShift on Azure, install it on AWS, install it on bare metal, if you're brave enough to install it on ARM, you know, we'll try to squeeze it onto a Raspberry Pi at some point, any workload you deploy to OpenShift behaves identically. It's the same configuration as code. We've abstracted the underlying infrastructure and technology away from the orchestration of the applications. The second one is open source efficiency. Now, Red Hat is a different kind of company. I've worked for a huge number of companies and I intend to die at Red Hat, hopefully not that soon. But Red Hat is a different type of company: we basically take open source software and we make it production strength. We provide a subscription which gives you support for using our enterprise-strengthened versions of that open source software. And there's a little phrase I always use with customers. I call it the half-past-four-in-the-morning stack trace. It comes from the time I was the first person to bring open source software into the government. I brought in a project called Lucene, everyone knows it, it's a search engine, into a government agency. I built a user interface around it, made it nice, it looked like I'd done something, and then I delivered it. And then I went home. And at half past four in the morning, I had a phone call from the agency saying, there's a stack trace. So I had to get out of bed, race into the agency itself and look at the source code. And it was all in the open source bit, which I hadn't written. I had no clue what was going on within the open source software. But because I brought it in, it was mine. I owned it. And I supported it. Red Hat provides a safety net for that half-past-four-in-the-morning stack trace: we provide you with the open source products, we provide the enterprise strengthening, and you can raise support requests on anything we actually ship to you. The third one is intelligent platforms. What we're moving towards with the Red Hat product set is being slightly more opinionated about the way our products work. Because in the old days, we used to produce a massive box of technical LEGO and say, here's the technical LEGO, build a car. No instructions on how to build a car. But you know, there's a wheel. Here's a piston for the engine. We're coming up with more opinionated approaches now, and one of the things I'll mention later is Open Data Hub, which is an open source set of data science tools that we can now bring down as a single operator and execute within OpenShift. And finally, intelligent apps: letting customers build intelligent apps using Red Hat products. This is very important to me because I was a developer for 35 years.
I was a developer back in the days, before most people here were born, when you had to do everything yourself. And I came up against what I call the 70/30 problem. The 70/30 problem was that I was getting paid a reasonable amount of money to be a contractor with government, writing software, and I spent 70% of my time not writing software. I spent 70% of my time building machines, downloading frameworks, installing frameworks, configuring the tools I needed to write the software. So I was only spending 30% of my time writing software, and that's hideously inefficient. What we're trying to move towards, especially on the Red Hat side, is providing the tools and technologies that push that up, so you spend 90, 95% of your time actually writing code. But why containers for artificial intelligence and machine learning workloads? Well, the whole point of containers is that decoupling of application and infrastructure. I used to call it taint. And the reason I called it taint is that the ops people used to hate me, because I used to rock up with a piece of software and say, here's the software you're meant to run. You'll need to install this version of the JVM. You'll need to install this database. You'll need to set this configuration. You'll need to, you know, come in at half past two in the morning and tweak this environment variable. All those things were taint that would actually taint the underlying operating system. So ops would install my application, and they wouldn't be able to put anything else on that box, because it was tied to my JVM, tied to my framework, tied to my version of the database. The beautiful thing about containers is that you abstract all that taint into the container, and the underlying infrastructure just executes the container. If you're running a Java application, the JVM travels with the container. If you're running a database, the database code and the database configuration are within the container, not in the core operating system. Agility of application creation: it's so easy to write stuff in OpenShift. That sounds mad, but literally, when I do these demos for customers, I rock up, I install OpenShift, I create, let's say, a Node application that's built from source, and I have a running Node application in around about 35 seconds. And people often ask, oh, what have you done? What did you prep? What did you do in advance? Nothing. It's so fast to be able to create these different environments, and that's one of the advantages. On-demand execution: I'll get into this later, because this is the big thing I want to talk about today. We've now got the ability within OpenShift and within Kubernetes to execute workloads only when they are required. That's very important, because if you're running containers on a standard Kubernetes system, they have to be active at all times to receive traffic, so that when you actually call them, they are there to respond. We've got this new technology, which I'm going to describe and hopefully demo, which allows you to create and install applications that only exist for the duration of time they're being called. I'll turn it on its head: they're offline when they're not being used. They're not consuming any resource. So you can stand up thousands of these applications and not consume any CPU, not consume any memory. The minute they're called, they're instantiated and executed.
When they finish executing, they wait for a timeout, then they go away. You get a lot more bang for your buck in terms of running these applications. And it lends itself to experimentation, because you only have to have the components of your experiment running when you need them, and you can persist your experiment results and data and all that stuff outside of the actual applications. It also elegantly solves the 70/30 development problem: with the tools and techniques that come with OpenShift out of the box, you get all the things you need to be able to start coding straight away. You don't have to set up your machine, you don't have to install the libraries, you don't have to install the JVM and all those kinds of things. And it's not about scale. This is the key thing, the most important thing: it's about dynamic scale. Artificial intelligence and machine learning apps need to scale up massively, but they don't need to be scaled up all the time. When you're running a Hadoop cluster, that cluster is always up, always there to execute workloads. And when people aren't using it at the weekend, it's just sat there ticking over, using CPU cycles, using electricity, not being consumed. In the past this was impossible, due to the nature of infrastructure and storage and the tight binding of applications to machines. Using containers breaks that binding: you have the application in a container, which is a piece of currency you can move around between your platforms. And with containers, and specifically with Knative, this has radically changed. You also need to understand how this works, the container mindset, what you need to think about with containers when you're actually writing these applications. And I love this first statement, I always use it with customers: containers are file systems with delusions of grandeur. They are literally just file systems, but they're executed in process spaces and they think they're operating systems. In reality, a container is just built from an immutable image that is just a file system. And to take advantage of the actual design features of Kubernetes, applications and experiments need to follow certain design patterns. Kubernetes was designed to be stateless. It was designed to be able to fire up the applications, lose the applications, recreate the applications. In the old days, we used to write applications that would sit on boxes and run for months, literally months. I expect most people here work for companies that have a box in the corner of the room that you're currently keeping alive with spare parts off eBay, and you know that if that box dies, your application's gone forever. The point of the whole cloud-native approach to Kubernetes is that you write your applications like sausage machines: they can be stood up, they can have all their dependencies and configuration injected, and then they can go away, be replaced, be moved around. It's that agility of application control and creation. An application should be effectively stateless between runs. That's what I'm talking about with the sausage machine approach. And it's not a limitation, because you can use things like Data Grid, which is based on the open source Infinispan, an in-memory data cache and NoSQL cache, or a persistent volume within OpenShift.
What a persistent volume allows you to do is stand up a file system and express that file system into the back of the container, while it's actually external to the container. The container sees it as a file system it can write into, but you can destroy the container, recreate it somewhere else, reattach it to the persistent volume, and it can carry on from where it was. And that's brilliant, because it introduces persistent state, state that actually survives between executions of the containers. So, introducing Knative. I get very excited about Knative. And I haven't been miked, so they've made sure that I stand in one place, because I normally walk up and down and wave my arms, and then I go to the toilet and forget I'm still miked. Knative as a concept is simple: it allows a container to scale down to zero replicas whilst inactive, and then be recreated when it's called. When you install an application in Kubernetes or OpenShift, normally you use a Deployment, and that Deployment specifies a number of active replicas you have to have. By default you'll install a single replica, and that application will be running at all times. If you install it using the Knative technology, it actually creates an ingress controller at the front, and it creates a mechanism that allows you to offline it to zero replicas. Now, if you've taken an application in standard Kubernetes or OpenShift and reduced it to zero replicas, when you send traffic to it you get a failure, a 500: the internal proxy and the internal router will not be able to push traffic to it. With Knative serverless, that is treated as an event to spin up the application. And there are two types of Knative application supported within Kubernetes. The first is serving, and that creates an ingress point on the service, which means that when you push traffic into it, it spins up and responds. The second one is more important, and I think much more relevant to AI and machine learning, and it's called eventing. What eventing allows you to do is create a broker that lives within OpenShift, within the namespace itself, that processes a new type of abstract event called a cloud event. And I love cloud events, because they're incredibly simple: they've got a type and they've got a payload. I'll show you roughly what one looks like on the wire in a second. What happens is, when you push one of these cloud events into the broker, the broker looks at the cloud event type and looks for triggers within the system, and if any of the Knative services are waiting on a trigger of that type, it passes the event down to them. The arrival of that event at the actual application spins the app up, runs the processing, all those kinds of things. The lovely thing about this is that those brokers are, by default, ephemeral. They live within the namespaces, and if you lose the machine and bring it back up, all the state is gone. But you can back these brokers with Kafka. You can set up Kafka to have a topic, and that topic can deliver cloud events of certain types to the broker. What that allows you to do is have a broker that's delivering these events to drive the Knative services, but then you can temporally replay the actual messages by just winding the offset back within Kafka.
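To make that concrete, here's a minimal sketch of pushing one of those cloud events into a broker over plain HTTP, using the CloudEvents 1.0 binary binding, where the attributes travel as ce-* headers and the payload is the body. The broker URL follows the usual Knative broker-ingress pattern, but the namespace, broker name, event type and payload here are just placeholders for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

public class EmitEvent {
    public static void main(String[] args) throws Exception {
        // Hypothetical in-cluster broker address: Knative exposes each broker
        // as an addressable HTTP endpoint under the broker-ingress service.
        URI broker = URI.create(
            "http://broker-ingress.knative-eventing.svc.cluster.local/kneural/default");

        HttpRequest request = HttpRequest.newBuilder(broker)
            .header("ce-specversion", "1.0")
            .header("ce-id", UUID.randomUUID().toString())
            .header("ce-type", "quarkus-event")      // the type the trigger filters on
            .header("ce-source", "/demo/emitter")
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(
                "{\"payload\":\"hello AI summit\"}"))
            .build();

        HttpResponse<Void> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.discarding());
        System.out.println("Broker answered " + response.statusCode());
    }
}
```

On an ephemeral broker, that event is gone once it's been consumed; back the broker with Kafka and it sits in the topic, which is what makes the replay trick possible.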
And that is what allows you to replay experiments: you accept experimental data being pushed into the Knative services via Kafka, and then just reset the temporal point to rerun the experiment. But why is this relevant to AI and ML? AI and ML workloads are all about size and repetition. They're all about huge data processing, and about doing things over and over again to generate results. And most organisations are limited in this, either by size or by cost. It's very expensive to run up an AWS cluster. It's very expensive to maintain a Hadoop cluster. Containers and Knative technologies let you run a massive experiment in a much smaller footprint. It's all down to efficiency. If you've got, for example, an experiment with 10 phases, 10 individual components that have to be executed, there is no reason why those 10 components have to be resident at once. They can be resident as a chain driven by Knative serverless, with a much smaller resource footprint, so you can do many more experiments on much less hardware. And OpenShift and Kubernetes have facilities for targeted orchestration, and this is where it gets really sweet. How Kubernetes works is that it has a number of, I'll tell a little joke here, so I assume there are no Red Hat salespeople in the room, it has a number of worker nodes, and these worker nodes are basically buckets where workloads can be executed. I used to describe it to customers like this: you stand up all these individual buckets within your Kubernetes system, and then you run your workloads over them. And I was taken aside by one of the chief salespeople at Red Hat, who said, you can't use the word bucket, it's too technical a term for salespeople. And it was like, right, okay. That just struck me as amazing. But anyway, what OpenShift allows you to do is specific workload targeting onto these buckets. So if you've got, for example, five worker nodes in your system, and one of your worker nodes has a huge amount of memory and some GPUs, you can have OpenShift orchestrate the workloads that require GPU to only land on that box. And one of the new features we've got in the latest OpenShift releases is the ability for the boxes themselves to expose the actual hardware capabilities they have. So if you've got a box with certain NUMA zones, or a box that's got GPUs, it can express through the OpenShift system that it has these things. And then in your orchestration, if you've got a workload that says I must run in this NUMA zone against this CPU, I must have access to this amount of memory, I must run on a GPU, OpenShift can automatically orchestrate that to the appropriate box. That's incredibly powerful. It means you don't have to have a cluster made of identical boxes. I work with some banks, and what the banks do is they tend to buy very expensive boxes for all their top-of-the-range services, and then go and buy the worst possible boxes they can, off the back of a lorry, for their developers. They stand those up as worker nodes and label them as developer, and stand up the prime boxes and label them for the best workloads. OpenShift allows that kind of orchestration; it uses affinity and anti-affinity to make sure the workloads land in the appropriate places. And as I say, much more efficient use of resources means better results for less outlay.
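As a rough illustration of that targeting, here's a sketch using the fabric8 Java client to land a pod on a GPU node. The node label, image and namespace are invented for the example; nvidia.com/gpu is the resource name the NVIDIA device plugin registers, and requesting it keeps the pod off GPU-less boxes.

```java
import io.fabric8.kubernetes.api.model.Pod;
import io.fabric8.kubernetes.api.model.PodBuilder;
import io.fabric8.kubernetes.api.model.Quantity;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class GpuTargeting {
    public static void main(String[] args) {
        Pod pod = new PodBuilder()
            .withNewMetadata().withName("training-run").endMetadata()
            .withNewSpec()
                // Only schedule onto nodes carrying this (illustrative) label.
                .addToNodeSelector("example.com/accelerator", "gpu")
                .addNewContainer()
                    .withName("trainer")
                    .withImage("quay.io/example/trainer:latest")
                    .withNewResources()
                        // The scheduler will only place this on a node that
                        // actually advertises a free GPU.
                        .addToLimits("nvidia.com/gpu", new Quantity("1"))
                    .endResources()
                .endContainer()
            .endSpec()
            .build();

        try (KubernetesClient client = new KubernetesClientBuilder().build()) {
            client.pods().inNamespace("experiments").resource(pod).create();
        }
    }
}
```

For chattier placement rules, the same spec takes affinity and anti-affinity stanzas instead of a bare node selector; the idea is identical, just with softer or richer matching.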
So, on to the thought experiment now, and this is where I get a bit more excited, because I've been working with neural nets for a number of years. I love the concept of neural nets, but I've never been able to build one properly. Any Java programmers in the audience? Any Go programmers? Just checking before I start to insult Go. I'm a Java programmer, I've been a Java programmer for 25 years, and Go makes my teeth hurt. I don't know why, so I find it very hard to write experiments in Go. Anyway, enough stories. What I've been working on is a concept called kNeural, and what it does is let you create neural nets out of neurons, but the neurons are Knative services that only exist for the duration they're being called. The neuron itself uses Red Hat Data Grid, which is an in-memory data grid, to store the state of the neurons. When a neuron is called via a cloud event, it spins up. The first thing it does is talk to the data grid and pull off its memory state. It looks at that state, it looks at the thresholds, it looks at what it has to generate. If it exceeds a threshold, in the way that a neuron normally exceeds a threshold, it throws a cloud event back to the broker, and that cloud event drives other neurons. And when the neuron has finished its work, it's offlined. So you can build these hugely complicated systems with very small atomic components, and that's incredible. I mean, I've had some problems writing it, because, you know, I have a day job, so they won't let me do this all the time. I have to come to these events and sell software and things. But I've been working on this a lot, even though I've only got brief windows in which to play with it. As I say, neurons are perfect for this kind of system because they're atomic and simple. You provide the state when the neuron is created, you persist the state when the neuron changes it, and when the neuron goes away, the state is still persisted. If you keep your neuron simple, if you keep the state engine simple, if you keep the thresholding simple, it's a very, very nice way of doing it, and incredibly fast. And as I say, I use the Infinispan-based Red Hat Data Grid for the memory, an in-memory data grid that lets you store NoSQL objects. To get slightly technical: each neuron has a unique ID assigned to it, and that unique ID is used as a key into the grid to pull the data up. And each neuron is represented by a tiny container, because you can build containers with very, very small footprints.
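Here's a minimal sketch of that neuron loop, assuming a Data Grid cache named neurons reachable over Hot Rod, and reusing the hypothetical broker endpoint from the earlier sketch. The threshold logic is deliberately toy-simple, and the host, port, cache and event names are all placeholders; it could equally be written as a Quarkus function so it spins up in milliseconds.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.UUID;

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class NeuronFunction {
    private static final double THRESHOLD = 1.0;

    // Invoked when a cloud event for this neuron arrives from the broker.
    public static void fire(String neuronId, double signal) throws Exception {
        ConfigurationBuilder cfg = new ConfigurationBuilder();
        cfg.addServer().host("datagrid").port(11222);   // assumed grid service

        try (RemoteCacheManager manager = new RemoteCacheManager(cfg.build())) {
            RemoteCache<String, Double> cache = manager.getCache("neurons");

            // Pull this neuron's accumulated activation out of the grid;
            // the container itself stays stateless between invocations.
            double activation = cache.getOrDefault(neuronId, 0.0) + signal;

            if (activation >= THRESHOLD) {
                emitDownstreamEvent(neuronId);   // drive the next neurons
                activation = 0.0;                // reset after firing
            }
            cache.put(neuronId, activation);     // persist state, then go away
        }
    }

    private static void emitDownstreamEvent(String neuronId) throws Exception {
        URI broker = URI.create(
            "http://broker-ingress.knative-eventing.svc.cluster.local/kneural/default");
        HttpRequest request = HttpRequest.newBuilder(broker)
            .header("ce-specversion", "1.0")
            .header("ce-id", UUID.randomUUID().toString())
            .header("ce-type", "neuron-fired")           // illustrative type
            .header("ce-source", "/kneural/" + neuronId)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(
                "{\"from\":\"" + neuronId + "\"}"))
            .build();
        HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.discarding());
    }
}
```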
We've got technology we ship as part of OpenShift called UBI, the Universal Base Image, which provides RHEL in a much smaller footprint, so you can build applications on top of a RHEL base but the actual container footprint is very, very small. So these neurons are tiny containers, and I'm currently testing whether I can get 1,000, 5,000, 10,000, 100,000 of these things up and running at once. So, demo time. I'm aware I'm running a little short of time, so I'm going to show you a quick demo, and again, this is on the Intel kit, so if it runs slowly, please blame Chinese intelligence. I used to say blame Russian intelligence, but that's no longer allowed. What we're looking at here, when it starts up, is the standard OpenShift user interface. We actually provide two user interfaces. One is for the administrators, and that's not quite the whole truth, because it's basically an object-level dive into every object that you as a user can change. I'll show you very quickly while this is rendering: if I go to the administration user interface, it basically allows you to drill down into any component of the system. I can drill down into all the deployments, all the services, the routes, everything I have access to. One of the lovely things, and I'll take 30 seconds on this because it's really sweet, is that as part of OpenShift we've now expressed the infrastructure on which OpenShift runs as objects within Kubernetes, so you can treat them like other objects and change them in real time. You can change the number of nodes you've got, you can change all these kinds of really cool things, without having to go down into the dirt and rebuild the system and all that kind of stuff. The developer user interface is an opinionated user interface that we produce specifically to make developers' lives easy. I'm going to be honest about this: Kubernetes is hard, and people don't tell you that. People say Kubernetes is simple, go get Kubernetes, all that kind of stuff. I love Kubernetes, it's one of the most elegant pieces of software I've ever seen, but it is painfully complicated. And it uses this model which they call eventually consistent, but which everyone who actually uses it calls eventually inconsistent, in that you talk to the Kubernetes control plane, change the object state, and Kubernetes says, yep, done, and in the background it goes away and actually does the physicalisation of that change. So you have that wonderful thing about being eventually consistent. But anyway, what we're looking at here is an application I've written. This is the kNeural stuff I'm actually running. I've got the Data Grid active, and I've got something called grid-connect, which is basically my way of interacting with the grid itself. The reason for that is that I wanted to keep the neurons very small, so rather than having the connectivity information within the neurons themselves, the neurons just send very small packets to grid-connect, which does the actual physical connection and physical update of the data within the grid. Again, I'm just trying to optimise: a more optimised container, a much smaller container, is much faster to run, much faster to deploy, much faster to move. Over here, which is the cool bit, I'll make it slightly larger so people can see, I've got a Quarkus function. Quarkus is a new Java stack; we basically made Java relevant again. As I said, I've been a Java programmer for 25 years and I love Java, and Java got this terrible
reputation of being slow. So we came up with Quarkus, and what Quarkus does is pre-compile all the class files, taking the startup time of a Java application from seconds down to milliseconds. Beautifully fast. So what I've got here is a single Quarkus function that's waiting for an event to arrive at the broker. This broker here has actually got two triggers. One trigger is waiting for a quarkus event, and I've got a subscription on that trigger for this Knative service. The other one is for a tech talk event, and on the end of that I've got a technology called Camel K. Camel K is based on Apache Camel; it's an integration technology which allows you to write some very cool, very fast integrations in very small amounts of code. What that one does is very, very simple: it just pulls the actual event off and logs the fact that it's received it. And I've also written, and I apologise profusely, I haven't changed my style sheets since 1999, I find style sheets very hard to write, so this looks like an old-school web app, a little page that allows me to actually emit the events themselves. So what I'm going to do is push a quarkus event with a payload that's just a single key, payload, with hello AI summit in it, and I'm going to push that into the broker. This is just basically throwing that into the system, and the broker I'm targeting, you'll see over here, is the kneural broker in the actual kneural namespace. You can individually target these brokers based on the namespace, which allows you to fragment the event model. So I'm going to emit that event, and if I'm quick, and if the network is very good, you'll see that the application immediately fired up. But you'll also see that this one fires up too, and the reason is that the first one emits an event of type tech talk event. So when that quarkus event arrived at the broker, the broker saw what type it was and pushed it down the quarkus event trigger, which spun up that Quarkus function, and you saw how fast it spun up. The Quarkus function processed the event and then re-emitted a tech talk event back to the broker, which kicked off the Camel K. It's a pithy example, and it's the best I can do on this network. But if you think about it, what you can do with your systems is break them down into atomic components, break them down into atomic microservices, write each of those microservices to be driven by a cloud event, and install them as Knative serverless workloads. Then you can write extremely complex systems that don't consume a huge amount of resource. And this is huge. When I talk to customers about this, you know, this is what I wanted 20 years ago. Whenever I talk to customers about it, I half want a customer to say, well, why don't you quit and come work for us and write this for us, because then I'd get to write software again. But then I remember how hard it is to write software for a living. This is the next generation, and the beautiful thing about it is that when you use OpenShift, this comes out of the box. There's no additional configuration; you just install the Knative serverless operator and away you go. So, this thing is now flashing numbers at me, I think I've gone over, so that was basically the demo. I'll be stood over there, and my voice is probably going to last for another two hours if people want to come and chat. I think the pub opens round about now, which
will make my voice work even better. But I'll say thank you, and take questions. Yeah, cool, so we do have five minutes for questions, since we started slightly early. So if you have any questions before the rush for the exit... a question, yes. [inaudible] Yeah. First, a quick question: you've really atomised the neural network down to the level of individual neurons, right? So, we're engineers, and most of the time we need to create our own big deep learning networks, different architectures, a lot of neurons. Isn't it really expensive to run every single node like that? Isn't it a limitation? If I don't need to atomise it, I just want an input and an output, and if I had to do hundreds of thousands of nodes, does it really still help? It does. What we normally do is localise the networks. If you're running, let's say, a huge number of pods within a single cluster, we use the internal SDN, and what we can do is placement: if you've got a number of applications that are very chatty across the network, you can make sure they land on the same nodes, or on nodes that are actually close. But we've also got a new technology called Submariner. Have you heard of Submariner? What Submariner allows you to do is put an overlay network over multiple clusters: it provides an overlay network over the SDNs of multiple clusters, so you can actually spread your workload out. But in answer to your question, you can localise these things. I have a lot of customers who've got very intensive application estates, where things need to be close to each other to cut down the latency of point-to-point calls, and what they've done is stand up a number of the worker nodes within OpenShift with dark fibre between them, or physically in the same room, to get around that. So you can architect it. The beautiful thing about the OpenShift side is that we're not opinionated about the way you install it; this is why I talk about the box of technical Lego. If you've got a system that has to be massively network efficient, you can design it appropriately. I've got some customers where all they've got is two nodes, and those nodes are massive dual-socket 96-core systems, and they do everything. I actually have one customer, I won't say the name because it's under NDA, who created a CSI driver that expressed the PV, so, you basically have persistent volumes, which are the way you store things offline on disk, and they wrote a CSI driver that actually used memory for the persistent volumes. So they could mount a file system into the containers, but when they wrote to that file system they were actually writing into memory, just to get the speed of processing and pushing the messages about. I'm not sure if that answered the question; I think that might have gone off on one. Any time for one more question, if we can keep it short? Thanks. Maybe to add to that question: how scalable is OpenShift? If you look at your kNeural architecture, how many neurons do you think OpenShift can serve?
We used to be limited by the Kubernetes node count, and there are two things you look at. One is the number of nodes you can actually support in terms of the response time of etcd, because the more nodes you have, the more overhead you have on the etcd side. And then it's down to the number of pods you can run on each node, which used to be limited by the Kubernetes cap for pods on a node; off the top of my head I think it's about a thousand you can do on one. The beautiful thing about the scalability side is that you don't lose anything by having tons and tons of little nodes, because we've optimised the way etcd works. So if you want to scale out your system by having small nodes, but thousands of them, you can do that. If you want to scale in a more vertical way, by having just a small number of huge nodes and then scaling up the applications themselves, you can do it that way too. It's a bad answer to the question, but there are so many ways you can do it. The problem we've got, the problem we've always had with this, is that we try not to be opinionated, because the minute you're opinionated, you're forcing someone to do it that way. The beautiful thing about OpenShift is that you can hang it together whatever way you like. Does that make sense? Maybe I can ask a second question, because you also have Hadoop experience. Did you ever compare the performance of your Hadoop setups with what you are experiencing right now? Yeah... are we cool? If we can answer that briefly, then everyone can go and get a drink. Yeah, I think so. Cheers, thank you. Well, thank you, thank you sir, thank you.