Welcome to this CUBE Conversation. I'm Dave Nicholson, and today we have a very special guest from Red Hat, Nick Barcet. Nick is the Senior Director of Technology Strategy at Red Hat. Nick, welcome back to theCUBE.

Thank you. It's always a pleasure to be visiting you virtually.

It's fantastic to have you here. I see new office surroundings. Has Red Hat taken on a kind of nautical theme at the office? Where are you joining us from?

I'm joining from my boat. I've been living on my boat for the past three years, and that's where you'll find me most of the time.

So would you consider your boat to be on the edge?

It's certainly one form of edge. There are multiple forms of edge, and a boat is one of them.

Well, let's talk about edge. We're having this conversation in anticipation of KubeCon + CloudNativeCon North America 2021, coming up in Los Angeles. Let's talk specifically about where edge computing and Kubernetes come together from a Red Hat perspective. Walk us through that. What challenges are people having at the edge, and why is Kubernetes something that would be considered there?

Let's start from the premise that people have been doing things at the edge for ages. Nobody has been waiting for Kubernetes or any other technology to start implementing some form of computing in their stores, in their factories, wherever. What's really new today, when we talk about edge computing, is that we are reusing the same technology we've been using inside the data center and expanding it all the way to the edge. From my perspective, that's what constitutes edge computing, and the revolution it brings.
So that means the same GitOps and DevSecOps methodologies we were using in the data center now extend all the way to devices that live in unusual locations, and we can reuse the same methodology and the same tooling, and that includes Kubernetes. All the effort we've put in over the past couple of years has gone into making Kubernetes even more accessible for the various edge topologies we encounter when talking with our customers that have edge projects.

So typically, when we think of a Kubernetes environment, we're talking about containers that live in pods on physical clusters. Despite all the talk of no-code and serverless, we still live in a world where applications and microservices run on physical servers. Are there practical limits to how small you can scale Kubernetes? How close to the edge can you get with a Kubernetes deployment?

In theory there is really no limit, because the smallest devices are always bigger than Kubernetes itself. But in reality you never use just Kubernetes; you use Kubernetes with a series of other projects that make it complete: components that report telemetry, components that help you scale automatically, and so on. The further you go toward the edge, the fewer of these components you can afford, so you have to make trade-offs as you reduce the size of the device. Today, what Red Hat offers is really concentrated on where we can deliver a full OpenShift experience. The smallest environment on which we would recommend running OpenShift at the edge is a single node with roughly 24 gigabytes of RAM, which is already a relatively big edge device. When you go a step below that, we would recommend a standard RHEL for Edge configuration or something similar, not Kubernetes anymore.

You said single node. Let's double-click on that for a second.
Is that a single physical node that is abstracted in a way to create some level of logical redundancy? When you say single node, walk us through that. We've got containers that are in pods, so physically, what are we talking about?

Based on your requirements, there are different ways of addressing your compute needs at the edge. You can have the smallest of clusters, which would be three nodes delivered with the control plane and the worker role integrated into each one. When you want to go a step further, you can use worker nodes that are controlled remotely by a control plane at a central site. And when you want to go one step further still, deploying Kubernetes on a very small machine that remains fully functional even when disconnected, that's when you would use this thing that is no longer a cluster: single-node Kubernetes, where you still have access to the full Kubernetes API regardless of your site's connectivity, whether you're at sea or in the air or not. There we still offer a form of software high availability, because Kubernetes, even on a single node, will detect if a container dies, restart it, and provide similar functionality. But it won't provide hardware availability, since we are on a single node.

That makes perfect sense. And I would suggest we refer to that as a single-node cluster, just because we like to mix up terminology in our business and sometimes confuse people with it.

That was a choice we made. Actually, it's not called a cluster because it's not a cluster.

Exactly. No, I appreciate that. Absolutely. So let's be explicit about the trade-offs there. Let's say I'm thinking of deploying something at the edge, I'm going to use Kubernetes to orchestrate my container environment, and let's pretend for a moment that space and cost aren't huge limiting factors.
I could put in a three-node cluster, but the idea of putting in a single node is attractive. Where is the line drawn in terms of what you would recommend? What are the trade-offs? What am I losing by going to the single-node cluster? See, I just called it a cluster.

In a nutshell, you're losing hardware high availability: since you only have one server, if that server fails, you lose everything, and there is no way around that. That's the biggest trade-off. You also have a trade-off on the memory used by the control plane, which you won't be able to use for anything else. So if I have a site with excellent connectivity, where the biggest loss of connectivity might be counted in hours, maybe a remote worker is a better solution, because then I have a single central site that carries my control plane, and I can use all the RAM and all the CPUs on my local site to deploy my workloads, not to carry the control plane.

To give you an example of this trade-off in the telco space: if you're deploying an antenna in a city, you have plenty of antennas covering that city, so the loss of one antenna is not a big deal. In that case you will be tempted to use a remote worker, because you will maximize the use of the RAM on the site for the workload, which is letting people establish communication using their phones. But now take another antenna located somewhere very remote. There, if that antenna fails, everybody fails. Nobody can make calls, and very often even emergency services cannot communicate with one another. In that case, it's much better to have an autonomous deployment, where the control plane and the workload run together in one box. And that one box can, in fact, be duplicated.
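Nick's memory trade-off can be made concrete with a back-of-the-envelope calculation. This is an illustrative sketch only: the control-plane overhead figure is a hypothetical assumption, not a published OpenShift number; the conversation only gives us the roughly 24 GB recommended minimum for a single node.

```python
def workload_ram_gb(total_gb, topology, control_plane_gb=8.0):
    """RAM left for workloads at an edge site.

    control_plane_gb is a hypothetical overhead figure for illustration.
    'remote-worker': the control plane lives at a central site, so nearly
    all local RAM goes to workloads (at the cost of needing connectivity).
    'single-node': the local node also hosts the control plane, trading
    RAM for autonomy when the site is disconnected."""
    if topology == "remote-worker":
        return total_gb
    if topology == "single-node":
        return total_gb - control_plane_gb
    raise ValueError(f"unknown topology: {topology}")

site_gb = 24  # the rough minimum Nick cites for single-node OpenShift
print(workload_ram_gb(site_gb, "remote-worker"))  # prints 24
print(workload_ram_gb(site_gb, "single-node"))    # prints 16.0
```

The arithmetic is trivial, but it captures why the well-connected city antenna favors a remote worker (all 24 GB serve calls) while the isolated antenna accepts the overhead to stay functional offline.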
There could be another box, either sitting in a truck in case of emergency, or powered off on the antenna site, so that in case of a major failure you have a way to restore service. So it really depends on your set of constraints, in terms of availability and in terms of how efficiently you need to use your RAM, which will make you choose one deployment model or the other.

That's a great example. So it sounds like it's not a one-size-fits-all world, obviously. Now, from the perspective of the marketplace looking in at Red Hat participating in this business, some think of Red Hat as the company that deployed Linux 20 years ago. Help us make the connection between the Red Hat of today, what you've been doing for the last 20 years, and this topic of edge computing, because some people don't automatically think of Red Hat and edge computing. I do, and I think they should, but help us understand that.

Yeah, obviously a lot of people consider that Red Hat is Red Hat Linux, and that's it. Red Hat Enterprise Linux is what we've been known for since our beginnings 25 years ago and what made our early success. But we consider ourselves more of an infrastructure company. For the past 20 years we have been offering the various components you need to deploy servers, run and manage your workloads across data centers, store your data, and automate your operations on top of that infrastructure. So we really consider ourselves a company that offers everything you need to run your servers and run your workloads on top of them. That includes tools for virtualization and tools for deploying containers, and that's where Kubernetes entered into play about ten years ago. Well, first it was a PaaS, which then became Kubernetes-based and is the OpenShift offering we have today.

Thanks for that. So I've got a final question for you.
It's a little bit off topic, but related. This is in the category of "Nick predicts." When does Nick predict we will tip beyond the 50-50 point in cloud versus on-premises IT spending? If you accept that today we're still in the neighborhood of 75 to 80 percent on-premises, when will we hit the 50-50 mark? I'm not asking you for the 100-percent-cloud date, but give us a month and a year for 50-50.

Given the progression of cloud, if there were no edge, we could say that two to three years from now we would be at this 50-50 mark. But the funny thing is that at the same time as the cloud progresses, people start realizing they have needs that need to be solved locally. This is why we are deploying edge-based solutions, solutions that can reliably provide answers regardless of connectivity to the cloud, regardless of the bandwidth. There are things I would never want to do, like feeding continuous streams from 4K cameras into my cloud environment. That won't scale; I won't have the bandwidth to do so. So maybe the answer to your question is that it's going to be asymptotic, and it's almost impossible to predict.

That is a much better answer than an exact date and time, because it reveals exactly the reality we're living in. Again, it's fit for function. It's not cloud for cloud's sake. Compute resources and data resources have a place where they naturally belong, and oftentimes that is at the edge, whether it's the edge of the world on a sailboat, or out on a single server, not node. I keep wanting to say single-node cluster; it's killing me, and I don't know why I think it's so funny. But a single-node implementation of OpenShift, where you can run Kubernetes at the edge. It's a fascinating subject. Anything else you want to share with us that we didn't cover?
I think one aspect we never talk about enough is how you manage at edge scale. Even though each edge site is very small, you can have thousands, even hundreds of thousands, of these single-node somethings running all over the place. And I think that Red Hat Advanced Cluster Management for Kubernetes, and particularly the 2.4 version that we are announcing this week and actually releasing in November, is a pretty good answer to that problem. How do I deploy these devices with zero touch? How do I update and upgrade them? How do I deploy workloads on top of them? How do I make sure I have the right tooling to do all of that at scale? We have now tested ACM with up to 2,000 clusters connected to a single ACM instance, and in the future we are planning to build federations of those, which gives us the ability to provide the tooling needed to manage at scale.

Excellent, excellent. Whenever we start talking about anything in the realm of containerization and Kubernetes, scale becomes an issue. It's no longer a question of a human being managing 10 servers and 50 applications; we're talking about tens of thousands and hundreds of thousands of instances, beyond human scale. So that's obviously something that's very, very important. Well, Nick, I want to thank you for becoming a CUBE veteran once again. Thanks for joining this CUBE Conversation. From Dave Nicholson, this has been a CUBE Conversation in anticipation of KubeCon + CloudNativeCon North America 2021. Thanks for tuning in.
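As a closing aside on Nick's scale numbers: if a single ACM hub can manage on the order of 2,000 clusters, federating hubs multiplies that reach. The per-hub figure below comes from the testing Nick mentions; the flat-federation model and hub counts are purely illustrative assumptions, since federation is described only as future work.

```python
def manageable_clusters(hubs, clusters_per_hub=2000):
    """Total edge clusters reachable through a flat federation of ACM hubs.

    clusters_per_hub reflects the ~2,000-clusters-per-instance testing
    mentioned in the conversation; federating hubs is future work, so
    this fan-out arithmetic is a sketch, not a product capability."""
    return hubs * clusters_per_hub

# A modest federation already puts six-figure fleets within reach,
# the "hundreds of thousands" of single-node sites Nick alludes to.
print(manageable_clusters(1))    # prints 2000
print(manageable_clusters(50))   # prints 100000
```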