From around the globe, it's theCUBE, with coverage of KubeCon and CloudNativeCon Europe 2020 Virtual, brought to you by Red Hat, the Cloud Native Computing Foundation, and ecosystem partners.

Hi, and welcome back. I'm Stu Miniman, and this is theCUBE's coverage of KubeCon + CloudNativeCon 2020 in Europe, the virtual edition, of course. We're talking to practitioners, contributors, and end users from around the globe, and of course, when we talk about the CNCF, it's not just Kubernetes; there are a lot of projects in there, and it's not just for building things in the cloud. One of the interesting use cases we've been talking about the last few years is how edge computing fits into this whole ecosystem. To help us dig a little bit deeper into that conversation, welcome on board one of our Kube alumni, Nick Barcet, senior director of technology strategy at Red Hat. Nick, great to see you again, thanks so much for joining us.

Thanks for inviting me again.

All right, so as I teed up, with containerization and Kubernetes, a lot of times people think about the big public clouds or my data center, but of course, cloud is not a destination. There's so much happening in the containerized world, and these environments, when we can make them lightweight, make sense to go to the edge. If you could, just tell us where we are with the state of containerization and the cloud native ecosystem, and where does that fit with edge computing today?

So what we're seeing currently is that every ISV and every customer we talk with is converting to developing their applications with containers as a target. This is making it so much simpler for them to be able to deploy their application wherever they want.
Of course, when we add, for example, the Operator Framework, which we just got accepted into the CNCF, and normalize how you're going to do day one and day two of the life cycle of this container, this is making things a lot simpler. And this is allowing us to have the same principles reapplied to deployments happening in the cloud, on your private data center, and anywhere at the edge. That's really the core of our strategy, whether in the open source community or as a commercial company: to make all these different footprints absolutely equal when you are writing code, when you're deploying code, and when you're managing it.

Nick, when we talk about the edge, from my standpoint, I tend to think that it is going to need a lighter-weight, smaller footprint than my data center environment. It reminds me in some ways of why Red Hat bought CoreOS: how do we build something that can be updated faster and be a thinner operating system? When we think of Kubernetes, Kubernetes today isn't simple; there are obviously a lot of managed services out there, and of course, with OpenShift, you've got an industry-leading solution. But is there something different I need to do to be able to do containerization and Kubernetes at the edge? How does that fit?

As a developer, as a user, I hope you have nothing different to do. It's our job to make our platform suit the requirements that are very specific to the edge. For example, if you're going to put Kubernetes inside of a plane, you're not going to be able to use all the space you want; you're very space-constrained. Or if you put it in a train, or if you put it in a boat, you're going to have different types of constraints. And we need to be able to have an implementation of Kubernetes that fits the smallest requirement but still has the components that enable you as a developer, or you as the administrator, to feel at home, regardless of the implementation.
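The Operator Framework Nick mentions standardizes "day one" (installation) and "day two" (upgrades, repair) operations around a reconcile loop: a controller repeatedly compares the state you declared with the state it actually observes and converges the two. As a rough illustration of that pattern only — not Red Hat's implementation; all names here are hypothetical, and a real operator would use the Kubernetes API machinery — here is a minimal sketch in Python:

```python
# Illustrative sketch of the operator/reconcile pattern: compare declared
# (desired) state against observed state and emit the converging actions.
# All resource names and specs below are invented for the example.

def reconcile(desired: dict, observed: dict) -> list:
    """Return the actions needed to converge observed state to desired state."""
    actions = []
    for name, spec in desired.items():
        if name not in observed:
            actions.append(f"create {name}")            # day-1: install
        elif observed[name] != spec:
            actions.append(f"update {name} -> {spec}")  # day-2: upgrade/repair
    for name in observed:
        if name not in desired:
            actions.append(f"delete {name}")            # remove leftovers
    return sorted(actions)

if __name__ == "__main__":
    desired = {"db": {"version": "2.0"}, "web": {"version": "1.1"}}
    observed = {"db": {"version": "1.9"}, "old-job": {"version": "0.1"}}
    for action in reconcile(desired, observed):
        print(action)
```

Running the loop again after the actions are applied yields no work, which is what makes the same model reusable across cloud, data center, and edge footprints.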
And that's the real beauty of what we are trying to do, and that's why we are not rushing it. We are doing it upstream, so that we have something that is as smooth as possible across footprints.

All right, when we talk about going to the edge, one of the considerations, of course, is the network to get there. So help us connect what the impact of 5G is, where we are with its rollout, and are there any industries, maybe, that are leading the pack when it comes to this discussion?

Yeah, so when I talk about 5G, I like to distinguish two things. There is 5G as the network that the carriers are currently deploying to support all kinds of terminal endpoints. And it happens that in order to have an efficient 5G deployment, operators use edge technology to deploy computing power as close as possible to the tower, so that the latency between your device and what is connecting you to the internet, the time packets take to go across that last mile, is as short as possible. There is a second case, which is also very interesting on the edge side, which is private 5G, because private 5G enables customers to establish their own antennas, their own local 5G networks, completely secure, that enable connecting sensors or devices of all kinds without having to run wire, and in a much more reliable way than if you're using Wi-Fi or similar kinds of connectivity. So these two aspects are crucial to edge: one because edge is enabling the deployment of it, the other because it's enabling the growth of the number of sensors without multiplying the cost like crazy. In terms of deployments, well, you know our largest reference, Verizon, and Verizon is moving forward with its plan. This is going very well; I believe they have communicated around this.
So I will point you to what Verizon has stated on their deployment, but we have multiple other customers starting their journey, and clearly, the fact that we have the ability to deploy the stack on a version of Kubernetes that is basically the same regardless of where you're deploying it, and that has the ability to support both containers and VMs for those applications that are not yet containerized, makes a huge difference in the simplicity of this transition.

Yeah, it's interesting. You talk about the coexistence of virtual machines and containers. One of the big use cases often talked about for edge computing is industrial manufacturing. And there you've got the boundary between IT and OT, and OT traditionally hasn't wanted to even think about all of those IT conversations and challenges; they've got their proprietary systems for the most part. So can you speak to what you're seeing in that segment?

So it's interesting, because we just released last week our first iteration of the industrial blueprint that we are proposing. And for us, the convergence between IT and OT comes when you have automation in the interpretation of data provided by sensors. This automation generally takes the form of machine learning algorithms that are deployed on the factory floor, that analyze the sensor data in real time and are able to predict a failure, or are able to look at a video feed to verify that employees are respecting safety measures, and many, many other applications. So because of the value this brings to the operational people, this bridge is very easily closed once you've resolved the technical difficulties, and the technical difficulties are mostly what I call plumbing. Plumbing that takes the form of norms that have been widely different between the industrial world and the IT world so far. Difficulties because you don't speak the same language. Well, let's take an example.
In the industrial world, CAN is the way you're synchronizing time resources. In the IT world, we have been using other protocols, and more recently, especially in the telco space, we're using PTP, the Precision Time Protocol. But it seems that PTP is now crossing over to the industrial world. So things are slowly but surely evolving, with something that is enabling this next wave of revolution in the factory.

Yeah, Nick, it's always fascinating to watch when you have some of those silos, and when the right time comes for things to pull together. Curious: one of the big questions in 2020, of course, with the global pandemic going on, is which projects get accelerated and which ones might be pushed off a little bit. Where does edge computing fall in the conversations you're having with customers? Is it something mission-critical that they need to accelerate, or is it something that might take a little bit longer, possibly even be delayed by the current pandemic?

So it's quite hard to answer this question, because we are on an upward slope. Is the slope less steep now than it would have been without the pandemic? I have no way to tell. What I'm seeing is a constant uptick of people moving forward with their projects. In fact, some projects, for example for worker safety, are made even more urgent than they were before, because just by analyzing a video feed, you can ensure that your processes prevent contact between co-workers that is too close and would make them vulnerable. So it really depends on the industry, I imagine, but right now we see the demand growing regardless of the pandemic.

All right, Nick, you mentioned earlier that when I think about the edge, it should be the same code; I hopefully shouldn't have to think about it differently no matter where it is. That begs the question: help connect OpenShift for us. What is Red Hat offering when it comes to edge solutions with OpenShift?
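Before we move on, a brief aside on the PTP protocol mentioned above: PTP (IEEE 1588) estimates a follower clock's offset from a leader clock using four timestamps exchanged in a sync/delay-request round trip, assuming a symmetric network path. A small sketch in Python, with illustrative timestamp values:

```python
# PTP (IEEE 1588) offset/delay estimation from the four exchange timestamps:
#   t1: leader sends Sync          t2: follower receives Sync
#   t3: follower sends Delay_Req   t4: leader receives Delay_Req
# Assumes a symmetric path; the numbers in the demo below are illustrative.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # follower clock minus leader clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way mean path delay
    return offset, delay

if __name__ == "__main__":
    # Follower runs 0.005 s ahead; one-way network delay is 0.001 s.
    offset, delay = ptp_offset_and_delay(t1=10.000, t2=10.006, t3=10.050, t4=10.046)
    print(f"offset={offset:.3f}s delay={delay:.3f}s")
```

The follower then steps or slews its clock by the computed offset, which is how PTP reaches the sub-microsecond accuracy that both telco and industrial deployments care about.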
So what we say is that the edge is like an onion, where you have different layers, and every time I look at the onion from the perspective of a given customer, the layers are very different. But what we are finding is similar requirements in terms of security, in terms of power consumption, in terms of space allocated for the hardware. And in order to satisfy these requirements, we found out that we need to build three new ways of deploying OpenShift, so that we can match all of these potential layers.

The first one, which we have released and are announcing this week, is OpenShift deployable on three nodes. That means that you have your supervisors, your controllers, and your workers on the same three physical machines. That's not the smallest footprint that we need, but it's a pretty good footprint to solve the case of a factory. In this environment, with these three nodes, we have something that is capable of being fully connected or working in disconnected mode.

The second footprint that we need to be able to satisfy is what we call single-node deployment. And single-node deployment, from our perspective, needs to come in two flavors. The easy one, which we are going to be releasing next quarter, is what we call the remote worker node. So you have your controllers on a central site, and you can have up to 2,000 remote worker nodes spread across as many sites as you want. The caveat with this is that you need to have full-time connectivity. So in order to solve for disconnected sites, we need something that is a standalone single-node deployment. That's something that a lot of people have been trying to build so far, and we are currently working on delivering a version that we hope is going to satisfy 99% of the requirements and is going to be fully upstream.

All right, last piece on this, Nick. How should I be thinking about managing my environment when it comes to the edge?
You've seen a lot, of course, from Red Hat at Red Hat Summit, and talked to some of your peers about some recent announcements. So how do we plug in what's happening at the edge and make sure we've got full visibility and management across all of my environments?

So if I had one word to explain what we need to do, it's GitOps. Basically, you need immutable deployments. You need to be pulling configuration and all information from a central site and adapting it to the local site without manual intervention. You need full automation, and you need a tool to manage your policies on top of it and, of course, aggregate information on how things are going. What we don't want is to have to station one administrator per site. What we do not want is to have to send people to each site at the time of deployment. So you need to be abiding by this completely automated model in order to be edge-compliant. Does that make sense?

It does, and I'm assuming the ACM solution, Advanced Cluster Management, is a piece of that overall offering.

Absolutely. ACM is the way we present and organize policies, the way we get reporting information, and the way we do our GitOps automation.

All right, so Nick, final question for you. Give us a little bit of a look forward. You mentioned earlier that one of the things being worked on is that single-node, disconnected type of solution. What else should we be looking at in the maturity of solutions in this container and Kubernetes world?

So it's not only about the architectures that we need to support; it's a lot more about the workloads that we are going to have running there. And in order to help our customers make their choices in how they design their networks, we need to provide them with what we call Blueprints. And in our mind, a Blueprint is more than just a piece of paper.
It's actually a complete set of instructions, abiding by this GitOps model that I described, that you can pull from a Git repository and that enables the automated deployment of something. So, for example, the first Blueprint we are going to be releasing is the one for industrial manufacturing using AI/ML. And this is going to be something that we maintain over time, accepting contributions from outside, and it is an end-to-end example of how to do it in a factory. We are going to follow that up with other Blueprints: for 5G, for private 5G, for how you deploy in, maybe, a healthcare environment, et cetera, et cetera. The idea here is to exemplify and help people make the right choices, and also to ensure that the stack we provide at one point in time remains compatible over time, given the complexity of the components we have in there. And that's really the thing that we think we need to be providing to our customers.

All right, well, Nick, thank you so much for giving us the update with regard to edge computing, a really important and exciting segment of the market.

Thank you very much, it was a pleasure being with you once again.

All right, and stay with us, lots more coverage from KubeCon + CloudNativeCon 2020, the virtual edition. I'm Stu Miniman, and thank you for watching theCUBE.
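To close the loop on the GitOps model Nick described: each edge site pulls a shared base configuration from a central repository, overlays its own site-specific values, and applies the result with nobody stationed on site. A minimal, purely illustrative sketch in Python; the site names and configuration keys are invented for the example:

```python
# Hypothetical sketch of the pull-based GitOps flow from the interview:
# a central base configuration is rendered per site by overlaying each
# site's own values, so no manual, per-site intervention is needed.

def render_site_config(base: dict, site_overrides: dict) -> dict:
    """Merge the central base config with per-site overrides (overrides win)."""
    merged = dict(base)
    merged.update(site_overrides)
    return merged

def apply_all(base: dict, sites: dict) -> dict:
    """Render the effective configuration for every site in one pass."""
    return {site: render_site_config(base, overrides)
            for site, overrides in sites.items()}

if __name__ == "__main__":
    base = {"app_version": "1.4.2", "log_level": "info"}
    sites = {
        "factory-a": {"log_level": "debug"},  # one site needs verbose logs
        "factory-b": {},                      # another takes the defaults
    }
    for site, cfg in apply_all(base, sites).items():
        print(site, cfg)
```

In practice a tool such as ACM watches the Git repository and re-renders whenever the base or an overlay changes, which is what makes the deployments immutable and repeatable across thousands of sites.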