Hi folks, we'll give it just another minute before we officially get started, but welcome. Hello everyone, welcome to today's LF Networking webinar. The title of our discussion today is "Why Edge Computing Requires Cloud Native Thinking," and our speaker today is Bill Mulligan with Kubermatic. Just a couple of housekeeping items before we get started: attendees will be muted during the presentation; however, we encourage questions. We have a live Q&A window at the bottom of your screen, so please feel free to type in your question at any time during the presentation, and at the end we have some dedicated time for live Q&A. All right, without further ado, I will kick it over to Bill.

Thank you, Jill, and thanks everyone for joining today. My name is Bill Mulligan, and I work at Kubermatic. We're a Kubernetes and cloud native software company; specifically, we focus on multi-cluster management for Kubernetes. The topic I'll be talking about today is why edge computing requires cloud native thinking. So let's jump in.

Edge computing is creating a new internet. If you think about it, we're really entering the third act of the internet right now, and if you think of it in three stages, each stage is about how close the application is to the actual user. In the first stage we had a really centralized internet, focused around a couple of main computing centers, and most of the information on the internet was pulled from just a few locations around the world. As businesses, companies, and organizations wanted to improve their user experience, their performance, and what they were able to do, we started to see the growth of regional points of presence and CDNs to bring the application closer to the end user: streaming content with lower latency, for instance, or hosting websites closer to the actual end user. These additional points of presence allowed us to place applications a lot closer to the users, and we expanded from a few hundred locations to a few thousand around the world. Where we're actually going today is the internet that we need to build, and that's what we call edge computing: placing the application we're trying to run as close to the end user as possible.

What this is really doing is creating a whole new way to make and consume technology. It changes a few of the fundamental paradigms we've relied on for the internet so far. Instead of really centralized cloud computing, edge computing is geographically distributed, not just to hundreds, not to thousands, not even to tens of thousands, but to hundreds of thousands or millions of locations around the world. Instead of latency a human might notice while loading a website, we can get down to latencies that even a machine wouldn't notice, so machine-to-machine communication can function in real time rather than having latency built into the system. And while cloud computing is thought of as a scalable resource where we can always rack additional servers, edge computing reaches out to constrained devices: as you head toward millions or billions of devices, not every single one is endlessly scalable like the cloud.
And the last part goes back to those numbers again: instead of thinking about thousands of locations, we're thinking about billions of devices. So it's really a whole new internet that we're creating with edge computing. And this isn't a small market opportunity; it's going to be massive. They expect annual capital expenditure for edge computing to reach $146 billion, with a 35% compound annual growth rate. That presents a massive market opportunity for lots of different suppliers, vendors, and companies to move into what is almost a blank vertical that's anybody's game to capture.

So how do we actually define edge computing and think about it differently from centralized cloud computing? I'd like to walk you through the different ways we think about it, because edge computing isn't one specific thing. It's more of a location, and there are different layers to it. You can almost think about it like an onion: as you move out from the core, you add layers that are further and further away from the centralized data centers. At the most centralized point we have the centralized data centers, which may be close to, or really are, internet exchange points. From there we go out to the service provider edge, which includes both the regional edge (points of presence or CDNs) and the access edge (central offices or regional edge sites). Then come on-premises data centers, where we run things at our own location, such as a shop floor, a hospital floor, or a retail floor, actually on a customer premise. Beyond that is the smart device edge, where IoT-type devices are actually doing the compute, all the way out to the constrained device edge, which is really microcontroller-based. As we move from the right side of this diagram to the left, we're moving the device and the application from further away from the user to closer. Centralized data centers may be thousands of kilometers away, while devices on the constrained device edge might be right in front of you: a light bulb, a handheld device, something right next to the end user, perhaps just sensing the environment. Moving from right to left, we're bringing things closer and closer and minimizing the gap between what the application is experiencing and what the actual user is experiencing.

This creates a whole new way of thinking. As we go down this continuum, we shouldn't treat all these edges as one homogeneous thing like the cloud. When we move further out through these layers, we need to think about what each transition actually means for the software, for the hardware, and for the actual users. Moving from right to left, what we see is increasing hardware and software customization, increasing resource constraints, and smaller deployment scales.
Centralized data centers, once again, run almost like the cloud: you have very standardized compute, like x86 or Arm servers, the software can be off-the-shelf vendor software, you're not really constrained for resources, and the deployment scale can be racks and racks of servers. As you get out to the regional edge or the access edge, you're still probably looking at fairly standardized hardware and software, but there may be more specialized hardware to accelerate what you're able to do, such as the integration of GPUs or smart NICs. You start to have some resource constraints and smaller deployment scales: the regional edge or the access edge might be a couple of racks, a single rack, or even just a half rack at some access edge sites. So it's a much smaller deployment scale, even though it's still fairly standardized; you can think of the service provider edge as a shrunk-down version of the centralized data centers. On-premises data centers are an extension of that: maybe a full rack or a half rack of servers, still probably x86-based and fairly standardized hardware. Maybe you start getting into some customized software, and the resource constraints really come into play here, because with only a half rack of servers, what can you do with each one, and how many applications can you actually host there? As we go to the smart device edge and the constrained device edge, this is where the hardware and software customization really come into play. We have specialized, single-function hardware; on the constrained device edge it might be microcontroller-based, with the software simply flashed onto the device, because it serves a single purpose and can't really do much more than that. The resource constraints become truly significant here: we can go down to a few hundred megabytes, where almost every single bit matters to the functionality of the software, so you're really concerned about resource consumption.

As you think about who actually owns this hardware and software and who manages it, you can think of the right side of this continuum as more shared resources: X-as-a-service, whether that's infrastructure as a service, Kubernetes as a service, or database as a service. Usually it's owned and operated by a service provider, which then rents or lends it out to the actual end user. As we move away from the service provider edge toward the user edge, what we see instead is end users, companies, or enterprises owning and operating that hardware and software stack. Sometimes it may still be managed by the service provider through customer premise equipment, but what we're really seeing is end users starting to own and control these devices.

And then there's the security of these locations. The whole first half on the right here is similar: standardized compute, racks or half racks of servers, typically sitting in either traditional data centers or something like a modular data center that can be placed on a customer premise or at these access edge sites.
However, as we move out toward the user edge, these devices aren't going to be in traditional, secure data centers, so we really need to think about how we do security around each of these different locations.

The next part is latency. As I was saying, by moving the application closer to the end user, we can change what the latency is and how applications experience it. Moving to the service provider edge allows us to run latency-sensitive applications, getting into the hundreds of milliseconds rather than larger. But for truly latency-critical applications, we need something extremely close to, or really at, the end user to provide the latency we need. For something like a self-driving car, you can't have hundreds of milliseconds of delay, because by that time you've already run over the pedestrian or hit the bridge. So you really need to think about what level of latency the application can tolerate and where it needs to be placed.

The last part is the actual software. On the constrained device edge we have embedded software right on the hardware, where the hardware and software are almost married together. But on the rest of the devices, which are a little less resource-constrained, we see increasing use of cloud native development practices: containers, cloud native network functions in the telco edge, or deployments based on Kubernetes in these regional or on-premises data centers. We're really seeing an acceleration of containers and Kubernetes being deployed into each of these edge locations.

So as we think about what kind of applications we want to put on the edge, there are really five key vectors to consider when deciding why we put things on the edge rather than in a traditional data center. The first is the autonomy the application needs. Can it operate with the resources it has, or does it need to call back to other systems? How independently can it function? If it really relies on things running in the cloud, it should be closer to the cloud; if it really relies on data sent to it by the end user, it should be closer to the end user. So as we think about which layer of the edge to place it in: what does it actually depend on, and how autonomous can the system be? The second is scalability. How big is our application, what kind of resource consumption does it have, and how big do the servers need to be for us to actually run it? If it needs multiple servers, it should sit closer to the cloud as a more scalable application; if it's a single-purpose application, it can be closer to the end user, because it isn't consuming as many resources. The next is bandwidth. How much bandwidth does the application actually require on the uplink and the downlink? Something like streaming video might need a super high downlink, while something like video processing might need a super high uplink. So what kind of bandwidth do we actually need, and what can we actually provide at each of these locations?
The next is security and privacy. As we move from modular or traditional data centers out to the constrained device edge, where things are placed in real physical locations all around us, how can we make sure they stay secure, and how important is that to us, to the users of our applications, and for the data being stored there? There's a big difference between something like which dress somebody bought versus what protected health information they have, and the level of security and privacy we need for each of those. The last is latency: how latency-critical is the application? As we move further toward the cloud and get more spread out, there's higher latency between the end user and the application. Can it tolerate that, or is it something truly safety-critical, running in the oil and gas or mining industries, where latency isn't allowable because human lives are on the line? So what kind of latency does our application allow? Each of these vectors helps determine where exactly on the edge we should place our applications (a concrete sketch of such a placement appears a little further below).

Now, each of these vectors also comes with some real challenges. On the autonomy side, as we have more autonomous systems, we have reduced control over what they do. In a centralized data center it's pretty easy to control everything, because it's all running on the same set of servers, or really the same server. As we go out to more distributed locations, we need a scalable way to maintain and manage those systems, and the connectivity might not always be there to have complete control all the time; whereas if I connect to, say, AWS or Google, I can be pretty well assured they'll be online and accessible all the time. The next is scalability: as we move further out toward the edge, we run into restricted resources and constrained devices. How big does the application need to be, and how many resources can we truly provide it? These are key considerations. Next is bandwidth: with limited connections, what can we actually do while still providing good performance for the application? For security and privacy, we'll have risky locations that are accessible to anybody walking down the street, like a camera, or otherwise accessible devices. What happens when somebody can actually walk up and remove a server? Is that a huge security risk for us, or is the application not that sensitive? And the last is latency: what kinds of delays and disconnects will our application have, and what problems will that induce in the system?

Each of these challenges becomes even larger at the edge, because the edge is a margins business. You're not selling one thing for a billion dollars; you're selling a billion things for a dollar. You're going to have millions of locations and billions of devices, and each individual thing has only a small margin for error and for profit. So thinking about each of these challenges is super important as you scale out to the edge. Now, this is where I think we really need cloud native thinking, and where cloud native thinking really makes edge computing's business and operational models possible.
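To make the placement discussion concrete before moving on: in a Kubernetes-managed edge fleet, one way the outcome of those five vectors can be encoded is as scheduling constraints and resource limits on the workload itself. This is a minimal sketch, not a prescription from the talk; the tier label key, registry name, and resource numbers are all hypothetical.

```yaml
# Minimal sketch: pinning a latency-critical, single-purpose workload to a
# hypothetical "access edge" tier. Label keys, image, and numbers are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: video-analytics
spec:
  replicas: 1                      # small deployment scale at the edge
  selector:
    matchLabels:
      app: video-analytics
  template:
    metadata:
      labels:
        app: video-analytics
    spec:
      nodeSelector:
        edge.example.com/tier: access-edge   # outcome of the latency and autonomy vectors
      containers:
      - name: analyzer
        image: registry.example.com/video-analytics:1.4.2
        resources:
          requests:                # restricted resources: ask only for what is needed
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
```

A single-purpose, latency-critical workload like this gets pinned to nodes at the access edge and asks only for the resources it truly needs; a multi-server, scale-out service would instead land at the regional edge or back in the cloud.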
If anybody's not familiar, the definition of cloud native from the Cloud Native Computing Foundation is: "Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil." Now that I've read through that, I'd like to highlight a couple of sections I think are specifically applicable to the edge. The first is immutable infrastructure; the second is declarative APIs. These two together, a completely replaceable system where you declare the state of the world you want, are super important as we go to the edge. Next are systems that are resilient, manageable, and observable: as we scale out, we need resiliency, manageability, and observability built into our systems. The last part that's really key when you're running at scale, on billions of devices, is having robust automation built into these systems.

Now, to break that down and line it up with the challenges we talked about before, I'll dive into each of these. If you really think about what cloud native thinking provides for edge computing: for reduced control, where there may be intermittent connections, we can tie that to the concept of manageable, immutable infrastructure. We can specify how things should be, and if anything goes wrong, we can completely replace it, because it's completely immutable. For restricted resources, what cloud native thinking reminds us is that there is a cloud somewhere. It may not be right at the application, but what can we offload to it? Instead of assuming everything has to run on the edge, there are things that can be connected back to the cloud, and we can run the cloud in multiple different places. If we have consistent thinking, consistent tooling, and consistent workflows as we go from the cloud to the core all the way out to the edge, we can place applications and resources at the right location and make the most of our restricted resources. Next, for limited connectivity: what happens when something goes wrong, when connections break down, when we miss a message? Cloud native thinking provides us with resilient systems that will retry and work toward a consistent state, and the declarative APIs of a cloud native system allow us to do this. We can declare our state of the world, and even over a slow or bad connection, the system will be resilient in trying to reach that desired declared state. When we have risky locations and devices, we need an observable system, so we can understand when anything is going wrong, when something has been hacked or otherwise compromised. And for delays and disconnects, we need robust automation: rather than having one person sit around waiting to do the classic IT "turn it off and turn it back on again," we need robust automation that does this for us when those delays and disconnects happen, so we can scale our operations through software rather than through people. That's what cloud native thinking brings to edge computing.
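As a concrete illustration of those pieces in Kubernetes terms, here is a minimal sketch of a manifest that bakes in immutability, observability, and automated recovery. The registry, image digest, and health endpoint are hypothetical, chosen only to show the pattern.

```yaml
# Minimal sketch of declarative self-healing. The image digest pins an
# immutable artifact; the probe automates "turn it off and on again".
# Names, digest, and endpoint are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: edge-gateway
spec:
  containers:
  - name: gateway
    # Immutable infrastructure: reference the exact image by digest, so the
    # running artifact can only be replaced wholesale, never patched in place.
    image: registry.example.com/edge-gateway@sha256:4f53cda18c2baa0c0354bb5f9a3ecbe5ed12ab4d8e11ba873c2f11161202b945
    livenessProbe:                 # observability hook the platform acts on
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
  restartPolicy: Always            # robust automation: the kubelet restarts it, no human needed
```

You declare the desired state once, and the platform's reconciliation loop keeps working toward it, restarting the container whenever the probe fails, with no human in the loop; that is exactly the resilience and automation the edge calls for.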
So, in summary, applying cloud native thinking to the edge brings a few things: standardized declarative APIs and immutable infrastructure, with automation and resiliency built in. What this really allows us to do is scale our whole operation and our business model, which is really a margin-driven business, through software rather than people, and that's what makes the edge feasible as a concept and as a business: a reduced total cost of ownership, higher resiliency, faster time to recovery, and a faster time to market.

The way we at Kubermatic like to look at this is the same way we think about the levels of automation for self-driving cars. Level zero is me in my high school parking lot learning how to drive a stick shift, grinding the gears and making a lot of unforced manual errors. Level five is your Tesla self-driving car: you hop in, tell it where you want to go, and it takes care of the rest; the system can perform all the tasks it needs to under any condition without human intervention, though a human can still watch over it. The problem with a lot of IT systems right now, as we see it at Kubermatic, is that they're at level one or level two of automation, with scripts built around them. What cloud native technology really helps us unlock is level three, level four, and level five, where the system runs by itself and automates a lot of the manual processes we'd otherwise have to do. And that's really the largest benefit of cloud native thinking that edge computing needs in order to function as a business and operational model.

Now, to tie that into a specific use case, and to get all the buzzwords of the day onto one slide, I'm going to talk about 5G, edge computing, and cloud native all together. If we dive into 5G: as you've seen, this has been a hot topic in the industry for quite a while. Once again, you have the classic hockey stick graph of what 5G will unlock in different industries like media, agriculture, construction, energy, and manufacturing. What 5G really unlocks for us is much greater capacity in our communications technologies: faster speeds, a higher number of devices, and more efficient networks through features like slicing. This allows us to connect remote campuses in areas like oil, gas, and mining, manufacturing, or remote healthcare, the traditional business verticals. But on the other side, it also unlocks a whole set of new business opportunities and verticals, including IoT and edge computing, AR and VR, autonomous driving, smart cities, and Industry 4.0. So it's transforming old industries and creating whole new ones, and that's why 5G has been tied to so many exciting new technologies.

On the flip side, it also has some challenges. 5G requires five to ten times more base stations to operate, and those base stations are out in remote environments. We need dynamic provisioning to handle network slicing, and the higher number of devices needs a global data experience: as I move from cell tower to cell tower, how does the network still recognize that Bill Mulligan is Bill Mulligan and should be able to access it? And how do we deal with the security risks? Even right now we're seeing people burning down supposed 5G towers; they're out in risky locations.
5G also requires hardware acceleration for network slicing, so we're starting to see the integration of GPUs into the network itself to accelerate the throughput of packets, and the integration of cloud native network functions, whole new deployment models that must be integrated into the legacy environment. Many telcos today still have physical network functions and virtualized network functions, and now they have cloud native network functions; each of these needs to integrate with the others to ensure full operability. So 5G is not a magic technology that will solve everything; it comes with a whole new set of requirements and challenges beyond the traditional telecommunications technologies.

But if we think about this in a cloud native way, the same way we do for edge computing: if we have five to ten times more locations, then once again manageable, immutable infrastructure allows us to scale out our operations. If we need hardware acceleration, edge computing and cloud native thinking allow us to do local processing of that data. For integrating into and with legacy equipment, network functions, and operational styles, the great thing about cloud native thinking is that it has declarative APIs with clear contracts about how it integrates into a system and how other systems can integrate with it. Cloud native network functions are a new operational paradigm, a new way to run networks, and we can make sure they're running correctly if we have an observable system. And for the final part, five to ten times more locations, robust automation really helps us deal with these problems at scale.

If you want to get involved in any of these opportunities, including edge computing, 5G, or cloud native thinking in general, there are a lot of great open source communities to join. This webinar today was brought to you by the great people at LF Networking, so thanks to Jill and Brandon for helping set this up. LF Edge is also building a lot of great technologies for a unified edge, and the Cloud Native Computing Foundation obviously hosts Kubernetes and all the technologies associated with it. Finally, there's CNTT, the Cloud iNfrastructure Telco Taskforce, which is defining how to run cloud technologies in telco environments. I'm actually the work stream lead for the reference conformance work stream based on Kubernetes, and we meet every Thursday at 1600 UTC. Please feel free to message me if you'd like to join, help out, and change how we do networking in telcos and unlock all these great new business opportunities. So with that, any questions? This is my Twitter handle and my email; please feel free to contact me at any point if you have questions. Thank you for joining today.

Great, thank you, Bill. We just have a couple of questions to go over, and if any more pop up, please use the Q&A window at the bottom. First question: how is Kubermatic specifically working with the larger LFN community on cloud native network functions?

Yeah, so there are quite a few ways that we're doing that.
As I said before, I'm heavily involved in CNTT, a sub-project of LF Networking, to define how we should run different cloud technologies, in my case specifically Kubernetes, in telco environments, and the Kubernetes reference conformance work stream ensures that each implementation from a vendor is compatible with the requirements it sets out. Kubermatic as a company is also heavily invested in open source: all of our software products are open source, and we were a top-five committer to the Kubernetes project last year. The whole focus of our company is really making sure the open source community is as strong and successful as possible.

Great, thank you. A sort of related question: how does participation in open source initiatives help companies in general progress with their networking and edge solutions?

I think we can really see this in the success of both Linux and Kubernetes, probably the two most well-known open source technologies. Participating in these communities lets companies, first, leverage the power of open source, which is really having the best technology solutions available on the market to deploy and manage their applications. And then, in terms of actually contributing, it gives them a voice in the community. There are multiple ways to get involved in a community; a lot of people think the only way that counts is writing code, but I actually contribute to open source without writing any code. I would not consider myself a coder at all, and I still contribute in a variety of ways. One is through CNTT, which gives us a voice in the telco world and makes other companies aware of what we do. Another is through the CNCF, which just launched an end user tech radar that lets companies say what they're actually using in testing or production for different technologies; the first one was around CI/CD and which tools companies are actually using, and that knowledge gives other companies insight into what the best practices in the field really are. So there are multiple ways for companies to contribute and to enjoy the benefits of open source technologies.

Great, thank you. Just a couple more questions. What are some trends you're seeing as cloud native network functions, CNFs, become more pervasive across the telco edge?

A couple of trends I'm definitely seeing right now. The biggest is that Kubernetes is definitely coming onto the scene: we're a vendor of a multi-cluster solution, and we've now been included in multiple RFIs from vendors looking to use Kubernetes to deploy their cloud native network functions. I think that's really the biggest thing, the switch away from a virtualized world into a containerized world. It's a massive mindset and operational change, and it's the trend I'm most excited about, because it creates a whole new paradigm of what you're able to do with the technology. The other trend I'm pretty excited about is the telcos getting behind open source. A lot of the work we're doing in CNTT is really exciting; seeing the whole industry come together to define what the best practices are is really cool to see.

Awesome. It looks like we just have one more question here.
How do Kubernetes clusters run on constrained resources at the edge?

Yeah, that's actually a really good question, and I have a blog post coming out soon about that. There are multiple ways to look at it. One is trying to shrink down the Kubernetes cluster by making a fork of it and cutting things out. But we at Kubermatic don't believe that's the way to go, because then you run into the same problem you had before: having to run multiple different infrastructure stacks instead of one consistent stack all the way from the cloud to the core data center out to the edge. What we think companies should do instead is redefine what a Kubernetes cluster actually is. For an end user, the only thing that matters about a Kubernetes cluster is that their application is running, and the only thing running their application is the worker nodes. The control plane isn't actually relevant to the application while it's running. Yes, it matters in terms of management, but once your application is running, you don't need the control plane for it to continue to function; you won't be able to update it, but it will still function. So the actual functional unit of a Kubernetes cluster is really just a worker node with a kubelet. As you go out to the constrained device edge, you can redefine what a Kubernetes cluster is: all it really is is a worker node with a kubelet on it. You can separate the concerns of the control plane and the data plane, the classic "do one thing and do it well" separation of concerns, and shrink what you think of as a Kubernetes cluster down to much smaller devices. Instead of having to run a highly available master setup with the API server, the controller manager, etcd, and all of that on the edge, you can abstract those out a layer. This is what I was talking about before with cloud connectivity: you can change where you deploy different things based on your actual resource constraints. If your cluster is just a worker node running a kubelet, and you move your control plane up a layer into the cloud, that really unlocks where you can put Kubernetes (a minimal sketch of that split follows at the end of this transcript). That's actually one of the things we're doing at Kubermatic: we have an open source multi-cluster management solution, Kubermatic, which you can find on GitHub. It allows you to separate the control plane and the worker nodes, so you can place the control plane at a higher layer of the edge, where resources are more available, and keep the worker nodes on the edge. I think that's a really exciting thing that's going to be coming out pretty soon.

Very cool, thank you. All right, it looks like we don't have any more questions, so thank you everyone for joining us today. The recording of the presentation will be available on demand starting tomorrow, and everyone who registered will get a link to it in their email. Thank you, Bill, for your time; this was really informative, and we appreciate you chatting with us today.

Yeah, thank you. All right, have a great day, everybody. Thanks again. Bye bye. Thanks for joining.
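To picture the control plane/worker split from that last answer, here is a minimal sketch of the kubeconfig an edge node's kubelet could be started with (via `kubelet --kubeconfig=...`): the API server, controller manager, and etcd live at a higher layer, and all the worker needs is a route to them. The endpoint and file paths here are hypothetical, not Kubermatic's actual setup.

```yaml
# Minimal sketch: kubeconfig for a kubelet on an edge worker node whose
# control plane runs in a regional data center or cloud, not on the device.
# Cluster name, server endpoint, and certificate paths are illustrative.
apiVersion: v1
kind: Config
clusters:
- name: edge-cluster
  cluster:
    server: https://control-plane.edge.example.com:6443  # remote API server
    certificate-authority: /etc/kubernetes/pki/ca.crt
users:
- name: edge-kubelet
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client.crt
    client-key: /var/lib/kubelet/pki/kubelet-client.key
contexts:
- name: edge-kubelet@edge-cluster
  context:
    cluster: edge-cluster
    user: edge-kubelet
current-context: edge-kubelet@edge-cluster
```

The node stays a plain worker with a kubelet; if the WAN link to the control plane drops, already-running pods keep serving, which matches the resilience point made earlier in the talk.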