Hi everyone, and welcome to this KubeCon edge virtual session about "develop once, deploy anywhere": flexible infrastructure that extends to the edge. My name is Frank Zdarsky and I'm a senior principal software engineer at Red Hat's Office of the CTO.

For almost two decades now, there has been a trend for businesses to centralize IT services in large data centers or in the public cloud. That made sense, and in many cases it still does, because it lets them leverage economies of scale and outsource infrastructure operations and capital expenditures to somebody else, somewhere else. However, most businesses are still brick and mortar: their production has a physical location, and their customer contacts happen at a physical location. Edge computing is gaining momentum exactly because businesses are increasingly discovering the value they can generate by bringing services back closer to where they are needed, where their operations are, or where the data is.

Edge computing and cloud computing are somewhat counter-trends, but they are not an either-or decision. In many cases, services will be distributed between the cloud and one or more tiers of edge computing, with hundreds or thousands of deployments. For example, many businesses start by collecting data and pre-processing it at the edge while aggregating and storing that data for later analysis in the cloud. Over time, though, businesses discover they can generate even more value for their operations and their customers if they not only collect more data but also analyze it and take decisions and actions locally at the edge. Advances in artificial intelligence, robotics, imaging and visualization, and real-time process control will further increase the momentum back towards the edge.

The challenge in adopting edge computing is that use cases are extremely diverse, and so is the edge computing infrastructure supporting them. Let me give you a couple of examples from our customers. Folks coming from IoT tend to focus on devices like sensors and actuators that are microcontroller-based. These have only a few tens of kilobytes of memory and no memory management unit; they run FreeRTOS or Arduino, but not Linux and most definitely not Kubernetes. Others, looking at automotive, robotics, smart displays and so on, are often interested in single-board computers that have lower-end Intel or Arm CPUs and a bit of RAM, but all kinds of networking and I/O already on board. Talking to people from telco about 5G, or from manufacturing about process control, they tend to require powerful servers with lots of cores and RAM that are extensible with all kinds of specialized hardware accelerators and are remotely manageable.

Now, if your business use case requires some services at the edge and others in the cloud, you'll be wondering what kind of infrastructure to support them with, to keep end-to-end service development and infrastructure operations as simple and consistent as possible, and to be able to evolve your architecture by adding or moving services later. A typical response I hear is: why not simply run Kubernetes everywhere? So let me briefly discuss when Kubernetes is the right choice for the edge and when it isn't.

There are two fairly obvious cases. If the services you'll be running and the load on them are known in advance and stable over time, so you can right-size your hardware to the use case; if you don't need high availability; or if you're embedding a compute unit and therefore can only ever fit a single unit anyway, then you really do not need a container orchestration layer. And no matter how lightweight that layer is, you're better off using something like Podman as an extremely low-overhead way of running pods of containers, updating them, restarting them, and so on.
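To make this concrete, here's a minimal sketch of what that can look like. Podman understands plain Kubernetes Pod manifests via `podman play kube`, so the same YAML you develop against a cluster can run on a single device; the pod name and image below are placeholders I've assumed for the example, not something from the talk.

```yaml
# sensor-pod.yaml: a plain Kubernetes Pod manifest.
# The name and image are hypothetical placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: sensor-gateway
spec:
  containers:
    - name: collector
      image: quay.io/example/sensor-collector:latest  # placeholder image
      ports:
        - containerPort: 8080
      resources:
        limits:
          memory: 128Mi  # right-sized for a small edge device
```

On a single embedded box you'd run `podman play kube sensor-pod.yaml`; on a cluster, the very same file goes through `kubectl apply -f sensor-pod.yaml`. One artifact, two deployment targets, which is the "develop once, deploy anywhere" idea in miniature.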
On the other hand, if you do not know your service profile or load in advance, or if it may change over time, or if you need multiple compute units to provide high capacity or high availability, you'll eventually need clustering and orchestration, and then Kubernetes really is your best choice.

I often get the question, though: is there a case where anyone would run a clustering solution like Kubernetes on a single node? Is there something between these extremes? The answer is that it's a good idea to decouple the design-time decisions, which platform to develop against and how to operate it, from the deployment-time decision of how much capacity and availability a given deployment needs. Not all edge sites have the same load on their services, and not every site is important enough to warrant redundancy. So you may deploy one node to some sites and three or more nodes to others, and then it doesn't make sense to develop against different targets or to use a different management approach for each.

You'll notice that it's mainly operational criteria that decide whether Kubernetes is right or not: the current and projected workloads, the load on the system, the need for high availability, security, the performance-to-cost ratio. Consequently, when choosing a Kubernetes distribution for your edge use case, your primary decision criteria should be these. Whether the distribution adapts to your evolving use cases, maybe running some web services and virtual appliances today and hardware-accelerated video analytics tomorrow. Whether it scales to your capacity demand, both in terms of number of nodes and from low-resource single-board computers to high-end servers. Whether it comes pre-integrated with a Linux that's designed for edge computing, for example by being able to update and auto-rollback transactionally and to make sparing use of networking. And whether the way you develop services and manage deployments at scale is consistent across all the footprints, from cloud to device edge.

Interestingly, developers starting with edge computing POCs often initially focus on criteria like whether the distribution runs on their favorite Linux distribution, or whether it uses, say, 800 megabytes of memory instead of one gigabyte like other distributions. Considering the cost of RAM these days and the much higher effort of transitioning from a POC into production, this really shouldn't be high on your priority list; look into those other criteria first.

In that context, let me briefly highlight a couple of open source projects that we at Red Hat believe are super relevant for edge computing and that we are therefore investing in. On the operating system level, we are investing in rpm-ostree and greenboot, which together allow safe, transactional operating system updates and automatic rollbacks, because the last thing you want is to break a field-deployed device that you cannot service afterwards. There's Secure Device Onboard, to securely take ownership of devices and to have an optimized supply chain from your hardware vendor to your deployment sites. And there's Keylime, for remote attestation that devices haven't been tampered with.

On the Kubernetes level, it's about the ability to download operators from an application store that extend your cluster's ability to manage non-Kubernetes-native resources as if they were Kubernetes-native resources, for example virtual appliances using the KubeVirt project, or bare metal servers using Metal³. That's very powerful, because modeling resources as Kubernetes resources allows you to manage everything in your edge computing stack using a single set of skills and tools, and that vastly simplifies operations.
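To illustrate that pattern, here's a minimal sketch of a KubeVirt VirtualMachine resource; the VM name and the container disk image are assumptions I've made for the example. Once the KubeVirt operator is installed, a virtual appliance is declared and managed like any other Kubernetes object:

```yaml
# A minimal KubeVirt VirtualMachine, managed like any Kubernetes resource.
# The name and disk image are hypothetical placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: edge-appliance
spec:
  running: true            # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: rootdisk
          containerDisk:   # boot disk shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

You apply it with `kubectl apply -f`, and `kubectl get vm` then lists it right next to your pods: one set of skills and tools for containers and virtual machines alike.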
On the operations management level, we are investing in Tekton and Argo CD, to manage whole edge computing deployments, from the infrastructure to the clusters to the services, using GitOps principles (see the sketch at the end of this transcript). We also use Open Cluster Management for observability and auditing, and to apply and enforce policies across clusters.

So far we've only talked about edge computing infrastructure, and of course it's the services on top of this infrastructure that solve your use cases. Unfortunately, discussing those would be a completely separate talk, so let me just name-drop a few on this slide that we're actually using with customers to solve Industry 4.0 predictive maintenance use cases.

Thank you very much for attending, and please contact Red Hat or myself in case you'd like to learn more details.
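As the sketch promised above: a minimal Argo CD Application that keeps an edge site in sync with a Git repository. The repository URL, path, and names are placeholders I've assumed for the example. Argo CD continuously compares the manifests in Git with the live state of the cluster and reconciles any drift:

```yaml
# A minimal Argo CD Application keeping an edge site in sync with Git.
# Repository URL, path, and names are hypothetical placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: edge-site-services
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/edge-config.git  # placeholder repo
    targetRevision: main
    path: sites/site-a                                   # per-site overlay
  destination:
    server: https://kubernetes.default.svc
    namespace: edge-services
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual drift on the cluster
```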