And welcome to the Red Hat session for the Cloud Architecture Summit, where we'll learn about the new types of engaging, impactful apps that lie at the intersection of cloud, containers, and edge computing. Our guide for today's session is Ben Cohen, Senior Product Marketing Manager, Cloud Platforms at Red Hat. Ben, welcome. Hey, thanks for having me. Hey, totally our pleasure, Ben.

Ben focuses on cloud platform architectures with expertise in cloud-native containers and hybrid apps, as well as edge computing. He helps customers efficiently extend their application environment from their core data center out to the network edge. His session is entitled Flexible Footprints: The Intersection of Edge Computing and Containers. In it, Ben is gonna show us how edge computing done right brings application processing closer to users and their data. This, in turn, can deliver new user experiences and let businesses get the data they need to make split-second decisions. And to wrap it all up today, Ben is gonna show us how to bring your cloud, container, and edge computing resources together for disruptive apps. And given we're in a distributed environment, Ben is even gonna tell us about where the edge is these days.

So a great session, and just a quick reminder before I hand it to Ben: you can download his slides and some other great assets. Under the view screen, you'll see a big red button; you can get the slides there along with the other downloads available. And to ask a question, we love that, so just submit it in the submit-a-question box. So with that, Ben, let me turn it to you, and tell us about Flexible Footprints.

Awesome, thanks again, Vance, for the intro and for having me here today. Yes, I'm Ben here with Red Hat. And in today's session, I'd like to talk to you about something that you've probably been hearing a lot about these days: edge computing.
But if you haven't, it's probably because there are a lot of terms to describe what edge computing actually is. Many folks will talk about their remote locations, such as branch offices, airplanes, factory floors. You might even hear it alongside initiatives like helping drive innovation or providing better experiences. But what do those actually mean? How does edge computing help drive these initiatives? And what are the technologies that make all of this even possible? Well, today I'm going to cover topics ranging from what and where edge is, to how businesses are using it, to the architectural considerations to think about as your business moves new modern applications out to the edge.

So what is edge, and where is it? Edge is a distributed architecture where processing (the compute, the storage, and the network) is closer to users and data, beyond just the core data centers and even the larger regional data centers. Where the edge is can vary. This diagram shows the varying edges that can exist. Now, your architecture may not have all of these layers, but the intention here is to show the relationship between the core data center over there on the right and the edge data centers in the middle. These can be connected by other providers, such as telecom providers, who themselves have their own edge layers. This part of the network can run solutions such as CDNs, for example local video stream caches, or even Open RAN solutions for wireless services. One other example could be an energy company monitoring distributed power generation. There are countless examples; these are just a couple. Beyond these layers, we traditionally see the IoT layer, over there on the left. This is where the devices or sensors themselves are.

So now that we're on the same page about what the edge is and where it can be, let's see what the edge can do. Trends: many of us know that the edge isn't new. So why is edge suddenly top of mind for so many organizations?
Well, here's that phrase again: driving innovation. And the catalyst behind driving innovation is data. When you move data processing closer to where it's being generated, and you couple it with technology like AI and ML, you can use that data to identify patterns that can help the business make decisions or changes faster. A good example is identifying manufacturing problems right there on the line, or anticipating the need for maintenance out on an oil rig in the middle of the ocean, where it takes time to get the right parts and the right people on site, and you can't afford to just shut the rig down and stop work. You can also discover new opportunities or even shopping behaviors. These can provide the data that you need to ensure that you have the right inventory amounts on hand, or you could deliver new offers to attract new customers on the spot. Or, for something new, you could introduce virtual dressing rooms to try on clothes without even getting in a car and driving to a store.

So with all of this happening at the edge, what does it mean for the infrastructure? With edge, you now have a more flexible approach as to where you can place your applications. If your business, your applications, or your customers' needs change, you can adapt quickly, and you can deliver the best user experience or perhaps even extend your services to new remote locations that you couldn't reach before. If you're in an industry that's subject to data privacy or compliance requirements, edge computing can help you meet those goals too. By providing greater opportunities for localized collection, processing, and storing of customer data, edge computing enables businesses to extend their services to locations no matter how geographically distributed they are and still meet those requirements.

So with edge, we see a few main application patterns. Let's start with operations. This goes back to the combination of edge computing and AI/ML-based applications, where the goal is to act quickly based on input gathered locally.
A good example would be gathering and processing industrial sensor data near where it's being generated. So think of lines in a factory. Number two, latency-sensitive applications. These are the use cases where application experience is the key objective. For example, AR/VR, online gaming, or really anything else where response time is important. With these applications, the goal is to reduce latency or offer continuous application availability, even if connectivity back to a central data center might be intermittent or could be interrupted entirely. An example of this: a ticketing kiosk at an airport. Not only does operating on local resources mean faster service, but in the event of a service issue or an outage, customers could still continue checking in based on locally cached passenger manifests. And then there's provider. Provider is for network services that benefit from scale or low latency, where controlling and ensuring proper data transmission is the concern. So an example here could be distributing CNFs, cloud-native network functions. These could be VPN gateways or VoIP services. You can bring all those to the edge. You can even bring virtualized routing and switching out to the edge alongside edge applications. Also take note at the bottom here that edge use cases can span categories. These three aren't rigid silos; they're just patterns to keep in mind.

So let's talk a little bit more about these applications. Many of those that we see at the edge are containerized. As mentioned, they are using new technologies to drive deeper engagement or use data to benefit the business in some way. Containerized applications are also portable and can run pretty much anywhere. You can have an application built once and then deployed where the business needs it. And if the application strategy changes, you can move that application while maintaining consistency in development. This then leads to application lifecycle management at scale.
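The kiosk example above, keep serving from a locally cached manifest when the central service is unreachable, can be sketched in a few lines of Python. This is a minimal illustration only; the class and function names here are hypothetical and not part of any Red Hat product.

```python
import time

class CheckInService:
    """Check passengers in against a central service, falling back to a
    locally cached passenger manifest when connectivity is lost."""

    def __init__(self, fetch_manifest):
        self.fetch_manifest = fetch_manifest  # callable that hits the central site
        self.cached_manifest = {}             # last known-good copy kept at the edge
        self.cached_at = None

    def refresh(self):
        """Try to update the local cache; tolerate a central-site outage."""
        try:
            self.cached_manifest = self.fetch_manifest()
            self.cached_at = time.time()
            return True
        except ConnectionError:
            return False  # keep serving from the stale local copy

    def check_in(self, passenger_id):
        """Answer from the local cache so check-in works even offline."""
        return passenger_id in self.cached_manifest


# Simulated central service that goes down after the first call.
calls = {"n": 0}
def central_fetch():
    calls["n"] += 1
    if calls["n"] > 1:
        raise ConnectionError("link to central data center is down")
    return {"PAX123": "A. Rivera", "PAX456": "B. Chen"}

kiosk = CheckInService(central_fetch)
kiosk.refresh()                    # online: cache the manifest locally
assert kiosk.refresh() is False    # offline: refresh fails, cache survives
assert kiosk.check_in("PAX123")    # check-in still succeeds from the cache
```

The point of the pattern is simply that the read path never depends on the uplink; only the refresh path does.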
This is very important at the edge, where you can have application instances running in hundreds or thousands of locations based on your use case. So how do you manage to keep everything up to date? Well, developers familiar with tools like Git will be right at home, as they can run commands with the tools that they're already used to. Finally, containers can meet smaller, lightweight hardware requirements, perfect for edge sites with small space, limited power, limited or no cooling, and where the physical footprint is also generally quite limited.

Kubernetes distributions like Red Hat OpenShift provide the container orchestration needed to deploy and manage your containerized applications. OpenShift provides the capabilities needed to effectively manage containers, microservices, and distributed applications that run across both hybrid and multi-cloud infrastructures, as well as out at the edge. Together, containers and Kubernetes provide the much-needed agility, flexibility, portability, and scalability so that developers can write code once and deploy anywhere, and then lifecycle-manage applications from core to edge to cloud consistently.

Let's zoom into one use case as an example: AI/ML. Edge and AI/ML truly go hand in hand by bringing lightning-fast compute out of the data center right to where the data or devices are. This means that decisions can be made without concerns about internet connectivity, reliability, speed, or latency. And the faster decisions can be made, the more relevant the insights are. Here's a typical AI/ML lifecycle. The first stage, there on the left, is gathering and preparing data. This will be used for AI/ML model development, testing, and training. This stage is also known as data ops, and data engineers are the main persona responsible for it. After that, the data scientists will develop and train the ML models using the data that has been gathered and prepared.
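As a rough sketch of those first two stages, gathering sensor readings and training a model on them, here is a pure-Python illustration. A real pipeline would use proper ML tooling; the trivial threshold "model" below is just a stand-in for the idea.

```python
import statistics

def prepare(readings):
    """Stage 1 (data ops): clean the raw sensor readings by dropping
    missing samples before they reach model training."""
    return [r for r in readings if r is not None]

def train(readings, k=3.0):
    """Stage 2: 'train' a trivial anomaly model that flags anything more
    than k standard deviations from the mean of the training data."""
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    def model(x):
        return abs(x - mean) > k * stdev
    return model

# Raw vibration readings gathered at an edge site, with one dropped sample.
raw = [10.1, 9.8, None, 10.3, 9.9, 10.0, 10.2]
model = train(prepare(raw))

assert model(25.0) is True    # far outside the normal range: anomaly
assert model(10.05) is False  # within the normal range: fine
```

The trained `model` is the artifact that later gets deployed to the edge for inferencing, while the gathering and training above would typically run where the data and compute live.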
Next, number three, data scientists will work with software developers to integrate the ML models into the app dev process. Then software developers will work with DevOps to deploy the new apps into production and start doing inferencing on new data. And finally, on the right, the ML models have to be continuously monitored to make sure that they continue to make the right predictions as they see new data. This table shows where these steps typically take place. From the edge perspective, the main steps are the gathering and preparation of data from various sensors or devices, and then, on the right, deploying and monitoring the new apps at the edge, close to where the data is generated, so predictions can be made in real time.

But let's not just talk about the applications. Let's talk infrastructure and more considerations to keep in mind. So what makes edge unique, and what are some complexities to keep in mind when designing an environment? First off, there's scale. Now, instead of clusters of servers in one or two locations, you might have hundreds or thousands of sites with just a few servers in each. How do you manage, service, and upgrade them? In fact, many edge locations might be completely unstaffed. How do you accommodate not having people there? Next is interoperability. With so many deployments, not all are gonna be alike. Can sites A and B be built differently and still work together? Then there's variability. Unlike a large data center, not all edge computing has reliable power, connectivity, space, and cooling. How do the equipment and architecture handle intermittent or non-existent utilities?

Let's break down each of these in more detail. Management at scale: now you've got thousands of applications or servers, many far away, and all of them need to be managed. How do you do it? That challenge can be addressed through a consistent, centralized management approach from core to edge.
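One way to picture that centralized approach is a reconcile loop: a single desired state defined at the core, compared against what every edge site reports, with the differences re-applied wherever a site has drifted. The sketch below is generic Python under those assumptions, not the API of any actual management product.

```python
def reconcile(desired, sites):
    """Compare each edge site's reported state to the centrally defined
    desired state, and return the per-site changes needed to converge."""
    changes = {}
    for name, actual in sites.items():
        delta = {k: v for k, v in desired.items() if actual.get(k) != v}
        if delta:
            changes[name] = delta
    return changes

# Desired state, defined once at the core data center.
desired = {"app_version": "2.4", "tls_min": "1.2"}

# State reported back by three (of possibly thousands of) edge sites.
sites = {
    "store-0001": {"app_version": "2.4", "tls_min": "1.2"},  # compliant
    "store-0002": {"app_version": "2.3", "tls_min": "1.2"},  # stale app
    "rig-north":  {"app_version": "2.4", "tls_min": "1.0"},  # policy drift
}

plan = reconcile(desired, sites)
assert "store-0001" not in plan                      # nothing to do
assert plan["store-0002"] == {"app_version": "2.4"}  # needs the upgrade
assert plan["rig-north"] == {"tls_min": "1.2"}       # needs the policy fix
```

The appeal is that the operator maintains one definition of correct, and scale comes from running the comparison everywhere rather than hand-managing each site.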
What you want is consistent multi-cluster management. It allows you to centrally create, update, and delete Kubernetes clusters across private and public clouds. It lets you search, find, and modify any Kubernetes resource across the entire domain. And you can quickly troubleshoot and resolve issues across your federated domain. Consistent management also reinforces a consistent, policy-driven approach, so you can centrally set and enforce policies for security, for applications, and for infrastructure. It also provides immediate visibility into your compliance posture based on your defined standards. But with edge, it's not just about the infrastructure. Again, what about the lifecycle management? An advanced management platform allows you to easily deploy and update applications at scale, as well as deploy applications from multiple sources.

Interoperability: at the edge, you'll find a lot of different hardware types and sizes. Starting on the left, you can see that many devices like sensors or actuators are very small and could be microcontroller-based. Now, a device that small, with that little power, will likely have very limited memory, probably in the tens or hundreds of kilobytes, and therefore it's going to run extremely light software. Moving on to the middle, other use cases such as automotive, robots, and smart displays might have a low-end Intel or ARM CPU on a single-board computer, an SBC. These could have a few gigabytes of RAM, and they might even have networking or some IO on board. Then on the right, uses like telco might require powerful servers with lots of cores and lots of RAM. These power-hungry devices might even be on a UPS or have remote management through a BMC. Now, all of these need to work together.

Variability: in addition to varying hardware, there are other environmental variables at the edge that aren't usually found in standard data centers. Space: how much room is there? Is it in a small closet? Does it need to fit in a vehicle? Does it need to be carried?
And what about infrastructure on a satellite? Power: is power guaranteed, or is it unreliable? Is it AC? Is it DC? Is it remote, not connected to any power, completely off grid? And then the network: again, how reliable is the network connectivity at edge sites? What's the latency gonna be like? And again, how fast is it? All of these factors need to be considered and can vary from site to site.

So you've decided that edge is right for you. Great. What other considerations need to be taken into account? How will you develop, deploy, and then manage those applications? There are a lot of things to consider; here are just a couple of examples. When developing applications, what tools will you use? How will you perform upgrades? When you're deploying applications, what sorts of processors are you gonna use? What about the OS? And what about the networking and all of the policies that need to be set? And then for managing, what will the interface be? How will you view reports? How will you handle the lifecycle?

Understanding your use case or use cases is key. We've seen on several occasions where an organization started with one use case, but then it evolved to include others. Remember, no silos here. This has an implication on what you choose as your Kubernetes platform, as well as on your need for application and data services, such as middleware and storage. The edge should be treated as an extension of a hybrid cloud. It provides consistency of operations, it helps scale IT and developer teams, and it gives flexibility to move applications where the business needs them, whether that's on-prem, public cloud, or the edge. Remember: develop once, deploy anywhere. The ecosystem that you choose is really key. It should be open to work across a broad ecosystem of technology partners. When edge sites have variability, you wanna make sure that the hardware and software required can address the needs of your use case.
Luckily, and thanks to the open source community, there are plenty of projects to help address these. I'm not going to go into detail, as each one of these could really have its own session, but they're good examples to start with. Maybe some of these are familiar, maybe not, but the bottom line is: if you need to get the most out of your applications, moving those applications to the data might be the answer that you're looking for. So with that said, I know there are some questions, so I'm gonna turn it back over to you, Vance.

Ben, thanks very much. Really provocative look at the edge and how it's getting a lot of renewed interest right now, and we love the idea that we're connecting that with a distributed environment, with containers and Kubernetes; a lot of our audience is right in that world. So thank you very much. Thank you very much. You're welcome, you're welcome.

In fact, you mentioned questions, and we do have them. We have kind of a mix here of real technical implementation questions, as well as big-picture ones. So let's kind of start with this one. You know, there's been lots of conversation in the last six to 12 months, especially in the wake of COVID, about the edge and how people are really beginning to focus on that again. What do you think is driving the interest by so many different companies in their edge architecture? Yeah, absolutely. So edge is not new. However, what is new is that businesses can now use technologies like AI/ML to analyze sensor or video data from IoT in real time, as well as those new lightweight containerized applications that wanna live out at the edge. Oh, fantastic, fantastic. So given there are different implications of how I can use my edge resources, including AI/ML, it kind of leads into another cool question here. What are the top considerations when architecting for the edge, especially when real-time performance is required? Yeah, so I'll do the top two here. First, you really wanna understand the environment at your edge sites.
By this, I mean understanding the amount of actual physical space that's available for hardware, the power, the cooling, and the connectivity back to the central site. The connectivity is really of special importance too. You wanna make sure that your edge sites can continue to operate even if you completely lose connectivity back to the main site. So that's one. Number two would be manageability. You really wanna make sure that you can manage your entire architecture, so both central and edge sites, consistently, so that your centralized resources and people can scale and maintain control and security.

Yeah, very good point. Let's talk a little bit about the flavors of the edge, or the implications of how companies are using it. Earlier on, the question here notes, you showed a great slide with some concentric circles of the different edge layers. Maybe walk us through that again. And from a Red Hat perspective, where are you focusing with OpenShift? Yes, and this definitely deserves going back to this slide. So Red Hat provides platforms that span from the core, which you see all the way on the right, to the edge sites right there in the middle, for enterprises and telecommunications providers. Really, right before the devices and sensors, the IoT, everything on the left: we are the infrastructure that comes before the device itself, bringing holistic solutions that include all of these devices together thanks to our ecosystem of partners.

Yeah, excellent, excellent, Ben. You know, I see time is just about up, but before you go, I know a lot of our attendees here are familiar with, in fact probably using, hands-on containers and even Kubernetes. Maybe they haven't thought too much about the implications for the edge or fully distributed end to end. Maybe you can point us to some directions of where Red Hat has some resources to help people learn more about your thinking and your technology.
Yeah, so first of all, down below we've got a lot of resources, but I'd really like to highlight an awesome e-book we just put together called, you guessed it, The Four Benefits of Edge Computing. So if you liked this presentation and you wanna learn more, you wanna hear what edge can do and the considerations to take, definitely check that out. It's a really good one.

Yeah, fantastic. In fact, I noticed when we were looking at the resources prepared for this event, Ben, that that one looked very good, and I fully endorse it myself. So folks, the link to that e-book is gonna be right under the view screen here in the breakout room. I highly recommend you take a look at that, along with some other great drill-down assets here in the room for edge and Red Hat. So with that, I wanna thank Ben Cohen, Senior Product Marketing Manager for Cloud Platforms at Red Hat, for a great tour of what Red Hat is thinking and delivering for edge, and for helping bring folks an end-to-end look at their application architecture. Ben, thank you very much. Great, thanks for having me. Yeah, our pleasure.

And just a quick note: there are so many different things going on here at Red Hat for edge, along with other traditional technology offerings for cloud, containers, and Kubernetes. Here's a slide that'll take you directly to some other great, valuable resources at Red Hat. We didn't have room for everything in the breakout room, as you can imagine looking at this list, but it's gonna be a great resource for you. These links will all be live when you download Ben's slides. And thanks again, everyone, for a great session, and thanks for the questions.