Well, good morning everyone, and thank you for attending the 2021 CNCF KubeCon North America Cloud Native Wasm Day, which is quite the mouthful. On behalf of both the program committee and all of our generous sponsors, it's my sincere pleasure to welcome those of you who are here in Los Angeles, and certainly all of you who are watching from home or later online. My name is Liam Randall. I'm the founder of Cosmonic and a long-time cloud-native participant and enthusiast. I've worked with a number of large organizations to bring different technologies into the CNCF. I'm a big believer in the power of open source and the power of community to make a difference. And to echo Ralph's theme, today's story is really about us all working together to make the future better, together.

So let us begin with where cloud-native development is today. This is, by some measure, the third great epoch of computing in the last 20 years. In the first, organizations raced to digitize their core business processes. In this age, dominated by vendors such as Sun, Dell, and Microsoft, applications were tightly coupled to a specific piece of hardware. I'm sure many of us are old enough to remember speccing out a specific PERC 6 RAID controller to run our applications. Innovations in CPU architecture and virtualization, and a desire for more efficient utilization, led us to the second epoch. The virtual machine rose to prominence to bring elasticity and optimization to the use of the underlying hardware. Dominated first by VMware and later by the public cloud, the second epoch of computing began, and the security boundary moved from the physical boundary of the machine up to the shared CPU. And while different public clouds emerged with their own unique approaches to segmentation and boundaries, the elasticity of the cloud accelerated this transformation.
And in building and managing these applications, the literal pain of managing unique individual hosts drove the creation of the third major epoch of computing. The container emerged out of that pain and suffering, and it too raised the security boundary, from the CPU to the Linux kernel. In conjunction with Kubernetes, this innovation unleashed the entire universe of innovation that we find in today's CNCF. Both the changes in application architecture and the Kubernetes abstraction from a specific cloud to a portable one were gigantic, and the opportunity in the next wave of computing is just as big, if not larger.

So let us consider for a moment what this modern cloud-native computing environment looks like. We find this ever-expanding elastic edge, or this new cloud, as Ralph just mentioned, redefining both how we need to build our applications and where they will need to run. The crisp boundaries between clients and servers are fading into peer-to-peer. And probably the largest question is: where will the execution need to live? Can all the things on the far left of this image be dumb terminals, and all the things on the right hold our logic and execution? The client-server approach is the dominant way of thinking about computing, and this entire week is full of approaches to this problem: taking what we do in a cloud-native way and simply pulling it toward some lower bound. And this works, and will work, but only to some lower bound. As our desire to move execution close to the user transcends the traditional boundaries that define cloud native itself, we find environments and assumptions where the dependence on both Linux and Kubernetes just no longer holds. Certainly the web browser itself, and the venerable iPhone, will run neither Kubernetes nor containers. And now, just as I've said that, I'm sure some stunt hackers are going to figure out a way to run Kubernetes inside of your browser. But do we really want to?
I think you understand my point. Now, as we consider where execution will run, Ralph mentioned the article I wrote recently about WebAssembly being the future of distributed computing. There are many compelling reasons why we think that execution and logic will need to live on the edge. Certainly, when we think about latency and determinism, some decisions should be made locally for performance reasons. When we think about where the data lives, large-scale data processing means that we want to put the computing as close to the users at the edge, and to the data, as possible. There are privacy and security reasons, and cases where we want limited or deliberate autonomy: drones, manufacturing, my refrigerator. If it's disconnected from the core, it had better still work as intended. And finally, I think the sleeper reason will be regulatory: governance and compliance. When you think about CCPA and GDPR, those are the growing obligations that we're going to have to execute under.

So if we accept that the future is distributed, then we are faced with a plethora of distributed-computing challenges: diverse CPU and system architectures rising not only across the edge but in the core of the public cloud itself; a world where everything is capabilities; and new application architectures such as peer-to-peer and locality-based computing. We need to consider how we're going to solve all of that. And this is, of course, a security nightmare that is hardly satisfied by an abstraction that sits at either the CPU or the Linux kernel. These concepts and capabilities don't even apply, or aren't available, in many contexts where we need to run. And of course, this all needs to work when it's disconnected. So if we itemize these three great challenges, let's call the first one, at the bottom of the stack here, portability: we need a format and a solution that addresses it.
The second great problem is going to be the security model that we use and need in the new cloud. And the third great challenge is somewhat more nuanced. It arises from the tight coupling of our applications today to specific libraries, bound to a specific capability and context. Tightly coupling your business logic to a specific capability will seem, in the very near future, as crazy as coupling your application to a specific piece of hardware or to a specific Linux distribution. And just as the cloud evolved from the traditional data center, the near future will echo the past but will be fundamentally different. This entire stack needs to evolve.

So let's walk through these three great challenges facing the modern distributed-computing landscape. The first is pretty straightforward: portability. As Ralph mentioned, with an estimated 30 billion internet-connected devices, this is part of the hyper theme that will define both the challenges and the opportunities we face over the next 20 years. With dozens of manufacturers and hundreds of CPU, OS, and microcode options, we see clear opportunities for WebAssembly here. What may not be as self-evident, but is probably more important, is that the diversity of CPU architectures is rising not just on the elastic edge but in the core of the public cloud itself. Individual companies like AWS and Apple rolling their own silicon are the harbinger of things to come. As nation-states and other companies continue to recognize the power and security risks associated with the digitization of our lives, I strongly expect us to continue to see the balkanization of the computing landscape. Expect more companies and more countries to follow suit. So for the first problem, we have a clear fit: WebAssembly gives us a path forward. Now let's talk about the second problem.
With security, concepts like embracing a boundary at the CPU or the Linux process don't make sense on the edge, and we've seen the rise of a capability-driven security approach. On your phone, when you install an application, you're prompted for access to the microphone and other concepts local to a phone. This is not a new concept; my fellow Linux security nerds will no doubt remember sudo setcap for granting privileges at the command line. But WebAssembly's strong capability-driven security is certainly a good approach for us moving forward. So for the second great challenge, we have another great fit with WebAssembly.

So let's carry this forward. WebAssembly raises the bar, but does it raise it far enough? In French, there's this incredible word, élan, which, if we translated it, means something like momentum. But it's more than that. It's that moment right before liftoff: the bird's wings are out and it's about to take flight, or the ballerina's leg swings, spinning faster and faster, picking up momentum. And just like that moment of lift, all of the momentum and explosive élan of WebAssembly will be lost unless we also confront the third great challenge. Just as the tight coupling to a physical machine or to a specific Linux distribution now seems quaint, the tight coupling of an application to a library will seem just as quaint. So what do we mean when we talk about cloud-native capabilities and tight coupling to libraries? What are these? Well, in the context of the application, these are the specific choices you make to satisfy your nonfunctional requirements: a specific database that might bind you to a specific cloud provider at a specific time, or a specific device on the edge, or a message queue, or a key-value store. Now, the real dirty secret of modern cloud-native development is that enterprise apps are primarily composed of nonfunctional requirements.
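The capability-driven model described above can be made concrete with a small sketch. This is illustrative only, not a WASI or wasmCloud API: the idea is simply that code receives the handles it was explicitly granted, rather than reaching for ambient authority like opening arbitrary files or sockets.

```rust
use std::io::Write;

// Capability-style design sketch: `write_report` can perform I/O only
// through the handle it was handed. It has no way to open other files,
// sockets, or devices on its own. WASI applies the same idea at the
// module level, granting a Wasm module pre-opened directories and
// nothing else.
fn write_report(out: &mut impl Write, data: &str) -> std::io::Result<()> {
    // All I/O flows through the granted capability `out`.
    out.write_all(data.as_bytes())
}

fn main() {
    // The host decides what the capability is bound to; here, an
    // in-memory buffer standing in for a file or a network stream.
    let mut sink: Vec<u8> = Vec::new();
    write_report(&mut sink, "hello, edge").expect("write failed");
    assert_eq!(&sink[..], &b"hello, edge"[..]);
    println!("report written: {} bytes", sink.len());
}
```

Swapping the buffer for a real file or socket changes nothing in the business logic, which is exactly the point: the authority travels with the handle, not with the code.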
The private analyses that I've seen of application data at large scale show that, in many cases, 95% of an application's code satisfies the nonfunctional requirements. And this is a pernicious risk to operating applications, never mind operating applications across a distributed edge, because the consequences painfully emerge throughout the life cycle of creating, building, operating, and maintaining these applications. From design to deployment to management to scale, and, most painfully of all, maintaining those nonfunctional requirements across a distributed landscape, all of it requires a considerable amount of developer time and effort. And it doesn't grant you the ability to make location-specific choices. That's why I strongly believe that actor models are the near future of distributed computing.

So if we're to take flight and dance across this distributed landscape, then we need to solve this final opportunity. Now, those of you who know me know that I've been working on a project with Kevin Hoffman called wasmCloud for the last few years. It is compatible with the current cloud-native stack, but not dependent upon it, as wasmCloud combines an actor model with Wasm to run everywhere, securely. But there are other approaches to this model as well. Microsoft's Dapr project, which they've put into the CNCF, also embraces this sort of abstraction. The difference is primarily that we're solving similar problems with different opinions, but the idea remains true: we need to break the tight coupling of applications to libraries. Now, in the wasmCloud context, we really want to flip the model. We give you an actor construct that lets you build applications that are 95% application logic and only 5% nonfunctional requirements, letting you defer the binding of those capabilities to runtime. And Dapr offers a similar approach. There are so many other considerations that come into play in why this is a good idea.
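To see why deferring capability bindings to runtime matters, here is a minimal actor sketch using only the Rust standard library. To be clear, this is not the wasmCloud API; it only illustrates the pattern: the actor holds pure business logic and communicates exclusively through messages, so the host, not the compiled code, decides which capability provider (a real key-value store, an in-memory stub, an edge-local cache) sits on the other end.

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::thread;

// Messages are the actor's entire interface to the outside world.
enum Msg {
    // Increment a named counter and reply with the new value.
    Increment(String, mpsc::Sender<u64>),
    Shutdown,
}

// Spawn an actor that owns its state and processes one message at a
// time. The caller gets back only a mailbox (a channel sender).
fn spawn_counter_actor() -> (mpsc::Sender<Msg>, thread::JoinHandle<()>) {
    let (tx, rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        // The actor's only state: a counter per key. In a wasmCloud-style
        // system this would live behind a key-value capability bound at
        // runtime; here it is in-memory for illustration.
        let mut counts: HashMap<String, u64> = HashMap::new();
        for msg in rx {
            match msg {
                Msg::Increment(key, reply) => {
                    let c = counts.entry(key).or_insert(0);
                    *c += 1;
                    let _ = reply.send(*c);
                }
                Msg::Shutdown => break,
            }
        }
    });
    (tx, handle)
}

fn main() {
    let (actor, handle) = spawn_counter_actor();
    let (reply_tx, reply_rx) = mpsc::channel();
    actor
        .send(Msg::Increment("visits".into(), reply_tx.clone()))
        .unwrap();
    actor.send(Msg::Increment("visits".into(), reply_tx)).unwrap();
    assert_eq!(reply_rx.recv().unwrap(), 1);
    assert_eq!(reply_rx.recv().unwrap(), 2);
    actor.send(Msg::Shutdown).unwrap();
    handle.join().unwrap();
}
```

Because the actor never names a concrete database, queue, or file system, the same logic can run unchanged in the cloud core, in a browser, or on a constrained edge device, with the host binding an appropriate provider in each location.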
As we move onto the edge, onto these tiny constrained devices, it's sort of like the Aladdin genie moment: you want those phenomenal cosmic powers, but in an itty-bitty living space. That's the world that we live in now. So by working together to solve the great challenges facing cloud-native computing today, around portability, security, and capabilities, I'm confident that we can make progress together. Thank you very much for your time. And for more information about enterprise offerings in this space, please visit us at Cosmonic.com. Thank you.