Hello, folks, and welcome to KubeCon EU 2021. My name is Daniel Mangum, and I'm a senior software engineer at Upbound, and I'm joined today by Carlos Eduardo, who is a cloud architect at Red Hat. We hope you all have had an awesome week so far, and we're looking forward to diving into a topic you likely haven't heard a ton about this week and may not be familiar with at all. However, both Carlos and I believe that this is not only an important subject to be knowledgeable of, but also one that is going to be particularly pertinent to the cloud-native ecosystem.

But before we get into it, let's take a step back and look at the state of cloud native today. This picture has become a bit of a meme over the last few years as the number of projects in the CNCF has exploded. Folks often reference the image as a representation of how complicated and wide-reaching the space has become. While those criticisms are certainly valid, I believe it is also important to recognize the tremendous innovation we are experiencing. From infrastructure management to service meshes and everywhere in between, individuals and organizations have more options than ever in designing a cloud-native platform that is highly tailored to their specific use case. Furthermore, with the rise of a default-to-open-source mindset, we have the opportunity to try before we buy, greatly reducing the pains of vendor lock-in, which has been a trademark attribute of the technology industry for decades. In many ways, we're in a software renaissance.

But isn't there something missing here? While we have a robust open-source software ecosystem, the platforms we design run almost exclusively on proprietary hardware and firmware. And until now, this hasn't really been a problem. The promise of the cloud is that we don't have to worry about the underlying machinery; we simply interact with an API. And don't get me wrong, this is a powerful model, and we will not be suggesting today that every company drop what they're doing and start building out their own foundry and developing custom silicon. Until now, this proprietary hardware model has actually worked quite well. So what makes today different from the last 50 years of computing? Or in other words, why should I care?

In 1965, Gordon Moore made a prediction about the growth of the number of transistors in an integrated circuit. His assertion was that the number would double every year, which he revised 10 years later to every two years. The implication of this prediction, which did in fact come to fruition, was that computer programmers and system architects could rapidly improve the performance of their applications simply by upgrading to the newest hardware every few years. And with the advent of cloud computing in the mid-to-late 2000s, upgrading that hardware was as simple as hitting an API endpoint or clicking a button in the cloud provider console.

Around the time of Moore's revised prediction, Robert Dennard made a related prognostication about transistors, asserting in his 1974 paper that as the size of transistors shrinks, the power density remains constant. When you combine these two properties, that is, being able to fit more transistors on the chip while the power density stays constant, the natural conclusion is that the overall performance per watt of the IC increases. This fundamental truth has driven the computing industry for many years, but both Dennard scaling and Moore's law are plateauing due to the limitations of the physical world.
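A back-of-the-envelope sketch of why those two effects compounded (the arithmetic here is an editorial addition, not the speaker's): under ideal Dennard scaling with a linear shrink factor $k$, capacitance and supply voltage both scale as $1/k$, so the switching energy per operation scales as

    E = C V^2 \propto \frac{1}{k} \cdot \frac{1}{k^2} = \frac{1}{k^3}

while transistor density grows as $k^2$ and clock frequency as $k$. At a fixed power budget, that works out to roughly $k^3$ more operations per second for the same wattage with each process generation, which is exactly the "free" performance scaling the industry came to rely on.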
For this reason, we are seeing a movement to custom hardware for specific computational activities. Frequently referred to as domain-specific accelerators, you are likely already familiar with some of these hardware categories, for example the graphics processing unit (GPU) or tensor processing unit (TPU). However, both the GPU and TPU are relatively general-purpose compared to some of the more focused domain-specific accelerators. Technology leaders are developing hyper-specialized hardware to optimize tasks such as web search, image processing, and bioinformatics in order to maintain the performance improvements we have become accustomed to as an industry. This shift comes at a cost, though. Software typically has to be modified to take advantage of the specialized hardware, meaning that the days of simply deploying your workloads to a new, similarly priced machine and seeing drastic improvements could be coming to an end. In short, hardware is going to become more and more heterogeneous.

So now that we have sufficiently buried the lede here, let's actually talk about RISC-V. RISC-V is an open-source instruction set architecture (ISA). While this may seem unsurprising or even expected to folks accustomed to the software industry, an open-source ISA is a stark deviation from the traditional model of the hardware industry. You may be thinking, aren't x86 and Arm open? We have compilers that target them, and I'm free to write my own assembly for them. That is true, but they are not freely licensed, meaning you're not able to implement your own processor that uses the ISA. Now you're probably thinking, I don't want to implement my own processor, so what's all this for? You'll notice that we, and most folks you talk to who are bullish on the future of RISC-V, are not under the impression that all hardware needs to be open source. In fact, many of them are building proprietary companies based around it. The value of RISC-V is that it is an open interface, of which there are many closed-source and open-source implementations.

A useful comparison in the cloud-native ecosystem is Kubernetes itself. Many of the companies sponsoring this very event provide Kubernetes distributions that have a unique value proposition for customers. At this point, few end users are actually installing and managing the open-source Kubernetes implementation. However, the fact that anyone can implement the Kubernetes API, open or closed, is what allows us to have a landscape like the one we looked at earlier. As with Kubernetes, there already are, and will continue to be, countless RISC-V implementations, all adhering to a common modular specification that allows implementers to cater to specific use cases while remaining a target for any tooling. For Kubernetes, this tooling is operators. For RISC-V, it's compilers (a quick sketch of what that looks like follows this passage).

On an earlier slide, I mentioned that there are trade-offs between open source and proprietary. As an industry and as a community, we must critically evaluate whether open-sourcing a project creates or diminishes value. For many years, proprietary ISAs have actually created quite a lot of value. They've allowed for a consistent set of targets for software to run on. In some ways, the barriers to entry of the microprocessor industry have been a feature rather than a bug. If the dynamics of compute performance were not fundamentally changing, we might not need an open-source ISA. But the fact of the matter is, they are. And this change necessitates a change in how the industry operates.
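To make the "for RISC-V, it's compilers" point concrete, here is a minimal sketch (an editorial example, not from the talk) of retargeting ordinary code at RISC-V. Go has shipped a riscv64 port upstream since Go 1.14, so the same source builds for either architecture by switching one environment variable:

    # Build the same Go program for x86-64 and for 64-bit RISC-V;
    # only the target architecture changes, the source code does not.
    GOOS=linux GOARCH=amd64   go build -o hello-amd64   .
    GOOS=linux GOARCH=riscv64 go build -o hello-riscv64 .

    # Confirm the second binary targets RISC-V.
    file hello-riscv64    # ELF 64-bit LSB executable, UCB RISC-V ...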
Hardware must become more fragmented to continue to satisfy our complex computing demands, but we don't want to sacrifice the ability for software to target a common interface. So once again, while proprietary does not equal bad, sticking to proprietary ISAs would diminish value and innovation over the next 50 years of computing. Luckily, a number of folks have already recognized this shift, and they have been doing the work to make it a reality. Now I'm going to pass it off to Carlos to share a little bit more about where we're at today, where we're going, and what it'll take to get there.

Thanks, Dan. So how does RISC-V fare in the panorama of cloud applications and orchestration? We are already in pretty good shape. Kubernetes already runs on the RISC-V architecture, and we can even deploy some applications onto it. Here we can see the SiFive HiFive Unmatched, the first fully featured RISC-V computer in a PC form factor. It already runs mainline Linux. The board has a quad-core processor and 16 GB of RAM, which makes building and developing applications for RISC-V much easier. On the left are screenshots showing some of its terminals running Kubernetes, some containers, and even OpenFaaS, a serverless platform, with a demo function, all beautifully on RISC-V.

Getting to this point was not easy. Over the past few months, the community and I have been submitting many PRs to open-source projects, bringing them to support the RISC-V architecture. One big milestone came last year, when Go gained upstream support for building and running binaries for this new architecture. Then I started patching more than 20 projects and sending more than 40 PRs, most of which have already landed upstream: base projects ranging from Docker, runc, containerd, and Kubernetes itself to Prometheus and others, including the support libraries they require. Then I started building many container images to be able to run Kubernetes and its applications on RISC-V: OpenFaaS, the Traefik ingress controller, CoreDNS, Flannel, and many more, all required to support Kubernetes and run these cloud applications. I also had to build the base images these applications run on, like the Debian base image. These still don't exist in the upstream repositories, so we have to build them in a separate tree. All these changes, projects, and images are tracked in what I call the RISC-V bring-up project, hosted on my GitHub account. I'll post the link at the end of the presentation so you can follow the news and the projects being tracked there.

A lot has been achieved in the past few months, but we still need a lot of help from the community, and you all can help with this. Some points still need to be addressed to allow us to progress, like having official support from Linux distributions. Most of them already support building their packages for RISC-V; almost 90% of their packages already build and run on RISC-V. But RISC-V is still not in the main distribution branches, so it still has to be enabled as, for example, unstable or experimental. Once these distributions are upstream and releasing their installation packages for RISC-V, we can also make changes to image generation, so that their main images, for example Debian, CentOS, and Fedora, include RISC-V in the manifests as well (a sketch of what that means follows this passage). That will allow us to build the many applications we need based on official images.
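As an illustration of what "RISC-V in the manifests" would mean, here is a sketch using today's tooling (the riscv64 entry is the goal being described; most official images do not carry one yet):

    # An official image tag is really a manifest list with one entry
    # per architecture; today debian:latest lists amd64, arm64, and
    # others, and the goal is a riscv64 entry alongside them.
    docker manifest inspect debian:latest

    # With that entry in place, the same pull would just work on a
    # RISC-V node, resolving to the riscv64 image automatically.
    docker pull debian:latest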
Once all these images are upstream, we can start pushing new PRs to the projects, allowing their automated CI pipelines to build binaries for RISC-V, so that we can have RISC-V as a first-class citizen in the cloud-native ecosystem.

Now I will show a quick demo of me running Kubernetes on my RISC-V PC, the SiFive HiFive Unmatched, and deploying a simple hello-world application. It seems trivial, but for an architecture that got mainline Linux support less than three years ago, it's quite some progress, and things are progressing so fast that we already have other applications running on RISC-V, like Node.js and many more. Thank you very much, and I hope you enjoy it.

Here on the right, we have two windows: on top, my main computer, and on the bottom, the SiFive HiFive Unmatched. Let's take a look at our Kubernetes nodes. We have one Ubuntu RISC-V node. Let's take a look at some details. We have some containers running. It's running on the RISC-V architecture and running version 1.20.4, which is pretty new. Now let's take a look at our running pods. Yeah, we have the system pods running, and OpenFaaS, a function-as-a-service platform, running in our one-node cluster as well. Let's take a look. We have OpenFaaS communicating with our server, and we see no functions deployed. Let's deploy a figlet function. The container is created in OpenFaaS, and it's already up. Okay, take a look: it's already there. Let's test it. It runs perfectly: we have an invocation with this text that generated the figlet output. Let's bring up the OpenFaaS gateway website. It shows our function, and we can also invoke it from here. (The rough commands behind this demo are sketched after the transcript.)

Thanks, Carlos. We hope everyone has a great rest of their week, and if you have questions about cloud native, RISC-V, or the intersection of the two, feel free to reach out to either one of us.
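For anyone who wants to retrace the demo, it roughly corresponds to commands like these (a reconstruction, not a verbatim capture; Carlos's cluster ran his own riscv64 images, and only the function name figlet is taken from the talk):

    # One Ubuntu riscv64 node running Kubernetes v1.20.4.
    kubectl get nodes -o wide
    kubectl describe node <node-name>

    # System pods plus the OpenFaaS platform pods in the one-node cluster.
    kubectl get pods --all-namespaces

    # No functions deployed yet; deploy figlet, then invoke it.
    faas-cli list
    faas-cli store deploy figlet
    echo "Hello KubeCon" | faas-cli invoke figlet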