All right, good morning, everybody. Let's go back one here. Don't want to bother with legal, okay. So my personal open source journey started around 1997. I was working at HP, where I worked for about 30 years, and I was running an HP-UX kernel development lab. Linux was this thing off to the side that nobody really knew what it was, and we kind of brought it in-house, and that's how my journey with Linux and open source started. In 2002, I wrote a book called The Business and Economics of Linux and Open Source. My motivation at the time was that very few people really understood how open source works: how the business models work, how the licensing paradigms work, and how all of that comes together. And just as the industry didn't fully understand all this, there was a knowledge gap internal to HP as well. So my primary motivation was actually to create an education tool for the management and executive teams at HP. It worked pretty well, and it had a big effect. Now, for all of those who consider themselves open source experts, who live and breathe open source day in and day out, in audiences like this I'll often ask a test question. And by far and away, most people fail it. The question is: how many open source licenses are there currently? I'm not going to do it here, but in past audiences I'd probe and get everything from "there's only one" to "there's a million," and answers everywhere in between. My last count, if you're curious, was 71. The reason I mention this is that in 2003, I gave the opening keynote address at LinuxWorld.
And in 2003, the reason I gave that keynote address is that I wanted to set off what I'll call a set of alarm bells to the open source community: there was a clear and present danger that was being ignored by everyone, and that danger was license proliferation. I just told you that my last count, which could be off by one or two, was around 71. In 2003, the number was 58. So in the span of about 10 years of the existence of open source, 58 open source licenses had been created. And at the time, there was a tendency for companies to create what we called vanity licenses. IBM had a license, Sun Microsystems had a license, and so on and so forth. A funny little story, for those of you around back then: Steve Ballmer, then CEO of Microsoft, would call open source and Linux an intellectual property killer, a cancer. And you saw from Jim's presentation this morning how Microsoft has now fully come on board. The interesting thing Steve did not do back then is this: if you wanted to kill open source in 2002 or 2003, the best thing Steve Ballmer could have done was to go to the industry and say, open source is the greatest thing ever, and every company should go and create a license. And you know what? At the time, given the level of understanding, I bet you every legal department would have gone out under direction of their CEO and created an open source license, and we would have ended up with an intertwined, unresolvable mess of licensing where nothing in the open source community would have worked with anything else. So now we've basically worked through that. I had some fun along the way, we've really contained this issue, open source has been able to thrive over the past number of years, and I've now embarked on this new journey. I didn't predict it. These things just happen.
And as I joined Western Digital, I embarked on this world of open source hardware, which seems like an oxymoron. How do you open source hardware? And this is not in the sense of OCP. So what I'm going to do is give you a little bit of the rationale for how we think about this, and introduce you to RISC-V and open source processing. Keep in mind that Western Digital builds hard drives and flash devices, and we have a product portfolio around that. We're not a CPU company. We don't intend to ever be a CPU company. But as you're going to see, we have taken a very, very active role in the notion of processing, and hopefully you'll understand why. Lawyers make me show this; we'll go through it quickly. All right. Clearly, I haven't been trained well enough. One of the things I do sometimes when I show this slide is say, if you can find the typo, I'll give you a free SD card. So, all right. We think in terms of data. A new tagline we came up with recently is: they call it a data center, not a CPU center, because it's about the data. So we've simplified this construct into big data and fast data. Big data, from our perspective, is very hard-drive centric: it's where you care about volumes and volumes and volumes of data at a fairly low cost per bit. And think of fast data in two constructs: one is main memory, or memory-centric computing, and the other is very high-speed persistent storage, namely flash and some of the new persistent memories coming out.
And the reality is, our world is introducing a set of applications that need to leverage both this construct of big data and fast data. For fast data, latency matters: I like to use the example of a self-driving car sending all of its data to the cloud and waiting a few seconds for an answer to come back to decide whether it should turn left or right. Not good enough. That's where you need fast data: low latency, immediate processing close to the edge. The big data applications, on the other hand, involve insights, machine learning, analytics, and those kinds of workloads. What we noticed when we started to analyze this is that the open source world we've lived in over the past 30, 40 years was largely driven by general-purpose processing. The processors in our servers, laptops, desktops, et cetera are all basically general-purpose processors. And they're all very good; this is not a good-bad thing. We absolutely need them. I'm going to start talking about RISC-V, and sometimes people ask, is RISC-V going to be a competitor to Intel? Don't go there. Intel needs to keep doing everything they're doing. They need to be wildly successful and keep driving this world, because we will continue to have all sorts of applications and environments, Linux and so on, where general-purpose processing is the best tool for the job. But in addition to that, thanks largely to things like machine learning and artificial intelligence, our world is needing more specialized processing, and that's been our motivation. Again, not an instead-of, but an in-addition-to. And this is already happening. You're already seeing it. Most people have heard of the Google TPU, the Tensor Processing Unit. That is a special-purpose piece of hardware.
When people think about doing compute for machine learning, they tend to think about a GPGPU, a graphics processor, because it's good at vector processing, which is ideal for that world. So we're already seeing that. Other entities have created custom hardware in the form of FPGAs in order to optimize their workloads. So what we're seeing now is this expansion beyond general-purpose processing for big data and fast data. There's also been a tendency, and just so you know, a lot of people come to see me at the office: customers, startup companies that want us to invest in them. And they seem to have this notion of, "we are a software company, and we run on commodity hardware," and they wear that like a badge of honor. A safety tip: if you're one of those, don't do that. The reality is, and it's a fun little tagline, silicon is the oxygen that allows software to breathe. A guy I work with said, no, no, Martin, it's the opposite. It doesn't matter. The point is that hardware and software are now at a point where they need to come together and be optimized for each other. The amount of optimization we can do in software on a general-purpose processor has reached its limits in certain cases. And increasingly we're seeing applications, and I'll use TensorFlow again as the example, where you need the TensorFlow framework plus specialized hardware like the TPU, and it's the combination of the two, the hardware and the software coming together, that is the next generation and where we need to go. So we decided we needed to think about this again in an open-source construct, and we looked at everything, all the various architectures out there. In fact, when I joined Western Digital, I had never heard of RISC-V. So this was not some agenda thing.
The team at Western Digital had already started down this path before I ever showed up; that was the team I essentially joined, and we went forward with RISC-V. But essentially, the beauty of RISC-V was that with this open-source instruction set architecture, we had the opportunity to create big data and fast data environments tied to our devices that allowed us to optimize the hardware and the software together. To think of it very simply: we build controller chips for our flash devices, our SSDs. With RISC-V, we now have the opportunity to create custom instructions to optimize that SSD for power, for performance, et cetera. Those are things that were very difficult or unavailable to us before, and that was our big motivation to go there. Now, the big thing we want to accomplish from a data center perspective, this clicker is a little wonky, is this notion of a memory fabric. Remember I said there's big data and there's also fast data. One of the things general-purpose computing has done, and it doesn't matter if it's an Intel chip, an IBM chip, or any other chip, is make the memory interconnect a point-to-point connection. You have a CPU, a DDR bus, and then DRAM as main memory, a very tightly coupled, optimized memory bus. Well, I said we are all about the data, and we think data should be the center of the universe. So our goal is to create this centralized memory pool, or data pool, to be more precise, that any number of processing engines can come to and process the data. Rather than saying everything is general-purpose, bring the best processing engine for the task at hand to the data, and use that same data but process it in different ways: a RISC-V CPU, maybe a GPU, maybe an FPGA you've downloaded your own algorithm to, other accelerators, and things like that. This is basically our goal: this universal memory fabric.
Well, in the true notion of open source, we decided to just tell the world we were working on this, and that's why you see this thing called OmniXtend. The OmniXtend fabric is free and available for anyone to go get; it's currently in development, and we are developing it with the community and anybody who wants to come and play. The question I often get is, why this, when there are other standards out there: OpenCAPI, Gen-Z, CCIX, and so on? Well, the reason we ended up doing OmniXtend was not because we wanted to pick a fight, or to say ours is better than yours, nothing to do with any of that. In the case of CCIX and OpenCAPI, those are still point-to-point protocols. We were trying to create a fabric, so we couldn't use those. And they're not fully open, and we're really going after this fully open environment. Gen-Z would have been ideal and still might be ideal. We still actively work with Gen-Z and we still love Gen-Z, but Gen-Z wasn't ready for coherence and wasn't ready with an open physical interface, and so that's why we ended up doing OmniXtend. So we're trying to essentially standardize a cache coherency fabric which allows all of the CPUs to stay coherent on the cache front, over a routable fabric. This is going to take a couple of forms. One, we're partnering with SiFive. SiFive is taking a coherency fabric that they had on chip and extending it so that it can connect to an interface outside the chip, onto an Ethernet interface. So we're now, for the very first time, going to have a coherency fabric that can operate on an open industry-standard physical interface, namely Ethernet. But when I say Ethernet, I don't mean Ethernet in the context of networking or TCP/IP and all of those kinds of things; I mean Ethernet as in the physical interface. And because it's a fabric and we want to bring things together, we have to have a switch.
So we're going to use the Barefoot Tofino, which is a programmable switch that uses a programming language called P4, and we're going to write all of the code for that switch, and all of that code is going to be open source. We're probably about a month or so away from that code release becoming available. So we're really taking this notion of open sourcing to a whole new level, taking it to heart and bringing it into a hardware construct. About a year and a half ago, when we started the RISC-V journey, we said, hey, Western Digital ships a billion cores a year. For those of you who work in serverland, you probably think about millions of servers. We think about billions of cores. We did the analysis and we expect to double that, and we said we will transition the entire portfolio to RISC-V; this will take many, many years. One of the things we could have done was simply buy RISC-V cores: there's SiFive, Codasip, a bunch of core providers out there. They're kind of like the Arms of the world, but in RISC-V. We could have done that, but we said, look, no, we've made this commitment to RISC-V. We need to learn and understand exactly how this works. So we essentially put a team together and went off and created our own RISC-V core. This RISC-V core is not a derivative of anybody else's IP. It was built from the ground up by a team at Western Digital. Now, by the way, if you were all silicon people, you'd be going, ooh, ah, this is really cool stuff, man, okay. This is like JavaScript like you've never seen before in my world, okay. So this is now a fully open source core, and we gave it a name: SweRV. Let me unpack that backwards. The RV means RISC-V, so no surprise there. The WE has two meanings. One is Western Digital.
And the second is "we" as in collaborative, as in all of us, as in we do this together. And the S in SweRV is because we are swerving around general-purpose computing and focusing on the special purpose. I'm not going to bore you with all the details of what this core can do, but I'll give you a bit of a performance metric so you can connect it in your brain. This is a chart done by some folks at UC Berkeley looking at a range of cores. Now, I want to clarify something here: these are cores, not processors. So this is not a full Xeon processor or a full Arm processor. As you know, the processor in your laptop or your server has lots and lots of cores, and around the cores is the "uncore": memory controllers, IO controllers, all of those kinds of things. This is only the core. And there's a well-known performance benchmark for just the core, called CoreMark per megahertz, which is normalized on a megahertz basis. So this chart shows the CoreMark per megahertz of a variety of different cores. It was done around 2015, I think, so it's getting a little old, but it gave us all the data on one chart. The blue bars are out-of-order cores and the orange bars are in-order cores. SweRV came in at 4.9; it's actually about 5.1 now on a CoreMark per megahertz basis. And here's the interesting data point for you, and why this special-purpose thing matters. When we started this core development effort, our ambition, our target, was to achieve parity with our current cores. We said, we don't want to blow it out of the park; we want to be realistic, we're doing this for the first time, so let's have realistic goals. And we said if all we did was achieve parity with what we have, we'd call it victory. What ended up happening was 50% better performance, 30% better silicon area, and 30% lower power utilization. That's what ended up happening.
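Before going on: the CoreMark-per-megahertz figure above is simply a raw CoreMark score divided by the clock frequency, so that cores running at different clock speeds can be compared. A minimal sketch in Python; the numbers are illustrative only, not official benchmark results:

```python
def coremark_per_mhz(coremark_score: float, clock_mhz: float) -> float:
    """Normalize a raw CoreMark score by clock frequency (CoreMark/MHz)."""
    return coremark_score / clock_mhz

# Illustrative: a core scoring 4900 CoreMark at a 1000 MHz clock
# comes out at 4.9 CoreMark/MHz, the figure quoted for SweRV.
print(coremark_per_mhz(4900, 1000))  # → 4.9
```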
And the reason is that this core is a production-level core. It will go in our devices. When you buy your USB stick next year, it won't say, hey, RISC-V inside; there won't be a RISC-V logo on it. That's okay. But the thing is that we were able to focus on the task at hand, and as a result we were able to achieve levels of capability far beyond our targets. And for those of you who work at large companies like Western Digital and others, you know that when you go to an R&D team and say, hey, look at my PowerPoint, this is what I'm going to build, and you should get on board, they tend to say, hey, you know, I don't build my product plan around a PowerPoint. Once the team actually released the core, and it went from PowerPoint to "this is real," the amount of pull internal to Western Digital went up exponentially. We've now had a whole number of requests for all sorts of variants that are pretty interesting and would not otherwise be possible if we didn't have our own core. But we're at the open source leadership conference, and so we said, hey, our cores will be embedded cores; that's where we're going to start, and they'll be for internal use only, not an industry chip. But the core is open sourced. And some people will say, but it's a chip, it's a core, what does open source mean there? For all of you who have never designed a chip in your life, let me show you a little bit of what it looks like to build a chip. On the left is what's called RTL code. This is the code a silicon designer writes to build a chip, a core, or any other kind of silicon. On the right is C code. And to the uninitiated, if you look left and right, it all just looks like software. That's the point. Building silicon is building software. There are a few differences.
One of the things about software: when you write that C code on the right, you think in sequential terms. Every line of code gets executed in sequence, and even if you're an expert multi-threaded programmer, within a thread things still happen in sequence. The big difference with the code on the left, and if you really want to follow along, look for the "always @(posedge clk)", is that things happen simultaneously. That's the big difference between silicon development and software development: on the left, everything between that begin and end block happens simultaneously, not in sequence. Now, if you're all software people, you might ask, why do I need to know any of this, and why does it matter? Well, guess what: there's a thing called high-level synthesis. By the way, on the left it's called synthesis; on the right it's called compilation. You compile your software code; you synthesize your hardware code. High-level synthesis lets you take C, C++, and code in other languages and effectively compile it into what's on the left, something synthesizable that can produce a chip. So we are actually in the early phases of a world where even those of you who are software developers will write software whose output is not the CI/CD DevOps pipeline through Jenkins and all that kind of stuff, but something synthesizable into hardware that is optimized for a very specific algorithm, like a machine learning kernel. And that's why this is so, so interesting to us. But when we say it's open-source hardware, well, guess what? We're just open-sourcing source code. If you go to GitHub and download our SweRV core, what you're going to get is basically all of this RTL code. The language is called SystemVerilog; it's written in SystemVerilog, and that's what you're going to get.
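The sequential-versus-simultaneous distinction above can be mimicked in ordinary software. A minimal sketch (not real RTL): inside a clocked RTL block, every non-blocking assignment reads the state as it was at the clock edge and all updates commit together, which is why the classic two-register swap behaves differently in hardware than in sequential code:

```python
def sequential_swap(a, b):
    # Software semantics: statements run in order, so the first
    # assignment clobbers 'a' before the second one reads it.
    a = b
    b = a
    return a, b

def clocked_swap(a, b):
    # Hardware-style semantics: both "assignments" sample the values
    # as they were at the clock edge, then commit simultaneously.
    next_a = b   # a <= b;
    next_b = a   # b <= a;
    return next_a, next_b

print(sequential_swap(1, 2))  # → (2, 2): the swap is lost
print(clocked_swap(1, 2))     # → (2, 1): a clean swap, as in RTL
```

This is why two registers in an always block can exchange values in a single cycle with no temporary, something sequential code cannot do.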
All right, so we also created an instruction set simulator. We wanted an environment where we could test our RISC-V core, and anybody else's RISC-V core, and for that we needed a simulator. So we had a completely separate, independent team write the simulator while another team was developing the core. And in the spirit of everything we're doing here, this instruction set simulator is also open source and available to anyone who wants to run RISC-V simulations. Our SweRV core is 32-bit; the simulator will do both 32- and 64-bit. All right, so that's it. Basically, we are passionate about having a memory fabric where data is the center of the universe, where any general-purpose or special-purpose processing engine can communicate through that open fabric and access the data at very low latency, in a cache-coherent way, built on this ubiquitous Ethernet fabric that everybody has access to. We did our very first RISC-V core, and we're just getting started; as I said, we're already working on variants. But let me give you another piece of data. For all of you who have been working in open source like I have since the late 90s, your psyche has adapted to how all this stuff works, and you take it for granted. In the silicon world, this is all new, folks. And the thing that's fascinating for me, and the reason I mentioned the book at the beginning, is that all of this very early education is where I find myself again today, because the people in the silicon world are asking all the same questions: how does this work, what about licensing, what about my IP, how does this collaboration thing work, and so on and so forth. It's essentially a complete deja vu, and it's fascinating to see. I'll tell you a story.
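As a brief aside before the story: the heart of any instruction set simulator like the one described above is a decode-and-execute loop over instruction words. A toy sketch, handling just two standard RV32I instructions (ADDI and ADD); a real simulator covers the full ISA, memory, and traps:

```python
MASK32 = 0xFFFFFFFF  # registers are 32 bits wide

def execute(inst, regs):
    """Decode one 32-bit RV32I instruction word and update regs."""
    opcode = inst & 0x7F
    rd  = (inst >> 7)  & 0x1F
    rs1 = (inst >> 15) & 0x1F
    rs2 = (inst >> 20) & 0x1F
    if opcode == 0x13:           # ADDI: rd = rs1 + sign-extended imm[11:0]
        imm = inst >> 20
        if imm & 0x800:          # sign-extend the 12-bit immediate
            imm -= 0x1000
        regs[rd] = (regs[rs1] + imm) & MASK32
    elif opcode == 0x33:         # ADD: rd = rs1 + rs2
        regs[rd] = (regs[rs1] + regs[rs2]) & MASK32
    regs[0] = 0                  # x0 is hard-wired to zero

regs = [0] * 32
execute(0x00500093, regs)  # addi x1, x0, 5
execute(0x00700113, regs)  # addi x2, x0, 7
execute(0x002081B3, regs)  # add  x3, x1, x2
print(regs[3])             # → 12
```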
There was one of these IP providers, these intellectual property providers, and when we released our RISC-V core to the open-source community, their instant reaction was, you are competing with me. I said, wow, that's interesting. We're not in the CPU business, we're not in the IP business; I don't see exactly how I'm competing with you. And I said, why don't you just take the SweRV core off of GitHub, include it in your portfolio, wrap your tools around it, and sell it? There's a little company we know called Red Hat that did something similar with the Linux kernel. They just looked at me dumbfounded, and I said, I just accelerated your roadmap by a year. It was a complete aha moment. So this stuff that all of you take for granted is going to be brand new in that world, and I need your help. Because, as I said, the real world is not hardware versus software, and there are no winners and losers; the only winning comes when the hardware and the software come together and are optimized to work together. And you're going to see this new world where software becomes hardware in order to be fully optimized. But I've got, let's call it a 15- to 20-year gap of DNA, of psyche, of people who need to get up to speed just like all of you did in learning how open source works. And that's what a big part of the RISC-V Foundation is all about. So I'm going to ask Jim Zemlin to come back up here and do a special introduction with me. All right, thank you, Martin. Let's give him a hand for an amazing talk. So Martin has been the interim CEO of the RISC-V Foundation, and speaking as somebody who runs a foundation, it's sometimes a bit of a thankless task. I'm getting fired for it. So we're going to let Martin go today.
But when he called me up and said, hey, we're looking for someone to come run this, it was a tall order, because you need somebody who is technically savvy, who understands the world of silicon and CPU technology, who is emotionally intelligent, who can lead through influence and herd cats to create great outcomes. And immediately to my mind came Calista Redmond, who we want to introduce as the newest CEO of the RISC-V Foundation. Please welcome Calista Redmond. Welcome. I may regret this, Calista, but welcome, and we're so happy to have you. Thanks, thanks so much. Well, you've built a great foundation, not just in the sense of an org, but a set of technical artifacts that we can continue to innovate on, continuing to build the community and engage a wide variety of stakeholders. So it's super exciting for me to join. All right, thank you. Thank you. We're happy to have you on board. Thanks, everybody. Look for more from Calista. Thank you, Martin. I look forward to great things out of RISC-V.