Thank you, Mike, for that. Am I coming through okay on the audio? Okay, excellent. Mike already covered some of this, but this is a homecoming for me. I come back to the campus quite often, but it's been a long time since I lived in Ann Arbor. I spent the better part of 10 years at the University of Michigan and in Ann Arbor. I did my undergrad here, worked for a couple of years at a company called Applied Dynamics International, which is where I met Trev, he was a consultant there at the time, and did my PhD here in the CSE department.

What I thought I'd do for today's talk is describe some of the work I did in my early career at Intel Labs after I joined the company back in '96, work that became Intel Virtualization Technology. It was a project that actually started in Intel Labs, and it's an example of one of the ways that research investments at Intel have made their way into practice. As part of that, I'd like to draw some lessons that I took from the project over the years, apply them to some of the new research we're doing inside Intel Labs, and give a peek at some of the stuff that's active in the lab. And by the way, I really like taking questions during talks, so if there's anything puzzling you, just throw up your hands and I'm happy to make this more interactive.

Okay, so this is the story of Intel VT. Virtualization has many different meanings in computer science, so just to center us: I'm talking about hardware virtual machines, the ability to run multiple operating systems on the same physical platform. Some terminology: a VM is a collection of resources, processor, memory, IO devices, that you present to an operating system so that it can run as a guest. And the virtual machine monitor, the hypervisor, is the layer of software that controls those resources.

If you look at the ecosystem today, virtualization is well established. It's the foundation of cloud data centers; infrastructure as a service is based on VMs. We see it in client systems: if you've got a Windows PC, there's something called virtual secure mode, which is the way we can create a portion of the machine that's secured from the rest of Windows. So that's an example in client. We have it in edge deployments: as 5G gets built out on Intel servers, the network functions, which used to run on purpose-built hardware, are now running in virtual machines, and we're aggregating them together on the same server platform. And there are examples in embedded as well. So it's really well established throughout the industry.

But if you look back to the late 90s, it was pretty much a forgotten technology, even though it was actually ancient at that time. Back in the 70s, IBM had invented virtualization for their mainframe designs. And then something happened; it was almost like it just got forgotten, at least in the context of x86 platforms, which were on the ascendancy during this period. So the question is: why was that the case?

Part of the answer is that at this point in time, we were still in that virtuous cycle, at least virtuous to Intel and Microsoft, that we called the hardware-software spiral. Every time Microsoft introduced a new operating system, it provided new capabilities, but it also drew on the resources of the platform so heavily that the hardware could barely keep up, and that motivated an upgrade.
We delivered more performance, and then a new version of Windows would come out, and there was a spiral that happened. And it was such that you couldn't even imagine having multiple operating systems on the same platform. You could barely run one, right? So that's one factor.

Another factor is that x86 quite simply was not virtualizable. I'll get into some of the details of that, but there were all these virtualization holes in the CPU. And it was also a much different ecosystem. There were lots of different IO devices that you could plug into a PC from lots of different vendors. It wasn't the way IBM put together machines, where they controlled the hardware and the software; everything was purpose-built by them, and they could make sure that virtualization would work. So that was a brand new challenge that would have to be dealt with.

And then the other thing is just a question of usages. Why do we need virtualization? After all, we had moved from a world where we were sharing hardware on mainframes to a world where every person had their own personal computer. Why do I need to run multiple operating systems on a machine that belongs to me? So the motivation for virtualization had kind of gone away.

But a lot of things were happening that started to change the picture. For students of microarchitecture, these are familiar graphs. We were just coming off the RISC-CISC debates of the late 80s into a world where we were cranking out generationally more instruction-level parallelism and frequency scaling in our processors, delivering performance very predictably, until we hit the power wall. That's what's shown on the left-hand side here. There's a point in time where we simply couldn't scale frequency and make the machines more complex, wider issue, more speculation and so on. We had to find a different way to deliver performance, and that's when we switched over to multi-core. In fact, one of our students, Kunle Olukotun, now a professor at Stanford, wrote his dissertation about chip multiprocessing, an exploration of exactly this, which is indeed exactly what happened. Right in this era we started ramping up the number of cores, and now it's very commonplace for any CPU to have multiple cores.

But it wasn't clear what we would do with all those cores, because we didn't have the thread-level parallelism you needed to drive them. So I'm showing a graph from another paper, one that Kris Flautner did here together with Trev and me. This is a study of desktop benchmarks, more than 50 of them or so, running on a quad-processor client machine. The benchmarks are just sorted along one axis, with utilization of the machine on the other. You'd like to see close to 100% utilization, but 25% utilization for a four-processor machine means you're using one processor, except for maybe a few cases. And this is for automated benchmarks, where you're really trying to hammer the machine and make it run fast. If you took something more realistic, like a real interaction with the system, you'd find a high degree of idle time for most of the workloads.

Okay, so we were on this path to building multi-core processors, but we didn't have the workloads that would actually drive them and make use of them. Now, these are client examples. It turns out that the same problem was happening in servers.
And I don't have a graph to show you for that, but I can tell you that at this time in Intel, there was a lot of concern that we couldn't motivate the purchase of ever more capable servers, because we didn't know how to utilize them. In fact, there was a sell-down happening. The sweet spot for us was a four-processor, four-socket server, and people were increasingly not buying those. They were going to dual-socket and single-socket because of this problem, that we didn't know how to exploit thread-level parallelism. And so this is where some of us started to ask: maybe virtualization wasn't possible before, on the hardware-software spiral, but maybe there's a case now to be made for building it into x86 platforms, and maybe that could be a way we could harvest thread-level parallelism.

But that brings us to the next problem, which is that there are all these virtualization holes in x86. If you want to virtualize a platform and run multiple operating systems on it, the first thing you have to do is take a single OS and deprivilege it. You have to run it at a lower privilege level than the hypervisor, the virtual machine monitor. And how would you do that? Well, x86 has multiple rings, four of them, zero through three. Normally the way they get used is that a single OS kernel runs in ring zero and your applications run in ring three. So the first thing you think about when considering this problem is: maybe we can just use ring one and run the operating system there. After all, it's not getting used. x86 is a very strange ISA; it has all these capabilities, the rings have been there from the beginning, but most of them don't get used, just ring zero and ring three. So we thought, ah, maybe we can do this with ring one.

But it turns out that simply doesn't work. This is a really technical diagram, and I'm not going to go into the details of it, but I just want to give you a flavor. Basically it's saying that if we run the hypervisor in ring zero and guest OSes in ring one, there are all kinds of problems. Instructions that incorporate the ring number in their computation start to break. There are writes to privileged state that the hypervisor needs to control which don't trap, so you can't properly virtualize them. There are reads from privileged state that don't trap, so you can't make the necessary modifications to the values the guest OS sees so that it runs as intended. And then there's excessive faulting: certain instructions do properly fault, but they fault so often that it's just a performance problem. So this really is a problem if we want to run completely unmodified operating systems with good performance. It essentially just wasn't possible in the state of the architecture at the time.
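Just to make those non-trapping holes concrete, here is a minimal user-mode sketch of my own (an editorial illustration, not code from the talk), for x86-64 Linux with GCC inline assembly. SMSW and SIDT are the classic examples: they read privileged state yet execute without faulting from any ring, so a hypervisor that deprivileged a guest kernel never gets the chance to intervene and substitute guest values:

```c
/* Sketch: two classic x86 "virtualization holes", exercised from
 * unprivileged user mode (ring 3). Both instructions read privileged
 * state but historically execute without trapping, which is what
 * breaks simple trap-and-emulate virtualization.
 * Build: gcc -O2 holes.c -o holes   (x86-64 Linux) */
#include <stdio.h>
#include <stdint.h>

struct __attribute__((packed)) dtr {
    uint16_t limit;
    uint64_t base;      /* the linear address of the real IDT leaks out */
};

int main(void)
{
    unsigned long msw = 0;
    struct dtr idt = {0};

    /* SMSW: returns the low bits of CR0 (the "machine status word"),
     * a privileged control register, without faulting. */
    __asm__ volatile("smsw %0" : "=r"(msw));

    /* SIDT: stores the location of the interrupt descriptor table,
     * again without faulting. */
    __asm__ volatile("sidt %0" : "=m"(idt));

    printf("CR0 low bits (via smsw): %#lx\n", msw);
    printf("IDT base (via sidt):     %#llx, limit %u\n",
           (unsigned long long)idt.base, idt.limit);
    return 0;
}
```

(One historical footnote: recent CPUs added a feature called UMIP that finally makes these instructions fault outside ring zero, so on a modern machine this sketch may trap or return dummy values. The hole was eventually plugged.)

So there were a number of different approaches to solving these problems, and different researchers were exploring them. Ian Pratt at Cambridge started a project called Xen. And Xen said: if you can't take unmodified OSes and run them on x86, maybe we can modify them. So that team took Linux and started making changes to the OS kernel, and also built a hypervisor that understood those changes. The two could then work together, and you could show that you can virtualize the platform with all those modifications. And that worked pretty well.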
High performance, pretty low complexity, if you made the changes and you had the ability to make them to the source code. But it did basically limit you to Linux. Nothing wrong with that, but if you're interested in running Windows, or just any x86 OS, that's pretty limiting.

VMware was coming into prominence at this time as well, and they had a different trick. They were using binary translation to modify the guest OS binaries, so they didn't require changes to the source code, and they could actually make it work with Windows. They went in and rewrote the code so that they got the necessary behaviors. That gave broad OS support: Linux, Windows, really any x86 OS. But it came with a lot higher complexity in that binary translation, and there were overheads with it too.

So in Intel Labs at the time, we were really interested in this, and we thought: well, we're inside Intel. If the ISA is broken, we can fix it. We can just redesign it and fix all those virtualization holes. That's essentially the journey we embarked upon. What we were going for was broad OS support, legacy compatibility, the ability to work with unmodified binaries, and high performance. The downside was that hardware changes would be required, and so we'd have to be able to make the argument for why Intel should do that. That turned out to be kind of hard to do. I'll explain some of the reasons why making that argument was difficult, but we ultimately were able to overcome it, and I'll get into that as well.

Then there's the other problem about the diversity of IO devices that I mentioned earlier. IBM controlled their machines, so they could make the IO virtualizable. In the PC universe, there's all this device diversity. There were even devices that never existed in mainframes, graphics, audio, multimedia devices, things like that, where no one had ever thought about what it takes to virtualize them. So that's another class of problems, but it turns out you can approach these in a similar manner, and it was getting done at the time.

Xen just used the paravirtualization concept: they would change the IO drivers in a guest OS to use an idealized interface. The best example was networking. Rather than try to virtualize a NIC inside a physical platform, you would just write a driver that runs in the guest and sends packets directly, and there'd be a backend driver in the hypervisor that could receive and send those packets. It was a pretty straightforward way to do things (there's a toy sketch of this idea in a moment), and you can do it with graphics and storage and other devices too.

VMware chose a different approach. They reasoned: let's try to support binaries that are unmodified, and let's just emulate the platform, simulate it right down to the details of the IO device registers. If you have a good complement of devices that are commonly found in a system, all the things that are necessary for proper support of an OS, you can figure that out, and that was a pretty effective way of doing things.

But again, working from within Intel, we were trying to figure out how to make our platforms more virtualizable in the general sense. And we did work on things like SR-IOV, a PCI-SIG standard for virtualizing IO devices.
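Here's that toy sketch of the split-driver idea, a single-producer, single-consumer ring in C of my own devising. It is only in the spirit of Xen's shared-memory IO rings; the names, sizes, and notification details are made up, and a real implementation layers grant tables and event channels on top:

```c
/* Toy sketch of a paravirtualized "split driver": the guest frontend
 * enqueues packets into shared memory instead of poking NIC registers,
 * and a backend in the hypervisor/driver domain drains them.
 * Illustrative only; sizes and names are hypothetical. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define RING_SLOTS 8          /* power of two, hypothetical */
#define PKT_MAX    64

struct pkt { uint16_t len; uint8_t data[PKT_MAX]; };

/* Lives in memory shared between guest and backend. */
struct ring {
    volatile uint32_t prod;   /* written by frontend only */
    volatile uint32_t cons;   /* written by backend only  */
    struct pkt slot[RING_SLOTS];
};

/* Frontend "driver" in the guest: send a packet. */
int fe_send(struct ring *r, const void *buf, uint16_t len)
{
    if (len > PKT_MAX) return -1;
    if (r->prod - r->cons == RING_SLOTS) return -1;   /* ring full */
    struct pkt *p = &r->slot[r->prod % RING_SLOTS];
    p->len = len;
    memcpy(p->data, buf, len);
    __sync_synchronize();     /* publish the data before the index */
    r->prod++;
    /* a real frontend would now notify the backend (event channel) */
    return 0;
}

/* Backend in the hypervisor/driver domain: drain and forward. */
int be_recv(struct ring *r, void *buf)
{
    if (r->cons == r->prod) return -1;                /* ring empty */
    struct pkt *p = &r->slot[r->cons % RING_SLOTS];
    uint16_t len = p->len;
    memcpy(buf, p->data, len);
    __sync_synchronize();
    r->cons++;
    return len;
}

int main(void)
{
    static struct ring r;
    char out[PKT_MAX];
    fe_send(&r, "hello from the guest", 21);
    int n = be_recv(&r, out);
    printf("backend got %d bytes: %s\n", n, out);
    return 0;
}
```

The point is simply that the guest's "NIC driver" becomes a queue of packets in shared memory rather than a faithful model of real hardware registers, which is why it's so much easier to virtualize.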
We got SR-IOV going, and we also started building other capabilities into the platform that allowed for things like DMA remapping, essentially allowing IO devices assigned to different guest OSes to share the same physical hardware. That provided high performance and functionality, but there were some downsides we learned about too. The advantage of VMware's approach, where you fully emulate the platform, is that it lets you encapsulate all the state, both the CPU state and the IO device state, and that gives you a way to migrate the state from one physical platform to another. So there are always these pros and cons with different approaches to virtualization.

But the thing is, these all had technical solutions. If you focused on them, there were ways, and we had confidence that we could solve those problems. This wasn't really the hard part. The hard part was making the case for why you would even want to do this, even despite what I said earlier about multi-core making it possible to deliver more thread-level parallelism.

Just to break this down: if you take a step back and think about what virtualization allows you to do, there are three basic capabilities. The first is that you can aggregate workloads onto a platform. If you have lots of servers, you can run the workloads of those multiple servers on one server and increase the utilization of that server. The second is isolation. If you're concerned about large bodies of software which may have bugs in them, a large attack surface, you can separate them, apply a principle of least privilege, and then if there's an attack on one part of the platform, the other part might not be infected. Workload isolation has its benefits from a security point of view. And then migration, the thing I talked about earlier, is also useful. If you imagine having large fleets of servers that you want to maintain over time, taking machines in and out of service, being able to migrate a workload from one platform to another is a very important capability, and it matters for orchestrating the collection of workloads inside a cloud data center, for example.

Now, making these arguments inside Intel at the time was surprisingly difficult. But if you reflect on it for a moment, it's actually not that hard to see why. On the client side, one thing we were saying was: wouldn't it be cool if you could run Windows and Linux on the same platform? That would be awesome. And the response was: yeah, we could have literally tens of thousands of users. Which at Intel is another way of saying: that's a stupid idea. We need hundreds of millions, okay? That's just not enough users; we have to operate at a completely different scale. So it's really cool for a geek to be able to do that, but it's not enough to make the argument for adding a whole new mode to a CPU, which has to be validated. It's going to be twice as expensive to do the validation. There are a lot of costs associated with that.

The server usages were a little bit easier to explain, as you can imagine. And remember, this is before the advent of cloud and Google and all of this. We would say: wouldn't it be cool if we could take server installations and consolidate them onto one server?
And then we could have these failure isolation and service migration capabilities, all the things which today are well established in practice. Everybody likes to do this. But at the time, the answer was: no, we're going to sell fewer servers. Why would you want us to consolidate? The business will collapse.

And actually, early on, at Intel we had different forums for bringing ideas to the top leadership of the company. I was a really junior person at the time, but I had an opportunity to make the argument for virtualization to Andy Grove, and to the CEO at the time as well. And Andy himself, in one of the more horrifying moments of my career, really dug into this point. He said: this is a ridiculous thing to want to do. You're going to crater our growing server business.

Now it turns out there's a really strong counter to that argument. I don't know if you've heard about Jevons Paradox, but it's basically an observation in economics that if you have a scarce resource and you figure out how to make its use more efficient, take coal, for example, that ought to drive down the consumption of coal, right? No, the exact opposite happens. When you make something more efficient, people find new ways of using it, and that drives up demand. Which is exactly what happened with virtualization, by being able to deploy lots of VMs... hey, Todd. [Audience: Did you make that argument at the time?] No, in fact, when Andy asked me the question, I think I said something like, "good point." And then I tried to shuffle and say, well, it's going to happen anyway, Andy, so get used to it. But that didn't work either. It was only later that I learned about Jevons Paradox. But it really ended up turning out okay for Intel, because we obviously were able to grow with cloud and other things. Okay, but at that point in time, people didn't want to hear it.

But other things were happening elsewhere in the company, and they had to do with Microsoft. Microsoft had come to Intel, and to AMD by the way, to request a new processor mode. It was called ISOX at the time. And the usage case was: Windows has all these security flaws, it's a large attack surface, and we want a corner of the machine where we can run some secure computation without fear of secrets being exposed. The idea was: we're never going to be able to make Windows in its totality secure, but we can at least get a corner of the machine that would be secure. And they needed hardware support for that. What they wanted, essentially, was a mode where you could flip back and forth between a secure side and an insecure side, or the left-hand side and the right-hand side as they called it. They were going to build a new OS called Nexus, and they called this thing NGSCB, which is probably one of the worst acronyms ever: Next-Generation Secure Computing Base. And they came to Intel and AMD to ask for this new mode.

So this was an opening. What we realized is: we were trying to make this case for virtualization, general-purpose virtualization with arbitrary numbers of VMs, and that is also a new processor mode. And we knew there weren't going to be two new modes added to x86. Adding one new mode is a problem. Adding two new modes is just a validation nightmare.
But we said: why can't we introduce a secure hypervisor that uses this VT mode, and have the secure hypervisor build execution environments that look just like what ISOX would have provided, which was only going to be two environments, a secure and an insecure one? That was the simple argument. The proposition was that we could address Microsoft's requirements, but at the same time leave open this option for general-purpose virtualization, and that might be a good thing. And it was a good enough argument that we did win it. Intel decided to pursue VT as the path. Remember, this was just a research project in the labs at the time, but we were able to get that to happen.

But that was just the beginning, because so many other things happened. Our desired endpoint was to get full, general-purpose virtualization into Intel platforms, and we had won the argument that ISOX-style environments could be provided to Microsoft on top of VT. We continued the definition of VT to support both client security and server virtualization. But then what happened is that Microsoft actually backed out. They decided NGSCB was too complicated and they just weren't going to do it. In the end, they came back many years later and did deliver virtual secure mode, but at this point in time they didn't. And the whole first processor implementation where we were going to do this, which was called Tejas, was canceled outright. There was no support for ISOX or VT or anything. So that was kind of a bummer.

But in the meantime, because we had started going public about VT, AMD, which had been focusing on ISOX and wasn't going to do virtualization, announced that they were going to do virtualization. Now that got the competitive juices flowing inside Intel. In parallel, VMware, Xen, and later KVM were rising in the industry, and it became clear that, oh, maybe these server usages are possible and viable after all. So we were able to adapt to the environment, and there was kind of a scramble to get a defeatured version of VT implemented in the first generation. We got that on the roadmap. Ultimately there was a platform where all this stuff came together, called Nehalem, in 2008, that implemented pretty much everything we wanted in VT. And there were many other generations after that.

I spent probably 15 years working in this space, and I'm not going to go into all the details, but we started in a world of software-only VMs using binary translation, paravirtualization, and device emulation, and what we wanted was to provide the proper hardware support so that you don't have to do those tricks in software anymore. Over time we did succeed in getting VMware and Xen onto it, and KVM, well, KVM always used VT. And we provided more hardware support over time, other things I'm not getting into the details of here, like support for high-performance page-table virtualization, what we called extended page tables. We provided all kinds of hardware assists over time that improved the performance. So this just became a roadmap of capabilities added generation after generation, and we got into a good state here.
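To give a feel for what all that hardware support looks like from software today, here is a compressed sketch of a tiny user-space VMM using Linux's KVM API, which drives VT underneath. This is my own minimal example, not Intel code, with all error handling omitted for brevity; it loads a five-byte real-mode guest, runs it on a virtual CPU, and handles the resulting VM exits:

```c
/* Minimal KVM host: run a 5-byte real-mode guest that prints 'A'
 * via port IO and then halts. Error handling omitted for brevity.
 * Build: gcc -O2 tiny_vmm.c -o tiny_vmm   (Linux, /dev/kvm) */
#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    /* Guest code: mov $'A',%al ; out %al,$0x10 ; hlt */
    const uint8_t code[] = { 0xB0, 'A', 0xE6, 0x10, 0xF4 };

    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

    /* One page of guest "RAM" at guest-physical 0x1000. */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));
    struct kvm_userspace_memory_region region = {
        .slot = 0, .guest_phys_addr = 0x1000,
        .memory_size = 0x1000, .userspace_addr = (uint64_t)mem,
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
    int run_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, run_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpu, 0);

    struct kvm_sregs sregs;
    ioctl(vcpu, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;   /* flat real mode */
    ioctl(vcpu, KVM_SET_SREGS, &sregs);

    struct kvm_regs regs;
    memset(&regs, 0, sizeof(regs));
    regs.rip = 0x1000;
    regs.rflags = 0x2;                          /* bit 1 must be set */
    ioctl(vcpu, KVM_SET_REGS, &regs);

    for (;;) {
        ioctl(vcpu, KVM_RUN, 0);                /* enter the guest */
        if (run->exit_reason == KVM_EXIT_IO &&
            run->io.direction == KVM_EXIT_IO_OUT) {
            putchar(*((char *)run + run->io.data_offset));
        } else if (run->exit_reason == KVM_EXIT_HLT) {
            puts("\nguest halted");
            break;
        }
    }
    return 0;
}
```

Every `out` and `hlt` in the guest becomes a hardware-mediated VM exit back into this loop, which is exactly the trap-and-emulate behavior that those ring-one holes made impossible before VT.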
And so that's how, coming full circle, we see virtualization in all these different market segments. But it was a tough argument to make at the time, and I learned a lot from that. There are maybe four lessons I'd like to share. The first is working at the intersection of domains. Another is learning from ideas that are perceived to have failed in the past. Another is that you've got to understand how you're going to capture value and make a good argument for it. And the last one is that it's good to have a North Star that you're focusing on; it's just that the path to getting there may not be direct.

To go into these in a little more detail: for me, the intersection of domains was microarchitecture, which Trev taught me, I learned a lot from him; operating systems, which I learned from Stuart Sechrest, studying here in grad school; and simulation, which was something I was really interested in and learned a lot about, because simulation and modeling are important to understanding microarchitecture and also to understanding operating system performance. What I didn't understand at the time, when I was learning those three areas, is that at the intersection of them is virtualization. Really, if you think about the solutions to things like IO virtualization, it's a simulation problem: you're just modeling the IO devices so that you can run a full operating system on the platform. Understanding the ISA is important to being in a position to modify it and make it different from what it is. And operating systems, of course, that's what virtualization is all about: supporting multiple OSes. I think this is a key to succeeding in a lot of different areas. If you know more than one domain, you just reduce the number of competitors out there, and you can also find things that people who stick to one domain may not see.

There are a lot of ideas in computer science that have failed, or are perceived to have failed. In this story, it was virtualization. It was dead in the late 90s, but there wasn't a good fundamental reason why that was the case, and it had its resurgence for the reasons we discussed. There are other examples of this. Think about neural networks and the advent of deep learning, same kind of thing. Neural networks were ridiculed for many years, until they weren't, and now they're taking over the world. What happened? The main thing is that we got access to large data sets and the ability to deliver more compute, which was the key to making the vision of neural networks and deep learning take off. Now it's this whole burgeoning field. You can find many examples where something that was perceived to have failed really just didn't have the necessary ingredients yet. So things fail until they don't. This is something to focus on: ask yourself, why did something fail in the past? What's missing? Is there some catalyst, some inflection, that we're right at, where suddenly this could happen? And the thing is, people won't send you the memo about this. You have to be paying attention and thinking about those things, be a student of computer science, to find them.

Value capture is crucial to getting any technology adopted. And one thing I've learned is that technologists really do focus mostly on value creation. They geek out on the cool thing they can build.
Wouldn't it be awesome if I could run Linux and Windows and macOS on the same platform? That's awesome. But if you don't have an argument for how you capture the value in some significant way to justify the investment, you're never going to get anywhere. Usually you have to make a bunch of assumptions and build some kind of argument for how a business entity can capture the value and turn it into a viable business. Whatever assumptions you make are probably going to be wrong, but you still have to make them, build your argument, and adapt it over time. That's what we had to do with virtualization. We assumed Microsoft would be there delivering client security with the hardware support we were going to provide, and then they stepped away, as I described, and we had to build a whole new strategy on the fly to keep things on track. If you're not always thinking about the value capture and the assumptions you're making around it, you're probably going to get stuck.

Okay, the last one: it's really great to have a North Star, the endpoint you want to reach. But the reality is that most innovation, maybe all of it, is evolutionary, and it usually happens under environmental constraints. There are existing legacy solutions out there, established businesses, relationships in the ecosystem that are not easily taken down all at once. Revolution rarely happens. You need to figure out how to break things down into generational steps, and that's what we had to do with VT. We wanted to get everything in one generation, but we realized, no, we have to figure out step by step how to build it up over time. Every step has to be justified on its own; it has to be viable. The North Star you have in mind can be very helpful, but you have to assume you're going to be wrong about some of your assumptions and make changes over time. So it's not going to be a direct path. So yeah, those are the things that I learned.

And now I have the privilege of running Intel Labs, so I just wanted to talk a little bit about the organization and some of the projects that we do, and how a lot of what I learned in the past applies to leading the organization today. Labs is about 700 people, actually about 750 now, we've grown. It's an amazing organization inside Intel, present across the globe: we have people in California and Oregon, in Mexico and Israel and Germany, India and China. And there's a lot of diversity of technical disciplines, which makes it a great place for working across multiple disciplines, that trick of trying to find new things.

I think it's a unique organization as research labs go, in that it actually has a track record of lots of major commercial successes. I talked about VT, but Thunderbolt also came out of Intel Labs. Back then, PCs were littered with lots of connectors for different IO types, and in the labs there were some people, Kevin Kahn and others, who had the idea: let's converge all of those connectors onto one protocol, one connector type, and run everything through that. Let's also bring optics into the picture. That part didn't work. But the idea of converging IO protocols onto a single connector is what ultimately became Thunderbolt, and we were able to drive that into the industry.
And because once Intel gets onto something, we can drive scale, and that really makes a big difference. Silicon photonics is another one: Mario Paniccia worked on it for many years, more than a decade, doing all the physics behind building silicon-based lasers and photodetectors and light amplifiers and things like that. And now we have a whole new business unit that's building silicon photonics transceivers. We've done a lot of work in security technologies; you may know about SGX. And there's Lama Nachman, who redesigned Stephen Hawking's communication system to the world, a system he adopted after having rejected many alternatives. So some of the things we do maybe aren't at scale, but they're delivering technology for good.

This, on a single slide, is the scope of things we like to work on in the labs. We have work in connectivity technologies: optical interconnect, wireless technologies, wireline technologies. Of course we look at compute, core to Intel's business. But we're also always looking at where software is going, where it's evolving, staying on top of things like AI and emerging workloads. And we do so while always looking at security and trust across the stack. Increasingly, we're investing in technologies around efficient design and programmability, because, as you may have noticed, Intel is pursuing a new strategy that we call IDM 2.0, which is all about opening up our foundry to other designers, and having technologies to be efficient around that is important.

I just want to do a double click on four areas of research right now. First, we do have a quantum program. The inspiration here is Richard Feynman. He wrote one of the seminal papers that got this field started, where he made the argument that to be able to simulate physics, quantum physics, you need to build systems that do quantum computing. And so far, this still is not working; let's just be really clear. There's a lot of activity in quantum, but it's not actually proven to work yet. But like we observed earlier, things fail until they don't. So we are investing in this, and we think we have some advantages.

We're bringing together people that have a good mixture of domain expertise. We've got physicists, we've got materials scientists, and we have people who understand computing, too. Sometimes you get quantum physicists who don't really know much about computing; you've got to bring these people together, and that's what we're doing with our quantum program. We also have a different way of building qubits: we're building silicon spin qubits, as opposed to transmon superconducting qubits. The key difference is that they're little. Transmon qubits are actually large devices; I almost think of them like vacuum tubes. If you want to get to millions of qubits, you have to have a different qubit design, and I think we have an advantage with the choice we've made here. That's where we think we have a catalyst, a different kind of approach.

We're also investing in the other things that are needed. You may know that quantum computing requires running the devices at very low temperatures, very close to zero Kelvin, and that means you have to have control electronics that runs at those low temperatures. If you don't do that, you'll have setups with lots of coax cables running from room temperature, going in and controlling those qubits, which will scale to maybe 50 to 100 qubits.
And then you're done. So we have to be able to run CMOS at really low temperatures so that we can get to millions of qubits, and that's an investment we're making. But we haven't forgotten that the value-capture argument has to be there. Why would you want to build a quantum computer? What are the usages? There are a lot of ideas out there, but it's not really clear which of them will really hunt, which will be sufficient to justify the investment. So this is still an exploration, and there's a lot to be done here.

Neuromorphic computing is another one. Carver Mead, at Caltech, was the source of this idea originally. He was looking at analog computing and trying to figure out ways to build systems that more closely mirror biological brains. And biological brains are inspiring: think about what a parakeet can do in terms of navigating complex environments, with a very energy-efficient compute device, its brain. Here again, we need to bring in people with different kinds of expertise from different areas: people who understand AI, who understand neuroscience, who understand things like asynchronous design. And we're really building this. We have a chip that we call Loihi, and we've actually built a couple of generations of it and put it together in different configurations. In some cases it's just embodied in a USB stick; in other cases we're building clusters of Loihi chips. And we're trying to figure out what the value capture is for this. The way we're doing that is by putting the hardware out into the research ecosystem. We have something called the Intel Neuromorphic Research Community: lots of academics working in this space, but now increasingly commercial entities, companies, are joining in. The methodology is simply to put the hardware out there and challenge the researchers to find new usages. And we're finding a lot of interesting stuff. One thing I think is particularly interesting is that neuromorphic systems may be good at solving optimization problems. Quantum computers claim to be good at solving optimization problems. It might be that neuromorphic systems are a better way to do this, sooner than quantum. But we don't know; that's why we're investing in both.

Silicon photonics: I talked about how we got that going in the labs before. But if you look at optical transceivers and silicon photonics implementations today, they're still discrete devices. They live outside the compute package, and you still have to go over electrical interconnect to get into the compute. But if you look at the trends, increasing amounts of the power on a compute device, whether it's a GPU, a CPU, an FPGA, whatever, are going to the IOs, to memory and to other IO. And that's not sustainable. If we want to feed these beasts, we have to be able to get data into them more efficiently. So what we're trying to do is essentially shorten that path and bring optical interconnect straight into the package. We think there are lots of advantages in terms of shoreline density and being able to deliver bandwidth with higher energy efficiency. And here again we're drawing on lots of different domain experts: we have people who know how to build silicon photonics, people who understand IO protocols, and people who are experts in packaging, which is kind of the key thing that needs to come together with the core physics.
And we're also doing other innovation, like building micro-ring modulators that are small enough that you can actually array those micro-rings around the perimeter of the package. I predict that at some point we will make this transition, where optical goes straight into the package. We're not quite there yet, but you fail until you don't.

And then the last example I'll give is homomorphic encryption. If you know about this field, it's pretty fascinating. It's basically a form of encryption, and the math behind it, that allows you to keep data encrypted always, but still perform operations on it. If you think about that, it's incredibly powerful. You could, for example, scan your encrypted password against a database of passwords that's also encrypted, to see if there's an intersection, and use that to find out whether your password has been compromised. Without ever being concerned about exposing your password, you could send that encrypted password into the cloud and do that operation, that private set intersection operation, and not worry about the password being compromised by the service provider doing the scanning. So that's a pretty cool application of homomorphic encryption.

The problem is that it's tens of thousands of times slower than regular computation. So what we're doing with this effort is building accelerators, with a new memory design and new functional units that implement the math homomorphic encryption needs, so that we can bring those overheads down several orders of magnitude. It will still probably be 10 or 100 times slower, but that could bring it into a range where it has much more applicability in many more areas. That's another one where we're investing; it's a collaboration with DARPA that we think has some promise. But we still have to figure out what it's going to be useful for, so value capture is still going to be important there.
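To give a flavor of computing on encrypted data, here is a toy sketch of my own using textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts and then decrypting yields the product of the plaintexts. Real homomorphic encryption uses lattice-based schemes and supports far richer operations, and these tiny parameters are completely insecure, but the principle, operating on data you cannot read, is the same:

```c
/* Toy demo of a homomorphic property: textbook RSA with classic
 * tiny parameters (n = 61*53 = 3233, e = 17, d = 2753).
 * Insecure and illustrative only. */
#include <stdio.h>
#include <stdint.h>

/* Modular exponentiation: base^exp mod m (square and multiply). */
static uint64_t modpow(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    const uint64_t n = 3233, e = 17, d = 2753;
    uint64_t a = 7, b = 9;

    uint64_t ca = modpow(a, e, n);   /* Enc(a) = a^e mod n */
    uint64_t cb = modpow(b, e, n);   /* Enc(b) = b^e mod n */

    /* Multiply the ciphertexts without ever decrypting them:
     * (a^e * b^e) mod n == (a*b)^e mod n. */
    uint64_t cprod = (ca * cb) % n;

    /* Decrypting the ciphertext product yields the plaintext product. */
    printf("Dec(Enc(%lu) * Enc(%lu)) = %lu (expect %lu)\n",
           (unsigned long)a, (unsigned long)b,
           (unsigned long)modpow(cprod, d, n),
           (unsigned long)(a * b));
    return 0;
}
```

All right. So I think each one of these is probably going to follow a windy path, just like VT did. Some of them might dead-end, I don't know, but I think it'll be fun as we pursue them. So I think we have some time for questions. Yes, of course. Thank you very much. Yeah.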