Hi, please introduce yourself. Hi, I'm Joe Speed with Ampere Computing. And this is your chip right here. It's making a lot of noise in the industry right now. What are we looking at? It is. It's a 128-core CPU with 128 PCIe lanes, and these are used in everything from Google Cloud to vehicles and 5G base stations. So it's a big chip. It's a lot of cores, and it's also a lot of performance. Yeah, it's a lot of performance, but it's also a lot of efficiency. This is three times the performance per watt of x86, of the legacy architectures. Three times. That's a lot. Quite a bit, yes. That's a big shift. And here in the embedded world, the argument is that you want to bring this to the edge. That means potentially millions of devices. We would settle for millions. Millions would be okay. That's millions of 5G base stations. Yeah, so 5G base stations, smart city, industrial automation, mil-aero, and all kinds of other applications. And what are we looking at here? Can you explain what this is? So this is an industrial PC, a fanless, ruggedized industrial PC with the Ampere processor. And with optional NVIDIA GPUs, you can have an Ampere CPU with your NVIDIA Ampere GPU. This is fully self-contained, the whole thing. Yeah, yeah. And then this one is an AI computer for autonomous race cars. Autonomous race cars? Yes. So soon, Formula One with robots? We'll see. But people sometimes ask, is it about humans competing against machines? It's actually a university software contest. I've been involved for a long time with a program called the Indy Autonomous Challenge. These aren't in the vehicles yet; we want to work with our friends at dSPACE to put Ampere into theirs. When you put 128 cores in a car, do you get a lot of potential self-driving happening in there? Well, it's really a software problem. But we help. We do our part. Because is that also the market for this one, going into cars?
So probably not all the cars, but I think they're very good for R&D fleets and for specialty vehicles: autonomous people movers, earth-moving, mining, agriculture, things like that. Can you hold this board? Oh, sure. And try to explain what we're seeing on it? So this launched here today. This is a dev kit, the Ampere Altra Dev Kit. It's made by our friends at ADLINK, and you can visit them over in Hall 3. It comes with the memory, the heat sink, the storage, really all the things you need. If you're making devices and you need to develop device drivers and test them, or you need to see how easily your software can port to this architecture, this is the kind of thing you would use. What do you connect here? So these are three PCIe x16 slots. You could, for example, put in two full-length, full-height NVIDIA GPUs plus a camera frame grabber. Three GPUs? Well, you could. That's a lot. Yeah, yeah. And these two? These are two PCIe x4 or x8, I'd have to check. So you could put in NICs, network interface cards, and other things. And what's connected up here? So we have more GigE, USB, VGA, serial. Just normal things. Nothing exotic. Yeah, we work hard to make Arm as boring and painless as x86 is in the data center. It's been a decade-long work in progress to optimize Arm for servers. And now would you say it's 100% fully smooth, or actually even smoother than other architectures? I won't say smoother, but it's widely deployed. Look at Oracle Cloud, Azure, Google Cloud, people like Hetzner and Equinix. Look at products from people like HPE and Supermicro. It's very much mainstream now. People really want the density, they want the cost savings, they want the energy savings. They want to figure out how to make their compute footprint green and sustainable and reduce the carbon impact.
And so we help with all of those things. And now we're bringing those same benefits to the edge. If I go on Google Cloud or Azure, I can click and get an Ampere instance? Oh, sure. And it's great to work with, and it's a good price compared to some of the other architectures. And on Oracle Cloud they've been doing impressive things; they're running large-scale, heavy AI workloads on Ampere there. So that's pretty interesting as well. And here, these are the different booths where you are: ADLINK, AWS Automotive, Lattice. Yes, AWS Automotive. And look at the Yocto Project: the Yocto Project is using Ampere for all the things they're demonstrating in the booth, working with the AWS cloud and with the other Yocto Project members. When I hear about Yocto, I think about small stuff, right? But it goes all the way to 128 cores? Yeah, Yocto has grown up. So it's happy to work all the way up? Yes, very much. What's the challenge when you make software and you want to use so many cores? Do you need to be very good at parallelizing the processing in your software? Actually, in the open source community, a lot of people take care of that for you. You've got so many workloads being developed for the cloud, and by their nature they tend to scale out across many cores. So any of that software that's designed to scale out across many servers is very happy running in one SoC using all 128 cores. One thing that I guess is a challenge, but it's hard to talk about, is the price of a chip. But that's not the main question, right? The main question is the price of the power to run the chip. Sure, so there are a few things. You can look at performance per dollar, you can look at total cost of ownership. For the data centers, the exercise is: how can I get more compute per rack while using less power? And then you save so much.
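The point about cloud software scaling out naturally is easy to see in practice: anything built to fan work across many servers will also fan across many cores in one SoC. A minimal sketch, assuming a hypothetical CPU-bound task (the `busy_work` function here is purely illustrative):

```python
import os
from concurrent.futures import ProcessPoolExecutor

def busy_work(n: int) -> int:
    # Stand-in for a CPU-bound job (e.g., transcoding or inference).
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # os.sched_getaffinity is Linux-only; on a 128-core Ampere Altra
    # running Linux it reports up to 128 usable cores.
    cores = len(os.sched_getaffinity(0))
    with ProcessPoolExecutor(max_workers=cores) as pool:
        results = list(pool.map(busy_work, [100_000] * cores))
    print(f"ran {len(results)} tasks across {cores} cores")
```

The same script runs unchanged on a 4-core laptop or a 128-core server; the pool simply widens to whatever the OS exposes, which is the sense in which scale-out software is "very happy" on one big SoC.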
Well, now we're bringing those same kinds of benefits to the edge, where if I can get more compute into a device and use less power, that's very good, right? Before, you said 3x. Yeah. And that was performance per watt? Yep. Does it also translate to dollars? Oh, yeah, when you look at the TCO, sure. And that's also looking at things like how much compute per rack, or how much compute per device, I can get in the same power budget. So there are two ways to think of it: I can get triple the compute in the same power budget, or for the same compute, I can use only one-third the power. That's a really big savings, and it lets you make very powerful devices that really don't use much electricity. So that's a great benefit. For 5G base stations, for example, there are something like 30 companies working around 5G base stations, software accelerators, the whole stack integrated, and they're using our processor in that. I like watching these videos on YouTube of people customizing vans; they have huge Tesla Powerwalls in their van and a whole lot of power. Maybe they could be driving around with little base stations, and when there's an event, they go and add 5G capacity while they sleep in the van or something. Maybe that kind of stuff could go in the van. Yeah, definitely so. There are people making some small, powerful things with us, and in the benchmarks I've seen around 5G base stations, moving from x86 to Ampere, for the same power they're getting double the throughput. Could you use Starlink and make a 5G base station out of the Starlink connection, maybe? And then suddenly... I don't know, it's a weird question. Okay, and here with 7StarLake, what kind of work are they doing with you?
Yeah, so they work with our other partners, people like ADLINK and GIGABYTE and Supermicro, taking these compute modules and motherboards with the Ampere processor and making rugged, extremely rugged, frankly, compute for different kinds of applications. So think buses, trains, ships, subs, all kinds of things. Wow, what happens when you put this on a bus? You're able to do more using less power. So if you want to do traffic safety with cameras, you can do all kinds of interesting things. There could be cameras inside the bus. Making sure everybody's, I won't say distancing, but comfortable, everybody's getting what they want. Yeah, yeah. And they get VIP service on the bus, maybe. Or are these development boxes? No, these are very much for production deployments. They're actually made by 7StarLake. What's the difference between this one and this one? I just see more cooling. Yeah, this one is sealed, but the two fans blow air across the fins, so it has more cooling, and you can put in a more powerful processor and a more powerful GPU. Cool. All right. So, you were at Mobile World Congress. It was a big story, right? Yeah. Running the base stations, you're all about doing that, and there's a big demand for it. Yes. And here in the embedded world, is it a different kind of demand? Yeah. It's a different domain, a different application, but some similar challenges. What's happening is you're getting more and more sensors, the sensors are becoming higher and higher bandwidth, and the communications are becoming higher bandwidth. But you don't always have, you might be constrained, you might have a fixed amount of physical size, weight and power, the SWaP available for your application.
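The fixed-SWaP constraint connects back to the 3x performance-per-watt figure from earlier: under a fixed power budget, that ratio can be spent either on more compute or on less power. A quick sketch of the arithmetic, with illustrative numbers only, not measured figures:

```python
# Illustrative assumption: a legacy x86 node delivering 100 units of
# compute at 300 W, versus a part with 3x the performance per watt.
x86_perf, x86_watts = 100.0, 300.0
ratio = 3.0  # performance-per-watt advantage

# Option 1: same power budget, triple the compute.
perf_same_power = x86_perf * ratio      # 300 units within the same 300 W

# Option 2: same compute, one-third the power.
watts_same_perf = x86_watts / ratio     # the original 100 units at 100 W
```

Either reading is the same ratio; which one matters depends on whether the device is compute-limited or power-limited within its SWaP envelope.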
So it becomes this question of: how do I get more compute into my existing size, weight and power? All right. So it's very relevant for everybody here who is looking at the extreme edge. Yes, the high-performance edge. Exactly. Yeah. And everyone talks about the 128 cores, but we also go down to 32 cores, right? Below 40 watts. We can actually cover a pretty wide range of compute requirements and power budgets, but even at 32 cores, we're giving you 128 PCIe lanes, so all of your I/O. So if you need to do something like a portable NAS, or any other really demanding application, or CDNs, that's a popular application for these, we have some much lower-power, highly efficient parts you can use for that as well. There are a lot of partnerships in the software world, not just Yocto. Yeah, yeah. We do a ton with Canonical, with Red Hat, with a lot of ISVs. We give heavily to the open source community. We have hundreds of engineers who contribute to open source full time. So we have open source BMC, open source firmware: TianoCore, EDK2, OpenBMC, all these kinds of things. And a lot of upstream contributions to things like GCC and LLVM. So, all the work that the folks at Linaro and elsewhere have been doing for a decade, you're able to take that, improve it, and give back to everybody. Well, I wouldn't even say we take it; we just contribute upstream. We're just trying to help the community, so you get this painless out-of-the-box experience and everything just works. All right. But even though it just works, it's still a challenge to learn how to use so much parallel processing. Or is that just something everybody who works in the cloud knows how to do? Yeah, and it's not necessarily just about how I get one process, one workload, to use 128 cores. You have a lot of mixed workloads.
So think about a robot: you've got the perception pipeline, path planning, control. You have all of these different workloads, and you can run them all on the same SoC, where in the past you might have needed many separate computers. So if you have an application that today requires many separate computers, you can collapse that into one chip. And I guess your colleagues are very busy on the next gen, getting to smaller and smaller nanometers and everything. So there's a new product that will be announced at some point. But it's not a matter of this is V1 and that's V2; this one actually has some interesting characteristics of its own. And for the edge, for these kinds of things, this is very compelling. Nice, that's awesome. And eventually maybe even governments will get involved and start saying, hey, cloud is a certain part of our energy consumption in the country and in the cities. I'm not going to say they're going to mandate which servers, but they're going to tell everybody to be conscious and use less power. Yeah, I certainly won't pretend to speak for government policy or any of those things. I will say that we help companies do more with their existing data centers and minimize their impact on the local communities. Cool. All right. Thanks a lot. Hey, real pleasure. Thank you.
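The robot example above, consolidating perception, path planning, and control onto one SoC, is in practice often handled by partitioning cores between workloads. A minimal sketch of one way to do that on Linux, with an entirely hypothetical core split (the workload names and ranges are assumptions for illustration, not an Ampere-specified layout):

```python
import os

# Hypothetical partitioning of a 128-core SoC between the three
# robot workloads that previously ran on separate computers.
WORKLOAD_CORES = {
    "perception": set(range(0, 64)),     # camera/lidar pipeline gets the most cores
    "path_planning": set(range(64, 96)),
    "control": set(range(96, 128)),      # low-jitter loop isolated on its own cores
}

def pin_current_process(workload: str) -> None:
    # Restrict this process to its workload's core set (Linux-only API),
    # so the three pipelines don't contend for the same cores.
    wanted = WORKLOAD_CORES[workload]
    available = os.sched_getaffinity(0)
    # Fall back to whatever exists on smaller machines.
    os.sched_setaffinity(0, (wanted & available) or available)
```

Each workload's launcher would call `pin_current_process` once at startup; the kernel scheduler then keeps the pipelines out of each other's way, which is one reason collapsing several boxes into one many-core chip is workable.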