Here's the Cavium ThunderX2, and it's real. Hello, so who are you? I'm Gopal Hegde. I'm the VP and GM of the data center processor group at Cavium. So what are we looking at here? This one is our customer reference board. It's in the half-width SSI Open Compute form factor, the OCP v1 form factor. This reference platform goes into a chassis that can support four of these compute nodes in a 2U form factor. Something like that? Yeah, something like that. So four? Four: one, two, three, and four. Four of these in a 2U form factor. Is this a standard size for the server market? This is a standard size for the so-called density-optimized or high-density compute servers. These are very popular for high-performance computing applications as well as cloud computing applications. And in this rack here, how many can you have? Depending on the form factor, you can put 40-plus servers in the rack. With 1U, you can have 40 servers; with a 2U form factor, you can have 20 servers, and so on. What you see here is various form-factor systems, and these systems are actually running different types of workloads. So this one is more compute-oriented. You can have a 1U form-factor node, which is more of a web front-end type of application or a networking application. Then you have a storage node, which could be running a database or big-data type applications. You can have 4U form-factor boxes, which can run cloud storage type workloads. We have a monitor over there which shows various applications running on ThunderX-based infrastructure. It includes applications like web serving based on Nginx, web caching with memcached, databases like MySQL, and big-data Hadoop-type applications. We have a ThunderX2 platform right here. And this is a reference design. This is a reference design, an ODM-developed reference design. And it's running right now.
It's fully running, and you're seeing an Elasticsearch workload running on a ThunderX2 platform. How's the performance? Its performance is very, very good. When we launched this product last year, we talked about how it performs compared to the competition, and we are right on target in terms of performance, scalability, et cetera. We're very happy about that. So these are your custom-designed cores? Yes, they are custom-designed cores, built under an architecture license from ARM. We actually developed these cores from scratch, in-house. What do you call them? Do you call them ThunderX cores? These are called ThunderX2 cores. ThunderX2 cores. So that's the name of your design. Yes. And you designed it to cover all these markets: server, storage, security. Yes. We started off with ThunderX, mainly focused on scale-out type applications, and we focused on web-serving type applications. And you can see web serving, you know, so that's a ThunderX workload running in the cloud. It's run by a company called Packet, which has data centers in the US, Europe, and Japan. They have ThunderX servers available in the cloud, so you can go there and request a virtual machine based on ThunderX. They charge one-tenth the cost per VM on a ThunderX-based cloud compared to equivalent x86-based infrastructure. One-tenth. One-tenth, 10% of the cost, per VM. Is this for real? Well, that's what they're actually charging. And this is mainly driven by the level of integration in ThunderX, the capabilities ThunderX has, and the cost points you can reach at the server platform level because of the core count, the integration capabilities, et cetera. Let's look at some of the companies that are working with you. So there's a big company called Inventec. Yeah. And they have a board right here.
Yeah, so Inventec built this platform based on ThunderX2. This platform will be demonstrated at the Open Compute Summit in March of this year, and it's a Microsoft Project Olympus form-factor platform. Okay, so this is a Windows ARM server. This can run Windows Server on ARM, yes. And why is it designed like this for Windows? What specific considerations did they have? It can actually run Linux; it can run any type of operating system. Specifically, Microsoft developed a specification called Project Olympus and contributed it to the Open Compute Project, and this platform is based on that specification. What you see here is that it has dual-socket ThunderX2 processors and supports up to 32 DIMMs in the dual-socket configuration. So it has the highest memory density, very high memory bandwidth, and a very high-performance motherboard that can fit into a Project Olympus form factor. So the consideration is a lot of RAM. A lot of memory, yeah, absolutely. A lot of RAM, and it looks like there's a lot of empty green board area, which means you put a lot on the SoC and they don't need all these extra chips. Absolutely. From a form-factor standpoint, we have a lot of integration in this device, so you don't need a number of additional chips, which lowers the overall cost and delivers much better performance to the end user. This one does have a few chips. Can you talk about which chips they have? If you look at this board, you have the two ThunderX2 processors. You have the BMC right here, which is an ASPEED AST2500; this is mainly used for manageability. There's a CPLD, which is used for management. Is it this one? Yeah, that's the CPLD that manages communication between the CPU and the BMC. Okay, and then of course you have all the DIMM connectors and the usual components on the board that allow you to deliver a compelling server platform. So those chips are great, but it was not possible for you to put everything on the SoC?
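The memory-density claim above is easy to put in numbers. A quick sketch, where the 32 DIMM slots come from the conversation but the per-DIMM capacities are hypothetical examples, not Cavium's or Microsoft's specification:

```python
# Rough memory capacity for the dual-socket, 32-DIMM configuration described
# above. The DIMM sizes below are hypothetical examples for illustration.

def node_memory_gb(dimm_slots: int, gb_per_dimm: int) -> int:
    """Total raw RAM for one node, in gigabytes."""
    return dimm_slots * gb_per_dimm

for size in (16, 32, 64):  # example RDIMM capacities
    print(f"{size:>2} GB DIMMs x 32 slots = {node_memory_gb(32, size)} GB per node")
```

With 64 GB DIMMs, for instance, a single dual-socket node would hold 2 TB of RAM, which is why the form factor suits memory-hungry workloads.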
Typically in a server platform, people want standards-based management capabilities and standards-based infrastructure. We could certainly integrate those things, but customers want their favorite BMC, and they want to manage these servers the way they manage the rest of their infrastructure. Our goal here is to make these platforms as similar as possible to the platforms they already have in their data center, from a manageability perspective and from a racking-and-stacking perspective. These go into standard chassis, they use standard power supplies, they use standard tool chains. So if you're an end user using these servers, you really can't tell what kind of ISA you are running on, right? The OS, the BMC management, the racking and stacking, all of those are very similar, so it makes deployment of these servers in the data center very seamless and very easy. And because of all this integration, the board is not going to run as hot as competitive boards, right? Absolutely. Less cooling required, better performance per dollar, better performance per watt. Let's look around some more over here. Here's Ingrasys; this is a company related to a big factory, right? Yeah, Ingrasys is a subsidiary of Foxconn, and they're building this platform. This is a 2U four-node platform, high-density compute. You can see here, if you look at the board, it's pretty dense, but look at how few chips it has. It has a lot fewer chips because the SoC integrates a lot of that functionality. Four of these go into this Ingrasys chassis, and what you see here is four nodes running standard workloads. It runs a standard Linux operating system and standard cloud workloads. You can see over there cloud compute, cloud storage type workloads, big data, high-performance computing type workloads. We actually do a very good job across a wide variety of workloads on the ThunderX2 machine.
And there's another one, a very beautiful one right here. It's a nice color, it's blue. Yeah, this one is developed by Gigabyte. This is the platform they're developing for their mainstream channel server. It's a standard 2U server, and as you can see, it has very good density: it's a dual-socket machine. It uses the exact same chassis that they use for their high-volume server infrastructure, and the exact same power supplies, so cost-wise it's very compelling. And it has 24 storage bays in the front. These are all essentially hard drives, so from a storage-density standpoint it's a very nice platform for storage-intensive workloads. Dual socket with lots of memory is very good for compute workloads. So depending on the form factor, you can target various workloads, and ThunderX2 covers a wide variety of them for our end-user customers. And could people imagine configuring even more storage, even cheaper, for cold storage? Absolutely. We have 1U storage platforms, which are not shown here since we have only limited space, that can take 12 3.5-inch hard drives. Very, very high density: if each storage drive is 12 terabytes, 12 terabytes times 12 drives gives you 144 terabytes of capacity in a 1U form factor. And with 40 of those in a rack, you have tremendous cold-storage capability. On this platform here, another thing to point out is these tabs; they go to an NVMe drive. So not only can you have hard-drive storage, you also have high-performance storage based on NVMe flash. It allows a very flexible architecture where some of the high-performance data can be cached in NVMe and the rest of the data can sit on the hard drives.
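The cold-storage arithmetic above works out as follows. A quick sketch using the figures quoted in the conversation (raw capacity only, ignoring RAID, replication, or filesystem overhead):

```python
# Back-of-the-envelope rack storage capacity, as described in the interview:
# a 1U node holding 12 x 3.5" drives at 12 TB each, and 40 such nodes per rack.

def node_capacity_tb(drives_per_node: int, tb_per_drive: int) -> int:
    """Raw capacity of a single storage node, in terabytes."""
    return drives_per_node * tb_per_drive

def rack_capacity_tb(nodes_per_rack: int, drives_per_node: int, tb_per_drive: int) -> int:
    """Raw capacity of a full rack of identical nodes, in terabytes."""
    return nodes_per_rack * node_capacity_tb(drives_per_node, tb_per_drive)

per_node = node_capacity_tb(12, 12)      # 144 TB per 1U node, as stated above
per_rack = rack_capacity_tb(40, 12, 12)  # 5760 TB (~5.6 PB) raw per rack
print(per_node, per_rack)
```

So the quoted configuration comes to roughly 5.6 petabytes of raw capacity per rack before any redundancy is applied.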
And there are other use cases; what do people need for security? For security, the chip supports crypto engines on board, so you can encrypt and decrypt data on the fly. And with the large core count, we have highly scalable performance. This is supported by standard software libraries like OpenSSL, so it's very seamless: no difference compared to what customers deploy today, and very, very good performance. But could you configure some systems specifically for enhancing security in your organization? You could. One of the capabilities is to target a specific workload. If you look at our cloud-scale rack, which is the rack here, we set it up so that you have different form-factor platforms that can be provisioned on the fly. We have a Mesos/Marathon-based infrastructure, which can create bare-metal servers: you can create a storage server, a compute server, or a web server on the fly and deploy them on your server hardware. It's very seamless, and you can add capacity, add more servers on the fly, to create a truly dynamic data center. And this works with networking; you could run a whole bunch of networking. Is this for ISPs? Absolutely. This is mainly targeted at the cloud, and we support a variety of cloud networking applications. Think of a CDN, where we have the core count and networking capability to drive very high bandwidth in and out of the machine, and applications like HAProxy for load balancing, or secure load balancing, across multiple servers. A lot of these applications are moving from purpose-built appliances to servers with very high networking and compute capabilities, and we enable that with a high-performance platform like ThunderX2.
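The scaling argument above, that crypto-style throughput grows with core count, can be illustrated with a small sketch. This is not Cavium's code: SHA-256 hashing via Python's standard library stands in for the on-chip crypto engines (which applications would actually reach through libraries like OpenSSL), and the chunk sizes are arbitrary:

```python
# Illustration only: fanning a CPU-bound crypto-style workload out across
# cores, standing in for the per-core throughput scaling described above.
import hashlib
from multiprocessing import Pool

def digest_chunk(chunk: bytes) -> str:
    """Hash one chunk of data; each worker runs independently on its own core."""
    return hashlib.sha256(chunk).hexdigest()

def digest_parallel(chunks, workers: int = 4):
    """Distribute chunks across `workers` processes; throughput scales with cores."""
    with Pool(processes=workers) as pool:
        return pool.map(digest_chunk, chunks)

if __name__ == "__main__":
    data = [bytes([i]) * 1024 for i in range(8)]  # eight 1 KiB chunks
    digests = digest_parallel(data)
    # The parallel result matches the serial one, chunk for chunk.
    assert digests == [digest_chunk(c) for c in data]
```

Because each chunk is independent, adding cores (or hardware crypto engines) raises aggregate throughput roughly linearly, which is the property being described.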
And so you were talking about this company selling the service at 10% of the cost of Intel-based infrastructure. This is a big change. What do you want to enable? Are you talking about doubling power efficiency, performance efficiency? A lot of it depends on the kind of workloads people deploy. So, you know, Packet has this advantage; a service provider in Europe, Scaleway, is offering a VM at one-third of the cost. A lot of it depends on their business model, how they deploy these services, et cetera. But suffice to say, they are seeing a strong value proposition for ThunderX-based infrastructure in the cloud. And this is for sure shipping this year? ThunderX2 is, of course, you know, that's our target, and we are on track to ship these platforms, ship these processors, this year. So all the things you announced last year, you've been able to match? Absolutely. We talked about sampling, we talked about shipping, we talked about performance, a lot of the capabilities. You can now actually see systems based on these devices. So whatever we announced last year, we have been able to follow through on: we have been able to execute to deliver silicon, ODM platforms based on that silicon, and workloads running on those platforms, in a very seamless way. So I guess your job is really fun; your work on this project is amazing, right? It's actually very exciting. We have a number of engineers, our teammates at Cavium, who have been working on this for the past several years. Our first-generation product is in the market; the second-generation product is sampling and ready to go to market. It's definitely very exciting. What did you take from the first generation? How did you decide what you would do in the second generation?
We looked at what kind of capabilities we were able to deliver in the first generation, and we looked at how to expand our target set of workloads in the second generation. One of the things in our first generation was that single-thread performance on ThunderX-based infrastructure was lower compared to the competition. We significantly improved that in ThunderX2, and that allowed us to go after a wider variety of workloads in high-performance computing, Elasticsearch, real-time analytics, et cetera. This is consistent with what we talked about at the launch last year. The second thing was the ecosystem. We used the first-generation product to develop a very broad ecosystem on the firmware side, the OS side, and the application side, and we are leveraging all of that ecosystem for our second-generation processor. A lot of operating systems run out of the box, mainly because of the work we did in the first generation. So instead of just optimizing the software for the hardware, you're optimizing the hardware for the software ecosystem people want to have? Absolutely. And we also enabled all the partners, whether it's the OS vendors, the hypervisor vendors, the adapter vendors, the memory vendors, or the application vendors: we gave them platforms and worked with them to get their software running on our platforms, and all of this is helping us accelerate time to market for our customers with our second-generation product. And the hard drive manufacturers, you have good connections with them, you know what they want, and you try to deliver it? What we do is work with them: we take their hardware and qualify it on our platform, and they have our hardware and do all their testing.
We work very closely together to ensure that when customers use our platform with their devices, it works seamlessly and customers don't see any pain. So ThunderX was already happening; I mean, it's happening, but now maybe ThunderX2 is going to flip things around in a big way. Yeah, exactly. What we're expecting is to expand our market opportunity and our target workloads with ThunderX2, go after applications that we were not able to address in the first generation, and leverage the ecosystem to improve time to market. And the big company called Intel, they've been looking at what you've been doing, right? And they have been compelled to try to adapt as well, right? They do what they do, we do what we do. This is a very, very big market. We are focused on the workloads where we can deliver value to the customer, and we are heads-down executing toward that. You are showing the way in how to optimize and customize ARM processing, not just taking a smartphone chip and building it up, but designing it from the ground up. Absolutely. In servers, performance is very important. A smartphone chip basically focuses on battery power; performance is important, but the first priority for a smartphone chip is power. In servers, performance is very, very critical. So as an architectural licensee of ARM, we are able to leverage the ARM ecosystem but implement the architecture ourselves so that we deliver much higher performance, put a lot more of these cores together in a cost-effective, power-efficient way, and integrate other technologies that Cavium has in its portfolio, to deliver a very compelling solution to the end customer, one that is very competitive in the market. Designing the perfect SoC. That's what we strive to do. Everything should be optimized, everything should be optimized. You can use much more power than a smartphone, but still, the whole point is... It needs to be power-efficient.
You can't just ignore power, right? But you can't ignore performance either, so you need to have a very good balance. And the single thread. And can you announce more details about the chip? When are you going to announce more? Once we launch the product, once we have parts in production, we will be able to launch the part. We're going to talk more about the silicon, we're going to talk more about benchmarks, et cetera. Just stay tuned; all that exciting stuff is coming down the road. So there are going to be details about how many cores, what kind of cores, and so on? Yeah, expect to see performance numbers, expect to see all kinds of interesting data that will come out over time. And the single-thread performance you will state at that point, and show. Absolutely. As proof. Yes. And then all of the big cloud vendor companies should just get on board. That's something you'd need to ask them. But we're working with a lot of customers; there is a lot of customer interest in this platform, and we continue to work closely with them to meet their needs.