David: Hi, this is David Floyer. I'm at the OCP conference today, and with me I've got Kevin Deierling, who is vice president of marketing at Mellanox. Kevin, welcome. Good to see you.

Kevin: David, always good to be here.

David: It's good to have you here. So your mission at Mellanox, if I can put it succinctly, is to connect everybody with everything. Is that right?

Kevin: Absolutely. We like to connect the clouds and the data centers. Really everybody.

David: So at the conference, have you consummated any new connections?

Kevin: We really did. We made a couple of announcements here at the show, and what's interesting is that we covered three different CPU architectures, so we're really connecting across multiple platforms. There's the Project Olympus platform that Microsoft announced...

David: That's the x86 one, is it?

Kevin: Exactly. We also announced a platform with Qualcomm: we're connecting a Qualcomm ARM-based platform that Microsoft also says they're going to be using as part of Project Olympus in their Azure cloud.

David: Wow, okay.

Kevin: The third one was a POWER architecture, the POWER9 architecture. What's interesting is there are a lot of firsts there. The Qualcomm part is the first CPU at 10 nanometers, so they're taking the process lead.

David: Really driving what they've done on the mobile platform, and it's coming into the server platform.

Kevin: Exactly, and it's kind of backwards. It used to be that the server CPUs would drive the process roadmaps; now we're seeing mobile platforms do it. So Qualcomm is leading on process.

David: Well, if you think about it, it's exactly the same playbook as Intel, who introduced it first on PCs and then got into the server market. It seems that Qualcomm is doing the same thing in the ARM market.

Kevin: That's right. You find the volume market and use it to drive your process, and Qualcomm is there now at 10 nanometers. The other big thing we announced, another first, was the first-ever PCI Express Gen 4 server. POWER9 is the first CPU with PCI Express Gen 4, which is the faster generation of PCI Express I/O connectivity. So the fastest I/O connectivity, again, is on the POWER architecture. Pretty exciting that they're leading there.

David: So Intel is being pressured a little at the top by OpenPOWER and at the bottom by ARM, with different objectives and different connectivity issues.

Kevin: Absolutely. Facebook talked about the POWER platform they're using; that platform is called Zaius, and it was actually a co-development between Rackspace and Google as part of this OCP project, so it's an open platform. Similarly, Project Olympus, the Microsoft platform, has both the x86 and the Qualcomm ARM processors. So we're excited to connect all of it.

David: Wow, you're right in the middle of everything. Okay, so why are they using you to connect? It seems a simple question, but why are you so popular?

Kevin: We actually have greater than 90% market share in 25-gig-and-above Ethernet NICs, so 25, 40, 50, and 100 gigabits per second. We have the fastest NICs out there, and hence all of these cloud and hyperscale data centers are using our NICs inside their data centers. We also have a technology called RoCE, which is RDMA over Converged Ethernet. It's a technology Microsoft was showing in their data centers and in their storage, and it delivers extremely good efficiency. The whole key is that we're able to deliver this 100 gigabits per second of bandwidth without chewing up all of the CPUs. The CPU and the memory subsystem are still the most expensive parts of the server, and we give them back to run applications while we move the data for you.

David: So you've taken that RoCE overhead and put it into your own NICs and your own chips.

Kevin: Exactly right. We make the data transport very, very efficient, and it doesn't use any of the CPU. That means you can run more workloads. So whether you're Facebook or Microsoft, running a public cloud or a Web 2.0 application, you can run that application, support more users, and not have to worry about moving the data around.
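[Editor's note: for readers who want to see what this looks like from software, below is a minimal sketch using libibverbs, the standard open-source RDMA verbs API that RoCE adapters are programmed against. It is an illustration added for context, not code from the interview; it stops after device setup, before the queue-pair and work-request steps a real zero-CPU-copy transfer would need.]

```c
/* Minimal libibverbs sketch: enumerate RDMA-capable devices (e.g. RoCE NICs)
 * and open the first one. Compile with: gcc rdma_list.c -libverbs
 * Real RDMA transfers would go on to create queue pairs and post work
 * requests that the NIC executes without copying data through the CPU. */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devs = ibv_get_device_list(&num_devices);
    if (!devs || num_devices == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++)
        printf("RDMA device %d: %s\n", i, ibv_get_device_name(devs[i]));

    /* Open the first device and allocate a protection domain, the
     * container for all memory regions the NIC may read or write. */
    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx) { ibv_free_device_list(devs); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    printf("opened %s, pd=%p\n", ibv_get_device_name(devs[0]), (void *)pd);

    if (pd) ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```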
David: Right. And if you've got an expensive piece of software that's licensed per core, you can use the cores for running the application as opposed to running the network behind it.

Kevin: Exactly. If you're paying a license fee, you don't want to pay that and then chew up half of your CPUs just moving the data back and forth. You want all of them focused on running the application.

David: All of these connections you've talked about, we're talking about pretty short connections, aren't we? Meters rather than anything else.

Kevin: Yes, most switch-to-server connections are under three meters. We have a full line of connectivity products. We do copper cables inside the rack that go up to our top-of-rack switches, our Spectrum switch. We can do 100 gig, and we have breakout cables that go to 25 gig; that's all copper for those short distances. Beyond that, it depends on how far you need to run and how big your data center is.

David: I'll come back to that in a second and talk a little more about the switches, but you were talking about breaking out 100 gigabit into four 25-gigabit links.

Kevin: Exactly right, we have a breakout cable. One of the other announcements we made here at OCP this week is that our top-of-rack switches are running SONiC, the Microsoft open-source network operating system that runs on top of the switch. It has really good telemetry and observability, so you can monitor what's happening in these giant data centers. That's running on our Spectrum-based switch, and we can take a 100-gig port and break it out into four 25-gig links to connect four servers. So with one half-rack-width switch we can connect 64 servers at 25 gig.

David: That's really impressive. That's cool. But what about the longer distances? Data centers are pretty big these days; the Azure data centers are measured in kilometers rather than meters, aren't they?

Kevin: Yes. Once you get beyond about three meters, people often go to multi-mode fiber, which uses VCSEL technology, and we have transceivers and cables that do that. Then when you get to these hyperscale data centers, 100 meters doesn't cut it; it's not big enough to connect one end of the data center to the other. So we have a silicon photonics platform (we bought two companies to build it) that goes up to two kilometers over single-mode fiber. Really cool technology, and we're shipping it in volume now.

David: Wow, okay. So you've got photonics, you've got the normal copper, and you've got a set of different protocols you can use to connect things together. What other protocols are you supporting?

Kevin: One of the big areas where we've seen development is on the storage side. We see things like NVMe and now NVMe over Fabrics, a new class of flash connectivity, and a number of vendors were showing platforms today that use these new NVMe drives. The good news from our perspective is that faster storage needs faster networks: we can take three NVMe drives and saturate a 100-gig link. But to do that, we need things like NVMe over Fabrics, which lets you extend the storage. You don't care whether it's in your box or somewhere at the other end of the data center; you just go grab the data. We do that with super low latency, and we offload all of it. So storage is a big push for Mellanox.
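[Editor's note: the "three NVMe drives saturate a 100-gig link" claim checks out as rough arithmetic. Here is a small C sketch of the numbers; the per-drive throughput is an assumed figure for a fast PCIe NVMe SSD of that era, not a number quoted in the interview.]

```c
/* Back-of-envelope check: how many NVMe drives fill a 100 GbE link? */
#include <stdio.h>

int main(void)
{
    double link_gbps    = 100.0;             /* 100 GbE link                */
    double link_gbytes  = link_gbps / 8.0;   /* = 12.5 GB/s raw             */
    double drive_gbytes = 4.2;               /* assumed per-drive sequential
                                                read throughput, GB/s       */

    double drives = link_gbytes / drive_gbytes;
    printf("100 GbE carries %.1f GB/s; at %.1f GB/s per drive, "
           "about %.1f drives fill the link\n",
           link_gbytes, drive_gbytes, drives);
    return 0;
}
```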
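[Editor's note: to make the "just go grab the data" part concrete: on Linux, an NVMe over Fabrics host attaches a remote target by writing an option string to /dev/nvme-fabrics, which is what the nvme-cli tool does under the hood. The sketch below uses placeholder address and NQN values; it illustrates the standard kernel interface, not Mellanox-specific code.]

```c
/* Sketch: connect to a remote NVMe-over-Fabrics (RDMA) target by writing
 * an option string to /dev/nvme-fabrics. The address, port, and NQN are
 * placeholders. Requires the nvme-rdma kernel module and root privileges. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical target: adjust traddr/nqn for a real deployment. */
    const char *opts =
        "transport=rdma,traddr=192.0.2.10,trsvcid=4420,"
        "nqn=nqn.2016-06.example:storage-target";

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) { perror("open /dev/nvme-fabrics"); return 1; }

    /* The kernel parses the string and, on success, creates a new
     * /dev/nvmeX controller backed by the remote drives. */
    if (write(fd, opts, strlen(opts)) < 0) {
        perror("nvme-of connect");
        close(fd);
        return 1;
    }
    close(fd);
    puts("connected; remote namespaces now appear as local NVMe devices");
    return 0;
}
```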
David: That seems to me to be the data center of the future, isn't it? You're going to have these very high-performance drives down at really low levels of latency, 50 microseconds or even lower, and they're going to have to connect to multiple CPUs and multiple nodes. That's obviously a prerequisite, so this NVMe over Fabrics connectivity is going to be the key to doing it. When do you see that hitting the market?

Kevin: We see it in the market now. With our first generation, ConnectX-4, we accelerated parts of it; that's a 100-gig device. Now, with ConnectX-5, we've offloaded all of it. NVMe over Fabrics is really the next generation of storage networking, because there is no Fibre Channel in the cloud.

David: No, you never see it.

Kevin: They don't deploy it. They're using a converged infrastructure, in this case Ethernet, so the ability to do that matters. We're also doing accelerations for virtualization, VMs, and DPDK.

David: All the data services.

Kevin: All the things you need when you're in a cloud environment with lots of tenants on the same physical infrastructure. We create virtual networks and virtual pools of resources and let you go get your storage and your data wherever they are, and you don't have to worry that you're operating in a multi-tenant environment where you might be sharing the same physical wires and servers with one of your competitors; we keep everything isolated.

David: So for the first time we're going back to direct-attached storage, but it's actually connected to everything in the fabric.

Kevin: That's right, automatically.

David: Back to the future. It's your server SAN that you talked about.

Kevin: That's exactly right. You're putting in direct-attached storage, but now it's shared storage, and to do that you need these really low-latency, efficient protocols. Otherwise you end up burning all your CPU cycles moving the data around, and that doesn't make sense.

David: Excellent. Well, thanks very much, Kevin.