from our studios in the heart of Silicon Valley, Palo Alto, California. This is a CUBE Conversation.

Hey, welcome back, everybody. Jeff Frick here with theCUBE. We're at our Palo Alto studios for a CUBE Conversation. We're just about ready for the madness of conference season to start in a few months, so it's nice to have some time while things are a little calmer in the studio. And we're excited to have a new company, I guess they're not that new, they're relatively new, and they've been working on a really interesting technology around infrastructure. We welcome to the studio, for the first time, I think, Sumit Puri, CEO and co-founder of Liqid. Welcome.

Thank you guys. Very, very happy to be here.

And joined by our big brain, David Floyer, of course the CTO and co-founder of Wikibon, who knows all things infrastructure. David, always good to see you.

It's good to see you.

All right, so let's jump into it. Sumit, give us the basic overview of Liqid. What are you guys all about? A little bit of the company background; how long have you been around?

Absolutely. Liqid is a software-defined infrastructure company. The technology we've developed is referred to as composable infrastructure; think dynamic infrastructure. What we do is turn data center resources from statically configured boxes into dynamic, agile infrastructure. Our core technology is two-part. Number one, we have a fabric layer that allows you to interconnect off-the-shelf hardware. But more importantly, we have a software layer that allows you to orchestrate, or dynamically configure, servers at the bare metal.

So who are you selling these solutions to? What's your market? What's the business case for this solution?

Absolutely. First, let me explain a little about what we mean by composable infrastructure. Rather than building servers by plugging devices into the sockets of a motherboard, with composability it's all about pools, or trays, of resources: a tray of CPUs, a tray of SSDs, a tray of GPUs, a tray of networking devices. Instead of plugging those into a motherboard, we connect them to a fabric switch. Then we come in with our software and we orchestrate, or compose, at the bare metal: grab this CPU, grab those four SSDs and these eight GPUs, and build me a server, just as if you were plugging devices into the motherboard, except you're defining it in software. On the other side, you're getting delivered infrastructure of any size, shape, or ratio you want, except that infrastructure is dynamic. When we need another GPU in our server, we don't send a guy with a cart to plug a device in; we reprogram the fabric and add or remove devices as the application requires. We give you all the flexibility you would get from public cloud on the infrastructure you're forced to own. And to answer your question of where we find a natural fit, one primary area is obviously cloud. If you're building a cloud environment, whether you're providing cloud as a service or providing cloud to your internal customers, building a more dynamic, agile cloud is what we enable.
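To make that composition model concrete, here is a minimal sketch of what "grab this CPU, those four SSDs, and these eight GPUs" might look like as software. The FabricClient class, its compose and release calls, and the resource model are hypothetical illustrations for this writeup, not Liqid's actual API.

```python
# Minimal sketch of composing a bare-metal server from disaggregated
# resource pools over a fabric. The class, method names, and resource
# model are illustrative assumptions, not Liqid's actual API.

from dataclasses import dataclass, field

@dataclass
class ComposedServer:
    name: str
    devices: list = field(default_factory=list)

class FabricClient:
    """Toy stand-in for a fabric orchestration endpoint."""

    def __init__(self, pools):
        # pools: {"cpu": [...], "ssd": [...], "gpu": [...]}
        self.pools = pools

    def compose(self, name, **wanted):
        # "Plug" devices into a logical motherboard by reprogramming
        # the fabric switch instead of moving hardware around.
        server = ComposedServer(name)
        for kind, count in wanted.items():
            if len(self.pools[kind]) < count:
                raise RuntimeError(f"pool exhausted: {kind}")
            for _ in range(count):
                server.devices.append(self.pools[kind].pop())
        return server

    def release(self, server):
        # Return devices to the shared pool when the workload is done.
        for kind, dev in server.devices:
            self.pools[kind].append((kind, dev))
        server.devices.clear()

# Usage: one CPU, four SSDs, eight GPUs -- "build me a server."
pools = {kind: [(kind, i) for i in range(16)]
         for kind in ("cpu", "ssd", "gpu")}
fabric = FabricClient(pools)
node = fabric.compose("training-node", cpu=1, ssd=4, gpu=8)
print(f"{node.name}: {len(node.devices)} devices attached")  # 13
fabric.release(node)  # resources return to the general pool
```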
So is the use case more to take your available resources and reconfigure them into something that basically runs that way for a while? Or are customers using it to dynamically reconfigure those resources based on, say, a temporary workload, the classic cloud example where you need a bunch of something now, but not necessarily forever?

Sure. The way we look at the world is very much around resource utilization. I'm buying this very expensive hardware and deploying it into my data center, and typical resource utilization is very low, below 20%, right? What we enable is the ability to get better utilization out of the hardware you're deploying inside your data center. If we can take a resource that's utilized 20% of the time, because it's deployed as a static element inside a box, and raise that utilization to 40%, does that mean we're buying less hardware for our data center? Our argument is yes. If we can take rack-scale efficiency from 20% to 40%, our belief is we can do the same amount of work with less hardware.
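The arithmetic behind that claim, under the simplifying assumption that work is fungible across pooled devices, is a quick back-of-the-envelope calculation:

```python
# Back-of-the-envelope version of the utilization argument. Assumes
# work is fungible across pooled devices -- a simplification for
# illustration, not a figure from the interview.

devices = 100          # accelerators deployed in static boxes
static_util = 0.20     # ~20% average utilization when boxed
composed_util = 0.40   # ~40% once pooled and dynamically reassigned

useful_work = devices * static_util       # 20 device-equivalents
needed = useful_work / composed_util      # devices for the same work
print(f"same work with {needed:.0f} devices instead of {devices}")
# -> same work with 50 devices instead of 100: half the hardware
```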
So it's a fairly simple business case, then. Who is your competition in this area? Is it people like HPE or Intel?

That's a great question. I think both of those are interesting companies. HPE is the 800-pound gorilla around this term "composability," and we take a slightly different approach than they do. First and foremost, we're different because we're disaggregated, right? We sell you trays of resources, a tray of SSDs or a tray of GPUs, where HPE takes a converged approach: every time you buy resources for their composable rack, you're paying for CPUs, SSDs, GPUs, all of those devices, as a converged resource. So they are converged, we are disaggregated. We are bare metal: we have a PCIe-based fabric up and down the rack, while theirs is an Ethernet-based fabric. There are no Ethernet SSDs and no Ethernet GPUs, at least today, so by using Ethernet as their fabric, they're forced into virtualization and protocol translation. They are not truly bare metal; we view them more as a virtualized solution. And we're an open ecosystem, we're hardware agnostic, right? We allow our customers to use whatever hardware they're using in their environment today. Once you've gone down that HPE route, it's very much a closed environment.

So what about some of the customers you've got? Which sort of industries, which sort of customers? I presume this is for the larger types of customers in general, but say a little bit about where you're making a difference.

Absolutely, right? Obviously at scale, composability has even more benefit than in smaller deployments. I'll give you a couple of use case examples. Number one, we're working with a transportation company, and what happens with them at 5 p.m. is very different from what happens at 2 a.m. The model they have today is a bunch of static boxes, and they're playing a game of workload matching: if the workload that comes in fits the appropriate box, the world is good; if it lands on a machine that's oversized, resources are being wasted. What they said is, we want to take a new approach. We want to study the workload as it comes in, dynamically spin up small, medium, or large depending on what that workload requires, and as soon as that workload is done, free the resources back into the general pool, right? So that's one customer: by taking a dynamic approach, they're changing the TCO argument inside their environment. And for them, it's not a matter of am I going dynamic or am I going static; everyone knows dynamic infrastructure is better, no one says, you know, give me the static stuff. For them, the real question is am I going public cloud or am I going on-prem. Public cloud is very easy, but when you start thinking about next-generation workloads, things that leverage GPUs and FPGAs, those instantiations on public cloud are just not very cheap. So we give you all of that flexibility you're getting on public cloud, but we save you money by giving you that capability on-prem. That's use case number one.

Another use case is very exciting for us. We're working with a studio down in Southern California, and they leverage these NVIDIA V100 GPUs. During the daytime, they give those GPUs to their AI engineers. When the AI engineers go home at night, they reprogram the fabric and use those same GPUs for rendering workloads. They've taken $50,000 worth of hardware and doubled its utilization.

The other use case we talked about before we turned the cameras on, which was pretty interesting, was multiple workloads against the same dataset over a series of time, where you want to apply different resources. I wonder if you can unpack that a little bit, because I think that's a really interesting one we don't hear a lot about.

So we would say 60 to 70 percent of our deployments in one way or another touch the realm of AI. AI is actually not an event; AI is a workflow. What do we do? First we ingest data; that's very networking-centric. Then we scrub and clean the data; that's actually CPU-centric. Then we're running inference, and then we're running training; that's GPU-centric. Data has gravity, right? It's very difficult to move petabytes of data around. So what we enable is a composable AI platform: leave data at the center of the universe, and re-orchestrate your compute, networking, and GPU resources around the data. That's the way we believe AI should be approached.
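Here is a small sketch of what that composable AI workflow could look like in software: the dataset stays put while a differently shaped machine is composed around it for each stage. The StubFabric class, the per-stage resource ratios, and the dataset path are all illustrative assumptions, not Liqid's product.

```python
# Sketch of a composable AI workflow: the data never moves; the
# server is recomposed around it for each stage. StubFabric, the
# stage ratios, and the path are hypothetical illustrations.

class StubFabric:
    """Toy stand-in for the fabric orchestrator."""
    def compose(self, name, **resources):
        return {"name": name, "resources": dict(resources)}
    def release(self, node):
        node["resources"].clear()

STAGES = {
    "ingest": {"cpu": 2,  "gpu": 0, "nic": 8},   # networking-centric
    "clean":  {"cpu": 16, "gpu": 0, "nic": 1},   # CPU-centric scrub
    "train":  {"cpu": 2,  "gpu": 8, "nic": 1},   # GPU-centric
}

def run_pipeline(fabric, dataset):
    # Compose a stage-shaped server, run the stage, release devices.
    for stage, shape in STAGES.items():
        node = fabric.compose(stage, **shape)
        try:
            # Data has gravity: the dataset stays put; only the
            # compute attached to it changes shape per stage.
            print(f"{stage}: composed {node['resources']} around {dataset}")
        finally:
            fabric.release(node)  # devices go back to the pool

run_pipeline(StubFabric(), "/data/training-set")  # hypothetical path
```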
So looking forward into the future, where do you see yourselves making a difference? I mean, a lot of change is happening. There's Gen4 coming out in PCIe; there are GPUs moving down to the edge. Where are you going to make a difference over the next few years?

That's a great question. I think there are two parts to look at. Number one is the physical layer. Today we build, or compose, based upon PCIe Gen3, because for the first time in the data center, everything is speaking a common language. When SSDs moved to NVMe, you had SSDs, network cards, GPUs, and CPUs all speaking a common language, which was PCIe. That's why we've chosen to build our fabric on this common interconnect: it's how we enable bare-metal orchestration without translation and virtualization. Today it's PCIe Gen3. As the industry moves forward, Gen4 is coming; Gen4 is here. We have actually announced our first PCIe Gen4 products already, and by the end of this year, Gen4 will become extremely relevant in the market. Our software has been architected from the beginning to be physical-layer agnostic, so whether we're talking PCIe Gen3, PCIe Gen4, or in the future something referred to as Gen-Z, it doesn't matter for us; we will support all of those physical layers. For us, it's about the software orchestration.

And I would imagine, too, that as TPUs and other physical units get introduced into the system, you're architected to be able to take those.

Today we're doing CPUs, GPUs, NVMe devices, and NICs. We just made an announcement that we're orchestrating Optane memory with Intel, and we've made an announcement with Xilinx that we're orchestrating FPGAs. This will continue; we'll keep finding more resources we're able to orchestrate, for a very simple reason: everything has a common interconnect, and that common interconnect is PCIe.

So this is an exciting time in your existence. Where are you? I mean, how far along are you to becoming the standard in this industry?

Yeah, that's a great question. What we get asked a lot is, what company are you most similar to at this early stage? And we often compare ourselves to VMware, right? VMware is the hypervisor for the virtualization layer; we view ourselves as that physical hypervisor. We do for physical infrastructure what VMware does for virtualized environments. And just as VMware has enabled many of the market players to get virtualized, our hope is that we're going to enable many of the market players to become composable. We're very excited about our partnership with Inspur, the number three server vendor in the world, which we announced just recently. We've announced an AI-centric rack which leverages the servers and storage solutions from Inspur, tied to our fabric, to deliver a composable AI platform.

That's great. Yeah, and it seems like the market of cloud service providers, because we always talk about the big ones, but there are a lot of them all over the world, is a perfect use case for you, because now they can actually offer the benefits of cloud flexibility by leveraging your infrastructure to get more miles out of the investments in their back end.

Absolutely. Cloud service providers and private cloud, that's a big market and opportunity for us. And we're not necessarily chasing the big seven hyperscalers, right? We'd love to partner with them, but for us, there are 300 other companies out there that can use the benefit of our technology. They don't necessarily have the R&D dollars available that some of the big guys have, so we come in with our technology and enable those cloud service providers to be more agile and more competitive.

All right, so before we let you go, conference season's coming up. We were just at a show yesterday, and big shows are coming up over the next couple of months. Where are you guys going to be? Are we going to cross paths over the next several weeks or months?

Absolutely. We've got a handful of shows coming up; it's a very exciting season for us. We're going to be at OCP, the Open Compute Project conference, actually next week, and right after that we're going to be at the NVIDIA GPU Technology Conference. We'll have a booth at both of those shows, and we'll be doing live demos of our composable platform. Then at the end of April, we're going to be at Dell Technologies World in Las Vegas, where we'll have a large booth and we'll be doing some very exciting demos with the Dell team.

Well, Sumit, thanks for taking a few minutes out of your day to tell us the story. It's pretty exciting stuff, because this flexibility is such an important piece of the whole cloud value proposition, and you guys are delivering it all over the place.

Thank you.
Thank you guys for making the time today. Excited to be here. Thank you.

David, always good to see you.

Good to see you. You're a smart man.

All right. I'm Jeff Frick. You're watching theCUBE, from theCUBE Studios in Palo Alto. Thanks for watching; we'll see you next time.