More than a decade ago, the team at Wikibon coined the term Server SAN. We saw the opportunity to dramatically change the storage infrastructure layer and predicted a major change in technologies that would hit the market. Server SAN had three fundamental attributes. First of all, it was software-led. So all the traditionally expensive controller functions like snapshots, clones, dedupe, replication, compression, encryption, et cetera, they were done in software, directly challenging a two to three decade long storage controller paradigm. The second principle was it leveraged and shared storage inside of servers. And the third, it enabled an any-to-any topology between servers and storage. Now, at the time we defined this coming trend in a relatively narrow sense, inside of a data center location, for example. But in the past decade, two additional major trends have emerged. First, the software-defined data center became the dominant model thanks to VMware and others. And while this eliminated a lot of overhead, it also exposed another problem. Specifically, data centers today allocate, we estimate, around 35% of CPU cores and cycles to managing things like storage, network, and security, offloading those functions. These are wasted cores. And doing this with traditional general purpose x86 processors is expensive and it's not efficient. This is why we've been reporting so aggressively on ARM's ascendancy into the enterprise. It's not only coming, it's here. And we're going to talk about that today. The second mega trend is cloud computing. Hyperscale infrastructure has allowed technology companies to put a management and control plane into the cloud, expand beyond our narrow Server SAN scope within a single data center, and support the management of distributed data at massive scale. And today, we're on the cusp of a new era of infrastructure innovation. And one of the startups in this space is Nebulon. Hello, everybody.
This is Dave Vellante and welcome to this CUBE Conversation, where we welcome in two great guests: Craig Nunes, CUBE alum, co-founder and COO at Nebulon, and Tobias Flitsch, who's director of product management at Nebulon. Guys, welcome, great to see you. So good to be here, Dave. It feels awesome. Soon, face to face, Craig. I'm heading your way before the fall. I can't wait. All right, Craig, you heard my narrative upfront. And I'm wondering, are those the trends that you guys saw when you started the company? What are the major shifts in the world today that caused you and your co-founders to launch Nebulon? Yeah, I'll give you sort of the way we think about the world, which I think aligns super well with what you're talking about. You know, over the last several years, organizations have had a great deal of experience with public cloud data centers. And I think like any platform or technology that gets used in a variety of ways, a bit of savvy is being developed by organizations on what do I put where, how do I manage things in the most efficient way possible? And in terms of the types of folks we're focused on in Nebulon's business, we see now kind of three groups of people emerging. And we sort of simply coined three terms: the returners, the removers, and the remainers. And I'll explain what I mean by each of those. The returners are folks who maybe early on hit the gas on cloud, moved a lot in, and realized that while it's awesome for some things, for other things it was less optimal. Maybe cost became a factor, or visibility into what was going on with their data was a factor, security, service levels, whatever. And they've decided to move some of those workloads back. Returners. Then there are what I call the removers, who are taking workloads that were born in the cloud back on-prem.
And this was talked about a lot in Martin's blog, which looked at a lot of the growth companies that built up such a large footprint in the public cloud that the economics were kind of working against them. Depending on the knobs you turn, you're probably spending two X, two and a half X what you might spend if you owned your own factory. And you can argue about where your leverage is in negotiating your pricing with the cloud vendors, but there's a big gap. The last one, and I think probably the most significant in terms of who we've engaged with, is the remainers. And the remainers are hybrid IT organizations. They've got assets in the cloud and on-prem. They aspire to an operational model that is consistent across everything, leveraging all the best stuff that they observe in their cloud-based assets. And it's kind of our view, frankly one we take from this constituency, that when people talk about cloud, or cloud first, they're moving to something that is really more an operating model versus a destination or data center choice. And so we get people on the phone every day talking about cloud first. And when you kind of dig into what they're after, it's operating model characteristics, not which data center do I put it in. And those decisions are separating. And it's really that focus for us where we believe we're doing something unique for that group of customers. Yeah, cloud first doesn't mean cloud only. And of course, followers of this program know we talk a lot about this. The definition of cloud is changing. It's evolving. It's moving to the edge. It's moving to data centers. Data centers are moving to the cloud, cross-cloud. It's that big layer that's expanding. And so I think the definition of cloud, particularly in customers' minds, is evolving. There's no question about it.
People will look at what VMware's doing in AWS and say, okay, that's cloud, but they'll also look at things like VMware Cloud Foundation and say, oh yeah, that's cloud too. So to me, the beauty of cloud is in the eye of the customer beholder. So I buy that. Tobias, I wonder if you could talk about how this all translates into product? Because you're a startup, you've got to sell stuff. You use this term smart infrastructure. What is that? How does this all turn into stuff you can sell? Right, yeah. So let me back up a little bit and talk about what we at Nebulon do. At Nebulon, we're a cloud-based software company and we're delivering sort of a new category of smart infrastructure. And if you think about things that you would know from your everyday surroundings, smart infrastructure is really all around us. Think smart home technology like Google Nest, as an example. And what this all has in common is that there's a cloud control plane that is managing some IoT endpoints and smart devices in various locations. And by doing that, customers gain benefits like easy remote management, right? You can manage your thermostat, your temperature, from anywhere in the world, basically. You don't have to worry about software updates anymore, they're automated. And you can easily automate your home, your infrastructure, through this cloud control plane. And translating this idea to IT infrastructure, to the data center, this idea is not necessarily new, right? If you look into the networking space with Meraki Networks, now Cisco, or Mist Systems, now Juniper, they've really pioneered efforts in cloud management. So, smart network infrastructure. And the key problem that they solve there is managing the vast number of access points and switches that are scattered across campuses and the data center.
Now, if you translate that to what Nebulon does, it's really applying the smart infrastructure idea, this methodology, to application infrastructure, specifically to compute and storage infrastructure. And that's essentially what we're doing. So smart infrastructure, basically our offering at Nebulon, the product, comes with the benefits of this cloud experience, a public cloud operating model. As we've talked about, some of our customers look at the cloud as an operating model rather than a destination, a physical location. And with that, we bring this model, this experience, as an operating model to on-premises application infrastructure. And the benefits of this offering from Nebulon really center around four areas. First of all, rapid time to value. So application owners, think people that are not specialists or experts when it comes to IT infrastructure, but more generalists, they can provision on-premises application infrastructure in less than 10 minutes. They can go from just bare metal physical racks to the full application stack in less than 10 minutes. So they're up and running a lot quicker and they can immediately deliver services to their end customers. Second, cloud-like operations, this notion of zero touch remote management, which in the last couple of months, with this strange time that we're in with COVID, is turning out to be more and more relevant. That means remotely administering and managing infrastructure that scales from just hundreds of nodes to thousands of nodes, it doesn't really matter, with behind-the-scenes software updates and with global AI analytics and insights. And basically, combined, that reduces the operational overhead when it comes to on-premises infrastructure by up to 75%. The third thing is support for any application, whether it's containerized, virtualized, or even bare metal applications.
And the idea here is really consistently leveraging server-based storage that doesn't require any Nebulon-specific software on the server. So you get the full power of your application servers for your applications, as the server is intended. And then the fourth benefit when it comes to smart infrastructure is, of course, doing all of this at a lower cost and with better data center density. And that is comparing it to three-tier architectures, where you have your server, you have your SAN fabric, and then you have an external storage array, but also comparing it with hyper-converged infrastructure software that consumes resources off the application servers. Think CPU, think memory, think networking. So basically you get a lot more density with this approach compared to those architectures. Okay, I want to dig into some of that differentiation too, but what exactly do I buy from you? Do I buy a software subscription? Is that right? Can you explain that a little bit? Right, so basically the way we do this is really leveraging two key innovations. And you see why I made the bridge to smart home technology, because the approach is similar. The first is the introduction of a cloud control plane that manages on-premises application infrastructure. Of course, that is delivered to customers as a service. The second one is a new infrastructure model that uses IoT endpoint technology that is embedded into standard application servers and the storage within those application servers. Let me add a couple of words to that to explain a little bit more. So really at the heart of smart infrastructure, in order to deliver this public cloud experience for any on-prem application, is this cloud-based control plane.
So we've built this the way we'd recommend our customers use public cloud, and that is building your software on modern technologies that are vendor agnostic, so it could essentially run anywhere, whether that's any public cloud vendor, or in our own data centers if regulatory requirements change. It's massively scalable and responsive no matter how large the managed infrastructure is. But really the interesting part here, Dave, is that the customer doesn't have to worry about any of that. It's delivered as a service. So what a customer gets from this cloud control plane is a single API endpoint, just like they'd get with a public cloud. They get a web user interface, from which they can manage all of their infrastructure, no matter how many devices, no matter where they are; it can be in a data center, it can be in an edge location anywhere in the world. They get template-based provisioning, much like a marketplace in a public cloud. They get analytics, predictive support services, and super easy automation capabilities. Now, the second thing that I mentioned is this server embedded software, the server embedded infrastructure software, and that is running on a PCIe-based offload engine that acts as this managed IoT endpoint within the application server that I mentioned earlier. And that approach further converges modern application infrastructure, and it really replaces the software-defined storage approach that you'll find in hyper-converged infrastructure software, by embedding the data services, the storage data services, into silicon within the server. Now, this offload engine, we call it a services processing unit, or SPU in short. And that is really what differentiates us from hyper-converged infrastructure. And it's quite different than a regular accelerator card that you get with some of the hyper-converged infrastructure offerings.
It's different in the sense that the SPU runs basically all of the shared and local data services; it's not just accelerating individual algorithms, individual functions. It basically provides all of these services aside the CPU, with the boot drive, with the data drives, and in essence provides you with a separate fault domain from the server. So, for example, if you reboot your server, the data plane remains intact; it's not impacted by that. Okay, so I want to stop you for just a second, Craig, if I could. It's very clear how you're different from, as Tobias said, the three-tier server, SAN fabric, external array. The HCI thing's interesting, because in some respects the HCI guys, take Nutanix, talk about cloud and becoming more friendly with developers and the API piece. But what's your point of view, Craig, on how you position relative to, say, HCI? Yeah, absolutely. So everyone gets what three-tier architecture is and was. And HCI software emerged as an alternative to the three-tier architectures. Everyone, I think, today understands that the data services are SDS, software hosted in the operating system of each HCI device, consuming some amount of CPU, memory, network, whatever. And it's typically constrained to a hypervisor environment, which is kind of where most of that stuff is done. And over time, these platforms have added some monitoring capabilities, predictive analytics, typically provided by the vendor's cloud, right? And as Tobias mentioned, some HCI vendors have augmented this approach by adding an accelerator to make things like compression and dedupe faster. Think SimpliVity or something like that. The difference that we're talking about here is, the infrastructure software that we deliver as a service is embedded right into server silicon. So it's not sitting in the operating system of choice. And what that means is you get the full power of the server you bought for your workloads.
It's not constrained to a hypervisor-only environment. It's OS agnostic. And it's entirely controlled and administered by the cloud, versus with most HCI, an on-prem console that manages a cluster or two on-prem. And think of it from an automation perspective: when you automate something, you've got to set up your playbook kind of cluster by cluster, and depending what versions they're on, APIs are changing, behaviors are changing. So it's a very different approach at scale. And so again, for us, we're talking about something that gives you a much more efficient infrastructure that is then managed by the cloud and gives you this full kind of operational model you would expect for any kind of cloud-based deployment. You know, I've got to go back, you guys. Obviously you have some 3PAR DNA hanging around, and you remember, of course you remember well, the 3PAR ASIC. It was kind of famous at the time and it was unique. And I bring that up only because you've mentioned the silicon a couple of times. And a lot of people say, yeah, whatever, but we have been on this, particularly with ARM, and I want to share it with the audience; if you follow my Breaking Analysis, you know this. If you look at the historical curve of Moore's Law with x86, it's the doubling of performance every two years, roughly. That comes out to about 40% a year. That's moderated down to about 30% a year now. If you look at the ARM ecosystem, and take for instance the Apple A15 and the previous A series, for example, over the last five years, the performance, when you combine the CPU, GPU, NPU, the accelerators, the DSPs, which by the way are all customizable, that's growing at 110% a year. And the SoC costs 50 bucks. My point is that you guys are a perfect example of doing offloads with a way more efficient architecture. You're now on that curve that's growing at 100% plus per year, whereas a lot of the legacy storage is still on that 30% a year curve.
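Dave's growth-rate arithmetic is easy to verify: a doubling every two years works out to roughly 41% per year, and compounding 30% versus 110% annually for five years produces wildly different performance multiples. A quick sketch of that math (the rates are the ones quoted in the conversation; the functions are just illustrative helpers):

```python
def annual_rate_from_doubling(doubling_years: float) -> float:
    """Convert a 'performance doubles every N years' claim to an annual growth rate."""
    return 2 ** (1.0 / doubling_years) - 1.0

def growth_over(years: int, annual_rate: float) -> float:
    """Cumulative performance multiple after `years` of compounding at `annual_rate`."""
    return (1.0 + annual_rate) ** years

# Classic Moore's Law pace: doubling every two years ~= 41% per year
x86_rate = annual_rate_from_doubling(2)   # ~0.414, i.e. "about 40% a year"

# Five years of compounding at 30% (x86 today) vs 110% (ARM SoC estimate)
x86_5yr = growth_over(5, 0.30)            # ~3.7x
arm_5yr = growth_over(5, 1.10)            # ~40.8x
```

The gap between those two curves, roughly 3.7x versus 40x over five years, is the point being made about offload architectures riding the ARM curve.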
And so cheaper, lower power. That's why I loved, Tobias, that you brought in the IoT and the smart infrastructure. This is the future of storage and infrastructure. Infrastructure, absolutely. And the thing I would emphasize is it's not limited to storage. Storage is a big issue, but we're talking about your application infrastructure. And you brought up something interesting on the DPU, the SmartNIC side of things. And just to kind of level set with everybody, there's the HCI world, and then there's this SmartNIC, DPU world, whatever you want to call it, but it's effectively a network card. It's got specialized processing on board and firmware to provide some network, security, and storage services. And think of it as a PCI card in your server. It connects to an external storage system. So think NVIDIA BlueField-2 connecting to an external NVMe storage device. And the interesting thing about that is, storage processing is offloaded from the server. So as we said earlier, good: you get the server back for your applications, but storage moves out of the server. And it starts to look a little bit like an external storage approach versus a server-based approach. And infrastructure management is done by the server SmartNIC, with some monitoring and analytics coming from your supplier's cloud support service. So complexity creeps back in, because you start to lose that heavily converged approach. Again, we are taking advantage of storage within the server and keeping this a real server-based approach, but distinguishing ourselves from the HCI approach, because there's a real ROI there. And when we talk to folks who are looking at new and different ways, we talk a lot about the cloud, and I think we've done a bit of that already. But then at the end of the day, folks are trying to figure out, well, okay, but then what do I buy to enable this? And what you buy is your standard server recipe. So think your favorite HPE, Lenovo, Supermicro, whatever your brand.
And it's going to come enabled with this IoT endpoint within it. So it's really a smart server, if you will, that can then be controlled by our cloud. And so you're effectively buying, from your favorite server vendor, a server option that is this endpoint, and a subscription. You don't buy any of this from us, by the way. It's all coming from them. And that's the way we deliver this. Now, sorry to get into the plumbing, but this is something we've been on, and it's fascinating. Is that silicon custom designed or is it pretty much off the shelf? Are you guys adding any value to it? No, there are off-the-shelf options that can deliver tremendous horsepower in that form factor. And so we take advantage of that to do what we do in terms of creating these sort of smart servers with our endpoint. And so that's where we're at. Yeah, awesome. So guys, what's your sweet spot? Where are you seeing customers adopting? Maybe some examples you can share? Yeah, absolutely. So, Tobias mentioned that because of the architectural approach, there's a lot of flexibility there. You can run virtualized, containerized, bare metal applications. The question is, where are folks choosing to get started? And those use cases with our existing customers revolve heavily around virtualization modernization. So they're going back into their virtualized environment, whether their existing infrastructure is array-based or HCI-based, and they're looking to streamline that, save money, automate more, the usual things. The second area is the distributed edge. The edge is going through tremendous transformation with IoT devices, 5G, and trying to get processing closer to where customers are doing work. And so that distributed edge is a real opportunity, because again, it's a more cost-effective, more dense infrastructure, and the cloud can effectively manage across all of these sites through a single API. And then the third area is cloud service provider transformation.
We do a fair bit of business with cloud service provider CTOs who are trying to build top-line growth, trying to create new services, and drive a better bottom line. And so this is really as much a revenue opportunity for them as a cost-saving opportunity. And then the last one is this notion of bringing the cloud on-prem. We've done a cloud repatriation deal. And I know you've seen a little of that, but maybe not a lot of it. But I can tell you, we've already seen it in our first deals, so it's out there. Those are the places where people are getting started with us today. You know, it is interesting. You're right, I don't see a ton of it, but if I'm going to repatriate, I don't want to go backwards. I don't want to repatriate to legacy. So it actually does kind of make sense that I repatriate to essentially a component of on-prem cloud that's managed in the cloud. That makes sense to me. But today you're managing on-prem infrastructure from the cloud. Maybe you could show us a little leg, share a little roadmap. Where are you guys headed from a product standpoint? Right, so I'm not going to go too far out on a limb there, but obviously, right? So one of the key benefits of a cloud managed platform is this notion of a single API. We talked about the distributed edge; think a retailer that has thousands of stores, each store having local infrastructure. And if you think about the challenges that come with just administering those systems, rolling out firmware updates, rolling out updates in general, monitoring those systems, et cetera, having a single console, a cloud console, to administer all of that infrastructure, the benefits are easy to see. Now, if you think about that and spin it further, right?
So from the use cases and the types of users that we've seen, and Craig talked about them at the very beginning, you can think about this as a hybrid world, right? Customers will have data in the public cloud. They will have data and applications in their data centers and at the edge. Obviously, it is our objective to deliver the same experience that they gain from the public cloud on-prem, and eventually those two things can come closer together. Apart from that, we're constantly improving the data services. And as you mentioned, ARM is on a path that is becoming stronger and faster. So obviously, we're going to leverage that and build out our data storage services and become faster. But really the key thing that I'd like to mention, and this is related to roadmap but is really more about feature delivery: the majority of what we do is in the cloud. Our business logic is in the cloud; the capabilities, the things that make the infrastructure work, are delivered in the cloud. And it's provided as a service. So compare it with your Gmail, your cloud services: one day you don't have a feature, the next day you have a feature. We're continuously rolling out new capabilities to our cloud. Yeah, and that's about feature acceleration as opposed to technical debt, which is what you get with legacy feature creep. Yeah, absolutely. The other thing I would say, too, is a big focus for us now is to help our customers more easily consume this new concept. We've already got SDKs for things like Python and PowerShell, and we've got, I think, nearly ready, an Ansible SDK.
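The automation model described here, driving many sites' worth of infrastructure through one cloud API with template-based provisioning, might look something like the following in a Python SDK. This is a purely hypothetical sketch; the function names, fields, and template IDs are illustrative assumptions, not Nebulon's actual SDK or API, which the conversation does not specify:

```python
import json

def build_provision_request(template: str, site: str, servers: list[str]) -> str:
    """Assemble one provisioning request body targeting many servers at once,
    mirroring the 'single API endpoint, template-based' model described above.
    All field names here are hypothetical."""
    body = {
        "template": template,   # a marketplace-style application template ID
        "site": site,           # any data center or edge location, e.g. one retail store
        "servers": servers,     # smart servers enrolled with the cloud control plane
    }
    # In a real SDK this body would be POSTed to the single cloud API endpoint;
    # here we just serialize it to show the shape of the call.
    return json.dumps(body, sort_keys=True)

# One call per site, same API no matter where the infrastructure lives
payload = build_provision_request("vmware-base", "store-0042", ["srv-a", "srv-b"])
```

The design point is that the same request shape works for a data center cluster or a store's edge rack, which is what makes playbook-free, fleet-wide automation plausible.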
We're trying to help folks, kind of use case by use case, spin this stuff up within their organization, their infrastructure, because again, part of our objective, we know that IT professionals have a lot of inertia when they're moving stuff around in their own data center. And we're aiming to make this a much simpler, more agile experience to deploy and grow over time. Guys, we've got to go, but Craig, quick company stats. Am I correct, you raised just under 20 million? Where are you on funding? What's your head count today? I am going to plead the fifth on all of that. Oh, okay. Keep it stealth. Keeping it a little stealthy. I love it. Absolutely. All right, guys, really excited for you. I love what you're doing. It's really starting to come into focus. And so congratulations. I know you've got a ways to go, but Tobias and Craig, appreciate you coming on theCUBE today. Right on. All right, and thank you for watching this CUBE Conversation. This is Dave Vellante. We'll see you next time.