So everyone, welcome to this special CUBE conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE. We are here with two co-founders, Adam Cassella, who's the CTO, and Glenn Sullivan, co-founder of SnapRoute. Hot startup, guys, welcome to this CUBE conversation. Thanks for joining me. Thank you. So I love having the founders in because you get the down and dirty, but you guys are launching an interesting product for cloud, cloud native, super exciting. But first take a minute to explain: what is SnapRoute? What do you guys do? What's the main core goal of the company? Sure, right. So your audience and you are familiar with white box networking, disaggregated networking, where you're buying your hardware and your software from different companies. There are a lot of different network OSes out there, but there's nobody doing what we're doing for the network OS, which is a cloud native approach, where it's a fully containerized, fully microservice network OS running on these white box switches. Tell us about your background. How did you guys start this company? Where'd you come from? What was the epiphany? What was the motivation? Sure, so our heritage is from operations, running some of the largest data centers in the world. We came from Apple, running the networks there, and the issues and problems that we saw doing that are what led us to found SnapRoute. And what are some of the things that you guys noticed at Apple, obviously huge scale? Yep. I mean, Apple, you know, huge market share, most profitable company. I think it's still got the largest market cap now, Microsoft was there for a while, but you know, Apple's the gold standard. Yeah, from privacy to scale, what were some of the things that you saw there? What was the opportunity? So, I mean, there were a couple of things going on there. One, we were driving to doing white box for more control.
So we wanted to have a better sense of what we could do with the network operating system on those devices. And we found very quickly that the operating systems that were out there, whether they be from a traditional manufacturer, an OEM appliance, or from someone in the disaggregated marketplace, were basically using the same architecture. And this was this old monolithic, single-binary item that goes onto each device. And you know, that worked back in the day when applications didn't move, when they were static, in one particular location. But as we were seeing, and one of the things we were really pushing on was being able to dynamically move workloads from one location to another quickly to meet demand, the network was not able to keep up with that. And we believe it really came down to the architecture that was there not being flexible enough, and not giving us the control to put in the principles that would actually make that application time to service faster. You know, one of the things I'm personally fascinated by, seeing all the startups out there and living in this cloud era, is watching companies like Facebook and Apple literally build the new kind of scale in real time. It's like you guys are changing the airplane engine at 35,000 feet, as the expression goes. You have to be modern. I mean, there's money on the line, there's so much scale. And when you see an inefficiency, you've got to move on it. This is what you guys did at Apple. What were some of the things that you observed? Was it the boxes? Was it the software? As you wanted to be more agile, what was the problem that you solved? So it's really fragility, right? These network OSes, as they were, are designed in a way so that you don't touch them, right? If you look at the code releases and how often they fix security vulnerabilities or they have patches or even new regular versions, right?
The cycle isn't weekly, it's not daily like you see in some CI/CD environments, right? You might have a six-month or a 12-month or an 18-month cycle for doing this sort of new release for whatever issue, new features or fixes, right? And the problem that we would see is we would be trying to test a version in the lab, right? We would be qualifying code, and say there's a security vulnerability, something like Heartbleed, right? That comes out, the guys on the server side push a new patch using Ansible, Chef or Puppet, and two days later everything's good. Even two hours later in some environments. But we had to wait for the new release to come from one of the traditional vendors. We had to put it in our lab, and we'd get this sort of kitchen sink of every other fix. There'd be enhancements to BGP that we didn't ask for. There'd be enhancements to spanning tree that we didn't ask for. Even if they patched it, you'd still get this sort of all-in-one update, and by the time you're done qualifying, there might be another security vulnerability. So you've got to start over. So you'd be in this constant cycle of months of qualifying the image, because you'd be testing everything that's in the image and not just the update. And that's really the key difference between that and what we're doing. And the work involved, you're basically chasing your tail. Exactly. One thing comes in and opens up a lot of consequences. But that's what systems are all about. There are consequences, right? This is why systems are challenging. And what it does is it creates this culture of no from the network folks, right? Because the network folks are basically like, not in my backyard. You want to add this new thing? No, because they're judged by uptime. They're judged by how long the network is up and how long the application is available. They're not judged by how quickly they can put a new feature out or how quickly they can roll an update. They're literally judged in most organizations by uptime.
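As an editorial aside, the qualification burden described here, re-testing an entire monolithic image versus patching one service, can be sketched in a few lines of Python. This is purely an illustrative model; the service names and version numbers are made up and are not SnapRoute's actual components:

```python
# Toy model of the qualification problem: a monolithic network OS ships one
# image, so every feature moves together and must all be re-tested; a
# containerized OS lets you patch a single service independently.

MONOLITH = {"bgp": "3.1", "spanning-tree": "3.1", "snmp": "3.1", "ssh": "3.1"}
MICROSERVICES = {"bgp": "1.4.2", "spanning-tree": "2.0.1", "snmp": "1.1.0", "ssh": "5.9"}

def patch_monolith(image, new_version):
    """A kitchen-sink update: all features jump versions, all need re-qualifying."""
    patched = {svc: new_version for svc in image}
    return patched, set(patched)           # qualification scope = everything

def patch_microservice(services, name, new_version):
    """Update one container; only that service needs re-qualifying."""
    patched = dict(services)
    patched[name] = new_version
    return patched, {name}                 # qualification scope = one service

_, scope_mono = patch_monolith(MONOLITH, "3.2")
_, scope_micro = patch_microservice(MICROSERVICES, "ssh", "6.0")
print(len(scope_mono), "features to re-test vs", len(scope_micro))
```

The point of the toy model is the size of the qualification scope: four features to re-test in the monolithic case versus one in the containerized case, and real images ship hundreds of features, not four.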
How many nines are they giving? So if I'm judged by uptime and somebody wants to add something new, my first answer as a network person, or as anybody really, is going to be no, no, no, don't touch anything. It's fragile and brittle. And it's not because they're jerks or anything. They just know the risk associated with the consequences of touching something. So yes is hard, right? Yeah. Okay, so I've got to ask you guys a question. How come the networking industry hasn't solved this problem? Well, there are a few different reasons, and I feel it's because we've had very tightly coupled, very tightly controlled systems that have been deployed as appliances, without allowing operators to go ahead and add their innovations onto those items. So if you look at the way the compute world has moved along in the past 15, actually 30 years, I mean really the revolution started with Linux, right? From their particular perspective. You have Linux, you can open up the system. You get people who can start doing open source items. Everyone knows the end of that story. Linux is the most successful monolithic code base that's ever existed, right? It took 15 years for anyone in the network industry to even run Linux on a switch. I mean, that's pretty huge in my mind, right? That's called lag. Yeah. And even when they got it on the particular switch, they're running older versions of the kernel. They're running back-ported versions of code that don't work with the most modern applications that are out there. And they really have it in their tight little walled garden that you can't adjust things in. And that was an operational mode at the time. I mean, you know, networks were stable. They weren't that complicated. And hence the lag, and many feel they've been left behind. Yeah.
The operational efficiencies that may have functioned when you have dozens of devices don't function when you have hundreds and thousands of devices. And so when you look at even the way they presented their operating system from a config standpoint, it is a flat config file that's loaded from disk at boot. That's the same paradigm people have followed for 40 years. Why do we still think that works today? Compute has left that behind. They're going to programmatic APIs, whether it be Kubernetes or Docker, where they have everything built into one ephemeral container that gets deployed. Why doesn't the network work the same way? And I really believe it's a closed ecosystem that hasn't allowed people to put their innovations on there. Yeah, it's almost a demarcation point in time, if you think about history and how we got here, where it's like, okay, we've got perimeters. We've got firewalls and switches, top-of-rack stuff. So you've got scale. It's bolted down. It's secure. And in comes cloud, in comes IoT. So there's almost a point, you could almost pick the year. Was it 2008 through 2012? We started to see that philosophy. So the question I have to ask you is, what was the tipping point? Because, you know, the fire finally got lit under the butts of the networking guys. And some were saying, well, they have to evolve or they'll end up like the mainframe guys. I was like, not really, because mainframes, that was just different, that was client server. Networks aren't going away. They're around. What was the tipping point? What made the network industry stand up? So yeah, what it is, is being able to buy infrastructure with a credit card, right? Because as soon as I've got a problem as an application owner or as a developer, I say, hey, I've got this thing that I've got to release. And I go to the network team and say, I've got this new thing. And I get any sort of pushback. Now you look at cloud, right?
AWS, Azure, Google, all the different options out there. Fine, I don't need these guys anymore. Let me grab a credit card, slide it, boom, now I can buy my infrastructure. That's really the shift. That's what's pushing folks away from those kinds of classic network infrastructures, because they can do something else, right? So cloud is clearly driving it. I would say so, yeah, absolutely. All right, so the path to solve these problems, you guys have an interesting solution. What's the path? What's the solution that you guys are bringing to market? Sure, so the way I view the landscape, if you look at where this innovation has happened on the compute side in the last little bit, whether it be cloud or some of the cloud native items out there, it's all come from the operators. It hasn't been vendors sitting there going in to supply it. The operators have kind of morphed themselves into vendors, but they didn't originate as vendors, right, to go ahead and supply these systems. And so what I see as the solution is to start enabling operators, the people who are running networks, to control their own destiny and manage how their networks are deployed, right? And this boils down, from our perspective, to a microservices, containerized network operating system that is not bespoke, not proprietary, but is using the ecosystem that has been built by these people on the compute side, specifically the cloud native universe and the cloud native world, and applying those paradigms and shims onto network equipment. Yeah, learn from the cloud, right? Like, don't try to make something better. Look at the reasons why folks are going to the cloud. Look at the API structures, look at the ease of launching instances, look at the infrastructure that you can build with a few clicks, and say, what can I learn from that environment to be able to mimic that in my private environment?
Yeah, and this is why we looked at Kubernetes as a really big piece of our infrastructure, using the Kubernetes API as the main interface into our device, for multiple different reasons: it's expandable, you can do a bunch of different custom options to extend that API, but it also allows people who are in DevOps to look at it and go, I understand how this works, I know how these shims function, and to come to the realization that networking is not that much different from the compute world. So you guys embraced integration, because deployment, the CI/CD pipeline, all that good stuff, and Kubernetes. Even Apple was at the CNCF conference, they had a booth there, no one would talk, but certainly Kubernetes is becoming part of that cloud native world. What's the important problem that you guys are solving, now with cloud native? Because the DevOps ethos is trickling up and down the stack, and certainly we know what cloud is. What specifically is the problem that you're solving? So a couple of things. Obviously you have your application time to service: the faster you can deploy your application, the faster you can get up and running, the faster you get people using it, you make more money, you save money, right? You have security, no one wants to be in that box of having a security vulnerability happen on their particular... Or non-compliance. Yes, or non-compliance with a particular thing, with PII, PCI, SOX, and all the things that come along with that. And finally it's the operational efficiency of day two operations. We've gotten pretty good as an industry at day one operations, deploying and then walking away: we don't want to do anything, no, no, no, we can't change the network anymore. It's really that next day, when you have to do things like update those applications or have a new application that gets moved. Containers are ephemeral.
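To make the "Kubernetes API as the interface to the switch" idea above concrete, here is a rough sketch of what declaring a routing daemon could look like, written as a Python dict in standard Kubernetes Pod form. Every specific name here (the image, the node-selector label) is hypothetical and invented for illustration; only the overall Pod structure follows the Kubernetes API:

```python
# Hypothetical example: each protocol daemon is its own pod, scheduled onto a
# switch via a node selector. The image name and labels are invented; the
# shape of the object is the standard Kubernetes Pod schema.

bgp_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "bgp", "labels": {"app": "routing"}},
    "spec": {
        "nodeSelector": {"network-role": "tor-switch"},  # target the switch
        "containers": [
            {
                "name": "bgpd",
                "image": "example/bgpd:1.4.2",  # one protocol, one container
            }
        ],
    },
}

# Anyone fluent in DevOps tooling reads this the same way they read a
# compute workload, which is exactly the point being made above.
print(bgp_pod["kind"], bgp_pod["spec"]["containers"][0]["name"])
```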
The average container lasts two to three days. VMs last 23 days. Monolithic apps that just sit on bare metal compute last for years. So when we start moving to a world where a two-to-three-day ephemeral app can be removed or moved or placed in a different location, the network needs to be able to react to that. And it needs to ensure that not only uptime but availability is there, for sure. And it's not management tools that are going to fix it, right? This is sort of our core argument. You look at all of the different solutions that have come out for the last seven, eight, nine years in the open networking space. They're trying to solve this from a management perspective, with different SDN profiles, different solutions for these management, day two operations issues, right? And our core argument is that the management layers on top aren't what needs to change. That can change, and if you adopt Kubernetes you get that along with it, but you need to change the way the network OS itself is built, so that it's not so brittle, so that it's not so fragile: breaking it into microservices, breaking it into containers, so that you can put it into a CI/CD pipeline. You try to take a monolithic network OS and put it into a CI/CD pipeline, you're going to be pushing a rock up the hill. It's funny, we've had Scott McNealy, founder of Sun Microsystems, on theCUBE, and we asked him one time, hey, what do you think about the cloud? He goes, I should have called it the cloud; "the network is the computer" was his philosophy, and he could have just called it the cloud and that would have been it. So if you take that network-is-the-computer concept, the management environment is not a key subsystem of the network. It's a component, but the operating system has subsystems. So I like this idea of a network operating system.
Talk about what you guys do with your network operating system, and what does day two actually mean? Sure. So when you take your services and divide them up into containers, call them microservices, you're basically taking a single service, putting it in a container along with whatever dependencies might be associated with it. What you end up with is the ability to replace or update that particular container independently of the other components on the system, if an issue happens or if you want a new feature or functionality. The other thing you can do is slim down what you're running, so you don't have to run the 200-plus features, which is the average you see on just a top-of-rack device, when you only use maybe 10 to 20% of them. Why do I have all these extra features that I have to qualify and that may introduce a bug into my particular environment? I want to run the very specific items that I know I need to get my application up and running. And the ability to pull in the cloud native environment and tools to do that lets you get the efficiencies they've learned not only in the cloud but also in doing on-prem Kubernetes private cloud deployments, for running your network and running your applications. It's learning from the hyperscalers too, right? I mean, we had this when we were running networks, right? You'd put every protocol on a whiteboard and then you'd start crossing them off, and you'd start arguing in a room full of people: why do I need this feature? Why do I need this other feature? It's like you have to justify it, and we know this is happening up the road at places like Facebook, like Google, right?
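The whiteboard exercise described above, start from everything the image ships and cross off what the design doesn't need, reduces to simple set arithmetic. A toy sketch follows; the feature names are placeholders, though the 200-plus figure comes from the conversation itself:

```python
# Toy sketch of feature slimming: everything shipped in the image must be
# qualified, so every unused feature is pure test burden and attack surface.

SHIPPED = {f"feature-{i}" for i in range(200)} | {"bgp", "ecmp", "lldp"}
NEEDED = {"bgp", "ecmp", "lldp"}       # what this fabric design actually uses

def qualification_surface(shipped, needed):
    """Split the image into the features you run and the ones you only test."""
    return shipped & needed, shipped - needed

used, unused = qualification_surface(SHIPPED, NEEDED)
print(f"run {len(used)} features, drop {len(unused)} from the image")
```

With a containerized OS the `unused` set simply never gets installed, which is the "run only what you need" argument in miniature.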
We know that they're saying, hey, the fewer features I have running, the simpler my environment is, the easier it is to troubleshoot, the less that can go wrong, and the fewer security vulnerabilities I have. It's all goodness to run less, right? So if you give people the ability to actually do that, they have a substantially better network. What's unique about what you guys are doing? How would you describe the difference between what you're doing and what people in the industry might be looking at? So if you look at what other folks in the network industry do when they look at cloud native or Kubernetes, everything they do is a bolt-on onto an old architecture that's been around for 25 years. So it's like a marriage between these two items: it's how you go ahead and have this plug-in that interacts with that. Forget all that. You're going to end up in the same spot, with another thing you're adding on, and another thing, and another thing. It's abstraction layers on top of abstraction layers. We're taking the approach where it is native to the network operating system. Kubernetes, Docker, microservices and containers are native to the system. We're not adding anything on, we're not bolting anything on. That's how it is architected and designed to be run. And that's key, right? The thing that we really took away from our operational experience is that the decisions being made at that CIO, CTO level, and even at the director of infrastructure level, are going to be: we're looking to build an on-prem solution, and Mr. Customer is saying, I need it to be orchestrated by an open, non-proprietary platform. That gets rid of all of the platforms that are currently out there from the traditional network OEMs, right?
If you start out saying my orchestration platform has to be shared across compute, storage and network, and it has to be open, and it has to be non-proprietary, that pretty much leaves Kubernetes as really your only choice. And Kubernetes is important. Yeah, it's hugely important to us, right? We knew that when we broke everything into containerized microservices, you need something to orchestrate those. So what we've done is we said, hey, we're going to use this Kubernetes tool. We're going to embed it on the device itself and we're going to run it natively, so that it can be the control point for all the different containers that are running on the system. That's awesome, guys, great stuff. We're looking forward to chatting more. Final question: what words of wisdom do you have for other folks out there? Because there are a lot of worlds colliding as we look at the convergence of, you know, the cloud architect, which by the way is not a well-defined position. You have infrastructure folks who have gone through machinations of roles: network engineer, this, that, the other thing. Programmable networks are out there. You're seeing real-time data, IoT, security, all coming together. So a lot of people are reevaluating. What's your advice to folks out there who are evaluating, running POCs, and rethinking their architecture? So the first thing, and I think this is pretty common for folks to hear, is evolve or you're not going to be relevant anymore. You need to actually embrace these other items. You can't ignore cloud. You can't pretend, like, I have a network and these applications will never move, because eventually they will and you're going to be out of a job. And so you need to start looking at some of the items that are out there in the cloud native universe, the Kubernetes universe, and realize that networking is not a special silo that's completely different from, you know, DevOps and other items.
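Editor's note: the reason an orchestrator matters once the OS is broken into containers is the reconciliation loop, declare the desired set of daemons and let a control loop restart whatever dies. A minimal toy version of that idea (a sketch of the concept, not Kubernetes' actual implementation) looks like this:

```python
# Minimal sketch of declarative reconciliation: the operator declares which
# protocol containers should run on the switch; the loop computes what to
# (re)start so the running state matches the declared state.

DESIRED = {"bgp", "lldp", "snmp"}

def reconcile(running):
    """Return the set of containers the orchestrator must start."""
    return DESIRED - set(running)

print(reconcile({"bgp", "snmp"}))   # lldp has died, so it gets restarted
```

Running this loop continuously, rather than applying a flat config file once at boot, is exactly the shift from the 40-year-old paradigm criticized earlier in the conversation.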
They need to be working together, and we need to get these two groups to communicate with each other, to actually move the ball forward and get applications out there faster for customers. The thing I would say to infrastructure folks, especially those that are going to a cloud strategy, is don't let the ivy and the moss grow on your on-prem solutions, right? Go into your multi-cloud strategy with: I'm going to have some stuff in AWS, I'm going to have some stuff in Azure, I'm going to have some stuff in Google, I might have some stuff overseas because of data sovereignty, but I'm also going to have things that are on-prem. Look at your on-prem environment and make it better, to reflect what you can do in the cloud, because once your developers get used to the API structures in the cloud, they're going to want something very similar on-prem. And if they don't have it, then your on-prem is going to rot. And you're going to have some part of your business that has to be on-prem, and you're going to give it a level of service that isn't as good as the cloud, and nobody wants to be in that situation. Glenn, Adam, thanks so much for sharing. Congratulations on the launch of SnapRoute. Great journey. Thanks for coming in and sharing the conversation. Yeah, absolutely. I'm John Furrier here in Palo Alto, at theCUBE studios, for a CUBE conversation with SnapRoute launching. I'm John Furrier. Thanks for watching.