from our studios in the heart of Silicon Valley, Palo Alto, California. This is a CUBE Conversation.

Hello everyone, welcome to this CUBE Conversation here in Palo Alto, California. I'm John Furrier, host of theCUBE, here in theCUBE Studios. We have Dom Wilde, the CEO of SnapRoute, and Glenn Sullivan, co-founder of SnapRoute. Hot startup, you guys are out there. Great to see you again. Thanks for coming on, appreciate it.

Thanks.

You know, the famous work you guys did at Apple, we talked about it last time. You guys were in buildup mode, bringing your product to market. What is the update? You guys are now out there with traction. Dom, give us the update. What's going on with the company? Quick update.

Yeah, so if you remember, we've built sort of the new generation of networking, targeted at the next generation of cloud around distributed compute networking. And we have built a cloud native microservices architecture from the ground up to reinvent networking. And we now have the product out. We released the product back at the end of February of this year, 2019. So we're out with our initial POCs. We've got a couple of initial deals already done and a couple of customers of record, and we're deployed, up and running, with a lot of interest coming in. And I think that's one of the topics we want to talk about here: where is the interest coming from, and where is this new build out of networking, new build out of cloud, happening?

Yeah, I want to get the detail on that traction, but real quick, what is the main motivator for some of these interest points? Obviously you've got traction. What are the main traction points?

So a couple of things. Number one, people need to be able to deploy apps faster. The network has always traditionally gotten in the way. It's been an inhibitor to the speed of business.
So number one, we enable people to deploy applications much faster, because we're integrating networking with the rest of the infrastructure operational model. We're also solving some of the problems around, or in fact all of the problems around, how do you keep your network compliant and security patched, and make it easier for operations teams to do those things and get security updates done really, really quickly. So there's a whole bunch of operational problems that we're solving. And then we're also looking at some of the issues around how do we have both a technology revolution in networking and an economic revolution. Networking is just too expensive, and always has been. And so we've got quite a revolutionary model there in terms of bringing the cost of networking down significantly.

Glenn, as the co-founder, as the baby starts to get out there and grow up, what's your perspective? Are you happy with things right now? How are things going on your end?

Absolutely. The thing that I'm proudest of is the innovation that the team has been able to drive, based on having folks that are real experts in Kubernetes, DevOps and networking all sitting in one room, solving this problem of how you manage a distributed cloud using tool sets that are cloud native. That's really what I'm proudest of, the technology that we've been able to build and demonstrate to folks, because nobody else can really do what we're doing with this mix of DevOps and Kubernetes and cloud native engineering alongside general network protocol and systems people.

You know, it's always fun to interview the founders, being an entrepreneur myself. Sometimes where you get is not always where you thought you'd end up. But you guys always had a good line of sight on this cloud native shift in the modern infrastructure. When you worked at Apple, which we talked about in our last conversation, it really was obviously leading the way.
There was pressure from the marketplace, being a trillion dollar valuation company. But that was an early indicator. You guys had clear line of sight on this new modern architecture. Kind of a cloud 2.0, as we were saying before we came on camera. This is now developing, right? So you guys are now in the market. You're riding that wave. It's a good wave to be on, because certainly app developers are talking about microservices. They're talking about Kubernetes. They're talking about service meshes, stateful data. All these things are now part of the conversation, but it's not siloed organizations doing it. So I want to dig into this topic of what is cloud 2.0. How do you guys define this cloud 2.0? And what is cloud 1.0? Then let's talk about cloud 2.0.

Yeah, so cloud 1.0, huge success. The growth of the hyperscale vendors. You've got the success of Amazon and Microsoft Azure and all of these guys. And that was all about the hyper centralization of data, bringing all the disparate data centers that enterprises used to run, and all that infrastructure, into relatively few geographic locations, and hyper centralizing everything to support SaaS applications. Massively successful, because really what cloud 1.0 did was it made infrastructure invisible. You could be an application developer and you didn't have to manage or understand infrastructure. You could just go and deploy your applications. So, the rise of SaaS with cloud 1.0. Cloud 2.0 is actually an evolution in our mind. It's not an alternative. It's actually an evolution that complements what those vendors did with cloud 1.0. But it's actually distributing data. So we pulled everything to the center, and now what we've seen is that the applications themselves are developing such that we have new use cases, things like augmented reality in retail. We have massive sensor networks that are generating enormous amounts of data.
We have self-driving cars that need rapid response time for safety things. And so what happens is you have to put compute closer to the devices that are generating that data. So you have to geographically disperse and have edge compute, and obviously the network that goes with that to support it. You have to push that out into thousands of locations geographically. And so cloud 2.0 is this move where we've got this whole new class of cloud service providers, and some regional telcos and things, who are reinventing themselves and saying, hey, we can actually provide the colos. We can provide these smaller locations to host these edge compute capabilities. But what that creates is a huge networking problem. Networking in massively distributed cases is a really big problem. What it does is it amplifies all of the problems that we've coped with in networking for many years. And I mean, Glenn, you can talk about this, right? You know, when you were at Apple, one of the first real-time apps was Siri.

I mean, let's get back to the huge networking problem, but I want to just stay on the thread of cloud 2.0. Glenn, you were talking about that before we came on camera. You referenced your time at Apple, kind of a peek into the future around what cloud 2.0 was. Can you elaborate on this notion of real-time latency as an extension to the success of cloud 1.0?

Right, so we saw this when we were deploying Siri, right? Siri was originally just a centralized application, just like every other centralized application. You know, iTunes, you buy a song. It doesn't really have to have that much data about you when you're buying that song, and you go and you download it via the CDN, and it gets it to you very quickly, and you're happy and everything's great. But Siri kind of changed that, because now it has to know my voice. It has to know what questions I ask. It has to know things about me that are very personal, and it's also very latency sensitive, right?
The quicker that it gets me a response, the more likely I am to use it; the more data it gets about me, the better the answers get. Everything about it drives that the data has to be close to the edge. So that means the network has to be a lot bigger than it was before. And this changes the architectural view.

So just to summarize what you said: iTunes doesn't need to know a lot about you, it just needs to deliver the songs.

Right.

The network delivers it, okay, easy.

Right.

Click and done. But the voice piece kind of changed the paradigm a little bit, because it had to be optimized and tuned for real time, low latency, accuracy. A different problem set than, say, iTunes. So they've now worked together.

Right, language specific, right? So where is the user? What language are they speaking? How much data do we have to have for that language? It's all very, very specific to the user.

So Cloud 2.0 then is, if I could piece this together... Cloud 1.0 we get, Amazon showcased there. It's a data problem too. It's like AI, you've seen the growth of AI kind of validate that. It's about data personalization. Siri is a great example. Edge, where you have data that needs to integrate into another application. So if Cloud 1.0 is about making the infrastructure invisible, what is Cloud 2.0 about? What's the main value proposition?

It's about, I mean, to me it's about extracting the value from the data and personalizing it. And it's about being able to provide more real time services and applications while maintaining that infrastructure invisibility paradigm. That is still the big value of any cloud, any sort of public cloud offering: I don't want to own the infrastructure. I don't want to know about it. I just want to be able to use it and deploy applications. But it's the types of applications now, and the value that the applications are delivering, that has changed.
It's not just a standard SaaS application like Workday, for instance, which is still a fairly static application, a monolithic application. These are real time apps. They're operating in real time. If you take an autonomous car, right, if I'm about to crash my car and the sensors are all going off and it needs to brake, and it needs to send information back and get a response, I want all that to happen in real time. I don't want to have a little delay.

In any abstraction layer, or any kind of layer of innovation, 1.0, 2.0, you're implying advancement. It's still an application developer opportunity, Glenn, right? Because at the end of the day, the user expectations have changed because of the experience that they're getting.

It's just crazy. It only gets worse, right? Because the more network that I have, the more distributed the network is, the harder it is to manage it. So if you take that network OS, the really, really boring, not very exciting thing, and treat it the same way you always have, and try to take what you learned in the data center and apply it at the edge, you lose the ability to really take advantage of all the things that we've learned from the cloud native era and from public cloud 1.0, right? I mean, just look at containers, for instance, right? Containers have taken over, but you still see this situation where most of the applications that are infrastructure-based aren't actually containerized themselves. So how can they build upon what we've learned from public cloud 1.0 and take it to that next level, unless you start replacing the parts of the infrastructure with things that are containerized?

Just as a side note, going through my head right now: there's going to be a huge conflict over who leads the innovation in the future, on-premises or cloud.
And that's going to be kind of an interesting dynamic, because you could argue that containerization in networking is a trend, and it makes sense to be cloud native, but now you've got on-premises. It's going to be a dynamic we're going to have to watch. But you mentioned, Dom, this huge networking problem that evolves out of cloud 2.0.

Absolutely.

What is that networking problem, and what specifically is a directionally correct solution for that problem?

So I think the biggest problem is an operational one. In the cloud 1.0 era, and even prior to that, when we were in sort of hosted enterprise data centers, we've always built data centers, and the applications running in them, with the assumption that there are expert resources physically there, so that if something goes wrong, they can hands-on do something about it. With cloud 2.0, because it's so distributed, you can't have people everywhere. And one of the challenges that has always existed with networking technology and architectures is that it is a very static thing. We set it, we forget it, we walk away, and we try not to touch it again, because it's pretty brittle, and we know that if we do touch it, it probably breaks and something goes wrong. And we see today a ton of outages. We were talking about a survey the other day that said the second biggest cause of outages in the cloud age is still the network. And so it's an operational problem whereby I want to be able to go and touch these thousands of devices. Usually I'm fixing a bug or I want to add a feature, but more and more it's about security. It's about security compliance, and I want to make sure that all my security updates are done. With a traditional network operating system, we call it the monolith. All of the features are in one big blob. You can turn them off, but you can't remove them. So it's a big blob, and all of those features are interdependent.
When you have to do a security patch in a traditional model, what happens is that you actually replace the blob. You're going to remove that and put a new blob in place. And that's a hard enough operational problem all on its own, because when you do that, you have to down things and up things. Consequently, anyone who's done any kind of operation shifting out hardware knows it's a multi-day, multi-week operation.

It is, but you know, what people do is they overbuild the network. So they have two of everything, so that when they down one, the other one stays up. When you're in thousands of geographic locations, that's really expensive, to have two of everything.

So the problem statement is essentially, how do you have a functional, robust network that can handle these kinds of apps? IoT, is that it?

Yeah, it is, absolutely. But as I said, it's important to understand that you have this monolith that is getting in the way of this robust network. What we've done is we've said, well, apply cloud-native technology and thinking. Containerize the actual network operating system itself, not just the protocols, but the actual infrastructure services of the operating system. So if you have to security patch something, or you have to fix something, you can replace an individual container and you don't touch anything else. So you maintain a known state for your network. That device is probably going to be way more reliable, and you don't have to interrupt any kind of service. So rather than downing and upping the thing, you're just replacing a container.

So you guys built a service on top of the network to make it manageable, make it more functional, is that it?

We actually didn't build it. This is the beautiful part. If we built it, then I would just be another network vendor that says, hey, trust my proprietary, not open, solution, I can do it better than everyone else.
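As an editor's aside, the container-replacement model Dom describes can be sketched in a few lines of Python. This is an illustrative toy, not SnapRoute's actual implementation; the service names and version strings below are invented.

```python
# Toy model of a containerized network OS: each infrastructure service
# runs as its own container, so a security patch swaps one container
# while everything else (and the data plane) keeps running.

# Hypothetical service names and versions -- purely illustrative.
nos = {
    "bgp": "v1.4.2",
    "dhcp-relay": "v2.0.1",
    "telemetry": "v0.9.7",
}

def patch_service(services, name, new_version):
    """Replace a single service container, leaving all others untouched."""
    patched = dict(services)      # the known state is preserved...
    patched[name] = new_version   # ...except the one patched service
    return patched

# Patch only the BGP container; dhcp-relay and telemetry never restart.
updated = patch_service(nos, "bgp", "v1.4.3")
print(updated["bgp"], updated["telemetry"])
```

Contrast this with the monolith model in the transcript, where the same CVE fix means swapping the entire OS image, so every feature goes down and comes back up together.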
And that would be what the traditional vendors did with stuff like ISSU and things like that. We've actually just used Kubernetes to do that. So you already trust Kubernetes. It came out of Google, everybody's adding to it. It's the best community project ever for distributed systems. So you don't have to trust that we've built the solution. You just trust in Kubernetes. So what we've done is we've made the network native to that, and then used that paradigm to do these updates and keep it current.

And the reason why you're getting traction is you're attractive to a network environment because you're not there to sell them more networking. You're there to give them more network capability.

With Kubernetes. Yeah, we're attractive to a business for two reasons. We're attractive to the business because we enable you to move your business faster. You can deploy applications faster, more reliably. You can keep them up and running. So from a business perspective, we've taken away the pain of the network interrupting the business. From an IT operations, network operations perspective, what we've done is we've made the network manageable. As you said, we've taken this paradigm and said, what would have taken months of pre-testing and planning and troubleshooting at two o'clock in the morning has now become a matter of seconds, in order to just replace a container. And that eases the burden operationally, and now those operational teams can go and do worthwhile work that is more meaningful than just testing a bunch of vendor fixes.

Even though Cloud 1.0 had networking in there, compute and storage, I think Cloud 1.0 was really about compute and storage. Cloud 2.0 is really about the network, and all the data that's going around to help the app developers scale up their capability.

Yeah, that's a great way to put it, I think.

All right, so how about the use cases?
I think the next track that I'd love to dig in with you guys on is, as you guys are pioneering this new modern approach, some of the use cases that you touch are probably also pretty modern. What specific use cases are you guys getting into, or your customers talking about? What are some of these Cloud 2.0 use cases that you're seeing?

Yeah, I mean, one we already touched on, which is sort of horizontal and general, was the security one. I mean, security is everybody's business today. And it's a very, very difficult networking problem, keeping things compliant. If you take, for instance, recently Cisco announced that there were some 40 vulnerabilities in their mainstream Nexus products. And you know, I mean, that's not a terrible thing. It's the normal course of business. And they put out patches and fixes and said, hey, here it is. But now when you think about the burden on any IT team, that comes out of the blue. They hadn't planned for it. Now they have to take the time to take a step back. And what they have to do is say, well, I've got this new code. I don't know what else was fixed or changed in it. So I now have to retest everything and retest all of my use cases. And I have to spend considerable time to do that, to understand what else has changed. And then I have to have a plan to go out and deploy this. That's a hard enough problem in a centralized data center. Doing that across hundreds, if not thousands, of geographically dispersed sites is a nightmare. But this is the new world we live in. This is going to happen more and more and more. And so, being able to change that operational model to say, actually, this is trivial, and actually, what you should be doing is doing these updates every day to keep yourself compliant.

Do the use cases, Glenn, have certain characteristics? I mean, we've talked about latency and bandwidth. That's a traditional networking kind of philosophy. Are there certain characteristics that these new use cases have?
Is it latency and bandwidth? No, no, it's mostly about bringing properties like CI/CD to networking, right? So the biggest thing we're seeing now is, as people start to investigate disaggregated networking and new ways of doing things, they're not getting this free pass that they used to get for the network, because the network isn't just an appliance anymore, right? When you had something that was from one of the big three vendors, you'd say, okay, that thing runs some version of Linux on it. I don't know what it is. Maybe it runs FreeBSD, in Juniper's case. I don't understand what kernel it is. I don't care. Just keep that thing up to date. But now it's like, oh, I'm starting to add more services to my network devices. Say in the remote sites, I want to kickstart some servers with these network devices I install first. Well, that means that I have to start treating this thing like it's another server in my environment for my provisioning, right? That means that everything on that box has to be compliant, just like everything else. Let's not even get into personal credit card information and personally identifying information. Everything is becoming more and more heightened from a compliance standpoint.

It's a surface area device. I mean, it's part of the surface area.

It has to be. And if it's not inside a data center, then it's even worse, because you can't guarantee the physical security of that device as much as you could if it was inside a regular data center.

So this is a new dynamic that's going on, with the advent of security regulatory issues, and also obviously the perimeter being dismantled because of cloud.

Absolutely. Yeah, I mean, you asked about specific use cases. I mean, there are multiple verticals and industries that are having these challenges. I mean, retail is a good example, point of sale. Anywhere where you have a sort of branch kind of problem or mentality, where you're running sophisticated applications.
And by the way, people think of point of sale as not terribly sophisticated. It's incredibly sophisticated these days. Incredibly sophisticated. And there are thousands of these devices, hundreds of stores, thousands of devices. Similarly with healthcare: again, distributed hospitals, medical centers, doctor's offices, et cetera, all running sort of private, mission critical data. I think one of the ones that we see coming is this kind of autonomous car thing, and as we get IoT sensor networks, large amounts of data being aggregated from those. So there's lots of different use cases. We have an awful lot of interest, and, I mean, quite frankly, the challenge for us as a startup is keeping focused on just a few things today. But the number of things we're being asked to look at is just enormous.

Well, there's tailwinds for you guys in terms of the momentum you have. This cloud 2.0 trend, which we talked about, but hybrid cloud and multi-cloud is essentially distributed cloud and edge, if you think about it. And that's what most companies are going to do. They're going to keep their on-premises infrastructure, and they're going to treat it as either on their platform or an external remote location. It's going to be everywhere, big surface area. So with that, what are some of the under the hood benefits of the OS? Can you go into more detail on that? Because I find that to be much more interesting to, say, the network architect, or someone who's saying, hey, you know what? I've got hybrid cloud right now, I've got Amazon. I know the future's coming onto my front doorstep really fast. I've got to start architecting. I've got to start hiring. I've got to start planning for distributed cloud and distributed edge deployments, if not already doing it. So technical debt becomes a huge issue. I might try some things with my old gear or old stuff. They're in this mode. You know, a lot of people are kind of in that mode.
I'll take on a little technical debt to learn, but ultimately I've got to build out this capability. What do you guys do for that?

So the critical thing for us is that you have to standardize on an open, non-proprietary orchestration layer, right? You can talk about containers and microservices all day long. We hear those terms all the time. But what people really need to make sure they focus on is that the orchestrator that's managing those containers is open and non-proprietary. If you pull that from one of the current vendors, it's going to be something that is network-centric, and it's going to be something that was developed by them for their use, right? They're basically saying, here's another silo, keep feeding into it. Sure, we give you APIs. Sure, we give you a way to programmatically configure the network. But you're still doing it specifically for me. One of the smartest decisions we made, besides just using Kubernetes as core infrastructure, is we've also completely adopted their API structure. So if you already speak Kubernetes, if you understand how to configure network paradigms in Kubernetes, we just extend that. So now you can take somebody off the street who might be a cloud native Kubernetes expert and say, here's a little bit of networking, go deploy the network, right? You take down the barrier of what you have to teach them, from this CLI and this API structure that's specific to this vendor, and then that CLI and that API structure that's specific to that vendor. But the cool thing about what we're doing is we also don't leave the network engineers out in the cold. We give them a fully cloud native network CLI that is just like everything else they're used to, but it's doing all this cloud native Kubernetes microservices container stuff underneath, hiding all that from them so they don't have to learn it. And that's powerful, because we recognize, because of our ops experience, there's a lot of different people touching these boxes.
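An editor's aside: extending the Kubernetes API the way Glenn describes is typically done with Custom Resource Definitions, where a network object is declared in the same apiVersion/kind/metadata/spec shape as any other Kubernetes resource. The sketch below builds such a manifest as a plain Python dict; the `BGPPeer` kind, the API group, and every field name are invented for illustration, not SnapRoute's actual schema.

```python
# Illustrative sketch: declaring a network object (a BGP peer) in the
# same declarative style Kubernetes uses for Deployments and Services.
# The kind, API group, and spec fields here are all hypothetical.

def bgp_peer_manifest(name, peer_ip, remote_as, local_as):
    """Build a Kubernetes-style custom resource manifest for a BGP peer."""
    return {
        "apiVersion": "networking.example.com/v1",  # hypothetical CRD group
        "kind": "BGPPeer",                          # hypothetical kind
        "metadata": {"name": name},
        "spec": {
            "peerAddress": peer_ip,
            "remoteAS": remote_as,
            "localAS": local_as,
        },
    }

# A Kubernetes-fluent engineer could apply the YAML form of this dict
# with the same declarative tooling used for any other workload; the
# point is that no vendor-specific CLI has to be learned first.
manifest = bgp_peer_manifest("spine1-peer", "10.0.0.1", 65001, 65000)
print(manifest["kind"], manifest["spec"]["peerAddress"])
```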
Whether you put it in an ivory tower or not, you've got NOCs that have to log in and check them. You've got junior network admins, senior network engineers, architects, you've got cloud native folks, Kubernetes folks. Everybody has to look at these boxes. So they all have to have a way into the switch and into the routers that is native to what they understand. So that's very critical, to present data in a way that makes sense to the audience.

And also give them comfort with what they're used to. Like you said before, if they've got whatever's running Linux on there, as long as it's operationally running, the water's flowing through the pipes, or the packets are moving through, they're happy. But they've got to have this new capability to please the people who need to touch the boxes and work with the network, and give them some more capabilities.

Right, and it prevents you from building those silos, which is really critical in the cloud native world. And that's what public cloud 1.0 taught us: stop building these infrastructure silos. Look at AWS right now, there are AWS certified engineers. They're not network experts, they're not storage experts, they're not compute experts, they're AWS experts. And you're going to see the same thing happen with cloud native.

Cloud 3.0 is decimating the silos, basically, because if this goes to that next level, that's why horizontally scalable networks are where you go, right? That's kind of what you were talking about.

Exactly.

That's the use case. I mean, I think all revolutionary ideas are actually more transformational. I mean, revolutions begin by taking something that is familiar and presenting it in a new way, and enabling somebody to do something different. So I think it's important, as we approach this, to not just come in and go, oh, what you're doing is stupid, replace it.
The answer is: what you're doing is obviously the right thing, but you've not been given the tools that enable you to take full advantage and achieve the full potential of the network as it relates to your business.

And you guys know as well as we do that the networking folks, it's a high bar for them, because you mentioned the security, and the locked-down nature of networking has always been, you know, you don't eff with it, because anyone who touches it needs to be reviewed. So they're a hard customer to sell to. I mean, you've got to align with their ops mindset.

I mean, I think that network operators have been, you know, and Glenn and our other co-founder wax lyrical about this, but network operators have been forced to live in a world of no. Anytime the business comes to them and says, hey, we need you to do X, the answer is no, because I know that if I touch my stuff, it's going to break, or I'm limited in what I can do, or I can't achieve the timeframe that you're looking for. So the network has always been an inhibitor. But the heroes of the moment are actually the network operations teams, because nobody knows the network like they do.

This is an interesting agile conversation. We were having this here in our CUBE studio yesterday amongst our own team, because we love agile with content. Agile's different. Agile's about getting to yes, because iteration in a sense is about learning, right? So you have to say no, but you have to say no with the idea of getting to yes, because the whole microservices thing is about figuring out, through iteration and ultimately automation, what to tear down. So I was seeing a kind of trend where it's not the no ops kind of guys, as they say, you know, no, no, no. It's no, don't mess with the current operational plumbing, but we've got to get to yes for the new capability.
So there's kind of a shift in the cloud native mindset. Your thoughts and reactions to that, Glenn?

Yeah, so it's basically like, I save myself up so that I'm doing a whole, like, forklift drop with everything in there, like a complete replacement. Networking has always been this way. Like, I'm not saying no to you, I'm just saying not right now. I do my maintenance three times a year, on the third Sunday of the second month, when the moon's in the right place, and I make sure that I've got 50, 60 changes. I've got 20 engineers on call. We do everything, you know, in order. We've got a rollback plan if something breaks. This is the problem. Network engineers don't do enough changes to build the muscle that agile developers have seen, or CI/CD developers have seen, where it's like, I do a little bit of changes every day, something breaks, I roll it back. I do a little bit of changes every day, and if something breaks, I roll it back. That's what we enable, because you can do things without breaking the entire system. You can just replace a container and move on. In classic networking, you're stockpiling so many changes and so many new things that everything has to be a greenfield deployment. How many times have we heard that? Like, oh, this thing would be perfect for our greenfield data center. We're going to do everything different in this greenfield data center. And that doesn't work. We don't get a mulligan in networking, as they say.

Look, this is a good point. Great conversation. I think that is a great follow up topic, because developing those muscles is an operational practice, as well as understanding what you're building. And you've got to know what you're building and know what the outcome looks like. This is where we're starting to get into more of these agile apps. And you guys are at the front end of it. And I think this is a sea change, Cloud 2.0. Yeah. Quick plug for the company. Take the last minute to explain what you guys are up to.
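An editor's aside before the plug: the small-change-plus-rollback discipline Glenn describes can be sketched in a few lines. This is a generic illustration of the CI/CD pattern applied to network config, not SnapRoute code; the health check and the config fields are invented stand-ins.

```python
# Generic sketch of CI/CD-style change management for network config:
# apply one small change, verify health, and roll back automatically
# if the check fails -- instead of batching 50-60 changes per window.

def apply_change(config, change, healthy):
    """Apply a single change; keep it only if the health check passes.

    config  -- dict of current device settings (the known-good state)
    change  -- dict with the one setting being modified
    healthy -- callable that validates the candidate config
    """
    candidate = {**config, **change}   # small, isolated delta
    if healthy(candidate):
        return candidate               # the change sticks
    return config                      # automatic rollback to known state

running = {"mtu": 1500, "bgp_timers": "60/180"}

# Hypothetical health check: reject MTUs that would break the fabric.
ok = lambda cfg: 1280 <= cfg["mtu"] <= 9216

running = apply_change(running, {"mtu": 9000}, ok)  # passes, applied
running = apply_change(running, {"mtu": 64}, ok)    # fails, rolled back
print(running["mtu"])  # still the last known-good value, 9000
```

The design point is the one Glenn makes: many tiny, individually reversible changes build the rollback muscle that one giant maintenance window never does.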
Hiring, funding, what are you guys looking for? Give a quick plug for the company.

Yeah, I mean, we're doing great. We're always hiring. Everybody always is, if you're a cutting edge startup; we're always looking for great new talent. We're moving forward with our next round of funding plans. We're looking at expanding the growth of the company, our go-to-market, doubling down on our engineering. We're just delivering now our Kubernetes fabric capability. So that's the next big functional release, and we've actually already delivered the beta of it. So, taking Kubernetes and actually using it as a distributed fabric. So a lot of exciting things happening technology-wise, a lot of customer engagement happening. So yeah, it's great.

Glenn, what are you excited about now? I see Kubernetes, we know you're excited about that, but what's getting you excited?

So, the dual mode that we have, where we're doing stuff in Kubernetes that nobody else is doing, because we have a version that runs on the switch, and it manages all the containers locally, and then it also talks to a big controller. It's fixing that SDN issue, right? Where you have this SDN controller that manages everything in the data plane, and it controls my devices, and it uses OpenFlow to do this, and it has a headless operation in case the controllers go away. Oh, and if I need another controller, here's another one. So now I've got two controllers. It gets really messy. You've got to buy a lot of gear to manage it. Now we're saying, okay, you've got Kubernetes running locally. You don't want to have a Kubernetes cluster? Don't bother. It just uses it autonomously. You want to manage it as a fabric, like Dom says? Now you can use the Kubernetes masters that you've already built for your other applications. And now we can start to really embed some really neat operational stuff in there.
Things that, as a network engineer, it took me years of breaking stuff and then fixing it to learn. We can start putting that operational intelligence in the operating system itself, to make it react to problems in the network and solve things before waking people up at 3 a.m.

This takes policy to a whole new level.

Absolutely. It's a whole new intelligence layer.

Yeah, if this is broken, do this. Cut off the arm to save the rest of the animal, and don't wake people up to troubleshoot stuff. Troubleshoot stuff during the day, when everybody's there and happy and awake.

Guys, congratulations. SnapRoute, hot startup. Networking is the real area for Cloud 2.0. You've got real time, you've got data. You've got to move packets from A to B. You've got to store them. You've got to move compute around. You need to move stuff around the cloud to distributed networks. Thanks for coming in.

Thanks for having us.

I'm John Furrier here for this CUBE Conversation here in Palo Alto with SnapRoute. Thanks for watching.