Well, welcome to this special CUBE conversation here in Palo Alto. I'm John Furrier, host of theCUBE. It's the Palo Alto studios here in Palo Alto. We're here with Adam Casella, CTO and co-founder of SnapRoute, and Glenn Sullivan, co-founder of SnapRoute. Guys, good to see you. Thanks for coming on. Thank you. So you guys are a hot startup launching. You guys are former Apple engineers running infrastructure, I would say large scale at Apple. Yeah, just a touch. The global nature. Tell the story, how did you guys start the company? Where did it all come from? Obviously, at Apple there was a lot of motivation, a lot to see there. You're seeing huge trends. You probably built a lot of your own stuff. What was the story? So yeah, basically, we were running the large external stuff at Apple, so think of anything you would use as a user: Siri, Maps, iTunes, iCloud. Those are the networks that Adam and I were responsible for, you know, keeping up and keeping stable. And you know, this was a lot of growth. So this is pre-2015. We started SnapRoute in August 2015. So it was a big growth period for iCloud, a big growth period for iTunes, lots of users, lots of demand, lots of building infrastructure in sort of a firefighting mode. And one of the things that occurred is that we needed to move to more of a, you know, building out infrastructure as you need it for capacity. If you start talking to the folks up the road, you know, Facebook and Google and Microsoft and all those folks, you realize that you have to kind of build it and then they will come. You can't always be reactionary, building these kind of bespoke, artisanal networks, right? So he and I had to come at it from both an architectural, topology, kind of network engineering geeky level, and also from an automation, orchestration, visibility standpoint.
So we pretty much had to do an entire re-imagining of what we were building as we went to build these new networks, to make sure we could anticipate capacity and deploy things before, you know, they were necessary. Yeah, and make sure that the network was agile and flexible enough to respond to the needs and changes that were required. And you mentioned obviously the surge came around the 2012, 2013 timeframe. Exactly. Apple's been around for a while, so they were buying boxes and stacking and stacking for years. So they have applications probably going back a decade. Of course. So as Apple started to really, really grow, iCloud and the iPhone hit in 2007, you've still got legacy. So how did you guys constantly reshape the network without breaking it? What were some of the things that you guys saw that were successful? Because it's kind of a case study of, you know, going to the next level without breaking anything. Yeah, the migration was interesting. Essentially what you end up doing is attacking the legacy environments as a rack-by-rack process, right? You have to figure out which applications can most easily move and start with the low-hanging fruit first, so you can start proving out the concepts that you're talking about. If you start with the hardest aspect, the hardest app to move, you're going to hit a lot of roadblocks. You might actually fail, and you won't be able to get where you need to go. Whereas if you take some of the low-hanging-fruit applications that can easily migrate between an old environment and a new environment, you prove it out. It's not dissimilar to environments where things are acquisition heavy. Like we've got some friends at some other Silicon Valley companies that are very acquisition heavy, right? It's a company that's one name on the outside but it's 20, 30 different companies on the inside.
And what they typically end up doing is they treat each one of those as islands of customers, and they build out a core infrastructure and treat themselves more like an ISP. So if you can meld your environment to where you're more like a service provider, and your different legacy applications and new applications are more like customers, then you're going to end up in a better situation. We did a little bit of that at Apple, where they had a really, really core, service-provider-heavy type of infrastructure with all of these different customers hanging off it. Well, you get isolation options there, but integration probably also goes smoother if you're thinking of yourself as a service provider. The demarc is solid and clear, right? So talk about the nature of this. You guys are cloud experts, obviously infrastructure experts. You're really deep in the DevOps movement as it goes kind of multi-integrated, because you've got storage, networking and compute, the holy trinity of infrastructure, all changing and being reimagined. Storage isn't going away; more data is being stored. Networks need to be programmable and secure, and compute is unlimited now, and it's enabling all kinds of innovation. So you're seeing companies, whether it's the Department of Defense with the JEDI contract trying to figure out the best architecture, or an enterprise that might have a lot of legacy trying to reimagine, facing the question of what to do around multi-cloud and data center relationships. What's your perspective on this phenomenon of, okay, we have to have scale. So we're going to have a little bit of on-prem or a lot of on-prem. We're going to have cloud on Amazon, maybe cloud on Microsoft. So there are clearly going to be multiple clouds, but is the answer simply multiple clouds for the sake of being multi-cloud, or is there a reason for multi-cloud, is there a reason for one cloud? Can you guys share your perspective on that?
Sure, the thought might be that it's most important to have one overarching strategy that you adapt to everything. And that's sort of true, right? Where you'd say, okay, well, we're going to standardize on something like Kubernetes. So we're going to have one Kubernetes cluster, and that Kubernetes cluster is going to run in Azure and it's going to run in Google and it's going to run on-prem and all that. It's actually less important that you have one fabric or one cluster or one unified way to manage things. What's more important is that you standardize on a tool set and you standardize on a methodology. And so you say, okay, I need to have an orchestration layer, fine, that's Kubernetes. I need to have a runtime environment for my containerization, sure, that's Docker or whatever other solution you want to have. And then you have API structures that you use to program these things. It's much more important that all those things are standardized than that they're unified, right? You say, I have Kubernetes control and I'm going to control it the same way, whether it's in Azure, whether it's in Google Cloud or whether it's on-prem. That's the more important part, rather than saying I have one big thing and I try to manage it. So to your point, having that control point, that standard with all the APIs, allows for the microservices, allows for all these new agile capabilities. Then it becomes the cloud for the job kind of thing. So if I'm running Office 365, why not use Azure? Yeah, I mean, that's the whole problem with doing technology for technology's sake. Technology doesn't solve problems by itself; a piece of technology is just a piece of technology. And I think that's why you look at cloud native and Kubernetes and Docker, and why Docker initially struggled a little more and why it's been more successful since more of the cloud native stuff came out there.
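The "standardize the tool set, not the cluster" idea Adam describes can be sketched in a few lines: the same declarative spec and the same apply workflow are pointed at independent clusters in Azure, Google Cloud, and on-prem. This is an illustrative sketch only; the cluster names, endpoints, and `apply` helper are invented stand-ins, not any vendor's actual API.

```python
# One deployment spec, written once in the standard Kubernetes shape.
deployment_spec = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web"},
    "spec": {"replicas": 3, "template": {"spec": {"containers": [
        {"name": "web", "image": "registry.example.com/web:1.0"}]}}},
}

# Three separate clusters -- not one unified fabric. Endpoints are hypothetical.
clusters = {
    "azure-east": "https://aks.example.com",
    "gcp-west": "https://gke.example.com",
    "onprem-dc1": "https://k8s.dc1.example.com",
}

def apply(endpoint, spec):
    """Stand-in for `kubectl apply` against one cluster's API server."""
    return {"endpoint": endpoint, "applied": spec["metadata"]["name"]}

# Same spec, same workflow, per cluster: standardized, not unified.
results = [apply(ep, deployment_spec) for ep in clusters.values()]
```

The point of the sketch is that nothing cluster-specific leaks into the spec or the workflow; only the endpoint differs.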
Because cloud native put a process around how you can go ahead and ensure these things will be deployed in a way that is easily managed. You have CI/CD: I want my container, I put it out there, I have a way to manage it with Kubernetes in this particular pipeline, and I have a way to get it deployed. Without that structure, you're just doing technology for technology's sake. Yeah, and it's modernizing too, so that's a great point about the control point. I want to just take it to the next level, which is, back when I was breaking into the business, multi-vendor was a word that everyone tossed around. No, we've got to be multi-vendor. Why? We need choice. Choice is good. Well, choice downstream. It was always something that's an option, more optionality, less of a reality. So, options are good. No one wants vendor lock-in. I mean, that's great. Unless it's affordable and it's fine, right? So, you know, Intel chips are a lock-in, but no one ever cares; it processes stuff and moves on. So the notion of multi-vendor and multi-cloud, how do you guys think about that as you look at the architectural changes of a modern compute, modern storage, modern network facility? I think it's really important to go back to what you said before about Office 365, right? Like why would you run that anywhere other than Azure, right? It's got all the tools. It's really, really critical that you don't allow yourself to get boxed into a corner where you're going to the lowest common denominator across all the platforms, right? So, when you're looking at a multi-cloud or a hybrid cloud solution, use what's best for what you're doing, but make sure that you've got your two or three points that you won't waver on, right? Like Kubernetes, like API integration, like whatever serverless abstraction layers you want, right? Focus on those, but then be flexible enough to put the workloads where they make sense.
And having mobile workloads is the whole point of going into the cloud or having a multi-cloud strategy anyway. The workload mobility is key. Workloads and the apps, that's super important. You mentioned earlier about apps moving around. That's the reality. Correct. If that becomes the reality and is the norm, then the architecture has to wrap around it. How do you advise, and how do you view that unfolding? Because if data becomes a very key part of a workload, data can span the multi-clouds, latency comes in, and now here we go back to latency and the laws of physics. So, as you start thinking about the network and the realities of moving things around, what do you guys see as a directionally correct path for that? Sure, so if you look at your breakdown, okay, you have storage, you have network, you have applications, right? And I heard something a while ago that I actually agree with. It says, you know, data is the new soil, right? And I look at that, okay, if data is the new soil, then guess what? Network is the water and the applications are the seeds. And if you're missing one of those, you're not going to end up with a growing plant. And so if you don't have the construct of having all these things managed in a way that you can actually keep track of all of them and make them work in chorus, you're going to end up where, yeah, I can move my application from point A to point B, but now it fails because I don't have connectivity, or I don't have storage. Or I can move it out there and I have storage and no connectivity, or connectivity and no storage; miss one of those pieces and you don't end up with a fully functioning environment that allows you to use it. So the interplay between storage, networking and compute has to always be tightly managed and controlled, but flexible enough to handle whatever situation, whatever's growing. And you've got to have the metadata, right? Like you've got to be able to get the stuff out of the network.
That's why what we're doing at SnapRoute is so critical: you need to have the data presented, using the telemetry tools of your choice, in a way that gives you the information to move the workloads appropriately. The network can't be a black box. Just like on the storage side, the storage stuff can't be a black box either, right? You have to have the data so that you can place the workloads appropriately. Okay, what was your guys' thesis for SnapRoute when you started the company? What was the guiding principle or the core thesis? And what core problem did you solve? So answer the question, the core problem we solve is blank. What is that? So I think the core problem we solve is getting applications deployed faster than they ever have been, right? And making sure it's done in a secure way and an operationally efficient way. I mean, those are basically the tenets of what we're trying to solve and what we're going for. And the reason is that today the network is holding the business back from deploying their applications faster, whether that be at a colo site, whether that be the local data center, or whether it be in the cloud; from their perspective, connectivity between their local on-prem stuff and whatever might be in AWS or in Google. And enabling that to happen seamlessly so that the network is not in the way or... Yeah, so if you can now see what's happening on the network, and you now have control over that aspect of it, and you do it in a way that's familiar to the people who are deploying those applications, they now have the ability to place those workloads intelligently and make sure they have the configuration and connectivity that they need for those applications. Okay, so I say to you guys, hey, I'm sold, I'm sold, I love this. What do I do next? How do I engage with you guys? Do I buy software? Do I load a box on the infrastructure? What's the SnapRoute solution?
So the first part of the discussion is we talk about hardware, obviously. We don't make our own hardware. That's the whole point of disaggregation: you buy the hardware from somebody else and you buy the software from us. So a lot of times with the initial engagements there's some education that goes on about what disaggregation means, and it's very, very similar to what we saw in the compute world, right? You had your classic environments where people were buying big iron from HP and Dell and IBM and Sun and everybody else, right? But now they can get it from ZTE and Quanta and Supermicro and whoever else. And they wouldn't really think of buying software from those same companies; maybe some management software, but you're not going to buy your Linux version from the same people you're buying your hardware from. So once we explain and kind of educate on that process, and some folks have already learned this, the big cloud providers have already figured this out, then it's a matter of, here's the software solution and here's how to use it. Take me through some of the use cases. What am I plugging into? Am I connecting to certain systems? How would I deploy it? Take me through the use case of installing it. What does it connect to? Sure, so you have your white box, top-of-rack device or switch that you might have there. You load our code on there; we use ONIE to initially deploy the software onto the box. And then you can go ahead and load all the containers on there using things like Helm, pulling them from Harbor, whether that be something that you host locally or internally, or you can bundle it all together and load it in one particular image. And then you can start interacting with that Kubernetes API to go ahead and start configuring the device. Additionally, I want to make sure this is clear to the people who are networking guys going, oh, Kubernetes, God, what is all this? I never heard of this stuff.
We supply a full-fledged CLI that looks and feels just like a regular network device, to act as a bridge from what those guys are comfortable with today to where the future is going. And it sits on top of that same API. So network guys will be comfortable with this? Correct. And they get to do stuff using cloud native tools without worrying about understanding microservices or containerization. They now have the ability to pull containers off and put new containers on in the way they would normally use a CLI. I want to get you guys' thoughts on a trend that we've been reporting on and commenting on in theCUBE, and I certainly have been a lot for the past couple of years, the past year in particular, covering cloud native since the CNCF and KubeCon started, when we were there, when that kind of started. Developers, we know that world. DevOps has seen agile, blah, blah, blah, all that good stuff. Networking guys used to have the keys to the kingdom. They were gods. You were a networking engineer. Oh yeah, I'm the guy saying no all the time. I'm in charge, come through me. But now the world's flipped around. Applications need the network to do what they want. So you're starting to see programmability around networks go live. We see the trend there with DevNet; their developer program is growing really, really fast. You're starting to see networking folks turn into developers. The smart ones do. And the networking concepts around provisioning, you see service meshes on top of Kubernetes, hot. So you're starting to see it in the network: policy-based this, policy-based that, programmability, automation. It's kind of in the wheelhouse of a network person. Your guys' thoughts on the evolution of the developer, the network developer. Is it real? Is it hyped up? And where does it go? So we're going back to where networking originated from. Developers started networking. I mean, let's not forget that, right?
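The CLI-as-a-bridge idea described here can be illustrated with a small sketch: a familiar-looking network CLI line is parsed into a declarative, Kubernetes-style custom resource, so the same API serves both the CLI user and the cloud native tooling. The command syntax, API group, and resource kind below are hypothetical illustrations, not SnapRoute's actual implementation.

```python
def cli_to_resource(line: str) -> dict:
    """Translate a CLI line like 'interface ethernet1 mtu 9000'
    into a Kubernetes-style custom resource (hypothetical schema)."""
    keyword, name, attr, value = line.split()
    if keyword != "interface":
        raise ValueError("only 'interface' commands handled in this sketch")
    return {
        "apiVersion": "network.example.com/v1",  # assumed API group
        "kind": "Interface",                      # assumed resource kind
        "metadata": {"name": name},
        # Numeric values become ints so the spec is typed, not raw text.
        "spec": {attr: int(value) if value.isdigit() else value},
    }

resource = cli_to_resource("interface ethernet1 mtu 9000")
```

The network engineer types the line they already know; the tooling side sees only the declarative object, which the same Kubernetes API could reconcile.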
It wasn't done by some guy who said, oh, I have a CLI, and now networks work. No, someone had to write the code. Someone had to put it out there. But eventually you got to the point where people went to particular vendors, and those systems became more closed, and they weren't able to have that open ecosystem that the compute side has been built on. And that has hindered that industry from growing, right? The network industry has been hindered by this; it hasn't been able to have an open ecosystem to get that operational innovation in there. So as we've moved on further, with those people saying, no, hey, you can't do anything, no, no, no, we have the keys to the castle, we're not going to let you through here, the DevOps guys are going, well, we still need to deploy our applications. Our business still needs to move forward. So we're just going to go around you. And you can see that with some of the early SDN solutions. They said, you know what, we're going to pretend the network doesn't exist. Okay, tunnel, we're going to go over you. That day is coming to an end. We're not going to be able to do that long term, because the inefficiency there, the overhead there, is really, really high. So as we move on further, we're going to have to pull back to where we originally started with networking, where you have people using that open ecosystem, developing things on there, and starting to program the networks to match what's happening with the applications behind them. So I see it as something that's just... Glenn, your thoughts?
Yeah, so the smart network engineers, the guys and girls out there that want to be progressive and, you know, really adapt, are going to recognize that their value add isn't in being a CLI jockey, cutting and pasting from their playbooks and their 48-page methods of procedure that they've written for how to upgrade this chassis, right? Your expertise is in operational runtime. Your expertise is in operational best practice, right? So you need to just translate that. Look at Kubernetes, look at operators, right? Operators exist in Kubernetes to bake operational intelligence and best practices into a bundled deployment, right? So translate that, right? What's the best way to take this device out of service and do an upgrade? It's a set of steps, it's a method of procedure. Translate it into a Kubernetes operator, put that in your Kubernetes bundle, ship it in your image, and you're good to go. Like, this is the translation that has to happen. There is an interim step, right? You know, our friends over at Ansible, our friends at Puppet and Salt and Chef and all that, they've got different ways to control, you know, traditional CLIs, using, you know, very much kind of screen scraping: pushing the commands down, verifying, getting output and changing it. It's possible to do it that way. It's just really painful. So what we're saying is, why don't you just do it natively? Use a tool like an operator, and then put your intelligence into design, operational intelligence, layout, like do it at that level instead of, you know, cutting and pasting. So developers, it's all developers now, it's all merged together. Now you have open... Infrastructure as code, right? Infrastructure as code, yeah, everything. It's real. I mean, but you can't, and I want to make sure this is clear too, and Glenn was saying this, you can't get away from the guys who run networks and what they've seen and experienced.
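The operator idea Glenn describes, turning a written method of procedure into code, can be sketched as a reconcile loop: drain the device, upgrade, verify, return it to service, and do nothing if it is already at the desired version. This is a minimal simulation; the device model, step names, and `reconcile` function are hypothetical, and a real Kubernetes operator would watch a custom resource and call device APIs instead of mutating a dict.

```python
def reconcile(device: dict, desired_version: str) -> list:
    """Drive the device toward the desired software version,
    following the method-of-procedure steps in order."""
    steps = []
    if device["version"] != desired_version:
        steps.append("drain")        # take the device out of service
        device["in_service"] = False
        steps.append("upgrade")      # install the new image
        device["version"] = desired_version
        steps.append("verify")       # health checks before restoring
        steps.append("restore")      # return the device to service
        device["in_service"] = True
    return steps                     # empty list means already converged

switch = {"name": "tor-1a", "version": "1.0", "in_service": True}
first = reconcile(switch, "1.1")   # performs the full procedure
second = reconcile(switch, "1.1")  # already converged, does nothing
```

The key property, mirroring real operators, is idempotence: running the loop again against an already-upgraded device is a no-op, unlike replaying a 48-page paper procedure.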
So they need to now take that, to his point, and make it into something that you can actually develop and code against, and turn it into a process that can be done over and over again, not just words on paper. Well, that's why I think the network developer angle is so real, because it's about translating the operational efficiencies of the network into code, to move apps around and do kind of dynamic provisioning and handle all these services that are coming online. And you can only do that if you've actually taken a look at how the network operating system is architected and adopted a new approach to doing it, because the legacy ways of doing it don't work here. And getting an operating system like what you guys approach, your strategy and thesis: have an OS baked as close to the network as possible, for the most flexibility and high performance, nice and secure, no abstraction layers, no reverse proxies of any kind. Yep, simplify it down. Well done, great. Guys, thanks, and good luck on the venture. We'll be following you. Thanks for the conversation. Appreciate it. Thank you. Appreciate it. This has been a CUBE conversation here in Palo Alto. I'm John Furrier, talking networking and cloud native with SnapRoute, launching a new operating system for networks for cloud native. I'm John Furrier. Thanks for watching.