 And welcome back, everybody. Jeff Frick here with theCUBE. We're in our Palo Alto studio for a CUBE Conversation. It's really a great thing that we like to take advantage of, a little less hectic than the show world, and we're right in the middle of all the shows, if you're paying attention. So we're happy to have a CUBE alumni on. He's been on many, many times. Saar Gillai, now the CEO of Teridion. And Saar, welcome. I don't think we've talked to you since you've been in this new role. Yeah, it's been about a year, I think. Been about a year. So give us kind of the update on Teridion. What's it all about? And really, more importantly, what attracted you to the opportunity? Sure. First of all, great to be here. I don't know where John is. I'm looking for him. I mean, he ran away. Maybe he knew I was coming. Somewhere over the Atlantic, I think, at 35,000 feet. We'll have to follow up on that later. But hey, you're here. So, you know, Teridion. I mean, let's talk first about the challenge that Teridion is addressing, so people understand it, right? So if you look at what's going on these days, you know, with the advent of cloud and how people are really accessing stuff, things have really moved. In the past, most of the important services people accessed were in the data center and were accessed through the LAN. So the enterprise had control over them. And, you know, if you wanted to access an app and it didn't work, somebody went and, you know, played around with some Cisco router and maybe things got better. Right. But at least you had control. You had control. And if we look at what's happened over the last decade, but certainly in the last five years with SaaS and cloud, you know, stating the obvious, more and more of your services are now actually being accessed through your WAN, and in many cases that actually means the internet itself. You know, if you're accessing Salesforce or Box or Egnyte or any of these services. 
And the challenge with that is that a critical part of your user experience is now something you don't control, that the vendor doesn't control, because you can make the best SaaS app in the world, but these apps are increasingly very dynamic, so, you know, caching doesn't solve this problem. And the problem is now, okay, I'm experiencing it over the internet, and while the internet is a great tool, obviously, it's not really built for reliability, consistency, and consistent speed. That's the reality, right? The internet was designed, right, to send, you know, one packet to NORAD and tell them that some nuclear missile launched somewhere. That's what it was designed for, right? So the packet will get there, but the jitter and all these things may vary. And so what happens is that now you have a consistency problem. Now, historically, people will say, well, that's all been addressed through traditional caching. And that's true, and caching still has its place. The reality, though, is that caching is more for, you know, stuff that doesn't change a lot. And now, of course, it's all very dynamic. You know, if you're uploading a file, that's not a caching activity. If you're doing something in Salesforce, it's very dynamic, it's not cacheable. And so at Teridion, we looked at this problem. Teridion's been around, I think, about four years. I've been there for about a year. And we felt that the best way to solve this problem was actually to leverage some of the cloud technology that already exists. And so what we do is we build an overlay network on top of the public cloud surface area. Traditionally, the way people did things is they would build a network themselves. But today the public cloud guys are obviously spending gazillions of dollars building infrastructure, so why not leverage it? The same way that you don't buy CPUs anymore, why buy, you know, routers? 
And what we do is we create a massive overlay network on demand on the public cloud surface area. And public cloud means not just Amazon or Google, but also people like Alibaba Cloud, DigitalOcean, Vultr, any cloud provider really, some Russian cloud providers. And then we monitor the internet conditions and we build a fast path. Think of it almost like Waze, a fast path for your packets from wherever the customer is to your service, thereby dramatically increasing the speed, but also providing much higher reliability. So lots of thoughts. So I'm hearing you're leveraging the public cloud infrastructure. So their pipes, if you will. And their CPUs. And their CPUs. But then you're putting basically waypoints on that packet's journey to reroute to a different public cloud infrastructure for that next leg, if that's more appropriate. Yeah, I mean, basically what I'm doing is, I'm saying, you know, if your server's here, whether on a public cloud or somewhere else, it doesn't matter, and a customer is here, right? Through some redirection, I will create a router on a public cloud, so a soft router, somewhere close from a network perspective to the user, and somewhere close to the server. And then between them, I'll create an overlay fast path. And where it goes over will be based on whatever the algorithm figures out. The way we know where to go is we also have a sensor network distributed throughout the public cloud surface areas, and it's constantly creating a heat map of where there's capacity, where there's problems, where there's jitter, and then we'll create a fast path. Typically that fast path will give you, I mean, one of the challenges, I'll give you an example, right? So let's say, you know, you're on Comcast and let's say you've got, I don't know, 40 meg on your connection at home. 
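The heat-map-plus-fast-path idea described here can be sketched as a shortest-path search over measured link latencies between cloud points of presence. This is purely illustrative: the PoP names, the latency numbers, and the `fastest_path` helper are assumptions for the sketch, not Teridion's actual data or algorithm.

```python
import heapq

# Hypothetical latency "heat map" (milliseconds) between cloud points of
# presence, as a distributed sensor network might continuously measure it.
heat_map = {
    "user-edge":   {"aws-us-west": 12, "gcp-us-west": 15},
    "aws-us-west": {"aws-us-east": 60, "do-nyc": 70},
    "gcp-us-west": {"gcp-us-east": 55},
    "aws-us-east": {"server-edge": 10},
    "gcp-us-east": {"server-edge": 9},
    "do-nyc":      {"server-edge": 20},
    "server-edge": {},
}

def fastest_path(graph, src, dst):
    """Dijkstra's shortest path over the measured link latencies."""
    pq = [(0, src, [src])]  # (total latency so far, node, path so far)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, ms in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (cost + ms, nbr, path + [nbr]))
    return float("inf"), []

cost, path = fastest_path(heat_map, "user-edge", "server-edge")
print(cost, path)  # picks the 79 ms route through the GCP waypoints
```

The "waypoints" the interviewer describes are the intermediate soft routers on the winning path; as the sensor network updates the heat map, rerunning the search naturally reroutes the next leg.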
Okay, and then you connect to some server, and theoretically that server has much more, right? But the reality is, when you make that connection, it's not going to be 40 meg, it's sometimes 5 meg, okay? So we'll typically give you almost the full capacity that you have from your first provider all the way there by creating this fast path. So how does that compare? We hear things about, like, direct connect, say between Equinix and Amazon, or a lot of peer-to-peer relationships that get set up. How does what you're doing compare and contrast to those solutions? Yeah, I mean, direct connect is sort of a static connection, or if you have an office and you want a direct connection. It's got advantages and it's useful in certain areas. Part of the challenge there is that, first of all, it has a static capacity. It's static and it's got a certain capacity. What we do, because it's completely software oriented, is we'll create a connection, and if you want more capacity, we'll just create more routers. So you can have as much capacity as you want, from wherever you want. Whereas with direct connect, you sort of say, okay, I want this connection, this much capacity, and it's static. So if you have something very static, then that may be a good solution for you. But maybe you're trying to reach people in other places and it's dynamic, and also you want variable capacity. For example, let's say you say, I want to pay for what I use, I don't want to pay for a line. Historically, when you're using these things, you say, okay, if the maximum I may want is 40 meg, then give me a 40 meg line, and that's expensive. But what if you want 40 meg only for a few hours a day, right? So in my case, you just say, look, I want to move this many terabytes, and if you want to do it at 40 meg, do it at 40 meg, it doesn't matter. 
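The pay-for-terabytes-moved versus pay-for-a-dedicated-line point can be made concrete with a little arithmetic. The scenario below (a 40 Mbps burst used three hours a day) is a made-up example, not pricing from the interview.

```python
def terabytes_transferred(mbps, hours_per_day, days):
    """Decimal terabytes moved at a given rate, used a few hours a day."""
    bytes_per_sec = mbps * 1e6 / 8      # 40 Mbps = 5 MB/s
    seconds = hours_per_day * 3600 * days
    return bytes_per_sec * seconds / 1e12

# A 40 Mbps burst, 3 hours a day, for a 30-day month:
tb = terabytes_transferred(40, 3, 30)
print(round(tb, 2))  # 1.62 TB
```

Under a static line you pay for 40 Mbps around the clock whether or not you use it; under the usage model sketched here you would pay only for the 1.62 TB actually moved, at whatever rate the software delivers it.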
So it's much more dynamic, and this lends itself more to the modern way people think about things. The same way you used to own a server, and you had to buy the strongest server you'd need at the end of the month because maybe the finance guys needed to run something. Today you don't do that, right? You just go to the public cloud, and when it's the end of the month, you get more CPUs. We're the same thing. You just set up a connection. If you need more capacity, then you'll get the capacity that you need. I mean, we had a customer that we were working with that was doing some mobile stuff in China, and all of a sudden they needed to do 600,000 connections a minute from China, and so we just scaled up. You don't have to preconfigure any of this stuff. Right, right. So that's really where you make the comparison of kind of public cloud for networking, because you guys are leveraging public cloud infrastructure, you're software based so that you can flex. You don't have your own network. It's completely elastic. Like I said, yeah, it's very similar. Our view is that in the last decade, obviously, compute has moved from a very static, I-own-everything mode to let's-use-dynamic-resources-as-much-as-possible, and of course there have been a lot of advantages to that. Why wouldn't your connectivity, especially your connectivity going outside, which is increasingly all of your connectivity, also use that paradigm? Why do you need to own all this stuff? Right, right. And as you said before we turned the cameras on, the value proposition to your customers, the people that basically run these big apps, is the fact that they don't have to worry about that. But net net, it's just flat out faster to execute the simple operations, like uploading or downloading something to Box. Yeah, and you mentioned Box, they're one of our big customers. And like you said, we have a massive network. 
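The "we just scaled up" anecdote amounts to sizing a fleet of soft routers to demand instead of preconfiguring it. A minimal sketch, assuming a hypothetical per-router capacity of 50,000 connections per minute (the 600,000 figure is from the interview; everything else is invented for illustration):

```python
import math

PER_ROUTER_CAPACITY = 50_000  # connections/min one soft router can handle (assumed)

def routers_needed(connections_per_min):
    """Smallest fleet that covers current demand, never fewer than one."""
    return max(1, math.ceil(connections_per_min / PER_ROUTER_CAPACITY))

fleet = {"cn-east": routers_needed(10_000)}   # quiet baseline: 1 router

def autoscale(fleet, region, demand):
    """Resize a region's router count to match observed demand."""
    fleet[region] = routers_needed(demand)
    return fleet[region]

# The spike from the anecdote: 600,000 connections a minute out of China.
autoscale(fleet, "cn-east", 600_000)
print(fleet["cn-east"])  # 12
```

The contrast with a static line is that nothing here is provisioned in advance; capacity is just a function of measured demand, recomputed as it changes.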
If you think about how much Box uploads on a given day, right? A lot of their traffic goes through us. But yeah, if you think about these SaaS providers, they really need to focus on making their app as good as possible, advancing it and making it as sophisticated as possible. And the problem is there's this last edge, from their server all the way to the customer, that they don't really control. But that is really important to the customer experience, right? If you're trying to download something from Box or trying to use some website and it's really slow, your user experience is bad. It doesn't matter that it's the internet's fault, you're still upset as a customer. So this gives them control. They give us that traffic, and then we have control of it. We can give it much faster speed. Typically in the U.S., it may be three to five times faster. If you're going outside the U.S., it could be much faster. Sometimes in China we go 15 times faster. But it's also consistent. And if you have issues, we have a NOC, we monitor, we can go look at it. If some customer says, I have a problem, right, we'll immediately be able to say, okay, here's the problem, maybe there's a server issue, and so forth. As opposed to them saying, I have a problem, and the SaaS vendor going, well, it's fine on our side. Right, right, right. So I'm curious about kind of your go-to-market. Obviously Box is an example of a customer, and you've got some other ones on the website. Who are these big application service providers? That term came up the other day, like a flashback to 1998. I call them SaaS. No, it's funny, we were talking about the old days of ASPs. To me, it's all the service guys. Right. So, but then is your go-to-market also going to include going out directly through the public clouds in some of their exchanges, so that basically I can just buy faster throughput with my existing service? 
Or kind of, where do you go from here? I imagine, who doesn't want faster internet service, period? Yeah, yeah, no, that's, so, you know, we started off going to the people who had the biggest challenge, and it's easier for a small company, right? You sort of want to work with a few big guys. They also help you design your solution, make sure it's good. If you can run Box's traffic or Egnyte's traffic, you can probably handle other things. Right. Later this year, for example, we are looking at potentially providing some of this as a service. For example, if you're accessing S3, we can make S3 access at least three times faster. So we are looking at potentially putting something on the web where you can just go to Amazon and sign up for that. The other thing that we're looking at, which is, you know, later in the year probably, is that we've gotten a lot of requests from people that said, hey, you know, since the WAN is the new LAN, right, they want to try to use this technology for their enterprise WAN between branch offices, you know, where SD-WAN is sort of playing today. We've gotten a lot of requests to leverage this technology also in SD-WAN. So we're also looking at how that could potentially play out, because, again, people just say, look, why can't I use this for all my WAN connectivity? Why is it only for SaaS connectivity? Right, right. I mean, it makes sense. Again, who doesn't want it? The network never goes fast enough, right? Never, never, never. Yeah. I mean, it's not only speed, I agree with you, but it's not only speed. What people take for granted on the LAN, and only notice when they're now running over the WAN, is that it's a business critical service. So you want it to be consistent. If it's up, it needs to have, you know, latency and jitter under control. It needs to be consistent. 
It can't be like, one second it's great, next second it's bad, and you don't know why. And visibility. So, you know, no one's ever had that problem. I'm just laughing, I'm thinking of our favorite Comcast here. If they're not a customer, you need to get them on your list. So, you know, make some introductions. So, I think people take that for granted on the LAN, and then when they move to the cloud, they just assume that it's going to continue, but it doesn't actually work that way. And then they get people from branch offices complaining that they couldn't upload a doc or that Salesforce was slow, and all these problems happen. And the bigger issue is, not only is this a problem, you don't have control, right? As a person providing a service, you want to have control all the way, so you can say, hey, yeah, I can see it, I'm fixing it for you, here, I fixed it for you. And so it's about creating that connection and making it business critical. Yeah, it's just a funny thing that we see over and over and over, where cutting edge and brand new quickly becomes expected behavior, very, very quickly. And, you know, with the best delivery by the best service, suddenly you have an expectation that that's going to be consistent across all your experiences with all your apps. So you've got to deliver that QoS. Yeah, and I think the other thing that we notice, of course, is the explosion of data, right? It's true that internet capacity is growing, but data is growing faster, because people want to do more, because CPUs are stronger, your handset is stronger. And so much of it is dynamic. Like I said before, historically some of this was solved by just, let's cache everything. Right, right. But today, you know, everything is dynamic. It's bi-directional, and caching technology doesn't do that. It's not built for that. It's a different type of network. 
It's not built for this kind of capacity. And so as more and more stuff is dynamic, it becomes difficult to do these things. And that's really where we play. And again, I think the key is that historically you had to build everything, but the same way that all these SaaS providers aren't building everything themselves, they're just building the app and running on top of the public cloud, it's the same thing. Why would I go build a network when the public cloud is investing $100 billion a year in building massive infrastructure? Yeah, and they do have big infrastructure. Well, Saar, thanks for giving us the update and stopping by, and we will watch the story unfold. Great to be here. All right. And we'll send John a message. Yes, I'll have to track him down. All right. He's Saar, I'm Jeff, you're watching theCUBE. It's a CUBE Conversation in our Palo Alto studio. Thanks for watching. We'll see you next time.