Okay, welcome back to VMworld 2011. I'm John Furrier with SiliconANGLE.com. I'm here with my co-host.

I'm Dave Vellante from Wikibon.org, and we are here with Jayshree Ullal. Jayshree, welcome to theCUBE.

Thank you, good to be here, John and Dave.

So, we know Arista Networks because Doug Gourlay's been on theCUBE a couple of times, last year and then this year, and he's not afraid to hold back and bring out the Arista messaging. But you run the company, so we want to talk to you about your perspective.

Kind of the shy, meek one compared to Doug.

Okay, all right, good. I know, you get the reality from the top. So, first of all, what's going on? You have a background in networking. Networking is what it is; it's converged. Cloud has changed the game. You started the company a while back, not too long ago. How many, four years ago?

Yeah, we've been shipping products for three years.

For three years. So now Cloud's exploding. What's different, what's going on at the network layer, and what's the interplay between the two? Give us the dynamics of that.

Well, I think what you're really seeing here is that the cloud has gone from being a hype term to real deployments, and Arista was started with the sole intent of building purpose-built cloud networking software, upon which we layer different kinds of hardware. Back when we did that, nobody believed it to be anything more than a vision statement, and today we have deployments all over the world, both in private and public clouds. Not surprisingly, the private cloud took off first, because people were looking to provide high-performance transactions and solutions for very specific applications. And financial, what we call the tick cloud, was our first market segment for extremely low latency.
But today, I'm pleased to say, the public cloud provider is just as important, and they're really looking at scale, power, footprint, and a new type of resilience never seen before, without buying two of everything. And it's really fun to be at Arista and make networking sexy again.

So Steve Herrod talked about the key things in his keynote: performance, availability, mobility, and security. We were riffing yesterday in theCUBE about mobility. It's easy to say we want to move stuff around, but take us through the realities of that, because it's not that easy; it's a hard problem.

No, not at all. I think the deployment of virtualization has especially been a challenge for mobility, because you now have what's called VM sprawl: virtual machines running everywhere, independent of location. And that has forced the networking vendors to be less static in their deployment of a network where everything is physically associated with a logical IP or MAC address. The announcement that Steve made of VXLAN is a very powerful one, and really an indicator of VMware and Arista partnering more closely on networking, because all of a sudden you've really created a connection between the location and the virtual layer in terms of VM workloads and their mobility in a variety of places. And now you can build very large-scale networks with what we call leaf and spine that are not hundreds of nodes, but thousands, if not hundreds of thousands, of nodes.

A lot of customers have WAN links out there that might be outdated, so the WAN link is a big part of the equation. Can you drill down on what's going on with the WAN link? Where are we in the evolution of modernizing it, and what's your take on that?

Yeah, it's an excellent point. I think what Arista's done is put so much bandwidth into the LAN and the data center that we've shifted the problem to the inter-data-center link. Many companies have come out with proprietary tags and capabilities, and we see no reason for it.
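The VXLAN encapsulation behind the announcement Jayshree mentions can be sketched in a few lines. This is only an illustration of the 8-byte VXLAN header from the (then-draft, later RFC 7348) specification, not Arista's or VMware's actual implementation; the VNI value is hypothetical.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA port assigned in RFC 7348; early drafts used 8472

def build_vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header: flags byte (0x08 = valid-VNI bit),
    24 reserved bits, 24-bit VNI, 8 reserved bits."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_word = 0x08 << 24   # 'I' bit set; rest of the first word reserved
    vni_word = vni << 8       # VNI occupies the upper 24 bits of the second word
    return struct.pack("!II", flags_word, vni_word)

def parse_vni(header: bytes) -> int:
    """Recover the 24-bit segment ID from a VXLAN header."""
    flags_word, vni_word = struct.unpack("!II", header)
    if flags_word >> 24 != 0x08:
        raise ValueError("VNI flag not set")
    return vni_word >> 8

# The 24-bit VNI gives ~16 million virtual segments, versus 4094 VLANs --
# that is what lets VM mobility cross physical boundaries at cloud scale.
hdr = build_vxlan_header(5000)
print(len(hdr), parse_vni(hdr))  # 8 5000
```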
You can put in full fiber-optic DWDM WAN bandwidth and enable large-scale layer two/layer three topologies, enabling VXLAN at the edge, so that now you can cross boundaries without proprietary networks and implement full internet protocol, rather than what you had, which is multiple terminations and proprietary links in between.

Where are we in that WAN link evolution? In maturity, or, I mean, it's-

We're in the beginning.

We're always talking about it, but it's a problem. Is it, like, top of the first inning? I mean, where are we in this?

It's like anything else, John. First, you have to identify the problem; it's been identified. Second, you have to have a set of technologies, like VXLAN, like leaf and spine networks from Arista. And then third, you get into the deployment. So I would say we're at the midway point, where the problem's been identified and the technology's available, but products and deployment are still six months to a year away.

So Jayshree, you hear a lot about the flattening of the network; it's a big part of your marketing motion. What does that mean, number one, from a layperson's standpoint? And number two, for a CIO, is that a mandate, or is it largely an outlier for cloud service providers?

That's an excellent question, and I'll take you back to an analogy for those of us who've been around a few years. It's absolutely a mandate, very much like moving from mainframes to client-server architectures, right? Almost every CIO today looks at me and goes, you know, I like you, but I also dislike you, because you have made me think about what my strategy for the cloud needs to be. In very simple terms, what it means is tuning the network appropriately for your applications, rather than building a generic, multi-purpose enterprise network for multiple applications.
It isn't that mainframes are going away, or that enterprise networks are going away, but it's really solving the problem of how you tune high-performance applications, high-performance transactions, with predictable latency, a great level of scale, resiliency, low footprint, and low power, which is increasingly a high component of the cost, never seen before in networking. So it isn't for everybody, but it's for a very specific set of virtualization applications, high-performance mission-critical applications, workloads that require scale of not 10 nodes, but thousands of nodes.

So when we say a flat network, quite simply we mean we can provide unlimited, not infinite, but unlimited capacity in a two-layer, very simple network topology we call leaf and spine, that can give you scale of 10,000 nodes, 30,000 nodes, 100,000 nodes today. I'm not talking about some future vision; our CIOs can deploy this today. The way they mostly deploy it is they build little cloud networks in parallel to the enterprise network, and eventually these will become one and the same.

I wonder if we could talk a little bit about best of breed versus one-stop shop. We love disruption at Wikibon and SiliconANGLE; we love pure plays, and you guys are a pure play. Talk a little bit about best of breed versus the whole one-throat-to-choke, one-stop shop.

I think there's room for both types of players. The larger companies tend to be systems integrators that provide servers, storage, and network, and the network in many of those cases has to be good enough. It doesn't have to be best of breed. Now, they may not admit that they're building a worst-of-breed product, but again, it's good enough. Best of breed is about really tuning for the best performance, the best price, the best application availability, which often means you have to start with a clean sheet of paper. You can't just take something that was built for the enterprise and try to monetize it for the cloud.
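The two-layer leaf-and-spine scale Jayshree describes follows from simple port arithmetic. A minimal sketch, where the 64-port switch size and oversubscription ratio are hypothetical figures for illustration, not quoted Arista specs:

```python
def fabric_hosts(ports_per_switch: int, uplinks_per_leaf: int) -> int:
    """Host capacity of a two-tier leaf/spine fabric.

    Each leaf devotes `uplinks_per_leaf` ports to the spines (one link
    per spine switch) and the rest to servers; each spine port serves
    exactly one leaf, so the leaf count is capped by the spine's port count.
    """
    max_leaves = ports_per_switch                        # one spine port per leaf
    hosts_per_leaf = ports_per_switch - uplinks_per_leaf
    return max_leaves * hosts_per_leaf

# Hypothetical 64-port switches, 3:1 oversubscribed (48 host ports, 16 uplinks):
print(fabric_hosts(64, 16))  # 3072 hosts in just two tiers
```

Reaching the 10,000-to-100,000-node figures quoted above takes higher-radix spine chassis or a third tier, but the arithmetic compounds the same way: capacity grows with the square of switch radix, which is why a flat two-layer topology scales so much further than a deep hierarchical one.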
And Arista started out with that mission: to focus on a software stream that had to be built from scratch, with a state-processing database where we could separate the state from all the different cloud elements. And that's the beauty of architecture; somebody has to really rebuild it. By starting with that clean sheet of paper, we were able to get a level of fault tolerance, resilience, and fault containment never seen before in networking.

The surge of applications, obviously end-user computing, has been hot. What is happening at the network layer? Because the developers know how to program, but developers aren't network folks. We all know you've got app developers and you've got network guys; they're two different breeds of people. What makes it all work together?

It's interesting you should bring that up, because Arista introduced a feature called Latency Analyzer, where right in the middle is the problem you described. The app guy says the network is running slow; the network guy says it's a problem with the apps. And that's really the reason for Arista, which is, for the first time, we're tuning the network for the applications. Not all applications, but very specifically the data center. So what we're able to do is set watermarks and thresholds that say, hey, a problem is coming, before it comes and before you reach that congestion, rather than wait for the problem to arrive and have the finger-pointing going on. This kind of proactive troubleshooting is only possible if you tune the network with the application vendor. And so one of the key tenets of Arista is to build that open, extensible software, so that we can work with the likes of VMware and application vendors to make that interaction possible.

One of the things that happened on the consumer side of the business was the carriers, the mobile carriers, had one app, basically, well, two: call someone and send a text message. And then over the top, you had a lot of apps.
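The watermark-and-threshold idea behind Latency Analyzer can be illustrated generically. This is not Arista's actual LANZ API; it is a hypothetical monitor with made-up port names and buffer sizes, just to show how crossing a watermark raises an alert before any packet is dropped.

```python
from dataclasses import dataclass

@dataclass
class PortStats:
    port: str
    queue_depth: int  # bytes currently buffered on the egress queue
    capacity: int     # total buffer available to that queue

def check_watermarks(samples, warn=0.70, alarm=0.90):
    """Flag ports whose buffer fill crosses a watermark *before* packets
    are dropped, so the alert precedes the congestion it predicts."""
    events = []
    for s in samples:
        fill = s.queue_depth / s.capacity
        if fill >= alarm:
            events.append((s.port, "ALARM", fill))
        elif fill >= warn:
            events.append((s.port, "WARN", fill))
    return events

samples = [PortStats("Et1", 30_000, 100_000),   # healthy
           PortStats("Et2", 75_000, 100_000),   # filling up -> warn
           PortStats("Et3", 95_000, 100_000)]   # nearly full -> alarm
print(check_watermarks(samples))
```

Run against these samples it reports a WARN for Et2 and an ALARM for Et3, while Et1 stays silent; the app team and the network team see the same evidence instead of pointing fingers.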
The consumerization of IT is creating a tsunami of diversity in applications. So you said a few apps; is it all apps? How do you see that tuning? Someone walks in: I want to run my own app, or I want to do Facebook, I want to do these apps.

Well, I think the key to understanding whether it's a few apps or all apps is the traffic patterns. What's really changed is that traffic has shifted from client-server north-south, where maybe 20% of the traffic was really stressing the network, to east-west. So all these applications that are pushing performance, whether it's Facebook, or the consumerization of IT, or a lot of the portals that we all interact with for online exchange, are pushing the traffic server-to-server, east-west, demanding the kind of latency and real-time performance never seen before. So to some degree, I don't care if it's one app or 10 apps; it's the high-performance transactions that are really driving it.

So let's just go philosophy here. In the old days, we grew up with the seven-layer stack. Today, with what Paul Maritz is putting out, it's the operating system. Is the stack changing? You said you can do stuff at one or two layers; I mean, where's layer three, layer four? Is it a new stack? How would you describe this dynamic between the traditional stack, the seven-layer model, versus what's going on today?

The traditional stack is for textbooks, and it's really well defined; most of us know what we're doing with layers one, two, three, and four, right? And then there's the applications at layer seven.
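The north-south versus east-west distinction Jayshree draws can be made concrete with a toy classifier: a flow is east-west when both endpoints sit inside the data center (server-to-server), north-south when one side is an outside client. The 10.0.0.0/8 data-center range and the sample addresses are assumptions for illustration only.

```python
import ipaddress

DC_NET = ipaddress.ip_network("10.0.0.0/8")  # hypothetical data-center range

def flow_direction(src: str, dst: str) -> str:
    """East-west if both endpoints are inside the data center
    (server-to-server); north-south if one side is an external client."""
    def inside(ip: str) -> bool:
        return ipaddress.ip_address(ip) in DC_NET
    return "east-west" if inside(src) and inside(dst) else "north-south"

flows = [("10.1.1.5", "10.2.3.9"),     # app tier talking to a storage tier
         ("203.0.113.7", "10.1.1.5")]  # external client hitting a front end
for src, dst in flows:
    print(src, "->", dst, ":", flow_direction(src, dst))
```

The design point in the interview is that the first kind of flow now dominates, so the fabric has to be optimized for server-to-server latency rather than for the client-facing edge.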
I think what's really emerging is a cloud stack, where you need best-of-breed compute, which could be general-purpose compute, best-of-breed storage, best-of-breed virtualization, and, as you're seeing with VXLAN, a new kind of network hypervisor. And the network becomes central to orchestrating across these and working with the right management applications, or the actual applications that run on the system. So I think the definition of a stack is less theoretical and more mission-purposed for different applications.

There is a debate, to follow up on that question, going on in terms of some functions within, let's say, layer four to layer seven: load balancing, DLP, and the like. Where do they belong? Is it an appliance? Is that appliance virtual? Should it be in the switch? What are your thoughts on that?

I've been around the block more on this than I care to admit, from my previous networking days. And the reality is, different strokes for different folks. If you're not looking for a certain extreme high performance, then you can integrate this and get good enough, as we just talked about earlier. But if you're looking for a highly optimized load balancer or security, the appliance is always the right way. The key is not to focus on the form factor, but really to focus on the functionality offered. And this is where companies, as you know, like F5 and Palo Alto have really excelled.

So I had one last question, John; I have to steal this from you. Jayshree, you said networking is sexy again. Why is networking sexy?

Because I really believe there are three disruptions happening, with or without Arista, and Arista's fortunate to be in the midst of all three. There's a hardware, merchant-silicon disruption. The incumbents were building ASICs that were two, three generations old; we're building geometries that are 100 times more cost-effective and 10 times more performant than anything seen.
There's a software disruption, where all of the things done in systems, computing, and storage are now coming to networking, with database architectures based on Linux, based on extensibility, based on fault control, based on high availability. And most importantly, there's a customer disruption, where people are looking for this high-performance transactional cloud. All those things are making Arista, and networking, sexy again.

Perfect storm; everything's coming together.

That's exactly it; well put.

Well, thanks for coming on theCUBE. We really appreciate your insight. You guys have had a great start and great success, and we're watching you. We always have Doug on theCUBE, and now we'd like to have you on every time we see you.

Thank you, John and Dave. You kept me on my toes, and this was real-time, 10-gigabit speed. Appreciate it.

Thank you very much. Thank you. Arista Networks. I think they're gonna be a rising star in networking. They took a leap of faith, Dave, when they built this company. Great stuff. So, thanks. We'll keep rolling. Jayshree Ullal.