And as we're here today at Hadoop World, very similarly, we've been introducing networking capabilities that actually integrate with the HDFS file system, which is really the first topology-aware file system out there, and that's one of the things that makes it really cool. It distributes data and protects it by being topology aware, and then it distributes the jobs to where the data is. If you know the underlying topology, the infrastructure works better. If you want to break a Hadoop cluster in a heartbeat, mess up the rack locality file. You'll get things in the wrong place. You'll think your data is protected, and it's not. You'll think you're distributing the jobs correctly, and you won't be. So if we can automatically keep that topology coherent, what we end up having is a better-performing infrastructure, with the applications coming together with the network.

We talked about this at VMworld a little; we didn't really do a drill-down on it. But since we're at Hadoop World: in this new configuration, as you mentioned, being topology aware is key. But people are talking about software. There's kind of a software revolution going on in the networking space; you see OpenFlow getting some traction after being around for a few years. You talked a little bit about some of the secret sauce you have, but what people don't talk about is that the network is really moving data around. That's a key component, and data sizes are big, and they're certainly growing. What are the dynamics of moving data across the network? Because with Hadoop, compute kind of goes away, storage kind of goes away; that hardware layer becomes, not irrelevant, but no longer the bottleneck it used to be. What are the issues at the network level?

Well, the first piece is, and I think you described it well, that when you roll out Hadoop, the model you use to deploy it is markedly different from what people are used to. This isn't buying big-iron enterprise storage systems and saying, now let's load it up with 10 petabytes. In those worlds, you have to move the data off of the storage, across a network (which, as a network guy, I consider a good thing), to a set of compute nodes, process it, determine your result, and then either discard the data or push it back. The Hadoop model is to distribute the data across a large number of nodes and then make replicas of it: data protection comes from increasing the number of replicas you have and distributing them to the right places. Well, you have to know what the topology is to do that.

Hadoop is a layer-three-aware file system. You can route it. So every switch at the top of every cabinet is actually routing, which is brilliant. The network guys get excited by that: I can actually route that stuff. I mean, do you know how many companies chased "let's build big, flat, layer-two networks because it will be easier"? Yet the application world, through VXLAN with VMware and through HDFS with Hadoop, got rid of the need for that. And so we come back to the network architectures that built the internet, the architectures that the largest search firms and the largest networks in the world run on. It's all based on putting routers in the right place. And we're able to do that with Hadoop, which gives us large, scalable, but also cost-effective and reliable infrastructures.
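To make the rack-locality point concrete, here is a minimal sketch of the kind of topology script HDFS consults to build its rack map. The net.topology.script.file.name property in core-site.xml (topology.script.file.name on older Hadoop releases) is real Hadoop configuration; the host-to-rack table below is hypothetical, and the "keep it coherent automatically" idea above amounts to generating that table from the network itself rather than maintaining it by hand.

    #!/usr/bin/env python3
    # Sketch of a Hadoop rack-topology script. Hadoop invokes the script
    # named by net.topology.script.file.name in core-site.xml, passing
    # datanode IPs or hostnames as arguments, and expects one rack path
    # printed per argument.
    import sys

    # Hypothetical mapping; in practice this would be generated from
    # switch or LLDP data so it tracks the real cabling.
    HOST_TO_RACK = {
        "10.1.1.11": "/dc1/rack1",
        "10.1.1.12": "/dc1/rack1",
        "10.1.2.11": "/dc1/rack2",
        "10.1.2.12": "/dc1/rack2",
    }

    for host in sys.argv[1:]:
        # Unknown hosts fall back to /default-rack; if real nodes land
        # there, HDFS can stack replicas behind a single switch.
        print(HOST_TO_RACK.get(host, "/default-rack"))

A stale or wrong table here is exactly the "mess up the rack locality file" failure mode: HDFS keeps running, but its replica placement and job scheduling are reasoning about a topology that no longer exists.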
So Jayshree said... sorry, John. Jayshree at VMworld said that virtualization requires fat and flat networks, right? So does Hadoop require just flat networks? Not even flat? Just fat, or neither? What does it require, any network? It requires a network, but of any kind?

Yes and no. So what's different? What's different is that HDFS is a topology-aware file system, so it requires a network that's Hadoop-aware. That means the network needs to know about and integrate with that topology construction. If I don't know where the servers are connected, I might think the two of you are on different switches. So I'm going to give you a copy of the data, Dave, and I'm going to give you a copy of the data, John. And then when the switch supporting Dave fails, well, if you were on separate switches, it would work. But if the same switch supported both of you and it went down, you just lost data. Your jobs stop. And until you get an advance replacement or a new component in, you cannot resume your data processing.

And it's sequential too. It's a sequential model as well. So that's another point. And then you see applications like HBase coming in, running on top of Hadoop, and that says: now let's take this Hadoop model and put a real-time, transactional, 24/7/365 system on top of it. And it's a lot of data too. It's not little data; it's pounding data. I mean, we've been working on architectures with some customers putting between a third of a petabyte and almost a petabyte per cabinet, and then you can put a hundred cabinets in. So now we're talking 10-petabyte, 20-petabyte types of clusters. You can't lose that data.
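To illustrate the failure-domain point, here is a simplified sketch of HDFS's default rack-aware placement heuristic: first replica on the writer's node, second replica on a node in a different rack, third replica on another node in the second replica's rack. The node names and rack map are hypothetical, and the real policy also weighs load, free space, and node health.

    import random

    def place_replicas(writer, topology):
        """Simplified default HDFS placement. topology maps node -> rack
        path, as reported to the NameNode by the topology script."""
        first = writer
        second = random.choice(
            [n for n in topology if topology[n] != topology[first]])
        third = random.choice(
            [n for n in topology
             if topology[n] == topology[second] and n != second])
        return [first, second, third]

    # If the reported map is wrong (say dave-node and john-node actually
    # share one top-of-rack switch), the policy can place two replicas in
    # a single failure domain, and one switch outage takes the block offline.
    reported = {"dave-node": "/rack1", "john-node": "/rack2",
                "n3": "/rack2", "n4": "/rack3", "n5": "/rack3"}
    print(place_replicas("dave-node", reported))

On the capacity figures, a rough check: at a third of a petabyte per cabinet, a hundred cabinets is about 33 PB raw; assuming HDFS's default replication factor of 3, that is roughly 11 PB usable, which is one way to read the 10-to-20-petabyte cluster sizes cited.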