Live from San Francisco, California, it's theCUBE at VMworld 2014, brought to you by VMware, Cisco, EMC, HP, and Nutanix. Hi, this is Stu Miniman with wikibon.org. This is SiliconANGLE TV's fifth year at VMworld, 2014, San Francisco, California. We had a little bit of an earthquake Saturday night, which definitely kicked things off with a start, at least for me. First time I've seen a magnitude-six earthquake not too far from me. But we're going to be talking about the cloud in this segment. Joining me, first-time CUBE guests, both of you: Matt Calger, who's the global VMware architect for EMC, and Patrick Neuer, who covers cloud ops and infrastructure program management for VMware. Guys, thanks so much for joining me. Thanks for having us. Thank you.

All right, so Patrick, let me start with you. Tell me a little bit about your role at VMware and what you're doing here. So I work on the internal private cloud for VMware, called OneCloud, and in that regard I am the cloud operations and infrastructure program manager. Okay, so we heard a lot in the keynote about the hybrid cloud, renamed vCloud Air, and a lot of the partnerships, but that's a different group from you. That's correct. So can you give us a quick history of the internal cloud at VMware? It was an alignment that came about three years ago, whereby VMware needed to consolidate and really bring together all of the cloud infrastructure and service offerings we had internally into a single overarching orchestration and delivery mechanism called OneCloud.

All right, and Matt, you're working with Patrick because the hands-on labs run out of this cloud, correct? That is correct, yes. Okay, so I have to ask first of all, do we have this conference every year at Moscone because it's not too far from Palo Alto and therefore easier for the labs to work? Actually, we have three primary data centers serving as the infrastructure to run the hands-on labs. Yes, one of them is local in Santa Clara, but of the other two, one is in Washington state and the other is in Amsterdam. Okay, so the Barcelona show will be running off the Amsterdam data center? Not primarily. It would be one of the main sites, but at any point in time the infrastructure that's actually delivering the hands-on labs can be distributed anywhere in the world. So it's not necessarily the one that takes over just because it's local, or closer, if you will.

Okay, Matt, and what's EMC's part in the hands-on labs? We've supported the hands-on labs since, I think, the very beginning. I can remember all the way back to 2009 having a bunch of CX4s and those kinds of things back then. Today we supply pretty much exclusively XtremIO storage for these labs, because that's just the right product for the right job, and I think we support the majority of the storage for the labs at this point. Okay, unpack that for us a little. Why XtremIO? Obviously, I would say performance has to be number one on the list there, but why XtremIO? I think I'd defer to Patrick a little bit. I would probably argue that reliability is your highest priority. I think it's actually a multitude of things. Obviously performance is critical and reliability is paramount, but it's also the content itself and how we can leverage different platforms, with the most optimal being XtremIO because of the high dedupe ratio that we can achieve on the underlying labs. Great.
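As a rough aside for readers, the dedupe point above is essentially a capacity multiplier: lab VMs are cloned from a small set of templates, so most blocks are duplicates and an inline-dedupe all-flash array stores them only once. The sketch below just works that arithmetic; the function names and every number in it are illustrative assumptions, not OneCloud or XtremIO figures.

```python
# Back-of-the-envelope capacity math for a dedupe-heavy lab cloud.
# All figures are made-up examples, not XtremIO specifications.

def effective_capacity_tb(usable_flash_tb: float, dedupe_ratio: float) -> float:
    """Logical capacity the labs can consume once inline dedupe is applied."""
    return usable_flash_tb * dedupe_ratio

def lab_pods_supported(usable_flash_tb: float, dedupe_ratio: float,
                       logical_tb_per_pod: float) -> int:
    """How many lab pods fit, ignoring overhead, reserves, and metadata."""
    return int(effective_capacity_tb(usable_flash_tb, dedupe_ratio) // logical_tb_per_pod)

if __name__ == "__main__":
    # Example: 20 TB usable flash, a 6:1 dedupe ratio, ~50 GB logical per lab pod
    print(lab_pods_supported(usable_flash_tb=20.0, dedupe_ratio=6.0,
                             logical_tb_per_pod=0.05))  # about 2400 pods
```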
So a lot of it comes down to a platform like XtremIO, where you've got multiple controllers, two controllers per brick, and we've got multiple bricks in every cluster, which means that a brick failure or something like that is not a hugely catastrophic event for us. And then ridiculously high, consistent IOPS and sub-millisecond response times, all the time, guaranteed, means that Patrick's team can just rely on it to work. And in fact, Patrick, I believe last year was the first time you guys had 100% storage uptime, and that was on XtremIO. That's absolutely correct.

All right, so what speeds and feeds, what kind of hero numbers can you share about what you've done at the show so far this year, or what the plan is? Well, the show isn't over, so obviously it would be based on the last couple of days, but what I can tell you now is that compared to last year, where we had both a traditional storage platform and XtremIO, this year, outside of the SDDC infrastructure, it's 100% XtremIO. And in that regard we're actually increasing the scope but diminishing the footprint. What we've come to understand and appreciate about XtremIO is the flexibility and power of the platform, whereby at this stage we're able to conceivably run 100% of the show off two bricks. Compared to last year, with a traditional storage platform, we achieved that same scope in six full racks. So you went from six full racks to 12U, which in and of itself is amazing.

So Patrick, when you build out the labs, how much is the application a focus of the build? I'm getting some questions from those watching. The application, meaning the lab content it's serving up? Yeah, exactly. Oh, absolutely, it's paramount to the delivery, because there's obviously a consideration for the formation and presentation of that lab, but also for the performance impact that that lab will have on the underlying infrastructure. Some labs are relatively minor, meaning more interactive click-throughs. Others have a massive impact that translates to the performance characteristics we need to align to from an infrastructure standpoint.

Okay, and so I assume the internal cloud is something that lives on well beyond the VMworld shows. How much is there a build-up for this event, or does this run on the existing infrastructure? Well, there is a burst element. Normally we don't have 550 concurrent people demanding upwards of 15,000 VMs at any one point in time. So there is a burst element, but there's also very much a steady state. This is our internal private cloud. It isn't purpose-built just for this one event and then decommissioned. And to that point, OneCloud and all the stuff that supports it, Project NEE and so on and so forth, is actually running 100% of the time for customers, live and ready to go. One of the new services we launched last year was HOL Public, whereby the HOL labs live on well after VMworld is over and customers can come in and continue to take labs throughout the year.
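To make the burst numbers concrete, here is a minimal sizing sketch of the kind of peak-demand math Patrick is describing: concurrent seats times the VMs in each lab pod, on top of the steady-state OneCloud load. Only the 550 concurrent seats comes from the conversation; the VMs-per-lab and steady-state figures are guesses for illustration.

```python
# Illustrative peak-demand model for the VMworld hands-on-lab burst.
# Only the 550 concurrent seats comes from the interview; everything else is assumed.

def peak_vm_demand(concurrent_seats: int, avg_vms_per_lab: int,
                   steady_state_vms: int) -> int:
    """VMs the private cloud must host at the height of the show."""
    return concurrent_seats * avg_vms_per_lab + steady_state_vms

if __name__ == "__main__":
    # 550 seats x ~25 VMs per lab pod, plus ~1,500 steady-state VMs (both assumed)
    print(peak_vm_demand(concurrent_seats=550, avg_vms_per_lab=25,
                         steady_state_vms=1500))  # roughly the ~15,000 VM figure
```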
All right, so Matt, you've been in the labs so far, and things are running pretty smoothly; we don't have the big lines. I remember 2010, I think, when I came there: lines out the door, those kinds of things. What have you seen over the last few years in terms of the maturity of what goes on in the hands-on labs? Well, I think to your point, back in 2010 and even maybe 2011 there were definitely lines out the door and some stability issues, and ever since Patrick's team took on a lot of this and it was really treated like a critical environment that people had to use, like a true 24/7 environment, the availability's been incredible: 100% uptime last year, 100% uptime so far this year, and I can't imagine any complaints. I was just in the room earlier looking at the arrays. One of the ones in Santa Clara had a bit of a hot moment, which was 53,000 IOPS, and that's not even a third of what we rate that system for. But most of the time they just hover at maybe 10% of what they're capable of.

Okay, and can I ask, is there any converged infrastructure used in this, or any of the rest of the cloud suite? What plugs in beyond the XtremIO, and how does that connect with everything else? Well, the purpose of VMworld is to showcase our latest, greatest technologies, so we do have about 10% of the hands-on labs running on EVO RACK and vSAN, our latest technologies. It's definitely a showcase and we want to put it in the most visible light possible. That's the EVO RACK, not the EVO RAIL? That's the EVO RACK, which is interesting, because the lab for EVO RAIL is actually running on EVO RACK. Okay, it's new to a lot of people. Can you tease out for us a little bit the RAIL versus the RACK? We talked a little bit about RAIL earlier, four servers in a configuration with a number of partners. How does RACK differ from RAIL, and what can we expect to see there? So unfortunately I don't know much about RAIL, but from an EVO RACK standpoint, absolutely, it's all converged infrastructure predicated on vSAN technologies, whereby we can leverage internal storage to deliver infrastructure and compute platforms similar to the more traditional ones we've aligned to.

Yeah, it's interesting to think about. vSAN has a certain set of applications that it's really geared for, as opposed to, say, XtremIO. There's some overlap (VDI, for example, could fit in both of those), but other than that, I think of XtremIO for database environments, something that would be a great fit for an all-flash array, and that's not necessarily where vSAN's first target is. Any comments on that, Matt? I think it's interesting, because when you look at the I/O profiles that OneCloud tends to push into these systems, and we looked at them closely last year after the show, they are incredibly write-heavy, more so than I would have expected, and they're incredibly latency sensitive. If those latencies start to get up there in the five, six, seven millisecond range, customers really start to feel it, and so both of those really play well into the designs of both XtremIO and vSAN. vSAN's great at some of the low-latency stuff it can do with data locality and that sort of thing, and XtremIO is great at that, too. Do you want to add anything, Patrick? Well, I actually don't think that conversation is separate from what we're doing with EVO RACK and what we're doing with XtremIO. As the cloud infrastructure and operations team, you have a variety of platforms and architectures that you can align to, and it's all based on the use case and what your customer's requirements are.
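As a side note, the headroom Matt describes (a 53,000 IOPS spike being well under a third of what the array is rated for, and users feeling anything above roughly five milliseconds) boils down to two simple checks. The sketch below shows them; the rated-IOPS figure and the thresholds are assumptions for illustration, not XtremIO ratings.

```python
# Simple headroom and latency checks of the kind described in the conversation.
# The 53,000 IOPS spike is from the interview; the rated ceiling is an assumption.

def utilization(observed_iops: float, rated_iops: float) -> float:
    """Fraction of the array's rated IOPS currently being used."""
    return observed_iops / rated_iops

def latency_acceptable(latency_ms: float, threshold_ms: float = 5.0) -> bool:
    """True while response times stay below the point where lab users feel it."""
    return latency_ms < threshold_ms

if __name__ == "__main__":
    # Assume a ~180,000 IOPS rated ceiling; the observed spike was 53,000 IOPS
    print(f"utilization at the spike: {utilization(53_000, 180_000):.0%}")  # ~29%
    print("sub-millisecond response ok:", latency_acceptable(0.8))          # True
```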
Great, so I'm just curious, Patrick, maybe this one's for you: what lessons do you learn every year at VMworld, building year over year? Does any of what goes on here at VMworld translate itself into product requirements? Of course, absolutely. I think the biggest lesson learned with something like VMworld is that you can take lessons learned from the past, but you can't assume they're going to translate 100% into the present. It's an ongoing learning process, and with any cloud infrastructure and operations team, that defines how we do business every day. All right, Matt, anything else you want to share from what you've seen in the maturity? Where's the opportunity to move the labs forward? You know, I'd love to see continued use of EVO RACK and EVO RAIL to push those. One of the things we've been hearing a lot from VMware recently is an increased desire to dogfood their own stuff, right? I think Pat actually said this morning it's extreme dogfooding, and I think this is a great start to see more of it, and we can help with certain parts of it. XtremIO can certainly stress an ESXi host pretty well and deliver almost anything it could possibly ask for, and then we can also stress the other parts of it, sort of the SDDC and vSAN sides. So what I really look forward to is pushing that envelope even further, to Patrick's point: we've historically overbuilt these environments a little bit, but now we're actually starting to decrease the footprint, to see how much we can really get this down to something even closer to a possible customer scenario.

Yeah, Matt, since I've got you on here, you've done some interesting tests looking at really scalable architectures; I know I read the blogs that you did on ScaleIO. Do you do the labs yourself? Are you involved in some of the architecture pieces? How does it play into your day-to-day job? Are you talking about the hands-on labs? Right, do you take some of the labs? Oh, I absolutely take the labs. It's probably one of my most favorite things to do here. Honestly, if I were a customer, my top two things to do would be to come take the labs and go to what are listed as the advanced technical sessions. I always end up taking a lab, and I never have time to take all the labs that I want. I think my favorite so far this year has been the advanced NSX lab. My weakest area is networking, and so I learned a lot from that lab. Okay, yeah, Patrick, anything on what's been hot this year? Have you had to reconfigure? Definitely. Everything we're doing with EVO RACK, EVO RAIL, and NSX, those are basically the top three labs at this point. It's very exciting what those technologies can offer and bring to the market.

Okay, so I just want to give you a last chance to give a plug for something people should check out. There are two more days left, and I'm sure there are things people can follow up on afterwards. So Matt, I'll let you start. Well, you know, I'd have to plug my session. Come on. I have a session this afternoon on automating very, very large-scale environments using things like Python and vCO and those kinds of technologies. So if you want to take what you learn about the infrastructure side of that, things like EVO RACK and EVO RAIL and XtremIO, and apply it to an environment with hundreds or even thousands of hosts and how to deal with that in the same fashion, it's a good place to start.

All right, and we can also find some of your stuff; hopefully you'll have your findings on your blog, right? It's always on my blog, exsaforge.com. All right, and Patrick? Yeah, just for me, come visit us in the Hands-on Labs. Please take a lab. Come visit us in the NOC. This year, like the year previous, we want to have full disclosure and full visibility, so we have the monitoring up, and it's all real time. Feel free to ask us any questions you might have. Yeah, and if somebody's not at VMworld or doesn't get to all the labs, how do they engage in between the shows? So once VMworld San Francisco concludes, over the course of the next two to three weeks we're going to be migrating all of the 2014 content into HOL Public, so they'll be able to take the labs at that time. All right, guys, thank you so much for coming. Thank you. Definitely, when I talk to practitioners, the hands-on labs are always some of the highlights for them, so we will be right back with our continuous coverage from VMworld after this quick break.
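For readers curious what the Python side of that kind of large-scale automation might look like, here is a minimal sketch (not Matt's session material) that uses pyVmomi to connect to a vCenter, enumerate every ESXi host through a container view, and print each host's connection state. The hostname and credentials are placeholders.

```python
# A minimal pyVmomi sketch of large-scale host enumeration against vCenter.
# Not taken from the session; hostname and credentials below are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def list_hosts(vcenter: str, user: str, password: str) -> None:
    # Lab-only convenience: skip certificate verification.
    ctx = ssl._create_unverified_context()
    si = SmartConnect(host=vcenter, user=user, pwd=password, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # A container view scales to thousands of hosts far better than
        # walking the inventory tree object by object.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            print(host.name, host.runtime.connectionState)
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    list_hosts("vcenter.example.com", "administrator@vsphere.local", "VMware1!")
```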