And we're here with Steven Spellacy of Virtensys. Hey Steven, how are you doing? Good to see you. Good to see you again. All right, always smiling in theCUBE, I love it. theCUBE is our flagship telecast; we go out to the ground. You're on the alum list. Hey, that's awesome, thank you. So you're going to have to concentrate here a little bit. Yeah, exactly. So tell us about Virtensys. I/O virtualization, hot area. Yeah, definitely. Exploding. Tell us about Virtensys, and we'll talk about why we need I/O virtualization.

Great, cool. So Virtensys is based in the US, and we have headquarters in the UK. Our founders come out of a spin-out of Xyratex, which you guys are pretty well familiar with. The technology was originally developed as a high-speed PCIe switch fabric, and the founders decided they could exploit that technology for I/O virtualization. At the core, what we do is take standard off-the-shelf adapters in a concentrator-appliance-style solution. We plug those into our chassis and then share them using a PCI Express-based switch and a virtual proxy controller, which handles the whole virtualization layer for the hardware inserted in our device. The technology is sold for standard server environments, standard rack-mount environments, and we also have an OEM arrangement with NEC Corporation in Japan; they're selling a blade version of our product.

So talk about why you need I/O virtualization and why server virtualization drives that need.

Yeah, definitely. Obviously, I/O is one of the big bottlenecks in a server virtualization environment, right? VMware hates it when we blame them for all the I/O problems, but hey, it's a fact of life; it keeps everybody else in this ecosystem in business. The long and short of it is that the environments we're in are very diverse.
They have needs for multi-protocol storage: for 10 gig iSCSI, for NFS, for Fibre Channel. They even leverage traditional direct-attached storage architectures and emerging technologies like PCI Express-based SSDs. With all those different technologies, you need to address the connectivity and the management of pairing those technologies with the servers, whether they're blades or standard servers in your data center. So virtualized I/O provides greater choice and control over the types of interfaces you present to your standard servers and your blade systems, as well as a greater amount of density. So diversity and density. You can also drive these servers harder, because with I/O virtualization you're offloading the I/O processing into hardware. You're taking a lot of the overhead and burden off the traditional host: lower CPU utilization on the server host, greater performance. And your I/O performance increases because you're driving the resources in that appliance, in that infrastructure, harder and getting more out of them through acceleration.

So that's the value prop. It's really a resource utilization play. That's how you get the ROI out of it, right?

Yeah, definitely.

What about flash? How does that change the whole I/O picture?

Yeah, definitely. So that's the new hot device to virtualize, right? Folks are looking at inserting and using PCI Express-based SSDs; that's the big talk now. I/O-bound applications, data-intensive operations on databases, and obviously greater scale and hyper-consolidation needs on server virtualization hosts. These technologies are going to assist and allow those hosts to scale to meet those new demands.
So what we're doing with respect to virtualizing flash, our first entree into the space, is leveraging our virtual RAID controller capability. In our architecture we virtualize a standard SAS/SATA RAID controller, which is made by LSI, and present it through our infrastructure back to the host. They see a virtual instance of that same LSI card, and attached to it is a set of Micron P300s: high-performance, enterprise, SLC-based NAND SSDs. So now our appliance presents a group of connected hosts with access to high-speed flash for boot drives and local data stores, and, with the advent of vSphere 5, the ability to tag it as an SSD resource and leverage it for host cache allocation.

Okay, so by policy you can allocate certain data sets to that host-based cache.

Same with your swap operations as well.

What is the biggest misconception for folks out there when they think of all this switch innovation, all the network fabric? Obviously cloud is growing, everyone's trying to be there. HP is trying to get a strategy there, has a strategy, and they've been in the news lately getting crushed over their recent PC division moves. The market's really crazy right now. So what's the big misconception? Is cloud that easy? Is converged networking really that efficient now, and carrier-grade? What are the innovations, and what are the misconceptions people have in this area?

Yeah, when I think about cloud, I think about an anonymous pool of infrastructure that serves an application, an application that can be accessed anywhere by any device. I very much buy into that classic utility view of cloud. The underlying infrastructure has got to be easy to use. It's got to be easy to implement, easy to manage and administer. It's got to be supported. It has to be able to leverage standard protocols. I think those are the baseline requirements.
If you can make an administrator's job easier so they can roll out that 10th server in a rack or that 30th server in a row, if you can get access to those multi-protocol environments like iSCSI, NFS, and Fibre Channel within a couple of clicks, that takes the traditional two or three days to provision a network tap or a Fibre Channel SAN port down to a couple of minutes, maybe a half hour. And I think that's where our customers, and even our competitors' customers, are leveraging virtual I/O in the cloud space as it becomes more and more anonymous.

I wrote a post this morning about VMware's move to cloud, and I was raising the provocative question of OpenStack. OpenStack has got a lot of traction because it's an intoxicating message: hey, we'll cobble together all this open source, we all win. And everyone's jumping in, kind of like, hey, we need a cloud story, jump on OpenStack. But the reality is delivery, getting the solutions to the marketplace. So OpenStack really represents the developer trend. How does all this innovation affect the developer environment? Because at the end of the day, they have to deliver products, and cloud is not that trivial. Customers want things like security, data on-premise, processing in the cloud, and all the same network features they've had in the enterprise.

Obviously, a developer's dream is to have access to any and all the technologies in the data center and exploit them to their fullest capabilities. From our perspective, it's having the ability to position and support all these different technologies as your needs change. The big thing with cloud today is managing service levels: the traditional telco model of bronze, silver, gold, and being able to support those differentiated service levels.
The key is, as your needs change (and there are some great features within vSphere 5 to support those changing needs, like Storage DRS and VASA), to integrate more closely with the underlying hardware stack. In my opinion, the successful technologies out there, and the ones that will be successful, integrate at that level. They show value, they diminish themselves in the model, and they actually integrate further into the upper-level stack. And I think OpenStack is doing that. They're putting all this intelligence up top, and all this lower-level stuff is being dumbed down, but they have to have APIs, they have to have conduits, to be able to communicate with it. I think that's the key to success. But they have to move pretty fast, because the pace at which VMware and others are coming out with real solutions is real.

Well, you sort of mentioned earlier allocating certain data sets, high performance, and so forth. One of the things we're talking about this week is the whole notion of cloud service providers delivering a quality of service. How do you guarantee quality of service in the cloud, in a multi-tenant environment? How can you, right? It's a big challenge, so talk a little bit about that.

Yeah, from our perspective, our product actually supports quality of service and bandwidth allocation on the 10 gig side. All the models we sell come with 10 gig standard. We have some interesting tool sets and feature capabilities right within our native GUI, our command line, our PowerShell interface, and, as of this week, the release of our vCenter plugin. An admin can literally go in and fine-tune the percentage of guaranteed bandwidth on a given interface. That virtualized 10 gig pipe can be assigned to any server in your pod, and then you can literally govern, monitor, and watch how it's being utilized.

By customer.

By customer or by environment, right?
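The per-interface bandwidth guarantee described here can be sketched as a simple percentage-based allocation over a shared 10 gig link. This is an illustrative model only, not Virtensys's actual GUI, command-line, or PowerShell interface; the function and tenant names are hypothetical.

```python
# Hypothetical sketch of guaranteed-bandwidth allocation on a shared
# 10 GbE interface -- not the product's actual API.

LINK_GBPS = 10.0  # capacity of the shared, virtualized 10 gig pipe


def allocate(guarantees_pct):
    """Map per-tenant guarantee percentages to Gb/s on the shared link.

    guarantees_pct: dict of tenant/server name -> guaranteed percentage.
    Raises ValueError if the guarantees oversubscribe the link.
    """
    total = sum(guarantees_pct.values())
    if total > 100:
        raise ValueError(f"oversubscribed: {total}% guaranteed")
    return {name: LINK_GBPS * pct / 100.0
            for name, pct in guarantees_pct.items()}


# Example pod: three tenants with telco-style differentiated service
# levels; the remaining 10% of the link is unreserved headroom.
alloc = allocate({"gold": 50, "silver": 30, "bronze": 10})
```

Allocating by named tenant rather than by raw interface is what makes the per-customer chargeback model discussed here possible: the guarantee itself becomes the billable unit.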
So you could do it by stack or by individual customer.

Well, the reason we're interested in doing it by customer is that then you can charge for it. It's a new pricing model; it's not just cost-plus.

That's right.

I can now drive more business value, and the cloud service providers that are sharp are going to pick up on that.

True. Back to OpenStack: where are they in that? It was interesting, your post this morning, because you love open source, you're very supportive.

I'm really bullish on OpenStack, but they're not delivering, in my mind, at the level they need to. I think that's not their fault; it's just where they're at. Obviously, the vote of confidence on the messaging is fantastic. Everyone's jumping in because it makes sense, but it brings up the Android-versus-iPhone kind of discussion. Okay, open is great. Android's market traction has been great, but it's been criticized lately for security holes and hacking, some interference at the carrier level in terms of RF overuse for error checking, where iPhone has just been bulletproof, with great customer support because of it. So they're two approaches. I mean, it's like Democrats and Republicans.

It's a commitment to quality. The individual developer may have a commitment to quality to produce their bit in the open source community, but in the ecosystem at the global level, no one's watching what people do. There's a check-in, check-out process in the open source community, but at the end of the day, individual developers can screw up their own little bit. We'll hear about this next, as Amr Awadallah is coming on after you, and he is successfully executing an open source strategy by commercializing open source, and he's doing some good work. So I think there's a plan where open source can work, but it's got to get some meat on the bone, as they say.
Well, I just want to ask you, before Dave jumps in, one more question: what are you expecting to hear this week, and what should VMware be doing?

I'm torn, because I live in the infrastructure space, so I like to hear all the great announcements about what they're doing to improve how infrastructure is utilized, like VASA and Storage DRS. Those things excite me. And, like you, I also want to understand where their commitment is to spreading this cloud vision and making it a reality for customers. Because I think customers hear this and they think, well, yeah, I like the idea of private cloud, but at the same time, cloud is something that's far away for me. It's distant, right? No pun intended: distant cloud, federated distance. They're still trying to get their act together. So I think vCenter Operations is a very interesting proposition for customers, both traditional customers and service providers. All this great newness around VASA and the vCenter APIs, this host cache acceleration I mentioned before, is a great new feature and something customers can now leverage in their data center, or providers can leverage globally.

I guess the question might be, what are customers expecting? I mean, customers want solutions. So what are they wanting to hear?

Yeah, that's a big question, right? In my time in the industry, what I've seen is customers in that crawl-walk-run sort of mode. They're in the walk mode with virtualization. They're getting to 30%, 40% virtualized, and then, whether it's political, licensing-related, or technical, they hit some kind of barrier. And I know VMware has been spending a lot of time, energy, and emphasis on moving beyond that percentage point. So any tools that can be used: new features, APIs, working with their existing equipment.
The last thing a customer wants is to find out that the storage array or network switch they just bought no longer works with the brand new shiny toy, which is vSphere 5. So I think a commitment to backward compatibility, a commitment to existing technologies in the data center. Again, you're bringing more intelligence up the stack; what's down the stack doesn't matter as much, but the APIs need to be there.

Actually, I have a personal question, and then I want to bring it back to something you were just talking about on the infrastructure side. How do you become a vExpert, and what does that all mean?

Well, I have the luxury of being a vExpert. I was nominated by the community, and I'm very thankful and honored. I had the privilege of being a part of EMC's vSpecialist team, which is, I think, the last time we saw each other, prior to my time at Virtensys. And I spent an awful lot of time in the community, both at EMC and within the VMware community, writing blog posts, creating videos, working with my buddy Chad Sakac as much as I could to promote the message of VMware integration with EMC's products. I think that's what won me a spot in some folks' hearts, and I was nominated this year.

Well, congratulations.

I appreciate it.

And so that brings us back to VMware integration. I think back to VMworld 2009, maybe even 2008, and it was just so obvious that storage was a huge challenge for a lot of customers. You'd talk to customers, and all they would do is tell you about their storage challenges. And so VMware, I think, has worked very hard at getting the APIs out to the community and supporting them. Everybody's getting in line, and of course they're spending a lot of engineering resource on it. But it strikes me that there's got to be an easier way. Just thinking about storage design in general, do you think that storage should be designed differently?
Maybe for a virtual world? Was the storage we're trying to integrate maybe not designed for a virtualized world? Can you talk a little bit about that?

Yeah, the issue I've seen in my time helping customers and partners with solutions is that workloads are completely random. They end up being like scrambled eggs, essentially, because you can no longer count on deterministic performance for a given type of workload; it's intrinsically a mixed workload in a virtualization environment. It could be a sequential workload married to a random workload, and the result is completely random, right? But the underlying disk subsystem is trying to satisfy both of those requests. And these are traditional, classic disk technologies that have frankly outgrown their usefulness in some data centers. Some of the bare-metal technologies, like the SCSI protocol, need to be re-engineered for the next generation to support these new types of data center models.

So what do you see happening there? You've got the big whales in the industry working really hard to do all this integration and, obviously, to keep their customers happy. You've got disruptors, like you guys, in the business. Do those worlds come together? Do the big guys just buy the little guys? What's your forecast there? Does the storage eventually become invisible?

It's funny, because if history tells us anything, the big guys buy the little guys. The big guys even buy the medium guys, and sometimes the big guys buy the big guys. What's interesting about what we're doing right now, and I think it's kind of exciting in the storage space: we're not a storage provider, we create a PCIe sharing appliance. But what we're committed to, releasing in Q4, is a version of our appliance that will actually house multiple PCIe-based SSDs.
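The "scrambled eggs" point above is often called the I/O blender effect, and it can be shown with a toy model: two guests each issue purely sequential reads, but once a shared disk sees their interleaved queue, the combined stream is no longer sequential at all. This is an illustrative sketch with made-up LBA numbers, not a claim about any particular hypervisor's queueing.

```python
# Toy model of the I/O blender: sequential per-VM streams become a
# seek-heavy mixed stream at the shared disk.
from itertools import chain


def sequential_lbas(start, count):
    """A purely sequential run of logical block addresses."""
    return [start + i for i in range(count)]


def seek_distance(stream):
    """Total LBA distance traveled between consecutive requests."""
    return sum(abs(b - a) for a, b in zip(stream, stream[1:]))


vm_a = sequential_lbas(0, 5)     # [0, 1, 2, 3, 4]
vm_b = sequential_lbas(1000, 5)  # [1000, 1001, 1002, 1003, 1004]

# Round-robin interleave, standing in for how a shared queue might
# merge the two streams before they hit the spindle.
blended = list(chain.from_iterable(zip(vm_a, vm_b)))

per_vm = seek_distance(vm_a) + seek_distance(vm_b)  # 8: near-zero seeking
mixed = seek_distance(blended)                      # 8996: ~1000-LBA hops
```

Each stream on its own costs almost nothing in head movement; blended, the disk bounces roughly 1000 LBAs on nearly every request, which is why deterministic per-workload performance disappears on shared spinning disk.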
Now, imagine in the traditional data center having to put a single PCIe adapter that supports SSDs in every server. That would be cost-prohibitive; I think most would agree you could not do that for every server in your data center. But if you put it in a concentrator and share it across 16 to 32 servers, you've lowered the cost of entry, the barrier to getting this new, exciting technology and that new workload into a traditional data center, without having to break the bank and blow the budget for that particular purchase.

Okay, Dave, we've got Amr Awadallah coming up, co-founder of Cloudera, so we've got to move along. All right, Steven, thank you very much for coming on. Steven Spellacy from Virtensys, we appreciate the time.

Okay, great job. Thanks for coming, and thanks for sharing.