All right, everybody, let's get started. This is Dave Vellante. Welcome to the Peer Insight reviewing EMC's VFCache. You can watch live on, as we were just talking, siliconangle.tv, or you can go to the Wikibon blog. There's a widget there. It is on a delay, so if you're going to watch live, you're going to have to turn down your phone. So welcome, everybody. Thanks very much for attending today. If you didn't see the announcement on the Wikibon blog, we have very exciting news: John McArthur, an old friend and colleague from my IDC days and many years afterwards, is joining the Peer Insight team. John is going to be moderating the Peer Insights. So John, welcome.

Thank you, Dave.

And we're joined today by Noemi Gresdorff, a Wikibon contributor. Noemi, thanks very much for coming on. You're with Cambridge Computer, an expert in many fields, and I appreciate you bringing the practitioner's perspective.

Thanks.

So I am going to turn it over to John. John, why don't you set up the call and we'll get into it.

Well, as David said, we're going to be discussing what was pre-announced as Project Lightning, which EMC announced as VFCache. David Floyer was on site in San Francisco for the announcement, and so what we'd like to do here is have him discuss what he saw. But before we get started, I just want to remind everyone that the call is being recorded. For those of you who are tweeting, please include the hashtag for Wikibon, W-I-K-I-B-O-N. And we do have Noemi Gresdorff from Cambridge Computer here, and David Floyer is on the phone. So Dave, since you were at the announcement, why don't you give us a quick summary of what you saw, what you heard, and your initial impressions.

Thanks very much. Can you hear me clearly?

Yes.

It was a great announcement in San Francisco, and there were two key announcements. There was Project Lightning, which is now being called, as you said, VFCache, and Project...

Hey, can I ask folks to please mute their lines if they're not speaking? So that would be everybody but David Floyer. Appreciate you muting your lines so I don't have to go through and mute them for you. And David, make sure your computer is muted. Great. Carry on.

VFCache, and there was Project Thunder. So let's talk about VFCache first. That is a Micron or LSI flash card, a PCIe card. So this is the first venture of EMC into the server side of storage. And it's a cache: IOs are put into the cache on a first-in, first-out basis and retained there. And there's a piece of software, a filter driver from EMC, which decides what gets put into the cache. So this speeds up IOs. Writes aren't sped up, but writes can be written to the cache if they're not sequential or large blocks, and those are then available for rereading from that cache. So it's a capability for improving IO response time, for reads in particular, and for generating a much larger number of IOPS.

Right, David. There are a number of different use cases I think that you've identified. Can you just talk a little bit about the use cases that you see for this today in its current instantiation?

The best use cases for the cache are simple caching situations: read-intensive workloads, things above 50% read. Exchange, for example, probably wouldn't be the best candidate for this, but transaction processing workloads would be good candidates. Small IO sizes; 64 KB is the largest that will be allowed into the flash cache.

Is that a limitation of the card or is that just a best practice?
It's a limitation of the card; it actually doesn't allow anything bigger than 64 KB in. That's to avoid sequential processing overloading the cache. But in general, small IO sizes would be the right use case anyway.

So this is a database kind of application then, for the most part?

Transaction processing and database applications, again, would be the best fit. Random IOs obviously benefit the most, and those come mainly from transaction processing. Sequential IOs would not get much particular benefit, as the probability of rereading is usually much lower. Good locality of reference, in other words: does it have a small working set of data which can be cached efficiently? There is in VFCache a split cache, which means that you can put a particular volume in that cache.

Again, can the participants please, if they're not speaking on the call, please mute your lines? If you don't hit star six, we'll mute your line for you, please.

Okay, so if you're using split cache and you have an entire volume in the split cache, it's not protected. Is that correct?

That's correct.

Okay. I'm sure that will come in the future.

David, this is Josh. You mentioned the software that's managing this. Where is this software sitting? On the storage control unit or on the server?

The filter driver is sitting on the server. It's in the operating system itself. And that brings us on to the next point. In VMware, the filter driver cache is in the VM guest and not in the hypervisor. I'm sure that will come in the future. But that means that even though there's a vCenter plug-in for management, it won't support vMotion. It's not yet ready for prime time with VMware.

But if it is sitting on the server and the storage is connected by iSCSI, Fibre Channel, whatever, it means in fact that you can connect any storage. It's not limited to EMC external storage.

Right, absolutely correct. And they have said that very clearly, that this is useful for all storage, and there's no connection at the moment between FAST management and VFCache. That's a future improvement that will come. And it will still work without it; the FAST improvements are just hinting (well, that's the main one, hinting for VFCache), and it will still work without that hinting. So in fact, it's a third-level cache for the server, which has very little to do with the storage.

Well, I think it's important... we'll come to that in a second. Just one more use case I want to talk about, and then we can talk about the general use. The other thing that it's not suitable for is the clustered environment. There's no support for clustering. Again, the reason for that is that it's tied to that specific server, and there's no cache coherency between servers. The solution for clustered environments is going to be part of Project Thunder.

Going back to Josh's issue, and that's a nice lead-up to what this means and the importance of it. They started with the server side. There is a split cache, so you can put volumes there; you can have storage there. And this is a very clear indication of a trend driven by many, many startups. There's Virident, there's Virsto, there's LSI with WarpDrive, Micron, Samsung; many, many players in this marketplace who have PCIe cards that will fit into the server and provide this type of service. So this is acknowledgement from EMC.
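As an editorial aside: the mechanics David just described (a write-through cache in front of any array, a 64 KB admission limit, first-in, first-out retention) can be sketched in a few lines of Python. This is a conceptual illustration only, not EMC's actual filter driver; every name and constant here (HostReadCache, ArrayBackend, CACHE_CAPACITY_BYTES) is hypothetical.

```python
from collections import OrderedDict

# Hypothetical constants. The 64 KB admission limit matches what was
# described on the call; the capacity figure is purely illustrative.
MAX_IO_BYTES = 64 * 1024             # larger IOs bypass the cache
CACHE_CAPACITY_BYTES = 300 * 2**30   # e.g. a 300 GB PCIe flash card

class ArrayBackend:
    """Stand-in for the external array (dict-backed, any vendor)."""
    def __init__(self):
        self.blocks = {}
    def read(self, addr, size):
        return self.blocks.get(addr, b"\x00" * size)
    def write(self, addr, data):
        self.blocks[addr] = data

class HostReadCache:
    """Conceptual host-side, write-through read cache (not EMC's driver)."""
    def __init__(self, backend):
        self.backend = backend      # the array remains the system of record
        self.cache = OrderedDict()  # block address -> data, oldest first
        self.used = 0

    def read(self, addr, size):
        if size > MAX_IO_BYTES:
            # Large sequential reads (backups, table scans) bypass the
            # cache so they cannot flush the hot working set.
            return self.backend.read(addr, size)
        if addr in self.cache:
            return self.cache[addr]           # hit: served at flash latency
        data = self.backend.read(addr, size)  # miss: fetch, then populate
        self._insert(addr, data)
        return data

    def write(self, addr, data):
        # Write-through: every write is acknowledged by the array, so a
        # server failure cannot lose acknowledged data. Small writes are
        # also copied into the cache for subsequent rereads.
        self.backend.write(addr, data)
        if len(data) <= MAX_IO_BYTES:
            self._insert(addr, data)

    def _insert(self, addr, data):
        if addr in self.cache:                # overwrite an existing entry
            self.used -= len(self.cache.pop(addr))
        while self.used + len(data) > CACHE_CAPACITY_BYTES and self.cache:
            _, old = self.cache.popitem(last=False)  # evict oldest (FIFO)
            self.used -= len(old)
        self.cache[addr] = data
        self.used += len(data)

cache = HostReadCache(ArrayBackend())
cache.write(0, b"x" * 4096)                # small write: array and cache
assert cache.read(0, 4096) == b"x" * 4096  # reread served from the cache
cache.read(8, 128 * 1024)                  # 128 KB read: bypasses the cache
```

The write-through path is the key design point: because the array, not the card, acknowledges every write, the cache can sit in front of any vendor's storage, which matches the point made above.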
And for me, the most profound statement made by Pat Gelsinger at the announcement was that he said the high-speed drives, the high-speed Fibre Channel and high-speed SAS drives, their days were numbered. In other words, most of active data, all of active data, was going to be in flash over the next five years. And that means that EMC, and every other storage vendor, has to compete with storage on the server, as well as with Project Thunder, as well as providing clusters.

So, David, I read that comment as well, that he said high-performance drives could be dead in the future. I didn't hear the five years. I'm just curious on people's perspective on a couple of things relative to that. Again, could the people on the line mute their phones if they're not speaking? There's some noise in the background. The question really being, how much can we get behind that statement? Do we believe that fast drives, fast spinning drives, are dead? Noemi, you're out there with customers all the time. I'm curious what you see.

We do actually see quite a bit of uptick in interest in SSD technology for acceleration of performance. But the majority of areas where you see SSDs being adopted is... wow, that's really distracting. Someone on the line is moving stuff or something. I don't know if we can mute all of the lines if we have to, but there we go. Okay, it's gone now. Thank you. So the adoption is around more of a read type of workload. In terms of write-intensive workloads, you're still seeing demand for higher-speed drives, more on the HDD side. Although there are a number of solutions coming into the market that are claiming they can accelerate writes as well as reads on SSDs, and do it in a very capacity- and cost-efficient way. I believe that's still to be proven.

I guess the other question really is...

It doesn't have to be proven. In sequential operation, you read sequentially into the cache; it will always be faster than flash, and you read very long blocks ahead of time. But the thing is that, if I remember well, something like 2007, Tucci said that there would be no rotating disks by the year 2010. And we are already in 2012.

I don't remember that statement. I remember the announcement in 2008, when they started introducing solid-state drives into their arrays. But I don't remember that statement, Josh.

I do believe, though, that if you look at the performance characteristics of various types of drives, the 15,000 RPM drive is definitely going away. We can see that already in terms of the systems that are being deployed. Leveraging SSDs has enabled organizations to do away with 15k drives and just go with either 7,200 RPM SAS, which is beginning to get adopted, or 10,000 RPM drives. Just the difference in performance and the capacity trade-offs you get from 10k to 15k don't seem to warrant it, especially if you offset that with SSDs. So in that regard, the market is definitely moving away from the 15k. I'm not sure the 10k going away in a five-year horizon is really realistic.

Can I comment? Actually, this is Barry Burke.

Hey, Barry.

I would disagree with that assertion.

Which one?

The one that 15k's are going away and the world's going to 10k and 7,200. In a tiered storage environment, it turns out that the delta of performance between 10k and 15k is big enough that you can't get sufficient performance out of your middle tier with just 10k drives.
And you really need to have a 15k spindle to kind of handle what you can't afford to put into flash. In fact, if anything's going away, I believe it's the 10k. And we'll see 15k's last longer just because of the performance profile. And we'll have 15k and SATA for a long time before there are no spinning drives.

And then, Floyer, you've predicted that the price of flash will come down at some point close enough to 15k Fibre Channel or SAS to kill it. Are you still on that prediction?

Absolutely. If you take today's environment, there are still some use cases where the high-speed drives are useful. And the early implementations of FAST, for example, by EMC have shown very clearly that you need to have sufficient 15k drives and balance them. If you project forward, though, just one or two turns of the crank on flash, the ability to write to flash and then, as a second stage, pass that down to high-capacity drives will be the way that large-block files, or sequential processing, will be handled, particularly for things like log files. And the key reason for this movement down will be the fact that applications will run much, much better. Locking will be reduced significantly if you can acknowledge the write as quickly as possible onto a flash medium. Within five years. That doesn't mean there will be no 15k drives sold, because we all know that migration takes a long time.

There's an irony here that I wanted to address with everybody on the line; maybe hit the escape key for a minute. And it's the following: for the past 15, even 20 years, we've seen storage function move out of the host toward the array. So the disk array, and then it merged into the SAN. And EMC was the big reason for that. So things like snapshots and replication and data management techniques went into the SAN. And as you referenced, John, in 2008 EMC announced what we called the haymaker, when it surprised everybody with enterprise flash drives inside the array; Fusion-io was just coming out of stealth, ironically, at the same time. And now we're seeing certain functions move back toward the host, the pendulum swinging. And I'm interested in what people's thoughts are on that, and what it means to practitioners in terms of how they architect systems and storage.

What we're seeing in the marketplace is that the adoption of virtualization, in particular VMware, who's the market leader currently, but in general the adoption of virtualization, has created an opportunity for deploying data management functionality in the layers that make the most sense for that particular environment. So you're no longer tied to doing it in the storage array controller. You can do it in the network. You can do it in the host. You can do it in the hypervisor, which definitely gives you a lot more options. And the movement of caching additionally into the host, and the elevated awareness of this with the VFCache announcement, I think that's a sort of validation. I don't think the pendulum is going to swing back to all the functionality going to the host. But I do think that what we are seeing already is that folks are making decisions about where that functionality should live based on their own specific requirements, and having the options that were brought on in a lot of cases by virtualization is enabling them to do that.

I get that, but I also see that we've had dramatic improvement in processor speed.
We have not had dramatic improvement in storage speed, and so we do sort of unnatural acts to improve speed. And the other thing that's happened is this price decline on memory, so that now we can do things that frankly just weren't affordable. If you think disk drives were unaffordable or expensive, when I left the user side in '95, memory was a heck of a lot more expensive.

Disk prices are going up for at least a period of time.

Yeah, for a little period of time. So what do people think about this? Maybe we should open it up to the community, to the audience a little bit, just to get people's opinions. I wonder if we want to restate the question?

Yeah. Is this only going to be available...

Yes. So the initial... could you just give us your first name?

Yeah, David.

Oh, hey, David.

So the first implementation is a PCIe card. It's not supported in blade servers. And of course, I don't know what the plans are down the road. David, do you have any insight on that?

This is where Project Thunder comes in, as a much better way of dealing with the blade type of environment. So Project Thunder is a collection of appliances, each of which has a CPU with a number of PCIe cards, basically. These are cache coherent and provide a layer, if you like, below the CPU and below the PCIe card in a server. If you have a blade server, then these would be connected directly via InfiniBand or even RDMA.

Yeah, that's what I had read, InfiniBand-linked. So...

Yes. This is Abhishek from EMC.

Hi, Abhishek.

Hey. So just to add to that. The explanation for Project Thunder that was given is one of the use cases, obviously. But in the meantime, we are also looking at different options to provide support for blade servers using the current VFCache offering.

Okay. So that's something you are looking at, definitely. Nothing you're announcing today, but something down the road.

Okay. So I'm not sure if we addressed this, but I just got a direct message from somebody saying the EMC VFCache 64K limit, the thing we were talking about earlier, is the driver, not the card. I can't remember if we specified that.

It's not a card limit. It's the driver. The filter driver has this 64K limit.

And that's... you had said... David had said that's to make sure that you don't overwhelm the card with...

No.

No? I don't think it's tied to the card at all. The limit is because the vast majority of large-block read data is either single-shot sequential reads, for the purposes of backup, or loading an application into memory, and very rarely does that data ever get reused, across all of our millions and millions of IOPS.

Right. We recognize that, so that's why the driver does that.

Yeah. And actually, that was one of the things that went into some of EMC's claims here: they've analyzed a lot of workloads. And so their ability to make recommendations based upon the workload profile of the customer, I think they've got a good competitive differentiator there.

So there are a lot of ways to reach us for questions. Obviously you can chime in online if you want to tweet us. I'm at @dvellante, or @wikibon. We're watching the stream. But again, any other thoughts from the community? It's a good time now. We'll just take a quick breath and see if people have some comments that they want to make or questions that they want to ask.

Yeah, this is Alex Williams of ServicesAngle.
And I'm curious about the services aspect of this story. It seems like there will be a major services component required with this, and it seems to point to kind of a differentiation between EMC's approach and Fusion-io's approach. I'm just curious about your take on that.

When you say a services approach, let's make sure we're talking about the same things.

The requirements of, you know, the services integration, the need for consulting to some degree. In terms of, you know, will this increase the need for services? How much help does the customer need there?

Right. So Dave Vellante was talking about how customers have a lot more options now. And I'm not sure that customers are necessarily looking for a lot more options; they're looking for better performance. And I get that. But if you do have a lot more options, you'd better have an awfully strong consultative services practice. Because let's say a customer comes to you and says, you know, I'm having performance issues with my Oracle database. I'm thinking you've got six or seven or eight different options to try to address that performance problem.

Well, you know, on the wiki, right on the front page, in the middle of the page under professional alerts, you'll see under register now, there's a big orange register-now button. This is on wikibon.org. Under professional alerts, the very first professional alert is called Wikibon Peer Insight on EMC Projects Lightning and Thunder. And in there, David Floyer has this just lovely set of charts laying out his vision of the architecture of future real-time big data processing, the different layers, and where flash generally, and VFCache and Thunder specifically, fit. And it seems to me, David, that that has services implications, to Alex's question, in terms of the processes and the procedures that I have established, maybe even certain application development trends in terms of writing directly to new layers of flash. I wonder if you could comment on that a little bit.

Yes, I think there's a very profound change coming. And I think the best way of expressing it is that we're entering an IO-centric era. We will be able to afford very, very large numbers of IOs which previously were orders of magnitude too expensive. And previously the solution, to reduce the number of IOs, was to block everything up and do everything in as big a block as you could. In an IO-centric environment, you'll be able to take very large streams of data coming in from many, many different sources and do the transaction processing and, at the same time, sort out the data and the metadata and the indexes in a parallel type of process. And that's a very different way of looking at it, and from a services angle that's going to mean a great deal of help for organizations in restructuring their whole thinking about big data and how to do analytics, doing it top-down, as opposed to doing the transaction processing, having the data on disk, and then trying to get it back again. So that's one area.

In terms of the options open, this is a very exciting time. There are a lot of different startups with a lot of different products. And people mentioned Fusion-io recently. One of the most really interesting evolutions has been the introduction of atomic writes and the ability to write directly to the flash from the processor.
And that's a game changer in terms of the number of IOs that can be written and the speed at which they can be written to the flash. So there are techniques of that sort which are changing things. There are many, many flash-only vendors out there. There's SolidFire and Pure and Nimble and Nimbus and many more, and many, many hybrid solutions as well. So there's a host of new ideas coming out, and they're emphasizing a top-down view of the management of storage, as opposed to the previous, more storage-array-up view of managing it. So yes, I think there is a tremendous role for services to play, to be able to cut through the claims and counterclaims and make sensible, practical decisions on what type of workloads will fit what type of solution in the best way.

David, if I may, this is very interesting. One point on the original question: the VFCache card doesn't require any services to purchase or use. The services are available, but you can just buy the card, put it in your server and run. There's no prerequisite. And the second point I want to make is that the average read response time out of flash on these PCIe cards is relatively low, on the order of a quarter of a millisecond or so, maybe even a little less than that. That is the average response time of a large cached disk array for a write to cache. So whether it be EMC VNX, DMX, Hitachi, whatever, a write to cache is also about a quarter of a millisecond, which is as fast as or faster than you can write to any flash today. I think there's a little bit of misdirection being put out there by this write-to-flash thing being so fast.

Well, we can gently disagree on that one. I think the atomic writes to flash can go much, much faster than milliseconds. You're talking about microseconds, and a few of them at that.

To be clear, David, I did say microseconds. I know what flash speeds are for a write to flash. You know they're not that good.

The writes to flash are still much, much faster than going through the IO stack and out to the array. So there's the opportunity, and EMC will disagree.

I think one of the things that is definitely going on in the innovation community, with all the new vendors coming out, and there are new technologies not only in the all-flash array systems but in network cache appliances, as well as software that facilitates caching in the host, I think across all of these innovators and all of these newer players in the marketplace, what is happening is that they're really working to design how data is placed on media with flash in mind. So they're taking the technology and leveraging it in ways that hard disk drives have never been able to be leveraged. And they're coming up with some really interesting paradigms that enable them to get more capacity, whether it's through, I don't want to call it deduplication or compression, there are newer kinds of techniques, how they do RAID protection, even how they do just general data placement. All of that is still in the early stages of development, and as that continues, I think some very specific standards will evolve that will be to the benefit of the general community and the general marketplace, that applications will be able to take advantage of.
I mean, on the application level, though, we're still looking at things where administrators and application developers have to make changes to their environments, in terms of knowing that they are working, let's say, with flash technology that has less latency, faster response time, can handle thousands more IOs than a traditional hard disk drive, and making sure the deduplication is aware of that so it can take advantage of it as well. All of that is going to optimize the general performance you can squeeze out of these drives, as well as capacity, things that we don't even think about right now.

When we start talking about application rewrites, and I think about legacy applications, I start to get a little nervous. So I'm curious: relative to flash adoption, where do you see it coming first? Do you see it in the new apps that are coming out, or do you see it in legacy apps?

The rewrites will be in the file systems and the database systems, to take advantage of it; that will be the vast majority of the rewriting. And that then will unleash capabilities which will mean that applications change over time to take advantage of those.

So are we talking in quarters, years, or what?

Oh, years, yes. The technologies will start to be available in 2012, and you're on a decade-long movement to get to an IO-centric environment. Applications take a long time to change and to test, and the whole philosophy will take a long time to change within organizations. So these things take time. But having said that, there is a huge opportunity for people who take advantage of this quickly to make game-changing alterations to the efficiencies of their organization and their ability to provide new services. So we're already seeing that the large social media companies and the cloud providers are embracing this and providing new ways of doing things: the Apples and the Facebooks of the world.

Right, but those are newer applications than the traditional databases. So maybe we could reset here. For those maybe just joining, we're talking about EMC's VFCache announcement, really a read-side cache, EMC's first entrance into the server side of the stack. We've been talking about that today: initially a read cache, and over time, Project Thunder, getting into a more shareable, network-oriented approach. So we've got David Floyer on the line, myself, Dave Vellante, John McArthur, Noemi Gresdorff, and many hundreds of people on the call and watching on video. So, John, back to you.

Well, let's do that again and go back to the listeners and see what questions or comments they have. And I think EMC's talked about several hundred implementers already. If we have any customers that are using VFCache today, I'd love to hear from them as well.

I have a question. I'm not a customer who has implemented, but I have a question if there are no other questions.

Go ahead. Your name, or first name?

My name is Tim Sammers.

Hi, Tim.

The question is, when will VFCache and other systems like it ever get integrated with the operating system? The operating system is doing its own caching, now VFCache is doing caching, and then back at the array you've got caching happening at the array. So you've got three different engines, three different caching engines. When, if ever, will they be tied together so they know what each other is doing?
I think that's the idea behind the vision of VFCache: that you have the integration with the backend array, so that you can make smarter decisions about what is being cached in the server versus what may or may not be cached at the array level, so that the resources can be more intelligently allocated to the applications that need them, where they need them.

Is VFCache aware of what the array is doing? Because I don't think it is, is it?

Not today, but that's the idea going forward. But I think it's an excellent question, because it brings in the issue of where the locus of control for that caching should be. Should it be at the array level, the storage array level, or should it be at the file system or database level, with clustered support over a number of those systems? And I think over time the changes are going to be made to the file systems and the database systems to cache the data more efficiently, being aware of the flash layer. And those are changes, again, which are going to take some time to take advantage of. So today VFCache is a simple solution, and there will be some overlap between the caches, but it will still improve performance. Over time it will become more efficient as it can be integrated into the operating system. And my personal belief is that those changes in the operating systems and the databases and the file systems are going to be very important going forward.

Well, you know, David, we've probably got a number of different representatives from different factions of that stack that would like to debate where the point of control should be. So that's an interesting question, and one that we'll be following here.

Yeah, please. I don't know that there needs to be a debate about where this sits, but I think we should all understand that, while in the future we'll probably see things like databases and file systems take more advantage of flash for metadata or whatever local references they need, what we're really trying to do here is accelerate the utility and value of this new flash technology for a broader set of applications that don't have to be rewritten to take advantage of it. And that's really what the VFCache model does. It's what putting flash in the storage arrays does; it's what putting caching devices in the network, or an all-flash array, is all about: supporting today's applications and giving them performance benefits without the rewrites, which will inevitably happen over time anyway. So I think they're very complementary.

I couldn't agree more. That's very well said, Barry. There are two forces. There are the forces of today's applications; they need support and they need help, and VFCache is one of many, but a very good, entry into the field to help in that place. And the broader discussion is about where, ultimately, that locus of control should be. And as I said, that'll take a decade to move, in the same way that it took a decade for the migration to move functionality originally out to the storage controllers.

I think one of the things that we should probably bring into this conversation, outside of the technology itself, is the concept of cost. That does play a significant role in how applications are able to take advantage of flash SSD technology, and in how an end user who is trying to accelerate the performance of their existing applications, without having to rewrite them, can get the best bang for their buck.
And I think that in terms of implementations, there are certain places where you get better performance benefits, but you may be trading them for lower reliability or less functionality. There are other implementations where you might have more redundancy, but at a much higher cost. So I think cost is a very important aspect of any discussion associated with the adoption of flash SSD, whether host, network, or array.

Dennis Martin here. I wanted to make a couple of comments.

Yep. Go ahead, Dennis.

So, on the question about integrating cache into the OS, I think that's a great question, and there are two ways that can go, and these are not mutually exclusive. I think as flash, or flash-like technology, becomes more and more common and the prices continue to drop, it'll be natural for the OS to begin to start thinking, well, if this is always here, maybe we can exploit it. Coming from the storage side, like in the case of EMC, you've got flash that can ultimately be coordinated with the caches that are in the storage system. So you can sort of draw a line all the way through, from the OS, through this caching layer wherever it happens to be on the server side, into the caching on the storage side. And who knows, maybe one day we'll get to where it used to be on the mainframe, where the OS is aware of the caching in the subsystem. So there are all kinds of ways that could go, and I think it's all positive.

Secondly, on the comments about performance and all that: obviously we're big promoters of flash, we were involved with some stuff with VFCache, and we're doing a lot of other things with SSDs in general. We do like the performance we see on PCIe cards that do flash. We have also measured performance to caches in the storage subsystems using Fibre Channel or SAS or even 10-gig iSCSI, and we're seeing less than 100 microseconds writing to a cache in the storage subsystem. So there's a lot of speed out there, and I don't think you can argue that flash isn't faster. Certainly a lot of storage systems have a lot of good speed, but I think it all works well together. So I don't think it's an either-or; I think it's a both-and kind of situation.

What about the latency issue? Can you make some comparisons between Fusion-io and VFCache in terms of latency? Fusion-io says that latency is the issue more than anything else.

Could you give us your first name?

We've looked at both. You can look at our performance paper; there's a comparison of VFCache and Fusion-io. So take a look at that.

For next-generation cloud apps, would you say that your latency is on par with Fusion-io?

You mean for VFCache?

Yeah.

For VFCache, yes. I mean, it's going through... The very low latencies that Fusion-io are driving towards use the VSL software and atomic writes. And those are not yet a full product; those are experimental things. But obviously there are a lot of cards out there and a lot of differences between the cards. The major use of Fusion-io for people like Facebook is putting the whole of the database there, or the vast majority of the database. And that changes things a lot, because you then don't have any of the very long elapsed times. The average latency is one thing, but you also get the outliers of hundreds of milliseconds that you can get once in a while, which can really affect performance in a big way. So there are a lot of low-latency implementations of flash in general that are in the marketplace and are being used.
And caching is one solution to the general problem, as Noemi said, of speeding things up. It depends on the business value of the application and the requirement for IOPS as opposed to just bandwidth. And I think the biggest change with flash is not necessarily that it performs much better, or that latency is much better, et cetera. It's simply the number of IOs that can be done. Beforehand, doing IOs was very, very expensive. With flash, you can increase the number of IOs by orders of magnitude for the same cost. And that has a very, very big impact on application design. It brings into play new applications which, going back to Noemi's point, just could not be afforded before.

Yeah, we're looking at different metrics in that regard. Most people, when they think of storage, think of price per gigabyte, and that's a very reasonable thing to keep in mind. We're looking at two other metrics with flash: dollars per IOPS, or price per IOPS, and IOPS per watt. And when you look at those metrics, just like David said, it's astronomically better than what you get with hard drives. So there are lots of different ways to skin this cat, with lots of different types of SSDs. And we're finding, even in some cases, you can do better for some applications by not using a PCIe card, by using a lot of SSDs not very far away with high-speed piping in there, because then you don't have to split your database up to fit onto the PCIe card. So there's a lot of opportunity here and a lot of good stuff happening.

What was that last metric that you mentioned, Dennis?

So there are three that we're looking at: price per capacity, or dollars per gigabyte; the second one is price per IOPS; and the third one is IOPS per watt of electric power used. And you get a lot more, of course, from flash products in those last two categories.

I would actually add a fourth one. This is the one that I think is important when flash is actually used as the primary capacity, not just as cache but for storing data, and that is price per IOPS per gigabyte.

IOPS per gigabyte, yeah.

Because when you're buying performance, let's say you have a certain number of IOPS that you need for your application, but you also have to deal with the size of the dataset, and certain form factors of SSD may be too small to hold the whole dataset. So you either have to work with tiering, which works efficiently in some cases and not in others, or you have to look at different form factors and consider different densities. SLC has lower densities than MLC but has higher reliability. I mean, there are a lot of considerations when you're talking about it. So price per IOPS per gigabyte is another metric that I think is important for end users to consider. And they do.

Yeah. In a recent posting that I put up on Wikibon, called Real-time IO-centric Processing for Big Data, I've got a comparison of the costs in table three. On dollars per terabyte, obviously, the flash is about 22 times more expensive than disk. But then if you look at dollars per MIOPS, as I've called it, it's about 2,500 times different from dollars per terabyte: dollars per terabyte is much cheaper for disk, and then MIOPS per terabyte, again, is much, much cheaper for flash. So there are different use cases for flash and disk, and different metrics that we have to use in the future for evaluating storage systems.
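As an editorial aside, a small worked example may make the metrics just discussed concrete. The prices, capacities, IOPS and wattage figures below are hypothetical round numbers chosen only to show how each metric is computed; they are not measurements, and they are not the figures from David's table three.

```python
# Hypothetical, illustrative figures only -- not measured data.
#                  $/unit,  GB,    IOPS,  watts
devices = {
    "15k HDD":    ( 400.0, 300,     200,    15),
    "PCIe flash": (8000.0, 300, 100_000,    25),
}

for name, (price, gb, iops, watts) in devices.items():
    print(f"{name:10s}"
          f"  $/GB = {price / gb:7.2f}"        # price per capacity
          f"  $/IOPS = {price / iops:7.3f}"    # price per performance
          f"  IOPS/W = {iops / watts:8.1f}"    # performance per watt
          # one possible reading of 'price per IOPS per gigabyte':
          # dollars per unit of IO density (IOPS delivered per GB stored)
          f"  $/(IOPS/GB) = {price / (iops / gb):8.2f}")
```

On these illustrative numbers, the hard drive wins only on dollars per gigabyte; flash is more than an order of magnitude better on dollars per IOPS and a couple of orders of magnitude better on IOPS per watt, which is the shape of the comparison made on the call, even if the exact ratios differ.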
Okay, in the few minutes that we've got left, let's open it up again to the community and see if we have any other questions that people want to chime in with, or comments.

I have a question. As far as some of the key milestones: any idea when Project Thunder will be available, when this technology will be available in other than just rack-mount servers, and also the milestone of when the integration with FAST technology comes, so that VFCache will be aware of the cache back in the EMC array?

At the announcement, they indicated they'd start working with customers on Project Thunder in the second quarter, and that all of the things they talked about would be available within 12 months. And EMC has a very good track record of meeting these types of projections. So I would expect that there'll be a steady drumbeat of announcements over the year, filling in pieces. For example, I'm sure that they'll fill in the piece of having some sort of solution with VFCache in the hypervisor; that will come as a solution for VMware. And there'll be a significant movement towards general availability of Thunder within the next 12 months.

And Lightning was in customer hands, pre-release, for a good seven, eight months at least. It was late last summer, I believe, at Oracle OpenWorld, that Gelsinger and Chad Sakac announced they were in beta.

They were in beta, so they were in alpha before that, yeah. So I would expect maybe something at VMworld this summer, some kind of announcement to that effect. Go ahead, somebody.

Just a quick addition to that. This is Abhishek. So don't associate support for VFCache in blade servers with Project Thunder, as I said earlier. That is one of the use cases for Thunder, but going forward we are looking at other options to start supporting blade servers earlier than the availability of Project Thunder.

Oh, before that. Okay. Thank you. Yeah. Very helpful. Any other questions?

And there was one more comment, about support on the hypervisor.

Yeah. So today we do support VFCache in VMware environments, and obviously we are looking to add more functionality as we go on. But yes, today we do support it on vSphere 4.1 and 5.0.

Yeah, I think the issue is at the guest level, and that leads to issues in terms of supporting VMware capabilities such as vMotion and things like that.

vMotion, yeah. And you can script... go ahead.

That's what I was saying. More functionality will be added in due course.

And you can obviously script your way around that, and you're going to, I'm sure, make it more automated over time. But that's good clarification. Yeah. Other questions? Any other comments?

It's kind of interesting as we sit and have this conversation, because I remember somewhere around 1992 or so, when IBM was shipping, was talking about system-managed storage, there was a concept of Symmetrix-managed storage, which means if we just give you fast enough performance, your database administrators don't have to spend a lot of time worrying about data placement. And so we were able to sort of wrest control of data placement and free up a ton of capacity just by guaranteeing service level. So I think the FAST integration that's coming is going to be critical to being able to deliver functionality where you don't have to micromanage it. And Barry made a comment about ease of implementation.
One of the things that I always think about when we're implementing new technologies into existing operating environments is, you know, what does that process look like? And I don't know if there's anyone on the phone that can speak to that. Do we have to take the applications down, or what do we need to do to be able to take advantage of VFCache? Whoa. Somebody cut me off.

The other interesting angle here, and I wonder if anybody has a comment, is the whole investment climate. There's a lot of action going on out there, and I don't know if anybody's got a...

Dave, Dave, Dave.

Hey, John. Hi, John.

I'm just calling in from Palo Alto, SiliconAngle. And I was at the event, and it was interesting to see EMC standing tall. They didn't really speak directly about Fusion-io, but David Flynn did issue a statement. I thought that Fusion-io dynamic was really interesting at the launch. And there were references in the EMC launch event in San Francisco to a slew of startups and other activities. And so I would say it's very frothy right now in solid state. Anything with solid state is going to get instant funding. But what's happened is, with SolidFire and others really emerging, there's been a barrier to entry on the product side. So we're seeing a lot more product development from the mature, funded companies. So what I'm seeing is, the main dynamic is you've got some leaders out there already, Fusion-io and SolidFire and others in the solid-state area, and all the new startups are trying to take a different approach on the architecture. But all, admittedly, privately, and I'm hearing this from multiple sources here in Silicon Valley, most of those startups are successful entrepreneurs doing another one in the storage area. And they're doing parallel paths: they're looking for venture capital while essentially shopping their deal, looking to sell it. So we're seeing a lot of different dynamics going on. And that tells me specifically that the go-to-market path is not as clean as it was about a year ago. So that's kind of my report from the field in the sense of what I'm seeing on the startup side: seasoned entrepreneurs coming back in, viable technology, there's funding, and there are also these parallel paths going on, shopping their deals around.

So is VFCache good confirmation for these guys, or is it a "look out"?

Well, I think there's a lot of speculation that, hey, EMC is going to need to step up and buy a company. They could have bought Fusion-io when they were younger. They could have now. It's kind of a little bit late; maybe it's not too late, even after the public offering. You know, I think a lot of people see EMC as a primary target to acquire white space in this area. So there's a lot of go-to-market and accelerated timetables on the product side. If it doesn't materialize in the marketplace, the M&A growth is going to be significant for the big guys, not only EMC, but others who need solutions. So I think a lot of startups are kind of hyping themselves up to say, hey, we're a viable approach, we have an alternative, we have a different angle on it: cloud, mobile, social, new user expectations and applications. And they position themselves that way, but they're also looking to be bought. So I think EMC's race in here is a testament to the marketplace. Pat Gelsinger and the team have rolled up their sleeves, and they've got Thunder right behind it. So EMC is not just mailing this in. So clearly that's a statement that there might not be enough seats at the table.
I think that one of the interesting things about this market is that I don't believe there is a clear leader. I think there are a lot of companies, startups and young companies in general, who have been making a lot of noise and have very well-recognized brands, but have not yet delivered the product, which reminds me a little of the dot-com era, where people were going public without actually having customers. But that's beside the point. I don't believe that there is a clear market leader. I also don't believe that a clear preference has been defined: is it going to be in the array? Is it going to be in the network? Is it going to be in the host? Or is it going to be some combination of many of these, depending on the situation? From a big-vendor perspective, I think that jumping in is validation of the technology, for sure. I don't think it obviates the fact that, as these newer players come to market and start executing, the bigger guys are going to sit and wait and see who's going to execute well, who's going to come out on top, and those are the ones who are eventually going to get bought, probably. That's kind of what we've seen in the last decade. And even when the big vendors have come out with their own technologies and their own products, it has not precluded them, four or five years later, from changing their tune, or saying, yep, we have this, but we're going to add this other technology to our portfolio because it has done so well.

I would agree with you 100%. I think you're right on the money. I think there's still a lot of open book around what will happen relative to the architecture. And kind of what I'm seeing in the trenches is that the definition and the ecosystem of what that stack is going to look like is changing. And a lot of forces are driving it at the application level. Does it sit with developer frameworks? Does it move in closer to the server? Does it sit with the array? All these things are debatable and viable, depending upon how you're looking at it. So Fusion-io talks about performance, and they have a real viable story there, and they have specific clients implementing it; yet EMC is clearly positioning them as a one-off, early adopter, fringe example. And I think with big data, you're seeing this data layer becoming a really key abstraction layer, as middleware. So again: application-specific acceleration, then network, and then ultimately the server. So it's interesting. I think the one great point there, and I would ratify it, is that as the ecosystem of NoSQL and the database wars continues, that will continue to be a viable discussion.

Thank you, John. So the bottom line is: flash is hot, EMC is in. A lot of stuff coming down the pipe, but they've made their first big stake here. I want to thank our contributors. First of all, David Floyer, who gave us the quick overview. Dave Vellante. Do I have to thank you for being on your own show? Noemi, great story.

My pleasure.

Noemi, thanks for coming in. John on the phone. Abhishek. Barry. Tim, Alex, Dennis Martin. Thank you, Steve Stark. And I would encourage folks to take a look at the work that Dennis has done; it's up, I think you can link to it from the Wikibon site. And don't forget Josh, Josh Krischer on the phone. Thank you, Josh, for your contributions, questions, and opinions. Just to recap, we will have six research notes pertaining to this call up in the next 24 hours.
We'll summarize the call, we'll provide some implications for the IT department, implications for the organization, talk about technology integration, and Wikibon stuff, so hopefully we can get rid of some stuff, I'm not sure what. All members of the Wikibon community are welcome to hit the edit button and contribute opinions and improve the pieces, so we appreciate and encourage all participation there. We will have a podcast of this research meeting up on the Wikibon site within, what, 24 hours?

Yeah, and within seconds on the TV, SiliconAngle TV.

I want you to also watch for announcements on an upcoming Peer Insight. We're just trying to close down on getting some specific IT professionals who've done the implementation, but I think we're going to have a discussion regarding a black box for the data center coming up at the end of the month. We have another Peer Insight in March; it's already on the calendar. So please watch the Wikibon page for upcoming Peer Insights. Thank you, and I think with that we'll say goodbye.