Hi, I'm Dennis Martin, president of Demartech. Over the last decade, processing power has improved more than 100-fold, while the top speed of spinning hard drives has remained at 15,000 RPM. In short, hard disk drives cannot keep up with the processing power of modern computing. Storage vendors, recognizing that their products can be the bottleneck, are increasingly turning to flash to decrease latency and increase IOPS and throughput.

There are several ways to deploy flash in the enterprise. One method is to replace all of the hard drives in your storage systems with solid state drives. This can be very effective, assuming the controllers can keep up with large quantities of SSDs, but it is frequently cost-prohibitive except for very large enterprises or extremely critical application data. Cost-effective solutions often use a small quantity of flash relative to the total storage and employ an automated tiering or caching solution.

The idea behind SSD caching is to temporarily keep a copy of the most frequently accessed data, known as "hot" data, on the fastest storage available. After a period of time that can range from minutes to days, depending on the application, the cache fills with hot data, and the time it takes to access that data drops significantly. The caching software or controller analyzes all IO activity for the LUNs or file shares it has been assigned to monitor and moves data in and out of the cache as access patterns change over time, with the goal of keeping the hot data in the cache.

The EMC strategy for VFCache is to place a PCI Express flash card directly in the application server. This lets VFCache take advantage of the extremely fast access times of the server's PCI Express bus when reading cached data, and it avoids the additional latency of going over the storage network. VFCache is a read-only caching solution.
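To make the read-only, write-through behavior concrete, here is a toy model in Python. This is a simplified illustration of the semantics described in this video, not EMC's actual driver: `backend` is a plain dict standing in for the storage array, and simple least-recently-used eviction stands in for the product's real hot-data ranking.

```python
from collections import OrderedDict

class ReadCache:
    """Toy model of a server-side read cache: read hits are served from
    the cache, misses populate ("warm") it, and writes go to the backend
    first (write-through) before being cached. When the cache is full,
    the least recently used entry is evicted."""

    def __init__(self, backend, capacity=1024):
        self.backend = backend        # dict standing in for the storage array
        self.capacity = capacity      # number of entries the flash card holds
        self.cache = OrderedDict()    # addr -> data, in LRU order

    def _touch(self, addr, data):
        # Insert or refresh an entry, evicting the coldest one if full.
        self.cache[addr] = data
        self.cache.move_to_end(addr)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)   # drop least recently used

    def read(self, addr):
        if addr in self.cache:               # hit: no backend IO at all
            data = self.cache[addr]
            self._touch(addr, data)
            return data, "hit"
        data = self.backend[addr]            # miss: read from backend storage
        self._touch(addr, data)              # ...and warm the cache
        return data, "miss"

    def write(self, addr, data):
        # Write-through: the backend copy is written first, so data
        # integrity survives a cache or server failure.
        self.backend[addr] = data
        self._touch(addr, data)              # cached in case it's read soon
```

A first read of an address is a miss that warms the cache; repeated reads become hits and never touch the backend, which is why performance climbs as the cache warms.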
In practice, every time an application sends a request to read data from storage, the VFCache software determines whether that data is already in the cache. If it is, the data is read directly from the cache without accessing the backend storage at all. If it is not, the data is read from the backend storage and then written to the cache in anticipation of a future read. As the application requests additional unique data, that data is placed in the cache as well. This is referred to as warming the cache: the warmer the cache, the faster the data access. When the cache is full and additional data is determined to be hot, the caching software begins replacing less frequently requested data with more frequently requested data.

When an application writes data, that data is sent directly to the backend storage, bypassing the cache. In parallel, the data is also written to the cache in case the application requests it again. This means that while written data does make it into the cache, it goes to the backend storage first to protect its integrity in case of a cache or system failure.

With VFCache satisfying an increasing number of read requests, the backend storage is freed up to handle more and more write requests. This speeds up both reads and writes, and since many applications read data more often than they write it, all application transactions are accelerated. Workloads that can take maximum advantage of VFCache are read-intensive workloads, workloads with small IO block sizes (up to 64K), random IO, and multiple IO streams.

So the question is: just how much performance can VFCache deliver? We ran four workloads using different combinations of servers and storage. Each server had its own VFCache installed in it, as you see on the diagram. We captured the performance data from each of these runs and included it, along with all the configuration details, in our evaluation report. Here are some of the highlights.
We ran an Oracle OLTP workload in two different configurations: one server connected to an EMC VMAX, and a different server connected to an EMC VNX storage system. The test on the server connected to the VMAX storage system showed more than a 300% increase in transactions per minute, as you can see here. Notice that the cache warm-up took slightly more than one hour to complete before reaching maximum performance. The same test on the server connected to the VNX 5500 showed more than a 250% increase in transactions per minute.

Both storage systems recorded a significant reduction in database reads and a corresponding increase in writes as the number of transactions processed each minute grew. As the cache served more and more reads to the application, the backend storage was freed up to perform a much greater number of writes. These systems also reported a very high percentage of Oracle wait events of less than one millisecond.

We ran another simulation, this time modeling the type of workload a brokerage firm would use for an online account management database system. We used Microsoft SQL Server connected to a VNX 5300. The results were very similar, showing a more than 300% performance improvement with VFCache, as you can see here.

For more detail on VFCache, please go to www.demarctech.com/vfcache for the full analysis report. I'm Dennis Martin, and thanks for watching.