I want to talk to you today about how systems and storage architectures are changing, and the impact this will have on application delivery and organizational productivity. I'd like to start by exploring the following premise: today's enterprise IT infrastructure limits application value. What do I mean by that? Well, let's explore what we mean by value first. The value IT brings to a for-profit business is directly a function of how information technology contributes to the productivity of that organization. Infrastructure in and of itself delivers no direct value; it's the applications that run on that infrastructure that directly affect business value. Now, value comes in many forms, but at the highest level it's about increasing revenue and/or cutting costs, and ultimately delivering bottom-line profits.

Today's computing infrastructure can be likened to a military convoy: the entire convoy must decelerate so the slowest vehicles can keep up. And the slowest vehicle in computer systems today is, of course, the mechanical disk drive. In the very early days of computing, the delta between processor and persistent storage speeds was negligible. Today it's literally six orders of magnitude; we're talking nanosecond speeds for processors versus millisecond speeds for spinning disks. Think about that for a moment in terms of distance: it's as if the processor is one foot away, while the disk is the distance from San Francisco to Los Angeles. Applications are constrained by this delay. The amount of data that can be brought into systems is extremely limited, and application design must be cognizant of this slowness.

Computer systems today are designed to minimize trips to LA. They're designed to handle many other tasks while data is being written to disk, and so an immensely complicated multiprogramming environment has been built up over decades. The speed of applications is severely limited by this complexity, and an organization's ability to attack a new problem, for example, is very much constrained by the inflexibility of computer architectures.

In particular, databases today are relatively small, and there are many of them. Updates made to any one database are relatively few. Transactional systems must be isolated from all other data in the organization so their performance can be optimized. Think about an ERP system with many modules; it might be tracking inventory, supply chain, demand forecasts, et cetera. The entire workflow of the organization is built around this big application, and that workflow is a fixed process that is very, very hard to change. To alter pricing, for example, all these asynchronous systems must be synced up in, let's say, a data warehouse that becomes the single source of record. But by the time that single source of record is actually put in place, the market may very well have changed.

So how does Flash change infrastructure design? Well, for the past 15 years, function has moved out of the processor and into the array, for good reason: to share data, to protect data, and to offload servers. With persistent Flash, however, function is starting to move back closer to the server, promising new levels of application performance and organizational flexibility. Now, Flash is going to reside in the server, in all-Flash arrays, in hybrid arrays, virtually throughout the entire stack. But importantly, the control point for the Flash will be the fast server, not the slow storage array. As a result, application design will change.
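To make the convoy arithmetic above concrete, here is a back-of-envelope sketch in Python. The nanosecond and millisecond figures are the round numbers from the talk, not measurements of any particular hardware.

```python
# Back-of-envelope sketch of the latency gap described above,
# using the talk's round numbers rather than measured hardware figures.

processor_latency_s = 1e-9  # nanosecond-scale processor operation
disk_latency_s = 1e-3       # millisecond-scale mechanical disk access

ratio = disk_latency_s / processor_latency_s
print(f"Disk is ~{ratio:,.0f}x slower")  # 1,000,000x: six orders of magnitude

# The distance analogy: if a processor access is one foot away,
# a disk access is a million feet away.
miles = (1.0 * ratio) / 5280
print(f"That's ~{miles:,.0f} miles away")
# ~189 miles: the same order of magnitude as the
# San Francisco-to-Los Angeles trip in the analogy
```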
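And here is a minimal sketch of the "do other work while waiting on the disk" pattern that those multiprogramming environments implement, simulated with Python's asyncio. The 5 ms sleep standing in for a disk write, and the work loop around it, are illustrative assumptions, not a model of any real system.

```python
import asyncio
import time

# Toy illustration of overlapping other work with a slow disk write.
# asyncio.sleep stands in for real I/O; all latencies are assumptions.

async def disk_write(block: int) -> None:
    await asyncio.sleep(0.005)          # ~5 ms simulated disk latency
    print(f"block {block} persisted")

async def other_work() -> None:
    for step in range(3):
        print(f"CPU doing other work, step {step}")
        await asyncio.sleep(0.001)      # yield so the write proceeds concurrently

async def main() -> None:
    start = time.perf_counter()
    await asyncio.gather(disk_write(1), other_work())
    elapsed_ms = (time.perf_counter() - start) * 1e3
    # ~5 ms total rather than ~8 ms, because the other work
    # ran while the write was in flight
    print(f"elapsed: {elapsed_ms:.1f} ms")

asyncio.run(main())
```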
Application designers will begin to build a much flatter database structure, allowing secure, multi-tenant access to a single database of record. Data architectures will accommodate both transactional and analytic data, and will allow machines to make decisions, for example pricing changes, in near real time based on market conditions. In this scenario, the organizational workflow can be flexibly changed in days or less, versus many months, because of that single source of record. The impact of these changes on organizational productivity will be enormous, resulting in much greater IT value and flexibility. Companies will be able to respond much more quickly to market opportunities, competitive threats, disasters, et cetera, by analyzing and acting on massive streams of data in near real time.

Now, as data volumes explode and new approaches like Hadoop hit the enterprise, Flash will play a critical role in allowing organizations to manage, capitalize on, and monetize the massive amounts of information coming in. Systems design will evolve, and Flash will be an enabler of this vision. Flash as a persistent medium is very important, but much more critical and valuable will be the software that manages the data end to end. System and software expertise in file systems, operating systems, metadata management, and middleware will be critical to enabling a new crop of applications.

Now, importantly, spinning disk will not disappear. It will still account for the lion's share of capacity stored and, according to Wikibon estimates, nearly half of the spend. But increasingly, organizations will invest more in the software, algorithms, data scientists, and processes that extract value from the data, rather than in the container in which the data resides.

This is Dave Vellante of the Wikibon Project. Thanks for watching, everybody. We'll see you next time.