Hi, I'm Kim Palco, and I'm the Product Manager for Big Data across the middleware portfolio, and specifically for the JBoss Data Virtualization product.

Enterprises today are seeing a big change in the data in their environment. It used to be that data sources were mostly relational; now new data sources are coming online all the time: NoSQL sources, IoT sources streaming data in from sensor devices, and big data sources like Hadoop and Spark. The speed at which these teams are getting requests for these data sources is really unprecedented. With all these new and disparate data sources coming online, they need to be able to correlate and combine the data quickly. The time to decision is shrinking all the time, and data has an expiry date: it loses value over time. There's no longer time to build long-running ETL processes or big data warehouse projects that can take months to implement.

What data virtualization allows you to do is connect to a number of different disparate data sources and pick and choose just the data that you need out of each source, whether it's in the cloud or on-premise, relational, NoSQL, legacy, whatever it is. You take just the fields, just the columns, just the data that you need from each source, without having to move or copy any data, and compose what we call a virtual database that can be consumed through any standard interface: ODBC, JDBC, REST, Java applications, or OData, which is a standard for transferring data over REST.

That lets architects build flexibility into their architecture, a kind of plug-and-play architecture where you can plug new sources in and pull the old sources off as you migrate that data over time. You can also plug in new consumers, so new BI or data discovery tools can connect to this architecture.
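The idea above — federate several sources, expose only the columns you need, and present the result as one queryable "virtual database" — can be sketched with plain SQLite as a stand-in. This is an analogy, not the JBoss Data Virtualization API: the two attached in-memory databases play the role of disparate sources, and a temp view plays the role of the virtual database.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("ATTACH ':memory:' AS crm")     # stand-in for source 1 (e.g. a CRM system)
conn.execute("ATTACH ':memory:' AS sales")   # stand-in for source 2 (e.g. an orders system)

conn.execute("CREATE TABLE crm.customers (id INTEGER, name TEXT, region TEXT)")
conn.execute("CREATE TABLE sales.orders (cust_id INTEGER, total REAL)")
conn.executemany("INSERT INTO crm.customers VALUES (?,?,?)",
                 [(1, 'Acme', 'EMEA'), (2, 'Globex', 'NA')])
conn.executemany("INSERT INTO sales.orders VALUES (?,?)",
                 [(1, 100.0), (1, 50.0), (2, 75.0)])

# The "virtual database": a view that picks just the columns it needs
# from each source and joins them, without copying any data anywhere.
# (A TEMP view is used because SQLite only lets temp objects reference
# attached databases.)
conn.execute("""
    CREATE TEMP VIEW customer_revenue AS
    SELECT c.name, SUM(o.total) AS revenue
    FROM crm.customers c JOIN sales.orders o ON o.cust_id = c.id
    GROUP BY c.name
""")

print(conn.execute("SELECT * FROM customer_revenue ORDER BY name").fetchall())
# → [('Acme', 150.0), ('Globex', 75.0)]
```

Consumers only ever see `customer_revenue`; either backing source could be swapped out without them noticing, which is the plug-and-play property described above.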
And while you're doing all of this, that data abstraction layer in the middle, the data virtualization layer, protects the consuming applications from disruption.

Developers are particularly interested in data virtualization for a couple of use cases. One is when there are multiple different data sources they have to connect to. You can do this on your own by writing your own code, but once you have more than one data source, and especially when you need to write to those sources as well, data virtualization lets you do it cleanly, with a largely no-code approach in our Eclipse-based GUI tooling. It allows you to do that very easily and quickly, including the writes.

Another use case developers are interested in is the move toward microservices. In a pure microservices architecture, every microservice owns its own data; it's completely isolated, and that's perfect for new projects. But what we see in a lot of enterprises is data stuck in monolithic systems with huge databases, and even if someone wanted to move to a pure microservices architecture, that would take a lot of time. What data virtualization allows you to do is leave the data on your legacy or traditional systems and create virtual databases on top of it, so that each microservice can have its own virtual database without the pain and time of breaking up that whole monolithic system at once.

A lot of people are also interested in using data virtualization in big data projects, to solve a couple of problems. One reason big data projects don't get traction is that there simply aren't enough data scientists around; there aren't enough highly skilled people to use a lot of these technologies.
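The microservices point can be sketched the same way: leave the data in the monolith's database and give each service a narrow view of only the tables and columns it owns. Again this is a conceptual stand-in using SQLite, not the product's tooling; the service and column names are made up for illustration.

```python
import sqlite3

# Stand-in for the monolith's big shared database.
monolith = sqlite3.connect(":memory:")
monolith.executescript("""
    CREATE TABLE users  (id INTEGER, email TEXT, created TEXT);
    CREATE TABLE orders (id INTEGER, user_id INTEGER, total REAL);
    INSERT INTO users  VALUES (1, 'a@example.com', '2016-01-01');
    INSERT INTO orders VALUES (10, 1, 42.0);
""")

# Each microservice gets its own "virtual database": a view over just
# the data it owns, while the rows physically stay in the monolith.
monolith.execute("""
    CREATE VIEW billing_service_orders AS
    SELECT id, user_id, total FROM orders
""")
monolith.execute("""
    CREATE VIEW profile_service_users AS
    SELECT id, email FROM users  -- 'created' stays private to the monolith
""")

print(monolith.execute("SELECT * FROM profile_service_users").fetchall())
# → [(1, 'a@example.com')]
```

Once a service's data is eventually migrated out, its view can be repointed at the new store without changing the service itself.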
So what data virtualization allows you to do is consume data from big data systems like Hadoop or Spark with the traditional tools people are accustomed to using. You can run queries from Excel, if that's what you like, or from any BI tool you're used to, and let data virtualization do the translation work and run them on those systems.

There are several new features in data virtualization that we're excited about, especially our new integration with JBoss Data Grid. We've had integration with JBoss Data Grid for a couple of years now, but what's new is that the data grid can now be a materialization target for the data in data virtualization. As I explained, in data virtualization we don't take ownership of the data, but you can materialize it to a source, and now you can materialize it to JBoss Data Grid, which is a lightning-fast, scalable in-memory cache.

The other feature I'm very excited about is the integration with Apache Spark. Apache Spark is taking the big data world by storm; I just went to a recent conference, and it's a very popular technology. It now integrates with data virtualization both as a data source, so we can consume from Apache Spark, and as a consumer, so Apache Spark can consume data from data virtualization. That lets Spark take advantage of some of the performance optimizations we have in data virtualization for connecting to multiple data sources, and also of our centralized security model, which is a huge benefit when working with Spark.

Data virtualization will make its debut in OSC3 in the early fall. What we're doing there is really acting as a data-service API layer in OpenShift, so that developers can use a quick and easy web UI to create a data service with no or very little code. It also allows you to connect to data sources whether they're hosted in the cloud or on-premise.
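The materialization idea — keep the data in its source, but stage query results in a fast in-memory cache so repeated requests don't hit the backing system — can be illustrated with a toy sketch. Here a plain Python dict stands in for an in-memory grid like JBoss Data Grid, and a SQLite table stands in for a slow backing source; none of this is the product's actual API.

```python
import sqlite3

# Stand-in for a (possibly slow) backing big data source.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE events (sensor TEXT, reading REAL)")
source.executemany("INSERT INTO events VALUES (?,?)",
                   [('t1', 20.5), ('t1', 21.0), ('t2', 18.0)])

cache = {}  # stand-in for an in-memory data grid (the materialization target)

def query(sql):
    # Materialization: the first request is pushed down to the source;
    # repeats are served from the in-memory copy.
    if sql not in cache:
        cache[sql] = source.execute(sql).fetchall()
    return cache[sql]

q = "SELECT sensor, AVG(reading) FROM events GROUP BY sensor ORDER BY sensor"
print(query(q))   # hits the source and materializes the result
print(query(q))   # served from the cache
# → [('t1', 20.75), ('t2', 18.0)] both times
```

A real materialization target also handles refresh and invalidation when the source changes, which this sketch deliberately omits.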
So it acts as a kind of data gateway for OpenShift.
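The data-service layer described above exposes virtual data over standard interfaces such as OData. As a rough flavor of what that translation involves, here is a toy function mapping a simplified subset of OData query options (`$select`, `$top`) onto SQL; the table, columns, and function name are all invented for illustration, and real OData supports far more than this.

```python
import sqlite3
from urllib.parse import parse_qs

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER, name TEXT, price REAL)")
db.executemany("INSERT INTO products VALUES (?,?,?)",
               [(1, 'widget', 9.99), (2, 'gadget', 19.99)])

def odata_get(table, query_string):
    """Translate a tiny subset of OData query options into SQL.

    Only $select (column projection) and $top (row limit) are handled;
    this is a sketch of the gateway idea, not a real OData endpoint.
    """
    opts = parse_qs(query_string)
    cols = opts.get('$select', ['*'])[0]
    sql = f"SELECT {cols} FROM {table}"
    if '$top' in opts:
        sql += f" LIMIT {int(opts['$top'][0])}"
    return db.execute(sql).fetchall()

print(odata_get('products', '$select=name,price&$top=1'))
# → [('widget', 9.99)]
```

The gateway's job is exactly this kind of translation, in both directions: standard REST-style requests in, source-specific queries out, regardless of where the backing data actually lives.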