From Strata in New York City, Strata + Hadoop World, a conference brought to you by O'Reilly, and O'Reilly Media is a partner of SiliconANGLE. They've had us here for a couple of years now; we've done several Hadoop Worlds, and now Hadoop World and Strata have merged. theCUBE is our flagship product where we extract the signal from the noise, as I say, and we bring you the smartest guests that we can find. We are here with Justin Borgman, who's the co-founder and CEO of Hadapt, and Scott Howser, who's the vice president of marketing. Gentlemen, welcome to theCUBE.

Thanks for having us.

Yeah, so I appreciate you guys coming on, and first of all, congratulations are in order. You won the Startup Showcase last night, I understand. The Startup Showcase is basically a competition for the best startup, and they do it every year at Strata. It's a big deal; I think there were ten finalists last night, right?

That's correct.

So, well done, congratulations.

Thank you.

So, Justin, a lot of people don't know Hadapt. We had Daniel Abadi on two years ago at the Strata February 2011 conference, before you really started the company and sort of came out of stealth. Just give us a high-level overview of the company and what you do.

Sure, yeah. We got started about two years ago. Daniel's my co-founder; he's a young professor at Yale. His prior research actually led to the founding of Vertica, so he's a very sharp guy. I met him, I found his research fascinating, and that's really how we got started. The basic idea was to bring together the performance and functionality of an analytic database with the scalability of Hadoop, and really converge these into a unified platform that allows you to use SQL to interact with structured and unstructured data in one Hadoop-based platform. That's really what gave rise to the founding of the company. Since then, we've grown up a little bit. We have about 40 people today, and we've raised venture capital.
We've got some production customers already using the platform.

So, talk about why that's important, bringing SQL to NoSQL. I touched on it earlier, but there's a lot of skill set around SQL, and there's not a lot of skill set in the whole Hadoop world. I mean, there's a lack of skill set. So talk about why that's important and how you guys are bringing those two together.

I think that's an incredibly important point. There's this enormous skills divide between the guys that know how to write MapReduce jobs and the business analysts and SQL-based application developers that know how to interact with SQL. That was one of the core tenets when we started the company: how can we bring these worlds together? And that's effectively what we do. We are accelerating Hadoop into production by allowing business analysts and current investments in SQL-based tools to interact with data inside of Hadoop. One example of that is an integration with Tableau that we've done, which we'll talk about a little bit later.

Yeah, so you guys have talked a lot about this notion of connectors, right? But connectors mean you've got Hadoop, and you're doing a lot of processing in Hadoop; people position it as the batch platform. And then, we were talking earlier about Larry Ellison saying, okay, then extract it, bring it into Exadata or Exalytics or Exalogic, a million-dollar infrastructure, and then we'll make it real-time. Talk about your vision of actually using commodity components to do that.

Yeah, I mean, that's absolutely the future that we think is a necessity and inevitable. Hadoop itself is such a great underlying infrastructure: it has built-in fault tolerance and load balancing, it runs on inexpensive commodity hardware, it can scale, and it can store everything.
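[Editor's aside: the skills divide discussed above, declarative SQL versus hand-written MapReduce, can be made concrete with a minimal sketch. Python's built-in sqlite3 stands in for a SQL-on-Hadoop engine here purely for illustration; the clickstream table and column names are hypothetical, not Hadapt's actual schema or API.]

```python
import sqlite3

# Hypothetical clickstream records; in the SQL-on-Hadoop model these would
# live in HDFS, but an in-memory SQLite table serves as a stand-in engine.
rows = [("home", 3), ("search", 5), ("home", 2), ("checkout", 1)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clicks (page TEXT, hits INTEGER)")
conn.executemany("INSERT INTO clicks VALUES (?, ?)", rows)

# What a business analyst writes: one declarative query.
sql_result = dict(conn.execute(
    "SELECT page, SUM(hits) FROM clicks GROUP BY page"))

# Roughly what a hand-rolled MapReduce-style job must spell out instead:
# a map phase emitting (key, value) pairs, then a reduce phase summing them.
mapped = [(page, hits) for page, hits in rows]   # map
mr_result = {}
for key, value in mapped:                        # shuffle + reduce
    mr_result[key] = mr_result.get(key, 0) + value

assert sql_result == mr_result  # same answer, very different skill sets
print(sql_result)
```

The point of the sketch is not performance but personnel: the one-line query is within reach of any SQL-literate analyst, while the map/reduce version requires a programmer.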
So customers start to use it as what one of our customers calls a "supervised landfill," where they can just dump everything into this one place, collect it all there, and then perform the analytics in place on that data. That's really what Hadapt provides: you no longer have to have those connectors between Hadoop, which is your low-cost storage, your ETL tool, and then some data warehouse or data analytics appliance. You can now actually do those analytics inside of that Hadoop cluster itself.

All right, we'll be drilling down into this topic all week. This is a major theme that we've been tracking at SiliconANGLE and Wikibon. Scott, can you talk about what you guys are announcing at this event? This is a big deal for you, sort of a coming-out party. Talk about the product that you're announcing, and then we'll bring in Ted from Tableau and talk about some of the integration you're doing.

Absolutely, so a couple of things. We made our announcements last week, and the focus was on accelerating Hadoop into enterprise production deployments based upon interactive capabilities. So, what you described: moving beyond batch and giving folks the interactive capabilities to do investigative analytics on the dataset via tools like SQL and Tableau. So we announced the interactivity. We also announced what we're calling the Hadapt Development Kit, which is the ability for us to publish, via SQL functions, the advanced analytic ecosystem of Hadoop. There's a lot of talk about things like machine learning and other advanced analytic tools. We're now able to publish those via SQL functions and enable customers, and as you mentioned earlier, business analysts, to call upon them from basic SQL tools or from visualization tools like Tableau.

So paint a picture of how you would do this pre-Hadapt and post-Hadapt.
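[Editor's aside: the transcript doesn't show the Hadapt Development Kit itself, but the pattern Scott describes, publishing an advanced analytic so SQL tools can call it, can be sketched with sqlite3's `create_function`. The `SENTIMENT` function and its word lists below are entirely hypothetical, a toy analogue rather than anything from the HDK.]

```python
import sqlite3

# A toy "advanced analytic": a word-list sentiment score that a data
# scientist might write once and publish for analysts to reuse.
POSITIVE = {"great", "love", "fast"}
NEGATIVE = {"slow", "broken", "bad"}

def sentiment(text: str) -> int:
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reviews (body TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?)",
                 [("great product, love it",), ("slow and broken",)])

# Publish the analytic as a SQL function, analogous in spirit to how a
# development kit could expose machine-learning routines to SQL tools.
conn.create_function("SENTIMENT", 1, sentiment)

scores = list(conn.execute("SELECT SENTIMENT(body) FROM reviews"))
print(scores)
```

Once registered, the function is just another name in the SQL namespace, so a business analyst (or a tool like Tableau generating SQL) can invoke the analytic without knowing how it is implemented.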
Yeah, great question. It's what Justin started to highlight, and what you mentioned earlier: in most worlds there are these silos. You'd have a situation where you might do some exploratory analysis on a Hadoop platform, and then once you've found something that seems interesting, you would create some structure out of that data, ship it across the connector to another platform, and then iterate and ask analytic questions about the data. But unfortunately, in that situation, there's no ability to do real investigative analytics, and no ability to drill into the source data, because you've distilled it all away and lost it, since, again, you have these two disparate platforms. Now what we're enabling you to do is say: in one platform, I can find something that's interesting, drill into those results, and do that investigative analysis in that one single platform. So we can start with the aggregates and drill into what's fascinating, in an interactive fashion, for the analyst.

So let me introduce Ted Wasserman, who's with Tableau. Ted, welcome to theCUBE.

Thanks very much.

Thanks for coming on. So for those who don't know, Tableau is becoming the gold standard of visualization in this big data world, but you actually started before all this big data fervor came about. Before we get into Tableau, Scott, I want to ask you: why Tableau? Why did you guys partner with Tableau? What's that partnership all about? And then we'll ask Ted to comment.

Sure. I think it was the overwhelming choice when we talked to prospects and customers alike who were looking to do some advanced analytics and walk down this big data path. Overwhelmingly it was: we're interested in Tableau, we want to work with Tableau. I've had the pleasure of working with Tableau in a previous life as well.
And so knowing what a good partner they are, how flexible they are with technology, and how advanced the technology is, it was a perfect fit for us. The trail that we're blazing, bringing these interactive applications on Hadoop to the enterprise and into production, was a perfect fit with their vision and the company.

So Ted, I mean, everybody's working with you guys. You've got your big user conference coming up, and it's been growing year after year after year. Ted, give us a quick update on Tableau and what you guys are doing with Hadapt.

Right, absolutely. Our key mission is to help people see and understand data, no matter where it is. A lot of our users are now bringing Hadoop into their environment, and that's great, because it's allowing them to get access to huge volumes of data, which maybe they couldn't do before on a traditional relational database. However, one of the issues with Hadoop is its latency, right? Tableau provides a really fast and fluid analytics experience where you drag and drop depending on the types of questions you want to ask. With Hadoop, because of the latency, that sometimes gets slowed down, right? Sometimes even fairly basic queries take a number of seconds to run before you get an answer back. And when you're in that analytic flow, trying to ask different questions and follow-up questions, that little break while you're waiting for the results sometimes breaks your analytical flow. And so that's why we're very interested in working with vendors like Hadapt, who are building a really fast interface on top of Hadoop to be able to do deep analytics on.

That's an interesting point you're making. So it's about insights, and not only insights but putting those insights in the hands of business users who can make decisions; it's about the productivity of their workflow as well. Because you're absolutely right.
If there's a break there, you're going to go do something else, and then you have to restart the whole mental process.

Absolutely.

So, all right, we have very limited time here, but Scott and Ted, I'll ask you both: paint a picture of the future for us. This is sort of the early days, right? You guys are a startup; you got your funding, you put it together, you delivered the product. Where do you see this all going in the grand scheme of things? And put it into the context of traditional BI and data warehousing.

Sure. Do you want to start off and I'll follow up?

Sure, absolutely. I think Hadoop is definitely a strategic platform for the future for doing massive analytics on huge amounts of data. I see the platform maturing, so it's going to get faster, it's going to get more robust, it's going to get easier to use. And the key for tools like Tableau is to make that data available to any user who has questions about data but doesn't have the know-how to write scripts and programs to ask questions of it.

Okay, Scott, so what about the legacy data warehouse world? Is it going to be like the mainframe? It'll be around forever? It won't get as much investment? What's your angle on that?

I think that over time that will evolve, and there will be pockets of it that exist for quite some time in the future. But what I see in all the customers that I talk to and the folks that we work with is that there is absolutely a desire to push all of these advanced analytic capabilities into the hands of the knowledge worker. When I go out and talk to folks, it's about being able to take these advanced analytic concepts and functions and put them in the hands of an analyst, where they can change the way that they interact with the data and the decisions that they make in their daily workflow or process, and do so based upon real investigative analytics and empirical insights.
And I think that we will continue to see this permeate the industry, and the more people that get on the platform, it just further amplifies the necessity for something like Hadoop that is extremely parallel, fault tolerant, resilient, et cetera. I think it's certainly going to supplant the legacy platforms in the market.

So this is a theme that we're going to be tracking all week. A number of companies are attacking this problem in different ways; Hadapt was really the first to set out to solve it, and they've got a product in the market now. So first of all, thank you, gentlemen, for coming on.

Thank you.

I also want to thank our sponsors. O'Reilly Media, thank you for having us here. Cloudera has been great; this was really the Hadoop World conference, and they've been really friendly to theCUBE for a number of years. Also MapR, DataStax, Hortonworks, of course Hadapt, Opera, Sqrrl (a very interesting company focused on security and big data), Tableau, and RainStor. So keep it right there; we'll be right back. We're live from New York City. This is theCUBE at Strata + Hadoop World.