So you mentioned SLAs. You have a lot of experience dealing with SLAs, so what are the key factors in the SLA component of this that you're keeping an eye on, that you're watching and developing?

Yeah, there's a whole spectrum of things in the SLA that we care about, right? You even heard Larry this morning in the keynote talk about uptime and stability concerns. There are some very basic elements of the SLA: if the system's not available, or if you have to worry about data corruption, that's telling you we're at an early stage in the market. You also have to start thinking about how business-critical that information is, and whether there's some degree of tolerance for those sorts of outages.

Then go all the way to the other end and look at it from an analyst's perspective, and think about everything that went into OLAP processing. Relational databases provided analytics, but it wasn't good enough for a lot of business analysts: they couldn't slice and dice quickly enough, couldn't satisfy the sort of fast-twitch need they had for information. Hadoop, again, is really good at relatively high-latency, high-scale analytics, but vanilla Hadoop isn't necessarily an environment where you'd want to sit there and slice and dice information. That's why we're seeing a lot of tools being built to put caching and other layers on top of Hadoop, to provide that sort of SLA for their users.

And bring some structure to that Wild West. How about your relationship with Cloudera? What's going on there, and can you talk about that a little bit?

Yeah, sure. We have a partnership with Cloudera that was announced several months ago, and we have a certified integration with Cloudera. We have a connector to HDFS, and they're going to be coming out with a connector to HBase, of course. And so we're seeing where things go.
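The caching idea described above can be sketched as a toy: a slow batch-style aggregation (standing in for a vanilla Hadoop job) fronted by a cache, so that repeated slice-and-dice queries come back at interactive speed. All names and data here are illustrative, not any vendor's API:

```python
import time
from functools import lru_cache

# Toy dataset standing in for data living in a Hadoop cluster.
RAW_ROWS = [("east", 100), ("west", 250), ("east", 75), ("north", 50)]

def slow_aggregate(region):
    """Simulate a high-latency full scan, as a vanilla batch job would do."""
    time.sleep(0.1)  # stand-in for batch latency
    return sum(v for r, v in RAW_ROWS if r == region)

@lru_cache(maxsize=None)
def cached_aggregate(region):
    """A caching layer in front of the batch engine: the first query pays
    the batch cost, repeated slice-and-dice queries return instantly."""
    return slow_aggregate(region)

first = cached_aggregate("east")   # pays the batch latency
repeat = cached_aggregate("east")  # served from cache
print(first, repeat)  # 175 175
```

The point isn't the cache implementation (real tools are far more sophisticated), it's that the SLA an analyst experiences comes from the layer on top, not from the batch engine underneath.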
And what you see with Hadoop is that it's this big, powerful engine, but despite Mike's comments, data doesn't originate in Hadoop, and in a lot of cases data doesn't terminate in Hadoop either. Hadoop is just one of many happy stops along the way for data, and Informatica wants to be a part of that process of getting data in and out of Hadoop. Also, you heard about the skills shortage around MapReduce; everyone had their hiring pitch at the talks this morning. What we're seeing from our customers is, hey, we don't necessarily want a whole army of MapReduce programmers. We have people who know Informatica, we have people who know SQL. Can they leverage those skills and have that effectively be our programming environment for our Hadoop environment? And so our answer to that is...
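The "leverage SQL skills" point can be illustrated with a toy word count: the one-line SQL an analyst would write in a SQL-on-Hadoop tool, next to the map/shuffle/reduce phases it spares them from hand-coding. The Python below only mimics those phases (no real Hadoop API), and all names are illustrative:

```python
from itertools import groupby
from operator import itemgetter

# What an analyst would write in a SQL-on-Hadoop tool (illustrative):
#   SELECT word, COUNT(*) FROM docs GROUP BY word;
# The equivalent hand-rolled MapReduce, phase by phase:

docs = ["big data", "big hadoop", "data pipeline"]

# Map phase: emit (word, 1) pairs.
mapped = [(word, 1) for line in docs for word in line.split()]

# Shuffle phase: sort and group pairs by key.
shuffled = groupby(sorted(mapped, key=itemgetter(0)), key=itemgetter(0))

# Reduce phase: sum the counts for each key.
counts = {word: sum(n for _, n in pairs) for word, pairs in shuffled}
print(counts)  # {'big': 2, 'data': 2, 'hadoop': 1, 'pipeline': 1}
```

Even at toy scale, the SQL statement says *what* is wanted while the MapReduce code spells out *how*, which is the gap tools like Hive-style query layers were built to close.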