I don't see it being transactional, because essentially we're trying to lower the human effort to understand and read things. We talk about automating understanding; that's a phrase we've seized on in this. It's really about how you take on a class of data where we can no longer scale the human resources to understand it. What's happened is data has gone up about 100x in the last 15 or 16 years, about 10x in the last six, right? So you can go back to the web around '01, '02 and get another 10x there. Human attention has not expanded. So what's ended up happening is you do a search and, 10 to 15 years ago, you'd get maybe the top 1% to 5%, depending on the amount of data involved. Now we're getting 0.5% to 0.1%, or worse than that in the larger assets, 0.01%. And there's a psychological thing that happens when you're dealing with such a small sample: you feel you have no confidence at all anymore. The intel community has already been hit by that, so they've had to make a preemptive investment in that area, which we're part of. And I think it's true in financial services as well, and for anyone who has large-scale data problems. The Web 2.0 community that has this data, the comments, those pieces, as well as financial intelligence, it's natural they're going to have to do that next, because they can't go hire 100,000 people to read this stuff. So information.