This is an interesting aspect because every dataset has a quality dimension. Quality can be in terms of the absolute quantity of data, the dimensionality of the data, or the value set of the data, and all of these contribute to the quality of a dataset. Now think about it: if the data stays within your own system and within your own consumer ecosystem, whether that ecosystem is one consumer or ten or twenty, then any quality issue has a blast radius, as it's called in classic security terms, that is contained within those systems.

With NPD, what happens? You give out a dataset, that dataset can be federated to another party, and from there to yet another. Now suppose the quality issue surfaces at layer three. Say in your ingestion, a producer who is supposed to deliver data on a snapshot basis instead, for whatever reason, dumps the whole dataset in one day. That breaks a couple of your versioning capabilities, a couple of your quality capabilities, and so on. The problem propagates one or two layers down the consumption chain, someone's algorithm breaks, and it creates havoc on the system. Now who takes liability for these issues? What are the protection criteria, and what legal measures need to be taken here? This is something to be thought about.

And when you do have a quality issue: on a classic e-commerce website you would have seen a status page saying there is a quality issue here, we are rectifying it, and here is the RCA. That is the classic broadcast model. Think about it: you are a hub connected to a bunch of spokes, so you can clearly broadcast. Now take that hub away and replace it with a free, federated model. How do you broadcast the issue, and how do you stop the downstream layers from doing any harmful processing of the data? That is the challenge I am talking about.
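The hub-and-spoke broadcast described above can be sketched in a few lines. This is a hypothetical illustration, not any standard's API: a hub keeps a registry of its direct consumers and fans a quality incident out to each of them. The names (`QualityIncident`, `QualityIncidentHub`) are assumptions for the sketch; note that it only reaches the spokes the hub knows about, which is exactly why the federated case (layer two, layer three) is the open problem.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class QualityIncident:
    """A quality issue announcement, like the status page + RCA on an e-com site."""
    dataset_id: str
    description: str
    rca: str = ""  # root-cause analysis, filled in once it is known

class QualityIncidentHub:
    """Hub-and-spoke broadcaster: notifies only DIRECT consumers.

    Federated consumers (a consumer's consumer) are invisible to the hub,
    so this model cannot stop harmful downstream processing by itself.
    """
    def __init__(self) -> None:
        self._subscribers: List[Callable[[QualityIncident], None]] = []

    def subscribe(self, handler: Callable[[QualityIncident], None]) -> None:
        self._subscribers.append(handler)

    def broadcast(self, incident: QualityIncident) -> None:
        for handler in self._subscribers:
            handler(incident)

# Usage: two direct consumers register and pause their pipelines on notice.
paused = []
hub = QualityIncidentHub()
hub.subscribe(lambda inc: paused.append(("consumer-1", inc.dataset_id)))
hub.subscribe(lambda inc: paused.append(("consumer-2", inc.dataset_id)))
hub.broadcast(QualityIncident(
    dataset_id="daily-snapshot",
    description="full dump delivered instead of incremental snapshot",
))
print(paused)  # both direct consumers were reached; layer-2 consumers were not
```

A federated version would need each consumer to re-broadcast to its own consumers, which is where the liability and coordination questions in the talk come in.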
So on the left-hand side, the problem this creates is unnecessary cost across the overall ecosystem, and again it all comes back to the question of liability: who is liable at that point? That is one aspect: an incoming quality issue propagating and affecting the entire ecosystem. The second aspect is that anybody consuming the data has to invest in something like quality gates, in checking the veracity of the data. Suppose you are collecting a particular attribute from three different data sources; you can get conflicting values. Now how do you figure out which of the three sources has given you the right attribute? That by itself is a challenge, and any startup trying to solve it will need a whole set of techniques to do so. And this is not a simple investment; it is a huge investment in building an incoming gate that protects you when you receive the data. So it is a two-pronged problem: what the producer side needs to invest in, and what the consumption side needs to invest in. I have put two challenges up there. One is veracity, which I just described: the same entity arrives from different sources with conflicting data attributes, and what happens to that? So this is the other pillar we need to think about when trying to implement anything along these tracks.
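The veracity challenge, picking one value when three sources report conflicting attributes for the same entity, can be sketched as a simple resolution rule at the incoming gate. This is only one possible sketch, assuming a weighted-majority scheme; the function name and the per-source trust weights are illustrative, and real systems use far richer truth-discovery methods.

```python
from collections import defaultdict
from typing import Dict, List, Optional, Tuple

def resolve_attribute(
    observations: List[Tuple[str, str]],
    trust: Optional[Dict[str, float]] = None,
) -> str:
    """Resolve one attribute value from conflicting (source, value) reports.

    Each source's report is weighted by its trust score (default 1.0),
    and the value with the highest total weight wins. This is a minimal
    stand-in for the 'incoming quality gate' investment described above.
    """
    trust = trust or {}
    scores: Dict[str, float] = defaultdict(float)
    for source, value in observations:
        scores[value] += trust.get(source, 1.0)
    return max(scores, key=lambda v: scores[v])

# Same entity, three sources, conflicting phone numbers:
obs = [("src_a", "555-0100"), ("src_b", "555-0100"), ("src_c", "555-0199")]
print(resolve_attribute(obs))                        # plain majority wins
print(resolve_attribute(obs, trust={"src_c": 3.0}))  # a highly trusted source overrides
```

Even this toy version makes the cost visible: someone has to maintain the trust scores, handle ties, and decide what to do when no source is believable, which is exactly the non-trivial investment the talk is pointing at.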