If you think about data processing, let's look at the lifecycle of how a data processing pipeline is created. Typically the business team comes with a bunch of asks, and those go to the product team. The product team curates them and decides, based on existing data sets and any new data sets that might be applicable, whether we need to create a new processing pipeline, whether we need to invest in it, what the lifecycle of the pipeline is, and what its frequency is. All the things you can think of from a productization perspective happen there. Now, the NPD draft states that I have to make all my data publicly available, which means I am making it discoverable via a data catalog. Think about the scenario: there might be 10, 20, 30, 40 asks saying "I need this part of your data" or "I need this item of your data". They can ask for a subset, they can ask for an aggregate, they can ask for a view of your data, so all of these could come in. Now, who will curate those asks, and how do they get curated? That is one of the major questions, and you could run into a scenario where you as a business don't need to run any aggregation or tertiary pipeline at all, yet it becomes a requirement driven purely by external, NPD-based discovery. And sometimes you may not even be protected from it: if the requirement runs into a conflict and you go to the ombudsman and they say, okay, you have to make this available, that means you have to actually invest in that new pipeline processing as well. So that is one. Now, when you are processing these new pipelines, these new requirements for each additional consumer, your metadata automatically increases. So it becomes a cycle.
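As a concrete illustration, the three kinds of asks mentioned above (a subset, an aggregate, a view) can be sketched against a single source table. The table, columns, and values here are invented for illustration; they are not from the NPD draft.

```python
import sqlite3

# Hypothetical provider table holding row-level business data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (city TEXT, fare REAL, ts TEXT)")
conn.executemany("INSERT INTO trips VALUES (?, ?, ?)", [
    ("Pune", 120.0, "2021-01-01"),
    ("Pune", 80.0, "2021-01-02"),
    ("Delhi", 200.0, "2021-01-01"),
])

# Ask 1: a subset -- only the rows matching some predicate.
subset = conn.execute("SELECT * FROM trips WHERE city = 'Pune'").fetchall()

# Ask 2: an aggregate -- per-group totals, no row-level data exposed.
agg = conn.execute(
    "SELECT city, COUNT(*), SUM(fare) FROM trips GROUP BY city"
).fetchall()

# Ask 3: a view -- a curated projection the consumer can query directly.
conn.execute("CREATE VIEW trips_public AS SELECT city, ts FROM trips")
view = conn.execute("SELECT * FROM trips_public").fetchall()
```

Each of these is a different deliverable the provider may have to build and operate a pipeline for, even when none of them serves the provider's own business.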
Now you have a bunch of metadata, and based on this metadata, new metadata comes in because you are creating data for that new consumer. That can go into another cycle, so it creates a never-ending cycle, the metadata derivation problem. The third thing, in terms of processing, is running the proper algorithms. Even in the NPD draft there are a bunch of algorithms they talk about, K-anonymity, differential privacy, homomorphic encryption, and so on, which may not be required for whatever your core business is doing. Homomorphic encryption may not be needed, but by virtue of an NPD-based ask from an external consumer, you may be forced to run that algorithm. Say you want to run a differential privacy algorithm on a million-record data set; that might be possible. But suppose you are talking about a one-billion or two-billion record data set; there is a possibility it may not even be feasible for you to run that algorithm. So in cases like these, where you as a data provider have a genuine issue and won't be able to honor the ask at all, what is your protection? Whom do you go to? This is a gap I see that is not addressed in the NPD. Can the provider say no? "No, it is not possible for me to give this, whatever the price; it is not within my reach to provide this particular data or cater to this ask." Even though the consumer has figured out from my metadata that this data set should be possible, it may not be possible for me to create it and give it to them. So that is another thing, and that is what I mentioned in the challenges: the metadata determines what the asks might be. And if you look at any data processing pipeline, whatever you put into your system, be it first-party or third-party systems, it comes with some baggage, non-functional baggage.
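To make the differential privacy point concrete, here is a minimal sketch of the Laplace mechanism, the simplest epsilon-differentially private release. This is a textbook construction, not anything the NPD draft specifies; the function name and numbers are hypothetical, and even this cheapest case adds per-query work that grows with the number of releases.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1, so adding Laplace noise with
    scale 1/epsilon is the standard Laplace mechanism.
    """
    scale = 1.0 / epsilon
    # A Laplace(0, scale) sample is the difference of two independent
    # exponential samples with mean `scale`.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

# Smaller epsilon = stronger privacy = noisier answer.
noisy = dp_count(1_000_000, epsilon=0.5)
```

Even this trivial mechanism has to be applied per released statistic; more capable schemes (and anything homomorphic) cost far more per record, which is why a billion-row ask can be genuinely infeasible for a provider.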
On top of any such pipeline you might also be running reporting; it could be quality reporting or plain number reporting. You could have alerting and monitoring on top of it as well. All these additional pipelines create back pressure in those supporting systems too. That is another aspect. And the third thing you have to think about is that any pipeline will have some failures. I said failure rate, but the rate doesn't increase; it more or less remains constant. What grows is the absolute number of failures. For example, if 10 pipelines have a failure rate of one percent and 100 pipelines have a failure rate of one percent, then by catering to these new requirements you now have ten times as many failures to manage. So that also throws up a challenge for the implementer in terms of how to manage the whole processing lifecycle. That is another area which needs thought when this bill comes to fruition and reaches a full-flow stage: there should be protections around whatever we have been talking about, and we need to figure out how all these additional things are handled as well.
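The failure arithmetic above can be written down directly. This is just the back-of-the-envelope calculation with the hypothetical numbers from the example (a constant one percent per-pipeline failure rate):

```python
def expected_failures(num_pipelines: int, failure_rate: float) -> float:
    # With a constant per-pipeline failure rate, the expected number
    # of failures scales linearly with the pipeline count.
    return num_pipelines * failure_rate

# Going from 10 pipelines to 100 at the same 1% rate means ten times
# as many failures for the operator to triage and manage.
before = expected_failures(10, 0.01)
after = expected_failures(100, 0.01)
```

The rate stays flat, but the operational load, on-call burden, and retry/backfill work all track the absolute count, which is what the implementer actually has to staff for.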