A primer on data processing in the data business: whether you are running data as a service, or you are in some other domain and simply run data processing pipelines, what exactly happens there? There are three pillars to this whole data aspect if you think about it.

The first pillar is sourcing. Let me define what first party is: first-party data is the data a company collects about its own customers. A classic example could be Ola collecting customer info from their rides, registrations, and so on. The sources of this data could be myriad: web SDKs and app SDKs, as well as walk-ins, discount coupons, and subscriptions. So there are a bunch of online sources as well as offline sources for first-party data. If you think from a third-party business angle, there is a complete flip, in the sense that you will have specialist sourcing teams which scout around on various aspects of the data: whether it is the right data, whether it is in the right format, what integrations are required, and what the contractual requirements are. Based on all these, they source the data. When some company B collects data that is company A's first-party data, for company B it is termed third-party data. That is the sourcing aspect of the whole problem.

The second pillar is refining. There is enough literature, as well as lots of data engineering talks, around what refinement of data is.
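To make the first-party sourcing idea concrete, here is a minimal sketch of mapping events from different sources (an online web SDK and an offline walk-in record) into one common schema. The field names (`uid`, `member_no`, `visited_at`, and so on) are hypothetical assumptions, not anything from a real SDK.

```python
from dataclasses import dataclass

@dataclass
class CustomerEvent:
    """Common schema that all sourcing channels map into."""
    customer_id: str
    channel: str      # e.g. "web_sdk", "app_sdk", "walk_in", "coupon"
    event_type: str
    timestamp: str    # ISO-8601 string

def from_web_sdk(raw: dict) -> CustomerEvent:
    # Hypothetical web SDK payload keys: "uid", "event", "ts".
    return CustomerEvent(
        customer_id=raw["uid"],
        channel="web_sdk",
        event_type=raw["event"],
        timestamp=raw["ts"],
    )

def from_walk_in(raw: dict) -> CustomerEvent:
    # Hypothetical offline store record keys: "member_no", "visited_at".
    return CustomerEvent(
        customer_id=raw["member_no"],
        channel="walk_in",
        event_type="store_visit",
        timestamp=raw["visited_at"],
    )
```

Each new source only needs its own small adapter; everything downstream works on `CustomerEvent`.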
So in refinement of data you standardize, in the sense that you map and transform everything into a common taxonomy; you do cleansing of the data to weed out the wrong data points and get clean data inside your system; and you have a bunch of enrichments on top of it, where you can apply heuristic, AI-driven, or ML-driven intelligence. Then you have the quality measures, whatever is needed, whether it is statistical quality checks or anomaly detection. And the final point is that you add something called temporality. Temporality is nothing but the time dimension, because, I wouldn't say all the data, but almost 90% of the data you collect would lose value over a period of time. So you have to add temporality to the data as well.

From that you move on to creating consumable data sets which can be delivered to the end consumer. This delivery, if you think about it, is more or less the reverse of your sourcing problem: you have the same kinds of format and integration challenges, as well as the various modes of delivery, whether push-based or pull-based mechanisms, batch mode, streaming, API-based mechanisms, or a customer who says, I need a self-serve, discoverable tool on top of your data set, where I can go and define my own criteria, create my own data sets, and download them for my own consumption. Then other non-functional aspects come in, in terms of security as well as reliability. When I say reliability, it is about classic SLA management. For example, you could have given a contract saying that in the first week of every month, you will ensure the data dump is available.
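The refinement steps above (cleansing, standardizing into a common taxonomy, and adding temporality) can be sketched as a single pass over raw records. This is a minimal illustration under assumed field names (`category`, `value`, `collected_at`) and an assumed 90-day freshness window, not a definitive pipeline.

```python
from datetime import datetime, timezone

def refine(records, taxonomy, max_age_days=90):
    """Cleanse, standardize, and add temporality to raw records.

    records: list of dicts with 'category', 'value', 'collected_at' (ISO-8601).
    taxonomy: maps source-specific category names to the common taxonomy.
    max_age_days: assumed window after which a record has lost its value.
    """
    now = datetime.now(timezone.utc)
    clean = []
    for rec in records:
        # Cleansing: weed out records missing mandatory fields.
        if rec.get("value") is None or "collected_at" not in rec:
            continue
        # Standardization: map into the common taxonomy.
        category = taxonomy.get(rec.get("category"), "unknown")
        # Temporality: age in days drives a simple freshness flag.
        collected = datetime.fromisoformat(rec["collected_at"])
        age_days = (now - collected).days
        clean.append({
            "category": category,
            "value": rec["value"],
            "age_days": age_days,
            "fresh": age_days <= max_age_days,
        })
    return clean
```

A real pipeline would add the enrichment and anomaly-detection stages in the same shape: each stage takes the cleaned records and annotates or filters them.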
So in that case, what happens if you don't honor the SLA? These are the various non-functional aspects of delivery. If you think about it, this is very similar to what happens in the oil industry, and it is intended to be so: the data business more or less runs on these three major pillars of sourcing, refining (that is, curating the data sets), and delivery. So if you are already a data business, you will have pipelines doing exactly this. If you are in a different domain, say ad tech or fintech, you might be internally running pipelines to solve internal growth, revenue, or fraud detection use cases, and these same structures or constructs would exist within your company.
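The "first week of the month" SLA mentioned above can be sketched as a simple check; the seven-day grace window and the function name are assumptions for illustration.

```python
from datetime import date

def sla_met(delivery_date: date, period_start: date, grace_days: int = 7) -> bool:
    """Check a 'first week of the month' style SLA: the data dump must
    land within grace_days of the start of the contracted period."""
    elapsed = (delivery_date - period_start).days
    return 0 <= elapsed < grace_days
```

A monitoring job could run this per contract per period and raise an alert (or trigger a contractual penalty workflow) whenever it returns False.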