Hey guys, welcome to SSUNITEX. Today we are going to look at the inline dataset in data flows. In the previous videos we have seen the data flow, the datasets, and the debug option inside the data flow, so if you haven't watched the previous three videos of this series, I strongly recommend watching them first.

Let's get started. An inline dataset is used when a dataset is not going to be reused. As we have already seen, we created multiple datasets; for example, we created an employee dataset and used it in multiple places, across multiple pipelines and multiple data flows, so for that we created a physical dataset. But let's assume we have a requirement where we want to use a dataset only a single time and never again. In that scenario we can use the inline dataset option.

So let's go to the browser and try this in practice. Here is the data flow that we created. Let me go to the source, and under the source we can see the Source settings tab. In these source settings we can see two options: Dataset and Inline. We have already covered the dataset option, and this video is dedicated to the inline dataset, so let me select Inline. It now asks for the dataset type. Our source is blob storage: under the input folder we have a file, employee India, and if we open it with Edit, we can see it is comma-delimited. In the inline dataset type dropdown there are a lot of options; in our case we need to select DelimitedText. Once we have selected that, we are required to select the linked service, and here we can select the ssu testing linked service.
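Behind the UI, ADF stores the source configuration in the data flow script. A rough sketch of what an inline delimited-text source looks like there is shown below; the property names follow the DelimitedText format, but the container, file name, and stream name are assumptions for this example and may not match the exact script ADF generates:

```
source(allowSchemaDrift: true,
    validateSchema: false,
    ignoreNoFilesFound: false,
    format: 'delimited',
    container: 'input',
    fileName: 'employee_india.csv',
    columnDelimiter: ',',
    columnNamesAsHeader: true) ~> employeeSource
```

Notice that the file location and format live directly on the source here, instead of being referenced from a separate dataset object.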
The remaining options here we have already discussed, so let me leave them and go to the Source options tab. Under the source options everything has changed: the properties that we previously saw inside the dataset are now shown here. Let me quickly open one of the datasets and show you; the same properties that live on the dataset are available directly on this Source options tab. Here we can see the file path, so we need to browse the path, go to the input folder, select the file, and click OK. Scrolling down, we can find the option "First row as header". Let me select this checkbox, because in our source file the first row is the header of the table.

Now, back in the data factory, under the Projection tab we can see the Import schema option. It is not available yet because the debug option is off, so let me quickly turn debug on for one hour. It is getting the cluster ready, so it will take a little time and we need to wait. Now that debug is on, the Import schema option is available; we can click Import directly, and it imports the schema. Under Projection we can check the schema of the source file: it should have the four columns we saw in the file.

Next, on the Optimize tab we have the partitioning settings, which we have already discussed. Then on the Inspect tab we can see the order, the column names, and the data types. Finally, go to the Data preview tab and let me refresh it. As we can see, the three rows that are in the source are all shown here. So this is the inline dataset on the source side.
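The "First row as header" and Import schema behaviour can be mimicked locally to see what ADF is doing. The sketch below uses plain Python `csv`, with a made-up three-row employee file standing in for the employee India file; the column names are illustrative assumptions, not the real file's schema:

```python
import csv
import io

# Stand-in for the comma-delimited "employee India" file:
# the first row is the header, the next three rows are data.
raw = (
    "emp_id,name,city,salary\n"
    "1,Asha,Delhi,50000\n"
    "2,Ravi,Mumbai,60000\n"
    "3,Meena,Pune,55000\n"
)

reader = csv.DictReader(io.StringIO(raw))  # first row becomes the header
columns = reader.fieldnames                # the "imported schema"
rows = list(reader)

print(columns)    # four columns, as on the Projection tab
print(len(rows))  # three data rows, as in the data preview
```

With the header checkbox off, ADF would instead treat that first row as data, which is why ticking it matters before importing the schema.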
Now we can go to the sink side. Under the sink we don't need to do anything different: we can use a dataset, an inline dataset, or the cache here. The cache option is available only in the case of the destination; don't worry about the cache for now, we will see it in detail in an upcoming video.

Now let's publish. Meanwhile, let me go to the blob storage and open the output folder. Under the output folder we can see two files; let me delete both of them, then go back to Azure Data Factory, where it is still publishing. We will execute this inside a pipeline. Let me create a new pipeline and add a Data flow activity to it; this activity will execute the data flow that we just changed. Now we can debug it. This will take a little time, and once it has executed, one file should be available in the destination with all the data.

It executed successfully. Now go to the blob storage and let me refresh it. Here we can see the file, and if we open it with Edit, we can see all the data that we saw on the source side. So it is working properly. But notice that under Datasets no new dataset has been created, because we used the inline option in the source. So this is all about the inline dataset. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos. See you in the next video.
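What the pipeline run does here can be sketched as a small local script: read the delimited source with its header row, write every row to one file in an output folder, and confirm the file landed with all the data. The file names and sample rows below are made up for illustration, and a temporary directory stands in for the blob output folder:

```python
import csv
import io
import tempfile
from pathlib import Path

def run_flow(source_text: str, output_dir: Path) -> Path:
    """Mimic the data flow: read the delimited source (first row as
    header) and write all rows to a single file in the output folder."""
    reader = csv.DictReader(io.StringIO(source_text))
    out_path = output_dir / "employee_output.csv"
    with out_path.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=reader.fieldnames)
        writer.writeheader()   # keep the header in the destination file
        writer.writerows(reader)
    return out_path

source_text = (
    "emp_id,name,city,salary\n"
    "1,Asha,Delhi,50000\n"
    "2,Ravi,Mumbai,60000\n"
    "3,Meena,Pune,55000\n"
)

output_dir = Path(tempfile.mkdtemp())  # stands in for the blob output folder
result = run_flow(source_text, output_dir)
copied = result.read_text()
print(result.name)  # one file in the destination, with all the rows
```

The key point from the video survives the sketch: the copy works end to end even though no dataset object was ever created for the source.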