Hey guys, welcome to SSunitech. Today we are going to look at the data flow debug option. Before going forward, if you haven't watched the last video of this series, that is where we discussed what a data flow is and in which scenarios we can use data flows. A data flow is a kind of transformation activity: for example, when we are getting raw data, want to apply some transformations on it, and then load it into a destination, that is a case where we should use a data flow. We already covered all of that in the last video, so today we will look at the debug option, or the data preview as we can also call it.

The mapping data flow debug option in Azure Data Factory and Synapse Analytics allows you to interactively watch the data shape transform while you build and debug your data flows. Basically, we can preview the data at any source, transformation, or destination, so we can validate the data flow before we actually execute it. The debug session can be used both during a data flow design session and during the pipeline debug execution of data flows. To turn on debug mode, the Data flow debug button is available at the top. So let's go to Azure Data Factory in the browser and see it in practice.

Here we are in Azure Data Factory. Under the factory resources we can see the option for data flows, the third option, where we can add a new data flow. The one called data flow introduction is what we already built in the last video; this new one is for the debug demo. Here we can add the source and the sink.

I just want to copy data from blob storage. Under the ssutesting container we have two folders, input and output, and I want to load the data from the employee file in the input folder, the Employee India file. Going to the edit view, we can see this file has four columns and three rows. So the actual task is to load this file from the input folder to the output folder of the ssutesting container. As of now there is nothing in the output folder, so it shows no results. Using a data flow, we just want to copy the file from the input folder to the output folder, and in between we will look at the data with the data flow preview option.

Here we can see Add Source, so we click on that. We get the Dataset and Inline options, which we have already discussed. Let me quickly add the dataset here. As of now we don't have any dataset, so let me create a new one: choose Azure Blob Storage, click Continue, choose DelimitedText, click Continue, and then specify the dataset name. This one is the employee source, for the input folder. For the linked service we have to create a new one, because we don't have any linked service yet. Let me click on New; we can call it ssutesting. Below that we select the Azure subscription, and after that the storage account name, which is ssutesting. Everything looks good, so we can test the connection. It should succeed, and indeed we see "Connection successful". Now we click on Create, and the linked service is created.
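By the way, everything we just clicked through can also be scripted. Here is a minimal sketch of the same linked service and source dataset using the azure-mgmt-datafactory Python SDK (track 2). This is only my illustration, not something we do in the video; the subscription id, resource group, factory name, account key, and the EmployeeIndia.csv file name are placeholder assumptions.

    # Minimal sketch: create the blob linked service and the delimited-text
    # source dataset programmatically. All resource names are placeholders.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import (
        LinkedServiceResource, AzureBlobStorageLinkedService,
        DatasetResource, DelimitedTextDataset,
        LinkedServiceReference, AzureBlobStorageLocation,
    )

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
    rg, factory = "<resource-group>", "<data-factory-name>"

    # Linked service pointing at the storage account (called ssutesting in the video).
    ls = LinkedServiceResource(properties=AzureBlobStorageLinkedService(
        connection_string="DefaultEndpointsProtocol=https;"
                          "AccountName=ssutesting;AccountKey=<account-key>"))
    client.linked_services.create_or_update(rg, factory, "ssutesting", ls)

    # Source dataset: the employee file in the input folder, first row as header.
    src = DatasetResource(properties=DelimitedTextDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="ssutesting"),
        location=AzureBlobStorageLocation(
            container="ssutesting", folder_path="input",
            file_name="EmployeeIndia.csv"),  # placeholder file name
        first_row_as_header=True))
    client.datasets.create_or_update(rg, factory, "EmpSource", src)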
Back in the portal, under the new dataset we can now set the file path. We just want to get the data from the input folder, and there we have the employee file, so click on OK. Set First row as header to true, then click OK, and that creates the dataset for the source.

Now we want to add the sink to this source. On the source we can see a plus symbol; clicking it shows all the options, so let me search for the sink. This is the destination. Under the destination we go to the dataset, and let me create a new dataset, again a delimited text one. The dataset name should be emp destination, and this one is for the output folder. For the linked service we can reuse the one we already created. First row as header is true again, and here we need to browse to the path, so we select this path and click on OK. That creates the new dataset, and it is created.

Everything looks okay now. Here we can see the debug option; as of now it's off, so let me click on this button. We get the option to turn on data flow debug. It has a time to live, which is one hour only, and the integration runtime; as of now we have only this Azure one. We click on OK, and it turns on.

Now we can directly select the source or the destination, and under it we can see the Data preview tab. Let's go there and refresh it. It should have the three rows we already saw in the source. It is fetching the data, and here we can see it. Let me make it a little bit bigger. Here we can see the three rows. We can check the same thing on the sink side: go to Data preview and refresh, and the three rows are there as well. So this is the data that will be loaded into the destination, these three rows.

Now we can publish this. To actually execute it and load the data into the destination we need to create a pipeline, and that is what we will discuss in the next video, so don't worry about it here. Let me publish this, and the publish is completed. For anyone who prefers scripting, I have also put a short SDK sketch of this same data flow and debug session right after the outro below.

So thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos. See you in the next video.
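As promised above, here is the matching sketch for the rest of what we did: defining the pass-through source-to-sink data flow and turning on a debug session with the one-hour time to live. Again, this is only an illustration assuming a recent azure-mgmt-datafactory SDK, reusing the client, rg, and factory variables from the earlier snippet; the flow name CopyEmployeeFlow and the compute settings are my assumptions, not values shown in the video.

    # Minimal sketch (continued): define the data flow and start a debug session.
    from azure.mgmt.datafactory.models import (
        DataFlowResource, MappingDataFlow, DataFlowSource, DataFlowSink,
        DatasetReference, CreateDataFlowDebugSessionRequest,
    )

    # Data flow script: read the source stream and write it straight to the
    # sink, with no transformations in between.
    script = (
        "source(allowSchemaDrift: true, validateSchema: false) ~> EmpSource\n"
        "EmpSource sink(allowSchemaDrift: true, validateSchema: false) ~> EmpDestination"
    )

    flow = DataFlowResource(properties=MappingDataFlow(
        sources=[DataFlowSource(name="EmpSource", dataset=DatasetReference(
            type="DatasetReference", reference_name="EmpSource"))],
        sinks=[DataFlowSink(name="EmpDestination", dataset=DatasetReference(
            type="DatasetReference", reference_name="EmpDestination"))],
        script=script))
    client.data_flows.create_or_update(rg, factory, "CopyEmployeeFlow", flow)

    # Turn on debug: spin up the debug cluster with the same 60-minute
    # time to live that the UI toggle offers.
    session = client.data_flow_debug_session.begin_create(
        rg, factory, CreateDataFlowDebugSessionRequest(
            compute_type="General", core_count=8, time_to_live=60)).result()
    print("Debug session id:", session.session_id)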