Hello, welcome to SSUnitech. Today we are going to look at the flatten transformation in Azure Data Factory. If you haven't watched the last video of this series, where we discussed the parse transformation, I would strongly recommend watching it before going forward, because it will give you a better understanding of the flatten transformation.

So what is the flatten transformation? Use the flatten transformation to take array values inside hierarchical structures such as JSON and unroll them into individual rows. This process is known as denormalization. For example, suppose an array holds two values, item 1 and item 2. In the last video, the parse transformation converted these into two different columns, column 1 and column 2. The flatten transformation instead keeps a single column and converts the two values into two rows: the first row holds item 1 and the second holds item 2.

Let's go to the browser and see this in practice. This is a customer file; for the sample I am keeping only a single customer's data, and it is in JSON format. It has the customer ID and email, then the customer name, which is a complex data type because it is itself a nested JSON value with a first name and a last name. After that come the items: we can see two items here, and both of them are in a single row. We want to split that into two different rows, the first for the visor and the second for the mudguard.

So go to Azure Data Factory and add a new data flow. Let me name this data flow flatten transformation, or FT.
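Based on the fields described above, the sample customer.json might look something like this (the concrete field names and values here are assumptions for illustration):

```json
{
  "id": 1,
  "email": "jane@example.com",
  "customerName": {
    "firstName": "Jane",
    "lastName": "Doe"
  },
  "items": ["Visor", "Mudguard"]
}
```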
Now we are required to add the source. This is Azure Blob Storage, and the file is customer.json under the input folder. I have already created a source dataset for it, so let me use that one, the JSON one. Let me open it so we can verify: it is pointing at the customer.json file under the input folder. That covers the source. Let me go to the data preview and refresh so we can see the data. Okay, it failed because we made one mistake: in the source options, at the bottom, we can see the JSON settings, and since this file contains a single document we have to select the Single document option. Now let me go to the data preview and refresh it again. This time we should be able to see the data from the source: here is the ID and email, then the first name and the last name. Then under items we can see a complex data type with array values. Once we click on it, we can see it has two values, one for the visor and one for the mudguard. Instead of loading it like that, we want to create two rows, one for the visor and a second for the mudguard.

So how can we do that? Let me go here and add the flatten transformation. Under the flatten transformation we can see Unroll by, which asks on what basis we want to unroll. That is items, and it is the only option enabled here because it is the only column holding an array value. We can select it, and after that we can see Unroll root, which is again items. Now at the bottom we can see that all the mappings are completed. Let me go to the data preview and refresh it. This time it has two rows, one for the visor and a second for the mudguard, which we can see and verify.
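The Unroll by step above can be sketched in plain Python: each element of the array column becomes its own row, while the other columns are repeated. The field names and values below mirror the sample file and are assumptions for illustration.

```python
def flatten(record, array_field):
    """Yield one copy of the record per element of record[array_field],
    replacing the array column with a single scalar value each time."""
    for element in record[array_field]:
        row = {k: v for k, v in record.items() if k != array_field}
        row[array_field] = element
        yield row

# Sample customer record with an array column, as in the source preview
customer = {
    "id": 1,
    "email": "jane@example.com",
    "items": ["Visor", "Mudguard"],
}

rows = list(flatten(customer, "items"))
# Two rows now: one with items="Visor", one with items="Mudguard"
```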
So the flatten transformation has converted the items column from a complex data type to a string. Now let me add this to a sink so we can verify the result in a file. Here let me use an inline dataset and set the destination type to delimited text. We have already created a linked service, so we can select that one. After that we can go into the settings, where we have to select the folder path: browse into the container, and alongside the input folder we have the output folder, where we want to keep the file. Select it and click OK. We want to keep the first-row-as-header option. After that comes the file name option: let me use Output to single file, and the name of the file will be customerdata.csv.

After that, go to the mapping and disable auto mapping; here we need to do the mapping manually. ID maps to ID, that is okay. Email maps to email, that is okay. Then we can see the customer name, which is a complex data type, so we have to select the first name here and map it to a first-name column. After that let me add a new mapping for the customer's last name and map it to a last-name column. Now everything is okay, so we can go to the data preview and refresh it. As we can see, it has all the data: the customer's first name, then the last name, then the items; everything is as expected.

Let me publish this. It raises a warning about the Output to single file option, because when we output to a single file we have to go to the Optimize tab and use a single partition instead of the current partitioning. Now let me publish this again. While it publishes, let me add a new pipeline.
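The manual sink mapping above can also be sketched in Python: pull the firstName and lastName out of the nested customerName object and write one flat CSV row per flattened record, with a header row. The field names and values here mirror the sample file and are assumptions.

```python
import csv
import io

# Flattened records as they leave the flatten transformation:
# one row per item, customerName still a nested object
records = [
    {"id": 1, "email": "jane@example.com",
     "customerName": {"firstName": "Jane", "lastName": "Doe"},
     "items": "Visor"},
    {"id": 1, "email": "jane@example.com",
     "customerName": {"firstName": "Jane", "lastName": "Doe"},
     "items": "Mudguard"},
]

buf = io.StringIO()
writer = csv.DictWriter(
    buf, fieldnames=["id", "email", "firstName", "lastName", "items"])
writer.writeheader()  # the "first row as header" option in the sink
for r in records:
    writer.writerow({
        "id": r["id"],
        "email": r["email"],
        # manual mapping of the complex customerName type to flat columns
        "firstName": r["customerName"]["firstName"],
        "lastName": r["customerName"]["lastName"],
        "items": r["items"],
    })
csv_text = buf.getvalue()
```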
Now we can execute this from the pipeline and verify the result. For executing a data flow we have to use the Data Flow activity. Publishing has completed, so go to the settings and select the data flow that we created. Now let me debug it, and then we will see the output in the blob storage. So let me go to the blob storage, into the container, and open the output folder. Here we can see the customerdata file, so let me open it. It should have two records, with the visor and the mudguard in different rows. Let me click on Edit. Here we can see it has two records; everything is the same except the items column, with visor in the first record and mudguard in the second. So this is the actual use of the flatten transformation.

Let me recap what we have done. As we saw in the source, we had the data in a complex data type, an array. We added the source, and under the flatten transformation we chose the column to unroll by, which is items; that is why we selected items there, and under the mapping everything else stayed as it was. The unroll-by setting is what unrolls and converts your array-type values into different rows. So thank you so much for watching this video. See you in the next video.
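Under these assumptions, the resulting customerdata.csv in the output folder would look roughly like this (the concrete values are illustrative, not from an actual run):

```csv
id,email,firstName,lastName,items
1,jane@example.com,Jane,Doe,Visor
1,jane@example.com,Jane,Doe,Mudguard
```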