Hello and welcome to SSUnitech. This is a continuation of the Azure Data Factory interview questions and answers series. Recently one of my friends attended an interview with a data consultancy services company, and this question was asked there: we need to copy data according to file size. What does that mean? We have one input store, which is Azure Blob Storage, and in it I have created one container, the input container. We need to copy the files from the input container to the output container of the blob storage. Inside the input container we have multiple files, and we need to check each file's size. The destination has three folders: if a file's size is less than 1 KB, we copy it into the 1 KB folder; if the size is between 1 KB and 2 KB, we copy it into the 2 KB folder; and if the size is more than 2 KB, we copy it into the more-than-2-KB folder. So these three folders are available inside the destination, and we need to copy the files according to their input size.

Let me quickly go into Azure Blob Storage and show you. I am in the input container, where we have these four files, and we can check their sizes. The India file and the UK file are 1 KB each, so those two files will be loaded into the 1 KB folder. The Pakistan file is 2 KB, so it should be loaded into the 2 KB folder, and the USA file is 3 KB, so it will go into the more-than-2-KB folder. I have already uploaded these four files into the input container. Now let me go into the output container, where we can see it has three folders.
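The routing rule just described can be sketched as a small function. This is a minimal illustration only; it assumes the byte thresholds (1000 and 2000) used later in the expression, and the folder names `1KB`, `2KB`, and `MoreThan2KB` are placeholders for whatever your destination folders are actually called.

```python
def size_bucket(size_bytes: int) -> str:
    """Map a file's size in bytes to its destination folder name."""
    if size_bytes < 1000:        # under ~1 KB
        return "1KB"
    if size_bytes < 2000:        # between ~1 KB and ~2 KB
        return "2KB"
    return "MoreThan2KB"         # over ~2 KB

# Byte sizes reported later in the debug run:
print(size_bucket(149))    # UK file
print(size_bucket(1237))   # Pakistan file
print(size_bucket(2453))   # USA file
```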
The first is for 1 KB, then 2 KB, and then more than 2 KB. We want to load the data according to the file size. As you can see, each of these folders currently contains only a test.xlsx file. That is because we cannot create an empty folder inside blob storage, so I created each folder with this test.xlsx placeholder; for now we can treat the folders as empty.

Let me quickly go into Azure Data Factory and create a new pipeline to implement this. I would suggest following each point step by step, because this one can be a little tricky. First, we want to read how many files are inside the input container, so we have to get all those files. For that we use the Get Metadata activity: simply drag and drop it onto the canvas, then go into Settings, where we have to select a dataset. Let me quickly create a new dataset. Our source is Azure Blob Storage, so select it and click Continue; the files are delimited text, so choose DelimitedText and click Continue. Then we need to select the linked service. I have already created one, so I will reuse it. Click Browse, and we will see two folders: input and output. We need all the files under input, so we specify the path up to the input folder and click OK. We can mark first row as header and click OK. This DelimitedText dataset will point to that particular folder location, the input folder. For the field list property, we want Child items: the child items property will return all four files. Let me try to debug it so we can verify that the four files come back. The run is queued, so we wait, and then we check the output. Here, under childItems, we can see the first file.
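What the Get Metadata activity's Child items field returns can be simulated locally: a list of the items directly under the input folder, each with a name and a type. A rough Python sketch, using a temporary directory in place of the blob container (the file names here are just the sample files from this walkthrough):

```python
import tempfile
from pathlib import Path

# Stand-in for the 'input' blob container.
with tempfile.TemporaryDirectory() as tmp:
    input_dir = Path(tmp)
    for name in ["employee.India.csv", "employee.UK.csv",
                 "employee.Pakistan.csv", "employee.USA.csv"]:
        (input_dir / name).write_text("id,name\n1,demo\n")

    # Rough equivalent of Get Metadata with the 'Child items' field list:
    # ADF returns a list like [{'name': ..., 'type': 'File'}, ...]
    # under activity('Get Metadata1').output.childItems.
    child_items = [{"name": p.name, "type": "File"}
                   for p in sorted(input_dir.iterdir())]
    print(child_items)
```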
This is the second file, and similarly all four files are listed. So far, so good. Next we want to loop through these four files to check each file's size. For looping we use the ForEach activity, connected from the Get Metadata activity on success. Go inside the ForEach, open Settings, and under Items click Add dynamic content. We want to loop over the child items of the Get Metadata activity, so select childItems and click OK. This ForEach loop will iterate four times, because there are four files.

Under Activities, click the pencil icon to configure what should happen inside the ForEach. On each iteration we receive one file name, and given that file name we want to check the size of that file. To check the size we again use a Get Metadata activity, so add one here. Go into its Settings; we need to create a dataset, but this time remember that the file names are coming from the ForEach loop, so we need to make this dataset dynamic, receiving the file name from the ForEach activity. Click New, point it to Azure Blob Storage, choose DelimitedText, and click Continue. Select the linked service, and name the dataset something like DS_FileName. Click Browse and point it to the input location, but do not select any particular file, because the files will be supplied dynamically by the ForEach activity. Mark first row as header. At this point we do not yet see an option for adding dynamic content.
So we simply click OK and the dataset is created. Open the dataset and go to Parameters, where we can add one parameter; let's call it FileName. Then go to the Connection tab: in the file path we reference this FileName parameter, so the file name will come from the parameter we just created. Click OK. Once we make this change inside the dataset, the second Get Metadata activity asks for the file name, which we should get from the ForEach activity. Open the dynamic content, and here we can see item().name; use that property and click OK.

Now we can debug. It complains that the field list should not be blank. So what do we want from this second Get Metadata activity? We need two attributes. Click New: first select Item name, which gives the file name; then, scrolling further down, select Size as the second attribute. With both selected, let me try to debug again and check the output of these two. This second Get Metadata activity will execute four times, because the first Get Metadata activity found four files and the ForEach iterates four times; each iteration returns one file's name and size. Checking the run, Get Metadata2 indeed executed four times, and if we inspect the ForEach input, the item count is four, which is why we see those four items: one, two, three, and four. Now let me quickly show you the second Get Metadata activity's output. For employee.UK.csv the total file size is 149 bytes.
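The ForEach plus second Get Metadata pattern described here, iterating over `@activity('Get Metadata1').output.childItems`, passing `@item().name` into the parameterized dataset, and reading back the item name and size, behaves roughly like this local sketch. The file names and sizes below are illustrative stand-ins, not the actual blob contents:

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    input_dir = Path(tmp)
    # Illustrative files with known byte sizes.
    (input_dir / "small.csv").write_bytes(b"x" * 149)
    (input_dir / "medium.csv").write_bytes(b"x" * 1237)
    (input_dir / "large.csv").write_bytes(b"x" * 2453)

    # Get Metadata1: 'Child items' of the input folder.
    child_items = [{"name": p.name} for p in sorted(input_dir.iterdir())]

    # ForEach over child items; inside the loop, the second Get Metadata
    # activity with the 'Item name' and 'Size' field list.
    results = []
    for item in child_items:                    # Items: @activity('Get Metadata1').output.childItems
        path = input_dir / item["name"]         # dataset FileName parameter: @item().name
        results.append({"itemName": path.name,  # Get Metadata2 output per file
                        "size": path.stat().st_size})
    print(results)
```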
That is because the size is returned in bytes. For the second file we can see 1237 bytes for that path, so per our condition this file should be loaded into the 2 KB folder. Similarly, for the USA file it is 2453 bytes, about 2.4 KB, so that file should be loaded into the more-than-2-KB folder. So far, so good.

Next we can use the Copy Data activity, which will copy the actual files. Connect it on success. First the source: we can reuse the same dataset we created for the second Get Metadata activity, DS_FileName, because it points to the same file we want to copy at the source. It asks for the file name, which again we get from the ForEach using the name property; click OK. The source is done. For the sink we need to create one more dataset, and this dataset will again be dynamic; let me create it first and then explain. Select DelimitedText, select the linked service, then browse to the output folder, where we have the three folders. We do not know in advance which location each file will be copied to, so we simply click OK without selecting anything in the directory. This time we will make both the directory and the file name dynamic. Click OK and open the dataset. Under Parameters, create two parameters: one for the file name and a second for the size. Then go to the Connection tab, where we see the directory: map the directory to the size parameter and the file name to the file name parameter. So far, so good.
So inside the output path we have the size folder, and after that the file name, which is what will be copied. Back in the pipeline, the sink is asking for the file name. We can get the file name from the output of the second Get Metadata activity using the item name property: scroll down, select Item name, and click OK. That takes care of the file name.

Next is the size. Remember that the size value is in bytes, so we have to write an expression. First we need the file size, which we get from the activity output: the size property of the second Get Metadata activity. But before that we use the if function, and inside the if condition we choose the size property of Get Metadata2: scroll down, find Size, and select it. If this size value is less than 1000 bytes, the file should be loaded into the 1 KB folder, so we use one more function, less: if the value is less than 1000, we want 1KB as the output. Then we need a second check, so we nest another if with another less, this time testing whether the value is less than 2000, in which case we want 2KB. If neither condition matches, we want more-than-2KB as the output. In short: if the file size is less than 1000 bytes, the file goes into the 1 KB folder; if the size is less than 2000 (the first 1000 is already covered by the first condition), it is copied into the 2 KB folder; otherwise it is copied into the more-than-2-KB folder. Simply click OK. With all of that selected, everything looks good.
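Putting the sink mapping together: the Copy Data activity receives the file name from `@item().name` and the directory from the nested if expression over the second Get Metadata activity's size output. A local sketch of that copy step follows; the folder names `1KB`, `2KB`, and `MoreThan2KB` and the sample files are assumptions for illustration, and the equivalent ADF expression is quoted in a comment:

```python
import shutil
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    src, dst = root / "input", root / "output"
    src.mkdir()
    for folder in ("1KB", "2KB", "MoreThan2KB"):
        (dst / folder).mkdir(parents=True)

    # Sample files sized to match the debug run in the transcript.
    (src / "employee.UK.csv").write_bytes(b"x" * 149)
    (src / "employee.Pakistan.csv").write_bytes(b"x" * 1237)
    (src / "employee.USA.csv").write_bytes(b"x" * 2453)

    for path in src.iterdir():          # ForEach over childItems
        size = path.stat().st_size      # Get Metadata2 'Size' output
        # Roughly equivalent ADF expression (folder names assumed):
        # @if(less(activity('Get Metadata2').output.size, 1000), '1KB',
        #     if(less(activity('Get Metadata2').output.size, 2000), '2KB',
        #        'MoreThan2KB'))
        folder = "1KB" if size < 1000 else ("2KB" if size < 2000 else "MoreThan2KB")
        shutil.copy(path, dst / folder / path.name)   # Copy Data activity

    placed = {f"{p.parent.name}/{p.name}" for p in dst.rglob("*.csv")}
    print(sorted(placed))
```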
Now we can execute the pipeline and validate the output, so we simply debug it. We can see the pipeline executed successfully. Let me quickly go into the output folder of the blob storage. The 1 KB folder should have two employee files, one for India and one for the UK, and indeed we see India and UK. The 2 KB folder should have the Pakistan file, which we can see. And the more-than-2-KB folder should have the file for the USA, which is there as well. So the files are loaded according to their size. I know this one is a little tricky; if anything is not very clear, you can watch the video again. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and please share it with your friends. Thank you so much.