Hello, welcome to SSunitech. Today we are going to see how we can unzip and load files inside Azure Data Factory. Let me go into the browser and we will look at the scenario first. Here we have one file in blob storage, under this input container: the customer details zip file. This zip file contains a total of three files. Let me open it and show you. First is the customer detail CSV, second is the sales CSV, and third is an image file. So what do we want to do? We want to unzip the archive and load all three files into the zip output container, which, as of now, is empty. That is the requirement. So how can we do that? First we need to go into Azure Data Factory and use the Copy Data activity; it is powerful enough to unzip the files and copy them from the input location to the zip output container. Let me quickly add a new pipeline here and call it zipped. Now we can drag and drop the Copy Data activity from the activities pane and minimize the pane. Under this activity we can see we need to set up the source and the sink. Our source is the input container, so let me quickly create a new dataset under the source. Click on New; since our file is in blob storage, we can search for Azure Blob Storage and click Continue. Select DelimitedText and click Continue. We can call this dataset the zipped source dataset.
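To make the scenario concrete, here is a minimal Python sketch of what the Copy Data activity will work against: a zip archive containing three files. The file names are illustrative assumptions, not the actual blob names from the demo.

```python
import io
import zipfile

# Build a small in-memory zip that mimics the customer details
# archive from the demo (member names are illustrative assumptions).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("customerdetail.csv", "id,name\n1,Alice\n")
    zf.writestr("sales.csv", "id,amount\n1,100\n")
    zf.writestr("logo.png", b"\x89PNG...")  # placeholder image bytes

# Listing the archive shows the three files the Copy Data
# activity will unzip into the output container.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    members = zf.namelist()
print(members)
```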
Now we need to select the linked service. As we have already created a linked service for the blob storage, SSU testing, we can select that. Here we can see the file path, so we can browse to the input folder, select the zip file, and click OK. Here we can also see the first-row-as-header option; if you open the file, you can check that the first row is a header, so we can mark this as true and click OK. So we have created the dataset for the source file, and the file is a zip file. How do we indicate that it is a zip file? Let me click on Open; it will open this dataset here. In this dataset we have a property called compression type. As this is a zip file, instead of None we have to select the ZipDeflate option, which tells Data Factory that the file is zipped. With that, we are done with the source. Now we can go to the sink of the Copy Data activity. In the sink we can create a new dataset: Azure Blob Storage, click Continue; DelimitedText, click Continue. Let me call this the sink dataset for the zipped file. Here we can select the linked service, again SSU testing. We want to keep the files under the zip output container, so we can select that folder and click Continue. Mark first row as header as true and click Continue. That is it for the sink location. In the source, the only change we were required to make was the compression type. Let me try to execute and see whether the files get unzipped and loaded into the output folder. So let me debug it. Once it has executed, the files should be copied into the output folder. As we can see, it is in progress... and it executed successfully. Let me quickly go into the folder and refresh. Here we should see the folder path.
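As a rough sketch of the setting that matters here, the source dataset's JSON has a compression codec field alongside the location and header options. Shown below as a Python dict; the dataset name, container, and file name are assumptions, while SSU testing is the linked service name used in the demo.

```python
# Rough shape of a DelimitedText source dataset with the
# compression setting highlighted; names and paths are assumptions.
source_dataset = {
    "name": "DS_Zipped_Source",
    "properties": {
        "type": "DelimitedText",
        "linkedServiceName": {
            "referenceName": "SSU testing",
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "input",
                "fileName": "customerdetails.zip",
            },
            "firstRowAsHeader": True,
            # The one change this tutorial requires: mark the
            # source as a zip archive so the copy unzips it.
            "compressionCodec": "ZipDeflate",
        },
    },
}
print(source_dataset["properties"]["typeProperties"]["compressionCodec"])
```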
We can click on that folder, and it contains the three files. Once we open a file, we should be able to see its data. Click on Edit; here we can see the data. Similarly, for the sales file, we can see and verify all the data under the preview. Then we have the image file; in the edit view we can see the image. So we have successfully loaded the files here. But notice that we have a folder, and the files are under that folder. We don't want to keep this folder, so let me delete it first. Instead of creating a folder and keeping the files under it, we want to keep the files directly in the output folder, right under the zip output container. So how can we do that? Let me go back here. Under the sink of the Copy Data activity, we can see the copy behavior. As of now it is None; instead, we can set it to Flatten hierarchy. Once we set this option, the files will be copied directly into the output folder. Let me execute it so I can show you. The file names will be generated at runtime, but the files will land in the output folder directly. It is in queue... and it executed successfully. Let me go here and refresh. The file names are generated at runtime, but all the data is directly under the output folder. Let me click on Edit: this is the image file, then the customer file, and this one is the sales file. So all three files were loaded directly into the output folder. One more thing I want to tell you: go back to Azure Data Factory, to the source dataset.
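The effect of the Flatten hierarchy copy behavior can be sketched in plain Python: every archive member is written straight into the output folder, with any internal directory prefix stripped off. Folder and file names here are illustrative assumptions.

```python
import io
import os
import zipfile

# Build a zip whose members sit inside an internal folder,
# mimicking the archive from the demo (names are assumptions).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("customerdetails/customerdetail.csv", "id,name\n1,Alice\n")
    zf.writestr("customerdetails/sales.csv", "id,amount\n1,100\n")

# Mimic "Flatten hierarchy": drop the folder prefix so the files
# land directly in the output folder, not under a subfolder.
out_dir = "zipoutput"
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    for member in zf.namelist():
        flat_name = os.path.basename(member)  # strip folder path
        with open(os.path.join(out_dir, flat_name), "wb") as f:
            f.write(zf.read(member))

print(sorted(os.listdir(out_dir)))
```

Note that in the real pipeline, Data Factory also generates new file names at runtime when flattening; this sketch simply keeps the original base names.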
So, once we use the compression type option, we also have a compression level, with two options: Fastest and Optimal. What are Fastest and Optimal? With Fastest, the file is zipped or unzipped as quickly as possible, without worrying about the output size. With Optimal, Data Factory takes care of the file size: the resulting file will be smaller, but it might take more time. So if your requirement is about size, you should go with the Optimal option; if you want to process the file quickly, you can go with the Fastest option. We can choose this option as per our requirement. So, this is all about unzipping files and loading them from one folder into another. Thank you so much for watching this video. See you in the next video.
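The Fastest vs. Optimal trade-off can be illustrated with Python's zlib compression levels, where a low level is quick and a high level works harder for a smaller result, analogous in spirit to the two ADF options (this is an analogy, not Data Factory's actual implementation).

```python
import zlib

# Highly repetitive data, like a CSV with many similar rows.
data = b"customer,sales,region\n" * 10_000

fastest = zlib.compress(data, 1)  # low effort: quick, larger output
optimal = zlib.compress(data, 9)  # high effort: slower, smaller output

# Both compress well here, but the higher level should never
# produce a larger result than the lower one on this input.
print(len(data), len(fastest), len(optimal))
```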