Hello, welcome to SSUnitech. This is Susil, and this is a continuation of the Azure Data Factory interview questions and answers series. In this video we will discuss an interview question that one of my subscribers posted in the comments. The scenario is this: we have a number of files, say 30, inside Blob Storage, and we need to copy them into Azure SQL, but there are no predefined tables in the database. So we first have to create each table and then load the data into it. The destination should end up with 30 tables, all created at runtime as the data is loaded. Let's see how we can achieve this inside Data Factory. Here, as you can see, I don't actually have 30 files; I have 8. These 8 employee files will be loaded into SQL Server. Let me open one of the employee files so I can show you the data: as you can see, it has 4 columns. We need to load the data from these files into tables. Let me quickly switch to the browser; inside the input container of the Blob Storage I have already uploaded these files. As you can see, all 8 files are there. Now let me go into Data Factory, where we need to design a pipeline. Let me add a new pipeline. First we have to use the Get Metadata activity. What the Get Metadata activity will do is fetch all the file names you can see here. So let me drag and drop Get Metadata onto the canvas. After that, we go into Settings, where we need to set up the dataset. I am going to create a new dataset; the source is Azure Blob Storage, so let me click Continue.
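The Get Metadata step described above can be sketched as a fragment of the pipeline JSON. This is a minimal sketch only; the activity name `Get Metadata1` and the dataset name `InputExcelFolder` are assumptions based on the defaults shown in the video:

```json
{
  "name": "Get Metadata1",
  "type": "GetMetadata",
  "typeProperties": {
    "dataset": {
      "referenceName": "InputExcelFolder",
      "type": "DatasetReference"
    },
    "fieldList": [ "childItems" ]
  }
}
```

The `childItems` entry in `fieldList` is what makes the activity return the list of files in the folder, each with a `name` property.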
The files are of Excel type, as you can verify from the .xlsx extension, so we select Excel. Here let me use the linked service we have already created. Let me browse for the path and select the input folder. We are not required to select any particular file, because we will be getting all the files from the Get Metadata activity. After that we configure the Excel workbook. For the worksheet we can go either by name or by index; I am going to go with the index and specify 0, so it will pick the first sheet of each workbook. We set "First row as header" and click OK. That creates the dataset, and all the files we want will be returned by this Get Metadata activity. Under Field list we have to select "Child items". Let me select Child items and debug the pipeline. Once it executes, we should see all eight files. It executed successfully, so let me check the output: under childItems, the name property lists all the files. That looks good. Next we need to loop through all these files one by one, and for each one copy it into SQL Server. So we use the ForEach activity and connect it to the success output of Get Metadata1. In Items we reference the childItems of Get Metadata1. Since childItems contains eight entries, ForEach will execute eight times, once per file. Inside the ForEach activity we simply add a Copy Data activity, because we want to copy the data. In the source we have to select a source dataset, but this source is dynamic: it points to the same input location, but the file name comes from the ForEach activity. So we need to make this source dataset dynamic.
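The ForEach wiring described above looks roughly like the following pipeline fragment. This is a sketch under the assumption that the Get Metadata activity keeps its default name `Get Metadata1` and the Copy activity its default name `Copy data1`:

```json
{
  "name": "ForEach1",
  "type": "ForEach",
  "dependsOn": [
    { "activity": "Get Metadata1", "dependencyConditions": [ "Succeeded" ] }
  ],
  "typeProperties": {
    "items": {
      "value": "@activity('Get Metadata1').output.childItems",
      "type": "Expression"
    },
    "activities": [
      { "name": "Copy data1", "type": "Copy" }
    ]
  }
}
```

Inside the loop, each file is available as `@item()`, so the current file name is `@item().name`.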
Let me click New and select Azure Blob Storage, click Continue, choose Excel, click Continue, and select the linked service we have already created. We browse, select the input folder, and click OK. For the sheet we go with the index, so I specify 0, and set "First row as header"; everything else stays the same, so click OK. Now let me open this dataset to make it dynamic. As we saw, the file name needs to come from the ForEach activity. So let me first create a parameter here, call it fileName, then go to Connection and bind the file name field to this parameter as dynamic content. Once we make this change in the dataset and go back to the pipeline, we can see the dataset is asking for an input value for this parameter. Here we supply the name property, as we already saw in the Get Metadata output. Click OK. That completes the source; it will loop through all the files one by one. In the sink we also need to make the dataset dynamic, because the tables have to be created at runtime. Let me click New, and this time we go with Azure SQL: select Azure SQL Database, click Continue, and select the linked service we have already created. If we were to select any one existing table here, all the files would be loaded into that single table; but these tables are not available yet, so let me go with None. We will make a few changes shortly, so don't worry for now; click OK. The sink dataset is created; let me open it, because we need to make a few changes here.
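The parameterized source dataset described above can be sketched as follows. The parameter name `fileName` matches the one created in the video, while the dataset name `SourceExcelFile`, the linked service name `AzureBlobStorageLS`, and the container name `input` are assumptions for illustration:

```json
{
  "name": "SourceExcelFile",
  "properties": {
    "type": "Excel",
    "linkedServiceName": {
      "referenceName": "AzureBlobStorageLS",
      "type": "LinkedServiceReference"
    },
    "parameters": {
      "fileName": { "type": "string" }
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "input",
        "fileName": { "value": "@dataset().fileName", "type": "Expression" }
      },
      "sheetIndex": 0,
      "firstRowAsHeader": true
    }
  }
}
```

In the Copy activity's source, the pipeline then passes `@item().name` from the ForEach loop into this `fileName` parameter, so each iteration reads a different file from the same folder.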
First, we want the table name to come from a parameter, so under Parameters let me create one called tableName. In Connection, click Edit on the table. The schema name defaults to dbo, which I am fine with. For the table name I use the parameter we just created. The value of this parameter should come from the Excel file: whatever the Excel file name is will become the table name. If we go back to the pipeline, the sink is now asking for the tableName value, so we supply it from the ForEach activity using the name property. But if we look at the file names, they end in ".xlsx", and we don't want that suffix in the table name. So we need to make a change in the expression builder: under Functions, search for replace and select it. We take the name from the ForEach item and replace ".xlsx" with an empty string. That is the change we have to make; click OK. One more thing to check is the table option in the Copy activity sink: the tables don't exist yet, and we want them created at runtime, so select the "Auto create table" radio button. That is the only remaining change. Let me try to execute the pipeline. OK, it complains that we forgot to select the sheet; the sheet index will be 0 so it picks the first sheet. Let me debug it again. It executes, and we can see the output: Get Metadata1 succeeded, and in the details below we can see all the employee Excel file names.
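The table-name expression and the auto-create option described above look roughly like this inside the Copy activity definition. This is a sketch; the dataset names `SourceExcelFile` and `SinkSqlTable` carry over from the assumptions above:

```json
{
  "name": "Copy data1",
  "type": "Copy",
  "typeProperties": {
    "source": { "type": "ExcelSource" },
    "sink": {
      "type": "AzureSqlSink",
      "tableOption": "autoCreate"
    }
  },
  "inputs": [
    {
      "referenceName": "SourceExcelFile",
      "type": "DatasetReference",
      "parameters": { "fileName": "@item().name" }
    }
  ],
  "outputs": [
    {
      "referenceName": "SinkSqlTable",
      "type": "DatasetReference",
      "parameters": {
        "tableName": {
          "value": "@replace(item().name, '.xlsx', '')",
          "type": "Expression"
        }
      }
    }
  ]
}
```

The `replace` call strips the ".xlsx" suffix so a file named `employee1.xlsx` produces a table named `employee1`, and `"tableOption": "autoCreate"` tells the Azure SQL sink to create the table if it does not exist.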
Looking at the ForEach, it should execute eight times, and indeed the item count is eight. Now the Copy activities are executing, and they all complete successfully. If we go into SSMS and run a query to list the tables, we can see that all eight tables have been created; earlier they were not there. Let me cross-verify the data as well with a SELECT * from one of these tables, and we can see the data in the table, so all the data has been copied. I hope you have understood how to build this pipeline. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos. See you in the next video.