Hey guys, welcome to SSUniTek, this is Sushil, and today we are going to look at parameters at the pipeline level, that is, dynamic pipeline parameters. In the last two videos we covered the dynamic linked service and the dynamic dataset. If you haven't watched those two videos of this series, I would strongly recommend watching them before moving forward, because today's video depends entirely on them; you need them for a proper understanding.

Let me quickly recap what we discussed in the dynamic linked service video. We have a source, which is blob storage, and from that blob storage we need to load the data into three environments: SQL Dev, SQL UAT and SQL Prod. The source is the same, but using a single linked service, a single dataset and a single pipeline, we need to load the data into all three environments, passing the values for each environment at runtime. On the SQL side we have three databases. In real time you would have three different servers, but here, for practice purposes, I have created three databases: Dev, Prod and UAT. Each has only one table, the employee table, with no rows in it yet.

On the source side, if we go and check, our source is blob storage, under the SSU testing storage account, in the SSU container. Here we can see the file employee.txt, and you can see the rows it contains. We want to load the data in this txt file into the Dev, Prod and UAT environments.

Now go to the portal. First, let me quickly recap the dynamic linked service that we created. Here we can see the linked service, and it has four parameters: the server name, the database name, the username and the password. If we test the connection, all the values are there, so let me click on OK; the connection succeeds, as you can see. Let me cancel and go to the Author tab.

Let me quickly recap the dataset as well. This is the dynamic dataset that we created, and here the table name is dynamic: it comes from the table name parameter. We discussed and set all of this up in detail in the last videos.

In this video, let me create a new pipeline. With this pipeline I want to load the data from blob storage to Azure SQL, so let me call it dynamic load. Here we need to use the Copy data activity. In the Copy data activity, go to the source side and click on New. Search for blob storage, click on Continue, and click on Continue again. Let me call this the blob storage dataset for EMP. For the linked service, we had already created one, but I am going to create a new one, so let me click on New and call it the linked service for blob storage. Scroll down, select the subscription and the storage account, which is SSU testing; we have already seen that. Then click on Create, and it will create a new linked service. Then, in the dataset, we specify which folder we want to get the file from: the SSU container, and the file name is employee.txt.
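As a quick aside, if you open the code view of that dynamic linked service, the JSON behind it looks roughly like the sketch below. The resource name, the parameter names and the exact connection-string format are my assumptions for illustration; your definition from the previous video may differ slightly.

```json
{
    "name": "LS_AzureSql_Dynamic",
    "properties": {
        "type": "AzureSqlDatabase",
        "parameters": {
            "serverName": { "type": "String" },
            "databaseName": { "type": "String" },
            "userName": { "type": "String" },
            "password": { "type": "String" }
        },
        "typeProperties": {
            "connectionString": "Server=tcp:@{linkedService().serverName},1433;Database=@{linkedService().databaseName};User ID=@{linkedService().userName};Password=@{linkedService().password};"
        }
    }
}
```

The `@{linkedService().xxx}` expressions are what make the connection dynamic. In real time you would normally not pass the password around as a plain string parameter; you would reference an Azure Key Vault secret instead, but for practice this keeps the idea simple.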
Click on OK, and click on OK again, so the source side is done. Now we need to move our focus to the destination side, so click on Sink. First it asks for a dataset, and we had already created one: the dynamic dataset for EMP, if you remember. Now it is asking for the value of the table name. If we look down here, everything is fine, but we are required to pass a value for the table name. Once we click on Open, it asks for the values for the linked service: the server name, the database name, the user ID, the password, and the table name. So it is asking for these five values from the pipeline level, which means we need to create five parameters at the pipeline level.

So click outside the Copy data activity and click on New. The first parameter will be the server name, the second the database name, the third the user ID, the fourth the password, and the fifth the table name. These five parameters we are creating at the pipeline level. The first thing to remember is that these five parameters, or the four that belong to the linked service, cannot be passed directly to the linked service; they have to go through the dataset. So we have to create all of them at the dataset level again.

Let me click on the Copy data activity and go to the sink dataset. Under the sink dataset we can see the table name, so we can add the table name directly here and click on OK. But when we jump inside the linked service and try to search, we find that only the parameters defined inside the dataset are available there. So if we want to use the parameters we created at the pipeline level inside the linked service, we first have to create them at the dataset level, and from the dataset level we can pass them into the linked service. Put simply, we have to create the parameters at the dataset level.

Let me click on Open again and go to the dataset. Here we can see the Parameters tab, so let me add four parameters: first the server name, next the database name, next the user ID, and last the password. Now go back to the Connection tab. Under it we can see the linked service, and we want to pass the values, so we pass them from the dataset: the server name, then the database name, then the user ID and the password. So everything is set up at the linked service and dataset level. It is showing an error; okay, leave that for now and go back.

Now it is asking how we pass the values at the dataset level. We can pass them from the pipeline, so we select the server name, then the database name, then the user ID and the password. Now let me try to publish it. It says the sink must be binary when the source is a binary dataset, meaning if your source is binary, your sink should be binary too. This is my mistake; let me create a new source dataset, search for blob storage again, and this time select delimited text. Go to the linked service; here we can see the EMP data, so let me click on it. Under the SSU container we have the file; click on Open and click on OK. So we have set up everything at the source and at the sink. Let me try to publish again. Click on Publish, and it is publishing now.
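To make the parameter flow concrete, here is roughly what the dynamic sink dataset looks like in the JSON code view, with its own parameters forwarded into the linked service. The names are placeholders matching my linked service sketch above, and the tableName type property is the classic Azure SQL table form; newer datasets split it into schema and table, so treat this as a sketch.

```json
{
    "name": "DS_AzureSql_Dynamic",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": {
            "referenceName": "LS_AzureSql_Dynamic",
            "type": "LinkedServiceReference",
            "parameters": {
                "serverName": "@dataset().serverName",
                "databaseName": "@dataset().databaseName",
                "userName": "@dataset().userName",
                "password": "@dataset().password"
            }
        },
        "parameters": {
            "serverName": { "type": "String" },
            "databaseName": { "type": "String" },
            "userName": { "type": "String" },
            "password": { "type": "String" },
            "tableName": { "type": "String" }
        },
        "typeProperties": {
            "tableName": {
                "value": "@dataset().tableName",
                "type": "Expression"
            }
        }
    }
}
```

You can see why the dataset is the mandatory middle layer: the `@dataset().xxx` expressions are how the values travel down into the linked service parameters.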
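At the top of the chain, the pipeline's copy activity hands the five pipeline parameters to the sink dataset. Again a hedged sketch; the pipeline, activity and source dataset names here are placeholders I have made up for illustration.

```json
{
    "name": "DynamicLoad",
    "properties": {
        "parameters": {
            "serverName": { "type": "string" },
            "databaseName": { "type": "string" },
            "userID": { "type": "string" },
            "password": { "type": "string" },
            "tableName": { "type": "string" }
        },
        "activities": [
            {
                "name": "CopyEmpToSql",
                "type": "Copy",
                "inputs": [
                    { "referenceName": "DS_Blob_Emp", "type": "DatasetReference" }
                ],
                "outputs": [
                    {
                        "referenceName": "DS_AzureSql_Dynamic",
                        "type": "DatasetReference",
                        "parameters": {
                            "serverName": "@pipeline().parameters.serverName",
                            "databaseName": "@pipeline().parameters.databaseName",
                            "userName": "@pipeline().parameters.userID",
                            "password": "@pipeline().parameters.password",
                            "tableName": "@pipeline().parameters.tableName"
                        }
                    }
                ],
                "typeProperties": {
                    "source": { "type": "DelimitedTextSource" },
                    "sink": { "type": "AzureSqlSink" }
                }
            }
        ]
    }
}
```

The DelimitedTextSource here also reflects the fix we just made: once the source dataset is delimited text instead of binary, the SQL sink is allowed and the publish goes through.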
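When we trigger this pipeline in a moment, the portal will prompt for those five values. For the dev run, the set of values would look something like this; every value below is a placeholder, so use your own server, login and password.

```json
{
    "serverName": "ssu-sqlserver.database.windows.net",
    "databaseName": "SSU_Dev",
    "userID": "<sql-login>",
    "password": "<sql-password>",
    "tableName": "employee"
}
```

For the prod or UAT run, only the database name (and, in real time, the server name and credentials) changes; the pipeline, dataset and linked service stay exactly the same.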
So that was my mistake: we had created the source as binary. The publish succeeded. Now we want to execute it, so trigger it now. It is asking for the server name, the database name and all the other values, so let me pass them all: the database is SSU Dev, the user ID is PVI, then the password, which we can add here, and then the table name, which is employee. Now click on OK. When it executes, it will load the data into the SSU Dev environment. Click on OK; it is running, so we can wait. Go to Monitor: it succeeded, as you can see. Let me go to the SQL server, open the Dev database and refresh it; this should now have the data, and as we can see, the data has been loaded successfully into the Dev environment.

Now let me go back to the pipeline and execute it again, and this time I want to load the data into the Prod environment. Trigger now, and we have to pass everything again: the database this time is SSU Prod, then the user ID, the password and the table name. In real time you might have an employee_prod table. Let me click on OK again. It is running; let me check the execution. It is in progress, and here we can see it executed successfully. Now let me go and check the Prod database; it should have the data as well, and as you can see, it does. You can test the same thing with the UAT environment.

So the things you have to remember: first, all the parameters that are going to be used either in the linked service or at the dataset level should be created at the pipeline level. Second, whatever parameters are going to be used at the dataset level or in the linked service should all be created at the dataset level. Third, in the linked service we map the parameters created at the dataset level, and the parameters created at the dataset level are mapped to the pipeline level. So it is a three-step process. If you are not clear, you can watch this video again, and if you still have any doubt, you can post your question in the comment box.

Thank you so much for watching this video. If you really liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notifications of our newly uploaded videos. See you in the next video.