Hello friends, welcome to SSUnitech. My name is Susheel Singh and this is SSIS Tutorial Part 1. Here you will learn how to download and install SSDT (SQL Server Data Tools). If you are using SQL Server 2014 or a later version, you have to download SSDT separately and install it on your system. If you are using SQL Server 2012 or a lower version, then SSDT is already available in your SQL Server setup. So open your browser — I am using Google Chrome, so I am going to open Google Chrome — and type "SSDT" or "SQL Server", and then specify the version. As I am using SQL Server 2014, I have to specify 2014. That is how you can download and install SSDT. If you like this video, please subscribe to our channel to get many more videos. See you in the next video. Thank you so much for watching this video.

Hello, welcome to SSUnitech. My name is Susheel Singh and this is a continuation of the SSIS tutorial. Today we will discuss what SSIS is and what it is used for, and then we will discuss the control flow and the data flow. So let's start. First of all, what is SSIS? SSIS stands for SQL Server Integration Services. SQL Server Integration Services is a component of the Microsoft SQL Server database software that can be used to perform a broad range of data migration tasks. So we can say that SSIS is a tool for data integration and workflow applications. Next, what is ETL? ETL is a three-step process: first extract, then transform, then load. Extracting data from different sources is our first step. Then we apply transformations so that our business requirements are fulfilled. Our last step is to load the data into our database. So this is our ETL process. Before going forward, I would like to explain how to start with SSIS. First of all, go to Start and type "SQL Server Data Tools". As we can see over here, we need to open it. So this is our start page for SSIS.
Go to File, then New, then Project. Here we can see Business Intelligence, and inside it we have three options: first Analysis Services, then Integration Services, then Reporting Services. Since we are talking about Integration Services, we select that option, and then we choose the path where this project will exist. This is our path; we can rename it SSIS_Testing. This path does not exist yet, so first of all we need to create a folder with the name SSIS_Testing. Now we can select this folder, and here we can rename the project SSIS Tutorial. Now we click OK. Once we click OK, it loads SSIS. So this is our SSIS, and this is our initial package. Our first tab is Control Flow, and then we have Data Flow. Let's discuss the control flow first. Here we define the operations, or the task execution order: a task waits until the previous task has completed, and only once the previous task is complete does the next task execute. As we can see in this example, we have a Data Flow Task: if the Data Flow Task executes successfully, our Execute SQL Task will execute, and if the Data Flow Task fails, our Script Task will execute. This is how we configure our tasks and set their execution order. Now let's move to the data flow. When we are talking about movement of data, the Data Flow Task comes into the picture. It has three components: first the source, then the transformation, and then the destination. The source picks up data from different sources such as a database, Excel, flat files, or XML files. The transformation is used to clean up the data we get from the source according to our business requirements. Our last step is the destination, where we load the data; the destination could be a database, Excel, a flat file, or anything else.
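The extract–transform–load idea described above can be sketched outside SSIS as well. Here is a minimal, hypothetical Python sketch — the column names and sample values are invented for illustration — that extracts rows from an in-memory CSV, applies a small transformation, and loads the result into an in-memory SQLite table:

```python
import csv
import io
import sqlite3

# Extract: read comma-separated rows (an in-memory stand-in for a flat file;
# the column names and values are made up for illustration)
source = io.StringIO("id,name,salary\n1,Amit,5000\n2,Priya,7000\n")
rows = list(csv.DictReader(source))

# Transform: apply a business rule, e.g. a 10% salary raise
for row in rows:
    row["salary"] = round(int(row["salary"]) * 1.10, 2)

# Load: insert the transformed rows into a database table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT, salary REAL)")
conn.executemany("INSERT INTO emp VALUES (:id, :name, :salary)", rows)
loaded = conn.execute("SELECT COUNT(*) FROM emp").fetchone()[0]
```

In SSIS the same three stages are drawn graphically as a source, a transformation, and a destination inside a Data Flow Task.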
So here we can see we have an Excel source, then a transformation, which is a Row Count, and then a destination, which is our database. This is our data flow task. In a data flow, a component does not wait for the previous component to finish completely: as soon as the Excel source gets data, it forwards it to the Row Count, and as soon as the Row Count gets data, it forwards it to our database. So I hope you have understood the control flow, and the data flow as well. If you like this video, please subscribe to our channel to get many more videos. Thank you so much.

Hello, welcome to SSUnitech. My name is Susheel Singh and this is a continuation of the SSIS tutorial. Today we are going to discuss how to load data from a flat file into our database. So let's start. As we can see over here, first of all we have to use a Data Flow Task inside the control flow, as in our first image. In our second image we can see that first we have to use a Flat File Source, from which we want to get the data, and then an OLE DB Destination, where we want to load the data. This is the process we have to follow. So let's move to SSDT. In our last video we learned how to get started with SSDT. This is our package; first of all I am going to rename this package "Flat File to Database". So this is our first exercise: flat file to database. As we can see over here, we have to use a Data Flow Task. This is our control flow, and here we drag and drop the Data Flow Task. Once we double-click on this Data Flow Task, it moves directly to the Data Flow panel. In the Data Flow panel we can see that our SSIS Toolbox has an Other Sources section. When we expand Other Sources, we can see the Flat File Source, which we can drag and drop. Before going forward, I would like to explain connection managers. So what are connection managers?
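The streaming behaviour described above — each component forwarding rows as they arrive instead of waiting for the whole batch — can be mimicked with a hypothetical generator pipeline (the sample rows are invented for illustration):

```python
# A generator pipeline mimicking source -> Row Count -> destination:
# each stage forwards rows as they arrive instead of waiting for the full set.

def source():
    for row in [("1", "Amit"), ("2", "Priya"), ("3", "Ravi")]:
        yield row

counted = {"rows": 0}

def row_count(rows):
    for row in rows:
        counted["rows"] += 1  # count each row as it streams past
        yield row             # forward it immediately, like the Row Count transform

destination = list(row_count(source()))
```

The Row Count never holds the data back; it only observes rows on their way to the destination.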
Connection managers are used to make the connections to our source or destination. For example, here we want to get the data from a flat file and then load that data into our SQL Server table, so we need to make two connections: the first for the Flat File Source and the second for our OLE DB Destination. Before making the connections, let's have a look at the flat file from which we want to get the data. We have this text file; here we can see it has comma-separated values and it contains three columns: first ID, second name, and then date of birth. We need to load this data into our SQL Server table, so go to SQL Server Management Studio. In SQL Server Management Studio we can see we have a flat-file-to-database table. It contains three columns: first ID, then name, and then date of birth. Initially we don't have any data, and we want to load the data from our flat file, which we can see over here: it has comma-separated values, and a total of 8 rows are available. We want to load this data into our SQL Server table, so I need to create a connection for the Flat File Source. Double-click on the Flat File Source, and here we can see the flat file connection manager. Initially we don't have any connection over here, so we have to make a new one. Click New; it will then ask for a few things. The first is the file name, so we browse, select our comma-delimited source file, and open it. Then we need to look at the format. In the format we can see three format types available for a flat file: first delimited, second fixed width, and last ragged right. In delimited, the values are separated by some delimiter such as a comma, a pipe, or anything else. In fixed width, each column's size is fixed, and in ragged right the column sizes are also fixed.
But what is the difference between fixed width and ragged right? The only difference is that the last column's value is variable in ragged right: the widths of the other columns stay the same, but the last column's width can vary. Here I am going to load data from a delimited file, and the delimiter is a comma. So go to Columns; the column delimiter is a comma, which we can select from this drop-down (it has other options as well). Here we can see we are getting the data that is available in our text file, but we do not want to load the double quotes that appear in our input. So how can we remove these double quotes at the source? Go to General, and here we can see the text qualifier. In the text qualifier we need to specify that character — the double quote — so the double quote is specified over here. Now go back to Columns, and here we can see the double quotes are removed. If we want to remove any other character, we can specify it the same way. Now go to Advanced, where we can specify the data types. First is ID: ID should contain integer values, so we can convert it to an integer type. Now go to name: name contains a string, and the output column width is 50. That's fine for me; if we want to change it, we can make it 100 or anything else. Then we can see date of birth: date of birth also contains a string value, but we don't want a string value; we want to convert this string to a datetime. So we select a database timestamp type, which is fine for us, and go to Preview. It looks fine, so we click OK. Here we have another option: "Retain null values from the source as null values in the data flow". What does it mean? When we check this checkbox, any blank or zero-length value coming from our source will be converted to NULL. As we can see in the preview over here, where ID is 8, the name value is blank.
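The text-qualifier behaviour can be demonstrated with Python's `csv` module — `quotechar` plays the role of the SSIS text qualifier. The sample rows below are hypothetical stand-ins for the tutorial's file:

```python
import csv
import io

# Comma-delimited values wrapped in double quotes (the "text qualifier").
raw = '"1","Amit","1990-01-01"\n"8","","1985-03-22"\n'

# Without a qualifier the quotes remain part of the data; setting
# quotechar='"' strips them, like entering " as the text qualifier in SSIS.
with_quotes = list(csv.reader(io.StringIO(raw), quoting=csv.QUOTE_NONE))
stripped = list(csv.reader(io.StringIO(raw), quotechar='"'))
```

After parsing, `with_quotes` still carries the literal `"` characters in every field, while `stripped` contains only the clean values.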
Once we check this checkbox and preview again, this blank is replaced by NULL, and once we load this data into our SQL Server table we will see NULL loaded instead of a blank. Now we can click OK. We have done our source part; now we need to make the connection for the destination where we want to load the data, so we select the OLE DB Destination. This is our OLE DB Destination. Now we can see we have two pipelines over here: the first is for when the task completes successfully, and the second is for failure. We want to pass the data from the Flat File Source to our destination, so we need to make one more connection for this destination. Once we double-click on it, we can see it asks for an OLE DB connection manager, so we need to create one more connection. I have already created a connection, so I am going to delete it and create a new one. Once we click New, it opens this window and asks for the server name. We select our server — here we can see this is our server name — and then we select the database we want to connect to, which is SSIS_Test. Now we can test the connection; the connection succeeded, so we click OK, and then OK again. The connection has been made, and now we are able to access the tables. As we can see, we can scroll down and we have our flat-file-to-database table. Go to Mappings. The mapping succeeded automatically because the input column names and the output column names are the same. Now click OK and execute our package. We can see our package completed successfully. We can stop the package, go to SQL Server, and execute our query, and now we can see we are getting the data. For ID 8, the name value was blank in our flat file, so it has been replaced by NULL. So I hope you have understood how to load data from a flat file into our SQL Server tables.
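The "retain null values" behaviour — blanks from the source arriving as NULL in the table — can be sketched in a few lines of Python. The file contents and table name below are hypothetical stand-ins for the tutorial's data:

```python
import csv
import io
import sqlite3

# Stand-in for the tutorial's flat file: the row with ID 8 has a blank name.
flat_file = io.StringIO("1,Amit\n8,\n")
rows = list(csv.reader(flat_file))

# Mimic "retain null values": convert zero-length strings to NULL before loading.
cleaned = [tuple(v if v != "" else None for v in row) for row in rows]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flat_file_to_db (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO flat_file_to_db VALUES (?, ?)", cleaned)
name_for_8 = conn.execute(
    "SELECT name FROM flat_file_to_db WHERE id = 8").fetchone()[0]
```

Querying the table afterwards shows NULL (Python's `None`) rather than an empty string for ID 8, which is exactly what the checkbox achieves in SSIS.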
If you like this video, please subscribe to our channel to get many more videos. Thank you so much for watching.

Hello, welcome to SSUnitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss how to load data into a flat file from a database; as I am using SQL Server, we have to load data from SQL Server into a flat file. Here we can see that again we are dealing with movement of data, so we have to use a Data Flow Task inside the control flow. And when we move to the Data Flow panel, we have to use an OLE DB source, from which we want to get the data, and then load into a Flat File Destination. We have a table that contains the employee information, with four columns: ID, name, date of birth, and salary. We want to load this data into a flat file, and that flat file will contain comma-separated values: ID, then a comma, name, then a comma, date of birth, then a comma, salary. So we will get the data comma-separated. How can we achieve this? Let's move to SSDT. Here we have SSDT, and we need to create a new package. We rename it "Database to Flat File Comma Separated". As we discussed, we need to use a Data Flow Task, so drag and drop the Data Flow Task and double-click on it; it moves directly to the Data Flow panel. Inside the Data Flow panel, we have to use an OLE DB Source because we want to get the data from a SQL Server table. This is our OLE DB Source; we drag and drop it. Here we need to make the connection with SQL Server, so double-click on it; it asks for an OLE DB connection manager. We need to create a new one — we have already created a connection, so delete it and create a new one. Click New; it asks for the server name, so we select the server name. This is our server name, and then it asks for the database we want to connect to. We want to connect to SSIS_Test, then we can test our connection. The connection succeeded.
Then click OK, and again click OK. We have made the connection; now we need to select the table from which we want to get the data. We want to get the data from the employee table — this is our employee data table. Now we can preview: we are getting 4 columns as we discussed, ID, name, date of birth, and salary. Close it and go to Columns. Do we want all the columns, or do we want to skip any? If we want to skip a column, we can unselect that column name. As we don't want date of birth in our flat file, we can skip it over here. Now click OK; we have done our source part. We want to load the data into a flat file, so drag and drop the Flat File Destination and connect it. Now we need to make the connection with the flat file: double-click on the Flat File Destination, and it asks for a flat file connection manager, so click New — I need to create a new one because we don't have any connection right now. Now we can see it asks for the flat file format. We have different flat file formats: first delimited, then fixed width, then fixed width with row delimiters, then ragged right. We want to use the first one, so click OK. Once we click OK, our flat file connection manager editor opens. Now we need to select the file. We don't have any file right now, so we create a new one and name it "database to flat file comma separated". Click OK. Here we can see the format: the format should be delimited. Then we can see the header row delimiter, CRLF — this will be discussed in the next video. Our first row should be the column names, so we select that checkbox. Now move to Columns. In Columns we can see the column delimiter, which should be a comma; as we can see, it has been selected by default, and if we want to change it, we also have the option to do so. Now go to Advanced. Here we can specify the size; if we want to increase the size, we can.
For example, for name we want the length to be 500, so we can set that over here. Now click OK and go to Mappings. Inside Mappings, our mapping succeeded: ID is mapped to ID, then name, then salary. So now we can click OK and run this package. Once we run it, we can see 47 rows — 47 rows have been loaded into our flat file. Go to the flat file; we can see "database to flat file comma separated". Now we can open it. Here we can see we have ID, name, and salary, and it contains comma-separated values: ID, then a comma, then name, then a comma, then salary, and then it moves to the next row. So we are getting the data. We can close it and stop the package. As we have discussed, we have four flat file formats: first delimited, then fixed width, then fixed width with row delimiters, then ragged right. How to load data as delimited, we have covered over here. In the next video we will discuss how to load data into fixed width, then fixed width with row delimiters, and then ragged right, and we will also understand the differences between these flat file formats. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos. Thank you so much for watching.

Hello, welcome to SSUnitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss how to load data into a database from flat files. In our last video we also discussed how to load data into a database from a flat file, but the format was delimited. Today we are going to discuss how to load data when the flat file formats are fixed width and ragged right. Let's start. Again we are dealing with movement of data, so we have to use a Data Flow Task inside the Control Flow panel, and when we move to the Data Flow panel, we have to use a Flat File Source and then an OLE DB Destination.
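The database-to-flat-file export described above can be sketched in Python: pull rows from a table, skip the date-of-birth column just as the tutorial unchecks it, and write comma-separated rows with a CRLF row delimiter. The table name and sample rows are invented for illustration:

```python
import csv
import io
import sqlite3

# Hypothetical employee table standing in for the tutorial's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT, dob TEXT, salary INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?, ?)",
                 [(1, "Amit", "1990-01-01", 5000),
                  (2, "Priya", "1992-05-10", 7000)])

# Export to a comma-separated "flat file", skipping the date-of-birth column.
out = io.StringIO()
writer = csv.writer(out, lineterminator="\r\n")  # CRLF row delimiter, as in SSIS
writer.writerow(["id", "name", "salary"])        # first row holds column names
for id_, name, _dob, salary in conn.execute("SELECT * FROM emp"):
    writer.writerow([id_, name, salary])

flat_file = out.getvalue()
```

The resulting text has the column names in the first row and no trace of the skipped column, mirroring what SSIS produces.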
In the Flat File Source we connect to the flat file from which we want to get the data, and the OLE DB Destination is where we want to load it. In fixed width, each row contains one complete record of information, and each row contains one or many pieces of data, also referred to as columns or fields. Each data column has a defined width, specified as a number of characters, that is always the same for all rows. Let's open our fixed-width file — this is it, and we will open it in Notepad++. Here we can see that ID has 11 characters, then name has 100 characters, then salary has 12 characters, and at the end we have two more characters: one for CR, then LF. We want to load this file into our database. How can we load it? We have a flat-file-to-database table; initially it doesn't have any records, and it contains three columns: ID, name, and salary. We want to load data into this table. So how can we load it? Let's move to SSDT. This is our SSDT; now I am going to create a new package and rename it "Flat File to Database with Fixed Width and Ragged Right". As we have discussed, in the Control Flow panel we need to use a Data Flow Task, so we drag and drop our Data Flow Task; this one is for fixed width, so we can rename it "Fixed Width". Once we double-click on it, it moves directly to the Data Flow panel. Inside the Data Flow panel we need to use a Flat File Source: go to Other Sources, then drag and drop the Flat File Source. Then we need to make the connection to our flat file: double-click on it, and it asks for a flat file connection manager. We don't have any connection manager right now, so we create a new one. Once we click New, the flat file connection manager editor window opens and asks for the file name. We can browse and select our file — this is our fixed-width file — and open it.
Once we open it, we need to set the file format to fixed width, so we select Fixed Width. We can see that the text qualifier is disabled: the text qualifier is only enabled when we are dealing with delimited; for ragged right it is also disabled. Now we move to Columns. Inside Columns we need the total number of characters: we can see 123 is the total length, but we have to enter 125, because 123 characters are for the data and 2 are for the row delimiter. So we enter 125. Now, here we need to place a marker at 11, because as we can see in our flat file, ID takes 11 characters — if we count them, it is 11. The second marker is for the 100-character name column, so we place it at 100 plus 11, which is 111, and go to Advanced. Once we move to Advanced, we can see ID, name, and salary. We can preview, and as we can see, it looks good, so click OK. Once we click OK, we have the option "Retain null values from the source as null values in the data flow", so we select this checkbox. Go to Columns: we are getting ID, name, and salary, and we are OK with it, so click OK. Now we have done our source part; let's move to the destination. We want to load the data into an OLE DB Destination, so we drag and drop it and connect it. Once connected, we need to make the connection to our database: double-click on it and go to the OLE DB connection manager. Click New — we have already made a connection, so delete that one and create a new one. Here we select the server name; once we select the server name, it asks for the database we want to connect to. We want to connect to SSIS_Test, so we can test the connection; the connection succeeded. Click OK, again OK, and OK. We have made the connection to the database, and then we select the table into which we want to load the data — this is our table. Go to Mappings.
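The column widths above (11 for ID, 100 for name, 12 for salary, plus CR LF) can be verified with a short Python sketch that builds and slices one fixed-width record; the sample values are hypothetical:

```python
# Fixed-width layout matching the tutorial's file:
# 11 characters for ID, 100 for name, 12 for salary (123 total, + CRLF = 125).
ID_W, NAME_W, SAL_W = 11, 100, 12

def parse_fixed_width(line):
    """Slice one fixed-width record into its three fields."""
    id_ = line[:ID_W].strip()
    name = line[ID_W:ID_W + NAME_W].strip()
    sal = line[ID_W + NAME_W:ID_W + NAME_W + SAL_W].strip()
    return id_, name, sal

# Build a sample record by padding each value to its column width.
record = "1".ljust(ID_W) + "Amit".ljust(NAME_W) + "5000".ljust(SAL_W) + "\r\n"
fields = parse_fixed_width(record.rstrip("\r\n"))
```

Counting the characters of `record` gives exactly 125 — the 123 data characters plus CR and LF — which is why the editor needs 125, not 123.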
Inside Mappings, as we can see, we need to map the columns: ID should be mapped to ID, then name to name, then salary to salary. We have done the mapping; now click OK. We have finished this exercise, so now we execute it. Once we execute it, we can see 18 rows, so 18 rows should be loaded into our table. Initially we didn't have any data, so now 18 rows should be present over here — we can see and count 18, so it looks good. We have done fixed width; now we move to ragged right. In ragged right, each row contains one complete record of information, as we discussed for fixed width. Each row contains one or many pieces of data, also referred to as columns or fields, and each column has a defined width, specified as a number of characters, that is always the same for all rows. Here is the difference between fixed width and ragged right: the last column's value is variable in ragged right. We can disable this Data Flow Task — right-click on it and choose Disable. Now we drag and drop one more Data Flow Task for ragged right, which we can rename "Ragged Right". Once we double-click on it, it moves directly to the Data Flow panel. Inside the Data Flow panel we need to select our source and destination. Our source is a flat file, so we drag and drop it, and now we need to make the connection. Double-click on it — we have already made the connection for fixed width, but we don't want to use it, so we create a new one. Click New, and it asks for the file name; we browse, select our file, and click Open. Once we click Open, we need to select the format: the format should be ragged right. Click OK. Now we move to Columns, and we need to set the column sizes: the first marker should be at 11, then the next at 111. Our last column's value is variable, so there is no need to set a width for it. Now we can click OK.
Once we click OK, we need to select the checkbox, and once we select it, go to Columns. We are getting ID, name, and salary; click OK. Once we click OK, our source part is done. Now we need to make the connection with the destination, so drag and drop the OLE DB Destination and connect it. Once connected, we need the connection to the database; here we only need to set the table, since we have already made the connection to SQL Server, so I am going to use that one and just set the table name. We want to load the data into the same table we used for fixed width, so we select it and go to Mappings. ID, name, and salary are mapped automatically, so no need to worry about it. Click OK. Once we click OK, as we can see, we have 18 rows in the table, so now we need to delete this data: DELETE FROM the table. Once we delete, we don't have any data right now. Now execute this exercise. Once we execute it, we can see 47 rows, so 47 rows should be inserted into our SQL Server table. Go to SQL Server, execute, and we can check: 47 rows. The data has been loaded into our SQL Server table. Stop the package. So I hope you have understood how to load data into a database from flat files when the formats are fixed width or ragged right. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos.

Hello, welcome to SSUnitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss how to load data from a database into flat files. In our last video we discussed how to load data from a flat file into our database, and we also learned about the flat file formats and connection managers. If you haven't watched part 3 of this video series,
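The ragged-right layout just configured — fixed widths for every column except a variable-length last one — can be sketched in Python. The widths follow the tutorial's markers (11 and 111); the sample values are hypothetical:

```python
# Ragged-right layout: ID and name have fixed widths (11 and 100), but the
# last column (salary) is variable-length, ending only at the row delimiter.
ID_W, NAME_W = 11, 100

def parse_ragged_right(line):
    id_ = line[:ID_W].strip()
    name = line[ID_W:ID_W + NAME_W].strip()
    sal = line[ID_W + NAME_W:].strip()  # whatever remains, however long
    return id_, name, sal

short = "1".ljust(ID_W) + "Amit".ljust(NAME_W) + "5000"
long_ = "2".ljust(ID_W) + "Priya".ljust(NAME_W) + "1234567890"
rows = [parse_ragged_right(r) for r in (short, long_)]
```

Unlike fixed width, the two rows have different total lengths, which is why no width needs to be set for the last column in the connection manager.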
then I would strongly recommend watching that video, where we covered how to load data from a flat file into a database and the flat file formats, which is very important. Today we will also look at the fixed width and ragged right formats and the difference between the two. So let's start. We have to use a Data Flow Task, because we are dealing with movement of data: when data moves from any source to a destination, a Data Flow Task is used. As we can see, inside the Control Flow panel we have to use a Data Flow Task, and when we move to the Data Flow panel we have to use our source and destination. We are going to get the data from a database, so we have to use an OLE DB Source, and we want to load the data into a flat file, so we have to use a Flat File Destination. Let's move to SSDT. This is our SSDT, and we need to create one more package: right-click on SSIS Packages, choose New SSIS Package, and our package is created. Now we rename this package — we are getting the data from the database, then loading it into a flat file, and the format should be delimited. This is our package. Now we need to use a Data Flow Task because we are going to transfer data. Once we double-click on it, it moves directly to the Data Flow panel. Here we want to get the data from the database, so we need to use an OLE DB Source. This is our OLE DB Source. Once we double-click on it, it asks for a connection manager; initially we don't have any connection manager, so we need to create a new connection. Once we click New, we can see we have already made a connection, but this time I am going to delete it and create a new one. Click New, and it asks for the connection details — first the server name, so we select the server name. Once we select the server name, it asks for the database we want to connect to; we want to connect to SSIS_Test, and we can test the connection.
The connection succeeded, so click OK. We have made our connection; now click OK. Next it asks from which table we want to get the data: we want the employee data table. Now we can preview, and we can see we have four columns in this table: first ID, second name, then date of birth, and then salary. We want to load this data into our flat file. We can see our columns; we want all the columns, so we select them all. If we want to skip a column, we can uncheck its checkbox — this time we want only three columns and we don't want date of birth, so we uncheck that one. Click OK; we have done our source part. Now we need to make a connection for our destination, where we want to load the data. This is our Flat File Destination; we drag and drop it, and once we double-click on the Flat File Destination, it asks for a flat file connection manager. Once we click New, it asks for the flat file format. We have four types of flat file format: first delimited, second fixed width, then fixed width with row delimiters, then ragged right. We will cover them one by one. First I am going to load the data as delimited, so click OK. It asks for a file: in our destination folder I am going to create a file named "database to flat file delimited". The first row should hold the column names. Here we can see we have the columns — only three, because we don't want to load date of birth; that column was removed at the source. Click OK, then go to Mappings; the mapping succeeded. Click OK. Now I am going to execute this exercise. Once we execute it, 47 rows are inserted into our flat file. This is our destination folder, where we can see "database to flat file delimited". Here we can see we have a comma-delimited file with all the records. Now we can close it, and we can rename this task "Delimited".
Now I am going to disable this Data Flow Task and create a new one, and this time we will load the data as fixed width. This is our OLE DB Source; once we double-click on it — we have already made the connection — we only need to select the table name. This is our table name. Go to Columns; we don't want date of birth. Click OK. Now I am going to make the connection for the Flat File Destination: we connect it and double-click on it. We already have a connection in this package, so it is shown over here, but we want to create a new one, and this time I am going to load the data as fixed width. Click OK, and create a new file named "database to flat file", this time fixed width. Click OK. The first row should hold the column names. Go to Columns; we are OK with it. Click OK. Go to Mappings; we have done the mapping, so click OK. Now I am going to execute this exercise: 47 rows affected. Now go to the folder, and we have this fixed-width file. Here we can see we are not getting the data as we expected — why are we getting data like this? We will see in a moment. We rename this task "Fixed Width". We have two more formats, so we need to create those as well. This time create one more Data Flow Task, double-click on it, go to the OLE DB Source, double-click on it, and select our table name. This is our table name. Now go to Columns; we don't want date of birth. Click OK. Now go to the Flat File Destination — we can select it — and once we double-click on it, we have two connection managers available over here, but we want a new one. Click New, and this time we want to load the data as fixed width with row delimiters. Click OK. Browse, name it "fixed width with row delimiters", and open. The first row should hold the column names. Go to Columns; we are OK with it. Click OK. Go to Mappings, then again click OK. Now we can execute this exercise, and stop it. And here we can see — this is our file.
We can open it: this is our fixed width with row delimiters flat file format, but we have one additional column over here. We have one more flat file format, which is ragged right, so we disable this task and create a new one. Double-click on it and go to the source — we already have the source. Click on it, create one more connection, and select the table name. Go to Columns; we don't want date of birth, so uncheck it. Click OK. And this time this is our Flat File Destination; we connect to it and click on it. This time we have three connection managers, because we have already created those, but we want a new one, and this time with ragged right. Click OK. It asks for a file, and we want to create a flat file for ragged right — this is our ragged right file. The first row should hold the column names; then Columns — we are OK with it. Go to Mappings; the mapping succeeded. Click OK and execute this package. Once we execute this package, it executes successfully. Now go to the files and let's look at the differences between them. I am going to open these files in Notepad++. This is Notepad++; open the first one. Here we can see we have ID, name, and salary, and two more characters in our delimited file, which are CR LF, indicating the new-line characters. So this is our delimited file. Now I am going to open the fixed-width file in Notepad++. Here we can see we don't have any delimiter — that's why we were getting the data in one long sequence. In our first row we have all the data: these are our column names, and our data starts over here. This is one row, and this is the second row, but because we don't have any row delimiter here, everything runs together. Once we move to the row-delimited file and open it in Notepad++, we can see we have all the data, but with an extra column over here, which is the row delimiter column — and it has two characters,
CR then LF. So we are getting those two extra characters over here, and this is the difference between the fixed width with row delimiters and the plain fixed width flat file formats. Now go to the ragged right file and open it in Notepad++. Here we can see that in our last column we have variable-length values: if a row has no value, the column has zero length, and if it has a value, like 5000 for this row, we just see 5000. So this is the difference between ragged right and the fixed width flat file format. When we deal with fixed width, our last column always takes its full length: if, say, the salary column is defined as 11 characters, we can count 11 characters in each row, whether it has a value or not. But in ragged right, a value like 5000 takes only 4 characters, as we can see over here. So I hope you have understood these file formats and how to differentiate between them. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos. Thank you so much for watching.

Hello, welcome to SSGunitech, Susheel this side, and this is the continuation of the SSIS tutorial. Today we are going to discuss how we can load data into our database from an Excel file. You will face two types of scenarios. Our first scenario is when the database table contains the nvarchar data type for string data. As we can see, we have a table with four columns: first is ID, then name, then date of birth, and then salary. In the name column we have the nvarchar data type, and we want to load data into this table; right now it doesn't have any data. Let's have a look at the Excel file. This is our Excel file; as we can see, it has the same four columns: ID, name, date of birth, and salary. We want to load this data into that table. First of all we have to use a data flow task, because we are dealing with movement of data, so we place a data flow task inside the control flow.
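Conceptually, the Excel-to-database load is just extract rows from a sheet and insert them into a table. A minimal Python sketch, with sqlite3 standing in for SQL Server and a list of tuples standing in for the Excel sheet (the table name, column names, and sample rows are all illustrative assumptions, not taken from the video):

```python
import sqlite3

# Rows as they might come from the Excel source (illustrative data)
excel_rows = [
    (1, "Amit", "1990-01-15", 5000),
    (2, "Neha", "1992-06-30", 6000),
]

# sqlite3 stands in for the SQL Server destination here
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE EmployeeData "
    "(ID INTEGER, Name TEXT, DateOfBirth TEXT, Salary INTEGER)"
)

# The data flow: extract from the source, load into the destination
con.executemany("INSERT INTO EmployeeData VALUES (?, ?, ?, ?)", excel_rows)
con.commit()

print(con.execute("SELECT COUNT(*) FROM EmployeeData").fetchone()[0])
```

The SSIS data flow task does the same extract-and-load, only with buffered pipelines and the connection managers configured in the designer instead of code.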
When we move to the data flow panel, we can see we have to use an Excel source, from which we want to get the data, and then an OLE DB destination, where we want to load the data. So let's move to SSDT to create the package. Here we need to create a new package: right click on SSIS Packages, New SSIS Package, and rename it to Excel to Database nvarchar. We have to use a data flow task, so we drag and drop it, and once we double click on it, it moves directly to the data flow panel. Inside the data flow panel we have to use the Excel source, so we drag and drop it, and here we need to make the connection with the Excel file. Once we double click on the Excel source, it asks for an Excel connection manager. We don't have an Excel connection manager yet, so we need to create a new one. It asks for the Excel file path, so we need to specify the file: we can browse, and this is our Excel source file, so we select it and click Open. Once we click Open, we need to select the sheet from which we want to get the data. We want the data from Sheet1, as we can see we have data in Sheet1; Sheet2 and Sheet3 don't have any data. So we select Sheet1. Once we go to the Columns page, we can see we have ID, name, date of birth, and salary; we have all the columns, then click OK. So we have done our source part. Now we move to our destination: we want to load the data into our table, so we have to use the OLE DB destination. We can connect it, and once we double click on the OLE DB destination, it asks for a connection manager. We need to click New; once we click New, we can see the Configure OLE DB Connection Manager dialog. We already have a connection, but I am going to delete that one and click New. It asks for the server name.
So we can select our server name; this is our server name. Once we have the server name, it asks for the database we want to connect to: here we want to connect to SSIS Test. Now we can test our connection: test connection succeeded. Then click OK, again click OK, and click OK. Here we need to select the table where we want to load the data: we want to load it into the employee data destination nvarchar table. Now go to the Mappings page; the mapping succeeded, so click OK. As we can see, initially our table doesn't have any data. Now I am going to run this package: six rows, so these six rows should be inserted into our table. We can execute the select statement, and as we can see, we are getting the data. So when we have the nvarchar data type for string data in our SQL Server table, and we want to get the data from an Excel file and load it into the table, we don't need to worry about anything: the data moves directly from the Excel source to the destination, because the Excel source returns string data as Unicode, matching nvarchar. Our second case is when the database table contains varchar for string data. As we can see, this is our second table, and it has the same structure as the first table, but it has the varchar data type for the name column. Now we want to load the data into this second table; we can execute and check that we don't have any data right now. So how can we load the data? First of all, again we have to use a data flow task inside the control flow panel, because we are dealing with movement of data. Then, once we move to the data flow panel, we can see that first we have to use the Excel source, and then we have to use a transformation, which is Data Conversion. We need Data Conversion because the Excel source returns the Unicode (nvarchar) data type, while in our database we have the varchar data type, so it will not be mapped correctly.
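The mismatch just described can be reproduced outside SSIS. A minimal Python sketch of what the Data Conversion transformation effectively does when it turns a Unicode string (DT_WSTR) into a non-Unicode string (DT_STR); the codepage choice is an assumption, since SSIS uses whatever codepage is configured:

```python
# Unicode string, as the Excel source would deliver it (like DT_WSTR)
name_unicode = "Susheel"

# What the Data Conversion transformation effectively does: re-encode
# the Unicode value into a single-byte codepage (like DT_STR).
# Codepage 1252 here is an assumption for illustration.
name_non_unicode = name_unicode.encode("cp1252")

print(type(name_unicode).__name__, type(name_non_unicode).__name__)

# A character outside the target codepage shows why the engine refuses
# to do this conversion implicitly and makes us add it explicitly:
try:
    "名前".encode("cp1252")
except UnicodeEncodeError as err:
    print("cannot convert between unicode and non-unicode:", err.reason)
```

The explicit step is the point: just as Python makes the `encode` call visible, SSIS makes us place a Data Conversion component rather than silently re-encoding strings.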
We have to use a Data Conversion transformation, by which we will convert the nvarchar (Unicode) data type to the varchar data type. Go to SSDT and create a new package, this time Excel to Database varchar. We have to use a data flow task, so we drag and drop it; once we double click on it, it moves directly to the data flow panel. In the data flow panel we need to use the Excel source: drag and drop it, then make the connection. Once we double click on the Excel source, it asks for an Excel connection manager; we don't have one in this package yet, so we need to create a new one. Once we click New, it asks for the Excel file path; we select our file and click OK. Then it asks for the Excel sheet: we want the data from Sheet1, so we select Sheet1, go to the Columns page, keep all the columns, and click OK. So we have done our source part. Now we need to load the data into our SQL Server table, so we have to use the OLE DB destination: drag and drop it and connect. Again we need to make the connection for this OLE DB connection manager: click New, but we have already made the connection, so I am going to use that one and click OK. Now we select the table, and this time we want to load the data into the table where the string data has the varchar data type. Click OK, go to the Mappings page, the mapping succeeded, and click OK. But here we can see our destination shows an error, and if we read it, it says the column Name cannot convert between Unicode and non-Unicode string data types. So we have to convert first. Delete this connection, and we have to use a Data Conversion: go to the transformations list, and we can see we have a transformation called Data Conversion. Drag and drop it, and once we double click on it, we want to convert the data type for the Name column; our new data type should be string, so we select string and click OK. Once we click OK, we can see that if we want to rename the output alias, we can do that too, for example to name_new. Now click OK. Next we need to map this Data Conversion output to our destination: click on the destination, go to the Mappings page, and this time we don't want to map Name to Name; we want to map name_new to Name instead, so we drag and drop it and click OK. Now we can see the error has gone, and we can execute this package. Once we execute it, the package succeeds, and six rows should be inserted into our table. Go to Management Studio and execute our select statement: as we can see, we have six rows. So I hope you have understood how to load the data whether the database table contains the varchar data type or the nvarchar data type for string columns. Now we have a few points to remember. Our first point is about connections with the Excel workbook: if we need to access multiple sheets in the same workbook, we don't need to create a connection for each sheet; once we create the connection to the Excel workbook, we can access all the sheets available in that workbook. Our second point is that we can also sort data at the source by setting the IsSorted property to true and setting a sequence in the column SortKeyPosition; we will cover this part once we understand the Merge transformation. Next, we can select only a specific range from Excel if required. For example, in our Excel source we are getting six rows, but say we want to insert only three rows into our database; how can we select only that specific range? For this we need to understand the data access mode, where we specify how we want to fetch the data. For example, if we have the table or sheet name in a variable, we can use it, or if we want to write a SQL statement to get the data, we can use that as well. In our scenario we can see we have four options: first is Table or view, second is Table name or view name variable, third is SQL command, and fourth is SQL command from variable. So I am going to use SQL command to select the specific range
of data. Go to SSDT and double click on the source; we can see Table or view, and I am going to change it to SQL command. Here we need to write the SQL command: SELECT * FROM, then we specify the sheet name, Sheet1, then we have to specify the dollar sign, and after that we specify the range. As we want our first three rows, IDs 1, 2, and 3, and we also want our column names, we will pick the data from A1 to D4, so the command is SELECT * FROM [Sheet1$A1:D4]. Now we can preview and check: it is going to return only the first three rows, as we expected. Go to the Columns page, and we can see we are getting all the columns. Click OK, then execute our package. Once we execute it, the package succeeds; now go to SSMS and execute our select statement, and we can see we have three rows. So I hope you have understood how to select a specific range, and we have also understood the data access mode; we will look at the table name variable and SQL command from variable options later. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos, and please share it with others. Thank you so much.

Hello, welcome to SSGunitech, Susheel this side, and this is the continuation of the SSIS tutorial. Today we are going to discuss how we can load data into an Excel file from our database. As I am using SQL Server, we will load data from a SQL Server table to Excel. If you haven't watched the previous video of this series, I would strongly recommend watching it, where you can understand how we load data into the database from an Excel file; here we will go the other way, loading data into an Excel file from the database. Again we will face two types of scenarios. In our first scenario, as you can see, the database table contains the nvarchar data type for string data. As we can see in our first table, we have four columns: first is ID, name, date of birth, and salary. In our second column, as we can see, for the
name we have the nvarchar data type, and we want to load data from this table into our Excel file. We have six rows, and these six rows should be inserted into our Excel file. So how can we do this? First of all, we have to use a data flow task inside the control flow panel, because we are dealing with movement of data. Then we move to the data flow panel; inside it we have to use an OLE DB source, from which we want to get the data, and then an Excel destination, where we want to load the data. So let's move to SSDT to create the package. We want to create a new package: right click on SSIS Packages, then New SSIS Package, and I want to rename this package to Database to Excel nvarchar. So this is our package name. Inside the control flow panel we have to use a data flow task: we drag and drop it, and once we double click on it, it moves directly to the data flow panel. Inside the data flow panel we have to use the OLE DB source, from which we want to get the data, so we drag and drop the OLE DB source. Next we need to make the connection with our database: double click on the OLE DB source, and we can see it asks for an OLE DB connection manager. Click New; we already have a connection, but I want to delete that connection and create a new one. Once we click New, the connection manager window opens, and inside it we need to select our server name; this is our server name, we can select it, and I am using Windows authentication. Then we need to select the database we want to connect to: we want to connect to SSIS Test. Now we can test our connection: connection succeeded, click OK. Here we need to select the table: this is our table, employee data destination nvarchar. Go to the Columns page; as we can see, we have ID, name, date of birth, and salary, so we have all the columns. Click OK. So we have done our source part. Now we need to configure our destination: we want to load data into our Excel destination, so we need to drag and drop
this Excel destination and connect it. Now double click on the Excel destination, and it asks for an Excel connection manager. As we don't have any Excel connection manager yet, we need to create a new one. Now we can browse: this is our destination, go to the Excel folder and the EMP data nvarchar file, and click Open. Once we click Open, here we can see that our first row has column names, then click OK. Once we click OK, it asks for a sheet name; here we don't have any sheet, because our Excel file does not exist yet, so we want to create a sheet. We can click New, and as we can see, it shows a CREATE TABLE statement for the Excel destination, which will create the sheet; we can click OK. Once we click OK, we can see the Excel destination sheet is available in our Excel file, so we select it and go to the Mappings page. In the mappings, as we can see, ID, name, date of birth, and salary have all been mapped successfully, so we can click OK. Now we need to execute our package: as we can see, six rows should be inserted into our Excel file. Go to the Excel file: this is our Excel file, we can open it, and we can see we are getting six rows, and our first row is the column names. So this is our first scenario. In our second scenario, as we can see, the database table contains the varchar data type for string data. Here, if we want to load data from the database to the Excel file and our table column contains the varchar data type, it will not be mapped successfully, and our package will not succeed unless we use a Data Conversion. So let's start: here we need to create one more package, click New SSIS Package, and this time we need to rename it to Database to Excel varchar. So again we need to use a data flow task inside the control flow panel: we drag and drop it, double click on it, and it moves directly to the data flow panel. Inside the data flow panel we need to configure our package, so here we want to select our
source. Our source is the OLE DB source, because we want to get the data from a SQL Server table. Now we can make our connection: I want to delete the existing one and create a new one. We can select our server name; this is our server name. Now we need to select our database name, which is SSIS Test. Now we can test our connection: connection succeeded, click OK, again OK, and again OK. Here we need to select the table from which we want to get the data: we want the data from the varchar table, we want all the columns, and we click OK. As we can see in our second table, we have the varchar data type for the name column; we have selected this table, and it has 20 records, so these 20 records should be inserted into our Excel file. Here we need to select our destination, which should be the Excel destination: we drag and drop the Excel destination and connect it. Now we need to make the connection for this Excel destination: we select the file, which I want to rename to varchar, then click Open and OK. Here we don't have any sheet yet, so click New sheet, click OK, again OK, and our sheet has been created. Now go to the Mappings page; inside the mappings we can see our mapping succeeded, so click OK. But here we can see our destination shows an error. So what is the error? The column Name cannot be converted between Unicode and non-Unicode string data types. So we have to use a Data Conversion to convert the non-Unicode data type to the Unicode data type. We drag and drop the Data Conversion and connect it, then double click on it. We can see our Name column: we select Name, and we can rename our output alias to name_new. Then we can see our data type: currently we have the string data type, and we want to convert this string data type to Unicode string. Then click OK. Now we need to map it with our destination: go to the Mappings page, and as we can see, we want to delete our existing mapping with Name and map it to the new column, which is
name_new. Now click OK, and as we can see, the error has gone. Now we can execute our package, and once we execute it, 20 rows should be inserted into our Excel file. Go to the Excel file: as we can see, this is our Excel file, and we have 20 rows. So normally these are the two types of scenarios you will face when you want to load data into an Excel file from a database. I hope you have understood how we can load data into an Excel file from a database. If you like this video, please subscribe to our channel to get many more videos, and please share it with others. Thank you so much for watching.

Hello, welcome to SSGunitech, Susheel this side, and this is the continuation of the SSIS tutorial. Today we are going to discuss how we can load data into raw files from a flat file, and then we will also understand how we can load data into the database from that raw file. At last we will understand the differences between the flat file source and the raw file source. So let's start. As we are dealing with movement of data, we have to use a data flow task inside the control flow panel, and when we move to the data flow panel, we can see we have to use a flat file source, from which we want to get the data, and then a raw file destination, where we want to load the data. Before going to SSDT, let's have a look at the flat file. This is our flat file; as you can see, it has comma separated values: first is ID, then first name, middle name, last name, and date of birth, so it contains five columns in total. We want to load this data into a raw file, so let's move to SSDT. This is our SSDT; now we need to create a new package. Right click on SSIS Packages and create a new package, and then I am going to rename this package to Flat File to Raw File. First of all we have to use a data flow task inside the control flow panel: drag and drop this data flow task, then double click on it, and it moves directly to the data flow panel. Inside the data flow panel we have to use our source, which is the flat file source, so we need
to drag and drop this flat file source. Now we need to make the connection with our flat file: double click on it, and it asks for a flat file connection manager. Click New, then we need to select our file: we can browse, select the file, and click Open. As we can see, we have delimited values; go to the Columns page, and we have comma separated values with five columns: first is ID, then first name, middle name, last name, and date of birth. So we are getting the data; now click OK. There is also the option to retain null values from the source as null values in the data flow. Then go to the Columns page: as we want all the columns, we select all of them and click OK. So we have done our source part. Now we need to configure our destination, which should be the raw file destination: this is our raw file destination, we drag and drop it and connect it, then double click on it. Once it is active, we can see the access mode, which has two options: first is File name, second is File name from variable. If we have the file name in a variable, we can select the second option; if we want to select the file directly, we can select the first option. Now we can browse, and we need to create a new raw file, so we can name it raw and click Open. Once we click Open, we can see the write option, which has four choices: first is Create always, then Create once, Append, and Truncate and append. So what do they mean? The first, Create always, is used to drop and recreate the raw file each time we execute our package: it will always create a new file and drop the existing one. Our second option, Create once, is used if we want the raw file to be created only once: the first execution creates it, and if we execute the package again, it raises an error. So if we have a scenario where the raw file must be created only once, we can select our second option, and
our third option is Append, which is used to append the data to our existing raw file, and Truncate and append, which is used to truncate the existing data and append the new data. So these are the four write options available, and we need to select Create always. Go to the Columns page: we want all the columns, so we select all of them, then go to the connection manager page and generate the initial file. The file has been generated successfully; now click OK. Once we move to our folder, we can see the raw file has been generated. We can also have a look inside the Connection Managers pane: we don't have any connection manager for this raw file destination, because there is no need to make a connection manager to connect with a raw file. Now I am going to execute our package: as we can see, five rows, so five rows should be inserted into our raw file. Stop this package and go to the raw file. This is our raw file; I am going to open it in Notepad++, and as we can see, we are getting the data, but it is not very easy to read. Now we need to create one more package, by which we will load this data into our database. This is our database, and we have a raw file table in it; this raw file table has ID, first name, middle name, last name, and date of birth, so all the columns available in our raw file. Now create a new package: right click, go to New SSIS Package, and I am going to rename this package to Raw File to Database. Now we are dealing with movement of data, so we have to use a data flow task inside the control flow panel, then double click on it, and inside the data flow panel we have to set our source and destination. Our source is a raw file source, so we drag and drop the raw file source and double click on it; it asks for the file name, so we can browse and select our raw file, then click Open. Once we click Open, we can see the columns: we are getting all the columns we saw earlier. Now click OK; so we have done our
source part. Now we need to configure our destination, which is the OLE DB destination, so we drag and drop the OLE DB destination and connect it. Then we need to make the connection to our database: click New; I have already made the connection, so I am going to use that one, and click OK. Now we need to select our table, which is the raw file table: we select that table and go to the Mappings page. The mapping succeeded; click OK. Now I am going to execute this package, and once I execute it, five rows should be inserted into our SQL Server table. Go to SSMS and execute our select statement: here we can see the data has been inserted successfully into our SQL Server table. So I hope you have understood how we can load the data from a flat file to a raw file, and then from the raw file to SQL Server. Next, what are the differences between a flat file source and a raw file source? First, a flat file is an ordinary file with text data, like TXT or CSV files, whereas a raw file is basically a dump of SSIS data to the file system. Next, we can read the data using Notepad or any other editor tool in the case of a flat file, but in the case of a raw file the data is in a binary format, so we cannot read it. In the case of a flat file, a connection manager is required to connect with the file, but in the case of a raw file there is no need to make a connection. Finally, a flat file is slow to transfer the data compared to a raw file; a raw file is used to transfer the data very quickly. So I hope you have understood how we can load data from a flat file to a raw file and then from a raw file to a database, and you have also understood how to differentiate between the flat file source and the raw file source. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos, and please share it with others. Thank you so much.

Hello, welcome to SSGunitech, Susheel this side, and
this is the continuation of the SSIS tutorial. Today we are going to discuss what project parameters and package parameters are, and how we can use them in a data flow task. As we can see, parameters are very similar to the local variables we create in SQL Server, and we can access them from the package in which they were created; we cannot access a package parameter from a different package. For example, if we have created a package parameter in package one and we want to access that parameter in package two, it will not be available there. So let's move to SSDT. This is our SSDT; as we can see, this is our package, and we are going to create a package level parameter. How do we create it? Right click on the package, and then we can see the Variables option; click on Variables, and it has a few options. We can click Add, and we can see first the name, then the scope: the scope is our package, so this parameter will be available in this package, and if we want to access it from another package, it will not be available there. We can rename it to Count with a value of 0, and here we can specify an expression as well: click on it, and as we can see, we have functions over here that we can use according to our requirement. Next are project parameters. Project parameters are like global parameters, so we can access these project parameters from any package present in the same project, as we can see over here: if we create a project parameter, we can access that parameter in all the packages available in that single project. Here we can see Project.params: double click on it, and it opens another window, which is for the project parameters. Now again we can click Add Parameter, and here we can specify a name, like PackageCount. Here we don't have an option for the scope, so this parameter can be accessed from all the packages. Now we can save it and close it. So now we
have created both our package parameter and our project parameter. Next, how can we use them? This is our SSDT: as we can see, we want to load data from a SQL Server table to a flat file and apply a filter at the source using the parameters we have created. First we will use the package parameter, then the project parameter. So let's start. This is our sales detail table with a salesperson column, and it has values per salesperson; we want to load the data for the salesperson we pass through the parameter. As we can see over here, we need to create a package parameter named SalesPerson; the type should be String, and we can specify a value. Now we can close it. Here we are dealing with movement of data, so we have to use a data flow task: double click on it, and inside the data flow panel we can configure our source and destination. The source is an OLE DB source; now we need to configure it. We use the connection we have already created, click OK, and then we can see the data access mode: this time I am going to use SQL command, and here we will specify the command. This is our query: we specify a WHERE condition with SalesPerson equal to a question mark, because if we want to apply a filter using a parameter, we have to use the question mark placeholder. So this is our question mark; we copy this command, go to SSDT, and paste that command here. Then we can see the Parameters option: we need to map those parameters, so click on Parameters, and the parameters are available here; as we can see, these are the system parameters, and this is our SalesPerson, our package parameter. Click OK, then again go to the Columns page: as we can see, we are getting all the columns, then click OK. Now we need to configure our destination, which should be a flat file: click on it, choose Delimited, click OK, select the file (this is our file, click Open), check that the first row contains column names, and go to the
columns page. Here we can see we have the data in this file. Now click OK, go to the Mappings page, the mapping succeeded, click OK. So let's have a look at the flat file: this is our flat file, and as we can see, it has a few rows, and the salesperson is Howard. Now we are going to execute it for John: as we can see, we change our parameter value, this time to John, and now we need to execute our package; 16 rows should be inserted into our flat file. Stop this package and go to our flat file and open it: we have the data for John. Now close that file and go to the project parameters, click Add, and this time we need to add one more parameter named PackageSalesPerson; the type should be String, and then we can specify the salesperson name. Here we have John, Howard, and Rahul, so we need to put Rahul; we can save it and close it. Now we need to change our mapping: go to the OLE DB source, then go to Parameters, and inside the parameters, as we can see, we have the package parameter; change it to the project parameter, which is the one for the salesperson. Click OK, again click OK, and execute our package: 30 rows should be inserted into our flat file. Go to the flat file and open it: as we can see, Rahul's records have been inserted successfully. So I hope you have understood how to use a project parameter and how to use a package parameter. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos.

Hey guys, welcome to SSGunitech, Susheel this side, and this is the continuation of the SSIS tutorial. Today we are going to discuss the dynamic connection manager. So what is a dynamic connection manager? With a dynamic connection manager, the connection will be created automatically. For example, we have a scenario where we have a SQL Server table, and we want to load the data from the SQL Server table to our flat file destination, and we can see we have a flat file name of sale_2018, which is our year, then the month, then the date, then the hour, and then the minute.
So we have this flat file name pattern: once we run this package at 10:54, our flat file name should end with 10_54.txt, and when the time changes and we run our package again, it should automatically generate a new file with the new date and time. So let's have a look at the SQL Server table; we need to execute, and as we can see, it has six columns in total: sales order number, sales order date, customer ID, quantity, value, and country. Go to SSDT to implement the package. Here we need to create one more package: right click on SSIS Packages, then New SSIS Package, and I am going to rename this package to Dynamic Connection Manager. Then we need to drag and drop a data flow task, and after that we double click on it, so it moves directly to the data flow panel. Inside this data flow panel we need to configure our source: our source is an OLE DB source, so we drag and drop the OLE DB source. Then we need to load data into our flat file: this is our flat file destination, so we can drag and drop it. Now we configure our source: double click on the OLE DB source, and here we need to select the connection manager; click New, but we have already made the connection, so I am going to use that one, and click OK. Here we need to select the table: our table is the sales info table, so we select it, and we can preview it to see that we are getting the data. Now go to the Columns page; we want all the columns, then click OK. Here we need to connect with the destination, and now we need to configure it: double click on the flat file destination, then we need to select the flat file connection manager. Click New; we want to create a delimited file, we are OK with it, so click OK. Here we need to select the file name: we can browse, and here we want to generate the file with the name sale, so we write sale and click Open. Column names in the first data row: we can check this checkbox. Go to the Columns page, and as we can see, we are getting all the
columns now click on ok then go to on mapping so mapping succeed now click on ok so once we run this package so it will generate a file with the name of sale so after some time once we run this package again then it will not going to create one more file with date and time so for that we need to create two variables so go to on variables so here I want to use for folder name so this is our folder name data type should be a string so we can select a string and then we need to specify the value so this is our folder path so we can copy and we need to paste that in our value and we can create one more variable with the name of file name and data type should be a string so we can select a string here I am not going to pass any hard code value so we want to generate the file name at runtime so go to on expressions we can click on this here in this expression we need to write some code so first of all we need to write sale and here we can check so we are getting the sale then underscore then we need to concatenate sign which is plus then we want to get date and time so we can use get date for current date and time so this is our date time and sale underscore is our string so we need to convert so here we have to write dt underscore w str then we need to specify the size so I want to use 100 now I need to evaluate so once we evaluate then we can see we are getting the date and time but we want to fetch here then month then date hours and minute so I want to fetch only four characters from starting so service string then we need to specify start value so we want to start from one then how many character we want for factor so we can select four then close the bracket evaluate then we can see we are getting the data now we need to use underscore so we can again evaluate so we are getting the year now we need to configure for month so I am going to copy this code and then plus sign and paste that value this time we want to get the value from 6 and we want to get only two 
characters for month so as we can see we are getting 12 now again we need to specify our concatenate symbol which is plus then paste this code and this time we start from 9 and we want to get only two characters for date now what one evaluate so we are getting 0 code between this we need to use underscore so we can specify underscore now we can evaluate so as we can see we are getting year month and date now we need to get our so again we need to use underscore so this is our underscore then I want to paste that value so this time we want to start from 12 and we want only two characters now we can evaluate and we can see we are getting the value this is for our now last time we need to use for minute so underscore concatenate symbol then paste that value so here we can see we are getting from 12 then we need to start from 15 and we need to use only two character so we can evaluate and we can see we are getting the minute as well now we need to use the file extension should be txt file now we can evaluate and we are getting the value so we can okay so here this is our file name and this is our folder name so we have done it now we can save this package and go to one flat file connection manager then we need to press F4 to get the properties inside this property we can see we have expressions so click on this expression and here we have connection string so in this connection string we need to go on expressions then here we have folder path so we can drag and drop this folder path within expressions then we need to use concatenate symbol which is plus then we need to drag and drop this file name now we can click on evaluate so this is our folder path inside this folder our flat file name should be this and this flat file would be generated now we can okay again okay so as you can see time is 11 6 so I gonna run this package once we run this package then our file should be generated with the time 11 6 go to one folder so here we can see we are getting the file with 23 
underscore 0 6 so this is our 11 6 so I gonna stop this package and again run this package when our time will be changed so as we can see time has been changed I gonna run this package one more time so click on start so new file should be generated go to one folder as we can see we got a new file so this is our dynamic connection manager so it will generate a new file on every execution if the time will be changed because it's dependent on time so thank you so much for watching this video if you like this video please subscribe our channel to get many more videos and don't forget to press the bell icon to get the notification of our newly uploaded videos thank you so much hello welcome to SSUniTech so see this side and this is continuation of SSIS tutorial so today we gonna discuss about what is the row count transformation and how we can use it in our last video we have discussed about the project parameter and package level parameters and how we can use it inside the data flow task so today we gonna discuss about a row count if you haven't watched previous video of this video series so I would strongly recommend to watch that video so let's start with the row count so what is row count so row count transformation is used to count the rows as they pass through data flow task and stores final count into a variable basically row count transformation is used for the auditing purpose so here as we can see first of all we have to use a data flow task and inside data flow task we are going to get the values from flat file source and then we have to use a row count transformation where we can count the number of rows that is going to pass from flat file source and we will store that count into a variable and then by using script task we will see that count so let's start with this case is steady so go to on ssgd and here I am going to create one more practice and rename this practice with row count so this is our row count and as we have discussed we need to use a data 
flow task so this is our data flow task double click on this it will directly move to data flow panel so inside data flow panel we are going to get the values from flat file so our source is flat file so we can drag and drop this flat file source and need to make the connection for this flat file source double click on this then it will ask for flat file connection manager then click on me so it will open flat file connection manager editor window here we need to select the file we can browse and we can select our file this is our file click on open and go to on columns as we can see we are getting the value so click on ok then click on ok so we have done our source part now we need to use transformation so row count transformation is a label inside the common here we can see row count so we can drag and drop row count and we can map now we need to declare a variable where we will store the value for this count put one variable add a variable here we need to specify count and value should be integer so we have done for it and now double click on this row count then we can see it will ask for variable from which variable we want to store the values so we want to store in this variable click on ok so we have done our row count transformation now we need to check the value then go to on control flow panel so inside this control flow panel we have to use a script task by using you can see the output now map and double click on this script task then it will ask for scripting language I am going to use sysop and we have option for bb as well here we have two options first is read only variable second is read and write only variables so I am going to use read only variable click on this then we can select our user count then click on ok then we need to click on this edit script once we will click then it will take few seconds then it will open a script so inside this script we have main block so inside this main we need to specify our code so we can use message box dot so 
then DTS dot variables inside the bracket we need to specify our variable dot value dot to string so we have done our code and this code will be available on the description of this video so you can copy if you need then close it click on ok so now I am going to execute this package so as we can see 32 so if we can see our flat file then it should have 32 rows so here we can see so we have total 32 rows so these 32 rows are going to display by row count transformation so I hope you have understand how we can use the row count transformation how we can use in real-time scenario once we will cover auditing of the package then we can understand the real-time use of row count so thank you so much for watching this video if you like this video please subscribe our channel to get many more videos hello welcome to SS Unitec so see this side and this is continuation of SSIS tutorial so today we gonna discuss about one more transformation which is derived column transformation so direct column transformation is very important transformation because here we can apply the expressions according to our business requirement so let's start with derived column transformation so what is derived column transformation so basically it is used to create a new column values or replace existing column values by applying expressions so what is expression so an expression can contain any combination of variables functions and columns from transformation input for example if we are getting data from SQL server table and we want to apply few formula to manipulate the input then we can use derived column transformation by which we can manipulate our input data it also has an option to concatenate data from different columns like if you are getting the data like first name and last name and we want to concatenate first name and last name to get the full name so by using this transformation we can achieve this so we have few examples over here as we can see we have input table and it has few 
columns so in our first condition we want to concatenate first name and last name so we can do this second if you want to get the only first character from first name then also we can do this if you want to apply round function inside the salary column and we want to only two digit after decimal place so we can do this if you want to extract only here from data birth then we can also do this so how we can do this so let's start so this is our table as we can see it has four rows we have displayed in our slide so let's move to ssgd so here we need to add one more package right click on this SSIS package and add a new SSIS package and rename with derived column so this is what arrived column transformation so first of all we have to use a data flow task and inside this data flow task we have to configure our source so our source is sequence over table so we have to use OLDB source now we need to configure OLDB source here we need to specify the connection manager so click on new as we have created the connection so I'm going to use this one click on ok here we need to select the table so this is our table we can select and go to one columns so we want all the columns click on ok so we have done our source part now we need to use derived column so this is our derived column transformation so we can drag and drop as we can see over here it has effects so we can use expressions so double click on this derived column and go to one columns so first of all we want to concatenate first name and last name as we can see over here so first first name then last name so we can drag and drop this first name inside this expression so this is our first name then we need to specify plus symbol and then we want one space then again concatenate symbol then last name so we can drag and drop this last name and we can rename this with full name this is our full name as we can see derived column inside this derived column we have dropped down what we want we want to add a new column or we 
want to replace our existing column so we want to add a new column for this so choose our first option so we have done our first part now in our second foundation we want to get only first character from first name so drag and drop this first name inside the expression then we need to write substring so this is our substring and we want only first character so we have to specify one and one so we can close the bracket so we have done our second part and this is for first character from first name so our next is we want around the salary so drag and drop the salary column inside the expression and we have to use around so we can specify round comma two because you want around only two additional so we have done our third and this is for round salary so our next is we want to extract your part from that of birth so we need to drag and drop that of birth this time and so here we need to use that part function so that part then we need to specify what we want to extract we want to extract here so we have to specify here then close the bracket so this is for here so we have done all these so now I am going to use multicast I will explain multicast in upcoming videos for this time I am not going to load this data into any table or any flat file I am just going to populate this data by using enable data viewer it will pause this package and display the data so now I am going to execute this package so once we execute this package then we can see package is paused between these two tasks because data viewer is enabled between them so as we can see we have full name then first character from first name then round salary and then here so we are getting the data as we were expected so we have done all these four examples now as we can see I have explained this so here we can see we can also use if a condition like if the month name is equal equal to one then we want to display Jan if the month name is equal to equal to two then we want to if this value not be satisfied then 
we want to this number so this is our if then then again if then else so we can use if else condition inside our drag column so how we can achieve this we can now I am going to delete this one and going to add one more column so so I am going to use for employee ID if the employee ID is less or equal to two then we have to use Goshenma and we need to specify low then again column and we have to specify our employee ID less or equal to three then we want to display medium else we want to display large so this is our if and else condition now we can check again I am going to use multicast and going to connect here we need to enable our data where now I am going to execute as we can see last column has been added successfully so for ID 1 and 2 we are getting low for ID 3 we are getting medium for ID 4 we are getting large so we can use if and else condition as well we can also replace the values by using the first condition as we have discussed over here or we have a replace null formula by which we can directly replace the null values now I am going to draw I am going to add one more column and this time if the null value is present in this department name then we want to replace that value so this is a function which is available inside this editor if we need then we can use so go to on null function so we have replaced null so we can use it replace null then if the null value is present in this department name then we want to display and so now I am going to again enable it over so as of now we can see we have the null value in department name if the employee ID is 3 so now I am going to execute this package so null value should be replaced by NA so for this column as we can see null value has been replaced by NA so we can use it if we needed so we have a lab assignment for you check if null and divide by 0 values are available if available then replaced by 0.00 and formula for this lab assignment will be available on the description of this video if you need then 
you can copy from there so thank you so much for watching this video if you like this video please subscribe our channel to get many more videos hello welcome to SSPiratec so see in this side and this is continuation of SSIS tutorial so today we gonna discuss about conditional split so what is conditional split and how we can use it so first of all what is conditional split so it is MSBI component which is available on SSIS toolbox and next is it's just like a if-as condition or case statement by which we can split our input data into multiple outputs depending on the certain condition which we will be applied next is it has one input and multiple outputs depending on the condition and it also has an default output so that if a rows doesn't match with a specified condition then that row will be redirect to that default output and it also has an error output so let's move to our case study so we have two case study first is based on the string data and second is based on the integer data so let's start with string data so in this scenario as we can see we have basic crossover table that contains different countries sale detail our requirement is to split the data for USA and India in different flat files and rest of the countries will move to a different flat file let's have a look of sequence over table so as we can see this is our table and if we can execute our first query then we can see we have total four countries India USA Australia and Nepal so we want to split data India in different flat file USA in different flat file and in our third flat file we will contain the information of Australia and Nepal so how we can achieve it let's move to SSGT so I am going to create a new package so right click on this and create a new package and then we can rename this package with conditional script case study one now we are dealing with movement of data so we have to use a data for us then double click on this and now here we need to configure our source transformation 
and destination so our source is already be source so we can drag and drop this already be source now we can configure our source so here we need to select our connection manager as we have created the connection so I am going to delete this one and create anyone here we need to select the server name so this is our server name we can select and then here we need to select our database name so our database is SSIS test now we can test our connection connection succeed click on okay again okay so here we need to select our table as we can see in our sequence over then we can select our second statement and good one SSGT and we can change our data access mode and here we can write our SQL statement I am going to paste that SQL statement now move to columns as we can see we want all the columns then click on okay so we have done our source part now we need to configure our transformation so what one common then we can drag and drop this conditional script and we can connect with source not only from this condition split then it will open condition split transform as an editor here we can specify our conditions so first condition is if the country is India then we need to select double equals to instead of single and then double correct instead of single now here we need to specify our country so here we want to split our data in case one if the country is India so in our second condition again we need to drag and drop and here we need to set for USA this is for USA and rest of the data will move to our default output as we have discussed now click on okay so we have done our transformation now we need to configure our destination so I am going to load data in flat file so we have to use that file destination so we can drag and drop and this flat file for India here it will ask for what input we want to move this destination so in our case one as we can see in our case one we have specified for the India so this is for case one if you want then we can change it as well 
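The routing logic being configured here can be emulated in a few lines of Python; this is only a sketch of the first-match behavior, assuming one Country field per row (the column and output names are mine):

```python
def conditional_split(rows):
    """Send each row to the FIRST matching output, like the SSIS
    Conditional Split: Case 1 is India, Case 2 is USA, and rows that
    match no condition go to the default output."""
    outputs = {"Case 1": [], "Case 2": [], "default": []}
    for row in rows:
        if row["Country"] == "India":      # condition 1: Country == "India"
            outputs["Case 1"].append(row)
        elif row["Country"] == "USA":      # condition 2: Country == "USA"
            outputs["Case 2"].append(row)
        else:                              # Australia, Nepal, ...
            outputs["default"].append(row)
    return outputs

rows = [{"Country": c} for c in ["India", "USA", "Australia", "Nepal", "India"]]
split = conditional_split(rows)
print({name: len(rs) for name, rs in split.items()})  # {'Case 1': 2, 'Case 2': 1, 'default': 2}
```

The elif chain is the point: each row lands in exactly one output, which is also why the order of conditions matters in the integer case study later in this section.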
In our Case 2 it will reflect the output for the USA, so I am going to drag and drop a flat file destination for the USA as well. This is for the USA; connect it, and when it asks which output we want, we want Case 2, so click on OK. Then drag and drop one more for the default output; it will contain the information for Australia and Nepal. Now we configure the destinations. As we can see, inside this folder we don't have any files yet. Click on the one for India, and it asks for a flat file connection manager, so click on New. It asks for the flat file format; I am going to use delimited, so click on OK. Then select the file: go to the folder, and we want to create a file with the name India. The first row should contain the column names, and under Columns it will contain comma-separated values; we are OK with that. Click on OK, go to Mappings, the mapping succeeded, and click on OK again. We have finished India. Now I configure the USA: we repeat the process, browse inside the conditional split folder, and create a file with the name USA. Click on Open, go to Columns, we are OK with it, click on OK, go to Mappings, the mapping succeeded, and click on OK. Now configure the default output: click on New, a delimited file, browse, and specify the file name Default. Click on Open, column names in the first row, go to Columns, click on OK, go to Mappings, the mapping succeeded, and click on OK again. We have finished our package, so I am going to execute it. As we can see, the package has been executed successfully. Now stop the package and move to the folder: inside the India file we have the data for India only, inside the USA file we have the data for the USA only, and inside the Default file we have the data for both Nepal and Australia. So this is the Conditional Split with string data.

Now move to our second case study. In this case study, as we can see, we have integer or numeric data types, so we need to take care while choosing the order of the conditions. For example, if we have values like 5, 7, 11, 12, and 25, we have two scenarios for selecting the order. In Case 1, if we put "less than 30" first, all the data will move to that output and the rest of the outputs will not contain any information. In Case 2, as we can see, is the correct order: we specify the conditions in increasing order, first less than 10, then less than 20, then less than 30, so it returns the outputs we expect. That is our case study; now let's implement it. As we can see, we have a table for the conditional split; I execute, and we can see we have all the data. Now I create one more package for this case study, this time named Conditional Split Case Study 2. Again drag and drop a data flow task and double-click on it. Here we select our source: our source is a SQL Server table, so we use an OLE DB source. Double-click on the OLE DB source and specify the connection manager; as we have already created the connection, I am going to use that one. Select the table, go to Columns, it has only one column, and click on OK. Now we use the Conditional Split: go to the Common section, drag and drop the Conditional Split, and connect it, then double-click on it. Here we specify our conditions: in the first case, as we can see, I select the value if it is less than 30; second, if the ID value is less than 20; third, if the ID value is less than 10. Click on OK. I am going to use Multicasts to display the data, because I am not going to load this data into any destination. This is for Case 1, this is for Case 2, and this is for Case 3, and I am not going to configure the default output. Click on each one and enable the data viewer, then again, and again. Now I execute the package. Once we execute it, as we can see, in Case 1 we get all the data, and we have no data in Case 2 or Case 3, because we selected an incorrect sequence. So now I change the sequence: first less than 10, then less than 20, then less than 30, and click on OK. Now I execute the package again: as we can see, Case 1 has the data where the ID is less than 10, Case 2 has the data where the ID is less than 20, and Case 3 has the data where the ID is less than 30. We are getting the data as expected. Thank you so much for watching this video. I hope you have understood how to use the Conditional Split and what we need to take care of while using it with integer and numeric data. If you like this video, please subscribe to our channel to get many more videos.

Hello, welcome to SSUnitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss Multicast and the differences between Multicast and Conditional Split. In many interviews the interviewer asks about the differences between Multicast and Conditional Split, so we will cover that part in this video. If you haven't watched part 13 of this video series, I strongly recommend watching it, as it explains the Conditional Split. So let's start with Multicast. What is Multicast? The Multicast transformation in SSIS is used to distribute every input row to one or more outputs. If you have a scenario in which you want to load data from a source into multiple destinations, and the data should be the same as we are getting from the input, then you can use Multicast. In our scenario, as you can see, we have a table and we want to load the data from that table into multiple flat files.
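The fan-out behavior just described can be sketched in Python; this is a toy emulation of the idea, not SSIS itself, and the names are mine:

```python
import copy

def multicast(rows, n_outputs):
    """Copy every input row to every output, like the SSIS Multicast:
    no conditions and no filtering; each destination gets the full set."""
    return [copy.deepcopy(rows) for _ in range(n_outputs)]

rows = [{"id": i} for i in range(20)]   # the 20-row table from the demo
outputs = multicast(rows, 3)            # one list per flat file destination
print([len(out) for out in outputs])    # [20, 20, 20]
```

Deep copies are used so that downstream logic applied to one output cannot disturb the others, matching the point that different logic can be run against the same source data.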
this is our table as you can see it has 20 rows so we want to load all these 20 rows in multiple files so here as you can see we have three folders file 1, file 2 and file 3 so we want to generate three files and that file will contain all the data which is available in this table so how we can achieve it let's move to ssdt so I am going to create a new package so right click on this and create a new package then rename this package with multi-cast so this is multi-cast now I am going to drag and drop data flow task because we are dealing with movement of data then double click on this data flow task so here we need to configure our source transformation and destination so our source is OLEDB source because we want to get the data from SQL Server table so here we need to configure our OLEDB source so double click on this it will ask for OLEDB connection manager so click on new we have already made the connection so I am going to delete that connection and create a new one so once we click on new then connection manager editor window will be open and it will ask for server name so we need to select the server so this is our server name and then log on to server I am going to use windows authentication so there is no need to enter the user ID and password so here we need to select the database name so this is our database name we can test our connection connection succeed click on okay then again okay so we have done our connection manager now we need to select the table so this is our table go to one columns we want all the column then click on okay so we have done our source path now we need to use multi-cast so go to inside this common and here we have multi-cast so we need to drag and drop this multi-cast now connect with source and then we want load the data into three different files so we need to select the flat file destination so flat file 1 flat file 2 and flat file 3 so in our flat file 1 we need to connect with multi-cast then 2 and then 3 as well now I 
am going to make the connection for flat file destination 1 it will ask for flat file connection manager so we need to click on new then we need to select the flat file format I am going to generate the file with the delimited format now click on okay so here we need to select the file name we can browse so here we can see we have three folders so I am going to generate the file inside this file 1 folder this is for one so click 1 then open column names in the first row we need to select this checkbox then move to columns we are okay with it then click on okay now go to on mapping so mapping succeed now click on okay now we need to configure our second file right click on this click on new then again delimited file here we need to select the file and this time I am going to load the file inside this file to folder click on open column names in the first data row then move to columns so we are okay with it click on okay go to on mapping mapping succeed now click on okay now we need to configure our third file so that double click on this again click on new we want to generate the delimited file so we are okay now we need to select the file and this time I am going to load the file inside this file 3 folder 3 then open select this checkbox go to one columns okay got one mapping mapping succeed now click on okay so we have done our package so initially we don't have any file in this folder if we can open then we can check it we don't have any file so now I am going to execute our package once we have executed our package then we can see 20 rows 20 rows and 20 rows and in our input we can see 20 rows so 20 rows we are getting from our input and all the 20 rows have been moved to in our file now we can stop our package and go to our folder in file 1 as we can see we have the file and it contains 20 rows now move to our second folder then again we have 20 rows then moved to our third and last folder as we can see we are getting 20 rows so as we can see we have redirect 
all the rows which we are getting from our input to all the outputs then what are the difference between multicast and conditional expect and this is very important so first is multicast so replicate source data to multiple destination so that different logic can be implemented to to the same source data in case of conditional split distributed the source data into multiple destination based on the condition in our multicast we cannot apply any condition but in case of conditional split we can implement the condition in case of multicast redirect all the rows to every output but in case of conditional split direct row to single output in case of multicast it doesn't have any error output but in case of conditional split we have an error output in case of multicast we cannot apply any expressions because we can't use any condition go to our package then double click on this multicast we don't have any option by which we can apply any expression or condition but in case of conditional split we can specify our conditions by which we can apply our expressions so in case of multicast it doesn't have any default output but in case of conditional split it provides default output so that if rows doesn't match with expression then it redirect to default output as we have discussed in our last video so these are the differences between multicast and conditional split and you need to remember all these differences so that you can explain in your interviews so thank you so much for watching this video if you like this video please subscribe our channel to get many more videos hello welcome to SSU detect so still decide and this is continuation of SSIS tutorial so today we gonna discuss about the short transformation and this is very useful transformation while you are going to use the merge statement and merge join so what is short transformation so short transformation is used to sort source data in either ascending or descending order which is very similar to order by 
statement in tcql command merge transformation and merge join transformation need data to be sorted before using them so if you are getting the data from the source which is not in sorted so by using short transformation we can sort the data and it's a fully blocking transformation so we need to avoid this transformation while using the big table and it's also used to remove the duplicate records from the source data it only look for the source key column value if it found duplicates then remove any of them and get unique values like if we have four columns in our table like id name data birth and gender and we are going to sort on the graph of id and in id column we have the duplicate record and rest of the column we don't have the duplicate values so it will look only id column and if duplicate value will be found then one will be deleted and unique value will be available so we have a sequence over table for our testing purpose so let's have a look of that table so this is our table with the name of sort as we can see it has only one column which is named and it contains the values so in our first row in our last row we can see it seems to be duplicate so one values will be deleted so how we can achieve this let's move to ssdt to create our package so right click on this and create a new package and then I want to rename with sort then we need to drag and drop dataflow task double click on this and here we need to select our source so our source is OLDB source so we can drag and drop OLDB source and then we need to configure here we need to select our collection manager click on new as we have already made the connection so I am going to use that one click on ok here we need to select the table so our table is sort as we can see this is our table got one columns we have only one column which is named so we are ok with it click on ok so we have done our source part now I am going to use the sort transformation so this is our sort transformation we can drag and 
Now we need to double-click on Sort. Once we open it, we can see the available input columns, so we can select our column and order by that column in ascending or descending order; according to our requirement we can set the sort type. Here we have the option "Remove rows with duplicate sort values"; once we select this checkbox, our duplicate values will be removed. Now click on OK. As we can see, in our input we have a duplicate value, and that value will be removed. I am not going to load this data into any destination; I am going to display this data by using a Multicast, so we can drag and drop the Multicast and connect it to the Sort. Now we can enable our data viewer and execute our package. Here we can see rows 1, 2, 3, and each value appears only once, so the duplicate value has been removed. Next is sorting with string data. The SQL Server sort and the SSIS sort treat strings differently, because SQL Server by default is case-insensitive while SSIS is case-sensitive. As we can see, comparing the SSIS sort and the SQL Server sort, in the first row we have the same data, and in the second and third rows we are getting the same data, but in the fifth and sixth rows we have different values between the SSIS sort and the SQL Server sort. Let's have a look in SSDT and SQL Server. Here we can execute our package, and now I am going to copy this data; we can open Excel and paste that data. Now go to SQL Server, and I am going to use ORDER BY 1, copy this data, and paste it as well. As we have removed the duplicate value, we can delete our rows 1, 2, 3, and here we can see: this is our SSIS sort and this is our SQL Server sort, and we are getting different values in the fourth and fifth rows and in the seventh, eighth, and ninth rows, because SQL Server is case-insensitive and SSIS is case-sensitive. So how can we remove this difference in behavior? Once we use a case-sensitive collation in our SQL Server query, that difference will be removed.
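The case-sensitivity difference can be reproduced in Python: a plain code-point sort behaves like the SSIS (case-sensitive) sort, while sorting on a lower-cased key approximates SQL Server's default case-insensitive collation (a rough analogy only; real collations have more rules):

```python
names = ["apple", "Apple", "Banana", "banana"]

ssis_like = sorted(names)                # case-sensitive: uppercase code points sort first
sql_like = sorted(names, key=str.lower)  # case-insensitive comparison, stable order for ties
```

The two orderings disagree exactly where upper- and lower-case spellings of the same word meet, which is the behavior seen in the video's Excel comparison.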
Go to our SQL Server and paste that query here, execute it, copy the data, and go to Excel. Now I am going to paste that data and delete one row. As we can see, the SSIS sort and the SQL Server sort are now treated as similar. So you need to remember this while sorting string data types. Thank you so much for watching this video; if you like this video, please subscribe our channel to get many more videos. Thank you so much. Hello, welcome to SSGunitech. Susheel this side, and this is a continuation of the SSIS tutorial. So today we are going to discuss the Merge transformation. What is the Merge transformation? The Merge transformation is used to combine two sorted data sets into a single data set; the rows from each data set are inserted into the output based on the values in their key columns. It also requires that the merge columns have matching metadata: if we want to merge two input data sets, column 1 in data set 1 and column 1 in data set 2 should have matching data types. We cannot use more than two data sets. This is our case 1, where we need to use the Sort transformation; second, we will understand it without using the Sort transformation, because the Sort transformation is a fully blocking transformation, so we should avoid it if we can. Here we can see we have two input sheets: sheet 1 contains the information of male employees, and sheet 2 contains the information of female employees. We want to merge them and load into our OLE DB destination. Let's have a look at the input sheets. This is our male employees sheet; as we can see, it has employee ID, name, gender, and salary, so we have a total of four columns. Go to the female employees sheet; here again we have employee ID, name, gender, and salary, so again a total of four columns. We want to merge the data of these two sheets and load it into our OLE DB destination. Let's have a look at the SQL Server table where we want to load the data. This is our merge table; initially we don't have any data, and if we execute our first query we can see we don't have any data.
It also contains four columns: first is employee ID, then name, gender, and salary, so we have a total of four columns. Now go to SSDT to implement this package. Here we need to add one more package, so right-click on SSIS Packages and create a new package, then rename it to Merge. Inside the control flow panel we need to use a Data Flow Task, so we can drag and drop the Data Flow Task and double-click on it. Here we need to configure our source, transformation, and destination. Our source is an Excel file, so we need to use the Excel source; double-click on the Excel source and the Excel Source Editor window opens. Here we need to make the connection manager and select the Excel file path, so go to the merge folder; first of all I am going to make the connection with the male employees file. Click on Open, then click on OK. Here we need to select the Excel sheet; Sheet1 contains the data, so we select Sheet1. Go to Columns, we want all the columns, then click on OK. So we have done the male employees; we can rename this Excel source to Male. Second, we need to configure it for the females, so we can rename the other source Female. Now double-click on this Female source; here we need to make one more connection for the female employees file. Select the file, click on Open, then OK. Again we need to select the Excel sheet; Sheet1 contains the data. Go to Columns, we want all the columns, click on OK. So we have done our sources. Now we need to configure our transformation. Here we should use the Sort transformation, but I am not going to use it yet; I am going to use Merge directly. If we drag and drop Merge, we can see over here that it expects sorted input. This is Merge Input 1, click on OK; for Merge Input 2, if we connect it, we can see an error message. It shows that the inputs Merge Input 1 and Merge Input 2 have the sort key position assigned to their columns incorrectly.
It means we need to use the Sort transformation to sort the input data. So I am going to use the Sort transformation: we can drag and drop a Sort transformation for male and for female as well, and then connect them with the Male and Female sources. Now double-click on the Sort transformation for male; here we need to sort by employee ID, then click on OK. Then double-click on the one for female; we want to sort by employee ID, then again click on OK. So now we have sorted input data for male and female. Now I am going to use the Merge transformation, so we can connect with Merge Input 1. Here we can see we don't have an option for a third input; we have options only for Input 1 and Input 2, so it has the limitation that we cannot use more than two inputs. As we can see, the error has gone. Once we double-click on it, we can see the output column names from Merge Input 1 and Merge Input 2, so we are okay with it. Now we need to configure our destination. Our destination is an OLE DB destination, so we can drag and drop the OLE DB destination and connect it with Merge. Now double-click on the OLE DB destination; the OLE DB Destination Editor window opens, and we need to set the connection manager. As we have already made the connection, I am going to use that one. Here we have our merge table, so we select it. Go to Mappings; the mapping succeeded, now click on OK. So initially we didn't have any data, as we have seen; now execute the package. As we can see, around 19,000 rows have been inserted into our table; if we execute our SELECT statement, we can see 19,932 rows have been inserted. This is our case 1. In our case 2, without using the Sort transformation, we can sort the data at the source by setting the IsSorted property to True and setting the SortKeyPosition. So how can we implement it? Right-click, choose New SSIS Package, and rename the package to Merge.
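Before rebuilding the package, the case-one Merge of two sorted inputs can be sketched with Python's `heapq.merge`, which, like the Merge transformation, assumes both inputs are already sorted on the key (the employee tuples are made-up sample data):

```python
from heapq import merge

# (employee_id, name, gender) — each input already sorted by employee_id
males = [(1, "Amit", "M"), (4, "Raj", "M"), (6, "Vikas", "M")]
females = [(2, "Asha", "F"), (3, "Nita", "F"), (5, "Rani", "F")]

# interleave the two sorted inputs on the key column, like Merge Input 1/2
merged = list(merge(males, females, key=lambda r: r[0]))
```

Like the SSIS Merge, `heapq.merge` only interleaves; if either input is unsorted, the output ordering is wrong, which is why the Sort transformation (or IsSorted at the source) is required first.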
Here we need to drag and drop our Data Flow Task, then double-click on it. Now we need to configure our source, transformation, and destination. Our source is an Excel source, so we can drag and drop the Excel source and double-click on it. Click on New, browse and select the male employees file, and click on OK. Now select the Excel sheet, Sheet1; go to Columns, we want all the columns, click on OK. Now right-click on the source and choose Show Advanced Editor. Here, under Input and Output Properties, we have an option IsSorted; we can set this property to True. Once we set it to True, go to Output Columns; on the employee ID column we need to set the SortKeyPosition, and the SortKeyPosition should be 1. Click on OK. For the female employees we need to make a connection: browse to the female file, click on OK, select the Excel sheet, Sheet1, go to Columns, and we are okay with it. Now again right-click, Show Advanced Editor, Input and Output Properties, Excel Source Output; here we set IsSorted to True, then expand Output Columns, select employee ID, set SortKeyPosition to 1, and click on OK. So now we are getting the data already sorted at the source, and we can use it. Now we can use our Merge transformation: drag and drop the Merge transformation. For male, this is Merge Input 1, click on OK; then connect Merge Input 2. As we can see, if we click on this Merge, we are getting the output column names from Merge Input 1 and Merge Input 2. We want to load this data into our OLE DB destination, so we can set up the OLE DB destination and connect it with Merge. Click on it, click on New; as we have already made the connection, I am going to use that one, and again we are going to load the data into our merge table. Go to Mappings; the mapping succeeded, click on OK. So we have done our package. Now go to SSMS; here we need to truncate our table, and as we can see, we don't have any data in this table. So I am going to execute this package; the package has been executed successfully, and now we can stop this package.
Go to SSMS and execute our SELECT statement; as we can see, 19,932 rows have been inserted successfully into our table. So this is our case 2. I hope you have understood how we can use the Merge transformation with the Sort transformation and without using the Sort transformation. Thank you so much for watching this video; if you like this video, please subscribe our channel to get many more videos. Thank you so much. Hello, welcome to SSGunitech, and this is a continuation of the SSIS tutorial. So today we are discussing the Union All transformation and the differences between Union All and the Merge transformation. Let's start with Union All. What is the Union All transformation? The Union All transformation in SSIS is used to combine data from multiple sources — and those sources could be Excel files, flat files, multiple SQL Server tables, or anything else — and produce one output that can be used in your destination table. Next, the Union All transformation doesn't follow any specific order while merging the data, and no sorting is applied at all. So we mainly use the Union All transformation when the transformation input is not sorted: as we have discussed in our last video, the Merge transformation needs data sorted before using it. Next, the combined output doesn't need to be sorted. And next, when we have more than two inputs, we cannot use the Merge transformation; we have to use the Union All transformation. So these are the cases where we need to use the Union All transformation. This is our case study: as we can see, we have three input sheets. The first sheet contains the information of male employees, the second of female employees, and the third of unknown. By using the Union All transformation we want to merge the data of the three sheets and then load it into our OLE DB destination. Let's have a look at the Excel sheets. This is our female employees sheet; as we can see, it has four columns: employee ID, name, gender, and salary. Next is the male employees sheet.
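Before walking through the sheets, the Union All behavior described above — any number of inputs, no sorting requirement, one combined output — can be sketched in Python (the rows are illustrative sample data):

```python
from itertools import chain

# (employee_id, name, gender, salary) — unsorted inputs are fine for Union All
male = [(4, "Amit", "M", 50000), (1, "Raj", "M", 52000)]
female = [(2, "Asha", "F", 60000)]
unknown = [(3, "Sam", "U", 55000)]

# Union All: more than two inputs allowed, rows pass through in arrival order
combined = list(chain(male, female, unknown))
```

Note that, unlike the Merge sketch earlier, nothing here checks or requires sorted input, which is exactly why Union All is cheaper.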
Again we have the same number of columns. Now the unknown employees sheet; as we can see, we have employee ID, name, gender, and salary. So all three sheets contain the same number of columns, and the data types are also the same: in case of employee ID we have integer, in case of name we have string data, in case of gender we again have string data, and in case of salary we have integer data. Now let's have a look at the SQL Server table where we want to load the data. This is our table; as we can see, it has employee ID, name, gender, and salary, so it has a total of four columns. Now I'm going to SSDT to implement this package. Here I want to create one more package, so right-click on SSIS Packages, choose New SSIS Package, and then rename the package to UnionAll. This is our Union All package. Now we have to use a Data Flow Task inside the control flow panel, so let's drag and drop the Data Flow Task and double-click on it. Once we double-click on the Data Flow Task, it moves to the data flow panel, and inside this data flow panel we have to configure our source, transformation, and destination. As discussed, we have Excel files, so we have to use the Excel source. Let's drag and drop three Excel sources: the first is for male, then female, and then unknown, and we can rename them Male, Female, and Unknown. I want to set up our source for male, so double-click on Male; as we can see, the Excel Source Editor window opens. Click on New; here we need to select our Excel file path, so browse inside the UnionAll folder and select the male file, then Open. First row has column names — we are okay with it, click on OK. Here we need to select the sheet where the data is available; the data is available in Sheet1, so we select Sheet1. Then go to Columns; we want all the columns, so click on OK. As we can see, the error has gone. Now double-click on this Female source.
We need to select our female employees sheet and click on Open; here we select the Excel sheet, Sheet1 contains the data, check Columns, and click on OK. Now we need to configure the unknown employees: here we can click on New, then select our unknown employees file and click on Open. Here again we need to select our sheet; Sheet1 contains the information, so select Sheet1, then go to Columns. Inside Columns, as we can see, we have employee ID, name, gender, and salary, so we can click on OK. So we have done our sources. Now we need to configure our transformation, and our transformation is the Union All transformation, so drag and drop the Union All transformation and then connect it with Male, then Female, and then Unknown. Now double-click on this Union All; as we can see, the output column names are employee ID, name, gender, and salary, and the input columns come from Union All Input 1, Union All Input 2, and Union All Input 3, so we are getting the inputs. Now we need to set up our destination. Our destination is a SQL Server table, so we have to use the OLE DB destination; we can drag and drop the OLE DB destination, connect it, and then double-click on it. Here we need to set up our OLE DB connection manager, so click on New; as we have already made the connection, I am going to use that one. Here we need to select our table; the table name is UnionAll, so we select the UnionAll table. Now go to Mappings; the mapping succeeded, so click on OK. Initially we don't have any data; now I am going to execute the package, and we can see 19,951 rows have been inserted into our table. Now go to SSMS and execute our SELECT statement: 19,951 records have been inserted successfully. So next, what are the differences between Union All and the Merge transformation? First: data should be in sorted order before applying the Merge transformation, while in case of the Union All transformation there is no need for sorted data.
Next, the output data will be sorted in case of the Merge transformation, but the output data will not be sorted in case of the Union All transformation. Next, as we have discussed, the Merge transformation can accept only two inputs, while in case of the Union All transformation we can use more than two inputs. Next, Merge is bad for performance because it accepts only sorted data, while the Union All transformation is good for performance compared to the Merge transformation, because there is no need for sorted input. So thank you so much for watching this video; if you like this video, please subscribe our channel to get many more videos, and press the bell icon to get notifications of our newly uploaded videos. Hello, welcome to SSGunitech. Susheel this side, and this is a continuation of the SSIS tutorial. Today we want to discuss the Aggregate transformation and the differences between SQL aggregates and SSIS aggregates. So let's start with the Aggregate transformation. What is the Aggregate transformation? It is used to perform aggregate operations on groups in a data set: if we have a data set that contains many columns and we want to aggregate by any specific column, we can use the Aggregate transformation to do exactly that. The aggregate functions available are count, sum, average, and so on; these functions are available in SQL Server as well as in SSIS. The Aggregate transformation has one input and one or more than one output. Let's move to our case study. This is our case study: as we can see, we have an input table and it contains phone number information. We have ID, phone type, and phone number; in ID we have 1, 2, 3, and in phone type we have business phone numbers, cell phone numbers, and home phone numbers. So we have three types of phone numbers, and we want to aggregate on ID and on phone type: first of all we want to aggregate on phone type, and second on ID. Let's move to SQL Server Management Studio.
Here we can see we have the phone aggregate table with the same metadata, and we want to do the aggregation using the Aggregate transformation. As we can see, first we want to aggregate by phone type; if we execute our first query, we can see business phone has a count of three, cell phone again has a count of three, and home phone has four. Second, we want to aggregate by ID; if we execute our second query, we can see ID 1 has a count of 2, ID 2 has 3, and ID 3 has 5. We want to do this aggregation in SSIS, so let's move to SSDT. We need to create one more package: right-click on SSIS Packages, create a new package, then rename the package to Aggregate. Here we need to drag and drop our Data Flow Task, then double-click on it; it moves directly to the data flow panel, where we need to configure our source, transformation, and destination. Our source is an OLE DB source, so go to Other Sources and drag and drop the OLE DB source. Now we need to configure it: double-click on the OLE DB source and the OLE DB Source Editor window opens. Now we configure our connection manager; click on New. We have already made a connection here, but let's delete this connection and create a new one: click on New, then select the server name — this is our server name — and I'm going to use Windows authentication, so there is no need to enter a user ID and password. Here we select our database name; the database is SSIS Test. Now we can test our connection — connection succeeded — and click on OK. So we have done the connection manager. Now we need to select the table; as we can see, we have already typed the query, so we can copy this query, change the data access mode to SQL command, and paste the query here. Now go to Columns; we want all the columns, so click on OK. So we have done our source; now we need to configure our transformation.
Our transformation is the Aggregate transformation. Inside the Common section of the toolbox we can drag and drop the Aggregate transformation and connect it with the source. Now double-click on this Aggregate. Here we have the columns; first of all we want to aggregate by phone type, so we select phone type and select phone number. Here we can see we have input columns, output aliases, and operations. We want to group by phone type, so our first operation is Group by; for our second operation we want a count, so we change it to Count. Now click on OK. So we have done our aggregation by phone type. Now we need to set up our destination; we don't want to load the data into any destination, we just want to check our data, so we drag and drop a Multicast, go to Aggregate, and connect it with the Multicast. Here we need to enable our data viewer, so enable the data viewer, and now I'm going to execute this package. As we can see, the phone types business, cell, and home have counts 3, 3, and 4. Now move to SQL Server Management Studio and execute our first query; once we execute it, we can see we are getting the same data as expected. Now we can stop our package. Here we can see it doesn't support any error output; if we check, we don't have any error output, so the Aggregate transformation doesn't support an error output. Next: the Aggregate transformation has one output, and in advanced mode we can configure more than one output, each containing a different aggregation. First we have done phone type; second, we want to aggregate by ID. Double-click on this Aggregate; here we can see we have an Advanced option, so click on Advanced. As we can see, we have only one aggregate; here we can configure our second aggregate, so we add Aggregate 2, and this one is for ID. Here we configure our second aggregation: this time we want to group by ID, and we want to count the phone number, so we select phone number with the Count operation. Click on OK.
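The two aggregations configured here — a count by phone type and a count by ID out of one input pass — can be sketched in Python; the sample rows below are made up to match the counts shown in the video (business 3, cell 3, home 4, and IDs 1/2/3 with counts 2/3/5):

```python
from collections import Counter

# (employee_id, phone_type) — one input data set
rows = [
    (1, "Business"), (1, "Cell"),
    (2, "Business"), (2, "Cell"), (2, "Home"),
    (3, "Business"), (3, "Cell"), (3, "Home"), (3, "Home"), (3, "Home"),
]

# one pass over the input, two aggregate outputs — like the advanced mode
by_type = Counter(phone_type for _, phone_type in rows)
by_id = Counter(emp_id for emp_id, _ in rows)
```

In T-SQL this would take two separate GROUP BY queries, which is the SQL-vs-SSIS difference the video points out.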
Now we need to drag and drop one more Multicast, and here we can see we have one more output, so we can select it: this one is for ID and this one is for phone type. We need to enable the data viewer for ID as well, then execute our package. As we can see, this output is for ID and this one is for phone type, so we are getting multiple aggregates at a single time. This is the difference between SQL Server and SSIS: in SSIS we can do multiple aggregations in a single pass, but in SQL Server we cannot do multiple aggregations in a single statement. We can also use Aggregate to load unique data by applying Group by to all columns. So thank you so much for watching this video; if you like this video, please subscribe our channel to get many more videos. Thank you so much. Hey guys, welcome to SSGunitech. Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Row Sampling and Percentage Sampling transformations. So let's start: what is row sampling and what is percentage sampling? These transformations in SSIS take the whole data from the source and randomly extract a selected number of rows from it. Next, they return two outputs: one for the selected data and the other for the unselected data. For example, if we have 1,000 rows in our table and we want to extract only five rows from it, it will return two outputs: the first will return only 5 rows and the second will return 995 rows. Next, in row and percentage sampling, it will block the source data and write the random sample, so it's a fully blocking transformation, and it is used only for testing purposes. We also have an option to fix the random sample on every execution by checking the random seed option, and this is useful if the source data is not already in a database: for example, if we have a flat file source with millions of records and we want to extract only 1,000 records to verify the data, there is no need to load that data into a table; we can use a Row Sampling or Percentage Sampling transformation to extract only the selected data according to our requirement.
So next, what is the difference between row sampling and percentage sampling? In the Row Sampling transformation we can set how many rows we want for the sample, and in Percentage Sampling we can set the percentage of the source data, that is, what percentage of the rows we want for the sample. This is the only difference between row sampling and percentage sampling. So let's develop a package where we can understand row sampling and percentage sampling. Here we need to add one more package, create a new package, and then rename it to RowSampling. Then we use a Data Flow Task inside the control flow panel and double-click on it. Inside the data flow panel we set our source, transformation, and destination. Our source is an OLE DB source; as we can see in SSMS, we have a sampling table and it has 1,000 records, and we want to extract only 5 records from this table for our sampling. Let's configure our source: double-click on the OLE DB source, then set up the OLE DB connection; as we have already made the connection, I am going to use that one. Then we select our table — this is our sampling table — and go to Columns; we want to extract all the columns, so click on OK. So we have done our source. Now we need to configure our transformation; the Row Sampling transformation is available inside Other Transformations, so we can drag and drop the Row Sampling transformation and connect it with the source. Once we connect it with the source, we can double-click on the Row Sampling transformation. Here we can specify the number of rows; we want five rows, so we set it to 5. Here we can see the sample output name; this is our sample output and it will return only 5 rows. Second is the unselected output name; in the unselected output we will get 995 rows. Then go to Columns; we want all the columns, click on OK. So we have done our row sampling.
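The row-sampling behavior — n random rows in one output, the rest in the other, repeatable when a seed is fixed — can be sketched like this (a plain-Python analogy, not the SSIS internals; the function name is my own):

```python
import random

def row_sampling(rows, n, seed=None):
    # a fixed seed plays the role of the "use the same random seed" option:
    # the same seed always picks the same sample
    rng = random.Random(seed)
    picked = set(rng.sample(range(len(rows)), n))  # sample row positions
    selected = [r for i, r in enumerate(rows) if i in picked]
    unselected = [r for i, r in enumerate(rows) if i not in picked]
    return selected, unselected
```

With 1,000 input rows and n = 5, the two outputs hold 5 and 995 rows, matching the example in the video.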
Now, we don't want to load the data into any destination, so we can use a Multicast to check our data. Here, as we can see, we have two outputs; we want to see the selected sampling output, click on OK. Now we can use one more Multicast and configure our unselected output, then enable the data viewers, and now we can execute the package. For the selected sample we will get only five rows, and as we can see, we are getting only five rows; in our unselected output we are getting 995 rows. On every execution we will get different data: if we execute one more time, we can check the data, and our data has changed. Now we can fix this output by selecting the random seed option; click on OK. Now I am going to execute the package again, and if I execute one more time it returns the same records. This was row sampling, and we can set up percentage sampling as well. How can we set it up? I am going to disable this data flow task, then use one more data flow task, and this one is for percentage sampling; double-click on it. Here we need to set up our source, transformation, and destination. Our source is an OLE DB source, so we drag and drop the OLE DB source and configure it; we have already made the connection, so I am going to select the table — this is our table — and click on OK. Now we select Percentage Sampling, so we drag and drop it and connect it to our source. Double-click on this Percentage Sampling; here we can see the percentage of rows. If we are getting a thousand rows from our source and we want around 10 rows, we can set our percentage to 1. Here again we have the option to set a random seed, so on every execution it will return the same records. Now, I am not going to load this data into any destination, so we can use Multicasts: this one is for the selected sample and this one is for the unselected output.
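Percentage sampling differs only in taking a percentage instead of a row count; a per-row coin flip is a reasonable sketch of why the result is only approximately one percent (again a plain-Python analogy, with a made-up function name):

```python
import random

def percentage_sampling(rows, percent, seed=None):
    rng = random.Random(seed)
    selected, unselected = [], []
    for row in rows:
        # each row is independently selected with the given probability,
        # so the selected count is only approximately percent% of the input
        (selected if rng.random() * 100 < percent else unselected).append(row)
    return selected, unselected
```

This is consistent with the demo: 1% of 1,000 rows returned 12 rows rather than exactly 10.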
Then we can enable the data viewers here, and now I am going to execute this. Once I execute the package, we can see we are getting 12 rows here, and here we are getting 988 rows, so it returns approximately one percent of the records from the whole input. Then we can stop this package. I hope you have understood how we can use row sampling and percentage sampling according to our requirement. In a real-time scenario this is used for testing purposes: from whatever source we have, we can use these transformations to pick random records for our testing purpose. Thank you so much for watching this video; if you like this video, please subscribe our channel and press the bell icon so you will get notifications of our newly uploaded videos. Thank you so much. Hello, welcome to SSGunitech. Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Character Map: what is the Character Map and how can we use it? The Character Map is used to transform input characters. This transformation operates only on column data with a string data type; if we have any data type other than string and we want to use the Character Map to transform that input data, we cannot. Next, the Character Map transformation can convert the column data in place, or add a new column to the transformation output and put the converted data into that new column. If we want to change our string columns' data to upper case or lower case or many more formats, it helps us; mainly we use lower case and upper case, and the rest of the formats are language specific. Here we can see we have a flat file source and an OLE DB destination, so we want to transform our input data using the Character Map and then load it into the destination. Let's have a look at the flat file. This is our flat file; as we can see, we have first name, middle name, and last name, and we want to convert these three fields because these fields contain string data.
We need to convert them to upper case and lower case. Let's have a look at the table; go to SSMS. Here we can see we have the same number of columns: ID, first name, middle name, last name, and date of birth. If we execute our first query, we can see we don't have any data as yet. Now go to SSDT to create the package: right-click on SSIS Packages, then New SSIS Package, and here we rename this package to CharacterMap. This is our character map package. Now we need to use a Data Flow Task inside the control flow panel, then double-click on it; it moves directly to the data flow panel, where we need to use our source, transformation, and destination. Our source is a flat file source, so we need to use the Flat File Source: drag and drop it, then double-click on it, and the Flat File Source Editor window opens, where we need to make the connection. As yet we don't have any flat file connection manager, so create a new connection. Once we click on New, the Flat File Connection Manager Editor window opens; here we select our flat file. We can browse — this is our folder and this is our file — select this file, then click on Open. This is a delimited file; go to Columns, it is comma delimited, and we are okay with it. We have ID, first name, middle name, last name, and date of birth, so we have a total of five columns. As we can see, we don't have pure upper case or lower case; we have mixed upper case as well as lower case, but we want to load the data in upper case into our database. Click on OK, check "Retain null values from the source", go to Columns, we want all the columns, then click on OK. We have done our source; now we need to configure our transformation. Our transformation is the Character Map, so we drag and drop the Character Map transformation and connect it with the source. Now double-click on this Character Map.
Here we can see first name, middle name, and last name; we want to convert these three columns. Here we have the option to create a new column or replace our existing column; we want to add a new column, we don't want to replace the existing one. For the operation, here we can see we have a lot of options, but we want to use Uppercase, so select it and click on OK; then again Uppercase for the middle name, and for the last name we want Lowercase, so we select Lowercase and click on OK. Here we have the output alias names: we have "Copy of First Name", so we can rename it with New; for the middle name we also use New, and for the last name we also use New. Now click on OK. Once we click on OK, we need to configure our destination. Our destination is an OLE DB destination, so we drag and drop the OLE DB destination and connect it with the Character Map. Now double-click on the OLE DB destination and create a new OLE DB connection; we have already made a connection, but let's delete this connection and create a new one. Here we select our server name — this is our server name — and I'm going to use Windows authentication, so there is no need to enter a user ID and password. Here we select our database name — this is our database name — and we can test our connection: connection succeeded. Now click on OK, and again OK. So we have done our connection manager; now we need to select our table. Our table is the character map table. Go to Mappings; as we can see, by default it has been mapped with the old columns, so we need to map with the new columns: first name goes to first name new, middle name goes to middle name new, and last name goes to last name new. Now click on OK. So we have done our package; now we execute it, and five rows are inserted into our table. Go to SSMS and execute our SELECT statement: as we can see, first name and middle name contain upper case, and our last name is in lower case, so we can use this according to our requirement. These are the other formats available, which are language specific.
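The mapping configured above — new upper-case columns for first and middle name, a lower-case column for last name, keeping the originals — can be sketched like this (the "New" column names are my shorthand for the output aliases, and the sample row is made up):

```python
def character_map(rows):
    out = []
    for row in rows:
        row = dict(row)  # keep the original columns, add new ones
        row["FirstNameNew"] = row["FirstName"].upper()
        row["MiddleNameNew"] = row["MiddleName"].upper()
        row["LastNameNew"] = row["LastName"].lower()
        out.append(row)
    return out
```

This mirrors the "new column" choice in the editor: the converted values land in the new columns, and it is the new columns we map to the destination table.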
Character Map also allows us to select multiple operations for a single column, but there are certain limitations where combinations cannot be used together: for example, we cannot use Uppercase and Lowercase at the same time; hiragana and katakana cannot be used at the same time; half width and full width cannot be combined; and Simplified Chinese and Traditional Chinese cannot be used at the same time. So thank you so much for watching this video; if you like this video, please subscribe to our channel to get many more videos. Thank you so much.

Hey guys, welcome to SSUniTech, Susheel Singh here, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Cache Transformation: what it is and how we can use it in a real-time scenario. So let's start. This is one of the data flow transformations; it uses a memory cache or a file cache, as configured in the cache connection manager, and pushes data from there, and it is a very useful transformation when we talk about performance improvement. The Cache Transformation in SSIS is used to read data from a wide variety of sources, such as flat files, Excel sheets, or ADO.NET sources, and save that data into memory or into a cache file; the extension of a cache file should be .caw. We have a flat file source; as we can see it has some records, and we want to save those records in memory first, and second we will store them in a .caw file. So go to SSDT to implement the package. Here I want to add one more package: right-click on SSIS Packages, then New SSIS Package, and rename this package to Cache. Here we need to use a Data Flow Task, so drag and drop it and double-click on it, and it moves directly to the Data Flow panel. Inside this panel we need to configure our source
, transformation, and destination. Our source is a Flat File Source, so drag and drop it and double-click on it. Here we need to select our flat file connection manager; as we don't have any connection manager right now, click on New, and the Flat File Connection Manager Editor window opens. Here we can browse to the flat file folder, select our first file, and click on Open. "Column names in the first data row": we are okay with it, and it's a delimited file. Go to Columns: Id, FirstName, MiddleName, LastName, and DateOfBirth, so we are getting the records as expected; click on OK, select the checkbox, go to Columns, we want all the columns, then click on OK. Our source is done; now we need to configure our transformation, which is the Cache Transformation. Go to the other transformations, where we can see the Cache Transform: drag and drop it, connect it with the source, and double-click on it. It requires a cache connection manager, so we need to click on New. Here we have two options: first, "Use file cache", if we have a file and want to use it; otherwise go to Columns, where we can see we are getting all the columns. We don't want to get the data from a file; we want to get it from our source, the flat file source. Here we need to select one column as the index column, so I set Id as the index column with index position 1, then go to Mapping (the mapping is there) and click on OK. Now our flat file data will be loaded by this Cache Transform, and since it uses memory, the flat file source data will be loaded into memory; it is not going to be loaded into any file. In our second scenario we need to load that data into a cache file instead, so I add one more Data Flow Task and double-click on it. Here again we need to configure our source, transformation, and destination. Our source is a Flat File Source: drag and drop it and double-click on it. We have already made the flat file connection manager, so I use that one; go to Columns, we want all the columns, click on OK. Now we configure our transformation, again the Cache Transform: drag and drop it, connect it to the source, then double-click on it. Here we need to add one more cache connection manager, and this time we want to use a file cache, so we need to specify the file: browse inside the CacheTransformation folder, set the file name, and give it the .caw extension, so it will generate a .caw file once we run this package. Click on Open, go to Columns, we want all the columns, and again set the index position on Id to 1, then click on OK; go to Mapping, the mapping succeeded, and click on OK again. This time I want to run this package and watch the folder: this is our CacheTransformation folder, and it is empty right now. Once I run the package, we can see five rows have been moved to our .caw file. Stop the package and go to the folder: the .caw file has been generated. Now we can open this file in Notepad; I will open it in Notepad++, and we can see five rows, first our column names and then our data. So what is the real-time use of the Cache Transformation?
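As an analogy only (SSIS's .caw format is binary and proprietary; this is not it), the two scenarios above, caching rows keyed by an index column and persisting that cache to a file for reuse, can be sketched with pickle. The file name is illustrative:

```python
# Analogy for the Cache Transformation: key the rows by the index column
# (Id, index position 1) and persist the cache to a file for later reuse.
import os
import pickle
import tempfile

def write_cache(rows, index_col, path):
    cache = {row[index_col]: row for row in rows}  # keyed on the index column
    with open(path, "wb") as f:
        pickle.dump(cache, f)

def read_cache(path):
    with open(path, "rb") as f:
        return pickle.load(f)

rows = [{"Id": i, "FirstName": f"name{i}"} for i in range(1, 6)]  # five rows
path = os.path.join(tempfile.gettempdir(), "flatfile_cache.pkl")
write_cache(rows, "Id", path)
print(len(read_cache(path)))  # 5
```

The first scenario (memory cache) corresponds to keeping the `cache` dict in the running process; the second (file cache) corresponds to the dump/load round trip.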
By default, the Lookup Transformation in SSIS uses an OLE DB connection manager for the lookup reference table. We can use other sources as well, like a flat file, an Excel source, or any other source except an OLE DB source, but we cannot use those sources directly; we have to use them indirectly via the Cache Transformation. So we use the Cache Transformation to save the reference table in .caw format and then perform the lookup using the cache connection manager. For example, if we have two flat files and we want to compare our first file with our second file, we can compare them by using the Lookup Transformation. And if we want to use cached data in multiple packages, we can store the cached data in a .caw file, which can then be used in other packages. We will implement such a package when we discuss the Lookup Transformation. So thank you so much for watching this video; if you like this video, please subscribe to our channel to get many more videos, and press the bell icon to get notifications of our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSUniTech, Susheel Singh here, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Lookup Transformation, a very useful transformation in SSIS. Here we will understand the basics of lookup; in our upcoming videos we will cover advanced lookup. So let's start with lookup. What is the Lookup Transformation? Lookup is a very useful transformation, used to perform lookup operations by joining input values with data tables or data sets. Basically, it compares source data with our existing table data and filters the matching and non-matching rows, so we can say the Lookup Transformation mainly returns two outputs: the first for the matching rows and the second for the non-matching rows. When we talk about the reference data set, we can use two types of connection managers: first the OLE DB connection manager and second the cache connection manager, and we discussed the Cache Transformation in our last video.
If you haven't watched the last video, I would strongly recommend watching it before going forward. The Lookup Transformation tries to perform an equi-join between values in the transformation input and values in the reference data set. As we can see, we have a comma-delimited flat file containing sale information: ItemDescription, Quantity, Value, and SalePercent, so it has four columns in total. Go to SSMS: this is our reference table, and when we execute, we can see it contains item code information, with ItemCode and ItemDescription. In our scenario we need to compare the ItemDescription in our flat file with the ItemDescription in our reference table; if it matches, we load that row into our ItemSaleLookupBasic table, which has four columns: ItemCode, Quantity, Value, and SalePercent. Before creating the package, I would like to explain what happens when the equi-join does not occur. If there is no matching row in our reference data set, it means no join occurs, and by default that returns an error; however, we can configure the Lookup Transformation to redirect such rows to the no-match output. For handling rows with no matching entries we have four options in total. First is "Fail component", and this is the default option: when a row passes through the data flow, the Lookup Transformation fails if it finds a row with no match in the lookup table. Our second option is "Ignore failure": when a row passes through the data flow, the Lookup Transformation continues its processing even if it finds a non-matching row in the lookup table. Our third option is "Redirect rows to no match output": in this option, rows that do not find any matching row in the lookup table are directed to the no-match output, and in real time we use this option most often. In our last option, "Redirect rows to error output", the Lookup Transformation directs non-matching rows to the standard error output.
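The four no-match options above can be sketched in plain Python (this is an illustration, not the SSIS engine), holding the reference data set as a dict keyed on the join column:

```python
# Sketch of the four "no matching entries" options of the Lookup Transformation.
def lookup(rows, reference, key, mode="fail"):
    matched, no_match, errors = [], [], []
    for row in rows:
        if row[key] in reference:
            matched.append({**row, **reference[row[key]]})  # equi-join hit
        elif mode == "fail":            # 1. Fail component (default)
            raise ValueError(f"no match for {row[key]!r}")
        elif mode == "ignore":          # 2. Ignore failure: keep processing
            matched.append(dict(row))
        elif mode == "no_match":        # 3. Redirect rows to no match output
            no_match.append(row)
        else:                           # 4. Redirect rows to error output
            errors.append(row)
    return matched, no_match, errors

ref = {"pen": {"ItemCode": 101}, "book": {"ItemCode": 102}}
rows = [{"ItemDescription": "pen", "Quantity": 2},
        {"ItemDescription": "lamp", "Quantity": 1}]
m, nm, _ = lookup(rows, ref, "ItemDescription", mode="no_match")
print(len(m), len(nm))  # 1 1
```

With `mode="no_match"` the unmatched "lamp" row lands in the no-match output instead of failing the component, which is the behavior used in the package below.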
Let's move to SSDT to create the package. I want to add one more package: right-click on SSIS Packages, then New SSIS Package, and rename this package to LookupBasic. Then we have to use a Data Flow Task: drag and drop it and double-click on it. Inside the Data Flow panel we need to configure our source, transformation, and destination. Our source is a Flat File Source: drag and drop it and double-click on it. We need to select a flat file connection manager; we don't have any connection manager as yet, so click on New. Here we need to select the file name: browse, select our file, and click on Open. Now go to Columns: ItemDescription, Quantity, Value, and SalePercent; we are getting all the columns, so click on OK. We want all the columns, so we are okay with it; click on OK. Our source is done; now we need to configure our transformation, which is the Lookup Transformation: drag and drop it, connect it with the source, then double-click on it. Here we can see cache mode, connection type, and many more options; we will discuss these options in our upcoming videos. Here we select the connection manager for our reference table: under connection type we have only two kinds of connection manager, the cache connection manager and the OLE DB connection manager, as we have discussed. This time I want to use the OLE DB connection manager. Go to Connection: here we need to select our OLE DB connection manager, so click on New; as we have already made the connection, I use that one. Select the table, ItemLookupBasic. Now go to Columns: as we can see, this side is our source input column and that side is our reference table column. We want to compare ItemDescription with ItemDescription, so we connect them, and we want to get ItemCode from the reference table, so we select that checkbox. Now go to the General tab: here, as we can see, we specify how to handle the rows with no matching entries. As discussed, the default is "Fail component", and we also have "Ignore failure", "Redirect rows to error output", and "Redirect rows to no match output"; I want to use "Redirect rows to no match output". Click on OK; we have done our Lookup Transformation. Now we need to configure our destination: we want to load the data into our SQL Server table, so we have to use an OLE DB Destination, which we connect with the lookup. Here we get the option of which output we want to send to the OLE DB Destination: we want to move only the match output, so select it and click on OK. The no-match output we want to load into a flat file, so drag and drop a Flat File Destination as well and connect it to the lookup no-match output. Now we need to configure our destinations; first of all I would like to configure the OLE DB Destination. Double-click on it; here we need to select the table, and our table is ItemSale, so select it and go to Mapping. As we can see, Quantity mapped successfully; we map ItemCode with ItemCode, Value with Value, and, as we also have SalePercent, SalePercent with SalePercent. Now click on OK; we have done our OLE DB Destination. Now we need to configure the Flat File Destination: double-click on it and click on New. We want a delimited file, so click on OK; browse to Destination, then the Lookup folder, and specify the file name. This is for no-match, so I name it NoMatch, then click on Open. "Column names in the first data row": select the checkbox, go to Columns (ItemDescription, Quantity, Value, and SalePercent), and click on OK. Go to Mapping: the mapping
succeeded, so now click on OK. Our package is done; now we need to execute it and check our output. As we can see, 64 rows should be inserted into our table and eight rows should be inserted into our flat file. Stop the package, go to SSMS, and execute our SELECT statement: as we can see, 64 rows have been inserted successfully. Now go to the flat file: Destination, then the Lookup folder, and open the NoMatch file. As we can see, we are getting eight rows; these item descriptions are not available in our reference table, which is why these rows moved to the flat file. So thank you so much for watching this video; if you like this video, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notifications of our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSUniTech, Susheel Singh here, and this is a continuation of the SSIS tutorial. Today we are going to discuss the types of cache mode in the Lookup Transformation. If you haven't watched the last video of this series, I would strongly recommend watching it, as it covers the basics of the Lookup Transformation. So let's start. There are three types of cache mode available in the Lookup Transformation: first full cache, second partial cache, and last no cache. Let's start with full cache. What is full cache? This option is used very often: the entire reference table is preloaded into the cache memory, and the Lookup Transformation performs the lookup from memory instead of the data set. This works well when we have a small number of records in the lookup table. We need to remember that the cache connection manager can be used only with this type of cache. In this case, when the package starts, the data is pulled and kept in a memory cache or file cache, and later only the cache is used, not the database; we might not get the latest data from the database. Second is partial cache mode. What is partial cache mode? In this option, the Lookup
Transformation starts with an empty cache. When a new row comes down the data flow, the Lookup Transformation first checks for matching values in its cache; if it's not found in the cache, it checks the lookup table, and if the match is found in the lookup table, the value is cached into memory for next time. So if the lookup table is very big, we can use this approach. The cache connection manager can't be used with this option, and the Advanced tab of the Lookup Transformation is enabled only in the case of partial cache, where we can configure the cache size. Initially the cache is empty, so for each value it first checks the cache; if not found, it goes to the database, and if found in the database, that value is stored in the cache for next time. The startup time is less than with full cache, but the processing time is longer, and it can also require a lot of memory. Our last option is no cache. In this option, the Lookup Transformation does not use the cache to store the lookup table at any stage; when a new row comes from the data flow, the Lookup Transformation checks directly in the lookup table for the matching values, so every time it gets the data from the database. Go to SSDT: this package was created in our last video, so double-click on the Lookup Transformation and we can see the cache mode. In the case of full cache, as we can see, the cache connection manager option is enabled; once we move to partial cache, the cache connection manager is disabled, and once we move to no cache, the cache connection manager is also disabled. As we have discussed, the cache connection manager is enabled only when the cache mode is full cache. Next, when we move to partial cache, our Advanced tab is enabled: as we can see, we can specify the cache size, and we can also enable the cache for rows with no matching entries. When we move back to full cache, the Advanced tab is disabled and we cannot modify anything, and in the case of no cache, as we can see, it's disabled as well. Here we can see this is our
source table, which has five rows in total, and this is our lookup table, which has three rows. Once we talk about full cache, the whole of the data from the lookup table is cached into memory: as we can see, in the case of full cache all three rows have been cached into memory. When we talk about partial cache, the source starts from id 1; it checks our partial cache, which is initially empty, so it goes to the lookup data table and picks the value from there: 1, John, USA. For serial number two, again the id is 1, so id 1 is checked in our partial cache; it's found, so there is no need to go to the lookup table, and it is picked directly from the partial cache. When we move to serial number three, the id is again 1, so again there is no need to go to the lookup table; it is picked directly from the partial cache memory. When we move to serial number four, the id is 2, and in the partial cache memory we don't have id 2; it checks first in the partial cache memory, and since the value is not found there, it goes to the lookup table and picks the value from there: 2, Monis, Delhi, so our second row has been cached in memory. When we move to serial number five, the id is 3; first it checks the partial cache memory, it's not found, so it goes to the data table, where we have 3, and 3 is cached. So this is our partial cache. When we move to no cache: in the case of no cache, it again starts from id 1. Initially nothing is kept, so it goes directly to the lookup data table and picks the value from there, John, USA, and that row is kept as the most recent value. When we talk about serial number two, in the source table we have id 1, and the most recently kept row is id 1, so it picks the value directly without going to the table. When we talk about serial number four, it has id 2, while the last value kept is id 1, so it goes to the lookup data table and replaces the existing value in memory: 2, Monis, Delhi. When we talk about serial number five, we can see the id is 3; 3 is not the value being kept, so it goes directly to the lookup data table, 3 is available there, it picks it and replaces the existing value, so 3 is kept as the last updated value. I hope you have understood how we can use full cache, partial cache, and no cache. So thank you so much for watching this video; if you like this video, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notifications of our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSUniTech, Susheel Singh here, and this is a continuation of the SSIS tutorial. Today we are going to discuss lookup scenarios, and this is very important when we talk about interviews. In total we have seven scenarios, and we will discuss them one by one. So let's start with our first scenario: the equi-join. As we can see, in our source table we have four rows, and in our lookup table we again have four rows; so what would be the output for the matching and non-matching rows? Let's go to SSMS, where these are our tables; now we can execute and check. This is our source table and this is our lookup table; we are getting the same number of records, and we want to apply the lookup on the basis of Name, as we can see Name is highlighted. Let's go to SSDT. Here we need to create one more package: right-click on SSIS Packages, then New SSIS Package, and rename this package to LookupScenario. Now we need to use a Data Flow Task; double-click on it, and inside this Data Flow panel we need to configure our source, transformation, and destination. Since we have a SQL Server table, we have to use an OLE DB Source.
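Stepping back to the cache-mode walkthrough above, the three modes can be simulated to count trips to the database. The source ids (1, 1, 1, 2, 3) and the three-row lookup table mirror the example; the values for id 3 are placeholders, since the transcript only names the first two rows:

```python
# Simulating full, partial, and no cache; we count database trips per mode.
LOOKUP_TABLE = {1: ("John", "USA"), 2: ("Monis", "Delhi"), 3: ("Row3", "N/A")}

def db_fetch(key, trips):
    trips[0] += 1                                  # one trip to the database
    return LOOKUP_TABLE.get(key)

def run(mode, ids):
    trips = [0]
    cache = dict(LOOKUP_TABLE) if mode == "full" else {}  # full: preload all
    for k in ids:
        if mode == "full":
            cache[k]                               # served from memory only
        elif mode == "partial":
            if k not in cache:                     # check the cache first,
                cache[k] = db_fetch(k, trips)      # then the database
        else:                                      # no cache: only the most
            if cache.get("key") != k:              # recent row is kept
                cache = {"key": k, "row": db_fetch(k, trips)}
    return trips[0]

ids = [1, 1, 1, 2, 3]
print(run("full", ids), run("partial", ids), run("none", ids))  # 0 3 3
```

Full cache pays one up-front load and never returns to the database; partial cache goes back once per distinct id; no cache happens to match here only because the duplicate ids arrive consecutively, so the most-recent-row shortcut keeps working.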
Drag and drop the OLE DB Source and double-click on it. Here we need to select our OLE DB connection manager, so click on New; as we have already made the connection, I am going to use that one, and click on OK. Here we need to select our source table for case one: this is the Case1 source table, so select it, go to Columns, we want all the columns, then click on OK. Our source is done; now we need to configure our transformation, the Lookup Transformation: drag and drop it, connect it with the source, and double-click on it. Go to Connection; now we need to select our table, and this is our reference table for case one. Go to Columns: here we need to specify the lookup column; we want to apply the lookup on Name, so connect Name with Name, and select the columns we want. Go to General: here we specify how to handle rows with no matching entries, and we want "Redirect rows to no match output". Now click on OK; we have done our Lookup Transformation. Now the destination: we don't want to load the data into any destination, so we use Multicast transformations to check the data, one for the lookup match output (select it and click on OK) and a second for the lookup no-match output. Here we can enable the data viewers: right-click on each path, then Enable Data Viewer. Now save the package, then run it. As we can see, in our no-match output we are not getting any data, and in our match output we are getting all four rows: all four rows are redirected to the match output. This is our case one. In our case two, as we can see, there are more records in the source data: the source data has four rows and the lookup table has only two rows; so what would be the output for match and no-match? Go to SSMS (this is for case two) and execute the query: we can see in our source table we are getting four rows and in our lookup table two rows. Go to SSDT; here we need to change our table. Double-click on the OLE DB Source; we are going to check case two, so select the Case2 table and click on OK. Then in the Lookup Transformation we also change the table: go to Connection, select the Case2 reference lookup table, and click on OK. Now we can execute and check: once we execute the package, we can see two rows in our match output and two rows in our no-match output. This is our case two. Now move to case three: in case three, as we can see, there are more records in the reference table; in our source table we are getting only two rows, and in our lookup table four rows. So what would be the output for match and no-match? Go to SQL Server Management Studio and execute our SELECT statement for case three to check the data: in our source table we are getting two rows and in our lookup table four rows. Now go to SSDT; we need to change the table in the source. This is for case three, so select the Case3 source lookup table and click on OK; then double-click on the Lookup Transformation, and here again change the table to the Case3 reference table, then click on OK. Now run the package and check the output: once we execute, we can see in our lookup match output we are getting two rows, and in our lookup no-match output we are not getting any data. This is case three. Now move to our case four: in case four, as we can see, there are duplicate records in the source table. Go to SSMS and execute our SELECT statement for case four: as we can see, Eric's record appears twice. Go to SSDT; we need to change the table. This time it is for case four, so select the Case4 source table and go to the
Lookup Transformation, where again we need to change the table, this time to the Case4 reference table. Now we execute the package and check the output: as we can see, we are getting the four rows that we have in our source. So if we have duplicate values in our source, the Lookup Transformation passes those duplicate values through. Stop the package and go to case five. In our case five, as we can see, there are duplicate records in the reference data set: in our source data we are getting only three rows, in our lookup table four rows, and Eric's record appears twice. So what would be the output for the match output and the no-match output? Let's go to SSMS and execute our SELECT statement for case five: here we can see the source table has three rows and the lookup table has four rows. Now go to SSDT and change the table at the source: this is for case five, so select the Case5 source lookup table and click on OK. Now double-click on the Lookup Transformation; here we need to change the table as well, so go to Connection, change to the Case5 reference lookup table, and click on OK. Now execute the package: click on Start, and here we can see we are getting only three rows. So if we have duplicate values in our reference data set, the lookup returns only a single match for each source row, whereas if we have duplicate values in the source, the duplicates are returned. Now stop the package and go to case six. Case six is about case sensitivity: as we can see, in our source we have four rows in total, and in our lookup table four rows; in our source Eric's record is in uppercase, but in our reference lookup table we don't have it in uppercase. So what would be the output in our match output and no-match output? Let's go to SSMS. Now we can execute our SELECT statement for case six.
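Scenarios four and five above can be condensed into one sketch (plain Python, illustrative names and cities): duplicate rows in the source each come out of the match output, while duplicates in the reference data set yield only a single match per source row, taken from the first matching reference row:

```python
# Sketch of lookup behavior with duplicates on either side of the join.
def lookup_match(source, reference_rows, key):
    first_match = {}
    for r in reference_rows:            # keep only the first matching row
        first_match.setdefault(r[key], r)
    return [{**s, **first_match[s[key]]}
            for s in source if s[key] in first_match]

# case four: duplicate 'Eric' in the source -> both source rows come out
src = [{"Name": "Eric"}, {"Name": "Eric"}, {"Name": "Amy"}]
ref = [{"Name": "Eric", "City": "NY"}, {"Name": "Amy", "City": "LA"}]
print(len(lookup_match(src, ref, "Name")))      # 3

# case five: duplicate 'Eric' in the reference -> still one match per row
ref_dup = ref + [{"Name": "Eric", "City": "SF"}]
print(len(lookup_match(src, ref_dup, "Name")))  # 3
```

In both runs the row count equals the source row count; the duplicate reference row ("SF") never appears, mirroring the three-row output seen in case five.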
As we can see, we have four rows in total in our source and four rows in our lookup reference table. Now go to SSDT; here we need to change the table. Double-click on the OLE DB Source and select the table: this is for case six, so select the Case6 source table and click on OK. Now double-click on the Lookup Transformation, go to Connection, and here again change the table: this is for case six, so select the Case6 lookup table and click on OK. Now execute the package: as we can see, Eric's record moved to the no-match output, so we can say the Lookup Transformation treated it as case-sensitive. Now we need to change the mode: double-click on the Lookup Transformation, and I am going to change the cache mode, this time to partial cache; click on OK. Now execute the package again: this time we are not getting any row in the no-match output; all rows moved to our match output. Now change the cache mode to no cache: stop the package, double-click on the Lookup Transformation, select no cache, and click on OK. Now execute the package: here we can see we are getting Eric's record in our match output. So we can say that in the case of full cache the lookup behaves as case-sensitive, and otherwise it is case-insensitive. Now stop the package and go to our case seven. Case seven is entirely about null values: if we have null values in our source and our lookup table also has a null value, what would be the output for the match output and the no-match output? Go to SSMS and execute our SELECT statement for case seven: as we can see, null values are present in both the source and the lookup table. Go to SSDT; we need to change the table. Double-click on the OLE DB Source; we are going to check case seven, so select the Case7 source lookup table and click on OK. Then double-click on the Lookup Transformation; I am going to change the cache mode to full cache, then go to Connection, select the lookup reference table for case seven, and click on OK. Now execute the package: once we execute, we can see that in our match output we are getting all the values, and in our no-match output we are not getting any value. Stop the package, go to the Lookup Transformation, and change the cache mode, this time to partial cache, then click on OK. Now execute the package: as we can see, the null value moved to the no-match output. Now I am going to change the cache mode to no cache: double-click on the Lookup Transformation, change the cache mode to no cache, and click on OK. Now execute the package: as we can see, again the null value moved to the no-match output. So we can say that if we use full cache, two null values are treated as the same; once we change the cache mode to partial cache or no cache, two null values are treated as different. Now we can stop the package. I hope you now have a good understanding of the lookup scenarios; if you like this video, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notifications of our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSUniTech, Susheel Singh here, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Audit Transformation: what it is and when we can use it. So let's start. It includes data about the environment in which the package runs, and it also allows us to add extra columns with audit information like machine name, package name, user name, and many more columns. When can we use this transformation? When we want to check which user executed the package, what the execution time was, what the machine name was, what the package name was, and what the task name was; if we want such information, we have to use the Audit Transformation. Here we have a scenario: as we can see, we have a SQL Server table.
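The case-sensitivity result from scenario six and the NULL result from scenario seven have the same root cause, which this sketch illustrates: with full cache the comparison happens in memory, where equality is exact (so `None == None` holds but `'ERIC' != 'eric'`), while partial and no cache push the comparison to SQL Server, where the default collation is typically case-insensitive and `NULL = NULL` is never true. The two functions below are a hypothetical model, not SSIS code:

```python
# Full cache: in-memory equality (case-sensitive; two NULLs are equal).
def memory_equal(a, b):
    return a == b

# Partial / no cache: SQL-style comparison (case-insensitive default
# collation; NULL never equals NULL under three-valued logic).
def sql_equal(a, b):
    if a is None or b is None:
        return False
    return a.lower() == b.lower()

print(memory_equal(None, None), sql_equal(None, None))          # True False
print(memory_equal("ERIC", "eric"), sql_equal("ERIC", "eric"))  # False True
```

This is why switching the cache mode alone flipped which output Eric's uppercase record and the NULL row went to in the demos above.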
load data from sequence over table to flat file destination and between this we want to capture the audit information like machine name package name username and execution time so we have to use audit transformation so let's have a look of sequence over table so go to on ssms then we have cell information table now i wanna execute this query so once we execute this selected statement then we can see we have total six columns cell order number cell order date customer ID quantity value and country so we want to load this data in our flat file so go to on ssdt here we need to create one more package so right click on this ssis package then new ssis package so here we need to rename this package so i wanna rename this package with audit so here we need to use data flow task so i gonna drag and drop data flow task then double click on this data flow task so inside data flow panel we need to configure our source transformation and destination so as we have discussed we have sequence over table so we have to use well adb source so i gonna drag and drop this well adb source double click on this well adb source so here we need to select the well adb connection manager so i gonna click on new so we have already made the connection so i gonna use this connection so click on okay now we need to select the table so as we have discussed we have cell information table so we need to select cell information table so this is our cell info table go to on columns so we want to capture all the columns so we are okay with it now click on okay so we have done our source now we need to configure our transformation so we have audit transformation so we need to drag and drop this audit transformation and then connect with source now double click on this audit transformation so here we can see we have output column names then audit type so in audit type we have a drop down so in this drop down we have many more options so we want capture package name so we can select package name then we 
Then we need to capture the machine name, so we select Machine Name; then the execution start time, so we select Execution Start Time; then we select User Name and Task Name, and click on OK. Now on to our destination: we want to load data into a flat file destination, so we drag and drop the Flat File Destination and connect it with the Audit transformation. Now we need to configure the destination: double-click on the Flat File Destination, and we have to specify a flat file connection manager, so click on New. We want to generate a delimited file, so click on OK. Here we need to select the file name, so we can browse; we want to generate a file with the name Audit, then click on Open. For "Column names in the first data row" we can select the checkbox, then go to Columns: as we can see, we are getting Sale Order Number, Sale Order Date, Customer ID, Quantity, and Value. Now click on OK, then go to Mapping. Inside Mapping we can see Sale Order Number, Sale Order Date, Customer ID, Quantity, Value, and Country — these six columns are coming from the source — and we are getting Package Name, Machine Name, Execution Start Time, User Name, and Task Name from the Audit transformation, so it will generate all these columns. Now click on OK; we have done our destination as well. Now we need to execute our package, so click on Start. Go to the folder: as we can see, a file has been generated with the name Audit. Open this file: it has captured all the information — we are getting the six columns from the source, then Package Name, Machine Name, Execution Start Time, User Name, and Task Name from the Audit transformation. I hope you have understood how we can use the Audit transformation to capture audit information. Thank you so much for watching this video.
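The columns the Audit transformation appended above can be sketched in a few lines of Python. This is a conceptual sketch only, not SSIS code; the package name "Audit" and task name "Data Flow Task" are taken from this demo, and the environment values come from the standard library rather than the SSIS runtime.

```python
import datetime
import getpass
import socket

# Conceptual sketch of the Audit transformation: append environment
# columns to every row flowing through the data pipeline.
def add_audit_columns(rows, package_name="Audit", task_name="Data Flow Task"):
    audit = {
        "PackageName": package_name,
        "MachineName": socket.gethostname(),
        "UserName": getpass.getuser(),
        "ExecutionStartTime": datetime.datetime.now().isoformat(),
        "TaskName": task_name,
    }
    # Each output row = the source row plus the five audit columns.
    return [{**row, **audit} for row in rows]

source_rows = [{"SaleOrderNumber": 1, "Country": "IN"}]
for row in add_audit_columns(source_rows):
    print(row["PackageName"], row["TaskName"])
```

The source columns pass through untouched, which mirrors why the flat file ended up with the six sale columns plus the five audit columns.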
Hello friends, welcome to SSUniTech. Susheel Singh this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Import Column transformation: what is it and when can we use it? Let's start. Import Column is used to import data from binary files — like image files, text files, or media files — into databases. Simply put, if we have a flat file, a media file, or an image file and we want to load those files into our SQL Server database (or any other database), we have to use the Import Column transformation. Next, the data type for the output column must be DT_TEXT, DT_NTEXT, or DT_IMAGE. Before going forward, I would like to tell you about the prerequisite: we should have a source file, such as a flat file, that contains the file paths. As we can see, these are the files: this is our Excel file, this is our flat file, this is our media file, and these are the image files. We want to load these files into our SQL Server database, and here we have a flat file that contains their file paths. Once we open this file, we can see the path of our image file, our Excel file, our media file, and again an image file. Now go to SSMS to check the table. This is our table, the ImportColumn table; it has only two columns: the first is FilePath and the second is Files. As we can see, the data type of FilePath is varchar — it will contain the file path — and Files will contain the actual file, so its data type is varbinary. This is our package: our flat file contains the file paths, then we need to use the Import Column transformation, and then an OLE DB destination where we want to load the data. Go to SSDT, and here we need to add one more package: right-click on SSIS Packages, then New SSIS Package, and I want to rename this package
with Import Column — so this is our Import Column package. Inside the control flow we have to use a Data Flow Task, so we drag and drop the Data Flow Task and double-click on it. Here we need to configure our source, transformation, and destination. Our source is a flat file, so we drag and drop the Flat File Source and double-click on it. Here we need to make the connection; as we don't have any connection manager right now, click on New. It will ask for a file name, so we browse and select our file-path file, then click on Open. We have to make sure the "Column names in the first data row" checkbox is not checked, because as we have seen, we don't have any column names in our flat file. Go to Columns — we are getting the data. Now go to Advanced; here we can change the width, and I am going to change it to 100. Click on OK, then go to Columns — we want this column — and click on OK. We have done our source. Now we need to configure the Import Column transformation: drag and drop it, connect it with the source, and double-click on it. As we can see, we have an Input Columns tab; go to that tab and select the column that contains the file path — that is Column 0. Now go to Input and Output Properties: these are the input column properties and these are the output column properties. Expand Input — we have our input column; expand Output — we don't have any output column right now, so we need to add one. Add the column and rename it Files; it will contain the actual file. Here we need to take care of one thing: the new column has a LineageID — 24 in my case; when you practice, it might be different.
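Conceptually, what this transformation is being wired up to do can be sketched in Python. This is a hedged sketch, not SSIS code: the column names FilePath and Files follow this demo, and a throwaway temp file stands in for the image/Excel/media files.

```python
import os
import tempfile

# Sketch of the Import Column idea: the source rows carry only file
# paths; the transformation opens each path and adds the raw bytes as
# a new binary output column (varbinary in the destination table).
def import_column(rows, path_column="FilePath", out_column="Files"):
    for row in rows:
        with open(row[path_column], "rb") as f:
            row[out_column] = f.read()  # binary payload for the new column
    return rows

# Demo with a throwaway file standing in for an actual binary file.
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(b"\x89PNG-demo")

rows = import_column([{"FilePath": tmp.name}])
print(len(rows[0]["Files"]))  # 9 bytes read from disk
os.unlink(tmp.name)
```

This is also why the demo's destination table needs exactly two columns: the varchar path and the varbinary payload.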
Go to the input column properties: here we can see an option, FileDataColumnID, and we need to update it with that LineageID, which is 24. It has been updated successfully; now click on OK — we have done our Import Column transformation. Next we need to configure our destination, which is an OLE DB destination, so we drag and drop the OLE DB Destination, connect it with the Import Column transformation, and double-click on it. Here we need to select the connection manager: click on New; as we have already made the connection, click on OK. Now we select the table where we want to load the data — the ImportColumn table — then go to Mapping. As we can see, Files has been mapped automatically because the input column name and destination column name are the same, and we need to map FilePath manually. Click on OK; we have done this package. Before executing it, go to SSMS and execute our SELECT statement: as we can see, initially we don't have any data in this table. Now execute the package — click on Start — and we can see four rows should be inserted into our table. Execute the SELECT statement again, and we can see the file path and the actual file, which is in binary format: this is our image file, this is our Excel file, this is our media file, and this is again our image file. Thank you so much for watching this video.

Hello friends, welcome to SSUniTech. Susheel Singh this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Export Column transformation. Before going forward, if you haven't watched the last video, I would strongly recommend watching it. Let's start with Export Column. The Export Column transformation reads DT_TEXT, DT_NTEXT, and DT_IMAGE data from a database and writes it
to a file, with the name and path given in another column of the same row. This transformation can also overwrite or append to an existing file. As we can see, here we have an OLE DB source and then the Export Column transformation; there is no need to use any destination, because the files are written out automatically when we use Export Column. Let's have a look at the SQL Server table. Here we have the ImportColumn table, into which we imported data in our last video. As we can see, it has only two columns: the first contains the file path, and the second is the file itself. The file path is where we want to export the file, and the other column holds the actual file. I am going to execute this SELECT statement, and we can see we have a total of four files: first our image file, second our Excel file, then a media file, then again an image — so two image files, one media file, and one Excel file. Go to SSDT to implement the package. We need to create one more package: New SSIS Package, and I am going to rename it Export Column. Then we have to use a Data Flow Task, so we drag and drop it and double-click on it. Here we need to configure our source and transformation. Our source is an OLE DB source, because we have data in our SQL Server table. Double-click on the OLE DB Source and select the connection manager: click on New — we have already made the connection, so I am going to use that one — and click on OK. Here we select the table: we have the ImportColumn table, so we select it, then go to Columns; we want all these columns, so click on OK. Next we configure the Export Column transformation, which sits inside Other Transformations: drag and drop it, connect it with the source, and double-click on it.
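The behavior described above — writing each row's binary payload to the path held in another column of the same row, optionally appending — can be sketched in Python. This is a hedged sketch under this demo's column names (Files, FilePath), not the actual SSIS implementation.

```python
import os
import tempfile

# Sketch of Export Column: each row holds the file bytes plus the path
# to write them to. allow_append mirrors the Allow Append checkbox,
# which is why re-running the package makes the files grow on disk.
def export_column(rows, data_col="Files", path_col="FilePath", allow_append=False):
    mode = "ab" if allow_append else "wb"
    for row in rows:
        with open(row[path_col], mode) as f:
            f.write(row[data_col])

target = os.path.join(tempfile.gettempdir(), "export_demo.bin")
row = {"Files": b"payload", "FilePath": target}
export_column([row])                     # first run: file created (7 bytes)
export_column([row], allow_append=True)  # second run: size doubles to 14
print(os.path.getsize(target))
os.unlink(target)
```
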
Here we can see Extract Column: we need to select the column which contains the actual file, which is Files, and then where we want to export the file, so we select FilePath. Here we have options like Allow Append and Force Truncate; we don't want to use them right now, so click on OK. We have done our Export Column transformation. As we can see, in this folder we don't have any files yet; we have only one file, which is the file-path file. Once we execute this package, the files should be loaded into this folder, so click on Start. As we can see, four files have been loaded successfully; now we can stop the package. Let's check the size of the Excel file — it's nine KB. Now go to the Export Column transformation, and this time I want to select the Allow Append checkbox, click on OK, and execute the package one more time. Click on Start, and we can see the Excel file size has increased, and the other file sizes as well. Now we can stop the package. I hope you have understood how we can use the Export Column transformation. Thank you so much for watching this video.

Hello, welcome to SSUniTech. Susheel Singh this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Term Extraction transformation. What is Term Extraction? It is used to extract nouns or noun phrases from the transformation's input column and provide their occurrences in the output. Next, we can use Term Extraction to discover the content of a data set: for example, we may have text that contains email messages which can provide useful information about products, depending on your business type. Here we need to take care of one thing: it can only work with the DT_WSTR or DT_NTEXT data types — it accepts these data types as input and provides them as output
as well. If we have a column with text data that is not DT_WSTR or DT_NTEXT and we want to use this transformation, the Data Conversion transformation will help us. Let's move to our scenario. As we can see, we have a training database and a table that contains email messages, and we want to analyze which courses are in demand. Here we have an OLE DB source, because we have a SQL Server table containing the emails; that table has a varchar column, which is why we have to use Data Conversion to convert the varchar data to nvarchar. Next we use Term Extraction to count the occurrences, and then we load the result into an OLE DB destination. Let's have a look at the SQL Server table: go to SSMS, and we can see the TermExtraction table. Execute it, and we have a total of 14 rows; if we check the data type, we find it is varchar instead of nvarchar — that's why we need Data Conversion to convert varchar to nvarchar. Go to SSDT to implement the package. I am going to add one more package and rename it Term Extraction. We use a Data Flow Task: drag and drop it, then double-click on it. Here we configure the source, transformation, and destination. Our source is a SQL Server table, so we use an OLE DB Source: double-click on it, select the connection manager — click on New; we have already made the connection, so I am going to use that one — and click on OK. Then select the table: we have the TermExtraction table, so select it and go to Columns. We have MailDescription; click on OK, and we have done our source. It will return a varchar data type, but we want
Unicode, so we have to use Data Conversion. Drag and drop Data Conversion, connect it with the source, and double-click on it; we can select the column, and as we can see it is going to return DT_TEXT, so we have to convert DT_TEXT to DT_NTEXT. Click on OK — we have done our Data Conversion. Now we need Term Extraction: drag and drop it, connect it with Data Conversion, and double-click on it. Here we can see three tabs: Term Extraction, Exclusion, and Advanced. We select Copy of MailDescription, and then we can see the output column names: Term and Score. Term will return the courses, so we can rename it Courses, and Score we can rename TotalCount — it will return the count of the courses. Then move to the Advanced tab (the Exclusion tab will be covered later). Here we have Term Type, with three options: Noun, Noun Phrase, and Noun and Noun Phrase; we want to use Noun and Noun Phrase. Second is Score Type: as we can see, we have Frequency and TFIDF. TFIDF is a predefined formula, so I am not going to discuss it; I will only discuss Frequency, so I select that radio button. Then we can see Parameters. First is the frequency threshold, where we define the minimum number of occurrences; I am going to set it to two, so if any noun or noun phrase occurs at least two times, it will be returned in our output. Second is the maximum length of term, which I am going to set as well. Then we have an option, "Use case-sensitive term extraction"; I am not going to select this checkbox — once selected, terms are treated as case sensitive. Now click on OK.
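The frequency threshold and exclusion list just described can be sketched in Python. This is a hedged sketch of the idea only: real Term Extraction performs linguistic noun/noun-phrase detection, while this naive version just counts word tokens; the sample emails are invented for illustration.

```python
import re
from collections import Counter

# Naive sketch of Term Extraction with Score Type = Frequency:
# count terms across all rows, keep only those meeting the frequency
# threshold, and drop anything on the exclusion list.
def extract_terms(rows, frequency_threshold=2, exclusions=()):
    counts = Counter()
    for text in rows:
        counts.update(re.findall(r"[A-Za-z.]+", text.lower()))
    return {term: n for term, n in counts.items()
            if n >= frequency_threshold and term not in exclusions}

emails = ["need SSIS training", "SSIS and SSRS demo", "SSRS help"]
print(extract_terms(emails))                       # terms occurring >= 2 times
print(extract_terms(emails, exclusions={"ssrs"}))  # exclusion list applied
```

With a threshold of two, one-off words like "demo" never reach the output, which matches why the demo package returned only a handful of course names from 14 emails.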
Here we can see a warning that rows sent to the error output will be lost. I am going to ignore this failure: click on Configure Error Output, and here we can see the errors; set them to Ignore Failure, and again Ignore Failure. You don't need to worry about it — I will cover this part when we discuss error handling. Now we want to load this data into our SQL Server table, so we use an OLE DB Destination and connect it with Term Extraction. Double-click on the OLE DB Destination: we have already made the connection, so we just select the table where we want to load the data — the TermExtractionDestination table — and go to Mapping. As you can see, TotalCount has been mapped automatically because the input and destination column names are the same; Courses we have to map manually. Click on OK. Before executing the package, let's have a look at the table: as we can see, we don't have any records right now. Execute the package — click on Start — and five rows should be inserted into our destination. Stop the package, go to SSMS, and execute the SELECT statement: we are getting the courses and their number of occurrences — SSIS four times, SSRS three times, .NET two times, Power BI two times, and SSAS two times. Next is Exclusion. Here we have an option to exclude nouns and noun phrases that are not required. We have a table for this, the TermExtractionExclude table; we can query it, and here we can specify the nouns and noun phrases — I have specified .NET. I also want to truncate the destination table — TRUNCATE TABLE and then the table name — so we don't have any data in that table now. Go to SSDT, double-click on Term Extraction, and go to the Exclusion tab. Here we can see "Use exclusion terms"; select this checkbox, and it will ask for an OLE DB connection manager — we cannot specify exclusion terms except through OLE DB. We have already made the connection, so we can use that one.
Go to the table: which table is it? We have the TermExtractionExclude table. Then we have to specify the column — we have only one column, which is Exclude — and click on OK. Now we execute our package, and once we do, .NET will be ignored: as we can see, we are getting only four rows, whereas last time we ran this package it was five. Stop the package, go to SSMS, and execute the SELECT statement: we are getting four rows, and .NET is not available here. Thank you so much for watching this video.

Hello, welcome to SSUniTech. Susheel Singh this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Term Lookup transformation: what is it and when can we use it? Before going forward, if you haven't watched the last video of this series, I would strongly recommend watching it. Let's start with Term Lookup. It is used to analyze a set of text records: it looks up a set of predefined nouns and phrases in the input text column — the reference should be an OLE DB source — and provides the count of occurrences as output on a row-by-row basis. It accepts only Unicode strings, that is DT_WSTR, and Unicode text streams, that is DT_NTEXT, as we discussed in our last video as well. In the output of Term Lookup we get two extra columns, Term and Frequency — these two columns will be added to our output. Let's have a look at the scenario. As we can see, we have a source table that contains email messages, and we have a reference table that contains nouns and phrases. We have an OLE DB source holding the
emails, and in our Term Lookup we have a reference table containing the nouns and phrases. We don't want to load data into a destination — we just want to check the data — which is why I have used a Multicast. Let's have a look at the tables: go to SSMS, and we can see the TermLookup table; execute the SELECT statement, and we have the email data. Now go to our reference table and execute its SELECT statement: we are getting the courses, which are our nouns — Python, SQL, SSIS, SSAS, and SSRS. Go to SSDT to implement the package. I want to add one more package: right-click on SSIS Packages, then New SSIS Package, and rename it — this one is for Term Lookup, so we rename it Term Lookup. Then we use a Data Flow Task: drag and drop it and double-click on it. Here we configure our source; our source is a SQL Server table, which is why we use an OLE DB Source. Double-click on it, select the OLE DB connection manager — click on New; I have already made the connection — and click on OK. Select the table: our table is the TermLookupSource table. Go to Columns: we are getting ID and MailDescription, so click on OK — we have done our source. Now the transformation: it is the Term Lookup transformation, so drag and drop it and connect it with the source. Double-click on the Term Lookup transformation: here we need to select our reference table, which is the TermLookupReference table. Then go to the Term Lookup tab: in this tab we want to apply the lookup between MailDescription from our input columns and Courses in our reference table, so we can drag and drop to map them.
In the output we want all these columns, so we can select ID and MailDescription as well. Go to the Advanced tab: here we can select the case-sensitive option, but I am not going to use it. As we can see, it is going to raise a warning that rows sent to the error output will be lost, so double-click on the Term Lookup transformation, click on Configure Error Output, select Ignore Failure, and here again select Ignore Failure; then Apply, OK, and again OK. We have done our transformation. We don't want to load the data into any destination — we just want to check the data — so we use a Multicast and connect it with the Term Lookup transformation. We can enable a data viewer: right-click on the data pipeline, then Enable Data Viewer. Now execute the package. As we can see, this is ID and this is MailDescription — we are getting the data, plus the two additional columns I discussed, Term and Frequency. Next: how does Term Lookup work? First, if the reference table contains the singular form of a word, then both the singular and plural forms in our source are considered the same. As we can see in this example, the source contains "student" and "students", so we are getting two rows, and in our reference we have only "student"; in our output we will get a frequency of two, reflecting both "student" and "students". Go to SSMS: in our first case I insert "student" into our reference table, and "student" and "students" into our lookup source table — two rows affected. Go to SSDT and execute the package: click on Start, and as we can see, "student" and "students" are both counted; we are getting both records. Stop the package. In our second case, if the reference table contains the plural form of a word, then the transformation considers the singular and plural forms in the source separately: the source contains "student" and "students", and in our reference we have "students", so our output will be "students" only.
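These two cases can be sketched in Python. This is a heavily simplified, hedged sketch: the real transformation uses proper linguistic normalization, whereas this version just treats a trailing "s" as the plural marker; the sample row is invented for illustration.

```python
import re

# Sketch of Term Lookup's output shape and its singular/plural rule:
# a singular reference term also matches the plural in the source,
# while a plural reference term only matches exactly.
def term_lookup(rows, reference_terms):
    results = []
    for text in rows:
        words = re.findall(r"[a-z]+", text.lower())
        for term in reference_terms:
            if term.endswith("s"):                     # plural reference
                freq = words.count(term)               # exact matches only
            else:                                      # singular reference
                freq = words.count(term) + words.count(term + "s")
            if freq:
                results.append({"Text": text, "Term": term, "Frequency": freq})
    return results

rows = ["student and students enrolled"]
print(term_lookup(rows, ["student"]))   # singular reference: frequency 2
print(term_lookup(rows, ["students"]))  # plural reference: frequency 1
```
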
Go to SSMS, and this time I update the lookup reference table with "students" — one row affected. Go to SSDT and execute the package one more time: as we can see, we are getting "students"; "student" and "students" are treated separately. We can stop the package. Thank you so much for watching this video.

Hello friends, welcome to SSUniTech. Susheel Singh this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Fuzzy Lookup transformation: what is it and when can we use it? Before going forward, I would strongly recommend watching the Lookup transformation video if you haven't. Let's start with Fuzzy Lookup. It is used to replace wrongly typed words with the correct words: basically, it uses fuzzy matching to find one or more close matches in the reference table and replaces our source data with the reference table data. It uses only an OLE DB connection manager to establish the connection with the reference table — for the reference table we cannot use anything except an OLE DB source. Next, it allows us to use only DT_WSTR and DT_STR data type columns for fuzzy matching, so it takes only varchar and nvarchar. Next, this transformation creates temporary tables and indexes in the SQL Server database at runtime. As we can see, this is our scenario: we have a flat file source containing employee information, and while entering the titles — the designations — incorrect titles may have been entered. If we try to use the Lookup transformation, we will not get output for the incorrect titles, because the Lookup transformation works on exact matches; the Fuzzy Lookup transformation will help us find the nearest match and get the output. So here, as we can see, we have a flat file source, then we need to use
the Fuzzy Lookup transformation to correct the source data using the reference table data, and then we want to load that data into an OLE DB destination. Let's have a look at the flat file source: go to the folder and double-click on this fuzzy example file. As we can see, we have Last Name, First Name, and Titles. In Titles we are getting "Cust Service Rp."; in the very next row we see "Customer Service Representative", which is the correct title. Next is "Tech Lead" — a correct title — then "Senior Manager", "Assistant Manager", again "Senior Manager"; and for Rajesh Kumar we are getting "Store" then "Mgr.", whose correct title would be "Store Manager". By using the Fuzzy Lookup transformation we will replace these incorrect titles with the correct ones. Let's have a look at our reference table: go to SSMS, and we can see our Occupation table. I am going to execute this SELECT statement, and these are our correct titles. If we used the Lookup transformation against Occupation Description — "Customer Service Representative", as we can see — our first row would not be matched, and the lookup would fail; so we have to use Fuzzy Lookup to replace the incorrect titles with the correct ones. Here we can also see the FuzzyLookupOutput table; it doesn't have any data yet. It has ID, First Name, Last Name, and Titles, and ID is an identity column — identity seed one, increment by one. Go to SSDT to implement the package: I am going to add one more package — right-click on SSIS Packages, then New SSIS Package — and rename it Fuzzy Lookup. Then we need a Data Flow Task inside the control flow panel: drag and drop it and double-click on it. Inside the data flow panel we configure our source, transformation, and destination. As discussed, our source is a flat file, so we drag
and drop the Flat File Source, then double-click on it. It will ask for a flat file connection manager, so I click on New, then Browse; this is our file, so select it and click on Open. "Column names in the first data row" — that is okay. Go to Columns: as we can see, Last Name, First Name, and Titles. A title may be more than 50 characters, so go to Advanced and increase the size of Titles — I am going to increase it to 255 — then click on OK. Go to Columns again: we want all the columns, so click on OK. We have done our source. Now the transformation: it is Fuzzy Lookup, so drag and drop it and connect it with the Flat File Source. Double-click on Fuzzy Lookup, and we can see three tabs. In the first tab we have the reference table; here we need to select our OLE DB connection manager — click on New; we have already made the connection, so I am going to use that one — and click on OK. Here we can see Generate new index, then Reference table name, Store new index, New index name, Maintain stored index, Use existing index, and Name of an existing index. These are the lookup properties. First is Generate new index: it generates a new index on the reference table that helps the fuzzy lookup. Next is Store new index: if we want to persist the new index on the reference data, we select this option. Next is New index name: if we select Store new index, here we specify the index name. Next is Maintain stored index: if we want SQL Server to maintain the newly created index, we select this option. Next is Use existing index: if we want to use an already created index, we select this option. And our last option is Name of an existing index: when we select Use existing index, we pick the existing index from the drop-down. Back in SSDT, I am going to generate a new
index, and we select our reference table name — as discussed, the Occupation table is our reference. Now go to Columns: these are our input columns and these are the available lookup columns. We want to match Titles with Occupation, and we also want to get the correct occupation, so we take Occupation Description from the available lookup columns. Then go to Advanced — this is very important. Here we have the similarity threshold value, which ranges between 0 and 1, where 1 means an exact match; the Fuzzy Lookup editor provides a slider to adjust the similarity between 0 and 1. The closer it is to 1, the more accurately the source must match. It also generates two extra columns alongside our existing columns: the first is similarity, the second is confidence. Similarity is a score between 0 and 1 showing how accurately the source matched the reference data — for example, 0.5 means the source is 50% matched with the reference data. Confidence shows how confident Fuzzy Lookup is about the best match in the reference table — for example, 0.50 means Fuzzy Lookup is 50% confident. Back in SSDT, this is our similarity threshold value, and I am going to set it to 0.7. Here we can see token delimiters: I am going to use space and dot as our token delimiters, adding dot as an additional delimiter. Now click on OK — we have done our Fuzzy Lookup. We need to load the data into our destination, but before loading I would like to check the data: go to Common, drag and drop a Multicast, and connect it with our Fuzzy Lookup. Enable the data viewer, then save the package and execute it. As we can see, Occupation Description has null values, so the similarity threshold value is not correct — it's a hit-and-trial method. We can close this, stop the package, and change our similarity threshold value to 0.5.
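The threshold behavior seen in that hit-and-trial run can be sketched in Python using `difflib`. This is a hedged sketch only: `difflib`'s ratio is not the actual SSIS fuzzy-matching algorithm, and the reference titles are a small subset from this demo.

```python
import difflib

# Illustrative reference titles (subset of the demo's Occupation table).
REFERENCE_TITLES = ["Customer Service Representative", "Tech Lead", "Store Manager"]

# Sketch of the similarity-threshold idea: return the closest reference
# title and its score, or None when nothing clears the threshold —
# the null Occupation Description seen with too strict a threshold.
def fuzzy_lookup(title, threshold=0.5):
    best, best_score = None, 0.0
    for ref in REFERENCE_TITLES:
        score = difflib.SequenceMatcher(None, title.lower(), ref.lower()).ratio()
        if score > best_score:
            best, best_score = ref, score
    best_score = round(best_score, 2)
    return (best, best_score) if best_score >= threshold else (None, best_score)

print(fuzzy_lookup("Cust Service Rp."))       # nearest match found at 0.5
print(fuzzy_lookup("Cust Service Rp.", 0.9))  # too strict: no match, like the 0.7 run
```

Raising the threshold trades completeness for accuracy, which is exactly the trade-off the slider exposes.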
Then click on OK, save, and execute the package. This time, as we can see, we are getting occupation descriptions and no null values — so this is our right value for the similarity threshold. Here we can see the extra columns: first similarity, second confidence. As we discussed similarity and confidence: the overall similarity and the per-column similarity for Titles are the same, as we can see they show the same values. Now close the data viewer and stop the package. Next we configure our destination: delete the Multicast, and we use an OLE DB Destination — drag and drop it, connect it with the Fuzzy Lookup transformation, and double-click on it. Here we select the table; our table is the FuzzyLookupOutput table. Go to Mapping: Last Name is mapped with Last Name — that is fine — and First Name with First Name, also fine. There is no need to provide any value for ID because it is an identity column, so it will be generated automatically. But when we come to Titles, in our available input columns we have both Titles and Occupation Description. By default Titles is mapped with Titles, because the input and destination column names are the same, but Titles holds the incorrect titles and Occupation Description holds the correct ones — so we map Occupation Description with Titles in our destination. Click on OK. Save the package, go to SSMS, and query the destination table: by default we don't have any data. Execute the package — click on Start — and 21 rows should be inserted into our destination with correct titles. Stop the package, go to SSMS, and execute the SELECT statement one more time: as we can see, we are getting Customer Service
So I hope you have understood how we can use the Fuzzy Lookup transformation; it is very important. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notified about our newly uploaded videos. Thank you so much.

Hello, welcome to SSGunitech, Susheel Singh this side, and this is a continuation of the SSIS Tutorial. Today we are going to discuss the Fuzzy Grouping transformation. Before going forward, if you haven't watched parts 13 and 14 of this video series, I would strongly recommend watching them to understand the Conditional Split and Fuzzy Lookup. So let's start with fuzzy grouping. What is fuzzy grouping? Like the Fuzzy Lookup transformation, it is used to replace wrongly typed words with the correct words: for example, if 90% of the records are typed correctly and 10% are typed incorrectly, the 10% will be treated the same as the 90%. It uses a grouping technique to replace the wrongly typed words in the data with the correct ones. It allows only DT_WSTR or DT_STR data type columns in fuzzy matching; exact matching can use columns of any data type. So here is our scenario. Assume we have the flat file used in our last video, where we discussed the Fuzzy Lookup transformation; that flat file contains employee information in three columns: first name, last name, and designation, which is titles. In our SQL Server we have a table that contains title information, and we want to load only correct titles, not the wrongly typed ones. As we can see, we will use a Flat File source, then the Fuzzy Grouping transformation, then a Conditional Split, and then an OLE DB destination to load the data.
So let's have a look at the flat file source. Go to the folder and open the file: it has Last Name, First Name, and Titles. In Titles we can see Customer Service Representative, and in the second row Customer Service Rep. — an incorrect title. When we group the titles, this second value should be treated the same as the first. Now let's look at the destination: the destination table has only two columns, id and occupation description, and id is an identity column. So go to SSDT to implement the package. We need one more package here: right-click on SSIS Packages, then New SSIS Package, and rename the package Fuzzy Grouping. Then we use a Data Flow Task: drag and drop it and double click it. Inside the data flow panel we use our source, the Flat File source: drag and drop it, double click it, and it asks for a flat file connection manager. Click New, browse to our file, select it, and click Open. Go to Columns: we are getting Last Name, First Name, and Titles. Now move to the Advanced tab; here I am going to change the size of the Titles column to 255, then click OK. Go to Columns again — we want all the columns — then click OK. That is our source done. Next we configure the Fuzzy Grouping transformation: go to Other Transforms, drag and drop it, and connect it to the source. Double click the Fuzzy Grouping transformation; here we need to select an OLE DB connection manager, so I am going to click New — as we have already made the connection, I am going to use that one.
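The destination table just described is simply an identity key plus the grouped title; a sketch of the DDL, with the exact names assumed from the demo:

```sql
-- Sketch of the fuzzy grouping destination table (names are assumptions):
CREATE TABLE dbo.FuzzyGroupingOccupation (
    id INT IDENTITY(1,1) PRIMARY KEY,          -- generated automatically at load time
    occupation_description NVARCHAR(255) NULL  -- receives the grouped (clean) title
);
```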
Click OK and go to Columns. We want to group on Titles, so select that checkbox. Here we have several options: first is the input column name, which is Titles; second is the output alias, which I will keep as Titles; then the group output alias — this one is very important, as this column is what gets loaded into our database. Next is the match type: we have Exact match and Fuzzy match, and I am going to use Fuzzy match. Then we have the comparison flags, with several options. What do they mean? The first is Ignore case: if we select it, fuzzy grouping ignores letter case, so, as we can see here, values in capital letters and in small letters are treated as the same. Next is Ignore kana type: if we select it, the difference between hiragana and katakana is ignored; this one is language specific. Next is Ignore nonspacing characters: selecting it makes fuzzy grouping ignore the difference made by diacritics on characters. Next is Ignore character width: selecting it ignores the difference between single-byte and double-byte characters. And next is Ignore symbols: selecting it ignores the difference between letters and symbols. Those are the comparison flags. Then we have Numerals: here we specify the significance of starting and ending numerals when comparing data. It has four options: Neither, Leading, Trailing, and LeadingAndTrailing. With Neither, neither leading nor trailing numerals are significant while grouping; with Leading, only numerals at the starting position are significant; with Trailing, only numerals at the ending position are significant; and with LeadingAndTrailing, numerals at both the starting and ending positions are significant while grouping.
So back in SSDT I am going to use Neither, and then move to the Advanced tab. In the Advanced tab, as we can see, we have the input key column name, _key_in, which is system generated; then the output key column name, _key_out; then the similarity score column name, _score. Then we have the similarity threshold value, and this is very important, so we need to take care here: we have a slider to adjust the value between 0 and 1, and I want to use 0.8 this time. Then we have the token delimiters; I want to use space and dot, so in the additional delimiters we can specify a dot. Now click OK. Next we need a Conditional Split, because as we saw in the Advanced tab, when the value of _key_in equals _key_out, that row carries the correct title, and that is the only data we want to load into our database. The Conditional Split is available under Common: drag and drop it and connect it to the Fuzzy Grouping. Double click the Conditional Split and specify the condition _key_in == _key_out; when this condition matches, that is our matched output, so put the output name Matched (you can specify any output name you wish) and click OK. Now we want to load the data into our destination, so we use an OLE DB destination: drag and drop it and connect it to the Conditional Split; we want to load the Matched output, so click OK. Now double click the OLE DB destination and specify the table — ours is the fuzzy grouping occupation table — then go to Mappings. In Mappings, as we can see, we have many available input columns, but in our destination we want to load only the Titles_clean column, so map that to the destination. There is no need to specify any value for id, because id is an identity column and its values are generated at run time.
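Keeping only the rows where _key_in equals _key_out retains one canonical row per fuzzy group. A very rough T-SQL analogy of that effect — illustration only, since Fuzzy Grouping uses token-based fuzzy similarity rather than SOUNDEX, and the staging table name is assumed:

```sql
-- One representative title per (phonetic) group; SSIS achieves the same
-- idea with fuzzy similarity instead of SOUNDEX (names are assumptions):
SELECT MIN(Titles) AS Titles_clean
FROM dbo.EmployeeStage
GROUP BY SOUNDEX(Titles);
```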
Now click OK, save the package, and click Start. As we can see, we get 21 rows from the source; Fuzzy Grouping performs grouping on those 21 rows, and with the Conditional Split we have divided the input data, giving nine rows on the Matched output. Stop the package, go to SSMS, and execute the SELECT statement. Here we are getting duplicate data — Store Manager, Store Manager, Assistant Manager, Assistant Manager, then Customer Service Representative, Customer Service Representative — so we are not getting the data we expected. Back in SSDT: we had specified the wrong threshold value, so I want to change it to 0.50 this time, then click OK. Now in SSMS I want to truncate the table, so execute the TRUNCATE TABLE statement and check that we don't have any data right now. I will run the package again: click Start. As we can see, we get 21 rows from the source, and this time only six rows are loaded into the destination. Go to SSMS and execute the SELECT statement: now we are getting the data we expected — distinct titles — so this is the right value for the threshold. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notified about our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSGunitech, Susheel Singh this side, and this is a continuation of the SSIS Tutorial. Today we are going to discuss the Pivot transformation, so let's start. The Pivot transformation is used to interchange rows into columns; we can say that it turns a normalized dataset into a less normalized but more compact version by pivoting the input data on a column value. Next, what are the prerequisites for using the Pivot transformation? The first is sorted rows in the input data.
The input data should be sorted on the set key; if the data is not sorted, the Pivot transformation might generate multiple records for each value of the set key. Next is duplicate rows: input data containing duplicate rows will cause the Pivot transformation to fail, so our input data shouldn't contain duplicate rows. These are the pivot prerequisites. Before going forward, let's have a look at our source. This is our flat file source; opening it, we can see Employee ID, Month, Day of Week, and Expenses. After using the Pivot transformation our output should look like this: so this is our input, and the output has first Employee ID, then the day columns, and the last column is Month. Next, the pivot usage values: each input column gets a value of 0, 1, 2, or 3. 0 means the column does not participate in the pivot transformation; 1 is for the set key, one output row for each unique value of the column; 2 is for the pivot key, where each value of the column at the source creates a new column at the destination; and 3 is for the pivot value, whose output columns we need to generate manually. Next, map the LineageID property of the source column to the SourceColumn property at the destination, and last, set the PivotKeyValue for the newly created columns. So go to SSDT to implement the package. I am going to add one more package: right-click on SSIS Packages, then New SSIS Package, and rename the package Pivot. Then we have to use a Data Flow Task: drag and drop it, then double click it. Here we need to select our source; as we have discussed, it is a flat file source, so drag and drop a Flat File source, then double click it. Here we need to select our flat file connection manager.
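As an aside, the shape this package is about to produce is what T-SQL's PIVOT operator computes in one statement — and, unlike the SSIS transformation, PIVOT also aggregates. A sketch, with table and column names assumed from the demo:

```sql
-- Rough T-SQL counterpart of the pivot output (names are assumptions):
SELECT EmployeeID, [Friday], [Monday], [Thursday], [Tuesday], [Wednesday], [Month]
FROM (SELECT EmployeeID, [Month], [DayOfWeek], Expenses
      FROM dbo.EmployeeExpenses) AS src
PIVOT (SUM(Expenses)
       FOR [DayOfWeek] IN ([Friday], [Monday], [Thursday], [Tuesday], [Wednesday])) AS p;
```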
Click New, then browse — this is our input file — and open it. Go to Columns: as we can see, we are getting all the columns, so click OK; check Columns once more, we want all the columns, that is fine, so click OK. Our source is done, and now we need to configure our transformation, the Pivot transformation: drag and drop it, connect it to the source, and double click it. Here we need to take care of three things: first the pivot key, second the set key, then the pivot value. For the pivot key we select Day of Week, which will generate our newly created columns; for the set key we select Employee ID; for the pivot value we select Expenses. Here we can see "Generate pivot output columns from values": we need to list the values manually, and as I have already typed them, I will copy and paste the value here, then click the Generate Columns Now button. Click OK — the columns have been generated successfully; we can rename these columns later. Here we can also see the checkbox "Ignore un-matched Pivot Key values and report them after DataFlow execution"; we need to select this checkbox, then click OK. We have done our Pivot transformation, but I am not going to load the data into any destination — I just need to check the data — so drag and drop a Multicast, connect it to the Pivot transformation, then enable the data viewer. Save the package and execute it: we can see Employee ID, Friday, Monday, Thursday, Tuesday, and Wednesday. We are getting all of that data, but we should have one more column for the month, which we are not getting here. Stop the package. As we can see, there is no option to select more than one column as the set key in this dialog, so close it, then right-click on the Pivot transformation and open the Advanced Editor.
In the Advanced Editor, go to Input Columns: here we have Month, so select it. Go to Input and Output Properties and expand: as we can see, we are getting Month in our input columns, but in the output columns we don't have a Month column, so I want to add one more output column for the month. Back under the input columns, note Month's LineageID — that is 14 in my case; when you are practicing it could be different. In the new output column's SourceColumn property we need to specify that LineageID, 14, and for PivotKeyValue we specify zero. Now click OK. Delete our Multicast, drag and drop a new one, connect it to the Pivot transformation, and enable the data viewer again. Now execute the package: we can see we are getting the data, and it matches the output we expected, so close the viewer and stop the package. Note that in the Advanced Editor, under Input and Output Properties, Output Properties, we can rename the generated columns as well — this one to Friday, and this one to Monday, and we can do the same for all the columns according to our requirement. Save the package and execute: we can see Friday and Monday, so close this and stop the package. So what are the limitations of the Pivot transformation? First, we need to create the output columns manually. Second, it cannot summarize data: in SQL Server, when we use the PIVOT operator we can summarize the data, but here we can't. Next, the data should be sorted. Those are the limitations. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notified about our newly uploaded videos. Thank you so much.

Hello, welcome to SSGunitech, Susheel Singh this side, and this is a continuation of the SSIS Tutorial.
Today we are going to discuss the Unpivot transformation. In our last video we discussed the Pivot transformation; if you haven't watched it, I would strongly recommend watching it before going forward. So let's start with the Unpivot transformation. What is it? Unpivot is an algorithm to convert values from multiple columns into rows; we can say that the Unpivot transformation turns a denormalized dataset into a more normalized version by expanding values from multiple columns in a single record into multiple rows with the same values in a single column. So what does that mean? As we can see, we have a flat file source that contains seven columns in total: first Employee ID, then the day-of-week columns Friday, Monday, Thursday, Tuesday, and Wednesday, then Month. After applying the Unpivot transformation we will get output containing only four columns: first Employee ID, then Month, then Day of Week, and then Expenses. So basically it is a rotation of the table, where we rotate from columns to rows. Let's have a look at the source file: this is our input source file, and as we have discussed it has seven columns in total; after applying the Unpivot transformation we will get our output like this, with the four columns Employee ID, Month, Day of Week, and Expenses. So go to SSDT to implement the package. I am going to add one more package: right-click on SSIS Packages, then New SSIS Package, and rename the package Unpivot. Then we need to use a Data Flow Task, and after that double click it. Inside the data flow panel we need to configure our source, transformation, and destination; as we have discussed, our source is a flat file, so drag and drop a Flat File source, then double click it. It asks for a flat file connection manager: click New, then Browse to select the file.
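The column-to-row rotation described above is exactly what T-SQL's UNPIVOT operator expresses in one statement; a sketch, with the table and column names assumed:

```sql
-- Rough T-SQL counterpart of the unpivot output (names are assumptions):
SELECT EmployeeID, [Month], [DayOfWeek], Expenses
FROM dbo.EmployeeExpensesWide
UNPIVOT (Expenses
         FOR [DayOfWeek] IN ([Friday], [Monday], [Thursday], [Tuesday], [Wednesday])) AS u;
```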
This is our file: select it and click Open, then go to Columns and click OK. We want all the columns, so we are okay with it; click OK. We have done our source, and now we need to configure our transformation, the Unpivot transformation: drag and drop it and connect it to the source, then double click it. Under the available input columns I am going to select the ones we want to rotate from columns to rows: Friday, Monday, Thursday, Tuesday, and Wednesday. Next we need to specify the destination column name — that is our Expenses — so I will use Expenses, and it will appear for all the selected columns. Then we can see the pivot key value column name: here we need to specify DayOfWeek, and this column will contain the day-of-week information. Now click OK; we have done our Unpivot transformation. Next we would select our destination, but as we don't want to load the data into any destination, I will use a Multicast just to check the data: connect it to the Unpivot, enable the data viewer, and execute the package. As we can see, we are getting the values we expected; matching against our target output, we are getting Employee ID, Month, Day of Week, and Expenses, with all the values. Close the data viewer, then stop the package. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notified about our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSGunitech, Susheel Singh this side, and this is a continuation of the SSIS Tutorial. Today we are going to discuss the Merge Join transformation. This is a very important transformation, where we can apply our SQL joins, that is, inner and outer joins. So let's start with the Merge Join transformation.
The Merge Join transformation provides an output generated from two sorted datasets using a full, left, or inner join. What are the input requirements? The Merge Join transformation requires sorted data on its inputs. Here we have two OLE DB sources; we will use Sort transformations to sort the OLE DB source data, then I want to use a Merge Join transformation applying an inner join between these two datasets to produce the output. Before going forward, let's have a look at the OLE DB sources. As we can see, we have a SalesOrderHeader table and a SalesOrderDetail table: the header contains the sales order number, sales order date, and much more information, and in the detail we have the sales order number, item code, variant, quantity, and value. We can see the sales order number is common between these two tables, so we can apply a join using SalesOrderNumber. So go to SSDT. I want to add one more package: right-click on SSIS Packages, then New SSIS Package, and rename the package Merge Join. Drag and drop a Data Flow Task and double click it. Here we need to use our sources, transformations, and destination; as we have discussed, we have two OLE DB sources, so I have dragged and dropped both. Double click the first OLE DB source, click New — we have already made the connection, so click OK — and select the table: as discussed, I am going to use the SalesOrderHeader table for this one, and the SalesOrderDetail table for the second OLE DB source. Go to Columns, then click OK, and rename the sources Header and Detail. Now we need Sort transformations, because the Merge and Merge Join transformations only accept sorted data.
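The join we are about to build corresponds to this T-SQL; the AdventureWorks-style table names follow the demo and are assumptions:

```sql
-- What the Merge Join package computes, as a set-based query (names assumed):
SELECT h.*, d.ItemCode, d.Variant, d.Quantity, d.[Value]
FROM dbo.SalesOrderHeader AS h
INNER JOIN dbo.SalesOrderDetail AS d
      ON h.SalesOrderNumber = d.SalesOrderNumber
-- Merge Join additionally requires both inputs sorted on this join key:
ORDER BY h.SalesOrderNumber;
```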
So here we need to drag and drop two Sort transformations, one for Header and one for Detail, and connect them to Header and Detail. Double click the Header sort: we are going to sort by SalesOrderNumber, so select it and click OK; in the Detail sort we again sort by SalesOrderNumber, then click OK. We have done our Sort transformations, and now we need to configure the Merge Join transformation: drag and drop it and connect it to the Sort transformations. It accepts a left input and a right input; I am going to use the left input for the first connection, click OK, and this is our right input. Now double click the Merge Join transformation. Here we can see the join types: it accepts inner join, left outer join, and full outer join. In the case of an inner join the Swap Inputs button is disabled, and when we move to full outer join it is also disabled, but for left outer join it is enabled — because if you want a right join, you can simply swap the inputs to get it. This time I want to use an inner join. We want all the columns from the Header table, and from the Detail table we want Item Code, Variant, Quantity, and Value — we don't want SalesOrderNumber from the Detail table. Now click OK. We have done our Merge Join transformation, and as we can see, it does not have any error output. I am going to load this data into a flat file, so drag and drop a Flat File destination, connect it to the Merge Join transformation, and double click it. Here we need to select a flat file connection manager; we don't have one right now, so click New. It asks for the file format, and I am going to use the Delimited format, so click OK. Then it asks for the file path: browse, name the output merge join output, and click Open. The column names go in the first data row, so we need to select that checkbox.
Go to Columns: here we can see we have all the columns, and it will generate a comma-delimited file, so click OK. Go to Mappings — as we can see, the mapping succeeds — and click OK. We have done our package, so save it and click Start: as we can see, 36 rows have been inserted into our flat file destination. Stop the package and open the flat file: we are getting all the values. So, as we have seen, the Merge Join transformation does not provide any error output and it accepts only two inputs; those are the limitations of Merge Join. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notified about our newly uploaded videos. Thank you so much.

Hello, welcome to SSGunitech, Susheel Singh this side, and this is a continuation of the SSIS Tutorial. Today we are going to discuss the OLE DB Command transformation. What is it? The OLE DB Command transformation runs a SQL statement for each row in the data flow: for example, we can run a SQL statement that inserts, updates, or deletes rows in our database table. If the SQL statement includes parameters, the parameter values are stored in external columns in the transformation input, and mapping an input column to an external column maps that input column to a parameter. So what does this mean, and how can we use it? First we want to load data into a database table, then we want to delete that data, then we want to update the data. Go to SSMS and have a look at the source: this is our source table, and once we execute the query we can see it has two columns, first id, then name. Our destination also contains two columns, first id and second name, and initially we don't have any data. Using the OLE DB Command transformation we want to load data into this table from our source table. So go to SSDT, and here we are going to add one more package.
Right-click on SSIS Packages, then New SSIS Package, and rename the package OLE DB. Then we have to use a Data Flow Task: drag and drop it and double click it. Here we need to configure our source, transformation, and destination. As we have discussed, our source is a SQL Server table, so we have to use an OLE DB source: drag and drop it, double click it, and select an OLE DB connection manager — click New, and as we have already made the connection, click OK. Here we select our source table; as discussed, this is it. Go to Columns — we want both the id and name columns — and click OK. Now we have done our source, and we have to configure our transformation, the OLE DB Command transformation, available inside Common: drag and drop it and connect it to the OLE DB source. Now double click the OLE DB Command to configure it. In Connection Managers we need to select our destination connection manager — that is our SQL Server connection — then go to Component Properties: here we can see an option for our SQL statement, the SQLCommand property, where we can write our SQL. As I have already typed the query — this is the INSERT statement — I am going to copy it and paste it into SQLCommand. Here we can see we want to insert data into our destination table, with the values coming from parameters; for that we need to specify question marks, and I have specified two: the first for id and the second for name. Then we want to map these parameters from our input source, so I am going to click OK and go to Column Mappings. In this column mapping we can see we have id and name, and we need to take care about the order we specified in our query.
So first is id: Param_0 should be mapped to id, then name should be mapped to Param_1. We have done our mapping, so click OK; we have done our OLE DB Command transformation. Now click Start: as we can see, eight rows have been inserted into our destination table. Stop the package, go to SSMS, and execute the SELECT statement: as we can see, we are getting the data. This data was inserted into our destination table row by row — the first row is inserted first, then the second, then the third — and because it works row by row, the performance of the OLE DB Command transformation is very low. Next we configure it for delete: I am going to change the SQLCommand, and this time I want to use the DELETE statement, so copy it, paste it into SQLCommand, and click OK. Go to Column Mappings: as we can see, id is mapped to Param_0, which is fine to me, so click OK and execute the package. This time eight rows are deleted from our destination table; go to SQL Server Management Studio and execute the SELECT statement for the destination table — as we can see, all the rows have gone. Now I am going to insert the records one more time: copy the INSERT statement, go to the OLE DB Command, go to Component Properties, paste it into SQLCommand, and click OK. Then go to Column Mappings: id is mapped successfully, and we map name as well; name is mapped successfully. Click OK, save the package, and execute it: eight rows are inserted into our destination table again. Go to SSMS and execute the SELECT statement: here we can see we are getting the data. Now, this time I want to update a record manually: we update our destination table, setting the name to 'su kumar sim' where id equals 7.
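For reference, the three statements cycled through the SQLCommand property look like this; the ? markers are positional OLE DB parameters (Param_0, Param_1, …) mapped from input columns, and the table name is an assumption:

```sql
-- OLE DB Command runs one statement per input row (table name assumed).
-- Only one of these is configured at a time:
INSERT INTO dbo.Destination (id, name) VALUES (?, ?);  -- Param_0 = id, Param_1 = name
DELETE FROM dbo.Destination WHERE id = ?;              -- Param_0 = id
UPDATE dbo.Destination SET name = ? WHERE id = ?;      -- Param_0 = name, Param_1 = id
```

Because the parameters are positional, the mapping must follow the order of the ? markers in the statement, which is why the update maps Param_0 to name and Param_1 to id.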
Execute the UPDATE statement, then SELECT: as we can see, for id 7 we are now getting 'su kumar sim', but in our source we have 'su' only, so once our package's update runs, our destination table should be corrected. Copy the UPDATE statement, go to SSDT, double click the OLE DB Command, go to Component Properties, and in SQLCommand replace the INSERT with our UPDATE statement. As we can see, name comes first in the UPDATE, so we need to map Param_0 to name and then Param_1 to id. Click OK, go to Mappings, map name to Param_0 and id to Param_1, and click OK. We have done our package: save it and execute, then stop the package, go to SSMS, and execute the SELECT statement for the destination table. As you can see, 'su kumar sim' has been updated back to 'su'. So I hope you have understood how we can use the OLE DB Command transformation. It performs row by row, which is why performance-wise it is not good; instead we can use the Import/Export wizard, or we can use an Execute SQL Task to execute our SQL queries. Thank you so much for watching this video. If you liked it, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notified about our newly uploaded videos. Thank you so much.

Hello, welcome to SSGunitech, Susheel Singh this side, and this is a continuation of the SSIS Tutorial. Today we are going to discuss precedence constraints, and this is very important, so let's start. Precedence constraints are used to link executable tasks and containers, and they are part of the control flow: the execution of the next executable depends upon the status of the previous executables. A constraint has three values: first Success, then Failure, and then Completion. As we can see in this example, we have an Execute SQL Task: if it executes successfully, our Success and Completion data flow tasks will be executed.
If our Execute SQL Task completes with failure, our Failure and Completion tasks will be executed. So we can say that Completion will be executed no matter whether our previous executable, the Execute SQL Task, succeeded or failed, while Success and Failure depend upon the status of the Execute SQL Task. So go to SSDT: here we need to add one more package, and we can rename it Precedence Constraint. Let's drag and drop an Execute SQL Task — I haven't covered the Execute SQL Task yet, but I am not going to use it deeply here — and double click it. In the General tab we can see the connection type; we want to make a connection with our database, so we need to set up the connection. I want to add a new connection — here I have already made the connection with our database, so I want to use that one — and click OK. Here we can write our SQL statement: I want to simply use SELECT 'Hello', then click OK, and OK again. I have done the Execute SQL Task. Next I need three data flow tasks: the first for Success, the second for Failure, then Completion, and I need to connect a precedence constraint from the Execute SQL Task to each data flow. Once we have set all three, double click each precedence constraint: we can see the values Success, Failure, and Completion. The first is for Success, the second I need to change to Failure, and the third to Completion. Now click OK; we have done our package. Now execute the package to check the output: as we can see, once the Execute SQL Task executed successfully, Success and Completion have been executed. Stop the package. Now double click the Execute SQL Task: here I need to generate a custom error, so I can remove a single quote from the statement, then click OK, and again OK. Now our Failure and Completion will be executed this time.
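The statement in the Execute SQL Task, and the deliberately broken variant used to force a failure, are simply:

```sql
SELECT 'Hello';    -- valid: the task succeeds, so the Success and Completion paths run
-- SELECT 'Hello;  -- with the closing quote removed the statement is invalid,
--                    the task fails, and the Failure and Completion paths run instead
```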
We can save our package and execute it. As we can see, the Execute SQL Task executed with failure, so our Failure and Completion branches have been executed. Now I can stop this package and double-click on a precedence constraint. Here, as we can see, we have the Evaluation Operation options: Constraint, Expression, Expression and Constraint, and Expression or Constraint. As we can see, we have already covered Constraint; next we need to set up Expression. So what do these mean? Constraint uses only the execution result of the precedence executable to determine whether the constrained executable runs; that execution result can be Success, Failure, or Completion. As we have seen, if our Execute SQL Task executes successfully then we get our output, and we do not check any expression. Second is Expression: an expression is evaluated to determine whether the constrained executable runs; if the expression evaluates to true, the constrained executable runs. So what does that mean? Go to SSDT: here I need to set up an expression, 1 == 1. Once this condition is satisfied — it returns true — the next executable should be executed, so the first data flow task should run. I will click Start. Here, as we can see, the Execute SQL Task executed with failure, so our Failure and Completion branches have been executed; and on our first data flow we set up an expression — if the expression returns true, the next task is executed. I set 1 == 1, which is true; that is why we are getting this output. I can stop this run. Next, we can see we have Expression and Constraint: it combines the requirement on the execution result of the precedence executable with the result of evaluating the expression. Go to SSDT, click on this precedence constraint, and change it to Expression and Constraint. So it will check the expression as well as the constraint: if the expression value is true and the constraint is satisfied, the next executable will be executed; otherwise it will not. In this case, as we can see, our Execute SQL Task will not execute successfully, and that is why our first data flow task will not be executed: the expression value is true, but the constraint is not satisfied. So I can click OK and click Start: our first data flow task has not been executed, so I can stop this run. So this is Expression and Constraint: it checks the constraint as well as the expression, and only if both hold will the next executable be executed. Next is Expression or Constraint, which uses either the execution result of the precedence executable or the result of evaluating the expression. What does that mean? It checks the outcome of the constraint and the expression, and if either one of these holds, the next executable will be executed. So go to SSDT, double-click our first precedence constraint, and change it to Expression or Constraint. In this case, as we can see, our Execute SQL Task will not execute successfully, so our constraint is not satisfied, but the expression value is true because I have specified 1 == 1. I click OK, so our first data flow task should be executed. Now click Start: as we can see, the first data flow task has been executed successfully. Now I can stop this run. Next we have multiple constraints: if a task has multiple incoming precedence constraints, then we can choose between the options below — the first is Logical AND, and we can also select Logical OR. So what does that mean? Go to SSDT. Here I can delete this data flow task, and this data flow task as well, and I can delete this constraint. I need to use one more Execute SQL Task, and here again I need to make the connection, so we can connect, and here I need to specify our query: SELECT 'hello'.
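The four evaluation operations, and the Logical AND/OR choice for multiple incoming constraints, can be sketched in plain Python. This is an illustration under made-up names, not an SSIS API:

```python
# Sketch of the four evaluation operations for a precedence constraint,
# plus the Logical AND/OR choice when a task has several incoming constraints.
def should_run(eval_op, constraint_ok, expression_true):
    return {
        "constraint": constraint_ok,
        "expression": expression_true,
        "expression and constraint": expression_true and constraint_ok,
        "expression or constraint": expression_true or constraint_ok,
    }[eval_op]

# The video's case: the Execute SQL Task failed (so a Success constraint is
# not satisfied), but the expression 1 == 1 is true.
for op in ("constraint", "expression",
           "expression and constraint", "expression or constraint"):
    print(op, "->", should_run(op, constraint_ok=False, expression_true=True))
# constraint -> False
# expression -> True
# expression and constraint -> False
# expression or constraint -> True

def gate(satisfied, mode):
    # Logical AND: all incoming constraints must hold; Logical OR: any one.
    return all(satisfied) if mode == "and" else any(satisfied)

print(gate([False, True], "and"), gate([False, True], "or"))  # False True
```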
Click OK, then again OK. Now we connect this precedence constraint to our data flow task, and for the second Execute SQL Task we again connect to the data flow task. I double-click either of these two precedence constraints, and we can see here we have an option for Logical AND, and the second option is Logical OR. First we will try AND: only if both Execute SQL Tasks execute successfully will our data flow task be executed. So save this package and click Start. Our first Execute SQL Task has not executed successfully, and that is why our data flow task has not been executed. So let's correct this error: we can put back the single quote and click OK. Now our data flow task should be executed, and as we can see, it has executed successfully. Now double-click on this constraint; I want to select Logical OR. This time, if either of the two tasks executes successfully, the data flow task will be executed. So double-click on one task and create a custom error so that it returns an error: our first Execute SQL Task will not execute successfully, but the data flow task will still be executed. Click Start: as we can see, the Execute SQL Task has not executed successfully, but our data flow task has been executed. So I can stop this package. Thank you so much for watching this video. If you like this video, please subscribe to our channel to get many more videos, and don't forget to press the bell icon to get notifications of our newly uploaded videos. Thank you so much.

Hello, welcome to SSGunitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Execute SQL Task, and here we will cover a basic introduction. So let's start. What is the Execute SQL Task? The Execute SQL Task is one of the most important components in SSIS, because here we can run T-SQL statements within the control flow, and it has four tabs: first the General tab, second Parameter Mapping, then Result Set, and then
Expressions. So let's have a look at the Execute SQL Task. Go to SSDT; here I need to add one more package, and I wish to rename it "Execute SQL Task Basic Introduction". Here I drag and drop an Execute SQL Task, then double-click it. As we can see, it has four tabs: first General, then Parameter Mapping, Result Set, and Expressions. Go to the General tab; here I wish to start with the SQL Statement section. First is Connection Type, where we select the type of connection manager: as we can see, we have Excel, OLE DB, ODBC, ADO, ADO.NET, and SQLMOBILE — these are the options available for Connection Type. Then, as our second option, we have Connection: here we create the connection manager to connect to our database. Next is SQLSourceType, and as we can see it has three types: first Direct input, then File connection, then Variable. So what are these three types? Basically, with Direct input we write the SQL query directly into the SQLStatement property; with File connection we write the SQL query into a .sql or .txt file and then create a file connection for it; and with Variable we can write the query into a variable and then select that variable in the SourceVariable property. Next is BypassPrepare: as we can see, it has True and False. So what does it mean? It indicates whether the task should skip preparing the query before executing it: if we select False, the statement is prepared by the database engine we are connecting to before execution; if we select True, that preparation step is skipped. So this is the SQL Statement section. Let's go to Result Set. In Result Set, as we can see, we have four options: first None, then Single row, then Full result set, then XML. If our SQL query returns only one row, then we select Single row — it doesn't matter how many columns there are; the number of columns may be one or more than one, but the number of rows should be one. If we are getting data from multiple rows, then we need to select Full result set; if we are getting the data as XML, then we select the XML option; and if we are not getting any data back, then we select None. So this is Result Set. Let's move to Parameter Mapping. In Parameter Mapping, as we can see, we have several columns. So what are these columns? Basically, the first is Variable Name: the variable that contains the value to be used for the parameter. Next is Direction: it determines whether we pass the value into the parameter as an input parameter or get a value back from the parameter as an output parameter, so the direction can be Input or Output (or ReturnValue for a stored procedure's return code). Next is Data Type: it determines the type of the data provided from the variable, so we can select that. Next is Parameter Name: here we specify the ordinal position of the parameter, and it starts from 0. Our last is Parameter Size: the size of the parameter if it can be of variable length. The default value is -1, which lets SQL Server determine the correct size; for example, if we are getting an nvarchar(max) value, then the size should be -1. So this is our Parameter Size. Now to the Expressions tab: it is a collection of property expressions; the evaluation result of each expression is assigned to a property and replaces the value of that property. In this case, we can use an expression in the WHERE clause of a SQL statement where we want to assign a value to a column using a variable. So this is the use of Expressions.

Hello, welcome to SSGunitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Execute SQL Task again; here we will discuss the source types and how we
can use them. So let's start. As we discussed in our last video, we have three source types: first is Direct input, second is File connection, and last is Variable. So let's start with Direct input. Before going to SSDT to implement the package, let's discuss the scenario: as you can see, we have an input table and it doesn't have any data, and by using an SSIS package I want to insert records into it. So go to SSDT, and here I need to add one more package; I wish to rename this package "Execute SQL Task Source Type". Now I drag and drop an Execute SQL Task, then double-click it. Here, in the SQL Statement section, as we can see, we have Connection Type with many options: I am going to use the OLE DB connection type — the rest will be discussed later in this video series. Then we need to make the connection: we can click New Connection, but as I have already made the connection, I am going to use that one; click OK. Here we have the option for Direct input, where I need to write the query. As I have already written the query, I copy this INSERT statement, then go to SSDT and paste it. Now click OK, then again OK. We have finished our package, so now I am going to execute it. The package has been executed successfully, so I stop it and go to SSMS, and here I need to check the records. We can press Ctrl+R, and as we can see, the data has been inserted successfully into this table. So this is Direct input. Second, we have a flat file and it contains SQL queries, and these SQL queries will be executed. Now go to SSDT and double-click on this Execute SQL Task: instead of Direct input we need to select File connection. Then it asks for a file connection, so we can select and browse — I have to copy this path; this is our file — now click Open, then OK. Now our file connection is done; click OK. Now we can save our package and execute it. This file contains five statements, so six rows should now be available in our table. Go to SSMS and press Ctrl+R: as we can see, six rows — the data has been inserted successfully into this table. So this is File connection. Next, we need to create one variable. We can add a variable for the SQL query; the data type should be String. And what would be the value? I am going to copy our INSERT statement and paste it in as the value. Now I can close the variables window. Go to the Execute SQL Task, and here we need to select Variable instead of File connection, and then select our variable name — our variable name is SQLQuery, so this is our variable. Now click OK; our package is done. Go to SSMS: in this table I wish to delete all the records that are currently in it, so now we don't have any records in this table; once I execute this package, one row should be inserted. Click Start: the package has been executed successfully. Now we can stop the package and go to SSMS, press Ctrl+R, and as we can see, the data has been inserted successfully into our table. So this is the Variable source type. These are the input types.

Hello, welcome to SSGunitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss Execute SQL Task result sets. Two videos back we discussed result sets: we have a total of four types of result set — first None, then Single row, then Full result set, and then the XML result set. Here I would like to discuss None, Full result set, and Single row. So let's start with None. As we have discussed, it is not going to return anything, and I have a scenario where I would like to insert records into a table by using an Execute SQL task.
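The three source types demonstrated above boil down to where the statement text comes from. Here is a sketch using Python's sqlite3 in place of a real SQL Server connection — the table name, file path, and variable name are all made up for illustration:

```python
# Sketch of the three SQLSourceType options: the statement comes from a
# literal string (Direct input), a file on disk (File connection), or a
# package variable (Variable). sqlite3 stands in for SQL Server here.
import os
import sqlite3
import tempfile

direct_input = "INSERT INTO input_table (val) VALUES ('direct input')"

# "File connection": the statement lives in a .sql file on disk.
path = os.path.join(tempfile.mkdtemp(), "query.sql")
with open(path, "w") as f:
    f.write("INSERT INTO input_table (val) VALUES ('file connection')")
with open(path) as f:
    from_file = f.read()

# "Variable": the statement is stored in a variable and looked up by name.
variables = {"SQLQuery": "INSERT INTO input_table (val) VALUES ('variable')"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE input_table (val TEXT)")
for stmt in (direct_input, from_file, variables["SQLQuery"]):
    conn.execute(stmt)
print(conn.execute("SELECT COUNT(*) FROM input_table").fetchone()[0])  # 3
```

Whichever source type is chosen, the task ends up executing the same statement text; only the lookup differs.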
So go to SSMS: this is our table, and initially we don't have any data in it; once we execute our SSIS package, data should be inserted into this table. Go to SSDT. Here I would like to add one more package, and I wish to rename it "Execute SQL Task Result Set". Then I drag and drop an Execute SQL Task and double-click it. Here, as you can see, we have the SQL Statement section: I would like to create an OLE DB connection type, then we need to select the connection manager — click New; I have already made the connection, so I would like to use that one; click OK. Here, as we can see, the SQL source type is Direct input, and in the SQL statement I would like to write our INSERT statement. As I have already typed the query, I go to SSMS, copy the INSERT statement, go to SSDT, paste the query, and click OK. So we are done, and in our Result Set the query is not going to return anything — that is None. Now click OK; our package is done. Now we can save it and execute: the package executed successfully. Go to SSMS to check the table; we can press Ctrl+R to refresh it, and three rows have been inserted. So this is None.

Second is the Single row result set. Here it will return a single row — it doesn't matter how many columns; the columns may be one or more than one, but there should be one row. So go to SSDT, double-click on this Execute SQL Task, and this time the result set should be Single row. In our SQL query I would like to write a SELECT statement: this is our SELECT statement, and it is going to return the server name and the current date and time, so it has two columns with a single row. I copy this SELECT statement, go to SSDT, paste the query, and click OK. As we have discussed, it will return a single row, so we need to go to the Result Set tab, and here we need to create two variables: the first is for the server name and the second for the current date and time. For the server name I would like to write Ser, and the value type should be String; click OK. For our second, I would like to create one more variable, and this is for the current date and time: the value type should be DateTime, and here we need to specify a default value, so I want to use the default value; click OK. In the Result Name column we need to specify 0 and 1 — it starts from 0, then 1: 0 refers to column 1, and 1 refers to column 2. Now we are done; click OK. We have finished our Execute SQL Task, and to check the output we have to use a Script Task. So go to Common, and here we can drag and drop a Script Task and connect it to the Execute SQL Task. Now double-click on this Script Task: here I would like to pass the two variables we created, the first for the current date and time and the second for the server name. Click OK, then click Edit Script. Here we need to write our code: I would like to use MessageBox.Show, and here we need to write Dts.Variables — I use the server name variable first — then .Value.ToString(). So this is for the server name; second, for the current date and time, I can copy and paste that line and replace the variable name with the current date and time one. We have finished our package, so we can close the editor and click OK. Now I would like to execute our package: here, as we can see, this is our server name — click OK — and this is our current date and time — click OK. So it looks good.

The Full result set is going to return the full result set: maybe a single row or more than one row, maybe a single column or more than one column — it could be anything. So go to SSDT, and this time I would like to delete our Script Task. Double-click on the Execute SQL Task: in our Result Set I would like to use Full result set, and instead of the SQL statement we had written, I would like to select the whole table. So I would like to copy this SELECT statement.
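Here is a sketch of how the two result-set modes hand data to variables, again using sqlite3 as a stand-in for SQL Server; the variable names are illustrative:

```python
# Sketch of the result-set modes: a Single row result set maps columns to
# variables by 0-based result name; a Full result set stores the whole
# rowset in one object-typed variable.
import sqlite3

conn = sqlite3.connect(":memory:")

# Single row: one row, any number of columns; result name 0 -> first
# column, result name 1 -> second column.
row = conn.execute("SELECT 'MYSERVER', datetime('now')").fetchone()
variables = {"Ser": row[0], "CurrentDateTime": row[1]}
print(variables["Ser"])  # MYSERVER

# Full result set: any number of rows; the entire rowset goes into a single
# object variable (result name 0), to be walked later by a Foreach Loop.
conn.execute("CREATE TABLE t (id INTEGER, name TEXT, dob TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(1, "a", "2000-01-01"),
                  (2, "b", "2001-01-01"),
                  (3, "c", "2002-01-01")])
variables["ObjValue"] = conn.execute("SELECT * FROM t").fetchall()
print(len(variables["ObjValue"]))  # 3
```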
Once we execute it, we can see we get three columns and three rows. So go to SSDT, paste that query, and click OK. Go to Result Set: in Result Set I would like to remove the two earlier entries and add a new one, and this is for 0. Here I would like to create one more variable, and this variable should be of Object type: ObjValue, with type Object. Click OK, then again OK. Because the query is going to return more than one row, we need to use an Object-type variable: it will hold the data we are getting from our query. So I execute, and it looks good. Here we cannot use a Script Task directly to check the data; we have to use a Foreach Loop container. Once we discuss the Foreach Loop container, we will see the output for this Full result set.

Hello, welcome to SSGunitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss Execute SQL Task parameter mapping. Here I have created two scenarios: first, we want to insert a record into a table by using variables; second, we want to execute a stored procedure with a parameter. In our first query, as we can see, if we want to use variables to insert a record into our table, then we have to specify the values with question marks, and then we can map those question marks to our variables. So go to SSDT, and here I would like to add one more package; I wish to rename this package "Execute SQL Task Parameter Mapping". Now I drag and drop an Execute SQL Task and double-click it. Here I would like to use OLE DB as the connection type, then I need to build the connection manager — as I have already made the connection, I want to use that one; click OK. In the SQL query I would like to paste our INSERT statement: I have already written it, so I copy it — but before pasting this query into SSDT, let's have a look at the table: it has eight rows. So go to SSDT, paste the query, and click OK. Here we need to create four variables. First, this is for the ID: the value should be an integer, and I would like to use ID 9. Second, it should be the name, a String value — and what would be the value? The value would be John. The third variable is for the date of birth, and here we need to specify our date: I would like to use 2019-01-01. Then the last variable is for the salary: it should be an integer, and the value would be 52000. So these are the values in our variables, and I would like to map these variables to our question marks. Go to Parameter Mapping, and here I would like to add four parameters. The first is for the ID, so we select our ID variable; second we select the name; then we select the date of birth; and last we select the salary. The direction should be Input. Then what would be the data types? For the ID it should be the NUMERIC data type; for the name it should be VARCHAR; for the date of birth we select DATE; and for the salary again we select NUMERIC. And what would be the parameter names? This is our first parameter, so we specify 0; this is the second, so 1; then 2; then 3. The numbering follows the question marks from the left, so our first parameter is the ID, then the name, the date of birth, and the salary. We have finished our mapping; now we can click OK, save our package, and click Start. The package executed successfully; we can stop it, go to SSMS, and press Ctrl+R: as we can see, ID 9 has been inserted successfully.

If I want to update the name for ID 2 with a new name, how can we do that by using the Execute SQL Task? This is the UPDATE statement I have written: we can copy it, go to SSDT, double-click on the Execute SQL Task, and paste our UPDATE statement in place of the old SQL statement. Here, as we can see, the name parameter comes first: for the name it would be 0, and for the ID it would be 1. Click OK, go to Parameter Mapping, and here I wish to remove the date of birth and the salary; and as we can see, this is the ID and this is the name, so put 1 for the ID and 0 for the name. Click OK and save. Go to Variables, and here I wish to change the values: the new name, and the ID should be 2. Now save and execute the package: it executed successfully. Go to SSMS, and on this table press Ctrl+R: as you can see, the name for ID 2 has been updated.

Next, I have created a simple stored procedure: this stored procedure accepts an ID parameter and returns the ID, name, and date of birth. I would like to create this stored procedure — created successfully. Now, if I execute this stored procedure for ID 1, it should return the information for ID 1: we are getting the ID, name, and date of birth. I can put a question mark in place of the ID, copy this EXEC statement, stop the package, go to the Execute SQL Task, double-click it, and paste — it accepts the ID. Click OK. Go to Parameter Mapping: I have removed the name mapping, keeping only the ID. Then we need to add three result-set entries — first 0, then 1, and we are getting a 2 as well: the first is the ID, second we are getting the name, then we are getting the date of birth. Now I click OK. By using a Script Task we can check our output: I drag and drop a Script Task and connect it to the Execute SQL Task. Double-click on the Script Task, and in ReadOnlyVariables I select our date of birth and, second, the name. Click OK, go to Edit Script, and here I use MessageBox.Show with Dts.Variables, specifying the variable name — that is the date of birth — then .Value.ToString(). So this is for the date of birth; second, we do the same for the name, so we can specify the name. Now save, close, and click OK. We have finished our package, so I wish to execute it: as we can see, this is the date of birth and this is the name. Now I stop the package, go to Variables, and change the ID: we want the output for ID 3. Stop this and go to SSMS: here, as we can see, this is the name stored for ID 3, so our output should show that name. We execute the package, and this is the date of birth and this is the name for ID 3 — we are getting what we expected.

Hello friends, welcome to SSGunitech, Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss parallelism. So what is parallelism? It is a great technique to improve the performance of our data integration operations, and SSIS supports parallel execution of package tasks and transformations. When we talk about parallel execution, the MaxConcurrentExecutables property comes into the picture. The default value of this property is -1, which means the number of logical processors plus two tasks can be executed at a single time. As Microsoft suggests, when we set this property to N + 2, where N is the number of processors, we will get the best performance. So how can we find the number of processors? We need to go to SSMS, right-click on the server name, go to Properties, then go to Processors, and here, as we can see, we need to expand, then expand again: as we can see, we have four processor nodes, so once we set 4 + 2, meaning 6, we should get the best performance — and performance will also depend on the hardware specifications as well.
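The effect of this property can be sketched with a thread pool in plain Python. This is only an analogy for the concurrency limit, not how the SSIS engine is implemented:

```python
# Sketch of MaxConcurrentExecutables as a worker-pool limit: with a limit
# of 5, ten tasks run at most five at a time (two "waves" of five).
import threading
import time
from concurrent.futures import ThreadPoolExecutor

MAX_CONCURRENT = 5
running, peak = 0, 0
lock = threading.Lock()

def task(i):
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)   # track how many ran simultaneously
    time.sleep(0.05)                # stands in for inserting 10,000 rows
    with lock:
        running -= 1

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
    list(pool.map(task, range(10)))

print(1 <= peak <= MAX_CONCURRENT)  # True
```

With the sequential design (each task's precedence constraint chained to the previous one), the same ten tasks would run one at a time regardless of this limit, which is why removing the connections matters in the demo below.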
as we can see we are having two execute SQL tasks first it will capture start time of this package and in our last and it will capture end time of this package and inside this package we need to use 10 execute SQL tasks where we will insert record in our table so go to on ssdt to implement the package so here i wish to add one more package then i wanna rename this package i need to drag and drop execute SQL tasks so it will capture start time of this package so double click on this here we need to make our connection manager so click on new connections as i have already made the connection so i gonna use that one click on okay then in SQL statement i have already typed the query so i gonna copy that one and paste it here so this is our query let's have a look of this table so this is our table so as we can see for id1 we are having four sequence and start time is null and end time is null so it will update start time and end time once our package will be completed successfully so i gonna copy this update statement and paste in execute SQL tasks now click on okay again okay now i need to drag and drop one more execute SQL tasks and it will capture end time of the package so i need to use that connection and so go to on ssms and i need to copy this update statement and paste in execute SQL tasks now i need to click on okay again okay so we have captured our start time and end time now i gonna use a sequence container then i need to connect start with sequence container and sequence container with connect with end and inside this sequence container i gonna use execute SQL tasks double click on this and need to make the connection then here i want to write a query to insert record in a table so i have already write the query so this is our query by which we will insert 10,000 record in a table so i gonna copy that one and go to on ssdt and paste it here and click on okay again okay so we have done our first now i need to copy this and paste over here and this is our 
second how we can copy and paste third fourth fifth sixth seventh so here we are having 10 executes equal tasks that will be executed to insert record in our table so i gonna connect these one by one then second then third then fourth so here i have connected one by one now i gonna execute this package and to check our start time and end time so once we click on start then as we can see our first execute SQL tasks will be completed then move to second once second will be completed move to third once third will be completed move to fourth then fifth and it will take until our last execute SQL tasks will not be completed so as we can see package has been executed successfully so i gonna stop this package and go to on ssms and here we can refresh this table and we can check so start time and end time has been captured so this package was started from 21 25 29 and and it's completed 21 26 0 8 now move to ssdt and design this package to run parallel execution so i need to remove these connections now right click on this package and go to on properties so here we have a property which is maximum concurrent executable so i gonna change with five so five execute SQL tasks will be executed at once once these five will be completed then next five will be executed so before in this package i need to change our update statement and this time i would like to run for id2 because this is for parallel execution now click on and and here i would like to change for id2 click on okay okay we can save and click on start so as we can see five execute sql task is going to executed at once so package executed successfully now i can stop this package and go to on ssms and here i can refresh this table so we can check 21 28 54 is our start time and 28 21 29 is our end time so this package is executed very early previously when we executed our package with SQL then it was taking a lot of time but once we design this package for the parallel execution then it's going to execute early thank 
you so much for watching this video if you like this video please subscribe our channel to get many more videos don't forget to press the bell icon to get the notification of our newly uploaded videos thank you so much hello friends welcome to ss unitec social decide and this is continuation of ssis tutorial so today we gonna discuss about containers so what is container and what is sequence container so let's start with container so containers are a logical grouping of tasks which allow us to manage the scope of the task together ssis containers are controls that provide structure of the package and it supports repeating control flows in packages and group tasks into a meaningful unit of work in ssis we have three type of containers first is sequence container then for loop container and last is for each loop container so for each loop container is used the most today we will discuss about sequence container what is sequence container and what are the real-time use of sequence container it is used to grouping logically related tasks together which makes it simple to divide the control flow in a package into group of tasks and containers that we manage as a single unit so what is the purpose of sequence container so here i have written six scenarios where we can use sequence container and that is very useful for developing our package so first is grouping tasks so that we can disable a part of package that is no longer used so what it means go to on ssdt and here i would like to add one more package then i wish to rename this package with sequence container then here as we can see inside these containers we have three containers first is for loop container for each loop container and sequence container we can drag and drop sequence container inside the sequence container we can use multiple dataflow tasks so this is our first dataflow task and this is our second dataflow task then this is our execute SQL task and this is our execute SQL task so these four tasks is 
a label inside this sequence container if we have a requirement where we want to disable all these tasks then there is no need to go to on each task and click on disable we can simply disable our sequence container so that all these tasks will be disabled as we can see once we disable then all these tasks have been disabled in our second narrowing the scope of the variable to a container if we wish to create a variable and scope of that variable will be available inside this sequence container only then we can create our third is managing the property of multiple tasks into one by setting the property of the container like if we want to manage any property of these four tasks then there is no need to go to on each task and set the property we can simply set the property of this sequence container then that property will be applicable for these tasks next is ensure that all tasks in the container must be executed before next task executed so it ensures that once we execute this package our sequence container will be executed first if we have one more dataflow task and that is outside the sequence container then once these four tasks will be completed after that next dataflow task will be executed so it ensures that our four tasks which is placed inside the sequence container will be executed first then our dataflow task will be executed which is available outside this sequence container next create a transaction across a series of data related tasks but not on the entire package let's assume we have a scenario where we want to use transaction and we don't want to use transaction on entire package we just want to use transaction inside this sequence container then we have a option by which we can use the transaction for this sequence container and it will be available only for those tasks which is placed inside this sequence container creating event handler on a single container where we can send an email if anything fail inside the container so what it means like we 
have four tasks, and if any task fails, we want to send an email. There is no need to go to all four tasks and set the send-email-on-failure property on each one; we can set it only on the sequence container, and if anything fails inside the container, the mail will be sent. So these are the scenarios in which we can use a sequence container. Once we discuss the case study, we will see the real-time use of the sequence container and how we can apply it. Thank you so much for watching this video. If you like this video, please subscribe our channel to get many more videos. Don't forget to press the bell icon to get the notification of our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSGunitech. Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Foreach Loop container. Here we will cover the basics of the Foreach Loop container, like what the Foreach Loop does. So let's start. The Foreach Loop container is used to loop through a set of enumerated items, such as files in a directory or rows in a table. Within the container we can add one or more control flow tasks; for example, we can use an Execute SQL Task or a Data Flow Task. We also have one more container, called the Task Host container, which is not visible from the IDE, but every task is contained in it; we can say it is the container that encapsulates a single task. So go to SSDT and let's have a look at the Foreach Loop container. Here I wish to add one more package, and I will rename this package Foreach Loop Basic. In the toolbox, as we can see, we have the Foreach Loop container, so we can drag and drop it and double-click on it. Here, as we can see, we have four tabs: first is the General tab, where we can specify a name and description; then the Collection tab, the Variable Mappings tab, and the Expressions tab. What can we do inside the Collection tab? In the Collection
tab we can select the enumerator type and set expressions. As we can see, here we have the Enumerator property with multiple enumerators, and we can select one according to our requirement; inside the Expressions section we can set expressions as needed. In the Variable Mappings tab we can map variables, and in the Expressions tab we can use expressions according to our requirement. Next is the types of enumerators. We have a total of 7 types of enumerators, and we will discuss them one by one. First is the Foreach File enumerator, second is the Foreach Item enumerator, third is the Foreach ADO enumerator, next is the Foreach ADO.NET Schema Rowset enumerator, next is the Foreach From Variable enumerator, next is the Foreach NodeList enumerator, and last is the Foreach SMO enumerator. So we have a total of 7 enumerators, and we will discuss them one by one in our upcoming videos. Thank you so much for watching this video. If you like this video, please subscribe our channel to get many more videos. Don't forget to press the bell icon to get the notification of our newly uploaded videos. Thank you so much.

Hello friends, welcome to SSGunitech. Susheel this side, and this is a continuation of the SSIS tutorial. Today we are going to discuss the Foreach File enumerator. In our last video, we discussed the basics of the Foreach Loop container and the types of enumerators. Before going forward, if you haven't watched part 10.1 of this video series, I would strongly recommend watching that video, where you can understand the dynamic connection manager. So let's start with the File enumerator. What is the Foreach File enumerator? It is used to loop through all the files present in a specified folder. It also has an option to traverse the sub-folders present in that specified folder. Say we have a folder that contains many files, and we want to load data from those files into a SQL Server table.
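As a loose analogy, the Foreach File enumerator behaves like a loop over every file matching a pattern in a folder, handing one name to the loop body per iteration. A minimal Python sketch of that idea (the folder and pattern here are illustrative, not SSIS API calls):

```python
# Minimal analogy for the Foreach File enumerator: yield each matching
# file in a folder, like one loop iteration per file.
import glob
import os

def foreach_file(folder, pattern="*.txt"):
    """Yield matching file names, mimicking the 'name and extension'
    retrieval option (sorted here for a predictable order)."""
    for path in sorted(glob.glob(os.path.join(folder, pattern))):
        yield os.path.basename(path)
```

Each yielded name would play the role of the loop's mapped variable; non-matching files (e.g. a stray `.csv`) are simply skipped, just as the enumerator's file-specification filter skips them.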
Then we can use this enumerator. It can be used to fetch the fully qualified file path, or the file name only, with or without the extension. Here we have a case study: we have four text files in a single folder, and we want to load data from those files into a SQL Server table. So let's have a look at the flat files. This is our folder, and as we can see, it has four files. This is our first file, and we have information for sales in India, then sales in Peru, then the UK, and then the US. So we have a total of four files, and each file has four columns: first is customer ID, then item name, then quantity, then value. We want to load data from these files into our SQL Server table. Go to SSMS, and here, as we can see, we have this customer sale table, and we want to load data into this table from these files. How can we achieve this? Go to SSDT, and here I would like to add one more package: a new SSIS package. I wish to rename this package Foreach Loop, and this one is for the File enumerator. As we discussed in our last video, we have three types of containers: the For Loop container, the Foreach Loop container, and the Sequence container. I am going to use the Foreach Loop container. Double-click on the Foreach Loop container and go to the Collection tab; here we can see the Enumerator property has multiple options, but since we are going to deal with files, we need to select the Foreach File enumerator. Here, as we can see, we have the enumerator configuration, where we need to select the folder path and the file extension. Before configuring the Foreach Loop container, let's create two variables: first for the folder path and second for the file name. This one is for the folder path, and it will contain the location of the folder where our files exist, so I need to copy that path and paste it into the Value field. The second is for the file name; I want to name it FileName, and it should be of String type.
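The role these two variables play can be sketched in Python: the folder path is fixed at design time, the file name is overwritten on every loop iteration, and the connection string is their concatenation, evaluated at runtime. Variable names mirror the ones created above; the folder path shown is a hypothetical placeholder, not the actual path from the video.

```python
# Sketch of the two package variables and the runtime concatenation
# the connection-string expression performs. Path is illustrative.
variables = {
    "FolderPath": "C:/FlatFiles/",  # hypothetical, set once at design time
    "FileName": "",                 # refilled by the loop on each iteration
}

def connection_string(variables):
    """Equivalent of the expression: FolderPath + FileName."""
    return variables["FolderPath"] + variables["FileName"]
```

On each iteration the enumerator would assign, say, `variables["FileName"] = "Sale India.txt"`, and the evaluated connection string would point at that file.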
Now I can save and close it. Next I need to double-click on the Foreach Loop container, go to the Collection tab, and select the File enumerator. In the Folder field we need to specify the folder path, so we can copy and paste it here. Then we can specify what type of files we want; we want only txt files, so we can specify *.txt. Here we have three retrieval options: first is name and extension, then fully qualified name, then name only. We want name and extension, so it will retrieve the name and extension, and we want to store that into a variable. This is our FileName variable, where we will keep the file name with extension. Now we can save it, and we need to place a Data Flow Task inside the Foreach Loop container. Double-click on the Data Flow Task. We want to get data from a flat file and load it into a SQL Server table, so we can drag and drop a Flat File Source and an OLE DB Destination. Double-click on the Flat File Source. Here we want to make a connection to any one of these files; I am going to make the connection with the India file. Click on Open, go to Columns, and it looks good. Now click on OK, and again OK. Next we need the destination; double-click on it and click on New. As I have already made the connection, I am going to click on OK. Here we need to specify our table name; we want to load data into the customer sale table. Go to Mappings. The mapping succeeded because our input column names and destination column names are the same. Now click on OK. We have done the mapping, and this is a static mapping. Once we run this package, our Foreach Loop container will be executed 4 times, because in this folder we have 4 files. It will be executed 4 times, but the data will be loaded from the Sale India flat file every time, because we made the connection only for India. So we need to make this connection dynamic. We can click on the Flat File Connection Manager.
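The whole loop-plus-data-flow can be sketched in Python: for every flat file in the folder, open a fresh "connection", read its rows, and append them to one destination table. File layout follows the case study loosely; the folder name and the CSV structure here are stand-ins, not the real paths from the video.

```python
# Sketch of the Foreach File loop + Data Flow Task: one pass per file,
# each pass appending that file's rows to a single destination table.
import csv
import glob
import os

def load_folder(folder, pattern="*.txt"):
    """Load every matching flat file in `folder` into one row list."""
    table = []
    for path in sorted(glob.glob(os.path.join(folder, pattern))):
        with open(path, newline="") as f:   # a new 'connection' per file
            table.extend(csv.reader(f))     # the data-flow load step
    return table
```

With four files of three rows each, the returned table would hold 12 rows, matching the 12 records seen in SSMS after the first successful run below.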
Then go to Expressions, and here we have the ConnectionString property. In its expression we can drag and drop FolderPath, then specify plus, and then FileName. Now we can evaluate the expression and see that we are getting a value, though the actual file name will be calculated at runtime. It looks good; click on OK, again OK. We have done it. Now we can execute our package to check the output. Click on Start. As we can see, the package executed successfully. Now we can stop the package and go to SSMS. Once we refresh this table, we should get 12 records, and we are getting 12 records, because 3 records are available in each flat file. Now double-click on the Foreach Loop container and go to the Collection tab. Here we can see an option, Traverse subfolders. Once we select this checkbox and click on OK, it will also check the subfolder available in this location. As we can see, we have Sale Australia there; it has the same columns and also has 3 records. Now I want to execute this package. Once I execute it, we can see our package failed, because when the connection string is calculated, it first takes the folder path, but this file is available inside the subfolder. The package is not going to find it in the top-level folder, because we have hard-coded the value for the folder path. So this is the difference between using folder path plus file name and using the fully qualified name. Here I am going to cancel, then cancel again, double-click on the container, and go to the Collection tab. This time we are going to use the fully qualified name. Go to Variable Mappings; now the FileName variable will receive the folder path plus the file name, and I can click on OK. Now go to the Flat File Connection Manager, and here go to Expressions. In this expression I need to remove the FolderPath part, because the folder path and file name together will now be kept in the FileName variable. Now we can click on OK.
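The failure just described can be sketched in Python: gluing a fixed folder path onto a bare file name produces a path that does not exist for files living in a subfolder, while the fully qualified path always resolves. The directory layout is hypothetical; only the mechanism mirrors the video.

```python
# Why 'name and extension' breaks under Traverse subfolders, while
# 'fully qualified name' does not.
import os

def name_and_extension(root):
    """Mimic folder path + bare file name: wrong for subfolder files."""
    paths = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            paths.append(os.path.join(root, name))  # always prefixes root
    return paths

def fully_qualified(root):
    """Mimic 'fully qualified name': the real path of each file."""
    paths = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            paths.append(os.path.join(dirpath, name))
    return paths
```

For a file like Sale Australia sitting one level down, the first approach yields a path directly under the root folder, which does not exist, so the connection fails, exactly the package failure seen above.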
Go to SSMS; here I would like to delete all these rows, so I can delete them. Once we refresh this table, we can see we no longer have any data. Now, once our package executes successfully, 15 rows should be available in our table. I can click on Start. It looks good. Stop the package, go to SSMS, and refresh this table. As we can see, 15 rows have been inserted into this table. So this is the difference between using folder path plus file name and extension, and using the fully qualified name: once we have sub-folders inside the specified folder and we want to load data from the files available in those sub-folders, we cannot use folder path plus file extension; we need to use the fully qualified name. Thank you so much for watching this video. If you like this video, please subscribe our channel to get many more videos. Don't forget to press the bell icon to get the notification of our newly uploaded videos. Thank you so much.