Hello, welcome to SS Unitex. This is a continuation of the Azure Databricks tutorial series. Today we are going to start with mount points inside the file system utility. Before going forward, if you haven't watched the last video of this series, I would strongly recommend watching it. In this video we will see why we need to create a mount point and what a mount point is.

Remember, we have discussed DBFS, the Databricks File System. DBFS is the internal storage for Databricks, and it is available inside the Databricks workspace and clusters. Because it is internal, we are not required to create any additional connection string to access the files stored in DBFS. But if a file lives outside DBFS, say in Azure Blob Storage or ADLS Gen1 or Gen2, then we need to create an additional connection, called a mount point, inside Databricks. Here we will be creating the connection to Azure Blob Storage.

The first thing to note is that a mount point can be created using either an account key or a SAS token. In this video we will cover both: first we will create one using the account key, and second using the SAS token. In the first example we have the command dbutils.fs.mount, which is used for creating the mount point. It takes three required input parameters: the first is the source, the second is the mount point name, and the third is the extra configuration. In the source, as we can see, we first have to supply the container name and after that the storage account name; everything else remains the same.
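The source parameter follows a fixed pattern. As a minimal sketch, assuming the container name "input" and the storage account name "ssunitec" used later in this walkthrough, the URI is assembled like this:

```python
# Hypothetical values for illustration, matching the names used in this tutorial.
container_name = "input"
storage_account_name = "ssunitec"

# A wasbs:// URI identifies one container inside one Azure Blob Storage account.
source = f"wasbs://{container_name}@{storage_account_name}.blob.core.windows.net/"
print(source)  # wasbs://input@ssunitec.blob.core.windows.net/
```

Only the container name and storage account name change from mount to mount; the rest of the URI stays the same.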
Next we have to supply the mount point name. Then, inside the extra configuration, everything remains the same except that we need to add the storage account name and, last, the account key. So these are the changes required in this syntax: first the container name, second the storage account name, third the mount point, fourth the storage account name again, and fifth the account key.

How do we get all these values? Let's quickly go into the browser. Before going into Azure Databricks, we are inside the Blob Storage. I have created a storage account with the name ssunitec, and inside it we have two containers: one input container and one output container. One thing to remember: mount points are created at the container level, not at the storage account level. So if we have 100 containers under a storage account, we have to create 100 mount points to access all of them. Here we will create two mount points, one for the input and one for the output. For the input we will use the access key, and for the output we will use the SAS token.

Inside the input container we have two files: an employee file and a sales file. First we will create the mount point, and then we will try to access these files. Let me quickly go into Azure Databricks. In this workspace I have already created a notebook, and the cluster is up and running. Before creating the mount point, let's look at the description of mount. Let me quickly run dbutils.fs.help(); inside this we can see all the available commands, including mount.
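Putting those five pieces together, an account-key mount looks like the following sketch. It only runs inside a Databricks notebook (dbutils is not available elsewhere), and the placeholder `<account-key>` stands for the key copied from the portal:

```python
# Mount the "input" container of the "ssunitec" storage account at /mnt/input.
# Replace <account-key> with key1 or key2 from the Access keys blade;
# avoid hard-coding real keys in shared notebooks -- prefer a secret scope.
dbutils.fs.mount(
    source="wasbs://input@ssunitec.blob.core.windows.net/",
    mount_point="/mnt/input",
    extra_configs={
        "fs.azure.account.key.ssunitec.blob.core.windows.net": "<account-key>"
    }
)
```

Note that the storage account name appears twice: once in the source URI and once in the extra-configuration key.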
Under mount we have mount, then mounts, refreshMounts, unmount, and updateMount. All these commands are available. Let me quickly look at mount, because in this video we are focused on learning about it. We can execute the help and scroll down to read all of this. Here we can see it asks for the source that we saw in the slide. Next it asks for the mount point, which we can supply, and after that some values that we can skip because they are not required. Then we can see the extra configurations, which we will supply.

I have already written the syntax for creating the mount point, so let me copy it and paste it here. Now we need to make the necessary changes. The first change is the container name: we can go to the portal and see that the container name is input; this is the container for which we want to create the mount point, so we replace the container name with input. After that it asks for the storage account name, which is ssunitec, so we replace the storage account name with ssunitec; everything else in the source remains the same. Next is the mount point name: I am going to supply input, because we are creating the mount point for the input location. After that it asks for the storage account name again; remember, that is ssunitec, so we replace it with ssunitec. And last we have to replace the account key. How do we get the account key? We simply go to the storage account level; don't go inside the container.
At the storage account level, scroll down to Access keys, and inside you will see key1 and key2. You can go with either one; let me go with key1. Let me copy it and paste it here. After pasting, let me execute. Once it is executed we should have created the mount point and be able to access the files inside the input container. The command returns true, so the mount point should be created.

How can we check that? Let me use dbutils.fs.ls, ls for listing all the files, on the input mount. Here we should see two files. Let me wrap it in the display command so the output is in tabular format, which is easier to read. Here we can see the two files: the employee file and the sales file.

Next we want to create one more mount point, for the output, using the SAS token. Let me quickly copy the same thing we wrote for the account key. This time we want it for the output container, so let me use output as the container name; the rest of the source remains the same. In the mount point name, instead of input I am going to call it output. We have to modify the extra configuration a little: instead of the account key we have to use sas, followed by the container name. So those are the two changes: replace account key with sas, and then supply the container name; everything else remains the same. Now we have to specify the SAS token. How do you get the SAS token? Again, go into the Blob Storage at the storage account level; once more, remember not to go inside the container.
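The verification step above can be sketched as follows; again this only runs inside a Databricks notebook, and "/mnt/input" is the mount point name created in this walkthrough:

```python
# List the files visible through the new mount point; display() renders
# the result as a table in the notebook instead of a raw Python list.
display(dbutils.fs.ls("/mnt/input"))
```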
At the storage account level, search for Shared access signature. Under allowed resource types, select all three. Scroll down and set the expiry of the token; I am going to leave it as it is because I am creating this only for testing. Here we can see the SAS token, so we copy it, go back, paste it, and execute. It should execute successfully, and the mount point named output will be created. As we can see, the output is true, which means the mount point was created successfully.

Let me quickly verify that with dbutils.fs.ls on the output mount and execute. As of now we don't have any file under it, which is why we see a blank result, but the query executed successfully. So we have created two mount points, one using the account key and one using the SAS token. I will provide these two queries in the description of this video so you can use them.

Now we want to copy a file from the input location to the output location. Let me quickly go inside the input container, where we have the employee file, and copy it to the output. For copying a file from one location to another we can use dbutils.fs.cp, which we saw in the last video. Here we have to supply the source: this is the path of the employee file, so we copy this path and paste it below. From this input location the employee file will be copied to the output location. We simply execute it, and once it is executed the file should be copied.
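The two steps above, the SAS-token mount and the copy, can be sketched like this. It only runs in a Databricks notebook; `<sas-token>` is the placeholder for the token copied from the portal, and the file name employee.csv is an assumption, since the transcript only calls it "the employee file":

```python
# Mount the "output" container using a SAS token instead of the account key.
# Note that this config key names the container as well as the account.
dbutils.fs.mount(
    source="wasbs://output@ssunitec.blob.core.windows.net/",
    mount_point="/mnt/output",
    extra_configs={
        "fs.azure.sas.output.ssunitec.blob.core.windows.net": "<sas-token>"
    }
)

# Copy the employee file from the input mount to the output mount.
dbutils.fs.cp("/mnt/input/employee.csv", "/mnt/output/employee.csv")
```

The only structural differences from the account-key version are the config key (fs.azure.sas instead of fs.azure.account.key, with the container name added) and the credential supplied.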
The command returns true. Let me quickly go to the output folder: previously we did not have any file there, but now we can see this file. So we have successfully copied the file within the external Azure Blob Storage. Now let me read the data from this file to confirm the copy really worked. For that we can use spark.read.option; I am using option because the file has a header, so we have to mark header as true, and after that we read it as CSV and paste the path. First we read from the input location. Let me execute; it should have five records, I guess. The raw output is not easy to read, so let me wrap it with display. Now it's running and we should see the output: five records with all these values. Let me also quickly check the copied file in the output location; it should have the same number of records and the same data. We execute and check, and as we can see it has the same data.

Now the last thing I want to discuss in this video is how we can delete a mount point. We were able to create it, but how can we delete it? For that, let me run dbutils.fs.help again and see the available commands. Under mount we can see unmount; by using unmount we can delete a mount point. Let me look at its help as well. Here we can see it says it is going to delete a DBFS mount point, and once this method returns, the mount point metadata is guaranteed to be deleted. As we can see below, it asks for only one parameter, the directory, and the directory means your mount point name. So let me delete: dbutils.fs.unmount, and under that we simply supply the input mount point.
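The read-back check and the cleanup can be sketched as follows; this is Databricks-notebook-only code, and the path again assumes the hypothetical file name employee.csv:

```python
# Read the copied CSV back, treating the first row as the column header.
df = spark.read.option("header", True).csv("/mnt/output/employee.csv")
display(df)

# When a mount is no longer needed, remove it by its mount point name.
dbutils.fs.unmount("/mnt/input")
```

After the unmount returns, any read through /mnt/input fails with a path-does-not-exist error, which is exactly the behavior shown at the end of the video.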
Let me try to execute it. Once it is executed, the mount point should be deleted, and here we can see that the mount point is unmounted, which means it got deleted. If we try to read from it again, we should not be able to, because it is no longer available; as we can see, the path does not exist, since we can no longer access the input location. That's all for this video. Thank you so much for watching. If you liked this video, please subscribe to our channel for many more videos. See you in the next one.