In this video, we will take a look at how to upload data from a local machine to an S3 data lake. To follow along, we need four things. First, a working S3 instance, which gives us an object store platform that can serve as a data lake. Second, a valid set of credentials for that S3 object store. Third, a sample dataset, or a directory of files, that we want to upload to S3. And lastly, the Python pip package manager, to install the s3cmd client that we will use throughout this tutorial.

We start by installing the s3cmd client with pip. Once s3cmd is installed, we check connectivity to our S3 data lake by listing the existing buckets. To do that, we first set the necessary environment variables, which hold the S3 access credentials: the host, the access key, and the secret key. I have already set my environment variables, so I will check connectivity with the s3cmd ls command. As you can see here, this lists an existing bucket, test, which I had previously created.

To create a new bucket to copy data into, we use the make-bucket command, mb; we will call this new bucket ai-library. Once the new bucket exists, let's go into the directory of the dataset we want to copy to our S3 data lake. It contains some sample models; we will use the risk-analysis directory, which holds a model and a dataset that we now want to sync to our bucket. To do that, we use the sync command, passing first the model directory we want to copy and then the bucket to copy it into.

Once the necessary files have been uploaded to the S3 backend, we list the contents of the bucket to check that the files were copied correctly, again using the same ls command. As you can see, the model file and the dataset files have been correctly copied into the ai-library bucket. The full command session is sketched below.
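For reference, here is a minimal sketch of the installation and connectivity check from the walkthrough. The endpoint s3.example.com and the S3_ACCESS_KEY/S3_SECRET_KEY variables are placeholders, not values from the video; s3cmd can also persist credentials in a ~/.s3cfg file generated interactively by s3cmd --configure.

```sh
# Install the s3cmd client (available on PyPI)
pip install s3cmd

# Placeholder endpoint and credentials -- substitute your own values
export S3_HOST=s3.example.com
export S3_ACCESS_KEY=...
export S3_SECRET_KEY=...

# Check connectivity by listing the existing buckets
s3cmd --host="$S3_HOST" --host-bucket="$S3_HOST" \
      --access_key="$S3_ACCESS_KEY" --secret_key="$S3_SECRET_KEY" ls
```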
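Creating the bucket and syncing the dataset might look like this. The bucket name ai-library and the directory name risk-analysis are taken from the video narration; these commands assume the credentials have already been persisted (for example with s3cmd --configure), so the per-command flags shown above are omitted.

```sh
# Create a new bucket to hold the data
s3cmd mb s3://ai-library

# From the directory containing the sample models, sync the
# risk-analysis directory (model + dataset) into the new bucket;
# the trailing slash syncs the directory's contents
s3cmd sync risk-analysis/ s3://ai-library/
```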
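Finally, listing the bucket confirms that the upload succeeded; if the sync created nested prefixes, s3cmd's --recursive option walks into them.

```sh
# List the bucket contents to confirm the model and dataset arrived
s3cmd ls s3://ai-library/

# List everything under any nested prefixes as well
s3cmd ls --recursive s3://ai-library/
```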