The steps demonstrated in this video are available on the opendatahub.io website in the Getting Started section. Specifically, we will go over the quick installation steps and a few of the steps in the basic tutorial that demonstrate how to access the JupyterHub application. So let's begin.

Here I have an OpenShift 4.1 cluster that was deployed using CodeReady Containers. I've configured this cluster with six CPUs and 12 gigabytes of memory, which is enough to deploy a few of the components provided by Open Data Hub. Feel free to increase these resources as your hardware allows.

Let's get started. The first thing we need to do is create an OpenShift project for the Open Data Hub deployment. We can do that by clicking Create Project and giving it the name odh.

In order to deploy an instance of Open Data Hub, we first need to deploy the Open Data Hub operator using the OpenShift OperatorHub. To access the OperatorHub, navigate to Catalog and then OperatorHub. Here you will see all of the community operators that are available for installation in the cluster. You can find Open Data Hub in the list by typing the keyword "opendatahub" and selecting it from the filtered list. Click Continue on the community operators dialog. Before you install the operator, feel free to review the description of the currently available version for this release of Open Data Hub. For this video, we will be deploying JupyterHub and the radanalytics.io Spark operator.

To begin the operator installation, click Install. On the operator subscription page, make sure the project we created earlier is selected, and accept the defaults for everything else. On the subscription overview page, wait for the status to show "1 installed". While the operator is installing, we can view its status by going to Installed Operators and selecting the Open Data Hub operator.

Once the status shows InstallSucceeded, we can begin to customize our deployment of Open Data Hub. Click on the Open Data Hub operator, and then click Create New. On this page, you're presented with a template of the Open Data Hub custom resource. In this template, we can choose which components will be deployed and how those components are configured. Here, we want to make sure that JupyterHub and the Spark operator are deployed; you can control each component's installation by setting its odh_deploy property to true. The template defaults to deploying Prometheus and Grafana by enabling monitoring, but for this video we want to disable that, so we'll change its odh_deploy property to false. Since we're running in a cluster with limited resources, we also need to modify some of the properties of the JupyterHub deployment. Specifically, we want to change the CPU allocated to the Spark cluster nodes to one, reduce the Spark memory to one gigabyte, and disable the Spark worker nodes (a sketch of these settings appears at the end of this step). The remaining defaults will let us safely deploy JupyterHub on this cluster; if you want to customize any property further, feel free. Now, to deploy Open Data Hub, go ahead and click Create.

On this page, you'll see the Open Data Hub custom resource that we just created. If you want to monitor the status of the deployment, click on Workloads and then Pods. Here you can see all of the pods that are deploying as part of our Open Data Hub custom resource creation: specifically, JupyterHub, a database for JupyterHub, and the Spark operator.
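For reference, the customizations described above correspond to an excerpt of the custom resource along these lines. This is only an illustrative sketch: the component keys (aicoe-jupyterhub, spark-operator, monitoring) and the Spark sizing field names are assumptions based on this walkthrough, and the authoritative layout is the template shown in your version of the operator.

```yaml
apiVersion: opendatahub.io/v1alpha1   # assumed API version for this release of the operator
kind: OpenDataHub
metadata:
  name: example-opendatahub
  namespace: odh
spec:
  # JupyterHub component (component key and field names assumed for illustration)
  aicoe-jupyterhub:
    odh_deploy: true        # make sure JupyterHub is deployed
    spark_cpu: 1            # one CPU per Spark cluster node
    spark_memory: 1Gi       # reduce Spark memory to one gigabyte
    spark_worker_nodes: 0   # disable the Spark worker nodes
  # Radanalytics Spark operator
  spark-operator:
    odh_deploy: true
  # Prometheus and Grafana are deployed when monitoring is enabled;
  # disable it for this resource-constrained cluster.
  monitoring:
    odh_deploy: false
```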
Once each pod reports a status of Running, we can proceed with accessing JupyterHub.

To access the JupyterHub application, we're going to navigate to the JupyterHub route that was created with this deployment. Click on Networking and then Routes to view all of the routes that are available as part of this deployment. To access the application, we just click on the location field for the JupyterHub route. We can proceed by clicking "Sign in with OpenShift", and we'll log in as the regular developer user that's available on this cluster.

For the Open Data Hub deployment of JupyterHub, we've added a few customizations that allow individual users to tailor the notebook deployment to their workflow. Specifically, they can select from a list of available notebook images, change the notebook pod size to fit their workload, and, if GPUs are available in the cluster, request those directly. They can also add any environment variables they want to be available in the notebook pod. Here, we'll add a "Hello World" environment variable that will be available in our Jupyter notebook, and then spawn the notebook pod.

Once the notebook pod has spawned, we can create a blank Jupyter notebook by clicking New and then selecting the Python 3 notebook. The notebook opens with an empty cell where we can start to insert our Python code. Just to show you how this works, we'll create a very simple "Hello World" script and execute it. This is just a simple script that shows how you can access the environment variable inside the Jupyter notebook pod and print it out (a sketch of such a cell appears at the end of this section); we execute the cell by hitting Run.

From here, you can start the basic tutorial that is available on the opendatahub.io website. The basic tutorial demonstrates how to interact with a Spark cluster and read and write data to object storage from a Jupyter notebook. For more information on Open Data Hub and its components, please visit our website at opendatahub.io, and subscribe to this channel to be notified when new Open Data Hub videos are available.
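As promised, here is a minimal sketch of the notebook cell described above. The variable name HELLO_WORLD is an assumption for illustration; use whatever key you entered on the spawner page.

```python
import os

# Read the environment variable that was set on the JupyterHub spawner page.
# "HELLO_WORLD" is an assumed name for illustration, matching whatever key
# you entered when spawning the notebook pod.
message = os.environ.get("HELLO_WORLD", "HELLO_WORLD is not set")
print(message)
```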