Hi, my name is Johnny Rickard and I'm a special projects engineer on the Red Hat Validated Patterns team. Today I'm going to do a demo of the Medical Diagnosis Validated Pattern. Are you a data scientist looking to deploy AI/ML at the edge? Or are you an OpenShift administrator looking to use automation to deploy multiple Red Hat technologies together? Well, you've come to the right place, so stick around.

This pattern was created using the x-ray demo from the Data Services Jumpstart Library here at Red Hat. Today's demo will be deployed on a Red Hat OpenShift 4.11 cluster with three control plane nodes and three compute nodes. The compute nodes are AWS machine type m5.4xlarge, which accommodates the storage and application resources co-located on the same compute nodes. Using the Validated Patterns framework, we deploy multiple operators, the instances and configurations of those operators, as well as the x-ray application, all through a GitOps workflow.

To get started, let's open a new window and browse to https://hybrid-cloud-patterns.io. In the left-hand navigation menu, we'll find the dropdown for Patterns, then Medical Diagnosis, and then let's click Getting Started. The first prerequisite is that we must have an OpenShift cluster. Next, you'll need your own GitHub account, and you'll need an S3 storage service available within your public or private cloud to host the x-ray images. This is because we don't have an x-ray machine to take the images and send them to our cluster, so we have to emulate that service. Next, we'll need to install the Helm binary on our control machine, and we'll need to install some packages using dnf, Homebrew, or whatever our package manager is. Finally, we'll need to install some Python dependencies to support the kubernetes.core Ansible Galaxy collection.
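As a rough sketch, on a Fedora or RHEL control machine the prerequisite install might look like this. The package list and the Helm install script here are assumptions based on a typical setup; the Getting Started page has the authoritative list for your platform:

```shell
# Install the Helm binary (official install script; see helm.sh for alternatives)
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Supporting packages via dnf (use Homebrew or your own package manager elsewhere)
sudo dnf install -y git make python3-pip ansible

# Python dependencies for the kubernetes.core Ansible Galaxy collection
pip3 install --user kubernetes
ansible-galaxy collection install kubernetes.core
```

On macOS, `brew install helm ansible` covers most of the same ground.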
Please note that Ansible is only used during the bootstrapping of the pattern; once the pattern has been initialized, the state of the cluster is managed through the GitOps controller. To set up the S3 bucket for x-ray images, we provide a utilities repository with programs to create the bucket as well as the bucket policy. We also have a program that will sync images from a source bucket to a target bucket in your environment.

With the prereqs out of the way, we're ready to fork the medical diagnosis repo. So let's open a new tab and go to github.com/hybrid-cloud-patterns. In the list of repositories, we'll see medical-diagnosis, so let's click the link. Then we'll just click Fork in the upper right-hand corner to copy the repo from hybrid-cloud-patterns into our personal account. Your view will look slightly different: under Owner, you'll see your account, and under Repository name, you'll see medical-diagnosis. Just click Fork, and in a few moments you'll be redirected to your account.

I'm going to jump over to my GitHub account and get ready to clone the repo. Now I can click the green Code button and then click the copy button. Let's open a terminal, and I'm going to change directory into my demo directory. Now let's type git clone and paste the link that we just copied from GitHub, and then cd into medical-diagnosis.

Next, I'm going to jump back over to the documentation, and we can see that we need to create a values-secret.yaml file. If you're not familiar with Helm, it allows us to override default values that are provided with charts and templates, and that's exactly what we're going to do here to create our credentials. This way we don't commit our secrets or credentials into a source control manager like GitHub or GitLab. We provide a template file in the root of the medical-diagnosis repository called values-secret.yaml.template.
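Spelled out, the clone step looks like this (`<your-account>` is a placeholder for your own GitHub account):

```shell
git clone git@github.com:<your-account>/medical-diagnosis.git
cd medical-diagnosis
```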
So let's copy that file to values-secret.yaml in our home directory. It's important that we get the path and the file name absolutely correct: it needs to go in your home directory as values-secret.yaml. Let's go ahead and edit ~/values-secret.yaml. Anywhere we see the word PLAINTEXT in all caps, we need to replace it with a unique, custom password. In your environment, or in a production environment, you want this to be a complex password: multiple uppercase letters, lowercase letters, numbers, and special characters. For the purposes of this demo, I'm just going to use a secure-ish password: RedHat123!.

Next we need to modify the values-global.yaml file. Again, we're using external values files to override default Helm values, but this time at the global scope, so we're going to define things like cluster information as well as our cloud provider information. But before we do that, let's create a branch. I'm going to create a branch called demo; you can name your branch whatever you'd like. Now I'm going to edit the values-global.yaml file.

In values-global.yaml, we define some options. The first is useCSV, and it's set to false. This tells the Operator Lifecycle Manager, as well as the GitOps controller, to use the latest version of the operator within the subscription channel. Next is syncPolicy, and it's set to Automatic. This is an Argo CD configuration, and it tells Argo to sync automatically whenever it detects a difference in state between what's in Git and what's on the cluster. Next, we have installPlanApproval, which tells the Operator Lifecycle Manager to automatically approve the install plan for the subscription. If this were set to Manual, we would have to go into the OpenShift console and manually approve the install plan before the subscription would install.
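As a sketch, the options just described sit in values-global.yaml roughly like this; treat the key paths as illustrative, since the exact nesting may differ slightly in your copy of the repo:

```yaml
global:
  options:
    useCSV: false
    syncPolicy: Automatic
    installPlanApproval: Automatic
```

And the secret-file and branch steps from above, as commands: `cp values-secret.yaml.template ~/values-secret.yaml`, then `git checkout -b demo`.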
Now we need to define our cluster and cloud information. I'm deploying into AWS, so I'm going to change my cloud provider to aws. My storage class name is gp2, and I've deployed my cluster in us-east-1. My cluster name is jrickard-xraydemo, and my domain is blueprints.rhecoeng.com. I'm also going to update the bucket source. This is the S3 bucket that we created during the prerequisites; it's where we actually declare to the cluster the bucket name that we created. I'll save and quit this file, and next we're going to commit the change to Git. Let's run git commit -am and give it a nice message like "updated values-global.yaml", and then we'll just push it to our repo. I'm pushing to origin demo because demo is the branch I created; yours will be whatever you named your branch.

You may be wondering: how does the GitOps controller know what to deploy? The answer is simple: we declare it in the values-hub.yaml file. If we take a look at values-hub.yaml, we can see that we have a key called namespaces, and here we're declaring to the GitOps controller that we want it to create and manage each of these namespaces. Next we have our subscriptions. Here we're declaring which operators we want created, which namespaces they're going to be deployed to, which channel we want them to use, and, if necessary, which operator catalog. We've built in the capability to pin operators to specific versions, but that's not an out-of-the-box default configuration.

Next we have projects. Projects are an Argo CD resource used to group applications together, and here we're declaring that we want two projects created: one called hub and one called medical-diagnosis. The next item is applications. Applications are also an Argo CD resource, and they're used to describe the application to be deployed. In this pattern, we're creating applications that use Helm charts as well as Git repositories.
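Pulling the cluster and cloud settings together, the edited section of values-global.yaml might look something like the following. The field names here are illustrative, not authoritative (check your own file for the exact keys), and the bucket name is a placeholder:

```yaml
global:
  datacenter:
    cloud: aws
    storageClassName: gp2
    region: us-east-1
    clustername: jrickard-xraydemo
    domain: blueprints.rhecoeng.com
  xraylab:
    bucketSource: "my-xray-source-bucket"   # placeholder: the S3 bucket created in the prereqs
```

After saving, `git commit -am "updated values-global.yaml"` followed by `git push origin demo` (substituting your branch name) gets the change into your fork.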
The HashiCorp Vault application is a Helm chart, and with this app we can define overrides similar to how we define overrides in values files or at the command line. This chart is hosted on a remote repository located at helm.releases.hashicorp.com. This is a neat feature, because if you don't want to maintain a chart that's already been developed by a vendor, you can plug into their repo, pull the chart down, and deploy it like you normally would.

For our Git repository applications, we define these using a path. Anything that's common to all Validated Patterns goes in the common directory, and anything that's specific to this particular Validated Pattern goes under charts. Sometimes, when we have multiple clusters, we'll have charts/all and charts/<cluster-group-name>, so we can declare which applications go to which cluster. For clarity: we define the path to tell the GitOps controller where within the Git repository that Helm chart or template lives, so when it goes to install it, it knows exactly where in the repo to find the resources.

Let's jump over to the OpenShift console. We can get our console address by typing oc whoami --show-console. Next, we just copy and paste the link into a browser and log in. Something to keep in mind: I'm using self-signed certificates during this demo. I would be able to bypass the security warnings if I had added the CA to my local trust store, or if I had legitimate certificates. I'm just going to log in as kubeadmin. Within the console, I'm going to go to Operators and then Installed Operators, so we can do a side-by-side comparison of the make install output and the operators being deployed. Over in our terminal, let's run make install. We can see that the very first thing that happens is the validate-origin target.
This target just ensures that the Git remote URL, our repo, and the branch we created exist and are valid. The next thing that happens is the deploy target, which makes sure the prerequisite software is installed and also installs the Helm chart for medical diagnosis. It installs and configures the OpenShift GitOps operator. After that, the vault-init target runs; this is where we wait for HashiCorp Vault to become available so that we can unseal it and then load the values from our values-secret file to create the credentials for the database and the Grafana dashboard.

If we look at the OpenShift console, we can see that the OpenShift GitOps operator is starting to go through its install process: the install plan has been approved, the operator is getting installed, and it's going through any upgrades. This can take some time to complete, but when it's done, we're going to jump over to the OpenShift GitOps console and take a look at the medical diagnosis application. So in our OpenShift console, let's go up to the nine-box icon and click on Cluster Argo CD, because this is the cluster instance of the OpenShift GitOps operator. Let's just accept the security warnings. A benefit of using Red Hat OpenShift and OpenShift GitOps together is that we get to take advantage of the OAuth proxy: I logged into the OpenShift console using my kubeadmin account, and because of the OAuth proxy, I can log in with that same account in the Argo CD user interface.

We have the tile for medical-diagnosis-hub, and then we have the actual application resource within the UI. Argo CD can take up to three minutes to reconcile, which means that although we've made a change and we've set syncing to automatic, it can take up to three minutes for the initial sync to take place. I'm going to pop the Argo CD console out to the right.
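While vault-init is waiting, you can peek at Vault from the CLI. A hedged example; the vault namespace and the vault-0 pod name are assumptions about how the chart deploys in this pattern, so adjust for your cluster:

```shell
# Check whether the Vault pod is up yet
oc get pods -n vault

# Ask Vault itself whether it has been initialized and unsealed
oc exec -n vault vault-0 -- vault status
```

Note that `vault status` exits non-zero while the server is still sealed, which is expected at this stage.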
That way, we can watch side by side as the operators get deployed and the applications start to render in the Argo CD console. This takes a few minutes, so I'm going to fast-forward the video and pick up right before the execution actually starts.

Now we can see the application beginning to sync. If you remember, we declared the list of namespaces and applications to be deployed onto our cluster in the values-hub.yaml file. The yellow icons just mean that an application is out of sync, so we're waiting for Argo CD to reconcile what's in Git with what's on the cluster. In our OpenShift console, we can see that a number of operators are now being installed and deployed onto our cluster. In the Argo CD console, we now have a green check mark next to the namespaces, which means those namespaces have been created. This instance of OpenShift GitOps is scoped at the cluster level, so it's essentially creating all the objects that are applied at the cluster scope. There's another instance of OpenShift GitOps that gets installed in the medical-diagnosis-hub namespace, and that one is used to manage all of the resources for the Medical Diagnosis Validated Pattern.

Now let's jump over to our developer context: under Administrator, click Developer, and then under Projects, let's type in xraylab-1, because our demo is going to take place within the xraylab-1 namespace. We can start to see some of the resources getting provisioned in our project. Before we go any further, since we're using OpenShift 4.11, let's go to our username, then User Preferences, and change the theme to dark, and now we can be one of the cool kids. We'll just jump back over to the Topology view, and now this just looks better. A few minutes ago, we talked about the namespace-scoped Argo CD instance.
So what we're going to do is jump over to that specific OpenShift GitOps instance and take a look at the applications. In the OpenShift console, let's click the nine-box dropdown and then click on hub-argo-cd. We'll bounce this out to its own window as well and accept the security warnings. Again, we'll take advantage of the OAuth proxy with OpenShift and OpenShift GitOps. Within this instance of the OpenShift GitOps operator, we can see all of the applications that we declared in values-hub present in the console. Let's change the items per page from 10 to all, and let's also filter our applications by project: click on Projects and then hub. We can tell that two applications were matched by the hub project filter, one being golang-external-secrets and the other being vault. Vault is a Helm chart application, while golang-external-secrets is a Git repository application, and we can tell by the icons in the upper left-hand corner of their application tiles.

Let's clear this filter and go over to our Argo CD user interface to take a look at our other applications. When this deployment has completed, we expect to see everything as healthy and synced, so all green, with the exception of ODF. While the deployment is going on, we will see some applications that show red (degraded) or yellow (missing or out of sync), and that's just because of the order in which the resources are being deployed; they haven't all been accounted for yet. As they get provisioned and start checking in, these applications will turn from yellow and red to green. The only exception is going to be ODF, because the manifest that's used to install the storage server gets some extra injections once it's been installed, which always makes it look out of sync. So ODF will always be healthy and out of sync.
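If you'd rather poll this from a terminal than watch the UI, Argo CD applications are ordinary custom resources. Something along these lines should work; the namespace and the application name here are assumptions, so adjust to wherever your applications live:

```shell
# List every Argo CD application across the cluster
oc get applications.argoproj.io -A

# Check a single application's sync and health status, e.g. the ODF one described above
oc get applications.argoproj.io -n medical-diagnosis-hub odf \
  -o jsonpath='{.status.sync.status} {.status.health.status}{"\n"}'
```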
The application to keep an eye on is xraylab-init, because that's the last application to complete before the Medical Diagnosis Validated Pattern has successfully finished its deployment. It has Kubernetes jobs and some other resources within it that wait for parts of the ODF operator to complete before moving on to the next phase. This is because of the amount of time it takes for ODF to deploy: it can take upwards of 10 minutes for the entire operator, so we had to create jobs that go out and wait for those resources to become available. Instead of sitting here watching the console, I'm going to pause the video and come back when the deployment has completed.

We can now see that xraylab-init has finished syncing, so it's healthy and synced, and our application is completely deployed. If we look at our apps within the user interface, everything is healthy and synced except for ODF, which is expected. And if we take a look in our OpenShift console, we can see all of those resources that we declared finally deployed in our cluster.

Before we start the xraylab demo, I need to go through and accept the security warnings for three of our routes, because I am using self-signed certificates. If I don't do this, then when the images come through, they'll show up with security warnings instead of the actual x-rays, which really defeats the purpose. First, we're going to change back to the administrator context, and then drop down to Networking and then Routes. Because we're already in the xraylab-1 namespace, let's go ahead and click the link for the image-server route. This is just a simple Python application that displays the images as they're being processed and uploaded; the "Hello World" is just a good sign that the application itself is up and ready. Next, let's click the Grafana route.
This is going to be our dashboard. We're not going to do anything with Grafana just yet; we're just going to accept the security warnings. Then we'll jump back to the OpenShift console and switch to the openshift-storage project. Here we have to accept the security warnings for the s3-rgw route, because this is actually where the images within the cluster are served from. Once we accept the warning for this route, we'll see some XML, and we'll know that the service is good to go.

Let's jump back to our OpenShift console. Next, we want to return to our developer context and look at the topology again: under Administrator, click Developer and then Topology, and change the project back to xraylab-1. Now let's take a look at the image-generator deployment config. The image-generator application is what actually pulls the images from AWS S3 into the object store on our cluster, into ODF. By default, the pods are set to zero, because this is essentially simulating an x-ray machine: we're not constantly taking x-rays, so we only scale up when we need to. To scale the application, let's click the up arrow. If we watch the topology, we can see the image-generator donut turn dark blue as the pods scale up. Once the image generator kicks off, the next thing we'll see is the risk assessment application start to spin up as well.

Going over to our Grafana dashboard, let's take a look at the images coming in. Let's go full screen, click on the four squares, click Manage, then xraylab-1, and then xraylab. Now we're presented with our dashboard, and we can see that we have images coming in. Those x-rays that we see are coming from AWS S3.
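The same scaling can be done from the terminal instead of the topology view. A sketch, assuming the deployment config is named image-generator as it appears in the topology:

```shell
# Start the simulated x-ray machine: scale the generator up
oc scale dc/image-generator --replicas=1 -n xraylab-1

# Later, stop the pipeline by scaling it back down to zero
oc scale dc/image-generator --replicas=0 -n xraylab-1
```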
Those x-ray images are being populated into the Ceph object store bucket, which notifies a Kafka topic. The eventing listener subscribed to that topic triggers the Knative Serving function, which scales up the risk assessment application. The risk assessment application is a containerized machine learning model trained to make a determination as to whether the person in the image may or may not have pneumonia. Once an image has been uploaded, the dashboard is updated with a timestamp, the filename, and the associated image. As the images are processed through the model, the same information is output to the center table. Once the images have been processed, they go through an anonymizer to redact any personal information about the patient. We're also collecting cluster metrics for CPU and RAM from the application, as well as providing a real-time widget that shows the risk distribution of the images processed: the number of images that are normal, the ones the model is unsure of, and the number of images showing pneumonia. There's also another widget that shows the number of containers running within the application.

To stop the pipeline, we need to scale the image-generator deployment config back down to zero. OpenShift Serverless will then stop receiving events and will eventually scale the risk assessment application down to zero as well. To put this into perspective: the application doesn't need to be running constantly to be effective. With Red Hat OpenShift, OpenShift Serverless, Red Hat AMQ Streams, and OpenShift Data Foundation, we can dynamically scale this application up to meet demand and then scale it back down to zero once the demand has subsided.

So that's it for this demo of the X-Ray Lab application and the deployment of the Medical Diagnosis Validated Pattern. Before we go, I'd like to make one point.
We deployed a significant number of operators and configurations, tightly integrated together, with very few modifications. All we did was update a configuration, create a credentials file, make sure the configuration was pushed to Git, and then run make install. Very little effort with a ton of upside. If you have an application that's similar to X-Ray Lab, think about how you could use Red Hat OpenShift and OpenShift GitOps to enable your MLOps workflows. Thank you for watching.