In the following demonstration, we take a closer look at GitOps for the manufacturing edge. We will see how to speed up software development and release cycles for applications that span from the central data center down to manufacturing plants and line data servers. This covers, on the one hand, software development and rollout, and on the other hand, managing the many configurations across multiple manufacturing plants. All of this uses an approach called GitOps, where every change is managed in Git, and processes like pull requests and automated deployments roll out the right configuration to the right site in a very controlled manner. The context here is a simplified application for monitoring a production environment. The application spans three levels from top to bottom: the central data center, the factory data center, and the line data server. In the first demo flow we will see how an OT operations manager can change the configuration of sensors using a GitOps approach, and in the second demo flow we look at the developer side: rolling out a software change with a pipeline, applying correct versioning, and doing all of this in a controlled way within the GitOps approach. During the demonstration, the machine operator can monitor the sensor dashboard. Let's first look at the application. You can see the dashboard used by the machine operator to observe various parameters, and as you can see, the metrics are rolling in in real time. The technology beneath is deployed on Red Hat OpenShift, so let's have a quick look at the running pods. First you see, for example, the pod for the line dashboard, running in one OpenShift environment in its own project. We also see the pods for the messaging, that is, for AMQ and for pushing the data to the real-time UI, and on a completely different cluster you see the two pods for the machine sensors.
Now we are going to change the configuration of the sensor using a GitOps approach. As you can see, we have two sensors: one sensor sends temperature and vibration data, and the other only vibration data. We jump directly into the GitOps repository for that change. In this directory you see the settings for a specific deployment, here the production deployment, and these settings are really specific to this production environment. We can fine-tune them because the generic settings are defined in templates. Here you see the templates for the machine sensors, for example the properties that are generic to all sensors. With this directory structure and the concept of using Kustomize to tune the manifests, we can make changes in a fine-grained manner without duplicating a lot of configuration. Now let's do the actual change. We simply set the sensor temperature attribute to enabled, changing the value to true, and commit the change. We are committing directly to master here; in a full GitOps approach we would work in a branch and create a pull request, but for the demo we commit directly to master. We can now jump over to Argo CD. Argo CD observes the repository for changes at a regular interval, but we can force a sync so that we don't need to wait. You can see it picking up the change and deploying the new configuration to the pods. The pods are restarted, and in a second we will see the extra charts for the temperature data appear on the dashboard.
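The overlay structure described above can be sketched roughly as follows. This is a minimal, hypothetical layout; the actual directory, deployment, and environment variable names in the demo repository differ:

```yaml
# overlays/production/kustomization.yaml (hypothetical names)
# The base/ directory holds the generic sensor templates; this overlay
# holds only the settings specific to the production site.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/machine-sensor
patches:
  - path: sensor-1-temperature.yaml   # the fine-grained, site-specific change
---
# overlays/production/sensor-1-temperature.yaml
# Strategic-merge patch: enable temperature data for one sensor only.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-sensor-1
spec:
  template:
    spec:
      containers:
        - name: machine-sensor
          env:
            - name: SENSOR_TEMPERATURE_ENABLED   # assumed variable name
              value: "true"
```

The point of this split is that the patch file contains only the one attribute being tuned, so nothing from the generic templates is duplicated per site.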
So that means with this change, and with Argo CD monitoring the files in the Git repository and enforcing them on the actual OpenShift deployment in the production environment, we deploy the change automatically. In the second part I'd like to discuss the rollout of software changes using OpenShift Pipelines. This involves OpenShift Pipelines, that is, the Tekton project, to build the software, test it, push the containers to Quay, and then roll these containers out again. We assume here that the software change has already been made, and we will drill down into the pipeline. This is, as an example, the pipeline that rolls out the consumer component; it has several steps, which I would like to explain briefly. In the first step, the containers are built and pushed into the local registry. We are using semantic versioning, which means that in the next steps the version in the version file is counted up one level, and the newly created containers in the local registry are tagged accordingly. Because this is a GitOps approach, we also tag the manifests to declare which image should be deployed, and we do this in this step here, which changes the GitOps configuration on GitHub. That is a change like the one you see here: in the kustomization file, the image and its tag are updated so that the new version is deployed in the test environment as a first step.
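The commit the pipeline makes in that tagging step amounts to a small edit of the overlay's kustomization file. A minimal sketch, with hypothetical image and registry names:

```yaml
# overlays/test/kustomization.yaml (hypothetical names)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/messaging-consumer
images:
  - name: messaging-consumer    # image name as referenced in the base manifests
    newName: image-registry.openshift-image-registry.svc:5000/factory/messaging-consumer
    newTag: "1.4.2"             # semantic version bumped and committed by the pipeline
```

Because the desired image version lives in Git rather than in the cluster, the pipeline never runs a deploy command itself; it only records the new tag and lets Argo CD converge the environment.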
Then the pipeline informs Argo CD to deploy this change, so it picks up the kustomization file, deploys the new containers into the testing environment, and runs various tests there: the sensor tests, the consumer tests, and the end-to-end tests. If everything runs okay, the pipeline pushes the image to Quay, because from the central Quay repository we feed the image into the production environments. Once this is done, we also modify the GitOps repository for the production environment. That is the same step as before, and this time we can have a closer look: it simply changes the kustomization file with the new image tags, and as the last step it creates the pull request. The pull request says, in effect, that the changes have been made correctly, and now somebody can decide to roll this change out to production. The pull request shows exactly which configuration changes are involved, that is, which container version should run where. After approving and merging the pull request, Argo CD picks it up and enforces the change in the production environment. So we looked at two GitOps use cases for the manufacturing edge: on the one hand, rolling out a configuration change for software sensors, deployed by Argo CD enforcing the manifests at the right location; and on the other hand, a Tekton pipeline where we build new containers, test them, put the right versions into the manifests, roll them out to the test environment, and then create a pull request to roll them out to the production environment.

In this video I'm going to demonstrate one of the many ways you can extend the GitOps model to your machine learning workflow.
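The "Argo CD picks it up and enforces the change" behavior comes from an Application resource that watches the production overlay path. A minimal sketch, with hypothetical repository URL, names, and namespaces:

```yaml
# Hypothetical Argo CD Application watching the production overlay.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: machine-sensor-production
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/manufacturing-gitops.git
    targetRevision: master        # the branch the pull request is merged into
    path: overlays/production     # only this overlay drives the production site
  destination:
    server: https://kubernetes.default.svc
    namespace: machine-sensor
  syncPolicy:
    automated:
      prune: true                 # delete resources removed from Git
      selfHeal: true              # revert manual drift back to the Git state
```

With automated sync enabled, merging the pull request is the production deployment: no further manual action is needed at the cluster.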
As part of the GitOps for manufacturing edge demo, we have sensors deployed at the edge data centers that use machine learning to detect when anomalies occur. In this scenario I want to show you one of the many ways in which a data scientist can develop a model at the core data center and then update multiple edge data centers simultaneously with one commit to a GitOps repository. Here we are utilizing the Open Data Hub operator to provide a data scientist with the tools they need to develop their machine learning models on OpenShift. We have a Jupyter notebook with a model that a data scientist has just updated and now needs to make available to all of our edge data centers. In this cell you can see that we have serialized this model so that it can be packaged and deployed in our updated model-serving image. This is the GitOps repository that provides the source for the image that will serve the models at the edge. Once the updated model has been committed to the repository, we execute an OpenShift pipeline that orchestrates our CI/CD process before we release this model into production. The first task builds the model-serving container image locally using a Source-to-Image (S2I) build process and automatically determines the next available image tag (build number) for the release. The next few tasks perform our CI tests to verify that the model deploys and runs successfully in our staging environment. Once the CI tests execute successfully, the pipeline pushes the new release of the container to an image registry that can be accessed by the edge data centers. Finally, the pipeline updates the GitOps repository by creating a pull request that specifies what version of our model needs to be deployed at the edge. Now that the container image and GitOps repository have been updated, all of the edge data centers running Argo CD will detect the change and update their models accordingly.
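The serialization cell mentioned above could look like this minimal sketch. It assumes a scikit-learn anomaly detector trained on vibration readings; the model type, feature values, and file name are illustrative, not the demo's actual code:

```python
# Minimal sketch of the notebook's serialization cell (assumed scikit-learn
# anomaly model; data, file name, and values are hypothetical).
import numpy as np
import joblib
from sklearn.ensemble import IsolationForest

# Train a simple anomaly detector on synthetic "normal" vibration readings.
rng = np.random.default_rng(42)
normal_vibration = rng.normal(loc=0.5, scale=0.1, size=(500, 1))
model = IsolationForest(random_state=42).fit(normal_vibration)

# Serialize the model so the S2I build can package it into the serving image.
joblib.dump(model, "model.joblib")

# Reload and sanity-check before committing: IsolationForest.predict returns
# 1 for inliers and -1 for outliers.
restored = joblib.load("model.joblib")
print(restored.predict([[0.5], [5.0]]))
```

The serialized `model.joblib` file is what gets committed to the GitOps repository, so the pipeline's S2I build can bake it into the next tagged release of the model-serving image.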
This example shows one of the many ways you can extend the GitOps model to your preferred data science workflow. If you would like to learn more or try this out on your own please look for more information on our website at opendatahub.io.