Hello. My name is Martin Jackson. I am a Red Hat engineer, and I work on the Validated Patterns team. I'm very excited to talk to you today about the Pipelines feature in the Industrial Edge Validated Pattern.

First, let's take a look at what pipelines look like in context. The OpenShift Pipelines operator is installed when the pattern is first deployed; you can see this in the Installed Operators section of the OpenShift console. Another thing to remember is that everything we are going to talk about today is optional as of version 2.1 of the pattern. Previously, we required running make seed as the first step to make the pattern fully operational. Now, we bring the pattern up using pre-built containers from our team repository; if you want to rebuild the entire application, you are certainly able to do that, but it is no longer necessary in order to see the demonstration. After that, we're going to talk about some of the details of the seed pipeline, and finally about some of the details of the build-and-test pipeline.

First, let's talk a bit about the different application components that the pipelines deal with. The first of them is IoT Anomaly Detection. It deploys the Seldon-based anomaly detection model, which drives much of the action in the overall pattern. The IoT Anomaly Detection application sends alert messages, which are then picked up from the MQTT broker and reported in the line dashboard. The IoT Consumer is the application responsible for consuming messages from the MQTT stream and injecting them into Kafka. The IoT Software Sensor component, which appears as Machine Sensor 1 and Machine Sensor 2 in the deployments, is responsible for placing actual sensor reading data on the MQTT topics. This sensor data is in turn read and acted upon by the IoT Anomaly Detection application.
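If you want to confirm from a terminal that the operator is present, a quick check like the following works. This is a sketch that assumes you are logged in to the cluster with oc; the exact ClusterServiceVersion name varies by operator version.

```shell
# List ClusterServiceVersions and look for the OpenShift Pipelines operator.
# The exact CSV name varies by operator version, so grep loosely.
oc get csv -A | grep -i pipelines

# The Tekton API resources the operator provides should now be available:
oc api-resources --api-group=tekton.dev
```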
The IoT Frontend application is the line dashboard that shows most of the interesting parts of the demo, so let's take a look at what the line dashboard looks like. This is the real-time data view of the line dashboard. As you can see, these points are being added in real time by the IoT Software Sensor application. The messages that the points represent are being taken by the IoT Consumer application and placed into Kafka. The red alert that you see down here is being injected by the IoT Anomaly Detection application. That is how all four of these applications work together for this part of the demo.

The test configuration is a little different from the production configuration. Let's take a look at where the data center application is installed, and at how the different resources maintained within this application are actually laid out. This is the cluster view of the data center GitOps server, which is one of the Argo CD instances in the Industrial Edge pattern. This is the Manuela test application. As we go through, we see the anomaly detection pod over here, the messaging consumer pod right here, Machine Sensor 2, Machine Sensor 1, and the frontend line dashboard application. This cluster is in the state it reaches when the initial make install has finished running. You can see that all of the pods are up and operational. That is because the deployments all pull these images from the Hybrid Cloud Patterns organization in Quay, backed by the local OpenShift image registry, which mirrors them from Quay and makes them available locally. If we want to see the code that controls these things, here is the Industrial Edge GitOps repository: charts, then datacenter, then the Manuela test chart.
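The same pod layout that Argo CD displays can also be checked from the CLI. This is a minimal sketch: the namespace name manuela-tst-all is an assumption, so verify the exact name in your Argo CD application before running it.

```shell
# Show the running pods for the Manuela test application.
# Namespace name is an assumption; check your Argo CD app for the real one.
oc get pods -n manuela-tst-all

# Confirm which image each deployment is currently running.
oc get deployments -n manuela-tst-all \
  -o custom-columns=NAME:.metadata.name,IMAGE:'.spec.template.spec.containers[0].image'
```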
Inside the templates directory, you can see these different components: anomaly detection, line dashboard, machine sensor, messaging, and messaging Kafka.

In production, the installation layout is a little different. Let's go and see what Argo CD shows for the production, or factory, installation. Here, the resources have essentially the same names, but they are deployed into slightly different namespaces. The frontend line dashboard, as you can see, is installed into the manuela-stormshift-line-dashboard namespace instead of the Manuela test namespace. As you can see from this mapping, the namespace model is just a little different.

Now let's talk in some detail about the seed pipeline. In the OpenShift console, if we go down to the Pipelines section, click on Pipelines, and switch to the manuela-ci namespace, we'll see that there are a number of pipelines defined, but none of them have been run yet. So we'll go to the terminal and run make seed. To actually start the seed process, from the terminal after make install is done, we run make seed just as we ran make install. Notice that make seed first validates that a couple of resources are actually present, and then reports that there is a PipelineRun we should be able to see in the OpenShift console. Let's switch over to the console to confirm, and we do see it. From the Pipelines view in the console, we can watch different aspects of the pipeline as it runs. If we want to see what all of the tasks in the PipelineRun will entail, we can click on the seed run pipeline while it's running, and that will show us which tasks are going to run and when. First we build the base images, and we can see the log for that. Then we build each of the four applications in turn.
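If you prefer the terminal to the console, the same run can be started and followed with make and the tkn client. This is a sketch assuming the pipelines live in the manuela-ci namespace, as shown above.

```shell
# From the root of the pattern checkout, after make install has finished:
make seed

# Follow the resulting PipelineRun from the CLI (requires the tkn client).
tkn pipelinerun list -n manuela-ci
tkn pipelinerun logs --last -f -n manuela-ci
```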
Seed IoT anomaly detection, seed IoT consumer, seed IoT frontend, and seed IoT software sensor. This process takes several minutes, so we're not going to watch the whole thing in this video; when it's done, we'll come back and see how things are going. But if you want to watch the build logs and the details of what the different tasks are doing, that's what these options here are for. It's interesting to see what the pipelines are doing as they actually build the applications.

Something you might notice as the pipelines run is a difference between the git hashes that your local repository reports and those of the repositories that Argo CD is pulling from. Let's take a look: our local one says 4d, while the factory and the data center both say 2c9. The reason for this discrepancy is that what the pipeline is actually doing as it builds is pushing commits to the repository being used, and so those changes land in the repository. If we go back to the terminal and pull origin main, we'll see the same or a more recent git hash as well. Each application, as it builds, has to push new tags, which require commits to the GitOps repository, and it has to do that for both production and test. So there are a number of changes that these git repositories will see as this process continues.

Over in the factory application, we see that the new version of the messaging pod is just coming up, and this is what we expect to see as the seed process completes. The new version of the messaging pod has been built and deployed; that's the new tag. As the liveness and readiness probes are satisfied, the new messaging container is running and the old one is deleted. We are now watching the last phase of the seed pipeline build, which is the IoT software sensor run. We can look through some of the previous ones; we see source builds.
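To bring your local clone back in line with what the pipeline has pushed, the sync looks like this, assuming your checkout tracks the same GitOps repository the pipeline commits to.

```shell
# The pipeline pushes image-tag commits to the GitOps repository,
# so the local clone falls behind. Pull and compare short hashes:
git pull origin main
git log --oneline -1
```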
We see some git operations that change the image streams and push those commits to the repository; those result in the containers being updated within the application.

All right, we are in the process of doing the build now. You can see that this build process has to deal with two different repositories: one is the manuela-dev repository, which is documented on our site; the other is the GitOps repository, the Industrial Edge repository that you have seen me working with regularly. It is normally not necessary to interact directly with the manuela-dev repository, but one of the reasons we require that you fork it and allow changes to it is specifically so that these pipelines can push new tags to those repositories and deploy new versions of those applications if you need them.

Okay, now we are going to start the build-and-test pipeline. Build-and-test differs from the seed pipeline in that instead of building and deploying to both the test and the production instances, it builds and deploys to test but very specifically does not deploy to production. Instead, it opens a pull request and allows you to decide when to merge those changes to production. We start build-and-test like this, very similarly to how we start make seed. Once we see the notice that the build-and-test PipelineRun has been created, we should be able to go to the OpenShift console and see that pipeline running too. And there we go. Just like the seed pipeline, build-and-test has a number of steps that start with building the base images and then build the various IoT applications, anomaly detection, consumer, frontend, and software sensor, just like seed does. We have extra steps here related to creating and pushing the PR, and we'll come back when that's done. As you can see, all of the steps have completed.
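For reference, kicking it off from the terminal looks like this. The target name build-and-test is an assumption on my part here, so check the Makefile in your fork for the exact name.

```shell
# Start the build-and-test pipeline (target name assumed; see the Makefile).
make build-and-test

# Watch it alongside the earlier seed run:
tkn pipelinerun list -n manuela-ci
```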
Now we can go to the pull requests for our GitOps repository and see what the status is. As we can see, there is now a pull request, created by the Tekton GitHub add-pull-request task, from me to myself in my repository, staging for approval all of the changes that have been made since the last commit to production. The reason there is a large number of changes here is that I've been running some extra tests in this repository in the process of making this demonstration video. If you see a much smaller number of changes made by Tekton tasks in your repository, that's normal and expected. Notice that these changes will not actually be applied to the stormshift namespaces until this pull request is approved and merged. So we can complete that merge: we can let it run the tests, or we can just merge it right away. Having gone through the merge, we go to Code and the main branch, where the repository should now be.

Every three minutes or so, these Argo CD instances will try to pull the latest commit, but hitting Refresh can accelerate that process a little. As you can see, the new machine sensor pods, the line dashboard pod, messaging, and IoT anomaly detection will be updated by this process, and they are only updating here in production, on the factory GitOps server, because we have approved and merged this pull request. And now the new pods have spun up at the new versions. That is how the pipelines work in the Industrial Edge Validated Pattern.

Once again, my name is Martin Jackson, an engineer at Red Hat. I work on the Validated Patterns framework and on the Pipelines feature in Industrial Edge in particular. Thanks for your time and attention.
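If you'd rather not wait out the poll interval, a refresh can also be requested from the CLI. This is a sketch assuming the argocd client is installed and logged in; the application name is a placeholder.

```shell
# Force an immediate refresh instead of waiting ~3 minutes for the next poll.
# Replace <application-name> with the app shown in `argocd app list`.
argocd app get <application-name> --refresh
```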