Hi, my name is Brian Tannis, and I'm with the Red Hat Cloud Platforms business unit. In this video we'll focus on Knative running within OpenShift. We'll see what Knative has to offer: building our application with Knative's build component, running that image through Knative serving, and finally using Knative eventing to emit IoT events into the serving service. We'll roll out a canary release, and we'll show how the application is able to scale down to zero.

Our first step is to grant certain rights to the default service account in our namespace. We have a YAML file with three roles and three role bindings configured. These configurations allow Knative to perform the actions it needs. We'll push this configuration to the cluster.

Next, we'll define a build template using Knative build, which acts as a blueprint for our build. The build component in Knative is not so much a utility for building images itself; rather, it provides a primitive for stringing together the tools we want to use for our own build. In our case, the most prominent example is OpenShift's own build capability, so in this demo we'll implement a Knative build by means of an OpenShift build.

Now that we have a build template, we could run a standalone build using the YAML shown. Instead, it's more interesting to see how the build is strung together with the serving component of Knative. The current version of Knative serving does not automatically configure routes within OpenShift, so we'll need to perform a workaround. The YAML for this workaround is very simple, and in later releases it won't be needed.

Next, we'll build and deploy our application using the Knative serving component. As you can see, the build section under spec.runLatest.configuration is a one-to-one copy of the build manifest we could have deployed earlier.
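As a rough sketch, a Knative Service of that era (v1alpha1 APIs) with an embedded build section might look like the following. The service name, Git repository, template name, and registry path are all placeholders, and exact field names varied between early Knative releases:

```yaml
# Hypothetical sketch of a Knative Service (v1alpha1) whose runLatest
# configuration embeds a Build referencing the build template defined earlier.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: iot-app                  # placeholder name
spec:
  runLatest:
    configuration:
      build:                     # same content as a standalone Build manifest
        apiVersion: build.knative.dev/v1alpha1
        kind: Build
        spec:
          source:
            git:
              url: https://github.com/example/iot-app.git   # placeholder repo
              revision: v1
          template:
            name: openshift-builds        # placeholder build template name
            arguments:
              - name: IMAGE
                value: docker-registry.default.svc:5000/myproject/iot-app
      revisionTemplate:
        spec:
          container:
            # the image produced by the build, pushed to the internal registry
            image: docker-registry.default.svc:5000/myproject/iot-app
```

Applying a manifest shaped like this is what triggers both the build and the deployment in one step.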
Once we apply this configuration, Knative will deploy a revision, that is, a snapshot of the application's configuration at a specific point in time. Knative will now kick off the build for our app. We can see this happening by heading to the OpenShift console, going to our project, and clicking on Builds. We can watch the build take place by following the output in the logs. Once the build succeeds, we can head to the image streams in the OpenShift console and see that the image has been pushed to the internal registry. We can then go to Deployments and see that 1 of 1 pods is available, running our application. We can test the app by curling the route in OpenShift that we created earlier using the workaround.

So far we have used two of the three components of Knative: Knative build, which allows us to build our application, and Knative serving, which allows us to run those applications in response to HTTP requests. The third component is Knative eventing. Eventing allows us to invoke the application in response to events received from something like a message broker or an external application. For the rest of the video, we're going to show how to receive Kubernetes platform events and route them into our Knative serving application.

To begin with Knative eventing, we'll deploy the three primitives that are required. The first is a channel. In our example we'll just use an in-memory channel, but in production we would probably want to use something like Apache Kafka or another messaging platform. The second component is the event source, and this is where things start to get more interesting. In our example, the event source is a container source, which runs a container image. In this case, the container happens to be a heartbeat application that generates events at a configurable rate. These events are forwarded to the sink; the sink is the channel we created earlier.
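The channel and container source described above might be sketched roughly as follows, again using the v1alpha1 eventing APIs of the time. The resource names, heartbeat image, and period argument are placeholders for illustration:

```yaml
# Hypothetical sketch: an in-memory Channel, plus a ContainerSource whose
# sink points at that channel.
apiVersion: eventing.knative.dev/v1alpha1
kind: Channel
metadata:
  name: iot-channel              # placeholder name
spec:
  provisioner:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: ClusterChannelProvisioner
    name: in-memory-channel      # swap for a Kafka provisioner in production
---
apiVersion: sources.eventing.knative.dev/v1alpha1
kind: ContainerSource
metadata:
  name: heartbeat-source         # placeholder name
spec:
  image: example.registry/heartbeats:latest   # placeholder heartbeat image
  args:
    - --period=1                 # emit one event per second (configurable rate)
  sink:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: iot-channel            # events are forwarded to the channel above
```

The key design point is that the source never talks to the application directly; it only knows about its sink.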
The final piece wires everything together, and it's called the subscription. The subscription references our channel as well as the service we deployed earlier in the Knative serving example. Once our subscription has been deployed, we can see what's happening by checking the output from one of the containers. We'll head to one of the Knative serving pods and look at the logs for that pod, and there we can see the heartbeats coming through.

We can visualize exactly what's happening using Kiali. The graph shows how the app we've deployed is connected to the various components of Knative. We can see how events flow through each component of Knative, as well as roughly how Knative serving works underneath. One other thing we'll notice is that our application has a problem, indicated by the error rate being 100 percent and every request returning an HTTP 500 error.

Since our application contains a bug, we need to fix it quickly and safely. To do so, we'll rely on a concept called a canary release. A canary release allows us to deploy a new version of our application and gradually shift more and more traffic to that version, which lets us validate the fix in place. In our case, there's already a fixed version of our application available as v2; we just need to update our service to reference it. We'll start by updating the service with a rollout percentage of zero, allowing Knative's build system to build the new release. We'll then issue a second update, which simply changes the rollout percentage to 50 percent for now. We'll verify that Knative build is building the new release, and once the push succeeds, we'll go check Kiali and see the traffic flow. One of the interesting things we can see in Kiali is the activator, which is a component of Knative serving.
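The subscription and the canary update described above might look roughly like this in the v1alpha1 APIs; all names and revision identifiers are placeholders, and field names varied across early Knative releases:

```yaml
# Hypothetical sketch: a Subscription wiring the channel to the Knative
# service, and the service switched to release mode for the 50 percent canary.
apiVersion: eventing.knative.dev/v1alpha1
kind: Subscription
metadata:
  name: iot-subscription         # placeholder name
spec:
  channel:
    apiVersion: eventing.knative.dev/v1alpha1
    kind: Channel
    name: iot-channel            # the channel created earlier
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1alpha1
      kind: Service
      name: iot-app              # the Knative serving service
---
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: iot-app
spec:
  release:
    revisions:
      - iot-app-00001            # placeholder current (buggy v1) revision
      - iot-app-00002            # placeholder candidate (fixed v2) revision
    rolloutPercent: 50           # send half of the traffic to the candidate
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker-registry.default.svc:5000/myproject/iot-app:v2
```

Raising rolloutPercent to 0, then 50, then dropping the old revision entirely is what drives the gradual shift of traffic.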
The activator accepts requests while the Knative serving service is scaled to zero or still starting up. It holds those requests while the service comes up and forwards them once the service is ready. We can see that 50 percent of the events being pushed are successful: our canary release works.

Now we'll finalize the canary release by updating our service, changing the rollout percentage and version references to use only the new version 2. We'll verify in Kiali that all of our events are now working as intended: all traffic goes to version 2, and the error rate is now zero.

A promise of serverless is that the application can scale down to zero when there is no traffic. If traffic later returned to the service, the activator would hold that traffic until the container is back up. We can stop traffic from reaching our Knative service by removing our subscription. Traffic will taper off, and we can watch this happen in Kiali. Finally, we can see OpenShift scaling our Knative service down to zero.

Thank you for watching. Please be on the lookout for future videos on functionality within OpenShift 4.