Perfect. All right. So, based on the work I've been doing with our customers, we know that companies are receiving lots of events, lots of business data. Over time they've built a number of architectures and used a number of technologies to process those events and run their business. You have multiple workloads: legacy applications, packaged applications, and also things like microservices and now functions. For companies to compete, they need to integrate not only internally, but also with third-party vendors and with public cloud services. But we know there is no one-size-fits-all tool; no single tool can manage all workloads. That's why companies need a strategy for where to run these things, and moving those workloads to the cloud is not going to change that fact. You're going to need a number of platforms that let you run your containers, your applications, and your functions.

Today we're going to concentrate on functions, and the reason is that the industry analysts, the tech press, researchers, and Google Trends all agree that the serverless wave is happening. You have the public cloud providers, like AWS with Lambda, Google Cloud Functions, and Azure Functions, and there are a number of open-source projects now. Some of them were not designed with Kubernetes in mind; others just offer a plugin to run on Kubernetes. But the most interesting category is the open-source projects that run natively on Kubernetes, either on premises or across multiple clouds.

Pivotal's contribution to this fast-moving space is project riff, which is open source. riff is basically a Kubernetes extension: a Kubernetes-native service that lets you run functions triggered by events. With riff you can run functions anywhere: on your laptop, in your data centers, and across multiple cloud providers.

Today we're going to use an application to illustrate how riff works. The application is available on GitHub; it's public, like everything from Pivotal. It gives us the ability to vote for our favorite technology just by mousing over these bubbles. The middle section will show the function replicas processing our votes, and the bottom section showcases the stream-processing and time-window capabilities of riff.

Here's how it works: the application sends the votes as events to riff's internal HTTP gateway, which in turn transforms those events and publishes them to a votes topic in the event broker that riff manages for you. Your functions are packaged as containers, Docker containers, and we're going to have two Node functions and one Java function. The beauty of this is that your developers don't need to write any of the event-handling plumbing; riff provides that out of the box. All you need to do is declare which input topic and which output topic you want to subscribe to, and the messages flow automatically. Two of our functions write to Redis, which, just for demo purposes, feeds the voting data back to the dashboard for display.
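To make that concrete, here's a minimal sketch of what one of those Node functions might look like. This is illustrative rather than the actual demo code: riff's Node invoker loads a module that exports a single function, and the Redis wiring and payload shape here are assumptions for the example.

```js
// vote-counter.js: a minimal sketch, not the actual demo code.
// With riff's Node invoker, the function is simply the module's export;
// the invoker calls it once per event arriving on the input topic.
const redis = require('redis'); // assumed dependency (node_redis 3.x API)
const client = redis.createClient(process.env.REDIS_URL);

module.exports = (vote) => {
  // Assume a JSON payload such as {"technology": "kubernetes"}
  client.hincrby('votes', vote.technology, 1); // tally for the dashboard to read
  return vote; // forwarded to the output topic, if the function declares one
};
```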
And last but not least, your functions scale automatically; riff takes care of that based on event traffic.

All right, do you want to see riff in action? Cool. The first thing we need to do is install riff on Kubernetes, and it can run on any Kubernetes distribution. So first we configure Helm, the package manager for Kubernetes. We have a few Helm charts that help with the deployment, and the chart configures the Kubernetes resources that are particular to riff. The first is the HTTP gateway, which is the entry point to riff. It lets you process events either request-reply, if your consumer wants to wait for a response, or fire-and-forget; it's your choice. The Kafka and ZooKeeper resources are the event broker where all the events come and go. And we have two controllers: a function controller, which manages the lifecycle and scaling of the functions, and a topic controller, which manages the lifecycle of topics. Say you want to create a topic with a certain number of partitions; that's the job of the topic controller. These two controllers talk to the Kubernetes API to do their work.

Let me go to my dashboard. For this demo I'm using Pivotal Container Service; you might have heard of this product. It's a certified Kubernetes distribution built on top of Cloud Foundry Container Runtime, and it's running on GCP. Let me show you the BOSH view of this. Okay, it's connected. Next up: packaging functions.

These functions are basically Docker containers. What you need to do is get an image from us; we provide a number of images out of the box for the different programming languages: Node, Java, Go, Python, and a command invoker. We call these function invokers. On top of that, we put your function. And to simplify your job, we provide a riff build command: there is a riff command-line interface, and there is a command that builds your functions. You can then push the resulting image to your preferred container registry, either something public, as in my case, or your private container repository. It's important to point out that invokers are pluggable: we support a number of programming languages, and you can install them as needed, based on your developers' requirements.

So let me deploy these to riff. How do we do that? Like any application, if you're using Cloud Foundry or something similar, we just need to provide a manifest: we need to tell riff how to manage our functions and topics. There is a topics file that tells riff, I need this particular topic with this number of partitions. I have two topics, one with 10 partitions and one with a single partition. Then you have the function metadata, in which you say where to find the image (at the bottom you see the image URL), what the input and output topics are, and the idle timeout, in case you want the function to expire after some number of seconds with no events.
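As a rough sketch, those two manifests might look something like this. The apiVersion, kinds, field names, and image path here are approximations for illustration (riff's resource schema changed across its 0.0.x releases), so check the project repo for the exact format:

```yaml
# topics.yaml: declares the topics the topic controller should create.
# Field names are assumed from riff's early CRDs and may differ in your version.
apiVersion: projectriff.io/v1
kind: Topic
metadata:
  name: votes
spec:
  partitions: 10       # parallelism for the busy topic
---
apiVersion: projectriff.io/v1
kind: Topic
metadata:
  name: vote-counts
spec:
  partitions: 1
---
# vote-counter.yaml: binds a function image to its input/output topics.
apiVersion: projectriff.io/v1
kind: Function
metadata:
  name: vote-counter   # hypothetical function name
spec:
  protocol: grpc
  input: votes         # topic this function subscribes to
  output: vote-counts  # topic it publishes results to
  idleTimeoutMs: 10000 # terminate the last replica after 10s idle (field name assumed)
  maxReplicas: 10      # cap on scale-out (field name assumed)
  container:
    image: docker.io/someuser/vote-counter:0.0.1  # illustrative registry path
```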
And then the riff apply command deploys your functions to riff. Let's take a quick look at that. riff creates a function pod with two containers inside. One is your function container, with the invoker and your code; the other is a sidecar that riff manages for you every time you deploy a function, so you don't have to worry about it. It's basically a proxy, a layer of indirection, so that the function container can communicate with the event broker. It talks natively to the broker using the broker's API, and it talks to the function container using gRPC, which is a bidirectional protocol; that's what enables the stream-processing magic you're going to see in a few minutes.

That's it: your functions are up and running. We installed riff, we built the containers, and we deployed the functions. Now we're ready to deploy the application. This step is only required if you have an application that you want to deploy so you can start voting for your favorite technology. We'll just wait a few seconds; GCP is pretty good, it's coming. Is that the right IP? Yes, that's the right IP. Okay, so we have the application now, and it's public. If you mouse over, you start casting your vote for your favorite technology. Again, for the middle section, the function controller publishes events saying how many replicas it's running for each function. We capture that, and if you deploy more functions, you'll see it changing dynamically. Pretty cool.

Now that we have this up and running, let me show you something else: the scaling behavior of the function controller. The first thing it does is look at the queue length in the event broker to determine whether it needs more instances right now. Then it interacts with the Kubernetes API to add more replicas. It goes first from 0 to 1, and then from 1 to n, and you can specify the maximum number of replicas; there is a max-replicas attribute in your function YAML that helps with that.

Now, when you want to see the other direction, you just need to wait a little bit to watch it scale down. Let me show you that. It's still processing, but you're going to see a number of instances terminating, unless you all keep voting. Yes, I see people voting. When you stop voting, you'll see all the functions stop running. Oh, someone is still voting. Okay, so now the functions are terminating automatically; you don't need to do anything. Again, the function controller looks at the queue and says, well, I don't need as many instances. At some point the last instance hits the idle timeout, says, I don't have any more events to process, and, based on the idle timeout we defined, it just terminates.

How are we doing on time? Okay, awesome. All right: stream processing. You saw the bar chart, right? What's happening is we have a small Reactor function, and it's doing two things. The bar charts are a fixed window of two seconds each, so we're just showing the number of votes every two seconds. And the shaded chart is the sliding window. So we have a tumbling window, which is the bar charts, and also a sliding window: every two seconds, we emit the number of votes captured over the last 60 seconds. That's where you see the shading change a little bit.
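To make the two windows concrete, here's a plain-JavaScript sketch of the idea. This is not the Reactor code from the demo, just an illustration of the difference: the tumbling window counts each 2-second bucket independently, while the sliding window re-counts the last 60 seconds every 2 seconds.

```js
// windows.js: illustrative sketch of tumbling vs. sliding windows (not the demo's Reactor code)
const timestamps = []; // arrival time of each vote, in milliseconds

function onVote() {
  timestamps.push(Date.now()); // call this for every incoming vote event
}

let tumblingStart = Date.now();
setInterval(() => {
  const now = Date.now();
  // Tumbling window (bar chart): votes in the 2s bucket that just closed
  const tumbling = timestamps.filter((t) => t >= tumblingStart).length;
  tumblingStart = now;
  // Sliding window (shaded chart): votes in the last 60s, re-emitted every 2s
  const sliding = timestamps.filter((t) => t >= now - 60000).length;
  console.log({ tumbling, sliding });
  // Prune timestamps too old to fall in either window
  while (timestamps.length && timestamps[0] < now - 60000) timestamps.shift();
}, 2000);

setInterval(onVote, 300); // simulate a steady stream of votes for testing
```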
We also have a Node.js implementation of this function, so we're supporting streaming for both Java and Node.js at the moment. If you want to give it a try, the GitHub repo, again, is public.

Now, some advice on use cases. Start with a small use case: start small and make sure there is business value in the use case you want to implement. Take an architecture-first approach: look at your architecture and see if there is a component that can be replaced by a function, and of course use riff to give it a try. We see a number of use cases, and it depends on your SLAs: if you're looking at the near-real-time nature of your business, can you afford eventual consistency and so on? You need to take those factors into consideration and then apply them accordingly.

So what's next? The project is under heavy development; we just released version 0.0.6. If you want to contribute, you can go to project riff and file your issues there; that's how we define the roadmap. Right now Kafka is the default broker, but we want to add support for other event brokers from AWS, Google Cloud, and Azure, as well as RabbitMQ and other event streams. Streaming support currently covers just Java and Node, but we have Go and we have Python, and now that gRPC is the official way for the sidecar and the function container to communicate, we want to add streaming support for all the remaining invokers. In terms of autoscaling, we'll continue to refine the autoscaling algorithm. It's not where it needs to be just yet, but we now have a component inside riff that tells us how the algorithm is performing; there is a simulator in there that provides a lot of information for us. And in terms of security, we're exploring the possibility of using Istio: Istio provides end-to-end encryption and can manage TLS certificates. So things are very exciting. If you want to see what's going on and what's coming, take a look at the GitHub repo; there is an Issues section there that you can read.

If you want to learn more about riff, you have the GitHub repository and the website. The website is amazing; it has several tutorials, and you can run riff on Minikube, on GKE, on vSphere, et cetera. The demo, again, is public, and that's the URL if you want to use it. And riff is going to be the basis for our upcoming product, Pivotal Function Service, so keep an eye on that. If you're interested in running Pivotal Cloud Foundry on premises and you're just getting started, take a look at the Pivotal Ready Architecture: the Dell EMC team has a booth, and they can talk to you about how they're doing a reference-architecture implementation of Cloud Foundry and the container service on VxRail. That's something to consider. And last but not least, SpringOne is coming, so if you want to take advantage of the discount, that's the code you need to use. I look forward to seeing you there, because I'm going to be there. So that's it.
Questions? Comments? Go try it now, please. It's very easy to use; you saw that installing riff took just a couple of minutes, so you can run it on your laptop on Minikube and get started that way. Yes, it works on any Kubernetes distribution; it runs on top of it. More questions? You've been an awesome audience. Thank you.