Hi once again, my name is Michael Waite, and we're here with another edition of the OpenShift Commons Briefings Operator Hours. Today we're lucky enough to have with us our friends from TriggerMesh: Sebastien Goasguen, the co-founder and Chief Product Officer of TriggerMesh, and joining us from India today, Sameer Naik, Senior Software Engineer, who's going to be showing us some of the capabilities in a demonstration of the TriggerMesh software. Sebastien, how are you today? Doing great, thank you very much. How are you? I'm doing really well. Thanks for joining us; we're happy to have you here. From Red Hat's perspective, we work with software vendors to help them test and certify their software as a Red Hat certified operator for OpenShift, so that customers using our container orchestration platform know that they're going to have the best day-two supportability for production environments. We've been working with you folks at TriggerMesh for quite a while. You have a Red Hat certified operator; it's available in our registry and in the Red Hat Marketplace. But tell me a little bit about TriggerMesh. You're one of the founders of the company. Tell us a little bit about how that came about, what's in the name TriggerMesh, and so forth. Yeah, sure. Is that, Sameer, is that your kid around? No, no, it's just my chair making noise. It's your chair making noise, right? I was wondering if one of the new summer interns had already shown up. Yeah, no problem. So TriggerMesh: we got started when Google announced the Knative project. That was July 2018. I had created Kubeless, one of the first FaaS, function-as-a-service, solutions on top of Kubernetes.
And when Google went ahead and announced Knative, Red Hat was on board, and then IBM was on board, even though of course you and IBM were working on OpenWhisk, but then everybody aligned on Knative. Ultimately Pivotal and VMware also joined, right? So it looked like everybody, all the vendors at least, were going to join forces and work on Knative. So I decided to launch a new startup, TriggerMesh, to work in the serverless ecosystem on top of Knative. And our vision was not just functions; it was especially event orchestration, because we think that eventing is the big component of serverless as you're trying to build event-driven applications. So we created TriggerMesh very quickly in July 2018 with my co-founder, Mark Hinkle. We helped GitLab develop GitLab Serverless; that was some work we did for them very early on. And since then we kept on developing product around Knative, understanding the use cases, talking to lots of folks, doing a lot of market discovery. And then finally, at some point, we thought we were ready. So last December, December 2019, we raised a seed round, a three million dollar seed round from Index and Crane. And early 2020, we were super excited; we decided to start building the team. That's when, ultimately, Sameer joined. We're a fully remote, fully distributed team. So we got started, first engineering hire March 1st, and then end of March, COVID, right? Or I mean, it started before, right? So it's been an interesting year. But thankfully, we all know how to work in a distributed manner and remotely, so it's been fine. Yeah, I was going to say, you're fully distributed and remote. We at Red Hat had a large component of the company remote.
But certainly lots and lots of us were working in offices, and it's been pretty interesting for the last nine or ten months. I think we have 380,000 employees now in the combined company, and everybody's working from home. I thought it was going to make for less work, that it was going to be a little less busy, and it turns out things are busier than ever, which I suppose is a good problem to have. But you folks have been really, really busy. You've had a pretty exciting year, and this year you got named by CRN one of the 10 coolest startups of 2020. What makes you guys so cool? Yeah, so it's been an exciting year, because we were building the company. Whatever the world environment with the pandemic, it was exciting nonetheless to build the company. So first, you put a team together, right? Guys like Sameer in India; we have Antoine in Germany; and Spain, the US, and I'm in Switzerland. So putting all those folks together, building a team so that we can start building a product, that's been very exciting. And then really, as startup people, trying to find the real product-market fit is actually quite fun, because you have a vision, which is: developers are going to adopt and build event-driven applications using serverless functions and, increasingly, cloud services. That's our vision. But then you need to find the sweet spot. So it's been really, really good. And then, cool as a startup, it's because we work with cutting-edge technology. On top of Kubernetes, on top of OpenShift, we push Knative. We are the number six contributor in Knative behind all the big dogs, right? VMware, Google.
So we're a small startup, but we still managed to get news out there, to get contributions into the ecosystem. So here on the OpenShift marketplace, we put some AWS event sources, right? You're trying to build an event-driven application using events from AWS, but then you want to trigger workloads on your OpenShift cluster. You have an open source project from us, and it's also available in the marketplace. So we do lots of things and we make sure that people know about them. And being small also allows us to go quite fast. Now, you use the word trigger there, for triggering. Is that sort of what's in the name TriggerMesh? Yeah, totally. So the vision was definitely building serverless applications that we saw as a mesh of serverless functions, cloud services on AWS, on Google Cloud, on Azure, and then on-premises workloads running in Kubernetes or OpenShift, right? So we saw this hybrid world, this composition, this mesh of workloads and services. And then we were like, to be able to start the execution of those workloads, you're going to need events to flow through the system, and certain state changes are going to trigger those workloads. So that's how we came up with TriggerMesh, which I think is great. But for some people it's also a little bit confusing, because they think we are a service mesh company. We're not a service mesh company. Sure. Now, you use the word serverless, and serverless seems to be one of the buzzwords of the year. Is it really serverless, though? Or doesn't that really mean somebody else's servers? Yeah, so there's a meme actually on Google; it's exactly what you just said: serverless is just somebody else's servers.
I just gave a talk at KubeCon called Serverless or Serviceful. Serverless, I think, is a little bit of an unfortunate word, because everybody says, oh, there are no servers. Well, of course there are servers, right? You have a process running; it needs to run somewhere. So yes, it's basically managed services. And really, the philosophy behind it is that you have more and more services in your apps, in your enterprise: you have AWS, you have ServiceNow, Salesforce, Elasticsearch, Twilio, on-premises services. It's just full of services. And you're trying to make sense of this. You're trying to compose them, to trigger them when something happens, right? So serviceful would be much better, I think. Serviceful. Full, yeah. That came from Patrick Debois. So John Willis works with you now, one of the leaders of the DevOps movement; Patrick Debois is a good friend of John's. And I think Patrick came up with serviceful in 2017, so three years ago. I remember he gave a talk in Belgium at Config Management Camp, and he said, hey, you know what, I'm not going serverless. I'm actually full of services. I'm going serviceful. That makes sense. Yeah. So, got any war stories for us? Meaning, you're one of the 10 coolest startups of 2020; what are the big challenges that you see your customers trying to solve? Moving from the data center into a public cloud and into a multi-cloud environment is great. OpenShift allows people to build once and deploy anywhere, but it's not all perfect, right? This new flexibility brings all kinds of new challenges for customers trying to run production workloads in a multi-cloud environment. How are you folks helping address those issues for customers? Yeah.
So we got the seed in December, we started hiring in March, so we're nine months in, right? So I don't have a huge treasure of war stories. But in the POCs that we're doing, especially a lot of financial companies, a lot of banks, come to us inbound, and they have issues. Mostly what came up is that they have integration issues. So when I say composing services, it translates into integration issues. For example, we work with a bank that started doing lots of stuff with Salesforce. All of their customers are in Salesforce; every time they make a change, it's in Salesforce. It's their database. And they need their backend infrastructure to basically keep in sync with what's in Salesforce. So, big challenge: linking the two, an integration, right? That's the type of integration that we're seeing, and we're seeing more and more of it. Or the Azure healthcare API that somebody wants to use, but then they have lots of things on AWS, right? So they need to link Azure healthcare to SNS or EventBridge, things like this. Overall, the big challenge that all the startups have, I feel, is that we're all very cutting edge and there are multiple speeds, right? The speed of a startup that's cutting edge with products like Knative or Argo Events and so on, and then the pace of an enterprise; it's a totally different innovation curve, right? So you still see a lot of companies that are trying to put CI/CD in place, for example: we need CI/CD, we need to speed up deployment to OpenShift. It's one of our big POCs; it's been one of their biggest drivers. How do we speed up deployment to OpenShift from 30 days to one hour, right?
So they put lots of things in place, but then there are big questions like security: security evaluation, risk assessment, things like this. How do they make that happen? Which is a basic, basic problem. So it all comes back down to automation. How can I put more automation in my infrastructure, in my pipelines, right? That's where events come into play. And that's where I'm like, hey, I need events from SonarQube. I need events from Jenkins. I need events from Bitbucket, GitLab. I need events from all over the place. I need a way to codify what happens to those events and what they should trigger. And once I have done that, once I have automated all those flows, I'm going to be able to speed up my deployment to OpenShift, right? Is that realistic, going from 30 days to one hour? Yeah, I was off the phone a couple hours ago with a big bank that just did this. Really? 37 days to one hour to OpenShift. Yep. Absolutely. Wow. I thought you were just making a random statement. No, no, absolutely not random. And you know, I'm going to give you this answer: I'm under NDA, but that one is actually true. That's for real. Yeah. And a lot of people are trying to do this. So the big challenge for us as a startup is that you start talking to folks and you're like, oh, you want to go serverless, you want to do event-driven, blah, blah, blah. But what's the business driver? What is going to make them take the decision? And you go back to very simple things: how can I speed up my time to market, my time to value, right? That one example is very telling, because it's absolutely true.
And it's speeding up the time it takes to go live. Yep. Well, okay. Wait, we were talking on the bridge just before we went live, and you mentioned there was something you wanted to show before we get to Sameer. I know Sameer has a demo that he's going to be sharing with everyone, but I think you said there was something you wanted to show first. Yeah, can we do this? So Sameer has a cool demo on OpenShift, but I can show you some basic things here. Okay, cool. So I'm logged into our TriggerMesh SaaS. You can deploy on any Kubernetes cluster, but we have a SaaS offering. So here I'm logged in; we have lots of little nice things in the UI. This is what we call our bridge catalog. These represent basic integrations between services, right? You're trying to go serviceful. So you've got things like Salesforce to Elasticsearch, Slack to Confluent, Slack to Google Sheets, right? All of these are in our catalog, and you say "use template" and then you can fill in the parameters to configure those event sources and where the events go. We call them targets; some people call them event sinks, right? So, event sources to event sinks. When you do this, you end up building the key to TriggerMesh, which is bridges. You build a bridge between two services. What's interesting, because we rely on Kubernetes, is that those bridges have a declarative API, right? We have a bunch of CRDs; we extend the Kubernetes API. And then, with that powerful UI, you can create the manifests that represent those bridges, right?
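To make the "declarative bridge" idea concrete, a bridge object tying a source to a target might look roughly like the sketch below. The `apiVersion`, `kind`, and field names here are illustrative assumptions, not the authoritative TriggerMesh schema:

```yaml
# Hypothetical sketch of a TriggerMesh bridge manifest; field names are
# assumptions for illustration, not the exact CRD schema.
apiVersion: flow.triggermesh.io/v1alpha1
kind: Bridge
metadata:
  name: salesforce-to-elasticsearch
spec:
  components:
    - object:                 # the event source side of the bridge
        apiVersion: sources.triggermesh.io/v1alpha1
        kind: SalesforceSource
        metadata:
          name: my-salesforce
    - object:                 # the event target (sink) side of the bridge
        apiVersion: targets.triggermesh.io/v1alpha1
        kind: ElasticsearchTarget
        metadata:
          name: my-elasticsearch
```

The point is that the whole integration is a Kubernetes object: you can `kubectl apply` it, diff it, and keep it in version control like any other manifest.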
And if we try to do a quick bridge here, I want to do SQS to just a web app that shows the event. So here I see the sources that are available. I'm going to create an AWS event source, right, a demo. We're going to go through a broker, a message broker. And here I need the ARN of my queue. So I have my AWS console; I'm going to zoom in. I copy the ARN and I put it here. That's all I need to configure an AWS event source, and Sameer can do the same thing on OpenShift. And then I need to send that event somewhere. Yeah, go ahead. Oh, I was just going to say, it's for any cloud, though, right? Not just AWS; it could work on any of them. This is just that one example. Yeah, that example is just an AWS event source, right? We have a few sources for Azure and Google. And here as a target, I put a basic web app. So when I've done this, I've created my bridge and I submit, right? Now we have a controller running in your Kubernetes cluster; it's going to see the objects and it's going to create them. And of course I messed up the name. You see that I have a full manifest for that bridge, right? So that bridge is running now; it's green. What I'm going to do is open the web app that is the target for the events. That's Sockeye. This is actually a serverless application, call it a function. So when I put events in SQS, they get consumed and then routed to this app, right? So I go into SQS and here I say, you know, "hey, Mike". Hi, Mike. I put the message in SQS and then, boom, I get it right away. All right. And we have Chris in the background, right? "Hi, Chris." Now I send that message. All right, and then we get "hi, Chris".
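The object behind that UI flow is an SQS source pointing at a sink. A minimal sketch, with credential and sink field names assumed from the open-source AWS event sources project (the exact schema may differ by version):

```yaml
# Sketch of an SQS event source; the ARN, secret names, and keys below
# are placeholders for illustration.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: demo-queue
spec:
  arn: arn:aws:sqs:us-east-1:123456789012:my-queue  # ARN copied from the AWS console
  credentials:
    accessKeyID:
      valueFromSecret:
        name: aws-credentials
        key: aws_access_key_id
    secretAccessKey:
      valueFromSecret:
        name: aws-credentials
        key: aws_secret_access_key
  sink:
    ref:                      # events flow to a broker, which routes them to the web app
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```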
So it looks simple, but what you have here is a scalable SQS consumer that's been automatically deployed in your Kubernetes cluster. It can scale; it's a Kubernetes deployment. Every time a message is put in SQS, the source emits a CloudEvent; we follow the CNCF CloudEvents specification. It emits a CloudEvent over HTTP, it gets to a broker, and from that broker it gets sent to the target, which is this web app. And that basic web app just shows you the JSON, so you see the body, right? And why the Sockeye? Hey, Sebastien, why did you choose that? You'd have to ask Scott, who is now at VMware. It's because it's based on WebSockets. So why the fish? I'm not sure why. Do you know, Sameer? I think there is a fish named sockeye. It's a sockeye fish, I think. Okay. Not sure. Yeah. And then here, Sebastien, when you say any Kubernetes, does that mean any community edition of Kubernetes as well, or just the mainstream commercial ones, or any? Any, because TriggerMesh just gets deployed. The entire TriggerMesh platform is really a set of CRDs and a set of controllers. The AWS event sources specifically, which are in the marketplace, this is just one controller, right? I mean, operator. And I was going to ask you about your operator. I think you got your operator built in July, in the summertime of 2019, with us or something. How does the operator help people who are using TriggerMesh, in either a test-and-dev or a production environment? So the AWS event sources, they're open source, right? We have bits of TriggerMesh that are closed source, but the AWS event sources are open source.
And it's really just a controller, right? So now the question becomes: you have a controller, how do you deploy it on OpenShift? You need to have an operator. That's where the story gets a little bit funny, and at some point we should discuss this, right? Because we needed to create an operator to install our controller. Right. And the first one, how did we do this quickly? Well, we based it on a Helm chart. I was going to say, a Helm chart is probably the easiest and fastest way to go, right? So we had an operator to install a Helm chart to install a controller. It doesn't feel very optimal. But if we didn't have to go through all of this for OperatorHub, it's really just a kubectl apply: you apply the CRDs and you deploy the controller, and that's it. Okay. Well, so TriggerMesh is as-a-service, right? So, correct me if I'm wrong, but the only thing that needs to be deployed on the customer's nodes is the agent, or the thing that phones home, if you will. Is that right? No, no. What I showed you is the SaaS, right? If you want to use it, you go to the SaaS; there is no connection to on-prem, right? If you want what I just showed you on premises, then you have to talk to us and we install all the bits in your Kubernetes cluster, or something else. I would imagine for some in the financial services industry, that's probably the method of choice, I'd be willing to bet. Yes, on premises. Yep. Okay. And in the marketplace, sorry, in the marketplace, it's just the AWS event sources. Okay. Cool. Which you can use without the rest of TriggerMesh, right? How does that work? Well, you end up with just a controller, right?
So you can configure the AWS event sources, and then the targets are going to be OpenShift Serverless, right? We only provide the sources; they are certified, we support them. But then for the targets, you fall back on OpenShift workloads, OpenShift Serverless functions, right? Okay, good. Sameer, I think you're on deck. Well, welcome from India. Thanks for staying up late, I think, or getting up early, one of the two, right? We couldn't put more pins on the world map behind you. Yeah. So hi, my name is Sameer. I am from Goa, like Mike mentioned. What I wanted to do is extend Sebastien's demo: I want to demonstrate the same thing on the OpenShift platform using our operator that Sebastien mentioned. So, like Seb mentioned, TriggerMesh Cloud is our SaaS offering, but we do package bits and pieces of our cloud platform and make them available. One of these is the AWS event sources package. Using that package, you can do the same sort of integration by hand on OpenShift. What I want to demonstrate is an application use case where, say, you have a Kinesis data stream, and your use case is that you want to pick messages off the Kinesis data stream, modify them with your application logic, and then send them to a Kafka queue. This is just an imaginary use case. Typically, if you're an application developer, what you would go ahead and do is write a full application for this scenario: do the integration with Kinesis, then do the integration with Kafka, and then write your application logic. That is where serverless comes into the picture: you don't have to bother about the integration with Kinesis and Kafka; you just work on your application logic and use the existing components. That is the whole idea of serverless.
Not that there are no servers; there are servers, but you just focus on your application and don't bother about writing the nginx part and all of this. Sameer, can I just jump in for one second? We're having some questions come in in the BlueJeans chat and I wanted to make sure that we get the questions addressed and answered, because we're also going out on YouTube Live and Facebook Live. So Sebastien, I don't know if you want to address that question so that the people on YouTube and Facebook can hear the question and the answer as well. Oh, yeah, sure. I was replying to the question. So there was one question, sorry, Sameer: are sources and sinks implemented as Kafka connectors? Not really, because you don't necessarily use Kafka as the messaging substrate. Sources and sinks just deal with CloudEvents that are being sent over HTTP, and then it's the broker under the hood that could be a Kafka topic. But it could be something other than Kafka; it could be Kinesis or Azure Event Hubs or Google Pub/Sub. And Sameer, why don't you jump on and show us the OpenShift operator, where we see how you did the config of the AWS event sources. We can keep it a little bit more interactive. And if Chris Short picks up some more questions from YouTube and Facebook, Sameer, are you okay with that, or do you want to hold the questions till the end of the demo? No, I'm okay pausing for the questions. Okay. I'll try and barge in politely if they come in. Yeah, in the demo there are going to be some points where there could be pauses, so those could be good moments for questions. So, like I was saying, there's an imaginary use case where you want to consume AWS events occurring on a Kinesis stream. Some messages are coming to your Kinesis stream; you want to process them with application logic and then send the processed message to a Kafka queue.
The use case can be anything, but this is just an example to give you an idea of what you can do with our source component, right? So this is an overall view, you can see my screen, right? An overview of what we want to achieve in our application. With that application in mind, I will quickly go over the setup we need to get it running. Here is my OpenShift console. This is just some housekeeping work, to make sure that we have everything we need, right? We're going to install a couple of operators to provide infrastructure for this application. First, we need the OpenShift Serverless platform. I just type "serverless" and I install Red Hat OpenShift Serverless, accepting the defaults; nothing wrong with that. So that's where, you know, Mike, when I said that with our operator in the marketplace you get just the AWS sources, you can then target OpenShift Serverless. So here he's installing the OpenShift Serverless operator, which is Knative Serving under the hood, right? You're going to end up with Knative Serving in your OpenShift cluster, which is a set of CRDs, and it gives you scale-to-zero capability and autoscaling. So I just completed installing the Serverless operator, and I also instantiated the Knative Serving and the Knative Eventing APIs. This is the standard installation process; if you visit the readme of that operator, that is how it has to be installed. Knative Eventing on OpenShift, which is GA now, I believe? Yes. Second thing is, now that we have the serverless components available, we can go ahead and install the AWS Sources operator by TriggerMesh. We have two versions available; the project is actually open source, but with the one from the marketplace you get 90 days of trial before you have to ask for support, I think. So you just go and install the one from the catalog.
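Instantiating the Knative Serving and Eventing APIs after the operator install comes down to creating two custom resources along these lines. The `apiVersion` and namespaces are assumptions and may vary with the operator version:

```yaml
# Sketch: instantiating the Knative APIs provided by the OpenShift
# Serverless operator. Version/namespace details may differ.
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
---
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
```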
Again, just accept the defaults. And this is the operator that Seb was talking about: an operator wrapping a Helm chart that installs the controller. And same thing, we need to instantiate the API provided by the operator, so let's go and instantiate it in the default namespace. This provides the infrastructure for our application to run. Now, one of the things that I had spoken about was that we are going to post the message to a Kafka queue. Since we are using the OpenShift platform, we can also deploy Kafka on OpenShift, so let's take advantage of that as well. To do that, there is a Strimzi operator available on OperatorHub, so we can just go and install that one with the defaults. So maybe, Sameer, while you do this I can explain, because this can be confusing. TriggerMesh itself is not a streaming platform, or a messaging platform. So we need a messaging platform. Okay. Knative Eventing, same thing: Knative Eventing doesn't do messaging. It's a set of API constructs that allow you to build eventing flows. Right. So TriggerMesh uses Knative Eventing. We have additional bits, like the AWS event sources, for which we provide support, right. But under the hood, if you need messaging, you need something like Kafka, or NATS, or RabbitMQ maybe, or Kinesis, right. So here, on premises, it makes total sense to use Kafka. That's why Sameer is installing a little Kafka cluster through the Strimzi operator. Yeah. So now that I have installed the Strimzi operator, I can go and instantiate the Kafka cluster, which is pretty cool: this is all you need to do to create a Kafka cluster, right. This part takes a little bit of time, so if there are any questions, maybe we can answer them. Can you show us the GitHub repo of the AWS sources? Yeah. So the AWS sources are open source; it's at triggermesh/aws-event-sources. You can find... And they're... Yes. This is the code here.
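For reference, the Kafka cluster object created here through the Strimzi operator looks roughly like this. It's a demo-sized sketch; the API version, sizing, and listener setup are assumptions:

```yaml
# Sketch of a small Strimzi Kafka cluster for a demo; not production sizing.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal       # only reachable inside the cluster
        tls: false
    storage:
      type: ephemeral        # fine for a demo; use persistent storage in production
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}        # lets us create topics as KafkaTopic objects
```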
And there's a config folder which has samples; you can use these samples to start using the sources and play around. So show us a manifest of the Kinesis source, for example. Yeah. So this is what a manifest of the Kinesis source looks like, very minimal. We just specify the ARN of the Kinesis stream, then the credentials to access the AWS service, and then the sink. So when an event is picked up from the Kinesis data stream, it is sent to a sink; and in this case, in the default sample, it is sent to the default broker. So that one is interesting. You see that the TriggerMesh event sources are a set of CRDs that allow you to deploy event consumers for specific AWS services. Here it's Kinesis. You deploy this on your OpenShift cluster and suddenly you're consuming events from Kinesis. Where do they go? They go to the sink reference. You see in the spec there is a sink; if you go back, there is a sink. And here that sink, you see that it's a Knative broker, but it can be something else, including an OpenShift Serverless function. So now you suddenly have a flow: you're consuming messages from Kinesis and they can go to your OpenShift Serverless. Sorry, Sameer. Go ahead. Cool. The Kafka cluster seems to be ready; just hold on a minute once again. So basically what you're going to show here, which is quite powerful, is almost a sync between, and my French accent is saying "sink", but S-Y-N-C, right? You're going to sync Kinesis and Kafka. So, just to complete my Kafka setup: this Kafka cluster is accessible only within the OpenShift platform. So the way I'm going to post my messages to the Kafka cluster is through the HTTP API. What I want to do is expose a REST bridge that is made available by the Kafka Bridge component.
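The Kinesis source manifest being described, the ARN plus credentials plus sink, can be sketched like this; the field names are assumed from the open-source AWS event sources samples, and the ARN and secret names are placeholders:

```yaml
# Sketch of the Kinesis source: an ARN, AWS credentials, and a sink.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSKinesisSource
metadata:
  name: my-app-stream-source
spec:
  arn: arn:aws:kinesis:us-east-1:123456789012:stream/myAppStream
  credentials:
    accessKeyID:
      valueFromSecret:
        name: aws-credentials
        key: aws_access_key_id
    secretAccessKey:
      valueFromSecret:
        name: aws-credentials
        key: aws_secret_access_key
  sink:
    ref:                     # the default broker; could also be a Serverless function
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```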
So I will instantiate the Kafka Bridge, because it creates a REST proxy for my Kafka API, and then I will expose the bridge outside the cluster by setting up a route. Again, I'm going to use the defaults. Hold on. You said "bridge minus one", is that normal? Good catch. I was going to say the same thing. Let's just go back. So, none of this is really TriggerMesh specific. Here we're doing it really step by step from scratch: creating a Kafka cluster with Strimzi, creating the HTTP proxy so that we can inject into Kafka. On the other side we have the TriggerMesh sources, and we're going to set up this Kinesis source shortly. As the bridges are coming up, let's go and set up the route. The idea of the route is to be able to access our REST proxy. I'll name it the same as my bridge. This is cool, because I'd never seen the full demo, so I'm getting an OpenShift crash course here. So yeah, this exposes the REST API proxy on this URL; REST requests sent to this URL will go to the Kafka cluster. So we have all the pieces. Let's just create a Kafka topic. I didn't know you could do all of this through the OpenShift console; it's pretty cool. So we have a Kafka cluster, we have an HTTP endpoint to get into that Kafka cluster, we have a topic. Yeah, we have a topic and we are pretty much ready. On the other end, I have set up a data stream on Amazon Kinesis. Oh, I got logged out, one second. Yeah, there is a question about Debezium. So you could use things like Debezium if you're Kafka specific. The thing that's interesting here is that in this particular demo we are really talking about Kafka with Strimzi. But if you're just using Knative Eventing, all the Kafka bits are abstracted, and it could be something else under the hood. So if you're not using Kafka, you could still use the TriggerMesh sources and use the eventing abstraction from Knative. Yeah, so I already have set up a Kinesis data stream called myAppStream here.
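The bridge-and-topic part of the setup can be sketched as two more Strimzi objects; names, ports, and the bootstrap address are illustrative assumptions:

```yaml
# Sketch: a Strimzi Kafka Bridge (HTTP/REST proxy into Kafka) plus a topic.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaBridge
metadata:
  name: my-bridge
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092  # internal cluster listener
  http:
    port: 8080               # exposed outside the cluster via an OpenShift Route
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: my-topic
  labels:
    strimzi.io/cluster: my-cluster   # ties the topic to the Kafka cluster above
spec:
  partitions: 1
  replicas: 1
```

With the Route in front of the bridge, a POST to `/topics/my-topic` on the exposed URL lands the message in Kafka.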
This is the ARN, which we will need while setting up our Kinesis source. So let's just go and start the demo. Now that we have all the components, I'll just go over how the application will look. A little bit of Knative knowledge is helpful here, but try to follow along; it's not that difficult, right? So what we have is myAppStream, which is a data source on AWS Kinesis. The idea is to get the data from here to the Kafka queue. To achieve this flow, we are going to use three components. One is the AWS Kinesis source, which is the one we just installed. And then, like I said in the use case, you want to modify the message in whatever way possible. So there's going to be an application logic component. For that, we are going to use another TriggerMesh component called the Infra target, which is basically a component that allows us to write JavaScript within the declarative syntax of a manifest, right? So you can write JavaScript right in there and modify the message however you want, or you could implement your own serverless function and deal with it, right? From there, we will be using another component called the HTTP sink, which is of the kind HTTPTarget. It is similar to the HTTP source in Knative; the difference is that the HTTP target makes a POST, so you can use it to post messages to an HTTP endpoint. This component will make a POST to the Kafka REST proxy, and the message will then end up in the Kafka queue, if everything goes fine. So Samir, just go ahead and show us the source config, because we're going to be running a bit out of time. Yeah. Actually, I was supposed to do the whole demo. I'll just use a manifest that I already have. Go for it, set everything up, and then Mike, if you have questions, you and I can talk while Samir gets everything set up, and then we can keep watching what he's doing. Yeah. Yeah.
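The two intermediate components Samir describes, the inline JavaScript transformation and the HTTP sink, might be sketched as below. This is only an illustration of the shape of these TriggerMesh targets: the exact spec fields may differ by release, and the script body, endpoint, and names are made up for this example.

```yaml
# Application logic: JavaScript embedded in the manifest to reshape each event
apiVersion: targets.triggermesh.io/v1alpha1
kind: InfraTarget
metadata:
  name: transform-logs
spec:
  script:
    code: |
      function handle(event) {
        // wrap the raw log line in the envelope the Kafka REST proxy expects
        return { records: [ { value: event } ] };
      }
---
# HTTP sink: POSTs each transformed event to the Kafka Bridge REST proxy
apiVersion: targets.triggermesh.io/v1alpha1
kind: HTTPTarget
metadata:
  name: kafka-rest-sink
spec:
  endpoint: http://my-bridge-bridge-service:8080/topics/my-topic
  method: POST
```

Channels and subscriptions (shown later in the demo) then wire the three pieces together: Kinesis source, transformation, HTTP sink.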
So for the person who asked the question about glue, whether we are creating event-based infrastructure glue to connect all services in the cloud: yeah, yeah, you could say it like this. So here, we're really getting down and dirty, because Samir had created a Kafka cluster and now we're talking about transforming the events that are flowing through the system, because the Kafka proxy needs events in a certain shape. So what comes out of the Kinesis source needs to be transformed before we can send it to the Kafka proxy and before it gets accepted. So here, we're touching on really all the components to build an event-driven application in OpenShift: using TriggerMesh sources, using OpenShift Serverless, using the Strimzi Kafka operator. It's the whole shebang. The strength of it, going back to what we were talking about earlier with some of the proofs of value that we're doing with a financial institution, what they really like is that at the end of the day, your event-driven application is fully declared with an API. Your event sources use a declarative API, your sinks use a declarative API, the triggers, the transformations: that's really what you get with TriggerMesh. And once you have this representation, it's just like your Kubernetes manifests: stick them in version control as a source of truth, and then use your CI/CD system to manage them. Use your GitOps, right? Somebody just raised 46 million to do GitOps. So adopt the GitOps mindset and you can manage your event-driven applications the same way. And AS had a question about whether we have some security context for the messaging, mesh events in JSON? I'm not exactly sure what that question means, but in Knative, if you use Istio or another service mesh, you can turn on the mesh. A lot of people don't use the mesh, they just use the ingress capability. But if you want, you can turn on the service mesh, which gives you mutual TLS between services. So you have added security.
So if you're talking about a messaging mesh, that would be the answer. Yeah. So I think I have set up all the components. Basically, the components that are deployed support this architecture. These are the three components, and then there are two channels in between. So there are the channels, there are the components, and then there are subscriptions. If you look at the whole manifest, there is actually really nothing going on. The only thing of interest should be this. This is the component where we can just write JavaScript code within our manifest and manipulate the event. So whatever JavaScript is supported by the Infra target, you can stick it in here and transform the messages. So for example... That's a little secret sauce in TriggerMesh, where we have this manifest where you can inject a bit of JavaScript to do a bit of magic. We also have an event transformation component which is fully declarative, Bumblebee. But okay, skip that one, Samir. People are going to get scared. So we have all the components. The idea was to pick up messages from the Kinesis stream, and they should end up in the Kafka cluster. So let me just fire up a Kafka consumer, so that when the messages start showing up in the queue, we can validate that it is actually happening. So let me just start the consumer. So we're consuming from Kafka. How are you producing to Kafka? Yeah. So what I have done is I have set up an EC2 instance which will post application logs, nginx application logs, to my Kinesis data stream. I found this nice tool called the nginx log generator, which generates random nginx logs, not real ones. The logs are written out to /tmp/app.log, and the AWS Kinesis agent is set up to send that application log to myAppStream, my Kinesis stream. Simple as that. So let's just start generating the logs. If everything works as expected, these logs should get put into the Kinesis data stream.
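The producer side Samir describes, the AWS Kinesis Agent tailing the generated nginx log file and shipping lines to the stream, is driven by the agent's `agent.json`; a minimal sketch along the lines of the demo (file path and stream name taken from the description above) would be:

```json
{
  "flows": [
    {
      "filePattern": "/tmp/app.log*",
      "kinesisStream": "myAppStream"
    }
  ]
}
```

Each new line matching `filePattern` is sent by the agent as a record to the `myAppStream` Kinesis data stream.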
And then from there, our source will pick the data from the Kinesis data stream, do all the magic, and send it to Kafka, which we should see at the top of the terminal. It may take a little bit of time, but let's just wait for that to happen. And by the way, Sebastian, I've been negotiating with the production team and I've negotiated an additional 10 minutes for us if we want it. I think we're almost there. We're almost there. But yeah, it looks like a lot, and it shouldn't be underestimated: Samir did a bunch of stuff here. He set up a Kafka cluster in OpenShift with Strimzi, deployed the Kafka HTTP proxy, created a topic. He installed OpenShift Serverless and OpenShift eventing, which are Knative components under the hood. He deployed the TriggerMesh operator, the AWS sources operator. So now we can actually sync up Kinesis and Kafka. So now he's emitting messages to Kinesis, and if the entire flow goes well, we should see them in Kafka with this little kubectl command. It takes a little bit of time here. Yeah, it's anytime now. What? What did you say? Anytime now. That's why we need the 10 minutes extra. Well, it's actually good to see that this is actually live, as opposed to just a scripted demo. Of course, what could go wrong, right? Nothing. Nothing. Nothing. Something is going wrong. Drum roll. That's the problem if you don't record a demo, or if you don't have a batch script that you execute. That's why you scared me with that minus one. I don't see anything arriving. Actually, I can see the messages are being posted. I have a monitoring terminal here. I can see that the messages are being sent to the pipeline. But there is some mistake. You're seeing them. So show us the logs of the Kinesis source. Yeah. These are the logs of the Kinesis source. That's running in OpenShift. So the Kinesis source from TriggerMesh is getting the messages from Kinesis.
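The "little kubectl command" for watching the topic is typically the Kafka console consumer run in a throwaway pod; a sketch against a Strimzi-managed cluster might look like this (image tag, cluster name, and topic name are assumptions, not taken from the demo):

```shell
# run a one-off console consumer pod and print messages from the topic
oc run kafka-consumer -it --rm --restart=Never \
  --image=quay.io/strimzi/kafka:latest-kafka-3.7.0 -- \
  bin/kafka-console-consumer.sh \
    --bootstrap-server my-cluster-kafka-bootstrap:9092 \
    --topic my-topic --from-beginning
```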
And then it doesn't look like the Kafka consumer is getting them. So I'm wondering whether I set it up right. I think I might have not run the subscriptions. That might explain it. So if there's... ah, there you go. They just arrived. I saw it. Oh, no. The other window. Yeah. I think the subscriptions were not done, because otherwise it would have said "already exists" or something; it would have said "configured." So what do you think, Michael? Mike? I love it when it's verbose like this. Does anyone actually go back and read all that output? Oh, here it is. There you go. There are the logs. My bad. So if we wanted to verify that these are actually the logs that we are sending, I could do that. This is base64 encoded. That's just base64 encoded. So, you know, there are definitely different ways to do this, right? If you say, I'm purely Kafka, I know that I'm Kafka, maybe you can take shortcuts. If you want, you can write your own source, do the transformation directly, inject directly into an existing Kafka topic, and everything is packaged as a container. So certainly you can do it differently and you can do it yourself. But now, that's one source. What happens when suddenly the next day you need SQS, and then the next day you need a Cognito source, and you need DynamoDB, and you need CodeCommit, right? And this is just AWS. Then you need a GitLab event source, a GitHub event source. And then your targets: not only do you need OpenShift Serverless, but one day you're going to want to send everything to Elasticsearch, and then one day you're going to want to send everything to Splunk, right? And so on. So that's where we see the strength of TriggerMesh: we have those catalogs of sources and targets, and then you can describe those entire event flows in a declarative manner.
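To Sebastian's point about the catalog, adding the next source is just another manifest of the same shape. For example, an SQS source might look roughly like this (API group/version and field names follow the TriggerMesh AWS sources and may vary by release; the queue ARN and secret names are placeholders):

```yaml
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSSQSSource
metadata:
  name: my-sqs-source
spec:
  # ARN of the SQS queue to consume from (placeholder account/region)
  arn: arn:aws:sqs:us-east-1:123456789012:my-queue
  credentials:
    accessKeyID:
      valueFromSecret:
        name: aws-credentials
        key: access_key_id
    secretAccessKey:
      valueFromSecret:
        name: aws-credentials
        key: secret_access_key
  # Same sink abstraction as any other source in the catalog
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```

Swapping Kinesis for SQS, Cognito, or a GitHub source changes only the kind and a few spec fields; the sink wiring stays the same.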
And because you have those flows in a declarative manifest, suddenly you fall back on your DevOps tooling, with your CI/CD and your GitOps. And that's the big strength. So that's it. Thank you. So like Sebastian mentioned, there was hardly any real setup of servers and things like that. If it was a real application, I would have just had to work on my JavaScript and get the job done. Thanks, Samir. Thanks, Samir. And we still have plenty of time there. I'm sorry if I went a little fast. I was not aware that Samir was going to do a demo, so I had actually planned to... Surprise. And Samir, I didn't tell you that Mike just wanted a 10-minute demo. That's fine. It's all good. Thanks for coming. I know we're two minutes over. Great session. Yeah, anyway, by the way, if anyone wants to get in touch with us from Red Hat, you can send me an email: waite@redhat.com, W-A-I-T-E. As far as getting connected with our folks from TriggerMesh, we have our slide up here. Sebastian, I don't know if you want to speak to that. I didn't realize your headquarters was in Raleigh. Yeah, my co-founder Mark is in Raleigh. Yeah, yeah. So yeah, reach us at @TriggerMesh on Twitter. If you need more information on the product, definitely reach out to Gary, gary@triggermesh.com. And then visit triggermesh.com, the website. We're doing regular webinars, so definitely feel free to reach out and ask us more questions. We are in the OpenShift, IBM, Red Hat Marketplace, so you can find us directly there. When's your next webinar? I would have asked you, hey, when is the next event you're going to be at? Are you going to be in Amsterdam or Copenhagen? But I think we're going to have to wait possibly up to another year for that. So tell us about the webinars that you do. Yeah, mid-January, we're doing something with Google.
So I'm not sure of the details yet, but there's going to be some interesting webinar in January. Okay. But then, Twitter. Twitter is a good source of information; we're pretty active there, so you can get all the latest news, or LinkedIn as well, of course. Sure. Well, great. Sebastian and Samir, thanks for joining us here for this edition of the OpenShift Commons briefing, our Operator Hours show. We have it every Wednesday at noon. Thanks again, and good luck going forward. We'll talk to you soon. Thank you. Thanks a lot. Thanks, Samir. Thanks a lot. Thanks for having us. Bye-bye.