The next talk, everyone, is going to be on "What is cloud native integration?" by Michael Costello. Michael, are you here? I'll just give him a couple of minutes if he's there. Okay. Hello. Can you hear me? Yeah, I can hear you fine. Are you ready? All right. And then I suppose, can you see this? Yes, we can. Groovy. All right. Great. All the best. We'll start in two minutes. Yeah, sure. That's completely up to you. You can wait for two minutes. Just let me know when you're ready. Okay. I think this is a very popular topic and everyone will want to hear about this. I've personally spoken to two or three people who are very interested in your talk. Cool. Cool. We'll give it one more minute and then I'm probably going to start right on the dot. There's a lot to get through and I'm going to try to demo live. Okay. It's going to be exciting. Cool.

All right, everybody. Thanks for attending this talk. Today we're going to talk about what cloud native integration is. And more specifically, we're going to get into some cloud native architecture and we're going to talk about some tooling with Apache Camel K. My name is Michael Costello. I'll give you a quick introduction to who I am and then we'll go through the agenda of the things we're going to talk about. I'm a programmer. I've been doing it for about 20 years in the distributed software space. I like to call myself a reformed enterprise Java guy. I'm a senior architect at Red Hat in the enterprise integration practice and our emerging technologies practice. And if you want to check me out further, I do have some articles; we'd love to get responses, and so on. There are some links there where you can go check me out.

So today what we're going to talk about is, of course, cloud native integration. It's a term that we've been using lately to describe our emergence into the cloud with some of our traditional integration techniques. So just so we don't get too far ahead of ourselves, we're going to talk about how we came to enterprise integration patterns and something called Apache Camel. We'll give a prequel to the cloud: SOA and the enterprise service bus pattern. Then we'll talk about moving some of these enterprise integration patterns, some of these new approaches that we took as we emerged out of our traditional legacy mainframe environments, and we'll talk about how we move these things to the cloud. We'll talk about what cloud native architecture is, why you would use something like Apache Camel K, and how to apply it. We'll do a live demo. We're going to have to be super quick. We only have about 20 minutes, and we want to leave a few minutes open for questions. And of course, as a result, the demo is going to have to be a very, very small subsection of a much larger demo where we really get into some multi-cloud, hybrid cloud, multi-tenant concerns and speak about some of the architectural choices we've made to be quote-unquote cloud native. I've got a GitHub URL there. It will be in the slide deck, and of course I'll make sure I put it into the chat as we get to the questions. So real quick, how did we come to enterprise integration patterns and then this thing called Apache Camel?
So once upon a time, as you'll all remember, we found ourselves on mainframes, and we had bus architectures where different programs on our mainframe could kind of pick up from each other and do their work, or simply do their work themselves. We moved away from this into client-server topologies, and all of a sudden we had a need for remote invocation. As we moved into this new world where we distributed out our compute and our compute processes, we started making some tooling leaps as a result; we started to look at things like asynchronous messaging. IBM's MQSeries is one of the classic things that comes to mind. It became quite dominant in the 90s and the aughts and still has quite a presence today. These kinds of tools modeled the traditional patterns we had on mainframe systems, but still allowed us to have remote invocation. Now we could have a series of runtimes and a series of servers out in the wild, and they could all communicate with each other using fairly similar techniques. What happened during this period of time is we had a lot of different tools, right? A lot of people were knocking on the same doors, and Gregor Hohpe, one of the guys who contributed largely to the enterprise integration patterns, made a really good point: ultimately, at the end of the day, we have many similar concepts here, but we have many, many different tools. If we look at ESBs, for instance, TIBCO had their own tool; Lombardi, eventually IBM with WebSphere; Oracle with their Fusion product; and so on. And with enterprise Java and other remote invocation tools, we were running into the same things. Inevitably, we distilled these into a set of what we call enterprise integration patterns, where we took these common patterns and said, hey, we find these things applying themselves very generally in enterprise contexts. Some of these things could be the splitter pattern, service mediation, and so on. And so standards began emerging, and what we started noticing in a lot of the tooling vendors, and specifically Apache Camel is one of those tools, is we started to say, hey, we need some semantically meaningful, idiomatic way to describe these things, right? We need domain-specific languages, and we need to have patterns idiomatically baked into our language. And if we look over here to the right, we'll see a really small snippet of Camel, where we start to see these kinds of things that are semantically meaningful. We know we're going from a JMS queue, then we want to do some parallel processing, and inevitably we have some other idioms that really reflect what is happening underneath the covers from a technology standpoint. These things would often come with hundreds of adapters. So as we made our way into this new world, we could say, hey, we have an idiomatic way to describe our enterprise behaviors based on these things called enterprise integration patterns. And, oh yeah, by the way, pretty much every tooling vendor had these: hundreds of adapters to other systems like SAP, Salesforce, ServiceNow, and so on.
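To make that snippet concrete, here is a minimal sketch of the kind of idiomatic Camel route being described: consuming from a JMS queue, then doing parallel processing with the splitter pattern. The queue names are hypothetical and the slide's actual snippet may differ.

    import org.apache.camel.builder.RouteBuilder;

    public class OrderRoute extends RouteBuilder {
        @Override
        public void configure() {
            // The EIPs read idiomatically: consume from JMS, split the body,
            // process the pieces in parallel, then send them on.
            from("jms:queue:orders")          // hypothetical queue name
                .split(body())                // splitter pattern
                    .parallelProcessing()     // fan the work out across threads
                .to("jms:queue:lineItems");   // hypothetical destination
        }
    }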
So we got to this place where we had this kind of tooling in the SOA space, and most of it lent itself toward the enterprise service bus pattern, which was typical given where we were coming from. And this led us to a bunch of great places. We were able to expose reusable service endpoints over common communication standards. No longer did we have the fun of trying to figure out how to serialize over the wire differently for each tool, right? We had a common means of communication over the bus. We were able to loosely couple. This gave us our service-oriented architecture construct of everything being on an island, which is analogous to what we find right now in microservices. And what this allowed us to do is say, hey, because my two bits of processing, the two things that are doing the work, are loosely coupled, they can attend to different things and they can change at different rates. This was great because everybody had their own notion of responsibility as they delivered bits into our enterprise. And this also let us adapt legacy services via the set of adapters we baked into these appliances, allowing those things to conform to normal communication standards. Now, if I had some big mainframe process in the background, no bother. I just simply use something like Apache Camel, and now it can talk to a host of other services over this message bus pattern. And this also offered us a really great tool set to do these things with. Oftentimes I would have not just a common protocol for communication, but also a canonical message standard, right? Every message that happened over the bus and that you subscribed to would have the same shape, right? And this made things very predictable and easy to implement. And many of these tools, like Apache Camel, your TIBCOs, the Fusions of the world, all had great IDEs, so you could really just drag and drop, get your work done, and let our enterprise service bus appliance, or some of the other SOA appliances that we ended up with, take care of the work. Unfortunately, this did present some new challenges. We still had complex interactions that required state management. As we were no longer sitting in the same runtime, we started to have trade-offs between consistency, availability, and partition tolerance of our data, right? The CAP theorem still definitely applied, and quite frankly, the CAP theorem becomes more difficult in a distributed context. Integration implementations were often coupled to platform-specific interfaces. As we saw with the JBI standard from enterprise Java, which was retired quite quickly, or even ServiceMix's normalized message router, a precursor to Camel, these implementations were very, very specific to the appliance that you were using. What this inevitably meant was that I was bound inextricably to the APIs that my tooling presented, and it ended up being quite a bad place to be, because maybe those things didn't really accommodate my needs. Additionally, we popularized the central governance model. We would have schema registries and discoverable services, and so on, over our ESB or other SOA appliance. And unfortunately, what this ultimately meant, and I think many of us have sat through it, is that if we needed any change to the business, it had to be expressed through our central governance model. In fact, we likely had a canonical message that everybody in the business had to adapt to.
What this meant is all our runtimes would spend their time picking out what was meaningful to them, throwing away the rest, and then trying to figure out how to cobble it all back together once they'd done their work, in a way that the next thing could handle. And so change came at an absolute snail's pace. I personally have sat in many, many steering committees where we literally spent months trying to figure out how to accommodate everybody. What ends up happening is we all put our Martin Fowler hat on, and microservices emerged. So on the integration highway, we had some point-to-point stuff that we initially discussed, right? We got remote invocation between two points. We then adopted SOA, service-oriented architecture, and we ended up with ESBs where everybody could link up to a common hub-and-spoke type of infrastructure. And then inevitably we came up with microservices, and these take on some of the same capabilities that our service-oriented architecture did, but a little bit differently. What we want these things to do is to have independent deployment pipelines and act completely independently. We would not want them coupled to any of our other services, or certainly not to a service implementation type. And this allowed us to be more flexible in our deployment types and more agile in how we go about our SDLCs, as we now have independent deployment pipelines. And if we really put our Martin Fowler hats on, we went so far as to silo our data stores and databases such that a microservice would have its own database. Well, that was cool, right? And it got us a lot of stuff. However, we began to move to the cloud, and there were some expectations of moving to the cloud that changed the viability of microservices and started to instill some new pain points. So one of the first concerns we needed to attend to was, well, now our infrastructure is ephemeral; we need to be able to tolerate failure. We also wanted our old stuff, the stuff that sat in our data center, to be able to talk to new stuff in the cloud. And we wanted to promise our boss, who is funding all this: hey, don't worry about the fact that we've gone over to AWS, no bother, we're cloud native, so we can take this to any cloud, right? And now that means we have to take on capabilities in our microservices that allow for these sorts of things. We have compute density and efficiency front and center. We've said, hey, we're going to go to the cloud, that's going to be cheaper, we don't have to go buy a new data center every time we want a new enterprise feature. However, small problem: that also makes us responsible for maintaining a notion of compute density and efficiency that's quite lean. That, again, began influencing our architectures. And inevitably, we distribute our architecture across cloud infrastructure and availability zones. One of the ways that people started attempting to handle this is an abstraction called the container platform, something that many people are very familiar with now. And that's become a central tenet of our architecture, so we can attempt to deal with things like infrastructure failure without having a completely different set of appliances to handle it, and attend to compute density, how we schedule these things, and so on.
And this allowed us to follow our microservices architectural viewpoints and decompose into smaller runtimes, and it lends itself neatly to having these kinds of independent deployment pipelines. However, of course, this also leads to a new set of pain points. We all of a sudden took one big, huge monolith and distributed it out into 30 runtimes. That requires a whole new and phenomenally more robust set of tooling that collects metrics and health information about our runtimes. We had Kubernetes, or a container platform, to abstract away cloud-specific infrastructure APIs; now I don't have a wildly different experience when I go to Azure versus when I go to AWS. But we've also taken on all those other concerns we saw on the left-hand side, and now developers need to build high availability, redundancy, and cloud characteristics into their applications. It doesn't come for free, and as we've all seen over the last several years, developers are spending much of their time attending to these platform needs as opposed to actually doing the work they need to do for the business. As we mentioned, our monolith decomposition led to an explosion of deployments, and it's difficult to go to our superior and say, we've moved to the cloud and it's far more expensive. And then, of course, there's still a need to maintain state reliably in a cloud context, much like we would have done in our traditional world. However, we need to be able to handle the loss of an individual runtime, an availability zone, storage, potentially even a region. So we still have this need for our state stores and our sources of truth to be meaningful and real and persistent, but our tooling doesn't really accommodate that. Cool. So here we see that my team and I have actually made an honest effort at defining what cloud native means. We won't spend too much time on this slide, but some of the cloud native characteristics we need to take on are that we're elastic, we're scalable on demand, we're resilient, and I mean really resilient, not just, hey, my runtime comes back up, but, hey, I'm able to survive the loss of a data center. We need to be observable and manageable. We need to be location agnostic. Remember, our runtimes are going to come and go, our storage may come and go, all these things may come and go. We need some means of deployment and service discovery that doesn't depend on static things. We need to be API centric, and we need to be event-driven. The last two are spelled out in the article that you see there. Please take a peek, and of course, file a GitHub issue if you disagree. But one thing that we notice is that being cloud native is more than just a move to the cloud. If we rely solely on a single cloud API, we're only native to that cloud, right? Then, when I go from AWS to Azure, I get to be the person who delivers the message to my boss: hey, that'll be another nine months. So we've abstracted these proprietary cloud APIs via Kubernetes. Kubernetes, or a container platform, is great, but it's really not enough in and of itself. To be truly cloud native and capture the characteristics on the left-hand side of the screen, we need something that actually does care and feeding for deployments, attending to things such as observability, manageability, resilience, and scalability. And so we need something like the Operator SDK.
I won't get into the Operator SDK and what it is, but again, please go check out this article; we explain it in nauseating detail. All right. So where does that leave us? We have these new things to attend to as we move to the cloud. We had point-to-point, we had the enterprise service bus, which was great, and we have microservices, which took some of the pain points away but also added some new ones. And our next logical place to go, because it does attend to these concerns, is serverless. Now, what does that mean in regard to our traditional ESBs, our traditional enterprise integration appliances, and so on? Well, in the Apache Camel world, we have something called Camel K. And what this allows us to do is write that little semantically meaningful Camel DSL that you saw previously in a bunch of different languages: Java, XML, YAML. We're able to care and feed for these things very simply via the Kubernetes operator. We have a CLI; we'll show that off in a second. We have sub-second deployment and startup times using Quarkus. If you don't know what Quarkus is and you're a Java developer, check it out, it's super cool. And we want to be able to run integrations in serverless mode, meaning we want to be able to scale to zero and then scale to n replicas based on some sort of metric, be it CPU, the number of messages we have coming in, and so on. And we want that to happen algorithmically, meaning I don't want to just scale from zero to one; I potentially want to scale from zero to eight, back down to four, back up again to ten, and so on. We also have a need to be API centric and event-driven, and Knative, which we'll describe on this next slide, allows us to do that. So Knative eventing provides a pub/sub abstraction for this type of stuff. And as we'll see on the right-hand side, what we inevitably have is a broker, which could be anything: it could be in-memory, it could be Kafka, which we'll show in a second, NATS, GCP Pub/Sub, and more. We create channels much like you would in any kind of pub/sub environment. Then we have Knative services that can be serverless, scale to zero, and carry along algorithmic ways of scaling, and these are the things actually making subscriptions here. This all happens over plain HTTP. So all these services need to be able to speak is HTTP; there's nothing specific about it. And this allows us to say, hey, even though I have a state store like Kafka underneath as the broker implementation of my pub/sub architecture, no bother, that could be anything. In fact, my services, service A and service B here, don't need to know anything about Kafka. All they need to know is how to consume from and produce to one of these channels. This allows us to take on the decomposition that we have with microservices, but do it in a way where we have compute efficiency. Now service A and service B may not need to be running at all when there are no payloads for them to attend to. No bother: Camel K provides seamless integration. Real quick, this is what we've got here. Don't worry about the left-hand side unless you're doing the full-on demo. But inevitably, we'll have some Kafka channels, we'll have some subscriptions, and then we'll have some things to pick up from there. So real quick, let's go ahead and do a demo. All right. And you should see my entire screen. Cool. So here we have some...
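As a rough illustration of the kind of Camel K integration being described, here is a minimal sketch in the Java DSL that subscribes to one Knative channel and publishes to another; the channel names and the transformation are hypothetical, not taken from the talk's repo.

    import org.apache.camel.builder.RouteBuilder;

    public class OrderHandler extends RouteBuilder {
        @Override
        public void configure() {
            // The route only knows it talks to Knative channels; it neither
            // knows nor cares whether the channel is backed by Kafka, an
            // in-memory implementation, or anything else.
            from("knative:channel/orders")                    // hypothetical channel
                .transform().simple("processed: ${body}")     // trivial transformation
                .to("knative:channel/processed-orders");      // hypothetical channel
        }
    }

With the Camel K operator and Knative installed, something like "kamel run OrderHandler.java" would build and deploy it, and the resulting Knative service can scale to zero when no events are flowing.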
All right. So I'm in a container platform, and here I have two integrations running. They have subscriptions; they subscribe to some channels. And if we describe one of these channels, you'll notice this is what we call a Kafka channel. In fact, we'll notice that the channel template actually has five partitions and two replicas. So that's pretty cool. We know that we have those things. Let's go ahead and check out our broker. We have a default broker, and the way this broker shows up is I actually just have a label here, eventing.knative.dev/injection, and, voila, with the way I've wired up Knative eventing, this just happens. So let's keep moving. Let's go ahead and give ourselves something that's going to kick off those two integrations that we just saw. Sorry for moving around; I have to go back to my cheat sheet. And we're actually going to use the Camel K CLI that we talked about to create an integration that will participate in this pub/sub behavior. So here we go. Let's see. Let's interrogate that, and we'll notice that our event sync integration is building a kit. So we've got that going on right now. Let's go over and look at what we're actually doing here. So in our event sync integration... oops, wrong guy. In our event sync integration, the one we just created, what we're going to do is say, hey, I want you to create 50 messages. I'm going to do some of the idiomatic stuff that I talked about previously: we're going to do a quick transform, we'll set the body, and then I want to go off to a Knative channel, testing DB events. And as you'll notice, this has no understanding of what testing DB events, the channel, actually is. It has no idea that it's Kafka. We then have something called the event bus transformation integration. And real quick, what we do here is just pick up off of a channel, do, again, a simple transformation of the message body, and then hand off again to another Knative channel. Again, we have... Hey, Michael, sorry, you have five minutes, just reminding you. Great. And then we have one more guy, and we're going to do the same thing: we're going to pick up from something, do our work, do a conversion, a transform, right, and hand off to the next channel. So let's see how it's going over here. Cool. And now what we'll notice is that what we wanted to happen there just happened. We'll notice the event sync integration that was quickly built by my Camel K CLI, a great dev experience for any of our developers; take a peek at what it did. It's sending off those messages to those Knative channels. We'll take a peek at our event bus transformation. One more thing here: you see, we're picking up messages and doing the stuff that we said we were going to do, our business-y stuff. And we'll see the same thing happening in the second transformer integration. Now, real quick, I probably shouldn't do this, I probably don't have enough time, but let's go over to... I'm running Strimzi to get myself some Kafka. We'll notice some Kafka brokers here. Let's exec into one of these guys real quick, and let's make sure there's nothing up my sleeve, that we actually do have these things, these channels and subscriptions and so on, in Kafka.
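For reference, here is a sketch of what a message-generating integration like the one just described could look like in the Java DSL; the timer endpoint, payload shape, and exact channel name are assumptions, and the actual code in the talk's GitHub repo may differ.

    import org.apache.camel.builder.RouteBuilder;

    public class EventSync extends RouteBuilder {
        @Override
        public void configure() {
            // Fire 50 times, do a quick transform to set the body, and hand
            // the result to a Knative channel. The integration knows nothing
            // about the Kafka channel backing it.
            from("timer:generate?repeatCount=50")
                .setBody(simple("message ${header.CamelTimerCounter}"))  // counter header set by the timer component
                .transform().simple("{\"payload\": \"${body}\"}")
                .to("knative:channel/testing-db-events");                // assumed channel name
        }
    }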
So we'll cd into the bin directory and run kafka-topics, and in just a second we should see a list of topics. Bang. And if we notice, those are the same channels that we had just created. We'll notice that the topic names carry the namespace that we were just in, and so on. So if we were to run a console consumer, we would see the messages that we just saw being logged out. And that is the end of our demo. There's a much bigger demo available at this particular GitHub URL. I apologize, I've left everybody with very little time for Q&A, but we'll go ahead and set up for Q&A now. Thanks, Michael. That was a very thorough presentation. Thank you for that. I'm just looking at the Q&A section. I don't see any questions as of yet. Okay, one question is: can you link the GitHub repo here? Yeah. Yeah, absolutely. Let me stop sharing so I don't show you guys. Oops. Everyone wants to replicate your demo. Absolutely. All right. I just put it in there. I also linked the slides and so on. There are a couple of different GitHub repos to take a peek at, where we attempted to define what it means to be cloud native, and then a very thorough demo where we see how to take what we just did and make it not just cloud native, but multi-cloud and hybrid cloud as well. Okay, that's great. If anyone else has any more questions, I'll just link the breakout rooms; you're free to talk to Michael over there. And yeah, thank you so much, Michael. Thanks, everybody.