Welcome, and thank you for choosing to be here with "Shaping Tomorrow's Technology: Navigating Cloud Native, Serverless, and Polyglot Programming" when you had all the other choices today. No, literally, there are a lot of choices today. I am Naina Singh. I'm a product manager at Red Hat and a Knative steering committee member, and with me today I have my esteemed colleague and friend, Syed Shaaf.

Yes, my name is Syed Shaaf. I'm a developer; I've been developing for many years. I'm also an editor at InfoQ, and I'm excited to share some knowledge with you all today.

As you all know, modern software development has pretty much become like this. So to illustrate the concepts and technologies we are going to discuss today, we have prepared a live demo that runs throughout the talk. Let's get started.

The title of our talk is quite a mouthful, and it seems like we put a lot of buzzwords together, so let's unpack it a little and see what they are, so that we are all on the same page.

First is cloud native. It is the concept of building and running applications to take advantage of the distributed computing that cloud providers offer. By doing this, your applications gain scalability, elasticity, resiliency, and flexibility.

The next term is serverless. Serverless is a way of developing and running applications on someone else's server, so you don't have to worry about the provisioning, scaling, and maintenance of the server resources. Serverless applications are event driven, meaning they respond to requests or triggers coming from users, devices, or other services. Serverless applications can also auto-scale on their own based on incoming demand; as you see in this picture, the octopus is using its arms only as needed.

And the third one is polyglot, which means "many tongues." It is like having multiple tools in your toolbox, each suited to a different task. For example, you can create your front end in a language that is more suitable for that, and use some other language in the back end. That allows you to be more flexible and adaptable.

Knative is more than serverless. Now, what do I mean by that? When you are in the Kubernetes world, you have to create, configure, and keep track of a lot of constructs. What Knative does is take away that cognitive toil: from a container, it creates a Knative service with a ready-to-use URL and auto-TLS that scales on demand. So: automatic scale on demand for cloud native containers. It is also a serverless platform, because the auto-scaling happens based on demand and also goes back down to zero; so it is a serverless platform for Kubernetes. Because of all this, we say Knative is a simplified Kubernetes for application developers. But that's not all: it is also an event-driven platform for Kubernetes, because it provides a lot of eventing infrastructure, and we are going to see that in our talk today. So what I say in the end is: Knative when you can, and Kubernetes when you must.

Architecture evolution. As you all are developers, we have all been in this world, and we have seen monoliths, then microservices, then functions. I'm not here to tell you that this one is bad and that one will magically solve all your issues, because that's not true. It depends on your use cases, and everything has its pros and cons. But in one line, I would define a monolith as a self-contained piece of software where basically everything runs in the same process, and you can see its pros and cons over there. Microservices we can define as multiple loosely coupled, independent services.
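As a concrete picture of the Knative experience described a moment ago, a container becoming a routable, auto-scaling service, a minimal Service manifest looks roughly like this. This is a sketch, not code from the demo; the name and image are illustrative:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: payment                       # illustrative name
spec:
  template:
    metadata:
      annotations:
        # scale to zero when idle, and cap the number of pods
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
    spec:
      containers:
        - image: quay.io/example/payment:latest   # illustrative image
```

Applying this (or running `kn service create payment --image ...`) gives you a URL and revisioned, demand-scaled deployments without hand-writing Deployment, Service, and Ingress objects yourself.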
Microservices give you more flexibility, a little more scalability, and easier maintainability. And then come functions, which are single-purpose, event-driven, ephemeral units of code that run on demand. In reality, your application or solution is probably going to be a mix of all of these, and we are going to see that in our demo today.

Our demo is a place where you can buy cool stuff from Red Hat, and it's actually called the Cool Stuff Store. It exists in real life, so if you are ever looking for Red Hat swag, please go buy it there. But our demo store is not the real one.

All right, so Shaaf, what is the current architecture of our application right now, and what challenges are we facing?

Yeah, so obviously we built up a scenario. As Naina mentioned, when you're developing applications you see monolith applications, you see some microservices, and you've tried a bit of mix and match between those. To say it in the nicest words, the monolith is not a bad architecture per se. It's an architectural choice, a pattern you would use in your application. You can run it in a VM, you can run it on a container, et cetera. But it does have some problems: scaling can be an issue depending on how state is shared, and CI/CD processes, team autonomy, all of these things make it difficult for a monolith architecture to move quickly. In our experience we've also seen people who've done the monolith architecture quite nicely and do not have those dependencies. But think about a big bucket, where everybody has to put things into that bucket before it goes through a release process. That stretches your timeline, so if you want to be faster, you probably want to look at something else.

Now, in our architecture, the Cool Store architecture:
We have a user interface, a simple Cool Store web interface. It has an inventory service, where we basically keep the inventory of all our products, and a catalog service, which relates the products to the inventory offered in the Cool Store; you will see it in a minute. We have a cart service, which is basically the shopping cart, and then orders and payment, which do what the names suggest. So that's the current monolith architecture as it is.

Okay, so what would you suggest we do to evolve it a little bit more?

I won't suggest burning down the house, but... When we look at the component diagram we just saw, moving into microservices obviously makes us more independent. Each component is a more structured unit that we are responsible for; it has endpoints, maybe REST, GraphQL, gRPC, whatever you prefer; and it has its own responsibility, or separation of concerns, as we would say. So our architecture now looks like this: an inventory service based on a nice framework like Quarkus (if you want to hear more about Quarkus, come to the Red Hat table later and we'll discuss it); a catalog service based on Spring Boot; a cart service with caching; and of course a payment service, which is a Golang service in this case. We also have an eventing backbone at this point. This helps us be more distributed, helps us write code faster, keeps the teams smaller, and gives us the ability to release more often. Releasing more often always has some other problems attached to it, but this is not the talk for that.

Okay, so what about the capacity part?
So, yeah. With a monolith we put it in a VM or a container, mostly a VM; the VM is always there, and it takes its full capacity. When we move to microservices, we're doing much the same thing: a container is running, and its capacity is always reserved. But what if my payment service should only be up when there are payments? Maybe I'm running a campaign and people come to my site, and, hopefully without a bad bounce rate, let's just assume twenty to thirty percent of that traffic is just browsing through my catalog. They're not going to place orders; they're not going to make payments. So what do I do in that case? We will have components in our architecture that do not always need to run at full capacity at all times.

Okay, so I see a slide about mix and match. Oh, you already switched the slide.

Interesting. So yes, in this case the payment service could be invoked from zero: it's sleeping, but when it's invoked, the container comes up. That is something you would do with Knative. We've trimmed this demo down for the duration of this talk, and we're going to focus on the eventing part, which means that whenever events are received, our service will come up and start to work on them; I'll show that to you in the demo.

Okay, perfect. I hope you can see my screen. I'm running this on an OpenShift cluster, and my source code is in my Dev Spaces over here. I'm not going to be able to go through the entire source code, but I'm happy to discuss it after the talk. So here you see I have my Kafka cluster running on OpenShift. I have my cart service; I have my inventory service.
So if I look at the inventory, all the products are there. If I go into my catalog, the Spring Boot service also has all the catalog details. So basically they're running, they're looking fine. And then I have my Node.js front end and an order service as well, which is backed by MongoDB in this case. So, a bit of mix and match of all those things. What I've done for the purpose of this demo is take out the Golang service and put in a payment service, which is over here. You can see it's a Knative function, running on its first revision at this point. It has a Kafka source, which means it's going to receive a Kafka event, then start up, process the payment, and take the result back to the order.

So let's go and take a look at how it works. I have my cool store; really, really cool. This is not the production cool store, let me just remind you, and the credit card information you might see is not real anyway. So I add something to my cart, and it's being added. There's tons of stuff; I could just keep going, but when I go to the cart, I see my cart over here. Obviously this is going through the cart service, which uses caching in the background: Data Grid, or the Infinispan project. I want to check out at this point, so let's just assume I want to pay with a credit card. I'm just going to put something dummy in there. Oops, I always do this; let me just redo that. Name, card number. (You're sure it's not the real one?) Nope. Okay, let's say Chicago this time. And I check out.

When I check out, it goes into the orders, and in the orders it's processing. What's happened is that as soon as I checked out, the cart service sent a message out to the orders; you can see that over here, for example. And when I go back, I see that my payment service has also been activated. So let's look at it quickly, because as soon as the payment is processed, the function is going to die out again. It received the payment message from the orders through Kafka, checked it, and sent it on. What we're doing in this demo code is pretty simple: we wait for five seconds, and after five seconds we say, okay, the payment is processed. Such an easy process; we should all have that, right? But the eventing does happen: the payment service receives the event from the orders through the Kafka topic, which is called orders; once that's done, the payment is processed and things move on.

So I have one instance of my Knative payment service running here. But what if there were multiple instances, and this was supposed to scale? So I go into my simulator. I've just written a very rudimentary simulator, but I can say I want some random orders, and over here I'm going to create a hundred orders. That will call the cart API, put something in the cart, and call the checkout process. Once it does that, I expect it to send a hundred messages into the Kafka topic, the orders topic, and then the whole system will start to roll. So here are some nice names and so on, and we can see that these hundred orders are created.

When I go into my cart service, for example, and look at the logs, I can see that almost all hundred have been sent to the Kafka topic, orders. If I go into the orders and look at its logs (just one instance is running there), it has also received pretty much all hundred. But if I go back to my cool store, I can see that in my orders only some are completed, very few, and some are still processing. So it's happening synchronously: the payment service receives an event, processes it synchronously, and every time it receives an event, it waits in our awesome check process for five seconds. If I look at the payment logs here, I can see there is some activity, but this is not a hundred Kafka messages being received, and the reason is that this is purely synchronous. If I go back again, I can see there's only one pod running at this time, so it's handling everything that comes in, one thing at a time. So when you're creating an architecture and you're using eventing, you can end up creating a bit of a bottleneck in cases where it's not an HTTP request. If it were an HTTP request, it would obviously scale; but in this case we have events. So how do we deal with that?

Yeah, do you have a tactic for that?

So basically, what I just showed you is the eventing part, and this is what we're looking at: the top part of the diagram, right?
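The wiring shown in the demo, a Kafka topic feeding events into the payment function, is typically declared with a KafkaSource object. A minimal sketch, with illustrative names matching the demo's `orders` topic and a `payment` Knative service:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: payment-source                # illustrative name
spec:
  consumerGroup: payment
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka.svc:9092   # illustrative address
  topics:
    - orders                          # the orders topic from the demo
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: payment                   # the payment Knative service
```

Each message on the topic is delivered to the sink service as a CloudEvent over HTTP, which is what lets a scaled-to-zero function wake up when an order arrives.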
So we created the eventing system: the Kafka message came in, the payment service pod was scheduled, it did its work, and then it went away. But what if you have multiple events? In my case, my code is so nice that it waits for five seconds, so what happens if I want to use multiple pods, or multiple functions, to take care of the load rather than just this one? And what if I need different eventing mechanisms to do that, not just Kafka? How would we do that in that case?

All of us want to order for our family, friends, and neighbors.

Exactly. So you could use something like KEDA, which is Kubernetes Event-Driven Autoscaling. It provides 61 built-in scalers, and the nice thing is that you can watch these metrics or events coming in and actually scale on them. So let's see how that differs in our Cool Store application. Whoops, wrong application; that's my backup server, by the way. Let me just log out. I have two different namespaces where I'm doing this. You know, I forgot the password... okay. Same structure here, same thing, same environment, but this time I'm using the KEDA operator, and I'm basically going to receive all the events through that system as well.

So if I show you again, it's the same cool store: there are no orders, there's nothing in the cart, et cetera. But this time, right away, we'll create a couple of orders again through the simulator, and this time I'm going to be a little bullish and throw 1,000 orders at it instead of 100. As soon as I do that, I can see over here that the payment service has started scheduling, and the pods have already increased to ten, pretty much; that's the capacity I've set. So I can set a cap on the maximum number of serverless function pods I want to run. In this case it's running ten pods, so that's the maximum capacity. But the benefit is that if I go back to my Node.js microservice, I can see that some of the orders are completed, and you can also see that they're being completed in random order. That wasn't the case before: before, they were completing synchronously; here they can complete asynchronously, because I have multiple pods running. If I look at the cart service logs here, for example, it should have sent all thousand messages, and it seems like, yes, the cart has sent all the messages to the topic. If I go into the orders, it's pretty similar; there's a lot of log here, but that's also done. Now, you might say the payment service is still not super fast, and that's because I'm still waiting five seconds on each order. But what's happened in this case is that the payment service can take the same load with ten pods rather than handling it synchronously with one.

So I'm a little confused, Shaaf. You talked about Knative service scaling, and then you talked about KEDA. Which one is getting scaled by what?

That's something very interesting. Let me go back to the slides. All right. The payment service that we deployed as a Knative service is getting scaled up as the events come in. But what about when a lot of external systems are sending a lot of events, and we want to scale the event handling itself? That's where we are going to use KEDA. The applications you have deployed use Knative, and they auto-scale automatically depending on how the events are coming in.

We have a five-minute warning.
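The KEDA behavior just described, scaling on Kafka consumer lag rather than on HTTP traffic, is normally declared as a ScaledObject. A minimal sketch, with illustrative names and addresses:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: payment-scaler                 # illustrative name
spec:
  scaleTargetRef:
    name: payment                      # a Deployment to scale
  minReplicaCount: 0                   # scale to zero when the topic is idle
  maxReplicaCount: 10                  # the ten-pod cap from the demo
  triggers:
    - type: kafka
      metadata:
        bootstrapServers: my-cluster-kafka-bootstrap.kafka.svc:9092  # illustrative
        consumerGroup: payment
        topic: orders
        lagThreshold: "5"              # target lag per replica
```

KEDA watches the consumer-group lag on the `orders` topic and sizes the target deployment accordingly, which is why a thousand queued orders can fan out across ten pods instead of draining through one.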
So let's speed up a little.

I do want to talk about what that scaling actually gives you, and it's efficiency. If you are not auto-scaling, you are going to suffer from either under-provisioning or over-provisioning, since you are doing capacity planning based on CPU, memory, and so on, while the demand that comes in varies. With serverless and Knative, you can keep capacity and demand pretty evenly matched, because it scales up depending on the demand.

If I may just add one more point quickly: we have a lot of business-critical systems running, maybe in Java, for example, and a lot of times you're writing code where an application just receives an event, interacts with it, and finishes that task. That's a super nice use case where you actually do not need the capacity to keep running all the time; you just want it ad hoc. When the event comes in, the service should come up, service that event, and then die later. So it's a very good use case when we talk about modernization, especially in integration scenarios.

I am going to talk about functions, because we cannot say the word serverless and not mention functions. We are running out of time, so I will just mention that with Knative Functions you can create a function from your code and deploy it in just two steps. We provide you templates and out-of-the-box runtimes; you do not have to wrangle with HTTP libraries or HTTP servers. Just bring your business logic and deploy it. Create, deploy, run locally, container build: all of this is done by Knative Functions itself. That's the command down there.

And there is one more. So, yeah, where do we find your demo script?
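For reference, the two-step function workflow just mentioned maps onto the Knative `func` CLI roughly like this. This is a sketch; the language template and registry are illustrative:

```
# scaffold a function from a built-in template; templates exist for
# Go, Node.js, Quarkus, Python, and others
func create -l go payment
cd payment

# run the function locally in a container for a quick check
func run

# build the image and deploy it to the cluster as a Knative Service
func deploy --registry quay.io/example
```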
This link will take you to the demo script, and we will be uploading our slides after the talk, so you will have all of this.

In conclusion, I wanted to leave you all with this. This is a five-R model that you can use as a roadmap to navigate your modernization journey. There is no one way of doing this; depending on where you are, you start there. First is rehost: if you already have a containerized application, there is no rewrite necessary; just deploy it using Knative. Then, in refactor, see what applications you can containerize, or start using cloud native technologies, for example a service mesh, to get platform-wide observability and networking. The next is revise: this is where you see what applications you can decouple using the event-driven pattern we have shown. The same goes for rebuild: for all the new applications you are building, see if you can do them containerized, in twelve-factor-app fashion, or consider functions for even faster iteration to market. And the last one is replace: as needed, you can decommission whatever legacy parts you do not need and keep the ones you do. That's how you can modernize step by step, picking up from where you are.

I hope it's as easy as that gopher is telling us. Thank you so much. That's all, but we do have a Knative kiosk in the pavilion; P5A is our number. Drop in there and chat with us about our demo and the stuff we were not able to cover; we would be really happy to see you all. And please, please, please scan the QR code and leave feedback, for Knative or for the talk; we would love to have the feedback on our talk. I do not think we have time for questions. All right, and we already talked about the Red Hat booth; we have some cool swag, so come and talk to us there. Thank you so much for your attention. I hope you have a great day.