Hi, I'm Simon. Thanks for joining me for this talk. I know there are a bunch of talks going on in parallel right now, and I really appreciate your interest; I hope to make it interesting for you. Today I'll talk about a next-generation extension model for enterprise applications. Everything is moving into the cloud, and enterprise software is no exception. That comes with a lot of problems: enterprise software gets tuned to a customer's needs, the customer writes all sorts of extensions, and everything has to be deployed to the cloud. We'll talk about how we solve that using serverless, and I'll introduce you to a new piece of open-source software called Kyma, followed by a demo.

So again, my name is Simon. I'm a developer with SAP Customer Experience; we deal with an end-to-end e-commerce solution in a rapidly scalable cloud landscape. Cloud applications, as they say, make the big ship move. That big ship is enterprise software: it has a bunch of features, and they're there for a reason, solving problems for big enterprises. But in order to tune it to specific customer needs, customers write extensions, and then moving this big ship becomes a big hassle.
So let's dig deeper. A bunch of our customers use an on-premise model: they develop on-premise, extensions alongside the core. A typical extension development lifecycle is: you write the extensions, of course, then you test them, then you deploy. One advantage of this process is that the developers had a close connection to the code, meaning if anything broke during deployment, a developer could take a look and fix it if needed. That was all good, but upgrades were the problem, because extensions were tightly coupled to the core and you couldn't upgrade easily. In the end, customers were running ancient versions of the enterprise software. So that was not the way to go.

Then came the cloud model. Let me set the context: in a cloud model, developers write the extensions and hand them to an operator; the operator packages them with the core of the enterprise software and deploys the whole thing. But there are two problems with this. One, the developer no longer has close contact with the code. The operator deploys on his own schedule, there can be time differences and so on, so there is no immediate feedback. And there's a long deploy time along with a maintenance window, because we're dealing with a beast of a software, right? It's often bundled with other updates; it's done that way because the process is expensive, and you don't want to do it too often. And then the operator who handles the core plus extensions is dealing with a varied set of extensions.
There are a bunch of customers with different needs and different sorts of extensions, and that resulted in snowflake deployments: troubleshooting became a nightmare, and it was really hard to update. So we'll take a look at what serverless is, and then I'll try to connect the dots by solving the problem I stated with Kyma.

As per martinfowler.com, which has all the definitions of the latest buzzwords, serverless is composed of three basic things. The first is an ecosystem of third-party services, which works together with client-side logic, and it's all wired up with remote procedure calls. It's hosted, so you don't have to deal with servers. But that's pretty abstract, so let's look at it pictorially. On your left there's a cloud with a bunch of managed services: there can be an authentication service, a mail server, or even a database service. These are used by client logic, say a JavaScript single-page application. Imagine that a user needs to authenticate: the client code can straight away use the authentication service from the cloud provider. And then comes the lambda, which is the function in Function as a Service (FaaS). There you write all the backend code and mash it up in order to serve the single-page application, and in turn it uses that bunch of third-party services in the cloud. A disclaimer here: FaaS is not the same as serverless. FaaS is just your backend code running on servers you don't have to take care of; serverless is the whole ecosystem of FaaS, third-party services, and your client code, all wired up together.

So let me introduce you to Kyma. It's open source, and it's pronounced "kee-ma". It's a Greek word meaning wave, as in a wave of the sea. In the picture, the big hexagon is Kyma, running on Kubernetes; the lambdas represent functions, shown as circles, and the smaller hexagons are microservices.
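To make the FaaS idea concrete, here is a minimal sketch of a stateless function handler in the shape that Kubeless-style runtimes invoke, `handler(event, context)`. The names and event shape are illustrative, not the exact Kyma runtime contract:

```python
# Minimal FaaS-style handler sketch: the platform calls handler(event, context)
# once per event; the function is stateless and returns a response.
def handler(event, context):
    """React to a single event; all state lives outside the function."""
    data = event.get("data") or {}
    name = data.get("name", "world")
    return {"message": f"Hello, {name}"}

# Local invocation with a fake event, the way a platform might call it:
print(handler({"data": {"name": "CF Summit"}}, context={}))
```

The key property is that the handler holds no state between invocations, which is what lets the platform scale it from zero to many pods.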
So what we're saying is that we can extend enterprise software by writing either lambdas or microservices running on Kubernetes. Istio plays an important role here: it provides a service mesh around all the services, and it has a pluggable policy layer that lets you control rate limits, access control, and quotas. With Kubernetes we get all the cloud-native features, like scalability and fault tolerance, to name a few. And everything is isolated from everything else: in the Kubernetes world there are pods, each a collection of one or more containers, and that way each lambda can be scaled up or brought down without any impact on its surroundings.

Now, the enterprise software gets hooked into Kyma through the Application Connector, which is the secure channel from your enterprise software to Kyma. In a true serverless world everything is event-driven: events trigger your stateless compute functions to do a certain job, and that's the reason we have an event bus integrated with Kyma, based on NATS Streaming. In the diagram I've tried to show the events flowing from the enterprise software into Kyma to trigger these lambdas or microservices to solve a problem. And the next piece here is the Service Catalog: we said we need an ecosystem of third-party services, right?
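The event-driven flow described above, events from the enterprise software fanning out to subscribed functions, can be sketched with a toy in-memory bus. This is only an illustration of the publish/subscribe shape; the real Kyma event bus is NATS-based and distributed:

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for an event bus: handlers (e.g. lambdas) subscribe
    to a topic, and publishing fans the event out to every subscriber."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, fn):
        self.subs[topic].append(fn)

    def publish(self, topic, event):
        # Invoke every subscriber with the event; collect their results.
        return [fn(event) for fn in self.subs[topic]]

bus = EventBus()
bus.subscribe("order.created", lambda e: f"processed order {e['orderId']}")
print(bus.publish("order.created", {"orderId": "3109"}))
```

The point is the decoupling: the enterprise software only publishes; it never knows which lambdas or microservices react.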
That's what the Service Catalog provides, and it builds on Kubernetes; we'll look at it in detail on the next slide. We have a console UI which lets you manage all the resources in Kubernetes as well as in Kyma. Kyma uses a lot of custom resource definitions; a custom resource definition (CRD) in Kubernetes is a way to extend the Kubernetes API with custom functionality, and we use them extensively to implement pieces of Kyma. You can use the graphical UI as well as the CLI to do a certain job. And the API Gateway lets you control all the ingress and egress into the cluster.

Now, the Service Catalog. It extends the Kubernetes API so that your applications running in Kubernetes can make use of third-party services. You can hook up service brokers that follow the Open Service Broker API spec in order to list, provision, and bind services to your functions or microservices, and that spec is well supported by all the renowned cloud providers.
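For readers unfamiliar with CRDs, this is roughly what one looks like; the general shape of the resource Kubeless registers so that "Function" becomes a first-class API object. Field values here reflect the v1beta1 API of that era and are illustrative rather than the exact manifest Kyma ships:

```yaml
# Illustrative CRD: the mechanism Kyma and Kubeless use to extend the
# Kubernetes API with new object kinds (here, a Function).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: functions.kubeless.io
spec:
  group: kubeless.io
  version: v1beta1
  scope: Namespaced
  names:
    plural: functions
    singular: function
    kind: Function
```

Once such a CRD is installed, `kubectl get functions` works like any built-in resource, and controllers can watch and reconcile those objects.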
So essentially you can use the most common and useful managed services in Kyma to solve a problem. Let's zoom into the FaaS part of Kyma a bit. Right now Kyma leverages Kubeless as its Function-as-a-Service platform. What happens is: you write a function, and it gets stored as a custom resource inside Kyma; then the Kubeless controller manager kicks in and creates the deployment, pods, and services for that function. So a function, again, is a pod running inside Kubernetes, totally isolated from all other functions and microservices.

Now, you can trigger the function in two different ways. The first is through HTTPS: you can expose the function outside the Kyma cluster in a secured way, and under the hood we use a CRD called Api, plus a bunch of Istio CRDs, to achieve this. Or there can be a trigger coming from outside, landing in NATS, the event bus (the event bus has a few more components, but it's based on NATS), and that triggers the function to do its work. Finally, we set the context using a service binding: with the Service Catalog we instantiate a service instance, which we bind to a lambda or a microservice using a service binding. What happens then is that the connection details, the secrets, get injected into the lambda function, and the user can just use them without needing to know much about the details.

A few aspects of operations inside Kyma: it's packaged with well-known solutions, Prometheus and Grafana for monitoring, OK Log for logging, and Jaeger for tracing, and under the hood it's all Kubernetes and Istio.
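From the function's point of view, "secrets get injected" simply means the connection details show up as environment variables in the pod. A sketch of what consuming a binding looks like; `GATEWAY_URL` is the variable the demo mentions, while the fallback value and the endpoint path are made up so the snippet runs outside a cluster:

```python
import os

# Inside the function pod, a service binding surfaces connection details
# as environment variables. setdefault only simulates that injection here,
# so this snippet can run locally; in the cluster the platform sets it.
os.environ.setdefault("GATEWAY_URL", "http://ec-gateway.example.local")

gateway_url = os.environ["GATEWAY_URL"]
order_endpoint = f"{gateway_url}/orders/3109"  # hypothetical endpoint path
print(order_endpoint)
```

The appeal of this pattern is that the function code carries no credentials and no provider-specific wiring; rebinding to a different service instance needs no code change.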
So what we're saying is that the developer can enjoy the cream, the built-in dashboards and UIs, to track and debug the microservices, while we take care of the platform running on Kubernetes, Istio, and OpenTracing. Logspout is used to feed the logs into OK Log so they can be viewed in the UI.

So let's look at the demo scenario. I have a recording for the demo, and recordings never break, for sure. What I have here is, of course, a Kyma cluster, and a lambda which is listening to an event. The Kyma cluster is in turn connected to the enterprise software, in this case SAP Commerce Cloud. The lambda listens for an event called order.created; once this event fires, the lambda executes, makes a call back to Commerce Cloud through the OCC API to get more details about the order, and then calls a microservice called http-db-service, which in turn stores the details in Azure SQL. That's a DB service provisioned in Azure, using the Open Service Broker for Azure. And a micro UI is used to view the order details, fetching the records from Azure SQL through the microservice.

So let's see it in action. These are the environments here; we'll be working with "CF Summit EU". An environment is a namespace in Kyma, but with a few tweaks: we have resource quotas and other things enabled. Here we can work with a bunch of Kubernetes resources as well as Kyma resources through the UI. This is a bit slow to load; this is the administration tab. In the administration tab you can download the kubeconfig and work in a terminal. These are the service brokers already provisioned; I have the Azure broker, and some other brokers. And here is the link to the Grafana dashboards, which we package. So now let's see it in action.
Okay, so we have a bunch of docs already integrated in the UI as well; they're on the website too. Great, so now let's get into action. Remote environments: when enterprise software gets registered with Kyma, we use a RemoteEnvironment custom resource to store the metadata of the connection. I've already registered the cloud commerce environment as "ec-default". In order to use it, I need to bind it to my namespace, which is "CF Summit EU". Once it's bound, I can use the Service Catalog.

So what happens here: once we register the enterprise software, the commerce cloud, it shows up with two services in the Service Catalog. The first one is events, because we'll be listening to events and reacting based on them. We need to create a service instance to use it in our environment; that's what we're doing here. We'll change the name just to keep it short; otherwise it generates a unique name every time. The next piece is this one: when the lambda gets triggered, it's going to make a few calls to the enterprise software to get more details, so we need to create a service instance for the REST APIs of the enterprise commerce as well. This is the whole API, and we'll be using just one endpoint to get the order details. Here we're binding it to the environment we're going to use. So essentially we're creating two service instances, one for the events and one for the API, in order to use them in our lambda. Okay, move on. Create. Awesome.

Next comes the lambda. I've already created the lambda before; I'll just explain what we do here. First we subscribe to one event, order.created, among the bunch of events which are available. As for what we do in the lambda, I'll explain once it moves on.
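The two resources being created through the UI here correspond roughly to the following Service Catalog manifests. This is a rough shape only: resource names, class names, and the namespace are illustrative, and exact field names vary across Service Catalog versions:

```yaml
# Illustrative ServiceInstance for the commerce REST API service class,
# plus a ServiceBinding that exposes its credentials in the namespace.
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: ec-occ-api            # shortened name, as done in the demo
  namespace: cf-summit-eu
spec:
  clusterServiceClassExternalName: ec-occ-commerce-webservices
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: ec-occ-api-binding
  namespace: cf-summit-eu
spec:
  instanceRef:
    name: ec-occ-api
```

The binding is what ultimately materializes the credentials (for the lambda, the GATEWAY_URL environment variable mentioned below).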
Okay, so this request.get will actually make a call to the enterprise software API to get more details; that's the first call. The second call is to the http-db-service, a microservice deployed next to the function pod, to add the record to the Azure DB; that's this call out here. We set the URL, and the call here is the POST. Now what we're doing is binding to the service instance of that API in order to use it. What happens in the background is that a service binding gets created, and an environment variable called GATEWAY_URL is made available so the function can reach the enterprise software API. Great, I think we're set.

So finally there are three deployments in action here. The first one contains the function; then the http-db-service, which inserts the record details into the Azure DB; and finally the UI, which pulls up the information from the Azure DB. This microservice is already exposed; this is the API section, and it's exposed using the URL here, which we'll access shortly. Of course, there are no orders right now.
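The lambda's control flow, react to order.created, fetch the full order, hand it to the DB service, can be sketched as below. The two HTTP calls are passed in as plain callables so the sketch runs without a cluster; real code would issue a GET against GATEWAY_URL and a POST to the http-db-service, and all names here are illustrative:

```python
# Sketch of the demo lambda's flow: one event in, two calls out.
def on_order_created(event, get_order, store_record):
    order_id = event["orderId"]
    order = get_order(order_id)   # call 1: GET order details (OCC API)
    store_record(order)           # call 2: POST record (http-db-service -> Azure SQL)
    return order

# Exercise the flow with stubbed calls standing in for the HTTP client:
stored = []
order = on_order_created(
    {"orderId": "3109"},
    get_order=lambda oid: {"code": oid, "total": 42.0},
    store_record=stored.append,
)
print(order["code"], len(stored))
```

Keeping the I/O behind injected callables also makes the function trivially testable, which matters when the real dependencies are a commerce system and a cloud database.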
We'll create an order in the enterprise commerce (the SAP cloud platform), and then that order should be visible in the UI. Great, the order is created, number 3109. In the background the lambda gets triggered and everything runs, and we get to see the order in the UI.

Now, a few more details on the Kyma console. We can expose a function over HTTPS here, and this is an authentication service compatible with JWKS. Out of the box, Kyma provides an authentication service, but technically you could hook any JWKS-compatible authentication service into Kyma and it will work here. There's a token part, if you want to take the token and make a call yourself. This is the whole Service Catalog in detail: since we have the Azure broker enabled, you see a bunch of Azure managed services already available to be used in Kyma. We use Azure SQL Database 12.0, which backs the http-db-service through this service instance; that's all we're saying here.

And this is the Grafana dashboard, which is already packaged with Kyma. This is the lambda dashboard; we see just one data point, which was successful. It has response success rate, response time, request rate, and other stuff. I have another function, a stress test, which has a lot more data points, hence we can see some data coming in here. And it's not just that: we have a bunch of dashboards already packaged in Kyma, from overall Kubernetes monitoring to the Istio-related network overview, and a user can create customized dashboards and save them. This is the overall Kubernetes overview, and then pods and nodes; this is the nodes dashboard.
So this is all packaged with Kyma. Now, a request reaches a bunch of microservices inside Kyma, and debugging is a pretty big issue if we can't trace them. That's the reason we have Jaeger already integrated with Kyma, and this is the Jaeger UI. What we see here is that the first call came to the connector, the Application Connector: the event came through the Application Connector into Kyma, so we see "connector" here, followed by "publish" and "push", which are the components of the event bus. That in turn triggers the CF order service, which is the function, which in turn calls the EC gateway to fetch the orders. The first REST call goes to the enterprise software; "EC" is a bit of a confusing term, and it's going away. It used to be called Enterprise Commerce, and now it's called SAP cloud commerce, so we still see EC here, but it's the same thing. Then there's the lambda trigger, followed by the EC gateway and the http-db-service. Everything is part of the service mesh because Istio is enabled, and this is possible using the Istio proxy, which is a sidecar to every deployment; hence we see istio-proxy in all the calls. Great, so that's it. Now let me go back to my slides.

Okay, it's still on. So, what do I want you to take away from today's talk? I want to leave you with how we decoupled the enterprise software from the extensions by developing them on a different platform. Agility is back in the business, because you can experiment and try out a few things in a fearless manner in Kyma; the extension developers are in control of the code they write, and SAP takes care of the core upgrades whenever possible.
So in a way, we can move a lot faster than before, and everything comes with cloud-native features. We also get deterministic deployments: by that I mean an operator no longer needs to deal with third-party code. He knows his core, so if anything goes wrong it's easier to debug than handling third-party code on top. And of course it's secure, with Istio and everything.

So I encourage you to try it out, today if possible. It's available on GitHub, at kyma-project.io, and we have a bunch of documentation out there. We're pretty responsive on the community Slack channel and active on Twitter. PRs are always welcome; feedback and comments, just let us know. Thanks for your attention. If you have questions, I can take them right now, or grab me at the conference and I can answer a few more, or, you know, discuss Golang or distributed systems. Thanks!