Okay, are we live? Yeah, okay, we're live. Rishu, I think we can start. Piyush's colleague should be here in a minute. So let's get started; we'll just wait for some folks to join in. There's also a live audience on YouTube, and there are folks waiting over there as well. Okay, great.

So hey everyone, welcome once again to another session by Hasgeek. I'm Rishu, your moderator for today, and to set the context on what today's session is all about: I'm currently a consultant by profession, and I work with a lot of different organizations where I end up looking at their tech stacks, their transformation aspirations, and where they want to go from a technology roadmap point of view. Something struck me, given that I work with a lot of FIs, or financial institutions: if you look at the major decision-making engines, which are essentially the business process automation and business rule engineering pieces, it turns out that in a lot of these organizations, given their age and the runway they've had in the industry, the decision-making tooling is still very, very proprietary, very enterprise. You usually have well-established players who come and deploy it, IT contractors who come to customize it and run it for you, and so on. And at the same time there has been a new generation of companies coming in, in tech as well as in FIs, where a lot of new tooling is emerging. There's much more emphasis on exploring whether open source technologies can take over and make some sort of incursion into this well-established area where the enterprise players operate, right?
A lot of the companies that exist today are also contemplating how they can accelerate, and how they can attract talent, because working on and enhancing open source is also a great pitch for getting exciting young talent on board, the kind that has the hunger for it, right? So there's a lot of this play going on. At the same time, given the size of these businesses, the kind of customer base and support commitments they have, and in some cases the criticality of the business, these enterprises are sometimes in a fix about whether they should go for it, where they should and shouldn't, what the right time is, and what the right way to do it is. So today's session is essentially about unpacking some of these challenges and observations, and having our guests share their experiences from being on this journey. We have Sujan from Razorpay joining us, and we also have Saurabh and Bhanu from Capillary. We're going to hear a lot from them, and I'm going to be as much of a curious cat as most of the folks on this live stream. So first up, a huge welcome to Sujan, Saurabh and Bhanu. Thanks a lot, guys, for making time for this and for joining to share your experiences. I'll hand it over quickly for introductions: Sujan, Saurabh and Bhanu, if you could quickly introduce yourselves for the audience.

Hi everyone, I'm Sujan. I'm a lead software developer at Razorpay. Within Razorpay, I'm part of the Razorpay Capital engineering team. Capital is the lending arm of Razorpay, where we offer credit to small and medium-sized businesses, for their very short-term needs or for recurring needs. We have corporate cards as a primary offering, and we also have cash advances and credit lines for short-term needs. So yeah, that's about me. Thanks for the opportunity. Saurabh, over to you. Yeah.
Hello everyone, I'm Saurabh Kumar. I'm working as a director of engineering at Capillary Technologies, and I've been managing three or four teams within Capillary. One of the key projects I was involved in is around OTA payments, as in over-the-air payments, where we use the Camunda framework; some of that we'll discuss going forward in this call. So welcome, and happy to have a good discussion and some suggestions from all of you.

Great, thanks, Saurabh. And with that we quickly move on to our final panelist for today, Bhanu. Hey, Bhanu.

Hi, I'm Bhanu. I'm a software architect at Capillary Technologies. I've been working on scaling applications, real-time applications. One of the projects I worked on for a few months is the payments orchestrator service, the OTA project that Saurabh mentioned. In that piece we evaluated multiple workflow orchestration tools, and in that process we moved on to using Camunda. So I guess I'll be discussing a lot of that, and I hope to learn a lot from you all. Thanks, everyone.

And before we start, here's our customary plug about who we are. We are essentially Rootconf. Rootconf used to run as an annual conference; we started in 2011 as practitioners of what was then DevOps, which became DevSecOps, and eventually evolved into site reliability engineering. The idea was to share, as a community, the approaches folks in the industry were following to solve infrastructure, development and deployment challenges. We've since recognized how the industry is changing, and we have changed accordingly: we've now diverged into a continuous community program where we focus not just on DevSecOps, but have also split into specialized branches such as cloud ops, data security, data ops, and a whole bunch more.
And for those on the cybersecurity side of things, beyond the dev and tech parts: since last year we have collaborated on the Privacy Mode program, and since then we've run sessions that are probably India's first and premier conferences on data privacy in engineering and product. Those are some things you should definitely check out on the Rootconf portal. That's it for the plug, and with that let's jump into the discussion.

I'll start with some questions, and to all our audiences, please keep the questions coming in on the chat; they're super helpful, and the panel can share a lot of their insights on them. So, what I've usually seen is that the BPM/BRE play has been, as stated, very enterprise-driven: you usually have the heavyweights who have occupied this space for quite a while, they have certifications that a lot of people do, and a lot of people are specialists in these individual tools. And then there are the open source solutions; I've had some experience running them in a very different, non-FI kind of domain. What I want to understand, maybe starting with Sujan first and then moving on to you guys, is how you stumbled...

Rishu, I think you muted yourself accidentally.

...and how did you come to say, okay, let's go for an open source solution as compared to one of the established big boys?

Sorry, we actually lost you for the last minute; could you please repeat that?

I'll just repeat, sorry. What I'm very curious to understand is, when you were going through this, how did you come across the core requirement? Where did you feel the need for a BPM or a BRE solution?
And then, when you were deciding which tooling to go with, what considerations led you to something like Camunda versus the more established players who usually have the FI footprint in the industry? Starting with you, Sujan, and then moving on to Saurabh and Bhanu, it will be very interesting to hear that story.

I'll give a bit of context on the use case, on where we are using Camunda. We are using the Camunda platform, actually just the workflow engine part of it. As I mentioned, Capital is the lending arm of Razorpay, where we offer multiple lending-related products like a card, a loan, or a credit line. For these we have an application journey: if you are a business and you want to get onboarded onto one of these offerings, you'll have to do certain steps. That journey itself is a multi-stage one. You'll probably start with some basic details about your business, your PAN, GST, business vintage and business description; then you'll give your key stakeholder details, you'll go through a bureau check, you'll give more details about your previous business income and related things, and then you'll finish the first stage of the application journey. Then it is handed back to the back office. In the back office we do business verifications: we verify whether whatever you have shared is actually valid, whether the documents uploaded are valid, whether credit verification was done correctly, those things. Once that first part is good, we hand it over to the underwriting team.
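The multi-stage journey Sujan describes is essentially a state machine, possibly with conditional branches. A minimal illustrative sketch follows; the stage names and the branching rule (an extra approval for large credit limits) are hypothetical, not Razorpay's actual flow:

```python
# Illustrative sketch only: a multi-stage lending application journey
# modeled as a tiny state machine. Stage names and the branching rule
# are made up for illustration.

STAGES = ["basic_details", "stakeholder_details", "bureau_check",
          "business_verification", "underwriting", "offer"]

def next_stage(current, application):
    """Return the next stage, or None when the journey is complete."""
    if current == "senior_approval":  # the detour rejoins the main path
        return "offer"
    # Hypothetical branch: large requested limits need an extra
    # senior approval before an offer is made.
    if current == "underwriting" and application.get("requested_limit", 0) > 1_000_000:
        return "senior_approval"
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

def run_journey(application):
    """Walk an application through every stage it visits, in order."""
    path, stage = [], STAGES[0]
    while stage is not None:
        path.append(stage)
        stage = next_stage(stage, application)
    return path
```

The point of adopting a workflow engine is that branching like this lives in a diagram the product team can read, instead of being buried in application code as above.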
Underwriting is the very last step, and most of that stage was actually happening offline, outside our systems. What we wanted to do, to start with, was bridge this gap and bring most of these underwriting functions online. The reason is that the underwriting process itself is multi-stage, and as a product team we didn't have any insight into which steps were time-consuming, or even visibility into where an application currently was. It was probably being tracked, but across multiple different platforms, not in a single place. So that was one reason: we needed visibility into what's happening in the underwriting functions, and once you have visibility you can figure out the pain points and optimize those difficult areas. Also, since underwriting is where you take on the most risk, there was a process we wanted to enforce, and because this was happening mostly offline, we were not able to do that. Enforcing the process in a workflow was not possible while it stayed offline, so bringing it online was the other reason we were exploring this.

Now, we could have done this with the existing setup, handling the state machine ourselves in our own application code. But it was not just a handful of steps; it was a very complex workflow where, based on certain parameters of the application, the path an application takes can completely change, and different levels of approvals are possible based on the credit limit we are going to offer, the type of business, or how long the business has been running. That's why having a proper workflow engine made sense for us. And this is just the underwriting use case; we also have ops-related use cases, like the business verifications I mentioned. All of this is before the offer stage, where we come back to a business and say, this is the credit line we are offering you, at this interest rate and with these other parameters. And even after they agree to the offer, or negotiate and then agree, there are more steps afterwards: signing a loan agreement, providing additional KYC details which could later be required by the lender, and so on. So this was just part of the problem; overall, including a workflow engine in the application setup would be beneficial in the long run, to automate other parts of the application journey as well. That's why we planned to go with a workflow engine instead of doing everything ourselves.

Then we explored different options. There was an internal deployment of one of the workflow engines: Cadence was deployed internally, with a DSL written on top of it, and I can go into detail on why we didn't go with that. We also explored AWS Step Functions, Cadence directly, a couple of internal solutions, and Netflix Conductor. At a high level, the reasons we eventually went with Camunda were mainly two
reasons. One, as I mentioned, having visibility into what is happening was important. With the other solutions (take Cadence as an example), they're well suited for mostly automated workflows: if your use cases are mostly automated, where you have scripts to run at a particular stage or HTTP calls to make, Cadence would probably have been better. It's also mostly focused on developers, in the sense that you define these workflows as code. As a developer that's great for me, since I have more visibility and more control, but if I have to give that visibility to the product team it becomes really difficult; you'll always be dependent on a developer. The main value we get with Camunda is its support for BPMN 2.0: we could represent the entire application workflow as a BPMN diagram and everything works out of the box. Product always has visibility into what is currently running and what is currently deployed, and because they have that visibility they can suggest minor changes, and probably even make some minor changes themselves. That was the main point in Camunda's favor. The other one is that while most of the other solutions had some support for async workflows, those were mostly hacks or not directly supported, whereas the Camunda platform has really good support for async workflows: event-based gateways, event-based tasks, and workers to run tasks that take time. And also human tasks. We are not using them, but human-based workflows can easily be done with the Camunda platform, which has a Tasklist product that can be integrated with your internal dashboard to handle human task assignment and processing. So that was the other reason we went with Camunda.

So what I'm hearing, and this is a great point you bring up about AWS Step Functions, Google Workflows and the like, is that it's usually the lack of visibility that bites at some point, where you go, wait, what do I do right now? That's one part. The other thing I'm hearing is that you ended up using the Camunda Modeler, which as far as I understand is one of the newer products Camunda has put out compared to the rest of the BPM suite. Did you integrate with the underlying systems using the APIs Camunda provides, or was it primarily via the Modeler? The rationale behind the question, for our audience, is that there are two main models in the industry. In one, you go via the process modeler Sujan was talking about: drag-and-drop workflows, no-code, and you let the process run from that. In the other, you keep your business logic and invoke components of the rule engine and decision engine separately via REST APIs, and drop the modeler entirely; it then becomes like developing a backend service and integrating it with the frontend. So Sujan, and at this point it would be great to hear from Saurabh as well, why take the modeler approach and not the other one, where you could directly integrate with the APIs?

Actually, we are doing both; it's a hybrid approach, at least to start with. In the beginning, we deploy a
BPMN diagram built in the Modeler. We still don't have an end-to-end loop where you make changes and they get automatically deployed and take effect; we're at a partially automated stage where we use the Modeler to define the initial workflow and deploy it. The other communication, whether we're pushing anything to the workflow or listening for changes from it, happens through the Go client. As you mentioned, it's not RESTful; it's over the gRPC protocol. Any changes are suggested against the BPMN diagram, and we have a release process which takes in the new changes. And I missed one point: the Camunda platform has really good support for versioning, so we don't have to worry about instances still running on a previous version; we can easily deploy new versions and everything works out. Handling backward compatibility is also not a problem with this platform. I hope this answers your question, and I hope I'm audible.

Yeah, this is definitely very helpful; it sheds a lot of light on the kind of decision-making that went into it. At this point I'd also love to hear from Saurabh and Bhanu on their experiences, because what usually happens is there's a running business, for underwriting or anything else, and then a tool comes in which potentially automates it. So what kind of process and approach did you take? You would have had to create swim lanes, and you would have had to do a lot of automation around that. How was that journey? What were your experiences?

Yeah, definitely. First of all, thanks, Sujan, for explaining so nicely. From my side, I'll first explain why we went with something like Camunda, what the primary reason was, and the advantages it gave us over tools like Netflix Conductor and AWS Step Functions, which we evaluated; Bhanu will go into detail on why we chose Camunda. To start with, the problem in our hands was that we were developing a new product altogether, called OTA, as in over-the-air payments. It was essentially an orchestration problem. Am I audible? Yeah? Sorry. So, the use case we were trying to solve was, I would say, for the Asian markets: people come to the fuel pump, open their mobile app, make a pre-payment, the pump automatically unlocks, the customer does the fueling, and after that the customer can drive away and the receipt is automatically sent back to them. That was one of the flows. When we undertook this, we also knew this was just one flow, the pre-authorization flow. There can also be a post-authorization flow, where someone makes the payment after fueling. And the same kind of flow can be used to drive, say, a coffee maker. So we took all these different possibilities into account while designing the product: ultimately this is a kind of workflow, where people want to set up a series of steps and they want to call
these different steps in a certain order or sequence. And each step has its own properties, which should be very generic: a step can be an HTTP call, a queue-based call, or anything else, so our system had to be extremely generic in nature. Having gathered all these requirements, we had a very clear understanding that we wanted a workflow kind of setup, because all of this is ultimately a workflow. The second reason is that Capillary is a multi-tenant organization, where every organization has its own complexity: one organization wants to do a certain step in a very different way, another will add an extra step to whatever exists right now, and every organization can run in a different cluster altogether. It can also be the case, as Sujan explained with versioning support, that someone wants to change something in the flow and preview it before actually applying it to the production environment. Camunda has this versioning support very nicely built in: you can preview a new version before making it the primary, live version. All of this together made us very clear that we wanted a workflow setup. The options in our hands were Cadence, Netflix Conductor, AWS Step Functions, and Camunda.

One of the primary reasons we went with Camunda is that the space we were dealing with was payments. If something is unsuccessful, there should be a rollback step immediately: if I'm trying to authorize a pump and that's unsuccessful, I should immediately unauthorize it, or revert the amount I'm trying to deduct or have already deducted; and if I have authorized a pump and the next step is unsuccessful, I should immediately unlock the pump so that some other user can use it. This complete rollback story was very nicely built into Camunda; we didn't find rollback so nicely built in with AWS Step Functions, or with Conductor or Cadence. So this rollback strategy was very important for us. The second important point was the versioning I already covered. At different times people want to do different things with the existing setup, as in: once all these steps are complete, make a webhook call on this particular URL so that I'm notified of this event as well. If a use case like that comes in, I don't want it to break any existing flow. You can create this programmatically: we have exposed an API, you call it, it creates a workflow with a new version altogether, you know the version ID, you make calls with that version ID, and you can check the complete workflow before affecting the currently live one. That was another advantage we clearly saw with Camunda. And the third, which was very important for us and which AWS Step Functions especially was not providing, was immediate deployment. If I change something in AWS Step Functions, it goes through a Lambda deployment, a fifteen to thirty minute deployment. That's not the case with Camunda: you can immediately create a workflow, and if you're okay with the preview, you can make it live. So some of these things, I would say, played a part in why we went with Camunda.
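The rollback behavior described here (authorize, then undo everything completed so far if a later step fails) is essentially the saga pattern. A rough, self-contained sketch follows; the step names, context keys and failure flag are hypothetical, not Capillary's actual code, and Camunda expresses this declaratively with compensation events rather than by hand:

```python
# Illustrative saga-style rollback sketch: each step pairs a forward
# action with a compensation; on failure, completed steps are
# compensated in reverse order.

class SagaStep:
    def __init__(self, name, action, compensation):
        self.name = name
        self.action = action
        self.compensation = compensation

def run_saga(steps, ctx):
    """Run steps in order; on any failure, undo what completed and return False."""
    done = []
    for step in steps:
        try:
            step.action(ctx)
            done.append(step)
        except Exception:
            for completed in reversed(done):
                completed.compensation(ctx)
            return False
    return True

# Hypothetical OTA steps: lock the pump for this user, then charge.
def authorize_pump(ctx): ctx["pump_locked_for_user"] = True
def release_pump(ctx):   ctx["pump_locked_for_user"] = False
def charge(ctx):
    if ctx.get("payment_fails"):
        raise RuntimeError("payment declined")
    ctx["charged"] = True
def refund(ctx):         ctx["charged"] = False

OTA_FLOW = [
    SagaStep("authorize_pump", authorize_pump, release_pump),
    SagaStep("charge", charge, refund),
]
```

If the charge fails, the compensation releases the pump so another customer can use it, which is exactly the behavior Saurabh describes.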
When you say you can create a newer version of a deployment and run it to see the changes, I think there's also a dry-run capability that can be used to test out the flow, correct?

So, the way Camunda works is that by default there is a default version, and every request goes to that default version, but there is also versioning support if you want to test a certain version. What I meant is that if I'm creating a new version, I know its version ID, so I can do a preview check, or testing, on the new version, and if the new version is okay, I make it the default version.

Got it, that's awesome. And one of the other things we were talking about is how you actually managed to implement it.

For us it was a new project altogether; it was not a migration. From the onset it was clear that it needed a workflow kind of setup, and that's how we moved in this direction.

Got it, that helps. So it was a greenfield implementation, and one that essentially grew around this. Amazing. Bhanu, I'd like to hear your take as well. Bhanu, you're on mute.

When we initially started to evaluate all the workflow tools, the main constraint we had was that our workflows need to be created programmatically. We didn't use the modeler notation at all; everything, all the workflows, were created programmatically, and we don't even use the UI. It's only there for the developers' own use; the product team, and our customers too, don't really have any visibility into it, and that's not a use case for us. The programmatic and real-time creation of workflows was the initial thing that helped us decide quickly where to invest. The major drawback of AWS Step Functions, from my point of view, was the amount of time it takes to get a workflow created or updated; that was the initial problem for us. Also, because Capillary is a very multi-tenant service, the number of workflows could be huge, with many different kinds of workflows, and the ability to copy a particular workflow from a test flow to a live flow, all these things were much easier because everything was done programmatically. We were able to achieve that easily through Camunda, whereas we couldn't achieve it so well with AWS Step Functions.

Also, our whole pipeline is a completely REST-based pipeline: we make synchronous calls, and we want the result from the workflow, at least for that step, in that same response. We were looking for a synchronous response model. When we tried the same with Netflix Conductor, it is predominantly used for asynchronous workflows; it has support for synchronous ones, but there isn't much of a community using synchronous workflows on Netflix Conductor, so that was one drawback we felt there. And then, as we've explained, compensation models are built in very easily in Camunda. Capillary has a hundred-plus microservices, so the integration of this particular payment service with all those microservices was very straightforward for us: we define a basic structure for each step. Say I want to push my data to the loyalty service, where we would do some incentivization.
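The generic, per-step structure Bhanu describes (one reusable step type per kind of integration, composed into a per-tenant pipeline) could be sketched roughly like this; all class names, tenant names and URLs are hypothetical, and the HTTP and queue calls are simulated by annotating the payload:

```python
# Illustrative sketch of the "generic step" idea: each step type
# implements one execute() contract, and each tenant composes its own
# ordered pipeline from the same building blocks.

class HttpStep:
    def __init__(self, url):
        self.url = url
    def execute(self, payload):
        # A real implementation would make an HTTP call here.
        return {**payload, "http_called": self.url}

class QueueStep:
    def __init__(self, topic):
        self.topic = topic
    def execute(self, payload):
        # A real implementation would publish to a message queue.
        return {**payload, "queued_to": self.topic}

class WebhookStep:
    def __init__(self, url):
        self.url = url
    def execute(self, payload):
        return {**payload, "webhook": self.url}

# Each tenant sequences the steps in its own order, independently.
TENANT_PIPELINES = {
    "fuel_asia": [HttpStep("https://example.test/authorize"),
                  QueueStep("receipts")],
    "coffee_eu": [HttpStep("https://example.test/brew"),
                  QueueStep("receipts"),
                  WebhookStep("https://example.test/notify")],
}

def run_pipeline(tenant, payload):
    for step in TENANT_PIPELINES[tenant]:
        payload = step.execute(payload)
    return payload
```

The design choice here mirrors the discussion: steps are written once, and onboarding a new tenant is just composing a new sequence, not writing new integration code.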
We would have a step defined for that; if I want to push the data to a messaging service, we'd have a step defined for that too. The integration becomes easy because we initially developed each individual step separately, and we gave the ability to combine these steps sequentially, in whatever order each tenant wants, individually and independently, with all of it running in a single application. In that sense it became much easier for us to get each new tenant onboarded. The other thing is scalability. Because we are using the Camunda jar directly in our main Java application, we were able to scale it horizontally: the state of Camunda is kept entirely in the DB, there is no state on the application, so we can scale horizontally, and that helped us a lot. Scaling is present in AWS Step Functions and Conductor as well, but Camunda ticked our boxes; we would have been against it if scaling were a concern, but from the load tests we did, we were able to meet our SLAs. Even in a long-running workflow there are short-lived parts where we need to meet SLAs, and we can still achieve that with Camunda, and there is of course support for long-running workflows too, which also helped in deciding. There's also the ability to integrate with APM services like New Relic or Datadog, which is something we follow as a tradition across all our microservices at Capillary. Because it's an in-house application and we're just using the Camunda jar, it's easy for us to integrate with New Relic, whereas AWS Step Functions isn't available in some regions, regions like Beijing, while Capillary's services are present in many different AWS regions and countries. So that part was not working out with AWS Step Functions, and it was one more reason we were against it. Overall, with all these features coming together, it was easy for us to decide that Camunda was the right thing for us, and we were able to meet our SLAs in terms of response time: even though it's a workflow model, each step has a particular SLA to meet, and we were able to achieve that. All of that combined made it easy to decide that we'd go with Camunda.

Were there any other open source candidates you looked at? You talked about Netflix Conductor and Step Functions, but stepping back into Camunda's domain, since Camunda itself is an open source BPM/BRE engine, did you look at other tools of a similar nature? The comparison with Conductor and AWS Step Functions is great, and I think it highlights why you'd need a separate orchestrator, but any other tools?

These three, AWS Step Functions, Camunda and Netflix Conductor, are the ones we went deep on. We did read about other workflow tools, MuleSoft for instance, but that didn't help a lot. Just to add to that: we actually started with some other tools as well, like Cadence and MuleSoft, but those were discarded at a very early stage given the kind of HTTP-driven workflow setup we were looking for, and finally the three shortlisted, where we wanted to go deeper, were AWS Step Functions, Camunda and Netflix Conductor.

Got it, understood. And, sorry, go ahead. The reason that decision was made is
that there is no IW infrastructure need to maintain Comminda it is just a SQL DP that we need so it is that much easier to deploy and get it running understood also wanted to quickly kind of take a quick understanding of this that since you guys use the process modeler right very often so there are going to be two ways in which it works and I think Sojan said that we have a mixture of both approaches which is the modeler as well as the GRPC integration now I would also like to understand how is the UI manager is the UI something sort of what the BP industry called the headness UI where you essentially take the you take the certain inputs and you just pass them on to the decision engine in order to make a decision or is it that there is also you know there's a couple of UI all together because a lot of VPN players end up providing that right they essentially provide you a UI for your organization yeah I think on the platform 8 we also have operate which can be used exactly for the use cases like you are actually seeing here that where you can actually make modifications one is you can see what's happening in the in your workflow engine overall like what all the current training processes where are the processes currently at and making modifications and trying out it works mostly as like a back office you can debug stuff there or like you can if there are issues you can resolve them and then take them forward and that is one and also there is task list also which which can be used for like admin related workflows like where you have some tasks which can be assigned to like a group of people and then they have to work on them and then they have to mark it as done like normal proper task list or like a to-do list sort of so that is also available in the platform 8 which can be used like it has its own UI but yeah both of these are actually available only on the non-community version of it like for which like if you are actually self-hosting them you'll need to get a 
license for these things we didn't have use case for either of these things we had like our own admin dashboard which could be used to like visualize these things and then work on so we didn't use them we are actually self-hosting the commander platform but like without these options or like products okay the other thing also wanted to understand is that as you said at what do you call we also would also like to understand some of the aspects of you know when you start putting workflows together in a drag and drop module what is a huge sure is to say that introducing new surroundings and doing business rule engine and decision rule engine management right it makes it a lot more trivial and it kind of elevates us a little further from just you know being dependent on the technology so did you guys also experiment by having somebody who would be a non-technical person a business-facing person right and you know just kind of say that let's have an analyst or let's have a product person be exposed to some of these domains or decision-making rules engine and did you guys try that or is it still at a place where everything is just you know our technology backlog request yeah right now it is at that stage only we are pushing for the product teams to understand BPMN concepts and like give the requirements in that format only but still a long way to go like first first is like we try out like as a tech team being like properly comfortable with this and then we'll build solutions around for the product team itself to actually handle this like product or even other functions also like they should be able to manage their own workflows that's the idea like we have but like still far ahead because BPMN is actually like a standard notation like so it is easier for everyone to actually try out like they need not actually do it only on our platform like where this is managed they can just go to draw I or like any open source platform for drawing and then they can get started with 
it so that is the idea I think that sounds excellent and at this point I'll just quickly take a break I'm just going to probably look if there are any questions in the chat or anything for us looks like we are we are doing this on a Wednesday afternoon when midweek news are hitting really big and probably the first thing on first and foremost the minds of people just finish Wednesday head out and probably grab a beverage of choice so I think what we'll do is we will probably move a little bit ahead if a question comes I'll let the gentleman address it as well so I'm just going to now start moving towards a little bit towards because this is really amazing to see a green field implementation in a new age contact built on open source it's like one of those stories that comes into the play and I think the next question would also be probably a little bit directed more towards Sarabha and Manu which is saying that is this model something that you guys see and I know so then you would essentially I think the success of this model would encourage Razor to adopt one of it but I'm just trying to cast the next fighter so Sarabha and Manu these guys based on this experience you guys kind of feel that you know cases like these or projects like these paved the way for incursion or do you think there is still some distance to go before open source solutions like Kamunda can kind of go head to head with the more specialized solutions idea yeah I can go so in my opinion Kamunda is almost there especially in our use cases as I previously also said like most of the things which Kamunda was doing for us it was not getting done by some of these enterprise companies right and and the future for me also looks great not just in these kind of use cases but also in lot of other use cases where basically we do a lot of state management. 
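The per-tenant step composition described earlier, individually developed steps combined sequentially in a tenant-specific order inside a single application, can be sketched in plain Java. This is an illustrative assumption, not Capillary's actual code; all class, step and tenant names here are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.UnaryOperator;

// Hypothetical sketch: reusable workflow "steps" composed per tenant.
// Names (TenantPipelines, VALIDATE, tenantA, ...) are invented for illustration.
public class TenantPipelines {

    // A step transforms a shared context map and returns it.
    interface Step extends UnaryOperator<Map<String, Object>> {}

    // Steps are written once and shared across all tenants.
    static final Step VALIDATE = ctx -> { ctx.put("validated", true); return ctx; };
    static final Step ENRICH   = ctx -> { ctx.put("score", 42);      return ctx; };
    static final Step PUBLISH  = ctx -> { ctx.put("published", true); return ctx; };

    // Each tenant independently picks its own ordering of the shared steps.
    static final Map<String, List<Step>> PIPELINES = Map.of(
        "tenantA", List.of(VALIDATE, ENRICH, PUBLISH),
        "tenantB", List.of(VALIDATE, PUBLISH)   // tenantB skips enrichment
    );

    // Run one tenant's pipeline sequentially over a fresh context.
    static Map<String, Object> run(String tenant, Map<String, Object> ctx) {
        for (Step step : PIPELINES.get(tenant)) {
            ctx = step.apply(ctx);
        }
        return ctx;
    }

    public static void main(String[] args) {
        Map<String, Object> out = run("tenantB", new LinkedHashMap<>());
        System.out.println(out.containsKey("score"));  // false: enrichment skipped
        System.out.println(out.get("published"));      // true
    }
}
```

In a real deployment each step would map to a Camunda service task rather than a plain function, but the design point is the same: because the steps are authored independently, onboarding a new tenant is just declaring a new ordering, not writing new integration code.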
We have N number of tools where we sometimes define our own code and handle things ourselves, deciding this is how we will do the state management, and we don't normally go with a workflow kind of solution, even for something like an e-commerce journey, where most of the time it is a workflow. So there are places where I feel these Camunda kinds of solutions will move forward as well. And compared to enterprise solutions, as I said, I think it is already there. Some of the capabilities, like drag and drop, are still far ahead for us, as in we have not explored some of these till now, and for us it is still very much tech driven. We still have some way to go before it is much more friendly for non-tech people to configure these workflows; those things are definitely easily supported in the enterprise solutions, and those parts I would say might still be slightly complex. But when it comes to the internals, the way it works, all of that has been extremely smooth for us.

That sounds awesome. Is there also an element of machine learning and AI that you guys are bringing into Camunda as part of the same solution? Because for a lot of underwriting, AI has kind of come to the fore. I'll probably let Sujan answer that: are we using it, is it in the plans, and if so, is it something that is fairly easy to plug in? Because other than the established players and open source, there is a third category, essentially specialized systems. You now have end-to-end learning systems which go all the way from, let's say, product configuration to rollout to underwriting to claims to settlement, everything all together, where decision-making engines are just a smaller part. So with this second type of challenger and a third player coming into the ecosystem, how easy or difficult is it to plug something like an AI or analytics engine into a piece like Camunda? Because these new challengers come with an AI engine already integrated.

Not directly AI or ML, but to some extent we already do this. We have a risk product which evaluates, based on multiple data points available inside Razorpay and the ones given by the customer, to make some of these decisions. They are not decisions per se for the time being; they are kind of suggestions, and these suggestions are used while making the underwriting decisions. So that is there. But yes, AI and ML can be easily plugged into the Camunda platform setup itself, the entire workflow engine. We aren't there yet, we are still one step before that, but that is definitely in the works.

Great, great. And Saurabh and Hanu, just trying to get a sense of how easy or difficult it is to take an AI engine and plug it into Camunda: is it fairly easy, or would it mean creating an adapter or a plugin? How would that go?

For us, I would say it has still been very much a plug-and-play model only. We have not explored the AI part of it, and there has not been a huge use case as of now where we wanted to, so that is something we have not explored yet.

Understood. All right, at this moment I don't think we have any questions; we usually reserve ten minutes for questions. Actually, there is one question from the live stream, a very generic one with a wider implication: why is it that, by and large, we don't consider open source as secure in the enterprise, even when we take open source as SaaS? I'm going to have you guys take a quick stab at it, but let me first try to answer from my own experience. It is not always a function of security. When it comes to open source, in a lot of companies that have technology or IT as a division of a larger business, when they are looking to adopt new technology there is also a function of support and timely output. A lot of new-gen companies essentially go tech first; the product is not just powered by tech, it is tech. For the larger established companies making that shift, it takes a lot. Security is definitely a thing; they don't say open source is not secure, but based on my experience working with fintechs there is a higher barrier to entry into the ecosystem, because a lot of sensitive data is usually at play. That is one. The second part is that once you have built a system, it is also about finding the right skill set for it. As we said at the beginning of this conversation, what a lot of these established players do is roll out skill-set programs: you will see a lot of people who are, for example, IBM BPM engineers, a lot of people who are Pega engineers, and these folks specialize in operating those complex systems. That means once these companies adopt, they know there is enough skill set in the market. Those are just my two cents on how this has built up; it is not anything against the viability of open source, and on security we just raise the bar and ensure it is continuously scanned, so whatever comes in comes in with a very high bar on security. But with that, let me move to the experiential part: Sujan and Saurabh, talk about your experiences around the same.

In my opinion, because of the nature of open source, where you have access to the source code, it is better in terms of security, because more people will be looking at the code, more people will be using it and raising these issues. But then again, the part you mentioned about maintenance, how fast someone will react when there is a security issue, may not be great in all open source software, so that needs to be considered. Again, it is actually a trade-off: one where you don't even have any visibility, closed source, or you go with open source and take some of those risks. While making the choice, that has to be considered.

I guess, Saurabh and Hanu, you want to chime in on this one?

I think my viewpoint is exactly the same as what Sujan said; it is a trade-off essentially. When you are working on Linux and working with open source languages, you can go inside a particular function and understand how it has been implemented internally, rather than that particular thing being completely abstracted away from you. You get that advantage, but it also has a disadvantage: someone can get hold of it and inject something. We have seen multiple cases of that; the latest one, Log4j, happened for everyone in the market. So those nuances are there, and those trade-offs we have to take. But what I personally feel is that the advantage of these open source systems, where you can go inside the code, actually read it and understand the nuances of it, is much bigger compared to the shortcomings we normally see.

In my view, there is also a lot of learning with an open source project, new skill sets to pick up, and during the period when the individual team or the dev team is learning those skills, there are problems they will encounter. If there is time they can invest, then definitely open source; or else it always helps that there is support we can get.

That's great. All right then, I don't think we have any further questions, and as stated, let's give the time back to our audience as well as to the panel. So once again, thank you so much Sujan, Saurabh and Hanu; this has been very, very enlightening, and I for sure learnt a lot more in the past one hour or so than I knew before. Thanks also to our audience who have plugged in to the live stream, and to the folks who will be viewing this later. With that, I think we are pretty much at the end of our live stream. The recording for this is going to be published somewhere between the 11th and the 14th, along with a text summary. A reminder: we have a really great Telegram group, and we would encourage you to join it for more offline discussions, which are pure tech; it's a great place for techies, tech entrepreneurs and new tech practitioners. With that, we would like to thank all our panelists once again. Thank you so much for joining and sharing your experiences, and for getting a lot of other folks thinking about how they can adopt newer tech. I think with that we are at the end of this session. So thank you everyone, thanks a lot once again. Thank you, thank you guys. Bye.