Hello, everyone. Welcome to OpenJS World 2021. Hope you are doing well. Let's do a quick introduction before we tell you why we need 10 minutes of your time. Hi, this is Sapna, and I'm the technical head at NodeXpert, and I have Prabhal Raghav with me, who is a technical lead at NodeXpert. The topic we are going to present today is event-based communication in microservice architecture.

So, microservices. They have a lot of different architecture patterns; every microservice system has its own pattern, customized to its requirements. We would like to take this opportunity to talk about our journey from a monolithic application to a microservice architecture that uses events for inter-process communication (IPC). We will take an example with a use case similar to the one we faced in our live project, and what better example than the retail industry? So let's talk about an application that lets you order things, pay for them online, and have them delivered to an address.

To start with, we have the monolithic architecture. In a monolithic architecture, there is just a single service or server on which all the modules, business logic, and read and write operations live. You may have multiple instances of the same server running behind a load balancer as a form of horizontal scaling, but it is still a single server. This architecture might be suitable for smaller and simpler applications, but not for large, enterprise-grade applications; there it will run into tons of problems. Let's see how. But before that, let's see what ordering something looks like in a monolithic world. First, we make sure the payment is made. Then we place the order, and after successfully placing the order, we independently update the inventory, generate the bill, and schedule the delivery. All the IPC calls are just normal function calls, since everything is on the same system. It is definitely a lot simpler to implement from an infrastructure point of view, and it can be tempting to begin with this initially.
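The monolithic order flow described above can be sketched roughly like this — all names (`placeOrder`, `chargePayment`, the in-memory `db`, the prices) are illustrative, not from the real codebase; the point is that every "IPC call" is just an in-process function call on the same server:

```javascript
// Minimal sketch of the monolithic order flow: one server, one database,
// and every cross-module call is a plain function call.
const db = { orders: [], inventory: { widget: 10 }, bills: [], deliveries: [] };

function chargePayment(order) {
  // In a real app this would talk to a payment gateway.
  return { paid: true, amount: order.qty * order.price };
}

function updateInventory(order) {
  db.inventory[order.sku] -= order.qty;
}

function generateBill(order, payment) {
  db.bills.push({ orderId: order.id, amount: payment.amount });
}

function scheduleDelivery(order) {
  db.deliveries.push({ orderId: order.id, address: order.address });
}

function placeOrder(order) {
  const payment = chargePayment(order);  // 1. make sure the payment is made
  if (!payment.paid) throw new Error('payment failed');
  db.orders.push(order);                 // 2. place the order
  updateInventory(order);                // 3. then, independently:
  generateBill(order, payment);
  scheduleDelivery(order);
  return order.id;
}

placeOrder({ id: 1, sku: 'widget', qty: 2, price: 5, address: '221B Baker St' });
```

Simple to reason about and to deploy — but, as we'll see, every module change redeploys everything.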
It can make sense if the application is in the MVP or POC phase, or if it is solving a simple problem. That is how we started. If you look at the diagram, you will see the UI interacting with a single server where we have all the modules written; we have given an example of how we wrote it on the left-hand side.

But there are certain disadvantages. This does not work if the business problem you are looking to solve is large and complex. It is difficult to implement continuous delivery, since a single module change needs a complete redeployment of the application and retesting of it. A single module failure can bring down the entire system. This would not have scaled well for us. So that's why we moved to a microservice architecture.

Microservices are loosely coupled services integrated to be part of a larger application. Microservices can follow different design patterns and philosophies; let's take the simplest design pattern, with as few infrastructure components as possible. That is also how we moved here. So let's break down our single service. In our previous example, you saw we had a single application server where all the functions were written. Here you can see we have broken our independent modules down into various microservices, and each microservice has its own database. A simple design will use request-response synchronous communication as the means for the different services to interact. This is beneficial because it needs fewer infrastructure components: you don't have to maintain message queues, event streams, or event sourcing. It is comparatively the easiest of all the microservice patterns to develop, test, and deploy. It lets us independently develop and release services, and if one service crashes, the application still stays running.
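The request-response version might look something like the sketch below. Each service owns its own database, and cross-service calls go over the network; the `call` helper here is a stand-in for an HTTP client such as `fetch`, and all the service and method names are illustrative:

```javascript
// Sketch of synchronous request-response communication between microservices.
const registry = {};
function call(service, method, payload) {
  // stand-in for: fetch(`http://${service}/${method}`, { body: payload })
  return registry[service][method](payload);
}

registry.inventory = {
  db: { widget: 10 },  // inventory has its own database
  reserve({ sku, qty }) { this.db[sku] -= qty; return { ok: true }; },
};

registry.billing = {
  db: [],              // billing has its own database
  bill({ orderId, amount }) { this.db.push({ orderId, amount }); return { ok: true }; },
};

registry.order = {
  db: [],
  place(order) {
    this.db.push(order);
    // The caller waits on an acknowledgement from every downstream service.
    call('inventory', 'reserve', order);
    call('billing', 'bill', { orderId: order.id, amount: order.qty * order.price });
    return { ok: true, orderId: order.id };
  },
};

const result = call('order', 'place', { id: 7, sku: 'widget', qty: 3, price: 4 });
```

Note how the order service now blocks on each downstream call in turn — that waiting is exactly the cost we talk about next.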
Although synchronous communication between microservices is the simplest — and its biggest advantage is that we get an acknowledgement for every request made, so it enables CI/CD, fault isolation, and faster development, testing, and deployment — there are still lots of problems with this pattern. Read and write operations take more time, since we are now waiting for an acknowledgement from each service we call. There is a high chance of errors or timeouts if the service being called is under a lot of stress or is down. And since aggregation now happens over the network, we need bespoke backtracking to ensure transactions are honoured within an operation. For example, let's assume an order is being placed. The payment goes through successfully, but there is an issue in the order service — let's say it is down. We have to backtrack the payment and initiate a refund. This needs to be coded manually, and it needs to be ensured for every operation that involves multiple service calls.

Does this achieve all that we were looking to achieve with microservices? Definitely not. We have just distributed our monolith over the network, which has further decreased its performance. We did get some benefits, but it is still tightly coupled. To fix this, let's use event-based asynchronous calls rather than request-response synchronous calls.

Thank you, Sapna, for building all that up. Event-based communication — why event-based? This method of communication in microservice architecture models the real world very well. You can take examples from a lot of different sectors in the real world, not related to microservices or engineering, that follow this event-based model really well. You will never see an accountant deleting old records; you will see him adding new entries while maintaining the old data as well. The same thing happens with a contract. You don't cross things out in a contract.
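The manual backtracking we just described — payment succeeds, the order service is down, so we refund — looks roughly like this. The stubs and names (`payments`, `orderService`, `'pay-1'`) are illustrative; the point is that the compensation in the `catch` block has to be hand-written for every multi-service operation:

```javascript
// Sketch of manual compensation ("backtracking") in a synchronous flow.
const payments = {
  refunds: [],
  charge(order) { return { id: 'pay-1', amount: order.qty * order.price }; },
  refund(paymentId) { this.refunds.push(paymentId); },
};

const orderService = {
  place() { throw new Error('order service is down'); }, // simulate an outage
};

function placeOrder(order) {
  const payment = payments.charge(order); // payment succeeds...
  try {
    orderService.place(order);            // ...but the next call fails
  } catch (err) {
    payments.refund(payment.id);          // backtrack: undo the payment by hand
    throw err;
  }
}

let failed = false;
try {
  placeOrder({ qty: 1, price: 9 });
} catch {
  failed = true;                          // the operation failed, but the refund went out
}
```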
You make amendments, or addendums as we call them, so everything old and new is still there. Even software libraries do this: there's a library called Redux, for example, that follows this event-based model really well.

In event-based communication there are producers, consumers, and an infrastructure component — RabbitMQ, SQS, or Apache Kafka — that handles your events or messages. A producer produces an event and a consumer consumes it, but a producer might not know who the consumers are, and a consumer won't know who the other consumers are; they pass this information through that infrastructure component. When we replaced request-response with events, we could have done a plain old pub/sub implementation using RabbitMQ or Redis, but we went straight for an event stream because we knew it would be much more future-proof: an event stream can do a lot more, and it can also act as pub/sub. Hence we chose Apache Kafka as the infrastructure component to help us out.

So let's see an example of how our microservices map onto event-driven communication. In the application UI, if an order is placed, that order hits the payment service first. Let's assume the payment is successful. The UI is acknowledged then and there, and the end user sees a success message on the UI without seeing what happens in the downstream services. The payment service then produces an event and the order service consumes it. The order service does its business logic and then produces a transactional event, which makes sure that the other downstream services consuming that event don't fire their own events in the process, thus maintaining a transaction.

Now let's see what would happen if the inventory service or the billing service were to go down. In that case, all the events for that service would queue up.
And once that service is back up, it starts consuming them, so no communication is lost. Kafka does that: it queues everything up for a consumer that is not working, and eventually all the services end up in the same state.

Now, what if we need to show updates to the end user on the UI? What we did was attach something called a WebSocket service. This WebSocket service was responsible for pushing things to the UI: rather than the UI polling the backend for data, the WebSocket service itself pushes changes to the UI when the time is right. So once all the transactions for placing a particular order are complete, the WebSocket service updates the UI and shows an "order successful" message. Initially the order-successful message would not be there — you would see something like "order pending" — but this is how we can update the UI, and it is reactive. The end user doesn't even have to refresh the screen; his UI just updates very smoothly.

This approach can also be extended with something like a materialized view. If you have a very complex list page that is composed of data from different services — which obviously takes time with the request-response approach — you can create a service, push all the aggregated data to that service, and have it consumed by the WebSocket service, which updates the list on the UI. And it does that in a fast manner.

So that's how it is. We started with a basic model for our POC and moved all the way up to an event-based structure for the app as it started to grow, so we could scale it easily. Let us know if you have any questions on Slack, and until then, thank you for listening to us.
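The buffering behaviour described above — a downed consumer's events queue up and are replayed when it recovers — can be sketched with a tiny topic class. The names are illustrative; in Kafka this comes from durable topics plus per-consumer offsets:

```javascript
// Sketch of durable events with per-consumer offsets, Kafka-style.
class Topic {
  constructor() {
    this.events = [];          // events are appended, never deleted
    this.offsets = new Map();  // each consumer remembers how far it has read
  }
  produce(evt) {
    this.events.push(evt);
  }
  // A recovered consumer resumes exactly where it stopped.
  consume(consumerId, handler) {
    let offset = this.offsets.get(consumerId) ?? 0;
    while (offset < this.events.length) handler(this.events[offset++]);
    this.offsets.set(consumerId, offset);
  }
}

const orders = new Topic();
orders.produce({ orderId: 1 });
orders.produce({ orderId: 2 }); // billing service is down; events just queue up

const billed = [];
// billing comes back up and drains everything it missed, in order
orders.consume('billing', (evt) => billed.push(evt.orderId));
```

Because the events are never removed, a new consumer (say, the WebSocket service or a materialized-view builder) can later read the same stream from offset zero.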