So, before we move on to the topic, let me introduce myself a bit more. I work as a developer at Walmart, and for the past few years I have been working on React and some other frameworks and libraries like Angular, Polymer and so on. Below is my Twitter handle and the personal and official mail IDs where you can contact me.

So let's get to the topic. When we talk about delivering applications at scale, there are multiple meanings to how you can scale applications. As a developer, you can write efficient, performant code and scale your application that way, or, from an infrastructure perspective, you can increase the number of servers when the traffic increases and scale it that way. But what I'm going to talk about today is how you scale applications by having a proper architecture, and a CI/CD system to support it, and then scale all the multiple applications together. To explain a bit more: we worked on a solution to host multiple enterprise applications under a single entity, sort of like a hosting portal, and we worked on scaling those applications, which were built by different teams.

So, our agenda is going to be something like this. We will start off with the challenges and the opportunities that we faced when building this approach. Then we will move on to its basic UI architecture and the CI/CD system to support it, then talk about something called CDD, that is Config-Driven Development, and we'll finish off with the performance benefits of using this approach.

So, let us start with the challenges. As I told you, this is a sort of hosting portal where you have multiple applications coming inside it. What that means is that these applications are built by different teams, and they have their own roadmaps and their own release cycles. So you cannot tell them to release their applications today or tomorrow or the day after tomorrow. What it also means is that these applications might have their own tech stack as well.
So, we had the challenge of supporting existing applications which were built on AngularJS. And on top of these challenges, we also found we could build on some opportunities: single sign-on, where the user logs in to one application and he or she need not log in to another application; user analytics, where the application team gets to know how people are using their application; and some benefits like high availability of the applications and faster development and time to market.

So, let's start with the basic architecture that we followed. As you can see, there is a main application here, and then you have these child applications one, two and three. Now, these child applications are npm modules, and these modules are added as dependencies to the main application. As you can see, application one is at version 1.0.2 while application three is at another version, 1.4.5, because these applications are being built by different teams, and they come inside the main hosting portal, that is, the main application.

Now, one of the obvious questions you would have is: why would you go with this approach of building npm modules for your applications? So, let's see the benefits. One of the benefits is that npm supports versioning by default. You may know something called semantic versioning, where you have the major update, the feature update and the patch update. It means that if you make a breaking change, you update the major number; if you add a feature, you update the middle one; and if you do a bug fix or something like that, you update the patch version. So you have the versioning capability built in. On top of that, npm allows you to reuse components very easily. You can just add the components that you have created inside your package.json and you are good to go.
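To make the semver rules just described concrete, here is a minimal sketch; the `bump` helper is hypothetical, not part of npm itself:

```javascript
// Minimal illustration of semantic versioning: MAJOR.MINOR.PATCH.
// `bump` is a hypothetical helper, not an npm API.
function bump(version, change) {
  const [major, minor, patch] = version.split('.').map(Number);
  if (change === 'breaking') return `${major + 1}.0.0`;        // breaking change
  if (change === 'feature')  return `${major}.${minor + 1}.0`; // feature update
  return `${major}.${minor}.${patch + 1}`;                     // bug fix
}

console.log(bump('1.0.2', 'feature'));  // → 1.1.0
console.log(bump('1.4.5', 'breaking')); // → 2.0.0
console.log(bump('1.4.5', 'fix'));      // → 1.4.6
```

Note how a breaking change resets the lower numbers, and a feature resets the patch number, which is why consumers can safely take minor and patch updates automatically.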
So, the whole dependency management thing is also built in. We also found that when people write npm packages for their application, they tend to write modularized code. If I am breaking my application into multiple npm packages, I want those packages to be usable by other teams as well as by my team, so I will try to write more modular code. Also, our npm modules consisted of the usual source code and the distribution code. The source code was written in React JSX, while the distribution code was ES5 code, transpiled by Babel.

Another advantage of using npm is that the dependencies between the packages can be shared. What that means is, if the main application has one UI framework dependency, then all the child applications are going to use the same one. So you have consistent styling and a consistent theme across all your applications inside the portal.

Now, that was the theory of how you can build your applications and divide them into npm packages. But how are you going to do that in your code base? Think of the scenario where you have multiple repositories for your packages. If I divide my application into three or four packages, and I have three or four repositories on GitHub, then if I want to release the application, I have to go to each repository, send a pull request, get it merged, update the version number, and so on. That's a bit of an overhead. So there's a concept called monorepos, wherein you develop these packages in a single code base. And it's not new; there are many open source projects that use it, like React or Angular. You can go to the source code of React on GitHub and see how they are doing it. And there are multiple ways to do it as well. If you're using Yarn, then Yarn supports this with something called workspaces. So you can have multiple workspaces in your code base and develop these packages there.
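As a rough sketch of the Yarn workspaces setup just mentioned (the folder glob and package name here are placeholders, not the actual repo), the root package.json of such a monorepo could look like this:

```json
{
  "name": "my-portal",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

With this in place, every folder under packages/ is treated as its own npm package, and Yarn links them together and hoists their shared dependencies. Lerna achieves the same layout for the npm client through a lerna.json that lists the same packages/* glob.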
We used something called Lerna, because we were using the npm client, and Lerna allows you to do the same thing, that is, manage multiple packages in a single code base. As you can see on the screen, we have the main folder here, the my-lerna-repo folder, and inside this folder we have the packages folder. This packages folder contains the npm packages you are building for your application; each of these packages is an npm module, and these packages are added as dependencies to the main one. So you are developing these packages inside a single code base instead of having multiple code bases. Another advantage of doing this is that you can add these dependencies locally, test them out, and then publish them to the npm repository that you have. Now, we used an internal npm repository for publishing the packages, not the public one, and we used Lerna for management of the packages in the code base.

Now, another approach that comes to mind when we talk about using npm and dividing the application into multiple packages is micro frontends. Let me explain micro frontends a bit. Think of an application about a product, where you have the description of the product, some filters, and some reviews. Now, this product application is being built by multiple scrum teams. You have team A working on the description part, team B working on the filters part, and team C working on the reviews part. How would the development go here? These teams would each create a custom element for their part, in an independent code base, and this custom element would in turn be built with Angular or React, or the framework of their choice. That's how micro frontends work. And it would look something like this: you have the product description custom element, the product filters custom element, and the product reviews custom element, and you pass the product ID to each of them.
And based on this product ID, each element fetches the description or the filters or the reviews for it.

Now, how our approach differs from this is that, as I showed you in the previous slide with the main application and the child applications, we work at a higher level: we consume the applications as node packages. How these applications are built, whether they use micro frontends, or a monolithic approach, or the single-spa approach, does not concern us. That's one of the main advantages we have: we do not govern how the application teams are building their apps. Another distinction is that with micro frontends you don't have an obvious way to share the components that you have created, whereas with the npm approach, we already have the mechanism to reuse components, and to share them as well. On top of that, we provided a robust CI/CD system which checks the quality of all the code going into the code base. And by having this npm approach, the styling and the theme of the portal stayed consistent. That's how this approach of using npm differs from using micro frontends.

That's fine, we talked about the npm approach, but it is of no use if you don't have a robust system where application teams can easily put their packages in the repository. So for that, we had to build a robust CI/CD system. And how did we do that? We provided a boilerplate repo to the application teams. This boilerplate repo had all the code quality checks wired in, so all the lints and the tests would run whenever you send a GitHub pull request. And when these child applications are published with a new version, whether it's a feature update or a patch version, it would also trigger the build of the main app.
So if I want to show you the CI/CD workflow from the server perspective, it looks something like this. You have the child application at version 1.0.0, and the application team is developing some features in parallel, feature one and feature two. Once they are done with the development, they send a pull request, the reviewer checks it, does the code review, and finally merges it. So it goes to the dev server, then, depending on the servers you have, to QA or stage, and finally to the prod one. Now, on the prod server the feature number, the middle one, is updated, so the version becomes 1.1.0, because you have done features and not a bug fix or a breaking change. And how you increase the version number is again up to the teams: they can do it manually, or you can parse the commit messages and automate it. And finally, once you update the version, it triggers the build of the main app.

And if I want to show you the workflow from the developer perspective, how they release their applications, it looks something like this. You clone the boilerplate repo; this is a one-time step, part of the onboarding process. Once the boilerplate repo is cloned, the team has a development branch where they do their work. The development team takes a short-lived branch for a feature or a bug fix, makes the code changes, and sends a pull request back. The quality checks then run on the delta code that they have written, and once these automated checks pass, it goes to the reviewer, who reviews the code. Finally, it goes through all the servers I mentioned before and gets published to the npm repository, after running all the code quality checks on the entire code base, not just the delta code. And once it is published to the internal npm repository, it triggers the build of the main app.
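For the commit-message route just mentioned, one common convention is Conventional Commits. This is a hedged sketch under that assumption; the talk does not say which convention or tooling the teams actually used:

```javascript
// Decide the semver bump type from a list of commit messages.
// Assumes the Conventional Commits style (feat:, fix:, BREAKING CHANGE);
// the actual convention used by the teams is not specified in the talk.
function bumpTypeFromCommits(messages) {
  if (messages.some(m => m.includes('BREAKING CHANGE') || /^\w+!:/.test(m))) {
    return 'major';
  }
  if (messages.some(m => m.startsWith('feat'))) return 'minor';
  return 'patch';
}

console.log(bumpTypeFromCommits(['feat: add filters', 'fix: null check'])); // → minor
console.log(bumpTypeFromCommits(['fix!: drop old API']));                   // → major
```

A CI job can run this over the commits since the last tag and publish with the computed bump, so no one has to edit version numbers by hand.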
And when I talk about the code quality checks, I mean something as simple as this. This is a sample screenshot from the Jenkins server, where the lints and the tests are running, and you have this test report that is generated. This is a sample report, where it tells you the coverage by statements, branches, functions or lines for the individual files. And you can also have something like a threshold, and if the threshold is not met, then the build fails.

When we talk about branching strategy, there are multiple ways; again, you can choose for your own code base. We follow something called trunk-based development, where you have the main branch, that is, the trunk, and you take short-lived branches out of it daily, do a bug fix or a feature update, and send a pull request back. But there are other ways you can do it in your code base, like Git flow. And again, the application teams are free to follow their own branching strategy; we do not tell them how to do development. They can follow Git flow, or they can follow trunk-based development as well.

For continuous integration, we use an internal tool called Looper, which is built on top of Jenkins. And for continuous deployment, we use an orchestration tool called Concord. What this orchestration engine does is take the artifacts from one server to another server and put them there, or spawn a new server and run the tests and so on. There are many open source engines available as well; you can check out Chef, for example, which does similar things. I have also provided the link for Concord at the end of the slides, so you can learn more about it and implement it in your project if it fits well.

So that was the UI architecture of the portal, as well as the CI/CD system to support it. But we found another way you can scale applications: config-driven development, or CDD.
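As a concrete example of such a threshold, here is roughly what it looks like in a Jest config; the 80% figures are placeholders I chose, not the thresholds from the talk:

```javascript
// jest.config.js — fail the build when coverage drops below the threshold.
// Threshold numbers are illustrative, not the ones used by the team.
const config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 80,
      branches: 80,
      functions: 80,
      lines: 80,
    },
  },
};

module.exports = config;
```

When any global coverage figure falls below its threshold, `jest --coverage` exits non-zero, which is what makes the CI build fail.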
Now, what it means, as the name suggests, is that you develop your applications by writing more config and less code. How does it help? We found that there are many teams who do not have a dedicated UI developer, and they want to do a quick prototype of their application without investing much time into it. In order to speed up that development, we created a standard set of components. As I mentioned on a previous slide, the application teams divide their code into multiple packages and create them so that they can be reused by other teams; that helps here, because we already have the code base of these components, the standard set of components. Why do I use the word standard? Because there are many components you could create, one for each use case, and you cannot tackle every use case out there. So we focused on solving for visualization, the reporting solution; we did not try to solve for other things, like building layouts and so on. And by focusing only on the visualization aspect, we found that many teams could use it in their code base, develop faster and get to production faster.

There are two parts to it. The first is creating the standard set of components; that is already achieved. But how are you going to use these components in your code base? For that, it looks something like this. You have the microservice APIs created for your application, and you map these microservice APIs to some visual component that is out there, say a bar chart, or a pie or a line or any other chart. Then you export that code snippet and consume it in your project. It sounds very simple; let me show you a short demo of how it looks. This is a replica of the app that I have created. Let me zoom in a bit so that you can see it properly. Fine.
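The mapping from a configuration to an API and a chart can be sketched as a small config object. The names `chartConfigs` and `chartFor`, the IDs and the endpoints are all mine for illustration, not the team's actual API:

```javascript
// Hypothetical illustration of config-driven development: a config ID
// maps to a microservice endpoint plus a default visual component.
const chartConfigs = {
  'sales-2019': { endpoint: '/api/reports/sales',   chart: 'bar'  },
  'traffic':    { endpoint: '/api/reports/traffic', chart: 'line' },
};

// Resolve the data source and component for a given configuration ID.
function chartFor(configId) {
  const cfg = chartConfigs[configId];
  if (!cfg) throw new Error(`Unknown configuration ID: ${configId}`);
  return cfg;
}

console.log(chartFor('sales-2019').chart); // → bar
```

The point is that adding a new report is a config entry, not new code: the rendering components and the fetch logic already live in the shared packages.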
It's not the full-fledged app that we have built, but a replica; I want to showcase only how you can export the components and consume them in your project. You have the standard components here on the left-hand side: the bar chart, the pie chart and the line chart. Now, these components are created by the application teams; it would be a very large set, depending on the visualization components that you want to use. And you have something called a configuration ID. What this means is that the configuration ID is tied to some microservice API, so when you fill in a configuration ID here, it fetches the data from that microservice API. This is a replica app and there is no server running behind the scenes, but think of it as a configuration ID, where you have an ID something like this. And when you click on Submit, it is going to fetch the data here, and then you have the bar chart visualization created. You can also toggle between the pie chart and the line chart and get different visualizations for the same response data. And when you click on Export Component, you have the code snippet available, and you can use this code snippet in your project and consume it.

So where does this help? One of the advantages is that the application teams do not have to write the transformation functions required to transform the response data into the format required by the line chart or the pie chart. Another advantage is that they can just copy this snippet and use it in their project without downloading the dependencies and so on, because we provide a boilerplate repo: the package which renders the line chart or the pie chart is already there in the code base. So they need not download it again, and they can quickly get on with the development of their project. That's how the consumption of the standard components works in config-driven development.
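As a hedged sketch of the kind of transformation function the teams are saved from writing, this converts a raw API response into the shape a chart component might expect; the response shape and the `label`/`value` fields are assumptions for illustration:

```javascript
// Transform a raw API response into the {label, value} pairs a bar chart
// component might expect. Both shapes here are invented for illustration.
function toBarChartData(response) {
  return response.rows.map(row => ({
    label: row.name,
    value: Number(row.count),
  }));
}

const sample = {
  rows: [
    { name: 'North', count: '42' },
    { name: 'South', count: '17' },
  ],
};

console.log(toBarChartData(sample));
// → [ { label: 'North', value: 42 }, { label: 'South', value: 17 } ]
```

In the config-driven setup, this glue code ships inside the exported snippet, so the consuming team only pastes the snippet and never touches the data wrangling.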
Yeah, so that was the demo of how CDD works. Let us see the performance benefits of using this npm approach. As I told you, we are using npm modules for sharing the dependencies between the applications. What that means is that the dependency payload downloaded when you move from application one to application two is going to be smaller. And because you provide a boilerplate repo to the development teams, all the code that is required for, say, minification, obfuscation of the code, dead code elimination or tree shaking is built in by default. So the developers need not invest their time writing the setup for lints or minification and so on; they can quickly start on their business use case. So, all in all, the developer experience is better.

In conclusion, as I told you, we had the challenge of separate code bases built by separate teams, yet all of these applications were hosted together in the portal. We followed the npm-based module system, wherein the dependencies were shared between the applications, and because of the robust CI/CD and the config-driven development, we could provide high availability of the applications, as well as faster development and faster time to market. These are some of the references; you can learn more about Concord here, or how Lerna works, and there are links on micro frontends, semantic versioning and trunk-based development where you can go and learn more. Again, here is my contact information, and thank you. Now I am open for questions.

Audience: Yeah, my name is Anand. One question. It looks fine and the presentation was good. You have, suppose, two or three components that you publish to npm, right, either internally or somewhere. Once you use them, how can each component interact with the others? Suppose some component has an event that gets triggered,
and based on that, some other component needs to update. How is that part being handled in your system?

Speaker: If you have seen the slide where I talked about monorepos, we are using Lerna, so you can easily add packages as dependencies, and you can even share packages between them. If there are three packages and package three is a dependency of packages one and two, you can just add it there. Does that answer your question?

Audience: Kind of, but the thing is, then you are creating a dependency from one component to another component, right? So then the question comes up whether that package should be created at all.

Speaker: If you have these dependencies between the packages, it makes sense to have them in a single package. If you look at your application, say the product application I talked about, you can logically divide it into three modules. I would not divide it into modules where you have a dependency or a cyclic dependency going on. So if you properly divide the application into modules, 90% of the time you will not face that problem.

Audience: Hi. So, super interesting concept, but I'm having a lot of scenarios running in my head. One thing I want to know is, how much granularity is Walmart going into? Is it at the product feature level, like a product search, product filter, or is it at a much smaller level, something like search, which is used by product search and review search? How much granularity is there?

Speaker: Okay. As I told you, we work at a higher level, where you have the applications coming inside the portal. How granular an application is, or how they do development, what branching strategy they follow, is not a concern for us. This is a portal where the application teams are coming in. As for performance, as I told you, we have provided the boilerplate code; the developer experience is good.
Audience: That improves developer productivity, but I'm talking about performance when the app actually renders in the browser. Did you see a lot of performance benefit, like the load time decreased, or interactions became much more fluid?

Speaker: The dependencies which are common across the packages are shared. So if there are two applications which share most of their dependencies, then those would not need to be downloaded again. But if you create an application which has a completely different dependency tree, it would again have some performance implications. So I would say, because this is a portal and we are at a higher level, we could solve for some of the application performance, but most of the ownership of improving the performance of an application lies at the lower level. We did what we could at the higher level.

Audience: Hi, yeah, thanks for the presentation. My name is Sakir. I had a question around the boilerplate code. You said it's like a one-time operation: you just copy the boilerplate and then start development. I'm assuming the boilerplate contains all the webpack configs?

Speaker: Yes, something like Create React App, where you download everything.

Audience: Right. So the question is, if you make an improvement in the boilerplate repo, how does someone who cloned it, and whose development has moved forward significantly, get it? How do you back-propagate these improvements?

Speaker: That's where the versioning comes in. If we make a breaking change, we update the major version; if we do, say, a bug fix, we update the patch version. Because we have versioning, we also release consistently, say every three or four weeks, depending on the changes we have, and the teams get to know that these versions have been released; based on that, they update their code base if it is required for them. They can do a rebase, or they can do a merge as well.
It depends on the changes that you have. If we make a breaking change, you have to take a fresh copy of the repo, but if we are doing, say, patch versions or minor feature updates, then a merge would be sufficient. Or we can talk more about it later after this.

Moderator: Sure, you can catch up in the birds-of-a-feather session as well.

Audience: Could you throw some light on micro frontends? What use cases did you use them for?

Speaker: We don't use micro frontends, but the micro frontend approach sounds similar to the approach that we have. As I told you about dividing the application into multiple packages, in micro frontends too, the application is divided into multiple modules, so both of these approaches sound similar. I wanted to show the difference: we operate at a higher level, and micro frontends are at the application level. How you build the application is not a concern for us; the application might be built by multiple scrum teams, or they might be using single-spa, something like that.

Moderator: Yes, you can take follow-up questions offline. He'll also be there in the birds-of-a-feather session.

Audience: Hello. Yeah, so how do you get something like code splitting in this node-modules kind of framework?

Speaker: We provide the boilerplate code, which already has this code splitting. You can't code-split the code inside the node_modules folder, that's not possible, so you have to do it in the boilerplate itself; you provide the provision to do that in the boilerplate code.

Audience: One more question. So you are publishing the npm packages; for example, can I publish one npm package which uses React and one which uses Angular?

Speaker: We are using React for this entire approach.

Audience: So you do not have an approach where I can publish different kinds of packages?

Speaker: No, not currently.
Speaker: For the applications which are built on different frameworks, we add them as a single link. For the existing applications, like the AngularJS ones, where the framework is no longer supported and is deprecated, we add them as a link; that's it, we are not including them as npm packages.

Audience: Yeah, so the question that I have is around common services, like settings and all. These common services are shared by most of the components. So how do you resolve duplicates?

Speaker: What kind of setting?

Audience: Like, I have a common service which fetches the settings of a user. Settings as in, it can be anything: what is my default language, what are my time preferences, and it fetches them from somewhere. So I have a common services package which does that. We also have a monorepo, but the common problem is, if anything changes in the settings service, because it is used across all the components, either all the components have to upgrade to the new version, or there might be multiple versions of the same settings service sitting around in the code, which causes our bundle size to increase. How do you solve this kind of problem?

Speaker: So you're saying that the preferences would be a separate package, and it is shared by multiple applications?

Audience: Yes.

Speaker: That depends, again, on the application teams. We are working on the framework for supporting these applications. If two or three applications have their own individual package, then it's their ownership to do proper versioning of it. We are not concerned about the versions of the packages that they are using.

Moderator: Those of you who have given your names for flash talks, can you come forward? We'll get you set up. Let's keep the questions going if there are any more. Also, a lot of you have bought t-shirts along with your tickets, so you need to go and pick them up; a lot of t-shirts which were already bought have not been picked up.
And if you would like to buy ReactFoo t-shirts, they're available.

Audience: Yeah, I have a question for you. You mentioned that you let the user paste in the configuration ID, and then the configuration ID is used to render. So aren't you making an extra network call, affecting the overall performance for all the pages that use this? Because a configuration ID would first load a configuration, and then, based on that configuration, get the data.

Speaker: The configuration ID is tied to the API. In any case, you have to make the call to the API to get the response data; there is no extra network call, first to get the configuration and then, based on the configuration, to get the data. The configuration needed to load that response data into, say, the pie chart or the line chart is already built into the boilerplate. The only call we make is to fetch the response data, which is dynamic for different scenarios.

Audience: You definitely make a call for getting the configuration also, right, based on the ID? There is an extra call.

Speaker: We have the data required; the data that comes back is the data required to visualize that component. If I have a bar chart and I need to render three columns, I require the data for that, and then the component makes this one network call to fetch it.

Audience: So isn't that extra network call affecting the performance, or anything of that sort?

Speaker: How is it extra? In a normal scenario, if you want to visualize a component, you of course have to make a call to get the data, right? The configuration, in the sense of the chart configuration, is built in. What we call for is the data, the data to render the visualization.