Hello everyone, I'm from HackerRank. Earlier at HackerRank we had one big, fat monolithic app, and all of our front-end projects were residing in that particular app. It was kind of hurting us, so we decided to move from that monolithic app to a more modular and manageable infrastructure. Today I'll be sharing that experience and how we did it. But before we start, I'd just like to know our audience. How many of you had breakfast this morning? Okay, cool. Yeah, do a long stretch. And how many of you have a single repo for all of your projects? Okay. How many of you use multiple repositories for different projects? Nice. And how many of you don't have any repositories at all? So yeah, I think we have a good number of people who have multiple repositories, so you'll be able to relate. About me: I'm Sudhanshu, I work at HackerRank. I do a lot of open-source projects. Currently I'm working on a very interesting one, a UI library somewhat similar to React. You should check that out, and if you have any questions on JavaScript, front-end architecture, or any design patterns, you can just ask me on Twitter. The agenda of this talk is to cover the pain points we were having with the monolithic app and why we needed to move away from it, and also how we did that and how we fine-tuned the overall process. The first major reason we had to move away from the monolithic app was to reduce the overall app complexity. As I mentioned, all our front-end projects were part of one monolithic app, and a developer working on a feature had to keep a lot of context in his mind, which was not productive for him. And also, because everything was staying together, the code base was the same for all the projects.
So there was a lot of coupling happening between different modules, because of which, if someone was working on a feature and had to make an improvement to a core component, that improvement became part of the feature development. That was creating a bottleneck between teams, a kind of cross-team dependency: say one person on a team needs updates to a core component from another team, but has to wait for that team's feature to get completely pushed. The other reason was that we wanted separate deployment for all of our products. We have different products: HackerRank for Community, HackerRank for Enterprise, the candidate product, CodePair, and a few more. And all of them have a different stability threshold, I would say; the level of stability we have to maintain differs a little across products. We can push fast on HackerRank Community and break things there, but we can't do the same with our enterprise app. Also, for an enterprise application you generally push every two or three weeks and cut a new version, while for Community we release daily. So we needed a different release cycle for each of our products; that was one requirement. And let's say we're updating a core component: we want that to happen for one product only, not for all the products. Making a release for one product should not affect any other product. And this was happening not just with new features, but with bugs as well: we would introduce a bug in one product and it would spread across all of our products. That is not a good position to be in either; your community site is down, and because of that, your enterprise product is down, which is not good. So we wanted separate deployments for all of our products. The monolith was also becoming a bottleneck for us to innovate.
By that, I mean that if you want to try out new things for a part of a product, it's a little bit hard. I'll give an example. We wanted to experiment with Jest; earlier we had Mocha with Karma, Chai, Sinon, and so on. With that monolithic app, the only way was to set up two different testing pipelines and keep them both. That would increase our Travis build times and our production build times. And a developer might get confused: if he's working on a feature that is spread across different files, should he write tests in Jest or in Mocha? Apart from that, enhancing just one part is also a little bit difficult. If you make a feature improvement in one product and it works well there, you want to bring it to the other products; that's hard because everything shares the same code base. You can do it, but it's not very straightforward. So we knew we had to move away from the monolithic app, and the first thing we did was evaluate our different options. First we thought: we already have a single repo, so why not just use submodules within it, as a monorepo? That solved a few problems for us, but not all of them. Different parts of our product have different build pipelines: modules have to be built differently, products have to be built differently, and they get deployed differently. A monorepo doesn't solve that very well. Also, one of the main reasons we wanted to move away from the monolithic app was to create a mindset: when a developer is working on a core piece, he should work on it in isolation. With a monorepo, that mindset would not happen; people would again form a bit of coupling between different modules and work on core components along with feature work. So our best choice was breaking our monolith into multiple repos.
And we went ahead with that. The second thing we did was list down all the different pieces of our application. Earlier, our repo looked something like this (there were a lot of other products, but just to mention a few): we have HackerRank for Community, and we have HackerRank for Work, which is the enterprise product. Both of those apps share some common things, like our component library, utilities, common configurations, scripts, etc. We decided to break this up and categorize it into two types: apps, which are all the products, and modules. Products have different deployments and different build pipelines, so this categorization was required. Modules are things that get added to apps, and apps are the entry points for all the products. After the split, the plan was that it would look something like this: we'll have different apps, HackerRank for Community, Enterprise, and Candidate, and then we'll have all our modules as separate repositories. To mention some of those modules: we split our icon library into a separate repo with its own build tool; earlier it used to create font icons from SVG files, and now we create SVG icon components, so that is separated out. Then we have an atomic UI component library, which we call UI Kit. We have a common utilities library, which we call HR Utils. We keep all our build-related scripts and various dev scripts in HR Scripts. And then there is a shared-components module: UI Kit is more about atomic components, while shared components are components used across products, like a pricing view, that are very specific to HackerRank. There were a couple of other, smaller modules as well, which I have not mentioned. But while we were breaking up all our applications and modules, we wanted to keep our code style uniform across all the modules.
A developer switching between different repos should not feel like he has come to some other company altogether. To keep the code style uniform, we created presets that are very specific to HackerRank standards. We created a stylelint preset for our CSS files, where we keep all the CSS rules; we created ESLint presets for JavaScript; and we created a Babel preset, so everyone uses the same level of JavaScript features across all the modules and apps. Then we had to create the build pipelines for our modules. For publishing, we treated all our modules like normal npm packages: you import a HackerRank module the same way you import a third-party package, and you add the module name with a version in your package.json. We initially thought of using a private npm registry for this, but it has its own credential-management system, and we were already managing credentials on GitHub; we didn't want to do that on two different platforms. So we decided to have a self-hosted registry, and we used Nexus Repository Manager for that. With Nexus, we just have to create two tokens, a read token and a write token. With each of our modules we keep the read token, and we keep the write token on the Travis side, so a developer can't publish directly; a release always happens through Travis. Now we are looking into GitHub Package Registry, because we already manage credentials on GitHub, so I think it will also work for us; we are exploring that. The overall Nexus setup looks something like this: whenever you publish a module, it goes into the hosted part of the Nexus registry, and that happens through Travis. On the application side, if you add that module to your package.json, then whenever you do yarn install or npm install, it first goes to the Nexus registry and checks whether the package is already available in the hosted repository.
If it is not available there, Nexus will try to fetch it from the npm registry, so it acts as a proxy for the npm registry, and it keeps a cache. The next time you do yarn install, the package is returned directly from the Nexus cache instead of being fetched from the npm registry again. We wanted to automate the deployment of these modules, and we did that through Travis. We followed a simple rule: we make a release whenever there is a version bump. You can do alpha version bumps on your feature branch, and for stable releases you have to do a stable version bump on master. Whenever a version bump happens, Travis triggers a release. It also auto-generates docs if there is a docs script on that particular module, and pushes them to S3 in a versioned layout, so you have docs for every version of that specific module. After releasing, we send a release notification on Slack, so everyone is notified that a new version of that module is available and they can start using it. After setting up the pipeline, we went ahead and separated our modules. We didn't do it all at once; we moved our modules one by one. We moved UI Icons, then we moved UI Kit, and so on. And with every move, we improved our tooling and our build pipeline based on the learnings from the previous move. This is very important: move things one by one, so you keep learning and improving throughout the process. Now we have moved almost all of our modules, and we are in the phase of separating out all of our applications. While doing that, we saw that a lot of infrastructure code is similar between all the apps, so we decided to separate out that front-end infrastructure as a Node module. That infrastructure contains the isomorphic infrastructure we have built, which is very custom to our use case.
It also contains the dev tooling part and the application layer. There is a lot of code in this infrastructure part, so it went out as a separate module which apps can use. An application repository will then only have the application code, not any common code. So right now we are in the phase of separating out our apps. While this multi-repo architecture is working very well for us, there were a few hiccups with it. To mention some of them: how do we manage cross-dependencies across projects, and how do we do version upgrades across all the modules and apps? And the second one: how do we keep the same level of productivity we had with the single-repo, monolithic app? How do we maintain, or even improve, the local development experience? For managing cross-dependencies, first we strictly followed semantic versioning. For a module, if there are any API changes or any breaking changes, we do a major version bump. If there is an API addition that doesn't affect the older APIs, we do a minor version bump. And if it is just a bug fix, we do a patch version bump. In our modules, we don't put shared packages under dependencies; we keep them as devDependencies and peerDependencies. We keep as dependencies only those things that are very specific to that particular module. If a dependency is shared between an app and a module, we keep it as a peerDependency in the module and as a dependency in the application. The reason for that is we wanted the source of truth for package versions to live at the app level: if you want to upgrade the version of any package, whether our own module or a third-party one, you just do it in the application's package.json, and you don't have to do it across products.
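To make the versioning and dependency rules above concrete, here is a small sketch of why a caret range (`^`) pairs well with semantic versioning: minor and patch bumps stay inside the range, while a major bump falls outside it and surfaces as a peer-dependency mismatch at install time. The `satisfiesCaret` function is a simplified illustration, not the full semver spec.

```javascript
// Parse a "major.minor.patch" version string into numbers.
function parse(version) {
  const [major, minor, patch] = version.split('.').map(Number);
  return { major, minor, patch };
}

// Does `version` satisfy the caret range `^base`?
// Simplified: a caret range allows changes that do not modify the
// major component (accurate for majors >= 1, which is the common case).
function satisfiesCaret(version, base) {
  const v = parse(version);
  const b = parse(base);
  if (v.major !== b.major) return false; // major bump breaks the range
  if (v.minor !== b.minor) return v.minor > b.minor;
  return v.patch >= b.patch;
}

// The app pins the package as a dependency; the module declares "^16.8.0"
// as a peerDependency.
console.log(satisfiesCaret('16.9.1', '16.8.0')); // minor bump: still satisfied
console.log(satisfiesCaret('16.8.2', '16.8.0')); // patch bump: still satisfied
console.log(satisfiesCaret('17.0.0', '16.8.0')); // major bump: peer mismatch
```

This is why keeping the version source of truth in the app works: the app can move through minors and patches freely, and only a major upgrade forces the modules to be revisited.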
And whenever there is a major version bump on the application side, because we define peerDependencies with the caret symbol, it starts throwing warnings that the peer dependency doesn't match. At that point, we upgrade the specific module to support the latest version of that particular package. As for local development: with the monolithic app, we would just write code and see everything at once. Now that we have multiple repos, the main pain point was: what if you want to make some changes in the UI Kit module and see how they look in your application? Would you publish a new alpha version of UI Kit, add that alpha version to your application, and then test it out? That becomes a very long process. The first thing we tried was just giving a local path in package.json and doing yarn install: you run a build in your specific module, give its relative path in your package.json, and when you do yarn install it fetches from your local file system, not from the registry. That works, but every time you want to see a change, you have to build your module and run yarn install in your application. The second thing we tried was yarn link, which is similar to npm link. What it does is create a symlink between node_modules and the actual folder. Now you don't have to run yarn install in your application, but you still have to do the build part, because you write source files and build distribution files out of them, either lib or ES modules, and your application points to those lib or ES-module folders. So that was also a bit of work. So we created our own solution to all of these problems, which we have already open-sourced. It is called package-bind, and it uses a Babel plugin called module-resolver.
With module-resolver, you can give a relative path for your library: in an app, if I want to work on a local UI Kit, I just give the relative path of my local folder, and it points directly to the source path. By itself this doesn't work very well, so what package-bind does is this: you just wrap your Babel config with package-bind, and it returns a new config with a resolver function that resolves your files correctly, whether a file is being loaded by UI Kit or by the application. It handles all the peer dependencies, it handles aliases you have defined on your modules, and it also handles cases where you have defined aliases in presets. So it handles a lot of things, and what you get is that you don't need to run a build in each of your aliased repos. Cross-repo development can happen with hot reloading: you make a change in one repo and it is reflected in the other repo. This is a big pain point of multi-repo architecture, and it's why people move to a monorepo with tooling like Lerna; but with this, you don't have to move your multi-repo setup to a monorepo. So I have a quick demo for it. I have two projects. This is a demo project with simple number formatting; this is, again, a module which I have open-sourced, react-number-format. And this is a simple input form. What we want is to use a custom text field with it; let's say we want to use TextField from Material UI. Because the input is being rendered by react-number-format, we have to implement something in react-number-format itself; we can't just wrap TextField here. So let's say we add a new property here called customInput and provide TextField as its value. We change our Babel config: we give a relative path for react-number-format, and we use package-bind to wrap the Babel config.
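The Babel-config change in the demo might look roughly like the sketch below. This is an assumption on my part: the exact package-bind API and option names may differ from what is shown, and the relative path and preset list are purely illustrative.

```javascript
// babel.config.js (sketch; the exact package-bind API may differ)
const packageBind = require('package-bind');

// Point the library import at a local checkout instead of node_modules.
// The relative path is illustrative, not from the talk.
const aliases = {
  'react-number-format': '../react-number-format/src',
};

// packageBind wraps the existing Babel config and returns a new one whose
// resolver handles files correctly whether they are loaded by the app or
// by the aliased module (peer dependencies, nested aliases, and so on).
module.exports = packageBind({
  presets: ['@babel/preset-env', '@babel/preset-react'],
  aliases,
});
```

The effect is that imports of `react-number-format` resolve to the local source tree, so edits in that repo hot-reload in the app without a publish or a build step.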
Now you can see this got applied, so you can make changes in your other repo. Sorry, I have to restart because I changed the config. Now let's implement customInput in our react-number-format repository. The prop is customInput; we say take the Input component from customInput if it is there, otherwise fall back to a plain input, and render that instead. Right now we still see the old input here; when we save in the other repo, we can see the change reflected here, so now we have a TextField instead of a normal input element. You can use this in an app or in a third-party module, and it works flawlessly. The key takeaways I want you to take from this talk: architectural change takes a lot of time, so if you are planning to move away from a monolithic app, whether to a monorepo or a multi-repo setup, first list down the challenges you are facing with the monolithic app and then choose your option wisely. Then list down all the different pieces. Create a plan for how you'll move things one by one. Also choose the right tooling, and if the tooling isn't available, build it. But make sure the developer experience is not affected by the architectural change; it should improve, not degrade. You have to make sure of that. And at the end, don't break everything at once; move things one by one, so you learn along the way. With that, thank you. You can follow me on Twitter and GitHub. Do we have any questions? Thanks, Sudhanshu. Any questions from the audience? I mean, you should have either understood everything or you didn't understand anything. Yeah. Well, if you didn't understand anything, raise your hand. Do you have any questions that you want to ask? Feel free. I can see some interest, but not necessarily questions. Yeah. So this was not one complete, all-in engineering effort; we did it slowly, one by one. It took around six months, but we were moving only one module at a time.
And that's why it took a long time. If we had put in more engineering effort, we could have done it much faster. But that's the way I would suggest: give it time, move one by one, so you don't break everything at once. Anybody else with a question? Okay. Thanks, Sudhanshu. Thanks for that.