Hello everyone. Can you hear me? All right, welcome to the last talk of the day. It's going to be the most challenging talk for me, and probably the most boring one for you, so brace yourselves. I am Raghunath Jawahar. I work with an agency called Obvious, formerly known as Uncommon, Bangalore, and I specialize in architecture, test-driven development, refactoring, and optimizing workflows. That's all I do. And this talk is probably going to be more obvious, because I'm going to state the obvious. So let's start with high performance. Let's talk about being predictable. What kind of picture does that paint in your mind? Long back, when someone said high performance to me, it meant burning the midnight oil — working, working, working, with no end to it. But this talk is about how you build a sustainable workflow for yourself and for the other people on your team, so that you can sustainably produce a certain quality of work whenever you get things done. With that said, I want to talk about something very personal, something really close to my heart that I've never talked about in public. So here it is: soft-boiled eggs. They are delicious. If you haven't had one, you should try it. They're really smooth — you don't even have to chew; you put one in your mouth and it just melts. But making them is not easy. My mom used to make them when I was a kid, and I would eat them. One day she was not in town, and I decided, okay, let me go ahead and give it a try. The first time, it came out almost raw.
Then I cooked it again, and it came out really hard-boiled, and I hated it. After a few rounds of trial and error — and this was before the Internet was available to me; today it's a simple Google search: "how do I make soft-boiled eggs?" and it tells you, go ahead, dummy, boil the egg for eight minutes and put it in cold water — I figured it out. Vinay would probably agree with my methodology of making soft-boiled eggs, and that's totally fine. So you go ahead and do this, and someday one of your cousins calls you and says, hey, I had this really nice soft-boiled egg at your place, I want to make one for myself — can you tell me how? And you give him a set of steps: take the egg out of the fridge, let it come to room temperature, put it in a pan of boiling water, take it out after eight minutes, dunk it into cold water, peel the shell, and you'll have a nice soft-boiled egg. Split it open in the middle, add some salt and pepper, and it's delicious. And mind you, a soft-boiled egg is not binary — it's not just soft or hard, it's a range. Just because I make the same kind of egg every day doesn't mean it has the same softness every time, and the preferred softness varies from person to person. So that's making soft-boiled eggs. The series of steps you take to make one is a workflow. We'd call it a recipe, but it's a representation of real work: I give you a workflow and ask you to do it in a certain way, you go ahead and do it, and you're guaranteed to achieve a certain result. That's all a workflow is. And this is what a workflow for making a soft-boiled egg looks like.
It's as simple as that. The moment I put this slide up, if you're a consultant, that number eight is probably going to bother you. Right now you're probably thinking: hey, you spend 240 minutes a month boiling eggs — that's too inefficient, let's make it 30% faster. The moment you list things out like this, you can actually look into them, find areas to improve, and come up with concrete steps that help you improve them. Someone may disagree with this methodology of making soft-boiled eggs, and that's totally fine — I can take the feedback and improve the workflow, maybe to produce the same result in a much shorter time, which is desirable. Increase the temperature, say, and you may end up with a cracked shell; that's okay for some people, but for others it's a no-no — they want their eggs uncracked when they come out of the water. So a workflow can be as simple as this. But when you're building software, it can be very, very complex, and it can also be hard to replicate. That's why we start with a guideline, and once we have a set of guidelines in place, we look at things and customize them based on our requirements. The point is: we need to have a certain workflow in place before we can start optimizing it for ourselves. So I'm going to give you a set of steps and say, hey, this is your workflow — go ahead and customize whatever you want. This is very specific to Android. It's a workflow I've been using for about three years on small, medium, and large teams, and it's been working fairly well for us. It may or may not work for you.
The tools and techniques are not that important, but the principles backing them are essential. So — when is the last time you saw a PRD? Saket? Sorry. Anyone else? Yeah, a PRD, every now and then. Do you actually read it? That's a difficult question to answer. And how many of you have shipped something, and then one day your PM comes to you and says, hey dude, this is not what I wanted, go ahead and redo it? How many of you have faced this problem? Nobody? These things happen constantly. And I'm kind of stating the obvious here: build a specification, figure out what you're going to do. We're software developers — why am I even talking about this? Dude, you're wasting three minutes of my time already. Well, let's talk about it anyway. And I hope Imran is here — oh, thanks. What do you see here? By the way, a friend of mine, Saket, always told me: if you're going to convince an audience, show them some graphs. So I make sure that every presentation I do has a graph on a slide. It kind of works. So what do you see here? This is where one of your product managers comes up with an idea — hey, I'm going to build something. On the x-axis you see time, and on the y-axis you see the amount of certainty. This is the point where your product manager has this really cool idea, but he doesn't know how to execute it. He's thinking about it, thinking about it, thinking about it, and after a few days or a week he finalizes it: oh, this is what I want to build. And that's the point where you may or may not get a PRD. That's very simple. Now, what's happening here?
So your product manager has this idea, and he puts down a draft. At some point he pulls in the design team and asks them: hey guys, I have this idea, why don't you build something tangible that the users can actually use? That's where the design team starts working. They're also not entirely clear about what they want to do, so they start building, and as they slowly build things, they get feedback from product. They do some work, and product says, hey, this is not quite what we want — can you make these adjustments? So the product team is going to influence their work to some extent, which means whatever was built up to this phase is going to have some changes in it. Now let's up the game. What do you see here? Fantastic, right? It's getting more and more interesting. Here is your product team, here is your design team, there's your backend team, and then there's you. Everyone is uncertain about what they really want to do. The API guy thinks, hey, this is how the API should look. Your design guy says, hey, this button doesn't go here, and I need you to add this one small piece of information right here on the screen. And you know how difficult it is to add that one small piece of information, because you've not planned for it. All this uncertainty accumulates in this place — and that's where, Imran, your frustrations can actually be quantified on this slide. You start changing things really fast because you're so confused, so pissed off. We have to do something about this. So this is your zone of risk — the place where you carry a lot of risk.
Because making decisions is hard. But the sooner you make them, the less other teams are going to affect you — the magnitude of other teams' influence on your code goes down. So making decisions is one thing. Skipping them also feels like progress: we have this constant adrenaline rush where we want to build things right away, so we start coding, and when we start coding too early we end up doing a lot of rework. We're actually making our lives more miserable. So this is the place where, when I say build a spec: you may not have a proper PRD, you may not have proper design screens, you may not have proper API responses, but this is where you sit down, collaborate with the other teams, and create something informal of your own. It could be on a piece of paper, it could be JIRA tickets, Trello tickets, it could be anything — but make sure that you know what you're actually building. So the first thing we do at this stage is get a very clear understanding of the product. It's very abstract at this point. If you're using an architecture like Redux, MVI, or Mobius, you have to have a state diagram — draw one on a piece of paper. If you're using other architectures like MVP or MVVM, it's harder, because they don't give you an overall view into the problem statement itself, so you probably make a to-do list or JIRA tickets or whatever it may be. You need some set of artifacts so that it's clear what you're going to build. The second thing is, we interact with the design team and walk through all the possible scenarios. The designer might have missed a loading screen, for instance, or a certain element in a certain place. You make sure you get these things out.
You write down which screens are missing, and you tell them. You may not get the screens right away, but you've set expectations early. And you agree on a fairly good JSON spec that you share with the backend team. In most places I've been to, the backend team and the front-end team do not agree on a spec, and this is what I've consistently noticed: at first, when you ask them to do a spec, they're really hesitant. They say, oh no, we have the screens, we'll go do the API spec by ourselves. And then the API constantly changes, and people come back to you asking for changes, which means more rework. So there's this little bit of resistance in the beginning when you start doing this with the API team, but once they start doing it, they become more accommodating, because their rework goes down too — they're building things with more clarity. So that's about it. The takeaway is that human interactions have the largest feedback loop: if you reach out to a product manager via email or text message, it's going to take a lot of time. Make that feedback loop as small as possible — go talk to them directly, make things happen. And this is one of our core, strongest philosophies: don't build something that you're not going to ship. If I'm not going to ship a UI, I don't build that UI in the first place. You have to be very brutal about this — there's absolutely nothing you build that you're not going to ship. And minimize rework for yourself and the other teams involved. That's fairly it. Once you have the requirements ready, you have to start somewhere, right?
If you look at any kind of front-end architecture, you can categorize it into three layers: the UI layer, the domain layer, and the data-access layer. The data could be anything — databases, network, even sensors like your GPS. That's how you'd broadly classify any front-end architecture. Now, quick question: where do you start? Sorry — all three? The domain layer? The data-access layer? Any more guesses? The UI? There's no right or wrong place to start; you can start wherever you want. But I've also given you some clues in the previous slides — I showed you a bunch of graphs. Think about those graphs and try to answer the question again. Do they give you a clue? If you start at the UI layer, your design team is going to come and tell you, hey, something has changed, and you'll have to go back and change it again. If you start with the data-access layer, the backend team will come and tell you, hey, something has changed, go ahead and change it. Which means you're going to do quite a bit of rework either way. So if you have a grudge against your designer or your backend team, you probably have to pay more attention to this. Do you really have a grudge? I used to have a lot of grudges against my design team and my API team — not anymore. These are the volatile areas in your code base. And notice there is one layer that is relatively stable, because you've gotten your product requirements: the domain layer is relatively stable compared to the other two. But if you start building from the domain layer, how do you know it's working as expected?
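The three layers above can be sketched in code. This is a minimal, hypothetical example — none of these names (`UserView`, `UserRepository`, `GreetUserUseCase`) come from the talk or any real project — but it shows how the domain layer can sit between the UI and data-access layers behind interfaces, so it can be built and exercised on its own:

```java
// Hypothetical sketch of the three layers described above. The domain
// layer depends only on interfaces, so each layer can be built,
// replaced, or faked independently.

interface UserView {                     // UI layer: renders state on screen
    void render(String greeting);
}

interface UserRepository {               // data-access layer: DB, network, sensors...
    String fetchUserName();
}

class GreetUserUseCase {                 // domain layer: pure business logic
    private final UserRepository repository;

    GreetUserUseCase(UserRepository repository) {
        this.repository = repository;
    }

    String greeting() {
        return "Hello, " + repository.fetchUserName() + "!";
    }
}

public class LayersSketch {
    public static void main(String[] args) {
        // The domain layer only sees the interface, so a one-line fake
        // repository is enough to exercise it without a real backend or UI.
        GreetUserUseCase useCase = new GreetUserUseCase(() -> "Raghu");
        System.out.println(useCase.greeting());
    }
}
```

Because the arrows point inward through interfaces, a change in the design screens or the API spec touches the outer layers, while the domain layer in the middle stays relatively stable.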
With the UI, you can see it. Even with the network layer, you can make fake backend calls and figure out if it's working. But how do you know your domain layer is working? Sorry — you unit tested it, nice. So: tests. We have to write tests; that's the only way to start with the domain layer. It's also important to notice the kind of tests we write. Write unit tests, because they're really, really fast. Write isolated tests, because you need to test units of the system in isolation — and isolated testing is a huge topic in itself, because you have to understand the risks that come with it, since you'll be using mocks and whatnot. But that's a risk that can be easily mitigated, and these tests provide really good feedback. If you're going to add new features or update functionality, you can do it really quickly because you have the tests in place. You can make bug fixes knowing your fix isn't breaking something else somewhere. And tests also facilitate fearless refactoring. I would actually go ahead and prescribe something like test-driven development. It's up to you to opt in or not, but the reason I prescribe TDD is that we as developers spend most of our time thinking about solutions — we're always writing production code, solving problems — but we don't spend enough time in the problem domain itself. When you do TDD, you write your tests first, so you have to spend a lot of time in the problem space. If you missed something in the spec, you'll probably find it sooner rather than later — not after you've shipped the build and your product manager or your QA looks at it and tells you, hey, you missed this thing.
Because you're spending most of your time thinking about the problem statement itself. And then there's design feedback. This is one of the most notorious things in software development — when do you get design feedback on code? When do you find out how the code design actually holds up? You wrote a piece of code two years ago, a lot of people have been working on it, and someday you realize it's a Death Star. It's a black hole, it's Mordor — nobody wants to go there, it cannot be maintained, we have to re-architect it, we have to refactor it. So that design feedback comes after several months or years, which is really, really late. But when you do TDD, you get it in a matter of minutes or days, because you're writing your tests first, so your design is constantly being exercised — that's why you get really good, quick feedback on your design. It also helps you write maintainable code. And it gives you pace — most people don't believe me when I say this, but here's a very simple example. This is the Roman numeral problem: you take a decimal number and the program converts it into a Roman numeral, that's it. A person experimented with this: the blue bars are the amount of time it took him to solve the problem without TDD, and the green ones are the time taken to solve the same problem with TDD. Obviously there's a gap. But how does this work in the real world? I've opted into TDD for over two years now, and what I have consistently seen across teams is that teams that do TDD take the same amount of time to ship the first version of the code.
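The Roman numeral kata mentioned above is small enough to sketch here. This is one plausible shape the solution takes when grown test-first — the conversion table and greedy loop fall out of making one failing assertion pass at a time (1 → "I", then 4 → "IV", then 9 → "IX", and so on):

```java
// The Roman numeral kata: convert a decimal number into a Roman numeral.
// Written the way TDD would grow it — each value/symbol pair in the table
// was forced into existence by a failing test.

public class RomanNumerals {

    private static final int[]    VALUES  = {1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1};
    private static final String[] SYMBOLS = {"M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"};

    static String toRoman(int number) {
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < VALUES.length; i++) {
            while (number >= VALUES[i]) {   // greedily take the largest value that still fits
                result.append(SYMBOLS[i]);
                number -= VALUES[i];
            }
        }
        return result.toString();
    }

    public static void main(String[] args) {
        // Plain assertions stand in for the JUnit tests you would
        // actually write first in a real project.
        assert toRoman(1).equals("I");
        assert toRoman(4).equals("IV");
        assert toRoman(9).equals("IX");
        assert toRoman(14).equals("XIV");
        assert toRoman(1987).equals("MCMLXXXVII");
        System.out.println("all Roman numeral tests pass");
    }
}
```

The point of the kata isn't the converter itself — it's that the test list forces you to sit in the problem space (what about 4? what about 40? what about 900?) before you commit to a solution.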
So if one team is doing TDD and another team isn't, they ship the first feature in the same amount of time. But when it comes to bug fixes, new features, or feature enhancements, the team that does TDD is 30 to 60 percent faster than the team that doesn't. That's what I've observed in my stint over the past two years. The deliverable at this stage is your domain layer along with its tests — you have a domain layer, you've written your tests, and things are fine. The takeaway: aim for quick feedback cycles at every stage of development. If something goes wrong, you should find out sooner rather than later. And watch the amount of influence that creeps in from other teams — protect yourself so that you don't end up doing a lot of rework, because rework is not fun; it carries a lot of pain. Next: build the components sequentially or in parallel. Once you're done with the domain layer, you still have a lot of collaborators to implement. If you're the only developer on the team, you'll have to do it sequentially — there's no other way. But if you have multiple people, this should be parallelizable, because that tells you the quality of the architecture you have at hand — the kind of coupling you're dealing with. If you think you cannot parallelize anything after you've written your domain layer, you probably have to take a look at your architecture already; that's one of the pieces of design feedback you can get. So if you look at this, the business logic layer is done, and there are four different columns. Each column is a kind of component we have to write for the architecture that we follow. It could be different for you — it's different for MVP, totally different for MVVM, different for MVI.
It could be different, but this is just for the architecture we follow. And yes, you have JIRA — you do all kinds of tracking with JIRA or whatever project-management software, and that's separate. This is something you write down on a piece of paper, so it's constantly in front of you, and you get that sense of accomplishment whenever you get something done. That keeps you going, and it works surprisingly well. So it varies across architectures, and it should be parallelizable — if it's parallelizable, the coupling in your architecture is fairly low, which means high cohesion. This sheet has a heading, which is probably the name of the screen, that's it. And if you look at it, we have check marks, which mean: hey, I'm done with this. And there's a deadline, which you may or may not put there, depending on what you're doing. Each color represents a person — we have different colors, and usually our team is three to five people when we're building a feature. This is how we pull it off. This is an example sheet from a feature I built a few months ago — a bunch of screens for a certain feature. This is what happens when you have it on a piece of paper: it keeps you going, you know what's happening in your project, you know what everyone is up to, and it's pretty easy to take care of. It's not a replacement for your project-management tool; it's just something you have quick access to, because you come into the office every day thinking, how should we step up our game to get things done? The deliverable here is all the other components, plus tests.
So you've written all your components, you have all your tests, and these components are ready to be glued — you have to put them together. The takeaways: architecture is the cornerstone. If you don't have an architecture, you cannot — or at least it's really hard to — build a workflow for yourself. So architecture is critical. And use DI, because DI inherently helps you write loosely coupled applications. I'm not referring to DI containers like Dagger; I'm talking about plain old constructor or method injection, nothing else. Also, critical information should be readily available, because you have to be well informed about the project. If you're not, you've essentially lost control over it — you don't know who is up to what, and you have no clue what's happening. Make sure all critical information is available to you at all times. Next: glue code — wire the components together. That last box is where you write the glue code. By glue code I mean things like this: you probably have a DI container set up, and the wiring for it happens here, because your tests should not have a DI graph — they don't need a DI container in the first place. All your DI-container usage happens only in your production code. Then there are the view interface implementations for your activities, fragments, views, view controllers, whatever. All of these count as glue code — the boilerplate for your architecture: you have a base class, you're extending it, adding things to it, injecting dependencies, all that stuff. And once you're done, just tick it off.
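Plain constructor injection — the kind recommended above, with no container involved — can be this small. The names here (`AnalyticsTracker`, `SignUpPresenter`) are made up for illustration; the point is that the class never constructs its own collaborator, so tests pass a fake and only the production glue code knows about the real implementation:

```java
// Plain old constructor injection, no Dagger, no container.
// The dependency arrives through the constructor, so the class is
// loosely coupled to its collaborator and trivially testable.

interface AnalyticsTracker {
    void track(String event);
}

class SignUpPresenter {
    private final AnalyticsTracker tracker;

    // No `new` calls inside, no DI graph: whoever constructs this
    // presenter decides which tracker it gets.
    SignUpPresenter(AnalyticsTracker tracker) {
        this.tracker = tracker;
    }

    void onSignUpClicked() {
        tracker.track("sign_up_clicked");
    }
}

public class DiSketch {
    public static void main(String[] args) {
        // "Glue code": this wiring lives only in production entry points
        // (or, in a test, you wire up a fake instead — as here).
        StringBuilder log = new StringBuilder();
        AnalyticsTracker fake = log::append;          // a one-lambda test double
        new SignUpPresenter(fake).onSignUpClicked();
        System.out.println(log);                      // prints: sign_up_clicked
    }
}
```

This is why the tests never need a DI container: a fake that satisfies the interface is a single lambda, and the real wiring stays confined to the glue-code box at the end of the sheet.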
The nice thing to aim for here — what I encourage the teams I work with to aim for — is that when you've built your application this way and you run it on your emulator for the first time, it should work. It should work right out of the box. That's what I push for, and people get a kick out of it when it happens, because you've built everything against your tests, so the first time you run it on an emulator you're very confident it's going to work. That's how much predictability the workflow gives you. So the deliverable is a feature you can play around with on a device or an emulator. You could ship it to — oh, don't ship it to QA yet, because there's one more thing: test with local stub systems. Why do we even do this? Because when you're interacting with external systems — say, a database or the network layer — there are certain conditions that can never be tested directly. For example, with your database layer, you can never test the scenario where your application runs out of disk space. And how would you get your server to give you a 503 on demand, or reproduce some equally weird error? You can't test these scenarios directly, because the state is distributed elsewhere, on a remote machine — it's almost impossible. So what we do is put an interface in front of the external system. One of my favorite parts of doing stub testing is to use RxJava — or RxSwift, or anything similar — because with callbacks you have to deal with method calls, and it's really hard to represent exceptions and other conditions. In RxJava, everything is a value: your data is a value, your error is a value.
Your completion event is a value too — everything is a value. And when you're dealing with values, it's easy to create error conditions. That's my favorite part. So when I'm dealing with the boundaries of external systems, I tend to use RxJava, because it works really well for me and for the architecture I use. If you're still using callbacks, you'll have to figure out your own ways to deal with this, but emulating error cases with callbacks is really hard — it's kind of tricky. Values are also very helpful for emulating delays. So here is a remote API, and these are a couple of implementations. The first one returns a positive response, so I can test the happy path manually. For the negative response, I can throw whatever kind of error I want: an HTTP exception with a custom JSON body, or a 503 with a custom HTML body, and then see how the application reacts to it. No matter what, I can test everything right here. The reason this is essential is that you're going to ship this thing to QA, and they may not find anything while things are going fine — but something breaks a week down the line, when you're already working on your next sprint, and then this bug fix lands on you and creates unnecessary pressure you don't want to deal with. I'm not saying there will never be bugs — there will be some — but it's our responsibility to make sure everything we can foresee going wrong is covered by what we do. Out of this you get the same binary of the build — you may or may not have to do a bug fix, but it should be ready to ship. So: fail fast. Don't wait for QA to find obvious bugs. Then integrate and test manually again. That's it.
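The stub pattern above can be sketched like this. The talk uses RxJava, where errors are values; to keep this sketch dependency-free I've used `java.util.concurrent.CompletableFuture` instead, which gives the same property — success and failure are both just values you can construct. All the names here (`RemoteApi`, `HttpException`, and the stubs) are made up for illustration:

```java
import java.util.concurrent.CompletableFuture;

// Stubbing a remote API behind an interface: one stub per scenario,
// including failures that are nearly impossible to trigger against a
// real server. (RxJava's Single would play the same role as
// CompletableFuture here.)

interface RemoteApi {
    CompletableFuture<String> fetchProfile();
}

class HttpException extends RuntimeException {
    final int code;
    HttpException(int code, String body) {
        super(body);
        this.code = code;
    }
}

// Stub #1: the happy path, for exercising success screens manually.
class SuccessfulApi implements RemoteApi {
    public CompletableFuture<String> fetchProfile() {
        return CompletableFuture.completedFuture("{\"name\":\"Raghu\"}");
    }
}

// Stub #2: a 503 with a custom HTML body — an error condition you
// can't reproduce on demand with a real backend.
class FailingApi implements RemoteApi {
    public CompletableFuture<String> fetchProfile() {
        return CompletableFuture.failedFuture(
                new HttpException(503, "<html>service unavailable</html>"));
    }
}

public class StubSketch {
    public static void main(String[] args) {
        new SuccessfulApi().fetchProfile()
                .thenAccept(body -> System.out.println("success: " + body));

        new FailingApi().fetchProfile()
                .exceptionally(e -> {
                    // The error is just a value flowing through the pipeline.
                    Throwable cause = (e.getCause() != null) ? e.getCause() : e;
                    System.out.println("error: " + cause.getMessage());
                    return null;
                });
    }
}
```

Swapping which implementation the app is wired to is a one-line change in the glue code, so you can walk QA through the out-of-disk-space or 503 screens before the real backend ever misbehaves.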
Once you're done with all these things, you go and interact with the real world, and your API may or may not work. We've had cases where there was zero integration effort — you just swap the URL and things work like magic. In other cases, one of you has missed something: you forgot to ask them for a parameter, they forgot to send something to you, and you have to iron these things out. And one more thing — remember when I said be brutal about not building what you won't ship? I actually discourage the API teams from building stub or mock responses, because it's a waste of their time. Once both teams have agreed on a contract and built to the spec, that's enough. It's the same principle: they're not going to ship the stub API, so why have it in the first place? So that's a rough pass through the specifics of the workflow we've been talking about. And note: this is a workflow, not the workflow — reusable, a solution, not the solution. It's substitutable in most cases, though not everyone will agree that one person's version of a thing is a perfect stand-in for theirs. So use it in whatever context you like — get creative. Now, predictability. The kind of predictability we've achieved with this is that, with a two-member team, we ship a fairly simple feature in three days. We know the estimates the workflow yields, and everyone on the team follows it. And if something runs above or below that — much longer or much shorter — you immediately get a red flag: hey, something is wrong with the estimate.
Let me go ahead and look into it. You get really quick feedback there, because you know the workflow yields a certain result for a certain team. That's something we've achieved, and we also achieved better acceptance rates from QA. One of the clients I worked with had an acceptance rate of 20% for builds — meaning for every 10 builds shipped, eight would be rejected by the QA department. Once we started adopting this workflow, we had an acceptance rate of 80%, which was a huge win for us, because most of the time we would ship just one build and it would go through. High performance also means low maintenance: if you shipped a feature a year ago, you shouldn't still be fixing its bugs a year later. We're very receptive to direct costs — if something costs you $50 a month, you notice it. But we're not really receptive to indirect costs: the time it takes us to fix bugs, or the cost of not buying a license for a certain tool. Those are costs the human mind doesn't really register. So, to achieve high performance, there are a few productivity tips I'd like to talk about. One: master the IDE. You have to know everything about your IDE. There are a couple of ways to do this. One is to use the Key Promoter plugin for the IntelliJ platform. The reason I ask you to use this plugin is that we don't want to consciously memorize shortcuts — the last time I took a printout of a shortcut sheet, it didn't go well; it doesn't help your memory. What we're going to bank on instead is muscle memory. It's real: hand someone a laptop and ask them to recite a shortcut, and they probably can't.
If I ask you for a shortcut, you probably won't remember it, but if I give you a keyboard, then you'll probably just type it, right? That's your muscle memory kicking in, right? So what Key Promoter does is, whenever you use the mouse instead of a shortcut, it gives you this in-your-face message that tells you, hey, you're doing it wrong, you're supposed to do this instead of using your mouse, right? It's really quick, it's really irritating, and hence it's really effective, right? There's a newer plugin called Key Promoter X, which is extremely nice. It's like the nicest person you've ever met. And it's very ineffective, because you will not even notice that it's nudging you, right? So don't get Key Promoter X; get Key Promoter. The second thing is: explore the IDE. Just go around and get to know your IDE. How many of us have actually taken a tour of the IDE itself? We don't really care. The moment we understand the basic stuff, we don't really go look into the IDE at all, right? So this encourages you to go ahead and look into the IDE, right? And there are also multiple productivity features that come with your IDE, like multi-cursors and refactoring tools. Refactoring tools are amazing; they are my favorite. And then you also have live templates. So if you're typing out the same stuff over and over again, like probably typing a test of a certain format, go ahead and create a live template. And postfix completions are surprisingly effective, right? And then, staying informed. When you're in the IDE, don't go looking elsewhere for information; there are gutter icons. If you're dealing with colors or image resources, the gutter icons can actually tell you what kind of image it is. You don't have to go and open the image yourself. Line numbers are useful if you're pair programming; otherwise I don't find them effective.
And Logcat: make sure that your Logcat is colored and every log level has a different color, so you know what you're looking for. You don't have to go through a stream of white text; if you're looking for an exception, just look for the red one, right? I use Rainbow Brackets, right? It's colorful. If you have the problem of matching parentheses, then this would certainly help you out. And I also learned it's entirely possible to work on an IDE like this. IntelliJ is a very powerful IDE, but the defaults that you get are not very effective, right? Because the IDE's defaults are designed for beginners, for people who want to learn how to use the IDE, right? The more time you spend learning the IDE, the more you realize that you don't need any of it. You can just have one blank screen with code and probably do whatever you want to do from there. The second thing is to practice, right? Because if we keep doing the same thing over and over again, we're not going to get different results, right? If you want different results, go ahead and practice things in a different way. And how do you start practicing? There is this repository called Benchpress, right? This is a repository that I've created, specific to Android. It takes very simple problems. There are six different problems that are very stupidly simple, like probably a counter example, right? Probably an image picker example, right? But these exercise various aspects of Android architecture itself, right? So if you want to try out a new architecture, you can take this repo, and there are like six sample applications, go ahead and build them, right? That is reason one, because we constantly keep evaluating architectures. This is what we do, because we figure out different capabilities of Android which have to be, you know, accommodated by the architecture, and then we try to build around those, solve for those problems, using the architecture.
The second thing that it helps with is focus: because the applications are trivial, you can focus on your workflows, right? Focus on writing tests, focus on learning the API for the new architecture, and things like that, right? So, like, go ahead and do it. I probably do it several times a year. I probably build counter applications six, seven times a year, with different architectures and different workflows, and surprisingly it works. It also helps you build muscle memory. Now, architectures like MVP and MVVM have bi-directional data flows, which means they're inherently complex. There are better ways to do this, and that's where we introduce unidirectional data flow architectures. There are a lot of unidirectional data flow architectures, like Flux, Redux, Mobius, and Mobius Swift. What they do is let you build applications based on state machines, right? State machines give you an overall idea about the problem itself, right? The problem with MVP and MVVM is that your state is distributed across the entire application, so there is no easy way to figure out what is going on, right? But when you use a unidirectional data flow architecture, after step one you're supposed to produce a state diagram, right? And if you show this to someone, they can tell you whether you're actually going to build the feature that the product team is expecting, right? And it's very helpful when you're working with team members, because you can look at the state diagram and see if the person has actually solved the problem, right? Because if you're using other architectures, then it will probably take three or four days before you can see in person whether something was built correctly, right? That feedback loop is too long and too expensive, right?
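The counter exercise makes the state-machine idea concrete. Below is a minimal sketch of a unidirectional data flow update function. This is not Mobius's actual API; the names `CounterModel`, `CounterEvent`, `update`, and `replay` are illustrative, showing only the state-machine shape these architectures share.

```kotlin
// A minimal, illustrative unidirectional data flow loop for the
// counter exercise.

data class CounterModel(val count: Int = 0)

sealed class CounterEvent {
    object Increment : CounterEvent()
    object Decrement : CounterEvent()
}

// The whole feature is one pure function: (model, event) -> new model.
// Because it's pure, you can review the state machine on paper, as a
// state diagram, before writing any UI code.
fun update(model: CounterModel, event: CounterEvent): CounterModel =
    when (event) {
        CounterEvent.Increment -> model.copy(count = model.count + 1)
        // Illustrative rule: the count never goes below zero.
        CounterEvent.Decrement -> model.copy(count = maxOf(0, model.count - 1))
    }

// Replaying a list of events yields the final state deterministically,
// which is what makes these features easy to test.
fun replay(events: List<CounterEvent>): CounterModel =
    events.fold(CounterModel()) { model, event -> update(model, event) }
```

Because all state lives in one model and all transitions go through one function, the "what is going on?" question has a single answer, unlike MVP or MVVM where state is scattered across the application.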
With this, you don't even need a computer; you just look at a piece of paper, look at the state diagram, and say, hey, okay, I think you missed something over here, right? Go fix it, right? And once the problem is solved, then you go to the computer, and you're just realizing the solution, translating the problem into code, right? You're not actually solving the problem in the IDE, because you've already solved it beforehand on a piece of paper, right? It's very inexpensive, and if you're wrong, you fail really, really fast. And this is one of the architectures that I would swear by: Mobius. There's also a Swift version of it, and there's also going to be a multiplatform version of it. So, Jithin, this probably might be of interest to you. So, these are the reasons why we are doing it, right? And there are some general guidelines about tooling best practices. Tools should provide quick feedback. If your tool is not providing quick feedback, figure out a way to make it give you quick feedback, or replace the tool, right? And pick tools that blend seamlessly into your workflow. If you're spending a lot of time working on the tool itself, then it's probably not the right tool, right? Minimize the number of external tools that come into play, and restrict reliance on debugging tools. Say, for example, there are several tools that let you debug efficiently; if you're spending most of your time using those tools, then something is probably wrong, because there's a serious problem underneath, right? And if you do have additional tooling, make sure that you use it for specific purposes, like looking at CPU and memory performance, things like that. These are some of the links to the ones that I've referred to already. And also, please use a Git GUI client. I'm not a purist, I'm a pragmatist, and Git GUI clients enable a lot of things, right?
They let you review your code before you push it, just like you would review someone else's code, and that's not really possible with your command line. And when you're doing some serious refactoring, then you will understand the value of a really good Git GUI client. GitKraken is my favorite, and Prathu is already booing me for telling you to use Git GUI clients. That's it. This is my Twitter handle, and you can find me on GitHub, Medium, and other places as well. And I don't think we have time for questions. One minute? You have one minute. Yeah. Right. Okay, I agree with that. For new workflows, it works spectacularly well. But if you're going to use it for refactoring, the problem I see is your code is not going to be green all the time. The moment you refactor something and your code stops compiling, I can either go to the Git GUI client and look at what changed, because I know what files have changed, or I need the compiler to tell me where my compilation errors are. The compilation error is a longer feedback loop for me compared to going to the Git GUI client. So I find it extremely helpful because it gives me very quick feedback cycles when I'm refactoring. Not so much when building new features, but when I'm refactoring the code, either you rely on your compiler, which is too slow, or you just go look at the refactoring in the Git GUI client and say, okay, I pulled this parameter up from this class into the constructor, and these are the three places where it is broken. So that's the quick feedback that I get during refactoring. Yeah. All right, thank you very much. You can catch me offstage or tweet at me. Thank you. I share the same sentiments.
Because we also use Git on multiple platforms, and there is no Git GUI client that is consistent across all of them, it took a lot of time to figure out what actually works. Sourcetree is also surprisingly effective when it comes to this. A graphical user interface client helps you visualize your commit history as a graph, and you can also inspect things by clicking on files, things like that. Yeah.