Okay guys, today we are very pleased to have Michael Cheng, a senior software engineer with Singapore Power Limited. He has more than 10 years of experience building dynamic websites, mainly using PHP. Right now he also does Ruby, and he runs the Singapore PHP User Group. He co-founded iOS Dev Scouts and is the organizer of PHPConf here. He also runs a lot of passion projects like Engineers.SG, which is a non-profit community initiative to document and archive the tech and start-up scene in Singapore. He will talk about software development in practice. So please welcome him, and I will hand it over to you. Yeah, okay. Hey guys. Hi, my name is Michael. So that's me. You can get at me on Twitter if you're so inclined. If you don't want to add me on Twitter, it's fine, I won't get offended. That's my email address. If you have any questions, you can always drop me an email and I'll be glad to answer them. A little bit more about me: I majored in History and Political Science. I did not major in Computer Science; I was in the Social Sciences. I kind of like history. I was from NUS, and I've been doing PHP for about 10 years. The Singapore PHP User Group is just celebrating its 10th anniversary, so we've been around for a while. The first startup I joined was a while back, in 2011. It was a small startup called Found, the first startup I ever joined, and it really brought my career to the next level. Previously I had been running my own business, and I was really tired of that. I wanted to focus on the craft, to get better at writing software, which is why I joined that company, and I really learned a lot there. From there, I joined another startup called MIG33. That company has since rebranded and is now called MIGMe; they run a social network.
They're pretty big in Indonesia, Iran, and some other third-world countries. Yeah, they're really big in those countries. When I left the company, MIGMe was basically PHP only. When I joined Neo Innovation, I had decided I needed a change of environment. I wanted to learn other things; I didn't want to be boxed into just PHP alone. Neo Innovation is a consultancy, a really big agile shop. They do Ruby on Rails and all sorts of stuff, and I learned a lot there too. Of course, Neo eventually got acquired, and they're now called Pivotal Labs. If you guys are looking for a job, you can look there as well, or at Singapore Power. That's where I am now, Singapore Power. Today we'll be talking about modern software development practices. Why modern? Because personally, I feel it's not just about the buzzwords. Building software nowadays is not just about, oh, it has to be agile, it has to be Scrum, it has to be TDD. No, for me it's become a way of life. It's become very natural to me to build software this way, which is why I say it's a natural way of building software, and it's about doing what works. Agility in practice: have you guys heard of the Agile Manifesto? Yeah, you have, right? Oops, pressed the wrong button. See, so many fancy things. Anyway, the Agile Manifesto talks about individuals and interactions over processes, working software over giant stacks of documentation, customer collaboration over contract negotiation, meaning you basically get things done and collaborate with the customer directly, and responding to change over following a plan. Did I do the wrong thing? Right, sorry. So the Agile Manifesto is pretty much a bunch of principles and guidelines. It's not very prescriptive about your situation right now.
When you're writing software, it doesn't really tell you how you should go about doing it. It's more a set of principles that, the authors feel, could probably help you build better software. But they're not very prescriptive. So how do we put this into practice? For many of us, people in my company and many of my peers, we don't care about buzzwords, we just care about getting things done. And not just that: we care about getting the right things done, and we care about getting them done the right way. It's about building the right thing and building it the right way. That's pretty much what building software in the modern era is about. Oh man, okay. This is embarrassing. I'm an IT professional and I don't know how to operate this thing. Apologies. So anyway, we talk about building the right thing, and we also talk about building things right, in the right way. When we talk about figuring out the right things to build, we talk about building lean. Okay, it is a buzzword. Yes, building lean is a buzzword, unfortunately. You'll find my slides are full of buzzwords, but I don't really care about the words; what I care about is what we actually do. Building lean is the process of figuring out what is the right thing to build. It starts with a hypothesis, a belief about what customers want. For example, we might have a hypothesis that customers at Starbucks would like to pay with Bitcoin. From the hypothesis, we look at building something. As a startup entrepreneur, or maybe working for one, we help them through this process: what do you want to build, and why do you want to build it? The why is your hypothesis: basically, I believe customers want to do something. In this case, it's about the customers.
You, your CEO, or your co-founder may feel that there's a certain customer need, something that customers want. In this example, we think customers at Starbucks really want to pay with Bitcoin. So what we do next is some simple building. How do we validate that idea? We build something, we validate it, we iterate. In this example, we build some really low-fidelity screenshots. Before writing even a single line of code, you have to validate the idea. So we start by drawing some simple screens and showing them to customers, a bunch of people at Starbucks, and the hypothesis comes back either valid or invalid. It's about validating the idea before you write a single line of code, about checking whether customers really do want this. Once you know whether the hypothesis is valid or invalid, you go back and repeat with higher-fidelity screens, until you come up with a proper working product. MVP: do you guys know what MVP is? No? Yeah, basketball MVP, yes. But MVP also stands for minimum viable product: the minimum set of features a product needs for users to fully experience your application or your software. In Lean Startup, it's about not building the whole shebang of features, the whole kitchen sink, but finding the minimum set of features that will let users experience your application, delight them, and maybe get them to buy more things on your website or use your application more often. So building lean is pretty much about asking the right questions, and being willing to ask questions, not just of yourself, but of your colleagues, of customers, and of everyone around you, even your employer eventually. Why are you building this?
What hypothesis are we trying to prove here? Is this really what customers want? The people you talk to may not always be receptive, so a lot of this is about gathering data and showing evidence of whether this is the right thing to do or not. Building lean is pretty much about gathering information to support your ideas. At Singapore Power, we also believe in building lean. We do customer surveys to find out whether what we're building is really what users want. For example, for an application we're building right now, instead of releasing it to the entire Singapore population, we decided to start by releasing it only to the people in our office. We built a mobile app that does some transactional things, and instead of releasing it to the entire world, we're releasing it just to people in our company. From there, we slowly get feedback and find out what the right things to build are, what the next step should be. For example, our original hypothesis was that customers probably don't need to change their password once they're logged in. But we felt there might still be a need, so we built the password-changing interface as a web application rather than putting it in the app. We'd have an administrator sit with the customer and say, oh, you want to change your password? Yes, I'll change it for you right now. But once we released the app, the people in our office started coming to us: hey, I keyed in the wrong password, can I change it right now? And it seemed a lot of them were having the same problem. So we realized, hey, we actually do need that feature in the app. Had we released this to the entire world, we'd probably have gotten one-star or zero-star ratings on the App Store, which would not have been a great experience for us.
So, slowly validating your idea and slowly expanding into a more sophisticated product: that's pretty much what building lean is about. Have an idea, build it, write the code, and measure, getting measurements of whether users are actually using the product, whether users are actually using a feature. For the registration page that I built for MIGMe, we wanted to decide whether to break the sign-up form into one page or two pages. On the first page we'd ask them to key in their email address, then on the second page we'd ask them to key in their password. So we did A/B testing. Sign-up friction is about how fast users can get into the app, how fast they can sign up and start experiencing your application, and we wanted to reduce it. The hypothesis was: if a user sees four or five fields on the same screen, do they feel paralyzed? Oh my God, do I have to give you my email address, this thing and that thing? We felt that could be a problem, so let's try breaking it into two pages. That was our hypothesis. So we wrote the code and did the A/B test. For 50% of the users signing up, we gave them a single-page form with four fields: email address, mobile phone, password, and password confirmation. On the B side, the first page had just the email address and mobile number, and the password was on a second screen. We released it and started getting data. We had a log file, very simple, a log file doing the counting: how many users were hitting page one, how many were hitting page two, and how many were completing the sign-up process. And what we found was that users preferred the single-page sign-up over the two-page sign-up. And we were puzzled.
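The measurement behind that was nothing fancier than counting events per variant in a log. Here is a minimal sketch of that kind of funnel counting; the log format and event names are invented for illustration, not taken from the actual MIGMe system:

```python
from collections import Counter

# Each log line records "<variant> <event>", e.g. "A page1" or "B completed".
# These sample lines are made up to show the counting idea.
log_lines = [
    "A page1", "A completed",
    "A page1",
    "B page1", "B page2", "B completed",
    "B page1", "B page2",
    "B page1",
]

# Tally how many times each (variant, event) pair occurred.
counts = Counter(tuple(line.split()) for line in log_lines)

for variant in ("A", "B"):
    started = counts[(variant, "page1")]
    completed = counts[(variant, "completed")]
    rate = completed / started if started else 0.0
    print(f"variant {variant}: {started} started, "
          f"{completed} completed, {rate:.0%} conversion")
```

Comparing the conversion rates of the two variants is what tells you which hypothesis the data supports.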
We thought having two pages would reduce sign-up friction. But the truth was, users were accessing our website on mobile devices, and for MIGMe that was mainly in third-world countries where the network was really slow. A lot of them were still using Nokia feature phones, and on a slow network, loading one page after another was a hassle. On top of that, navigating with a keypad and joystick on a Nokia phone wasn't a very pleasant experience. So we concluded that our hypothesis, that users prefer two screens, was invalidated, and we switched back to a single form. Having data helps us measure and realize what the right things to build are; data helps confirm whether our hypothesis is true or false. So, I've talked about building the right things. But we also want our process to be set up in a way that helps us build software effectively. And not just effectively, but sustainably. If you're in a startup working long hours, you may end up burnt out. We want a process which is sustainable, which helps you retain your passion for coding, continuously improve yourself, and continuously find the strength to get things done. So, on to building it right. Building software is a team sport. I'm sure you guys are working on projects right now. Is it a team project or an individual project? A team project, right? Cool. Building software isn't a one-person thing. A brilliant 10x engineer can probably do it himself if he wants to, but he probably won't get enough sleep, he'll drink a lot of caffeinated drinks, and he'll probably burn out after two months. Being in a team helps us spread out the burden and the stress. So, building software is a team sport.
It's not just about you as a developer; there are other stakeholders as well: business owners, product owners, customers, other developers, even investors, and so on. And the people who buy you coffee and stuff. No, wait, that's also true. But do you understand the difference between a business owner and a product owner? No? Yes? Okay. In a company, the business owner is usually the CEO, the person who owns the company and therefore owns the business. But a company may not have just one product, right? They may have a utilities app, a web application with the same features, a payment app somewhere, and other things going on. The business owner owns the business. The product owner is the person in charge of a particular product that's going out, say, a mobile product. The product owner works closer to the team and is the person with the final say on what goes into the next iteration of the application. In this kind of team setting, communication is really key. We want to make sure everyone understands what we are building and why. And sometimes, as a team, you also have to agree on your practices. What time are we meeting? How often do we meet? How do we agree on the priorities of the user stories and features that need to be built? Communication is really key, and working as a team is much better than working on your own. I will share with you four practices that have worked for us at Singapore Power, and for me working as a consultant in other companies. These are four practices that I feel have helped us build things the right way.
They are: design and build just for right now; have automated tests; YAGNI, and I'll tell you what that is later; and clear up issues early to get faster feedback on features. Number one: design and build for right now. Engineers, being engineers, have a tendency to over-engineer things. Do you guys do that too? Yes? Oh, I've got to think about this feature. Why would users do it this way? Why would users use that particular feature? What if? But the truth is, future-proofing and considering all the possible use cases is actually wasteful, because you're just imagining how users might use the product without actually validating that it's really what they want. So what we want to do is build for right now: whatever you feel is the easiest way to get things done and get users experiencing your application or software. We also have the problem of the unholy trinity of better, faster, cheaper. Pick any two, right? Sometimes your customer wants things better, but they're not willing to pay for it, and they want it faster. Not going to happen. Let me show you a little graph. Clients want it all: they want it good, they want it cheap, they want it fast. If it's built fast and cheap, it's probably a low-quality product. If they want it good and cheap, it's going to take some time; it will be good, but it will take time. And if you want things done faster, it's usually more expensive. So usually we want to keep ourselves within one of these two zones. The middle one is not really possible; those are unicorns. If you can find companies that will give you time, give you all the money, and let you build things well, those are unicorn companies; stick with them. I've not found one yet. As a consultant back at Pivotal Labs: Pivotal Labs is known for not being cheap.
We're not the cheapest consultancy around, but we build good software, and we build it fairly fast. So Pivotal Labs, and Neo Innovation before it, sat in this zone. When customers come to us, we tell them: you've got to pay this amount first, and then we'll talk. But working in a company like Singapore Power is a bit different. We want to build things well. Fast is sometimes a negotiable concept, though we do have milestones to reach. And cheap? I don't think we're cheap. Anyway, you understand what I'm saying. So, better, faster, cheaper: it's about finding out where your customers are most comfortable. Since I've worked in quite a few startups in the past, I'll give you a startup example. In a startup, when you first get funding, you have a limited runway. The runway is how long your money will last while you build. Say a company gets half a million dollars; that might last about a year, once you account for renting an office, paying a couple of engineers, marketing people, sales, and whatnot. When you get funding, that money will last you some number of days, depending on how frugal you are, how well you pay your engineers, and how much you invest in hardware and all that. So you have a limited runway, which means a limited budget, and you really want to get as much traction as possible, basically get as many people as possible using your app. Once you have users, you can get to the next level, which is either to get more funding, convincing investors to put more money into you, or to go the other way and start charging your customers and get some money from them.
So essentially, in a startup, your timeline is very compressed, because you want to deliver value in the shortest possible time. You really want things done fast. Cost-wise, there's a limited budget of X dollars, and you have to get it done anyway. So the concept of better, of how good you want it, is sometimes a negotiable thing you can discuss among your team: how much do we need all these features right now? No, let's pare it down. I remember when we were at Found, we did the first pitch to get our first half million dollars. Our CEO, Danny, had a whole bunch of ideas, a kitchen sink of ideas: users can do this, can do that. But then he realized: look, we need to make a pitch, we need to get investors for our first seed round, and we need to show them some slides. For him there was a very clarifying moment where he said, look, I think we need to start shaving away stuff. What is the core user experience? What is the core set of features that will help users experience the application in the best way we can offer? So he just started throwing away features: I don't need this right now, I don't need that right now, let's just focus on what needs to be done. And he came up with a very beautiful set of slides showing just the core features. With those slides, he got us the $500,000 in funding, which is kind of cool. So, that's building for right now. But as a software engineer, you also want to make sure it's easy to make changes in the future. You have to build in such a way that changes don't become too expensive. Think about legacy code: code that was written a long time ago is going to be harder to change. Why?
Because maybe it was written in a way that wasn't easy to change. It was built for a certain purpose, but not in a way that makes future changes easy. Object-oriented programming helps you think about code in a way that is composable. The design patterns you will learn in this course will probably help you design code in a way that makes it extendable and cheaper to change. And that's important in a startup. I'll give you an example from MIGMe. MIGMe was a chat application, and it relied on a bunch of Java servlets in the background to make chat work and be super reliable. But it was built in a way that was very inflexible, and whenever we needed to add a new feature, it took the team months to roll it out, which is terrible. If you're in a startup which only has one year's worth of money, three months is a long time. So building in a way that makes future changes easy is crucial. Of course, don't overthink it, because overthinking means: oh, we should cater for this future possibility, and that one, and that one. It's a balance; figuring out how much of that to do is a balancing act. So that's designing and building for right now. Next, automated testing. How many of you actually write unit tests? Do you even know what a unit test is? Okay. Sorry, that's a preview. So how do you test an application right now, when you write your code? How do you check that it actually works? Test cases? Test cases, okay. And those test cases are what: users should be able to do this, users should be able to do that, and you just manually click through? Yeah, okay. That's manual testing.
In agile, we value rapid feedback. Okay, let's talk about this. The agile movement: there were a bunch of guys who felt there had to be a better way to write software, and the whole thing is called the agile movement. Again, it's a buzzword, and I don't really care about it, but it captures what we feel is the best way to write software right now. Agile is about being quick, being responsive to change in the business world, because you want your code written in such a way that it can respond quickly when a user or customer has a certain need. The traditional SDLC, the Software Development Life Cycle, follows a waterfall methodology, which asks you to come up with a specification early, a giant manual of software specifications, and then you take years to write the software before it ever sees the light of day. A lot of government applications, what you see on any of the government websites, take years to build. Even applications at Singapore Power take years to build, because they follow the old way of doing things. The problem with writing software that way is that by the time it's released to the public, the customers may already have changed their minds: I don't really want this feature at all. So being responsive, being able to change quickly to respond to market changes and customer requirements, is crucial. That's how the agile movement came about: we want to write software that is more responsive and more useful, and we want to validate ideas quickly. And in doing that, we also want rapid feedback. We want to make sure that when we write a piece of code, that code is actually working.
We can always test manually, but sometimes that's not possible, because you haven't built the web interface yet. You have a piece of code over here talking to your database, your API is here, and your mobile app, which isn't ready yet, is over there. How do you test that API? You can use tools that poke your API server, or you can go even lower by writing unit tests. Unit tests are programming code, code that verifies your production code. I think next week you guys will go into that, so consider this a preview. Anyway, we also hate repetitive tasks: opening your browser, clicking on something, doing a million clicks before you realize, okay, that piece of code is working; oh no, there's a bug in this piece; then changing the code and opening the browser all over again. It's very manual. That's manual testing, and we don't like it. I don't like it. What automated testing is really about is writing code, and having software that runs, to verify your production code. You have a bunch of testing code that verifies your production code, and you can either run an individual test or an entire suite of tests. To give an example: you're building a web application and you want users to sign up. So: users should be able to go to the homepage, click on register, fill in the registration form, click submit, receive an email, and have their account created. And I should find myself on the landing page saying, welcome Michael Cheng. This entire sequence of events has to happen.
If you're writing that software and you want to do the testing manually, you have to open a browser, do this, do that, then check your database: is this record there? And if there was a bug somewhere in between, you have to go back and repeat the whole process. Those are repetitive tests, and all of that can be automated, because there's software that can open a browser in the background, do all that clicking, and then even check your database: is this record in the database right now? Is the email address in the database the same as what was keyed into the registration form? So automated testing is basically having software that verifies your code, running in an automated fashion: you click a button or run a command in your terminal, and it runs through the tests. Yes? "So by automation, you mean the process is automated, but the scripts are still written by a human?" Of course, of course. They're not written by a robot; they're written by you as a developer. In writing automated tests, you write the tests yourself, because you know the code you're going to write and how you want it to be tested. And this whole process of writing the tests, then writing the production code, is what we call test-driven development, TDD. Where is it on the slide? No, I didn't put it here. Okay, fine. So, test-driven development: you use your tests to drive the design of your code. Take my earlier example of user registration and sign-up. The first step is to launch the browser and go to the home page. If I go to the home page, what should I see? On the home page, I should see a button called register.
So you write your test. Your test will say: visit http://example.com, and I expect the word "register" to be on that page. Once that assertion is true, it goes to the next step: click on register, which actually automates the browser. The browser opens, finds the text, clicks on it, and goes to the next screen. Then the next step in the test will be: I should now reach register.php. And on register.php, I should find the fields: first name, last name, password, email, and you fill all those in as well. So you're writing code that describes the behavior, how your code should behave. Then you run your tests and they fail, because you haven't written the production code yet. So what happens next? You start writing your code. You first write the test, which fails because the production code isn't written yet; the page isn't there yet. Then you write the code: a simple page, the index.html, with a hyperlink that says register; then you create a new page called register.php with the form fields and all that. That's when your tests start to go green, because they're now passing; your code is behaving the way you expected it to. But it doesn't stop there. Why not? You can write more tests, yes. But there's another step you can take, and that is refactoring your code. Do you guys know what refactoring is? Okay, good, I see some nodding here. When you wrote the production code, we have this belief in agile that you write just the minimum amount of code to get things to work. It's in line with not overthinking things, not over-engineering things: just focus on writing the minimum piece of code to get your test to pass.
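That test-first, then minimal-code rhythm can be sketched at the function level with plain unit tests. This is not the browser-driven flow described above, just the same idea on a smaller scale; the `validate_registration` function and its rules are invented here for illustration:

```python
import unittest

# Red: this test class is written first. Before validate_registration
# below existed, running it failed with a NameError, which is the
# failing test driving the design.
class TestRegistration(unittest.TestCase):
    def test_valid_input_has_no_errors(self):
        self.assertEqual(
            validate_registration("mich@example.com", "s3cretpass"), [])

    def test_rejects_bad_email_and_short_password(self):
        errors = validate_registration("not-an-email", "abc")
        self.assertIn("invalid email", errors)
        self.assertIn("password too short", errors)

# Green: the minimum production code that makes the tests above pass.
def validate_registration(email, password):
    errors = []
    if "@" not in email:
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password too short")
    return errors

# Run the suite programmatically (same effect as `python -m unittest`).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestRegistration)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The refactor step would then be revisiting `validate_registration` itself, with these tests as the safety net.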
So you write four lines of code here, four lines there, run your test, it's passing, hooray! But there may be situations you haven't taken care of yet. You may have copied and pasted a bunch of stuff from file A to file B, copied a bunch of functions from a shared library just to make sure it works. You may have taken a lot of shortcuts along the way. Refactoring is about going back into the code and asking: okay, what shortcuts have I taken here? Have I been copying and pasting stuff? Could this function be broken down into smaller functions? Because I want the code to be more readable: when I look at a function name, I should understand what the function does. It's about revisiting your code and finding ways to make it better. That's refactoring. This whole cycle of red-green-refactor is what TDD is about: using your tests to drive your code. And when you have tests, say for the registration form, they give you a lot more confidence to go back into the code and move things around. I throw away this piece of code here, because that function is available in another file that's loaded in the header; I keep this one, but refactor it into a smaller function; I rename a variable there; and so on. The biggest fear of any software engineer is making some changes and having the entire application break. It was working before; why is it not working now? So how do you protect yourself from this? With tests, because you have code that checks the behavior of your application. When your tests are passing, good. Then you make a lot of changes and something breaks. Oh no, I've introduced a bug somewhere. Let's see what I've done.
Oh, okay, it was this line of code — I forgot to put the semicolon at the end. Simple stuff, right? So you use your tests. Your tests are basically your safety net that helps you make sure your code actually works the way it is supposed to work. So your tests are there, you write your code, you refactor — basically revisiting your code and finding out how to make it better — and then you repeat the process by writing the next step in the test, right? So tests also help us clean up and refactor our code with more confidence. You also guard against regression bugs. Regression bugs are basically: I make some changes here, and then some other feature in another part of the application breaks. How did this piece of code break that piece of code? Because you have tests that protect you, that check that other piece of code, you'll know your changes here may have introduced a bug. If you did not have tests, you would not have known that the other file was broken, right? So tests help you protect your whole application, and they also make your customer a bit more confident in your capabilities. I've had this experience working on some code during a hackathon — you know, those 24-hour kinds of things. I was working on a piece of code at 2 o'clock in the morning, and I was so tired. But because I had tests, I didn't need to think so much. I didn't need to worry about writing code that could have broken something. It was like: oh wait, I introduced a bug — and the test told me which line was broken. Right, I can fix that right now, and I can focus my energy on writing the feature. Even for very simple things, when you're tired, you have tests that say: my function takes in this input, it should give me this output. When you're tired, you don't want to worry about how you should organize things — you have tests that tell you how your code should behave.
In whatever capacity you have left in your awareness, you can just write your code, and at least your tests will help you verify that your code is working. Does that make sense? Okay. Have I lost anyone yet? Are we all okay? Okay. So this again is the cycle. The red means your test is failing — you have not written your code yet. It goes green when you make the code work. And refactoring is about things like eliminating redundancy, polishing up the design of the code, and so forth — making your variable names more understandable: not just a, b, c, but user_name, email, rather. Okay, so that's red-green-refactor. When you learn about TDD next week, you'll probably get to know more about this. A bit more about automated testing: how much testing should you write? I talked about how having a lot of tests gives you peace of mind over your code. We have this concept called test coverage: what percentage of your production code, in terms of lines of code, is being covered by your tests? Say you have a test that exercises function A and function B, but there's an if-else statement in there, and your test only exercises the if part and not the else part. That means there are a few lines in your production code that are not tested. So what do you do? You could write another test that exercises the else branch. That would give you 100% coverage, because you've written an extra test. But 100% test coverage is not necessarily a desirable goal in itself. It's good to have, it looks nice — but it's a vanity metric. A vanity metric means it just looks pretty: 100% looks pretty, a high score looks pretty. But it can be a pain to maintain, because you need to write and keep up a lot more tests. And again, 100% test coverage doesn't mean your code is bug-free.
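The if/else coverage point can be made concrete with a small sketch in Ruby with Minitest. The `discount` function and its threshold are hypothetical, chosen just to show that each branch needs its own test:

```ruby
require 'minitest/autorun'

# Hypothetical function with an if/else: each branch needs its own
# test before every line of it is exercised.
def discount(total)
  if total >= 100
    total * 0.9 # the "if" branch: 10% off large orders
  else
    total       # the "else" branch: full price
  end
end

class DiscountTest < Minitest::Test
  def test_large_orders_get_ten_percent_off # exercises the "if" branch
    assert_equal 90.0, discount(100)
  end

  def test_small_orders_pay_full_price # exercises the "else" branch --
    assert_equal 50, discount(50)      # without it, coverage stays below 100%
  end
end
```

With only the first test, a coverage tool would report the `else` line as untested; the second test closes that gap.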
Even though you have exercised every single line of your production code through your tests, it does not mean your application is bug-free. Why? Because an undiscovered bug is actually a test you have not written yet. Your application, based on different inputs, may have different behaviours, right? And when your application reaches what we call userland — when users start using your application — they use it in ways you didn't expect. I was not expecting you to key in an exclamation mark in the comments field, because I have a whitelist of allowed text: only alphabetic characters and numbers are allowed, an exclamation mark is not allowed at all in my comment field. Which actually did happen, in the application I'm working on now. I was like, oh nice, I can write a comment there — I put an exclamation mark, and oh, it doesn't work. In another application we have at SP, it doesn't take emojis. You key in an emoji, it crashes the server, unfortunately. So yeah, a bug is basically a test you have not written yet, right? Even if you have 100% test coverage, it doesn't mean you have no bugs. So how do you guard against this? As a team, you do what we call code reviews: basically, you help each other look at each other's code. There are also things called code scanners. Have you guys heard of this website called Code Climate? You can open your browser and go to codeclimate.com. It's a website that scans your code — running your unit tests, doing coding-standard checks on your code. It tells you: oh wait, you've done a lot of copy-pasting in your code, and it even highlights the lines of code you copy-pasted from other parts of your application. So yeah, there are a bunch of code scanners — I think even Java has code scanners, static code analysis tools — which can help you check all these things. So, the bottom line is: invest time in preparing a test suite.
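The exclamation-mark story is a neat instance of "a bug is a test you have not written yet". Here is a minimal sketch in Ruby with Minitest — the validator and its whitelist are hypothetical stand-ins, not the actual SP code:

```ruby
require 'minitest/autorun'

# Hypothetical comment validator with a whitelist: only letters, digits
# and spaces pass, so '!' is silently rejected -- the bug from the talk.
def valid_comment?(text)
  text.match?(/\A[A-Za-z0-9 ]+\z/)
end

class CommentValidatorTest < Minitest::Test
  def test_plain_text_is_accepted
    assert valid_comment?('Nice talk')
  end

  # The test nobody wrote until a real user keyed in an exclamation mark.
  def test_punctuation_is_rejected_by_the_whitelist
    refute valid_comment?('Nice talk!')
  end
end
```

Both tests pass and coverage is complete, yet the rejection of `!` is still a bug from the user's point of view — which is exactly why coverage alone is not enough.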
It's genuinely worth it. I'm not saying it's easy to write tests — as a person who is not used to writing tests, your productivity can drop by up to 70% in the first few weeks, because you're not used to writing tests at all. But once you get the hang of it, it becomes second nature, once you've got that muscle memory in. For me right now, when I look at a piece of code and it's not tested, I get the jitters. Oh, shit, how do I know this piece of code actually works? Right, let me fire up my code editor and write some tests. Automating your testing in some way helps you a lot. And in testing, we also have this concept of a test pyramid. I've talked about how automated testing and TDD can help you in designing your code, and about test coverage — how many percent of your lines of code are being tested and exercised. The test pyramid talks about the types of tests that you should have in your test suite. Early on, I talked about opening the browser, clicking on the button, going to some page and all that stuff. That's a UI test. In other places, they call this an integration test, because it integrates an entire slice of your application from end to end. Those UI tests are actually quite slow, because they have to open up a browser, load a web server, load this other thing and that other thing. They're very effective, but they take a long time to run. And in agile, we value rapid feedback: we want to know, as fast as we can, whether our code is actually working or not. So we go all the way down to the unit tests. Unit tests are basically function-level tests that test your internal functions — like this particular method in your Java class.
Does it actually behave in a certain way? If I give it 1 plus 1, does it give me 2? It's as simple as that — a unit test can be as simple as that. So in your test suite, the whole bunch of tests that you write, write as many unit tests as you can, because those are low-level and they usually run really fast. And keep your UI and integration tests — the automated testing of a browser and all that stuff — to a minimum, because those things take a long time to run. Third: YAGNI. What does YAGNI mean? You ain't gonna need it — right now. It comes back to that whole concept of building for right now. Sometimes building lean means you have to make some compromises on your grand vision until it's validated. You may have the idea of: oh, my application should do this, do that, do this. But you may not be ready for that yet. Your users may not be ready for that yet. So sometimes I keep asking myself, or I ask the product owner: do we actually need this? Will the product suffer from the lack of this feature? Will the user experience be any less delightful? If the answer is no, you probably don't need that feature. It's not that we'll never build it — it's more like we don't need it right now, so let's keep that idea somewhere and come back to it later, maybe when we have more users or more time. We have limited time and limited runway in terms of funds — this is, again, a startup situation. You want to build traction for your software, but you have to prioritize how much time you want to invest in each thing. Am I almost out of time? Oh, dear. Oh, shit. Okay. Next one: clear plumbing issues. Basically, try to remove blockers to rapid feedback. Make sure you've got your project management tool set up properly, version control, continuous integration, all that stuff. All buzzwords again, but you will come to understand what they are.
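To make the bottom of the test pyramid concrete — the "1 plus 1 gives 2" unit test from a moment ago — here is a minimal sketch in Ruby with Minitest (the `add` function is a hypothetical example):

```ruby
require 'minitest/autorun'

# The base of the test pyramid: a plain function test. No browser, no web
# server, no database -- it runs in milliseconds, so you can have lots of them.
def add(a, b)
  a + b
end

class AddTest < Minitest::Test
  def test_one_plus_one_is_two
    assert_equal 2, add(1, 1)
  end
end
```

Because a test like this has no external dependencies, you can run thousands of them in the time one browser-driven UI test takes — which is why the pyramid is wide at the bottom and narrow at the top.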
Make sure these things — project management, version control, continuous integration — are set up before you even start writing a single line of code. That will make it really smooth and easy to get things done. So I've talked about building the right things and building things right. We also want to talk about how you should do it in a sustainable manner, because the last thing we want is for our engineers to burn out. Basically, it's about reflecting, evaluating, and growing together. Keep learning from each other — don't be shy about it — and keep sharing and building knowledge as a team. So here's what we actually do at Singapore Power. We do a flavour of agile called Extreme Programming — you can Google this later and find out what it is. At Singapore Power, when we start a project, we have a project kickoff and scoping session where we get everyone on the same page. An example would be this: everyone puts up stickies on the board to say, what is our product goal? What is our business goal? What are the things that we don't care about? And then we come up with user stories. A user story is a single card that describes what a user should be able to do — a single task that the user can do on the website. So we have a whole bunch of these that we've put together; this is for an actual project we're working on right now. Once we have that scoping, we do what we call an iteration planning meeting. On a weekly basis, we check in with each other and with the product owner and ask: are we on track? Are we building the right things, the things we need to do right now? And we use a tool called Pivotal Tracker. If you have not seen it before, it's basically an agile tool that lets you manage your projects. All the cards you saw on the wall go in there, and each card translates into one story like this.
So: "iOS open account to include a message of SD", whatever — app crashes. This is a bug, for example. This one is a feature, email: as a user, I should receive a payment receipt as a text-based email. So that's a story, right? We have a bunch of these stories in the current iteration and in the backlog. The backlog holds the things that will happen next week. And we have this thing called the Icebox. This is where YAGNI comes in — you ain't gonna need it now, so why would we put it in the current iteration? We put it in the Icebox and revisit it later on. And we have daily stand-ups, where every day we have a check-in with each other as a team. We talk about progress, we talk about blockers. So this is what we do. And once the code is done, we release it — we use continuous integration and continuous deployment to get things out into production. And every few weeks, we do this thing called a retrospective. A retrospective is basically a time where we come together as a team and talk about what went right and what went wrong. This is how we usually do it: we have a retrospective board with a smiley face, where we talk about all the happy things that happened, and a sad face, where we talk about all the sad things that happened in the past few weeks. And the stuff in the middle is kind of like, eh, I'm not too sure whether it's good or bad. Of course, writing code is important. As developers, we pick up a story on Pivotal Tracker, which I showed you earlier. We do TDD. We do pair programming — I'll show you a photo of that later. There are many ways to do pair programming: driver-navigator, tester-coder, pilot-co-pilot. I don't have much time to go into this, but go figure it out. And we do commit and push to GitHub.
That triggers a continuous integration build on Jenkins, which tests our code and integrates everyone's changes. We also do pull requests — do you guys know what a pull request is? Okay. And once a pull request is accepted, it triggers a deployment into production, or rather into the QA environment. This is an example of us doing pair programming. And this is an example of us doing what we call mob programming — as a mob, four sets of keyboards, everyone writing stuff on one shared screen. Pretty cool. Different team roles: we have developers, DevOps, team lead, product owner. In a larger team, we also have other roles like business analysts and so on and so forth. So yeah, this is usually how it is on a bigger team. So, what's next for you? I hope you guys get involved in the community — Nikil and I are both involved in the community as well. Go check out meetup.com and Facebook groups. There's this website called We Build SG that I want you guys to check out — there's a whole bunch of stuff there. Meet your peers; it's a place where you can share knowledge and participate in open source projects. There's a whole bunch of events there. Engineers.SG is a website I run — we have videos of all the tech talks happening around Singapore. Check it out. KopiJS — this is where you come to meet people, have coffee, and talk about stuff. I also want you guys to hang out with us. Meet your peers, meet your future colleagues, meet your future employers and all that. This is something I run every six months called Geek Brunch SG: we hang out at this place called Wheeler's Yard, have coffee, and meet a lot of people. The last one we had — this is the photo of all the attendees — about 100 people, mostly engineers. You'll meet a lot of cool folks. Questions? No? Okay. So, you want to revisit my slides?
Sorry, I went through the last few slides quite fast. You can download them on Speaker Deck. That's me again. And yeah, we're hiring. Okay, that's it. Anyway, you can email me if you have any questions, or mention me on Twitter.