Hello, thanks for coming, and thanks for staying awake after lunch. I hope you're all having a good time. For most of us present here, it's probably safe to assume that you're working on apps at one of these stages: either a lot of users are using them, or a lot of engineers are joining your team. Scalability on the app side can fairly be described along these two verticals; how loaded your backend servers can get is a different topic altogether. We at Twitter faced these problems a couple of years ago while building our app, when things were going berserk all the time, and that's what this whole talk is about. I'm Aman, I'm a developer advocate at Twitter, and I sit out of the Bangalore office in India.

Before we start digging into what we did, let's take a look back at how the Twitter app was in 2011. This is the Android app, and this is how it used to look: three developers, one project manager and one designer. This was the only team that decided what went into the app, which features got shipped, and when they would ship. These few people were pretty much the dictators; only they saw what was going on. Because of this, the release cycle was somewhere around eight to twelve weeks. Fast forward to now: we have many more developers, different project managers and different designers. These developers form different teams, and every team works with different designers and different project managers. We used to release quarterly; now we release every week. That's a pretty intense change. So it makes sense to talk about why we made all these changes, and what exactly we did.

While making these changes, we had three goals in mind. We wanted our features to get to users very fast; that's why we moved to weekly release cycles. Then we added a lot of engineers. Some people question why we have so many engineers on our Android team, since mobile development teams are usually a fairly small number of people doing a lot of things together, and there's a well-known argument that if you have an engineering problem, just adding a lot of people doesn't necessarily solve it. It turns out that's not entirely true if the only bottleneck you're facing is sheer velocity: if throughput is the one thing holding you back, that can be solved by adding people. And if the architecture of your app is done in a way that features are technically largely independent, adding a lot of people while keeping the same schedule won't cause a lot of problems.

This brings us to scope. What we used to have before was a small number of people working on a few very key parts of the app. We would first gather all the things we could do in one sprint, then freeze that sprint and build it out. That meant that even if there were features or problems in the app that were really deficient, we had to ignore them. What we have now is a lot of people in different teams, all working on all parts of the app all the time. This had implications for quality, because when a lot of people are doing many different kinds of work, things are bound to break, complications increase, and there's always an emergency scenario coming up.
So while building our release strategies, testing policies and automation policies, we had to keep all these factors in mind. Before moving forward, let's also discuss what actually happens when you scale in terms of user base and engineering. These changes are not very visible, but they do happen.

First of all, communication time goes up. When there are a lot of people on a team, it's very important for everybody to be on the same page, but everybody perceives information differently and every engineer is on a different track altogether. So it takes a lot of time for everyone to understand the goals and the way we're trying to proceed. Then the contributor pool is always changing. What this means is that even if the team size stays the same, people are always moving out and coming in, and there will always be people who are new to Android, new to Twitter for Android, or new to Twitter altogether. We had to take all of that into consideration as well. Apart from that, because there were a lot of people, complications also increased. For example, at one point we were so busy building strategy around things like getting devices for all those people to test on that we completely forgot about our continuous integration server, and it crashed by the end of the month. Everybody was halted; they couldn't work anymore. So we had to look at that too. These things will always happen; they're not always visible, but they will happen.

We also had to have a very good testing policy, because with this many people landing patches all the time, things are always going to break. There will always be people coming up to you saying, this is an emergency, we have to get this in. And if you keep giving in to those requests, you develop a bad habit of always delaying your release cycles. We did not want that, so we accounted for all of this in our policies.

So we built, found, and started using different tool sets. Let's discuss them. First, we built feature switches. What this does is: when our app starts, it downloads a rule in JSON format from our server, which tells the app which features are currently visible and available to users. This helps us in various ways. For example, if a feature is not yet complete, we can still ship it and disable it in our feature-switch rules, and that feature will never appear in the app unless and until we want it to. Or say some particular feature, like the video player, started breaking on a phone like my ASUS. What I would typically have done otherwise is build a patch very quickly, fix it for ASUS and release an update. But the people who are not on an ASUS device would also have to update the app; they'd see an update that had nothing to do with them, and that's a cranky experience. People ask, why should I update this app all the time? So instead, we would silently disable the video player on ASUS devices. ASUS users opening the video player would see a blank screen. That obviously makes them a little upset, but it doesn't piss them off, because the app is not crashing for them; and at the same time, all the other users are happily using their app and see no difference. It also buys us a little more time to dig deeper, and we can then provide an update a day or two later. We can target by device, by feature, by manufacturer, and by OS version. A minimal sketch of what such a switch might look like on the client follows.
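To make the idea concrete, here is a minimal sketch of client-side feature switching, assuming a JSON rule shape invented for illustration; the class name FeatureSwitches, the feature key video_player, and the rule fields are all hypothetical, since Twitter's actual implementation is not public.

```java
import org.json.JSONArray;
import org.json.JSONObject;
import android.os.Build;

public final class FeatureSwitches {

    private final JSONObject rules;

    // rulesFromServer is the JSON blob downloaded at app startup, e.g.:
    // {
    //   "video_player": {
    //     "enabled": true,
    //     "disabled_manufacturers": ["asus"],
    //     "min_sdk": 16
    //   }
    // }
    public FeatureSwitches(JSONObject rulesFromServer) {
        this.rules = rulesFromServer;
    }

    public boolean isEnabled(String featureName) {
        JSONObject rule = rules.optJSONObject(featureName);
        if (rule == null) {
            return false; // Unknown features stay dark until a rule ships.
        }
        if (!rule.optBoolean("enabled", false)) {
            return false; // Globally switched off, e.g. an unfinished feature.
        }
        // Per-manufacturer kill switch, e.g. disable the video player on ASUS.
        JSONArray blocked = rule.optJSONArray("disabled_manufacturers");
        if (blocked != null) {
            for (int i = 0; i < blocked.length(); i++) {
                if (Build.MANUFACTURER.equalsIgnoreCase(blocked.optString(i))) {
                    return false;
                }
            }
        }
        // Targeting by OS version.
        return Build.VERSION.SDK_INT >= rule.optInt("min_sdk", 1);
    }
}
```

A call site would then simply guard the feature, for instance `if (switches.isEnabled("video_player")) { showPlayer(); } else { showUnavailableState(); }`, so disabling a feature server-side takes effect on the next app start without shipping an update.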
Then we also did a lot of automation. There were always people who would come towards the end of the release cycle and say something didn't get shipped, or, during code review, people would debate why a parenthesis is on the same line instead of the next line. That eats up a lot of time. So we put in things like automated style checks with Checkstyle: unless your code passes those checks, you won't be able to commit.

Similarly, we use different tools for testing the app. We use unit tests and functional tests, and these tests run on an internal device lab that we have; these are real devices we test on. We're trying to move towards an emulator-based model, because some tests are very peculiar to write when they run on real devices, so we're still working on that bit. We use UI Automator, and we were also using a custom framework built on top of Robotium, but we found it was getting a little clunky and had a hard learning curve for people coming in new. So we moved to Espresso. Right now we use a mix of Espresso and UI Automator to test the app; a minimal sketch of one such test follows at the end of this section.

This was before we had acquired Crashlytics; Crashlytics is now part of Twitter and ships as a kit in the Fabric suite. We evaluated a lot of tools that would help us see how stable our app is. Crashlytics fared the best of them, which is why we started using it, and we rely heavily on it to see whether new features are having any regression impact and whether other users are getting affected. We look at the top three or four crashes every day, reach out to the owners, and stay on top of it. It also lets us know if any API changes are affecting our app, which we would otherwise not get to know about.

We rely heavily on our alpha and beta programs. We have our own internal tool, Beta, with which we distribute APKs to internal and external pools of people. Alpha releases are very fresh: for example, an engineer working on a feature will see their commit go out in an alpha release just two days later. By the end of the week, when we do a release cut, that release branch goes into beta and out to the beta users. The beta users are always on the production servers, so they see exactly the live things that everyone else is seeing; they might see some new features that aren't yet visible to the final users, and nothing goes out to the final production app without going through beta. It also gives us a lot more insight into how things behave on different devices.
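As promised above, here is a minimal sketch of an Espresso functional test, written against the support-library packages of that era. The activity, the view IDs (ComposeActivity, R.id.tweet_text, R.id.tweet_button) and the confirmation string are invented for illustration; they are not Twitter's real identifiers.

```java
import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.closeSoftKeyboard;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class ComposeTweetTest {

    // Launches the (hypothetical) compose screen before each test.
    @Rule
    public ActivityTestRule<ComposeActivity> rule =
            new ActivityTestRule<>(ComposeActivity.class);

    @Test
    public void sendingATweetShowsConfirmation() {
        // Type into the compose field, dismiss the keyboard, tap send.
        onView(withId(R.id.tweet_text))
                .perform(typeText("Hello from the device lab"), closeSoftKeyboard());
        onView(withId(R.id.tweet_button)).perform(click());

        // Assert the confirmation message is visible on screen.
        onView(withText("Your Tweet was sent")).check(matches(isDisplayed()));
    }
}
```

The same test runs unchanged on the real devices in a device lab or on emulators, which is part of what makes Espresso a gentler learning curve than a custom Robotium wrapper.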
Now let's talk about the release process we follow. There's a concept called release trains. For those who are a little unfamiliar with this, you can think of a release as a train: it leaves the station on a fixed schedule, and if you miss one release, one train, you catch the next one. One release has a certain set of features, and all those features form that one train. Usually one train goes out once a month. So if we divide the four weeks of a month by the work that goes on, weeks one and two are active development weeks. There's an invisible line at the end of week two where we do a release cut. During weeks one and two, everybody commits to the master branch. Once week two is over, the master branch is frozen: nothing else goes in unless it's a real emergency, and you cannot push to the master branch without special permissions. Week three is entirely bug fixing, and week four is the bake period, where we gather feedback from the beta users, make last-minute changes if we have any, and then it goes out.

That's the theory; when we started using it, this is what actually happened. Picture a chart with the phases of the month on one axis and the changes in the code base on the other, yellow for patches and green for bugs. Everybody would start landing their patches towards the very end of the active development phase. The code base would get very messy and the bug rate would spike right into the bake period. So we essentially had only two weeks to gather everything, pull it all together, push an update, and somehow not miss our release date. After doing this for a couple of months, we realized this was not going to cut it.

So we tried to make missing a train cheaper by shortening the time between trains, so that if you miss one release you can always catch the next one and still have your feature shipped, and we tried to spread the changes out over time. What we now have is a staggered two-week release cycle: one week of active development, then a week of critical bug fixes and bake, and right after that the release goes live. This runs concurrently, so in a week where one team or feature is in the active development phase, another team or feature is in the critical-bug-fixing phase. This ensures that every week at least one release goes out to the beta and alpha users, and subsequently to the live users. It's a good balance for us; it forces us to be release-ready pretty much all the time, and it helps us maintain quality and speed at the same time.

I'm pretty much running out of time, so here's a summary of what we follow. We have feature switches. We rely heavily on automation. We have Fabric, and the different kits of Fabric are something we use internally too. So if you're using Crashlytics or Answers or anything else in Fabric, it's not the case that we ship it as a product and use something better internally. No: the Twitter app has 100-million-plus users, and everything Fabric has is what we use internally, so you're always getting the best-quality tooling out there in the Fabric suite. Then we have the alpha and beta programs; we were using Google Play's alpha/beta program as well, but then we built our own Beta program and started distributing APKs through that. And we modified the release train model into our staggered two-week process to get things out. So those are the few things.
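Since Crashlytics came up a couple of times, here is a minimal sketch of the Fabric-era setup. Fabric.with() and the Crashlytics log/logException calls are the io.fabric-era APIs of that period; the Application subclass and the call site are hypothetical.

```java
import android.app.Application;
import com.crashlytics.android.Crashlytics;
import io.fabric.sdk.android.Fabric;

public class TweetClientApp extends Application {
    @Override
    public void onCreate() {
        super.onCreate();
        // One line wires the Crashlytics kit into the app; uncaught
        // crashes are then reported automatically.
        Fabric.with(this, new Crashlytics());
    }
}
```

Elsewhere in the app, non-fatal problems can be recorded so they show up alongside crashes in the dashboard, for example `Crashlytics.log("opening video player");` as a breadcrumb, and `Crashlytics.logException(e);` inside a catch block for a handled failure.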
This has been a very good balance for us. We've been going on with it for more than a year now, and it's been smooth so far. We're constantly seeing more and more people join the Android team, and this takes care of both kinds of scale: on the user end and on the team side as well. If you have any questions, I have some time to take them; otherwise I'm always available in the lounge. I may not be able to answer a lot of questions about how the code is written, because I'm not the one writing it, but let's see.

Hey. Well, there is one quick question. Actually, I have many questions, but I'll keep it short. Thank you. As in any company, many new people join. Say in one of your engineering teams I'm a big fan of reactive programming, so I work with reactive programming, but a new person joins who has no idea at all what reactive programming is. Do you follow any coding standards, or have design patterns that anybody who comes in has to follow? Do you have your own structure, your own coding standards and practices? Or do you just allow developers to write code their own way?

The short answer is yes, we do have our own internal standards. I may not be able to answer in reactive-programming-specific terms, but on Android, first of all, we would hardly ever hire somebody who is very new to Android; we always hire people with a substantial amount of Android experience, who understand how the code and the algorithms are written. Once we get them in, we have lots of internal wikis, pages and trainings; we have our own Twitter University, where experienced people run a lot of training internally, and new folks can attend that. We also have standards and design patterns that we follow. New engineers are given time to ramp up slowly: they work on small features, then larger and larger ones, and become part of the team, so they get time to learn how we do things and ramp up with us. They're given a fair amount of leeway to do that. And before any of that, the engineers and project managers these people will be working with are also involved in the hiring process, to make sure we're getting the people who are the best fit for the team.

Okay, that's fair. Just one more question then. On Android, how many devices do you actually test your apps on?

This number is always changing. In our device lab we may have a couple of hundred devices, but in terms of alpha and beta users, the largest number we have seen is around 2,000 to 3,000; that was the largest number of unique IDs logging into our servers at one time, seen a while ago. And it's largely one code base that we use for all of those people. I may not have very specific numbers beyond that one measure; you'd have to talk to my Android team.

More questions? Okay, great. You can ask Aman more questions outside. We definitely have a lounge, which you should make use of. Yeah, I'm the only guy roving around with this thing, so I'm easy to spot.
I'll be here today and tomorrow as well, so you can catch me anytime. Thank you. Thank you.