to resisting it. And this has been around for 10 or 15 years, and we're all familiar with the notion of embracing change. But when it comes to writing software, embracing change is hard. I want to talk a little bit about how to make that easier, and how to structure our teams and our processes to make it happen in a nicer way. What I'm going to talk about stems from a couple of experiences of mine, both in the last year and a half or so, and both with early-stage products. So it's primarily about what I've learned from experimenting with early-stage products. I'll get to what experimenting means soon. I've divided the talk into three parts. First, we'll talk about what this experimenting thing is. Next, I'll talk about how to do these experiments and the different ways in which we can think about them. And I'll tell you about a new way we've been doing it in our teams a few times now, which has been quite helpful: using "real" software — and that's supposed to be in quotes; I'll explain that when I get there. I'll also have a small section about the composition of the team that does these experiments. Before all of this, though, let me set some context. We're working on a product called Simple — go to Simple.org. It's a nonprofit initiative by the WHO and the ICMR, the Indian Council of Medical Research, who together have formed a body called the IHMI. The goal is to improve control of hypertension in India and throughout the world. There are a few organizations running this — Resolve to Save Lives and Vital Strategies — and there's also a couple of firms in Bangalore, Uncommon and Nilenso, that are building it. What it is is a simple app for nurses to use to record blood pressures of patients.
If you're unfamiliar with the problem, let me take a step back. High BP is hypertension. It's very prevalent in India — about a third or a fourth of the population, that is, 300 to 400 million people in India, have hypertension. And it is a leading cause of death. That's what we're looking to help prevent. A little bit more about this: it's all open source, on github.com/simpledotorg, and all the work is out there. Even the design process is open sourced. Daniel Burka works on this, and he's created issues that talk about the design — coming up with the right design systems and design languages for the app. You can have a look at that, and you can contribute as well. A bit about who the users are. These are nurses, like I said, and they work in public health care centers — PHCs, CHCs, sub-centers, and district hospitals that are government-owned in India. An important part of this is that they are in rural villages in India — think Bhatinda in Punjab, or Hoshiarpur in Punjab, those kinds of places. And interestingly, what the product replaces is a paper mechanism: nurses have been recording BPs on paper. So the design challenge is interesting, because we're moving from a paper-based system to an app. There's a certain information structure, and certain workflows embedded in the nurses' habits, and we have to move slightly away from that — while also thinking about scaling to thousands and tens of thousands of clinics throughout the country. And just so you can get a visual idea, this is the place: rural, rural Punjab. There's little to no internet connectivity, and potentially drastic weather conditions, so people might or might not come to clinics on a day-to-day basis.
And even where you have phone network connectivity, it can be choppy at times. The users are nurses in these clinics. They work there three to four hours and have a pretty hectic time, because there are hundreds of patients coming in every day. And this is the team. Part of the team works out of all these facilities, and this is the software part of the team, based out of Bangalore — that's Uncommon and Nilenso. To give you an idea of what the app does, I'll take you through the basic flows. The basic idea is: when a patient comes in, the nurse asks their name and age, finds out if they are hypertensive, measures their BP, and enters it. Maybe she prescribes a medicine and tells them, "Hey, come back next month," and that's about it. So this is what that looks like. They enter the first name, or preferably the full name. You get a list of search results; you click on one of them, enter the blood pressure, maybe update the medicines if that needs changing, and schedule the next visit — usually in one month's time. And then you also follow up: if a patient hasn't come in a month, you call them up and say, "Hey, you're a bit late. Why don't you come back to the clinic and get your BP taken again, and maybe your medicines updated?" Because for some patients, if their BPs are 200 and above, they are at serious risk of death, really. So following up with patients gets really important. With a problem like this, where you're faced with extreme conditions you're unfamiliar with, how do you know that you're on the right track? How do you know that, from a product perspective, you're doing the right thing — or even going towards the right thing? How do you know that you're building what users want?
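Before going further, the visit flow just described — find the patient, record a BP, maybe update medicines, schedule the next visit a month out — is small enough to sketch as plain data. The real app is built with ClojureScript React Native; this is only an illustrative Python sketch, with all names and fields invented:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

@dataclass
class Patient:
    name: str
    age: int
    bps: list = field(default_factory=list)        # (date, systolic, diastolic)
    medicines: list = field(default_factory=list)
    next_visit: Optional[date] = None

def record_visit(patient: Patient, systolic: int, diastolic: int,
                 today: date, medicines: Optional[list] = None) -> Patient:
    """Happy path of one clinic visit: log the BP, optionally update
    medicines, and schedule the follow-up roughly a month out."""
    patient.bps.append((today, systolic, diastolic))
    if medicines is not None:
        patient.medicines = medicines
    patient.next_visit = today + timedelta(days=30)
    return patient
```

A follow-up call is then due whenever `next_visit` is in the past — which is exactly the overdue list discussed later.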
Every time you build the app, getting it to your users, getting feedback from them, and then improving on that is an expensive process. You want to reduce that cost as much as possible, because there aren't many times you can go back to a government hospital and say, "Hey, try this new thing, try this new thing." And is the interface intuitive? To someone who's not familiar with a smartphone, who's not familiar with Android as a paradigm, how do you introduce something that's intuitive to them? The solution is usually the scientific process of experimenting: build, get to users, fail, rinse and repeat. However, this process is expensive — and that is the fundamental problem statement I want to address in this talk. The first time we did this, we built a production-quality app with a narrow set of features, went to Bhatinda in Punjab, and tried to deploy the app and get users to use it. They used it for the day we were there, and then not after that. That was about two months of effort that pretty much went to waste, because the search interface was not intuitive: people did not know whether finding patients and adding new patients were different things, and their workflows were too acclimatized to the paper systems. The need, then, is to build faster, get to users faster, and fail faster. And how do you do this? Let's explore the existing methodologies for experiments — these are all terms you'll see used in agile projects, XP projects, and so on. The spike you can think of as a small experiment done once in a while, just to find out if something is feasible or not, and then you throw it away — that's the point. And then there's the tracer bullet: production quality, narrow scope, put into production so that you can get your functionality to the user and then find out what the user wants.
And then there's, of course, long-term R&D — the Xerox-labs type of R&D — where, at huge cost, a huge part of your organization is dedicated to research, long-term, growing products and creating new technologies altogether. All of these have their pros and cons. The methodology I was hinting at is this: the notion of an experiments lane. You can think of it as a combination of the three things we just saw. It's an R&D lane for exploring ideas; you throw away your prototypes once in a while, but you make simple versions of all the functionality. I'll make it more grounded in a bit. Primarily, what I'm getting at with all this is making the feedback cycle faster. To make that a little more explicit: the biggest part of this is conducting a user study — that's one big half of it. The other half is us coming back, analyzing results, improving the design, and then getting back to users. But the most important thing is doing this fast, and we narrowed it down from a period of about two months to two weeks. We did this by getting users into the office every week or so. Establishing a set of users who can come in and help you with your user studies is quite difficult, but if you do it, you can run studies at whatever pace and time you want. Analyzing results and improving the design takes about a week after that — that's been our experience. You can't come straight back from a user study, take the results, and go engineer them; it takes time to synthesize your results and understand what you've really learned. Then you plan your next user study and build backwards from that. So the user study is still central to learning more about your users and getting your product out there.
The fundamental difference here from other such cycles is that you first plan the user study, then engineer backwards from there, building exactly what's necessary for it. My proposition is this: we create a simpler version of the app, focused on the happy paths of experimental features and flows. I'll let that sink in, because that's all I could compress it down to, and I'll get to the specifics right away. How we do it is by prototyping. To get an understanding of what's already out there, let's put it on a spectrum. On one side of the spectrum is, of course, pen and paper; on the other side, you have real software. And the spectrum is non-linear: it goes days, weeks, months. Real software can take months to develop; pen and paper is pretty much immediate. Pen and paper is really fast — in no time you can do five or six paper prototypes, maybe with the user sitting beside you, with an engineer sitting beside you. In terms of flexibility, you can throw away your paper prototypes or wireframes or even hi-fi mockups really quickly and build new ones; with real software, it's very hard to throw it away once in a while. And to a varying extent, based on the amount of involvement of real users, you get varying amounts of feedback along the spectrum. But on one axis there is no spectrum, and that is usage. Software is only used when it's fully built out. You can't take real patients and put them into a fake app — that's just not done. So how do you solve that? Another interesting part of the spectrum is that the latter part involves engineering as well, and that's where it gets really complex, because there are more people, more coordination, more communication.
Before that point, it can be a single lone designer spearheading things; after it, coordination gets difficult, and I'll try to answer some questions about coordination and communication between engineers and designers later. Now, on this part of the spectrum — the prototyping tools — I'm sure you're all quite familiar with these. I'm not going to talk about the good things about them; I'm only going to talk about the bad things. First, they are always online. You can think of them as a browser sitting inside an app. The moment you go offline — like we have to in Bhatinda, in Punjab — you don't have internet, you can't load the next page on the click of a button, you're doomed. That's not really an option for us. They're also really, really slow, and that's not what you want your app to feel like to a real user, so you don't get that feedback. There's no storage: you can't record a patient, go back, and then see that patient again — "Where's today's BP? I just recorded it some time back." None of these tools give you any storage. The programmability is subpar: they all give you some kind of IDE or interface where you can throw in some code, or some plugins where you can modify a few things, but not really. They also need constant guidance — sometimes you need to sit beside your user and say, "Hey, don't click here, click there." And that defeats the very point of user studies: observe users doing what they do. And there are no custom components. A few things we wanted to do were a QR scan for scanning certain IDs, and a custom keyboard that would make it really fast to enter blood pressures, to find out if that made a difference for the user. None of this is possible with any of these tools, really.
And there are a bunch of pitfalls I'd sum up as: you have to quit it and start from the beginning. That happens all the time — unlike real software, where there's some notion of error recovery and you can start from where you left off. So my experience, and my recommendation, would be this: ClojureScript React Native. I'm just going to leave that there, because this talk is not about that — I've spoken about it before, and I'm happy to talk about it later. I'm going to move on quickly to the design aspects of this. Let me show you what I can achieve with it. This is a demonstration of the experiments lane and the experiments app that we built. The first thing is that you can build alternate flows. In addition to registering patients, the nurse has to register herself too. This is important for setting context — it's not something she does over and over again; it's a one-time thing. But this is what I can do with the experiments app: there's a settings screen I can go to, and I can change the start flow from "home" to "registration." Then, if I go back, it takes me to the registration flow, where I can hand the phone to the nurse and say, "Why don't you register yourself?" And the interesting thing is that all the security PINs and whatnot are actually validated, so you learn whether entering a PIN is usable. Another thing is that she has to be approved, and ideally that happens over time; eventually the app says "You've been approved," and she can proceed. So that's a certain context, and we wanted to understand the usability of access: does the nurse inherently understand access? We were using paper before this for that part of the flow. And there's an in-between we could have built, where we use single screens and skim over registration — just tap, tap, tap — so that we can get to the access part.
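That settings screen is worth dwelling on: the facilitator flips a couple of switches, and the app starts in a different flow or a different approval state. A minimal sketch of the idea in Python (the real app was ClojureScript React Native; these field names and strings are mine, for illustration):

```python
# Hypothetical experiment settings the facilitator can flip mid-study.
settings = {
    "start_flow": "home",            # or "registration"
    "approval_status": "requested",  # or "granted"
}

def initial_screen(settings: dict) -> str:
    """The app routes off the settings at launch, not off real state."""
    if settings["start_flow"] == "registration":
        return "registration"
    return "home"

def approval_banner(settings: dict) -> str:
    """What the nurse sees; flipped by briefly borrowing the phone."""
    if settings["approval_status"] == "granted":
        return "You have been approved"
    return "Waiting for approval"
```

The point is that a study condition is just data: flip one key, hand the phone back, and observe.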
Then, at the point where we want to change the access from "requested" to "granted," we just borrow the phone, change it to granted, give it back to them, and say, "Hey, assume that half an hour has passed. What will you do now?" And they read the screen and figure out what to do from there. An interesting thing you can do there is a Wizard-of-Oz kind of shortcut: take all the registration screens and not build the flow at all — just a set of screens, very much like a Framer or InVision prototype, where you just click, click, click, done. You don't actually enter your name or phone number, but that's not very important to the flow, and for engineering it makes things really quick. There's also the notion of time, and modeling it, which becomes super interesting — it's usually a thing we all miss in software. Here's an example. We have the notion of an overdue list: the follow-up list of patients the nurse has to call back. What I can do is go to the overdue mode and set it to "empty," then switch to the overdue list and ask, "What do you think this means?" — that's the empty state. Then I can switch and say, "Okay, assume it's been one month from now. What does this look like to you?" And the nurse calls up the patient — it actually opens up a dialer; they actually call the patient. I can do this in a real user study. And then I can change it to six months later, hand over the phone, and understand what they do when faced with a potential 50 overdue patients. Do they feel overwhelmed? Do they call each one of them? Do they respond differently? Is a list even the right paradigm here? These are things you can observe — and they're not things you can see in an InVision or Marvel prototype as much, and they're hard to build out in real software as well.
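Modeling time like this can be as simple as shifting a simulated "now" from the settings screen, instead of waiting for real months to pass. An illustrative Python sketch (function and mode names are assumptions, not the app's actual code):

```python
from datetime import date, timedelta

def overdue(patients: list, now: date) -> list:
    """Patients whose scheduled visit is already past, most overdue first."""
    late = [p for p in patients if p["next_visit"] < now]
    return sorted(late, key=lambda p: p["next_visit"])

def simulated_now(mode: str, today: date) -> date:
    """The settings screen picks the mode; the overdue list uses this 'now'."""
    shift_days = {"empty": 0, "one_month": 30, "six_months": 180}[mode]
    return today + timedelta(days=shift_days)
```

With "empty" nothing is overdue yet; at "six_months" the nurse suddenly faces a long backlog — which is exactly the reaction we wanted to observe.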
There's only so much you can do with feature flags and whatnot. And then, of course, there are A/B tests — except here they're observed in real time. Here's an example. There's an "age" versus "age or DOB" setting. With the "age" variant, there's a single field: I just search for Madhu Mehra, age 20, and two results show up. The alternative is "age or date of birth," where both fields show up, and I can enter either the date of birth or an age. But the date-of-birth field is a bit tricky, because you have to get the format right, and that takes a bit of getting used to. The slashes get filled in automatically, and they have to know the right DD/MM/YYYY format, and they can get it wrong. And we want to know what happens — how the user responds — when something goes wrong. So that's the interesting part: build in the validation. There are so many alternatives you can try there, and we tried them all: we did year of birth, we did date of birth, we did age, we thought of doing a date picker — and there are trade-offs there, which come down to usability versus accuracy. We also tried separate fields: day is a separate field, month is a separate field, year is a separate field. They all make for different amounts of validation and accuracy in entering data, and also usability. So it's important for us to test these out in real time, observing users doing this — and we can, with such an app. And then there's a thing I think people tend to skip very easily and force their way through, but here's a better approach. Usually when you do a user study, there are a few cases you want to expose the user to, to find out how they react. My take is that you model this and treat it as first class, like you would when modeling your real domain. Here's an example: there's Madhu Mehra.
And you see the addresses are all somewhere in Bangalore. I can go change the setting to Bhatinda, go back, search for the same Madhu Mehra again, and suddenly she's in Bhatinda — all the addresses are NFL Colony, Jhant K Road, things like that. So I can either base my user study out of Bangalore, where I run my studies, or I can go to Bhatinda and, with a single click of a button, make all the patients seem as though they're from the local area. And that matters to the user — that's context, and it's modeled in seed data. Here's another example. There's Shreyas — Shreyas Garewal and Shreyas Malhotra — so we find out if users are able to differentiate patients by last name. And there are three Neha Guptas, all aged 40, and we want to find out if users can differentiate the incoming patient by phone number. These are all things you have to model — whereas in something like Framer, you'd just end up hard-coding all of it, not subject to change. What you can do instead is something like this: you model a certain patient archetype and say these are the common factors and these are the variants. For example, the Shreyas archetype is 35, male, with a hypertension profile of having been hypertensive for weeks — except the names are different. Similarly, for Mahalakshmi Puri, the variants are the addresses: village colony one, two, and three, so it's like three different Mahalakshmi Puris, all around age 70. That's another thing: people are not exactly sure how old they are. Even if they're 75, they'll say "I'm 80" or "I'm 70." You have to account for that, and this helps model it.
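The archetype-plus-variants idea — and the random filler data mentioned next — might look something like this. A loose Python sketch, not the actual tool (which was Clojure data); the name pools here are invented for illustration:

```python
import random

def expand(archetype: dict, variants: list) -> list:
    """One seed patient per variant; variant fields override the archetype."""
    return [{**archetype, **v} for v in variants]

# Common factors stated once; only the differences listed per variant.
shreyas = expand(
    {"age": 35, "gender": "male", "profile": "hypertensive for weeks"},
    [{"name": "Shreyas Garewal"}, {"name": "Shreyas Malhotra"}],
)

def fill_random(seed: list, total: int, rng: random.Random) -> list:
    """Top up designed seed data with random filler patients so lists
    look realistically full (name pools are invented for illustration)."""
    first = ["Neha", "Madhu", "Shreyas", "Mahalakshmi"]
    last = ["Gupta", "Mehra", "Malhotra", "Puri"]
    filler = [{"name": f"{rng.choice(first)} {rng.choice(last)}",
               "age": rng.randint(30, 80)}
              for _ in range(total - len(seed))]
    return seed + filler
```

The designed cases stay first class, and the generator fills in the rest around them.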
And the interesting thing — if you noticed the call list, there were 10 or 50 entries in it. What I've actually done is take this as a model, and I have a tool that helps me generate data based on such a model. So it's not just what I put in: in addition to what I design my seed data to be, there's a notion of random data that fills things in as well. Now, what you saw there was a separate app from the production app, used only for user testing. It was built with ClojureScript React Native, but there are a few takeaways at a generic level. First, I'd say that if you want to explore and move fast, types do slow you down, so I would suggest a dynamic programming language. A database also has a certain notion of a schema — it has a certain storage mechanism, and a certain type system embedded into it as well — so avoid that entirely. Either write to schemaless storage, or just write your application data structures to file, which is what I did. Use an in-app, in-memory database, because your experiments app is not going to have a lot of data — it's not going to have a million patients; it's going to have a hundred, maybe. You don't need performance out of it; you need usability and the ability to move quickly. Ideally, have live reloads of your code. The validation mechanism is actually quite complex, especially when it comes to entering a patient. For example, with a phone number: you can say, "I don't have a phone number"; you can type a phone number but enter alphabets — that's wrong; you might enter digits, but not enough digits, et cetera. The error states change a lot, and it was difficult to get this right. So what we ended up doing was pair programming — I pair programmed with a designer, really — changing code live to find out: is this what they want?
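The kind of thing we were iterating on live — distinct error states for the phone field — can be sketched like this. Python for illustration (the real code was ClojureScript, and the state names and the 10-digit rule as written here are my assumptions, though Indian mobile numbers are indeed 10 digits):

```python
from typing import Optional, Tuple

def validate_phone(raw: str, no_phone: bool = False) -> Tuple[bool, Optional[str]]:
    """Return (ok, error); each failure mode gets its own error state,
    so the UI can respond differently to each one."""
    if no_phone:
        return True, None            # "I don't have a phone number" is valid
    digits = raw.strip()
    if digits == "":
        return False, "empty"
    if not digits.isdigit():
        return False, "not-digits"   # alphabets or symbols typed in
    if len(digits) != 10:
        return False, "wrong-length"
    return True, None
```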
And that we were able to achieve within 15 minutes of pair programming — something that would have taken days without live reloading. A REPL is something that, I guess, JavaScript also kind of has, but Clojure gets it right; if you can have that, that's great. And like I said before, the ability to generate data based on a certain schema is fantastic if you're doing experiments. Apart from technology, there are certain engineering practices that are good for this kind of endeavor. Keep your environment entirely independent: don't couple it with anything in production or staging — nothing. Remove all interfaces: do not talk over the internet, do not talk over device interfaces. You can mock all of that locally for your experiments, because it's not really important. What's important is the user interface and the feedback you'll get from your user; any other interface is only going to add more instability to the product, which you don't want. Focus on happy paths — I said this before, in that summary, and it's crucial: you don't design for error cases. You work backwards from a user study plan and build exactly for the cases you've thought of. Test those cases out extensively, manually, ensure they all work well, and don't engineer for anything more than that. It seems a bit clumsy, but it's also pragmatic; it makes sense. Think about modeling the external domain. This is not obvious. For example, the notion of profiles — hypertensive patients who have been in control for 12 months, patients who are newly hypertensive, severely hypertensive, mildly hypertensive — these are not things that matter to the actual app in production; they're not modeled there. But they're important here, because you want to model the users, and that's not something that comes to you naturally. And first-class seed data, like I said before. And then there's how engineers work with designers.
And this is really it: it's just extreme programming. The part that says "with designers" is really just fluff — what I want to say is that extreme programming can just be done, normally, with designers. It's not any different. Here are some examples; I'll just walk through three fundamental XP values with that lens. Communication: you talk about system requirements, and you communicate a lot — talk over pen-and-paper prototypes, talk over the whiteboard, the simplest prototype there is. Sit together and pair program. This might seem ineffective, but done right, it can be super effective. For example, pair program on ideas: work until you have a structure that makes sense, and then break off to execute the idea. That worked really well for us. One example is that GIF you saw of scanning a QR code. Discuss things: can it be programmed? Should it just be a GIF? Should we not do it at all? That can save a lot of time — that animation, for example, takes a lot of effort to get right on different phone sizes; you have to place it correctly, and it has to move the right amount. And very often, designers have thought of other solutions. From an engineer's point of view, there's a lot to be gained by just asking a designer, "Hey, can I do something else? What do you think?" The date of birth is a great example: building all the validations into the date-of-birth field was expensive — it would have taken a couple of days — but when I went to the designer and we spoke for a while, we decided we'd just keep three separate fields, because it makes this version easier, and I got that done in two hours. Simplicity: YAGNI applies to design as well, very much so. There are screenshots, like I said before — just walk through them. The notion of a fancy BP keyboard is very interesting to experiment with, but we didn't really need it, because it's only a proof of speed, not usability. So: remove it.
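That date-of-birth decision — three separate fields instead of one heavily validated formatted field — is easy to see in code. A hedged Python sketch of the two variants (illustrative only; the app's real validation was richer):

```python
import re
from datetime import date
from typing import Optional

def parse_dob_single(text: str) -> Optional[date]:
    """One formatted field: the user must get DD/MM/YYYY exactly right."""
    m = re.fullmatch(r"(\d{2})/(\d{2})/(\d{4})", text)
    if not m:
        return None                  # wrong shape: "5/11/80", "051180", ...
    dd, mm, yyyy = map(int, m.groups())
    try:
        return date(yyyy, mm, dd)
    except ValueError:
        return None                  # shape fine, but e.g. month 13

def parse_dob_split(dd: str, mm: str, yyyy: str) -> Optional[date]:
    """Three fields: each is trivially a number, so far less format burden
    falls on the user, and the validation is much cheaper to build."""
    try:
        return date(int(yyyy), int(mm), int(dd))
    except ValueError:
        return None
```

The split version is far less code to get right, which is why it fit in two hours instead of days.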
There's a notion of login that exists in the actual app, and you might be tempted to build a simple version of the login flow into the experiments app. Don't do it — you don't really need it unless you're getting valuable user feedback from it. Design incrementally: again, have a user study plan, and design and build only for that. And if you remember what I said earlier about spikes and the experiments lane — if you think of the experiments lane as a spike, then if it took you two weeks to build something, you can throw it away and build from scratch, and it's only going to take you two more weeks. That's not a lot of time; you can afford it. So if something gets really complex, or your requirements change drastically, just throw the whole thing away and start over. And feedback, of course. You need it from the customer, from the system, and from the team. Design user studies so that you get feedback from the user directly. From the system: if you're using the app on a daily basis and fixing bugs on a daily basis, there's a lot to gain, because you want your app to be "in production" at the end of every day. And from the team: even for an experiments app or an experiments lane, have a PM board and post-study meetings; even if you're riffing on designs, include engineers in that, so they know what you're thinking, and that helps them plan ahead. And quickly, because I'm running out of time, I'd like to touch on the size of the team. There have been a lot of studies on this — the whole two-pizza-team idea of Jeff Bezos, for instance. And there's this study by QSM, with real data spanning tens of thousands of projects, about how the investment that goes into building a certain piece of software increases a lot with larger teams — and with the same amount of investment in smaller teams, you get much higher quality software.
Even within our small team of about 10 people building the entire thing, the experiments lane was three people. And that makes sense — even more so for small teams, because you don't want to waste the time of 10 people building the wrong thing. You'd rather spend three people to get to your users, fail fast, and learn what you need to build, and then use the time of the other seven to build the right thing. And of course, defect rates reduce with size. The experiments app in many ways had fewer bugs, because it was built to serve fewer things: less code, fewer people, fewer bugs. And beyond all the technology I've been speaking about, it really matters who you staff on your experiments lane. So I'll go back to what I said earlier. In conclusion: get to your users faster; be wrong faster. And a way to do it is to create a simpler version of the app, focused on the happy paths of experimental features and flows. That's all I have. Thanks. "Thank you, Srihari. Do we have any questions? Yes, we have a question here." "You talked about building the app for the experiments lane. Now, once you do the user testing and finally figure out the final version that has to go into the actual app, how much of the code built during the prototype phase can be moved over? Is there any reusability, or do you have to build the actual app again from scratch?" Okay — so the question is how much code gets reused from the experiments lane in the production lane. Zero. That's usually the answer. So be prepared to throw it away. There are ideas you can take away, though, and that's far more important. Leave the actual code aside — you now know there are certain edge cases you need to design for. Sometimes you're not sure what the edge cases are until you build them.
In the experiments lane, you end up building at least a first cut of your feature, and you learn a lot from that. So just as a spike helps you estimate a certain story or flesh a feature out fully, this will do that, but in a much nicer way. I hope that answers your question. Do we have any other questions? Okay. Thank you. Before we move on to the next talk, we have a few announcements to make.