Hey there. Today, I'd like to tell you about one of the hardest things I've ever done: learning how to do experimental product development in an industry where failure can be a matter of life or death. Another way to put it is: how to move fast and not break things. Let me quickly introduce myself. My name is Tom Ultiman, and I'm on the product team at Asana, but I've actually spent most of my career at startups, particularly in the construction industry. I fell in love with construction back in 2015 when I joined a company called PlanGrid. Our founders were the first to see the form factors of the iPhone and iPad and how they could truly revolutionize how work got done on construction sites. They taught us to focus on building beautifully designed apps that construction workers would fall in love with. We had the highest-rated construction app in the App Store, and by the time I left, we were being used on over 1.5 million projects across 100 countries. In 2018, we were acquired by Autodesk for $875 million. Now, while we always went deep in understanding our customers' needs, we had a challenge that may be familiar to many of you: how to ship great products quickly when they have to work perfectly the first time. Honestly, this slide is just an excuse to share my favorite GIF of all time. Now, I'm sure that many, if not all, of you have read this book, The Lean Startup by Eric Ries. If not, I strongly recommend that you do. It has really defined the last 10 years of how we build products and companies. In it, Eric talks about getting things to market fast using experimentation, the key tenet being: fail fast, succeed faster. You may have also seen this quote from Reid Hoffman, the founder of LinkedIn: "If you are not embarrassed by the first version of your product, you've launched too late."
There's also Facebook's perhaps infamous mantra of "move fast and break things," with the idea that it's okay to break things occasionally as long as you innovate and execute on a vision. Now, a lot of these ideas really resonated with me, but another thought also came to mind: I'm building tools for these people, and they don't want to hear about anything failing, ever. Their safety and livelihoods depend on the tools at their disposal being 100% reliable and dependable. This has long been a challenge in construction. Any change can be incredibly risky, so people prefer to stick to tried and tested methods, even if they're incredibly inefficient. This was the world in which we, as a relatively scrappy startup, were trying to build new tools to help them get their work done faster, easier, and safer. Now, you're probably not working in construction, but I'm sure you're working on software that your customers rely on every day, and they never want to feel like guinea pigs for any of your experiments. If that's the case, this talk is for you. I'm sure you're all paying incredibly close attention, but for those of you who have as bad an attention span as I do, let me give you a quick upfront TL;DR of this talk. If you want to do experimentation in zero-failure environments: build a cross-functional team that truly cares about the customer. As a team, learn about the problem you're trying to solve together. Treat everything as an experiment, not just the product itself, but the processes you're going to follow and even the roles you're going to play. Do the least amount of work needed to test your most important hypothesis; in other words, be as lazy as you can, quickly. And crucially, when failure is not an option, use humans to power the first functional MVP of your product. To elaborate, I'm going to tell you two stories about how we learned these lessons whilst building a mission-critical product for construction.
These stories revolve around an absolutely insane part of the construction process called submittals. What are submittals? In the United States, you need to get approval from the architect for every product and method you're going to use to build a building. This means every type of brick, the wood, the doors, down to the actual nuts and bolts, needs to be sent to the architect and approved before work can continue. This can mean thousands of documents that need to be collected, reviewed, corrected, and approved on time. If that sounds crazy, it's because it absolutely is. Now, there are two things to remember about this process. The first is that nobody likes it. It's hard, it's tedious, it's stressful for everyone involved. You're filling out spreadsheets for days, you're keeping track of a million emails, the paperwork is endless. If one of these is missed or late, the work will be delayed, which can cost your company hundreds of thousands of dollars, and that's the best-case scenario. The second thing is that it's important to get submittals right, or very bad things can happen. The worst-case scenario is that buildings fall down and people die. That's not an exaggeration. This little diagram shows a single bracket that was built incorrectly in a hotel in the 70s, which caused a walkway to collapse, killing 114 people. All of that because no one spotted a seemingly tiny mistake on one page of a 100-page document in a submittal. It's an important process. So the mission we were given was to fix this process, or at least make it not suck as much. Obviously, this couldn't fail fast, or at all. Now, no one will be surprised that, as a scrappy startup, we had some very fun constraints. The solution needed to be something that could be built by four engineers in less than six months. Oh, and obviously only two of those engineers are free right now; we'll find the others for you soon. Oh, and also, the future of the business is dependent on this being a success, so, you know, don't mess it up.
With those constraints, we had to rethink how we were going to build this product. This is where we started getting experimental. The first stage was learning about the problem together. We needed to deeply understand the process to figure out how to tackle it. We did this with interviews, site visits, and shadowing. Everybody on the team spent time with the users, and we really do mean everyone. This is a photo of us on one of those visits, and it includes engineers, research, product, design. Even QA was there. We really got to know them. What are their days like? Even the boring parts. A lot of the submittals process takes place in spreadsheets and emails. Our users thought it was really weird that we wanted to sit behind them and watch them do paperwork all day, but we learned so much from even the mundane bits of their day. As we went, we built up a picture of the process and the pain points. After each conversation, we would list all the questions we still had, vote on them, and then change our script for the next set of conversations, until we were truly confident that we understood the problem well enough to propose a solution. This is how we built a cross-functional team that cared about the customer. An important thing about this step is that everyone needs to be bought in and focused on that customer problem. If they're not, you won't have the focus you need. We thought this was so important that we decided it was better to be understaffed than under-focused. One of those engineers, who was incredibly talented, was more interested in doing systems work than being on a specific problem, so he left to do great work elsewhere in the company. It was more important for us that everyone was bought in, even if that meant being down an engineer for a while. So we understood the problem, and we had built a team that cared about the users. This helped us flesh out our initial hypothesis of what a great submittals process could look like.
Now we needed to test whether we were right. You're all probably familiar with the standard iterative loop of product development: you come up with an idea, you build, you launch, and you learn from it and iterate. We had an idea, but we didn't have time for the build bit. We needed a way to shortcut directly to learning so we could iterate faster. We needed experiments. We had these hypotheses from all of our research, so we used a framework called assumptions tracking. We sat down as a team, listed out all the assumptions we had, and ranked them by risk: what's the likelihood that this product will fail if we're wrong? Once we had that list, we thought about the cheapest way to test each of them. The mindset here was: be lazy. Don't do any work that doesn't help us de-risk an assumption. Sometimes the test was simple: you could do a survey. Some things we could validate traditionally with a clickable prototype or concept tests, all part and parcel of the work that we do every day. In the end, we were left with the hardest thing to test: how would we build something that replaces spreadsheets and emails? Everyone's process was a little different, and dummy data in prototypes wasn't resonating. At some point, we realized that the only way we were going to learn more was by actually giving users a version of the idea to use, with real projects and real data. This led us to the third stage: building out a concierge MVP. We needed to learn whether our hypothesis was correct, we needed to use real data, and we needed 100% reliability. No buildings falling down. And we needed to do this quickly. So how do you validate a fully functioning product without building the machinery behind it? For those of you who've seen Futurama, do you remember Bender the robot and how he refers to humans? Well, we would be the machinery. This was Project Meatbag. We would sub in humans for the technical part of the product and test it without writing any code.
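To make the assumptions-tracking step concrete, here's a minimal sketch of how a team might capture and rank assumptions by risk and test cost. The specific assumptions, scores, and field names are hypothetical illustrations, not our actual list.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    risk: int        # 1-5: likelihood the product fails if we're wrong
    test_cost: int   # 1-5: rough cost of the cheapest test we can think of
    cheapest_test: str

# Hypothetical examples of the kind of assumptions a team might list
assumptions = [
    Assumption("Users will trust a tool with their submittal data", 5, 2, "concierge MVP"),
    Assumption("A weekly status report is the right cadence", 3, 1, "survey"),
    Assumption("A dashboard view will surface overdue items users care about", 4, 2, "clickable prototype"),
]

# Tackle the riskiest assumptions first; break ties by the cheapest test
for a in sorted(assumptions, key=lambda a: (-a.risk, a.test_cost)):
    print(f"risk={a.risk} cost={a.test_cost} -> {a.cheapest_test}: {a.statement}")
```

A spreadsheet works just as well; the point is the ordering, so no effort goes to de-risking anything below the top of the list.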
Our plan was to slowly replace the people in this process with code over time, as we validated the ideas and built the product in parallel. We found five customers who represented our target audience and sent them a pitch about this great new submittals tool. It was going to effortlessly handle their submittals process. It would track the inputs and give them a report every week. Of course, we were the tool. Each of us took one user project and did all the data entry and processing. Users would send us their existing spreadsheets and forward us all the email responses and files that came in. So here was V1 of our product: a Google Sheet set up for each user, mirroring the system we wanted to build and populated manually. We moved all their data into the template, kept track of everything that came through email, and looked through the files to figure out status. There was some basic automation here; you can see some colors and highlights. Then we just continued to automate things as we learned what the users needed: adding conditional formatting, automatic dates, and data validation, effectively building our database for the end product, and really just finding anything that could reduce our workload. But by and large, it was still a spreadsheet maintained by one of our team. The critical thing here is that, for our users, it was functioning like a product, and a fairly magical one at that. They sent us data; it appeared in here. They didn't have to do any of the things we knew were pain points in their existing process. We then set up a dashboard driven by that spreadsheet to provide insights and next steps. It updated in real time, driven by a bunch of queries, and those spreadsheet queries were really the first code written for our product. And this is where our first product breakthrough revealed itself and showed that we were delivering something really valuable. A lot of our users were shocked at the number of overdue or stalled submittals they had.
So much so that they thought it must be wrong. A lot of submittals were two months overdue; some we saw were even six months overdue, stalling the project. Most users, we realized, had never seen an accurate snapshot of the whole process before. On top of that, we generated a weekly report based on that dashboard, which they used to sync up with the architect and the owner of the project. We put their logo on it, and it gave them a really polished and professional way to share the status with stakeholders. So these three things, the spreadsheet, the dashboard, and the report, were the pillars of the solution we wanted to build. Throughout this process, we met with users every week to see how things were going and get feedback. Because these were all manual, we were able to take that feedback and immediately iterate on the format and the data, creating the really tight loop from idea to learning that we were looking for. Now, the tricky thing here is that obviously people are going to love it if you do their work for them. But we needed to turn this into a software product, not a concierge product. So we set very strict ground rules: we would only do things in a standard way, for all customers, that we knew we could automate and code later. So how did it work out? The outcome of this process was over 600 submittals tracked across five projects over four months. The projects included a half-billion-dollar highway interchange in California, which saved them literally millions of dollars, all down to the spreadsheet, dashboard, and report. This was way more than we thought we would get out of the process, and we learned so much. We were able to validate our product solution and quickly iterate whilst real customers were using it. This meant that before the first line of code was written, we knew exactly what needed to be built and how it should work for it to deliver magic to our customers.
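The overdue tracking that the dashboard surfaced boils down to very simple logic; here's a minimal sketch, where the item names, dates, and column names are all hypothetical, just to show the idea of flagging anything still pending past its due date.

```python
from datetime import date

# Hypothetical rows mirroring the columns a submittal-tracking sheet might have
submittals = [
    {"item": "Steel brackets", "due": date(2017, 3, 1),  "status": "approved"},
    {"item": "Fire doors",     "due": date(2017, 1, 15), "status": "pending"},
    {"item": "HVAC ducts",     "due": date(2017, 4, 10), "status": "pending"},
]

def overdue(rows, today):
    """Anything still pending past its due date is stalling the project."""
    return [r for r in rows if r["status"] == "pending" and r["due"] < today]

today = date(2017, 3, 20)
for r in overdue(submittals, today):
    print(f"{r['item']}: {(today - r['due']).days} days overdue")
```

The surprising part wasn't the logic; it was that most users had never had the data in one place where a query this simple could run over it.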
It also built the deep customer knowledge and empathy on the team that we were looking for, which meant that everyone was really focused on the details of the problem. Not only was that crucial to getting the product out quickly, but it also allowed us to discover a whole new product area we weren't even considering when we started. During our customer discovery process, one of our engineers spotted an opportunity that was way beyond our planned scope. And this is my second story, so let me tell you that one very quickly. The thing to know about people in construction, like Ahmed here, is that if you ask them what they hate about their job, they'll say, "Nothing really, I love my job." They are wildly accommodating people. Ahmed said this right after walking us through one of the most terrible admin processes you've ever seen, one he has to endure every day. So after a bit of experimentation in figuring out how to get people to reveal where the true pain points were, we landed on a question that was really effective. We'd ask: if you could clone yourself and convince that clone to do the part of your job you like the least, what would you have that clone do? This opened the floodgates and helped us see what we should really focus on to improve their lives. Next time you're trying to figure out how to improve someone's productivity, I really recommend asking this question. Almost unanimously, they mentioned something that we'd not even thought about. At the start of a new construction project, someone gets the incredibly crappy job of having to read through a 3,000-page specification document and type into a spreadsheet every single submittal required, creating what's called a submittal log. This is incredibly tedious but vitally important. On average, it takes two weeks of someone's life to complete. It's a task so hated, it's often given to the person who annoyed the boss the most on the previous project.
Once we'd heard this a couple of times, one of the engineers said, "Hey, this is actually a pretty solvable machine learning problem. Not sure it would be that hard to do." But we had a challenge: we had no ML expertise on the team and a crazily tight deadline to meet as it was. This felt like a huge risk, but also a massive opportunity. So we wondered how we could test whether customers would find this valuable. And luckily, we'd already worked out a process to answer that question. Enter the meatbags. I sat down with the engineers and wrote out a human-readable set of instructions for how to extract submittals from a specification doc. We set the constraint that these instructions had to be simple enough to be followed by anyone without any prior knowledge of construction, and easily replicated by a simple machine learning algorithm. We reached out to a bunch of customers and said: hey, we have this fantastic new tool that extracts all of your submittals for you. You just send us your specs and we'll send you back a spreadsheet. Oh, it's still an early-stage product, so it might take a couple of days to get back to you. At this point, we hired data entry contractors to do the work, since we were all busy with the other meatbag processes. This was pretty cheap: a couple of people at $20 an hour, simply picking up specs sent to an email address and responding with spreadsheets of the results. The result was fantastic. We heard literal shrieks of joy when people saw the results and realized they'd just got two weeks of their life back. Not only did this validate the idea, but we also learned incredibly valuable insights on things such as precision versus recall. It turned out that people were okay with us listing more things than were maybe needed, as long as they could filter them out quickly. But we could never risk missing any important items.
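That precision-versus-recall trade-off can be made concrete with a small sketch. The item names and counts below are hypothetical, but they show why over-extracting (lower precision) was acceptable while missing an item (lower recall) was not.

```python
def precision_recall(extracted, required):
    """extracted: items a pass over the spec pulled out; required: the ground truth."""
    extracted, required = set(extracted), set(required)
    true_positives = len(extracted & required)
    # Precision: of what we listed, how much was actually needed?
    precision = true_positives / len(extracted) if extracted else 0.0
    # Recall: of what was needed, how much did we catch?
    recall = true_positives / len(required) if required else 1.0
    return precision, recall

# Hypothetical spec with 4 true submittals; our pass over-extracts 2 extra items
required  = {"bricks", "doors", "bolts", "paint"}
extracted = {"bricks", "doors", "bolts", "paint", "caulk", "primer"}

p, r = precision_recall(extracted, required)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=1.00
```

Users can cheaply delete the two spurious rows, but a missed item could delay a project, so the target was recall of 1.0 even at the cost of precision.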
The other great thing was that we were actually building the labeled training data that we needed for our machine learning algorithm. So we were getting the vital insights that made the product faster to build, and growing more confident that we had something the market wanted and was willing to pay for. With these two things combined, and only four engineers, we were able to learn, experiment, and launch a production-ready solution in six months. This was a paid add-on to our core solution that, from day one, accounted for a large increase in company revenue, evidence that we'd built something our customers really needed. One of my favorite things was that our product was featured in a list of the most interesting advances in construction technology that year, alongside this robot that builds walls. What I love about this is that the robot was an incredibly complicated piece of technology solving a relatively simple problem for humans. We'd built some relatively simple technology to solve a very complex problem, and initially we powered it using humans imitating machines. So that made us feel very smug. Now, this was not all sunshine and rainbows, so I want to share some of the mistakes we made that you can hopefully learn from. We didn't have an exit strategy from those concierge MVPs. The idea was that we'd start tracking some submittals manually and quickly replace ourselves with code as the engineers got up and running, maybe one or two months max. Then things got delayed, and we were left hand-cranking this process for months. We didn't want to leave our customers in the lurch, since these were mission-critical processes and we'd promised the customers we'd support them, so we ended up having two jobs for that time. If you're going to do this, think about what the exit strategy is if things get delayed or you decide not to pursue the product. How can you do right by yourselves and the customers?
We did a great job of having the engineers involved in the research, but once the concierge MVPs started, we decided that they should be focused on writing code and we'd just share the knowledge. At this point, we could already see cracks starting to form in terms of customer empathy. There were mistakes we made that could have been caught sooner if the engineering team had had at least a little first-hand experience of running those concierge MVPs. If we could do this again, we would have had engineers spend at least a couple of weeks actually being the human in the loop. We also only brought in marketing and sales at the very end of the process, which meant it took much longer than it should have for them to understand the value proposition, and therefore how to position and sell it to the market. Having at least a product marketing manager involved in the discovery and experimentation would have saved us a huge amount of coordination effort and rewrites when it was time to take it to market. So to recap: if you want to do experimentation in zero-failure environments, build a cross-functional team that cares about the customer. Learn about the problem together. Treat everything as an experiment, not just the product, but the processes you're going to follow and the roles you're going to play on that team. Do the least amount of work needed to test your hypothesis, and use humans to power the first functional MVP if failure is not an option. The last point I'll leave you with is that experiments come in many shapes and sizes. You can do this at any stage of development, be it a new idea or something that's been in the market for a while. I just encourage you to think about what the next most important thing you're trying to learn is, and what's the least amount of work you can do to learn it. Please feel free to reach out to me with any questions, comments, or if you'd like tips on how to implement this process.
I'm also currently offering free coaching sessions to people starting out or looking to transition to product careers, as I know how hard that can be. If that's of interest to you, feel free to reach out by email, Twitter, or LinkedIn. Thank you so much for your time.