It looks like the last few stragglers have wandered in. First off, thank you very much for coming and thank you for having me here. It's definitely a pleasure to be here. So this talk, as it says up on the slides, is about designing MVPs for enterprise customers. I'll provide a quick introduction in terms of who I am before I get into the main part of the presentation. So this is me. I work for a company called Pulse Energy. During the course of the presentation, if any of you are active on Twitter, please feel free to tweet about anything that you find interesting, any feedback you'd like to pass on about the presentation. I'd love to hear from you. I've been to India a few times before. I spent a year and a half here during the 2004-2005 period, and during that time I was fortunate to have the opportunity to work together with Naresh and a few others to organize India's first conference on agile software development. So it's really great to be invited back several years later and see how much the conference has grown. At the time of that first conference, I don't remember the precise venue, but it was a university just southwest of Bangalore on the way to Mysore, and this is obviously a much bigger occasion. So in terms of what I do, I'm the product lead at Pulse Energy. We build energy management software, working with some of the world's largest energy utilities to help them deliver energy efficiency to their commercial customers. We analyze energy use for commercial buildings and provide advice to the owners and operators of those buildings in terms of how they can improve their overall energy efficiency. For those of you who are familiar with what's going on in this space, there's been a transition pretty much across the world where smart meters have been introduced.
So digital meters have replaced analog meters for tracking things like electricity and gas consumption by major utilities. What that's meant is that there is now a wealth of data about energy consumption in homes and buildings, and that's provided great opportunities for analysis, as well as challenges associated with working with big data sets for utilities. And so that's where my company comes in. We collect all of that data, generally on a five or fifteen minute or an hourly basis from these buildings, aggregate it within our data centers and do various types of analytics across it. So it's a bit of a meld of machine learning and data intelligence as well as behavioral science, in terms of being able to bring about behavior change and help people adopt behaviors that are more in line with energy conservation. Part of what we do through the software that we provide to our customers is allow them to track and report on their energy consumption and the associated energy savings. We help them identify anomalies in their consumption. So if there's equipment that's left on in their building, if their building is behaving in an erratic way, we can help them identify what those problems are, identify the potential savings associated with addressing those problems, and help them build the business case to improve their overall energy efficiency, improve their bottom line and reduce the costs associated with their energy spend. Part of what we do is also work with the building occupants and the general public. Even for us being here in this space, we don't own and operate this building, but energy is being consumed just to provide a comfortable environment for us to exist in.
And so for people who work in, let's say, an office building or a medical building, there are ways in which their behavior is contributing to the overall energy consumption in that building, and ways in which they can consume that energy more efficiently. So we help engage those people as well. Our overall objective through energy efficiency is to try and address what we perceive as the most significant challenge facing the planet, which is global climate change. At least in North America, about one-third of overall greenhouse gas emissions come from heating, cooling, lighting and ventilating buildings. So it's a big portion of our overall carbon footprint, and if we can have an impact there, then that goes directly to addressing some of these problems. And we do it with not just electricity, but natural gas, steam, water, etc. Wherever possible, we try and walk the talk: we use the software that we build to analyze our own energy consumption and act on the recommendations to improve our own energy efficiency. So that's just to provide some context in terms of where I'm coming from with this talk. Before I proceed, I wanted to get to know you as attendees a little bit better, so I just want to take a quick poll. How many people here work for a company that is involved in building products? Okay, so that's good, most people here. How many people consider themselves to be product managers? How about product designers? Okay, is it more on the project management side? Are most people doing project management or product management? How about software developers? It looks like almost everybody is involved in building some type of product. How many people are involved in delivering products to enterprise customers? And how many people are involved in delivering products into the consumer market? Okay, so just a few. So that's good.
So this talk is really looking at some of the differences between delivering products for enterprise customers and products for the consumer marketplace. In terms of the overall context for where I'm coming from with this talk, the way that I look at things is that the product development landscape has shifted. Traditionally, building software products has been perceived as the constraint associated with bringing new products to market. I don't think that's the case anymore. I think the biggest risk is no longer whether we can build software to solve a given problem. It's really more about whether anybody will care about the product that we built at the end of the day, because frankly, the marketplace out there is flooded with great products that nobody uses. So the challenge is not building a well-polished, professional product, but building a product that will actually deliver value and capture the attention of people within that marketplace. That means the emphasis is put much more on how the product is designed, what need it's satisfying, and how successfully it satisfies that need. Some of the notions within this talk are inspired by work that's happening within the growing field of design thinking. How many people are familiar with design thinking? I'm not going into detail within this talk, but the key idea is that design has such a significant impact on the overall effectiveness of the products that we build. Frankly, when you think about it, developing software is actually quite a constrained problem. You are operating within a generally constrained set of tools, perhaps a pre-selected, predetermined set of programming languages, and you're solving problems that may have already been previously defined. Whereas designing a product is really wide open.
There are so many different directions you can go in attempting to address a specific customer's need. The implications of the decisions that you make tend to be very far-reaching and have significant implications for the overall success of your company. As a result, being able to quickly explore this significant design space is one of the key challenges facing those of us involved in bringing products to market. What we want is a mechanism, a tool, for tackling this risk directly: the risk of building products that don't meet a market need, that nobody will use. The minimum viable product is a tool for this purpose. In terms of what an MVP is, the framework that I'm using within this talk is guided by some of the thinking within The Lean Startup by Eric Ries. Ries defines an MVP as a tool that helps entrepreneurs start the process of learning as quickly as possible. The key emphasis is on learning: really trying to build the simplest possible thing that will yield the greatest learning, because there's so much that we don't understand about the consumer and about what will really engage the marketplace. So we want to be able to learn as much and as quickly as possible. A few examples to elaborate on what an MVP looks like. People may be familiar with the story of Groupon, certainly familiar with the company at this point. Where it started, its MVP was a WordPress blog where the founders of Groupon posted some deals that they'd arranged through some local retailers, just to test whether people would be interested in purchasing deals in an online forum and then cashing them in at retailers in their vicinity. What they perceived as their riskiest proposition was whether or not people were going to go for this.
So they were looking for a mechanism to deliver it as quickly as possible, and setting up a simple blog required no development and only a little bit of hosting. They were able to test out that proposition, it turned out to be wildly successful, and they've turned into one of the fastest growing companies, if not the fastest, to reach a billion dollars in sales per year, which I think they achieved last year. Is it a billion dollars? Anyone know? Sorry? Four billion dollars. So not a bad start from a WordPress blog. Another example is Dropbox, where I'm sure most people are familiar with the product. The key idea behind Dropbox was seamless file sharing, which now seems intuitively obvious, but at the time the product was first proposed, it seemed like this was a marketplace that was already saturated and that there was not significant customer demand for this type of tool. Delivering the full product would require a significant development effort. So in order to test out the product with the market, and to raise capital to support building out the full product, Dropbox launched a video which explained the promise and potential of what could be delivered through their type of online service. That video caught fire, went viral and spread the message about their product. So these are some fairly familiar stories of MVPs from wildly successful companies. Another very common example, especially for companies within the consumer space just wanting to test out a concept, is to build a simple landing page and try to do some email capture. See who's coming to the page, what kind of search traffic is being driven there, what is really going to engage. Is there some sort of market out there for this type of product? This is a fairly standard technique, and you can do this before you have an actual product.
So these are fine, and these are examples from the consumer space, and you can see how these types of MVPs might work for companies in the B2C market. But what about the rest of us that are selling to enterprise customers? What is there for us? Because really, selling to an enterprise is different, and it's different for a number of reasons. So I'm going to go through some of those right now in terms of the differences between B2C and B2B. One key one is obviously deal size. When we're selling to enterprise customers, we're talking about deals not in the tens or hundreds or even thousands of dollars, but generally deals in the millions of dollars. As a result, there's significant risk associated with these deals, and there's typically a lot more due diligence associated with a customer committing to make a purchase. The sales process is very different. For the companies that I just described, Groupon and Dropbox, the sales model is all self-service. They're marketing directly to consumers; consumers come to their website and purchase the product directly. Selling to enterprise customers rarely goes that way. Instead, it's a process that is generally based on sales representatives, outside sales, who will build a relationship with the customer and build that level of trust. As a result, each sale is a much more expensive proposition and generally takes much longer to close, so it's quite a different process. The risk for a consumer purchasing a product is generally pretty low; they're not outlaying a lot of money. For B2B, the risk can be quite high. There may be regulators involved that need to be satisfied. There's the potential for a lawsuit, or loss of brand recognition if the deal goes sideways. So there's a lot more risk at play, and as a result the purchasing decisions tend to be much more conservative. The decision maker within an enterprise sale also tends to be quite different.
For B2C, the buyer and the consumer of the product tend to be the same person. If I'm purchasing a Groupon deal, I might be purchasing it for somebody I know, but generally I'm purchasing a deal for myself. Whereas when we're operating in an enterprise context, there's normally a separate procurement group involved in assessing the offering, and they could be totally different from the group that ends up using the software product that's being purchased. For us, where we're delivering software to utilities to offer to their commercial customers, there's quite a broad division between who the buyer is and who the consumer is. The decision making process also tends to be very different. For B2C, the consumer goes out and does some research themselves in terms of whether or not this is a purchase they're interested in making. Whereas in the enterprise space, especially for large commercial customers, they may be required by regulation to have a more formalized purchasing process: tendering out an RFP, collecting a number of vendors' responses and then assessing each of those against the others. So the purchasing process, the decision making process, is all quite different. As a result, what sort of impact does that have on what an MVP would look like in an enterprise space? Because the challenges are still the same. There's still a lot of uncertainty in terms of what type of product will ultimately be purchased, what type of product will realize success within the marketplace. So really what we're trying to do is figure out what will get us to the table in order to start to have these conversations with some of these enterprise customers. And really, it's going to be more than a landing page. A landing page won't get you very far, at least from our experience in selling to enterprise customers. So the sales process, simplified down significantly, for an enterprise deal normally looks like this.
You start with the pitch to the enterprise customer. What that means is having a compelling product vision that the customer will buy into, having collateral that can be provided to the sales team to support the claims being made in the vision, and then training the sales staff so that they are articulate in presenting the vision and potential of the product. That's really there just to get the conversation started. It allows you to validate the product concept, whether or not you're headed in the right direction. Are these customers even remotely interested in what you have to sell? The second step, and this is actually going to be the primary focus of the rest of this presentation, is the demo. You've cleared the first hurdle, you've established some connection with the enterprise customer, and now they want to be able to see something; they want to know what this is that they're thinking about buying. The next step on from that would be providing them with some sort of limited access account, but the precursor to that is having a demo, and really the goal of these two steps is to close the deal. Neither of these generally involves giving them a complete product. That's one thing that's quite important to note: at least through the sort of deals that I've been involved with, you can go a long way through the sales process just by having a really convincing demo. But we have a bit of a catch-22 here, which is that we want to have an MVP, but we don't want to have to build out the entire product itself. So what does that look like? I've broken it down into five steps that we've followed in order to build up an MVP for enterprise customers. First, we want to start with the constraints that we're dealing with. What do we know we're going to be relatively confident in? Start with that. Two, we want to defer commitment on everything else.
So if there's anything that we don't have to build or decide on at this juncture, we want to be able to push that out. Three, we want to leverage external products and services wherever possible. That's what will allow us to very quickly get a convincing MVP to market. Four, we want to start simple and iterate quickly. So build something very simple, and then through the learning that we get from demoing it to different customers, expand on it and alter our MVP so that it resonates with as many potential buyers as possible. And really the key thing, from a demo perspective, is that we want to focus on telling a compelling story. We want to provide the customer with a reason to buy. So here's an example, based on our experience, of what an MVP can look like for an enterprise customer. Starting with the fixed constraint. Where we started, and I'm assuming that a good proportion of you that are building products for enterprise customers are building web applications, was to say: okay, our starting point is that we know we're going to be delivering a web application to utilities. As it turned out, even this assumption proved invalid, but it did take us a very long way through the process. That was the simplest starting point in terms of what the MVP meant, and it provided us with a foundation. What we did was build a simple single-page web application, which is something that's now becoming quite common. It provides a number of advantages from a quick iteration perspective, but the key thing is that it was static. There was no back end, but it provided significant interactivity, all done and run on the client side. In terms of these constraints, it meant that we had three tools that we were using to build up this MVP: basically HTML, CSS, and JavaScript, by virtue of this being a web application.
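To make the idea concrete, here is a minimal sketch of the kind of static single-page setup being described: screens are switched entirely on the client, with no server round-trips. This is not the actual Pulse Energy code; the view names and wiring are hypothetical illustrations.

```javascript
// Minimal client-side view switching for a static demo SPA.
// Each "screen" is just a function over canned content; there is
// no back end. All names here are hypothetical.
const views = {
  dashboard: () => 'Dashboard: monthly consumption overview',
  anomalies: () => 'Anomalies: equipment left running overnight',
  savings:   () => 'Savings: estimated cost reduction opportunities',
};

// Pure function: map a location hash like "#/anomalies" to the
// view that should render, falling back to a default screen.
function resolveView(hash) {
  const name = (hash || '').replace(/^#\//, '');
  return views[name] ? name : 'dashboard';
}

// In a browser this would be wired up roughly like:
//   window.addEventListener('hashchange', () => {
//     document.getElementById('app').textContent =
//       views[resolveView(location.hash)]();
//   });
```

Because navigation is just a pure function of the URL hash, the demo behaves identically offline or on a flaky conference network, which matters for the sales scenarios discussed later.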
This allowed us to defer commitment on what our back end was going to be. We started out as a Java development shop. What we found was that Java was not providing us with fast enough iteration speed. We knew that we wanted to switch to something more dynamic that would give us greater opportunity to iterate quickly, but we weren't sure exactly what we wanted to use. So we decided, okay, let's see how far we can get without actually having a back end. And the reality was, as it turned out, we could get very far just going with a static single-page web application. We were able to run with that for three months, and we did about 30 demos to prospective customers. It's amazing how convincing you can make one of these SPAs look. We also deferred commitment on data storage. We didn't want to worry about how the data was going to be persisted, so we just worked with hard-coded data, which was perfect from a demo perspective, because really what you want is to tell a good story in a consistent way, and having all that data canned and familiar to the sales team doing the presentation was perfect. In terms of leverage, we made extensive use of different types of web application frameworks. Most significantly, Knockout.js, which is a data binding web application framework that allows you to very quickly build interactive web applications and provides nice decoupling between presentation and data. We used Bootstrap to provide a lot of web UI controls and the look and feel. Using these types of tools allows you to very quickly build up a fairly complex web application running entirely client-side. We also leveraged hosting. Because our application was just a static site with no back end, we were able to throw the files up on Amazon S3 and serve them directly from there. So very simple, very low-cost hosting.
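The hard-coded data approach can be sketched like this: the readings ship with the page itself, and any derived metrics the charts display are computed client-side. The meter name, interval, and figures below are invented for illustration, not real customer data.

```javascript
// Canned demo data: the static MVP ships its numbers with the
// page, so there is no back end and every demo tells the same,
// repeatable story. All figures here are invented.
const cannedReadings = {
  meterId: 'elec-001',      // hypothetical meter identifier
  intervalMinutes: 15,      // typical smart-meter granularity
  kwh: [12.1, 11.8, 12.4, 19.6, 18.9, 12.0],
};

// A small derived metric the demo charts can display: total
// consumption, plus consumption above a "baseline" expectation
// (the kind of figure used to suggest potential savings).
function summarize(readings, baselineKwh) {
  const total = readings.kwh.reduce((a, b) => a + b, 0);
  const excess = readings.kwh
    .map(v => Math.max(0, v - baselineKwh))
    .reduce((a, b) => a + b, 0);
  return { total, excess };
}
```

In a Knockout.js version, `cannedReadings` would sit behind observables in a view model and the chart markup would bind to it, preserving the same decoupling between data and presentation while the data stays canned.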
Being able to leverage third-party web services also allows you to integrate significantly more functionality into your MVP without incurring significant development overhead. In terms of iterating quickly, which is the fourth step, we implemented practices like continuous deployment, so that commits were at least continuously deployed into our test environment. We did not push our changes all the way through to production, but that whole process was automated, and having it in place right from the start was key given the pace of iteration. Establishing cross-functional teams was fundamental, because so much of the iteration was happening on the design side. Being able to bring in the sales representatives, the marketing team, product management and development to build and iterate on this product quickly was key. And then we made sure to get feedback directly from customers, whether by sitting in directly on customer demos, so you could hear firsthand what customers were saying about what was being built, or by having regular meetings with the sales staff to get the latest feedback. The last part was being able to tell a compelling story. The process that we used was something we developed ourselves, which we called user journeys, where we would effectively write stories around what a user would do to interact with the software. But it was a complete journey; the typical implementation of user stories tends to be very segmented and small, based on units that can be implemented quite quickly. Whereas because we were looking to tell a story, our focus was to build a persona, look at the context that would bring them in to interact with the application, and what they would do within the application.
Normally that may span several sessions of interacting with the application, because more often than not, that's what's required. Customers don't just go in once and get everything that they need; often they'll have to go in, check something out externally, come back and validate it. So building up these sorts of workflows was, we found, hugely valuable. And what we found was that personas really worked here. I've had experience working with teams building up personas in the past, and normally people have a lot of fun producing the personas; they're great, they're interesting, but then people get onto the real work and the personas tend to get left behind. Whereas in this process, the personas were key components of our stories. They were the stories that we were telling in our demos, and as a result everybody knew who these people were. They were manifest in the system, so there was data associated with them. In this case, Bob Johnson, who's the manager of a hotel, had an account in the software and his hotels were represented. When our sales staff would go in and demo the product, they would log in as Bob and walk through the way that Bob would interact with the software. That proved to be hugely valuable for us. It exposed tons of problems where we had built up features that we thought would be useful, but then when it came to actually telling a story about how somebody would actually use them, it tended to fall very flat. So this for us was a key part of being able to provide convincing demos and tell convincing stories to our prospective customers. The key benefit of this also meant that we focused much more on workflows rather than on individual features. What was the path to value for these customers? We wanted it to be as simple and as intuitive as possible, especially for customers in our user journeys that were coming in for the first time.
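One way to picture personas being "manifest in the system" is as demo fixtures: each persona owns a canned account, and its user journey is an ordered, multi-session script the sales team walks through. The structure below is a hypothetical sketch of that idea; the specific fields and journey steps are illustrative, not the actual implementation.

```javascript
// Personas as first-class demo fixtures. A "user journey" is an
// ordered list of steps, grouped into sessions, that the sales
// team performs while logged in as that persona.
// Names and steps are hypothetical illustrations.
const personas = {
  bob: {
    username: 'bob.johnson',
    description: 'Bob Johnson, manager of a mid-size hotel',
    journey: [
      { session: 1, step: 'Open the monthly consumption dashboard' },
      { session: 1, step: 'Spot an overnight baseload anomaly' },
      { session: 2, step: 'Check with facilities staff offline' },
      { session: 2, step: 'Return and confirm the fix on the chart' },
    ],
  },
};

// In the static demo, "logging in" simply selects a persona's
// canned account; there is no authentication back end.
function loginAs(username) {
  return Object.values(personas)
    .find(p => p.username === username) || null;
}

// Journeys often span several sessions; list them in order.
function sessions(persona) {
  return [...new Set(persona.journey.map(s => s.session))];
}
```

Modelling the journey as data rather than prose has a side benefit: the same fixture drives the demo script, exposes features with no step that reaches them, and keeps every salesperson telling the same story.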
So what was the reason for coming in? We wanted to clearly articulate that, and then how they would be able to find what they needed, and then we could validate those types of stories with actual customers that the personas were archetypes of. We were able to do all of this with just a static site. No back-end functionality, no data persistence. You could test out all the workflows, and we were able to iterate extremely quickly. And none of the customers ever questioned whether or not this was a fully built out and functioning application. The nice thing about working in the enterprise context is that because the procurement process is long, you can make these types of representations to customers with the assumption that you can build in what you need by the time the deal is closed and the product is actually delivered to the customer. Has anybody had this type of experience in terms of building these very simple prototypes for customers? How many people have done something similar to what I've described here? Absolutely, absolutely. At least from what we found, ensuring that the sales team is suitably trained, that they know where some of the pitfalls are, that they know where there's a button that goes nowhere, where they can kind of hand-wave and talk about what the customer would see if that button was clicked, is normally sufficient. There are definitely pitfalls. I think we were able to navigate around most of them, and the risk was not as great as we perceived it might be. The bigger challenge was getting the sales team really comfortable with doing the demos, confident that they could go through, especially given the pace of change that was happening. Any other stories or experiences about doing something similar to this? When the sales people are in the field, they don't end up facing a lot of technical glitches, because they're not dependent on the back end, the platform, the servers and things like that.
It just makes it so portable for them to actually go out and show it without any major technical glitches. Absolutely. So dealing with unreliable connectivity. In this case, it's actually building a website that supports interactivity. There's navigation between pages. There are forms and dialogs that pop up that you can interact with. You can log in as different types of customers. We present data on charts. But it's all canned. None of it is being supplied by a back end. So it provides the illusion of a fully fledged application without necessarily having that foundation. Now, in our case, we had a little bit of a luxury, because we had an existing flagship product which did a lot of this in the back end. So we were very confident that we could deliver on the back end, but we knew that we could defer making that commitment for as long as possible. Some other comments? Go ahead. Yeah, so that's a great question. Even with this, where we're able to iterate very quickly, you could still iterate much faster at a whiteboard or with paper prototypes. We would use those as the precursor to getting this far; they would be used to decide what would actually get put into this. But we were generally delivering new screens or new features once a week, so that iteration was happening quite quickly and on an ongoing basis. Yeah, please. So the question is, is the customer aware that this is just vaporware, or are they in the dark about it? One thing is that enterprise customers tend to be quite conservative in their buying habits, and for them to have it revealed that there's nothing necessarily behind this would make them quite uncomfortable. So we generally would not expose that. If pressed, then you can do it without necessarily telling a lie. Well, we found most of the time they just bought it. They just believed what it was that they were seeing. They bought into the illusion.
And that was sufficient. Obviously, there's some risk, because we're really hedging our bets and assuming that the procurement process is going to be long enough for us to actually build out all the things that we need. But it was a risk we were taking. And the thing is, because our key objective is learning, in the event that we actually needed to kill a deal because we could not deliver on the timeline that was set up, that would be okay, because it would still be a validation of the product that we were planning to build. We were confident we'd be able to find other customers that would then be interested in following suit. So the comment was about ensuring that you have a team that is ready to actually deliver on this if the customer bites, a team that is agile-ready and can deliver as quickly as possible. In our situation, we don't have a lot of idle capacity, so it's the same team that's involved in building up the MVP as would be involved in delivering it. I mean, with landing a few larger deals, we'd be able to hire on some additional people. Absolutely. Please. Sorry, who is speaking? Actually, how about you first, because you've done so many things. So the sales team would participate in many of the product design meetings, and then members of the product development team would participate in and attend the demos that were done for customers. Make sense? Question here. When did you actually start working on the back end? Was it once you really got the deal, or was it more like after a particular logical point? The point that we introduced the back end was the point where we actually needed to have data persistence. We needed to introduce functionality into the MVP that really could not be faked.
And a big part of what we wanted to tell a compelling story about was the initial user experience, the initial user journey for new customers signing into the software for the first time. What did that process look like? And we found that it was difficult to deliver that without actually introducing a back end. So the question is, how much of a gap was there between when you got the first deal and when you had to deliver, how much of a time span did you actually have? For us, we didn't know how much time we had. As it turned out, we had eight months, but initially it looked like we might have two or three months. You don't necessarily know. But the customers are making their purchasing decision on the basis of what's there, so you're normally obligated to deliver what you've been able to demonstrate within the MVP. As the MVP became more sophisticated over time, at a certain point the risk associated with delivering it became so significant that we needed to start to put more of a foundation in place for it. So today, for this specific product, which we started on just over a year ago, we have three significant customers. Most of them have come within the last couple of months. One thing about selling to enterprise customers as well is that relatively few of them want to be the first to bite. So finding that early adopter, and then structuring the deal in such a way that allows you to talk about that arrangement, allows subsequent customers to follow. But these are large energy utilities, and this product that I'm describing is in the process of being rolled out to one of Europe's largest energy utilities. It will go out to 50,000 of their small to medium commercial customers within the next four to five months.
One thing about the marketplace we're operating in is that it's quite immature and evolving quickly, which is part of the reason we put such a high emphasis on an MVP: it was really not clear what was going to resonate with the commercial market. What kind of services would commercial customers want to interact with in order to realize energy efficiency? With our existing flagship product, we'd had the experience of building something that customers said they were interested in, but that had fairly low levels of actual usage and didn't achieve the kinds of results we were looking for. So we knew we wanted to do something a bit different, and this approach was our attempt to see what else we could come up with. Yes, absolutely. In this case there is no existing demand; rather, we're in the process of trying to create demand. Well, any demand creation activity is basically an exercise in behavior change. So what we're trying to do is foster behaviors that would encourage the use of the application, to create demand where there wasn't any before. Not in this presentation. I'm doing a more in-depth presentation, an experience report, on the organizational change that we made in order to come up with this. We underwent a company-wide pivot, and I'm going to talk about what that experience was like to go through. It's tomorrow, yeah. Just going to quickly do a time check. It's great to get all of these questions; I'll take one more comment and then move on with the presentation. I think what you just answered relates to this question: this type of methodology might work when you're trying to create demand, because if such a product already exists in the market, you might keep developing these minimum viable products while the other company wins the client. And how do you justify the cost of the minimum viable products that you're building?
I think that's always a risk, and I don't think this problem is necessarily endemic to the environment we're in, because really it's a process of trying to find what your competitive advantage is, what your market differentiator is going to be. You could be in a very well-established market and still want to do this type of iteration in order to figure out what's really going to make your product stand out from the pack, and what kinds of problems existing customers are having. In fact, our initial set of requirements came from an RFP. And for those people working with enterprise customers that issue RFPs: RFPs make it very difficult to have a strong product differentiator, because effectively the buyer is asking each vendor to meet the same list of requirements, and as a result you end up with a lot of homogeneity within the marketplace. So you still have to figure out your differentiator. It tends to be more symptomatic of an organizational problem than a problem with the notion of being driven by feedback received from customers during demos per se. Obviously you don't want to be so reactive that every little bit of feedback you get from a customer you immediately go and respond to and build; that's where product management comes in, leveling out those requirements and figuring out which ones are truly high priority. But having software that is easily demoable is highly valuable. So really what we want is for demonstrability to be a first-class concern of the application we build, as opposed to something that gets bolted on later. My experience in the past of building products is that the product gets built from a list of requirements, thinking about the end user, rather than thinking about what is going to convince the buyer to purchase.
And so bringing some of those thoughts into the design and construction process early on is, I think, actually quite valuable. So one of the things we had right from the start was the ability to reset the application, at least for demo accounts, back to a known state. That's a key aspect. Initially we didn't need to worry about this: with no backend and no persistence, refreshing the browser was sufficient to undo all the state changes that had happened during the course of a demo. That was another pitfall the sales team obviously needed to be aware of. But then carrying that through once the backend was in place, ensuring that these accounts were always back to a known state no matter what, made the software much easier to consistently demo. That's something that has generally had to be retrofitted onto applications I've worked on in the past, and it's normally been a fairly messy process. Another benefit we found, especially with this rich client approach and deferring commitment on the implementation of the backend, was that it turned out to be very easy for us to implement services supporting the functionality that had been fleshed out within the client. Because the iteration had already largely happened on the front end, the interfaces to the services we built to support it were much more stable, and we could really focus on the service design. Using an MVC framework on the client side also helped in this regard, because we had dedicated models that would communicate with specific types of services. And really it kept us focused on the user: telling the story of what was going to be delivered to the user, what the user's experience was going to be, and deferring commitment on everything else.
And really focusing on speed, having that high pace of iteration built in right from the start, because we knew we needed to be able to move quickly. In terms of the effect on the culture of the team, it also encouraged much more of a growth mindset, because it meant we had a lot more developers active in product design, and we had sales active in product design. Typically what I've seen of the notion of cross-functional teams has been focused on the delivery side. But when the key challenge and key constraint is on the design side, having cross-functional design teams is, I think, really where a lot of the benefit comes from: being able to bring together different experts and bodies of knowledge within the organization to be involved and iterate on the design very quickly. And that meant people stepping outside their comfort zone, people saying "I don't know how to design an application or a product," and providing a forum for that to happen. So again, the focus was on having T-shaped people involved in this process. We wanted people with depth of experience: technical people who could say, yes, this is technically feasible or not; salespeople saying, this is the feedback we're getting from customers; marketing saying, this is what we're seeing in the market, et cetera; all coming together and sharing their experience. So behind any MVP is a set of key value hypotheses: basically, what are the key assumptions we're trying to validate by building this product? Within an enterprise context, I think there are basically two. One is: will customers buy our product? That's really most of what I've been talking about so far, and I call it the purchase assumption: people will buy what we have. The other is: will users find the product to be valuable? I call that the value assumption.
The purchase assumption tends to be more immediate, because if nobody's going to buy it, there's not much point going forward. And it tends to be a much simpler thing to test: we can test it through demos and through talking to customers. The value assumption is more difficult, but ultimately more important, because if we can sell what we've delivered but it falls flat once it gets to market, that's going to kill any future opportunities we have. But it's harder from a testing perspective, in terms of being able to test products with customers. The key thing to keep in mind is that the buyer's perception of value, of what users will value, can be very different from what users really value. That's what I was mentioning before. So to focus on the value assumption, concurrently with the demos we were running for customers, we would do concrete user testing with the customer's end users and validate that the journeys we were creating were valuable. And we could do that type of testing with the same MVP that we built up for the demos. The big learning for us in this regard was to set up a context that would allow us to run small pilots on our own, independent of the buyer, with a set of hand-picked customers. In our case we were fortunate, because we had an existing customer base, so we were able to pull some of those customers over to trial the new product and get their feedback. But that was key as well. So to summarize, the key takeaways are: recognize that in the enterprise space there is a separation between the buyer and the end user, that there are two key hypotheses, and that we want to ensure we're testing both of them. And the five steps I outlined earlier to design an enterprise MVP are a good place to start.
Specifically: identify the fixed, known constraints you have to work with, because having some amount of constraint is essential to get going. Defer commitment on everything else; you can be surprised at how much you can actually defer until later. As I talked about, we had no back end, it was just a static site, and we were able to get a lot of mileage out of that. Leverage external products and services where possible, especially for anything that's not related to your key domain area; there are so many services out there now that it really makes it possible to assemble products very quickly. Start simple, iterate quickly, and focus on telling a compelling story. So that's it for the presentation. Thank you very much. Now, some additional questions people might have had that weren't addressed, with the remaining slides. We did use the Lean Canvas approach, and I quite like it; it was really interesting to see how it iterated over time. How many people are familiar with the Lean Canvas, Ash Maurya's Lean Canvas work? If you are starting in on building a new product, it's a great thing to try to fill in. Effectively what it does is distill many of the key considerations associated with building a new product down into a limited set of categories that fit within, I think it's an A5 sheet. Please. Whatever you are trying to put into the demo, how would you validate that in the enterprise scenario? Because it's just a demo, right? Right. The validation was generally more qualitative than quantitative, so it was effectively what customers were telling us, at least for the sales demos. With the user testing, it could be a bit more quantitative, in terms of the discoverability of features and the overall intuitiveness of the product.
But I think what you are hitting at is that one of the mechanisms for validating products, to ensure you are moving in the right direction, is to do things like split testing. That tends to work when you have large sample sizes, and trying to do split testing with the kind of sample sizes you are likely to have with enterprise products is more difficult. So as a result we relied a lot more on that type of qualitative feedback. In the back. Yeah. I think in the enterprise cycle, the enterprise is worried a lot about scalability, the non-functional side of things as well, right? Yes. So any thoughts on that? Did you work towards that in this model? So not through the MVP, but certainly through the sort of deals that we went through. There was normally a due diligence process where considerations around security, scalability and service levels were assessed. But at least in my experience, they tend not to be assessed, per se, until you're far enough through the sales process. So again, it's about having a compelling story. We were fortunate inasmuch as we were starting from a position where we had an existing product in market that we could point to as the foundation we would provide for this product. So it is definitely a consideration, but it normally comes a little further down the sales cycle. Please. I'm sorry, I couldn't hear. Did our customers ask us about scalability? Yes. So scalability is definitely a consideration, but it's not something you can necessarily demo through an MVP. It's something that, as I was just saying, we were able to talk about based on our existing product and the foundation we had established there, even though that foundation was not yet being used by the MVP we were demoing to these customers. Please. Why did we pick a web application? So that's a good question.
So the question was: why did we start with a web application instead of mobile? The answer is two-fold. One is that it was what the buyers were asking for: they were asking for web portals that they could offer to their customers. Mobile was always seen as an additional value add, and we felt we were able to meet and demo that through some of the frameworks we used. Using, for example, a flexible grid-based layout framework, a responsive layout, for web applications allows you to have a consistent web interface that, with relatively minor style tweaks, renders effectively on a mobile device. That allowed us to build a web application that was mobile-ready without expending a lot of effort trying to support, and demo on, a large variety of mobile platforms. So it did allow us to show the application running on an iPhone and show how the layout responded to that reduced screen size. It was a responsive app, yeah, but that was largely achieved just by virtue of the frameworks we were using. Yeah, and that was actually part of our MVP process: within the first three months we used four different grid layout frameworks, swapping one out after another. We ultimately settled on Bootstrap, which I would say has the most momentum behind it right now. But that was key. Oh, it was, absolutely, it was incremental. So we started with just the functionality that we needed to provide persistence for and implemented a service for that. And then slowly, over time, we moved functionality from the client to the server, and we found we could do that very quickly. Yes? That's right. No server side, no backend, no persistence, all done with canned data. The product backlog really evolved as we went along, based on what we found customers were interested in from the demos we were running.
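The incremental move from canned data to real services described here can be sketched as follows. This is a hypothetical illustration of the pattern, assuming a client-side MVC structure like the one the speaker mentions: the data source hides behind a model interface, so the MVP ships with canned data and the service-backed implementation is swapped in later without touching the views. All names (EnergyReadings, CannedReadings, ServiceReadings) and the endpoint path are invented for the example.

```typescript
// Hypothetical sketch: defer the backend by hiding the data source behind
// a model interface. Phase 1 is pure canned data; phase 2 implements the
// same contract against a real service, so callers never change.

interface EnergyReadings {
  latestUsageKwh(buildingId: string): Promise<number>;
}

// Phase 1: static site, all data canned in the client.
class CannedReadings implements EnergyReadings {
  private data: Record<string, number> = { "bldg-1": 420.5, "bldg-2": 118.2 };

  async latestUsageKwh(buildingId: string): Promise<number> {
    return this.data[buildingId] ?? 0;
  }
}

// Phase 2: same contract, now backed by a (hypothetical) service endpoint.
class ServiceReadings implements EnergyReadings {
  constructor(private baseUrl: string) {}

  async latestUsageKwh(buildingId: string): Promise<number> {
    const res = await fetch(`${this.baseUrl}/buildings/${buildingId}/usage`);
    return (await res.json()).kwh;
  }
}

// Views depend only on the interface, so the swap is a one-line change
// wherever the model is constructed.
async function renderUsage(
  model: EnergyReadings,
  buildingId: string
): Promise<string> {
  return `Usage: ${await model.latestUsageKwh(buildingId)} kWh`;
}
```

Because the front-end iteration happens first, by the time the real service is written its interface has already stabilized around what the client actually needs, which is the stability benefit the talk describes.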
And then from the user testing we were doing. To a certain extent, it really just came down to the decision-making process. Yes, within this context, yes. As product lead, that was my responsibility: working together with our product management group and the other stakeholders to chart the path for the product. Well, part of the sales collateral that's produced is a product roadmap, to show where the product is going to go. But there's not necessarily any commitment on that until after the deal is signed. So we had enough large features in the backlog. As you said, there's no shortage of ideas; it was really about what we could test and validate as quickly as possible. Absolutely. Hello. Hey, hi. By the way, first up, great talk. Thank you. That was awesome. I have a question. Say, for example, you've identified your problem. You come up with the specs for an MVP, with features A, B and C that really solve this particular problem, and now you're taking it to the market to figure out whether there's actual demand, whether you're actually solving a customer's problem. Now, when you take it to an enterprise customer, he tells you: you know what, features A, B and C are really awesome, but I want X, Y and Z. When you come back to your drawing board, you realize that, oops, my MVP maybe needs to be tweaked. And that's just the first customer. Now I go to another customer, and he still tells me: you know what, this is awesome, but I need X, Y and Z as well. My question is: what is the safest MVP path here, when you really don't have a lot of customers giving you qualitative feedback, and when you're really uncertain whether it's the right customer, just a potential customer that showed interest, or whether you should go ahead and add that feature to your MVP as well?
I think it's a difficult question to answer. I wish I could say we had a lot of rigor around that process. It was really driven by which deal we felt had the greatest potential of closing. So: where do we feel we have the greatest momentum with that specific customer? And then, how quickly could we turn around and produce something that would demonstrate the direction we would head in for that? At the same time, what we were really trying to do was validate some of the feedback we were getting from the marketplace, and we were doing that through a few mechanisms. One was an advisory board. In this case the product was for small to medium commercial customers, so the board consisted of different business owners that we would present concepts to, normally in the form of low-fi prototypes, just as a way to check whether what we were hearing was actually a direction we wanted to head in. That was our reality check. Our company is not huge, so normally we're talking to maybe 10 prospects, 10 utilities, at a time. The market is also not that huge within North America. The U.S. utility market is deregulated, so there are a lot of smaller players, but for the kind of deals we're talking about, we're really looking at a handful of large regulated utilities. And so as a result, the requirements were not really all over the map. There are definitely some differences, and part of it was really looking at: what do we have? What's our foundation? And what do we want to build upon? So for example, at the risk of being too technical within our specific domain, a number of utilities are interested in services around demand response, which means they've got peak demand management issues.
If there are demand spikes at certain times of day, they want to be able to provide financial incentives to customers to shift that load elsewhere in time, later or earlier. And we knew that was functionality that would get us in with a set of utilities. But it was not functionality we had a lot of experience with, so we did not focus on it to the same extent. We knew it was potentially out there, and we could put it on our roadmap, but it wasn't where we were going to start. We were going to start with the stuff that we knew a little bit better. I don't know, it's a bit of a rambling response, but I hope that helps. Yeah, it did give some idea. This is the same uncertainty we faced while actually chalking out our MVP, because every time we took it to customers, they would come up with X, Y and Z, and we'd be back at the drawing board asking: does our MVP truly represent a generic solution that appeals to a lot more users than just a specific set of users? So it was really difficult, and I wanted to know your thoughts. Some of it was also doing market research in order to figure out how many potential customers actually had that specific problem, so we could assess the size of the market associated with building that up. But normally, given that we were able to build these fairly superficial features quite quickly, we would take risks where we thought it would pay off. Please. Yeah. The key thing with a minimum viable product is that it's not a fully-featured product; it may not even be a usable product. It's really there to help you test the key assumptions underlying your business model. So if you see that the key risk within this product you're delivering to the financial services market is customers actually purchasing stock, closing a deal, then that would be the focus of your MVP.
If it's identifying which specific stocks to research and then purchase, if the risk is on the research side, then maybe that's where you want to focus, and you just assume the purchasing process will take care of itself, or you're going to test it at a later point. It's really about structuring the experiment around what you perceive as the most significant areas of risk, so that you can maximize learning, rather than trying to build out something simple but broad. The key areas of risk we were trying to assess were: can we convince energy utilities to buy this product from us, and is this product actually valuable to the utilities and their customers? Those were our two biggest concerns, and our MVP was fully focused on attempting to validate them. Sorry, dedicated what? No, we do. Yes. So the question was: did we have a dedicated team involved in doing this? Yes, it was a dedicated team that was involved in building up the MVP and carrying it through into its product implementation phase. Did that answer your question, or did you have a follow-on? So we may use some of the free cycles of a team that is otherwise engaged in other activities to do this. I was wondering how we would be able to balance the demand from the marketing team for taking the MVP to a certain customer demo against their priorities for the other mainstream product in that situation. I understand it's not the case in your situation, but... I think in terms of balancing demand, and this touches on what Craig Larman was talking about this morning, having the marketing representative be part of the product design team, so that they have some ownership over that process, recognize what's involved in the production and what some of the competing demands are, and have a sense of ownership over the results, goes a long way to mitigating some of the driving force behind that demand. Please. Basically, I come from the R&D team. I'm just looking...
Is there a rule of thumb, like you take so many MVPs per iteration, or things like that? Because generally, from the R&D side, we're being bombarded with MVPs, and I understand that from the marketing perspective they have their own agenda, but I don't see a defined MVP that an engineering team can take at some point, because we end up doing POCs and finally we kind of narrow it down. We can only take so much. From your experience, is there a way you channel it and give a list of MVPs to the R&D team? I'm sorry, I'm actually having a hard time hearing you. Okay. See, my concern is: is there a rule of thumb in terms of the number of MVPs that will be given to the R&D team, or is it random? Sorry, a rule of thumb about what? The number of MVPs given to the R&D team to develop. A rule of thumb about the number of MVPs that are given to the R&D team. Absolutely, absolutely. Not necessarily. It depends on the number of experiments you want to run and what your capacity is to run those experiments. For us, we had two MVPs that we were running concurrently: one that was started earlier, which is most of what I've talked about here, and another which we pivoted to in November and sold to a utility in December, and that is my core focus right now. But it wasn't like we had a lot of MVPs. It's a product, not a feature. So for us at least, it carried on over the course of several months, where we were continually iterating on it, adding features, taking some features out. There were a lot of features that got discarded over the course of the learning process. I don't know if that answers your question. To the extent possible, I was actually asking whether there is a number you project to the R&D team in terms of MVPs, but it looks like that's not the case. No. Any further questions, please? So, variations in terms of what the customer is looking for? Yeah. Yeah.
So okay, one of the things when selling to enterprise customers, and I think this is what you're hitting on, is that they often want a certain level of customization in what they're going to see. We knew that was something we needed to meet, and we needed to demonstrate that we could meet it, and we were able to do that through the MVP in a few ways. One was that, for us, delivering a product like this through an energy utility, it needed to be skinned to reflect the branding of that energy utility. So themeability was a key consideration early on; that was something we had right from the start, even at the point where it was just a static site. The other was some ability to show differentiated features for different types of customers. We were able to demonstrate that just by hiding or showing features depending on the user account that was logged in: we created some demo energy utilities, and we could show the differentiation from one to another. So we were able to demonstrate the potential for customization without needing to do all the customization they were asking for. And then within the structure of the deal, the contract signed between us and the utility, they expect that, yes, they're prepared to pay for a certain amount of customization activity in order to get it looking just how they want it. That was just part of our process. Does that help? So we were able to keep the core intact. At the same time, we had a number of features that we ourselves deemed experimental, that we wanted to test out through a demo, whether to see whether we were on the right track or whether this was an idea they were interested in.
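The per-utility differentiation just described can be sketched with simple feature flags keyed on the demo utility. This is a hypothetical illustration, not the speaker's actual code: every name here (the utility IDs, the flag names, the nav items) is invented to show the shape of the mechanism.

```typescript
// Hypothetical sketch: per-utility feature flags so the same MVP build
// can hide or show experimental features depending on which demo
// utility account is logged in.

type Features = { demandResponse: boolean; anomalyAlerts: boolean };

// One flag set per demo utility; unknown utilities fall back to defaults.
const UTILITY_FLAGS: Record<string, Features> = {
  "demo-utility-a": { demandResponse: true, anomalyAlerts: true },
  "demo-utility-b": { demandResponse: false, anomalyAlerts: true },
};

const DEFAULTS: Features = { demandResponse: false, anomalyAlerts: false };

function featuresFor(utilityId: string): Features {
  return UTILITY_FLAGS[utilityId] ?? DEFAULTS;
}

// The UI consults the flags, so feature visibility is data, not code.
function visibleNavItems(utilityId: string): string[] {
  const f = featuresFor(utilityId);
  const items = ["Dashboard", "Reports"];
  if (f.anomalyAlerts) items.push("Alerts");
  if (f.demandResponse) items.push("Demand Response");
  return items;
}
```

Combined with a theming layer keyed on the same utility ID, a single build can play the role of several differently branded, differently featured products in back-to-back sales demos.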
And so we could easily hide or show those specific features depending on the opportunity we were pursuing. I would say that actually having that flexibility built into the product from the start was also quite valuable, because we needed to have that level of customization when it came to actually delivering the product to the customer. Any further questions? All right, well, thank you for sticking it out. And I'm more than happy to talk with any of you one-on-one about this some more. Thank you.