I'm Steve Sanderson. I'm really glad to be here. This is my second chance to speak here at Lone Star. When I was here a couple of years ago, I had a pretty different job with a pretty different company. Back then, I was with a company called FiveRuns. I can see at least two people here who have heard of that company. It's no longer around, but I am. I've moved on to something else that I'm honestly excited about in a whole different way, and that's what I'm here to talk about today.

I'm working at a small startup here in town called Food on the Table. As you can guess, it is not a technology company; we do not deliver products to technology folks. What we do is deliver a service to moms and dads and families and singles — whoever needs help with meal planning, saving money on it, and eating healthy, nutritious meals, but doesn't want to spend the couple of hours a week that kind of work takes. They want to know what's on sale at their grocery store. So we pull in a lot of data, we integrate it, and we deliver a personalized set of recipes and options to folks on a regular basis. It is a consumer product — a consumer service in Austin, which I'm very happy about. But we do things pretty differently — a lot differently from pretty much every other startup I've worked at before — and I wanted to share some of that.

Let me frame it in terms of pain. I've worked at startups before, and I think a lot of you have worked at companies where you've had this experience: you do great work, you produce really good code, and product management — assuming you're not doing product management yourself — gives you direction and guidance and feedback on what features to work on, features that represent what the market wants. You're a great team, you go off, you build really cool stuff, you deliver it, you wait for people to beat a path to your door, and sure enough, the first person who does starts to tell you: yeah, you know, it doesn't quite work the way I need it to work. It's really cool that you guys built this shiny rotating cube here, but I don't really need that. What really hurts right now is this problem over here. So you don't get feedback you can use until well after you've delivered the product. At that point, you start to scramble. If you're lucky, you have a heart to heart about what you did and didn't learn and where to go. If you're unlucky, you start the blame game: product management and product marketing and marketing people blame the technology guys for not delivering something slick, the technology guys blame the other people for not telling them what to build, and it degenerates into a lot of not-very-useful conversation and argument.

We do something a little different. The way I tried to capture that for this talk is: get your facts first, then you can distort them as you please — or, why I love continuous learning and continuous deployment. This is just a piece of the larger thing that we do. But with continuous learning and continuous deployment, the idea is that we are strongly metrics driven about how we choose what to do and how we know whether what we're doing is useful, and we do it in increments that are incredibly small. It will probably seem very familiar from a process point of view if you look at it as just how you might do XP or some agile practices.
But the difference is it's applied across the business. That slide is purposely faded, by the way — it's not a problem with the projector. There are two approaches that I think we've all had the chance to live with. One is very well organized, very planned, everything very structured up front. Product management — not to diss product management, but they're the guys I get to pick on for a moment in this conversation — organizes and creates a very good model of what is going to be valuable and deliverable to the customers, and then off we go to build that. The other alternative is chaos: we just start cranking crap out, we don't know what's going to happen, and we don't have a lot of structure involved. It seems like those have been the two alternatives. We've all seen XP and agile, and those give us an alternative for improving things within the development life cycle. But it's bigger than that. What if there was another alternative? What if you had a chance to grow organically, based on feedback, each time you had to make a decision? And the decisions you made were very, very small, and you were actually driven by data as opposed to opinions?

So, in a real quick way, what I hope to give you is a brief introduction to how you can continually run live experiments with your users to see what works, gather more metrics than you know what to do with at the very beginning — which is counterintuitive — and continually deploy changes to adapt to those learnings. I'll mix in some stories as we go about what we did and what results we achieved.

Now, I want everybody to read this closely. Actually, I'm not going to talk about this slide. I put it up there to show that there is a larger context that everything I'm talking about fits into. But instead of having you attempt to read it — because it's actually out of date; I rewrote my talk this morning — what I do want to do is tell you a little story about how we got started at Food on the Table. We went and talked to lots of people, and they gave us lots of good feedback about our idea about meal planning. If you go talk to a mom and ask, is it painful for you to do meal planning with kids who don't like this, who don't like that, who have to go play soccer — and you say, we'll do in ten minutes what you normally spend two hours doing, if you're lucky enough to have two hours for it, and we'll save you money in the process — well, guess what, nobody says it's a bad idea. Everybody loves it. It's a little bit like asking, do you like puppies? Sure. Or kittens, for those of you who don't like puppies. So that's pretty easy. Focus groups are great, but they only tell you so much.

So what we did is turn this backwards. I'm a software engineer — a developer — by trade. The guy I'm partnered with is a marketing, business, MBA kind of guy by trade. So my tendency is to want to build shiny objects. But what we did instead was actually figure out how families do meal planning. We went and met with a mom and convinced her to be our first customer. We called her on the phone, found out where she shopped and what she likes to eat. We pulled together a bunch of recipes off of the Google, cut and pasted them onto pieces of paper, and printed them out. We met her at Starbucks, and we proceeded to simulate the whole process of doing meal planning for her. There we were at Starbucks with two laptops and a printer.
By the end of it, she ended up with a nice meal plan that she went off to shop with. Then we met her again at the same Starbucks the next week, after getting some feedback from her on the phone, prepared another meal plan, asked her if she liked it well enough — and she did. So we asked her to pay for it, and we got ten bucks out of her for the service. The reason we did that is that our fundamental hypothesis was that this was a valuable service. If we couldn't get somebody to pay us ten bucks to have two grown men spend hours and hours and hours doing meal planning for a small family — if we couldn't get someone to pay us ten bucks for that — then we were not going to make any money off of this service. We were in the wrong business. Now, we were never going to deliver the service as a customized personal valet, with grown adults at the other end of the phone delivering meal plans to people. But we knew that was the most valuable version of the service we could possibly deliver. So we tried to deliver it and see if we could get results. We did — people paid for it. So it became time to scale.

Now I'm going to jump to where we are today, and I'll be jumping back and forth. One of the first things we did is we really had to understand what our important business metrics were. And this is a great crowd to start talking about business metrics with. By the way, that chart is meaningless, I just found it somewhere. I had another one that talked about gross sadness or something like that versus gross domestic product, but I thought that was a little too glib. Business metrics: probably not the crowd that's going to jump up and down and get really thrilled about business metrics. The reason I start with this slide is that in order to know whether the decisions being made about what to work on next are legit, or just fantasy and opinion, you've got to understand why you would choose one thing to do over another. For us, the business metrics are pretty basic. We want people to come back every so often. We want them to come back so many times. We want them to pay such and such an amount of money. We basically have some idea that we're going to make a business out of this. Whether it becomes a business that gets sold to Food Planners International for $100 million in a year, or a lifestyle business that I do for the rest of my life, is really irrelevant. Pretty much like any kind of development activity, you've got to know what the goal is and work backwards from there. That's what the business metrics represent.

Now that you've got these business metrics, you need to look at the work that you're doing. At the very beginning, we had a process going where Manuel and I literally sat down with the newspaper and with the Google, and we cut and pasted recipes onto pieces of paper. We actually got a printer at some point. We serviced one customer, then two customers, then three. At some point, we began to encounter obstacles. It became really expensive for us to drive to Starbucks every time we wanted to talk to a mom — not a scalable model. That was the biggest obstacle in our process at the time. So for us to improve the metric — number of customers — we had to look for the biggest obstacle in the current process, which was time spent in the car. Well, there's this cool thing called phones and screen sharing. So now we're on to three or four customers, and we're collaborating with a customer over the phone with screen sharing.
And we advanced to Google Docs, where we would cut and paste copies of meal plans and recipes while the customer was on the other end of the phone looking at a URL we had emailed them. We had not written a line of code yet, but we had revenue. We had already started iterating on improving our business metrics.

So, that is an almost clichéd photo about the biggest obstacle — I should have put it in the little black frame that says happiness is something you can achieve. The point is: look at your process at any point in time as it relates to the metric you're trying to improve, find the biggest obstacle, and start focusing on removing it. That pattern, if you will, has held true for the last year. Every time we get confused about what we're doing, we stop and apply it. Why are we working on this? Is this the biggest problem we've got right now? And honestly, sometimes it's not, and it's because we've gotten off the path.

All right — just in case you're not clear, we aren't doing anything with dogs or tape measures. The idea here is that while almost everywhere else in our company we have this notion that we never want to do more work than is necessary, when it comes to metrics we have a different rule. We instrument our app, and everything we've done, in an unbelievable number of ways. Let me rattle off a few examples. When we first started, we were keeping track of the major events people would do — going to pages, creating things, adding things, deleting things — and we started writing them all to an event log. At some point we said, you know, there are these very cool services: there's KISSmetrics, there's Google Analytics, there's Mixpanel. So we started routing a lot of that event log data — what we called event log data — to KISSmetrics specifically. We did that for a while. We also kept some of it local, and we put other stuff into Google Analytics. What we discovered — and this is part of my promise to give you specific technologies and techniques — is that we had an unsolvable problem: our data was distributed over multiple services. Google Analytics, KISSmetrics, Mixpanel at one point, and our local data. We could not reconcile them. We couldn't reconcile them because the moment the data left our scope, it was transformed into some locally optimized form. Google Analytics keeps our data in a form that works for Google Analytics; it does not work for what we need to do when we have a new problem to solve. Same thing with KISSmetrics: you can't actually get inside their data store and re-examine the data in a way that you ultimately control. So after a lot of pain and a lot of missteps, we decided that no event data — what we call metrics data — would ever leave our environment unless it had already been copied into our local store. We are the master for all of it.

The big motivation for doing this is that since we are so metrics driven, we have this flow — I'll go back and tell you how this flow works. Basically, we will find some problem in the metrics, some problem that we want to improve. We will go back and examine the historical data and slice and dice it in a new way. Then we'll turn around and say, now that I understand the problem, I think I have a hypothesis about how to fix it. We'll make the minimal change to try to fix it, and then we'll monitor to see if we actually improved the results. Because when we capture our data, we do not know how we're going to use it later.
And because all of these other services do transformations on that data, we had to keep the data locally. And because we were never able to reconcile them consistently, every time we had a really big problem we would spend hours trying to figure out whether our data, their data, or some combination of other people's data was correct when it came to the key metrics we were trying to fix. We spent a ton of time doing that. KISSmetrics is a neat product — I've been a big fan of it for a long time, and I obviously have some experience with metrics products and how critical they are. But honestly, I'm not really wild about pushing all my metrics data out into the cloud, even though the company I was with in the past was in exactly that business. So I have a particular view on that.

All right, so: measure everything. A last couple of notes on that. One little tip — we were using jQuery, and we had some code to keep track of click events, whether those clicks did things local to the page or actually turned the page. We discovered, as many of you probably have, that there's a race condition in the browser. When you fire an asynchronous request to log an event at the same time that an HTTP request goes out to fetch a new page, it's a race about which one wins. And at least 15% of the time for us, the page won and we never got the event for the click. So we would see people who were on page one and then on page two — we knew they were on page two because at the controller level we logged that event — but we never saw an intermediate event that said the user clicked the button. That's because the browser terminated the logging request before it had a chance to complete. So what did we do? We stuck parameters on the end of the HTTP request. Now we know how to turn any HTTP request into an event that says the user clicked the button that was on this page. It's made life a little simpler. I'll tell you, it's really frustrating when you depend on the data and it's not there.

All right, I mentioned slicing and dicing historical data. That's been key for us. We accumulate a tremendous amount of data. When we originally started out — I love CouchDB, I think it's really cool, I love key-value pairs, I want to play with them — I several times wanted to go down the path of dumping all of our data into really nice, slick key-value pairs. The truth is, with our credo of doing the minimal required to get it working, I backed off of that a couple of times and we just dumped the data into MySQL. Is it perfect? You know what, for what we need right now, it's perfect. Will it be perfect later? No, but we know how to migrate data, so it's not a big deal. So right now we have a ton of data in MySQL that represents everything every user does. That includes things like checking checkboxes, clicking radio buttons, even scroll events — so we know a lot. How do we analyze it? I heard that — yeah, good point. I keep wanting to find a tool that will do all of that for me. There's a guy here in town, Ash, who's got some great ideas about that. But at the end of the day, what do we use? A pretty embarrassingly simple set of tools: MySQL and Excel. We write lots of little queries and dump them out, and Manuel does an incredible amount of slicing and dicing in Excel.
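To make that concrete, here is a minimal sketch of the two pieces just described — a local events table that we own as the master copy, and the trick of carrying a click event as a parameter on the next page request so the browser can't drop it. The model, column, and helper names here (Event, current_user, the event parameter) are hypothetical illustrations, not Food on the Table's actual code:

```ruby
# app/models/event.rb -- a locally owned event table (assumed columns:
# user_id, name, properties, created_at)
class Event < ActiveRecord::Base
  serialize :properties, Hash

  def self.record!(user, name, properties = {})
    create!(:user_id    => user && user.id,
            :name       => name.to_s,
            :properties => properties)
    # A copy could be forwarded to a third-party service (KISSmetrics,
    # Google Analytics, ...) here -- but only after the local master
    # copy has been written.
  end
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter :record_carried_event

  private

  # Links that navigate away carry the click event in the query string, e.g.
  #   /grocery_list?event=clicked_grocery_list_button
  # so the click is logged server-side on the next request instead of racing
  # an asynchronous JavaScript call against the page unload.
  def record_carried_event
    Event.record!(current_user, params[:event]) if params[:event].present?
  end
end
```

The design point is simply that the local write happens first; anything sent to an outside analytics service is a copy of data that already lives in your own store.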
And every time we've been tempted to move something out of Excel into hard-coded views that we build, we have to stop ourselves, because the only ones we should really migrate into code are the ones we run so often and so frequently that we've proven we're going to stick with them. Those we should move. But honestly, that's not our biggest problem, so we're not fixing it. We also continue to use Google Analytics for some things. As I mentioned, we were using KISSmetrics; we retired that. We are using KISSinsights to gather qualitative data — I always have to be careful saying that — qualitative data. Nice surveys, though most of this talk is about quantitative data. There's a whole long discussion about when to use qualitative data to speed up learning when you're only gathering quantitative data. And finally, we had been using Mixpanel, but we retired that too.

So: formulate the hypothesis. Everybody knows how to do that — I'm assuming you know how to formulate a hypothesis, and if not, there are smart people who can tell you how. So you've got a hypothesis. Now you need to test it with the minimal change, and here's the interesting part for me. You know what obstacle you're going to attack. You've sliced and diced the historical data. You've got a hypothesis about what's wrong. Let me give you a concrete example. We track retention on users: we know if someone has returned once, twice, three times, X times. We were not happy with how many people were returning for a second visit. So we sliced our data in a new way, and we determined that the users who did return a second time were strongly correlated with people who had added a recipe, looked at our recipe catalog, and printed out a grocery list. So we're like, hot damn, let's make sure everybody does that. Me being really naive about statistics, I was like, well, can we just put all of that on the first page? And it's like, no, no — there's a difference between correlation and causation. So we then looked at the flow for first-time users, step by step. Where are people getting stuck? What we saw was that the biggest percentage of people were getting stuck at the step of going to our grocery list page, which is the last page in our site. So that's our biggest obstacle to solve. We used the business metric — we want to improve short-term retention — we looked at our metrics data, re-sliced it, and said our biggest obstacle to that right now is getting people to go to the grocery list page.

So then, as a team, we started to brainstorm: how do we solve that problem? My suggestion was a radical redoing of the flow — redo where the pages are, completely change it. Other ideas we had were — let's see, what were some of them — improve the visual design, right? People aren't going on to the next page because the current page is kind of ugly; we'll make it more attractive and beautiful. Or maybe if we pull some of the data from the next page, from the grocery list page, up into the current page, that will motivate people. They'll see what's there and say, damn, I want to go to that next page and see all the cool stuff. Then we did this one thing that I think has become so valuable for us. There's one person on our team at any given point in time who seems to be the one who can punch a hole in a good idea. I don't know about you guys, but on most teams I've worked on there's at least one person like that. And it rotates.
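As a quick aside before coming back to that: here is roughly what one of those funnel slices looks like as a one-off query against the kind of local events table described earlier. The table and event names are made up for illustration; in practice the output of queries like this got dumped into Excel for further slicing.

```ruby
# Hypothetical funnel slice: of the users active in the last 30 days, how many
# distinct users reached each step on the way to the grocery list page?
sql = <<-SQL
  SELECT name, COUNT(DISTINCT user_id) AS users
  FROM events
  WHERE name IN ('viewed_meal_plan', 'added_recipe', 'viewed_grocery_list')
    AND created_at > NOW() - INTERVAL 30 DAY
  GROUP BY name
SQL

ActiveRecord::Base.connection.select_all(sql).each do |row|
  puts "#{row['name']}: #{row['users']} users"
end
```

Numbers like these are enough to see which step loses the biggest percentage of people, which is how the grocery list page showed up as the obstacle.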
The trick we seem to have come up with as a group was: brainstorm some ideas, understand the problem, brainstorm some solutions, and then turn to this guy. Hey Fred, this is our problem, and this is how we're going to solve it — we're going to build a big new flow, rework the pages, write new controllers, and it's going to take us four or five days. And this guy turns to me and goes, well, why don't you just put a popup on the meal plan page that says, be sure to go to the grocery list page? Now, that's a really ugly user experience. But the key thing he remembered, and I had forgotten, was that the point is not to implement really cool stuff. The point is to test your hypothesis. You can test a hypothesis really, really cheaply and with minimal effort — which is a lot different from solving the problem in a really elegant and beautiful way.

There's one more thing we had to do that I forgot to mention: we had to check our egos at the door, because that meant every developer on the team was going to be in the position of doing something like putting a really fugly popup on a page that says, hey, don't forget to do this. The reason we do that is we want to see if that popup changes the metric results for people in that alternative. We're going to do a split test between the new thing and the old thing. Does the new thing, with that ugly popup that gets their attention, actually get us a better result at moving people on to the grocery list? That took all of 20 minutes. My solution would have taken days, and some of the other solutions would have taken days too — we would have had to get a great designer in there. We wanted to test the hypothesis, not the elegance of our solution.

So we did that, and the results were interesting. In this particular case, we did move more people on to the grocery list, but it did not improve our short-term retention the way it should have. Okay, so our hypothesis was not right. I could have spent time and money having the team spend a week building a new flow and a really beautiful layout and all of that, and then a week later gotten enough data to tell me it didn't work. Instead I spent a few hours, then a few days running the test, and sure enough the result was: that's not our problem. That's not what's keeping people from retaining. I don't know about you, but that buys us a lot more life before our startup runs out of money, and that makes me happy. So the key thing there: you have to check your ego at the door, because as a team you're going to be putting stuff out there that people will call and email you about and say, this is really ugly, would you stop this? We do get those calls and emails now. And the answer is: I'm sorry, we'll try to move you to an experiment that's prettier.

All right, so: isolate the test. You can see this test here — he looks really sad and lonely, he feels isolated. This is a complicated concept that I did a really poor job of communicating in my notes (the notes will be online later). But the essence of it is that once you start running multiple tests in a series, testing different hypotheses on different parts of the user experience, they're going to start bumping into each other. One test might affect the other.
If you do what I've done, which is to drop one test right in the middle of another — remember, with split tests you have, say, two alternatives, a fork in the road, A and B. If on the B path I stick another test that suddenly adds another step for some of the people in the second test, then one user could have one step to get to the conclusion and another user could have four. That is not an apples-to-apples comparison. You can't look at those test results and draw conclusions about your hypothesis, because everybody who went down the other path hit so many more steps and problems — your test itself is wrong. This is a little bit of an art, because I haven't been smart enough to figure out how to make it a science, but I work with enough smart people who can look at it and go: if we run these things together and then you change the data, you're going to screw this test up. So make sure you isolate your tests. It hurts their feelings, but you need to do it.

All right, our testing solution. This is our testing solution — no, our testing solution is these pieces of technology. We use Vanity. I know there's A/Bingo out there; I haven't used A/Bingo, we looked at it a while ago. Vanity is solid. There are some tricks with Vanity — let me rattle off a few of them in case you've heard of it. Essentially, the problem is you need to be able to provide a point in the user flow where a user is given an alternative. They either go down path A or path B, and you keep track of that so that when the user returns, they have a consistent experience. Let me give you a concrete example. You're in our product, you come to the meal plan page, and you're in the test about going to the grocery list page. 50% of you will see the page as it exists. The other 50% of you will see that stupid popup when it's time to move on to the grocery list. You need to see that same popup the next time you come back; otherwise my test is really, really meaningless if it keeps flipping back and forth. It's not session based, it's user based. Vanity does a good job of keeping track of that for you.

So we used it out of the box for a long time. There are A/B tests — split tests — that people run at the page level, so different users get different pages; at the feature level, whether features are enabled or not; buttons enabled, or in our case popups enabled. Vanity works for all of those cases. There are some easier out-of-the-box solutions for doing page-level split testing — Google Website Optimizer is one, I know KISSmetrics has something for A/B testing that I haven't looked at, and there are others. But I have not found an A/B testing solution living outside of our app that gives us the kinds of control we need; just an embedded JavaScript snippet isn't sufficient for the kind of A/B testing we need to do. We've recently modified Vanity to allow for non-50/50 splits — we have tests where we don't want everybody included, maybe just 10%. One other thing: Vanity right now expects to measure just one metric. But as I mentioned earlier, when we run, for instance, that popup test to move people on to the grocery list page, we're actually trying to affect two metrics. One is the number of people who return for a second visit, which for us is at least a week later — so that's a long time to wait.
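Before getting to the second metric, here is roughly what a Vanity split test like that popup experiment looks like in a Rails app. This is a simplified sketch with made-up experiment, metric, and partial names, and the exact API varies a bit between Vanity versions — it is not the production test:

```ruby
# experiments/grocery_list_popup.rb -- the experiment definition
ab_test "Grocery list popup" do
  description "Does an ugly reminder popup move more people to the grocery list?"
  metrics :grocery_list_views
  alternatives false, true          # control vs. popup
end

# experiments/metrics/grocery_list_views.rb -- the metric the test measures
metric "Grocery list views" do
  description "User reached the grocery list page"
end

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  use_vanity :current_user          # assignment sticks to the user, not the session
end

# app/views/meal_plans/show.html.erb -- show the popup only to the test group
<% if ab_test(:grocery_list_popup) %>
  <%= render :partial => "shared/grocery_list_reminder_popup" %>
<% end %>

# app/controllers/grocery_lists_controller.rb -- record the conversion
def show
  track! :grocery_list_views
  # ...
end
```

The pieces that matter are the experiment, the single metric it tracks, and the per-user assignment; the report Vanity generates is built around that one metric, which is exactly the limitation being described here.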
And the other metric we're trying to affect is how many people move from the meal plan page to the grocery list page, which should happen in moments — within 60 seconds. So I need to measure two things, and Vanity lets me measure one. By measure, I mean it has built-in support for measuring a single metric: it has a notion of showing you a graph of how well you're doing and determining who the winner is. To compensate, we've built out our own set of views where we show our event data combined with Vanity's notion of A/B tests, so we can look at additional metrics beyond what Vanity provides. And at the end of the day, when we're running a test and it starts to look like a conclusion is coming, unless it's extremely obviously good we will go back and do some slicing and dicing of the data again to confirm it. Another reason we log everything. You will never be surprised. Log everything — it's crazy, but you should do it. So that's our testing solution. We use it, we live with it, we're good. We had some stuff in ActiveScaffold; we've since added some simpler pages that aren't ActiveScaffold for showing data. But that's the story of our testing solution.

All right, so now: building the test out. I think I've covered a lot of this already, but there are two problems you've got. When you're building a test, you need to make sure you include the current alternative — when you're doing a split test, you have to compare old against new. But you also have to understand which specific piece of the user's actions you want to measure, and you have to understand the business metrics you want to measure. As I mentioned in that example, we wanted to measure users who returned, and we wanted to measure users who advanced from one page to the next. Those are the metrics we kept, and we designed the test so that 50% used the old way and 50% used the new way.

I don't know a good place to mention this — I'm not sure what my next slide is — no, I'll mention it now. One of the things we've discovered is that when we find a problem and we think we have a solution — say, our problem is we're not advancing people from the meal plan page to the grocery list page — sometimes, as a team or as individuals, we will be absolutely convinced that we know the exact right solution to build. And we will have a strong temptation to go build the final, complete thing the way we think it should be. Back to the point I made earlier: the purpose is not to build out the code as you think it should be in its end state; the purpose is to test the hypothesis. So you have to make sure you do the minimal required to test the hypothesis. It takes a lot of effort as a team to keep each other in check on that — the urge can be to build out some really cool stuff.

Let me give you one of my favorite stories. Right after Manuel and I got funding and decided to run this company for real, we went and hired a few developers, so that it wouldn't be my code in production anymore — it would be real code. We sat down, we understood the process of how people do meal planning — we had seen it end to end, we had done it ourselves, and even the developers joining us got it — and we said, these are the changes we want to make. We're going to build a system to do meal plan generation. Up until now, it had all been manual. The team spent four weeks on it.
It was a very beautiful model based on tags, correlating tags between recipes and sale items and ingredients and all of this stuff. We put all the code together, and four weeks later it was finally ready to run. I did a test run with a lot of data in the system and generated the first meal plan. I looked at the ingredients: chicken, wheat, mayonnaise, and a few other things — looked pretty good. I was like, damn, nailed it. We'd generated our first automated meal plan. Based on the sale items, it pulled the right recipe and used the right ingredients. Only then did I look at the title of the recipe it recommended. It was dog biscuits. It was technically correct — it was actually the right recipe for everything we had programmed. But we had violated one of our fundamental rules, which was to try to prove the hypothesis. Instead, what we did was build the solution. Our hypothesis was that we could do automated meal plan recommendations, automated recipe recommendations, and we spent four weeks building that. Now, in every other company I've worked at, spending four weeks off the air building a piece of code is not a big deal. In this company, four weeks off the air would be like we had all died. We deploy code easily half a dozen times a day per developer on a good day. Having spent four weeks building and delivering this, what happened was we proved the point: we had built this thing without feedback from users. We'd done an increment that was too big. We had missed some key parts of how people actually pick meals — they don't pick things that are meant for dogs when they're cooking for their kids. That's a big one we missed. We fixed it, but it taught us that we needed much smaller iterations.

Last one: continuous deployment. This is a great buzzword. Raise your hand if you've heard of continuous deployment. Raise your hand if you haven't heard of it — don't worry, we won't make fun of you. One, two. I'd say, what, like 80% of the room has heard of it? And then there are the shy people who don't want to raise their hands. It's totally the buzzword now. Eric Ries talks about it on his Startup Lessons Learned blog. Timothy Fitz from IMVU has a great article on continuous deployment — I highly encourage you to check it out. Everything he talks about with continuous deployment, I'm there with him on it. I read it and I learned from it. It just so happens that my partner in this company was Eric's partner at IMVU, so I kind of got all this stuff from the horse's mouth. But that's not enough to say you do continuous deployment. All the stuff leading up to it is what feeds the pipeline, so that you continually deploy many, many times a day and you don't go off the air, off into la-la land, thinking up what you think ought to be done for the customer. The net effect of following these essential patterns and techniques is that you've generated a series of hypotheses and the tests to check them, you've got the events you're logging, and you're constantly going to add more. Continuous deployment is how you get all of that out there.

How much time have I got? Two minutes? Negative one. All right, I'm going to wrap it up, I'm sorry. Short answer: has this worked? We've been doing it a year. We got hit by Lifehacker — a few weeks ago, a few months ago, a month and a half ago. Our number of registrations went up two orders of magnitude in an hour, and the system scaled, not a problem. We didn't actually want that publicity.
We're not ready; we haven't cracked the code — that's a whole separate conversation. We are making revenue; we're not making a profit. It's live and it's a living thing. And the net effect is that by working this way, while we found lots of things we shouldn't do, we did not write code that was useless. We did not write code that was irrelevant to the customer. We were able to test and prove what was relevant, and then lay down the real code — the real changes — that made it better for them. I personally wouldn't go back and do anything differently. And that is my proposal to you for how to avoid writing a bunch of code that gets thrown away, which is a miserable experience.

All right. I doubt I have time for questions, but I'll be happy to answer. I'll take — what happens to me if I do? Am I going to be in trouble? Are you going to get mad at me? I'm sorry. Yeah, go ahead.

[Question: I noticed it says free, so how are you making money?] We have upsells; we're a freemium model. One of the experiments we ran was three different monetization models: free, freemium, and paid. We discovered that people sign up better when it's free, but they retain better when they've paid. We don't know why. So what we've been doing is tuning how much you get for free and how much you get when you pay — like any freemium model. And people upgrade. We know when they upgrade, and we know what conditions were in place when they chose to upgrade, and they pay us $9-and-something a month.

[Question: Where do you get your sale data, and how do you get it?] There are like a dozen different ways we do it. All of this stuff is available either printed or, some of it, online. There is, unfortunately, no massive repository in the sky we can dig it out of, so we had a lot of different methods for getting it, and that was one of the things we worked on very early. The retail industry is very protective of that data — but they mail that stuff out to everybody's home every week. And Mechanical Turk is a useful mechanism, so are overseas people typing stuff in. There are all sorts of alternatives.

Other questions? The last thing I wanted to say: if working this way interests you at all, here's my pitch — I am looking for help. Please come find me. We're here in Austin. I would love more developers to work with me. Thank you.