Okay, we're back here live in the heart of Silicon Valley, in Santa Clara, California. This is the O'Reilly Media Strata Conference, and this is siliconangle.com's exclusive coverage: theCUBE, our flagship program. We go out to the events to extract the signal from the noise. I'm John Furrier, the founder of SiliconANGLE, and I'm joined by my co-host, Dave Vellante from wikibon.org. We're here with Eric Colson, who's the chief analytics officer at a company called Stitch Fix. Wait till you hear what these guys do. Eric, welcome. Thank you. Appreciate having you on. We got a chance to see your keynote here; they did the speed dating this morning, the speed keynoting. So let's get right into it. I mean, you guys built this recommendation engine, but like nothing I've ever seen before. So tell us your story. So some people have described us as Pandora meets Zappos: algorithmically chosen clothing. It's e-commerce and we offer apparel items, but the customer doesn't pick them out. Rather, we choose on behalf of the customer based on their preferences and ship the merchandise right to their home, where they don't have to keep anything they don't want. If they decide they love it, they can pay for it and send us the money, but they're under no obligation. So it comes down to having very relevant merchandise for the customer. That's where the algorithms kick in, and we combine them with a little bit of human oversight, because humans add a lot of value: human intuition plays a big role in curation, in looking at unstructured data, and in making sure this stuff is relevant for the customer. When the penalties for being wrong are severe, meaning we're paying shipping and we're making the customer unhappy if we're not sending them relevant things, you want to put a lot of resources on it. That's how we can justify some human oversight. You've got some big incentives. But so that's the interesting part.
You're not taking the human out of the equation. I called it earlier IBM Watson for apparel, where essentially you've got a machine saying, hey, this looks pretty good, what do you think? The human interacts with that, and then the machine learns from that. Is that the way it works? Let me talk about the tech a little bit. Now, I'll concede I had this ambition; I thought that maybe the algorithms were all we'd need and we wouldn't need any humans. I've since retracted that, and I've found that they are very complementary. They're doing very different things. The algorithm narrows results down to find very relevant stuff, but it still needs to be curated, and that comes from human intuition. So the human stylist is picking things that go well together, or perhaps taking stuff away that's too similar, and that's hard for an algorithm to do. Or, too, customers can provide links to their Pinterest account, which displays pictures of them and things they like. That says volumes about them, but it's very unstructured data, very hard for an algorithm to do anything with. The human has no problem extracting that information and applying their judgment to the curation of the products we're going to send. So you were at Netflix, and you were saying this morning that the perfect recommendation engine is one where I say, I want to watch a movie tonight, and it just starts playing a movie. Yeah. Like it reads your mind. So okay, you realize we can't get to that vision yet, but this is a big idea. So how does it work, or how should people envision it working in terms of that human interaction? And then, as I said before, the machine learns from that human interaction. That's right. We've found that the combination of algorithms and humans is actually not just complementary but reinforcing, meaning they make each other stronger.
So when humans don't take the exact recommendation (we recommend what we think has the highest probability that this customer will love it), and the stylist says, well, maybe not that one, and skips over it, that's information to us. Why did they skip it? We can figure out: do they know something we don't? If so, we can fold that back into the algorithm. In other cases, humans have beliefs they think are right, and it turns out they're not always right. So sometimes we expose biases and prejudices on the part of the humans, and again, we're learning that way. We get to learn either way, no matter which side is right, and that's how they're reinforcing. They get better and better over time. And like you say, you've got a huge incentive to get it right. Absolutely. Talk about the tech involved. Obviously Netflix was pioneering recommendation engines before they were popular. Now you're in a different space, lifestyle. What data sources are you using to mine for that kind of targeting? I mean, you're talking about micro-targeting and delivering a consumer product to a customer, a very difficult technical problem. Can you talk about that? Sure. In terms of technology, we don't have the scale challenges that a Netflix has. The data is not unwieldy, which means we get to put more of our attention toward the analytics, the math, the modeling we do to build good predictive engines, and less toward infrastructure and making sure this stuff runs in a reasonable amount of time. Not a lot of volume. The volume is not daunting, right? We have some cases where we want to make sure things run fast enough, but as we introduce more complexity, we have tricks we can use: we can run that offline in batch and shuttle over the results. So we're not as worried about scale. We just want to keep getting better and better at relevancy.
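The skip-and-learn loop Colson describes could be sketched roughly as follows. This is a minimal, hypothetical illustration, not Stitch Fix's actual system: all the class and function names here are invented for the example. The idea is simply that every stylist decision, especially a skip of a high-scoring item, becomes a labeled row for the next retraining round.

```python
# Hypothetical sketch of the human-in-the-loop feedback described above:
# the algorithm proposes high-probability items, the stylist may skip
# some, and each decision becomes a labeled training example.
from dataclasses import dataclass

@dataclass
class Pick:
    item_id: str
    score: float        # model's predicted probability the client loves it
    stylist_kept: bool  # did the human stylist include it in the shipment?

def feedback_examples(picks):
    """Turn stylist decisions into (item, score, label) training rows.

    A skipped high-score item is the interesting case: either the
    stylist knows something the model doesn't, or the stylist is
    biased. Either way, the next retrain learns from the disagreement.
    """
    return [(p.item_id, p.score, int(p.stylist_kept)) for p in picks]

picks = [
    Pick("blouse-01", 0.92, True),
    Pick("blouse-02", 0.90, False),  # skipped despite a high score: signal
    Pick("scarf-07", 0.55, True),
]
training_rows = feedback_examples(picks)
```

Either outcome of the disagreement is useful, which is what makes the combination reinforcing rather than merely complementary.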
And that's where all of our mindshare can go, since we don't need to worry about keeping a Hadoop cluster running and stuff. That stuff is very managed. But it is a Hadoop cluster, yeah? No, it's not. It's a Postgres database. We do most of our discovery in R, and we cobble the two together, which is something we'd like to solve. Right now, it's about elegance rather than scale. We'd rather have nice, elegant code. Fidelity on the targets, and with the customer satisfaction. True. It's more, so when I say elegance, I mean coding elegance, right? Of course that needs to be elegant too. But when we write our code, it's a very different paradigm from database SQL, and unfortunately we need to marry the two. So we're experimenting with different ways of getting those two together, but right now they're decoupled. They are decoupled. What we do now is our discovery in R, and then we'll rewrite some of the code to make it work in the production system. We'd like to get rid of that rewriting step; we'd like to have it all seamless and nice. That's what I mean when I say elegant: the way we think when we're doing the math, it just works. Yeah, elegant programming is always the way to go. My final question for you, I know you've got to go quickly: what are some of the things you learned when you rolled this out? Because obviously I would put your company and your efforts in the new bucket. There's an old way and a new way, and the new way is disrupting and providing new user experiences. You're in that bucket, congratulations. But what have you learned along the way? What can you share with the folks out there in terms of taking the chance? Obviously deciding to do it is great, but then you've got to go out there, do some building, write some code, try to get some of that elegance going that you mentioned. What have you learned that you'd share?
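The decoupled workflow described here, explore a model in one environment and then hand-rewrite it for a production database, might look roughly like this sketch, with Python standing in for R and SQLite standing in for Postgres. All table names, column names, and weights are assumptions made up for illustration; the point is the manual rewrite step, which is exactly what Stitch Fix says it would like to eliminate.

```python
# Hypothetical sketch of the discovery/production split: a scoring model
# is fit in an analysis environment, then its coefficients are manually
# translated into SQL so the production database can compute scores.
import sqlite3

# "Discovery" side: a trivial linear scoring model with fitted weights.
weights = {"fit_score": 0.6, "style_match": 0.4}

def score_py(fit_score, style_match):
    return weights["fit_score"] * fit_score + weights["style_match"] * style_match

# "Production" side: the same model, manually rewritten as a SQL expression.
sql_expr = f"{weights['fit_score']} * fit_score + {weights['style_match']} * style_match"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (item TEXT, fit_score REAL, style_match REAL)")
conn.execute("INSERT INTO items VALUES ('dress-1', 0.9, 0.5)")
(sql_score,) = conn.execute(f"SELECT {sql_expr} FROM items").fetchone()

# The two code paths must agree; any drift between them is exactly the
# class of bug a seamless, single-source pipeline would rule out.
assert abs(sql_score - score_py(0.9, 0.5)) < 1e-9
```

Keeping two implementations of one model in sync by hand is the fragile part, which is why eliminating the rewrite step reads as "elegance" here: one expression of the math that runs everywhere.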
Well, the thing I mentioned earlier, this combination of human judgment plus algorithms, is really powerful. It's something we did implicitly at Netflix. You make recommendations and you evaluate them after the fact to see if you were right or wrong, and sometimes, as a human, you can explain it: oh, no wonder they didn't like this, it had X, Y, and Z. So that X, Y, and Z is something. Sometimes it's obvious, right? Sometimes it's obvious. We used to joke that the sexy box shots would always increase the click rate at Netflix, and that's something that isn't manifest in the data. There's no attribute that says, yes, this has a sexy box shot. So it's things like that that we've gotten really good at capturing at Stitch Fix: get that data, get it encoded. And that explains most of the variation. In fact, we can be a bit relaxed about our algorithms because the data's doing most of the work for us. You know, obviously we're in the content business and we have a big data back end, and we've always been big believers that machines can't always get those nuances. Pure machine curation and aggregation fails at some level to do a good job; to be really well done, you've got to have a human element in the equation. Well, congratulations, Eric Colson with Stitch Fix. Stitch Fix, there you go. Stitch Fix, chief analytics officer. And there's some big news coming up, and we hope to hear more from you later, here inside SiliconANGLE's theCUBE. I'm John Furrier with Dave Vellante. We'll be right back with our next guest. Here, day two, wall-to-wall coverage, three days of Strata: O'Reilly Media and theCUBE. We'll be right back.