From New York City, it's theCUBE. Covering IBM Data Science for All, brought to you by IBM. Welcome back here on theCUBE. We are live in New York, continuing our coverage here for Data Science for All, where all things happen, big things are happening. In fact, there's a huge event tonight. I'm going to tell you about it a little bit later on, but Tricia Wang, who is our next guest, is part of that panel discussion that you want to tune in for live on ibmgo.com at six o'clock, but more on that a little bit later on. Along with Dave Vellante, John Walls, Tricia Wang now joins us. A first ever for us, how are you doing? Good. A global tech ethnographer. You said it correctly, yay! I learned a long time ago when you're not sure, slow down and breathe. You did a good job. Want to do it one more time? A global tech ethnographer. Good job. Studying ethnography and putting ethnography into practice. How about that? Really great. I'll take you up on the challenge, Tricia. I'll say it 10 times faster. How about in a row? We're done. We're done. Also co-founder of Sudden Compass. Yes. First off, let's tell our viewers a little bit about Sudden Compass. And then I want to get into the ethnography and how that relates to tech. So let's go first off about Sudden Compass and the origins there. Yes, so Sudden Compass, we're a consulting firm based in New York City and we help our partners embrace and understand the complexity of their customers. So wherever there's data and wherever there's people, we are there to help them make sure that they can understand their customers at the end of the day. And customers are really the most unpredictable, the most unknown, and the most difficult thing to quantify for any business. And we see a lot of our partners really investing in big data science tools, and they're hiring the most amazing data scientists, but we saw them still struggling to make the right decisions.
They still weren't getting their ROI and they certainly weren't growing their customer base. And what we are helping them do is to say, look, you can't rely only on data science. You can't just put it all into, you know, only the tool. You have to actually think about how to operationalize that, and build a culture around it, and get the right skill sets in place, and incorporate what we call the thick data, which is the stuff that's very difficult to quantify, the unknown. And then you can figure out how to best mathematically scale your data models when it's actually based on real human behavior, which is what the practice of ethnography is there to help with: to help you understand, what do humans actually do? What is unquantifiable? And then once you find out those unquantifiable bits, you then have the art and science of figuring out how do you scale it into a data model? Yeah, see, that's what I find fascinating about this, is that you've got hard and fast, right? Data, objective, black and white, very clear. And then you've got people, you know, we all react differently. We have different influences and different biases and prejudices and all that stuff, and aptitudes. So you are meshing this art and science, right? And what is that telling you, then, about how best to advise your clients on using data as a best practice? Well, we tell our clients that because people are, you know, there are biases and people are not objective and there's emotions, that all ends up in the data set. You know, to think that your data set, your quantitative data set, is free of biases and has somehow been scrubbed of emotion is a total fallacy. And it's something that needs to be corrected, because that means decision makers are making decisions based off of numbers, thinking that they're objective, when in fact they contain all the biases of the very complex humans that they're serving.
So there is an art and science of making sure that when you capture that complexity, we're saying don't scrub it away. Traditional marketing wants to say, put your customers in boxes, put them in segments and, you know, use demographic variables like education and income, and then you can just put everyone in a box, figure out where you want to target, figure out the right channels, and you buy against that and you reach them. That's not how it works anymore. Customers now are moving faster than corporations. The new networked customer of today has multiple identities and is better understood in relationship to other people. And we're not saying get rid of the data science. We're saying absolutely have it. You need to have scale. You know, what is thick data going to offer you? Not scale, but it will offer you depth. So that's why you need to combine both to be able to make effective decisions. So I presume you work with a lot of big consumer brands. Is that a safe assumption? Absolutely. We work with a lot of big tech brands, you know, like IBM and others. And they tend to move at the speed of the CIO, which tends to be really slow and really risk averse. And they're afraid to over-rotate and get out ahead of their skis. What do you tell folks like that? Is that a mistake, being so cautious in this digital age? Well, I think the new CIO is on the cutting edge. I was just at the Constellation Research annual conference in Half Moon Bay. Our friend Ray Wang. Yeah, Ray Wang. And I just spoke about this at their Constellation Connected Enterprise, where they had the most, I would have to say, the most amazing forward-thinking collection of CIOs and CTOs all in one room. And the conversation there was like, we cannot afford to be slow anymore. We have to be on the edge of helping our companies push into new ground. So investing in tools is not enough.
It is no longer enough to be the buyer and to just have a relationship with your vendor and assume that they will help you deliver all the understanding. So CIOs and CTOs need to ensure that their teams are diverse, multifunctional, and that they're totally integrated, embedded into the business. And I don't mean just involve a business analyst as if that's cutting edge. I'm saying, no, you need to make sure that every team has qualitative people and that they're embedded and working closely together. The problem is we don't teach these skills. We're not graduating data scientists or ethnographers who even want to talk to each other. In fact, each side thinks the other side is useless. And we're saying, no, we need to have these skills being taught within companies. And you don't need to hire a PhD data scientist or a PhD ethnographer. What we're saying is that these skills can be taught. We need to teach people to be data literate. You've hired the right experts. You have bought the right tools, but we now need to make sure that we're creating data literacy among decision makers so that we can turn these data into insights and then into action. What is that? Let's peel that back a little bit. Data literate, you're talking about creativity, visualization, combining different perspectives. Where should the educational focus be? The educational focus should be, one, on storytelling. Right now, you cannot just assume that a decision maker can make a decision based on a number or some long PowerPoint report. We have to teach people how to tell compelling stories with data, and when I say data, I'm talking about it needs the human component and it needs the numbers. And so one of the things that I saw, and this is really close to my heart, was when I was at Nokia. I spent a decade understanding China.
I really understood China, and when I finally had the insight, where I was like, look, after spending 10 years there following 100 to 200 families around, I had the insight back in 2009 that, look, your company's about to go out of business because people don't want to buy your feature phones anymore, they're going to want to buy smartphones. But I only had qualitative data, and I needed to work alongside the business analysts and the data scientists. I needed access to their data sets, but I needed us to play together and to be on a team together so that I could scale my insights into quantitative models. And the problem was that, your question is like, what does that look like? That looks like sitting on a team, having a mandate to say you have to play together and be able to tell an effective story to the management and to leadership. But back then they were saying, no, we don't even consider your data set to be worthwhile to even look at. We love a candy bar phone, right? This is, it's a killer. And we love our numbers. We love our surveys, that's how it goes. Market share was great. Market share is great. We've done all of the analysis. Forget the RAZR. Exactly, and I'm like, look, of course your market share is great, because your surveys were optimized for your existing business model. So big data is great if you want to, you know, optimize your supply chain, or in systems that are very contained and quantifiable, that's more or less fine. You can get optimization. You can grow, you know, you can get that one to two to five percent. But if you really want to grow your company and you want to ensure its longevity, you cannot just rely on your quantitative data to tell you how to do that. You actually need thick data for discovery, because you need to find the unknown. One of the things that you talk about, and your passion, is to understand how human perspectives shape the technology we build and how we use it.
Okay, so when you think about the development of the iPhone, it wasn't a bunch of surveys that led Steve Jobs to develop the iPhone. And so, I guess the question is, does technology lead and shape human perspectives, or do human perspectives shape technology? Well, it's a dialectical relationship, you know? It's like, you know, with a hamburger, does the bun shape the burger, or does the burger shape the bun? You would never think of asking someone who loves a hamburger that question, because they both shape each other. Okay. So symbiotic, purely symbiotic, right? Yes. You weren't expecting that. No, but it is kind of a, okay, so you're saying it's not a chicken and egg, it's both. No, absolutely. And the best companies are tuned to both. The best companies know that. You know, the most powerful companies of the 21st century are obsessed with their customers, and they're going to do a great job at leveraging human models to be scaled into data models. And that gap is going to be very, very narrow. You know, with big data, you're going to see more AI or ML disasters when the data models are really far from the actual human models. That's how we get disasters like Tesco, you know, or Target. Or even when Google misidentified Black people as gorillas. It's because their data model was so far from their understanding of humans. And the best companies in the future are going to know how to close that gap, and that means they will have the big data and the thick data closely integrated. Well, who's doing that today? I mean, it seems like there are no ethics in AI. People are aggressively applying AI for profit and not really thinking about the human impacts and the societal impact. Let's look at IBM. They're doing it. I would say that some of the most innovative projects are happening at IBM with Watson, where people are using AI to solve meaningful social problems. And I don't think that has to be... Like IBM for social good.
Exactly, but it's also, it's not just experimental. So I think IBM is doing really great stuff using Watson to, you know, identify skin cancer. Or we're looking at the ways that people are using AI to understand eye diseases, things that you can do at scale. But businesses are also figuring out how to use AI for, you know, for actually doing better things. And I think we're going to see more examples of businesses using AI for solving meaningful social problems and making a profit at the same time. I think one really great example is Workit. They're using AI, they're actually working with Watson, which is who they hired to create their AI engine, where union workers can ask questions of Watson that they may not want to ask or may be too costly to ask. So you can be like, if I want to take one day off, will this affect my, you know, contract or my job? That's a very meaningful social problem that unions are now working on. And I think that's a really great example of how Watson is really pushing the edge to solve meaningful social problems at the same time. I worry sometimes that that's like the little device that you put in your car for the insurance company to see how you drive. How do you brake, how do you drive? You know, do people trust feeding that data to Watson, because they're afraid somebody, you know, big brother's watching. Right. Well, that's why we always have to have human intelligence working with machine intelligence. This idea of, you know, AI versus humans is a false binary. And I don't even know why we're engaging in those kinds of questions. I mean, clearly we're not, but there are people who are talking about it as if it's one or the other. And I find it to be a total waste of time. It's like, clearly the best AI systems will be integrated with human intelligence. And we need the human training the data with, you know, machine learning systems.
All right, I'll play the yeah, but. You're going to play the what? The yeah, but. Yeah. The fact is that machines are replacing humans in cognitive functions. You walk into an airport and there are kiosks. People are losing jobs. Right, no, that's real. So, okay. So that's real. You agree with that. Job loss is real. Job replacement is real. And I presume you agree that education is at least part of the answer. Yes. Training people, maybe, differently. Absolutely. Not just straight reading, writing, and arithmetic. But thoughts on that? Right. Well, what I mean is that, yes, AI is replacing jobs, but the fact that we're treating AI as some kind of rogue machine that is operating on its own without human guidance, that's not happening. And that's not happening right now. And that's not happening in application. And what is more meaningful to talk about is how do we make sure that humans are more involved with the machines, that we always have a human in the loop, and that they're always making sure they're training in a way that brings up these ethical questions that are very important, that you just raised. Right. Well, and of course a lot of AI, people would say, is about prediction and then automation. So think about some of the brands that you serve, consult with. Don't they want the machines to make certain decisions for them so that they can affect an outcome? I think that people want machines to surface things that are very difficult for humans to do. So if a machine can efficiently surface, like, here's a pattern that's going on, then that is very helpful. You have companies that are saying, we can automate your decisions. But when you actually look at what they can automate, it's in very contained, quantifiable systems. It's around systems around their supply chain or logistics. But you really do not want your machine automating any decision when it really affects people, in particular your customers.
So maybe changing the air pressure somewhere on a widget, that's fine. Right, but you still need someone checking that, because will that air pressure create some unintended consequences later on? There's always some kind of human oversight. So I was looking at your website, and I was looking for, I'm intrigued by, interesting, curious thoughts. Okay, I have a crazy website. Well, no, it is. No, it's pretty good. And among your favorite quotes: I'd rather have a question I can't answer than an answer I can't question. So how do you bring that kind of, there's no fear of failure, right? To the boardroom. To people who have to make big leaps and big decisions and enter this digital transformative world. I think that a lot of companies are so fearful of what's going to happen next. And that fear can oftentimes corner them into asking small questions and acting small, where they're just asking, how do we optimize something? That's really essentially what they're asking. How do we optimize X? How do we optimize this business? What they're not really asking are the hard questions, the right questions, the discovery-level questions that are very difficult to answer, that no data set, no big data set, can answer. And those questions, the questions about the unknown, are the most difficult, but that's where you're going to get growth. Because when something is unknown, that means you either have not quantified it yet or you haven't found the relationship yet in your data set. And that's your competitive advantage. And that's where the boardroom really needs to set the mandate to say, look, I don't want you guys only answering downstream, company-centric questions like how do we optimize X, Y, Z, which is still important to answer. We're saying you absolutely need to pay attention to that. But you also need to ask upstream, very customer-centric questions.
And that's very difficult, because all day you're operating inside a company; you have to then step outside of your shoes and leave the building and see the world from a customer's perspective, or from even a non-existing customer's perspective, which is even more difficult. And the whole know-your-customer meme is taking off in a big way right now, but I do feel like the pendulum is swinging. While I'm sanguine toward AI, it seems to me that there used to be a time when brands had all the power. They had all the knowledge, they knew the pricing, and the consumers knew nothing; the internet changed all that. And I feel like digital transformation and all this AI is an attempt to re-create that asymmetry, again, back in favor of the brand. So I see people getting very aggressive toward, okay, certainly, you see this with Amazon. Amazon, I think, knows more about me than I know about myself. Should we be concerned about that? And who protects the consumer? Or do the benefits maybe just outweigh the risks there? I think that's such an important question you're asking. And it's totally important. There's a really great TED Talk on this by Zeynep Tufekci, where she talks about how the most brilliant data scientists and most brilliant minds of our day are working on ad tech platforms that are now being created to essentially do what Kenyatta Cheese calls advertising terrorism, which is that all of this data is being collected so that advertisers have this information about us that could be used to create the future forms of surveillance. And that's why we need organizations to ask the kinds of questions that you did. So two organizations that I think are doing a really great job to look at are, first, Data & Society. The founder is danah boyd, and it's based in New York City. This is where I'm an affiliate. And they have all these programs that really look at digital privacy, identity, and the ramifications of all these things working with AI systems; a really great set of researchers.
And then Vint Cerf and Mei Lin Fung co-founded the People-Centered Internet. And I think this is another organization that we really should be looking at. It's based on the West Coast, where they're also asking similar questions, like, instead of just looking at the internet as a one-to-one model, what is the internet doing for communities? And how do we make sure we leverage the role of communities to protect what the original founders of the internet created? Right, danah boyd, CUBE alum. Shout out to Jeff Hammerbacher, founder of Cloudera, the originator of the line that the best minds of my generation are trying to get people to click on ads. He quit Cloudera and is now working at Mount Sinai, amazing, trying to solve cancer. A lot of CUBE alums out there today. And now we have another one. Tricia, thank you for being with us. You're welcome. Fascinating, it really is. Great questions. It's nice to really just kind of change the lens a little bit, look at things a different way. Tricia, by the way, is part of a panel tonight with Michael Li and Nir Kaldero, whom we had earlier on theCUBE, six o'clock to 7:15, live on ibmgo.com. Nate Silver also joining the conversation, so be sure to tune in for that live tonight at six o'clock. Back with more of theCUBE, though, right after this.