So, are you ready for our first speaker? Yes! I can see you, I can hear you, I can feel you, so let's go. First, as Bill Gates used to say, your most unhappy customers are your greatest source of learning. Tech companies that forget that, no matter how cool their tech, are bound to fail. Steve Jobs put it like this: you've got to start with the customer experience and work back towards the technology, not the other way around. He went on to say that he had made this mistake himself many times. And it seems many continue to get it wrong: studies routinely show that more than 80% of analytics, big data, and AI projects fail.

Our first speaker is a consulting product designer, host of the Experiencing Data podcast, and the founder and principal at Designing for Analytics. He firmly believes that building innovative machine learning and analytics solutions requires a people-first, technology-second approach. So there he is, let's welcome Brian O'Neill. Hello Brian, how are you?

Hi, I'm great, thanks for having me. It's great to be here.

It's lovely to see you. Brian is connecting from Cambridge, Massachusetts, in the United States, very close to Harvard, and he was telling me off camera that the weather in Chile is very nice and that he'd rather be here with us. But maybe next year, Brian. It's lovely to have you; tell us all your secrets, everything you came here to say.

Yes, I'm going to jump right in here. We're going to talk about "Technically Right, Effectively Wrong": the data products that customers don't want, and what we do about this.

So first, let me jump in with a story. This is a story about a six-year project that created thousands of artifacts and outputs of different kinds: diagrams, specifications, all kinds of that good stuff. And technically, this solution worked. And what did it produce? Well, this is an example of the system's output: ZDA-110-3-15-1. This is the answer from the technology. This is the solution, so to speak. More specifically, the message that was actually presented to the customer in the software was "Feu, Attic, Nave, Sacristy, ZDA-110-3-15-1, aspirating framework," which is my favorite part of this message. So my question for you is: does this feel like human-centered decision support? So much of our work with data products, analytics, and data science is decision intelligence, helping people make decisions with information. So my question is, how compelling does this information feel?

This is a story about Notre Dame, and I think, especially for our European guests today, you all know the story well. The requirement for the software was to identify the ID of the fire detection device and display it. There are different smoke detectors, fire detectors, all this telemetry inside Notre Dame. The human goal, the business objective, the organizational objective, whatever you want to call it, was to prevent and stop fire. So the question is: did this technology work or not? I think we know what happened here. What would you do with this information?

I want to talk to you today about why design matters. Bad design is not only expensive, it can be damaging, it can be harmful, and this is especially true as we move into predictive technologies with machine learning, et cetera. I'm sure many of you know the high-profile cases here. But again, what would you do with this information?
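To make the contrast concrete, here is a minimal sketch of the difference between echoing a device code and offering decision support. The device registry, locations, and URL below are invented purely for illustration; this is not the actual alarm system.

```python
# A hypothetical lookup that turns a raw detector code into a message a
# human can act on under stress. All entries here are invented examples.
DEVICE_REGISTRY = {
    "ZDA-110-3-15-1": {
        "device_type": "Aspirating smoke detector",
        "location": "Attic, above the nave, sacristy side",
        "nearest_access": "North stairwell, approx. 5 min climb",
        "floor_plan": "https://example.org/plans/attic-sector-3.png",
    },
}

def human_readable_alert(raw_code: str) -> str:
    """Turn a device ID into decision support: what, where, how to respond."""
    ctx = DEVICE_REGISTRY.get(raw_code)
    if ctx is None:
        # Even the failure case should guide action, not just echo a code.
        return f"FIRE ALERT from unknown device {raw_code} - dispatch to verify."
    return (
        f"FIRE ALERT: {ctx['device_type']} triggered.\n"
        f"  Where: {ctx['location']}\n"
        f"  Fastest access: {ctx['nearest_access']}\n"
        f"  Floor plan: {ctx['floor_plan']}"
    )

print(human_readable_alert("ZDA-110-3-15-1"))
```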
And so today, what I want to talk to you about and hammer home is this: if humans are in the loop using data, your mindset as a leader, and I'm assuming most of you watching are leaders of data product teams, analytics teams, or software products that use intelligence or insights, has to shift from producing outputs to thinking about outcomes. Outputs are nouns; they're things. Outcomes are the results we get from our outputs. Decision support, decision making, is one of those outcomes, and it's one of the key ones so many of us should be striving for, but we're not.

So what might they have done here? This is just a very fast mock-up I threw together, but I want to try to hammer home some of the points. The context was completely missing from that solution. There were no visuals. It's hard to look at a message like that and know, without memorizing it, what the code means. Where is this fire? How far away am I? A fire is a group experience. It involves multiple people: the public, the fire safety people, the employees, the people who may have been on site. This is an ecosystem. So focusing narrowly on the fire suppression device or the detector that was triggered is insufficient. It's not a complete solution. It is a very technically right solution; it might have pinpointed exactly where the fire was first detected, but if you can't go stop the fire, we didn't really provide the information that was required. You need to think holistically about how people will interact with these solutions, not just about what the technical output might be.

This is especially true when we think about machine learning. The prediction itself is not the end of the journey. It's probably the beginning of the journey, the beginning of the solution. So we need to change the culture here and put people first and the data and technology second. And I know this isn't always natural. As you heard Helena say in the introduction, it's easy to fall into this trap. It's very easy to jump to what we can do now, because we can do so much more with the cloud and all these new technologies. The assumption is that we can throw data science and talented software people at these problems and magic will come out the other end. They are absolutely integral to this. But if there are humans in the loop with your solution, there's more to it than simply getting the data part right.

Okay, so why do I really think this matters? What are other examples? Well, this is a conversation I had on LinkedIn with a guy, David, who sells a piece of software, and he told me something I've heard many times. He went into a bank where the production solution was based on a database and a batch run of R scripts. And when David asked this customer, well, what about the people who don't know R or SQL, which was essentially 100% of the people who would use the solution, the data scientist said, deadpan: those without technical skills are not relevant and will not be helped. Besides anything else, I think this is a management failure, if you're hiring people with the attitude that their job is only to focus on the technical quality of the output. It's a poor attitude to begin with, but the culture really has to change here, right?
And so David kind of jokingly said, well, I'll just call back in a year or two, after this guy's done enough damage that senior stakeholders will notice. And for what it's worth, I think the gravy train, the "go get a data science degree, go pick up a really fat paycheck because the businesses don't know what they're doing anyway", the time for that is going away. Business expectations are growing around what we are getting for our investments in data and innovation, and a lot of the expectation is that new innovations are going to come out of the work that data science, analytics, and engineering teams are doing. So if you're not producing outcomes, you're at risk.

So what do we do? We need to change this mindset. Again, outputs and outcomes: we cannot just measure the technical outputs, the amount of code we shipped, whether we put a model into GitHub. That is not a solution. Operationalizing the model is really all that matters, because until it gets into the hands of the people who are going to use it, the work is not done. The work is just getting started at that point, when it's actually being used by the people who have to take action on it, who have to make a decision with it, this decision support thing.

So who am I? I'm a professional musician in Boston; that's my formal training. But I'm also the founder and principal of Designing for Analytics. I'm a consulting product designer, and I specialize in helping companies create innovative machine learning and analytics solutions. Some of you may know my podcast, Experiencing Data. Feel free to check that out; I interview leaders in this space about what they're doing to produce more human-centered data products and analytics solutions.

So, design matters. We've talked about this: bad design is expensive, and it can even be harmful. But I want to talk about what leaders can do today with their teams, especially if you don't have product people and designers on your team right now. There are behaviors you can start practicing now, and you don't need to be a trained designer to do all of this work. There's a lot you can do today.

Okay, so I'm going to give you about eight different activities, behaviors, or techniques that I think strategically need to be part of the DNA of your culture if you're going to be producing software that's going to be used by people. Whether it's analytics or data science and machine learning or AI or whatever, at the end of the day, if you're producing a digital software application that's going to be touched by humans, these principles most likely apply and will help you.

So the first one is this mindset: analytical people, our data science friends, this community of technical experts, need to adjust to the gray, mushy world of people, behaviors, emotions, people who are illogical, and why people are motivated to do the things they do. There's a lot here, and I think most of you know that people don't just make decisions based on data. We're not robots; we're people, and there are different incentives for why someone presented with a fork may decide to turn right or turn left. Even if the machine learning algorithm says to turn right, there are going to be times when they turn left. And so part of our job is to understand: why are they doing these other activities? Why are they turning left when it says to go right?
What do we need to do to get the company or our customers to embrace this technology, to feel like it's been designed for them, that it's not abusing them or failing them at the moment they need to be successful in their work? We have to start embracing the non-analytical mindset that comes with dealing with people. This gray world: design is all about embracing this mindset.

The second one, more practically, is embracing empathy, particularly routine one-on-one customer research, and getting into the mindset of falling in love with problems, not with solutions. There are plenty of technical problems to go work on. There's always going to be data cleansing, getting enough data sets, running all your different recipes to see which models work, on and on. There's plenty of plumbing to create. This is about going out and really getting to know the people who are going to use your solutions, not relying entirely on quantitative data but on real qualitative data. I would say this is probably the number one thing your team can do to start developing real empathy. And this is easy if you're an internal applied data science or analytics group or a BI team servicing operations or other employees: you might have thousands of customers, but you probably don't have millions. Your customer base is internal employees. You can go out and start to develop relationships with these people. You can do ride-along interviews. You can go learn about what it's like to be in sales or in marketing. What is it like to do their job, and how does our work fit into it? And more importantly, instead of just waiting for a ticket to come in on Jira that says "I need a model that gives me X, Y, and Z" or "I need a spreadsheet with the following columns of data", you're going to start asking why. You're going to start honing in on the real problems these people actually have, so that you can start producing outcomes and not just the outputs they literally asked for. Because a lot of the time the problem is not fully articulated. This is really important to understand. Taking a request at face value is like walking into a doctor's office and saying, "Doctor, I need surgery on my arm." A good doctor does not pull out a scalpel and start cutting into your arm. They diagnose the situation. They get to understand you, what goals you have, how long it's been going on, et cetera. They're not just going to give you the thing you asked for. Teams need to do this too: understand what that last mile really looks like when this person gets the solution.

The third one is the activity of synthesizing all this stuff down into human-centered problems that are tied back to business needs. When I talk about human-centered, it's important to remember that the business is formed of people. The business, just like government, is people. So some of you may work on solutions that touch external customers who pay you for goods and services, or you may be servicing other employees, suppliers, or other internal friendlies who aren't necessarily paying for your solutions.

Either way, you still need to come up with clear problem statements that address the business objectives, but also address the literal day-to-day problems this person has. If you're servicing a salesperson, maybe you're trying to help them understand who is not going to re-subscribe next year, who won't re-up their subscription. Instead of them running their own algorithm in their head, digging through the CRM for the leads they think are most likely to cancel, you need to understand not just how to predict who's likely to cancel, but what that process is like for a salesperson. How will they engage with this customer? What's the next piece of information they need? How much evidence do they need when they pick up the phone, call this customer, and say, hey, we just wanted to check in with you? What kind of evidence does this person need to have an effective conversation and actually prevent the cancellation?
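As a minimal sketch of that idea, here is one way a churn score could be paired with the evidence a salesperson would need on the call, not just "who will cancel" but "why we think so". The feature names are hypothetical and the data is synthetic; a real system would draw on the CRM and usage history.

```python
# A sketch of shipping the "why" along with the "who": a churn score plus
# the top drivers, phrased as talking points for the salesperson.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["logins_last_30d", "support_tickets", "seats_unused_pct"]

# Synthetic training data standing in for real CRM/usage history.
X = rng.normal(size=(500, 3))
y = (X @ np.array([-1.0, 0.8, 1.2]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def churn_briefing(customer: np.ndarray) -> dict:
    """Score one customer and surface the biggest drivers as evidence."""
    score = model.predict_proba(customer.reshape(1, -1))[0, 1]
    # Crude per-customer contribution of each feature to the log-odds.
    contributions = model.coef_[0] * customer
    top = sorted(zip(features, contributions), key=lambda t: -abs(t[1]))
    return {
        "churn_risk": round(float(score), 2),
        "evidence": [f"{name} is pushing risk {'up' if c > 0 else 'down'}"
                     for name, c in top[:2]],
    }

print(churn_briefing(rng.normal(size=3)))
```

The particular explanation technique matters much less than the principle: the output the salesperson sees should support the conversation they have to go have.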
If you can't express these problems in a clear way that your whole team can rally around, you're not likely to put out a great solution, and you may find that this salesperson just says: you know what, pass. I'm not going to use your solution. I'm going to do it the old way, because I know how that works and I trust it. I don't know how you came up with this because I wasn't involved, and I have no idea who these names are; I've never seen them before. At that point we've failed, and it's not because your model is wrong. Your model might have tested really, really well against the test set, but it doesn't matter if Mr. or Mrs. Salesperson decides not to use that information at the last mile, the point where the technology matters.

Another exercise here, and I really like this one, is journey mapping and service blueprints. These are tools we use in the design world to understand how people move through a process, because most of the time decision making does not come down to one screen, one report, one dashboard, one visit, et cetera. Design and user experience typically happen over some period of time. So map out either the current journey or even an aspirational journey, and it's much better to start by auditing your current customer journey. Understanding that makes it really easy for the team to hone in on: what phase are we working on? What are we trying to improve here? When you can see what's wrong now, because you've actually audited how people go through the process, it's a powerful way to get alignment on where we can be strategic with the data and insights we bring, and where we insert technology into the process. Service blueprints, and I'm not going to get super detailed into this, are really more back-office: they look at the operational side of the business rather than the paying customer experience, which is more what the journey mapping tool is for.

The fourth one is onboarding and honeymooning. Many of you know what onboarding is from using apps. Honeymooning, for me, is the period after that first touch with a new product or software application, the first couple of weeks, the first couple of months, when you're getting used to it. Maybe you're collecting data; maybe the algorithm has to learn for a while. This experience also has to be designed with intent. Sometimes it doesn't matter; sometimes you can go right from zero to one. Other times there are steps along the way, and that experience really matters, because that's when you're bringing someone into the fold and saying, hey, we have a new way of doing this, a new way of presenting insight, and you've actually designed the experience to bring them along. Otherwise they have to figure out on their own that the gravy is going to come three months from now, while in the meantime they're looking at a blank dashboard because there's no data yet. They may just decide not to come back, because you never alerted them that the system has collected the data it needs and there are finally some insights here. That experience needs to be designed and thought through.
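As a minimal sketch of designing that warm-up period with intent, assuming a hypothetical 90-day data-collection threshold, the dashboard could report progress and an expected payoff date instead of rendering an empty chart:

```python
# A sketch of a cold-start dashboard state: tell the user where the system
# is in its warm-up and when to come back, rather than showing nothing.
from datetime import date, timedelta

MIN_DAYS_OF_DATA = 90  # assumed warm-up the model needs before insights

def dashboard_state(days_collected: int, started: date) -> str:
    if days_collected >= MIN_DAYS_OF_DATA:
        return "Ready: first insights are available - notify the user."
    ready_on = started + timedelta(days=MIN_DAYS_OF_DATA)
    pct = 100 * days_collected // MIN_DAYS_OF_DATA
    return (f"Warming up: {pct}% of the data collected. "
            f"First insights expected around {ready_on:%b %d}.")

print(dashboard_state(30, started=date(2020, 10, 1)))
```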
The fifth one is understanding that design is a team sport. Be able to lead what I call design jams or design sessions, where we get low fidelity. This is also where algorithm design planning can come in, especially for the human parts. For example, look at the way humans do something now. What's the sales team's current process for predicting which customers are going to churn? They probably have some methodology today that doesn't involve predictive analytics. This is the time to ask: how do we model that behavior? How do we integrate that knowledge into the software so that when we put the new version out, they will actually trust it, adopt it, and say, yes, please give me more, this was really helpful, your team is really valuable to me and my organization, I have ten more requests for help. This is the point at which we start building those relationships down the road. I like working in low fidelity. I think teams should work in very low fidelity, getting visual, getting off of heavy written requirements. It's a little harder with coronavirus, but tools like Miro et cetera exist for whiteboarding exercises and working in low fidelity early.

The sixth one is where we get more visual: interface design, MVPs, and prototypes. Probably the key thing I want to say here is that visual design matters more than just aesthetically, because the visual attention you give to your artifacts, your dashboards, your software interfaces, all this kind of stuff, says a lot about how much you care, how much attention you put into the work, how much the user matters. When it looks really complicated or poorly constructed, it sends a signal that not a lot of care went into the technology either. I'm sure most of you have seen this: when something looks shaky or unstable, you might question the engineering behind it as well. Even if the technology is rock solid in the background, the visual part communicates a lot about trust in the intent, and about the usability and utility of the solution you're presenting.

I like finding ways to get stuff in front of people very quickly, and I think sometimes we assume there's no way to work quickly with some of these technologies, especially machine learning. And I agree that Agile and some of these traditional ways we do software, well, not traditional for everybody, but Agile is not new anymore.
Some of these things don't translate well when you need large amounts of infrastructure just to get to the point of doing modeling work. But there are ways to prototype. There are techniques you can use to prototype predictive analytics. You can present a prototype to a customer. Let's say it's a scenario planner that's going to run simulations. You can come up with ten different predictions for different scenarios that are completely fake but somewhat realistic. I'll use this example: planning grocery store produce purchases. How many carrots should each store in the region purchase for next week? Maybe there's some tool that looks at past customer volume and some other signals. You don't actually have to have the real data to test what that solution might look like, because you can run someone through a bunch of scenarios and ask: what information would you need to make a decision to actually purchase, or take the next step here? Would you buy carrots at that price? Would you buy that many carrots? What if it said to buy the carrots at 10 cents a carrot instead of 4 cents a carrot? Would you pay that much?

By having a conversation with them over the design and evaluating it, this is where you'll hear things like: there's no way I'm ever going to pay more than this, and I have no idea how the system came up with that. I would have to know X, Y, and Z before I would ever do that. When you hear this emotionally charged language, "I would have to know X, Y, and Z", the customer has just revealed a bunch of really important insights about how they make decisions. And when you present your solution, you might find that the features you actually need on the screen are not especially technical or advanced, but they reinforce the decision making and the human behavior you know is there. So maybe, if your model puts out a really outlandish answer, there's a recovery mechanism: a way to challenge the model, a way to give feedback and say, let's adjust for this other variable, rerun the simulation, and see what we get. You can anticipate this because you've talked to this customer, the purchaser, the person who buys the carrots, and you know what they do during their job, because, remember the empathy step earlier, you've been out getting to know these people and the way they work. Through this prototyping we can learn a lot before we commit to the big work of building out these large enterprise data products, which tend to fail. So anyhow: low fidelity, moving into high fidelity, but always thinking about small prototypes and getting feedback early, right?
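Here is a minimal sketch of that kind of prototype: fabricated but plausible recommendations you can walk a buyer through, with no model or real data behind them. Every number and field name below is invented for the exercise.

```python
# A sketch of prototyping a predictive scenario planner with fake outputs.
# The point is to provoke reactions ("I'd never pay that much", "I'd need
# to see X first") before committing to building the real system.
import random

random.seed(7)

def fake_purchase_recommendation(store: str) -> dict:
    """Fabricate a plausible recommendation to put in front of a buyer."""
    return {
        "store": store,
        "item": "carrots",
        "recommended_qty_kg": random.randint(120, 480),
        "unit_price_cents": random.choice([4, 6, 10]),  # include an outlier
        "rationale": "Based on last 8 weeks of sales volume (simulated).",
    }

# Walk the buyer through several scenarios and record their reactions.
for store in ["Store 12", "Store 31", "Store 44"]:
    print(fake_purchase_recommendation(store))
```

The deliberately outlandish price in some scenarios is what draws out the decision criteria and the recovery mechanisms the real product will need.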
The seventh one is testing these user experiences before we ship. Some of this can happen before you have working code, but the point is that this is more the qualitative research aspect: actually going out and running people through different scenarios with the software to understand, will they do this? Were they able to accomplish the discrete set of tasks we gave them? Can they get through them, and where did they fail? And then understanding why they failed to complete. Was it a lack of desire? Did they not trust the information that was there? Was it too difficult to drill down into a bazillion different analytics and metrics because the dashboard didn't present any conclusion or opinion, just all the evidence, and asked them to put together a conclusion on their own? We put these kinds of questions together in a protocol and we go through and test them.

I think this step becomes more important as we move into machine learning and predictive technologies, where we're putting out solutions that are probabilistic. It's no longer like traditional software, where there are three states, true, false, or no response, and we know what all the different states are and can plan for and test them. We have to test many more scenarios that might come out of a probabilistic system, or out of a disruption like COVID-19, where the models are still learning from the data but there's been a catastrophic change in the business climate. How do we test these ahead of time so that we build resiliency, not data resiliency, but customer loyalty, dependency on the software, trust, because we anticipated these changes? We studied it, we actually ran people through different scenarios to understand what works and what doesn't, and we recycled that feedback back into our product development cycle so we're constantly getting better. We're not waiting until the end to find out whether something works and then saying, next ticket please, I'm on to the next project. That is not the model we want to have.

And the eighth and final strategic thing I want leaders to take away is that you have to sustain this behavior as a normal way of building indispensable data products. It's not a one-hit thing where you do it once and go away. As I said when I talked about changing the culture, there has to be a mindset shift. It's about asking yourselves: are we going to measure the quality of our work based on the outcomes we want, or just the outputs we're able to put out? We migrated to the cloud; we plumbed this new data engineering pipeline, et cetera. These are easy things to measure. They are not the outcomes. They're not the human piece, not the decision support, not the last mile. So we have to sustain these as regular behaviors our teams do, and there are different ways to do that. There's training. There's bringing in people who help change the culture; designers help with this kind of work. And remember, a lot of the work of design, even if you're not a designer, is understanding it as a team sport: facilitating the knowledge of our SMEs, our data scientists, our analytics people, our software architects and engineers, our product and business sponsors, and making sure the right people are in the room at the right time.

And just on the machine learning and AI piece: the reason I don't talk about UX so much is that UX doesn't include the people who are not users but who may be affected by the solution. Think about underserved populations that may be affected by the use of the technology, facial recognition and some of these kinds of services, for example.
Well, the guy at the security desk might be the one looking at the screen, but me, going to the store, I'm technically part of that ecosystem too. Human-centered design levels us up and changes our mindset to think about all the parties that are affected. And I know some of this doesn't matter everywhere. The ethics piece is a little less relevant in certain circumstances, like low-level business processes and internal tools. But it's important to be aware that it's not just about UX and the business; there can be third parties who need to be part of how we approach this as well. So the act of participating in design, and bringing design and a product mentality into your process, helps us know who needs to be in the room, okay?

So, will traditional companies embrace this product mindset of making human-centered data products? Why did I put this slide in? Well, there's probably a mix here of people from software companies and from traditional companies where you're working on software inside the business. A lot of those in the latter category want the results that startups, Silicon Valley, and all these digital-native companies are getting by doing great things with data. What they're not doing is copying the way those organizations build software. They're missing steps and they're missing roles. Some of those roles are what I call the data product manager, and product and user experience designers. And by not having them, often the SMEs and the proper business sponsors are not in the room either, because no one is championing that cause. The technical people are focused on the technical work only. So we don't even have the right team in place, because we don't have someone in the room who knows how to figure out what the right team is and what activities the team needs to do. Copying just the results and hoping is like saying, well, we don't have any money for design but we want to be like Apple. If you want the results a company like Apple gets from design, you have to invest in the same things, the people and the processes, and make a decision that it matters.

So with this mindset, you can either ride this wave or you can create it, and I think the wave is already coming. And this isn't just my thing. Drew Smith from the International Institute for Analytics: they see this product-thinking approach, instead of projects and outputs, thinking about data products, even the stuff we don't sell to customers, like when you're servicing your internal business units. A product approach to how you produce technology is very different from a project approach, because you're thinking long term and strategically. So they think this matters. Karim Lakhani from Harvard Business School, co-author of the great book Competing in the Age of AI, and I did a deep-dive podcast with him which you can check out on my show, said developing a product-focus mentality is essential to the AI-centered operating model, especially this data and analytics product manager role. I think what's sometimes missing here is that the most senior levels and executives in our organizations see the possibility of what we can do with data as a source of inspiration.
There's some fear there as well, but they think it's going to be a source of innovation for the company too. If we're treating it as operations and cost savings and those kinds of things, that's only half the battle. Those may be good places to start, but if you want to be innovative, you have to change your approach to building solutions. And I think the expectations for what we can do with things like AI are going up, and the teams are not necessarily keeping up with those expectations. You may say, oh, they have inflated expectations. Well, part of the job is to meet the two organizations in the middle and help them understand what's possible, to go out and do the work that produces innovation instead of waiting for a ticket. Someone says "I need a churn model", I deliver a churn model, I go on to the next ticket: that is not how it happens. That is not how it happens at digital companies, and it's not going to serve you well. Gartner, too, in their chief data officer research, I think it was 2019, put out a model for what this role is: moving from project-centric to product-centric. So this theme of product keeps coming through.

And so today's big takeaway: your analytics, machine learning, and AI solutions will fail to produce value if people can't and won't use them. If you want to improve this, I have a bunch of free resources for you on my site, designingforanalytics.com/bigthings. I suggest you go there and check that out. There's a self-assessment guide: if you're developing your own analytics tools or processes, you can run your product through my guide and assessment and start to learn what you might change, getting the perspective of a product designer or user experience professional looking at it. What might I change? What might be wrong? What questions am I not asking? I'll also send you a link to the podcast and my insights mailing list, and if you'd like 30 minutes with me, you can book a call on my calendar.

If you'd like to learn how I can work with you, just quickly: I run a training seminar called Designing Human-Centered Data Products for private teams and individuals, and I also run it publicly twice a year, so it'll be coming around again in March. I also do design strategy roadmapping. I have an audit service: if you have an existing product that you know is not easy to sell, is hard to use, feels like an engineer built it, and customers aren't seeing the value of all the IP you've put into it, I can help you audit that. And then there are custom design projects as well. So again, designingforanalytics.com/bigthings if you'd like to grab some of those free resources.

And it looks like I finished early. So here's the URL and my email address. I'm not super active on Twitter, but I'm @rhythmspice if you'd like to hit me up there. Feel free to shoot me a note; I'd love to have you in the community. And I hope you all have a great 2020. Wear those masks, stay safe. Questions?

Thank you so much, Brian. My God, I am exhausted. Lots of stuff, all the things you do. When do you have time to eat?

I was actually eating during that, so...

You're going to have so many people contacting you now, you know, we're being seen worldwide, so don't offer yourself so much! But my God, thank you so much for this super interesting talk, Brian.
Starting with that photo we know so well in Europe, of Notre Dame in Paris on fire. What a good example of what you said: bad design can be harmful, damaging, and expensive. So presumably design can also change the world for good. And you mentioned, very interestingly, the change of mindset we need: shifting from outputs to outcomes within the decision support frame. And you gave us these eight points to follow, of which everybody took very good notes. Somebody's asking a question in this regard. We have a question here from Nicola, asking how to convince people that their decision-making process is wrong, so there's no point supporting it with data technology.

Well, this is difficult, right? Changing the culture about the way things are done now is hard, and you have to understand that some of these things take time. So what I often advise people to do, and this is kind of a different skill set, is to ask: what makes this person tick? What problems does this person have now that you might be able to help with, to build trust, such that over time they may start to listen to you more and take the recommendations you give them? Attacking people and saying "your decision process is wrong" rarely works. Let me give you an example. Which shirt is the right shirt to buy when you go to the store? How did you come up with the decision to buy that shirt? This gets back to the point that data is not sufficient to make decisions; it's not the only thing humans use. So chances are, in that scenario, this person is probably afraid that if they start adopting this thing, the need for them and their job, the thing they believe they're valued for, is going to go away, and they don't want to let go of it. So this could be a problem with management, or with the way the business has talked about strategy and about how the culture is going to change when we start using these kinds of tools.

So I think: find another problem you can help them with, and start there. Another thing is to learn how they make decisions today and what informs those decisions. You can present them with alternative options, use it as a process of getting feedback, and talk to other people as well to see whether the culture is making those wrong decisions the same way. Is there a repeated pattern? Because a repeated pattern of wrong decisions suggests something in the culture is reinforcing that model of decision making, and maybe management needs to be involved. And you can say: look, we can build this thing for you, but guess what, no one's going to use it. Is this still a good use of our time, our budget, and our resources, when the evidence suggests it's not going to get used?

You also mentioned that design is a team sport. So, and I guess Nicola is asking from a corporate point of view, is this more of a top-down approach, a bottom-up approach, or more a transversal kind of cultural change of mindset?

I think bottom-up always works better with this kind of stuff.
It's not something that can simply be dictated from above. And again, one of the best ways to get adoption for why this kind of stuff matters is to show the current state, show what's wrong with it, and give visual examples of that. One of the classic tricks designers and user experience people like to use: we run people through a piece of software we already know is bad. We run the video camera, with their permission, of course, run them through a study, and watch them sit there and struggle. We record these moments and we share them. We bring the engineers into the room; we bring the executives into the room. And I'm telling you, if you want to convince somebody to change, just let them watch someone struggle with the thing they think is just fine. This is one of the best ways to show that the culture needs to change, and it's not about your opinion. It's: just look at what we saw. There's something very powerful when it's people, and not just data in a report or a study.

Well, we've run out of time, Brian. It's so interesting. You actually said that we are not robots. Thank God, I was having my doubts...

Oh, I actually am one.

Well, yes, you could be, because as I say, you don't eat, you have no time, so I wonder. But just to say that, obviously, designers are important: get more visual, as you said in your point six, and not only in MVPs and prototypes. Be more visual everywhere. Design is important. And I'm going to leave the audience with the question you posed: are you riding the wave, or are you creating it? Thank you so much, Brian O'Neill, for joining us on The Attic today, and we hope to see you next year. Stay with Aram in The Attic for more interesting chats. Our love and kisses to Cambridge, Massachusetts. Muchas gracias. Keep yourself... ah, ándale, órale, and we'll see you very soon. The audience in The Attic will see you in one minute; we'll be back with our next speaker. Thank you very much. Bye.