Welcome back, everybody. Jeff Frick here with theCUBE. We're in the Palo Alto studio talking about customer journeys today, and we're really excited to have a professional who's been doing this for a long time: Jeff Weidner, an information management professional, past, present, and future. Jeff, welcome.

Oh, thank you for having me.

So you've been playing in this sphere for a very long time, and we talked a little bit before we turned the cameras on. One of the great topics I love in this area is the 360 view of the customer, and the nirvana that everyone describes: we're pulling in all these data sets, we know exactly what's going on, the person calls into the call center and they can pull up all their records. There's this great vision that we're all striving for. How close are we to that?

I think we're several years away from that perfect vision we've talked about for the last 10 to 15 years, from the folks that were doing catalogs, like the Sears catalog, all the way to today, where we're trying to mix and match all this information. But most companies are not turning that into actionable data or actionable information in any way that's reasonable.

And it's just because of the historically siloed nature of all those different systems. I mean, we keep hearing: we're going to do it, all these things can tie together, we can dump all the data in a single data lake and pull it out. What are some of the inhibitors, and what are some of the approaches to break some of those down?

Most of it has been around getting that data lake in order, to put the data in its spot. Basically making sure: do I have the environment to work in?
Many times a traditional enterprise warehouse doesn't have the right processing power for you, the individual who wants to do the work, or doesn't have the capacity to let you just bring all of it in and try to ratify it. That's really the data cleansing, trying to make some sense of the data, because many times those domain experts aren't there. I usually work in marketing, and our customer 360 exercise was around direct mail, email, and all the interactions from our sales makers and the like. So we look at the data and go, I don't understand why the sales maker is forgetting X of that behavior that we want to roll together.

Second is the harmonization. I have Bob Smith and Robert Smith, and master data management systems are few and far between as real services that I can call, as a data scientist or a data worker, to say: how do I line these up? How can I make sure that all these customer touchpoints are really talking about the same individual, the same company, or maybe just a consumer?

But finally, in those customer 360 projects, it's getting those teams to want to play together, getting that crowdsourcing, either to change the data (such as: I have data, as you mentioned, around chat, and I want you to tell me more about it, or tell me how I could break it down), and if I want to make changes to it, you go, well wait, where's your money to make that change?

And there are so many aspects to it, right? There's the classic ingest: you've got to get the data. You've got to run it through a process, as you said, to harmonize it and bring it together. And then you've got to present it to the person who's in a position, at the moment of truth, to do something with it. And those are three very, very different challenges.
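The Bob Smith / Robert Smith problem Weidner describes is usually called entity resolution. As a minimal sketch of the idea (the nickname table, names, and threshold here are illustrative assumptions, not anything from the interview), one can normalize common nicknames and then fuzzy-match with the standard library:

```python
from difflib import SequenceMatcher

# Illustrative nickname table; real MDM services use much richer reference data.
NICKNAMES = {"bob": "robert", "bill": "william", "liz": "elizabeth"}

def normalize(name: str) -> str:
    """Lowercase the name and expand known nicknames to canonical forms."""
    return " ".join(NICKNAMES.get(part, part) for part in name.lower().split())

def same_customer(name_a: str, name_b: str, threshold: float = 0.7) -> bool:
    """Crude harmonization check: normalize, then fuzzy-match the results."""
    ratio = SequenceMatcher(None, normalize(name_a), normalize(name_b)).ratio()
    return ratio >= threshold

print(same_customer("Bob Smith", "Robert Smith"))  # True: both become "robert smith"
print(same_customer("Bob Smith", "Jane Doe"))      # False
```

A real master data management service would also weigh addresses, emails, and account identifiers, but the shape of the problem is the same: map many touchpoint records onto one customer.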
They've been the same challenges forever, but now we're adding all this new stuff to it. Are you pulling data from sources outside the system of record? Are you pulling social data? Are you pulling other system data that's not necessarily part of the transactional system? So we're making the job harder. At the same time, we're trying to give more power to more people, not just the data scientists but, as you said, the data workers. So how is that transformation taking place, where we're enabling more of these data workers, if you will, who aren't necessarily data scientists, with the power of the analytics and an aggregated data set behind them?

Right. We created, or have created, the Wild West. We gave them tools and said, go forth and make something out of it. Then we started having this decentralization of all the tools, and when we finally gave them the big, quote unquote, big data tools that could process billions of records, it was still a Wild West, but at least we got them centralized on certain tools. So we were able to standardize on the tool set and on the data environment, so that when they're working in that space we get to ask: what are you working on? How are you working on it? What type of data are you working with? And how do we bring that back as a process, so we can say: you did something on chat data? Great. Bob over here, he likes to work with that chat data. So there's that exposure and transparency because of the centralization of data. Now, new tools are adding data catalogs on top of that, so that you can actually record that known information in one wiki-like interface.
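The discovery story here ("you did something on chat data? Bob likes to work with chat data") is what a data catalog automates. A toy sketch of that lookup, with invented dataset and owner names purely for illustration:

```python
from collections import defaultdict

class Catalog:
    """Toy data catalog: tracks who works with which datasets, for discovery."""

    def __init__(self):
        # tag -> list of (dataset, owner) pairs registered under that tag
        self.by_tag = defaultdict(list)

    def register(self, dataset: str, owner: str, tags: list[str]) -> None:
        for tag in tags:
            self.by_tag[tag].append((dataset, owner))

    def who_works_with(self, tag: str) -> list[str]:
        """Return everyone who has registered work against a given tag."""
        return sorted({owner for _, owner in self.by_tag[tag]})

catalog = Catalog()
catalog.register("web_chat_feed", owner="Bob", tags=["chat", "digital"])
catalog.register("crm_accounts", owner="Alice", tags=["crm"])
print(catalog.who_works_with("chat"))  # ['Bob']
```

Commercial catalogs layer lineage, permissions, and documentation on top, but the core value is exactly this: making it visible who already works with which data.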
So we're trying to add more around putting the right permissions on top of that data, cataloging it in some way with these worksheets or information management tools, so that if you're dealing with privacy data, you've got it flagged from ingest all the way to the end. More controls are being seen as a way that a business improves its maturity.

Yeah. Now, the good news, bad news is that more and more of the actual interactions are electronic. People aren't going to places, they're not picking up the phone; they're engaging with the company via a web browser or, more and more, a mobile browser or a mobile app. So the good news is you can track all that. The bad news is you can track all that. So we add more complexity, and then there's this other little thing that everybody wants to do now, which is real time, right? With Kafka and Flink and Spark and all these new technologies that let you see all the data as it's flowing, versus a sampling of the data from the past, there's a whole new opportunity and challenge. So how have you tried to take advantage of that opportunity, and address that challenge, in your world?

Well, in my data science world, I said, hey, give me some more data, keep it coming. And when I have to put on the data sheriff hat, I'm now having to ask the executives and our business stakeholders: why streaming? Why do you really need all of this?

It's a new shiny toy.

A new shiny toy. So when you talk to stakeholders and they say, I want a shiny toy, great, I can get you that shiny toy, but I need an outcome. I need a value. And that helps me temper the next statement I give them: you want streaming, or you want real-time data? It's going to cost you 3x. Are you going to pay for it? Great, here's your shiny toy.
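The batch-versus-streaming distinction Weidner is pricing out can be shown in miniature without any of the real frameworks he names. A sketch (plain Python, no Kafka/Flink/Spark): a batch job waits for the full data set, while a streaming job maintains the answer as each event arrives.

```python
def batch_mean(values: list[float]) -> float:
    """Batch view: wait until all the data has landed, then compute once."""
    return sum(values) / len(values)

class StreamingMean:
    """Streaming view: keep a running aggregate, updated per event."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def update(self, value: float) -> float:
        """Fold one new event into the aggregate and return the current mean."""
        self.count += 1
        self.total += value
        return self.total / self.count

events = [3.0, 5.0, 10.0]
stream = StreamingMean()
for event in events:
    latest = stream.update(event)  # answer is current after every event

print(latest, batch_mean(events))  # 6.0 6.0 -- same answer, different latency
```

Both views converge on the same number; what streaming buys (and what costs the "3x" Weidner quotes) is having that number continuously, not once a day.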
But yes, with the influx of all of this data, you're having to change the architecture, and many times IT traditionally hasn't been able to make that rapid transition, which lends itself to shadow IT, or other folks trying to cobble something together to make it happen.

And then there's this other pesky little thing that gets in the way, in the form of governance and security.

Compliance, privacy, and finally marketability. I want you to feel that you can trust me in handling your data, but also that when I respond back to you, I'm giving you a good customer experience: the so-called "don't be creepy" rule.

Right, right.

But lately there's the new compliance rule in Europe, GDPR, a policy that comes with, well, a shotgun: violations of this policy, which covers privacy and my ability to be forgotten in the information a corporation collects, can mean 4% of a company's total revenue.

Right, right.

And that's on every instance. That's providing a lot of motivation for information governance today.

Right. That risk.

But the rules are around being able to say: where did the data come from? How did the data flow through the system? Who's touched that data? That's the information management tools, but mostly the human interaction: hey, what are you guys working on? How are you working on it? What assets are you actually producing, so that we can bring it together for that privacy, that compliance, and that workflow? Then layer on top of that the deliverability: how do you want to be contacted? What are the channels you feel are the right ways to engage with you? And, of course, the thing that gets missed in any optimization exercise: the feedback loop. I get feedback from you that says you're interested in puppies, but your data set says you're interested in cats. How do I feed that into a customer 360 product?
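The lineage questions here (where did the data come from, who touched it, can this person be forgotten) suggest a registry pattern. A toy sketch of one, assuming invented system and customer names; a real GDPR erasure workflow would, of course, also have to verify the request and confirm each downstream purge:

```python
class CustomerDataRegistry:
    """Toy lineage registry: record every system a customer's data lands in,
    so a right-to-be-forgotten request can fan out to all of them."""

    def __init__(self):
        self.locations = {}  # customer_id -> set of system names

    def record_touch(self, customer_id: str, system: str) -> None:
        """Note that some system ingested or processed this customer's data."""
        self.locations.setdefault(customer_id, set()).add(system)

    def erase(self, customer_id: str) -> list[str]:
        """Return the systems that must purge this customer, and drop the lineage."""
        return sorted(self.locations.pop(customer_id, set()))

reg = CustomerDataRegistry()
reg.record_touch("cust-42", "crm")
reg.record_touch("cust-42", "email_platform")
print(reg.erase("cust-42"))  # ['crm', 'email_platform']
print(reg.erase("cust-42"))  # [] -- nothing left to purge
```

The point is the data structure, not the code: without recorded lineage, "forget me" is unanswerable, which is why the catalog and flag-at-ingest work above matters.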
So privacy is not coming out and saying, oh, here's an advertisement for hippos, and you go, what do you know about me that I don't know?

Wrong browser. So you chose Datameer along this journey. Why did you choose them, how did you implement them, and how did they address some of the issues we've just been discussing?

Datameer was chosen primarily to take on that self-service data preparation layer, from the beginning, for dealing with large amounts of online data. We moved from the digital intelligence tools that are out there, which know about browser activity and the cookies used to get your identity, and said: we want the entire feed. We want all of that information, because we want to make it actionable. I don't want to just feed a BI report; I want to turn it into marketing automation. So we got the entire feed of data and we worked on it with the usual SQL tools. But after a while it wasn't manageable, whether because of the 450 to 950 columns of data, or the fact that there were multiple teams working on it and I had no idea what they were able to do, so I couldn't share in that value, I couldn't reuse the insights they could have produced. Datameer allowed for a visual interface, not a coding language, that let people put all of their work inside one interface. They didn't have to worry about saving it up to the server; it was all being done in one environment, so that I could take not only the digital data but the Salesforce CRM data, marry them together, and let people work with it. Then it broadened to other areas, again allowing that crowdsourcing of other people's analytics. Why? Mostly because of the state we're in around IT's inability to change rapidly, at least in our field. The biggest problem we had was that there wasn't a scheduler.
We didn't have the ability to get value out of our work without someone having to press the button and run it, and if they ran it, it took eight hours; they'd walk away, it would fail, and you had to go back and do it all over again. So Datameer gave us that self-service interface, with management that IT could agree on, to let us have our own lab environment and execute our work.

So what were the results when you suddenly gave people access to this tool? I mean, were they receptive? Did you have to train them a lot? Did some people just get it and some people just not want to act on data? What were the real-world results of rolling this out to a population?

The real-world results: we got $10 million in uplift in our marketing activities across multiple channels.

$10 million in uplift. And how did you measure that?

That was measured, one, through operating expenses, by not sending that work outside. Some of the management of the data was being sent outside, and that team built their own models off of it. We said we should be able to drink our own champagne. Second, it was the uplift in our direct email campaigns: having a better response rate, and generally not sending out a bunch of extra messages that we didn't need to. And then turning that into a list that could be sent to our email and direct mail vendors, to say: this is what we believe this account or contact is engaging with on the site, give us a little bit more context. We added that in so that we were, hopefully, resonating with a better message.

And where did you start? What was the easiest way to give people new to this type of tooling an opportunity to have success?

Mostly it was taking pre-doctored worksheets, or already pre-packaged output.
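The interview doesn't spell out how the uplift in response rate was computed, but a common way to measure campaign uplift is against a holdout group that is deliberately not mailed. A sketch with entirely hypothetical numbers:

```python
def uplift(treated_responses: int, treated_size: int,
           holdout_responses: int, holdout_size: int) -> float:
    """Incremental response rate: mailed-group rate minus holdout-group rate.

    The holdout captures people who would have responded anyway, so the
    difference is the lift the campaign itself actually caused.
    """
    return treated_responses / treated_size - holdout_responses / holdout_size

# Hypothetical campaign: 5% of mailed customers respond, 3% respond unprompted.
lift = uplift(treated_responses=500, treated_size=10_000,
              holdout_responses=300, holdout_size=10_000)
print(f"{lift:.1%}")  # 2.0%
```

Multiplying that incremental rate by average order value and audience size is one way a number like the $10 million figure could be rolled up; the figures above are purely for illustration.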
And one of the challenges we have is people saying, well, I don't want to work in a visual language. While there are users of tools like Tableau, Qlik, and others who are happy to drag and drop their data, many of the data workers we drew in said, I want to write in SQL.

Mm-hmm.

So we had to give at least that last-mile analytical data set to them and say, okay, go ahead and move it over to your SQL environment, move it into the space where you feel comfortable, confident, and in control. But come on back: we'll translate it back to this tool, and we'll show you how easy it was to go from working with IT, which would take months, to doing it yourself, which would take weeks plus the processing and the cost of your siloed shadow IT environment, or down to days. We were able to show them that acceleration of time to market for their data.

What was your biggest surprise? An individual user, an individual use case, something you just didn't see coming. Kind of a pleasant, you know, a lot of unintended consequences on the positive side.

That there was such wide adoption. Honestly, again, coming from the data science background, we thought it would just be: bring your data in, throw it out there, and we're done. We went from maybe about 20 large data sets of ad tech and mar tech (advertising technology and marketing technology) data to CRM information, order activity, and many other categories just within marketing alone. And perhaps the other big aha moment was that once we brought in other divisions' data, those teams came in and said, hey, we can use this too. So the adoption really surprised me: that people would say, oh, I can work with this. I have this freedom to work with this data.

Right, right. Well, we see it time and time again.
It's a recurring theme of all the things we cover: a really big piece of the innovation story is giving more people access to more data, and the tools to actually manipulate it, so that you can unlock that brain power, as opposed to keeping it with the data scientists in mahogany row and the super big brains. So it sounds like that really validates the whole hypothesis.

Right. I went through hands-on reviews of 11 different tools when I chose Datameer, everything from big-name companies to small startups with wild artificial intelligence slogans in their marketing material. And we chose it mostly because it had the right fit as an end-to-end approach. It had the scheduler, it had the visual interface, and it had enough management and other capabilities that IT would leave us alone. Some of the other products we looked at gave you the ability to work with data, or would allow you to schedule data, but it never came all together, and for the value we get out of it, we needed to have it all together.

Right. Well, Jeff, thanks for taking a few minutes and sharing your story. Really appreciate it. It sounds like it was a really successful project.

All right. He's Jeff Weidner, I'm Jeff Frick. You're watching theCUBE from Palo Alto. Thanks for watching.