From theCUBE Studios in Palo Alto and Boston, connecting with thought leaders all around the world, this is a CUBE Conversation.

Hi everybody, welcome back. This is Dave Vellante, and welcome to this special CUBE presentation made possible by IBM. We're talking about data ops, data ops in action. Steve Lueck is here. He's the Senior Vice President and Director of Data Management at Associated Bank. Steve, great to see you. How are things going in Wisconsin? All safe?

We're doing well. We're staying safe, staying healthy. Thanks for having me, Dave.

Yeah, you're very welcome. So, Associated Bank, a regional bank in the Midwest. You cover a lot of territory, not just Wisconsin but a number of other states around there: retail, commercial lending, real estate, all kinds of stuff. I think the largest bank in Wisconsin. But tell us a little bit about your business and your specific role.

Sure, yeah, that's a good intro. We're definitely the largest bank headquartered in Wisconsin, and we have branches in the upper Midwest area, so Minnesota, Illinois, and Wisconsin are our primary locations. My role at Associated, I'm Director of Data Management. I've been with the bank a couple of years now, really just focused on defining our overall data strategy, everything from data ingestion through consumption of data and analytics, and then also the data governance components, keeping the controls and the rails in place around all of our data and its usage.

So financial services is obviously one of the more cutting-edge industries in terms of its use of technology. Not only are you good negotiators, but you often are early adopters. You guys were on the big data bandwagon early; a lot of financial services firms were early on in Hadoop. But I wonder if you could tell us a little bit about the business drivers and the pressure points that are informing your digital strategy, your data and data ops strategy?
Sure. I think one of the key areas for us is that we're trying to shift from more of a reactive mode into more of a predictive, prescriptive mode from a data and analytics perspective, using our data to infuse and drive more business decisions, but also to infuse it into actual applications and the customer experience, et cetera. So we have a wealth of data at our fingertips. We're really focused on starting to build out that data lake style strategy, making sure that we're ahead of the curve as far as trying to predict what our end users are going to need and some of the advanced use cases we're going to have before we even know that they actually exist, right? So it's really trying to prepare us for the future and what's next, enabling and empowering the business to be able to pivot when we need to, without having everything perfectly prescribed and ready for us.

I wonder if we could talk a little bit about the data journey. I know it's kind of a buzzword, but in my career as an independent observer and analyst, I've watched the promise of, whether it was decision support systems or the enterprise data warehouse, giving that 360-degree view of the business, the real-time nature, the customer intimacy, all that. And up until the recent digital meme, I feel as though the industry hasn't lived up to that promise. So I wonder if you could take us through the journey and tell us where you came from and where you are today. And I really want to understand some of the successes that you've had.

Sure, that's a great point. And I feel like as an industry, we're at a point now where the people, process, and technology have sort of all caught up to each other, right?
I feel that real-time streaming analytics, the data-as-a-service mentality, just leveraging web services and APIs more throughout our organization and our industry as a whole, is really starting to take shape right now, and all the pieces of that puzzle have come together.

Where we started, from a journey perspective, was very much your legacy reporting data warehouse mindset: tell me the data elements that you think you're going to need, we'll figure out how to map those in, we'll transform them, we'll figure out how to get those prepared for you, and that whole life cycle, that waterfall mentality of how do we get this through the funnel and get it to users. Quality was usually there, the enablement was there, but it was missing that rapid turnaround. It was also missing the what's next, right? The what you haven't thought of, almost to the point of discouraging people from asking for too many things, because it got too expensive, it got too hard to maintain. There was some difficulty in that space.

So some of the things that we're trying to do now is build that enablement mentality of encouraging people to ask for everything. When we bring new systems on to the bank, it's no longer an option how much data they're going to send us, right? We're getting all of the data, we're going to bring that all together for people, and then really starting to figure out how this data can now be used. And we almost have to push that out and infuse it within our organization, as opposed to waiting for it to be asked for. So I think bringing that people and process, and now the tools and capabilities, together has really started to make a move for us in the industry.

I mean, it's really not an uncommon story, right? You had a traditional data warehouse system. You had some experts that you had to go through to get the data. The business kind of felt like it didn't own the data.
It felt like it was imposing every time it made a request, or maybe it was frustrated because it took so long. And then by the time they got the data, perhaps the market had shifted. So it created a lot of frustration. And then to your point, it became very useful as a reporting tool, and that was kind of the sweet spot. So how did you overcome that and get to where you are today? And where are you today?

I was going to say, I think we're still overcoming that. We'll see how this all goes, right? There's a couple of things that we've started to enable. First off is just having that concept of scale and that enablement mentality in everything that we do. So when we bring systems on, we bring on everything. We're starting to have those components and pieces in place, and we're starting to build more framework-based, reusable processes and procedures, so that every ask is not brand new. It's not this reinvent-the-wheel, re-solve-all-of-that-work exercise. I think that's helped expedite our time to market and really gotten some of the buy-in and support from around the organization. And it's really just finding the right use cases and the right business partners to work with, so that you help them through their journey as well. Because they're on a similar roadmap and journey for their own life cycles, their product development, or whatever business line they're in.

So from a process standpoint, did you have to jettison the, you mentioned waterfall before, and move to a more lean and agile approach? Did it require different skill sets? Talk about the process and the people side of it.

Yeah, it's been a shift. We've tried to shift more towards, I wouldn't call us formal agile. I would say we're a little bit more lean, an iterative, backlog-type approach, right?
So putting that work together in queues, having the queue be reprioritized, and working with the business owners through those things has been a key success criterion for us in how we manage that work, as opposed to opening formal project requests and having all that work funnel through some of the old channels that, like you mentioned earlier, detracted a little bit from the way things had been done in the past and added some layers that people felt potentially wouldn't be necessary for what was a small ask in their eyes. I think it also led to some of the data silos and components that we have in place today in the industry, and I don't think our company is alone in having data silos and components of data in different locations. But those are there for a reason. Those were there because they're filling a need that had been missing, or a gap in the solution. So what we're trying to do is really take that to heart and evaluate what we can do to enable those mindsets and mentalities, and find out what the gap was and why they had to go get a siloed solution or work around operations and technology and the channels that had been in place.

What would you say were your biggest challenges in getting from point A to point B, point B being where you are today?

There were challenges on each of the components of the pillar, right? So people, process, technology. People are hard to change, right? Behavioral-type changes have been difficult; there are components of that that have definitely been in place. Same with the process side, right? Changing into that backlog-style mentality, working with the users, and having more of that maintenance-type support work is a different culture for our organization than traditional project management. And then the tool sets, right? The tools and capabilities, we had to look and evaluate: what tools do we need to enable this behavior and this mentality?
How do we enable more self-service data exploration? How do we get people the data that they need when they need it, and empower them to use it?

So maybe you could share with us some of the outcomes. And I know we're never done in this business, but thinking about the investments that you've made in tech, in people, in process, the time it takes to get leadership involved, what has been, so far anyway, the business outcome? Can you share any metrics, or just some subjective guidance?

Yeah, from a subjective perspective, some of the biggest things for us have just been our ability to truly start to have that 360-degree view of the customer, which we're probably never gonna fully get to, right? Everyone's striving for that. But the ability to have all of that data available at our fingertips, have it all consolidated now into one location, one platform, and start to be that hub that redistributes that data out to our applications, has been a key component for us.

I think one of the other big differentiators for us, and value that we can show from an organizational perspective: we're in an M&A mode, right? So we're always looking from a merger and acquisition perspective. The model that we've built out from a data strategy perspective has proven itself useful over and over now in that M&A mentality of how you rapidly ingest new data sets, get them understood, get them distributed to the right consumers. It fit our model exactly, and it hasn't been an exception. It's been just part of our overall framework for how we get that data in. It wasn't anything new that we had to do differently because it was M&A; just the timelines were probably a little bit more expedited.

The other thing that's been interesting is some of the world that we're in now, right?
From a COVID perspective, having to pivot and start to change some of the way we do business, some of the PPP loans, our business model sort of had to change overnight. And our ability to work with our different lines of business and get them the data they need to help drive those decisions was another scenario where, had we not had the foundational components there in the platform to do some of this, we would have spun a little bit longer, I think.

So your data ops approach, I'm going to use that term, helped you in this COVID situation. When you had the PPP, you had a slew of businesses looking to get access to that money. You had uncertainty with regard to what the rules of the game were. You, as the bank, had to adjudicate, but it was really kind of opaque in terms of what you had to do. The volume of loans went through the roof, and the timeframe, it was like within days or weeks that you had to provide these. So I wonder if you could talk about that a little bit, and how your approach to data helped you be prepared for that?

Yeah, it was a race. I mean, the bottom line was it felt like a race, right? From an industry perspective, as far as how can we get this out there soon enough, fast enough, and provide the most value to our customers. Our applications teams did a phenomenal job of enabling the applications to help streamline some of the application process for the loans themselves.
But from a data and reporting perspective, behind the scenes we were there, and we had the tools, capabilities, and readiness to say: we have the data now in our lake, we can start to make some business-driven decisions around all of the different components of what's being processed on a daily basis from an application perspective versus what's being funded, and how those funnel all the way through. We were doing data quality checks and operational reporting checks to make sure that data moved properly and got booked in the proper way, because of the rapid nature of how that was all being done.

There were other COVID-type use cases as well. We had some different scenarios around fee reporting and other capabilities that the business wasn't necessarily prepared for. We wouldn't have planned to have some of these types of reporting in place, but we were able to pivot, because we had access to all the data through these frameworks that we had put into place, and could pretty rapidly start to turn around some of those data points and analytics for us to make better decisions.

Hey, so given the propensity and the pace of M&A, there has to be a challenge just fundamentally in terms of data quality, consistency, governance. Give us the before and after, before being before the data ops mindset, and after being where you are today.

Yeah, and I think that's still a journey. We're always trying to get better on that as well, but the data ops mindset for us really has shifted us to start to think about automation, right? Pipelines, that enablement, that constant improvement, and how do we deploy faster, deploy more consistently, and have the right capabilities in place when we need them. Where some of that has come into place from an M&A perspective is really around building scale into everything that we do.
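The processed-versus-funded reconciliation Steve describes can be sketched roughly as below, in Python, the language the team says it is converting its frameworks to. This is a hypothetical illustration only; the loan IDs, field names, and amounts are invented for the example and are not Associated Bank's actual schema or pipeline.

```python
# Hypothetical sketch of a daily reconciliation check between loan
# applications processed and loans actually funded. Field names and
# record shapes are illustrative assumptions.

def reconcile_daily(applications, fundings):
    """Return loan IDs that were processed but never funded, and loan
    IDs whose funded amount disagrees with the application amount."""
    funded_by_id = {f["loan_id"]: f for f in fundings}
    unfunded, mismatched = [], []
    for app in applications:
        funded = funded_by_id.get(app["loan_id"])
        if funded is None:
            unfunded.append(app["loan_id"])        # processed, never funded
        elif funded["amount"] != app["amount"]:
            mismatched.append(app["loan_id"])      # booked with wrong amount
    return unfunded, mismatched

apps = [
    {"loan_id": "A1", "amount": 50_000},
    {"loan_id": "A2", "amount": 25_000},
    {"loan_id": "A3", "amount": 10_000},
]
funded = [
    {"loan_id": "A1", "amount": 50_000},
    {"loan_id": "A3", "amount": 12_000},
]
unfunded, mismatched = reconcile_daily(apps, funded)
print(unfunded)    # ['A2']
print(mismatched)  # ['A3']
```

In an actual pipeline, checks like this would run as part of the automated deployment framework described above, with the exceptions routed into operational reporting rather than printed.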
That real-time nature, the scalability, the rapid deployment models that we have in place are really where that starts to join forces and become powerful: having the ability to rapidly ingest new data sources, whether we know about them or not, and then having the tools and platforms to be able to expose that data to our users and enable our business lines. Whether it's COVID, whether it's M&A, the use cases keep coming up, right? We keep running into the same concept, which is rapidly get people the data they need when they need it, but still provide the rails and controls and make sure that it's governed and controlled along the way as well.

We haven't talked much about the tech, so I wonder if we could spend some time on that. Can you paint a picture for us of what we're looking at here? You've got some traditional EDWs involved. I'm sure you've got lots of data sources. You may have been one of the zookeepers from the Hadoop days, with a lot of experimentation. There may be some machine intelligence in there. Paint a picture for us.

Sure, so we're evolving some of the tool sets and capabilities as well. We have some generic, custom, in-house-built ingestion frameworks that we've built up for how to rapidly ingest and script out the nature of how we bring those data sources into play. What we've now started as well is a journey down the IBM CloudPak product, which is providing us the ability to govern and control all of our data sources, and then start to enable some of that real-time, ad hoc analytics and data preparation, data shaping. Some of the components that we're doing in there are around data discovery: pointing at data sources, rapidly running data profiles, exposing that data to our users. Obviously very handy in the M&A space and anytime you get new data sources in.
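A data profile of the sort described, point at a newly ingested source and summarize every column, might look roughly like this in Python. The column names and the shape of the profile output are assumptions for illustration, not the CloudPak profiling API.

```python
# Illustrative per-column profiling pass over a newly ingested source:
# row count, null count, distinct values, and most common value.
# Schema and output shape are hypothetical.
from collections import Counter

def profile(rows):
    """Build a simple per-column profile from a list of row dicts."""
    stats = {}
    for row in rows:
        for col, val in row.items():
            s = stats.setdefault(col, {"rows": 0, "nulls": 0, "values": Counter()})
            s["rows"] += 1
            if val is None:
                s["nulls"] += 1          # track null rate per column
            else:
                s["values"][val] += 1    # track value distribution
    return {
        col: {
            "rows": s["rows"],
            "nulls": s["nulls"],
            "distinct": len(s["values"]),
            "top": s["values"].most_common(1)[0][0] if s["values"] else None,
        }
        for col, s in stats.items()
    }

rows = [
    {"state": "WI", "product": "mortgage"},
    {"state": "MN", "product": None},
    {"state": "WI", "product": "deposit"},
]
print(profile(rows)["state"])  # {'rows': 3, 'nulls': 0, 'distinct': 2, 'top': 'WI'}
```

A summary like this is what lets analysts decide quickly, say during an M&A ingest, which columns are usable, which are sparse, and which need business terms assigned before wider exposure.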
But then the concept of publishing that, and leveraging some of the AI capabilities of assigning business terms and the data glossary, those components are another key piece for us. On the consumption side of the house for data, we have a couple of tools in place. We're a Cognos shop, and we do have Tableau from a data visualization perspective as well that we're leveraging. But that's where CloudPak is now starting to come into play from a data refinement perspective, giving users the ability to actually go shape and prep their data sets, all within that governed concept. And then we've actually now started down the enablement path from an AI perspective with Python and R, and we're using CloudPak as our orchestration tool to keep all of that governed and controlled as well, enabling some new AI models and some new technologies in that space. We're actually starting to convert all of our custom-built frameworks into Python now as well, so we start to have some of that embedded within CloudPak and can start to use some of the rails of those frameworks within it.

Okay, so you've got the ingestion side, where you've done a lot of automation it sounds like, the data profiling, maybe classification, and automating that piece. And then you've got the data quality piece, the governance. You've got visualization with Tableau. And this all kind of fits together in a quote-unquote open framework, is that right?

Yeah, exactly. I mean, the framework itself, from our perspective, we're trying to keep the tools as consistent as we can. We really want to enable our users to have the tools that they need in the toolbox and keep all of that open. What we're trying to focus on is making sure that they get the same data, the same experience, through whatever tool and mechanism they're consuming from.
So that's where that platform mentality comes into place: having CloudPak in the middle to help govern all of that and re-provision some of those data sources out for us has been a key component for us.

Well, Steve, it sounds like you're making a lot of progress, moving from the days of the data temple, where the high priests were the sort of keepers of the data, to more of a data culture, where the businesses feel ownership of their own data. You've achieved self-service. I think you've got much more confidence in the compliance and governance piece. But bring us home, just in terms of that notion of data culture, and where you are and where you're headed.

No, definitely. I think that's been a key for us too: we put in a strategy that helps define and dictate some of those structures and ownership and makes that more clear. Some of the failures of the past, if you will, of the overall monster data warehouse were that nobody ever owned it. You always ran the risk that either the loudest consumer actually owned it or no one actually owned it. What we've started to do, in that lake mentality with all that data ingested into our frameworks, is make data owners clear-cut. It's who sends that data in. What is the book of record system for that source data? We don't manipulate it, we don't touch it, we don't transform it as we load it. It's there and available: you own it. We're applying the same mentality on the consumer side. So we have a series of structures from a consumption perspective, so that for all of our users consuming our data, it's represented exactly how they want to consume it. So again, with that ownership we're trying to take out a lot of that gray area, enabling them to say: yeah, I own this, I understand what I'm going after, and I can put the ownership and the rules and the stewardship around that, as opposed to having that gray model in the middle that we never get to.
But I guess to kind of close it out, really the concept for us is enabling people and users, right? Giving them the data that they need when they need it. It's really about providing the framework, and then the rails, around doing that. It's not about building out a formal data warehouse model or a formal this-or-that, or, like you mentioned before, some of the ivory tower type concepts, right? It's really about purpose-built data sets, getting our users empowered with the data they need when they need it, all the way through infusing that into our applications, so that the applications can provide the best user experiences and use the data to our advantage.

Yeah, all about enabling the business. I've got to ask you while I have you: how's IBM doing as a partner? What do you like? What could they be doing better to make your life easier?

Sure. I think they've been a great partner for us, as far as that enablement mentality. The CloudPak platform has been a key for us; we wouldn't be where we are without that tool set. Our journey, originally, when we started looking at tools and modernization of our stack, was around data quality and data governance type components and tools. We now, because of the platform, have released our first Python AI models into the environment. We have our studio capabilities natively, because that's all containerized now within CloudPak. So we've been able to enable new use cases and really advance, where otherwise we would have had to bring in a lot more technologies and capabilities and then integrate those ourselves. The ability to have that all done for us, and to be able to leverage that platform, has been key to helping us get some of these rolled out as quickly as we have. As far as the partnership, they've been great as far as listening to what the next steps are for us: where we're headed, what do we need more of, what can they do to help us get there?
So it's really been an encouraging environment. As far as what they can do better, I think it's just keep delivering, right? Delivery is key, so keep releasing the new functionality and new features and keeping the quality of the product intact.

Well, Steve, it's great having you on theCUBE. We always love to get the practitioner angle. It sounds like you've made a lot of progress, and as I said, we're never finished in this industry. So best of luck to you. Stay safe, and thanks so much for sharing.

Appreciate it, thank you.

All right, and thank you for watching, everybody. This is Dave Vellante for theCUBE, data ops in action. We've got the crowd chat a little bit later. Keep right there; we'll be right back after this short break.