Hi, this is your host, Olim Bhartian. Welcome to a brand new episode of T3M, our Topic of the Month show. This month's topic is data, and today we have with us Neil Palmer, partner at Carrick Group. Neil, it's great to have you on the show.

It's great to be here. Thanks for having me.

If you look at the whole evolution from traditional IT to a cloud, Kubernetes-centric world, how have you seen data evolve?

I think we've all seen, certainly within the last 10 years or so, the sheer explosion in the amount of data that's not only being generated, but that's also accessible to organizations. We've seen some of the struggles organizations have had around cataloging and accessing that data, and some of the struggles they've had in actually proving out the value of what are now becoming fairly significantly sized data lakes. There are certainly ongoing challenges, but there are ongoing benefits as well to what some of these firms are doing. In particular, when you see initiatives like those coming out of Snowflake and BigQuery around sharing data sets across larger organizations, controlling both accessibility and security around the data sets being consumed becomes quite a useful tool that's available to people.

We are not only consuming huge amounts of data, we're also creating massive amounts of it, and a lot of modern businesses are built around users generating that data. How do you see organizations dealing with this?

I think we're starting to see the emergence of what I am advocating as a new dimension to systems engineering and systems design, which is the economic value of the information that you're trying to either produce with or consume from the system that you're building.
The analogy I use is from the financial world: an overnight market is always an overnight market. That doesn't change. But a risk calculation might change based on the volatility in the market on any given day. So when scaling within the application, you may choose to scale up if the market is having a particularly volatile day and produce that information on a 10-second basis as opposed to an hourly basis. Having organizations think about what the economic benefit of that information is to them allows them to put a cost-benefit analysis over both storing and consuming that data.

Now, the most critical aspect of data, which goes beyond just creating and consuming it, is backup and recovery if something goes wrong. While the public cloud does take care of a lot, it's not a magical place that takes care of almost everything, and users still have to deal with some of these data-related issues and challenges. Talk about those challenges, those complexities.

Nothing is ever a magic button for anything, unfortunately, as much as we would like it to be otherwise. As we've moved significantly over the past few years from data-center-native approaches to cloud-native approaches, we've seen a pivot away from traditional backup and DR toward a viewpoint of resiliency and scalability within the applications. For a lot of organizations, backup and DR used to be a very static, semi-regular process. It was expensive, and you'd end up with a lot of storage sitting on tape somewhere that was never really reused and was of no ongoing value to the organization. And from the pandemic, from a people-and-process perspective, everybody going remote largely solved the people side of the disaster recovery equation.
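The volatility-driven cadence Neil describes can be sketched in a few lines. This is a hypothetical illustration; the threshold values, function name, and intervals are invented, not taken from any real trading system.

```python
# Hypothetical sketch: pick a recompute interval for a risk calculation
# based on observed market volatility. The thresholds below are invented
# for illustration only.

def recompute_interval_seconds(daily_volatility: float) -> int:
    """Return how often, in seconds, to rerun the risk calculation.

    On a calm day the hourly number is good enough; on a volatile day
    the information is worth producing every 10 seconds.
    """
    if daily_volatility >= 0.05:   # very volatile session
        return 10                  # near-real-time risk
    if daily_volatility >= 0.02:   # moderately volatile
        return 600                 # every 10 minutes
    return 3600                    # calm market: hourly is fine
```

The cost-benefit idea is baked into the branching: fresher data costs more compute, so the system only pays for it when the economic value of the information justifies it.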
And with a lot of cloud-native applications now, you can bake in resiliency, or remove the need for traditional backup and DR, by using, say, multi-regional storage for your data. So you're not just doing backup; you're gaining the benefit of scalability and resiliency within your applications, rather than just sticking something on tape to gather dust somewhere.

I can't help drawing a parallel with the whole shift-left movement, especially when it comes to security, or with the DevOps movement. When it comes to data, what kind of cultural shift are you seeing?

Absolutely. You bring up an excellent point with security. I don't think application development teams can work in isolation anymore and just sit there and churn out business logic. They have to be conscious of the data: where they're consuming it from, where they're providing it to, and what those benefits are. From a security and availability perspective, it's all a function of risk, whether reputational or financial. And so there's no technical reason, for the most part, why anything that is essentially cloud-enabled can't be available to a number of nines that makes sense for your business.

And how is this whole ecosystem of projects and vendors evolving to help customers, wherever they are in their journey, so that they don't get overwhelmed by this cloud-enabled complexity?

I wonder about the complexity versus the visibility of the problem. With data centers, a lot of people weren't aware of it, didn't know. With cloud, that responsibility has moved to the left, or to the engineering teams, whichever way you want to look at it. I think that's important. We still see traditional backups on relational databases and things like that.
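The "multi-regional storage instead of tape" idea can be sketched as a toy replicated store: every write lands in several regions, and reads fall back to a healthy region, so resiliency is a live property of the running system rather than a static backup. The class names and region names below are invented for illustration; a real system would use a cloud provider's multi-region storage rather than anything like this.

```python
# Hypothetical sketch of multi-regional storage replacing static backup.
# Writes replicate to every healthy region; reads fail over automatically.

class Region:
    def __init__(self, name: str):
        self.name = name
        self.store: dict[str, bytes] = {}
        self.healthy = True

class MultiRegionStore:
    def __init__(self, regions: list[Region]):
        self.regions = regions

    def put(self, key: str, value: bytes) -> None:
        # Synchronous replication to every healthy region.
        for region in self.regions:
            if region.healthy:
                region.store[key] = value

    def get(self, key: str) -> bytes:
        # Read from the first healthy region that has the key.
        for region in self.regions:
            if region.healthy and key in region.store:
                return region.store[key]
        raise KeyError(key)

regions = [Region("us-east"), Region("us-west"), Region("eu-west")]
store = MultiRegionStore(regions)
store.put("orders/2024-01-01", b"...order data...")

regions[0].healthy = False             # simulate a regional outage
print(store.get("orders/2024-01-01"))  # still served, from us-west
```

The point of the sketch is the last two lines: losing a region degrades nothing, which is exactly the scalability-plus-resiliency benefit that a tape copy never gives you.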
But when you have some of the large-scale cloud-native databases, that multi-regionality removes the problem for you, which I find to be a great advantage. And when you look at the scale of some of this data anyway, when you're talking several petabytes, having a static copy sitting around is expensive and not terribly useful.

If you look at the concerns related to data that we discussed today, are they relevant only to large enterprises dealing with massive amounts of data, or to any company dealing with a sizable amount of data?

Obviously, the scale of your problem introduces complexities of its own, but all organizations have this problem. The consumption of data across any given organization, and through to its third-party providers or clients, has become more and more critical. Again, I come back to Snowflake's data set sharing. I think that solved an awful lot of problems for people by providing, essentially, IAM and access controls around what was previously a series of fairly complex batch jobs that would run at obscure times of day, where the scripts were understood by only one or two people. Now, from a security perspective, you've got easily controllable access to the data that your clients or your providers need.

If you look at traditional IT, the data backup and recovery market there is very, very mature. But how mature is the data backup and recovery market in the cloud-native, Kubernetes space?

I think we've come a long way. I see far less resistance to the idea of having multi-regional data sets within your systems. For the most part, and again, it depends on the environment we're talking about: dev versus QA versus integration versus production.
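The access-control model Neil contrasts with batch jobs can be sketched like this: instead of scripted file drops at obscure hours, a provider grants named consumer accounts read access to a live data set and can revoke it at any time. The API shape and names below are invented for illustration; real platforms such as Snowflake shares expose much richer controls than this toy.

```python
# Hypothetical sketch of share-based data access replacing batch jobs.
# Grants are explicit, auditable, and revocable; no data copies shipped.

class SharedDataset:
    def __init__(self, name: str, rows: list[dict]):
        self.name = name
        self._rows = rows
        self._readers: set[str] = set()

    def grant(self, account: str) -> None:
        self._readers.add(account)

    def revoke(self, account: str) -> None:
        self._readers.discard(account)

    def read(self, account: str) -> list[dict]:
        if account not in self._readers:
            raise PermissionError(f"{account} has no access to {self.name}")
        return list(self._rows)  # consumers always see current data

trades = SharedDataset("trades", [{"id": 1, "qty": 100}])
trades.grant("client-a")
print(trades.read("client-a"))   # [{'id': 1, 'qty': 100}]
trades.revoke("client-a")        # access ends at once, no batch job to unwind
```

Compared with a cron-driven extract script known to one or two people, the grant/revoke calls are the whole security surface, which is what makes the access "easily controllable".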
But for production workloads, multi-region is the de facto requirement. Now, whether it's multi-region within, say, the US, or multi-region on a global basis, you then get into the broader questions around data protection and legality, and that's not a technical question so much as a business and law question. For the most part, though, that availability for production workloads is a given; it's not a question we see coming up anymore. Testing and making sure that the systems actually work in a multi-regional capacity is probably a slightly different matter. And as we've discussed, the advent of chaos engineering coming out of Netflix has really helped organizations embrace the concept of failure from the get-go and the need to test for resiliency from the very beginning, rather than a traditional disaster recovery mode where once a year, or once every six months, they unplugged stuff and hoped it still worked.

Since we touched on the cultural aspect earlier, I also want to ask: what are companies doing to build a culture that is prepared for failure? We talk about things like chaos engineering, and there are a lot of practices that make teams ready to recover quickly when something goes wrong. Talk about that cultural aspect.

Yeah, we see slightly less of it than I would have thought at this point. I think organizations are still somewhat struggling to adopt it. There's a lot of pressure within the current economic environment to be constantly delivering business value, which for a lot of folks tends to mean new features and new capabilities. Addressing technical debt tends, in a very human way, to get pushed down in prioritization. But it's when you hit those failure points, as you invariably do within the cloud, that the need to consistently invest in that kind of resiliency engineering becomes apparent.
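The chaos-engineering idea of testing failure from the get-go can be sketched as an automated test that injects a regional outage on every run, rather than an annual drill. Everything here, including the service shape and region names, is invented for illustration; real tools like Chaos Monkey inject failures into live infrastructure, not a toy like this.

```python
# Hypothetical sketch of continuous failure injection: every test round
# kills one region at random and asserts the system still answers.
import random

class RegionClient:
    def __init__(self, name: str, alive: bool = True):
        self.name = name
        self.alive = alive

    def query(self) -> str:
        if not self.alive:
            raise ConnectionError(f"{self.name} is down")
        return f"response from {self.name}"

def resilient_query(clients: list["RegionClient"]) -> str:
    # Try each region in order; the first healthy one wins.
    for client in clients:
        try:
            return client.query()
        except ConnectionError:
            continue
    raise RuntimeError("all regions down")

def chaos_test(rounds: int = 100) -> None:
    # Each round, randomly break one region and check the failover path.
    for _ in range(rounds):
        clients = [RegionClient("us-east"), RegionClient("us-west")]
        random.choice(clients).alive = False  # inject the failure
        assert "response from" in resilient_query(clients)

chaos_test()
```

Running this in CI exercises the failover code on every commit, which is the cultural shift away from "unplug it once a year and hope".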
And we do see organizations that have an engineer-hour budget that says 20% of every sprint is invested in resiliency engineering.

Now, let's talk about how Carrick Group is helping customers in their journey.

Yeah, so Carrick is focused on app modernization and platform engineering, helping organizations move from data-center-native to cloud-native. A lot of the work we do, particularly on the platform engineering side, is creating these resilient platforms and the pipelines that developers then use to build out their systems and applications. That means access to Kubernetes clusters with the appropriate level of scalability, failover, et cetera; CI/CD pipelines with SecOps built into them; and data tagging and audit trails, so people understand, as the data moves through the application and even through the build systems, where it's coming from and where it's going to. Hopefully we're putting these tool sets in the hands of the developers so that they can focus on the business capabilities they need to, while at the same time giving the organization the capabilities it needs around resiliency and auditability, particularly in a regulated environment.

Neil, thank you so much for taking time out today to talk about data and, of course, share your insights on how this whole ecosystem is evolving. I would love to chat with you again.

Thank you.

So Neil, thank you very much. Appreciate your time.