Live from San Diego, California, it's theCUBE. Covering KubeCon and CloudNativeCon, brought to you by Red Hat, the Cloud Native Computing Foundation, and its ecosystem partners.

Welcome back to theCUBE. I'm Stu Miniman and my co-host is John Troyer. We're in San Diego for KubeCon + CloudNativeCon 2019, our fourth year covering this show, with over 12,000 in attendance. There's been such growth in the ecosystem, with lots of different projects to talk about, not just Kubernetes. Joining us, first time on the program, longtime watcher, is Webb Brown, co-founder of Kubecost, yet another project here in the ecosystem. Thanks so much for joining us.

Thank you so much for having me.

All right, so every time we get a founder on, we ask: tell us a little bit about your background, and give us the why behind the creation of Kubecost.

Yeah, absolutely. Our founding team all worked in infrastructure monitoring at Google for a long time, in container orchestration environments. We saw this challenge where teams that were moving to Kubernetes were finding it easy to let costs get away from them. There are a lot of moving parts that weren't there before, a lot of dynamic aspects that are hard to get your arms around. And we found ourselves pulled toward helping teams solve those problems. That was a little over a year ago, when we took the plunge, and here we are.

Yeah, we remember the days when public cloud was supposed to be simple and inexpensive, and we found out that maybe it's neither of those things necessarily. Let's click in a little bit on containers and Kubernetes. What's different about this compared to everything else we've been doing in public cloud for the last 10-plus years?

Yeah, yeah. We believe that in its ideal state it still has the ability to be exactly those things, simple and much more affordable. But we think there are tools and elements of this that create risk to the contrary.
And we think there are three things that are different here. First, you now have access to these incredibly powerful abstractions, available at global scale, that give you access to really expensive resources, and mistakes there can be costly. Two, you're seeing this move toward decentralized deployments, where individual product or application engineering teams are managing their own applications, even provisioning their own infrastructure, in much higher-velocity, more dynamic environments. And three, it's much harder to have visibility in these multi-tenant environments, right? You can now have many teams, even many departments, shipping on a single VM or a small set of VMs.

If you could just give us the bumper sticker on the project itself: how long has it been around? It's available on GitHub, I see. How many people are using it, growth, things like that?

Absolutely, absolutely. We started the project about a year ago. The GitHub project specifically is for doing cross-cloud cost allocation. There are a lot of challenges in measuring the cost of, say, CPU, RAM, storage, et cetera, when you're talking about spot instances in US Central on AWS versus committed use in US Central on GCP. So this project provides a uniform standard and library to measure costs across all these different environments. Hundreds of teams are using it today. We have integrations with Azure, GCP, and AWS, and we also support on-prem Kubernetes clusters.

Kind of a minor detail, Webb, but those costs change week over week as announcements happen, as instance prices go up and down. How do the project and the community come together to even track all that?

Yeah, we know it well. We're living and breathing and seeing exactly that, and it speaks to the real complexity here. The project is designed to support exactly that.
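The cross-cloud cost allocation Webb describes, pricing the same resource shape uniformly across different providers and purchase types, can be sketched roughly like this. All names, fields, and rates here are illustrative assumptions, not the project's actual API or real prices:

```python
from dataclasses import dataclass

@dataclass
class ResourcePrice:
    """A normalized rate-card entry; fields are hypothetical, not the project's schema."""
    provider: str         # e.g. "aws", "gcp", "azure", "on-prem"
    purchase_type: str    # e.g. "spot", "on-demand", "committed-use"
    cpu_core_hour: float  # USD per vCPU-hour
    ram_gb_hour: float    # USD per GB-hour

def node_hourly_cost(price: ResourcePrice, cores: float, ram_gb: float) -> float:
    """Price one node shape under a provider's normalized rates."""
    return cores * price.cpu_core_hour + ram_gb * price.ram_gb_hour

# The same 4-core / 16 GB node shape priced two ways (made-up example rates):
aws_spot = ResourcePrice("aws", "spot", cpu_core_hour=0.012, ram_gb_hour=0.0016)
gcp_cud = ResourcePrice("gcp", "committed-use", cpu_core_hour=0.020, ram_gb_hour=0.0027)

print(round(node_hourly_cost(aws_spot, 4, 16), 4))  # 0.0736
print(round(node_hourly_cost(gcp_cud, 4, 16), 4))   # 0.1232
```

Once each provider's billing data is normalized into one schema like this, comparing a spot node in one region against a committed-use node in another becomes straightforward arithmetic.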
So: constantly refreshing billing data, dynamically watching when pods or jobs come up and go down, and looking in real time at the costs of the nodes they're actually running on. That's both the beauty and the challenge we face. Things can change so quickly, and oftentimes that's for the better, but it's also a challenge to stay on top of all the change that's happening.

How does the community help assemble that data? I mean, I don't think there's a cost API for every cloud, or maybe I'm wrong.

So we do have billing API integrations for those three cloud providers, like I mentioned: AWS, Azure, and GCP. The community has been instrumental in finding all the edge cases, right? GPUs in this environment versus storage in that environment. It's really this long tail of complexity that makes getting this right so hard, and the ecosystem has been absolutely key to finding all those nooks and crannies to get it just right.

Okay, just finishing that thought on billing. You've got billing APIs in the public cloud, but in the on-premises environment, your mileage may vary, I'm assuming. How does that fit into the equation?

Yeah, you can think about it as bring your own pricing sheet, right? We want to support your environment. That could mean you care about just the price of CPU, memory, storage, GPUs, et cetera. But it could also be that you have some centralized ops team whose cost you want to allocate, or amortize, across all of the tenants in that cluster. So we want to meet you where you are and give you fully custom inputs to tailor this to your environment.

Okay, so we've talked about the project. There's also a company associated with it. Help us understand the relationship, the size of the team, the business strategy there.

Yeah, absolutely.
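Webb's "bring your own pricing sheet" idea, including amortizing a centralized ops team's cost across cluster tenants, might look something like this in practice. The rates, team names, and function names are all hypothetical, for illustration only:

```python
# A custom on-prem rate card (made-up numbers): USD per resource-hour.
PRICING_SHEET = {"cpu_core_hour": 0.03, "ram_gb_hour": 0.004, "gpu_hour": 0.95}

def workload_cost(cpu_cores: float, ram_gb: float, gpus: int, hours: float) -> float:
    """Direct cost of one tenant's workload under the custom pricing sheet."""
    return hours * (cpu_cores * PRICING_SHEET["cpu_core_hour"]
                    + ram_gb * PRICING_SHEET["ram_gb_hour"]
                    + gpus * PRICING_SHEET["gpu_hour"])

def amortize_shared_cost(direct_costs: dict, shared_monthly: float) -> dict:
    """Spread a centralized cost (e.g. a shared ops team) across tenants,
    pro rata by each tenant's share of direct spend."""
    total = sum(direct_costs.values())
    return {team: cost + shared_monthly * (cost / total)
            for team, cost in direct_costs.items()}

direct = {
    "payments": workload_cost(cpu_cores=40, ram_gb=160, gpus=0, hours=730),
    "search": workload_cost(cpu_cores=20, ram_gb=80, gpus=2, hours=730),
}
allocated = amortize_shared_cost(direct, shared_monthly=5000.0)
# Total allocated equals total direct spend plus the full shared cost.
print(round(sum(allocated.values()) - sum(direct.values()), 2))  # 5000.0
```

The pro-rata split is just one policy; even splits or usage-weighted variants drop in the same way, which is the kind of custom-input flexibility Webb is describing.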
We have an open core model, where our commercial product is built on top of this open source library. You can think of it as providing a lot of the UI and enterprise management functionality: things like a multi-cloud view, long-term durable storage, SAML integration, that sort of stuff. We're a small team right now, all engineers, so we're living and breathing this, writing code every day.

Well, we live in a world where, I don't know if we're post-DevOps yet, but there's a lot of DevOps here at this show. We've got many flavors of it: DevOps, DevSecOps. Where I'm going is, is there a DevCostOps? Do developers now have to worry about the cost of what they're doing? Who is paying attention to the cash register at the top of the Kubernetes stack?

Yeah, I think it's very similar to what you just said: all of this is in flux, right? There are so many different models that are working and constantly evolving. What we typically see is someone from the finance org and someone from the DevOps org jointly caring about this picture. We have opinions on how this can work really well, but we also love to let the industry and different enterprises guide us, and meet them where they are. But we think this is going to keep evolving and changing for years to come, just because so much is changing. It's such a big challenge.

I've talked to some large enterprises that assign engineering resources to do this financial engineering. It seems that, number one, the cloud providers should be able to put some pieces in place, and secondly, automation and intelligence across this entire ecosystem should be able to help there. Is that really where your team and your project are focusing? Because I don't want that.
You should be building new apps and helping my business, not sitting there watching the meters and saying, oh wait, I need to go turn some knobs.

So, the first part of what you mentioned is very relevant, and was the kicker that really pushed us over the edge to start this project and this company: we saw teams that were building their own internal solutions or doing all this ad hoc analysis, and, by the way, pretty much every team we talked to was doing it differently. That was our real inspiration to say, okay, we have to do this. We absolutely see an evolution toward more and more automation and intelligence, but you have to remember that cost is not an isolated variable. Cost is very closely linked to reliability, performance, stability, all these things. So you want to be really thoughtful and really careful when you start handing this stuff over to an algorithm, because it can mean performance regressions, it can mean downtime. We absolutely see the industry evolving there, but we see a lot of teams that, in our view, are rightfully cautious before handing over the keys to an algorithm, or a set of algorithms, that's going to dial the lever on the right amount of, say, memory or compute.

I imagine there are also trade-offs between engineering resources and cost, right? I could do it the fast way with one engineer, and it might have one cost implication. Or sure, I could get my cloud cost down, but it might have taken me 10 engineer-months to do it. So it's interesting. Is there a conversation in the community, in the broader sense, about how to do this kind of capacity management and these trade-offs? It's hard in the open source world when there's not a single project you can gather around. How do you have a conversation around cost and engineering trade-offs?

Yeah, I think we're still really early here. I think there's still huge opportunity.
And it's incredibly challenging, even if you just look at the engineering side. There's so much uncertainty in going in and saying, what's it going to take to move us from on-demand to spot, or from one region to another, or one provider to another, that it's really hard to put an expected cost on that and do an appropriate ROI analysis. What we've seen is that a lot of teams are able to easily identify the low-hanging fruit, where there's a very clear ROI. But for these more marginal decisions, we absolutely think there are more frameworks and more tools that can help teams make them well.

All right, I'd love to get your personal viewpoint. You're working for a startup, and you're here in this massive ecosystem. Tell us what that environment is like in the cloud-native world, and any specific thoughts on the event itself are welcome too.

Yeah, so, we're coming from Google, and a lot of our exposure to bigger conferences was things like Google I/O and Google-specific events. Those are amazing and have their own ecosystem and atmosphere, but I've never felt energy like this. I've never seen so many things that are new, so many things changing all at once. It's impossible to get here and not be excited by this stuff, right? A lot of us have ideas about how things will evolve, but I definitely can't claim to have any real conviction about how this broader ecosystem will evolve, and that just adds to the excitement: so many things are improving and evolving all at the same time.

Yeah, do you feel a small company like yours can get attention with everything that's going on here?

Yeah, I mean, what we want to do is be the very best at cost and capacity, and while that touches on many things, it's a really small area.
So our approach is that we're not going to be everything, and while that can be hard at times, we think it's right for a small team. That's my general advice to anybody who comes to this ecosystem: find a real problem and be comfortable not being everything for everybody. Go and solve that problem for a set of users, and do it the best.

All right, to give you the final word here: what should we be looking for from Kubecost over the next year?

Yeah, I think just going deeper and broader in cost and capacity management. That means bringing our tools to more platforms and more users, more intelligence and automation over time, and continuing to improve visibility, to make it easier and easier for teams to make the appropriate trade-offs about where they invest engineering resources and how they optimize costs.

All right, Webb Brown, thanks so much for joining us. We're glad to welcome Kubecost to theCUBE alumni.

Thank you so much.

For John Troyer, I'm Stu Miniman. Check out thecube.net for all the coverage. We've been four years at this event in the U.S., we've also done the European shows, and there's so much more coming. Three days of wall-to-wall coverage. Thanks for watching theCUBE.