From around the globe, it's theCUBE, with digital coverage of AWS re:Invent 2020, sponsored by Intel, AWS, and our community partners.

Hey, it's Keith Townsend, at CTO Advisor on the Twitter, and we have yet another CUBE alum for this AWS re:Invent 2020 virtual coverage. AWS re:Invent 2020, unlike any other, I think it's safe to say, unlike any other virtual event. AWS usually draws 60, 70,000 people in person to every conference, and there's hundreds of thousands of people tuning in to watch coverage, and we're talking to builders. No exception to that is our friend from Densify, the company's co-founder and CTO. Andrew Hillier, welcome back to the show.

Thanks, Keith. It's great to be with you again.

So we're recording this right before it gets cold in Toronto. I hope you're enjoying some of this weather before the cold hits.

Yeah, we're getting the same weather you are right now. It's fantastic. We're ready for the worst, I think, and the shorter days, but we'll get through it.

So for those of you that haven't watched any of the past episodes of theCUBE in which Andrew has appeared: Andrew, can you recap Densify? What do you guys do?

Well, you can think of us as very advanced cost analytics for cloud and containers. And when I say advanced, what I mean is, there's a number of different aspects of cost. There's understanding your bill, there's how to purchase. And we do those, but we also focus heavily on the resources that you're buying and try to change that behavior. So it boils down to a business value of saving a ton of money, but by actually changing what you're using in the cloud as well as providing visibility. So it's, again, a form of cost optimization, but combined with resource optimization.

So cost and resource optimization. We understand this stuff on premises. We understand network, compute, storage, heating, cooling, et cetera. All of that is abstracted from us in the public cloud.
What are the drivers for cost in the public cloud?

Well, I think you directly or indirectly pay for all those things. The funny thing about it is that it happens in a very different way. Everybody's aware, of course, of on demand and being able to get resources when you need them. But the flip side of on demand, the not-so-good side, is it causes what we call micro-purchasing. So when you're buying stuff, if you go and turn on an Amazon cloud instance, you're paying for that instance, you're paying for some storage as well, and implicitly for some networking, a few dollars at a time. And that really creates a new situation at scale, because all of a sudden what was a controlled purchase on-prem becomes a bunch of possibly junior people buying things in a very granular way that adds up to a huge amount of money. So the very thing that makes cloud powerful, the on-demand aspects, the elasticity, also causes a very different form of purchasing behavior, which I think is one of the causes of the cost problem.

So we're about 10, 12 years into this cloud movement, where public cloud has really become mainstream inside of traditional enterprises. What are some of the common themes you've seen when it comes to good cloud management and cost management hygiene across organizations?

Yeah, and hygiene is a great word for that. I think it's evolved. You're right, it's been around; this is nothing new. I mean, we've probably been going to cloud expos for over a decade now, but it's kind of come in waves as far as the business problem. I think the initial problem was more around "I don't understand this bill," because to your point, all those things that you purchased on-prem you're still purchasing in some way, plus a bunch of other services, and it all shows up on this really complicated bill. And so you're trying to figure out, well, who in my organization owns what? And so that was a very early driver, years ago.
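As a rough sketch of the micro-purchasing effect described above, the arithmetic below shows how a few dollars at a time scales up. The rates are invented for illustration; actual AWS pricing varies by region, instance type, and volume class.

```python
# Rough sketch of "micro-purchasing": each resource costs a few dollars
# at a time, but granular purchases across many teams add up fast.
# All rates below are invented for illustration, not real AWS prices.
HOURS_PER_MONTH = 730

hourly_rates = {
    "instance": 0.096,                  # assumed on-demand $/hour
    "storage": 10.0 / HOURS_PER_MONTH,  # assumed $10/month volume, per hour
}

def monthly_cost(num_instances: int) -> float:
    """Monthly cost if each instance carries one storage volume."""
    per_instance_hourly = sum(hourly_rates.values())
    return num_instances * per_instance_hourly * HOURS_PER_MONTH

print(round(monthly_cost(1), 2))    # one instance: a small line item
print(round(monthly_cost(500), 2))  # 500 of them, bought granularly
```

The point is not the specific numbers but the shape of the problem: no single purchase looks significant, so nothing triggers the scrutiny a single large on-prem purchase order would.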
We saw a lot of focus on slicing and dicing the bills, as they call it. And then that led to, well, now I know where my costs are going; can I purchase a little more intelligently? And so that was the next step. And that was an interesting step, because the problem is, the people that care about cost can't always change what's being used, but they can buy discounts: RIs and savings plans. So we then started to see a focus on "I'm going to come up with ways of buying it where I can get a bit of a discount." It's kind of like having a phone bill where I can't stop people from making long distance calls, but I can get on a better phone plan. And that was kind of the second wave. And what we're seeing as the next big wave now is, okay, I've done that; now I should actually change what I'm using, because there's a lot of inefficiency in there. Once I've got a handle on those other problems, I need to hopefully make people not buy giant instances all the time, for example.

So let's talk about that feedback loop: understanding what's driving the cost, and the people consuming those services needing to understand those costs. How does Densify bridge that gap?

Well, again, we have aspects of our product that line up with basically all three of those business problems I mentioned. So there's a cloud cost intelligence module that basically lets you look at the bill in different ways, by different tags, and look for anomalies. We find that very important, to say, well, did something unusual happen in my bill? So there's that aspect that focuses on accountability of what's happening in the cost world. And then one of the strengths of our product is that when we do our analytics, we look at a whole lot of things at once. So we look at the instances and their utilization, and what the catalog is, and the RIs and savings plans, everything all together.
So if you want to purchase more intelligently, that can be very complicated. We see a lot of customers that say, well, we want to buy savings plans, but man, it's difficult to figure out exactly what to do. So we like to think of ourselves as almost an analytics engine with an equation that has a lot of terms in it. There's a lot of detail in what we take into account when we tell you what you should be doing. And that helps you buy more intelligently, and it also helps you consume more intelligently, because they're all interrelated. I don't want to change an instance I'm using if there's an RI on it; that would take you backwards. I don't want to buy RIs for instances that I shouldn't be using; that takes you backwards. So it's all interconnected, and we feel that looking at everything at once is the path to getting the right answer. And having the right answer is the path to having people actually make a change.

So when I interviewed you a few years ago, we talked at a very high level about containers and how containers are changing the way that we consume cloud services. Containers introduced this concept of oversubscription in the public cloud. We couldn't really oversubscribe an m4.large instance back then, but we can now with containers. How are containers in general complicating cloud costing?

So it's interesting, because they do allow overcommit, but not in the same way that a virtual environment does. In a virtual environment, if I say I need two CPUs for job X and I need two CPUs for job Y, I can put them both on a machine that has two CPUs and they'll be overcommitted. So overcommit in a virtual environment is a very well established operation. It lets you get past people asking for too much, effectively. Containers don't quite do that in the same way. When they refer to overcommit, they refer to the fact that you can ask for one CPU but use up to four, and that difference is the overcommit.
But the fact that I'm asking for one CPU is actually a pretty big problem. So let me give an example. If I look at my laptop here, and I've got Outlook and Word and all these things on it, and I had to tell you how many millicores to give each one. Let's say I'm running Zoom. Well, I want Zoom to work well, so I want to give it 4,000 millicores. I want to give it four CPUs, because it uses that when it needs it. But my PowerPoint, I also want to give 4,000, or 2,000, millicores. So I add all these things up based on the actual, more granular requirements, and it might add up to four laptops. But containers don't overcommit the same way: if I asked for those as request values in containers, I actually would use four laptops. It's those request values that are the trick. If I say I need a CPU, I get a CPU. It's not the same as a virtual CPU would be in a virtual environment. So we see that as the cause of a lot of the problem: people quite rationally say, I need these resources for these containers, but because containers are much more granular, I'm asking for a lot of individual resources, and when you add them up, it's a ton of resources. So in almost every container environment we see, there's very low utilization, because everybody, rightfully so, asks for individual resources for each container, but they're the wrong resources, or in aggregate it's not creating the behavior you wanted. So people think containers are going to magically make problems go away, but in fact what happens is, when you start running a lot of them, you end up with a ton of cost, and people are just starting to get to that point now.

Yeah, I can see how that could easily be the case. Inside of a virtual environment, I can easily say my VM needs four vCPUs, and I can do that across a hundred applications, and that really doesn't cost me a lot in the private data center.
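To make the laptop analogy above concrete, here is a small sketch, with made-up request and usage numbers, of how per-container request values aggregate: the scheduler reserves what is requested, regardless of what is actually used.

```python
import math

# Invented per-container CPU requests (millicores), mirroring the
# "give Zoom four CPUs" guess, versus invented observed usage.
requests_mc = {"zoom": 4000, "powerpoint": 2000, "outlook": 1000, "word": 1000}
usage_mc = {"zoom": 300, "powerpoint": 150, "outlook": 100, "word": 80}

NODE_MC = 4000  # one 4-vCPU node: "one laptop"

requested = sum(requests_mc.values())  # millicores reserved by the scheduler
used = sum(usage_mc.values())          # millicores actually consumed

nodes_reserved = math.ceil(requested / NODE_MC)  # nodes the requests force you to buy
nodes_needed = math.ceil(used / NODE_MC)         # nodes real usage would fit on
utilization = used / (nodes_reserved * NODE_MC)  # why utilization looks so low
```

With these numbers, the requests reserve two nodes while the real workload would fit on one, and overall utilization lands in the single digits, which matches the "almost every container environment we see" observation in the interview.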
Tools like VMware DRS kind of fix that for me on the back end. It's magical. In the public cloud, if I ask for four CPUs, I get four CPUs, and I'm going to pay for four CPUs even if I don't utilize them. There's no auto-balancing. So how does Densify help actually solve that problem?

Well, there are multiple aspects of that problem. One of the big ones is that people don't necessarily ask for the right thing in the first place. So, you know, I gave the example of needing to give Zoom 4,000 millicores. That's probably not true at all. If I analyze what it's doing, maybe for a second it uses that, but most of the time it's not using nearly those resources. So the first step is to analyze the container behavior patterns and say, well, those numbers should be different. And so, for example, one thing we do is say, if a developer is using a Terraform template to stand up containers, then instead of putting a hard number, 1,000 millicores or 400 millicores, in your template, just put in a variable that references our analytics. Let the analytics figure out what that number should be. And so it's a very elegant solution: the machine learning will actually figure out what resources that container needs, because humans are not very good at it, especially when there are tens of thousands of containers. So that's one of the big things, to optimize the container requests. And then once you've done that, the nodes that you're running on can be optimized, because now they start to look different. Maybe you don't need as much memory or as much CPU. So again, it's all interrelated, but it's a methodical set of steps that's based on analytics. And, you know, people are too busy to figure this out. They can't figure it out for thousands of things. Again, if I asked you, on your laptop, how many millicores do you need to give PowerPoint?
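A minimal sketch of that analysis step: deriving a request value from observed usage rather than a human guess. The 95th-percentile-plus-headroom heuristic here is just one common illustration of the idea, not Densify's actual algorithm.

```python
def recommend_request_mc(samples_mc, percentile=0.95, headroom=1.2):
    """Recommend a CPU request (millicores) from observed usage samples.

    Takes the given percentile of observed usage and adds headroom,
    so rare spikes don't inflate the steady-state reservation.
    """
    ordered = sorted(samples_mc)
    idx = min(int(len(ordered) * percentile), len(ordered) - 1)
    return int(ordered[idx] * headroom)

# A workload that idles around 120 millicores with rare spikes to 4000:
samples = [120] * 97 + [4000] * 3
print(recommend_request_mc(samples))  # far below the 4000 a human might request
```

A human sizing for the spike would ask for 4,000 millicores; sizing from the observed distribution yields a request around 144, which is the gap between reserved and used capacity the interview is describing.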
You don't know. But in containers, you have to know. So we're saying, let the machine figure it out.

When you ask people how many millicores they need to give Zoom, the answer is "yes."

Yeah, exactly. At the end of the day, you need some way to quantify that.

So you guys are doing two things. One, you're quantifying: you're measuring how much this application typically takes. And then, when I go to provision it with a tool like Terraform, instead of me answering the question, the answer is "go ask Densify," and Densify will tell you, and then I'll optimize my environment. So I get both ends of that equation, if I'm summarizing correctly.

Absolutely. And that last part is extremely important, because in a legacy environment, like a virtual environment, I can call an API and change the size of a VM, and it will stay that way. So that's a viable automation strategy for those types of environments. In the cloud, when you're using Terraform, or in containers, they will go right back to what's in the Terraform template. That's one of the powerful things about Terraform: it always matches what's in the code. So I can't go and change it in the cloud; it'll just go back to whatever's in the Terraform template the next time it's provisioned. So we have to go upstream. You have to actually do it at the source. When you're provisioning applications, the actual resource specifications should be coming through at that point; you don't want to change them after the fact. You update the Terraform and redeploy with a new value. That's the way to do automation in a container environment. You can't do it like you did in a VMware environment, because it won't stick. It just gets undone the next time the DevOps pipeline triggers. So it's a big opportunity for a whole new generation of automation. We call it CI/CD/CO.
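The "fix it at the source" pattern can be sketched like this. The manifest snippet and the lookup table are hypothetical, standing in for a real Terraform variable wired to the analytics engine.

```python
from string import Template

# Hypothetical deployment snippet: the CPU request is a variable,
# not a hard-coded number.
manifest = Template("resources: { requests: { cpu: ${cpu_mc}m } }")

def analytics_lookup(container: str) -> int:
    """Stand-in for querying the analytics engine for a recommendation."""
    recommendations = {"web-frontend": 250, "batch-worker": 900}  # invented values
    return recommendations[container]

# Each pipeline run re-renders the template with the current recommendation,
# so the optimization survives redeployment instead of being undone.
rendered = manifest.substitute(cpu_mc=analytics_lookup("web-frontend"))
print(rendered)
```

Because the value is injected at provisioning time, a redeploy picks up the latest recommendation automatically; an after-the-fact API change would be overwritten on the next pipeline run, which is exactly the failure mode described above.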
It's continuous integration, continuous delivery, continuous optimization. It's just part of the fabric of the way you deploy apps. And it's a much more elegant way to do it.

So you hit a few trigger terms: DevOps-y, I'm saying "DevOps-y," CI/CD, and continuous optimization. What is the typical profile of a Densify customer?

Well, usually they're a mix of a bunch of different technologies. I don't want to make it sound like you have to be a DevOps-y shop to benefit from this. Most of our customers have some DevOps teams, but they also have a lot of legacy workloads; they have virtual environments; they have cloud environments. It's rarely a hundred percent any one of these things. Usually it's a mix, where there might be some newer, born-in-the-cloud apps being deployed, and this whole CI/CD/CO concept really makes sense for them. They might also have another few thousand cloud instances that they stood up, not as part of a DevOps pipeline, but just to run apps, or maybe even migrated from on-prem. So it's a pretty big mix. We see almost every company has a mix. Unless you just started a company yesterday, you're going to have a mix of some EC2 instances that are kind of standalone and static, maybe some auto scaling groups, or containers running in scaling groups. The things I'm describing do not require DevOps. The notion of optimizing cloud instances by changing the marching orders when they're provisioned, not after the fact, applies to anybody using the cloud. And our customers tend to be a mix: some with very new-school processes, born in the cloud, and some more legacy applications that look a little more like an on-prem environment would, where they're not turning on and off dynamically, they're just running transactional workloads.
So let's talk about the kinds of industries, because you hit on a key point. We kind of associate a certain type of company with born-in-the-cloud, et cetera. What types of organizations or industries are we seeing Densify deployed in?

So we don't really have a specific market vertical that we focus on; we have a wide variety. We have a lot of customers in financial services, banks, insurance companies, and I think that's because those are very large, complicated environments where analytics really pay dividends. If you have a lot of business services doing different things at different criticality levels, the things I'm describing are very important. But we also have logistics companies, software companies. So again, complexity plays a part. I think elasticity plays a part, in organizations that want to be able to make use of the cloud in a smart way, where they're more elastic and obviously drive cost down. So we have customers across all different types of industries: manufacturing, pharmaceutical. It's a broad range. We have partners as well, like IBM, that use our product with their customers. So there's no one type of company that we focus on, certainly. But we do see that environments that are complicated, or mission critical, or that really want to run in a more elastic way, those tend to be very good customers for us.

Well, CUBE alum Andrew Hillier, thank you for joining us on theCUBE's coverage of AWS re:Invent 2020 virtual. Say bye to, you know, a couple of hundred thousand of your closest friends.

Thanks for having me.

That concludes our interview with Densify. We really appreciate the folks at Densify joining us again for this conversation around workload analytics and management. To find out more, and for more great CUBE coverage, visit us on the web at SiliconANGLE TV. Talk to you on the next episode of theCUBE.