Hello, I'm Rob Hirschfeld. I'm a member of the OpenStack Foundation Board and CEO and co-founder of RackN, a company that specializes in scale physical deployment automation.

Thank you for being on, Rob.

Thanks again.

Obviously, you're well known in the community for the work you've been doing around DefCore, but lately you've also been talking about something called OpenStack deployment fidelity. What is that, and why is it important?

Deployment fidelity has been really exciting. It's the challenge I'm trying to take up post-DefCore; I feel like we've made a lot of progress on that front. From there you start thinking it through: we know what the core is, and we know what we need to install, but how do we make it so those installs really work? Our experience has been that developers doing development work do things very differently than people who do testing. Testers do things differently from somebody installing a proof of concept, and that in turn is incredibly different from a production deployment. Those are what we call fidelity gaps. Every time there's a transition in how deployment work is done, it creates one of these gaps, and that slows things down, makes everything harder, and creates communication issues back and forth throughout the community.

The classic example is a developer working on DevStack. I think it was DreamHost that made some shirts: "It worked in DevStack." Classic. We want developers to be productive and fast, but if they work on something that only works on their desktop and then hand it over to somebody who installs and uses the code a different way, those gaps cause a lot of expensive rework and miscommunication, and it really slows down the overall process. That's been a focus area for me.

Great. So talk about the challenges to deployment fidelity.
Why isn't it something that's typical in a DevStack deployment?

That's a question I've been asking for a long time, and I appreciate the chance to think it through. What happens is that a developer who wants to work really quickly wants a very small environment. They don't want to deal with a lot of networking issues; they don't want to spend time, and time is really the biggest challenge, dealing with all of the other infrastructure you need for production. When you get to the end of the journey, at production scale, you've got DevOps controls, hardware, networking topologies, and rules and guidelines where somebody says you have to name the servers this way or assign the IPs that way. You have a lot of rules, and developers aren't inclined to incorporate those rules into their workflows. So, at the expense of fidelity, we take all of those rules out going in. That in turn translates into: my DevOps scripts that worked in my test lab don't work in my POC lab, and those don't work in my production lab. We've seen this in the community, where people do a POC for OpenStack, then go to do production and find it's much harder.

So how do we address that gap you're talking about, and where does DefCore play a part in it?

We've been working to address that at my company, RackN, with a project that has been in the community for a long time. It's now called Digital Rebar; in the past it was Crowbar, and this is actually the third generation of where Crowbar has come. The way we see addressing it is a couple of ways. One is we add abstractions that let you say: the networking for my physical environment and my desktop environment are totally different, but they're similar in that we know we need a public network and a storage network.
So instead of hard-wiring in eth0 or en6 or whatever that interface is, we treat the DevOps tooling as much more composable and functional, and then you inject: in this environment, use this interface; in that environment, use that one. You take that same approach across your whole deployment, which lets you describe deployments generically and then apply that logic forward. And you can use that with Chef, Puppet, Salt, Ansible, homemade scripts; it doesn't matter. The idea here is that once you've built the environment in a consistent way and you have these abstractions, those scripts can take advantage of whatever the environment is when they run. That's really how we've been trying to build these things up.

The relationship with DefCore is that we really want to say: here is the core, and it has to be stable, reliable, and repeatable. One of the big lessons from DefCore is that implementation matters. People say, "Oh, it's the APIs, the APIs, the APIs," but we've known from the very beginning that it's the APIs and the implementation. If you don't have a consistent implementation, you don't have a way for people to share how they built things and learn from each other. Then you see exactly what we've seen in the community, which is, I don't want to use the phrase, a thousand snowflakes: the little differences in how deployments were done make the systems incompatible. We have to provide a consistent way to say: I can operate, you can operate, and we can actually share some lessons learned. So the next step for DefCore is to test everybody and get them into compliance, and then they're looking at each other saying, "Well, I built my OpenStack my way, you built yours your way; how do I reconcile the building process?"
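The composable networking abstraction Rob describes can be sketched roughly as follows. This is a minimal illustration, not Digital Rebar's actual API: the environment names, interface names, and helper function are all hypothetical. The point is that deployment logic binds services to logical network roles ("public", "storage"), and each environment injects its own mapping from roles to physical interfaces.

```python
# Per-environment mapping of logical network roles to concrete interfaces.
# These names are illustrative assumptions, not Digital Rebar's real schema.
ENVIRONMENTS = {
    "desktop":    {"public": "eth0",  "storage": "eth0"},   # everything on one NIC
    "test_lab":   {"public": "eth0",  "storage": "eth1"},
    "production": {"public": "bond0", "storage": "bond1"},  # bonded NICs, VLANs, etc.
}

# Generic deployment description: services bind to roles, never to NICs.
TEMPLATE = {"api_endpoint": "public", "ceph_cluster": "storage"}

def render_config(env_name, template):
    """Resolve logical roles into a concrete per-environment config."""
    nics = ENVIRONMENTS[env_name]
    return {service: nics[role] for service, role in template.items()}

print(render_config("desktop", TEMPLATE))
print(render_config("production", TEMPLATE))
```

Because the template never names eth0 directly, the same Chef, Puppet, Salt, or Ansible logic layered on top of it can run unchanged from a developer's desktop to a production rack; only the injected mapping differs.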
And it's a significant challenge, right? A lot of vendors have a lot of installs, and we need ways to start rationalizing them, letting people share operational tips and build common infrastructure. My expectation is that the reason we did DefCore is that we want to accelerate adoption and really grow OpenStack, and that will happen to the extent that we make these installs consistent, repeatable, reliable, and interoperable.

Thank you, Rob. That was great. I appreciate you being on.

Appreciate the time. Thank you very much.