Good afternoon, nerd fam, and welcome back to beautiful Paris, France. We're here at KubeCon + CloudNativeCon, CNCF's biggest European event, and actually the biggest KubeCon in history, as a matter of fact. My name is Savannah Peterson, joined by my fabulous co-host, Rob Strechay. Rob, how are you feeling this afternoon? Awesome. Got some tea in, got some more caffeine going. The vibe is awesome. People eating lunch and, you know, having some great- You're gonna spill the tea? No, no, no. Some great conversations over lunch. It was good. It was good. Speaking of great conversations, very excited to have our first afternoon guest back on the show. Scott, welcome back. Thank you, good to be back. Good to see you both. Yeah, it's wonderful to have you. How's the show going for you? Oh, it's fantastic. Biggest show ever, right? Like, everyone is so excited that tech shows are back. That's why people come into the booth; the vibe is positive. So, yeah, it's fantastic. It really is. And it's nice to be back in a state like the before times, so to speak. To see the enthusiasm back up here; we've all missed each other. That's right, that's right. The nerds love to get together. It's positive, right? You're talking about tech, it's optimistic, it couldn't be better. That's actually a really good point. The whole room is extremely optimistic. And when it comes to AI, which we're going to talk about later in the interview along with a bunch of other stuff, this is not a room of doomers. This is very much people who believe. We're looking forward, it's positive, it's going to happen. Exactly how, when, where, why? It doesn't matter, because we're going to figure it out. Right? That's great. Yeah, and we're going to figure it out together. There you go. And hopefully with increased productivity. Yes. Tell us about Docker Build Cloud. Yes, yes, thank you. So we just went GA about five weeks or so ago. Congratulations.
Thank you, thank you. And it is exactly that. It is giving time back to developers. At the high end, we see it speeding up builds by 39 times. Casually. Right? Yeah, just a little. Just a little, right? 39 times is crazy. Put that in context. What used to take an hour now takes about a minute and a half. Imagine an hour back in your day. If you had an hour back in your day, would you be excited by that? Yes. So devs walk into the booth and say, time back? Oh, tell me more, right? Very straightforward, very exciting conversation. How do you do that? How do you get that time back? So up until today, Docker build has been local, constrained by the local CPU, disk, and memory. What we do is, behind the scenes, Docker build offloads the build to the cloud. So instead of the local CPU and memory, you can use a much bigger node in the cloud with more memory and more CPU. By doing that, you put more horsepower behind the build. But also, if you're on a team, we're caching the builds for that team. A team of 10 is generally using the same sets of images, so why should each one build them again and again and again? They don't; we cache it, and that speeds it up as well. So those two factors, bigger machines and a shared cache, accelerate builds up to 39 times. Impressive. Especially when you have to build and test and then keep going through that loop, that time back helps them get through more iterations and be not only more productive, but more innovative, and spend more time innovating versus building. 100%. I'm sure you've seen the stat; we were talking with one of the industry analysts. Of a developer's day, only about 37% is spent actually writing code, being creative, because they're spending the other 63% waiting for builds or waiting for tests or whatnot.
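The "offload to the cloud" flow described here maps to a couple of CLI steps. A minimal sketch, assuming a Docker Build Cloud subscription; the org name `acme` and builder name `default-builder` are placeholders:

```shell
# One-time setup: create a cloud builder for your organization
docker buildx create --driver cloud acme/default-builder

# Route an otherwise ordinary build through the cloud builder
# instead of the local engine
docker build --builder cloud-acme-default-builder -t acme/app:latest .
```

Teammates pointing at the same cloud builder share its build cache, which is where the savings on repeated layers come from.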
So we're like, okay, let's take that off their plate and give them more time back to create, to be innovative. In fact, we've seen something very similar, where a third of their time is basically fixing things, a third of their time is actually building, and a third of their time is innovating. And I think that's where people are leaning in, especially in this community, where people are like, hey, I also want to contribute to the community, but I have my day job as well, for Fidelity or Discover or Deutsche Bank, who was on stage, or Goldman. And all those. So yeah, I don't know why I went down the financials. I was going to say, you just went pure fintech. But they're all here. They're all here. They're excited about what they're doing. They're excited about building. So it's great to see. It is really great to see. And speaking of productivity and the developer experience, tell me a little bit about your friends at Red Hat and Testcontainers and your partnership there. Yeah, so this is another verb: test. We talked about the build verb; the test verb is a similar sort of thing, where the developer has a great local test experience. But if your tests are getting kind of big, or if you're doing multi-arch, building on a Mac M1 but your production is x86, then again, we bring the cloud to the table. So you can now burst your tests out to the cloud. Fantastic, right? Now, for SMBs, small and medium-sized startups, a multi-tenant cloud is fantastic. Larger companies want more control over where their workloads run. Enter our friends at Red Hat OpenShift, where you now have the option of doing tests locally and then bursting out to your Red Hat OpenShift cluster to run your test runners there. So again, offering devs that flexibility to speed things up and be more productive. Yeah, and they're deployed all over the place, in every cloud.
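Mechanically, bursting tests out is possible because Testcontainers discovers its container runtime through the standard Docker endpoint configuration; the products mentioned here (Testcontainers Cloud, the OpenShift runners) flip that switch for you, but the underlying idea can be sketched by hand. The hostname below is hypothetical, and `mvn test` stands in for whatever runs your suite:

```shell
# Tests normally target the local Docker engine. Repointing the Docker
# endpoint makes the same tests start their containers on a remote runner.
export DOCKER_HOST=tcp://test-runner.example.internal:2376
export DOCKER_TLS_VERIFY=1

# The test suite itself is unchanged; only the endpoint moved.
mvn test
```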
So I think, again, it gives them that choice to do it on-prem, in colo, or out in the cloud. And I would assume that helps them from an ROI perspective as well, because you're not maintaining all of that gear all the time; that burstability aspect, like a cloud. 100%, it helps them from an ROI as well as a developer experience standpoint. With the cloud bursting I'm describing here, the developer doesn't have to touch a thing. They just flip a switch. They don't have to set up anything. They don't have to provision anything. They don't have to worry about security or configuration. That's huge. They flip a switch. That's another time gain. I just want to stay in flow state. I want to keep creating. I want to run my tests quickly. Oh, great, my administrator set up this Red Hat OpenShift cluster; I'll just burst out there without doing a thing other than flipping a switch. Makes it super easy. I mean, that's got to be super attractive to the people who are floating around here. I was going to say, I feel like this is probably an easy pitch. Developers love it. Platform engineering loves it. The financial services companies love it because they've got Red Hat deployed. So it's a very, very good conversation. I mean, it's all tech that they know already. But to your point, and we've had a couple of conversations already around platform engineering being that center of excellence, kind of protecting the devs from the infrastructure, if you can make it even simpler for them. That's right. That's even better from a DevOps perspective. 100%. 100%. Make it simpler, safe, secure, fast. Yeah. Right? And on your side, you're handling all of the security, all of the encryption in transit, all the pipelines and things of that nature. Correct. And so they only have to go in there, and hey, maybe they have an account with Red Hat, and they just set it up and it launches. That's right.
Today, for Build Cloud as an example, they just flip a switch on the command line. So they flip a switch to say, build it locally, or they flip a switch to say, no, build it up in the cloud, or build it over there on Red Hat OpenShift. So again, developer productivity and developer flow state in a secure, trusted environment. And that's simplicity. Everybody just wants the easy button, and we don't need to reinvent the wheel every time we want to deploy something. That's right. So let's move that 37% up to 50% or 60% of their day spent building, creating, making, versus all this other stuff that isn't adding value to their business. And I think it also helps them from a workflow perspective as well, right? Without question. And probably from a security perspective, it's nice to have those guardrails so that companies can feel more secure about where their builds are happening. I mean, there's a lot of espionage that goes on and things of that nature as well. That's right. So having that trusted runner environment, that trusted builder environment, is just additive for anyone. Yeah. Oh, that's awesome. All right, so I've got to ask, because it is a bit of the conversation here: AI. Meme of the moment. Yes. Yes, of course. I'd be surprised if you didn't. Yeah. What's Docker doing with AI? Yeah. Two really exciting things, both customer driven, of course. One is, our customers have said, wow, GenAI sounds exciting, I hear all these great things, how do I get started? Going back to the easy button, going back to simplifying, right? There's all this tech, and how do I assemble these pieces? And really importantly, especially for developers at large organizations: my company doesn't want me putting company data into the cloud, into a public API. So together with partners Neo4j, LangChain, and Ollama, we put together a GenAI stack, all containerized, safe, signed, secure containers they pull from Hub. And with one command, docker compose up, they stand up the stack.
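That one-command experience looks roughly like this in practice; a minimal sketch, assuming the stack is the one Docker publishes on GitHub as docker/genai-stack:

```shell
# Grab the containerized GenAI stack (Neo4j, LangChain, Ollama, sample apps)
git clone https://github.com/docker/genai-stack
cd genai-stack

# One command stands up all the services locally,
# keeping company data off public APIs
docker compose up
```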
We've got four applications in the box, so the dev can just focus on building again and not have to worry about this LLM or that vector database or this framework for managing the context windows. And we've seen the project just take off on GitHub in terms of stars and forks and downloads. So it's a way to quickly get up and running putting GenAI into your application. That's one. The second is, as you might expect, Docker has a lot of anonymized data from all the different developer consumption, from Docker Hub, from Docker Desktop, from Docker build. We use that data to tune an off-the-shelf LLM, so that when a developer gets stuck, and they might not know the exact Docker command or the command line, we can help them take that step. They don't have to go search documentation or dig through Stack Overflow or Google. Love those partners, but we want to keep the developer in flow state, keep them working, so the AI can meet them where they are, give them a little nudge and some automation so they can complete the task. So those are two ways. Do you see those coming closer together, the LLM helping them beyond that, from an ops perspective, longer term, getting more and more involved? We see it as a collection of agents going forward, and each agent will do one thing well. So to your point, there'll be an ops agent or a security ops agent that says, hey, let me just check that you conform with all our compliance policies, and not just check, let me suggest to you how to correct things that are out of compliance. So again, automate, and help the dev make smart decisions early, in the flow state.
In fact, we're already seeing this with Docker Scout, where Docker Scout helps developers see the state of their images, because they might not have visibility into the dependencies, the CVEs and whatnot, and then recommends, in one click, here's how you comply with your company's policies. Just make it easy for them to take that step. You'll see AI do more and more of that as well. I think that's where it should sit: doing the things that are outside of your purview and making connections to things that maybe you can't see. That's right. It's providing that context, and taking a page from manufacturing, if you will. We know from Japanese manufacturing the concept of the andon cord on the manufacturing line, right? The worker can stop the line, because they have figured out over decades that if you fix it on the line, it costs a dollar. If it gets to inventory, it costs 10x, $10. If it gets on the ship to the United States, it costs $100. If they have to recall it from the customer, it's a million dollars. So the cost to fix a thing grows by orders of magnitude the further it gets from the point of creation, and it's the same thing in software. Help the dev make smart choices in that inner loop, so they don't have to wait 30 minutes and catch it in CI, they don't have to wait several days and catch it in production, and they don't have to wait until, oh my God, a customer gets impacted by it. So that's it: it's providing the context so they can make smart decisions while they're in the flow state, right then and there. So here's what we've been seeing in our data; we have a partner at ETR, we gather data with them, and what we've been seeing is that somewhere near 30 to 40% of AI is actually being run on-premises, and being developed on-premises in particular; it's an even higher percentage being developed. Are you seeing that become a really big use case for people coming to Docker? Yes, yes, we see it in a couple of ways.
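The Docker Scout visibility described here is also exposed on the command line. A minimal sketch; the image name `acme/app:latest` is a placeholder:

```shell
# Summarize the health of an image: base image, vulnerabilities, policy status
docker scout quickview acme/app:latest

# List the CVEs found in the image's dependencies
docker scout cves acme/app:latest

# Suggest base-image updates that would remediate known issues
docker scout recommendations acme/app:latest
```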
One is, we see it in this GenAI stack, which is more recent, about the last six months or so. But importantly, we have hundreds of AI tools up on Hub, PyTorch and TensorFlow and all that, and you can see the consumption of those spiking as they're pulled into private IP spaces. And then the third, which is really cool, is that you see the Docker format being used to package up LLMs to share with each other. So they're not using hosted LLMs; they're actually packaging up Llama or any open source model, training it inside the Docker container, and then distributing it to their colleagues through Docker Hub. So Docker Hub is now becoming this wealth of models, and 10 years ago we had no idea that that was going to be the case. What a cool evolution. Very cool, very cool. Yeah, how does that feel for you, to see all that come together? It's, I think, the reason why so many of us are in this industry: you have powerful tech, and you have no idea where exactly it's going to go. Yeah, exactly. But the community can take it to places that are just mind-blowingly cool. Mind-blowingly cool. Docker, humbly, is 11 years old this month. Wow. 11 years ago, Solomon, the founder of Docker, walked on stage at PyCon 2013, revealed Docker to the world, and here we are 11 years later, in our second decade. It just feels fantastic. What's going to come in the next decade for Docker, then? Oh, there you go, right? So clearly GenAI, AI in general, is going to be huge, and we do see that changing the world of developers, because it's no longer just about code; it's about data and models and code. And so the workflows, the tools, the automation, how all of that works together, there are a lot of things to figure out there. We're right in the early stages as an industry, figuring out what those flows are, what the tools are, what the automation is, what the safety guardrails are that you've got to put around that, right? So that's going to be super cool.
Then, related to that, you'll also see us continue to focus on the inner loop and how to help developers go faster, shipping innovation by making that inner loop go faster, so that they spend, again, not 37% of their time but 90% of their time building. There's more friction we can take out of the inner loop. So those are the two areas for the next decade: GenAI and inner-loop acceleration. Love it. Speaking of acceleration, you have a race car in your booth. Oh, yes we do. We have a simulated race car in our booth, but we're really excited to partner with McGuinness, who specializes in the IT community, both vendors and buyers and participants. So you'll see Docker on the track this season. We're very excited by that. I know, we're going to have to go to a race. Yes, they're really great. They're a lot of fun. I've been to one before and it was a fantastic time. Fantastic, yes. They're not going to let me drive, but that's probably safe, safe for all concerned. Yes. I don't know, I feel like as a sponsor you should be able to, right? You should definitely. Come on. Can I get behind the wheel for at least a couple of laps? Just one lap, just one lap. Yeah, it's just a lap. No, I'm rooting for it. Well, hopefully, maybe we just changed their minds. They're watching theCUBE; hopefully we just changed their minds. Do you hear us, McGuinness? Yeah, yeah, this is all for you, folks. We would also like to drive, just for the record. Scott, you've obviously been on the show many times. We love having you. What do you hope you can say on the show next time we have you that you can't say today? Say whatever you were going to say. You would have thought about that one, no? Look, I think there's a huge opportunity for GenAI, and Docker has a huge role to play.
So this time next year, we're going to be sharing those details with you: how Docker is helping devs empower their apps with GenAI, and how we're using GenAI ourselves to make developers' lives even better, even more productive, even more in flow state. So that's what we're going to talk about next year. Fantastic, I love it. Scott Johnston, thank you so much for joining us here on theCUBE. Rob, thank you for your fantastic insights, as always. And thank all of you for tuning in live from home, or wherever you happen to be spending this fabulous Thursday. Here in Paris, France, at KubeCon + CloudNativeCon, CNCF's largest European event, my name is Savannah Peterson. You're watching theCUBE, the leading source for enterprise tech news.