Welcome back, everyone, to theCUBE's live coverage, day two. I'm John Furrier, here with Rob Strechay. We're analyzing all the data, asking all the right questions about open source changing the game with AI, security, and cloud native technologies. The Linux Foundation's annual Open Source Summit is a premier conference where the brightest minds get together. It's not a huge event in terms of a KubeCon, but it's the right amount of people: the biggest names in open source, charting the future, setting the agenda.

Chris Jones is here, Business Development Manager at Platform9. Chris, great to see you again. We had a chat in our studios in Palo Alto a few months ago. Great to see you.

Yeah, thank you. Great to be back.

Platform9 has been doing a lot of work providing managed services for clients moving to Kubernetes. We've had many conversations about that, but that's just one example of what's being done. A lot of platformization, or platform engineering, is happening. Big topic. And AI and ML, again, means provisioning large infrastructure.

It is. It is.

What do you guys see right now? What's the big focus? Take a minute to explain what your focus is right now at Platform9.

So this year there are really three areas we're focusing on, right? One is data center modernization and application modernization: use what you've got today and do more with it. Maybe move onto something like KubeVirt to get onto a newer virtualization stack that's converged with Kubernetes. That's one area we're focusing on heavily. Another is retail edge, leveraging orchestration capabilities to do bare metal management in any location, but particularly in retail, where modernizing that in-store experience means transforming the compute layer that's there. So let's make that cloud native, make it simple, reduce the truck roll costs. The third area is AI and ML, and that's an area I've been spending a lot of time on. It's an area where we saw users coming to us back in 2020 saying, hey, this seems like the next thing for orchestration of compute, batch computing, running training jobs as well as serving inference. And this conference has really shown that Kubernetes will become the default foundational layer for running training and for serving models.

What's exciting about the AI/ML piece? Obviously we've been talking about the impact of open source; mainstream enterprises are adopting it, people are kicking the tires. What do you see?

I think it's choice and variety that's happening in the AI and ML space right now, especially in the open source world. You've got Kubeflow, Airflow, you've got things like Flyte that came out of Lyft. There are options there. And sitting through the last session, which was presented by IKEA, they pulled up a slide that showed their entire ML stack: this is what they're running in public cloud, this is what they're running in data centers, and it was all 100% open source. They've got a team that's operationalizing it. The foundation is obviously Kubernetes; there are things like Argo CD in there, and Knative. And on top of that there was MLflow and a few other tools that, personally, were new to me. They're looking at model drift platforms and everything, all open source. And that, I mean, that's incredibly exciting.
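As an illustration of the kind of building block a stack like that sits on, here's a minimal MLflow tracking run. This is a hypothetical sketch, not IKEA's actual setup: the experiment name, parameters, and metric are invented, and with no tracking server configured, MLflow simply writes to a local ./mlruns directory.

```python
# Minimal, hypothetical MLflow experiment-tracking run.
# All names and numbers are illustrative placeholders.
import mlflow

# mlflow.set_tracking_uri("http://mlflow.internal.example:5000")  # if you run a server
mlflow.set_experiment("demand-forecast")  # hypothetical experiment name

with mlflow.start_run():
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("epochs", 20)
    # ... model training would happen here ...
    mlflow.log_metric("val_rmse", 0.42)  # placeholder validation metric
```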
I want to catch on to something you said there, because he and I had a little disagreement when we were wrapping up yesterday. I think the word repatriation from cloud gets thrown around, and I think it's more right-sizing, right-sizing the stack. Coming from a data background and some open source, looking at ML and AI, it's about data, massive amounts of data, and a lot of times going up to the cloud can be super expensive for that. What are you seeing? You just talked about IKEA doing it in both places, which to me makes total sense. But what are you guys seeing?

We're seeing an availability problem. I need a GPU; where do I get one? I'll go to the public cloud, it's going to be there. And it's not. So businesses are being impeded by the availability of resources. Now, you could say repatriation; I think a lot of modern companies might be saying, can I do this elsewhere? Is cloud my only option? It isn't, but then they think, oh my God, I need to go back into server mode. Do I have the skills to do this? How do I even avoid using a spreadsheet to let my data science team run workloads? That's where the MLOps thing comes in. But I think a lot of organizations might be fearful that there's a lot they need to do, right? IKEA's done basically an MVP, and they're like, now we have to start scaling this. They've done a lot of that hard work. So it is achievable. And the thing I've been wondering is, should you do it in the data center? What's the ROI there? Ask IKEA; they're going to say it's fairly high. You can go to Dell, HP, Supermicro, you can get a server with a bunch of GPUs in it, you can put that in a data center, and you can do it today.

Yeah. And that's what we're beginning to see. And I think one of the things is that people are getting subscription fatigue to a certain extent as well. They're looking at it, especially in the kind of economy we have right at the moment, where having something you can write down that's CapEx-oriented seems... Is that also some of the stuff that you're seeing?

Some of it's coming from the CFOs and the CIOs saying, hey, we have the skills. Maybe we're a little rusty at them, but we have the skills.

That's a hundred percent. 37signals, the Basecamp and HEY guys. Yes. They've been pretty vocal recently, saying, hey, we've...

That's total bullshit. I'm telling you, I call bullshit on that, I've got to say. First of all, I had a whole repatriation rant. And I love that guy, by the way; he's awesome. However, he's a little dogmatic on this one. So my message is this. The people who are reframing repatriation are trying to make it like cloud's failed, because they want a big... no, it's not all back to the data center. They're not repatriating; they're refactoring for cloud operations. That means they're moving stuff over to set the footprint for distributed computing. In some cases it makes sense for a company not to have anything in the cloud if it can do things on-site better. That's not repatriation. It's not a trend; it's not a thing. It's a business decision, but it's not actually happening if you look at the numbers. Dave Vellante's got the numbers. Charles Fitzgerald's got the numbers, Fitzy. It's complete BS. So that whole...
And by the way, the data centers are getting out of the managed service business because they want to get bought by Amazon and Google and Azure. So if we're going to pick the scabs of the data...

But I think you're actually saying the same thing we're saying, which is that it's the right place for the right workload at the right time. And what we've been talking about is the complexity that goes into platform engineering, and having a common stack across those is really key.

I mean, there are people out there that are hardcore; call them the repatriates. The repatriates, that's what Fitzy calls them. They just want to see... like the mainframe huggers, they want their old way back. And it's never going to happen. Cloud has won. Cloud operations is going to win, and the edge is next. That's why we're here; that's what open source is driving toward. So I don't see that as repatriating. Refactoring is a different story. Right-sizing, refactoring, cost optimization is natural, and you need to do it with the right tooling.

I mean, 37signals has created MRSK, I think that's how you say it. What is it, like Mesos or Docker Swarm 2.0? It's simpler. It works really well for their particular way of running their containerized applications. Personally, I'd say Kubernetes is the right approach; if you don't want to run it yourself, just have a conversation with a vendor. They very publicly said, well, we went and talked to Rancher and it didn't go very well. Well, maybe you were talking to someone that was selling in a very old-fashioned manner. Have an upfront, honest conversation with a vendor and ask, okay, can we run cloud ourselves in a more cost-efficient way, and find a data center provider that can provide everything for you?

Actually, I want to bring this up, because you were talking before we came on camera about what you guys are doing at Platform9 with AI and ML. We had one person on theCUBE come up, and one approach we talked about is to keep everything on-premises in a data center for the LLM, because you can actually over-provision and leverage the hardware better until you get a handle on it. I forget where that was.

I don't remember either.

Maybe that's one approach. That's plausible; I can see that. Amazon's saying, hey, no, no, use us. What do you think about that? What's your reaction? Because it's not yet known, but you can, if you have the hardware.

If you have the hardware and the data's on-prem, then buy some GPUs and do it on-premises. You're going to get an ROI versus renting those GPUs in the cloud in somewhere between three to six months. And I did it: go to Dell, spec out a server. I think it's somewhere between $360,000 and $380,000, one upfront cost for five years of support, list price. You look at the equivalent required hardware and compute capacity in any of the cloud providers, and the math is simple. If that's where your data is, if you've got petabytes you'd need to move out, the equation's going to change pretty quickly.

I think the one thing that's going to be interesting, and was brought up earlier today, which was kind of eye-opening to me and I think to you as well, was the power consumption aspect of that and how you control it. I almost think you could control the power consumption of your LLM on-premises better than you could in the cloud.
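Before getting to the power point, a quick aside to make that buy-versus-rent math concrete. Here is a back-of-the-envelope break-even calculator; every number in it is an assumption for illustration. The server cost is the list-price range quoted above, and the cloud rate is a placeholder you'd replace with a real quote for equivalent GPU capacity.

```python
# Back-of-the-envelope break-even calculator for the buy-vs-rent GPU math above.
# Every number is an assumption; substitute your own quotes.

server_cost = 370_000        # midpoint of the ~$360k-$380k list price quoted above
cloud_rate_per_hour = 98.32  # placeholder hourly rate for equivalent rented GPU capacity
utilization = 1.0            # fraction of each month the rented capacity actually runs

hours_per_month = 730
monthly_cloud_spend = cloud_rate_per_hour * hours_per_month * utilization
breakeven_months = server_cost / monthly_cloud_spend

print(f"Break-even after ~{breakeven_months:.1f} months")  # ~5.2 with these numbers
```

With these placeholder numbers the purchase pays for itself in roughly five months, in the ballpark of the three-to-six-month claim; lower utilization or discounted cloud pricing pushes the break-even point out.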
So there was this fascinating presentation by IBM, and there's a GitHub repo people are following along with. Look up the session, go find the GitHub link, follow the tutorial. What they're doing is adjusting the GPU's frequency, the processing frequency, to reduce its power consumption in real time. They ran a whole bunch of training simulations and serving simulations, and they showed that, obviously, there's a response-time latency difference. You slow the GPU down: I ask a question of a large language model, it might take two seconds versus half a second. But they captured the power consumption at each of the different steps.

Two seconds is not that bad.

And they basically asked, what is the user's expectation here? And what was truly fascinating is that the power consumption and temperature went below the idle state, the no-workload state of the GPU.

That's really strong. I think that's the folks from the Kepler project that we talked to; project Kepler is what they're contributing back. That's IBM Research and Intel and Red Hat, Kepler. Yeah, we talked to them earlier, and I think you're hitting the nail on the head. That's a power-aware workload scheduler, or an autoscaling feature.

Yeah, yeah, yeah. Pretty cool stuff. I mean, I come from an application performance monitoring background, so I love nothing more than charts and widgets and things that are spinning. I was just looking at it thinking, this is huge. That's monumental for a public cloud provider. That's monumental for anyone that has to be in the data center business running GPUs. And if you want AI to help, at some point you're not just renting GPT-4, because that's table stakes now; everyone else has that.

We were talking about the fintech side, the impact of AI on fintech. One of the things we were talking about along the same lines is that the first phase was physics: packet A to packet B, high-frequency trading, get that edge, every millisecond, microsecond, nanosecond. Now the edge is time to insight. That's not physics; that's querying, that's language models. Do you have the data? You tune it. That's nothing to do with packet moving. Not that latency doesn't matter once you're getting the answer; it does. But maybe they might not get the answer at all. So that's an intellectual challenge, an algorithm challenge. The right algorithms. What's your reaction to that? Because it's a two-step process: nail the physics, the latency, and then nail the query.

It's the question, right? I think the really great data scientists out there are the people that can have that thought, that cognition, before you do. And then they start building a model on it, and they're like, I need the data to train and validate this, put it in production. Well, you're in the fintech world; something changes, another thing changes, you've got drift. Okay, now I need to retrain my model.

Yeah, whoever does that fastest wins.

Correct, so it's that life cycle. Yes. And I think that's fascinating, right? I'm pretty new to this area; I'm learning as I go along. And to me it's a brilliant area to be focusing on. I think it will help many businesses and organizations globally. But how can I best make use of this technology? How can I make it accurate and fair? How can I understand what it's doing? How do I keep it up to date and accurate?

Well, explain to the folks out there watching: what does Platform9 do in that context?
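For the curious, the basic mechanism behind trading a little latency for lower power is exposed through NVIDIA's NVML. Here's an illustrative sketch, not the Kepler project's or IBM's actual code: it caps GPU clocks and reads power draw before and after. The clock values are placeholders (query the supported clocks for real ones), and locking clocks typically requires root privileges.

```python
# Illustrative only: cap GPU clocks via NVML and read power draw.
# Requires an NVIDIA GPU, a driver with NVML, and `pip install nvidia-ml-py`.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

# Read current power draw (NVML reports milliwatts).
watts_before = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
print(f"Power before: {watts_before:.1f} W")

# Lock the graphics clock into a lower range. The MHz values here are
# placeholders; use nvmlDeviceGetSupportedGraphicsClocks for real ones.
pynvml.nvmlDeviceSetGpuLockedClocks(handle, 210, 900)

# ... run the inference workload and measure response time here ...

watts_after = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
print(f"Power after:  {watts_after:.1f} W")

pynvml.nvmlDeviceResetGpuLockedClocks(handle)  # restore default clock behavior
pynvml.nvmlShutdown()
```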
Because you guys have been in this business of doing the heavy lifting on Kubernetes on behalf of customers, but this is also portable to AI.

One example: let's say you're a retail organization and you've got self-checkout. One of your solution providers says, hey, we've got real-time machine learning that will use the video feeds from your self-checkout and alert you when there's some nefarious activity, but it requires server hardware with GPUs in it. And you think, OK, I hear this. You've now got the ability to do inference in the store. Let's lay it down cloud native there; let's use Kubernetes. But don't do that by yourself, or just alongside the solution provider doing the video piece. Get a stack that means your operations teams don't have to go and learn Kubernetes and figure out how to run it in every store. Remove the truck roll aspect of that; manage the hardware remotely as well. So all of a sudden you've got remote provisioning, you're laying down a cloud layer, and that hardware that might have been doing just the video inference, now your data scientists can give you models to run in the store, do other real-time inference to improve your customers' experience, and launch new products. You need Kubernetes to do that, and running Kubernetes in 300 stores is going to get pretty complicated. You want to be able to do that remotely, standard, repeatable; see the sketch below. And if things go wrong, you need a partner to lean on to solve it.

And the trend, too, is to let the talent be focused on the activities they need to be.

Correct, get them doing what they should be doing. Data science teams, they're the teams that are just doing this, right? It's not even shadow IT; it's a fully funded new business unit spinning up stuff in a cloud, or maybe doing stuff in a data center. If they're using spreadsheets to organize who can run what on which hardware and when, that's not a huge ROI. But to close the gap to, well, I've got a pipeline tool and I've got some automation and MLOps happening, once again, they need Kubernetes. You don't want them focusing on that; you want them focusing on asking the right questions.

They need to see a really strong infrastructure, on-prem, cloud native, getting it up. Platform9 is doing great stuff. But give a quick plug for Platform9 in the last minute we have here. What's going on with Platform9? When do people call you?

When they realize their teams are spending too much time building Kubernetes platforms and cloud-native infrastructure, when those teams should be helping them launch new applications and hit their business objectives.

And getting to AI quickly.

Getting to AI quickly. If you've got GPUs sitting there and you're thinking, well, maybe my team shouldn't be manually scheduling time to run workloads, then take that step: reach out to Platform9 and say, hey, help us do this more efficiently. Consume more of that cloud-native stack anywhere we want it, so our team can actually be more productive.

Large language models, foundational...

Yeah, check out IKEA's presentation. I think it's the second or third to last slide. See that stack.

The one from here? The one from today?

Check it out, IKEA's presentation. Chris Jones of Platform9, thanks for coming on.

Wrapping up day two: I'm John Furrier with Rob Strechay, wall-to-wall coverage. This is the most important open source event in North America, Open Source Summit 2023. theCUBE is on the ground.
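As a rough sketch of what "standard and repeatable across 300 stores" can look like, here is a hypothetical rollout loop using the official Kubernetes Python client. The context names, image, namespace, and GPU request are all invented for illustration; a real fleet would more likely lean on GitOps tooling like the Argo CD mentioned earlier rather than an imperative loop like this.

```python
# Hypothetical sketch: push the same inference Deployment to many store
# clusters, one kubeconfig context per store. All names are made up.
from kubernetes import client, config

STORE_CONTEXTS = ["store-001", "store-002", "store-003"]  # ... up to 300

def inference_deployment() -> client.V1Deployment:
    """Build a one-replica Deployment running a GPU-backed vision model."""
    container = client.V1Container(
        name="self-checkout-inference",
        image="registry.example.com/vision-model:1.4",  # assumed image
        resources=client.V1ResourceRequirements(limits={"nvidia.com/gpu": "1"}),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "self-checkout-inference"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "self-checkout-inference"}),
        template=template,
    )
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="self-checkout-inference"),
        spec=spec,
    )

for ctx in STORE_CONTEXTS:
    config.load_kube_config(context=ctx)  # switch to this store's cluster
    apps = client.AppsV1Api()
    apps.create_namespaced_deployment(namespace="edge", body=inference_deployment())
    print(f"Deployed to {ctx}")
```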
We've been involved in open source since day one, breaking down the analysis, trying to get the best guests and ask them the right questions. What will AI do? When will security be solved? How will the community respond as it continues to grow exponentially? Ecosystems are forming, with a lot of dependencies, and becoming platforms. These are all amazing next-gen open source questions, and we're here to help. See you tomorrow; we'll be back for day three. Thanks for watching.