Interviewer: ...Foundation conference here in San Jose. And with me I've got Aaron Sullivan, who is a distinguished engineer at Rackspace. Welcome, Aaron.

Aaron Sullivan: Thank you.

Interviewer: So what do you think of the conference so far?

Aaron Sullivan: It's amazing. It's grown so much in the last year. Fifteen designs to almost sixty in a year, and lots of system launches. Yeah, very impressed.

Interviewer: Well, one of the things that has been announced today, which caught my eye in a big, big way, was the agreement, or the announcement, that you and Google have. Can you shed a little more light on that announcement?

Aaron Sullivan: Yeah, sure. So Rackspace and Google started working together when Rackspace was developing Barreleye. Of course, Google already had their own system available at the time. And our collaboration just on what we had with Barreleye was very positive. We were just kind of looking to trade notes and, you know, share our experience. And a few months ago we got back in touch and said, hey, this was positive enough, we should think about doing the next one together from the start. And that's basically what we're doing now. We're going to do a POWER9 system that comes in multiple mechanical form factors, but with just one motherboard. And, like we did with Barreleye, we're going to contribute it to Open Compute when we're finished.

Interviewer: Part of the Open Compute Foundation, part of OpenStack, part of the OpenPOWER Foundation.

Aaron Sullivan: That's right. Open everything.

Interviewer: Open everything. Excellent. So what about Barreleye, which you also announced some things about today? What is Barreleye, and what's different about it?

Aaron Sullivan: So Barreleye is named after a fish that's got a transparent body. Most of our servers are named after fish, and we thought a server that was fully open deserved that name. Barreleye just entered its first data center shipments. It's headed to our Virginia data centers right now.
And in a few months we expect we will begin providing services to customers on it. So that's the progress on Barreleye so far. We contributed it to Open Compute about two or three months ago now, and it was accepted, so the specifications are online. And if you look around the show floor here, you will see there are other companies that have put their own brand on it and are also taking it to market, which is exactly what we hoped for.

Interviewer: Great. Well, I've got a question, which is: why have you put these resources into Barreleye, and in the future into POWER9, et cetera? What are you looking for that's different about OpenPOWER that you couldn't get with, for example, a standard x86 server?

Aaron Sullivan: Yeah, so I know people get tired of hearing the word "open," but really, even with Open Compute and OpenStack, the freedom that comes with developing in that particular universe is really significant. Before OpenPOWER even started, there were parts of the system we really wished we could get into in an open way, where we could develop and share instead of doing it all on our own, and having OpenPOWER come along fit that. But then we also have this Moore's Law problem. The types of changes that we're going to have to implement as an industry to continue to accelerate, to get higher-performance and more efficient computing over the next few years, are really huge challenges. They go from the chips all the way to the top of the stack. And if you don't have the chip part open and you don't have the firmware part open, it becomes really difficult to collaborate. You can't bring the force of the world's software developers to bear on it. You end up in these little silos and niches. So for us, Barreleye provides a lot of value as a business, and it has a great influence on the industry at large. And so will Zaius, the POWER9 system we're doing with Google.
But it's also there as a platform for developers to begin wrapping their minds around these new problems and opportunities that we have. And if it's not done in the open, these types of software aren't really scalable across the whole industry.

Interviewer: That's a very interesting answer indeed. And as you say, Moore's Law has come to a screeching halt in terms of the amount of performance per CPU, even if it's still going on in terms of the number of transistors, et cetera, that you can have. As a distinguished engineer, what are the things that really are most important about the POWER architecture that allow you to develop these new ways of doing things?

Aaron Sullivan: Yeah, I think it depends on the type of business you're in. But in our business, and I think for many cloud service providers and in some other environments, certainly some HPC and a lot of enterprise, the performance of a single core is still really important. And it will continue to be for as long as we can keep getting more performance out of a single core. So POWER provided a great platform with a very powerful core, and it also has a huge number of threads per core. So you get a little bit of the best of both worlds there. If you need a really powerful core, you have it. If you want to spread your load really wide over a more cloudy, webby type of application, you get to use all those threads. And there's all that memory bandwidth and so forth. So that was the benefit of POWER in general. And then we run out of core performance, and those cycles per CPU aren't going up, and maybe we can't even scale cores like we used to anymore; that's coming in a few years. The fact that the platform is open in areas where others aren't allows us to bend the rules about how components communicate, and we cut out a lot of overhead between them. So that's a sort of software-and-silicon type of argument.

Interviewer: You want to bring the software closer to the silicon.

Aaron Sullivan: Yeah, closer.
And in many cases, to do the same work that we do today. That's the hard part: people think it's all about genomics or oil and gas or something, but it's the same work. The OpenPOWER community has already demonstrated that there are certain workloads that are very common today that you can boost tenfold or more simply by reintegrating your software tighter to the hardware, right? You pull out overhead that we were fine with when Moore's Law was working. But now we've got to do something different.

Interviewer: Yeah, great. Well, thanks very much indeed for being here. And thanks very much for watching.