So we're here at Linaro Connect, Vancouver 2018, and who are you?

I'm Jon Masters, I'm a computer architect with Red Hat.

And you've been busy in the last year with a whole bunch of stuff?

Yeah, there are many things that I do for my day job, but two areas this year have been continuing the work on ARM servers, which we can talk a bit more about, and also leading the security response, from a technical perspective, to the side-channel attacks: speculative execution attacks like Meltdown, Spectre, more recently Foreshadow, and a whole laundry list of different security vulnerabilities in modern processors.

So why were you doing that? How did you become the person, or part of the team, that does that?

Yeah, so the backstory is that for the last seven and a half years I've led Red Hat's work on the ARM architecture, and the novel approach we've taken is to work with a lot of silicon companies from the earliest days of building their microprocessors. This is before they've even taped out or produced the silicon, turned the sand into a wafer and produced a microprocessor. So we've worked very closely with design teams, and we've done a lot of things that we haven't done in the past. As a result we have expertise in this space in terms of how processors are designed and how they behave internally, so a lot of us, like myself, who have a background in computer microarchitecture have been involved. I guess I was the right guy in the right place at the wrong time, you know? Effectively we sensed that there was something going on last year, so we started an effort in anticipation that there were going to be some problems in this space. And it started with me writing an internal briefing on the risks posed by speculative execution.
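The mitigations that grew out of this industry response are something a Linux system now reports directly. A minimal sketch of reading the kernel's status (the sysfs path is standard on modern Linux kernels, though the exact set of files varies by kernel version):

```python
import os

# Standard sysfs location where Linux reports hardware vulnerability status.
VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def parse_status(text):
    """Classify a kernel status string such as 'Not affected',
    'Mitigation: PTI', or 'Vulnerable'."""
    text = text.strip()
    if text.startswith("Not affected"):
        return "not-affected"
    if text.startswith("Mitigation:"):
        return "mitigated"
    return "vulnerable"

def report():
    """Return {vulnerability_name: status} for everything the kernel exposes."""
    if not os.path.isdir(VULN_DIR):
        return {}  # older kernel, or a non-Linux system
    results = {}
    for name in sorted(os.listdir(VULN_DIR)):
        with open(os.path.join(VULN_DIR, name)) as f:
            results[name] = parse_status(f.read())
    return results

if __name__ == "__main__":
    for name, status in report().items():
        print(f"{name}: {status}")
```

On a patched kernel this prints one line per known issue (meltdown, spectre_v1, spectre_v2, l1tf, and so on) with its mitigation state.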
And then in the fall of last year, I got a Meltdown reproducer working internally, in advance of the disclosures. Of course at that point we were also starting to work with the partners on mitigating that problem, as well as many other vulnerabilities that have come out over the last year. Very recently they've included something called Foreshadow, or L1 Terminal Fault. So we've seen a transition in focus from the original set of vulnerabilities, which were very cross-architecture and impacted all the computer architectures, to a few recent ones that have been more Intel-centric. But I don't think this is exclusive; we know it's not exclusive to Intel. They've had some focus recently from the research community, but this has affected everybody. Folks like myself wear an ARM hat a lot of the time, but when it comes to security I don't think you should play favorites or have a particular opinion other than: let's fix it and make it all safe for everybody. So I was able to put my ARM fanboy stuff aside and say, let's go and try to help do something here. That's been an interesting journey for the last year, and what I'm trying to do now is still do some of that stuff but also focus a lot on the ARM server work we've been driving for the past few years.

But you weren't the one that found the Meltdown bug, right?

No, no, I was not. We've worked closely with folks like the Google Project Zero team and, as I said, we had a heads-up about some of these issues, and then we began some internal research in anticipation of, in advance of, the broader industry plans, right? There was an industry effort to mitigate these exploits prior to January, and I think what we've learned over the last year is that we've gotten pretty good at responding to these.

And so even though your business card says ARM... what does it say?
It says chief ARM architect. What it might say soon, I think, is computer micro-architecture lead. So what I've done over the last year, like I said, is try to broaden my focus out to include other architectures and emerging architectures. We look at x86, both Intel and AMD, we look at the IBM architectures, and we obviously have the ARM work going on, which is my principal focus. But we're also looking at some research around things like RISC-V and all kinds of different technologies out there. What I'm trying to do is make sure that Red Hat is always positioned, from a computer architecture point of view, so that we understand what's going on, we're able to see where the industry is going, and when we have particular needs to address, we have the right people internally who can help drive that.

So you mentioned that Meltdown was not only an Intel issue, but still they were the most affected, right?

Well, the way I put it is kind of how I've heard it from the researchers: if you want to get a paper published right now, what you do is you find a bug in an Intel chip, or you find a bug in an ARM core in a cell phone, right? Just given the market penetration of some of the other players, there are fewer people out there with, say, a mainframe in their house. So if you're a researcher, you could go looking for all kinds of problems in different processors out there, but if the main focus of people is on x86 laptops and servers and this kind of thing, and you want to publish a paper, then you've got a lot more likelihood of publishing if you find a problem that you can say affects that particular processor vendor. So I wouldn't say Intel is particularly bad from a security perspective.
What I would say is that people are very incentivized to find problems that affect Intel, because they can market that: you need to see my paper, you need to see my logo, you need to see my research. When I talk to a lot of security researchers, they're very frustrated about that as well, because a lot of them would like everyone to care about security the same, no matter who's affected by it. But the reality today is, you go find a problem that affects an x86 PC, or you go find a problem that affects Android or iOS. If you find a problem that affects, I don't know, the Plan 9 operating system running on an Alpha, that's not really something anyone is going to care about from a security point of view.

And so it's very important for Red Hat to understand everything about how the CPU works, and to optimize all this stuff that Red Hat is doing for the CPU itself?

So one of my pet projects is to make sure that we solve this separation we've had in the industry between hardware and software people. I was fortunate to be able to participate in the keynote at the Hot Chips conference last month with John Hennessy, who's the chairman of Alphabet and a guy with a huge amount of experience; he literally wrote the book, Computer Architecture.

And he invented RISC.

Along with Patterson, yeah, exactly, right? A fabulous human being, a great set of expertise there going back decades. It was really great to be able to be a small part of his keynote. I gave my perspective from the software community on dealing with Meltdown and Spectre and other side-channel micro-architectural vulnerabilities. And what I said was, hardware and software people don't ever communicate with each other. Some of us do, but generally speaking, over the last few years software people have almost found it exciting to ignore hardware people.
If you read the Linux kernel mailing list, you will hear a lot of people say, well, that's just hardware, I don't care about that, right? There has been a huge problem where we've made hardware kind of boring: we've got a couple of big vendors who've just taken the lead and driven it, and the software community doesn't care enough about how the hardware works. So when we have these challenges and these problems, sometimes we don't understand what the problems are, because we don't communicate. I've had a strong interest in helping to repair that damage and make sure that we do communicate and collaborate when it comes to really understanding how machines work. As a result of that, I think we'll get two things. We will get more secure machines, but we will also be able to build interesting new technologies, because the machines of the future are not going to be like the machines of the past. With the machines of the past, every year or two you'd get a machine that's 10%, 15%, 20% faster than the previous generation. That isn't going to happen in the future. Moore's law is dead. We're not going to get faster and faster computers. So as a result, we're going to have to build technologies differently: software that doesn't use 10,000 layers of abstraction and run really, really slowly, but instead software that's more carefully optimized for the hardware, and hardware built in collaboration with software, so we understand how to build a complete solution. And that takes the kind of expertise and skills that we have inside companies like Red Hat, because we've invested in understanding how machines work.

Is that also...
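The point about layers of abstraction can be illustrated with a toy sketch (not from the interview): the same computation expressed as per-element interpreter dispatch versus a single call into the runtime's optimized native code.

```python
import timeit

data = list(range(100_000))

def sum_layered(xs):
    # One interpreter round-trip per element: the "layers of abstraction"
    # effect in miniature.
    total = 0
    for x in xs:
        total = total + x
    return total

def sum_direct(xs):
    # One call that drops straight into the runtime's optimized C code.
    return sum(xs)

# Both produce the same answer; only the distance from the hardware differs.
assert sum_layered(data) == sum_direct(data)

t_layered = timeit.timeit(lambda: sum_layered(data), number=20)
t_direct = timeit.timeit(lambda: sum_direct(data), number=20)
print(f"layered: {t_layered:.3f}s  direct: {t_direct:.3f}s")
```

On a typical CPython build the direct version is several times faster; that gap is exactly the kind of overhead software can reclaim when it pays attention to the machine.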
It's very interesting that John Hennessy is the chairman of Alphabet, a trillion-dollar company; one could think that maybe they're thinking about the CPU a little bit now. And I guess his speech was also like what you just said, right? Because maybe Moore's law is not going to continue forever. So what's next?

Well, I think Moore's law is dead, basically. It served us well, but it's gone. In many ways that actually excites me personally, because over the last few years we've had this kind of cheap performance improvement, so no one has really put the time into caring about building the best systems we can build; they've just gotten a bit faster each generation. If we can get to a point where people actually have to care about it again, that's good for everyone. It also means we have an opportunity right now for innovation and for competition, because if you look at manufacturing technologies, nobody has a monopoly anymore on particular manufacturing technologies, so that's exciting as well. So there's lots of opportunity there. I think any company that's building out at scale, or any company that's interested in the longer term, is looking now and saying: what should we do? How should we respond? The idea that you're just going to have general-purpose CPUs everywhere is obviously nonsense. NVIDIA with CUDA proved that out more than a decade ago, right? The idea that GPGPU was going to be a thing has turned into a big thing. We have a lot of special-purpose hardware. Machine learning has demonstrated the benefit you get from having a lot of custom hardware, both for training and for inference, and you will see only more and more cases of custom hardware and custom accelerators. And finally, look at the average phone. The average phone has more than 100 different accelerators and widgets on the SoC in that phone.
And this is because, in order to get the battery life and the experience that you want out of that device, you have to have this mixture of accelerators alongside the general-purpose compute.

So you're working towards heterogeneous multi-processing systems?

Yeah, so we're interested in looking at this kind of heterogeneous future, where you don't just have more and more big, brawny, high-end processor cores, but a good combination of appropriately sized compute. In many cases, maybe you have GPU resources mixed with CPUs, maybe you have FPGAs, maybe you have other custom ASIC accelerators in your system, and they all have to talk together. One of the other things that I do is chair the software working group inside CCIX, which is building a cache-coherent interconnect for accelerators. This lets you plug all these pieces together when you're building a server, so that your GPU or your machine-learning accelerator can share memory in the same way that a CPU sees memory. So there's lots of research going on into how to plug all these pieces together in future servers.

So that's very exciting for future servers, but there are some pretty awesome ones right now. We've been looking at the ThunderX2 for a little more than a year or two, talking about it, right? Potentially huge, huge, right?

That's what we are hoping, yeah. I would say the ThunderX2 is one of the first mainstream-quality ARM servers, where you can take your existing workload on your existing high-end server, put that same workload onto a ThunderX2 server, and have the same kind of experience. Because you've got a two-socket machine, you've got a high number of cores, you've got eight memory channels per socket, right? Eight memory controllers, sorry.
So you've got the ability there to build a very high-end server system with multiple terabytes of memory, and what that lets you do is really start to realize the promise we always had with ARM servers. The thing with ARM servers is, until we get some of these really high-end machines, with people running their very high-end workloads and seeing that we can do just the same as any other architecture, they're not going to look at the real promise, which isn't always going to be just the high-end machines. Those are important, but there's also all this opportunity in edge compute, and in sort of mainstream commodity, not super high-end, not super low-end, that middle where hosting companies can start to offer virtualized machines at scale. You can see, for example, just last week a company called Vexxhost announced a commercial offering in which they're going to be providing OpenStack-based virtualization, where you can just get ARM-based VMs and deploy your workloads there, and I think that's an exciting beginning. Combine that with other announcements; for example, VMware announced that they have a release of ESXi for edge compute. So you're going to see a lot more of that kind of stuff happening, I think, over the next year or two. People will look at the success in HPC, the success of ThunderX2 in the Astra deployment for the national labs, for example, and they'll say, now I have an opportunity to use ARM, how can I apply that in many other scenarios? And the other scenarios, I think, are where you're going to see a lot more scale: at the edge and in mainstream cloud computing.

So at Supercomputing last time, you announced Red Hat Enterprise Linux 7.4...

7.5, yeah.

7.5, totally ARM-compatible from end to end, right? Everything just supported.
So does that mean some of these supercomputers might be using it?

Yeah, I think it's public record that Astra is running a Red Hat-based operating system. And you'll see a lot more over time, I think, of the ARM-based supercomputers running RHEL or Red Hat-derived operating systems.

Astra, is that the one from Sandia? The ThunderX2 supercomputer that was announced?

That's the Sandia machine, yeah.

That's exciting, right?

Yeah, it's super exciting. I really enjoy working with that community of people on these technologies. And like I say, it's just the beginning. What you saw at Supercomputing last year was, in a sense, a deliberately boring announcement: here's a RHEL offering, it just works, you can go to HPE and buy an Apollo 70 right now and just get RHEL for it. Well, that's fabulous. That's the same experience you would have on, say, an x86 HPE Apollo platform. But going forward you're going to start to see some of the layered technologies that run on top of RHEL. For example, we are building demonstrations at this point of various container technologies. Of course we have OpenStack up and running, of course we have Ceph and other technologies, and we're starting to look at things like OpenShift and various container platforms. Are we going to ship those yet? No, but what we do is get them up and running, explore, and look at how well that software works. In most cases it's just a case of maybe changing one or two things, building it up, and more or less it just works. We've reached a point in the ARM server space where things are pretty much fabulously boring. There are some differences from x86 or Power or any other architecture, but not many.
And over the next few years, you're going to see increasingly an opportunity for people to deploy, for example, an ARM-based container, and for developers to not even have to care what the architecture is. I mean, if I'm a coffee-shop startup entrepreneur-developer writing in Node.js, and I'm used to using containers to do that, there's no difference for me, from a developer point of view, between deploying an x86 container today and deploying an ARM-based container. So over the next couple of years, as we get these bigger deployments, as we see cloud opportunities like the Vexxhost announcement we just saw, developers will look at this and say: that offers me maybe a price benefit, maybe some other benefit, maybe access to ARM servers closer to me in an edge kind of scenario. And they're going to say, well, I have my software. It's just a Docker container. It's just a Dockerfile that says pull these pieces together. I just hit one button, have my ARM-based container, and deploy that instead.

So, for example, Cavium, I mean Marvell, sorry.

Yeah.

They're saying that maybe in terms of power it's actually not less than Intel. So is it the price that's the advantage, or is it customization, or?

Well, it's a very good question. I think there will be cases where ARM is more price-competitive. For example, in the edge compute scenario I described, where you're deploying very close to the edge of the network, maybe you have a smaller machine. It has a good amount of RAM, it has a good number of cores, but it's trying to be something less than the big, brawny, high-end compute systems you see out there today. Because today you see kind of a one-size-fits-all, right? Whatever the question is, just put this there. That's how we build machines today.
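The "just a Dockerfile" point above can be sketched concretely. Assuming a hypothetical Node.js service (the file names here are illustrative), nothing in the Dockerfile names an architecture, because the official base images are multi-arch:

```dockerfile
# Hypothetical coffee-shop-startup service; nothing below names an architecture.
# node:10 is a multi-arch image: Docker pulls the x86_64 or aarch64 variant
# matching the machine doing the build.
FROM node:10
WORKDIR /app
COPY package.json .
# Dependencies install natively for whichever architecture the build runs on.
RUN npm install
COPY . .
CMD ["node", "server.js"]
```

Built on an x86 host this yields an x86_64 image; built on an ARM server, the identical file yields an aarch64 image, which is exactly the developer experience being described.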
So they have very high-end parts, but they have high cost, and the economics are not always great. So you can build machines that are more appropriately sized for the workload. That's going to be interesting; that'll give you pricing that really starts to make this compelling. For some of the other cases, nobody has a monopoly on the laws of physics. No one can say, oh my goodness, my transistors are way better, right? So even though the ARM architecture is more energy-efficient in that way, it's only a few percent difference from using RISC versus CISC instructions or something like that. And the reality is, if you're building a very high-end core, you're still going to use a lot of power to do it. So if you're building HPC, maybe you're not going to save a lot of power, but you're going to be able to do some different things. So Cavium, now Marvell: because they're focused on a particular market segment, and they're not trying to sell a part that solves the entire world's problems in one go, they can say, well, we're going to put eight memory controllers in here, we're going to give you a ton of memory in this machine, and we're going to build a really high-end interconnect. They're not trying to put that part into a laptop, which is what some of the others are trying to do with the same design, and that means they can build something that aims more towards the higher end of the market. At the same time, others can come along and say, that's great, but we're not going to try to do HPC and high-end cloud; we're going to do edge compute, we're going to take ARM's own cores, which are pretty good, and we're going to put them into a much smaller edge scenario. And if we do that, then we get really good economics and good power, and we address a different part of the market.
So really it's the whole ecosystem playing out, and there isn't one right answer. The only time I think there's one right answer is when it comes to standardization: because we have built standards for these platforms, even though they may be big, little, or in between, they can all run the same software.

But the ultimate promise of ARM, which in the beginning was kind of a surprise for developers, is potentially much less power consumption, which is crucial for enabling amazing cloud applications for everybody. So maybe there will be the power advantage?

I think there's some power advantage; it's definitely true that there is some. But when you're looking at a very high-end HPC scenario, there's so much other energy use beyond just the core that the absolute energy is lower, but not a lot lower. However, when you're building a more mainstream edge or middle-market server platform, using ARM's cores and really going for that right-sized dynamic, I think you can save significant energy in your design. But I don't think energy alone is the reason people are looking at ARM. I think people are looking at ARM for supply diversification, because they can play vendors off against each other. But they're also looking at it and saying, I can go to this vendor and say: I need you to do this. I need this accelerator, or I need this security capability, or whatever it is. Because there are multiple vendors there, you can play them off against each other and get what you want. There are two big guys, and what they have to do is service everybody with the same solution, and that's very difficult in the world we're going into. So much so that I think a lot of the big guys now focus on the six or eight or ten big cloud customers they have.
And if you're a smaller cloud provider, maybe they care about you, but you buy fewer parts. So if you use an alternative, you're going to be able to get more of what you want, and for some of the new players coming to market, where you don't buy as many chips but you buy quite a few, that actually works well in their economic model.

So right here at the conference, just a couple of days ago, there's a new chip available, the eMAG, right? That's exciting. And there's the Qualcomm Centriq. So what are your opinions about those?

Well, I think there are a couple of announcements we've had this week. We had Fujitsu announce their A64FX part. The Fujitsu part is particularly interesting because that's going into the post-K supercomputer. It's been a great pleasure to visit Kobe and see the K computer in person, and to talk with the team about the post-K design for the past few years, and now they're starting to talk more publicly about what's going into it.

What does it look like? The K computer is huge?

It's massive. It's actually on many different floors; if you get a chance to go to Kobe, it's really cool to see, because the computer is kind of in one giant hall, like a data center, but they also have storage systems on site and some really cool technology in there. They also have this really cool earthquake shock-dampening system, because Japan, of course, is subject to earthquakes. The whole building is actually suspended, so if they have an earthquake, it's still fine, up to some crazy high magnitude. It's really cool to see that, and really cool to work with the team on what they're building. So Fujitsu was here at Linaro Connect talking about the SoC that's going into this machine. Huge chip.
They say it's the biggest processor in the world, or the biggest ARM processor in the world?

So far I think they're claiming it's the highest-performance ARM processor that's been built. Most of these chips today are physically very big because they have a lot of I/O, so even though the die inside might actually be small, the package tends to be pretty big. But yeah, it's pretty big, and it's definitely very interesting. We also saw this week, like you said, the Ampere eMAG announcement. Ampere is a startup founded by Renee James, the former president of Intel, and Renee is a really great person with a really good team. She's got Atiq, who has a huge history in leading the design of high-end microprocessors.

Who's that?

Atiq. He also came from Intel, as did various others on the team that we've worked with as they've gotten the Ampere organization up and running. Ampere started out acquiring the assets of Applied Micro, but also pulling in a lot of these other industry veterans, plus folks from other companies who've joined, so they've built up this really cool base of engineering. They've taken the roadmap that was Applied Micro's and brought what used to be X-Gene 3 to market as the eMAG, their first generation. I think they call it eMAG, but I guess we'll probably call it eMAG 1 in the end. It's just the beginning; they're working on a roadmap with some really cool stuff in the future. So this is them saying, here's a really credible ARM server, and it is really credible: it has really good performance. It's not going to give you the best performance you've ever seen, because that's not the point; they're going after the middle of the market. It's a really good processor to put into your data center or your cloud environment. You might not want to put it in a supercomputer, but they're aiming it directly at that middle. It's got good pricing, it's got good
scalability; it's just right for that part of the market.

The eMAG, now, the list price is $850. And actually X-Gene 3 is a big jump compared to X-Gene 2, right?

Oh yeah, absolutely. That particular part is very performant, but as I said, it's just the beginning. It's fully standard, so when we first got these parts in-house (and we've had them for quite a while), getting our operating system up and running the first time took maybe 30 seconds, maybe a minute. You basically just take the operating system, because we've driven all of these standards, the SBSA, the SBBR, the various ARM server standards. Our OS is compliant with these standards and so is the hardware, so you take the server operating system and you boot it on the hardware. Sometimes you need a driver for some I/O adapter, something like that, so occasionally you will have to make some changes, but generally speaking we've made the process just as boring as x86, and that's deliberate. You can take a new part you've never seen, like the eMAG, and boot on it. In that particular case, the eMAG machine, you've got a high-end set of cores, 32 cores at over 3 GHz, and you've got tons of PCIe, so you can just plug in PCIe network adapters. We plug in Mellanox cards, SAS storage adapters, Mellanox Ethernet and InfiniBand adapters, Intel adapters, and it just works.

GPUs?
Oh sure, yeah, people have tried all kinds of GPU technology in there too. Sometimes with the GPUs we have a few driver issues to work out, but this is true everywhere; I have yet to find any architecture where there isn't some issue trying to ship an NVIDIA driver or something like that. So you get the same kind of challenges as with an x86 server, having to install a set of binary blobs to get some GPU working. Unfortunately it's not better there, but it's not really that much worse, and people are working on solving these problems for ARM as well.

Can you say what's your latest opinion on the Qualcomm Centriq? Because Qualcomm is a huge, cool company, and the industry is a little bit crazy sometimes, even for huge companies: people trying to acquire them, and all these talks and stuff. But they're definitely going full on forward with this chip, which is a cool one.

Well, I think it's an interesting world we live in. 2018 was an interesting year for those guys; a couple of people tried to buy them, all kinds of craziness going on. But at the end of the day, Centriq is a really good product with great performance, and they've got a great follow-on to that.

10nm. Intel doesn't have any 10nm server chips.

Well, depending on who you talk to, they'll say, well, okay. But one thing I would say on the nanometer thing is that Intel's 10nm is different from somebody else's 10nm. Where we are today, Intel's 14++, or whatever node they're on now, is similar to the foundry 10nm node. So while that 10 number sounds better than Intel's number, in terms of the process and the technology it's basically the same process. But the huge thing there is that we had an industry where one player basically had the whole thing sewn up in terms of having the best process on the planet, and now you're seeing the foundries competing with basically the same level of process technology. So this means
that someone like Qualcomm can come along and say, we can make a chip just like the other guys, a nice big chip. What it means is they can now compete on the architecture. Previously you had to compete on manufacturing, the actual building of it, and on the architecture and the micro-architecture, the design. Now you're going into a time where TSMC, for example, has a really good 7nm technology that everyone's looking at. When people start to bring out TSMC-based 7nm designs... you saw Apple announce the A12 just recently for their phone, the first high-volume 7nm part, and I'm sure you'll see a lot more as people roll those out. That will be at least as good as, or maybe better than, some of the other guys' manufacturing, and that means we can now compete on the architecture itself, the actual design inside. We can compare apples to apples, oranges to oranges, rather than saying, well, here we've got an X-Gene 1, which was great when it came out years ago, but it was 40nm while Intel was on 28; it's very hard to compare when you're a few generations behind in manufacturing. Now we're at a point where people are bringing out ARM server processors at the same level of manufacturing, potentially even better than the other guys, so that's going to let us do much more direct comparisons and hopefully show much higher performance. So it's exciting what they have now, but it's also exciting to think what might come in the future from these guys, who knows.

I don't want to try to get any secrets or anything like that, but I'm sure it's exciting too. It would be nice to see what's there become big already.

Well, I think the thing for me is this was always going to be a 10-year journey. I'm very grateful to the Red Hat management for letting me start the ARM project inside the company formally. I think it was March 1... well, I know it was March 1, 2011; that was when we started the project, and
we did some work before that too, but I always thought it would be a 10-year journey, going into about 2021, before ARM servers were just something that everybody could buy. But this year people can buy real, credible, well-performing ARM servers. You can go and get a Cavium ThunderX2 station right here, a workstation with a couple of high-end, really good processors in it, a couple of sockets. You can get that now, you can get other vendors selling ThunderX2 today as well, and you have packet.net, where you can get access to this machine. So we're starting to see availability and adoption of this hardware, and over the next couple of years that's going to grow, and you're going to see some very interesting things happen between now and 2021. I think you're going to see a lot of changes and people really getting to where we always thought we would be. But we always thought it was a 10-year journey.

What do you do at Linaro Connect? What are your day-to-day activities? Are there a lot of meetings?

Yeah, so I sit on the data center group steering committee and also the technical steering committee for Linaro, so that's two different steering committees. And then I go to some of the talks and have lots of other meetings as well, and of course I have my day job going on at the same time. So it's a lot of scurrying around with lots of people, catching up on particular projects, and also helping to drive the strategy for where the data center group inside Linaro is going, for example. We're now called the data center group; we used to be called LEG, the Linaro Enterprise Group, and we renamed ourselves because we wanted to really focus on that data center component, and the word enterprise kind of had people thinking of maybe the wrong connotation. It's much broader: it includes cloud, it includes lots of use cases, and also edge computing. So what we do there is drive the strategy. Well, I just want to point
out that that group is now almost six years old; in another couple of months it will turn six. And in that time we have driven a lot of the standardization work. We've gone from upstream Linux having no support for ARM servers to now having great out-of-the-box support for them, and it's been basically just playing whack-a-mole: every time one little thing comes up, you've got to get rid of it, solving it one bit at a time. But now all the foundation is there, so upstream Linux is great. And just in the last week we had a blog post go out with the OpenStack Foundation, where Hema from the data center group wrote a great write-up of the work that's happening in Linaro on the Rocky release. So OpenStack Rocky just works: it's a first-class citizen, all the tests pass on ARM. Well, that's great, and Rocky is the latest release of OpenStack, so that just works. But even better than it just working, Linaro has been leading the work to containerize it. There's a project called Kolla, K-O-L-L-A, and in Kolla what they're doing is containerizing the OpenStack components so you can just deploy them. You just take a server, an arm64 (or AArch64) machine, or just your x86 machine, and you could do this side by side if you want, for comparison. You use your standard container deployment technology, and you can use Ansible playbooks, which are a way of easily automating this, so you can basically type one command and deploy OpenStack on a machine. In a lab environment, if you want to do an all-in-one config, a test config, you can just run a playbook that will automatically install all the pieces for you and give you a machine that's just working, and it's very similar to deploy this at scale. Linaro has a developer cloud that they built, and they use this Kolla container technology to actually deploy their own cloud as well. That's just an example of the cool work that's happening in the data center group. So it's
going beyond getting the bits working in upstream Linux and now delivering these layered technologies, things like OpenStack. We're working on container technologies, we're working on storage, we're working on HPC, we're working on big data. So lots and lots of different pieces, and it's all starting to come together really well.

But for a while now the whole industry has basically regarded Linaro as a huge success, right?

I think so. I mean, I think Linaro has been fabulously successful at its goal, which is to solve the open source challenges that ARM had. You know, back in the day ARM's status in Linux caused Linus Torvalds to rant; it was kind of not that great, right? Well, now ARM is a really good citizen in open source, in Linux and in other projects, and Linaro is a huge part of how that has happened. Looking forward to what happens in the future, it's exciting, and you're right there where it's happening.

Well, that's how I like it. I like to be right in the middle of it. Thanks for talking, it's really good.
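As a footnote to the Kolla discussion above, the "one command per phase" all-in-one deployment described in the interview can be sketched roughly as follows. This is a minimal sketch, not an official recipe: it assumes a fresh Linux host with Docker already installed, and the paths shown are the upstream kolla-ansible defaults around the Rocky timeframe; details vary by release and distribution.

```shell
# Sketch of an all-in-one Kolla deployment (hypothetical lab host;
# verify paths and versions against the kolla-ansible docs for your release).

# Install kolla-ansible, which pulls in Ansible as a dependency
pip install kolla-ansible

# Copy the example configuration and the all-in-one inventory that
# ship with the project (upstream default locations)
cp -r /usr/local/share/kolla-ansible/etc_examples/kolla /etc/kolla
cp /usr/local/share/kolla-ansible/ansible/inventory/all-in-one .

# Generate random service passwords into /etc/kolla/passwords.yml
kolla-genpwd

# Prepare the host, run sanity checks, then deploy the containerized
# OpenStack services: effectively one command per phase
kolla-ansible -i all-in-one bootstrap-servers
kolla-ansible -i all-in-one prechecks
kolla-ansible -i all-in-one deploy

# Write an admin openrc file so you can talk to the new cloud
kolla-ansible post-deploy
```

Scaling up is largely a matter of swapping the `all-in-one` inventory for a `multinode` one listing your real hosts, which is what makes the same tooling usable for something like Linaro's developer cloud.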