We're at Linaro Connect, just after your keynote, and we're talking about ARM taking over the future of CERN — is that happening? Not yet. We are investigating multiple architectures, whether that's new flavors of x86, or 64-bit ARM, or even PowerPC. We are looking at multiple options for how we could potentially solve our problems once we get to the High-Luminosity LHC, which we'll see in about ten years, and do our computations. So who are you? I'm David Abdurachmanov. I work at the University of Nebraska-Lincoln and I'm also attached to the CMS experiment at CERN. And we're here at the Grand Budapest Hotel, right here in Budapest. Yes. And is it interesting to be at Linaro Connect? Always. With your friends, right? Yes. So who are you? I'm Jakob Blomer. I work in the scientific software group at CERN. So scientific software — what is it, like a scientific OS? It is tools. We are tool makers. We build the tools that help physicists make sense of the data. So how do you work together? What's the kind of collaboration that you do? We work on different things. So again, I'm on an experiment, so we work on experiment software. The same applies to Joshua from ATLAS, and Jakob works more on the infrastructure side. He provides the distributed file system that distributes our software and our data, and also the virtualized environment in which we can actually run our software. So do you do physics? Yes. I'm a PhD student at the University of Göttingen, but based at CERN right now. And we essentially use their innovations for our experiments. So you're Joshua Smith? Joshua Smith, yes. So do you work on finding the meaning of life and the Higgs boson and parallel universes and all these things? You could say that, yeah, in a sense, sure. So it's very important to have lots of computation, right? Why do you need so many computers to do all this? Well, the amount of data we collect is not so small.
So if you look at the current situation in 2016, we have about 75 petabytes of data recorded; out of those 75, about 50 petabytes are from the LHC experiments. And if you look 10 or 11 years forward, to what we call the High-Luminosity LHC, we see that from the first year it's going to be an exabyte-scale problem. We plan to have about 600 petabytes of raw data, and about 900 petabytes of derived data produced from that raw data, as a single copy. That's a lot of hard drives. It is. But not everything is stored on hard drives. Much of the data is going to be sitting on tape. So do you guys work with ARM servers? You're testing them out, right? You've been doing that for a while. Yes. So in general at CERN — I think it was one of the LHC experiments that started first. We started in 2013. And before that, we looked at the ARMv7 solution — that's the 32-bit version, also known as AArch32 mode at this point. CMS joined in 2013 with the ARMv7 port, and then we quickly moved to the 64-bit world even before we could have hardware — we started on the ARM Foundation Model. And I guess now more or less every experiment is looking into alternative options. Yeah, it's something that each experiment needs to worry about, right? It's our computing needs. As we try to increase our energies — not try, as we do increase our energies — it's constantly on our minds. Increase in power, increase in computational needs. It's always at the forefront. So why do you need so much computation? Well, the processes that we want to study — in the Standard Model and in physics beyond the Standard Model — are so rare that we actually need this huge amount of data to be able to pick out the interesting events. So we actually throw away most of the data that we collect, and we only store about 0.00001% of it. And it's in this fraction that we hope there is something interesting.
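The figures above can be put on one napkin. This is just back-of-envelope arithmetic over the numbers quoted in the interview (75 PB, 600 PB, 900 PB, 0.00001%); nothing else is implied about CERN's actual accounting.

```python
# Back-of-envelope sketch of the data volumes mentioned above.
# Only the 75/600/900 PB and 0.00001% figures come from the interview;
# everything else here is simple arithmetic.

PB = 10**15  # one petabyte, in bytes

recorded_2016 = 75 * PB
hl_lhc_raw = 600 * PB
hl_lhc_derived = 900 * PB

# One copy of raw plus derived HL-LHC data, in exabytes:
total_eb = (hl_lhc_raw + hl_lhc_derived) / 10**18
print(f"HL-LHC single copy: {total_eb} EB")  # 1.5 EB -> exabyte scale

# Keeping 0.00001% of events means roughly 1 in 10 million survives:
kept_fraction = 0.00001 / 100
print(f"events kept: 1 in {round(1 / kept_fraction):,}")
```

Which is why "exabyte-scale problem" is the right phrase: a single copy of raw plus derived data already lands at 1.5 EB.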
And doing this is a high computational need. Is there AI, and are there crazy algorithms to figure out what to keep and what to throw out? Oh, yeah. AI is something that has kind of exploded in the last year or two. It hasn't been used hugely at CERN yet, I would say, but it's definitely something that's making more of an appearance. Are you working on that? Not directly. What I see is that there are a lot of algorithms in use that some people would call AI. And, you know, everybody has their own name for things. So we might call it differently, but in the end it's the same algorithm. I mean, you've seen the pictures of tracks coming out of the detector, of clusters when particles hit detector material and shower. To reconstruct all of this, you have a Kalman filter, you have a cluster finder. All of this you could also call AI algorithms. And so do you work a lot on the software? Software engineers are a big part of CERN, right? That is true. Many people end up working in software. Computer science is a big part of CERN. Yeah, we have a good track record and a healthy relationship between the computer scientists and the physicists. And you need both to tackle these problems: you need folks who understand the physics and folks who understand the computing. So you are the computing guy? I come from the computing side, yes. And what's the biggest challenge you have right now, would you say? There are many big challenges. Well, one of the big ones is that technology cycles are much shorter than the cycles of particle accelerators and particle detectors. We plan and think in decades. And we have to make sure that the computing system evolves with this, right? We have to make sure that the computing in 20 years is able to process what was designed by the detector people 20 years earlier.
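The Kalman filter mentioned here is the classic workhorse of track fitting. A minimal one-dimensional sketch gives the flavor — this is a toy (estimating a constant from noisy hits), not the actual CMS or ATLAS tracking code, which propagates full track-state vectors between detector layers.

```python
# Minimal 1D Kalman filter: estimate a constant position from noisy hits.
# Toy illustration of the filtering idea used in track reconstruction.

def kalman_1d(measurements, meas_var, init_est=0.0, init_var=1e6):
    est, var = init_est, init_var
    for z in measurements:
        gain = var / (var + meas_var)   # how much to trust the new hit
        est = est + gain * (z - est)    # pull the estimate toward the hit
        var = (1 - gain) * var          # uncertainty shrinks with each hit
    return est, var

hits = [1.2, 0.9, 1.1, 1.05, 0.95]      # noisy measurements around 1.0
est, var = kalman_1d(hits, meas_var=0.04)
print(round(est, 2))                    # prints 1.04 (the mean of the hits)
```

With a near-uninformative prior (the large `init_var`), the filter for a constant converges to the running mean of the measurements, while the variance drops roughly as `meas_var / n` — each extra hit makes the track parameters sharper.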
So what would be nice to happen, from Linaro and from the ARM servers, for this to get better? You need so much data that it's impossible to do it on x86 — it takes too much power? Is that true or not? Is that one of the challenges? Not at all. It's not impossible to do it on x86. The question is whether it's perhaps more beneficial to do it on a different architecture — that, I think, is the main question. More affordable. More affordable. And as a user of our computational power at ATLAS and CERN, I don't want to be able to tell the difference. If I run a grid job somewhere around the world, I don't really care whether it's running on x86 or ARM. And that, I think, is the biggest hurdle for ARM to overcome: it needs to work seamlessly with everything our computing model does at the moment at CERN. But is it seamless already? If you listen to people like Jon Masters from Red Hat, or Jim Perrin who works on CentOS, and some other people, especially here at Linaro Connect, you'll hear the same thing mentioned. It doesn't matter whether you're an administrator or a user — you're talking about building a very boring system. You don't want to see any difference: no difference from the administrator's side, no difference from the user's side. So it's a different architecture, but it's still the same kind of system. I run my jobs, I run my analysis, I make my histograms, and at the end no one knows what the architecture was. You're on top of things when it comes to ARM, right? You're checking it all out; you have the first generation, and now you're really looking at the second generation — ThunderX2 is going to be good, X-Gene 3, whatever Qualcomm is doing — and all kinds of other things are going to be very exciting. It is very exciting. So again, for CMS at least, this is almost the fourth year of looking into 64-bit ARM.
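The "boring system" requirement — a grid job must behave identically wherever it lands — can be sketched in a few lines. The job function here is hypothetical and stands in for real analysis code; `platform.machine()` is the standard-library way to ask which CPU architecture Python is running on.

```python
import platform

# A grid job should behave identically whether the worker node is
# x86_64, aarch64 or ppc64le: the architecture may be logged, but it
# must never change the physics result. (Toy job, for illustration.)

def run_job(events):
    arch = platform.machine()            # e.g. 'x86_64' or 'aarch64'
    result = sum(e * e for e in events)  # stand-in for real analysis
    return {"arch": arch, "result": result}

out = run_job([1.0, 2.0, 3.0])
print(out["arch"], out["result"])        # result is 14.0 on any architecture
```

Seamlessness, in this view, is exactly the property that `result` is bit-for-bit the same no matter what `arch` reports.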
Our first partner, and the first real silicon we got, was the APM X-Gene — that was a Mustang board — and then we moved to HP Moonshot systems with the m400 nodes. We have some Gigabyte systems with the ThunderX. So we have X-Gene 1 and are looking at the X-Gene 2s. Once you know that your software works and that the infrastructure can also work, you start thinking more about when you can get the right silicon. When are you going to get something that falls exactly into what we tend to buy? And I think that's approaching. I think 2017–2018 is a very important year for ARM, because it's the year of the third generation — a third and fourth wave of silicon — and then we need to start thinking about how the platform is going to look. So again, it's not just the silicon that you buy; the silicon might be ready, and the question is how the platform is going to look, because as users, you buy platforms. But has it been moving fast enough? Because every time it seems like, ah, it's just about to come, but then you get it and it's not good enough, and then you have to wait three years — how does that work? Do you think it's fast enough, or do you think, oh my God, they should have done things faster? We all like to think it's going to move faster, and I'm going to say the same things. I mean, I always say it's going to be very soon, in three years, and now I'm telling you that 2017–2018 is a very important year. Which is what we said in 2014. And, you know, in two years I might say the same thing. But you're not saying it's not happening — you're saying 2017–2018 is good. I think it's a very important year. There's still no market for 64-bit ARM, but I think what's coming in 2017–2018 — the silicon, not the platforms yet — is going to be very important for the market. We might finally get into a position where it becomes something you might want to buy, though it can depend on your workloads.
Because right now what you have is just for testing. It is testing. You're not buying thousands, you're not buying millions. No. But you're waiting for something that might be a candidate, potentially, and suddenly there's a switch. And then you recommend to all these 170 countries, or however many participating institutes there are, that maybe they should be ordering this, because it works. So we do investigations. Again, we look at the performance, we look at the power efficiency and power consumption, we look at the densities of different vendors, different systems, different silicon. And we publish the results at conferences — you can see the posters, you can see the papers. And, you know, it's up to the computing centers to find out whether, if they want to use it, it's beneficial. The thing is that an external computing center, as a resource, should not be restricted to a certain architecture. And so CERN's goal here is to be able to just deploy its software and not have to worry about this. And that's what this research will enable: when a computing center looking to buy hardware doesn't know what to buy, they can look at the papers, they can look at the white papers, and they're not restricted. And if you look at the projects — again, there is one big project in Barcelona, the Mont-Blanc project, and there's going to be a further phase of it. That is going to be powered by ThunderX2, I believe — that's what they picked. And Barcelona has also announced that they're going to have a mixed system: you're going to have x86-64, you're going to have a PowerPC-based system, and you're going to have 64-bit ARM. You're talking about supercomputers? Yes. I'm talking about big clusters. If you look at the Japanese, there's also Fujitsu: they actually want to build an exascale system based on ARMv8. And HPC silicon is also being designed in China.
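The evaluations described above — performance, power, density across vendors — usually reduce to a throughput-per-watt figure of merit. A toy comparison, with entirely invented numbers (real studies use measured benchmark throughput and wall-socket power), might look like:

```python
# Toy figure-of-merit comparison: events processed per second per watt.
# All numbers below are invented for illustration only.

machines = {
    "x86 node (hypothetical)": {"events_per_s": 1200, "watts": 400},
    "ARM node (hypothetical)": {"events_per_s": 900,  "watts": 250},
}

for name, m in machines.items():
    fom = m["events_per_s"] / m["watts"]
    print(f"{name}: {fom:.2f} events/s/W")
```

The point of publishing such numbers is exactly what the interviewees say: a computing center can read the paper and decide for itself, without being locked to one architecture.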
So you can see that different regions might do different things, and it's very broad. We want to be ready for this. If it comes, we want to be ready to run our programs there. And it's going to be super stable, and Linaro is helping with all the things that need to be done, like all these... Yeah, we hope so. Kernel support and everything else. Yeah, all of this is necessary. And all of that is a lot of work — something that CERN doesn't necessarily have the manpower to do. So the fact that there are companies and organizations out there doing this is just fantastic for CERN. When you talk about China, for example, are they doing supercomputers on their own kind of architecture? Is it something similar to ARM, or is it actually ARM? There's a bunch of stuff happening over there. They're making the biggest ones, for some reason. Yes. As far as I know, they have the most powerful supercomputer at this point, and in terms of how many supercomputers they have, they actually have more than the US now, I think. Can you hook up to it and use it? I personally cannot. But, again, they also want to build exascale systems. They just recently announced their latest supercomputer, which is not an x86 system — it's actually fully custom silicon with, I think, more than 200 cores per chip at this point. And they do have research programs, and they're investigating different architectures for the next generation of supercomputers. I think ARMv8, the 64-bit one, is actually part of one of the systems they've been investigating. Maybe one way all this can actually happen is not just great software, but products like this need to be running on ARM, because it's related to servers, kind of, right? You want to get something bigger than a smartphone chip? I can only give you a personal view on this. As a software engineer, I would like to live on the instruction set that I am working on.
So if I work on software for the 64-bit ARM instruction set, I would like to have my daily driver running that same instruction set. There need to be ARM-based developer machines. And there's going to be a talk here on Friday, at Linaro Connect, discussing exactly this issue: that the majority of ARM developers are still using 64-bit x86 systems for doing ARM development. So how do we change that? So there's a new owner at ARM. It supposedly is huge, and they could have a huge amount of funds — maybe China, maybe Trump, I don't know. But is there a need for much more investment in things like CERN? What could you do if you got, say, 100 billion more? That's a very difficult question, and it starts going into the politics of how CERN is run. I don't think any of us can answer that. We'd definitely do a whole lot more physics. Yeah, I mean, more money, more physics. We could look into rarer processes. We could collect more statistics. I mean, having more money is never a bad thing. But yeah, those are very difficult questions. How did you find the Higgs boson? That was on x86, right? It was the old process on existing infrastructure. Yes, our grid is based on CPUs that are, I don't know, more than 10 years old in lots of places. And so this is all x86, Intel or AMD. And yeah, it was essentially a brute-force search, with some very smart analysis methods that were used to find the Higgs boson, the fundamental particle. And proving that Einstein was right or not? That's a different thing. Yeah, that's unrelated. It just... it confirmed our Standard Model once more. The theory is very good, but it's not complete. So do you live around CERN, all three of you? You're all over there in Geneva, right? And is it a bunch of fun people over there, like really weird particle physicists? With crazy big visions, and they want to do stuff, and they get to do stuff? I mean, the best thing you can do is come and visit us.
We have 100,000 visitors a year, and we are happy to show everybody who passes by everything we do and have. There are tours: you can go down to the detectors when the beam isn't running, obviously, and there are guides who will take you around CERN — CMS, ATLAS, ALICE, LHCb — and you can get detailed information on all of these. And there was kind of a second run, what do you call it, when you rebooted everything and got things faster — double the energy — but what did you also do in terms of the IT infrastructure? You reset that too, right? Was that part of your presentation? We made some changes. Again, CERN operates in what we call runs. So, you know, when the Higgs boson was announced, that was based on Run 1 data, if I remember correctly. Then the detector was shut down and we had Long Shutdown 1, which lasted about 18 months. And now we are in Run 2. So again, we have upgraded the accelerators, we have upgraded the detectors, and we also had time to work on our software and infrastructure to make sure we can actually handle the amount of data that's going to be coming out of these detectors. We're going to run until the end of 2018, and then we'll have another two-year-long shutdown period, where we'll again upgrade the hardware and start thinking about how to optimize our software. So end of 2018, you say? Yes, the shutdown starts in about two years. More or less — it can shift by a few months, but that's the plan. And afterwards, two of the four experiments will have a big computing upgrade: LHCb and ALICE. And then in the following run, Run 4, CMS and ATLAS will have their big computing upgrades. Are there lots of ARM chips at CERN? I guess there have to be lots of microcontrollers in everything... Everybody's on their phone, so yeah, people are actually using smartphones. Hard drives, you know — you have ARM chips in most of those, small ones, Cortex-M0s and everything else, and microcontrollers.
They're in multiple products — not only in servers; you might have your systems run by small ARM chips and so on. It's everywhere. I mean, it's impossible to avoid ARM at this point. So do you think ARM is a perfect ecosystem, or could it be better? I don't think anything is perfect. I don't think perfection exists. I think it's a fun thing, no? Because there are so many companies, and it seems like there's a lot of potential for them to innovate, to do something new. That's why we like ARM. That's why we're doing the research that involves ARM — because of this business model, because other industries can get involved and have their input. We can have our say; that's why we really like ARM. It's interesting to try to follow the latest news, all the stuff that's happening, right? It's crazy. I think the news is just moving so fast — you can see TSMC and GlobalFoundries with new process technologies, new silicon chip designs, new systems being built. It's a very fast-moving thing. And talking about GlobalFoundries, say, and probably TSMC, they have this kind of thing where you can go and get your own custom chip designed. Shouldn't you go there and say, we want a custom chip? No, that's not something we would do. I don't think it's necessary. I was going to say — we have many square meters of silicon for the inner tracker and the detectors, and this is custom-produced silicon, only for this detector. But universities don't generally produce silicon. Yeah, that is true. All right, so we'll see what happens. This is going to be interesting, and thanks a lot. Definitely. Thank you.