We just had the exciting topic of power, didn't we? Hello. This is a talk about power, about computational power. At home you run SETI@home to search for aliens. Dieter Kranzlmüller from the SuperMUC will give the talk. The translators are Kaste and Tatzelbrum; we are translating the German to English. Hello. Hello. Thanks for the kind introduction, and good evening. You will have noticed that he got something wrong: this is not a talk in German, this is a talk in Austrian, but I'll make an effort. The subject of the talk is SuperMUC-NG; NG stands for Next Generation, as in Star Trek. The subtitle has gotten lost, but it was important: where are we in the race for the fastest computer in the world, and what does NG have to do with it? Before I start, the core motivation. It's about this: the Munich Tatort. The TV episode was filmed here. The actors are not in the room here. But this was an interesting story, because everything was turned upside down while the episode was being shot and nothing worked. And in the background you see the SuperMUC. That was quite a story. I don't want to spoil the plot, but it was about artificial intelligence, and the interesting thing was what happened afterwards. As soon as the episode was broadcast, the next day the first phone calls came in. The first calls were from some newspaper that asked: what the hell are you doing there in Garching? Then came questions like: how dangerous is this? What are you doing? What else can you do with this apparatus? Those are things that made me think — and many people in this room are in the same situation — that hardly anybody understands what we are doing. It's the same with the SuperMUC. You might be the experts for the future of digitalisation, you are closer to it, but it's about time you took a closer look at what such a supercomputer, what the SuperMUC, actually is.
And those of you who didn't see the Tatort might have seen the movie about Edward Snowden. In that movie the SuperMUC appears too, because they were not allowed to film at the NSA, and so they asked us whether they could shoot the movie at our place. There are some scenes where the SuperMUC is playing in the background, and scenes where our sysadmins, the people who administer the system, are running through the picture as cameos. That gives the movie more authenticity. One of the pictures in the Snowden movie was this one — I have to say that for copyright reasons I took this picture myself, it's not from the movie. You see the same thing in the movie, but there the lights on top are off and it looks more futuristic. What you see is SuperMUC Phase 1 and Phase 2. Phase 1 is what you see in the background, and the two rows in the front are Phase 2. The room has 1,200 square metres, and it's pretty full; the empty space is what was used for the Tatort episode. The interesting thing is the performance. Phase 1, on 600 square metres, delivers 3.2 petaflops, and the two rows in front deliver 3.6 petaflops. Everybody here is supposed to know what a petaflop is — he's even wearing the petaflop T-shirt. It's 10 to the 15th floating-point operations per second, a 1 with 15 zeros: that is 1 petaflop of computational power. Phase 1 is 3.2 with 15 zeros — strictly speaking with 14 zeros, because of the decimal point. Okay, that's what these machines can calculate. And when the media call and ask what that means, you can only say: millions of billions of calculations per second — that is what these devices are capable of. The interesting thing is that in 2012, half of the room in the background gave 3.2 petaflops, and in 2015 the two rows in front gave 3.6 petaflops. So within three years we needed only a quarter of the room and a third of the electricity and got the same computational power.
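To make these orders of magnitude a bit more concrete, here is a small back-of-the-envelope sketch in Python — a hypothetical illustration using only the figures quoted above (the world-population figure of roughly 7.6 billion for 2018 is my own assumption):

```python
PETA = 10 ** 15  # 1 petaflop/s = 10^15 floating-point operations per second

phase1 = 3.2 * PETA  # SuperMUC Phase 1 (2012), roughly 600 square metres
phase2 = 3.6 * PETA  # SuperMUC Phase 2 (2015), just two rows of racks

# Written out, 3.2 petaflop/s is a 3, a 2 and then 14 more zeros:
print(f"{phase1:,.0f} flop/s")

# Even shared among everyone on the planet (~7.6 billion people),
# each person's share is still enormous:
per_person = phase1 / 7.6e9
print(f"about {per_person:,.0f} flop/s per person")
```

The 2012-versus-2015 comparison stands out the same way: roughly the same petaflops from a quarter of the floor space and a third of the electricity.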
And that's the increase in computational power, and that's the reason why we keep getting new machines: there is a point where it's cheaper to buy the new machine than to keep operating the old one. We see in this example that the computational power comes from the compute cores — 230,000 compute cores. The intention is that these cores calculate collectively, in parallel, that applications use everything that is available for large calculations, and there are memory requirements and power requirements to match. In this case, the power consumption is that of a small city with 30,000 households. The free space in the room is where the Tatort was filmed, and that is where we put the SuperMUC-NG, which is at the moment Germany's fastest computer: 26.7 petaflops and about 311,000 compute cores, which should all work together and be used by the applications. Main memory is 307 terabytes: all the digitised pages of the Bavarian State Library would comfortably fit in there. Before we go into the details, some background. Why do we need such a system at all? Why did we decide to buy it? There is the Gauss Centre for Supercomputing, an organisation created for national supercomputing, consisting of the HLRS in Stuttgart, the Jülich Supercomputing Centre and the LRZ in Munich. It provides the scientific community with computing power of the highest performance class. There are administrative agreements between the federal government and the states, so the funding is split half and half between Bavaria and the national level. If you look closely, this is the project proposal for the time range 2017 to 2020; in total it's a sum of 450 million euros. We are tasked with building these highest-performance computers in order to support the kind of research that has very high computing demands. We also have to take care that we select the projects that will have the ability to run on that system.
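As a plausibility check on the 26.7-petaflop figure, one can sketch how a theoretical peak (Rpeak) is usually derived: cores × clock rate × floating-point operations per cycle. The core count below is the figure published for SuperMUC-NG; the sustained AVX-512 clock and the 32 flop/cycle are my own assumptions for this processor class, not numbers from the talk:

```python
cores = 311_040          # total compute cores of SuperMUC-NG
avx512_clock_hz = 2.7e9  # assumed sustained AVX-512 clock (hypothetical)
flops_per_cycle = 32     # assumed: 2 FMA units x 8 doubles x 2 ops per FMA

rpeak = cores * avx512_clock_hz * flops_per_cycle
print(f"theoretical peak: {rpeak / 10**15:.1f} petaflop/s")
```

This lands in the neighbourhood of the quoted peak; the measured LINPACK value (Rmax) is always lower, since no real code keeps every floating-point unit busy in every cycle.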
So it's available to all German researchers, and they can propose projects to run on it. There is a board with scientific members that selects the projects that will be allowed to run. So there are specific requirements for the systems. If you look at the LRZ, the organisation that runs the system, it should support a fairly broad spectrum of applications. The specific areas we look at are astrophysics, the geosciences and life sciences, the environmental sciences, as well as various areas of physics, fluid mechanics and chemistry. So let's take a closer look, and what I want to do is shed some light on certain aspects. In the past there were students who did it as a job: they plugged together all the hardware, installed some software and then ran the software on it. But here there is a bit more process around it and more requirements. In 2016 we started with a proposal on a platform called SimApp — and if you think about it, the system is already installed and we are now two years past that timeline. Looking back, in 2015 we installed the older system, and the result is that basically, the moment we have installed a system, we already have to think about designing and procuring the next one. So we went into a dialogue with various partners to fix the legal requirements around this. We also have to put down the criteria and guidelines in advance, before constructing the system, to show what we are going to ask for. There are NDAs involved, and it also has to match what we have in terms of space requirements and cooling systems. So, on the 23rd of January the first applicants had to bring proof of their financial standing and their economic capability, and they also had to name a reference installation, to prove to us that they can handle the requirements for energy efficiency and cooling.
So, the question is: who is actually capable of constructing such a system? There is only a very small number of companies around the world who can do this kind of thing. In the first round we looked at five companies that are capable of it. We basically met with all of the applicants three times before they gave us a proposal; this is a very interesting process that we went through. It took a lot of human capital to select the final contender — there were at least 25 people involved at any given time. In the second round, after we had gone through all three rounds with the first five companies, we narrowed it down to two, and both of them had the chance to give us their best offer. And then there is this description of goods and services. I chose three pages that I wanted to show you. They have different colours: red is mandatory; yellow is important; green marks target criteria — nice to have, but not that important. So basically this is a whole list of questions that each vendor has to answer; we want a complete description as an answer to every question. For example, this one here relates to cooling. We get a complete folder full of answers every time we put out one of these catalogues, and there are a few legal requirements around this, especially about antitrust and market protection. We are also looking at total cost of ownership. We said about 64 million. Sounds like a lot of money, but it's total cost of ownership: the hardware, the system, the software and the maintenance all have to be included, and it may not go over the limit that we set. So it's not possible that after the fact they come around and say, okay, we need a little more — this really is a strict requirement. A vendor that is making a proposal really has to calculate well, so that whatever is left after the purchase is really enough for all of this.
Otherwise the companies have to pay for it out of their own pocket. This is a little different from buying a laptop, obviously. We need the maximum performance for the budget that we have. There is only going to be one vendor, there are no side offers, and if necessary they are forced to work together. We said Q3 2018 to Q4 2019, so they still had some time to get their act together and maybe form a partnership; often it's best for them to deliver as early as possible. There is a whole competition going on between the vendors, and we do measurements to prove the capabilities of the system; it has to be tested with regard to its functional capabilities. There are also questions about the proposal, where a vendor can ask us if they don't understand the questions we are asking them. We then have to replicate that question and our answer to all the other vendors, so that they are all in the loop. So we really try to make everything as clear and understandable as possible, so there's less effort for us. In the end, hopefully, everybody is happy and we sign a contract — you can see it in the picture, I'm holding it in my hand; that's the contract that I had to sign, and hopefully every detail is in there. That was the 14th of December 2017, so in total it took us a year to get this done. Here you can see the green InfiniBand cables connecting it all. Those are the numbers that we got, and we are pretty happy about them — those are the technical specs of the final system. Now we are obviously interested in how we compare to the other fastest systems in the world, so let's look at them.
There is a list called the Top500, the 500 fastest supercomputers in the world, and these are the first 10 spots on the list, so there are obviously 490 more further down. It is updated twice a year, once in June and once in November, at the supercomputing conferences — interestingly, the conference has about 13,000 participants, so it is smaller than this 35C3 here. This is the first system on the list, the fastest supercomputer in the world. It's called Summit, it has an IBM POWER9 architecture and NVIDIA Volta GPUs, and in total it has 2.4 million cores. It has a theoretical maximum peak of about 200,000 teraflops — but that is only if the computer did nothing but calculate all the time, which is of course unrealistic, so this number is purely theoretical, as if all the cores really did nothing but compute. The second value is the lower one, the Rmax; it's an empirically measured value, the maximum performance achieved in the LINPACK benchmark. Oak Ridge National Laboratory is one of the American supercomputing centres of the DOE, the Department of Energy; they have their codes, and we know these codes. Also interesting is the power consumption: it takes about 10 megawatts, and in Tennessee electricity costs about a sixth of what it would cost here, so for them it's not that important; for us this would be a lot, just in running costs. Now let's look at the first five computers. There is a competition, a race: in the current list, places one and two go to the US, three and four to China, and then comes the Swiss computer. Those are prestige projects, and presidents like the current American one are inclined to join in this competition. The Department of Energy has the supercomputers, and they have no problem getting money — if they did environmental science, it wouldn't be as easy to get money. So they already have a nice competition going. The computer on the third rank has a Sunway processor, which is an interesting processor, and here you see the connection to the
net politics, if you listen to the debates: there is a Chinese side to this story. The Chinese would have liked to build the TaihuLight with Intel processors, but the American president said no, we have an export ban. So China just built their own processor, and back then nobody believed that would be possible, but they pulled it off within four years, and with matching computational power — you see how important competition is. And if you look at the fifth place, there is a conspiracy theory that our colleagues in Switzerland added two more racks with additional graphics boards to get to 21.23 petaflops — before, they had about 19.3. The malicious assumption is that when they saw that the SuperMUC-NG would appear on the next list with 19.5, they wanted to stay ahead of it. This picture here is from the award ceremony at the supercomputing conference; the green annotations mark the changes in the list, and it says that in the top ten our system counts as the third Chinese one, because it was provided by the Lenovo company. In truth we have a contract with Intel, but in that contract it says that Lenovo is listed, not Intel — those are contractual details, and I can live with this being counted as a Chinese computer. Now, if we look closer at our system, on rank 8, we see: Lenovo, about 311,000 cores, of which we only used about 305,000 for the LINPACK run, a theoretical peak performance of 26.8 petaflops, and 19.5 petaflops reached in real life. We don't have power measurements yet, we haven't done that; there are still improvements to be made, this is not the end of what we did, we still need to make some changes. We talked a lot about LINPACK — what is LINPACK? LINPACK is a solver for a linear system of equations, from Jack Dongarra. You see a linear system of equations at the top; it is a benchmark using 64-bit floating-point
operations, and the accompanying paper describes in detail how that maps onto such a computer. If we scale this up to many processors, we have to look at Amdahl's law — the speedup is limited by the sequential part — and at Gustafson's law — we need to scale the size of the problem, the amount of data, with the size of the computer. We are in the situation that it makes no sense to build a computer with more processors unless we have more data too. You can download this list as an Excel table, and you can see the size of the problem: Nmax is the number of variables that have to be solved for. And now, if we look further, this is the do-it-yourself round: you can try this on your own smartphone, the links are in the description, and you can install it. This is on an iPhone 6, my girlfriend's in this case: it has 7 gigaflops of computational power with the same algorithm, so we have the possibility to compare what the SuperMUC is doing with what we have in our pocket. The interesting thing is the comparability: we can also look into the past and compare this to the fastest computer of 1988, which had 2.6 gigaflops — the iPhone is almost three times faster than the fastest computer of 1988. Some of you will not even have been born in 1988; you see how fascinating this development is. We also see the increase in computational power in the curves that show how computers become faster, in floating-point operations per second: we see exponential growth — this is a log scale. In the 25 years since the beginning of the list we had an increase by a factor of 2 million; that ties into the previous talk, we are always getting faster. There is Moore's law — I am going to skip this. We also see this exponential growth in the parallel computers; that is what the Top500 list shows. It is interesting to see why they get faster: the number of cores is growing. If we look at the list of 1993, systems had between 10 and 100
processors; if we look at the list of 2018, all the computers have about 100,000 cores, and the biggest ones correspondingly more. We can also look at the accelerators — there is a chart here — and many of the supercomputers work with GPUs, many use accelerators, distributed accordingly; the most popular one is the Tesla P100 from Nvidia. The interesting thing is that the SuperMUC-NG does not have accelerators: those are just pure Xeon processors, Intel Xeon Platinum 8174 processors. They have some special properties — you can go up to the maximum clock frequency, because we have water cooling and therefore good cooling behaviour. They are top of the line, but in the end it is nothing other than the processor in a laptop computer. That has the advantage that you can simply scale up the codes from your laptop — you could theoretically run Windows on the SuperMUC, though that wouldn't match the performance any more. The interesting part is the water cooling. Here we see one of the boards: they are double boards, two boards with two processors each, and these red and blue pipes are the water pipes. If we look at it from the back side, we see the water connectors, in and out, and if we look at the whole rack, we see how it is piped, how the water is routed. We don't put in normal cold water: usually one cools with water at 8 to 12 degrees centigrade, but the SuperMUC-NG is cooled with water at a temperature between 40 and 50 degrees. Why do we want to cool at 40 or 50 degrees? Water at that temperature still has very nice cooling properties, and the processor is still much hotter than 50 degrees — as long as the temperature difference is great enough, there is a cooling effect. The question is, if the water comes out at 60 or 70 degrees on the other side, how much energy do we need to cool it back down from 60 to 40 degrees? And if we think about what it is like outside: to bring 60-degree water down against the outside air we need zero energy, no cooling effort,
because we just cool it down with the ambient temperature. This is one of the cooling distribution units — we need 3,000 litres of water per hour per rack. It is also clear that the cooling unit is twice as high as the racks, just to fit all the infrastructure in, and the raised floor has a height of 1.80 metres, so that it can all be connected up. We do a few more tricks too: we don't need fans per node, which saves a lot of electricity, and we can tune the jobs — we do frequency scaling, adjusting the clock frequency so that it is optimal for the code that we run; there are heuristics that tell us how. And we gain about 15 per cent by cooling with ambient-temperature water. Here we see the Leibniz Supercomputing Centre with its five buildings; there is this cube, 10,000 square metres, five floors, and on the roof the boxes which provide 2 megawatts of cooling. What is important here — this is the cross-section of the building, from the basement where the transformers are, up to the high-performance computer — is that all of this has to play together: whatever we do needs to be a holistic approach. We developed something where we optimise not just the power consumption and not just the system software, but also the application, which determines what power is required, and the building is designed to respond to that power requirement. And this is what the demand looks like: the power consumption for one run of LINPACK, started around 8 pm and running until 9 am. We start at only 800 kilowatts, with just the operating system running; when the job is started it goes from 800 kilowatts up to 3 megawatts, and on top of that comes the cooling, so we need 5 megawatts, and that is what the infrastructure has to deliver and sustain. Once the job is running it ramps up and then comes down again, and since LINPACK is the peak load, we can watch this curve of the power consumption. Seen zoomed out, we draw quite an odd pattern of power over time, and there is a lot of optimisation
potential. On the other hand, LINPACK is also a reliability test for the infrastructure. Here you see a distribution box with the fuses; you also see the two power connections to the left and the right — left the closed one, right the open one. What happened is that during one of these LINPACK runs the box ended up looking like this: you can see that the top of the lid has actually melted from the heat that was produced in that box. Why does something like that happen? It also happens at home sometimes, if you wire things up with the wrong materials. Here it happened because the human who installed it didn't tighten the screws, so you had arcing, and that heated up the box and caused the failure. LINPACK is the benchmark for finding such problems — it is kind of a secret that we use LINPACK for failure checks. These are the transformers; this is transformer number 11 of our 12 transformers, and this happened at the open house, which was interesting. I went out there at half past eight, thinking most of the people were over in the other building, and there was a massive loss of power, caused by transformer number 11. If you look at the sign, it is rated 1,600 kVA, and if we look at the log files, we had loaded the transformer with about 1,400 to 1,500 kVA — so why did it fail below its rating? We learned something: we learned that the dust that had gathered, even though we kept to the maintenance intervals, made the transformer so hot that it switched itself off. Then the other transformers tried to compensate, and they couldn't, and the whole building dropped its power — at 8 o'clock during the open house. That kept us busy until 2 am. There is also a diesel generator for the uninterruptible power supply, which is also interesting if you look at the whole system. Now, about this LINPACK: who cares about LINPACK, who cares whether this is position 8? It's nice to say that we are in the ranking, and LINPACK tells us where the
state of the art of the technology is, and we understand better how the system behaves under peak load. If we want, we can also use a different benchmark — on the BFS benchmark, for example, we are on rank 5 — but that's not it either. What we really want is to provide a system that the scientists can work with. It is nice to have this result in the LINPACK, but what we really want is that the scientists are able to do things that they couldn't do before. This is one of the examples; it is about phylogenetic trees, from Alexis Stamatakis — sorry — from Heidelberg. It is about insects and their genomes. His interest is how these insects are related: can we find the relations between the different species of insects? So he does sequencing, then alignment, and then he tries to build trees, so-called phylogenetic trees. Because these are so many different data points in the genome, you need a supercomputer for this: the number of possible trees for 150 species is a number with 301 digits — just for 150 different species. The scientist was literally waiting for the SuperMUC to arrive, because he knew he had the code and there was no computer that could run it, so he had to wait for the SuperMUC in order to solve this scientific problem. It is not so much about the raw performance in itself, but the data that he is crunching is so huge, and he needed to bring all of it together to solve this scientific problem. And what's in it for us? Well, not much really — we just have to supply him with the capability and support him. But he was able to publish a paper in Science, and that is the actual thing that we are after: we did our job because we helped him get there, and he did it on our computer. Then he said, if it works for the bugs, maybe it works for the birds — so he got another one out of it. This is not so much about money or power; this is about getting capabilities
that we didn't have before. There was even a new postage stamp issued about simulation, about species and the phylogenetics of species, and about astrophysics — we provide the scientists with the means to develop new methods. Here is another nice example, for which we received several awards; for instance, we were in the finals for a high-performance computing prize. Two different groups worked together, the geophysics group from the LMU and the institute for informatics from the TUM. What they did was simulate a volcano: they looked at the seismic waves of an earthquake within the volcano. The order of magnitude we are talking about here is 1.4 petaflops on 150,000 cores — that is about 44.5 per cent of the peak performance. So in their normal production run they got almost 45 per cent of peak, which is remarkable. And this is the point where we ask: why are we doing this in Germany at all? We could basically go anywhere and compute there, for example at Oak Ridge — we could go there and just have them calculate our things. But what we can't get there is the whole machine, because they obviously have their own jobs. The differentiator for us is that we can actually provide the whole machine to someone; that's why we are doing it here locally. If somebody really has something important, he can't do that anywhere else at this order of magnitude — this is why we make more progress this way. If I had another hour, I could show you more examples, but I don't. The nice thing about the examples is that they are all really different; here are more applications, where you can see the whole spectrum of things that can be done: computational fluid dynamics, solid-state physics, geophysics, materials science and so on. It is really interesting to look at the distribution of the computing time across the different disciplines: you can see that fluid mechanics needs the first third, physics the next third, and then the material
sciences come next; informatics is really small in comparison — that covers things like AI and other areas that are just starting to grow within computer science. For me it is really interesting to see all the different applications that these other disciplines are using us for. If you analyse what comes out of this, you see how many different disciplines are served, and what you can see here: we have computed 7.6 billion core hours, 5.6 million jobs were processed, and about 2,000 researchers are our customers. The breadth of applications that we see is unique worldwide. What you see here is our report on the whole system — you can scan the code if you want to look at it; it shows all the different applications and results that come out of this system. It's really interesting, you can download it if that kind of thing interests you. There was also a science symposium recently, where we showed the applications of the future; there are really interesting new projects coming up, and you can look them up on the website. So, going back to the Top500 list and the race for the fastest system: we do actually get something very useful out of it with regard to the development of new technology and computing power, but in the end this is something like Formula One — the fastest computer is about as relevant as who won the last race; it is not something that all of science profits from directly. So we have to put it into relation, into the perspective of its actual effect. It might be nice to go for a drive in a Formula One car, but — I have a friend who lives in Ho Chi Minh City, and he sent me this picture yesterday; this is the Konghoh street. When I visit him, we always make our trips by moped; with a car you have no chance, you just have to weave your way through, and you can do things like this — basically this is the normal way to get to your office over there. It is cheaper and faster than if
they were taking a car. It's all about relation — you have to put it into the perspective of what you really want to do. And this brings me to my last remarks, the things I wanted to talk about but won't have time for — I knew this in advance, obviously. What's the next step? We are all standing in front of the next barrier beyond the 10 to the 15: the next thing is exascale, the exaflop barrier, and after that there's the zettaflop. Who will reach it first — China, the USA, or maybe Japan? They have already shown once, with the Earth Simulator, that they can do this kind of thing. Next: what about a European high-performance computing system — is the EU Commission able to get into the top 3, and do we really need that? Then there's the European Processor Initiative: given, for example, all the security issues that we have with the current generation of processors, should Europe build its own processor, and how would that relate to what we do? And the most important question for us: what are we going to do with the power that is dissipated by the system? For one, we are heating our own building, obviously, but that uses only about 0.5 per cent of the dissipated energy, so we still have about 99.5 per cent with no use for it. There's the idea of using an adsorption cooling machine, and then there's the question: how do we make beer from this, with the hot water from the SuperMUC? This is a very Bavarian answer to the question of what to do with the heat. It's a very interesting project — we have already thought about the labels, so we are quite far along on that question. And that's the end from me; the slides are already uploaded, so if you're interested, just look at them, and if you have questions, just ask. — Thanks a lot for this fascinating talk, Dieter Kranzlmüller. If you have questions, please use the microphones 1, 2, 3, 4, 5, or the internet. — Damn, I thought I could ask my question first. Thanks for the talk. Why did you decide
completely against a GPU system? — That's a good question. We didn't decide against it; we specified what needs to be computed on the system. There is a set of codes that already run, that's what the vendors got, and they saw what has to run on it. Of the five vendors there were four that proposed accelerators, but the best performance we were offered was without accelerators, and that's simply what came out of the competition. I'm just as surprised as you are that it turned out like that. — Thanks for the interesting talk. My question: do really only people from science and education run things on it, or is there also military? — Well, it's science, it's research and teaching, so it needs to come from research or teaching. I would return the question: is there anything in military research that is scientifically relevant? — I have two questions — no, just one. Why did the guy with the bugs wait, and why didn't he just use a different computer? — The SuperMUC was the first computer on which he could compute this, with all the right properties: you need an interplay of computational power, memory and connectivity, and he knew that this was coming and that it would have that capacity. You could possibly compute that on larger systems, but the efficiency is a question; this was the best system for him to do it with. — Thank you. The internet has a question: how much do the users of the systems have to know about the architecture and adjust their programs in order to maximise the performance they get from the system, and are there differences in the performance they get? — To the second question: the best users are the physicists — that wasn't serious. But seriously, there are no metrics; it depends. There are programmers with more talent and more brain capacity who do better, but there is no metric for who is doing better or not. To the first question: you need to know a lot. A proposal is not only evaluated on how it meets the scientific requirements, how scientific it is; it also has to fit the hardware architecture. So if somebody
submits a code, they need to prove that it also scales, because otherwise it is a waste of money. All the factors have to match so that the system is used accordingly. And yes, it is a lot of effort; we try to cover it with a relatively large crew that gets together with each team and works on the code with them.

Question: Thanks for the talk again. You were saying that the system is cooled with water. What about the transformers? They also dissipate power.

Answer: That's right. At SuperMUC we still have fans; you can see that on one of the pictures, where in-row cooler fans cool the network components and the hard disks. Here, in the island in the middle, are the fans, and that is still a disadvantage. We told the vendors that the next system has to address this as well. We also have a small cluster, CooLMUC-3, which dissipates 97% of its power through water: the power supplies sit in a cage with openings for the water to go through, so there we already cool all the components with water. A very good question, thank you.

Question: You were pretty open about what is being computed on your machine, but do you have an idea what is being computed on the other systems in the top 10?

Answer: If we look at the first four systems, it is the Department of Energy, or systems associated with the Chinese military and military academies. We have an idea of what they do, but we don't really know. With the Swiss system you can tell from what I just said that it is climate computations, because they have a special code in Lugano. So it is very different for the different systems and their target architectures. The nice thing about our system is its breadth: we can run a whole spectrum of applications, and no other system has that.

Question: Back to number 2. In relation to the earlier question about what is being computed on the other top-10 systems: at the beginning of the talk you described the bidding process for the vendors. What are the proposals
being done for, what information is in them, and what are you going to get from that?

Answer: Let's take an example. The moment we have signed the contract, we already prepare the next system; we are already thinking conceptually about the 2024 system. In order to do that, we need to estimate what will be available in 2024, so we have non-disclosure agreements with Intel and IBM to get insight into their roadmaps beyond what is publicly known. We need to find out what processors are coming for the next design. It is an interesting problem; there are people who cannot talk to us, but it is necessary that we are very involved in what we need as a concept, and for that we need to know the roadmaps and look into the future. So this is information about the companies' plans, not hand-written code. It is very open, it is a nice community, and if you go very deep into a code, then when it gets optimized, we get involved.

No question from the internet. Next: How large is the proportion, how much of what you are doing is political, and how much is scientific?

Answer: That's a very good question. Officially, only 25% of my time is at the computing centre; the rest of the time I am a university teacher for distributed systems. And 25% of my working time is four days a week, so you get an impression; I wouldn't want to add it up, and nobody will complain. I have a dream job, and an understanding family that allows me to be here. I am here out of scientific interest, and because of the privacy and ethics questions that also come with the job. And of course I also have to be a bit of a science manager; that also means working through legal paragraphs, and there is nothing more sleep-inducing than legal paragraphs. I don't know if other people feel the same.

Question: Hi, I am interested in the operating costs that are part of the budget. How long is the computer system going to run, and what is going to happen to it afterwards?

Answer: That is very simple: it usually doesn't pay off to operate such a computer for longer than six years. SuperMUC was installed in 2012, and the day
after tomorrow it is getting switched off. At that point SuperMUC-NG takes over the load; that is the next processor generation, with the difficulty of porting the codes. We have a system here that is simply too expensive to operate any longer. All those contracts contain return options, so the companies can take the hardware back. Compared to that, the system before SuperMUC was scrapped in place and the valuable materials were recycled; that is cheaper than just continuing to operate it. (Moderator: someone said they have a power procedure at home; there might be an ideal opportunity for you.)

Question: Hi, how often does someone have to walk up to a rack and do some maintenance on it?

Answer: The double cube is basically a dark data centre; there are no offices and nobody is in there. With SuperMUC-NG it is like this: a lot has to be done in the installation phase, but in normal operation the effort goes down. With this number of components, the mean time between failures at SuperMUC was less than one hour, so on average about one component failure per hour. But we can do most of it from the desk, so roughly once a day somebody needs to walk over and do maintenance in place.

Question: How loud is it in there? Just interested.

Answer: Good question. This is actually the quietest system, because there are no fans. We are at the point where we give interviews in the aisles, and a normal camera cancels out the background noise. You don't get that in any other data centre.

Question: Thanks for the talk. You showed that a very long-running computation was using only 50% of the first phase. Is it actually worth it to push up the peak performance by 10% if it is not really used that much? Is it used effectively at all?

Answer: We do use it correctly, even if a code only reaches 10% of the maximum power; we are all doing things in science that couldn't have been done before. The systems are now so big that you make the software and the data you are using so large that they can only just be computed on the system. And on
the volcano, certain effects; see the Heiner Igel talk in the science symposium, because he cannot compute those resolutions any other way. That means the theoretical maximum peak performance is a fictitious number that is useless in principle, and the 44% utilization is relatively high compared to what runs elsewhere. It is also not about consciously filling the system to maximum capacity. If somebody only needs 300 cores out of the 311,000, we don't fill the rest up at any price; we take the holes into account so that the next user who can fill them gets them. We don't squeeze it out like Amazon, because they are revenue-based; here it is about science.

And we close the talk with this question. A lot of applause! That is also our finishing line. This was...
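[Editorial sketch] The last answer describes leaving "holes" in the machine rather than packing it to capacity, so that a later job that fits a hole can still run. This is the idea behind backfill scheduling in HPC batch systems. The toy below is a minimal illustration of that policy under simplifying assumptions (no job runtimes or reservations, a single pool of cores); it is not LRZ's actual scheduler, which would be a production batch system.

```python
# Toy "leave holes" scheduler: start queued jobs in priority order while
# they fit; the first job that does NOT fit keeps its priority (it is not
# skipped), and smaller jobs behind it may backfill into the free cores.
# Simplification: real backfill also checks that a backfilled job will
# finish before the blocked job's reservation starts; we ignore time here.

TOTAL_CORES = 311_000  # approximate core count mentioned in the talk


def schedule(queue, free_cores=TOTAL_CORES):
    """Return (jobs started now, cores left free).

    queue: list of (job_name, cores_needed) in priority order.
    """
    started = []
    blocked = None  # highest-priority job that did not fit
    for name, cores in queue:
        if blocked is None and cores <= free_cores:
            free_cores -= cores
            started.append(name)
        elif blocked is None:
            blocked = (name, cores)  # reserve its place in line
        elif cores <= free_cores:
            # backfill: fill a hole the blocked job cannot use anyway
            free_cores -= cores
            started.append(name)
    return started, free_cores


jobs = [("climate", 200_000), ("seismic", 150_000), ("md", 300)]
started, free = schedule(jobs)
# "climate" starts, "seismic" waits for more cores, "md" backfills.
```

With these illustrative numbers, the 300-core job runs immediately in the hole left next to the 200,000-core job instead of waiting behind the 150,000-core job, which matches the "next user who can fill the hole gets it" point from the answer.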
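[Editorial sketch] Earlier in the Q&A it was said that a submitted code "needs to prove that it also scales, because otherwise it is a waste of money." A minimal way to quantify that requirement is Amdahl's law: if only a fraction p of a program's runtime is parallelizable, the speedup on n cores is bounded by 1 / ((1 - p) + p / n). The numbers below are illustrative and not from the talk.

```python
# Amdahl's law: why machine time is wasted on a poorly scaling code.
# speedup(n) <= 1 / ((1 - p) + p / n) for parallel fraction p.

def amdahl_speedup(p: float, n: int) -> float:
    """Upper bound on speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.95, 0.99, 0.999):
    s = amdahl_speedup(p, 10_000)
    print(f"parallel fraction {p}: speedup on 10,000 cores <= {s:.0f}x")
# Even a 99.9% parallel code tops out near 909x on 10,000 cores, so the
# scaling proof demanded in the proposal review is not a formality.
```

This is why the review looks at demonstrated scaling on the target core counts, not just scientific merit: the serial fraction, not the machine, sets the ceiling.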