We'll be checking on your supercomputer solution here. So hi, who are you?

Hi, my name is Michael. I'm a UMass Boston student, a junior CS major, and I'm part of the MGHPCC team.

And who are these guys around you?

These are my teammates. Would you like to introduce yourselves?

My name is Donato. I'm a junior at the University of Massachusetts Boston, studying computer science.

Are you all in the same class?

My friend here is not from UMass Boston, but he is from one of the University of Massachusetts schools.

I'm Todd, a sophomore computer science and math major at UMass Lowell.

And who are you?

Hi, I'm a computer science senior at UMass Boston.

So what kind of stuff are you doing?

I'm here for the HPCG benchmark.

Benchmark? Is it showing on the screen?

Here's our current power consumption, which is pretty low at this point because we're running a lot of CPU jobs. During the benchmarking it was much higher, because we were running on the GPUs. That's when we really pushed the power load.

What do you work on?

I'm working on the Tersoff reproducibility challenge.

Is it on the screen right now?

Oh yeah, that's it.

And something else you're not allowed to show? Yeah.

So you're part of this competition right now? What are you going to win if you win?

Fame. Fame? Fame. And inverse fortune.

So what does Kurt Keville do compared with you?

Kurt was the former advisor for the team; he started the team many years ago. Historically the team has always incorporated students from the member institutions of the Massachusetts Green High Performance Computing Center, and the UMass institutions are part of that consortium.

So here's the competition and the stuff you're doing right now, but in general you're doing all kinds of projects, right? What kind of stuff are you working on?

This is basically, I'd say, a hobby. It's a way of life for us. Computing is just what we do.
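The HPCG benchmark mentioned above is built around a preconditioned conjugate gradient solve on a large sparse system. As a purely illustrative sketch (nothing like the benchmark's scale, sparsity, or multigrid preconditioner), here is a minimal unpreconditioned CG on a tiny made-up SPD system:

```python
# Minimal, illustrative conjugate gradient solve - the kernel family that the
# HPCG benchmark stresses. The real benchmark uses a large sparse 3D problem
# with a multigrid preconditioner; this toy uses a tiny dense SPD matrix.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for a symmetric positive definite A."""
    x = [0.0] * len(b)
    r = b[:]                       # residual r = b - A x (x starts at zero)
    p = r[:]
    rr = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rr_new = dot(r, r)
        if rr_new < tol * tol:
            break
        p = [ri + (rr_new / rr) * pi for ri, pi in zip(r, p)]
        rr = rr_new
    return x

# Tiny diagonally dominant (hence SPD) test system - toy values only.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = cg(A, b)
```

For a symmetric positive definite matrix of size n, CG converges in at most n iterations in exact arithmetic, so this 2x2 toy finishes almost immediately; the benchmark's interest is in how memory bandwidth limits the sparse matrix-vector products at scale.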
Even prior to this, we would meet weekly to discuss what's going on in the HPC community, what's going on in the sphere, and in what ways we can improve not only ourselves but also the applications that we're running and the methods that we use every day. So it's really fun. And the competition is a lot of work, but you don't really feel the work, because the majority of it is just stuff that you've been doing every day.

Do you think it's fun too?

Yes, definitely.

What are you doing?

I'm not working right now, because my part has been done.

Your work has been done, so you're just having fun?

Yes, I enjoy it.

And is he working on what you're doing right now?

We're working on it. Yeah.

Have you seen what the competitors are doing? How many teams are there around here?

Twelve, sixteen.

Do you think they have something that's better than yours?

Yeah, some of them have pretty impressive hardware. But a lot of things can go wrong: you can do a certain step wrong, or you can be away when the power goes off, which is part of the challenge. There's a shutdown that takes place. So there's a lot that can go wrong, and there's more to the challenge than just the hardware. That's hopefully something that we will try to focus on.

What are the rules of this competition? How does it work, and how do you get the challenges?

Well, the main thing is that all teams are limited to 3,000 watts. That's the way to level the playing field, so that people don't bring incredibly gigantic systems. And it's actually interesting, because that artificial limit on wattage forces teams to take interesting approaches: to concentrate on flops per watt rather than on what the full output of a single node can be.
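The flops-per-watt reasoning under the 3,000 W cap can be sketched as a small search over clock settings. All the numbers below are hypothetical placeholders, not real measurements of any team's hardware:

```python
# Hedged sketch: pick the GPU clock that maximizes total flops while keeping
# all GPUs inside a fixed power budget. The (clock, watts, Tflop/s) samples
# and the host overhead are invented illustrative values, not measurements.

BUDGET_W = 3000          # competition-wide power cap
N_GPUS = 16              # GPUs the team brought

# (core clock in MHz, watts per GPU, Tflop/s per GPU) - hypothetical samples
samples = [
    (600,  100, 4.0),
    (800,  125, 5.5),
    (1000, 150, 6.2),
    (1200, 200, 6.8),
    (1380, 250, 7.0),    # full frequency: most flops per GPU, worst flops/W
]

def best_clock(samples, budget_w, n_gpus, host_overhead_w=500):
    """Return the sample whose n_gpus fit in the budget with the most flops."""
    usable = budget_w - host_overhead_w
    feasible = [(f, w, t) for (f, w, t) in samples if w * n_gpus <= usable]
    # Because the power scaling is non-linear, the winner is usually a
    # reduced clock, not the full frequency.
    return max(feasible, key=lambda s: s[2])

clock, watts, tflops = best_clock(samples, BUDGET_W, N_GPUS)
```

In practice teams would gather such samples by locking the GPU clocks (for instance with NVIDIA's clock-management tooling) and measuring power draw and benchmark output at each setting.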
For example, you'll see many teams here like us: we brought 16 GPUs, but we didn't necessarily run them at full frequency, to take advantage of the fact that the power scaling is definitely not linear. The sweet spot on the NVIDIA V100 seems to be a core clock of around 800 to 1,000 MHz, which draws about half the wattage, which is beautiful. And I think we get about 75 to 80 percent of the expected output.

So is the competition about getting as much performance as possible within a certain limit, but doing it in a nice way, showing off what you can do?

It's a mixture. The initial benchmarking applications, HPCG and HPL, are all about showing off the speed of your system. The later applications are more about understanding the domain science around the applications, and how to take advantage of various tweaks you can make. For example, maybe you want to change the precision level: if the accuracy doesn't go down too much, that will increase the processing speed of your workload, things like this. The throughput portion this year was HPL, HPCG, and the boron application, which provided more data than most teams will be able to finish in the time of the competition.

When does the competition finish?

At 5:30 tomorrow.

So will you finally have time to go around the halls, or have you had time already?

We try to make sure that people aren't stuck here. The team is six people; there are four of us here now, so two are off having fun, going to talks, enjoying themselves. We have another talk coming up around 4:15 that somebody's going to run off to.

So you're running a bunch of Intel and the TX2, the ARM solution there. How much ARM stuff do you do in general at the university?

For myself, this will be my intro to it. I'm pretty sure that this gentleman right here does quite a bit. You've done a bunch of stuff with ARM?

Yeah, ARM is pretty cool.
We've been using ARM to port heavy, high-performance applications to embedded systems: taking applications that would otherwise consume enormous amounts of power, or only run efficiently on large clusters, and bringing them down to the scale of computers that fit in your pocket. That's a tough problem, because you're low on resources, but it's also an important problem to solve, because a lot of the compute that's done with wearable tech, or even in your mobile devices...
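The precision trade-off mentioned earlier (dropping to a cheaper floating-point format when the accuracy loss is acceptable) can be illustrated in pure Python by emulating single precision with `struct` round-trips. The workload and the tolerance here are toy choices, not any actual competition application:

```python
# Hedged sketch of the precision trade-off: lower precision runs faster on
# GPUs (e.g. FP32 or FP16 instead of FP64), but you must verify that the
# accuracy loss stays within what the science allows. Single precision is
# emulated here via IEEE-754 round-trips; the workload is a toy dot product.
import struct

def to_f32(x):
    """Round a Python float (IEEE double) to the nearest IEEE single."""
    return struct.unpack('f', struct.pack('f', x))[0]

def dot_f64(xs, ys):
    acc = 0.0
    for x, y in zip(xs, ys):
        acc += x * y
    return acc

def dot_f32(xs, ys):
    # Round after every operation, as single-precision hardware would.
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = to_f32(acc + to_f32(to_f32(x) * to_f32(y)))
    return acc

xs = [1.0 / (i + 1) for i in range(10000)]   # toy data
ys = [1.0 for _ in xs]

exact = dot_f64(xs, ys)
approx = dot_f32(xs, ys)
rel_err = abs(approx - exact) / abs(exact)

# The tolerance is illustrative; a real workload would use whatever error
# bound the domain science requires.
acceptable = rel_err < 1e-2
```

If `acceptable` holds, the cheaper precision is an easy throughput win; otherwise the computation stays in double precision, or uses mixed-precision schemes that refine a low-precision result back to full accuracy.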