So let's check out the student cluster competition right here, a bunch of students working on supercomputing. Hi, so who are you? Which team are you? We're Georgia Tech. Georgia Tech. And what do you have going on over here? What is this? So we have two IBM Minsky POWER8 servers. Each machine in the cluster has two CPUs, each with ten cores, and we also have four NVIDIA P100 GPUs in each machine. So it delivers a lot of performance without the power being too high? Well, that was one of the problems for us, because if we go full throttle, it's going to exceed the power limit that the competition has set for us. All right, so do you think you're going to win? Yes. You're going to win, maybe. Okay, cool. Okay, let's check it out. Let's check out this team right here. Hey, hello. So who are you? Where are you from? I'm from Taiwan. Taiwan? Yeah. So this is National Tsing Hua University? Yes. So what is your project? What do you have here? What is this? This is, you mean the server? Yeah. What is it? Is this your supercomputer? Yes. And what is inside? Is it Intel or...? It's an S5BB from QCT, and it has two Intel Xeon 6130 processors per node and also four GPUs per node. And your team is working on some stuff to make it work? What are you doing? What's the challenge, the big challenge? Trying to complete the test cases. You're going to win? Yes. You're going to win, maybe. Okay, cool. Let's check these guys out right here. Hey, hey, what's up? So which team are you? We're from the University of Utah. Utah, and so what's your supercomputer? Who wants to introduce this one? Can you show what you have? I don't know if we're allowed to have people in the booth. So we have four nodes that are R740s, each with 16-core Xeon processors and 16 gigs of memory. Three of those have NVIDIA V100s. And then we also have a head node that's an R730. And what kind of work are you doing in the team to make it work?
What are the jobs you have all together here? So I was focusing on the benchmarks and then working on the mystery application, which is MPAS, and then the rest of the team, I guess you can introduce yourselves. Hey, so what are you doing? I'm currently working on Born and getting that up on the cloud. Is it going to work? No question. No question? What kind of stuff are you doing? I'm doing the reproducibility report. So we've reproduced the results from a paper that was presented last year. And what are you doing? I'm running Born on the cluster. Cluster? Yeah. All right. So do you think you're going to win the competition? I hope so. You hope so? Okay, cool. Good luck. And hey, hello. So who are you? Where are you from? I'm from China and my name is Robin Wang. It's the University of Science and Technology of China. Where from? From China. It's in Hubei province. Hubei? Hubei. Hubei province. So can you introduce this? What is this? Is this your supercomputer? Yeah. So what is inside? We've got a top GPU, the V100, and we've got two CPUs in each node. We have three nodes, so we have six CPUs, and each CPU has a lot of cores, which provides us with great performance. So what is your job? What do you do on this team? Which one? Who's doing what? Your software? What is this? LAMMPS. LAMMPS? Yes. So is it going to work? No. Is it going to work? Is it working? Yes. And how about these guys? What do they do? My apps are MrBayes and the mystery app, which is MPAS. What do you do? We are both working on MrBayes. MrBayes? Yeah. To optimize this software so it can run faster. So how do you optimize software? Is it easy or not so easy? It's not that easy. It's not that easy. The source code is so, so large. So why is the fastest supercomputer in the world in China? I think the main reason is that China is a growing country. And we think supercomputers can help us with both technology and science, which can make our lives better. Cool.
So you're looking forward to even bigger supercomputers in China, right? Yes. Are you going to work on them? Do you want to have a job in this or not? Maybe. Maybe. Maybe. So are you going to win the competition? We hope to win. We hope so. Okay. Okay, good luck. Okay. And we're jumping over here. Hey, can I jump in here? So who are you? Which team are you? Right there. Austin. Yeah. So where's your supercomputer? Right here. Right here. So what do you have inside? So this looks like some Dell EMC solutions. So do you have GPUs in there? Yeah, we have... well, running, we have eight NVIDIA V100s. In the whole thing we have ten, but we're only using eight. So it's cool to be in the city where Dell is, right? Yeah. So does that mean you work with Dell a lot, or...? Well, they are one of our sponsors, but we don't necessarily work very closely with them. So what kind of work are you doing during the competition? You guys and girls, what's going on? So what he and I are working on is running the mystery application that was given to us last night. It's called MPAS. What is that application about? What does it do? It simulates an atmosphere. So this is the plot that just came out of it. That's a simulation of a year. Yeah. A simulation of the atmosphere over a whole year? Yeah, over one year. And we're looking at the CO2 content eventually. But this is just... we're building up to it. And is it fun to work on this kind of stuff? Yes, it is really fun. What's your part? I'm working on MrBayes, which is like a phylogenetic application. It determines relationships between different species. So it's interesting stuff. So do you think you're going to win? We're trying our very best. You're working very hard, right? You're not going to sleep tonight? It's tomorrow, right, the deadline? Tomorrow at 5:00 or 5:30. You have lots of stuff to do still? Optimize everything? Cool, okay. So good luck. Thanks a lot. Thank you. Hey. Hey, hello. Good afternoon.
Hey, which team are you? We are Peking University. We came from Peking University. Beijing? Yeah. Where's your supercomputer? So what is inside here? [inaudible] So what is it? Does it work well? Is everything working? What are you working on right now? Yeah, we are running three applications, including LAMMPS... no, three applications: LAMMPS, Born, and MPAS. Everything is working? Your software, what do you do? I'm working on Born. Yes. So you're optimizing? Is it going to work? Yes, it works pretty well. Cool. All right. Good luck. Thank you. Thank you. Hey. Hey. Which team are we at here? Right here. Northeastern. Northeastern. So how's it going? Hey. So what's your supercomputer? What's inside? Oh, so we have a four-node system here using AMD EPYC CPUs and NVIDIA's Tesla V100 GPUs, as well as their P100 GPUs. So how's the AMD stuff going with the supercomputer? Is it pretty good? Yeah. The AMD EPYC CPUs are definitely really powerful and also have great performance per watt. So we're able to run all the cores at full power all the time and still not go over our power budget. It's really helping us so far in the data sets. So what's your role? What do you do on the team? Oh, so right now I'm working on the mystery app, which is MPAS. It's an atmosphere modeling application. We're still running the data sets for that, but we're actually almost done. And then I also worked on HPCG, which is one of the benchmarks we did on Monday. How about these guys? What do you do on the team? Hi. I'm working on HPL and the mystery application. Does it work? Yeah, it's running. We're almost done. The performance is good and the power consumption is not too high. Yeah, very good. Do you think you're going to win? We hope so. So what do you do? So I'm in charge of Born. I'm actually currently just submitting a bunch of Born shots to run in the cloud.
Is it okay if I check this out? So is it a bunch of mathematics, or how does it work, supercomputing? This is actually my homework. That's your homework. All right. But is it actually mathematics? There's a little bit. We're going to get interviewed on the apps we're doing, and part of that is knowing a little bit about how the math works behind the applications that we're running. Do you think you'll have a good chance to win? We'll try. We'll see what happens. All right. Good luck. Thanks a lot. Hey, can I jump in here? Can I interview you on camera? What's your supercomputer? What do you have in here? So we have two nodes. We have eight on each of the nodes, so in total we have sixteen. And it runs pretty well for a lot of the applications that we run on the team. So what's your team? Nanyang. Oh, yes. So where is that? We're from Singapore. Singapore? Yeah. So what do you each do? What's your job? Okay, so I'm the team captain. Right now I'm just helping with the supercomputer. Yeah. And he's doing MrBayes, which is a phylogeny application. Does it work? Is it going to work? Yeah. Yeah. So do you think you're going to win? We're pretty sure we're going to win. You're pretty sure? Yeah. Hopefully. That's cool. All right. Cool. Good luck. Thanks. See you, mate. And here we have the William Henry Harrison High School. Can I do a video? Absolutely. Yeah, sure. So where is your supercomputer? What's inside? Right here is our supercomputer. Okay. So it has two CPUs in each node, each with 12 cores, which gives us 24 cores per node and 48 cores across the entire cluster. It also has eight V100s split across the nodes: four on the first one and four on the second one. We also have InfiniBand cabling between the two nodes for communication. So is it good, the hardware? Is it good stuff? Yeah. I mean, it depends on the application. Some applications are CPU-bound.
Some applications need maybe a little more VRAM. Others need as much power for the GPUs as they can get. So it depends on the application. So where is William Henry Harrison High School? Where is that? It's in Indiana, near Purdue. It's about five minutes away from Purdue. So what are these guys doing? Who's doing what? He's running scaling studies for Tersoff. He is working on the mystery app. Something to do with the atmosphere? Yeah, the atmosphere application. These two are working on MrBayes, and he's working on the cloud component. So do you think you're going to win? I mean, this is our first year. It's also the first all-high-school team that's competed. So I don't think we're really expecting to win. We may be kind of competitive, but, you know... Is it cool, the supercomputing conference? Is it fun? Yes. Yeah? So do you think the supercomputing market is, like, exploding? Do you want to work in this in the future? I definitely think it's the future. It's definitely exploding in things like atmospheric simulations and such. They are very big and we can get a lot of information from them. So it's definitely continuing to expand, and it would definitely be something that would be interesting. So some people are just in computer science, but you're in, like, supercomputing science, right? Is it more interesting? Is it the cutting edge of computer science? I mean, yeah, there are always cooler things that'll happen on the hardware side. There are libraries and things that come out that help different areas. But in the end, this is really where those things get used, right? This stuff is what demands a lot of that innovation and, of course, has the means to pay for it. Are there lots of companies trying to give you jobs around here? Oh, no. Not ready? Not yet. I don't think we've quite reached the age where they really seek us.
It's more the college-age kids they're looking for as employees. Yeah, high school is below university, right? Right. At this point, it's the colleges that want us, and the job market that wants the colleges. We can still cross our fingers about it, then. Yeah, of course we can. We can still cross our fingers. Do you want to go directly to a big job? It would be fun. It would be fun, yeah. Or do you have to study for, like, ten years or something first? I think the reason I want to get into a big job right away is to get that college debt done. Yeah. All right. That's cool. Awesome. Good luck. I hope you win. Maybe it'll be cool, right? Thank you. Then you'll definitely come back if you win, right? Okay, cool. Okay, thanks a lot. Thanks, mate. Jumping over here. Hey, can I do a video at your booth? Can you come back in a couple of minutes? Okay, I'll be right back. What is this flag? Oh, let's see. There's a little horse. Okay, cool. Jumping right here. Here's another supercomputer. Hey. Hey, so who are you? What team is this? Tsinghua. Tsinghua. Yeah. What is this supercomputer there? They are all Dell servers. This is our sponsor, so... So it's Bitmain. Bitmain, yeah. So what is inside? Dell servers. Dell, right. Dell, Dell. All right. Cool. Are you going to win? Yeah. You think so? Okay, cool. Okay, thanks. It makes sense to get as many points down as possible. Hey, can I jump in here with the camera? Oh, sure. Cool. So which team are you? University of Illinois. University of Illinois. Can I check out your supercomputer? What do you have in there? What's your supercomputer? So we've got two nodes, two 1U nodes with dual Xeon processors, and then four of the NVIDIA V100s that they supplied per node.
So it's a pretty low-overhead but GPU-heavy cluster, right, which lets us perform really well on the benchmarks and really well on the applications that use the GPUs, all the while not sacrificing too heavily the performance of the CPUs. So it's within the power requirements? Yeah. Is there a lot of performance in there? I would say so, yeah. So one advantage we have with having pretty small nodes is that there's not a lot of overhead associated with the operation. And so we're able to push right up to that power limit and know that all of that power is going to the GPUs and making them run faster. Cool. Do you think you're going to win? Hopefully. Hope so. It's hard to say. There are a lot of good teams. Why are you going to win? What's special about the way you do things? Well, we have a lot of industry sponsors and experts that have been helping us through this whole process. And U of I is the home of the Blue Waters supercomputer, which is the largest academic supercomputer. So there's a lot of support that we have and a lot of expertise on our team. So I think we're definitely going to win. Cool. All right. Cool. Good luck. Thanks. Thanks. All right. We jump over here. So it's Friedrich-Alexander University of Munich. No, no. Erlangen, and the Technical University of Munich. Technical. Sorry. It's Friedrich-Alexander University in Erlangen-Nürnberg, and the Technical University of Munich. Cool. So it's definitely in Germany, right? Yes. And what's your supercomputer right here? Yeah. So what's inside? We've got two nodes with six V100s each. And each node also has 40 cores across two sockets. So is it Intel or NVIDIA? What are you? Yeah, Intel processors, but NVIDIA GPUs. And what is the job that you're doing? Which job does each of you have on the team? So what are you doing there? Making it work? Replication. Replication. So right here, is it going to work? Well, it's taking much longer.
Well, I'm working on the cloud component at the moment. So we're trying to get MrBayes running on the cloud. We have all the scripts set up; we're just going to start it right now. The internet is a little bit slow, so this is taking a little longer than it should. Isn't it supposed to have super-fast internet? They said they have terabits. But not for you, huh? Well, yeah. Are you on the Wi-Fi? No, no. It's just that there are too many people on it right now. It was really, really good when we started the competition, and it's probably going to be good again in the evening. There are just too many people right now. So do you think you're going to beat all these Chinese and American teams? Do you have a chance? We're definitely going to do really, really well on the applications. We had some issues when we were doing the benchmarks, though. So we're probably not going to win the overall prize. Probably not. But maybe. No. Well, when our system was delivered, it had the European power adapter in, so it took us a while to get that fixed, and we couldn't do as well in the benchmarks as we should have. Those are a pretty big part of the competition, so we probably can't get the overall high score. We're doing our best with the applications, and I think we're doing really well with those. So it should be okay, I guess. Cool. Is it fun to work with this? Yeah, sure. Computing is the future, right? Yeah. There's a lot of stuff that we've been seeing in the halls, some cool, interesting things. It's amazing, this conference. Okay, good luck. Okay, keep going. Hey, can I jump in here? Yeah, go ahead. So you're the San Diego State University team, right? Yes. So who wants to introduce the supercomputer? What do you have here? What's your hardware? So this is an IBM POWER8 architecture. We're running on five nodes. We have an Ethernet rack switch and a Mellanox InfiniBand rack switch. Now, for the competition, we decided to run strictly on four nodes.
That's the head node, which acts as the brain of our cluster; it distributes the jobs and coordinates between the compute nodes. The next three are our compute nodes. The first two are Firestone servers, which can each hold up to two PCIe NVIDIA GPUs. The third compute node can hold four NVLink GPUs. Combined, we have about eight GPUs. The last server is our storage server, but due to power constraints we decided to disconnect it for the competition. So this is IBM POWER? Yes, IBM POWER8. How does that perform compared to the other servers in the supercomputing world? Well, it runs pretty well. I would say the only drawback is the power consumption, because when we first built the cluster, our idle power was 2.2 kW. When we came here to the competition, we had to go through a lot of different scenarios and different strategies to get the power down, and our idle power now is 1.8 kW. So we really only brought it down by about 400 watts. I would say it runs great as far as the applications go. If we had a larger power threshold, we'd be able to take more advantage of our GPUs. It's very important to get a lot of performance within a low power budget. That's an important part of the supercomputing industry, right? Right. If you can bring it under a certain power, you can utilize more nodes and more GPUs and get better performance. So what's your role on the team, and what do these guys do? So I am the team captain and the sysadmin. I make sure that if anything goes wrong with the cluster, I'm able to bring it back up and make sure it's running fine. You can just fix anything that goes wrong? Yes. Kind of, yeah. The application that I worked on is the LINPACK benchmark. When I ran it for the competition, I could only use these two compute nodes, because the LINPACK benchmark is computationally intensive. It requires a lot of power to run.
So before we got to the competition, when I ran it on all eight GPUs, it got up to about 5.2 kilowatts with a score of 30 teraflops. So a pretty good score, but really high power. Coming to the competition, we had to sacrifice a lot of power, so I went down to four GPUs, and our submitted score for the competition was 17.24 teraflops. Do you think you have a good chance to win? We're not sure, considering all the other equipment that some of these teams have, but I think we're still going to do pretty well. Now... what do you guys do? This is Twan. He works on MrBayes, and he's running it on the cluster right now. Does it work? Yes, he's working on it right now. He was working on it during the competition, trying to get it to run on as many cores as possible. And this is Vaughn. Hi. He's working on the reproducibility challenge, and so far it's going very well. And this is Ryan. He's our power management person. He's done a great job developing scripts to monitor the power and bring it down so that we can work within the limits of the competition. And this is Sing. He worked on Born and the cloud. So he worked on probably, I would say, the two hardest applications. Why? Is it because you like working hard? It's fun. I just do it for fun. So how do you figure out who does what on the team? Well, this was decided during the summer. So somebody prefers to do something and they just do that? Well, when I started working on this during the summer, my advisor wanted me to work on the LINPACK benchmark. And then we started to recruit more team members, and everybody basically picked what sounded interesting to them. Cool. All right, so good luck. Good luck with the competition. Thank you. Appreciate it. Hey. Hey. Hey, can I jump in here? So which team are you? The Warsaw Team; we're the Poland one. So Warsaw and Lodz. So this is the Poland team, right? Yeah, it's the Poland team.
Well, we are called the Warsaw Team because we all live in Warsaw. One guy is studying at the University of Technology. So that's the thing. Why are you all out here? Because you've finished the work? Is it working? No, it's working. It's computing. Everything is calculating and so on. One guy is at the door and two guys are at the hotel sleeping, but they should be here shortly. And what's your supercomputer? What do you have in here? It's two Broadwell Xeons in each node, which means we have 28 cores on each node, 128 gigs of RAM, and two NVIDIA Voltas. Each has quite a lot of power. And what is the job each of you has? What do you do? I'm one of two guys responsible for hardware and software, to get everything running for the application guys. What do you do? She's our Born expert for the applications. One thing about Dominik: Dominik is our magic sysadmin, who is able to fix broken graphics cards and make them run. How do you do it? You put some water and some chocolate pepper in there? When in doubt, use a hammer. Use a hammer, yeah? Yeah, or a screwdriver. What do you do? I'm a kind of cloud expert, and I was also involved in working on the Born application. And do you think you're going to win? Yeah, of course. You think you're going to win? We're fighting. We're fighting. You're going to beat the Americans, the Chinese, and the Germans, right? We have to. There are only two teams from Europe. We have to show our best. Yeah. The EU is counting on you, right? Actually, we think so, we think so. Okay, cool. And my job was working on the Born application, and the whole idea was to port it to GPU. Did you succeed? Yes. We got something like a nine-times speedup with respect to the CPU version. Nine times. That sounds like a good thing. Yes, it is, because the original Born was a really slow application. We know it from running this on the cloud. The GPU is blazing fast. On the cloud, oh, three hours. Cool. Here, 20 minutes. So good luck.
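As an aside, the nine-times figure they cite lines up with the runtimes quoted in the conversation: roughly three hours for the CPU run in the cloud versus about twenty minutes on the GPU port. A quick sketch in Python, purely illustrative, using only those rough figures:

```python
# Rough check of the Warsaw team's quoted Born GPU speedup.
# Times are the approximate figures from the interview:
# ~3 hours for the cloud CPU run, ~20 minutes on the GPU port.
cpu_minutes = 3 * 60   # cloud CPU run, roughly three hours
gpu_minutes = 20       # local GPU-ported run

speedup = cpu_minutes / gpu_minutes
print(f"GPU speedup: ~{speedup:.0f}x")  # consistent with the ~9x they report
```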
Good luck in the competition. Thank you. Thank you. Hey. Hey, so since I did the video, like, half an hour ago, did you make some progress? On the TX2? You're the Boston team, right? Yep, on the TX2. Yeah. So what's good? Has something happened already in half an hour, or do you need more time? So for the TX2, what I'm working on is deploying Tersoff on it. And what I'm doing right now is making the compilation compatible with MPI so that we can make use of all the cores. Because the Tersoff runs that we have to do, if we don't make use of all the cores, are going to take much longer than we need them to. So are you porting some stuff from the Intel to the ARM, or no? What are you doing? Nope, we're just compiling Tersoff so that it can run through MPI. So we have a little ARM there and a bunch of Intel there, right? Yep. Yeah. But this is not just about the TX2. It's the compilation for Tersoff so that it runs in a multi-threaded way. Cool. I think you have a good chance to win. Thank you. We'll see. That's great coming from you. Okay, cool. All right, thanks. Have fun. Hey. I just did a video with your team. This is your team, right? Yeah, yeah. These guys really know their stuff. And so... You already interviewed me. Yeah, yeah. But I'll do it very briefly: how does it work when you work with the students? How do they figure out they want to be in a supercomputing competition? So the submission process... there's a call for proposals. They tell you a little bit about the competition codes so that you've got a rough idea of whether you're going to go with a GPU-centric system or a CPU-centric one, or what kind of network interconnect you're going to have. And then you start building your cluster. The big restriction is 3,000 watts. You can't build a cluster bigger than 3,000 watts. So that basically drives a lot of design decisions. Then you look at the codes that they're running.
This year they had Born and MrBayes and Tersoff. So you look at the codes they're running and you say, okay, well, that lends itself to this architecture or that architecture. Then you start programming and optimizing and profiling, and then you get the students here on Wednesday night, and then you call up one of the other teams that has a similar architecture and you say, what are you doing? So it's very much an interactive process. And all these industry people over there can just drop by and try to give some help if they know their hardware is there? Exactly, yeah. They can come and try to give the different teams some tips. Yes; well, in our case, our vendor, Boston, is here in the vendor's room. They have a booth on the showroom floor. A lot of these other folks have big ISVs or OEMs supporting them; they also obviously have either an IBM or an Intel sponsor. And then if you go down to some of the... NVIDIA. NVIDIA is a good example. Yep. And then you've got some of the European and some of the Asian schools that are trying to show off their country's best architecture. So you'll have a Boston Limited system, or an Inspur or a Sugon system from China. But there's not much ARM yet. Not yet. Not yet. You have one. I think we're the only game in town. You have one ARM there. Two. Two ARMs over there. Two ARMs, yes. But maybe next year, do you think it's possible that more of the teams will be using ARM? Especially now that they have such a broad HPC initiative. You've got Cavium, and you've got a lot of these companies that are coming out with multi-core 64-bit ARMs. ARMv8 is the tricky bit that you've got to get covered. But these HPC systems are coming out now. Qualcomm has a 64-bit ARM that's exclusively 64-bit; they don't do 32-bit. Yeah, you're familiar with this processor. Yeah. So that bodes very well for HPC. 10 nanometers. They just need to ship boatloads. That's right.
They need to give all these students a bunch of them. They do, like NVIDIA does. You give a student a product and they will write code for it. That's a proven path. I'm pretty sure your team wants to have a bunch of ARM stuff. So it'd be nice if they just shipped it to them, right? Absolutely, yeah. Yeah, this is a pitch right here. Yes. All right. Cool. Yeah.
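A recurring theme across these interviews is the 3,000-watt cap and the performance-per-watt tradeoffs it forces on every team. A back-of-envelope sketch in Python, purely illustrative and using only the figures quoted above (San Diego State's roughly 30 TFLOPS at about 5.2 kW on eight GPUs, versus the 17.24 TFLOPS they submitted after scaling back to four):

```python
# Back-of-envelope numbers behind two themes in these interviews:
# the 3,000 W competition power cap that drives cluster design,
# and the performance-per-watt tradeoff the San Diego State
# captain describes for their LINPACK (HPL) runs.

POWER_CAP_W = 3000  # competition-wide limit mentioned by the coach

# San Diego State's quoted HPL figures: ~30 TFLOPS at ~5.2 kW
# unconstrained on eight GPUs, versus 17.24 TFLOPS submitted
# after cutting back to stay within the cap.
full_tflops, full_watts = 30.0, 5200.0
capped_tflops = 17.24

efficiency = full_tflops / (full_watts / 1000)  # TFLOPS per kW
over_budget = full_watts - POWER_CAP_W          # watts over the cap

print(f"Unconstrained efficiency: {efficiency:.2f} TFLOPS/kW")
print(f"Full-throttle run exceeds the cap by {over_budget:.0f} W")
print(f"Submitted score under the cap: {capped_tflops} TFLOPS")
```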