And here's a DGX Station here at NVIDIA. And who are you? Hi, I'm Marcus Weber. I'm actually the product manager for that DGX Station. So that looks like the most beautiful desktop in the world. What is this? It's absolutely beautiful. I think so too. Why did you make it so golden? That's a design feature. So it just looks beautiful. Inside it's all black, very nicely laid out. It essentially packs four of our most powerful compute GPUs. So I'll just go right here. So the most powerful NVIDIA GPUs right here, is this the 1080? What is this? It's a V100. V100? So that's above the 1080? Yes, it's a Tesla, from the Tesla product line, which is specifically for data-center compute. Four of them, on the new Volta architecture, the V100. So were you already showing this and talking about this at Computex in June, right? Computex in Taiwan? Since when is this shipping? So we actually announced this in May this year at our own GTC conference. And now for about a month or so we have been shipping those systems. So you've got four GPUs. What else is in here? We have a very high-end Intel 20-core Xeon CPU. We have 256 GB of system memory. What's also special about it is that those four V100s are connected with NVLink. That makes them able to talk to each other much faster than just over the PCIe bus. And of course it's water-cooled. So here we have the water-cooling system. That really keeps the GPUs nice and cool, and therefore the whole system extremely quiet. Quiet? So is this for supercomputing guys to sit and do supercomputing development? Exactly. It's for deep learning training workloads, first of all, but you can certainly use it also for GPU-accelerated HPC workloads. So what's the price? The list price on it is 69,000 US dollars. So do you have many customers already? We have many customers already. Of course I can't tell too much about that. Like all the high-end supercomputing players of the world, right?
So all the deep learning research institutions around the world, a lot of healthcare customers, automotive customers, a lot of the big universities have this already. So this Volta, how much of the booth are you talking about here? Pretty much every part of the booth is somehow related to Volta. That's our new chip architecture, right? The latest and greatest, which essentially tripled the performance since last year when we launched Pascal. So can you maybe show one or two highlights? Where would you show, should we look over there? Sure, this one. So what are we looking at here? This is the DGX-1, the server form factor of our DGX product line. This one actually has eight of the greatest GPUs in it, the V100. And this is the one? This is actually the actual GPU, exactly. So it's just a GPU with, what is going on? With GPU memory right next to it. And this is the form factor that goes onto the server baseboard. So you can see eight of those here. Those things you see here, of course, are the heat sinks that keep them cool with air. So every chip is mounted underneath those heat sinks. All right. So if I go right here, what are you talking about here? Well, for that you should probably start asking other people. Yeah, I mean, we have a running DGX Station over there. Let's check it out. So this guy would be perfect for that. Yeah? Let's jump over there. Maybe, yeah. We need to try to pull him over. Do you want to talk on video? Do you want to talk shortly about it? Yeah, let's go. Hi. Hi, I'm Ryan. I'm Ryan from NVIDIA. So what are you doing over here? All right. So at this demo station, we're going to be running the same demo that we ran for Jensen's keynote. We're going to be showing off our new HPC containers. In this example, we're going to be running the NAMD molecular dynamics code from the University of Illinois. So when was that keynote? That was earlier, Monday. Monday at three. Don't worry, you can find it on the livestream.
Yeah? So what is it about? Yeah, so maybe we should jump in there. So maybe you can stand right there? Because he's kind of going through it. Maybe we want to come back and do this when I can. Because I'm filming in 4K60, I'll have a hard time editing this. It would be great if we can try to show something. When he moves, I'll do it. You're really doing 4K? Yeah, 4K60. What's going on here on the screen? Let me just get it up and running. Okay. So today we just announced that we're distributing HPC container apps on our NGC cloud registry. So along with our deep learning frameworks that we have here, we now have HPC applications: GAMESS, GROMACS, LAMMPS, NAMD, and RELION. These HPC apps, just like the deep learning apps, are very complicated, difficult software to install and manage yourself. So we put them in containers. Containers make life easy. Containers are really cool. Does this have something to do with, what do you call it... Docker? Yeah, so this is a Docker container registry. And all you do is you choose to download the container. So you sign up, you get an API key, and then you download the container to your system. And so here we started up the container in this window. And it paused, waiting for a connection. Now we're going to connect the visualization of the simulation to the container that's running the simulation. And we're going to now visualize the molecular dynamics in action. This is an HPC container running the simulation. This is the visualization connected to it, showing it in real time. And you can see the molecules vibrating a little bit. The idea is we want to run this for a long period of time and understand the dynamics and why the protein is as stable as it is. The last point I want to make, this graphic... That's actually it.
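The sign-up / API-key / download workflow described here maps onto a couple of Docker commands. The registry hostname `nvcr.io` and the literal `$oauthtoken` username are NGC's documented conventions; the image path and tag below are placeholders, so check the NGC catalog for the real ones:

```shell
# Log in to the NGC registry with your API key. The username is
# the literal string $oauthtoken, not your account name.
docker login nvcr.io --username '$oauthtoken' --password <your-NGC-API-key>

# Pull an HPC application container (path and tag are placeholders).
docker pull nvcr.io/hpc/namd:<tag>

# Run it with GPU access, via the nvidia-docker wrapper of that era.
nvidia-docker run -it nvcr.io/hpc/namd:<tag>
```

This is a sketch of the workflow, not a runnable recipe, since it needs an NGC account and a GPU-enabled Docker host.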
That's all I got. That's all you got, yeah? But what are they doing over there? Is it all related to the V100? Yeah, so this simulation over here... Is that in space? Is it in the brain? It is in space. That's looking at the entropy field of a supernova before the supernova event happens. So a supernova is when a star collapses into a black hole and crazy stuff happens. They're one of the more unique astrophysical events in the universe. So this is the simulation of the entropy field. And this is a massive simulation that they've run. And now they're using containerized visualization frameworks to visualize that result. Cool. And right over there, they're doing stuff with cars? Let me check that one. I don't know that one. Sorry, I'm putting you on the spot. But there's a whole bunch of stuff. Do you want to talk to these guys? Do you want to be in the video? Can I ask you? Hey, cool. Thanks a lot. Thank you. So what are you showing here? So this is ANSYS Discovery Live, and this is the first of its kind. What we show here is a CFD design simulation. It couples the design tool and the multiphysics simulation very tightly, so that way one can really explore a lot of designs. For example, in this case, we could create what-if scenarios, like I could just create another copy of this whole truck. So is it about the aerodynamics of cars? Yeah, this is external aerodynamics, and what you see is a velocity plot here. So is this only possible to do on NVIDIA hardware? Yeah, this is completely built on the CUDA platform, and it runs only on NVIDIA GPUs. So it won't run on CPUs. It won't run on AMD GPUs. It's only on NVIDIA GPUs. Why? Why won't it run on something else? Because it's all built on top of CUDA. So it needs tons of GPU performance? Yeah, at most it uses two GPUs. In this case, one GPU for visualization and another GPU for compute.
But you could also do both compute and visualization on a single GPU if that is what is necessary. Nice. Is this an app? Yeah, this is called ANSYS Discovery Live. And made by NVIDIA? The app is made by a company called ANSYS, but they use our NVIDIA technology. What's over there? The developer zone. So they have stuff to do with developers? Yeah, probably, I'm not sure what exactly they're doing. Let me introduce myself. I'm a PR guy for NVIDIA, so I just wanted to... I won't be on camera. You cannot be on camera? Are you sure? Yeah, you don't want me on camera. Can you introduce me over there? Is it okay if I film while I'm walking? Oh, that's not my car. Let me go over there. People are queuing up for NVIDIA juice. What is this? Sorry, can I jump in here? What kind of stuff is going on around here? Do you have somebody who can talk about it? People using computers. I'm just taking this. I'm just going to check it over there. AI and HPC. And this is the Twitter feed. I'm sorry. Sorry, can I just try to film it? I'm sorry? Yeah, so if you get close to one of the windows, you can actually reach out and grab it. That's cool. All right. Hi. So who are you? What are you doing around here? Sorry, my voice is kind of dying. My name is Mark Ebersole. I'm a training platform engineer. I run the training platform that the Deep Learning Institute uses to teach people how to do lots of really great things with deep learning and CUDA and OpenACC. So teaching people how to use... Yeah, deep learning, CUDA, OpenACC, basically how to solve their challenging problems with deep learning or CUDA or OpenACC or any of the other technologies that are accelerated on GPUs. So during this week... Yeah, so in the DevZone, these are our self-paced labs. It's somewhat buffet style. You come in, you sit down, you choose the lab you want to work on, and they're all self-paced. You work through it at your own pace.
And then we have experts in our expert area who can come and help you with your specific problems and your code. So yeah, we do trainings all over the world at all our regional GTCs. Our training at Supercomputing was actually the first place we did this developer zone. I think it was Supercomputing '12. We've been doing it every year and it's always popular. It's a great way to get free hands-on training. So how hard is it to develop on a supercomputer? To develop on a supercomputer? Well, there are a lot of really challenging problems on supercomputers. Our tools try to make it as easy as possible to get started. OpenACC is a great way to take existing code and accelerate it with just a few compiler hints. Deep learning is a great way, if you have lots of data, to start training models to solve those problems. And then CUDA is for when you need to get in and fully optimize your code to run on the GPU. So the developers that can do supercomputing are kind of like the top developers? Yeah, it's a great crowd here at Supercomputing. Do you need to be an advanced developer to understand all this stuff? No. Our getting-started CUDA labs and our Python labs are all very easy to get started with if you have a base understanding of the language. And does all this stuff work on all NVIDIA GPUs, not only the top-end V100? Yeah, these laptops themselves have gamer GPUs in them, and you can write the same deep learning code you'd run on a supercomputer and run it on these laptops. Obviously it doesn't run as fast, because it's not as high-end a GPU, but you can still develop locally and then deploy on a big supercomputer.