And here at the Nvidia booth at Computex 2017 you're showing a pretty cool box. What is this?

So this is the DGX Station, which is powered by our new Volta V100 GPUs. It's basically designed as a workstation for deep learning researchers, so they can have the power of a supercomputer in a box. It's using four of our Volta V100 GPUs, and it's designed to give the same sort of power that our server offering, the DGX-1, does. DGX-1 is our rack-mount 8-GPU server. This is four GPUs, and it's designed so researchers and other deep learning people can just have it sitting by their desk rather than having to use something in a data center.

And so there's an x86 chip in there somewhere?

Yes, it's a Xeon-based workstation.

There's like one, or...?

Yes, one Xeon and then a whole bunch of GPUs. But a special GPU, this is not a consumer GPU?

No, the V100 is the most powerful GPU ever released. It's got some special stuff in it designed for deep learning, and we've got four of them connected by our NVLink technology for the fastest possible performance.

So this huge performance, you're talking about 480 teraflops. Is that what you're talking about for this one?

For deep learning there's nothing that comes close to this level of performance.

And everybody's working on deep learning right now. Everybody wants to do stuff in that direction?

Deep learning has so much potential for everything we do, from robots to self-driving cars, to when you use your Android phone and say "OK Google", that's using deep learning. So it has so many potential uses, and you really need a powerful server to train these neural networks.

And right here we're seeing an animation of this. So what is this one, this is the server version?

Yes, this is the latest version of DGX-1. We now have two versions: one that uses the Pascal-based P100, and this new one, coming in a couple of months, that uses the newest Volta V100.
So this is the most powerful deep learning computer you can get, and for training neural networks you can't get anything that comes close to it.

So there could be a whole bunch of them stacked on top of each other, there could be a data center full of these?

Yeah, one on its own is ridiculously powerful, but once you have a whole lot of these you can do things in days that once used to take months. It really increases the efficiency and the power of what you can do with deep learning.

And the way you've designed it, there's huge memory bandwidth.

Yeah, it uses what we call a cube mesh, so all of these GPUs can talk to each other incredibly fast using a technology called NVLink. Rather than having to go through the PCI bus or communicate through the CPU, they can all talk to each other directly, so you get much better performance and much more power out of having all of these GPUs in one box.

So this is a different implementation?

Yes, HGX is a partner-based offering, and all these manufacturers can take that and design different solutions for the server market. So like DGX-1, you can have eight of these GPUs sitting in there, and it can be implemented by partners in different ways and still take advantage of NVLink and the technologies that really help to maximize the performance of communicating between GPUs.

What are we looking at around the chip, all these things going on here? Is it part of the fast communication?
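The direct GPU-to-GPU exchange described above is what collective operations like all-reduce rely on when multi-GPU training sums gradients across cards. As a hedged, CPU-only illustration (plain Python lists stand in for GPU buffers; this is not Nvidia's implementation), here is a minimal ring all-reduce, the kind of collective that NVLink accelerates:

```python
# Minimal CPU-only sketch of a ring all-reduce: each worker ("GPU")
# holds a gradient vector, and after the collective every worker holds
# the elementwise sum. Workers only ever exchange one chunk with their
# ring neighbour per step, which is why fast peer links matter.

def ring_allreduce(grads):
    n = len(grads)                       # number of workers
    size = len(grads[0])
    chunks = [list(g) for g in grads]    # per-worker working buffers
    bounds = [(r * size // n, (r + 1) * size // n) for r in range(n)]

    # Phase 1: scatter-reduce. After n-1 steps, worker w holds the
    # fully summed chunk (w + 1) % n.
    for step in range(n - 1):
        sends = []
        for r in range(n):
            c = (r - step) % n           # chunk worker r forwards
            lo, hi = bounds[c]
            sends.append((r, c, chunks[r][lo:hi]))
        for r, c, data in sends:         # staged so a step uses only
            dst = (r + 1) % n            # values from the prior step
            lo, _ = bounds[c]
            for i, v in enumerate(data):
                chunks[dst][lo + i] += v

    # Phase 2: all-gather. Circulate the reduced chunks so every
    # worker ends up with the complete summed vector.
    for step in range(n - 1):
        sends = []
        for r in range(n):
            c = (r + 1 - step) % n
            lo, hi = bounds[c]
            sends.append((r, c, chunks[r][lo:hi]))
        for r, c, data in sends:
            dst = (r + 1) % n
            lo, hi = bounds[c]
            chunks[dst][lo:hi] = data

    return chunks

# Four "GPUs", each with a local gradient vector of length four.
grads = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
result = ring_allreduce(grads)           # every worker: [28, 32, 36, 40]
```

Each worker sends and receives only 2·(n−1) chunks regardless of worker count, which is why the bandwidth of the links between GPUs, rather than the CPU, becomes the bottleneck this design removes.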
I'm pretty sure these are voltage regulators and power circuitry. The actual chips themselves use HBM, so the memory is on the actual module itself. These are basically powerful graphics cards, if you will, but GPUs designed to process deep learning information.

And this is going to do all the AI kind of stuff, maybe as demonstrated a bit over here?

The way the AI works is you use these powerful servers to train a network. You have a problem, you feed it data, and you play around with the way the network is structured to get the information out. This one, for example, recognizes people.

Real people, yes.

Yes, it recognizes real people. This is using our Tesla GPUs as well, but what it's doing is, for each pixel on the screen, working out whether that's a person or not, and tracking all the movements as we go. So there's a lot of computation that goes into making that work.

And over here is some other, safer and smarter cities?

We're doing a lot of AI city work. This is actually really cool, if you have a look at this one. This is a police body camera. So this is walking down the street, and what it's doing is identifying faces. You see the little boxes coming up over faces as they come towards you, and it's checking them against a database.

So are they all Nvidia employees?

I'm not sure.

OK, I'm checking. Yeah, maybe. All right. So the police can just walk around, recognize people, and the system can track them. There are so many cameras in London, for example.

Yeah, and that's the problem: there are so many cameras, but for someone to sit there and actually watch all this footage is impossible. By using these deep learning networks and the ability of GPUs to crunch through data quickly, you can, say in this case, what this network is doing is, as the car's driving along, it's identifying other cars. It identifies what type of
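The "feed it data, adjust, repeat" loop described above is ordinary gradient-descent training. As a toy, hedged sketch (a single weight learning y = 2x stands in for a real deep network; none of these names are an Nvidia API), the whole cycle looks like this:

```python
# Toy training loop illustrating the process described in the
# interview: forward pass, measure error, nudge the parameter,
# repeat over the data until the "network" fits.

def train(samples, lr=0.1, epochs=50):
    w = 0.0                              # one "network" parameter
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x                 # forward pass
            grad = 2 * (pred - y) * x    # d(squared error)/dw
            w -= lr * grad               # gradient-descent update
    return w

# Data sampled from the target function y = 2 * x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)                          # converges to roughly 2.0
```

A real network repeats exactly this loop but over millions of parameters and images, which is where the GPU horsepower discussed above comes in.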
car it is, what the registration plate is, and it's also picking out people on the side. So it's able to process a lot of data in real time and sift out the actual information that you need.

So can you just run this on a consumer GPU? Could some of these things be in a car, maybe?

Well, in this case this is our Tesla P4, which is designed to be run in a data center. You don't have any graphics outputs or anything; it's designed purely around inferencing, which is running a network and identifying objects. At the moment it's taking 30 streams of video at once and processing all of them, so it's identifying every car in each different stream in real time.

And here's another one, this is a bigger one?

This is the P40, which is the big brother. In our Tesla lineup we have the P100, which is what you saw over in the DGX-1 implementations, but then you have the P4 and the P40 to be run in data centers. In this case it's letting you search through video for specific things, so it'll find you blue cars or specific objects. If you've got all this video, you can go back and find the information. One thing I should show you: I was showing you the body camera before, and this is what it is. It isn't talking to the cloud; it actually has a Jetson TX2 processor inside it.

So it's capable of actually running that software in real time without the need for an internet connection or any sort of extra data? It can do stuff offline, and also speed up through the cloud?

Yeah, because especially if you're using things for these sorts of uses where you want the information quickly, or self-driving cars, robots, we've got some drones that can fly themselves, you don't want losing an internet connection to suddenly mean nothing works. So there are times when having these devices that can process
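The multi-stream inferencing pattern described above, many video feeds interleaved through one accelerator, can be sketched as a round-robin over streams. This is a hedged illustration only: `detect_cars` is a stub standing in for a trained network, and a "frame" is just a list of labels rather than pixels.

```python
# Sketch of multi-stream inferencing: interleave frames from several
# video streams and run a detector on each, tagging every result with
# its stream and frame number so per-stream tracking stays possible.

from itertools import zip_longest

def detect_cars(frame):
    # Stand-in for a neural network's inference step: here a frame is
    # a list of object labels and we simply count the cars.
    return sum(1 for obj in frame if obj == "car")

def process_streams(streams):
    """Round-robin over streams, yielding (stream_id, frame_no, cars)."""
    results = []
    for frame_no, frames in enumerate(zip_longest(*streams)):
        for stream_id, frame in enumerate(frames):
            if frame is not None:        # a shorter stream has ended
                results.append((stream_id, frame_no, detect_cars(frame)))
    return results

# Two tiny "streams", two frames each.
streams = [
    [["car", "person"], ["car", "car"]],
    [["person"], ["car"]],
]
hits = process_streams(streams)
```

On real hardware the per-frame calls would be batched together so the GPU processes all 30 streams' frames in one pass, which is the point of an inference-only card like the P4.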
the networks on the edge is really useful, and other times you can rely on pushing data back to the cloud.

Another demonstration of edge processing is the drones. You have AI in these drones too?

Yes, these drones also use Jetson, and what they can do is essentially fly themselves. They can identify what's going on around them and avoid objects; in this case it's flying around a tree. It can track things, so if you want to follow an object, it can do that automatically. These quite powerful drones are able to do so much more because they use deep learning and use the Jetson.

So here at Computex the CEO of Nvidia gave a keynote. That's a big deal?

Right now AI is the most important thing happening in computing. It's enabling so many more uses and driving so many areas of computing, like robots, like self-driving cars, video analytics. Its uses are so broad, and it really has been a revolution that's been powered by the GPU, so it's something that's very important to us.

So there's a big future in there. You're going to sell lots and lots of GPUs?

It's been the major growth area for us, and it's something that we're now developing our GPUs around. The Volta card that I showed earlier has specific hardware in there for running AI, which is quite important for us. We're not just taking our existing GPUs and throwing them into a server; we're building our GPUs around AI.

Cool, that's awesome. Looking forward to the future, with lots of AI powered by Nvidia.