...the NVIDIA booth, and this is the NVIDIA Tegra X2 right here, shown for the first time. You just launched it here at Embedded World. And who are you?
Vincent Nguyen. I'm a Tegra system architect located in Europe, and this is the Jetson TX2 development kit.
What is that?
Jetson TX2 is essentially a development platform for customers that want to build applications primarily around compute or artificial intelligence; that's the main focus of the product.
So what cores does it use?
There's a combination of multiple cores. On the CPU side you essentially have six cores: four of them are ARM Cortex-A57 cores and two of them are Denver cores, which are our own architecture for the 64-bit ARM instruction set.
Is the Denver core the asymmetric one, the one that can go very low?
No, it's the same type of core in terms of operation, so it stays in the same core domain. It simply has higher performance on SIMD instructions, at the expense of power consumption. All six cores are available at the same time under Linux; it's not one set or the other.
The target market could be robots like this one, drones... there are some other robots over here, actually. Let me jump in here, sorry. Here you have a partner, and they actually put the X2 right here.
Yeah, what they did is take the Tegra X2 module that you can see here and put it on the carrier board just below.
On our development kit the carrier board is that big because that's what you use for development. What they are showcasing here is called per-pixel segmentation: you want each pixel to be classified. Is it a street, is it a sidewalk, is it a tree, is it a person? Depending on the accuracy you want to achieve, you have a certain execution time. It's based on a neural network, and the neural network execution runs on the GPU.
You just launched this, so how were you able to put the X2 in here? You just swap it in? Is it backwards compatible or not?
Yeah, it is. More or less.
More or less, yeah.
We got the Tegra X2 because... Yeah, I said it.
Who are you?
We are Antmicro, from Poland. We are partners of NVIDIA. We got the Tegra X2 and were asked to prepare a deep learning segmentation demo. What we did is take the basic implementation of the network they gave us, tailor it for the needs of such a demo, speed up the calculations, implement all the processing, and deploy it on this car. Fortunately the board is backwards compatible, more or less, so we didn't have to re-implement the whole baseboard.
With the TX1 it's backwards compatible. So how big is the performance jump?
Within this application it's 50% faster.
That's a big deal, no?
Yes, absolutely.
It's very cool to get more performance. And what's the difference in performance and power consumption?
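The per-pixel segmentation just described comes down to the network producing a score per class for every pixel, with each pixel labelled by its highest-scoring class. A minimal NumPy sketch; the class list and array shapes are illustrative, not the demo's actual network:

```python
import numpy as np

# Hypothetical class list for a street scene; the real demo's classes may differ.
CLASSES = ["street", "sidewalk", "tree", "person"]

def segment(logits: np.ndarray) -> np.ndarray:
    """Turn per-class score maps of shape (C, H, W) into an (H, W) label map."""
    return logits.argmax(axis=0)

# Toy example: 4 classes over a 2x2 image.
logits = np.zeros((4, 2, 2))
logits[0, 0, 0] = 1.0   # top-left pixel scores highest as "street"
logits[3, 1, 1] = 1.0   # bottom-right pixel scores highest as "person"
labels = segment(logits)
print(CLASSES[labels[0, 0]], CLASSES[labels[1, 1]])   # street person
```

The accuracy-versus-execution-time trade-off he mentions comes from the depth and input resolution of the network producing those score maps, not from this argmax step, which is trivial.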
Power consumption: depending on how you measure it, roughly we get double the performance per watt. What you have here is two modes of operation, and the maximum performance mode runs at 15 watts. In this particular case what we are showing is a neural network used to classify and localize objects, so we can state: there is a car, there is a person. You can detect a person on the sidewalk, you can detect road signs. It takes a 4K input to demonstrate that. It actually runs on Tegra X1 as well, where you can run one 4K input stream; on the Tegra X2 you can run two 4K inputs simultaneously, which means you can have one camera facing forward and one camera looking backward.
Why did you choose the ARM Cortex-A57?
It's a question of power consumption, and of internal architecture versus what we already had. For us the compute capability is not on the CPU side. The CPU is just there because some customers need it, frankly, and there are some use cases where you actually need CPU performance, but the fundamental compute power comes from the GPU cores.
Which is not the Denver; it's the GPU.
Yeah, exactly. Actually, on this one: the Tegra X1 was a Maxwell-based architecture, and the Tegra X2 is a Pascal-based architecture, which means the performance per watt has improved. We are also trying to shrink the gap between a discrete GPU architecture becoming available and that architecture being available in the SoC world. For example, we released the Tegra X1 a little more than a year ago, and you can see it's actually almost exactly the same motherboard.
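The classify-and-localize network described above typically emits a list of candidate boxes, each with a class label and a confidence score, which is filtered by a threshold before anything is drawn on screen. A minimal sketch of that post-processing step; the class names, scores, and threshold are illustrative, not the demo's actual output:

```python
# Each detection: (class_name, confidence, (x, y, w, h) box in pixels).
# Illustrative values, not real network output.
detections = [
    ("car",       0.92, (100, 200, 180, 90)),
    ("person",    0.81, (400, 150, 40, 110)),
    ("road sign", 0.35, (600, 80, 30, 30)),   # low confidence, likely noise
]

def filter_detections(dets, threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in dets if d[1] >= threshold]

for name, score, box in filter_detections(detections):
    print(f"{name}: {score:.2f} at {box}")
```

Raising the threshold trades missed detections for fewer false alarms; the right value depends on the application.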
There are some different LEDs on it, and the module is exactly the same shape, because fundamentally we want it to be compatible. Tegra X1 was launched roughly 18 months after its counterpart in the discrete GPU world; with Tegra X2, Pascal was launched only about six months ago, if memory serves. So we're trying to shorten the time to availability for a given GPU architecture.
So this one can recognize things?
Yeah, this one is running what we call GoogLeNet. GoogLeNet is a very well-known neural network developed by Google.
Is it offline?
It's online; everything is done right here. It just recognizes things.
Was it trained for that?
Yes, using ImageNet as the dataset.
Let's walk over here. Maybe I can check this one over here, with the Artec Leo. Maybe they can show off what you have.
Absolutely. This one is absolutely not related to deep learning; it's related to compute. This gentleman can talk about it much better than I can. So who are you?
Hi. My name is Ivan. I'm with Artec 3D, from the technical support department. This is our latest invention.
What is that?
It's a 3D scanner, useful for scanning all kinds of objects so that you can use them in a digital format.
Is that a TX1 inside?
Yes.
All right. So would it get the 50% improvement if it got the TX2?
Right. Yes, potentially; it does all the processing on the GPU.
Cool. So can you show what it does?
Oh, it's booting. Right there.
So what kind of Linux are you running?
This is a prototype; it doesn't represent the actual operating system we'll be using.
So you just walk around like this and it will scan?
Exactly. It will boot and I'll show it to you in just a minute.
So, since you have all these Linuxes on show: is the GPU an open-source GPU?
No, the GPU is not open source.
Although right now we have two approaches. One of them is what we call the binary, user-space-only components; they are provided by NVIDIA for the various Linux distributions that we support. And then there is the open-source driver called Nouveau, which does have GPU acceleration in some form but does not have all the other acceleration features. So most of our customers are actually using the binary components.
You say Nouveau?
Nouveau.
And is Nouveau being developed officially by NVIDIA, or by other people?
It's an open-source project. There are NVIDIA engineers participating in it, but it's a community open-source project.
I thought maybe there was some talk about you actually releasing everything. Not yet? What is released? I mean the full GPU driver, with GPU compute and everything.
No, not yet.
Are you considering that or not?
That's an internal discussion on which I cannot comment.
Cool. And what is the Linux you're showing off here, on the TX1?
Yeah, and it actually applies to both TX1 and TX2. Fundamentally, the file system we provide is a sample Ubuntu 16.04. But you could build your own distribution, because we provide the kernel source and the bootloader source. So if you want, you could build something based on Red Hat, or on Yocto or OpenEmbedded, or whatever. It's just that these particular units, because they are our development kits, come straight out of the box with Ubuntu.
All right. So the drivers necessary to run any Linux are there?
Not any Linux; there are some dependencies on the kernel, but fundamentally yes. You have graphics acceleration with OpenGL and OpenGL ES, you have multimedia acceleration with GStreamer for video encode and decode, and of course you have compute for CUDA-type applications.
All right, let's check if this one is ready to show. Is it ready?
All right, the scanner is ready to use.
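The GStreamer acceleration mentioned above is normally driven by pipeline descriptions. A sketch of assembling one in Python; the hardware-specific element names (`omxh264dec`, `nvvidconv`) are assumptions based on NVIDIA's L4T GStreamer plugins and may vary by release, so check the accelerated-GStreamer guide for your board:

```python
def decode_pipeline(path: str, use_hw: bool = True) -> str:
    """Assemble a gst-launch-1.0 pipeline string for playing an H.264 MP4 file.
    The hardware-path element names are assumptions based on NVIDIA's L4T
    GStreamer plugins; the software path uses standard GStreamer elements."""
    decoder = "omxh264dec ! nvvidconv" if use_hw else "avdec_h264 ! videoconvert"
    return (f"filesrc location={path} ! qtdemux ! h264parse ! "
            f"{decoder} ! autovideosink")

print(decode_pipeline("sample.mp4"))
```

The returned string can be handed to `gst-launch-1.0` on the device; the software path is the portable fallback when no hardware decoder is present.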
Start a new project. Nice. Oh, what's going on? Is that just normal light?
Yes, well, the light you see is a photo-camera-flash kind of light which is used to illuminate the texture; we create full-color models. The actual light capturing the geometry is not visible to the eye; it uses a very narrow spectrum. You just move the scanner around the object and scan. It's supposed to be as easy as capturing a video, except that it's in 3D.
And this you are only able to do because of NVIDIA?
NVIDIA makes us completely mobile, yes. Before, we used to have an external computer doing the manipulation and processing; this device allows us to be completely portable.
This kind of scanning, is it using a specific sensor? Like, is it IR?
No, not IR. It's just a camera. It uses a projector with a VCSEL light source.
All right. Cool. Awesome. So people can buy this?
Of course.
How much is it?
This one is about 23,000 euros.
23,000? Can people find it online?
The Artec Leo. We have a network of distributors in multiple countries, so you can get a demonstration at the place closest to you.
Very cool. Thank you.
All right, thanks a lot. And there's lots of other stuff going on around here, right?
Yeah. What you have here is an Aerialtronics drone. Aerialtronics is a company specialized in professional missions for drones, primarily visual inspection: for example building inspection, detecting cracks, detecting defects on a structure such as a bridge.
What are these guys doing here?
It's basically a robot for tunnel inspection. It drives autonomously and builds maps of underground tunnels, essentially trying to detect things that might be problems in those tunnels. What we're doing is using the Jetson to give us a really low-power way of running complex AI algorithms, which allows us to detect things that might be problematic in the tunnel. We can also localize them.
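Localizing a defect, as the tunnel robot does, means transforming a detection from the robot's own frame into the tunnel map's frame using the robot's estimated pose. A minimal 2D sketch with made-up numbers, not the robot's actual pipeline:

```python
import math

def to_map_frame(robot_x, robot_y, robot_heading, local_x, local_y):
    """Transform a detection given in the robot frame (local_x forward,
    local_y left) into map coordinates, using the robot's pose."""
    c, s = math.cos(robot_heading), math.sin(robot_heading)
    map_x = robot_x + c * local_x - s * local_y
    map_y = robot_y + s * local_x + c * local_y
    return map_x, map_y

# Robot 10 m into the tunnel, facing along +x; crack seen 2 m ahead, 1 m left.
x, y = to_map_frame(10.0, 0.0, 0.0, 2.0, 1.0)
print(f"defect at ({x:.1f}, {y:.1f})")   # defect at (12.0, 1.0)
```

The real system does this in 3D with poses coming from its mapping stack, but the idea is the same: pose plus local detection gives a map coordinate you can report.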
Is this creating a 3D space?
Yeah. What you see in the 3D space here are detections of bad things. These are walls, so they're not good examples, but in a real tunnel we would locate the detections in the actual overall tunnel model, which means we can then find where those defects are.
So is this out there in the world?
Yeah, this is now being used as a sort of proof of concept in a bunch of different places in the UK. We're also using essentially the same AI to do analytics as a service: we get a lot of image data, process it, and turn it into stakeholder reports.
All right, thanks. So is it great to work with NVIDIA on this kind of stuff?
Yeah, this stuff's great. I'll be honest: we couldn't do what we do without it. For us, doing computation at low power is essential. Basically, the more watts we save, the longer we can go.
There's a very powerful GPU in this stuff.
Yeah, about one teraflop on the X1 and about twice that on the X2. The power consumption is about 10 watts on the X1 and 15 watts on the X2, so that gives you the scale we are talking about.
So there's more performance and the power consumption is still low?
Yeah.
Is it a smaller process node?
16 nanometers, but that I'd have to check, because I don't remember.
Probably 14, right?
Yeah. And before, maybe, it was 28 or something like that.
So most performance at this power?
Yeah. What we also did is change the GPU architecture, because most of the compute power comes from the GPU. By changing the GPU architecture, we essentially improved the performance while keeping the same power envelope.
And it says you're a global leader in...
That's a different application. This one is a multimedia digital signage box. This box here was designed by our colleagues at PC Partner. Hello, who are you?
I'm Norbert Kupojans from PC Partner. We are a design partner of NVIDIA, developing and designing boards.
You did the board?
Yeah, we did this board, yes.
And what are its functions? It has all kinds of outputs, HDMI and so on.
Yeah, this one is based on the TK1 and comes with two display outputs. As mentioned before, this is for digital signage, so it's possible to run two displays at the same time, and it primarily does full-HD or 4K video playback.
All right. So the TK1 is great for the digital signage market?
Yes. This one, for example, is used in fast food restaurants to show the menus.
Cool. So it's out there all over the world.
Absolutely. You just don't notice it, because it's hidden.
And then people go from there and do mass production?
Well, these guys are already in mass production. They've already sold quite a few units, and as far as I know it's already deployed.
It is deployed, yes.
So it's just that you don't notice it, because the digital signage system is kind of hidden. You just see the screen; you don't see anything else. Right now the screen here is being driven by this box, and there is a content management server you can connect to, where you manage the content and how it is displayed: essentially dynamic interactions. That's what is used in all the retail shops out there, fast food, retail malls and so on.
And what is this one?
This one is done by a company called Retrix, a partner. It's essentially using a Tegra X1 module, and it's a six-camera 8K rig.
So each camera is 8K?
Yes, each of them is 8K, and you have six of them.
How can you do that on just one X1 board?
Because we actually have six controllers, which allow us to connect the cameras simultaneously. Of course, when you use 8K, the frame rate is lower.
You're not getting 60 or 120?
No, you're not going to get 60. After that, it's a question of speed.
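The speed limit he refers to is essentially a fixed pixel-bandwidth budget on the bus. Taking the figures quoted in this conversation (six 8K streams at about two frames per second) as the budget, a back-of-envelope sketch shows how frame rate scales when resolution drops; 8K UHD and 4K UHD pixel counts are assumed:

```python
# Pixel counts for the two resolutions (UHD definitions assumed).
PX_8K = 7680 * 4320
PX_4K = 3840 * 2160
CAMERAS = 6

# Bus budget implied by the demo: six 8K streams at 2 fps.
budget_px_per_s = CAMERAS * PX_8K * 2

# Frame rate per camera if each stream drops to 4K under the same budget.
# 4K has a quarter of the pixels of 8K, so the rate scales up by 4x.
fps_4k = budget_px_per_s / (CAMERAS * PX_4K)
print(f"{fps_4k:.0f} fps per camera at 4K")   # 8 fps per camera at 4K
```

Under this naive fixed-budget assumption 4K comes out at 8 fps per camera; real camera interfaces can often sustain more pixels per second at lower resolutions, which is how a full frame rate at 4K becomes possible in practice.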
Right now, with this one, they can get the six 8K cameras at two frames per second, or you can decrease the resolution and get a higher frame rate.
So with the X2 they can do four frames per second?
No, it's the same; it's a limitation of the bus. With the X2 I haven't tested it, because there might be some dependencies, but I should be able to go to three or four frames per second. I would have to do the calculation.
But if you go down to 4K each, maybe you can do a full frame rate?
If I go down to 4K, I can definitely do full frame rate. I already have that running.
Nice. And you can connect storage and record a perfect panorama video?
Exactly.
And stitching?
Stitching is not done here by this application. However, we do have stitching technology from partners, and we also have our own VRWorks technology, which is a kind of offline stitching. So you basically have two options: either you do offline stitching with the software in our portfolio, or some partners do online stitching, which is much more problematic at those resolutions.
Nice. And there are some more partners over there doing all kinds of camera stuff and 360.
We have a lot of camera vendors out there, because it turns out that when you want to do cameras, you usually want to do visual processing, and if you want to do visual processing you end up needing some kind of GPU compute capability.
And you say Pascal is the one now?
Sorry?
Pascal.
Yes, the X2 is a Pascal architecture.
So it's not Kepler? Where's Kepler?
Oh, Kepler is old. Tegra K1 is Kepler, Tegra X1 is Maxwell, Tegra X2 is Pascal.
That's the newest cutting-edge GPU?
Yes. Pascal was released, if memory serves, about six months ago in the discrete GPU world.
So are you basically able to do as much on the X2 as you would on a desktop?
No, because an architecture is not necessarily a product.
An architecture is something that allows us to scale all the way from the top, like a Titan X type product, down to the bottom, like Tegra X2. So it all depends on the performance you're looking for. If you have a use case that requires, let's say, Titan X performance, of course you're not going to be able to run that on the X2. On the other hand, there are many use cases out there that cannot accommodate the Titan X power consumption; we are talking about 300 watts plus. So if you have an application with a power limit, like 10 or 15 watts, then the SoC architecture is the best solution for that use case.
Can you combine a few of them together?
Yes, you can. We actually have customers that take multiple X1s or multiple X2s and combine them, using an interconnect built with an FPGA and PCI Express. After that, it's a question of how you transfer data and what your use case is. But yes, you can combine them. We actually have a customer, not shown here, Connect Tech, based in Canada, and they have a server rack with 24 teraflops. It's used for various compute applications within a given space.
So that's a server. You're in the server business?
Not us. As NVIDIA we only sell the module; Connect Tech is selling this server. Of course, this is a very specific server: 24 teraflops means you basically have 24 teraflops within a power envelope of about 500 watts, maybe a little more. So it's an interesting server for some applications. It's not designed for heavy-duty server data management; it's designed for doing inference or compute calculations within a small power envelope.
This is a robot here. It can analyze and find things and pick them up. Is that an X1 part?
The X1 is not inside the robot; it's inside the camera itself. The camera is doing the depth.
Yeah. Let me jump in this way. That's all right.
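The Connect Tech server figures mentioned above can be sanity-checked with the per-module numbers quoted earlier in the conversation, about 2 TFLOPS and 15 W per TX2 module; treat both as rough, illustrative values:

```python
import math

# Approximate per-module figures quoted in the conversation.
MODULE_TFLOPS = 2.0
MODULE_WATTS = 15.0

target_tflops = 24.0
modules = math.ceil(target_tflops / MODULE_TFLOPS)
module_power = modules * MODULE_WATTS

print(f"{modules} modules, ~{module_power:.0f} W for the modules alone")
# The rest of a ~500 W envelope covers interconnect, storage, cooling,
# and power-conversion overhead.
```

So a dozen modules account for the 24 TFLOPS, and the module power alone sits well inside the roughly 500 W envelope he mentions.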
But yeah, I think NVIDIA is involved, yeah. So you have the camera in there, with the X1.
Yeah, yeah. We're excited to have the X1.
Cool. And it just picks up the things no matter where they are. So is there any chance it could pick up a drink from the fridge and bring it to somebody, with the same kind of technology?
Yeah, theoretically it is possible.
All right. And so this is the perfect technology for self-driving cars?
This is the same core technology that we use, but for self-driving cars we have a different product offering, which is managed by our automotive business unit. They have their own engineering investment and their own products.
One SoC? A different SoC?
No, it's the same SoC architecture. It's not exactly the same chip offering, because in automotive you have automotive-grade constraints in terms of certification and so on, so it's not exactly the same SoC in terms of manufacturing.
But the same six-core CPU, the same Pascal GPU?
Yeah, the same Pascal GPU, the same six-core CPU. And we actually have a development platform in automotive too, Drive PX 2. This is a development platform that customers can use to start working on the autonomous car, and it has essentially been designed to emulate what will be available within an SoC in a two-year time frame. Essentially, you take advantage of a discrete GPU today on the development platform to mimic the GPU performance you will have in a future SoC. That allows people to start working on it.
So you shared the roadmap with everybody?
Some of the roadmap is public, obviously. It was announced at the last GTC Europe in Amsterdam.
X3, X4 and...?
Our CEO Jensen Huang announced Xavier for the automotive industry, sampling at the end of the year, and that's exactly what it has been designed for in terms of GPU performance.
That's cool. I mean, are Tesla and Audi using your stuff?
Yes, the Tesla Model S is using our technology.
So they're not using Mobileye?
They're using NVIDIA.
Did they use Mobileye before?
I cannot comment, because I don't know if what they've done is public.
So they're using NVIDIA, and they're working toward the next one; it's going to be even better self-driving. It's already working, right? The cars don't crash right now, so how can it get even better?
Right now you cannot say that the cars don't crash. They don't crash in most cases, and they don't crash in the general case, but we are not yet at the point of wide fleet deployment. Even Tesla, if you look at it: they have their Autopilot, but they require you to keep your hands on the wheel.
Officially.
Officially, okay, I'm joking. But there are cases where some customers have not kept their hands on the wheel, and that can cause a problem. That said, the technology has made tremendous improvements over the last two years. For example, NVIDIA is showing it on the video over there: we have our own car, based on Drive PX 2, which is driving on the road right now. So the technology is there; it's a matter of finalizing, let's say, the last stretch to get something that is reliable and safe.
So there's a huge future for NVIDIA in the whole visual computing area.
Yes, we are moving very enthusiastically right now. If you look at the main company motto, fundamentally we have moved from being a visual computing company to an AI company. Right now, yes, GPUs are still our main business, we're still in visual, but that's a given: GPUs are what we do, and we do them well. Now we also use the GPU for something else, which is compute, and we use it for artificial intelligence. Everything we do right now revolves around artificial intelligence: in the cloud with Tesla products for inference, at the edge with products like this one, literally small devices with the intelligence inside, and in the training phase for neural networks.
But I would think the X2 would be great for a Chromebook or a laptop or something like that.
Here you're talking about something else; that's not what this product is aimed at, but it could be. I mean, for Chromebooks, for example, there was one Chromebook which used the X1... I think it's the X1. I would have to check which one; I'm not quite sure, because I'm not part of this business anymore.
It was like three years ago; I think it was the K1.
No, there was one which was more recent.
More recent? Yeah.
So yes, it could, definitely. Actually, we have Android on X1: Shield TV, for example, a consumer product, is using Tegra X1; it's an extremely good console-type device. And Tegra technology is being used in a very recently launched console, the Nintendo Switch.
Is that the X2? The X1 maybe?
It's a...
A secret architecture?
No.
A secret design?
No, it's not a secret architecture; it's a Tegra architecture. It's actually awesome.
Sorry. When you are in portable mode, it uses a lower frequency and less power.
Absolutely.
And if you dock it, it clocks up and goes even faster.
Absolutely. It's the same type of scaling architecture that we have. Essentially you have DVFS: you can change voltage and you can change frequency on the fly. And Nintendo has actually gone one step further, because the resolution of the game changes as well when you dock it. It's a very good testimony to what you can do in terms of GPU performance in a mobile SoC. We are getting close: roughly, right now, we are releasing one new architecture in the SoC world about every 18 months. The Tegra X1 was launched in the US in November 2015, and we have just launched the Tegra X2, on March 7 actually, last week. Production is just spinning up.
You said the next one is coming at the end of this year?
Yeah, something like that.
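The portable-versus-docked behaviour described above is classic DVFS: dynamic power in CMOS scales roughly as P = C * V^2 * f, so lowering both voltage and frequency in portable mode saves power superlinearly. A sketch with made-up operating points; the Switch's real clocks and voltages are not given in this conversation:

```python
def dynamic_power(capacitance, voltage, frequency_hz):
    """Classic CMOS dynamic power estimate: P = C * V^2 * f."""
    return capacitance * voltage ** 2 * frequency_hz

C = 1e-9  # effective switched capacitance in farads (illustrative)

portable = dynamic_power(C, 0.80, 300e6)   # lower voltage, lower clock
docked   = dynamic_power(C, 1.00, 600e6)   # higher voltage, higher clock

print(f"docked/portable power ratio: {docked / portable:.2f}x")
```

With these illustrative numbers, doubling the clock and raising the voltage by 25% more than triples the dynamic power, which is why undocking drops both rather than frequency alone.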
All right. And Tegra X2 was actually launched in automotive before that, so people have known about Tegra X2 for quite some time.
Nice. So there's a lot of activity and a lot of... Sorry.
Yeah, absolutely. The embedded world is a good place to be with regard to the ecosystem. What we are looking for is partners that can help our customers achieve a given use case, and that's the whole point of being here.