Interviewer: …conference here in San Jose, and with me I've got John Ashley, who is the Senior IBM Software Developer Relations Manager at NVIDIA. Welcome.

John Ashley: Thank you.

Interviewer: Can you say a little bit more about your job and why you're here at the conference?

John Ashley: Obviously this is part of our larger GTC conference as well, so we're here for two reasons, but my role is primarily to help people adopt the POWER computing platform with NVIDIA GPUs. In that role I talk to developers, software houses, and generally any partners who are interested in moving to that platform.

Interviewer: Great. Well, yesterday you had a fantastic announcement at NVIDIA — the Power 100? What's it called again?

John Ashley: The Pascal P100.

Interviewer: An amazing piece: 15 billion transistors and 21 teraflops. Can you tell us more about that?

John Ashley: It's a fantastic chip, and we're really excited by it. It's a chip we've been working on for a while, and it really combines everything we know about building HPC systems with everything we've learned about how to build systems to solve deep learning problems as well. One of the key innovations, related to OpenPOWER, is the NVLink technology, where we've actually moved off the PCIe bus. We don't require just that bus anymore; we can use the NVLink bus, a higher-speed connection, to interconnect GPUs together, or to interconnect the GPU and the POWER CPU. With that, we can move the data to the right processor at the right time much more quickly than we've been able to in the past, and we think that's going to pay huge dividends in solving the data-centric problems people are having these days.

Interviewer: One of the issues in computing has been that processor clock speeds have come to a full halt, haven't they, at 4 or 5 gigahertz? In fact, I've noticed some of the latest x86 announcements are at 1.8 gigahertz — they seem to be going down again. How do you overcome that? How do your products, and how does POWER and OpenPOWER, overcome Amdahl's law —
the fact that when you make everything parallel, you run out of processors that can usefully work on a particular job? How does your technology help with that?

John Ashley: That's actually why we believe in teaming up a powerful CPU and a powerful GPU. We know from Amdahl's law that, as you mentioned, we can't crank the clocks up much higher on either side. So we have to have a processor that's really good at the serial, data-driven data-management work, and we also need a processor that's incredibly good at the parallel computation — and expecting one processor to do both of those equally well is, we think, the wrong approach. So in the Pascal P100 we've designed sort of the ultimate parallel processor. The architecture has always been parallel from day one, so we've avoided a lot of the scalability challenges that affected single-threaded, single-core devices that tried to become parallel. We've always been parallel, but we've also always known we're part of a team. For us the CPU and the host is an important part of the system for solving the overall problem, and that again is why we introduced NVLink: to let us do a better job handing work off between the two pieces of the team, because that way we're better together than we are separate.

Interviewer: Great. Well, earlier on I was talking to John Zanos and to Calista Redmond, and they were talking about their vision of really bringing together silicon and software. So how do you take this highly complex area and make it available to programmers in this data-centric environment? What's the methodology, what are the techniques, what are the things you're learning about applying this to real problems and real programmers?

John Ashley: That's a great question, and the answer is complex, but I'll try to boil it down to the way we think about it — or at least the way I do; I can't speak for everybody. For certain developers the absolute right thing is to get their hands as dirty as possible.
They need every bit of speed, they need it all, they need it now, and they're willing to do what it takes. For them we provide the low-level tools, and with our partners we provide the consulting and infrastructure support to let them do that — to solve their science, to get their critical simulations done. A lot of other developers may not be all that interested in being developers; they're interested in solving some other problem. For them we may provide libraries like our cuDNN library, which accelerates deep learning. We don't provide a deep learning framework — we are not the world's experts in solving every business problem with deep learning — but what we are, and what we do employ the world's experts on, is how to solve parts of the deep learning training and inference problem on a parallel processor. So we get down into the weeds and code that stuff up to be maximally efficient and fast, and we package it up in a library that snaps into all the popular frameworks like Caffe, TensorFlow, CNTK, Torch, Theano, and others. By doing that we relieve the programmer from having to be the expert in these complex systems — we've done that bit for them. Then for other systems, some of the scientific systems, there are other layers of abstraction: OpenACC, for instance, some of the new compiler technologies, tools like Mathematica and MATLAB, and things coming down the pike — we're working with the standards committees on C++17 to make it easier to incorporate these architectures into problem solving.

Interviewer: Great. Well, just one last question, then. This is so different from last year — whether you were here last year or not, there's so much more activity. What's one thing you've noticed that really stands out for people doing deep learning?
John Ashley: Picking one thing is hard, but I'd say there's a growing acceptance of deep learning as a new methodology for solving a variety of problems. It's not just about deciding whether a picture is a cat or a person — there are a lot of really fundamental science and business problems being attacked that way, and I think you're seeing the fruits of that starting to show up here.

Interviewer: Great. Well, thanks very much indeed, John, for being here — thanks very much indeed.