NVidia TensorRT: high-performance deep learning inference accelerator

2,952 views

Published on Jul 3, 2018

In this episode of TensorFlow Meets, we are joined by Chris Gottbrath from NVidia and X.Q. from the Google Brain team to talk about NVidia TensorRT. NVidia TensorRT is a high-performance, programmable inference accelerator that delivers low latency and high throughput for deep learning applications. Developers can build neural networks and AI applications, then run those networks in production or on devices with the full performance that GPUs can offer. NVidia TensorRT lets developers enjoy the diversity and flexibility of TensorFlow while keeping the high accuracy and performance that TensorRT provides. Watch to learn more and leave your questions for the TensorFlow team in the comments below!

Get started → https://developer.nvidia.com/tensorrt

Subscribe → http://bit.ly/TensorFlow1
