Deep networks are the current state of the art in pattern recognition, but they build upon the decades-old technology of neural networks discussed in the previous video. It took many decades after the initial concept to arrive at functional deep nets because they are very hard to train; the method suffered from an issue called the vanishing gradient problem. Up until around 2006, deep nets underperformed relative to more basic nets and other machine learning algorithms, but everything started to change after three breakthrough papers published around that time, and today they are the hottest topic in machine learning.

Deep learning is a machine learning method based on neural networks. What distinguishes deep learning from the more general approach of neural networks is its use of multiple layers within the network to represent different levels of abstraction. Deep learning algorithms use a cascading structure with multiple layers of nonlinear processing units for feature extraction and transformation, where each successive layer uses the output of the previous layer as its input. In this way, they learn multiple levels of representation that correspond to different levels of abstraction.

Just like neural networks, deep learning software attempts to mimic the activity in layers of neurons in the neocortex. Varying numbers of layers and layer sizes can provide different degrees of abstraction. Deep learning exploits this idea of hierarchical representation, where higher-level, more abstract concepts are learned from lower-level, more basic ones. When you have only ten or fewer parameters as input, other forms of machine learning are typically better, such as support vector machines or logistic regression.
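The cascading structure described above can be sketched in a few lines of numpy. This is a minimal illustration, not a production network: the layer sizes and random weights are arbitrary assumptions, and the point is simply that each layer's output becomes the next layer's input, with a nonlinearity applied at every stage.

```python
import numpy as np

def relu(x):
    """A common nonlinear processing unit, applied element-wise."""
    return np.maximum(0.0, x)

def forward(x, weights):
    """Cascade: each layer consumes the previous layer's output."""
    activation = x
    for W in weights:
        activation = relu(W @ activation)  # linear transform, then nonlinearity
    return activation

rng = np.random.default_rng(0)
# Three stacked layers: 10 inputs -> 8 -> 6 -> 4 units (sizes chosen arbitrarily).
sizes = [10, 8, 6, 4]
weights = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes, sizes[1:])]

x = rng.standard_normal(10)
y = forward(x, weights)
print(y.shape)  # final layer has 4 units
```

Each pass through the loop is one "level of representation"; deeper entries in `weights` operate on increasingly transformed versions of the raw input.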
Basic classification engines and shallow neural networks do not suffice for complex tasks, and neural networks with only a small number of layers can become unmanageable. Thus, when the patterns get very complex, deep neural networks start to outperform their competition. The key to deep learning can largely be ascribed to breaking the processing of patterns down and distributing it across the different layers of the network.

For example, we might apply such a system to detect flowers in an image. It would use edges to detect the different parts of the flower (petals, stalk, etc.) and then combine them to form the whole flower. This process of using simpler patterns as building blocks that can be combined into more complex patterns is a key part of the power of deep learning. As another example, if you feed the network a set of images of lorries, at the lowest level there will be detectors for things like edges, higher up things that look like tires, wheels or a cab, and at a level above that, things that are clearly identifiable as lorries. Once the network is trained, you can feed an image in at the front and the nodes will fire when they see the thing they were trained to identify.

In the example of face detection, the network first learns features like edges and color contrast. These simple features combine into more complex facial features like the eyes and nose, which are then combined to form the face. The neural network does all of this on its own during the training process, without any direction from the person building it. These networks are almost always built for a specific task, such as voice recognition or various forms of data mining. The system self-organizes in such a way that the nodes in the layers closest to the input data become reactive to simple features, and as you move through the layers, the features that the neurons respond to become higher and higher order.
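The claim that the network organizes itself during training, without direction from its builder, can be demonstrated on a tiny scale. The sketch below trains a two-layer network on the XOR problem, a pattern that no single-layer net can represent, using plain gradient descent. The architecture (8 hidden units), learning rate, and iteration count are illustrative assumptions; only the error values are checked, not any particular learned features.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR: too complex for a single layer, learnable with two.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0.0], [1.0], [1.0], [0.0]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 8 hidden units -> 1 output (sizes chosen for illustration).
W1 = rng.standard_normal((2, 8)); b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)); b2 = np.zeros(1)

def mse():
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return float(((y - t) ** 2).mean())

initial_error = mse()
lr = 2.0
for _ in range(10000):
    # Forward pass: the hidden layer's output is the output layer's input.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: the error signal propagates back, layer by layer.
    dy = (y - t) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ dy); b2 -= lr * dy.sum(0)
    W1 -= lr * (X.T @ dh); b1 -= lr * dh.sum(0)

final_error = mse()
print(initial_error, "->", final_error)
```

Nobody tells the hidden units what to detect; the training signal alone drives them toward whatever intermediate features make the task solvable, which is the same self-organization the text describes at the scale of edges, wheels and faces.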
Interestingly, people have found a very similar structure in our own brains, where successive layers of the visual system also extract higher and higher order features. Once you have a deep network trained in this way, it should also be possible to run it backwards: if you have trained a network so that it knows everything about what a cat looks like, it should be able to produce new pictures that look like cats. These are called generative neural networks. Deep nets take a long time to train, but the advent of new hardware in the form of graphics processing units (GPUs) can reduce the processing time by one or even two orders of magnitude relative to traditional CPUs.

There are now many different types of deep nets to choose from. For text analysis, such as named-entity recognition and sentiment analysis, recursive tensor networks are typically used. For object recognition, one may use a convolutional net, while speech recognition often involves a recurrent net or a deep belief net. These deep learning algorithms can also be applied to unsupervised learning tasks, an important benefit because unlabeled data are much more abundant than labeled data. The end result of training a deep learning neural network is a self-organizing stack of transducers, well tuned to their operating environment and capable of modelling complex non-linear relationships.

A deep learning platform is an out-of-the-box application that lets you configure deep nets without needing to know anything about coding. A platform provides a set of tools and an interface for building custom deep nets. Typically, they offer the user a selection of deep nets to choose from, along with the ability to integrate data from different sources, manipulate data, and merge models through a user interface. These platforms may also help with performance when a net needs to be trained on a large data set.
The downside is that you are constrained by the platform's selection of deep nets as well as its configuration options. But for anyone looking to deploy a deep net quickly, a platform is definitely the best way to go. There are now a variety of such platforms. One of the most widely used is TensorFlow, an open-source library of machine learning methods created by Google, which has grown rapidly in popularity.
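To give a flavour of what working with TensorFlow looks like, here is a minimal sketch of defining a small stacked network with its Keras API. The layer sizes, activations, and input dimension are arbitrary assumptions for illustration; a real model would be shaped by the task and data.

```python
import tensorflow as tf  # assumes TensorFlow is installed (pip install tensorflow)

# A small stack of fully connected layers; each layer feeds the next.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),              # 10 input features (illustrative)
    tf.keras.layers.Dense(32, activation="relu"),    # lower-level features
    tf.keras.layers.Dense(16, activation="relu"),    # higher-level features
    tf.keras.layers.Dense(1, activation="sigmoid"),  # binary prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy")
print(model.count_params())
```

A few declarative lines replace the hand-written forward and backward passes: the library supplies the gradient computation, the optimizer, and GPU acceleration, which is precisely the convenience the text attributes to such tools.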