Artificial Intelligence is a pretty big deal. In recent years, we have begun to see its early manifestations in things like smart assistants on our phones and business analytics. However, despite its growing popularity, few understand the basics of artificial intelligence, and many are instead more concerned with the risks that come with it, which we will look at in another video. Anyways, in this video, we'll be taking a look at the very basics of artificial intelligence, including machine learning, neural networks, and artificial neurons, and explore some types of artificial intelligence used today, without any of the complicated math or coding. It starts with one question: can machines think? This is the question English computer scientist Alan Turing asked in his 1950 paper, Computing Machinery and Intelligence. In order to answer this question, Turing proposed the idea of the Turing test. The setup of the Turing test is as follows: a human interrogator asks a question to an unknown subject, which can be a human or a computer, and within a given amount of time, the interrogator must guess whether the subject is a human or a computer based on the response. Therefore, if the interrogator mistakes a computer's response for a human's response, then the computer has succeeded at appearing to think, thereby passing the Turing test. Turing then goes on to propose the idea of computers simulating human beings. Over 50 years later, we are witnessing the dawn of a new era in computing, from things like Google Assistant or Siri to something more advanced such as Google Duplex. So what exactly is the science, or in this case, the computer science, behind this popular yet mysterious wonder we call artificial intelligence? You see, the thing is that there is no straightforward answer, unfortunately. So where do we start? Well, in order to understand artificial intelligence, we need to understand something called neural networks.
A neural network, simply put, is a form of computing modeled after the human brain. Just like how the human brain is composed of neurons, neural networks are composed of what we call artificial or virtual neurons. The first type of artificial neuron we'll be looking at today is also one of the most basic ones: the perceptron. Perceptrons work by taking in several binary inputs and then giving one binary output. The output of the perceptron is dictated by whether or not the sum of the input values meets a parameter, or what we call a threshold value, set by the neuron. If the sum of the input values is less than or equal to the threshold value, then the neuron outputs a value of zero, and if it's greater than the threshold value, then the neuron outputs a value of one. To demonstrate how a perceptron works, let's take a look at an example. And as always, you're free to skip past the example if you've already got the concept. Anyways, let's say that the new Avengers movie is coming out at your theater on Saturday, and you've heard there's gonna be a huge sale on butter popcorn at the theater. You then contemplate whether or not you should go by taking into account three factors: whether or not you'll be able to make it on time, whether or not you'll have room in your stomach after attending the birthday party that takes place before the movie, and whether or not you're available. Now, let's assign each of the factors a corresponding variable: x1, x2, and x3. And for each variable, a binary value of zero or one is assigned, with a value of one being assigned if the condition for that variable is met, and a value of zero if the condition is not met. For example, for x1, if you're able to make it on time, x1 is given a value of one, and if not, it's given a value of zero. For the second variable, if you have room in your stomach after the party, then x2 is given a value of one, and zero if not.
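If you'd like to see the movie-night example in code, here's a minimal sketch of a perceptron in Python. The threshold of two comes straight from the example; the function name and the sample inputs are just illustrative.

```python
# A minimal sketch of the movie-night perceptron.
# Inputs are binary: 1 means the condition is met, 0 means it is not.

def perceptron(inputs, threshold):
    """Output 1 if the sum of the binary inputs is greater than the
    threshold, and 0 if it is less than or equal to the threshold."""
    return 1 if sum(inputs) > threshold else 0

# x1 = you can make it on time, x2 = room in your stomach, x3 = the third factor
print(perceptron([1, 1, 1], threshold=2))  # sum is 3, greater than 2 -> 1
print(perceptron([1, 0, 1], threshold=2))  # sum is 2, not greater than 2 -> 0
```

Notice that a sum exactly equal to the threshold still gives zero, matching the rule above.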
And I'm pretty sure you get the idea of this by now, so we'll just skip x3. Anyways, after you do this, you assign a threshold value, which in this scenario, let's say, is two. If the sum of x1, x2, and x3 is greater than the threshold value, which in this case is two, then the neuron gives an output of one. And if it's equal to or less than two, then the neuron gives an output of zero. Additionally, if some factors are more important than others, such as whether or not you'll be able to make it on time, then you assign each factor a weight instead of treating them all equally, which in this case would be w1, w2, and w3 respectively. A weight can be an integer other than one or zero, such as six, depending on how important a factor is, with a higher weight corresponding to that factor having more importance. A perceptron's output can be modeled with the expression w·x + b, where w is a vector, or a table of data, containing the weight values, x is a vector carrying the binary input values, and b is the threshold of the neuron times negative one. If w·x + b is less than or equal to zero, then an output of zero is given, and if it's greater than zero, an output of one is given. Again, if you don't know how to multiply the two vectors, that's totally fine, as we aren't going to be focusing that much on the math in this video. Perceptrons aren't all that bad. But the thing is that in modern computing, other forms of artificial neurons are usually preferred. The reason for this is that even a slight change in a weight or input value of a perceptron can cause the neuron to give a completely different output, which can significantly impact the whole network if it's composed of perceptrons. And so to combat this issue, we tend to use a more popular form of artificial neuron known as the sigmoid neuron.
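The weighted version can be sketched in code too. The weight of six for making it on time comes from the example above; the other weights and the threshold of five are made-up numbers just to show the mechanics of w·x + b.

```python
# A sketch of the weighted perceptron: output 1 if w.x + b > 0, else 0,
# where b is the threshold times negative one.

def weighted_perceptron(weights, inputs, threshold):
    wx = sum(w * x for w, x in zip(weights, inputs))  # the dot product w.x
    b = -threshold                                    # bias = -threshold
    return 1 if wx + b > 0 else 0

# Making it on time matters most, so it gets the big weight of 6.
# The weights 2, 2 and the threshold of 5 are illustrative assumptions.
weights = [6, 2, 2]
print(weighted_perceptron(weights, [1, 0, 0], 5))  # 6 - 5 = 1 > 0 -> 1
print(weighted_perceptron(weights, [0, 1, 1], 5))  # 4 - 5 = -1 <= 0 -> 0
```

With these weights, making it on time alone is enough to tip the decision, while the two lesser factors together are not.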
Sigmoid neurons are similar to perceptrons in the sense that they take in multiple factors and even assign weights to them depending on the importance of each factor. However, they differ in the sense that instead of being limited to receiving a binary input value of zero or one, they can receive any value between zero and one, such as 0.783. The sigmoid neuron's output is modeled by the sigmoid function applied to w·x + b, written σ(w·x + b). And while we aren't going to go much into the math, I thought it would be worthwhile for you to look at the graph of the sigmoid function. As shown by the graph, the output of the sigmoid function can range anywhere between zero and one. That's all you need to know as far as the basics of sigmoid neurons are concerned. Sigmoid and perceptron neurons aren't the only types of artificial neurons. There are other, more complex types of artificial neurons, which we will talk about in future videos for sure. Anyways, as stated above, these many types of artificial neurons come together in what we call neural networks, which are computing forms modeled after the brain. There are many types of neural networks, but in this video, we'll be looking at a brief overview of the most popular ones used for machine learning today and in technologies you may use. The first and perhaps the most important type of neural network we'll be looking at today is the feedforward neural network. FFNs are considered to be one of the most basic types of neural networks and are the basis for other types, such as convolutional neural networks. FFNs are made up of multiple layers of artificial neurons. An FFN can be divided into three parts: an input layer filled with input nodes, which take in the input values; the hidden layers in the middle, which are filled with what we call hidden nodes, which do all the calculations and processing on the inputs; and an output layer filled with output nodes, which give the output of the neural network.
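In place of the graph, here's a small sketch of the sigmoid function and a sigmoid neuron in Python. The weights and bias fed into the neuron at the bottom are made-up numbers just to show that the result always lands between zero and one.

```python
import math

def sigmoid(z):
    """The sigmoid function: squashes any real number into (0, 1)."""
    return 1 / (1 + math.exp(-z))

def sigmoid_neuron(weights, inputs, bias):
    """A sigmoid neuron: sigmoid applied to w.x + b."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

print(sigmoid(0))                                  # exactly 0.5
print(sigmoid_neuron([6, 2, 2], [1, 0.5, 0], -5))  # about 0.88, between 0 and 1
```

Unlike the perceptron, a small change in a weight now nudges the output a little instead of possibly flipping it from zero to one, which is exactly why sigmoid neurons are preferred.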
Oh, and just to put this in because it's important to know: this architecture is what all neural networks, to some extent, are composed of. This is the reason FFNs are considered to be the most basic framework of all neural networks. That being said, it doesn't matter to me if you don't fully get every other type of neural network in this video. As long as you understand the basic concept of FFNs, we should be good. The next type of neural network we'll be looking at is the recurrent neural network. Recurrent neural networks take in a list of sequential data and try to make predictions about it. However, the neural network must have access to a large data set, which it is trained on in order to draw on knowledge to make predictions about the input data it's given. For example, let's say we have a recurrent neural network and use a giant history textbook as our data set, and we then input the incomplete sentence "George Washington was the first president of the ___." The recurrent neural network in this case understands that we're looking for a specific noun, and it will then use the information from the data set, which in this case is the textbook, to predict the missing noun that would complete the sentence, which in this case is "the United States." Recurrent neural networks are used in many language and text-to-speech applications. So if you use anything like Google Translate or something similar, chances are a recurrent neural network was used. The third type of neural network we will be looking at is the Kohonen neural network. Kohonen neural networks are a type of self-organizing neural network. Self-organizing neural networks, such as the Kohonen model, can take inputs of data with multiple dimensions and then create a low-dimensional, typically two-dimensional, visual representation of it, which we call a map. Then, using this map, the neural network can look for patterns in the data and cluster the data into its appropriate groups.
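Since the input layer, hidden layer, output layer structure of an FFN is the one thing worth really getting, here's a toy forward pass through it in Python. All the weights and biases here are made-up numbers; in a real network they would be learned from data.

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def layer(inputs, weights, biases):
    """One feedforward layer: each node computes sigmoid(w.x + b)."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny FFN: 2 input nodes -> 3 hidden nodes -> 1 output node.
# These weights are illustrative assumptions, not trained values.
hidden_w = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[1.0, -1.0, 0.5]]
output_b = [0.2]

hidden = layer([1.0, 0.0], hidden_w, hidden_b)  # hidden nodes do the processing
output = layer(hidden, output_w, output_b)      # output node gives the answer
print(output)  # a single value between 0 and 1
```

The data only ever flows forward, input to hidden to output, which is what puts the "feedforward" in the name.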
Now, you're probably wondering: what exactly is data with multiple dimensions? Well, dimensions are attributes associated with the data. For example, a data set on temperature has multiple dimensions, or attributes, to it, such as time, altitude, latitude, and longitude. Kohonen neural networks have applications in the medical field, where they can be used to cluster medical data into different categories based on multiple attributes of the data, such as blood type, patient history, and patient height, which can be helpful for determining what type of disease a patient carries. The fourth and final neural network we'll be looking at today is the convolutional neural network. Convolutional neural networks are used when it comes to making sense of images. CNNs classify, group, and recognize objects within images. If you've used any sort of facial recognition technology, chances are a CNN was used. The process of how a CNN works is quite complicated. But to put it simply, a CNN takes in things such as the information about every pixel of an image, like the pixel color and the image width and height, and makes sense of it by trying to look for patterns in the image, such as shapes, textures, and borders. Keep in mind that for a convolutional neural network, the hidden layers in this case are the convolutional layers. The convolutional layers process the patterns in the image. A convolutional network can have layers besides convolutional layers, but for this video, we're just going to show you the most basic structure and go more in-depth in later videos. The bottom line is that CNNs are good for interpreting and processing any sort of visual data. That's all we're going to be talking about in this video as far as neural networks are concerned. Keep in mind there are way more than the four types of neural networks that I showed you.
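To make the "looking for patterns like borders" idea concrete, here's a sketch of the core operation a convolutional layer performs: sliding a small filter over the pixels of an image. The tiny 4x4 image and the 2x2 edge filter are made-up values just for illustration.

```python
# Slide a small kernel over an image and sum up pixel * weight at each
# position. This is the "convolution" in a convolutional layer.

def convolve2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            total = sum(image[i + di][j + dj] * kernel[di][dj]
                        for di in range(kh) for dj in range(kw))
            row.append(total)
        out.append(row)
    return out

image = [[0, 0, 1, 1],   # a dark left half and a bright right half,
         [0, 0, 1, 1],   # so there is a vertical border down the middle
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]       # responds where brightness jumps left-to-right

for row in convolve2d(image, kernel):
    print(row)  # the big values in the middle column mark the border
```

A real CNN learns many such filters on its own and stacks them in layers, but each one is doing this same sliding-window pattern matching.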
But for the sake of keeping this video as short and simple as possible, I limited it to what I thought were four of the most important neural networks. With that being said, you're probably wondering: how exactly do we get these neural networks to act in a certain way? Well, that's what we call deep learning. Deep learning falls into a broader category of what we call machine learning, which is getting computers to perform a certain desirable action without explicit programming. There are two types of machine learning: supervised learning and unsupervised learning. First, let's take a look at supervised learning. Supervised learning is the more widely used of the two forms. Supervised learning is where you have a bunch of input values and a known output value for each given input value. The computer uses an algorithm to predict the correct output values for new input values. For example, let's say we have a set of input values called x, each of which has a corresponding output value, which we call y. We can use an algorithm, which in this case is the function y = 2x, to predict the y output value for every x input value. That wasn't so bad. Anyways, now let's take a look at unsupervised learning. Unsupervised learning is where you have a bunch of input values but no corresponding output values. That's because the goal of unsupervised learning is not to predict output values, but to analyze and find patterns in the input data. It does this through clustering and association. Clustering is where the data is grouped based on similar trends the data points might share with each other, such as grouping theater customers by their movie preference. Association is used to find trends and correlations in data.
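The supervised y = 2x example above can be sketched in a few lines of Python. The specific x values are made up; the point is that the algorithm only sees labeled pairs and recovers the rule from them.

```python
# Supervised learning in miniature: labeled (x, y) pairs go in,
# and a rule for predicting y from new x values comes out.

xs = [1, 2, 3, 4]   # input values
ys = [2, 4, 6, 8]   # the known output (label) for each input

# Estimate the slope m in y = m*x by least squares over the labeled data.
m = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def predict(x):
    return m * x

print(m)           # the learned rule: 2.0, i.e. y = 2x
print(predict(5))  # prediction for an unseen input: 10.0
```

Unsupervised learning would start from the xs alone, with no ys to learn from, and instead look for groupings within them.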
For example, when analyzing search data, we tend to find that people who search up the science first on YouTube also tend to subscribe to science and educational channels, in addition to having a really, really good taste in YouTube channels, wink wink. Anyways, that's all for today. Keep in mind this video was scratching the very surface of AI to help those of you who don't have a strong technical background get a better understanding of the subject, which is why we didn't get into any complicated math or coding. We will return to this topic in later videos, where we'll begin to take a more in-depth look at some of the math and coding. I'm so happy you made it this far in the video, and I really look forward to educating you more on science and technology in future videos. If there are any more topics in artificial intelligence you want me to cover, be sure to let me know in the comments below. And also, don't forget to subscribe and stay tuned for more science videos.