We are going to cover the standard neural nets that are used for dealing with time series, but I thought it would be fun to have a brief digression and look at some of the other ways that time and memory show up, particularly in human memory, with a slight hint of how that can be addressed with neural nets. We won't actually implement most of these, but they are the sorts of things deep learning may be moving toward in the future.

It's worth noting, and I'm not going to walk through this slide in detail, that there are lots of different time scales in human memory. There is short-term potentiation, where small amounts of chemicals and ions build up in neurons and then deplete, happening at the time scale of seconds. There is long-term potentiation, where you actually grow synaptic connections between neurons and learn things that you can remember for years, for a lifetime. You taste some food and get sick, and 20 years later you may still have an aversion to that particular food. So learning happens at many, many different time scales of memory. In this course we're going to talk about how most neural networks work, which is very short-term memory, but there is also long-term memory.

In humans, short-term memory, at a high level, is working memory. You can read three things, pencil, automobile, evil, and if I say close your eyes and recite them, you'll be pretty good at remembering them. If I ask you to read seven things, you'll find you can get roughly that many pieces memorized. And if I were to take the time, which I won't do right now, and say, hey, read these 11 items and then sit down and write them down, you'd get six or seven or maybe eight of them. So you have a fairly limited working memory you can store things in, even though you have a huge long-term memory. You can remember on the order of 100,000 different words and things, a typical vocabulary of a Penn student across the different languages that you speak. If you think of all the people and items and names of everything that you know, the nouns and the verbs and the adjectives, 60,000 would be a lot; maybe for you it's 70,000 or 80,000 words. So long-term memory holds lots of stuff, while short-term memory is quite limited.

Now, there are lots of different kinds of memory that humans have. Working memory I just talked about: being able to remember seven digits for a phone number, but ten is pushing it. Episodic memory: remembering specific things that happened, like what you had for lunch yesterday, things that are autobiographical. Declarative memory: remembering facts about the world. Washington, DC is the capital of the US, and, bizarrely enough, what's the capital of Pennsylvania? Not Philadelphia. That's a fact about the world, not a thing that happened. Procedural memory: being able to ride a bicycle or drive a car, things that are not so conscious or fact-based, something that will look more like what we do in reinforcement learning.

So: lots of different memory types, lots of different time scales, lots of different brain regions. If this were a neuroscience course, and I know some of you are disappointed that it's not, we would talk about things like the hippocampus, which is involved in translating working memory into longer-term episodic memories in particular. Different brain regions, even different chemicals, for episodic versus declarative memories.
So thoughts tend to persist over time, and you could ask how we can build a deep learning network that does that. One method, which we will not cover this week but which is very cool, is the Neural Turing Machine, which tries to have a truly long-term memory. It takes in some external input, think of an x, and runs it through a neural net. The neural net does some "thinking", which, as in all our neural nets, is just multiplying numbers. It then writes stuff out to a memory, storing it the same way you might store to a database, reads it back later if it needs to, and then makes a prediction, maybe a y. So one can build deep learning methods that explicitly store things to long-term memory and then read them back again, where the long-term memory persists like a SQL database; it can last forever, or almost forever. But in fact, most current neural nets, including all the commercial ones that do things like speech recognition and machine translation, use only something that corresponds very much to human working memory. Those are the ones we're going to focus on for the next two weeks.
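To make the write-then-read picture concrete, here is a minimal sketch of the external-memory idea: a toy controller encodes an input, writes the encoding into a persistent memory matrix, and later reads it back by content-based (soft) addressing before producing an output. This is an illustrative assumption of how such a memory could be wired up, not the actual Neural Turing Machine architecture or anything we will implement in the course; all names and dimensions are made up.

```python
import numpy as np

# Sketch of an external memory in the spirit of a Neural Turing Machine
# (illustrative only; dimensions and encoder are toy assumptions).

rng = np.random.default_rng(0)

N_SLOTS, D = 8, 16                  # memory has N_SLOTS rows, each D-dimensional
memory = np.zeros((N_SLOTS, D))     # persists across inputs, like a small database

W_enc = rng.normal(scale=0.1, size=(D, D))   # toy "controller": a linear encoder
W_out = rng.normal(scale=0.1, size=(D, 1))   # output head producing a scalar y

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def write(x, slot):
    """Store the controller's encoding of x into a chosen memory slot."""
    memory[slot] = np.tanh(W_enc @ x)

def read(query_x):
    """Content-based read: attend over memory slots by similarity to the query."""
    key = np.tanh(W_enc @ query_x)
    weights = softmax(memory @ key)  # soft addressing over all slots
    return weights @ memory          # weighted sum of memory rows

# Usage: write two items, then query with something close to the first one.
x1, x2 = rng.normal(size=D), rng.normal(size=D)
write(x1, slot=0)
write(x2, slot=1)

r = read(x1 + 0.01 * rng.normal(size=D))   # a noisy query recalls the stored item
y = (r @ W_out).item()                     # prediction from what was read back
print("prediction y:", y)
```

The design point to notice is that the memory matrix lives outside the controller's weights, so whatever is written persists until it is overwritten, which is exactly the database-like long-term memory described above, in contrast to the working-memory-style networks we cover next.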