OK. Hi, everyone. I'm Melinda, and I'll be talking about artificial intelligence. So to jump right in, does anybody know what the reference on that poster is? Yes. And why am I mentioning it? Yeah. It's basically the first time that the term robot was introduced to the English language. It actually comes from the Czech word robota, which means serf labor. And the play, R.U.R. by Karel Čapek, if I'm pronouncing that right, was written in 1920. It's about a factory where robots are made, and it tells the story of how a robot rebellion leads to the end of the human race. So quite a happy play, I guess.

Ever since the Industrial Revolution, we've had a fascination with machines coming to life and causing our downfall as a species. But as our technology evolves, our stories about technology are evolving as well. Just thinking of recent movies, we've got Her, where a guy falls in love with an operating system; Chappie, where a police robot is given new programming and becomes the first robot to think and feel for himself; and Ex Machina, which is basically all about a Turing test: an experiment to evaluate whether a robot can be considered alive.

So I'm a huge movie geek. But in my day job, I'm a developer, a Ruby developer, to be specific. And I work at a company called FutureLearn. You might have heard of us a couple of minutes back; we had Nikki Thompson here talking about FutureLearn as well. We are a social learning platform: we work together with universities and cultural institutions to create free online courses. Our mission is to pioneer the best learning experiences for everyone, everywhere. What this means, though, is that as a company our team is encouraged to learn more about the theory and the principles behind how we're building these learning experiences. So we have internal talks about pedagogy, we're encouraged to take online courses on our own platform or elsewhere, and we have learning technologists working together with us to create these best learning experiences.

My own background is in artificial intelligence. Back when I was at university, I specialized in AI and machine learning, and my master's research focused on facial expression recognition. What I realized while listening to these internal pedagogy talks is that how machines, the artificial, learn is very, very similar to how people, the unartificial, learn. So that's pretty much what this talk is about: looking at some of the basic ideas of artificial intelligence and applying them to the unartificial.

To start off, before we can look at artificial intelligence: what is intelligence? How do we define that? What makes something or someone intelligent? I did what every geek would do and looked it up in the Dungeons and Dragons handbook, and we get this quote: "Intelligence determines how well your character learns and reasons. This ability is important for wizards because it affects how many spells they can cast, how hard their spells are to resist, and how powerful their spells can be. It's also important for any character who wants to have a wide assortment of skills." Now, obviously the parts about wizards and spells aren't really relevant to us here. But these parts are: how well your character learns and reasons, and having a wide assortment of skills. The other reason I picked the Dungeons and Dragons handbook is the fact that it puts intelligence next to wisdom.
As a kid, I remember looking at those two and thinking, aren't they the same thing? But intelligence is not about being intellectual; it's not about how much you know. So here's a more formal definition. And again, it's not just about having knowledge or skills; it's about knowing how to obtain them, how to reason about them, and how to use them.

So when we're talking about artificial intelligence, what do we mean by it? It actually has two meanings. On the one hand, we use the term to describe the intelligence of actual machines or software. On the other hand, we use it to describe the field of study that looks at creating intelligence within machines or software.

Within the research area, there are four different approaches to implementing AI. There's a little bit of overlap between some of them, but in general most approaches fall into one of these areas. On the one side, we have systems that think like humans and systems that act like humans; this is intelligence compared to us as humans. On the other side, we have systems that think rationally and systems that act rationally; this is about an ideal concept of intelligence. A system is rational if it does the right thing. But how do you determine what the right thing is? And then we can split it the other way: at the top, systems that think like humans and systems that think rationally; at the bottom, systems that act like humans and systems that act rationally. So it's thinking versus acting, thought versus behavior.

The first area is systems that think like humans. This is the approach of cognitive science. Here we're trying to discover a theory of the mind and recreate it as a computer program. It's the idea that if we understand how our brains work, we can recreate them. And vice versa as well: once we know how computer programs think, can we get a better understanding of our brains and how humans think?

The next area is systems that think rationally. This is the logicist approach, completely rooted in logic, and it's pretty much the traditional, classic AI that the field started off with. This approach starts from the perspective that any problem can be described in logical notation, and once you have a problem in logical notation, you can solve it, because any problem described in logical notation can be solved. It was mainly used for puzzle solving, things like chess playing. But the problem is that not every problem can actually be described in logical notation, so there's a lot it doesn't work for. And likewise, not every problem actually has a single answer; some cases are fuzzier than that.

The third area is systems that act like humans, and this is where the Turing test comes in. I'm assuming everyone here has heard of it. Yeah, lots of nods. It was introduced in 1950, and it was Turing's response to the question: can machines think? Rather than focusing on what we actually mean by thinking, he proposed another question: can machines do what we as humans do? So he proposed the imitation game, which is based on a party game of the time, because I don't think anyone here has ever heard of the imitation game as a party game.
But the idea is: if you have an observer, a judge, talking with both a human and a computer, will that observer know which one is the human and which one is the computer? It has a couple of limitations, though. The way it worked is that the judge and the participants all converse purely by text. And not everything that humans do is text-based; we do a ton of other stuff. We dance and paint and sing and run, tons of things that don't happen through conversation. So rather than answering the question Turing initially proposed, can machines do what we as humans can do, it actually answers the question: can machines appear to respond in text as humans do? Which isn't quite as catchy.

Next to that, there are some other limitations. There is some human behavior that is unintelligent. Think, for instance, about the way people make grammar mistakes and typos. The early chatbots that did well at the Turing test did so by randomly throwing in some typos and passing themselves off as human that way. And on the other side, there is some intelligent behavior that is inhuman, like being able to calculate mathematical equations really quickly. The designers of those early chatbots realized this was a giveaway, so they introduced ways to not solve those equations. This introduced the idea of artificial stupidity: dumbing down an algorithm for the sake of passing it off as human. And I find that rather anticlimactic. Yes, it's all about creating an intelligence which is as smart as humans, but is that really what we want? Do we really want a machine that is only just as dumb as we are?

So the final area, which is more appropriate for us, is systems that act rationally. This is the idea of intelligent agents. A rational agent is one that acts to achieve the best outcome or, when there is uncertainty, the best expected outcome. Here's a very basic diagram of an agent. When I talk about agents, I'm talking about them in an abstract sense; this is not necessarily a standalone computer program or an app or anything like that. It's more the idea of an agent. An agent exists within an environment. It has sensors with which it can observe that environment, and it has effectors with which it can act on the environment. And it has this big black box with which it makes decisions about those observations and what actions to take.

When we're talking about humans, we can think about it in the same way. We've got a human in an environment, and we've got sensors, which are our five senses: sound, sight, smell, taste, and touch. And we've got effectors, anything that we use to act on our environment: our hands, our voice, et cetera. And again, we've got this big black box to decide what actions to take.

Going back to the artificial, this is a simple reflex agent, the most basic type of agent we can think of. In this case, it takes observations and creates a state of what the current world looks like. It then has condition-action rules, which are just simple if-then rules. And based on those two things, it can decide what action to take. A simple example of this is email filters. Again, I'm talking about agents in a very abstract sense. In this case, our environment is your email inbox, the sensor is the arrival of an incoming email, and once it gets an email from a certain email address, it applies a filter, a label, to that email. So it's a very simple concept.
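Since I'm a Ruby developer, here's roughly what that email filter looks like as a reflex agent in Ruby. To be clear, this is just a minimal sketch put together for illustration; the Email struct, the rules, and the addresses are all hypothetical:

```ruby
# A minimal sketch of a simple reflex agent: the email filter.
# Everything here (struct, rules, addresses) is hypothetical.

Email = Struct.new(:from, :subject, :labels)

# Condition-action rules: plain if-then pairs. The condition is a
# predicate over the percept; the action changes the environment.
RULES = [
  { condition: ->(mail) { mail.from.end_with?("@futurelearn.com") },
    action:    ->(mail) { mail.labels << "work" } },
  { condition: ->(mail) { mail.subject.match?(/invoice/i) },
    action:    ->(mail) { mail.labels << "finance" } }
]

# The whole agent: perceive an incoming email, fire matching rules.
def handle_incoming(mail)
  RULES.each { |rule| rule[:action].call(mail) if rule[:condition].call(mail) }
  mail
end

mail = Email.new("alice@futurelearn.com", "Invoice #42", [])
handle_incoming(mail)
p mail.labels # => ["work", "finance"]
```

There's no memory and no learning here; the agent maps the current percept straight to an action, which is exactly what makes it a simple reflex agent.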
So we can apply the same thing to the unartificial. Who here has heard of Pavlov? He was a Russian physiologist, mainly known for his work on classical conditioning: he trained dogs to associate the sound of a buzzer with food. A couple of years back, I tried to do the same thing with my cats. This is Casey and Dusty. They get hungry all the time, and they get really, really annoying when they're hungry. So I wondered, could I actually train them like Pavlov did? I tried it out. They started off like normal cats: whenever they smelled food, they knew they'd be getting it soon, and they'd run off to the kitchen to show they wanted it. Then I started training them with a standard iPhone alarm. I'd set the alarm, and I'd only go and feed them once that alarm went off. And it worked. Eventually they started associating the sound of the iPhone alarm with food; whenever they heard it, they'd jump up and rush to the kitchen. What I didn't expect, though, is that even though they associated the sound with food, it didn't stop them from being hungry the rest of the time. They'd still be annoying and irritating even when they hadn't heard the sound. So I stopped that experiment. But even now, three or four years later, it still works: whenever they hear that iPhone alarm on TV or in a movie, they jump up and rush to the kitchen expecting food. Luckily it's one of the old iPhone sounds, so it's not used that often anymore, but it still happens.

So we're not really that different from cats, and we use the same principles on ourselves for habit forming. For instance, each morning when I'm in bed, I hear my alarm clock and I know that means I need to get up. And as developers, we know that when we have failing tests, that means we should go and fix them.

This is a really simple loop, though. The question is: how do we actually learn what those actions are, and how do we learn new things? So here's a diagram of a slightly more complex agent, a learning agent. Again, we have different elements. The main one is the performance element, and this is pretty much the agent you saw before: it builds a state of the world, has the condition-action rules, and decides what actions to take. It's just wrapped up in one element now to make it a little easier to understand. The difference is that we can now change any of the components within the performance element through the learning element, which modifies those components so the agent learns to make better decisions. Then we have the critic, which looks at past actions and gives feedback to the learning element. And finally we have the problem generator, which is responsible for suggesting completely new actions. If we didn't have this, the agent would just keep doing what it thinks is best rather than exploring new things; this part is for experimentation.

Again, we can look at the same structure for the unartificial and describe it in more human terms: we have the main decision element, which is control, and we have a part for reflection, a part for understanding, and a part for planning.
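As a rough structural sketch, here's how those four components might hang together in Ruby. I should stress this is hypothetical scaffolding rather than a real learning algorithm; the learning element below does nothing smarter than dropping rules that earned negative feedback:

```ruby
# A structural sketch of a learning agent. The four components mirror
# the diagram; the actual "learning" is deliberately trivial.

class LearningAgent
  def initialize(rules)
    @rules = rules    # condition-action rules: { condition: predicate, action: symbol }
    @history = []     # past (percept, action) pairs, for the critic
  end

  # Performance element: the simple reflex agent from before,
  # wrapped up as a single component.
  def act(percept)
    rule = @rules.find { |r| r[:condition].call(percept) }
    action = rule ? rule[:action] : :do_nothing
    @history << [percept, action]
    action
  end

  # Critic: looks at how the last action turned out and passes
  # that feedback on to the learning element.
  def feedback(reward)
    return if @history.empty?
    _percept, action = @history.last
    learn(action, reward)
  end

  # Learning element: modifies the performance element so it makes
  # better decisions next time; here it simply drops bad rules.
  def learn(action, reward)
    @rules.reject! { |r| r[:action] == action } if reward.negative?
  end

  # Problem generator: suggests a completely new action to try, so
  # the agent explores instead of only doing what it thinks is best.
  def suggest_experiment
    { condition: ->(_percept) { rand < 0.1 }, action: :something_new }
  end
end
```

The important part is the shape of the loop: act, get criticized, adjust the rules, and occasionally try something new.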
So what we're mainly interested in here is the learning element. How do we define these things, and how does the agent learn what changes to make? This is where learning algorithms come in, and there are a couple you might already have heard of.

First, supervised learning. In this case, all the feedback the algorithm gets is up front: you get a lot of labeled training data, and you're basically trying to infer which input belongs to which output. As a very basic explanation: if on the left you have shapes as your input, you then have labels saying whether each one is a circle, a square, or a triangle. So you're creating a mapping between your input and its label, and when a new input comes along, you can go: hey, I know this shape, it's a square.

Then we have unsupervised learning. In this case, there's no feedback, just a bunch of data: no labels, no mapping to a specific output. What the algorithm needs to do is identify patterns and structure in that data. Again, a simple example: given a bunch of shapes, it would infer that all the circles belong together and all the rest belong together, purely because it can see that some have edges and some don't. It's looking at different features and finding commonalities between them.

And then we have reinforcement learning, which is the one most similar to how humans learn. Rather than having correctly labeled data, we have actual feedback: the agent makes a decision and is then told whether it was wrong or right. It's a much more general way of learning, but at the same time the agent needs a better understanding of how the world around it works, and it needs something to tell it what's wrong or right; most of the time that's human input. So again, a simple example: on the left we have our shapes, and the agent just tries the different options, gets feedback on whether it was right or wrong, and eventually learns from that feedback what the mapping should be.
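Here's a toy Ruby version of that last loop, under the assumption that feedback is a plain right-or-wrong signal. The oracle below, and its edge-count encoding of shapes, are made up for illustration; it stands in for whatever supplies the feedback, which, as I said, is often a human:

```ruby
# A toy reinforcement-style learner for the shapes example: guess a
# label, receive right/wrong feedback, keep what worked.

LABELS = %i[circle square triangle]
ORACLE = { 0 => :circle, 4 => :square, 3 => :triangle } # edges => label

mapping = {} # what the agent has learned so far

200.times do
  shape = ORACLE.keys.sample              # observe a random shape
  guess = mapping[shape] || LABELS.sample # exploit if known, else explore

  if guess == ORACLE[shape]  # the feedback step: right or wrong?
    mapping[shape] = guess   # reinforce the guess that worked
  else
    mapping.delete(shape)    # discard a mapping that failed
  end
end

p mapping # after enough feedback this converges to the oracle's mapping
```

Compared with the supervised version, the agent is never shown the correct labels up front; it only ever finds out, after the fact, whether its own guess was right.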
In the same way that machines learn through different types of algorithms, humans learn through different types of activities. So here's a quick overview of 16 different types of learning activities. I'll just highlight a couple, because we don't have time to go through all 16, but for most of them the names explain what they are.

The first is delivered learning, and this is the one most people think of when they're learning something new: learners are presented with information, with content. It's quite similar to supervised learning, in the sense that there's content which contains all the information you need. At FutureLearn we do this through different steps on courses: videos, articles, audio files. It's all about delivered content that people can consume. And as developers, we do the same thing when learning something new: we read books, we read articles, we watch videos. Again, it's all about consuming content and learning through it.

The next ones are conversational and collaborative learning. Here it's about learning by conversing with others, collaborating with others, and constructing a shared understanding. This is more like unsupervised learning. At FutureLearn we do this through comments: on every step, we encourage people to talk about the things they've learned and to learn through the social conversations they're having. And as developers, we do it through pairing: again, conversing with others and constructing a shared understanding.

The third one I want to talk about is assessing: receiving constructive feedback. This is similar to reinforcement learning: we get feedback, and we learn from that feedback. At FutureLearn we do this with peer review steps, where learners can submit an essay or a poem or whatever the assignment requires, and get feedback from other learners. And as developers, we do the same thing with pull requests: we put out a pull request and receive constructive, or hopefully constructive, feedback from other developers.

So here again are eight of the 16 you saw before, just a few examples of the kinds of activities you do when you're learning. As I said, I'm not going to go through all of them, but they'll be in the slides if you want to check them out later. I'd just like people to quickly reflect on this: when was the last time you learned something? What type of learning activity was that? Which of these things aren't you doing? And which of them might you learn better from?

So finally: what makes us different from machines? There's a lot we do that is the same, but what makes us better right now?

For starters, contextual learning. Anything new we learn, we know in what situation and context we learned it. Unlike machines, we're not bound to one domain or one purpose: when we learn something in one field, we can extrapolate and apply it to other fields. It's far less domain-specific than what machines can currently do.

Next to that, we're constantly learning. We don't have an off switch for learning. It's not like machines, which have a very specific input state, a learning state, and an output or action state. Even when we're not consciously learning new skills or knowledge, we're processing everything the entire time and learning from everything around us. Even now, just looking at people's faces here, I'm learning whether some slides work and others don't. Machines, by contrast, are very much state-based and need to be in specific states to learn something.

Then there's prior knowledge. We have this huge backlog of things we know, because we've got an entire lifetime of things we've learned. We can make associations between different pieces of information, and we can discover patterns and connections in ways that others might not.

Next to that, we're emotional, which doesn't necessarily seem like something people would consider valuable for learning, but we actually attach value and emotion to skills, information, and experiences from our past. And there's been research into people with damage to the emotion-processing parts of their brain: lacking that capability, they actually make worse decisions. We need emotion to be able to make good decisions.

And finally, we're social. We learn from other people; we learn from everything around us, which at the moment machines don't really do.
So we've started creating machines that have these abilities in small, separate ways, but none of them have all of this combined. Machines will have to be able to do all these things in a much more generalized way before we can say they learn the way we do. And I don't think we're that far off. So I'm going to make a little prediction: within the next century, I think we will have artificial intelligence that learns like we do. It won't be the type of artificial intelligence that plots our downfall and causes our species to die out; rather, we'll have AI that can learn and reason about skills and knowledge like we as humans do. And I think that's when things get really, really interesting. Because in a world where humans and machines learn the same way, does that also mean that humans and machines will actually learn together? Will our schools become places where both humans and machines learn? And will those with artificial intelligence be treated the same way as those with unartificial intelligence? So when we're thinking about the web design of the future, consider this: will what we design and develop for humans also be used by machines? And will what we design and develop for humans also work for machines? Or do we need to approach it completely differently? So that's it. Thanks for listening. Any questions?

I read the book Thinking, Fast and Slow a year ago or so; I don't know if you've read it?

No, I haven't.

Okay. The book is basically a discussion of how important the subconscious mind is to our basic functioning. And I was wondering if you had any thoughts about how computing might be more integrated with our subconscious mind rather than our conscious mind? It's okay if you haven't. Because I have a feeling that current computing technology lines up much more with the way our subconscious mind works than with the way our conscious mind works, and that in terms of the evolution of integrating computing with carbon-based intelligence, that might be an interesting path.

Well, I guess there's a question there of whether being intelligent also means that you're conscious.

Right.

Which is an interesting thing, I find: could you have an artificial intelligence which isn't aware of itself? So it's still able to learn and reason, but isn't aware that it's learning and reasoning. The question of consciousness, I think, is a step beyond that. So maybe, if the current computer systems we have are like our subconscious, artificial intelligence is the in-between step, and then there's a proper consciousness level above that. At least, that's how I see it.

Okay, that's good, thanks.

Any other questions?

Do you really think it's going to take that long?

I'm saying within the century.

Oh, within the century?

Yeah. The current predictions I've seen say the 2040s; that's roughly when people are expecting we'll get artificial intelligence, according to the experts.

But presumably it doesn't suddenly arrive; it's not that one day it's not there and the next day it is. So what's the roadmap like?

I don't know. There are some interesting developments already, though. Have you heard about Google's DeepMind? DeepMind is a project where they're currently training a system to be able to play any Atari game.
Which sounds a bit silly, but it's actually a single algorithm that they're training to play every game. And there's an interesting graph showing which games it can now play better than humans and which ones it's still struggling with. There are about, I guess, 40 games in it, and it's about halfway there in terms of which ones it can do better than us. But it's completely based on reinforcement learning: it's learning how to play from the feedback it gets from the game, which is really interesting. And they're starting to look at how these kinds of algorithms can be applied to learning other things. And then there are things like IBM Watson, which is basically starting to learn how doctors think; not to replace doctors, but to take over the work of reading papers and to build a knowledge base from all the medical research that's out there. So there's a lot of interesting stuff going on, I think.

I heard that when someone's in a positive, excited emotional state, they're more likely to learn effectively. Which kind of makes me wonder: how does our emotional being fit within this idea of artificial intelligence? Because a lot of what we perceive as artificial intelligence is quite Spock-like, very mechanical and very logical.

So, my own background is in facial expression recognition, and that was very much from the perspective that for computers to be able to reason, they'll first need to understand what emotions are and be able to recognize them in people. There's an entire research area focused purely on getting computer systems to recognize emotions; I think that's the first step toward being able to use emotion itself. But there are certain types of reinforcement learning where the reinforcement is structured as emotion, so it's basically negative and positive feedback that the system gets back. And it's mainly the negative feedback that you can start describing in terms of different emotions, because, looked at from a basic-emotions standpoint, positive emotion is pretty much just happiness.

I guess the question is whether that's relevant to turning machines into something much more capable of learning.

Yeah, I think it's one aspect of it. Eventually all the different parts have to come together to really be able to learn, I think.

So it seems like a lot of the principles of machine learning derive from observations about how we humans learn. Are there types of learning that machines can do that we can't?

Yeah, I guess so. I'm thinking maybe of brute-force learning: machines can parse and process a lot more information than we can, much more quickly. So again, going back to Google's DeepMind, it's interesting to see which games it has mastered over humans, because it has just figured out the optimal reaction for every situation. Breakout, for instance, it will always win, because it has figured out the most efficient way to make all the little blocks disappear.

So it has exhaustively explored every possibility?

Yeah, eventually. So there's obviously a chance, well, I think it's more likely that AI will become smarter than us and learn at a much faster rate than we do, rather than staying at the same level as us with us learning truly side by side, because they're just going to be quicker than we are.
So yeah, is that, that's your question?

I think so, yeah.

Let me put it as an example: say AI gets to the point where it can reprogram itself over and over again. Do you think it's in human nature, in our interest, to encourage AI to learn to a point where it can actually learn quicker than we can?

I think it is, because we want it to take over the things that we don't want to do, and having AI able to help with problems that we currently can't solve would be extremely beneficial for us. But at the same time, I think we need to put limits in place on how fast it learns and what kinds of things it's learning. It's much the same as raising a child: how the AI responds and what its values are depends on what we teach it and what we put in. In the same way that you can turn a kid into a massive jerk as it grows up, you can do the same with an AI. You could raise a kid to be a serial killer, and in the same way you could create an AI that's a serial killer and brings about our downfall. So yeah, it's about approaching it in the most human way possible, I guess. A bit of a downer to end on.

One thing I've been itching to ask: because computers are targeted at narrow domains, one of the concepts behind very quick, jump-step evolution is the idea of the singularity, machines improving on machines. Could the narrow domain of a machine be focused on driving the evolution of machines themselves? And are we at all close to that? Because then you could see a big jump quickly.

Not that I'm aware of yet, but I can imagine it happening. I can imagine systems, not necessarily working together, but clashing together and causing problems, I guess. But I haven't really thought about it in that sense, actually.

Okay, that was awesome.

Well, I have a comment on this subject. There's a very good talk from last year, it's online, from people at Microsoft who applied reinforcement learning to Kinect. Basically, without noticing it, when we make a movement and maybe don't make it perfectly, we do something like this, and then right after the movement we make the opposite movement to go back, because we think the Kinect failed. Kinect learns from us and from the way we move: it learned that I tend to raise my hand slightly toward the ceiling when I want to go right, so it applies a correction to my movements when it recognizes them.

So let's hear it again for Melinda.