Hello, I'm Steve Furber. I'm Professor of Computer Engineering in the Department of Computer Science here at the University of Manchester, and today I'm going to talk about computers and brains. Computers and brains are both information processing systems with some similarities: they both take inputs, process those inputs in some way, and generate outputs. But there are also major differences in the way they work, and in today's talk I want to try and convey some of those similarities and some of those differences. Now of course we're not the first people to be interested in brains. Almost 200 years ago this lady, Ada Lovelace, wrote about her interest in brains. Ada was the only legitimate child of Lord Byron, but she's best known for her work with Charles Babbage, who built early mechanical computing machines. She worked with him and thought about algorithms that could run on those machines, and to many people she was the world's first computer programmer. But more relevant to today's talk is her interest in brains. Amongst her extensive notes, which are held at the University of Oxford, she says: I have my hopes of one day getting cerebral phenomena such that I can put them into mathematical equations. I hope to bequeath to the generations the calculus of the nervous system. Now this was very high ambition indeed for 200 years ago, and even today we don't know how to achieve the objectives that Ada set for herself. Sadly she died quite young, at the age of 36, and never got to progress this agenda much further. If we come a little closer to today, then just over half a century ago a very well-known computer scientist, Alan Turing, came to Manchester, and this slide shows the house where he lived. It's a fairly undistinguished semi-detached house about 10 miles south of the city centre; in fact it's quite close to where I live, fairly near Manchester Airport.
And on that brick archway you can see to the left of the house there's the blue plaque that's shown below, which says: Alan Turing, founder of computer science and cryptographer, lived and died here. And indeed this is where Turing spent his last few years. He'd moved to Manchester because Manchester had built the first machine to implement his big idea from the 1930s of the universal computing machine. Now, while he was in Manchester Turing worked on various things, but most relevant here is the paper that he published that's shown on this slide, with the title Computing Machinery and Intelligence. This paper begins with the words: I propose to consider the question, can machines think? In the paper Turing goes on to say that this isn't a very well-posed research question, so he turns it round into a test for human-like artificial intelligence, which he calls the imitation game, but which we all simply know as the Turing test. In this paper he reckons that all a computer would need, compared with the Manchester Baby, is more memory. He reckons about a gigabyte should be enough, and he predicts in this paper that by the turn of the century computers may indeed have that much memory. And this was a remarkable prediction, because it was about that time when a typical desktop PC would have about a gigabyte. Now you have to remember that Turing wrote this paper very early in the history of computers, in 1950, and the Manchester Baby machine that he came to Manchester to use had 128 bytes of memory. So extrapolating to a gigabyte was extremely far-sighted. At the turn of the century computers did have the gigabyte, and they were also about a million times faster than the Manchester Baby machine, but they did not pass Turing's test. And indeed even today no machine has convincingly passed Turing's test. This would have surprised Turing a great deal.
And the reason, I think, that Turing's estimates of what it would take to build human-like intelligence, and the estimates of many people since Turing, have proved optimistic is that we don't understand how the brain works, and the brain is the foundation of natural intelligence. Until we understand natural intelligence it's difficult to see how we can build a human-like artificial intelligence. Now, Turing came to Manchester to use the machine that's shown here, the Baby, and it's shown here with its two principal designers, Freddie Williams and Tom Kilburn. Manchester has been building big computers ever since that day, and the most recent of the big computers is the machine I'm going to talk a bit about later: the Spinnaker machine that we've developed for brain modelling applications here in the university. But for now I just want to compare these two machines. There's just over half a century between them, and you can compare them on the basis of their physical size. The Baby machine was a bit taller than a man and as wide as a typical room. It used about three and a half kilowatts of electrical power, and with that it executed 700 instructions a second. If you do the arithmetic, you can see it consumed about five joules per instruction. If you take a modern energy-efficient processor, such as the one we use on Spinnaker, then it uses 40 milliwatts of power, and with that it executes some 200 million instructions a second. If you do the same arithmetic you get a number with lots of zeros after the decimal point, and the ratio of those two numbers is the improvement in the efficiency of computers over those 63 years. That ratio is a factor in the region of 25 billion. Now that's a huge number. It represents a huge rate of progress in computer technology over half a century, and it's very hard to get your head around these kinds of big numbers.
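The joules-per-instruction arithmetic just quoted is easy to check for yourself. Here is a minimal sketch in Python, using only the round figures from the talk, so the result is approximate by construction:

```python
# Energy per instruction: 1948 Manchester Baby versus a modern
# low-power processor, using the round figures quoted in the talk.

baby_power_w = 3500.0   # ~3.5 kW
baby_ips = 700.0        # ~700 instructions per second

modern_power_w = 0.040  # ~40 mW (a Spinnaker-class mobile processor)
modern_ips = 200e6      # ~200 million instructions per second

baby_j_per_instr = baby_power_w / baby_ips        # ~5 J per instruction
modern_j_per_instr = modern_power_w / modern_ips  # ~2e-10 J per instruction

improvement = baby_j_per_instr / modern_j_per_instr
print(f"Baby:        {baby_j_per_instr:.1f} J/instruction")
print(f"Modern:      {modern_j_per_instr:.1e} J/instruction")
print(f"Improvement: {improvement:.2e}")  # in the region of 2.5e10, i.e. ~25 billion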
But one thing I like to reference this to is that I understand the UK road transport fleet, that's all the buses, cars and lorries in the country, between them use about 50 billion litres of fuel a year. So if the energy efficiency of cars had improved as fast as the energy efficiency of computers, we'd be able to run the entire country on about two litres of fuel, instead of requiring supertankers plying the oceans with vast quantities of fuel every day. Now of course that's not a criticism of the car industry; it's much harder to improve the efficiency of cars than it is to improve the efficiency of computers. It just gives you a reference point to see how much progress we've made in computers. Now, this is not the first technology that's gone through this rapid progression. Again going back to Victorian times, another university character, William Stanley Jevons, looked at steam engines, and he published a book with the title The Coal Question. What he observed in this book was that James Watt's newfangled coal-fired steam engine was much more efficient than Thomas Newcomen's, so you'd expect that coal consumption would go down, because the steam engines would use less of it. But actually what he observed was that coal consumption was going up instead. And you don't have to think about this very long to work out what's going on: as you improve the efficiency of a technology, the number of uses to which it can be put increases even faster, and so the total amount of energy that's consumed goes up rather than down. Of course we can see that with computers. There are so many computers in the world today that, even though they're formidably more efficient than they were 50 years ago, collectively they are consuming an increasing proportion of the world's energy resources, and this is a concern.
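As a quick sanity check on that two-litres figure, again using only the talk's round numbers:

```python
# If cars had improved as fast as computers: scale the UK's annual
# road-fuel consumption down by the ~25-billion-fold efficiency gain.
uk_fuel_litres_per_year = 50e9   # ~50 billion litres a year
efficiency_gain = 25e9           # ~25-billion-fold improvement

equivalent_litres = uk_fuel_litres_per_year / efficiency_gain
print(f"{equivalent_litres} litres")  # 2.0 litres
```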
It's not a problem to which I have a solution, but it's one of these factors that we have to bear in mind when we think about building more efficient technologies. The picture at the bottom right of this slide is a picture of Manchester in Jevons' time. Today it's a lot cleaner and nicer than it was then, because all of those coal-fired steam engines have been replaced by cleaner and more efficient technologies. Now, the progress in computers has been largely driven by one particular technology, which is the integrated circuit, and this picture shows a close-up of an integrated circuit. In fact what you're seeing here are mainly the layers of metal that connect the transistors together on the chip. If you look carefully you can see three different layers of metal in this picture, stacked vertically, and if you look very carefully, right at the bottom of the stack are the transistors that are built into the surface of the silicon itself; there are a couple of transistors towards the bottom left of this picture. Now if you took the chip out of your mobile phone and magnified it to this scale, the total area of the chip would be several square miles at this level of detail. So these are formidable technologies that we have available today. Again, an analogy I find useful is that the job of designing the wiring for a modern integrated circuit is about the same as the job of designing the road network of the planet from scratch. Count every road and public footpath on the planet, and you have about that many wires on a modern chip. So it's a formidable challenge, and of course on the chip, if you get any traffic jams, the chip won't work as intended. So we have to use the best tools available today to put these chips together and to guarantee that you don't see anything that resembles a traffic jam. That's delivered formidable technology at very low cost, and it's one of the reasons why computers are so prevalent in today's world.
Another reason is that we've learnt how to turn most things that we're interested in into numbers, because computers really can only handle numbers. So we've worked out how to turn sound into numbers; in the consumer marketplace this happened first, around 1980, when the compact disc came out. Photographs came next: in the 90s we saw advances in digital photography, which mean that an old-fashioned camera that uses chemical film is now quite a rarity. Then we went from still pictures to moving pictures, and only over the last 20 years has most broadcast TV converted from analogue systems to digital. Because we can turn all these things we're interested in into numbers, we can use computers to send the numbers around, to process them and to store them, and this has required a formidable amount of work. Of course we've not done everything that needs doing. We have more senses than hearing and sight, and we still don't fully understand how to digitise other senses such as touch, although there is progress on that, or smell, or taste. I'm not entirely sure that I personally wish to live to see the invention of smellyvision, but I'm sure it will come one day. Once we've turned the things we care about into numbers, we then have to be able to do three things with them: we need to be able to store those numbers, to process them, and to communicate them. The early history of computers was very much focused on the first of these, storage; digital storage was the biggest challenge in building the earliest machines. I remember in the 1980s talking to my colleagues in the computer business, wondering whether we'd ever be able to store a single track of music on a solid-state chip. Today I have my entire CD collection on my phone, for no particular reason other than that I can, and so you can see how much storage technology has advanced just in the 30 or 40 years since the 1980s. Now how does all this work?
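To make "turning sound into numbers" concrete, here is a minimal sketch of the kind of sampling a compact disc uses: measure the sound wave 44,100 times a second and store each measurement as a 16-bit integer. The particular tone and values here are purely illustrative:

```python
# Turning sound into numbers: sample a 440 Hz tone at the CD rate of
# 44,100 samples per second, quantising each sample to a signed
# 16-bit integer. Illustrative values only.
import math

SAMPLE_RATE = 44_100   # CD sampling rate, samples per second
FREQ_HZ = 440.0        # concert A
MAX_16BIT = 32_767     # largest signed 16-bit value

def sample_tone(n_samples):
    """Return the first n_samples of the tone as 16-bit integers."""
    return [
        round(MAX_16BIT * math.sin(2 * math.pi * FREQ_HZ * t / SAMPLE_RATE))
        for t in range(n_samples)
    ]

numbers = sample_tone(8)
print(numbers)  # the sound is now just a list of integers
```

Once the sound is a list of integers like this, a computer can store it, process it, or send it anywhere, which is the whole point.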
Well, here I have a demonstration, a very simple demonstration. This is an old iPad tablet computer, and I'm going to open a document on it. So that's a library of documents; I'm going to open this document, which happens to be a data sheet for the Spinnaker machine, and I'm going to flick through the pages there. And what I want you to do is think about what's happening. How does that work? It was built by engineers, so clearly we know in great detail how it works, but can you explain how it works? Well, I don't know if you can answer that question, but let's start. When I push my finger across that page, firstly the computer has to be able to sense my finger, and it does this because it has a very fine mesh of electrical sensors behind the glass screen which can detect where my finger is. So as I move my finger across the screen, the sensors can detect the finger and detect its movement. That's not the aspect that I'm particularly interested in here. When the machine detects my finger movement, it then has to make that image move as though I were pushing it with my finger. Now, that image is formed from about a million little picture elements, it's about a thousand by a thousand, and each of those picture elements can be programmed by the computer to be a particular colour. We see here white, purple, red, yellow; those colours can be changed. If I want the image to move a bit to the left, what I have to do is basically read the value of each of those picture elements and copy it into another pixel a few pixels to the left of the one I'm reading from, and if I do that for all the million pixels on the screen, then the image will have moved a little way to the left. What does it take to do that?
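The pixel-copying just described can be sketched in a few lines. This is purely illustrative Python rather than the low-level code a tablet actually runs, and it moves the image one whole pixel left per call, but the principle is the same:

```python
# Moving an image left by copying pixels: read each pixel's value and
# write it into the pixel to its left. A real tablet does the
# equivalent for ~a million pixels, ~25 times a second.

def shift_left(image):
    """Return a copy of `image` (a list of rows of colour values)
    moved one pixel to the left; the vacated right-hand column is
    filled with 0 (black)."""
    return [row[1:] + [0] for row in image]

# A toy 3x4 "screen" of colour values.
screen = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]
print(shift_left(screen))
```

Reading one pixel and writing another, repeated over the whole screen, is exactly the kind of very simple operation, done very fast, that the rest of the talk quantifies.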
Well, it takes about ten basic instructions to read the value of a pixel and write it in a new location, and there are about a million pixels, so it takes about ten million instructions to move the image a small distance. If I want that image to appear to move smoothly, then, as Hollywood has known for a century, what I have to do is update the image about 25 times a second. So 25 times 10 million means I need to be able to execute about 250 million instructions a second to create the impression that the image is moving smoothly to the left. And that's the key to how computers work: they can execute very large numbers of very simple instructions to create the illusion of all sorts of things that are familiar in the real world, which they synthesize in their artificial world. If I move to a more up-to-date iPad, then I may have two or four million pixels on the screen, and the amount of computation I require to move an image goes up by a factor of two or four, to maybe a billion operations a second. Now, if you're familiar with this technology, you'll know that most of this work is not done by the main processor; it's done by a specialized processor called the GPU. But the principles remain the same. So how do we understand how the machine implements this? Well, let's look inside. Here's a tablet; if we peel it apart, inside we'll find a circuit board with lots of black plastic chips on it, and if we peer inside these chips, we'll find they have contents with names like an arithmetic logic unit, an ALU, and some memory. Those are made out of logic gates; the gates themselves are made out of transistors; the transistors control the flow of little clouds of electrons around the system; and the transistors themselves, as with all matter, are made out of atoms. If we want to understand how the machine works, going down to the atom is too low a level: there are too many of them, it's too complex. If we just look at the tablet from the outside, we can see what it does, but we can't understand how it works. So to understand
the tablet, we have to go to an intermediate level of understanding, and the typical level that we use, for example on a computer science course, is the level of these kinds of block diagrams, where we have a block that's called a microprocessor and a block that's called memory, and we try to understand the relationship between these. We know how to build these out of logic gates, and we know how to build logic gates out of transistors, so we can engineer systems. If you want to understand what's happening at the level of the microprocessor, then you can look at the memory, which is basically just a very large set of pigeon holes. Each pigeon hole has a unique address; when you put something in a pigeon hole, it's expected to stay there until you go and look in the same pigeon hole later, and then you find what you put there earlier. You can move the contents of a pigeon hole across to the microprocessor, which has a much smaller set of pigeon holes called registers, and it can do things with those registers under the control of instructions. There's a typical program down the left of the screen here: load register 0, that's the pigeon hole in the microprocessor, from memory 3, that's the pigeon hole in the memory with address 3; do another load to register 1; then add register 1 to register 0 and put the result in register 2; and store register 2 in pigeon hole number 6 in the memory; and so on. So these are very simple data movements and very simple arithmetic operations, and the way the computer achieves its impressive results is by doing these very simple operations very fast. So that's the first key message of this talk: computers work by doing really quite simple things very fast indeed, and if you do enough simple things fast enough, you can create some quite rich and complex phenomena. Now, if we look at brains, then brains have some different properties. Firstly, brains don't do one thing after another; they do many things at once. Our brains are built from brain cells
which are called neurons, and we have just under 100 billion of those, that's 10 to the 11, and they're all operating at the same time. So instead of doing one simple thing, the brain is doing many things at once. Each of those neurons has a lot of connections, and if you add them all up, you find that inside your head you have something like 10 to the 15 connections, which are called synapses; that's a thousand million million synapses. Those synapses are not static: they adjust, they change, and indeed, to the best of our understanding, they are where all your memories are stored. Your personality is formed by how your synapses connect your neurons, initially of course as a result of your DNA, but they change throughout your life through experience, so in some sense you are your synapses. Now, if we look at other characteristics of the brain, we can see that brains are formidably power-efficient. If we try to build a model of a full human brain on a computer, we need a computer that will consume tens of megawatts of power, whereas your brain runs on about 20 watts. So the biology is much more efficient than the best microchip technology we know how to build, and some of the key to that efficiency is that, unlike the computer, which runs very fast indeed, the biological technology is quite low-performance. Nothing inside your head works on timescales much shorter than a millisecond, whereas computer chip designers worry about picoseconds, that's a millionth of a millionth of a second. Also, the communication inside your brain is quite slow compared with that in a microchip. Your brain is also very good at coping with component failure. While you sit there listening to me, if you're over the age of about 20, you're losing about one neuron a second. That's not a big problem: it only amounts to one or two percent of your neurons over the useful life of your brain, and your brain can easily accommodate that. If you start losing 20 or 30 percent then you have a problem, but that typically doesn't
happen in the healthy brain. And we've no idea how to make computers that can tolerate that kind of degree of component failure. But the key to the brain is this massive parallelism: lots of things are happening at once, and those are quite complex things. So brains do complex things quite slowly; computers do simple things very fast indeed. Now, I do have to offer a caveat here: of course our understanding of the brain is far from complete, so whenever I tell you something about how we think the brain works, we're generally not completely sure that we've got anything like the full story today. But to the best of our knowledge, these are facts which apply to the brain. Now again, if we want to understand the brain, we can look at the whole brain, but that's too big and complex to understand. We can take it apart, and we can see it's made from various regions or modules; the modules are constructed from neurons, which go ping every so often, that's how they communicate; the neurons contain DNA; and at the bottom, as with all matter, the DNA is composed of atoms. If we want to understand the brain, then there are too many atoms; it's too complex at the level of the atom. If we look at the top level, all we see is the function. So we're interested in levels in between those two extremes, and most of the time people who try to understand the brain look at neurons, the basic brain cell of which the brain is composed. Here's a picture of a small area of cortex, that's the outer layer of your brain; the darker blobs here are the cell bodies themselves, and the hairy stuff is the wires that connect them together. So you can see that this is a very complex system to begin to understand. If we look at the brain cell, then, this is the cell called a neuron. It's a bit like a logic gate: it has multiple inputs, typical neurons have thousands of inputs, and it has a single output, and on the output the communication mechanism is through spikes. So every
so often, if the inputs to the neuron are sufficiently interesting to that neuron, the neuron will go ping: it will send a little impulse out down its axon to all the neurons it connects to. So again, if you've got your head around the idea that your personality and memories are all formed in your synapses, then your thoughts are patterns of spikes between your neurons, and these spikes occur at rates from a few spikes a second up to hundreds of spikes a second. Just as neurons have many inputs, the output connects to many other neurons, and we have these synapses at the connections, and the synapses adapt in response to experience. That's really our best understanding of what's going on inside the brain. Now, here in Manchester we've built a computer to try to contribute to our understanding of the brain, and we set ourselves the target of putting a million mobile phone processors into one computer. From the outset it was clear that even with a million mobile phone processors we can only get to about a percent of the human brain, and in fact even that's a bit optimistic. Or you can think of it, if you prefer, as ten whole mouse brains; the mouse brain is conveniently very similar to the human brain but a thousand times smaller. So it's a machine that's bigger than most computers that can support brain models, but it's still a long way short of the full human brain. We designed this computer from the microchip upwards, so the heart of Spinnaker is this silicon chip, which took about five years to design in my lab, and we've had it since 2011. We package this chip with a memory inside the black plastic package at the bottom left of the screen, and then we can build machines with these chips. So we have the basic Spinnaker chip; we can tile a two-dimensional printed circuit board with these, and this board has 48 chips, that's 864 processors; and then we can assemble these boards into machines of different scales. The biggest machine is the one that's in the Kilburn building in Manchester, and that contains
over a million processors distributed across 11 typical data centre cabinets. This has been online at half scale since March 2016 and at full scale since November 2018, and it offers an open service which is accessible to users anywhere across Europe or around the world. It's supported under the European Human Brain Project, but it's openly available to anybody who wants to use it. Building these machines is quite interesting. Here are all those chips being arranged in a 2D mesh and then divided into units of 48 to put on circuit boards, and then these circuit boards have to be wired together. Here are a couple of people in my group, working very fast as usual, and they are involved in the job of wiring these boards together to form the large machine. So there are a lot of cables in the machine, and we don't want any of these cables to be too long or they won't work reliably. So instead of building the machine in a simple flat way, we fold it and interleave it in two dimensions, and then the longest wire that we need to connect any two boards together is just under a metre in length. So we can assemble these boards into the machine-room cabinets, and then these are built into a room that's been specially converted to have the cooling equipment necessary to keep the machine working reliably, and then we can wire them together. Here you see three of my group busy doing the wiring of the half-million-core machine. They're wearing headphones, not to listen to music or entertainment, but because the computer these headphones are wired to is giving them instructions, and the machine is also lighting up little LEDs to show them where to plug the wires in. When they've plugged both ends of a cable into the machine, the cable is tested, and so you can be sure that there are no mistakes made in the wiring, because they'll be detected very early and corrected. The entire wiring job took just under four and a half hours, and they looked suitably pleased with themselves at the end of it. So that was the
assembly of the big machine. Now, what can you do with it? Well, obviously a primary use is brain modelling, and one of the more detailed models that's been run on the machine is a model of a cortical microcolumn. This is a small fraction of the brain, just about a square millimetre of cortex, where the cortex is the outer layer of your brain, like a sheet, but screwed up to fit inside your head. This has 77,000 neurons and 285 million synapses; it's quite a complex model, and we can run it both on Spinnaker and on a conventional supercomputer and compare the results to check that we're getting sensible answers. A slightly more abstract application is to build a stochastic neural network that will solve sudoku and similar problems. This was developed by one of my PhD students and forms the basis of his PhD thesis, and it solves the hardest class of sudoku problem in about 10 seconds, which is a lot faster than I can do it; but then I don't have the rules of sudoku wired into my brain. In fact I don't really spend much of my life doing sudoku, so this is a more synthetic problem, but it shows that there's a lot of flexibility in modelling neural networks on the Spinnaker machine. So where does this leave us? What does this say about the future? Well, Spinnaker is just one example of a recent development in this area, and what you see if you look around is that there are many developments in computers in this direction of enabling machines to increasingly sense, and make sense of, their environment. These kinds of control systems can now be used to enable new sorts of products, such as driverless cars, which you hear quite a lot about in the news. They aren't really quite here yet, but they are coming. And at a more domestic level, robot vacuum cleaners; the example here is from Dyson, and this uses a fisheye lens, which you can see on the top, to look at your room, analyse what's there, understand where it is, and work out the route it should take around the room to clean your
carpet as well as it can. As we learn more about the brain, this trend of building machines that have cognitive capabilities will increase rapidly, I think. This is a relatively new phenomenon, but for your generation it will have significant consequences that you should be thinking about, and it is going to, I think, change life for everybody over the next decade or two. So, to conclude my talk, what are my key messages? I hope I've conveyed the fact that computers have progressed spectacularly in just over half a century; they work by doing simple things very fast indeed. Brains are also information processors, but they work in quite a different way from computers, a way that we are still not close to fully understanding. It does seem, though, that their principal mode of operation is that they do really rather complex things quite slowly, unlike computers, which do very simple things very fast. Here at Manchester, one of our contributions to the science of brains is to build the Spinnaker machine, which is a computer designed specifically to be good at modelling parts of brains, and we hope that an outcome of this research will be to help us understand how the brain works. In turn, we hope that that new knowledge will help us build better computers in the future.