Welcome to Manchester. I'm Jim Miles. I'm the local chair of this meeting. So it's my duty, or rather my pleasure, to welcome you all to Manchester so that we can celebrate the award of these two IEEE Milestone awards for the Baby and the Atlas computers. First of all, some really basic orientation. To those of you who are watching on Zoom, I'm sorry, this doesn't really quite apply to you. You can try and apply it in your own homes if you like, but it probably won't work. But these are things that I have to tell people. So first of all, the fire alarms and the exits: we're not expecting the fire alarm to go off today. If it does, it's a very complicated two-stage system where it starts to sound intermittently, and we don't have to do anything but get ready to think about it. But if it starts to sound continuously, we have to leave the building. There's a fire exit at the front there: go through that door, turn right and down the stairs, and you're straight out onto Oxford Road. Or the two exits at the back go out across the bridge; go left and you're outside. So I don't think it will go off, but just in case. There is a map of the Kilburn building, but hopefully by now most of you will have worked out where everything is. You should have been given a map on the way into the building. If you've taken a look at it, you'll realise that this is a nightmare building that was designed in the 1960s by some people who had some very strange ideas about architecture. Say no more. All right, so that's the general orientation. So back to everybody, including the Zoom audience: welcome. What we're going to do today is celebrate the award of the two IEEE Milestones, for the Baby and for Atlas. We're going to hear a bit about the history of computing in Manchester, and then we're going to enjoy the reception afterwards. And again, I'm sorry, Zoom audience, we can't really deliver wine to you, but perhaps you can arrange your own. So, how did we get here?
This is not a profound existential question; this is how we came to be in this room today. In September 2019, Simon Lavington and Roland Ibbett got in touch with me and said that they'd had an idea that Manchester should apply for some Milestone awards. And I think I might be misinformed here — I think, Roderick, you might have been involved at that stage as well. You were in Glasgow together in September 2019. Yes — so apologies, my slide is slightly wrong; Roderick was involved right at the beginning as well. They got together and decided Manchester should apply for some Milestones for the great computing achievements here. I joined shortly after that, later in September, so we were a team of four, all of whom are Manchester alumni and none of whom actually worked for the university any more by that stage, who decided that we'd try to do something for the university. In December 2019, Brian Berg — Brian is over here — was appointed as the IEEE advocate for the proposal. And then followed a period where we had many discussions about what form the proposals should take, what should be in them, how they should be written and everything else that you might think of. So by March 2021, which, if you fancy working it out, is sixteen months later, we actually reached the point of submitting applications. These then got approved fairly smartly by the IEEE History Committee and then the Board of Directors. But then we went into another hiatus, where we had a long discussion with the IEEE about whether we could include Tom Kilburn's name on the Atlas citation, which we eventually resolved by April 2022. So it was quite a long process to get to the point where we're actually here with the plaques hidden away behind the curtain. I tried to do some sums on this one and gave up trying to get any exact numbers, but between us we exchanged over 2,750 emails in this process.
And I can't calculate the total amount of data that we used, but it was definitely over 30 times the total capacity of the Atlas disc file. So, just as a sense of how things have changed in scale, there's that. So here we are. Today is the 74th anniversary of the day that the Baby ran its first programme. And it's very nearly 60 years from the inauguration of Atlas — Atlas will be 60 years old in December this year. So it's really quite a good time to have the meeting where we unveil the plaques. I will finish off — I'm not going to spend a great deal of time talking — by saying a little bit about the history of computing in Manchester. Manchester is probably now most famous for its post-war efforts in computing, but actually some really interesting things happened in Manchester going way back into the 19th century. And if you have time, either during the break between the two sessions or during the reception afterwards, there's a whole new set of displays on the first floor, running around the courtyard and into the Atlas lobby behind, which covers some of the history of computing in Manchester, looking at people like Jevons and Hartree and what they did before the Second World War, and then going on to look at some of the post-war stuff as well. So if you get a chance to take a look at those, please do. That's something which I've been leading, and which has turned into another equally enormous project alongside the Milestone awards, involving quite a lot of people. So please try and take a look, see what we've managed to produce between us, and I would like to thank these people for that. Steve McCann is the University estates designer who handled the overall design, and the visual impact of that display is really down to Steve.
Hailey Cox of faculty comms, who rewrote all of my appalling text and turned it into something readable, and who had the patience of a saint, I think, with me. Samantha Beath and the team from Manchester Museum, who produced some fabulous display cases for us. And Steve Rhodes, computer science electronics technician — I don't know whether Steve's actually in the room, but Steve has done some fantastic work refurbishing some of the items that are out on display. So if you get a chance, please take a look. One of the items that's on display I will briefly mention, because this is one of the things that I did myself. Chris Burton — I can't see where Chris is, but Chris did a tremendous project rebuilding the Baby itself back in 1998, leading the team that built the replica Baby that some of you will have seen this morning in the Museum of Science and Industry. I did a rather smaller and less ambitious project, trying to rebuild the photograph of the Baby, which still turned out to take me about a year. So, there's a famous original photograph of the Baby, or Manchester Mark I, from December 1948, which shows the complete machine as a panorama. It was actually taken as 20 separate photographs by Alec Robinson, who was a technician in the department. And I discovered that we actually have 15 of the original negatives from that. So I decided that I'd go on a mission to rebuild the panorama. But for that, I had to try to find the best quality prints that I could, to fill in the spaces that we didn't have negatives for. So I trawled around. A thank you particularly to Sylvia Robinson — Alec, unfortunately, is no longer with us, but Sylvia, his wife, is, and still has some of his material, so she lent me some prints. Jamie Robinson, who works in university archives, did the digitisation and constructed the panorama. Chris Burton lent me some prints. And James Peters, University Archivist, gave me access to some of the prints in the university archive.
And here's the result. You really need to see it blown up full scale — it's actually blown up to four metres long in the display area over there, so you really can see some of the detail. This is the whole thing. That's what we had before; this is what we have now. At that scale it's very difficult to see the difference, so I've blown up a couple of little regions here. That's the best quality that we had on the panorama before, and now nearly all of it is at that resolution. And that's displayed nearly full size in the display area, down at the far end of the first floor corridor. OK, so that's all that I want to say. We do have two minutes if anybody wants to ask me any questions about any of this, but I think this wasn't really meant to be the main focus of this afternoon. If anybody does have any immediate questions, then I'll try and answer them, but otherwise we'll move on. No? OK.

So with that, I would like to introduce the next speaker, who is Steve Welby. Steve is IEEE Executive Director and COO. Steve has more than 28 years of US government and industrial experience in technology and product development, including US presidential appointments as Deputy Assistant Secretary of Defense for Systems Engineering and then Assistant Secretary of Defense for Research and Engineering, in which role he served as Chief Technology Officer for the US Department of Defense. Steve is now the Executive Director and COO of the IEEE, which is a global organisation of over 400,000 scientists and engineers in over 160 countries. So please will you welcome Steve. Thank you.

Thanks so much, Jim — and I think this is on; hopefully you can hear me. Great. So just very quickly, good afternoon everyone. Thanks for joining us all today here at the University of Manchester for a very special celebration, dedicating two IEEE Milestones.
I'd also particularly like to share a warm welcome to all of the IEEE members joining us from across our United Kingdom and Ireland Section. It's a pleasure to be here today representing the IEEE Board of Directors and to share my congratulations and best wishes on the occasion of these Milestones. For those of you who are unfamiliar with IEEE, we're a global charitable association dedicated to a clear and simple mission: to advance innovation and technical excellence for the benefit of humanity. IEEE is the world's largest technical professional organisation, involved in all aspects of the electrical, electronic and computing fields and the related areas of science and technology that underpin our modern civilisation. And today we have a large focus on supporting the general public, as well as being a driving force in supporting and stimulating the technical progress that underpins economic growth and improves standards of living and well-being around the world. IEEE itself stands for the Institute of Electrical and Electronics Engineers. However, IEEE's membership is made up of engineers, scientists and many other allied professionals, including computer scientists, software developers, information technology professionals, physicists, physicians and many others, in addition to our traditional core of electrical and electronic engineering. As Jim mentioned, we have over 409,000 members in 160 countries around the world, all committed to advancing their professions, connecting with peers globally and giving back to their communities. And maybe most importantly, we have 125,000 student members who really represent the future of our fields, and supporting those young people is a critical activity of IEEE. Among the things that we do, we have a very large publication program that supports our technical communities: today we publish over 200 journals, magazines and transactions each month. We sponsor over 1,900 technical conferences around the world.
These are events where researchers come together to discuss emerging research work, to connect and network with collaborators and to accelerate discovery. We play a major role in educational activities, from our global pre-university STEM programs, our support of the global IEEE-Eta Kappa Nu engineering honor society, our efforts in support of curriculum development and accreditation, and our efforts in supporting members with lifelong learning to help them keep up with the continuing pace of technical evolution. And then finally, as one of the largest standards organizations in the world, IEEE has been committed to the development of standards for over a century, helping to shape the trajectory of technology through engagement with industrial communities internationally. Today IEEE standards shape entire industrial sectors, helping to ensure the safety, interoperability and compatibility of products that are used by billions of people around the world. As an example of IEEE standards, I bet that almost everyone in this room has in their pocket at least one device that uses IEEE Standard 802.11, which is commonly known as Wi-Fi. And so that's one of the many projects with enormous reach that IEEE supports. We also help to shape our communities and professions by engaging in a larger societal conversation on critical matters and by engaging in public policy. At IEEE we know that the advancement of scientific and technical knowledge has always been at the heart of improving the quality of life, and therefore it's critical that decision makers understand the trajectory of technology and its impact.
And today we continue to work on those kinds of global challenges, helping to inform the general public and our leadership around the world on how technology can be used to solve important problems — be it how to provide clean water, deliver reliable energy, ensure food production in a complex environment, mitigate climate change, or provide greater access to health care and education around the world. Today we're celebrating a historic moment, and I truly believe its roots go back to the time when electricity and telegraphy were new — back to 1884, when the American Institute of Electrical Engineers was founded. Thirty years later another technical revolution appeared with the emergence of vacuum tube electronics and radio, and in 1912 the Institute of Radio Engineers was founded — and I see that some of the papers outside, on the early work on these two machines, were published in IRE journals. And in 1963 those two communities came together to form what's now IEEE — interestingly, just at the birth of digital computing, a field that exploded from the 60s onward. So I truly believe IEEE represents 138 years of supporting technical revolutions in energy, communications and computing: the underlying disciplines of hardware and software, and all of the industrial activities, consumer products, transportation applications, medical applications and other domains that have helped shape modern technical life. I personally believe that the profession of engineering has a deep, rich, long and important history, and it's essential that those pioneers and those pioneering technical discoveries that have shaped the world around us be preserved and recognized, and that we communicate them to generations to come. I think it's an important part of IEEE's mission to help preserve the legacy and heritage of our profession, to recognize great achievements, and to promote the importance and impact of engineering.
And so today we're going to commemorate two major milestones, two major landmarks in computing that occurred here in Manchester, built using technology developed in the Second World War to support radar and communications equipment. The Baby was a critical prototype, eventually leading to the development of the Ferranti Mark I — really the world's first commercially available computer — which derived from the work that went into building the Baby, and we'll hear about its initial teams as well. It served as the testbed for the Williams-Kilburn tube, a remarkable mechanism, an early form of random-access memory. And with components like the Williams-Kilburn tube, the Baby, the machine that we're going to recognize today, really had all the elements of a modern electronic digital computer. It was all here, at its birth, in Manchester. And today all modern computers are still built on the same basic principles that were demonstrated in this machine. We are also going to commemorate one of its successors, the Atlas, also developed here in Manchester, which introduced the concepts of virtual memory and memory paging, allowing a machine with a small, limited amount of physical memory to solve much larger problems by using slower storage to overcome the limits of physical memory size. And of course today, modern computers operating on problems of enormous scale take advantage of that exact same technique, in every system ranging from the computer in your pocket to the supercomputers being deployed today. Manchester's rich legacy of industrial innovation, scientific discovery and ideas has changed the world. And the pioneering work done here, the work we're going to recognize today, really helped shape life as we know it. So before I wrap up, let me just thank you again for the invitation to join you today. It's always great to join these Milestone dedications.
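The paging idea described here can be sketched as a toy address translator. This is an illustration only: the class name, the 512-word page size and the oldest-first eviction policy are stand-ins chosen for clarity, not a reproduction of the Atlas hardware (which paged between core and a drum store using its own replacement algorithm). The essential trick is the same: a virtual address is split into a page number and an offset, and pages not resident in the small physical memory are fetched from slower backing storage on demand.

```python
PAGE_SIZE = 512  # words per page (Atlas also used 512-word pages)

class ToyPagedMemory:
    """Minimal demand-paging sketch: a few physical frames backed
    by a larger, slower store (a dict standing in for the drum)."""

    def __init__(self, num_frames: int):
        self.num_frames = num_frames
        self.frames = {}   # page number -> list of words (resident in "core")
        self.backing = {}  # page number -> list of words (out on the "drum")
        self.order = []    # resident pages, least recently used first

    def _ensure_resident(self, page: int):
        if page in self.frames:
            self.order.remove(page)          # refresh its position
        else:
            if len(self.frames) >= self.num_frames:
                victim = self.order.pop(0)   # evict least recently used page
                self.backing[victim] = self.frames.pop(victim)
            # page fault: fetch from backing store, or start with zeros
            self.frames[page] = self.backing.pop(page, [0] * PAGE_SIZE)
        self.order.append(page)

    def read(self, addr: int) -> int:
        page, offset = divmod(addr, PAGE_SIZE)
        self._ensure_resident(page)
        return self.frames[page][offset]

    def write(self, addr: int, value: int):
        page, offset = divmod(addr, PAGE_SIZE)
        self._ensure_resident(page)
        self.frames[page][offset] = value

# A program can address far more memory than the physical frames hold:
mem = ToyPagedMemory(num_frames=2)
for i in range(5):
    mem.write(i * PAGE_SIZE, i)  # touch five different pages with two frames
assert mem.read(0) == 0 and mem.read(4 * PAGE_SIZE) == 4
```

The point of the sketch is that the program above never notices there are only two frames: the translator moves pages in and out behind the scenes, which is exactly the illusion virtual memory provides.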
At these events we get to observe and preserve our history and our understanding of how events have shaped the trajectory of technology, the trajectory of our communities and the trajectory of our institutions. History is not something obscure and unimportant; history actually plays a vital role in our everyday lives. I believe the lessons we learn from the past help us to achieve greater influence over our own futures. History serves not just as a model of who and what we were, but also of what we want to be and what we want to champion. Every technological advance that's made today builds on the work of those who came before us and led the way. IEEE's legacy is a story of innovation and collaboration: IEEE's founders came together in a spirit of cooperation to support the public with technological improvement. And today it's a pleasure to join my colleagues in recognizing and celebrating the significance of the pioneering landmark developments in the history of computing that took place here in Manchester. Thank you.

Thank you. I'd also like to welcome José Moura, a former president of the IEEE. José is the Philip L. and Marsha Dowd University Professor at Carnegie Mellon University, where he works in statistical signal processing and telecommunications. He has served the IEEE in many roles, including Editor-in-Chief of the IEEE Transactions on Signal Processing, President of the IEEE Signal Processing Society, Director of Division IX, and, in 2019, President and CEO of the IEEE. So once we've sorted the microphone out, we'll hand over to you, José. Thank you.

Is it okay? Okay, great. Thank you, Jim. And I apologize, because I have about a two-hour speech — so sit tight. So it's an amazing feat that we are here today to commemorate: tremendous engineering talent and pioneering computer accomplishments. The world's first stored-program computer. The world's first transistorized computer.
And what was, at the time, the world's fastest computer, in 1962. So, 74 years ago — and interestingly it was exactly on June 21, except that it was a Monday, and today is, what, Wednesday? Tuesday? Tuesday, thank you, I lost track; the other day I was talking to someone and insisting that Saturday was a Friday. So 74 years ago today, a landmark development in the history of computing took place here at the University of Manchester. Those of you associated with the university should be extremely proud. At 11 a.m. on the 21st of June 1948, the Small-Scale Experimental Machine — a too-confusing name, so they renamed it the Baby — ran its first program. It's interesting: I was talking to Chris Burton, and I said, oh, so the computer took 52 minutes to compute the highest factor of 2 to the 18th. And he corrected me and said, no. The computer took only, let's say, a couple of minutes to compute the highest factor of a small number — 19 — and the answer was one. The only thing they wanted was to prove the concept: that the computer, through an electronically stored program, would compute the right answer, no matter how simple or how difficult the problem. And that's it. Over time they did go to 2 to the 18th and to the 3.5 million operations; that was when they felt comfortable that whatever they had there really worked. So I was talking with Steve afterwards, and I was telling him that really what they did was to take a very complex problem and reduce it to the simplest instantiation that could prove what they wanted to prove — which was that an electronically stored program would run a computation correctly, in, let's say, real time. So you've heard the stories, but I think the feat is such that we should repeat it now, possibly with names.
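The computation described here — the highest proper factor of a number, found on a machine with no divide instruction — can be sketched in a few lines of modern Python. This is an illustration of the idea only, not the Baby's actual program or instruction set; the repeated subtraction in the inner loop mirrors how the original tested divisibility without hardware division.

```python
def highest_proper_factor(n: int) -> int:
    """Largest factor of n smaller than n itself, found by searching
    downward from n - 1 and testing each candidate with repeated
    subtraction instead of a divide instruction."""
    for candidate in range(n - 1, 0, -1):
        remainder = n
        while remainder > 0:
            remainder -= candidate  # repeated subtraction stands in for division
        if remainder == 0:
            return candidate        # candidate divides n exactly
    return 1

# The famous later run worked on 2 to the 18th:
print(highest_proper_factor(2**18))  # -> 131072

# A prime input gives 1, matching the proof-of-concept run described above:
print(highest_proper_factor(19))     # -> 1
```

The brute-force downward search is what made the full 2^18 run take millions of operations on the Baby, while a prime like 19 finishes almost immediately with the answer 1 — exactly the contrast drawn in the anecdote.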
And so it started early in 1948, when Freddie Williams, newly appointed as head of the electrical engineering department at this university — I think at the time it was the Victoria University of Manchester, now the University of Manchester — together with Tom Kilburn, who had joined him as a 22-year-old wartime assistant, came here after their work on radar during World War II, and they thought: why don't we try to take a first step forward in computing, towards a machine capable of carrying out multiple tasks on command. But to do that, they were missing a component, which was the memory. That had been realized by different people, but they said, okay, if that's the real problem, let's solve it. So what they did, of course — and you know this better than me — was to further develop the Williams tube, which became known as the Williams-Kilburn tube, built around a cathode ray tube, at the time commonly associated with old-fashioned, bulky televisions. To test their idea, they built a prototype stored-program computer, as you've heard already, using war-surplus supplies, and once finished, this Baby was only 17 feet long — so it was a little bit big — and it weighed only one ton. Okay, so you think it's too big, too bulky? Well, compare it with the competition at the time, the ENIAC at the University of Pennsylvania: it was 95 feet long, so 5.5 times bigger, and in terms of weight it was hardly slimmer, at 27 tons. So it does make a difference. Okay, the Baby was also instrumental, I'm told, in attracting Alan Turing, and the rest is history. Soon, of course, came a larger computer. What Chris also told me is that they kept improving — it was not that they were done and turned to go on to something else. They kept improving, in a whole succession of machines, and eventually came the Mark I, the first commercially available computer.
So the other interesting thing about this project is that it started a long-running partnership between the university and Ferranti — and not just the partnership with that particular company: I don't know if it started at the University of Manchester, but at the least it built on and continued the partnership with industry, which is a great characteristic of great universities like the University of Manchester. So, fast forward now to the Atlas: again an effort in partnership between the university and Ferranti, and Tom Kilburn, the same Tom Kilburn of the Baby, is now also the leader of that joint university team with Ferranti. And the interesting thing is that the Atlas computer appeared a year before the IBM 360 was announced. So it's a great feat of technology as well. And another interesting point is that in 1964 Tom Kilburn and his team moved from the Department of Electrical Engineering to found the new Department of Computer Science, which apparently is the first computer science department in the United Kingdom. I can't tell you it's the first in the world, because my university would tell me, oh, wait a minute, wait a minute — so I'll just leave it at that. Well, it's really a great pleasure to join you as we celebrate the historical significance of this site and the innovative spirit and pioneering work that has taken place here. And this tradition of computer innovation at the University of Manchester, of course, continues today, and we will hear about that later this afternoon. I'd like to thank everyone who took the time to join us today for this celebration. In particular, I'd like to thank our hosts, Jim Miles and Richard Jones. Dedicating an IEEE Milestone takes tremendous effort, and the members of the IEEE United Kingdom and Ireland Section are to be commended for their generous gifts of time, energy and expertise in making this a reality. I'd like to thank Professor Mike Inslee, somewhere in the room,
and Professor Izzet Kale, who couldn't be present — the past chair and the chair of the IEEE UK and Ireland Section — and all of the members of the local committee responsible for organizing this event, in particular Professor Rod Muttram, for their tireless efforts to make this a reality. I also thank Professor Muttram for inviting me to participate here today. So you've heard, and you'll hear, details of the IEEE History Committee, which is the organization within IEEE that promotes the history of science and technology and supports events like today's; I will leave it to Brian Berg to address that. So let me just finish with some numbers: today's Milestones are IEEE's 225th and 226th worldwide, and the 18th and 19th for the UK and Ireland Section. Now, Milestones recognize technology advances, but from our perspective they are also an opportunity to educate the public at large on the role that technologists, like many of us here, play in advancing the world. I am honoured to be a part of this celebration, to recognize these pioneering events and the people behind them. They serve as landmarks both in the progress of technology and of civilization. It's my pleasure to participate in these IEEE Milestone dedications. Again, I'd like to thank all the organizers of the event. I applaud these efforts to celebrate and safeguard the contributions to society made by countless technologists as they seek to advance technology for the benefit of humanity. Thank you very much.

Thank you. So next I'd like to welcome Richard Jones. Richard is Vice-President for Regional Innovation and Civic Engagement at the University of Manchester here. He's an experimental soft matter physicist, a fellow of the Royal Society, and he won the Tabor Medal of the UK's Institute of Physics for his contributions to nanoscience in 2009.
He's the independent science adviser to Innovation Greater Manchester, and he chairs the Greater Manchester Civic University Board, bringing together all five universities to support the economy, the people and the communities of Greater Manchester. Thank you very much.

Well, it gives me huge pleasure — it's a great privilege — to represent the University of Manchester at this really important recognition of the central role of the university, working with its industry partners and Ferranti, in the history of computing. I just want to say a few words about the history of the university and how this discovery fits into that broader history. So the University of Manchester absolutely is one of the original civic universities. It was founded in the city where the first industrial revolution began, and it's been intimately connected with the city's evolution as a hub of innovation of world importance over nearly two centuries. The origins of the university are in the Mechanics' Institute, which was the precursor to the University of Manchester Institute of Science and Technology, and in Owens College, which turned into the Victoria University of Manchester. The Mechanics' Institute was founded in 1824, at a time when the first industrial revolution had really got going. That was a revolution based on the factory system and the deployment of large-scale mechanical power, first from water and then from steam engines. Of course, as this first industrial revolution took hold, there was a need for new institutions to connect the knowledge of the fast-developing new science with these new industries. It's in this culture of improvement that the Mechanics' Institute first, then Owens College, were founded. Owens College had been founded in 1851 by a group of industrialists recognising the need for people in Manchester to have a university education — but a university education that was suitable for an industrial city built on high technology, and it really was the high technology of the time.
Of course, one has to remind oneself that at that time Oxford and Cambridge were still essentially focused on training the dimmer younger children of the aristocracy for careers in the church. So going into the second half of the 19th century, what economic historians sometimes call the second industrial revolution was taking root. This was an industrial revolution based on the direct application of the new sciences being developed at the time, particularly the chemical sciences and the electrical sciences. So we saw chemical engineering and electrical engineering really becoming the basis of these new industries. And at the time there was a real paranoia, actually, about the position of the UK compared with Germany as the rising new power. This led to new institutions being formed, and to the development of formal research and development as something that happened in industrial concerns. And there's a particularly important Royal Commission — the Royal Commission on Scientific Instruction and the Advancement of Science, of 1872 to 1875 — which is actually a really important document in the creation of the UK's science system, perhaps not well appreciated. One of the things it did was look at the universities. It looked at Owens College, recognised the progress that was being made and the importance of Owens College for Manchester, and for the first time it was recommended that the government should partially fund Owens College, along with some of the other new institutions of the time. At the time a chemical industry was developing, which came out of the textile industry's need to bleach and dye textiles. So in the late 19th century that technical school — which is what the Mechanics' Institute had turned into — led the development of chemical engineering as a discipline, and electrical engineering soon followed.
We got companies like Metropolitan-Vickers and later Ferranti establishing really innovative businesses in this fantastic new technology of electrical engineering. So after the First World War, this was a hugely fruitful time at Manchester — many, many great innovations were happening. I just want to mention a couple of them. One of them is Douglas Hartree, and I'm particularly keen to mention Douglas Hartree: he's one of the founders of theoretical chemistry. He devised new methods to solve the new quantum mechanics that had arrived — these new equations of quantum mechanics — and he devised ways of solving them for many-body systems. In that way it was a foundation of quantum chemistry. In my own career, as a young lecturer in my own field, I needed to use Hartree's equations. Of course, I had a DEC Alpha to try and solve the Hartree equations. Hartree didn't. Hartree actually needed to go and build himself a computer to solve his equations. He built the UK's first differential analyser — that's an analogue computer — to solve these equations that he'd found for understanding the quantum mechanics of atomic structure. He did that with Metropolitan-Vickers, and they were later able to commercialise it. Getting on to the subject we're talking about today: Freddie Williams was an engineering student at Manchester. He did a DPhil at Oxford as a Ferranti scholar, in fact, looking at some very fundamental issues in electronics, and came back to Manchester as a lecturer. Crucially, in the Second World War, Freddie Williams went to work on radar, and Tom Kilburn — a very young Tom Kilburn — joined his team. They came back in 1945 to work on computers, and the story that you'll hear much better from others than from me leads up to the Manchester Baby and on to Atlas. But there's one thing, a comment, a quotation from Freddie Williams, that really leapt out at me, and I really want to quote it because it's lovely.
So Freddie Williams said: "We knew nothing about computers, but a lot about circuits. Professor Newman and Mr A. M. Turing in the mathematics department knew a lot about computers and substantially nothing about electronics. They took us by the hand and explained how numbers could live in houses with addresses and how, if they did, they could be kept track of during a calculation. The collaboration was very fruitful." It was indeed a fantastically fruitful collaboration, and I want to draw out some of the themes here that I think have recurred throughout the history of the University of Manchester: that interaction of pure science and the development of new technology; the combination of mathematical excellence and engineering prowess working together; and then that partnership between industry and academia to get the products out into applications in the wider economy. I think these themes continue in the University of Manchester. They continue in its very fine computer science department and its engineering departments. You'll hear more from Steve Furber about some of the great work that's going on in the computer science department today, and I hope that will include some of his own fantastic work on neuromorphic computing and biologically inspired architectures. But I think the university, which now combines those great traditions of the Victorian University of Manchester and UMIST, continues to be a great civic university, with that combination of rigour and practicality, and a commitment to driving the economy of the city and the nation while still having a world impact with its discoveries. So as a university we take huge pride in our past achievements, the two great ones that we're celebrating today, and we have great confidence in the future. Thank you very much. Thank you. So next I'd like to introduce Brian Berg. 
As I said earlier, Brian actually acted as the advocate for our proposal when it went to the IEEE History Committee, and made a lot of very valuable contributions that helped us to refine the proposal and make sure that it went through the History Committee without really any problem once it finally reached them. So thank you to Brian. Brian is an independent consultant working in Silicon Valley, California, where he specialises in flash memory and data storage. He's a long-time IEEE volunteer and a member of the IEEE History Committee, and he's been directly involved with 24 milestones, including the two that are being dedicated today. Very good. Thank you. Can everybody hear me okay? Whoops, let's see. Here we go. So I'd like to first recognise all the people I worked with, the key team on this milestone effort. You've heard some of these people, including Jim Miles just now, but I just want to recognise Roland, Rod and Simon. They were a great team to work with, and they put a huge amount of effort into this. So thank you very much. I also want to recognise one of our expert reviewers. All of these milestones have expert reviewers to verify their accuracy. A fellow named Thomas Haigh, who teaches at both Wisconsin in the States and also in Germany, and who has a recent book on modern computing, helped us craft the words that precisely describe how the Manchester Baby is unique in the world. So I just want to tell you a little bit about the milestone program. You've already heard a little bit about it. Any milestone needs to be at least 25 years old, and there's a bronze plaque that commemorates the milestone, with up to 70 words on that plaque that describe it. So here's a photograph of a plaque for the 1989 CDMA invention. Anybody in this room who's got a cell phone in their pocket uses technology based on CDMA. It's a very important technology that came out of Qualcomm in San Diego, California. 
Here's a plaque that's mounted outside the Qualcomm headquarters, a very prestigious location because it's such a prestigious thing to get. So here's a close-up image of what these plaques look like. This is one of the milestones that I worked on, called the Apollo 11 Lunar Laser Ranging Experiment. The Apollo 11 astronauts, when they were on the moon on July 20th, 1969, placed what's called a retroreflector on the surface of the moon, so that you could shine a laser from Earth, get it reflected back, and, based on the speed of light, determine the distance from the Earth to the moon with centimetre accuracy. There's a lot more that came out of that as well, but it's a really interesting milestone to have worked on. So milestones are one of the most visible ways for IEEE to celebrate important technology achievements and show them to the general public. 224 of these have been dedicated so far, so the Baby and Atlas will bring that count to 226. And they are the 18th and 19th milestones here in the United Kingdom and Ireland section of IEEE. Now, I read a book recently called A Biography of the Pixel, by a fellow named Alvy Ray Smith, who co-founded Pixar. He had done some research and went to various places around the world to find out some information about the background of pixels. He had not known that the first pixels were actually created with the Manchester Baby. Now, pixels were far from the minds of Williams and Kilburn when they were creating this machine. However, coincidentally, a photograph that Tom Kilburn took in 1947 of the CRT in the Manchester Baby is the first known photograph of a pixel. And when he came here, a fellow named Brian Mulholland, probably in the audience right now, programmed the demo machine, the replica, so that it would display the word Pixar, because he knew that Alvy Ray Smith was a co-founder of that company. 
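The ranging arithmetic behind that experiment is simple enough to sketch. Here is a minimal illustration in Python, using a representative round-trip time rather than a measured value from the experiment:

```python
# Illustrative only: recover the Earth-Moon distance from a laser round-trip time.
# The round-trip time below is representative, not a measured value.

C = 299_792_458.0  # speed of light in m/s (exact, by definition of the metre)

def one_way_distance_m(round_trip_seconds: float) -> float:
    """Light travels to the reflector and back, so halve the round trip."""
    return C * round_trip_seconds / 2.0

# A typical lunar round trip is roughly 2.56 seconds.
d = one_way_distance_m(2.56)
print(f"{d / 1000:.0f} km")  # prints the one-way distance in km (roughly 384,000)

# Centimetre accuracy in distance corresponds to timing the round trip
# to within about 2 cm / c, i.e. on the order of tens of picoseconds.
timing_window_s = 2 * 0.01 / C
```

The hard part of the real experiment was of course not this division, but detecting the handful of returned photons and timing them to that precision.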
So here's that photograph, taken in 1947, which shows one other aspect of the importance of the Baby. Another milestone that I was recently working on was for the computer graphics work developed at the University of Utah in the States. Dr Ed Catmull, who was a Pixar co-founder with Alvy Ray Smith and who has received six Academy Awards, was one of the graduates of that university. He's just one example of the technology that came out of that. And so Toy Story, released in 1995, was the first fully computer-generated full-length motion picture. So there's a gap of 48 years, from 1947 to 1995, from the first pixels to a motion picture created only from computer-generated pixels. For another milestone, I worked with Dr Eli Harari. He's a 1969 graduate here at Manchester, and he was one of the key people who demonstrated how flash memory could be made to work reliably. He did some initial work in the 1970s, and he also founded SanDisk in 1988. That was the very first milestone I worked on with him, and it was dedicated in 2012. Some of the words from that milestone were recited by President Obama when Harari received the National Medal of Technology and Innovation. So that's just an example of how important these words are. These words are supposed to be readable perhaps 100 years from now, so that people can recognise and understand what was being celebrated. Now, if you want to find some milestones here in the UK and Ireland section, there's a website called the Engineering and Technology History Wiki, or ETHW. If you go to that site, you can go to any place in the world, scan around a map, zoom in and find milestones. So this map shows the 18 that are in the UK and Ireland section. And if you zoom in on Manchester, you'll find the two plaques that are going to be mounted here in the city. 
This is a screenshot, so I only have the name Atlas Computer for one of them, but were you to hover over that other little purple marker you would see the Baby milestone plaque. Some other milestones that I've found in travelling around the UK and Ireland myself have included the first transatlantic telephone cable laid on the bottom of the ocean, called TAT-1; there's a plaque in Oban, Scotland, and the cable actually became part of the hotline connecting the White House to the Kremlin in August of 1963. Another important one is Bletchley Park. If you go to the country house entryway you'll find a plaque, as you see in the lower right here. Get your own selfie there if you'd like. I think many people know about Bletchley Park and the very important developments there that shortened World War II. Also, if you go to Abbey Road Studios, you'll see a milestone plaque to the right of the entryway. That honours the fact that a gentleman, Sir Edward Elgar, had opened the EMI recording studios there, and there was a patent filed right around that time about stereo sound reproduction. Three years later at Abbey Road, the first stereo sound recording was created by the London Philharmonic Orchestra. Of course, everybody knows Abbey Road from the Beatles, but Abbey Road has a long history, including this important technology step to stereo recording. You can also go elsewhere in the world and find other milestone plaques. Go to Japan and find one for the bullet train; go to the Netherlands and find one for the compact disc player. You can go to India and find one for the Giant Metrewave Radio Telescope, which is a set of 30 antennas that have been set up there in India. Another milestone that I was involved with, dedicated last year, actually has plaques in three locations: two in the United States and one near Pisa, Italy. The gravitational wave antennas there confirmed in 2015 a prediction of Einstein's general theory of relativity with regard to what gravity is. 
Another one I worked on was for Shakey the Robot, the world's first mobile intelligent robot, which also saw major developments with regard to computer vision. The one and only Shakey is located at the Computer History Museum in Silicon Valley, and there's a plaque for it there. I also worked on milestones for the Apple I, Apple II and Macintosh, including with Steve Wozniak and the Macintosh team. So if you ever do visit Silicon Valley, go to the Computer History Museum and you'll see a set of 13 duplicate plaques. I've started a program in which duplicate plaques are installed at the museum, because the original plaque goes into the historically important location; you can create a duplicate plaque, and I've started the program of having these plaques at the museum. Thank you very much. Thank you. So now we've reached the moment where we're finally going to unveil the plaques. So can I invite Steve, Richard and Jose down to the corner over here, and we'll finally reveal the plaques, which are hiding behind these lovely purple curtains. Okay, so basically what's happening here is that the IEEE are formally handing the plaques over to the university, so Steve and Jose are going to read out the citations of the two plaques, and then after that Richard will formally receive them on behalf of the university. So Steve, Richard, I'd like to hand the microphone to you. So I'll read the citation on the Baby computer plaque. Manchester University Baby Computer and its Derivatives, 1948-1951. At this site on 21 June 1948 the Baby became the first computer to execute a program stored in addressable read-write electronic memory. Baby validated Williams-Kilburn tube random access memories, later widely used, and led to the 1949 Manchester Mark I, which pioneered index registers. In February 1951, Ferranti Limited's commercial derivative became the first electronic computer marketed as a standard product delivered to a customer. 
And for the second plaque today, the words I'll read will be on the plaque that will be installed at what's now the Zochonis building, where Atlas was invented and installed; the plaque of course goes in after the event today. It'll read: Atlas Computer and the Invention of Virtual Memory, 1957-1962. The Atlas computer was designed and built in this building by Tom Kilburn and a joint team of the University of Manchester and Ferranti Limited. The most significant new feature of Atlas was the invention of virtual memory, allowing memories of different speeds and capacities to act as a single, large, fast memory separately available to multiple users. Virtual memory became a standard feature of general-purpose computers. Thank you. So with that, we will finally unveil the plaques. And here they are. Well, on behalf of the University of Manchester, we're enormously honoured to be recognised in this way by the IEEE, a fantastically important and august institution. And I'd just like to offer my thanks to everyone in the IEEE who's worked to make this happen, and to everybody in the University of Manchester who's contributed to this really important recognition of a very important piece of technological history. Thank you very much. Thank you. I shouldn't have. Is that okay? Right, good. Welcome back. So we're into the second technical session of the meeting, and now I'd like to introduce the first speaker, who is Simon Lavington. Simon graduated in electrical engineering from the University of Manchester in 1962, which coincidentally is the year that the Atlas computer was inaugurated. Simon was a member of Tom Kilburn's computer design team from 1962 until 1981. In 1986, Simon joined the University of Essex, where he worked until 2002, and he's now an Emeritus Professor at the University of Essex. 
Simon, as a sideline whilst he was at the University of Manchester, developed an interest in the history of computing, and wrote his first book on the history of Manchester computers in 1975. Since then, he's written six books and seven published academic papers on the history of computing, with a particular emphasis on the computers of Manchester. So with that, I give you Simon. Thank you, Jim. This afternoon, I'm going to describe the story of a memory invention and its consequences for computing, both at Manchester and in distant parts. I'll go through the phases of this five-year history of research quite briefly first, and then in part two of my talk I'll choose my own five highlights, technical highlights, that I think have longer-term consequences for computing. So, to begin at the beginning: in the summer of 1946, two electrical engineers, Freddie Williams and Tom Kilburn, started researching computer memory systems, storage devices, at the government's Telecommunications Research Establishment, TRE, at Malvern in the West Country. TRE was where a lot of the innovative radar research took place during the Second World War, and it became a centre of excellence for electronics. By the autumn of 1946, they'd managed to store just one bit and file one patent. Now, unfortunately, we don't have a picture of the early apparatus, but what I do have instead is a tidied-up commercial version of what became known as the Williams-Kilburn tube, which stored binary digits as elements of electrostatic charge on the phosphor coating of a cathode ray tube. When these elements of charge were bombarded by the electron beam, interesting voltage signals were developed, which were picked up by the pickup plate and fed to the main computer, and the innovative aspect of this invention was the crafty interpretation of the voltage signals. 
Now, this was a random access memory unit, rather than serial access, and furthermore it used off-the-shelf, commodity components, which were available in their thousands during the war for radar purposes. So, back to Malvern and TRE at the end of 1946. A significant thing happened then: Freddie Williams accepted the post of Professor of Electrical Engineering at the University of Manchester. The director of research at TRE was so anxious that the memory research should continue that he organised two things. He organised equipment and resources to continue to flow from TRE to the University of Manchester, and he seconded two people, Tom Kilburn and Geoff Tootill, to help with the research. So the research certainly moved to Manchester, but the impetus remained. Now, the move to Manchester University was propitious in two respects. At Manchester, the professor of pure mathematics, Max Newman, already had a grant of money from the Royal Society to set up a computing machine laboratory to investigate problems of pure mathematics. His funding was as yet unspent. The professor of physics, Patrick Blackett, was a man of influence who had the ear of the government, and Patrick Blackett was convinced of the strategic importance of computers. So the three engineers, Williams, Kilburn and Tootill, got plenty of encouragement at Manchester, and by the end of 1947 they had developed a storage system storing 32 words of 32 bits each: quite small, but enough to encourage them to subject that memory to a test. Now, what test should they use for this memory device? They could have constructed test equipment that ran through all the combinations of digits possible in that store, but that would have taken a long time to make and to run, and they decided that it would be much more convincing to incorporate their memory at the heart of a very small computer, as they called it, a small-scale experimental machine. This they did. 
The picture shows Tom Kilburn on the left, then aged 25, and Freddie Williams on the right, then aged 35, at the control console of what became known as the Baby computer. The Baby first ran a programme successfully on Monday, the 21st of June 1948, as we've heard before, sometime in the morning, I'm told. So that was a significant day, 74 years ago. What kind of programme did they choose for their first demonstration? Well, the instruction set for their machine was more or less the minimum that they could think of; I'll come to that in a moment. The programme, as has been mentioned previously this afternoon, was to find the factors of a very large number, and the real test came from finding the highest factor of 2 to the power 18, a programme that took 52 minutes to run, during which about 3.5 million instructions were obeyed. The programme itself was 17 lines long, and the photograph on the left is a page from the laboratory notebook of Geoff Tootill showing a modified version of that programme from July of that year. On the right, in modern terminology, is the instruction set, and you'll deduce that this was a single-accumulator architecture. Indeed, it was based on the structure suggested by John von Neumann at Princeton for his project at the Institute for Advanced Study, IAS. It became known as the IAS architecture, and several of the early computers had this sort of basic architecture and similar sorts of instruction set. So there we are: we had a working machine. By September of 1948, the team had been expanded to seven engineers, and they set about expanding the Baby machine: increasing its word length, increasing the repertoire of instructions, and, significantly, adding a magnetic drum backing store and index registers. Index registers have had significance in computing to this day. So this machine, which was first operational on user programmes in April 1949, became known as the Manchester Mark I. 
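For flavour, the spirit of that first program can be sketched in a few lines of Python. The Baby had no divide instruction (subtraction was essentially its only arithmetic), so divisibility is tested here by repeated subtraction. This is a reconstruction of the idea only, not a transcription of the actual 17-line machine code:

```python
def highest_proper_factor(n: int) -> int:
    """Find the largest factor of n below n itself, trying candidates
    downwards from n - 1. Divisibility is checked by repeated subtraction,
    in the spirit of a machine whose only arithmetic was subtraction."""
    for candidate in range(n - 1, 0, -1):
        remainder = n
        while remainder > 0:          # keep subtracting until we pass zero
            remainder -= candidate
        if remainder == 0:            # landed exactly on zero: candidate divides n
            return candidate
    return 1

print(highest_proper_factor(2 ** 18))  # 131072, i.e. 2^17
```

Run on 2 to the power 18, this brute-force search gives a feel for why the original run took 52 minutes and millions of obeyed instructions on a machine executing a few hundred instructions per second.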
It occupied two rooms: the room shown, which is about 20 foot square, and an adjacent smaller room for the I/O equipment and the magnetic drum. Now, in October of 1948, Alan Turing arrived to join the computing machine laboratory, his salary paid by Professor Newman, and Turing's arrival was significant. He suggested five-track teleprinter equipment as the I/O mechanism for this computer, and of course Turing and others at Bletchley Park knew a lot about five-track teleprinter equipment. Alan Turing wrote the low-level subroutines necessary to drive the input-output equipment, produced the bootstraps, et cetera. The first useful programmes run on this machine included an investigation of Mersenne primes and the Riemann zeta function, and the machine operated until the autumn of 1950, when it was closed down. It was very much a university prototype, but it demonstrated particularly the integration of backing store and fast primary store, and I want to say something about that later. So it was a university prototype. Meanwhile, back in the autumn of 1948, Patrick Blackett had alerted the UK's Ministry of Supply, the forerunner of the Ministry of Defence, and persuaded them to give a lot of money to the local Manchester firm of Ferranti Limited to produce a fully engineered commercial version of the Manchester University prototypes. And here is the first off the production line: the Ferranti Mark I, which was delivered on the 12th of February 1951 to the university, where it was housed in a new building financed by the unspent equipment money granted to Professor Newman by the Royal Society. In the lower photograph you see, sitting at the console, two Ferranti engineers, Keith Lonsdale and Brian Pollard, who led the production team at Ferranti's Moston factory, about two miles to the north of the university. On the right, standing at the console, is Alan Turing. 
Ferranti went with enthusiasm into the computer production field and produced two Ferranti Mark I computers and seven modified Ferranti Mark I* ("Mark I star") computers, and you can see the installations on the screen. Up until 1955, there was no market competition for computers outside America, and you will notice that three of the Ferranti machines were exported: one to Italy, one to Canada, and one to Holland. And there is a sense in which the bronze plaque which today honours these developments saw its ultimate manifestation, from the outside world's point of view, in these nine delivered computers. But what I'd like to do in part two of my talk is to reflect on what were the highlights, the landmarks if you like, in the technical sense, of the five years of research. I've chosen five landmarks, as given in the next three slides. The first three historical landmarks we've met before. As an additional comment on number one, the random access memory: Williams-Kilburn tubes were adopted by about 25% of all early computers. Other computers used sequential memory, like mercury delay lines, but random access was certainly to be preferred, and the Williams-Kilburn tubes were only superseded in the mid-1950s by the introduction of ferrite core stores. An additional comment on number two: TRE at Malvern did in fact produce their own computer after Williams and Kilburn had left, but it lagged behind the developments at Manchester, and all the exciting stuff happened at Manchester. A comment on number three: well, index (or modifier) registers are rightly highlighted as a significant computing development. It is also the case that the Mark I had a few more interesting instructions in its order code, its instruction set; there's no time to go into them. The thing I would like to pick out, though, is the attempt to integrate the fast primary memory, which was the electrostatic store, and the slower but more capacious backing store of the magnetic drum. 
This rather mirrors the arrangement we have in your laptop, with RAM and hard disk. So on to my fourth highlight, which is some reflections on the struggle to make this integration, this storage management, work. The photograph shows the operator's console displaying two pages of information, and you can see why Manchester adopted the terminology of the "page" as a block of information that would be moved from primary store to backing store and vice versa. It does look like pages, doesn't it? You'll notice that there is an extra 20-bit line at the top left of the operator's display, which was programmer-accessible: a page address register that gave some assistance to programmers in the management of their information shuffling. That was the germ of an idea that led, 10 years later, to hardware page address translation and virtual memory in the Atlas computer, which is the subject of the next talk this afternoon. For my fifth and final highlight, I've chosen a primitive high-level language. In March 1954, Tony Brooker, who had taken over from Alan Turing in the management of software at Manchester, introduced his Mark I Autocode, a primitive high-level language, perhaps the first usable high-level language. You'll see that users were enabled to employ algebraic-like expressions to represent arithmetic operations with named variables; memory management, and indeed the simulation of floating-point operations, was done behind the scenes automatically by the Autocode system. To illustrate this, I've got a simple program on the screen. It calculates the root mean square of 100 floating-point variables, v1, v2, et cetera; n1 is an integer. In the bottom line of the program, the asterisk signifies "print a quantity to 10 decimal places on a new line", and the f1 in the bottom line is the intrinsic function square root. 
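For readers who don't speak Autocode, the program just described translates almost line for line into modern terms. A Python equivalent, with arbitrary sample values standing in for v1 to v100, since the original data are not shown:

```python
import math

# One hundred floating-point values standing in for Autocode's v1 .. v100.
# Any sample data will do; these are arbitrary.
v = [float(i) for i in range(1, 101)]

# Sum of squares, then divide by the count and take the square root.
# Autocode's intrinsic f1 played the role of math.sqrt here.
total = 0.0
for x in v:          # the loop Autocode would index with the integer n1
    total += x * x
rms = math.sqrt(total / len(v))

print(f"{rms:.10f}")  # 10 decimal places, as the asterisk directive specified
```

The point of the comparison is how little has changed: named variables, a loop, an intrinsic square root, and formatted printing were all already there in 1954, with the floating-point simulation hidden behind the scenes.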
Clearly, programmers using this type of language were able to produce software somewhat quicker, and software that was somewhat easier to maintain and modify, than hitherto. Well, that's nearly all I want to say this afternoon. There's more information on the website, and indeed send me an e-mail if you have further questions. But finally, I'd like to point out that there is one last surviving member of the Mark I design team, Tommy Thomas, who is alive and well and living in Australia. He may be listening virtually to this seminar, so Tommy, if you can hear us, good on you, mate. Thank you, Simon. So we have time for a few questions, if anybody in the audience, or anybody on Zoom via Rod, has any questions for Simon. So I'll just repeat that for the Zoom audience. The question is: which do you think is the most significant invention, the index register or virtual memory? Well, I think definitely at the time the index register, through the late 1940s, the 1950s and into the 1960s. But once computer architecture had settled down in the von Neumann style, I think that virtual memory was indeed significant; of course, it built on a number of primitive constructs, including index registers. So I'm not answering your question, deliberately, because I think it's got to be seen in the historical time frame, and who knows, tomorrow there might be an invention that far outshines both; perhaps it's quantum computing made easy, or something. So I'm not going to commit myself. What was cathode ray storage for at TRE; what were they interested in? So I'll just repeat the question for the Zoom audience, who won't have heard you. The question is: what did Williams start to design the cathode ray tube store for at TRE, and what was their interest at the time? Yes, an interesting question. As has been said, after the war, when the pressure of hostilities ceased, there were all these scientists in government establishments; what did they do? 
They were looking for projects. Now, Williams had been to the States a couple of times, and he saw people struggling to invent storage devices. In fact, the search for cost-effective memory systems was the most significant problem facing all the embryonic computer design groups, wherever they were. He saw primitive work on electrostatic charge storage that wasn't really getting anywhere at that point, and decided that computer storage was the thing he would address. At that time, as he would freely admit, he didn't know nor care about computers as such, but he knew that there was a problem there waiting for a solution, namely how to store binary digits. Thank you. Okay, one last question that we'll take very quickly, and then I think we need to move on. Why did TRE use delay lines when they had access to CRT storage? The question is: why did TRE use delay line technology when they had access to the CRT? The short answer is they didn't. TRE focused on Williams-Kilburn tubes, and the first computer that arose from TRE was TREAC, in 1953, which used Williams-Kilburn tubes. It has to be said that the mathematicians at TRE knew about the IAS, the Princeton architecture, and indeed gave lectures on it in April 1947 and sent transcripts of these lectures to Manchester. So I give that as an example of how the IAS type of architecture was fairly common knowledge, and TRE grabbed hold of the Williams-Kilburn tubes and built a computer somewhat later. All right, thank you, Simon. So with that I think we should finish and hand over to the next speaker. So thank you again. You're going to say something? I'm going to say something. You can't stop me, I'm sorry. And we'll wait until you're ready. So I'd like to introduce the next speaker, who is Roland Ibbett. As it happens, Roland also graduated in 1962, the same year as Simon and the year that Atlas was inaugurated. 
Roland graduated in physics from the University of Manchester, joined the staff of the computer science department in 1966, and was here until 1985, when he was appointed to a chair in computer science at the University of Edinburgh, where he's now emeritus professor. At Manchester, Roland was a major contributor to the MU5 project, MU5 being the machine that followed on after Atlas. He's also written a book on the architecture of high-performance computers, in which there are major sections on Atlas which drew on the experience of the Atlas development team at the time. So with that, I'll give you Roland. Thank you. Right, okay, thanks. Well, good afternoon, everyone. It's my privilege and pleasure to talk to you this afternoon about Atlas, a machine that inspired me and many others of my generation to pursue careers working with computers. You've heard the citation. Those of us who wrote it spent many hours honing the words of the citation, as well as the whole proposal, in collaboration with our advocate, Brian Berg, and other members of the IEEE History Committee. So I put the words up there in case you haven't heard or seen them before, though they're in the brochure anyway. What's important is that we've got Tom Kilburn's name on there. The History Committee at first did not want a name on the plaque, and we were adamant that we did. So we got it. At the bottom, there is a link to the proposal, which you can find on the ETHW website, and there is a QR code in the brochure that will take you to it. The plaque is going to be mounted on what's now called the Zochonis building, which was the electrical engineering building, known then as the Dover Street building. That is the front of the building, and that's where the plaque will go. Now, those of you who worked there or were students there will recognise the back of the building rather more easily, because that's the door that it all went in and out of. 
Also familiar to those of that generation will be the College Hotel, to which some members of the Ferranti design team would repair at lunchtime, as they did again when the same guys came back to the university to help design MU5. The stone marked in red on there as the College Hotel was rescued, and if you go to the far corner of the quadrangle you will find that it is built into the fabric of this building in perpetuity. This might be my greatest legacy. Manchester Atlas was officially inaugurated by Sir John Cockcroft. As I learnt today, he was the first person from his village to go to university; well, he didn't quite count, because his father was a mill owner. But he's the one that got the Nobel Prize. Anyway, there he is, inaugurating the machine with Sebastian de Ferranti, and Tom in typical pose with his pipe. Here are a couple more photographs of the Atlas, familiar to many of you I'm sure, taken inside the top of the electrical engineering building; these are the two most familiar ones. Now, with hindsight, possibly the most significant feature of Atlas, and the thing we're here to celebrate today, is virtual memory. But there are many other features of Atlas that are in common use today. The first three highlighted were the subject of patents, and I'll say a little bit more about those later on. For now I want to say something about asynchronous pipeline operation. Pipelining involves splitting the hardware into separate bits that each carry out part of an operation; you can then overlap these activities, which means you can make instructions go faster, and you need timing pulses to make the pipeline work. But in Atlas we had a problem, because multiply, for example, takes an awful lot longer than addition, and so the speed of a ticking clock would be far too slow. So Atlas didn't bother: it was asynchronous, and the hardware sent its result on to the next stage when the next stage was ready to receive it, and so on. 
And so you kick it once at the beginning and then the thing just free-runs, and hopefully it will continue to work until such time as, in Atlas terms, it lost the pre-pulse, and then you'd start again. So that's a different version: the top version is the standard pipeline with a clock. Most computers do have clocks these days, and one of the issues in designing computers is actually distributing the clock across the whole of the chip; it's a known problem that's been going on for some decades now. With an asynchronous pipeline you don't need a clock, but that brings other disadvantages: it's a harder design. Of the Atlas patents, three covered aspects of the one-level storage system, one was concerned with interleaving of main memory, and the other concerned multiplication. The initials TK, DBGE, FHS and DA refer to Tom Kilburn, Dai Edwards, Frank Sumner and David Aspinall. I'm pleased that we've got members of Dai's and David's families with us here today. Let me say something about multiplication, and then I'll come on to memory interleaving. Multiplication is a much more complicated operation than addition, and it's carried out typically by repeated addition. I'm not going to give you slides about it, because it's far too complex for me to remember how it actually worked, 60 years on from when Tom explained it to us in lectures. But with Dai and Dave, Tom worked out a way of reducing the number of additions needed by dealing with the multiplier digits three at a time, and so they made the multiplier go faster, and that was a patent. That's a picture of the architecture of Atlas. The bit in the middle is the V-store, the fixed store, the subsidiary store, the page address registers and the main core store; on the right we have the operating controls and the peripherals, the tape decks and drum store; and on the left we have essentially the processor: the floating-point unit and the B unit, the address unit, and the control.
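The trick of taking the multiplier digits three at a time can be sketched in a few lines. This is only an illustration of the principle under my own simplification, using precomputed small multiples rather than the signed-digit recoding the real Atlas hardware used; the function name and counting are mine, not Atlas's.

```python
def multiply_radix8(multiplicand, multiplier):
    """Multiply by scanning the multiplier three bits at a time.

    One addition per non-zero 3-bit group instead of one per set bit,
    using precomputed small multiples of the multiplicand.  (The real
    Atlas hardware used a cleverer signed-digit recoding; this shows
    only the flavour of the idea.)
    """
    multiples = [k * multiplicand for k in range(8)]  # 0..7 times
    product, shift, additions = 0, 0, 0
    while multiplier:
        digit = multiplier & 0b111       # next three multiplier bits
        if digit:
            product += multiples[digit] << shift
            additions += 1
        multiplier >>= 3
        shift += 3
    return product, additions
```

Scanning three bits per step means at most one addition per step, so a 13-bit multiplier needs at most five additions rather than up to thirteen.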
When I asked Dave Aspinall what the V stood for in V-store, he said, victory, we've solved the problem. A similar technique was used in a number of machines, actually, and it's typically known as memory-mapped peripherals; the PDP-11, for instance, is a typical example of a machine with memory-mapped peripherals. The processor was the accumulator, the floating-point arithmetic, the B arithmetic and the logic of the control section, and the accumulator was designed by Yao Chen. I hope Yao is watching us today on Zoom, because I have the greatest respect for him. Having built myself a simulation model of Atlas, I know what it's like to try to make that thing work. This is the instruction format. The function code: ten function bits. There were B codes, which worked on B lines, and that's a hangover from the Mark 1; in fact other manufacturers used the term B lines for index registers, CDC in particular. Then test-and-count instructions, accumulator operations and extracodes, and I'll say something about those in a minute. There are two B register fields, Ba and Bm, and for A codes you could use both, which means you could easily do matrix-multiplication-type operations. In fact most supercomputers are designed to do matrix multiplication, CDC in particular, and Cray designed machines to do just that. So that's quite interesting. Then the address: 24 bits of address, where the first bit told you whether it was a user address or a system address, and the system addresses were the V-store, et cetera, et cetera. Twenty bits of word address, a million words, and three character bits. Now, the word length was 48, so you had eight six-bit characters, and this was before the eight-bit byte became pretty well ubiquitous thanks to IBM. Parallel addition. The way the parallel adder worked in Atlas was described in the paper by Tom, Dai Edwards and Dave Aspinall. Essential reading for us students, as I recall.
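As a rough sketch of that format, the field extraction can be written out like this. The exact bit ordering is my assumption, pieced together from the description in the talk (10 function bits, two 7-bit B fields, then the 24-bit address), not a statement of the actual Atlas wiring.

```python
def decode(word):
    """Split a 48-bit Atlas-style instruction into its fields.

    Layout assumed here (from the talk): 10 function bits, two 7-bit
    B-register fields (Ba, Bm) and a 24-bit address, whose top bit
    distinguishes user from system addresses, with 20 word-address
    bits and 3 character bits below it.
    """
    assert 0 <= word < 1 << 48
    function = (word >> 38) & 0x3FF      # 10 function bits
    ba       = (word >> 31) & 0x7F       # first 7-bit B field
    bm       = (word >> 24) & 0x7F       # second 7-bit B field
    address  = word & 0xFFFFFF           # 24-bit address
    system   = (address >> 23) & 1       # user/system flag
    word_addr = (address >> 3) & 0xFFFFF # 20-bit word address
    char     = address & 0b111           # 3 character bits
    return function, ba, bm, system, word_addr, char
```

Note how 10 + 7 + 7 + 24 accounts for exactly the 48-bit word length mentioned in the talk.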
Prior to Atlas, most computers (not all) added numbers together serially, one bit at a time, the way people do, but that's a bit slow. If you do it that way, the carry or the borrow is available in the next time slot, which is what makes it work. If you do it in parallel, then... let me go back one. With parallel addition you add all the bits together at once. So we've got A bits in the accumulator, X bits in the number from store, and you add them together. Now, the problem is that you can generate a carry here and it might go all the way to the end, so you've got the problem of carry propagation. And in the next slide, what they discovered was that a particular type of transistor called the SB240 (SB standing for surface barrier) was very good for building the kind of adder where you could make the carry whip from one end to the other if necessary. So what happens is that at each bit position either there's no carry, or you generate a carry, or you propagate a carry. Generate and propagate never occur at the same time, so you can wire them together. So that's the basis of the parallel adder. There were some SB240s lying around, and they were given to a pair of students to build a student project in their final year. That was me and Simon, so we became very familiar with the SB240. Okay, extracodes. Now, extracodes are orders that had to be obeyed fairly quickly but were rather complex and difficult and expensive to do in hardware, so they were made up of sequences of existing instructions, and the sequences were held in a fixed store, a high-speed, specially designed fixed store. And when the control hardware recognised from the function bits that it was going to be an extracode, it switched to using a different B register for the control. Now, 'control' has become 'program counter' for most people. It was known as control originally because on the Mark 1 the A register was the accumulator, B was the index and C was the control.
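The generate/propagate idea can be modelled in a few lines of software. This is an illustrative sketch, assuming g = a AND x and p = a XOR x per bit, not a description of the SB240 circuit; with p defined by XOR, generate and propagate really never coincide, which is the property that let the two signals be wired together.

```python
def parallel_add(a, x, width=48):
    """Model a parallel adder driven by per-bit generate/propagate
    signals, in the spirit of the Atlas adder (a sketch only)."""
    g = a & x                    # these bit positions generate a carry
    p = a ^ x                    # these bit positions propagate one
    assert g & p == 0            # generate and propagate never coincide
    result, carry = 0, 0
    for i in range(width):       # the hardware let the carry "whip"
        gi = (g >> i) & 1        # along; software has to walk the bits
        pi = (p >> i) & 1
        result |= (pi ^ carry) << i   # sum bit = propagate XOR carry-in
        carry = gi | (pi & carry)     # carry-out into the next stage
    return result
```

A carry out of the top bit is simply dropped, i.e. the sum wraps modulo 2 to the power of width, as in any fixed-width adder.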
But Atlas had three control registers, all B registers, which made for interesting stuff. What Simon and I do remember is Keith Bowden, known to everyone as Ferdy, building this fixed store by sticking ferrite and copper rods into little hairbrushes and sticking the hairbrushes into the mesh that made up the fixed store. So when we had a problem we'd go into the Atlas room and we'd badger Ferdy to tell us where we'd gone wrong and what we should do next. Anyway, enough about that. Store interleaving. The problem is that the Atlas core store had a two-microsecond cycle time. Hopefully the addition was going to take less than that, but you have to fetch the instruction and you have to fetch the number out of the memory, so that's four microseconds, which is a bit much. The answer is to interleave the addresses: the 16K words of Atlas core were made up of four stacks, arranged in pairs, so you could bring out two words at once, and if you do all the sums it turns out that the average access time, depending on where the operands are, is pretty close to the floating-point addition time. Store interleaving is very common in all big computers, but it originated on Atlas, as a patent. So let me come on to virtual memory, which is the first thing on the list. These are the words that Tom wrote in the famous paper, 'One-Level Storage System'. The essential problem is that you can buy fast small stores and big slow stores, and what you want is a big fast store, so you have to have some way of putting them together. Now, in previous machines you'd have to drive the peripherals directly: instructions would drive the peripherals. But on Atlas the instructions were going to be very much faster than the time it took to use a peripheral, the ratio being about 10,000 to one, so you've got to do things somewhat differently.
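The interleaving principle, consecutive word addresses spread across separate core stacks so that fetches can overlap, can be sketched like this. It's a toy model; only the stack count is taken from the talk, and the exact Atlas arrangement of pairs differed.

```python
N_STACKS = 4  # Atlas core was organised as four stacks

def stack_of(address):
    """Map a word address to (stack, offset within stack) so that
    consecutive addresses land in different stacks."""
    return address % N_STACKS, address // N_STACKS

# An instruction fetch at address n and an operand fetch at n + 1 hit
# different stacks, so their two-microsecond cycles can overlap.
```

Any run of four consecutive addresses touches all four stacks, which is why sequential instruction and operand fetches rarely collide.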
So peripherals have got to run autonomously and interrupt when they require attention. And so you have to have multiple programs in the memory at the same time in order for this to work. Then you say: ah, but we can't let the users program the transfers between the core and the drum, because they don't know where the other users have got their stuff. So if you want to have this system working, you've got to do other things differently too. And Tom's stroke of genius was to recognise that the program address of an item could be distinct from its physical location. Then only the computer would know where things actually were, and not the users. So the program addresses would have to be virtual, and translated. That's where the notion comes from. Now, if you're going to do this translation you've got to do it in hardware, because if you do it in any fancy software it's going to be far too slow and it'll ruin the performance. So they built a set of content-addressable registers, and this is a diagram of the registers. The address coming in from the processor is here: 11 page bits and nine line bits, because it was decided that 512 words seemed like a good idea at the time, and most of the manufacturers subsequently agreed. So the page bits are presented to all these registers in parallel, and one of them, if you're lucky, will say 'I've got it'. That's then encoded and concatenated with these line bits, and that gives you the real address to go to the core store. And here is a typical mapping. So you've got virtual blocks on one side and you've got real pages on the other; there's some confusion as to whether you say blocks or pages, but it doesn't really matter. But you'll notice that there is an empty one, and you need an empty one in order to bring stuff in quickly. Once you've filled it, you've then got to empty another one. So now you're into page replacement algorithms, and I'm not going to say a lot about that, because Peter Denning is going to talk about it in the next talk.
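That translation step can be sketched as follows, assuming the field widths given in the talk (11 page bits, 9 line bits); the hardware searched every page address register in parallel, whereas this sketch has to scan them in a loop, and the names are mine.

```python
PAGE_BITS, LINE_BITS = 11, 9   # 512-word pages, as on Atlas

class PageFault(Exception):
    pass

def translate(virtual_addr, page_registers):
    """Translate a 20-bit virtual word address using associative
    page address registers, in the style of the Atlas one-level store.

    page_registers[frame] holds the virtual page number currently in
    that physical core-store frame (None if the frame is empty).
    """
    assert 0 <= virtual_addr < 1 << (PAGE_BITS + LINE_BITS)
    page = virtual_addr >> LINE_BITS              # top 11 bits
    line = virtual_addr & ((1 << LINE_BITS) - 1)  # bottom 9 bits
    for frame, held in enumerate(page_registers):
        if held == page:                          # associative match
            return (frame << LINE_BITS) | line    # real core address
    raise PageFault(page)                         # not in core: drum fetch
```

On a miss the exception stands in for the hardware's page-fault interrupt, at which point the drum transfer and replacement decision take over.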
But there was a learning programme, and the learning programme used these 'use' bits, which helped it to know which page was a likely candidate to throw out. The other bits here are the lock-out bits, which would be set to make pages belonging to processes that weren't running unavailable, and that is one of the other important features of virtual memory: it separates the address spaces of different users, so they can't interfere with each other. Now, the software to do this became part of the Supervisor programme, and that's the original paper about the Supervisor. Sadly, as some of you may know, David Howarth died about a month ago and so is not able to be with us, which is very sad. As I say, I don't have time to talk about replacement algorithms, but Peter Denning will do that next. He was then at MIT, and he has become Mr Virtual Memory in America. Now, virtual memory was included in the design of a number of machines that came along fairly soon after Atlas, and if you read the proposal we put forward to the IEEE you'll find mention of various of these machines, the PDP-10 being one in particular. IBM adopted it too; when IBM announced it as something new, Frank Sumner said, oh no, we had it first. There's also a question about what language to use, and Tony Brooker came to the fore again in that. So here is a picture, which will be familiar to some of you, of the Atlas Autocode reference manual; that's a copy I've still got at home, very proudly. It was designed by Tony with Derek Morris, who was also one of the leading lights subsequently in MU5, and his son-in-law's here today, isn't he, Colin? Atlas Autocode was similar to Algol 60 but was implemented a lot more quickly, and from it the language called IMP was created at Edinburgh. I'd have to mention Edinburgh, wouldn't I? And that stuck around until about the year 2000. And that's the final switch-off.
And there we have Derek Morris, Yando Warburton, Tom Kilburn and Gordon Haley, and that was in September 1971. If you want more information, Simon's written an awful lot of stuff about that, and he knows far more about it than I do; we had decided to split the work of writing these proposals, he'd write one and I'd write the other, and that's where we got to. That's the end of my talk. OK, our next talk is from Peter Denning. Peter is actually in California. There isn't a train strike between America and Britain, but flights are somewhat unpredictable as well, so Peter isn't able to join us in the room, but Peter is joining us via Zoom and is live at the moment. We're actually going to show a pre-recorded film of Peter's presentation, which Peter put together, and at the end of Peter's film we're going to come back to Peter live, and Peter will handle questions live over Zoom. So, a quick introduction to Peter and then we'll go over to his film. Peter has been a member of the IEEE since 1965 and a fellow since 1982. He received the IEEE Computer Pioneer Award in 2021. He began his interest in virtual memory while he was a PhD student at MIT in 1965, where he invented the working set model, which enabled the solution of two performance problems of virtual memory: thrashing and achieving near-optimal throughput. In his long career Peter has published 12 books, the most recent of which is Computational Thinking from MIT Press. Peter is currently at the Naval Postgraduate School in Monterey, California, where he continues to teach operating systems. So with that I'm going to hand over to Peter's film, which we will project in the room and live via Zoom, and then we'll come back to Peter for questions. Peter, thank you.
I'm here to tell you a story about how virtual memory evolved from initial performance problems into a stable technology that is present today in all operating systems and chips and it also evolved into a foundation for computer security. Atlas hosted the world's first virtual memory. It elegantly solved some difficult issues in the design of the operating system. These were manual overlays, logical partitioning, fragmentation and relocation. The virtual memory was more than an innovation for system designers. It improved programmer productivity by two or three times and that was an astounding improvement from this one technology. I'd like to show you some diagrams of the core idea of virtual memory so you can see for yourself the two main bottlenecks that impeded its performance in the early years. This picture shows a standard picture of a CPU accessing pages of its address space which is then being mapped into the page frame slots of the RAM, the main memory. And there's a disk down at the bottom there where pages move up and down from the secondary storage. That's the location of a bottleneck right there at that disk. In between the CPU and the hardware memory is a mapping unit called MMU that receives virtual addresses and either routes them to the memory or routes them to the disk if the page is not in the memory. That is what generates the traffic across that interface and is very expensive because the disk is typically 10,000 times slower than the CPU. The first bottleneck then is the interface between the main memory and the secondary memory. With a speed difference of 10,000 this is very expensive. Designers focus attention on finding the best page replacement algorithm to minimize page faults. So the replacement problem is suggested by this little picture here where you see CPU and RAM and disk and the whole goal of this game is to reduce the amount of paging because each page fault is extremely expensive. 
So the focus of the designers of replacement algorithms was to find the one that produced the least number of page faults. That was the strategy for overcoming that bottleneck. The Atlas replacement policy was called the learning algorithm. It measured loop periods for each page and replaced the one with the longest time until reuse. But this did not work for other systems with different workloads and requirements for low-overhead replacement decisions. This set off a massive search for the best paging policy that took place between 1962 and 1970. The most comprehensive study was the one by Les Belady of IBM, which he published in 1966, leading to somewhat discouraging conclusions. Another IBM project in 1970 introduced a unified theory of replacement called stack algorithms. Their theory did not change the overall conclusions. There were three conclusions. One is that the highest-overhead of the replacement policies, called least recently used (LRU), had the lowest paging. That's good. The lowest-overhead replacement policy, FIFO, meaning first in, first out, had the highest paging, and that's not so good. And all of them, whether FIFO or LRU, were far away from the optimal, which was called MIN, meaning minimum. This diagram shows where these policies sit. The two in the upper row there are the ones with high overhead, the two in the lower row are the ones with low overhead. On the left end of the picture you see MIN, which has the least amount of paging, followed by LRU, and then finally out on the far side of the picture you see FIFO. In between there's a strange little one which used to be called FINUFO, F-I-N-U-F-O, meaning first in, not used, first out. Today that's called CLOCK, and it's kind of a hybrid between FIFO and LRU. Let's look at multi-programming, which created the second bottleneck.
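As a hedged illustration of those comparisons, here is a toy simulator for FIFO, LRU and MIN on a short reference string. The function and the reference string are mine, invented for illustration, and are not taken from Belady's study; MIN needs the future of the reference string, which is why it is an unreachable ideal for a real pager.

```python
def count_faults(refs, frames, policy):
    """Count page faults over a reference string for one policy.

    policy is 'FIFO', 'LRU' or 'MIN' (the optimal: evict the page
    whose next use lies furthest in the future).  A toy model of the
    1960s policy comparisons, not any real system's pager.
    """
    mem, faults = [], 0
    for i, page in enumerate(refs):
        if page in mem:
            if policy == 'LRU':            # refresh recency on a hit
                mem.remove(page)
                mem.append(page)
            continue
        faults += 1
        if len(mem) == frames:             # memory full: pick a victim
            if policy == 'MIN':
                future = refs[i + 1:]
                victim = max(mem, key=lambda p: future.index(p)
                             if p in future else len(future) + 1)
            else:                          # FIFO and LRU both evict
                victim = mem[0]            # the front of the list
            mem.remove(victim)
        mem.append(page)
    return faults
```

On a loop-heavy string the ordering described in the talk shows up directly: MIN pages least, LRU close behind, FIFO worst.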
Here's the same picture of the virtual memory, but now on the other side, on the right side, there's another CPU mapping its pages into that same memory, of course in different page slots so they don't interfere with each other. That creates more demand on that disk interface, and that demand shows up as queuing: the length of the queue of the different jobs trying to get hold of their own pages. This second bottleneck was the queuing of the page requests. Too many page faults at the same time in different jobs produce long wait times at the disk. This little picture shows the situation that the designers were facing. Instead of a single CPU accessing a single memory, it's now multiple CPUs accessing this big memory that contains multiple programs in it. This is called multi-programming, and this changes the whole game, because there are many CPUs and we now have to figure out how to divide up the memory and how that affects the queuing that's going on at the disk. Multi-programming created other system problems. One of them was: how much memory does each job get? A second one: how should we partition, fixed or variable? And the third one: how many jobs to put into the memory at the same time? On top of those three unresolved problems is another one that came up completely by surprise. Nobody expected it. It was called thrashing. Thrashing was originally called paging to death. It meant a state of the system in which everybody was paging and nobody was doing any useful work. This picture here, on the right side, shows how the solid line is the rise of the throughput of the system as you add jobs to the multi-programming mix; that's the horizontal axis, and at some threshold point it suddenly collapses. Instead of going up and saturating, it falls down to near zero. This is the thing that was called thrashing. It's precipitous. It's unpredictable. It's very sensitive to the workload.
In fact, it was such a big problem that virtually every paging policy was susceptible to thrashing, and it led people to question the viability of virtual memory. To deal with this complex of issues we needed a mind shift, a mind shift for managing dynamic multi-programmed memories. This came in 1966 in the form of the working set, informally defined as the smallest set of pages that must be loaded in the main memory for the job to achieve a low paging rate. The precise definition of working set is the pages accessed in a window of the recent past. This little picture shows what we're talking about: here is a horizontal axis representing time. You see little X marks representing page accesses, one per memory cycle. You see a little window of time extending from the present time backwards for a period of time called the window size, and the contents of that window is the working set. This definition enables a supply-and-demand view of memory. The demand is the working set. The supply is the memory allocated by the operating system. Prior to this, operating systems could only guess at how much memory a job needed. The principle of locality gives us the assurance this will work. It was discovered at the same time as working sets, and it was a justification for the claim that working sets would work. It is best appreciated with the help of a page reference map. So this diagram here is a recording made on a computer by simulating or monitoring a Firefox browser. The colored areas show the pages that are being used. The horizontal axis is time, the vertical axis is the pages, and a colored spot represents the use of that page in that time interval. So you can see some very definite patterns here. We summarize the patterns by calling them locality sets, which are the sets of pages used; the phases, which are the periods of time during which a locality set is used; and the transitions between them.
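The formal definition, the set of pages touched in the window of the recent past, is short enough to write out directly; the reference string below is invented for illustration.

```python
def working_set(refs, t, window):
    """Denning's working set W(t, w): the distinct pages referenced
    in the last `window` time steps up to and including time t.
    refs[i] is the page touched at time i."""
    start = max(0, t - window + 1)
    return set(refs[start:t + 1])

# During a phase the working set is small and stable; at a phase
# transition it briefly grows as two locality sets overlap.
```

Running it along a reference string shows exactly the phase/transition shape described above: small stable sets inside a phase, a brief bulge at the transition.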
So in this picture, if you studied it for a few minutes, you'd see there are five phases in it, and five transitions. This type of behavior turns out to be universal in programs. The working set tracks the locality sets across time, across phases and transitions. Phases are periods of tranquility: zero or very few page faults. Transitions, on the other hand, punctuate tranquility and produce most of the page faults. The working set policy for memory management answers all the questions about partitioning and total memory load, and eliminates thrashing, all at once. So this little picture shows what's going on. This represents memory with several working sets in it; the sizes of the working sets do not have to be the same. When a working set encounters a page fault, it gets a new page out of the free part of the memory, which adds to the size of the working set, and when a page leaves the window, it gets thrown out of the working set and placed back in the free space. So this allows the total number of jobs to grow and shrink, as long as they don't eat up all the free space. So each job is intrinsically efficient. The total load is managed, because you can't put more working sets in than will fit. There's no stealing of pages, so jobs can't force each other to have more paging than they wanted, and the system can't thrash, therefore, because there's no way to push it into that zone of heavy paging. The optimal policy for managing a memory was called VMIN, V meaning variable, MIN meaning minimum. It's like the working set, but it has a forward-looking window instead of a backward-looking window. The page faults are identified as the same in both these policies. So they both track the locality sets exactly, except at phase transitions, and the difference between these two policies is small at these relatively few phase transitions.
The conclusion is that the working set policy overcomes the two bottlenecks: it eliminates thrashing, really good, and it generates near-optimal throughput, even better. But wait a minute. The virtual memory and its working set policies were invented 55 years ago, when memory was scarce and expensive. Why is virtual memory not obsolete today, when memory is plentiful and cheap? A partial answer to this question is that the virtual memory principles are used extensively in CPU and network caches and in internet content distribution networks. Occasionally, a large job appears that the operating system cannot accommodate within the available memory. Without virtual memory, such jobs could not execute at all. But that's not all. The logical partitioning principle of virtual memory provides the basis for operating system kernels to be provably secure. Virtual memory isolates the jobs from each other's memory. No memory leaks can occur if this is done properly. It's an elegant solution for the confinement of untrusted software. The mapping principles of virtual memory were soon generalized to allow all digital objects, not just pages, to be targets of virtual addresses. This is the basis of object-oriented programming. In summary: virtual memory, paired with working sets, solved the performance problems, eliminated thrashing and optimized system throughput. Virtual memory also provides the basis for secure operating system kernels and for object-oriented programming. These are the reasons that virtual memory has survived all these years and is still one of the most important parts of our operating systems, networks and chips. This is why virtual memory is a milestone of computing. That's the end of my story. Thank you so much for listening. Peter, thank you very much for joining us from California. I know it's a fairly uncivilised time in the morning. Thank you for getting up early and joining us here. Do the audience have any questions for Peter?
Or is there anybody on Zoom who would like to ask a question? I'm glad you're here. Thank you. Thank you. Sorry. Brian. A question from Brian Berg in the audience, saying that you're actually being very humble in your talk about virtual memory, in that you were responsible for a great deal of what you've talked about, and I wonder if you'd like to make a comment on your contribution to the area. Well, the contributions were actually, I think, the product of the times, because I was at MIT. I was in the middle of Project MAC, which was trying to build the Multics system, which was going to go the sort of places that Atlas was trying to go, and it was illuminating a lot of the difficulties that Atlas still had. And one of the big concerns of Multics was the virtual memory. They were really worried about the performance of it, because there were so many mixed reports coming in about paging algorithms, and there were starting to be new reports coming in about paging. So I was looking around for a thesis topic, and my thesis adviser said, could you look at this and maybe figure something out? And that's how I got into it. And it turned into a big quest. It turned out to be a lot harder than I thought, and it required the invention of a new theory to replace all the previous theories. But then it turned out to be successful, which of course was proved in the context of real operating systems and not just theoretical results in a paper. So I was in the right place at the right time with the right set of interests, and lots and lots of learned, smart colleagues to interact with at MIT. OK, thank you. Are there any other questions for Peter? OK, well, thank you again for joining us. Oh, sorry. A couple... one comment that occurred to me as I was listening to the other talks is that virtual memory was originally conceived to deal with the fact that the real memory was smaller than the size of the address space.
The problems that came up with the other systems were not caused by that forced subsetting of the address space. They were actually caused by multiple address spaces that all had to be fitted into the memory. And that just created a different environment from the original one in Atlas and brought with it a whole bunch of performance problems. I'd also like to mention the collaboration with Roland on a paper that combines his story and my story today into a single article, which will be published in Communications of the ACM in September, that we call 'The Atlas Milestone', for the celebration of what we're here today for. Thank you. OK, thank you Peter. So I think with that we'll thank you again before passing on to the next speaker. So thank you very much for joining us, Peter. OK, so we've just had three speakers who've been talking largely about the milestones and previous developments. We now move on to Steve Furber, who's going to give us a talk about what's going on in the university at the moment and what the future is. Steve is ICL Professor of Computer Engineering in the Department of Computer Science here in Manchester. He was a principal designer of the BBC Microcomputer and the ARM 32-bit RISC microprocessor, and he now leads the SpiNNaker project at Manchester, which has delivered a computer incorporating a million ARM processors optimised for brain-modelling applications. Steve's going to tell us a little bit about that and some other things at Manchester. Thank you, Steve. OK, thank you very much for that introduction, Jim. It's a pleasure to be here, and it's been a fascinating afternoon listening to the ancient history of Manchester, and my job is to bring you a little bit up to date with what's going on in the department today. So the department has a pretty website, and this describes the areas of research that are currently active.
You can see it covers areas such as artificial intelligence, data science, future computing systems, human-centred computing, software and e-infrastructure, and theory and foundations. So it's a very broad spread of research that's going on, and we're addressing some of society's greatest challenges, such as learning how to make sense of the vast quantities of data that are streaming from a bewildering array of sources, issues such as keeping communications and personal data secure, and learning how to socialise the cognitive robots that will increasingly feature in our future lives. Since my time is officially already out, I can't cover all of this. What I'm going to focus on is what I think has been the main subject of today, which is computer architecture and engineering, but you should be aware that this is now quite a small subset of the research going on in the department. One aspect of our engineering and architecture research is that we are an ARM Centre of Excellence, and we have a very long-standing relationship with ARM, which is the Cambridge-based company that designs the microprocessors that are in everybody's pockets in this room today, I guess, and in pretty much every other device you find around yourself at home, in the car or on the move.
Of course, I personally have quite a long history with ARM. I came to the department in 1990, having spent the 1980s at Acorn, and given what we've been hearing, I thought I'd tell you a little anecdote. In about 1984 or 1985, when I was at Acorn, we'd designed this new microprocessor, and I was thinking about what we would use for a memory controller to support virtual memory. In those days the typical way virtual memory was supported was by having fairly complicated table-walking hardware, which would take a virtual address and do two or three look-ups in memory to get to the physical address, with a local translation buffer that sort of cached that to make it faster most of the time. And I had this brilliant idea for simplifying this: instead of using in-memory tables, I thought we could use a content-addressable memory. We'd have a slot for every block of physical memory, and that slot would say 'the virtual address of this physical memory is this', and then when we presented a virtual address to this store it would look it up, and if it found a match it would point you to the physical memory; if it didn't find a match, then you'd revert to the operating system, which could do the hard job of the table-walking to update it. Now, I was quite proud of this idea. It seemed much simpler than what everybody else was doing, and I talked to my colleagues, including Hermann, and Hermann said I really ought to go and sanity-check this with David Wheeler, who was the principal hardware guy at the Cambridge Computer Lab. So I invited myself into David Wheeler's office, and I spent 20 minutes drawing my great idea on the whiteboard and asked David: this seems very simple, almost embarrassingly simple, do you think it will work? And David sat there and thought for quite a long time, as was his style, and he said: well, it seemed to work in Manchester in 1963, so I don't see why it shouldn't still work today. And yes, I mean, my knowledge of computer history... I was a young man then, I still had hair, you
know that kind of thing I didn't know much about the Atlas and I was working in a completely different technology I was in the microchip era but I come up with basically the same idea and of course what I'd like to recognise is that ARM is one of the sponsors of this event today so that we have very good links with them and it covers a range of activities to do with ARM's business interests now I'm going to focus on the particular research topic that I lead here at the university and this again has historical origins so I live about 10 miles south of the city centre here in Manchester, quite near the airport and not far from where I live there is this fairly undistinguished semi-detached house which has a blue plaque over the brick archway to the left pointed to by the blue arrow and on that blue plaque it says Alan Shoring founder of computer science and cryptographer lived and died here and this indeed was the house that Shoring moved to when he came to Manchester after Williamson Kilburn had built the baby which was the first machine to implement his big idea from the 1930s of the universal computing machine he came to Manchester to play with it and while he was here he worked on a number of topics some quite biological in nature but the one that's most relevant is this one with the title Computing, Machinery and Intelligence and this paper begins with the words I propose to consider the question can machines think now this paper was published in 1950 two years after the baby ran its first programme so Shoring was really thinking a long way ahead as to where this technology could go and in this paper Shoring decides that can machines think is not a very well posed research question and he turns it around into a test for human like intelligence that he called the imitation game and of course that's the title that Hollywood picked upon for its movie about Shoring he recorded the imitation game but we in computer science simply know it as the Turing test for human 
like artificial intelligence and in this paper Shoring speculates that all a computer would need relative to the baby machine was more memory now the baby machine started with 128 bytes of memory I think and in this paper he reckoned that that would be about enough and he reckoned that computers might have a gigabyte of main memory by the end of the 20th century which is a remarkable prediction to go from 128 to a thousand million bytes in 50 years and yet it was pretty accurate it was roughly the turn of the century when a typical desktop PC would have about a gigabyte of main memory that PC would also be about a million times more powerful than the baby machine but it would not pass Turing's test and even today with the spectacular progress in computing power no machine has convincingly passed Turing's test for human like artificial intelligence this would have surprised Turing enormously and my personal take as to why human like AI has proved much harder than Turing and many people since Turing expected is because we don't understand natural intelligence and therefore we don't know quite what it is we're trying to make with AI natural intelligence is of course generated by the brain and we don't understand the brain the brain remains one of the great frontiers of science we cannot explain its basic principles of operation as an information processor and so that's why my work is for the last 20 years has turned towards building computers to help us understand the brain and the computer question is the spinnaker machine spinnaker is a simple compression of spiking neural network architecture based on the observation that the neurons inside each of our heads communicate mainly by emitting spikes they go ping from time to time and all your thoughts when you're sitting there thinking about what I'm saying or what you're going to have for dinner or when's the end of this going to happen those are all patterns of pings flowing around to the best of our knowledge as I 
say it's not fully understood and we set ourselves the goal of putting a million of these arm processor cores into a machine and connecting them so that they could support models of brain function from the outset it was clear that even with a million processors we were not approaching the scale of the human brain best case we might be at 1% from what we now know even that's a bit optimistic we're probably a fraction of a percent of the human brain or as I prefer to think of it 10-hole mouse brains the mouse brain is conveniently very similar to the human brain but a thousand times smaller it's a nice place to start and this work started from the design of a microchip this occupied the first five years from about 2005 to 10 or 11 designing this chip designing any microchip is now a huge undertaking you know I did think of this chip as having 100 million moving parts all of which have to work in perfect synchrony and we put this chip together then with that chip we can tile the two-dimensional surface and we use printed circuit boards with 48 chips on each of those chips has 18 arm cores so 48 chips has 864 arm cores and then you can assemble those boards into a large machine and in November 2018 we built up to the full million core machine and that's been offering an open public service under the auspices of the European Union human brain project since November 2018 and in fact the service started in 2016 with a half million core machine so we reached our objective and that's enabling quite a lot of people with interest in computational neuroscience to explore different models of the brain and hopefully it's adding slowly to our understanding of the brain which is still far from complete it's not the only spinnaker machine in existence we've been lending and then selling boards to labs all around the world and there are about 100 spinnaker systems out there in various different research labs across the globe which seems to be satisfyingly more than atlases were sold 
to the earlier table and they're all being used for a range of applications the sort of thing we designed the machine to do was model bits of brain and one of our milestone papers is on a model of a cortical microcircuit this model was developed by human brain project collaborators at Ulich in Germany and they have run the model on a supercomputer and groups have run it on GPU systems we ran it on spinnaker and spinnaker was the first platform to achieve real-time execution of this model it's been quite a healthy competition between these different platforms and we've all learnt quite a lot from each other and now the HBC and GPU systems have all managed to achieve real-time as well we learnt tricks from them and they've learnt tricks from us in improving the efficiency of these simulations so that's what spinnaker was built to do and it's delivering that kind of thing quite effectively now the brain is fairly ferocious as a computational task neurons typically connect to the order of many thousands of other neurons and so every time a neuron goes ping you have to connect that ping to thousands of targets in real-time compute the effect of that ping which might be several pings coming back and so on and this model has represents about a square millimetre of cortex it has 77,000 neurons just under 300 million synapses which is a connection from one neuron to the next and we model it at a 0.1 millisecond time step which is small enough to get pretty accurate realistic biological performance so we can model this model reproduces the measured firing rates of the different layers of the cortex so in some sense it's capturing the essence of the biology the interconnect in this bit of cortex is schematically shown in the diagram on the right of this slide there's lots of toing and froing it's very unlike an artificial neural network which tends to have a very simple flow of information here the information is flowing backwards and forwards and we can model it but we still 
don't understand it but of course with a computer model it's much easier to interrogate it effectively to insert probes and measure things and test hypotheses than it is with the biological original whose owner usually complains if you insert probes and so on although that's also done so that's just one example of what we've done spinnaker's also running robots in various parts of the world using biologically realistic neural control systems and so on and we have moved on to a second generation chip the spinnaker 1 technology is now quite old and the second generation chip now has 152 ARM processes on it instead of 18 and it needs more memory and more interconnect throughput and it will deliver a chip that's roughly ten times the sort of performance and effectiveness of spinnaker 1 and we've developed that with our human brain project collaborators at the Technical University of Dresden so that's one example of the sort of work that's going on in the computer architecture area in the department which is building large scale machines really from the baby onwards and it's interesting to sort of see the progress over the 70 years since the baby it's about right isn't it, Mendel arithmetic failing me these machines are actually relatively similar size they occupy order 10 rack cabinets of some sort the baby was built on a fairly open post office rack spinnaker has actually got boxes around it looking a bit tidier but if you look at the numbers the baby machine was a single processor built across those multiple rack cabinets and it would execute the order of 700 instructions per second the spinnaker machine with its million processors in a similar sized physical space execute some 200 million million instructions per second so I can't do the arithmetic there but if you are familiar with Moore's Law Moore's Law sort of suggests the doubling every two years in terms of the number of components you can build and if you translate that into performance then this track is 
very much in line doubling every two years is exponential and I use that literally in this case not linguistically doubling every two years is a factor of 1000 every 20 years or a factor of a million every 40 years a factor of 1000 million every 60 years and that's roughly what we're seeing here so we still have other architecture work in my group in the department evolve into various European projects but I've just given you a quick snapshot of one thing that's going on that's relevant to the historical background that we've been hearing about today and with that I'll finish thank you very much well thank you Steve are there any questions for Steve? Steve you mentioned that your research going on into artificial intelligence do you believe that Google's AI is sentient? so if I can just repeat the question for the outside audience so there is research going on into AI do you believe that Google's AI is sentient? no and of course not as Google okay there's one Google individual has said things which I don't think his company is happy with him saying I don't subscribe to the view that all that AI systems are are matrix multipliers that's also oversimplifying the statement and whether any computer model could be sentient is a very interesting question to which we don't know the answer because how does sentience work? we don't know if you built a very faithful model of the brain with the same complexity the same interconnect the same dynamics the brain is sentient would that model be sentient? we don't know okay one more question and then we should move on so the question is do you have any figures on the energy efficiency of atlas versus spinnaker or of the baby versus spinnaker I suppose? 
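The doubling arithmetic quoted above can be checked with a short script. This is a sketch using the speaker's round figures (roughly 700 instructions per second for the Baby in 1948, roughly 2×10^14 for SpiNNaker in 2018); these are quoted approximations, not measurements.

```python
# Sanity check of the "doubling every two years" arithmetic from the talk.
baby_ips = 700.0          # Baby, 1948 (approximate, as quoted)
spinnaker_ips = 2.0e14    # SpiNNaker, 2018 (approximate, as quoted)

# Doubling every two years gives one doubling per two-year period:
for years in (20, 40, 60):
    print(f"factor over {years} years: {2 ** (years // 2):,}")
# roughly a thousand, a million, and a thousand million, as stated

# Actual factor over the ~70 years between the two machines,
# versus the simple doubling prediction for 70 years:
print(f"actual:    {spinnaker_ips / baby_ips:.2e}")   # about 2.9e11
print(f"predicted: {2 ** (70 // 2):.2e}")             # about 3.4e10
```

The measured factor comes out somewhat ahead of the bare Moore's Law prediction, which is consistent with the speaker's "very much in line" reading given that SpiNNaker gets there partly through parallelism rather than single-processor speed.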
I don't have the energy numbers for Atlas, actually. I know the Baby used about three and a half kilowatts, so that's about five joules per instruction; that's the number, five joules per instruction. SpiNNaker, if it's executing flat out, will use about a hundred kilowatts. Get your calculator out; I can do mental arithmetic, but doing it in real time before an audience is slightly risky. It's what computers are for, isn't it? I think the answer is that there are many, many orders of magnitude of improvement, and indeed I do talks on that, but that's a different one. 
Okay, thank you, Steve. I think with that we should wrap up and move on, because time is running away with us. So thank you very much, Steve. Okay, Mike Henty here, yes. Next up was due to be Izzet Kale, who's chair of the UK and Republic of Ireland Section of the IEEE. Unfortunately he has been sabotaged by transport problems, but Mike Henty has very generously volunteered to step up, so if I can sort this microphone lead out... Sorry, Mike has volunteered to step up. Mike was the chair of the UK and Republic of Ireland Section of the IEEE in 2019, at the time that we actually started the project of putting the applications together, so it's actually very relevant that Mike comes along to see the end result. 
Thanks, Jim. Great, thank you very much. The first ever conference I went to had an after-dinner speaker called Ted Nelson, who invented hypertext, and he said, "I only have seven slides, so I won't be talking for very long." Two and a half hours later, the hotel asked him to shut up because he was only on slide two. So I only have five slides. Seriously, these are Izzet's slides, and Izzet apologises; he did make it to the other event that we had on Friday. The problem, of course, today is travel: it's much easier to come from another country than it is to come from the opposite side of Manchester, but you all understand that, so thank you. Izzet has a sort of standard slide letting you know all about the IEEE, and I'm sure you all know about the IEEE already, so I don't need to tell you about the 422,000 members. Actually, Jose and I were having lunch the other day with somebody, and I said it claims to be the world's largest technical professional society, and he said to me, "What do you mean, claims?" But in fact I am the Membership Development Committee chair for Region 8, so I can tell you that the 422,000 is correct, though the numbers go up and down pretty much every day, so it's roughly that number of people. We have 39 societies and several technical councils, and what we're very proud of is the fact that the UK and Ireland Section is the largest section outside of Silicon Valley. We claim this to be true, and I've tried to disprove it by going through the figures and the numbers, and I can't prove it to be untrue, so we just have to take it somewhat on faith. These are the officers: Izzet, as I said, is the Section chair, and he had planned to be here. Paul Cunningham is the vice chair, and he will be the next Section chair; that's the way the system works. Eduardo is the secretary, Matthew is treasurer, Nick here is our website manager, and Mona is the past chair. So basically I'm just here to represent Izzet and to say thank you to everybody who made today possible, and to the people who helped us on Thursday and Friday last week as well. Thank you to all our speakers; I really enjoyed the talks. And thank you to people like Brian and Jose and Steve, and the other Steve, the one Steve who came 10 miles, which is probably much harder to do than to come from the US. So thank you all, and all the speakers; the talks were fantastic, it's been a great event, and I really would like to thank everyone for their contributions. And remember, this is not a one-day thing; this is something that has taken many, many years to do, and I would like to ask you to join me in thanking Rod for an amazing effort, for putting in such fantastic work, and for his great achievement in making this all happen. Thank you, and I have six more slides. 
Okay, so we're now going to finish up by handing over to Rod Muttram. Rod is actually the meeting chair, so Rod has chaired the organising committee of not only the applications but then this meeting, and I just haven't let him have a word until now, so Rod gets the final word, and I will shut up in a minute. Rod actually also graduated from Manchester University in electrical and electronic engineering in 1976, and, as I didn't know until a couple of days ago when I asked him for his bio, Rod's claim to fame here is that his final-year project tutor was Freddie Williams, so it's really nice to have that come back around for the celebration of the Baby. Rod has had a 40-year career in system and safety engineering in four industries. He's a Fellow of the Royal Academy of Engineering, a Fellow of the Institution of Engineering and Technology, a Fellow of the Institution of Railway Signal Engineers, and a Senior Member of the Institute of Electrical and Electronics Engineers. So with that, I'll give you Rod. 
Thank you, Jim. Well, as Jim says, it falls to me to close the meeting with a few thank-yous, and I'm very conscious of the fact that I'm the last thing standing between you and a glass of wine, so I will try to be reasonably brief. First I must thank my fellow organisers of this event. When Simon Lavington and Roland Ibbett, at that conference in Glasgow, came to me and suggested that I might like to help them put this milestone together, because it was their idea, not mine, I didn't realise how much work it was going to be. As Jim said this morning, there were all those thousands of emails flashing backwards and forwards, and several iterations around the proposals with the help of Brian Berg, our advocate. So it was a lot of work to put them together, and Brian, we have to thank you, as the advocate, for helping us to do that and for marshalling our experts: Thomas Haigh, who Brian mentioned, and Peter Denning, who you heard this afternoon. I also have to mention Robert Colburn at the IEEE History Centre for all his help in walking us through the process and in helping us to order the two bronze plaques, in one case three times, and if anyone wants to know about that, come and talk to me over a glass of wine. The plaque at the bottom is actually plaque number two and will not be the one that goes on the wall; it's actually being remade at the moment, because if you look at it very closely you'll find it has some flaws. I must also thank Bob Geetrol and all the other volunteers at the Science and Industry Museum who, as those of you who went to the museum this morning and saw the Baby will know, keep that replica machine running, and all the other things there at the museum, which is undergoing a pretty major renovation at the moment, so not much of it is open, but it's a very good museum when it is. I of course also have to thank the senior IEEE people who are here: Steve Welby, Executive Director and COO, and Jose Moura, who is an old friend of the Section. He's unveiled several milestones here before for us, and when the current president was unavailable, Jose was the first name I thought of, because he's very keen on the milestone programme. He came and unveiled a milestone in Dundee with us, and in Glasgow, when we were cooking this pair of milestones up, he unveiled a milestone for the standardisation of the ohm. Last night we had Hugo de Ferranti with us, the son of Sebastian, whom you saw in some of these pictures (I haven't got a pointer for this), the gentleman in the picture with Cockcroft and Kilburn. Hugo unfortunately couldn't stay, again because of the rail strike, but was keen to support this, and did go to the museum yesterday; Bob organised for him to go and see some of the exhibits. The final part of the family archive, following Sebastian's death, is currently being transferred to the museum, so there's a huge history of the Ferranti company at the museum. I also have to thank some of the local people here in Manchester who helped to organise this: Ruth Maddox and the operations team, who some of you will have seen with the coffee and tea outside, handled registration and other organisational matters; Andrew York and all the AV team, who've been swapping all the pictures around for us; and Christine Bowers and all the house services team. There are a lot of people in the background in organising one of these events whom you don't necessarily see, but it can't be done without them. I must of course also finally thank our sponsors again: Arm Limited, the Computer Conservation Society, the Science and Industry Museum, the university itself of course, and the Life Members Affinity Group of the UK and Ireland Section. I guess when this came up I was sort of the obvious choice to lead in helping to put these milestones together, as both a Manchester University graduate and a director of Ferranti for ten years, so whilst I was never a computer man (I used computers but never designed them), clearly it was interesting to me to get these two milestones done, and I'm delighted that we finally got there. Manchester University looks very different to when I was here; as you know, the Victoria University and UMIST merged. On this site when I was here was the Maths Tower, and it was very different from the days when I used to come across here doing programmes on punched cards, putting them in as a batch and coming back, sometimes the next morning and sometimes two days later, to see not whether I'd got a successful result but whether the programme had run at all. Computing was rather different in those days. This building bears Tom Kilburn's name, and as you've heard, we had a fight to get Kilburn's name on the plaque, and I'm delighted we got there in the end, because everyone you talk to says how seminal Kilburn was in getting these things done. And as we heard from Steve Furber, Manchester is still at the forefront of computer research, and I've little doubt that future generations will see some further milestones on this campus. With that, I'll thank everybody for coming, and I'll let you get to your wine, which I'm told is not where the tea and coffee was but is on the first floor gallery reception area, which you can get to through these two rear doors, or you can go out through that door and back up the stairs at the side. So roughly in that direction you will find some drinks, and I'll see you there
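
The "get your calculator out" energy comparison from the Q&A above can be worked through explicitly. This is a sketch using the round figures as quoted in the talk (about 3.5 kW and 700 instructions per second for the Baby, about 100 kW and 200 million million instructions per second for SpiNNaker); they are quoted approximations, not measurements.

```python
# Per-instruction energy from the figures quoted in the Q&A (approximate).
baby_power_w = 3_500.0     # Baby: about 3.5 kW
baby_ips = 700.0           # about 700 instructions per second
spin_power_w = 100_000.0   # SpiNNaker flat out: about 100 kW
spin_ips = 2.0e14          # about 200 million million instructions per second

baby_joules = baby_power_w / baby_ips   # 5.0 J per instruction, as stated
spin_joules = spin_power_w / spin_ips   # 5e-10 J, i.e. 0.5 nJ per instruction

print(baby_joules, spin_joules)
print(f"improvement: {baby_joules / spin_joules:.0e}")  # about 1e10, ten orders of magnitude
```

So "many, many orders of magnitude" works out to roughly ten orders of magnitude on these round numbers.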