[The opening of this section is garbled in the transcript, mis-transcribed as Welsh. From the recoverable fragments, the speaker describes how he got into unconventional computing: attending a conference in Canada in 2014, where, among others, he met people using carbon nanotubes to build classifiers, and the memristor people.] I came out of the conference saying to the carbon nanotube guys, you should commercialise what you're doing, because there's a lot of money in... Because they were doing classifiers, there's a lot of money in that. And then the most relevant guys were the memristor guys. Now, has everybody heard what a memristor is? Does everybody know what one is? This is an educated audience. It's a bit of false gold. It's a bit of false gold, yeah. So it's a memory resistor, a resistor with a bit of memory. And there were some things two weeks ago, seven-bit memristors in Southampton. So I was a bit confused by these guys, and they were a bit confused by me, to be honest. I see computing the Feynman way: shut up and calculate. We're really on about, like Al said, number of operations, number of operations. So that's where it got to. Why is that interesting now?
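To make the "resistor with a bit of memory" idea concrete, here is a toy simulation in the spirit of the standard HP linear-drift memristor model. The model form is textbook, but every parameter value below is mine, chosen only to make the behaviour visible, not taken from the talk:

```python
# Toy memristor ("memory resistor"): resistance depends on an internal
# state w that integrates the history of current through the device.
# All parameter values are illustrative, not physical or from the talk.

R_ON, R_OFF = 100.0, 16_000.0  # fully-on / fully-off resistance (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 / (V s))

def simulate(voltages, dt=1e-4, w=0.5 * D):
    """Apply a sequence of voltages; yield the evolving resistance."""
    for v in voltages:
        r = R_ON * (w / D) + R_OFF * (1 - w / D)  # state-weighted resistance
        i = v / r                                  # Ohm's law this step
        w += MU * (R_ON / D) * i * dt              # linear drift of the state
        w = min(max(w, 0.0), D)                    # state stays inside device
        yield r

# A run of positive pulses lowers the resistance, and the device
# "remembers" it even after the voltage returns to zero.
rs = list(simulate([1.0] * 1000 + [0.0] * 10))
print(rs[0], rs[-1])  # resistance has dropped and then held steady
```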
I think this is the next slide. Why is unconventional computing becoming commercial? Well, it's exactly what Al said. There's this downward pressure from AGI or AI or whatever, being able to process, to do interesting things decision-wise, and then there's this real need upwards to provide hardware to support it. And it really is ridiculously open space at the moment. Al said a lot about NVIDIA, but they're not really rethinking this space. They're not really rethinking what they're doing. They're just doing what we've done traditionally very, very well, at a distributed level, and it happens to fit the modelling. But look at this, this came out last week: OpenAI Five. Does everybody know about OpenAI Five? OpenAI are the DeepMind of America, and they're trying to democratise the whole AI space. And they do a lot with what's called reinforcement learning, which is, I don't want to go into it, but it's the bit that looks most like what we do, sort of thing. The sort of artificial general intelligence bit. Rather than the stuff Al talked about, computer vision, which is just like our eyes, reinforcement learning is a bit more like how our brain might work. I don't think it actually is, but anyway, that's an aside. But look at how much they've thrown at this proximal policy optimization: 256 GPUs, 128,000 CPU cores, right? That's a lot of computing power just to do this one problem. We worked out they're spending about half a million quid on electricity just to solve this one problem. So you're faced with that, and you think, hang on a minute, that doesn't seem right. Fundamentally, when you look at the brain, and von Neumann, when he was into this, published on exactly this, the brain runs on, I think it was something like 30 watts. A ridiculously small number of watts. So my take on it is: we're kind of wrong, right? We've done all this with computing up to, like, 2018, and it's gone really well, let's face facts. We can all do a lot with computing, but it's maybe not what we want to do. Again, Al's done this talk for me. We have to step back, go back a little bit, and then come again. So this talk is really about that coming again. We want to do a new bit. We're not concerned with binary anymore. We're concerned with the low-level representation, and with changing the low-level representation. So here's where your audience participation joins in, right? I've got some questions, and whoever answers first, I was going to buy them a pint, right? So, you know, we're up for deals here. So this, and we mentioned it before, this is the von Neumann bit. This is classic, right? Everybody can answer this question. What is that number there? Name the number: 1001. Everybody knows how to answer this one. Can I get an answer from the audience? Nine. Nine. I owe somebody a pint. It's D, right? So that's the way it's done. Why did we do it that way? Well, it's efficient. Memory-wise, it's an efficient way: the number of bits scales as the log of the value, so n bits give you 2^n values. So we could have said A if we'd been assigned it, but most people would say that. What's happening in unconventional computing is people are rethinking this. So here's the next one... oh, nine, yeah, that's the answer. Here's the next thing, and this is really interesting. It's called stochastic computing. We're computing in the range of probabilities.
And quite a few people are doing that, because a lot of machine learning lives in the range of probabilities. So can anybody have a guess? At the top there's a little kind of cheat. If you look at that top line with the zeros and ones, can anybody guess what number it represents? D, very good. We're all on form. 0.75. So why is that interesting? Well, it's not a compressed representation, but here's the trick. To multiply, what we do is permute those values randomly, and then line them up with an AND. So if you think about this bottom column here that's lined up with an AND: if that's 0.5 and that's 0.5, that is 0.25, yeah? We permute the values with randomness, so it's like using probability in reverse to calculate multiplication. So you get the idea that you can do multiplication very, very cheaply, which, as we know, again, Al absolutely laid this out for me, multiplication is a big operation. It's a big operation in machine learning, a big operation in neural models. So we do that very cheaply. The guy who invented that, I read his PhD, and that's his idea, in that PhD. He did that, and he's now at Oculus Rift, the goggle guys, the VR guys, that's it, yeah. So why is this useful? It's multiplication, and it's also probabilistic. And we live in a probabilistic world: when it comes to things like cars, there are going to be probabilistic decisions about whether we run down the baby or the guy on the other side that looks less fun. Is that right? That's not really a right thing to say, ethically. Okay, so this one, again, this is my take on quantum. That looks like a load of digits for those who are... any answers? E is a great answer, and it is the right answer. You guys are super smart. So this is the real trick: the best way to understand quantum is to look at this SMBC comic and read this one line. A complex linear combination of the zero and one states, which you should think of as a new ontological category. You can't necessarily think about it the way we think about classical systems. And why is that useful? Well, we have this thing that they don't tell you about, which I call magic collapse: you take an optimisation problem and you collapse it down to its solution. Like, oof, it just collapses into its solution. It just finds the right solution. That's the magic collapse of quantum computing, and the algorithms are magic-collapse algorithms: they take a complex space and reduce it down into a classical answer. And that's what they're going to do. I'm going to be like the IBM guy and say there'll only ever be five of them. And I want to be proved wrong, because I want everybody to have one on their phone, but I think there'll only be five. Remember I said that, and laugh at me in five years' time when they're on your mobile phone. What's that? All the quantum you'll ever need. All the quantum you'll ever need, yeah. So I'm making that prediction. I don't know, it just seems such a rarefied device. A classical device is not so rarefied, but with a quantum device you've got a rarefied environment holding the physics of how it works in place. So those first three are all really boring. Von Neumann is now boring. Stochastic was interesting two years ago; it's now boring. Quantum, we're nearly there with quantum, so we're all bored of that. This is the interesting one.
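Here is a minimal sketch of that stochastic multiplication trick in Python. Purely illustrative: the stream length and seeds are arbitrary, and the names are mine:

```python
import random

def to_stream(p, n=10_000, rng=random.Random(0)):
    """Encode probability p as a random bitstream whose fraction of 1s is p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def value(stream):
    """Decode a bitstream back to the probability it represents."""
    return sum(stream) / len(stream)

# Multiplication is just a bitwise AND of two independent streams:
# P(a AND b) = P(a) * P(b) when the streams are uncorrelated.
a = to_stream(0.75, rng=random.Random(1))
b = to_stream(0.5, rng=random.Random(2))
product = [x & y for x, y in zip(a, b)]

print(value(product))  # ~0.375, i.e. 0.75 * 0.5, up to sampling noise
```

The point is the hardware cost: each bit of the product needs only a single AND gate, rather than a full binary multiplier array, which is why this is attractive for machine learning workloads that tolerate a bit of noise.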
And it happens to be the one I'm involved in, curiously enough. OK. So here's what I call a temporal bit, and you'll see why I call it a temporal bit. Has anybody got an answer for that? And again, there is a little cheat. It's not B. Bad guess, you have to buy me a pint now. That's the way it works. Oh, that's all right then, we'll buy each other a pint. It is, in fact, C. So what a temporal bit is: imagine a click is a bit, and imagine I want to communicate the number 8 to you. What I do is I click once, and that's the start bit there. Then I wait eight seconds, and then I click again. So we've got this period. That's two clicks: click, click. Two bits. But I've actually communicated four bits' worth of data, because the time channel is orthogonal. I've sent the number eight, which takes four bits in binary, but I've only used two clicks. That's really the way the brain does it: it uses this channel to do computation. So why is that interesting? Partly because it's compression. And there are other interesting things, I may be jumping ahead a slide, actually. There are a few interesting things about this. One of them is you can do addition. If I want to do addition, I say to you guys, let's do an addition, and I click once, then I click again, and then I click again. Three clicks. The first number is the gap from click one to click two; the second number is the gap from click two to click three. And to get the sum, you ignore the middle click and just measure the gap between the two end clicks. So you've done that computation with no hardware. You've done it because the channel, the way the data is represented, is natural to the computation. And those clicks don't have to be anything in particular. I'm using my fingers, but they can be electromagnetic signals, anything that can oscillate, because that's really all a clock is. And the other interesting thing is you can have a clock that runs at a different speed to mine, like that bottom line, and what you get is the ability to do multiplication, because you can run clocks at different rates. I communicate the number two, but your clock runs twice as fast as mine, so you read it as four. So you're doing multiplication. You're doing addition and multiplication, and you're not really having to do anything. You're not having to build a half adder, you're not having to do anything. So that's why I'm interested in temporal: it's this different way of representing. And actually the paper I published at the conference in Canada was this unconventional arithmetic, this idea that that's the way the arithmetic works. So we published that. And the other idea I have is this idea of memory. If I click and send it to you, and you send it back to me, we've memorised it, right? We have a circuit that's a kind of memory. And if the brain does anything in terms of memory, that's probably what it does: it remembers temporal relationships between neurons. So you end up with that as a way of storing data. And the interesting thing is, when you're storing data, it's the same channel you're using to compute with. So you've got no bottleneck between memory and computation. It's all in that one channel. So that's what we... and you can make that eager.
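A toy version of that click arithmetic in Python. The encoding, function names, and tick units are mine, just to show the mechanics of the examples above:

```python
# Temporal ("click") arithmetic toy. A number is encoded as the time
# gap between two clicks; here a click is just a timestamp in ticks.

def encode(n, start=0):
    """Encode n as two clicks: one at `start`, one n ticks later."""
    return [start, start + n]

def decode(clicks, rate=1):
    """Read the gap between the end clicks, scaled by the clock rate."""
    return (clicks[-1] - clicks[0]) * rate

def add(a, b):
    """Addition: share the middle click, then count only the end clicks."""
    first, middle = encode(a)
    _, last = encode(b, start=middle)
    return decode([first, middle, last])  # gap between the two end clicks

print(decode(encode(8)))          # 8  (two clicks, eight ticks apart)
print(add(3, 5))                  # 8  (three clicks, no adder circuit)
print(decode(encode(2), rate=2))  # 4  (receiver's clock runs 2x faster,
                                  #     which is the multiplication trick)
```

Note that `add` contains no arithmetic hardware in the talk's sense: the sum falls out of how the clicks are laid end to end on the channel.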
So what we said was, when you're doing that, if I keep sending you an add, well, you just do the add while we're doing the memory. So you get this idea of eagerness. If anybody's ever written any Haskell, or any programming that has this laziness concept, this is the opposite idea: eagerness. So that was published. And actually, interestingly, across all the things we've talked about, there are hybrids, and I'm interested in hybrids. One of the problems with temporal is that if you've got the number a million, I've got to wait a long time to do the computation. So is there a hybrid between the way binary represents things and this, which is effectively a unary code? Is there a way of hybridising those two? That's what I've been working on. So this is the key paper for me: John Hopfield. If anybody knows about neural models, John Hopfield has a model named after him, the Hopfield network. And he said this: if you try to divide a number by seven and it's in base 10, it's really, really hard to do. But if it's in base seven, it's really, really easy, because you can see whether it's divisible just by looking at the representation, at whether the last digit is zero. So what he's really saying is that the representation lets you do the computation much, much more efficiently. It's not just an optimisation, it's a rethink. That was the paper that made me think, oh, maybe this is something interesting. So I tried to get him on the slide; that's him hanging over the bottom of the thing there. So, domain-specific architectures: where is this all going? AGI, we're all going to be in this artificially intelligent world where everything is calculated for us and we never have to think again, and we've got to build hardware for that. And the ACM Turing Award guys are saying we've got to build domain-specific hardware, hardware that's specific to a problem, just as NVIDIA are doing now. And they say, you can't see this very well, but they say we're in a new golden age for architectures. So the opportunity, both commercially and, I have to mention, in open source, is to build these systems; there is an agenda, and FPGAs play a part in that agenda. So we need to do that. So where am I in this? Well, I'm the business behind the temporal aspect of it. We got some government funding for it six months ago, and we're doing feasibility studies on building all kinds of systems based around this. The major thing we're interested in is multiply-accumulates, which everybody's interested in. We've got an idea for a multiply-accumulate unit using this. A multiply-accumulate unit is the workhorse of a neural model: it's the bit doing the dot product that Al set me up for earlier. So that's what we're doing, and that's where it ends. There is this huge agenda to do interesting things with hardware, and I guess all you guys are hardware guys, so it's worth thinking about. It's not going to carry on being seven nanometres, then three nanometres, forever. It's just not going to happen. We're going to get to three nanometres, we're going to run out, and we're not going to think of ways of doing any more. The reason I put this up is that all of you should go down the road, turn right at the cathedral, and there's a plaque for George Boole; that was his house there. So it's quite fitting that I'm talking about reinventing binary when the guy who invented it, well, he didn't quite invent it, but near enough, was just down the road. So that's me.
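Hopfield's divisibility point is easy to see in code. A throwaway illustration (my example, not his):

```python
def to_base(n, b):
    """Digits of n in base b, most significant first."""
    digits = []
    while n:
        n, r = divmod(n, b)
        digits.append(r)
    return digits[::-1] or [0]

n = 343  # 7**3
# In base 10 you have to actually perform a division to test for
# divisibility by 7:
print(n % 7 == 0)              # True, but it cost a division
# In base 7 the answer is sitting in the representation itself:
print(to_base(n, 7))           # [1, 0, 0, 0]
print(to_base(n, 7)[-1] == 0)  # True: just look at the last digit
```

It is the same moral as the temporal addition example earlier: the work has been moved into the representation, so the "computation" becomes a cheap read-off.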
That's my business. That's what I think. I don't really do social media, so thanks for listening. Any questions?