[The opening introduction is garbled in the recording. The chair is introducing David Deutsch of the University of Oxford, a pioneer of the theory of quantum computation; the recoverable fragments mention his work on quantum computers, dates including 1985, 2008 and 2013, and constructor theory.]
[Unintelligible.] You may find it useful to use the predictions of the flat-earth theory when you're planning out your garden. But when you're thinking about what the world is like, and even more when you're thinking about what the laws of nature are, it would be hopeless: the flat-earth theory would just be an impediment to understanding what's out there. [Unintelligible.]
[Unintelligible.] The basic mathematical theory of probability, namely certain numbers attached to certain elements of a set, which then obey axioms like adding up to one and so on, was invented in the 16th century by people who only wanted to win at games of chance. This is one of them, Cardano. And that started off game theory, and game theorists used the idea of a random sequence as a mathematical model of physical acts that are integral to games of chance, such as shuffling a pack of cards or throwing dice. And probability, via the central concept of "equally likely", was a mathematical model for the intuitive idea of a fair shuffling of the cards, a fair die and a fair throw of the dice. So, when I say that this was merely a mathematical model (unpredictability being also essential to games of chance), perhaps I should say more clearly what I mean. I mean that they didn't assume that anything in the physical world, shuffling cards and so on, was literally what we would today call a stochastic process, that it literally had properties such as probability. That's because, first of all, the properties that they actually needed from shuffling cards were far weaker than full literal randomness. They were basically that the outcomes of shuffling should be fair and equitable among the players and unpredictable to them. So that meant that the outcomes were not related in any simple way to any sequence or algorithm that they could call to mind or execute during playing the game. So, pseudo-random sequences would have done just as well, if they'd had computers then. And secondly, more importantly, I don't think it would even have occurred to them to connect the probability in their model with some real physical quantity, because classical physics, which was in its infancy at the time of Cardano, but a bit later when it was perfected, was deterministic.
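The point that a pseudo-random sequence would have done just as well can be made concrete. This is a hypothetical illustration, not from the talk (the function name is mine): a deterministic, seeded shuffle is nothing like a stochastic process, yet it delivers exactly what the game theorists actually needed, namely outcomes that are equitable and unpredictable to the players.

```python
import random

def seeded_shuffle(deck, seed):
    """Fisher-Yates shuffle driven by a deterministic PRNG.

    Every swap is fixed by the seed, so nothing here is literally
    random; but a player who cannot compute the seed's consequences
    cannot predict the outcome, which is all that fairness in a
    card game requires."""
    rng = random.Random(seed)
    deck = list(deck)
    for i in range(len(deck) - 1, 0, -1):
        j = rng.randint(0, i)  # deterministic, yet opaque to the players
        deck[i], deck[j] = deck[j], deck[i]
    return deck

shuffled = seeded_shuffle(range(52), seed=16)
```

Re-running with the same seed reproduces the same order exactly, underlining that no probability is present in the physics of the process.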
And therefore inconsistent with stochastic processes happening in nature. So, why did those game theorists blithely use mathematical axioms of probability theory that were both far too strong and already ruled out by what was known about fundamental physics? Well, simple. Being too strong is harmless: it meant that they certainly satisfied the conditions that were necessary for analyzing games. And the real properties that they really wanted, such as fairness and so on, couldn't be expressed precisely mathematically. By the way, they can be expressed precisely in constructor theory, but that's another story. Consequently, it also didn't matter that their model relied on impossible physics, because that impossible physics led to the same conclusions about strategy and tactics in game playing as the real physics. So it didn't matter for them. There's another reason why they might have been blasé about the connection between their theory and physics. Certainly this was a move made later in game theory, and in probability theory up to the present day. They might have regarded the probabilities not as attributes of the dice and the cards or the dealer, but as attributes of the player receiving the cards. That's the subjective interpretation of probability. I'll come back to it in a moment, but note already that whatever role the subjective state of mind of the player may play in this story, fairness must depend on physical properties of the cards and dice and how they're treated during the game. Therefore, game theory must, in principle, be rooted in some sort of model of those objects and processes. Anyway, from that theory, game theorists derived maxims for playing games of chance, such as, in poker, never draw to an inside straight. The important thing is that this was a narrow, parochial application of a mathematical trick, the theory of probability.
Indeed, the reason that game theory was possible at all was that its conclusions don't depend on the detailed physics of any game-playing processes. It didn't matter what the orbits of the planets are, it didn't matter what matter is made of, let alone anything deeper about the laws of physics. The theory was directed specifically at modelling a particular human social behaviour, which even other animals don't do, let alone the physical world at large. So, it should be very surprising that this little mathematical trick, the theory of probability, has found further applications in very diverse fields in science and beyond, and even that it seems to be fundamental to some of those fields. So, here are some of them, roughly in the historical order of their invention. We'd be surprised if, say, the seven of diamonds appeared in a law of physics. But probability, which has the same provenance as the seven of diamonds, seems to be central to physics in particular, and it seems to work. So, why would anyone want to expunge it from physics and from every other fundamental theory? Well, of course we don't need a reason. Physics likes to do without things. As we discover more and more about the world, we sometimes find out that things that we thought existed don't exist, such as celestial spheres, the force of gravity, trajectories of particles and so on. They were all thought to exist, and discovering that they don't was a great advance in understanding. But before we can decide that something doesn't exist, we must explain what the world without that thing is like. And also, preferably, why thinking in terms of that thing seemed to work, why it was a useful fiction and why it may still be a useful fiction. So, compare these two statements. The first statement specifies a factual, observable property of the world. It specifies what will happen when poker is played and specified hands arise. But the second one is not about physical facts.
It is consistent with any sequence of cards arising in a poker game. In fact, it's consistent with any physical events, such as someone repeatedly drawing to an inside straight and repeatedly winning. Not inconsistent. Yes, it's true that it was risky. The second statement is true all right. But it doesn't apparently refer to any physical events that actually happened in that case. One sometimes hears a sort of desperate denial of this, along the following lines. The statement "it was too risky" is about the physical world. It refers to all the other players who drew to an inside straight and lost. And it refers to the fact that there are more of them. They outnumber the winners. Well, first of all, it doesn't. There have only been finitely many poker games in the world. Ignore for the moment the fact that there are parallel universes and that a particular player both wins and loses in different universes. Quantum theory does in fact solve some of these problems, but not by counting the number of players. So for the moment, in a given universe or in classical physics, the proportion of players who drew to an inside straight and lost doesn't exactly equal the probability of losing. Just as repeated tosses of a fair coin, however fair it is, don't in general result in equal numbers of heads and tails. And anyway, since when do gamblers care about whether other gamblers lose or win? If probability refers to other players losing money, it doesn't refer to any physical fact about the game as it actually was in the case that I cited. And the same holds for all probability statements. So that's meant to be in the second column; that's meant to be all probability statements. That's the dot dot dot. The blatant fact that is generally overlooked is that no statement from the second column can ever imply any statement from the first column. In other words, assertions about probabilities do not refer to the physical world. They don't assert anything about the physical world.
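The coin-tossing point can even be made with the probability calculus on its own terms: an exactly equal number of heads and tails gets less likely, not more, as the tosses mount up. A small sketch using exact binomial arithmetic rather than simulation (the function name is mine, not from the talk):

```python
from math import comb

def prob_exact_split(n_tosses):
    """Probability, under the standard fair-coin model, that heads
    come up exactly half the time in n_tosses tosses."""
    if n_tosses % 2:
        return 0.0  # an odd number of tosses can never split evenly
    # C(n, n/2) equally-likely head-positions out of 2**n sequences
    return comb(n_tosses, n_tosses // 2) / 2 ** n_tosses

# Roughly 8% at 100 tosses, and it keeps shrinking as tosses accumulate.
```

So even on the model's own account, frequencies are not probabilities; the exact match is the atypical outcome.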
Frequencies, like the fraction of winners over losers historically, are things that happen in real life, and therefore they don't in general equal probabilities. And it's no good saying that they equal them approximately, because they don't. They only probably equal them approximately, and that's a statement from the second column. Similarly, you can't say that probability statements are about what will happen in the long run. No, they aren't. All you can deduce from them are statements about what will probably happen in the long run. But frequencies in an infinite sequence of measurements, of experiments, do equal the probabilities exactly. And this inspires yet another desperate denial: that the finite sequences are approximations to infinite ones. But nothing about a finite subsequence of an infinite sequence can possibly follow from a statement about relative frequencies in the infinite sequence unless the subsequence is a typical one, and the statement that it is a typical one belongs firmly in the second column. Or you may take the subjective route that I mentioned. You could imagine that probability statements are not assertions about the world at all; they are assertions about our minds, probability being a measure of ignorance or of a degree of rational belief. Those are two different subjective theories of credence. But that's no good for present purposes, because then you still need something to connect statements about our minds to statements about physical reality. So, to this end, the philosopher David Lewis proposed his Principal Principle, which just asserts as an axiom that rational agents have the same credences as the physical probabilities. But note that that gives no explanation of why those purportedly physical numbers should inform decisions in that way. You may as well propose an axiom that rational people avoid black cats or ladders. That's not physics.
So, the upshot is that Cardano and the game theorists never did succeed, after all, in their purpose of finding ways of winning at games of chance, or of minimizing their losses. They only found ways of probably winning. And later people added these purported philosophical principles that say a rational person would do this or that. But actually, a rational gambler knows that having probably won, no matter how often one does it, won't pay the rent. Physically, it is most unlike actual winning. So, in case there still appear to be any clothes on this emperor, because of the mass of cultural interpretation that's been loaded onto it, just replace all the probabilistic terms in the second column by magical terms. Why, what physical reason is there to allow statements in the second column to inform decisions by fiat, and not statements in the third, which we could equally well connect to reality by some fiat? There, that's the magical column. Thanks, thanks a lot. So, what I have called these desperate denials of the firewall between the first column and the other two are more commonly known as interpretations of the probability calculus: the subjective interpretation, frequency interpretations, ensemble interpretations, all in many variants. Regarded as attempts to connect the second column with the first, they are all, like the examples I've given, either circular or meaningless, or they conflict with the probability calculus, or they just don't do what they say they do. This discussion that I've just given about the defects in the theory of probability draws on the work of David Papineau, who's a philosopher of probability. He has called this situation at the heart of his field a scandal. Probability concepts and language and the whole theory simply form a closed system of statements and ideas that just refer to each other and can never yield a statement about the physical world.
Now, if we go back to the applications of probability, we can see that there is a common feature running through all of them. They are all, in a certain sense (and in slightly different senses), normative. That is to say, they are at root about how one should act if one were to believe that certain potential events have particular probabilities. Now, how one should act is rather a strange thing for a scientific theory to talk about. You know, you can't get an ought from an is. In fact, the firewall between factual statements and moral statements exists for a very similar reason to the one separating factual statements from probabilistic ones. So, another way of papering over the divide, or ignoring it, is by a different kind of fiat. One simply puts a normative statement about probabilities into the physical theory. That's called a stochastic physical theory. There are many ways of expressing that fiat. Here's one. It contains a lot of hidden stuff. It pretends only to contain that first row: some purportedly factual statements about numbers that purportedly have something to do with physics. Then there has to be a principle to give them some normative psychological meaning, the second row. And in the third row, you actually need another axiom, from decision theory, to say how one should actually behave rather than how one should think. This is weird, but luckily, it so happens that the only stochastic theory that has ever been proposed in the history of science as a fundamental description of the world is quantum theory, in its mid-twentieth-century state-vector-collapse form. In that form, its probabilistic part is called the Born Rule, which says that if and only if an observable is measured, then the probabilities of the various outcomes are the moduli squared of those coefficients. The bottom line there. By the way, does anyone here actually believe that state-vector collapse occurs in physical reality? Show of hands? Okay, no, good.
So, one person. You have my sympathy, and I hope you'll see in what follows that help is at hand, closer than you think. Help is at hand because ordinary unitary, non-collapse quantum theory provides, in large part, the way out of that whole probability scandal. It's called the decision-theoretic approach, sometimes the decision-theoretic approach to the Born Rule or to probability, but those are both misnomers, because neither the Born Rule nor probability nor collapse, of course, ever appears in the decision-theoretic argument. In its simplest form, it goes like this. We imagine an array of gaming machines, one-armed bandits in a casino. So, there's a whole array of them. And you play by inserting one casino token, and then when you pull the handle, the machine prepares a quantum system inside itself, in a state psi, and then measures an observable x of that system. It then displays the result, as shown in red on the middle panel, and that will therefore always be an eigenvalue of x. And then the machine delivers a payoff of that number of casino tokens, which need not be a whole number; we're allowing fractional tokens if x has non-integer eigenvalues. Different machines in this casino are identical except for the state psi, which is constant for each machine but different for different machines. But the psi is not a secret. It's written on the front of the machine, just like this, conveniently expressed as a superposition of eigenstates of the observable x. Now, for machines whose psi is a single eigenstate of x with eigenvalue little x, playing on that machine is not a game of chance. It's just a matter of putting in one token and receiving back little-x tokens. Let's call that a classical machine, because it could be implemented without quantum systems, without quantum technology.
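The machine just described is easy to sketch numerically. This is a hypothetical illustration (the names and numbers are mine, not from the talk): represent a machine by its payoff eigenvalues x_i and amplitudes c_i, with psi a superposition of the eigenstates; the number the talk goes on to study is the expectation value of the payoff, the sum of |c_i|² times x_i.

```python
def expectation(eigenvalues, amplitudes):
    """<psi|X|psi> for psi = sum_i c_i |x_i>, with the |x_i>
    orthonormal eigenstates of the payoff observable X."""
    norm = sum(abs(c) ** 2 for c in amplitudes)
    assert abs(norm - 1.0) < 1e-9, "psi must be normalised"
    return sum(abs(c) ** 2 * x for c, x in zip(amplitudes, eigenvalues))

# A "classical" machine: psi is a single eigenstate, value = its eigenvalue.
classical = expectation([3.0], [1.0])

# Equal-amplitude superposition of payoffs 0 and 2 tokens: worth exactly
# the one token a play costs, on the argument that follows in the talk.
borderline = expectation([0.0, 2.0], [2 ** -0.5, 2 ** -0.5])
```

Note that nothing probabilistic is computed here; the quantity is just a functional of the state written on the front of the machine.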
Other things being equal, a rational player would be willing to play on any classical machine that has little x greater than one, and unwilling to play when it has little x less than one. What about cases when psi is not an eigenstate of x? What then is the dividing line between being worth playing and not being worth playing? Well, for a machine operating with state psi, let's call that dividing line v of psi. This is perhaps a little bit of a wordy definition, but it's just the maximum amount of money that the player would be willing to pay for the privilege of playing on that machine. So, for a classical machine, psi is an eigenstate of x and v of psi is just the eigenvalue. Well, some other facts about v of psi for more general psi are obvious too; we don't need probability or the Born Rule or anything. For example, if the state is a superposition of eigenstates of x all of whose eigenvalues are greater than one, then elementary rationality says that it's worth playing, because whatever the outcome, the machine will give you more than the one token that you put in. Similarly, for a superposition with all eigenvalues less than one, it's not worth playing. That is, it's not worth playing in the token-winning sense of the game theorists. You might well play for fun, but then you're being paid partly in fun, so let's ignore that complication. But now the hard case: what if psi is a superposition of two eigenstates, say one with eigenvalue below one and one with eigenvalue above one? Well, collapse quantum theory, with the Born Rule and everything, tells us that for general states psi the probabilities of the respective outcomes are those coefficients squared. And then the Principal Principle tells us to adjust our credences to match those numbers, though it doesn't say why.
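For reference, the Born-Rule statement being invoked there can be written out (my notation, reconstructing the slide's "bottom line"):

```latex
% Expand the state in eigenstates |x_i> of the measured observable X:
\[
\lvert\psi\rangle = \sum_i c_i\,\lvert x_i\rangle
\qquad\Longrightarrow\qquad
\Pr(\text{outcome } x_i) = \lvert c_i\rvert^{2},
\]
% if and only if X is actually measured.
```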
And the probabilistic game theory tells us that a rational player should value playing such a machine the same as if it were guaranteed to produce that expectation value. Again, it doesn't say why. In general, the axioms of stochastic theories are not explanatory, which alone should disqualify them from being part of any scientific theory in fundamental science, but I digress. Now, what about without collapse? Well, let's take a simple case of an equal-amplitude superposition with eigenvalues x1 and x2, each with amplitude one over root two. So, we're aiming to prove, without collapse or anything, that v of that equal-amplitude state will be the average of x1 and x2. That's what we want to prove. So, here's the proof, a quick and dirty version of the proof. The devil is in the detail, which I'm not going to talk about; the details are extensive, but they lead to the same conclusion. First of all, we need to note two implications of elementary rationality, not probabilistic in any way, namely equations one and two. I'm stating them for a slightly more general case, but it doesn't matter: the theta is going to be pi by four in the end. So, equation two says that v of psi does not depend on the objective exchange value of casino tokens, so long as they're additive. If the casino suddenly declares that it's going to redeem tokens in pounds instead of dollars, the relative order in which a rational player values playing on different machines will not change. So, that's one implication of rationality, and the other is expressed in equation one. If two machines use superpositions where each eigenvalue in the expansion of one of them differs by a constant k from that in the other, then the rational player's valuation of the two machines also differs by that same constant k. And that's just because, in that case, all the first machine does physically is the same as what the second one does, plus additionally paying out k tokens.
Now, we make those substitutions there on the slide, and it follows with a bit of algebra that for the equal-amplitude state, v of psi is indeed the expectation value of x in that state; and we can prove the same for a general state, and that's QED. But look what we've done here in a deeper sense. We've proved the same conclusion about what rational players do as we would have from collapse quantum theory, but without assuming that collapse happens. We've proved it without the Born Rule, without those axioms and all that stuff. So, we've done this on the right-hand side, whereas with collapse quantum theory we have to go through all that process on the left-hand side to get to the same conclusion. On the right-hand side, the only assumptions are elementary rationality and unitary quantum theory without collapse. From the perspective of this talk, that means that we have dispensed with probability, both in nature, in the quantum world, and in our minds, if we have any credences about the quantum world. Because in nature, as described by quantum theory, there are no stochastic processes; and no credences, in the sense of beliefs with numerical measures that obey the probability calculus, affect the decisions of any rational person making decisions about quantum systems. But it's even better than that. Those decisions, now on the right-hand side, are not only derived but, unlike in the collapse case, explained, because we don't have to introduce all those unexplained postulates. And that also explains why collapse theory, despite its false assumptions, was and is successful in a particular domain of application. So now, let's look again at that list of fundamental applications of probability. We can strike out quantum theory from the list. And we can put a tentative mark there about credences too, since we now know that they're not needed in this particular application.
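The "bit of algebra" referred to above can be reconstructed along the following lines (my reconstruction of the slide's substitutions, for the equal-amplitude case, theta equal to pi by four; V is the player's valuation):

```latex
% Reconstruction of the equal-amplitude case.
\begin{align*}
\psi &= \tfrac{1}{\sqrt{2}}\bigl(\lvert x_1\rangle + \lvert x_2\rangle\bigr),\\
\text{(1)}\qquad
V\!\Bigl(\tfrac{1}{\sqrt{2}}\bigl(\lvert x_1+k\rangle + \lvert x_2+k\rangle\bigr)\Bigr)
  &= V(\psi) + k.
\end{align*}
% Choose k = -(x_1+x_2)/2, so the shifted payoffs are \pm(x_1-x_2)/2.
% By (2), relabelling the tokens' value is irrelevant; in particular,
% negating every payoff negates V. But for this symmetric state,
% negation merely swaps the two equal-amplitude branches, leaving the
% same game. So its value V_0 satisfies V_0 = -V_0 = 0, and therefore
\begin{align*}
V(\psi) &= V_0 + \tfrac{x_1+x_2}{2} = \tfrac{x_1+x_2}{2},
\end{align*}
% the expectation value of x, with no Born Rule, no collapse and no
% probabilities assumed.
```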
We've also eliminated probability from the theory of games of chance that use quantum processes, where the chance element is generated by quantum indeterminacy. So what about games of chance in general? Well, recall what I said at the beginning: probability is a very large sledgehammer with which to crack the egg of modelling things like fair dice. Now we see that the same job could be done by quantum theory as by the impossible physics that they assumed. Since a pseudo-random sequence would have served the same purpose, so would a quantum-generated sequence. And for the same reason we can strike out the use of probabilities in general decision theory, which is just game theory writ large, and in actuarial science; I just put that in because it was historically a very early application of probability. Just as in game theory, the theory of evolution doesn't depend in any way on true randomness in the mutations. All that matters is that the theory can explain how biological adaptations to an environment can evolve if the mutations aren't systematic, if they don't depend systematically on the environment. Only the selection is supposed to be systematic; that's the essence of the theory of evolution. Probability, randomness, is just a feature of the model, a convenience, so we can strike that out too: strike out the theory of evolution. By the way, Chiara has just published a paper about the constructor theory of biology, which among other things describes evolution without invoking randomness in any way. Of course, by all these strike-outs, I don't mean that the mathematical formalism of probability theory isn't sometimes useful.
I'm saying that the quantities called probabilities in that formalism do not refer to any stochastic, random processes in nature, nor to anything in rational minds such as degrees of belief or credence: nothing in physics or in minds thinking about physics. In information theory, probability was again originally used as a model for an even simpler thing, namely: this could be any message of n bits. Our communication system needs to be able to cope with any message of n bits, and we don't know which it's going to be. This was translated in the model to: all 2 to the n strings are equally likely. This has caused all sorts of confusion, such as people saying that a state contains maximum information when it has maximum entropy, which is nonsense, and so on. But that's another story. Anyway, we can strike it out. I'll just say in passing that constructor-theoretic information theory is also something that Chiara and I have recently developed, and it does fulfil all the hopes that we have for constructor theory in general, including not being subject to the kind of confusion about information that I just mentioned, and it unifies classical and quantum information. Our paper on that was published a while ago. So what's left? Quantum statistical mechanics at least doesn't need probability, since it has entanglement and decoherence and therefore can avail itself of what I've just described about quantum theory, the decision-theoretic approach. I should say that why the universe is in such a state as to make the laws of thermodynamics hold (for example, that it's uniform, that it's initially ordered and so on) is a substantive question, but it's not a probabilistic question. So we can strike that out too. And I'll leave striking out classical statistical mechanics as an exercise for the audience. Now, experimental error, that's an interesting case.
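The translation the model makes, from "any message of n bits may occur" to "all 2 to the n strings are equally likely", can be seen in one line of arithmetic: the uniform distribution over the 2^n strings has a Shannon entropy of exactly n bits. A small sketch (the function name is mine, not from the talk):

```python
from math import log2

def shannon_entropy_bits(probs):
    """H(p) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

n = 8
uniform = [1 / 2 ** n] * 2 ** n  # "all 2^n strings equally likely"
# Its entropy is exactly n bits: the "maximum entropy" state here just
# encodes the engineering requirement that any n-bit message may occur,
# not that the source is literally stochastic.
```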
I think it was historically the earliest application of probability after games of chance, and it has some interesting misconceptions in it in addition to probability, one of which is as follows. Error processes in experiments are traditionally categorized as random and systematic, but both of those are misleading terms. For simplicity, just imagine a measurement of a constant of nature such as the speed of light, and suppose that we are asked to give an estimate of the best error attainable with a given instrument, like Fizeau's wheel. Processes that were random with known probabilities would not be sources of error at all in such a case, since they could be reduced without limit just by repeating the experiment. So the important errors are the ones that affect experiments through processes whose governing laws are unknown. That is, the laws may be known, but how they affect the experiment is unknown. And those cause systematic errors; so, ironically, a systematic error is one that obeys no known system. So what does it mean to estimate an error caused by the unknown? The traditional move is to go into probability and subjective probability, but as I've said, that's all nonsense. So what can it mean? Well, suppose that a physical constant, call it chi, is to be measured, and that the bound for a given technology is claimed to be epsilon. So if little x is the best result obtainable with a particular instrument, then the first line says that the modulus of x minus chi is less than epsilon. And for simplicity, assume that the individual outcomes that we obtain when we do the experiment repeatedly, with different copies of the apparatus and so on, are x1 and x2; then x1 and x2 both obey that inequality as well.
And then, just from some algebra, it follows that the average of x1 and x2, just as if they were random errors, has a smaller error than either x1 or x2 separately, which is a contradiction, because epsilon was supposed to be the best error obtainable with that apparatus. So systematic experimental errors cannot be bounded by any known bound. Therefore, among other things, they can't be described by probabilities, nor can our knowledge about them. These unknowns in science are counterintuitive and often misunderstood, because we have been accustomed by Bayesianism and other subjective philosophies to replace real ignorance by fantasy probabilities. But we've just seen that neither physical probabilities nor probabilistic credences can enter the analysis of errors in a fundamental way. So what do we mean when we estimate the error in an experiment? Well, that turns out to be a big question with surprising answers, which there's no time to go into here; I hope to complete a paper on that quite soon. But I'll just briefly mention that what an error estimate really means is this: it's the amount such that, if the experimental result turned out to differ from the true value by more than that, when the true value is later discovered by some other method, it would make the theory of the apparatus problematic. So that's, in short, what an error means; but for present purposes we can just strike out error analysis there. Now, Brownian motion: that doesn't actually purport to be a fundamental theory, since although it has a stochastic law of motion, that's assumed to be an approximation to a more microscopic cause, such as impacts from molecules, that isn't explicitly treated by the theory of Brownian motion. So that theory is also related to the theory of errors, but in the approximation that the so-called random errors are swamping the systematic errors.
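The algebra behind that contradiction can be sketched as follows (my reconstruction, not the slide's; write d_i for the deviation of each result from the true value):

```latex
% d_i = x_i - \chi.  The claimed bound gives |d_1| < \epsilon and
% |d_2| < \epsilon, so for the averaged result:
\[
\Bigl\lvert \frac{x_1 + x_2}{2} - \chi \Bigr\rvert
  = \Bigl\lvert \frac{d_1 + d_2}{2} \Bigr\rvert
  \le \frac{\lvert d_1\rvert + \lvert d_2\rvert}{2} < \epsilon .
\]
% If the deviations behaved like independent random errors with known
% probabilities, the average would typically be strictly smaller than
% either result alone, and repetition would push the error below
% \epsilon without limit, contradicting \epsilon being the best error
% attainable with that apparatus.
```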
We've just seen that that's something that cannot be known in a particular case, but if the unknown systematic errors are too large then it's not Brownian motion, so we can strike that out. And with it we can also strike out the applications from high finance that are directly analogous to the theory of Brownian motion. Finally, since none of those other applications of probability now involve stochastic processes, they do not require credences either. It doesn't matter what scientists think or believe about whether a theory is true or false, so long as they proceed rationally. And so Bayesianism and the other subjective interpretations of probability have no remaining scientific function, and for these purposes they can be dropped too. Now, I hope I have shown you that probability doesn't make sense as a description or explanation of what really happens. It can be a metaphor, it can be a technique for calculation or an approximation in a certain sense, but an approximation, to make sense, has to be an approximation to something. So if probabilities are to inform decisions in some approximate way, there has to be an explanation rooted in a description of an actual physical world in which events and processes happen, not probably happen, and not just via some ad hoc axioms. So I hope I've also persuaded you that it's right and proper to try to expunge every trace of probability and randomness from the laws of physics, from our conception of the world, and from the methodology of science, so that we may fully restore realism as well as rationality. It's a simplification, a unification and an elimination of nonsense, and it's true. Now, I bet there are hard-headed instrumentalists in the audience who might be thinking: okay, so this simplification is all very nice and elegant, but since the principal uses of the mathematics of probability are largely unaffected, what really is the benefit of eliminating it at the fundamental level either?
Well, it's true, fundamental falsehoods don't always rear up and bite you. You could believe in a flat earth, as many people did for a long time, and that falsehood may never have affected your life or your thinking. On the other hand, it might have destroyed the entire human species, because belief in the flat-earth theory as a description of reality is incompatible with developing, say, technology to avert asteroid strikes. Similarly, a belief in probability in quantum theory may not prevent you from developing quantum computers and quantum algorithms in practice, but because probability and the Born rule entail fundamental misconceptions about the physical world, they could very well prevent you from developing the successors of quantum theory. And in particular, constructor theory is the framework in which I suspect successors to quantum theory will be developed. As I said, constructor theory is incompatible with physical probabilities. So that is my case, thank you. Just before we take questions, let me mention that there's a website, very easy to remember, constructortheory.org, all one word, where you'll find many of the things that David alluded to, papers and so forth. Okay, I saw one hand go up, so please. I'm sorry, but about this slot machine: I'm afraid that so far I'm walking out of here with no understanding of how to modify my way of thinking about the slot machine that you started with. If someone asked me what the funny numbers in the square roots in front of the eigenstates in the slot machine mean, I would say, well, what those funny numbers mean is that if I pull the handle lots and lots of times, then, in the limit that I do it forever, one third of the time I'm going to get this one result and the other two thirds of the time I'm going to get this other result. And if they say, well, which result will you get next? I don't know. I didn't understand how to modify my way of thinking about that based on the stuff you talked about.
Well, I'm not sure which version of quantum theory you're thinking in terms of, possibly a collapse theory, in which there is such a thing. I don't know. The way you should think about it is not in terms of the outcomes. The outcomes are an emergent property, in some complicated way. What those coefficients are is just descriptors of the state, the state that the thing prepares. So, for example, if it was a spin system, it would be prepared by rotating it in some way with magnetic fields, and those coefficients would depend on the angles through which it's been rotated. That's what it actually means. Now, as for what that causes in our subjective experience: do you need an interpretation of quantum theory for that? Interestingly, the right interpretation is the many-universes interpretation, but we don't really need to draw on that for these results. We just have to say that the motion is unitary. That's all: that the state exists physically and its motion is unitary. Other questions? I'd like to find out more about what you think about credences. I can go along with the idea that it's very weird to have them in a fundamental micro-physical theory, and if in the future we find some kind of theory which is deterministic, then we'll have to wrestle with what that means. But as a high-level concept, credences seem to be useful in some kind of Bayesian philosophy of science which isn't trying to talk about the fundamental physics itself. So, I observe that people would assign a credence, maybe, if they don't know exactly what the microstate of the real world is, and they have some idea that it's within this region, but they've got no reason to differentiate between the actual microstates. Having no reason. So, these are two slightly different ways of doing a subjective interpretation.
The first one, where you guess what the probabilities are in some theory, and it's going to be an approximation to some microscopic state that you don't know, is a legitimate way of approximating something. You have guessed what the probabilities are, and then you can test: if the theory says that they're something, you can test whether that's so. But when you say you don't know what the microstates are, then you're trying to derive knowledge from ignorance, and that is simply logically not valid, and it's also inconsistent with the probability calculus. It's only by sort of hand-waving, ignoring that fact, that one can apply Bayesian philosophy in words to things like "we don't know what the state is". If we don't know what something is, we don't know what it is. The standard refutation of that kind of subjective interpretation is this: if there are three possibilities A, B and C, and you don't know anything about which state it's in, then you might be asked to bet on whether it's A, or whether it's either B or C. If you say that, because you don't know which of those two possibilities is right, you must give them equal probabilities, then the same is true for the permuted versions, and that's inconsistent. It's logically inconsistent, so it can't be used for that. The situation is very analogous to the flat-earth theory. When you assume that your garden is a plane surface, you're not assuming the flat-earth theory; you're just assuming the mathematics of the flat-earth theory applied to this process, but there's no way in which you are assuming that the true thing is an approximation to the flat earth. It isn't. You've attacked the Principal Principle and replaced it with one rule: you invoke this function V as a valuation function. However, you also asserted that it had some properties, that it was linear in the eigenvalues of the observable. And you also asserted that the rational entity would... No, we're assuming a rational entity who has linear valuations.
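The A, B, C refutation just described can be checked with a few lines of arithmetic. A minimal sketch (the bets and variable names are mine, not from the talk): applying "equal probabilities to options I can't distinguish" to each of the permuted two-way bets gives every single possibility probability one half, and no genuine probability distribution can do that.

```python
from fractions import Fraction

# Three possibilities A, B, C, about which we claim total ignorance.
# For each permuted two-way bet, the indifference rule assigns the
# singled-out possibility probability 1/2:
#   bet 1: "A" vs "B or C"  ->  P(A) = 1/2
#   bet 2: "B" vs "A or C"  ->  P(B) = 1/2
#   bet 3: "C" vs "A or B"  ->  P(C) = 1/2
indifference_assignments = {
    "A": Fraction(1, 2),
    "B": Fraction(1, 2),
    "C": Fraction(1, 2),
}

# Any single distribution over {A, B, C} must sum to 1,
# but these assignments sum to 3/2: they are jointly inconsistent.
total = sum(indifference_assignments.values())
print(total)  # 3/2
```

The inconsistency is not in any one bet but in demanding that all three permuted assignments hold at once, which is exactly the point made above.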
So this same issue arises in classical game theory. As I said, a real game player is not the same as the idealised game player of game theory. A real game player has all sorts of other motivations. Otherwise, a real game player would never play a game of chance such as roulette, where the expectation value of your winnings is negative. So we assume this idealised game player in classical game theory, and we assume exactly the same player in the decision-theoretic approach to quantum theory. So it's not that we want to say why people play games; it's just this one particular aspect of game playing that involves probability, and it's that that is analogous between the two cases. Okay, but you still made an assertion about how this player would behave, right? So how is that any more elegant a tool than the Principal Principle? No, the assertion about how the player behaves is the same in both cases. We're analysing how a player who believes a particular thing will behave. We're not saying that any real player actually behaves like that; no real player does, exactly. I hope we can take another question. David, you were saying that you're eliminating the distinction between epistemological chance and ontological chance. Well, yes, I am. I'm saying that neither of them exists, but ontological chance can be a good approximation in some situations, and epistemological chance really should be dispensed with altogether. Okay, so there could be two kinds: one where we don't know enough about the system to be able to calculate the outcome. Yes. And another kind in which, even if we did know everything, we would not be able to compute the outcome, because the problem was not computable. Yes. Does that not create an effective ontological chance? Well, no, that's a particular situation, again, in which the probability calculus may be a good approximation.
It's rather like thinking about the distribution of the digits of pi: it's a meaningful approximation to say, well, they're going to be random. They're not really random, but the mathematics of them has enough in common with the mathematics of truly random sequences for it to be a good approximation for some purposes, though obviously not for all purposes. So the common feature is just that, whatever we call them, they're unpredictable, whether the chance is ontological or epistemological. Yes, unpredictability can be modelled by randomness sometimes. I think we've probably got time for one more question, and then I expect David is happy to hang around for a few minutes afterwards, for those of you who didn't get to ask your question. Who haven't I asked? Do I see one here? I just wanted to come back to something: you seem to be wanting to expunge probability from any fundamental physics. I think we all know that probability is philosophically problematic, but you seem to be wanting to do this by replacing it with decision-theoretic arguments, which are made with respect to some admittedly non-existent player of some game that is probably also not being played. Yes. So I'm really struggling to see how that is any more desirable than any of the other approaches to the admittedly philosophically problematic probability. Yes, so the main advantage is that it doesn't assert anything false about reality. But it's asserting that reality has something to do with this thing that we know doesn't exist. No, it's just saying that if... it's taking an idealised game, with idealised processes happening, both for randomising and for the players, and it's analysing how those players would behave if they conform to some elements of rationality that have nothing to do with probability. But that entire argument is counterfactual.
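The digits-of-pi example above can be made concrete. A sketch (Machin's arctangent formula computed with integer arithmetic; the function names and digit count are mine): the digits are fully determined, yet their frequencies look just like those of a uniformly random sequence.

```python
from collections import Counter

def pi_digits(n):
    """First n decimal digits of pi (including the leading 3), via
    Machin's formula pi = 16*arctan(1/5) - 4*arctan(1/239),
    using integer arithmetic with guard digits."""
    prec = n + 10  # guard digits absorb truncation error

    def arctan_inv(x):
        # arctan(1/x) * 10**prec by the alternating Taylor series
        scale = 10 ** prec
        power = scale // x
        total = power
        k = 1
        while power:
            power //= x * x
            term = power // (2 * k + 1)
            total += -term if k % 2 else term
            k += 1
        return total

    pi_scaled = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    return str(pi_scaled)[:n]

digits = pi_digits(1000)
freq = Counter(digits)
# Completely deterministic, yet each digit 0-9 turns up roughly
# 100 times in the first 1000, as it would for a random sequence.
```

The sequence passes simple frequency checks while being perfectly predictable in principle, which is the sense in which the probability calculus is only an approximation here.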
Yes, it's counterfactual, but then you need an argument to say that a real situation resembles that idealised situation in particular ways. So if I go into a casino to play roulette, obviously I'm not satisfying the axioms of game theory, because I'm playing a game which has a negative expectation value of winning. On the other hand, if I play poker and I'm good at it, and then I wonder whether to draw or not, or how much to bet, then I can use game theory, because and only because I guess that my real situation resembles the idealised situation in the relevant ways. But it's only as good as that. It's not a theory of reality. So it's based, as you said, on a guess about the correspondence between what you might be doing in certain situations and something that is counterfactual. Yes, exactly. But inside me right now there are carbon-14 atoms decaying. There is no one playing that game of chance, and there's no sensible way in which you can compare it to an idealised game of chance when talking about the length of time it will take an atom to decay. I think we'll just let David reply to that. Okay, well, you can. If you want to work out the half-life, then you simply have to imagine a situation in which these game players bet on how long it will take, and work out what the rational thing for them to do is, and you'll find that the rational thing for them to do is to bet on the exponential decay. And that's that one. So, if we could thank David.
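The closing answer about carbon-14 can be illustrated with a short simulation. A sketch under assumed details (the decay constant, time step, atom count and function name are mine): a memoryless per-step decay rule reproduces the exponential law, and the observed half-life comes out near ln 2 divided by the decay constant, which is what the idealised bettors would bet on.

```python
import math
import random

def simulated_half_life(decay_const, n_atoms=50_000, dt=0.01, seed=1):
    """Time for half of a population of memoryless 'atoms' to decay.
    Each surviving atom decays during a step of length dt with the
    same probability, independently of its history."""
    rng = random.Random(seed)
    p_decay = 1.0 - math.exp(-decay_const * dt)
    alive = n_atoms
    t = 0.0
    while alive > n_atoms // 2:
        decays = sum(rng.random() < p_decay for _ in range(alive))
        alive -= decays
        t += dt
    return t

# With decay constant 1, the half-life should come out near ln 2 ~ 0.693.
t_half = simulated_half_life(1.0)
```

No individual atom's decay time is predicted; only the population-level exponential law emerges, which is the sense in which the half-life is the rational bet.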