I'm very pleased to introduce Professor Samson Abramsky from the University of Oxford. Samson is a leading expert in several different areas of theoretical computer science, and today he will talk about fundamental aspects of quantum theory from a geometrical and logical perspective.

Okay, thank you very much, Olivia, and thanks to all the organizers for inviting me to speak at this meeting; it's a fascinating meeting. One theme I've picked up is that many of the talks relate topos theory to other areas: on Wednesday we heard about connections with wider areas of mathematics, yesterday in the talks of Daniel and Jean-Claude we heard about connections with neural networks, and today I want to talk about connections with some fundamental ideas in quantum mechanics. We all know, in some sense, that quantum mechanics implies a fundamentally non-classical picture of the physical world, and one of the clearest, sharpest ways in which this non-classicality is expressed is in the phenomena of non-locality and contextuality; the famous results of John Bell and the Kochen-Specker theorem are the best-known articulations of this. Now, the main message I want to convey in this talk, for the interest of this audience, is that the mathematical structure of contextuality (and of non-locality, which is a special case) is fundamentally sheaf-theoretic: it is fundamentally about the passage, or obstructions to the passage, from local to global. The non-existence of classical explanations for quantum phenomena corresponds precisely to the non-existence of certain global sections, and this leads to both logical and topological descriptions of these phenomena, very much in the spirit of topos theory. So we'll use, at various times more or less explicitly, a sheaf-theoretic language, and clearly the whole thing is in the scope of topos theory. A nice point about this is that it allows standard
constructions which witness these results, found in the physics literature under names such as Kochen-Specker paradoxes, Hardy paradoxes, and so on ('paradox' is a frequently used term in this area), to be visualized in a very direct way as discrete bundles. So there is a kind of topology here, and in fact the non-classicality appears exactly as a logical kind of twisting of these bundles, which is very directly related to classical logical paradoxes; but on the other hand there is also topology there, witnessed by the non-vanishing of cohomological sheaf invariants. This harmony between a logical point of view and a topological point of view is, I think, very much in the spirit of topos theory. At the same time it's also strongly connected with probabilistic ideas, and both quantitative and qualitative, discrete and continuous features arise naturally; again, as I was saying, it is very much in the spirit of topos theory to provide a home for all these different aspects.

[Excuse me, Samson, we just see a page with the title 'Overview'; is that normal?] Oh my goodness, no; I've been seeing text. Can you see my text now? [Yes, everything is there now.] Right, well, that was everything I just said. Do let me know if you're not seeing text; there should be text all the way through.

All right, so let's focus on quantum contextuality. I guess everyone's heard of Bell's theorem; I don't know how many people are familiar with it, but we'll see it very shortly. We know that quantum mechanics is weird, as is often said; it certainly doesn't conform to our classical picture of reality, and this has quite profound implications for our conception of what reality is, and also, in terms of applications, for the possibilities for information processing, and we'll see
something about that as well. So it has a very foundational aspect, but also potentially very important technological applications. So, what is contextuality? I like to encapsulate it in the following slogan: in a nutshell, contextuality is where we have a family of data which is locally consistent but globally inconsistent. You see immediately from this phrase that something about the passage from local to global is at stake, so it should be sheaf-theoretic, and as we'll see, it is sheaf-theoretic in nature. As a helpful initial analogy, suppose we were taking pictures of some building, and we had a collection of pictures; this is like our family of data. Maybe our camera isn't able to take a picture of the whole structure, but we can take pictures of parts of it, and we see that each part looks okay; even the places where they join up are consistent with each other. So this is a locally consistent family of images, and we would naturally think that these are all just parts of some coherent whole: there's an actual building out there. But when we try to put them all together, what we see is Escher's famous 'Ascending and Descending', the visual paradox of a staircase that is either always going up or always going down, depending on how you look at it. So here is our global inconsistency. Incidentally, as a side remark, this very figure was suggested to Escher by Roger Penrose, who together with his father studied visual paradoxes, and even associated a cohomology to these visual paradoxes, quite akin to what we'll be talking about later. Okay, now a very brief recap on quantum theory. I'm not going to delve into this, and I understand that most people in this audience are not coming from physics, but really this is just to reassure you that nothing more is needed than some linear algebra, because most of quantum information and computation theory, and indeed the foundational results relating to
non-locality and contextuality, takes place in finite-dimensional Hilbert space: finite numbers of qubits, or maybe qudits if we allow more alternatives. A qubit is just a two-dimensional complex vector space; that's the state space of a qubit. In general, a finite-dimensional Hilbert space is just C^n, which we can regard as a complex inner product space; operators are just complex matrices, and the adjoint of a matrix is its conjugate transpose, so we can understand all this very simply. Now, what is a state of a quantum system? In general it's a density matrix, which is a positive semi-definite, self-adjoint matrix of trace one; in particular, pure states are just rank-one projectors, so we can think of a pure state as a unit vector, or better, as the one-dimensional subspace that it generates. So when you prepare a quantum state, the representation is just this simple object of linear algebra. When we measure, projective measurements are described by self-adjoint matrices, and the idea is that the eigenspaces of the matrix in its spectral decomposition correspond to the possible outcomes of the measurement. The basic rule that lets us compute probabilities, and which gives the predictive content of quantum mechanics, is the Born rule. The Born rule says that for a system in state rho, the probability of getting the i-th possible outcome of the measurement, represented by the eigenspace with projector P_i, is just the trace of rho P_i. In the case of pure states and rank-one eigenspaces, this reduces to computing inner products of complex vectors, so there is just some simple linear algebra behind it. When I talk about the various observable behaviours we'll discuss being realized in quantum mechanics, I mean that we can find matrices of this kind which, under the Born rule, produce that observable probabilistic behaviour.
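As a minimal illustration of the Born rule, here is a sketch in plain Python (my own example, using only complex arithmetic, no quantum library): it computes tr(rho P_i) for a single-qubit pure state; the state |+> and the computational-basis projectors are chosen just for the example.

```python
from math import sqrt

# A minimal sketch of the Born rule for a single qubit, using plain
# Python complex/float arithmetic.

def outer(v):
    """Rank-one projector |v><v| for a unit vector v."""
    return [[v[i] * v[j].conjugate() for j in range(len(v))] for i in range(len(v))]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def trace(a):
    return sum(a[i][i] for i in range(len(a)))

def born_probability(rho, projector):
    """Born rule: Pr(outcome i) = tr(rho P_i)."""
    return trace(matmul(rho, projector)).real

# Pure state |+> = (|0> + |1>)/sqrt(2), measured in the computational basis.
plus = [1 / sqrt(2), 1 / sqrt(2)]
rho = outer(plus)                 # density matrix of the pure state
P0 = outer([1, 0])                # projector onto |0>
P1 = outer([0, 1])                # projector onto |1>

print(born_probability(rho, P0))  # 0.5 up to rounding
print(born_probability(rho, P1))  # 0.5 up to rounding
```

For a pure state and a rank-one projector this is exactly the squared inner product |<phi|psi>|^2 mentioned above.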
Okay, I'm now going to come straight to a concrete example; in fact quite soon we'll be proving some version of Bell's theorem. The usual way one talks about this in physics is: you prepare a state somewhere (remember, a state is represented, as we just said, by one of these density matrices); we have measurement devices, Alice's measurement device and Bob's measurement device; each chooses some measurement, performs it on their part of the state that's been prepared, and observes some outcome. But rather than talking in these terms, there's another language, very popular in quantum computation, where we can strip away the physics-speak and talk about a certain kind of game: a non-local game, or an Alice-Bob game. This is mathematically isomorphic to the experimental setup we were just looking at. In this game we have two players, Alice and Bob, who play cooperatively against a verifier, but with the constraint that Alice and Bob can't communicate while the game is being played. That's the significance of the wall between them: they're put in different rooms, we take away their mobile phones, we close the rooms in Faraday cages, so there's no communication between them; of course, ultimately they should be space-like separated, so that there's no time for light to pass between them during a round of the game. The verifier supplies Alice and Bob with inputs, and they have to return outputs; if the outputs meet a certain winning condition, then they've succeeded in winning the game. So really the idea is to see what kind of coordinated behaviour Alice and Bob can achieve, given that each doesn't know which input the other has received. The most famous example, which is very closely related to the usual Bell's theorem setting, is the XOR
game. So, as we just said, the verifier chooses an input for each of Alice and Bob; we're going to assume a uniform distribution for the verifier's choices of inputs, for convenience. Alice and Bob each have to choose an output, and as we said, they're not allowed to communicate. The winning condition in this XOR game is a funny-looking thing: it says that the exclusive or of the outputs should equal the conjunction of the inputs (these are all Booleans, 0 or 1, and exclusive or is just addition modulo 2). What does this mean? If you think about it, the conjunction is only true if both x is 1 and y is 1, and in the three other cases it's 0, so in those three cases the exclusive or has to be 0 as well, which can only happen if a and b have the same value (think of addition modulo 2: 0 plus 0 is 0, 1 plus 1 is 0). So in the first three cases the outputs have to be the same: they have to be correlated. In the final case, where both x and y are 1, the exclusive or has to be 1, which can only mean that a and b have different values: they're anti-correlated. So the probabilities of the winning outcomes given the various inputs are given by this expression here. A probabilistic strategy is just given by a table of conditional probabilities like this, and if we assume a uniform distribution over the choices of inputs, then the success probability for the strategy is just given by this expression. So that's the XOR game. How well can we play this game? Well, here is a strategy for the game: a table of sixteen numbers, where each row is one of these conditional probability distributions, conditioned on the choice of inputs to Alice and Bob.
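The success-probability expression can be sketched directly in Python. The quantum-realizable table entries below are taken from the standard example in the literature (an assumption on my part; the slide shows a table of this kind), and the "dumb" classical strategy is the one discussed shortly.

```python
from fractions import Fraction as F

# Success probability for the XOR game, assuming the verifier picks
# inputs (x, y) uniformly.  A strategy is a table
#   p[(x, y)][(a, b)] = Pr(Alice outputs a, Bob outputs b | inputs x, y),
# and the winning condition is  a XOR b == x AND y.

def success_probability(p):
    total = F(0)
    for x in (0, 1):
        for y in (0, 1):
            for a in (0, 1):
                for b in (0, 1):
                    if (a ^ b) == (x & y):
                        total += F(1, 4) * p[(x, y)][(a, b)]
    return total

# Dumb classical strategy: both always output 0.
always_zero = {(x, y): {(a, b): F(1) if (a, b) == (0, 0) else F(0)
                        for a in (0, 1) for b in (0, 1)}
               for x in (0, 1) for y in (0, 1)}
print(success_probability(always_zero))   # 3/4

# The quantum-realizable table (entries assumed from the standard example).
rows = {(0, 0): [F(1, 2), F(0), F(0), F(1, 2)],
        (0, 1): [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
        (1, 0): [F(3, 8), F(1, 8), F(1, 8), F(3, 8)],
        (1, 1): [F(1, 8), F(3, 8), F(3, 8), F(1, 8)]}
bell = {xy: dict(zip([(0, 0), (0, 1), (1, 0), (1, 1)], row))
        for xy, row in rows.items()}
print(success_probability(bell))          # 13/16 = 0.8125
```

Exact rational arithmetic with `Fraction` avoids any floating-point quibbles about the comparison with the classical bound.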
Here we see the probabilities with which the various outcomes are obtained; we allow Alice and Bob to use randomization as they wish, subject only to the constraint that they can't communicate during the playing of the game. What the entry highlighted here means, for example, is: if the verifier sends Alice a 0 and Bob a 1, then with probability 1/8 Alice outputs a 0 and Bob outputs a 1. I hope that is clear. So we see a table of sixteen numbers; what can we learn from it? We'll see a bit later that there's a presheaf here, and something of that kind of structure, but what we see immediately is this. Take the expression we looked at for the winning conditions: we're looking at the correlated outcomes in the first three rows (the outside columns) and the anti-correlated outcomes in the final row (the two inner columns). If we sum all those probabilities and divide by four to normalize, we get a winning probability of 0.81. Is this big news? Yes, it is big news, or at least, if we believe this thing can exist in the world, it is big news, because classically the optimal probability is three quarters. If the world behaves as it is supposed to behave according to classical physics and classical probability, then this strategy couldn't exist. Where do I get this three quarters from? Well, in one direction it's very easy to see that we can at least achieve a probability of three quarters. For example, suppose Alice and Bob decide beforehand that they will both always output a 0, or that they will both always output a 1; either choice is good. Then you see that in three of the cases they win, because if the verifier sends them any of the first three cases, i.e.
not both one, then the coordinated response, the correlated outcomes, are actually winning outcomes. So if they put all their weight on those outcomes (just a column of ones here, and everything else zero), then three quarters of the time they win. That's a very dumb strategy achieving a winning probability of three quarters. That this is not only what you can do with a dumb strategy, but all you can do with classical means, is less obvious; and this is essentially expressing the idea of Bell inequalities, which is the fundamental idea behind Bell's theorem, a fundamental method in the whole of quantum foundations and quantum information, and the subject of many experiments and of the emerging quantum technologies. So this is a very fundamental idea: it gives an actual limit to what can be achieved by classical means, and this strategy exceeds it. The table I just showed you exceeds this bound, and moreover it's quantum realizable, meaning that we can find a quantum state, described by an entangled pair of qubits, and appropriate quantum measurements (we just find these matrices and use the Born rule to compute what they do), which give exactly this table of numbers. So quantum mechanics predicts that we can construct such a strategy; as we'll shortly see, this is confirmed by experiment, and such phenomena are already being woven into quantum cryptography and other kinds of quantum information processing scenarios. So this is a clear case of using quantum resources to yield a quantum advantage in an information processing task. Okay, so let's discuss Bell inequalities, and this takes us to a pioneer of both probability and logic, namely George Boole. Boole had a beautiful paper in the middle of the 1850s, still very much worth reading, and as was observed particularly by Itamar
Pitowsky, what Boole was discussing there is essentially the subject of Bell inequalities, although of course he had a different motivation, just looking at a fundamental question in probability theory. The problem is: we're given rational numbers which indicate relative frequencies of events. If no logical relations obtain among the events, then the only constraints imposed are that each number be non-negative and at most one; but if the events are logically interconnected, there are further equalities or inequalities that obtain among the numbers. The problem is thus to determine the numerical relations among frequencies, in terms of equalities and inequalities, which are induced by a given set of logical relations among the events; these Boole called 'conditions of possible experience'. More formally, we're given some basic events, which we can think of as variables, and some compound events given by Boolean functions, which can be described by propositional formulae; we're given probabilities of the basic events, and we're asking what numerical relationships between the probabilities we can infer from the logical relationships between the events. So this is already a beautiful interplay between logical notions and probabilistic notions, and let's make a very simple observation which ends up giving a rather complete answer to this question. We're given some propositional formulas, and probabilities for these; this fits exactly into the picture we were just discussing. To have some logical connection, suppose the formulas are not simultaneously satisfiable: they can't all be made true (these are just propositional formulas); in other words, any n-1 of them must imply the negation of the remaining one. Now, using elementary probability theory, we can extract something from this. We see that the probability of the n-th event, described by this formula, because of this implication and the monotonicity of
probability, must be less than or equal to the probability of the corresponding disjunction. Then, by what is fittingly called Boole's inequality (the first of the Bonferroni inequalities; no disjointness is needed, just the fact that the probability of a union is less than or equal to the sum of the probabilities), we get this inequality; the probability of a complement is one minus the probability; we collect terms, and we end up with the conclusion that, under this assumption that the formulas are not simultaneously satisfiable, the sum of the probabilities must be bounded by n-1. So this is a very simple derivation, and we can immediately apply it to the Bell table. Here is the winning condition, the winning responses of our XOR game, highlighted, and we have the probabilities weighting those outcomes in our table. So we form events that correspond to the winning conditions: remember, this was that on the first three rows the outcomes are correlated, and on the final row they're anti-correlated, which we can describe in logical terms by these formulae. It's very easy to see that these are not simultaneously satisfiable; taken all together they're contradictory, since starting from a2 we can replace it by b1, replace b1 by a1, and a1 by b2, and then 'b2 exclusive-or b2' can never be true. So they're not simultaneously satisfiable, our little proposition from the previous slide must apply, and therefore the sum of these highlighted numbers should be less than or equal to three (we have four events, so n-1 is three). But we see that in fact we get a violation of this Bell inequality, by a quarter. Now, this isn't a contradiction in mathematics, so there must be an assumption that I've made, and it's nice to think about what it is; if you look carefully, you will see that I have tacitly supposed that all these individual probabilities come from a single joint distribution.
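The two halves of this argument can be checked mechanically. The brute-force Python sketch below verifies that the four winning-condition formulas admit at most three simultaneously true members under any global assignment, and that the corresponding event probabilities (taken, as an assumption, from the standard table) sum to 13/4, violating the bound n-1 = 3 by a quarter.

```python
from fractions import Fraction as F
from itertools import product

# The four winning conditions of the XOR game as Boolean formulas over a
# hypothetical global assignment to (a1, a2, b1, b2):
formulas = [
    lambda a1, a2, b1, b2: a1 == b1,   # correlated
    lambda a1, a2, b1, b2: a1 == b2,   # correlated
    lambda a1, a2, b1, b2: a2 == b1,   # correlated
    lambda a1, a2, b1, b2: a2 != b2,   # anti-correlated (the XOR row)
]

# No global assignment satisfies all four, but three are achievable:
best = max(sum(f(*v) for f in formulas) for v in product((0, 1), repeat=4))
print(best)         # 3 -> the family is 3-consistent out of 4

# Probabilities of the four events in the quantum table
# (standard entries assumed): 1, 3/4, 3/4, 3/4.
probs = [F(1), F(3, 4), F(3, 4), F(3, 4)]
print(sum(probs))   # 13/4 = 3.25 > 3: the logical Bell inequality is violated
```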
That is really the classical assumption: that there's a global event space where everything lives, and everything objectively happens or doesn't happen, regardless of which variables we choose to look at. Under that assumption this reasoning goes through perfectly and would indeed give us the bound, and then we'd see that this table is not consistent with those assumptions: we get a violation of this logical Bell inequality. In fact, this simple derivation has a natural generalization. Given a family of n propositions, we say it's k-consistent if the size of the largest consistent subfamily is k; and for a k-consistent family you can easily show, in the same kind of way, that whatever the joint distribution, the sum of the probabilities of these events is bounded by k. But the interesting point is that all Bell inequalities arise in this way. The result is that a rational inequality is satisfied by all non-contextual models (I'm not explaining all the terms, but this means all setups of this kind, in very considerable generality) if and only if it is equivalent to a logical Bell inequality of the above form. In other words, logic is controlling what can happen in terms of probabilistic behaviour in this rather nice way. So I think this does give a nice logical answer to Boole's problem; in fact Pitowsky had suggested that there could be such a result, and I think this does give a nice answer. It also tells us that if Boole was delimiting the conditions of possible experience, then in quantum mechanics we have conditions of impossible experience; there's a lot one can say about that, but of
course the experience is impossible only in a certain sense, because what's important in understanding what's going on here is that all we can observe on single plays of the game are these subsets of variables; we cannot observe all the variables together. The idea that there must be something true of all the variables at any point is the classical assumption, and you can see that this is going to lead us inevitably to something about the non-existence of global sections. Okay, so: is this all science fiction? Well, quantum theory is certainly verified and confirmed in countless experiments and in manifestations of our science and technology every day, but these specific predictions have their own history of experimental tests, going back to the picture of preparing states, measuring them, and so on, and this culminated just a few years ago in the first widely agreed loophole-free Bell tests, where sufficient space-like separation was achieved between the Alice and Bob measuring stations, and sufficient detector efficiencies were achieved, so that there was a sound basis for these results. Here is a picture, if you don't recognize him, of John Bell, who did the pioneering work in 1964, and a review article by Alain Aspect, who was really the pioneer in conducting Bell experiments, going back to the 1980s, which surveyed the more recent teams (actually three teams) that did this work, which, as he says, 'closed the door' on this quantum debate. Here you see a schematic where you have a source emitting entangled pairs of photons; Alice and Bob now become detectors performing measurements, and we're looking at coincidences, correlated or anti-correlated, as we were saying. This has been done in various experimental setups by several teams. So here's a sort of
timeline. The formalization (in the finite-dimensional case just linear algebra, and then of course with more general Hilbert spaces and operators and so forth) goes back to von Neumann, even to the late 1920s, and we've been using it ever since; the EPR paradox already in the 1930s; Bell's theorem in the 1960s; the first experimental tests, still subject to experimental imperfections, in the 1980s; the beginnings of quantum cryptography and of quantum computing, also in the 1980s; Shor's algorithm for factoring in the 90s; the first loophole-free Bell tests by these three teams, in Delft, at NIST in the United States, and in Vienna, in 2015; quantum supremacy claimed by Google in 2019; and various other quantum-advantage results and an emerging quantum computing and technology industry right now. Okay, so let's be a bit more formal and a bit more general, just to give an idea of how we can connect to the sheaf-theoretic language. First, at the level of types, we set up the structure of a measurement scenario (I'm just going to look at finite things here): we have a finite set of measurements, and then a simplicial complex whose faces are called the measurement contexts. The key point, as we already saw implicitly, is that not all variables can be measured together; only certain variables can be measured together. In quantum mechanics this is because in general you have incompatible measurements, non-commuting observables. Those that can be measured together are collected in the faces of a simplicial complex, and then we have a finite set of possible outcomes for each measurement. So this is just the structure of a measurement scenario: it says what kinds of variables you have and which of them can be measured together, and that's all. But then an actual behaviour in such a setting, which we can think of in
terms of preparing some state and performing some measurements, is given by what we call an empirical model: a family, indexed by the faces of the simplicial complex, of probability distributions over the joint outcomes. These are the rows of our table, as we saw earlier; the simplicial complex here is just a graph, where we have these binary contexts, and as we'll see later, we can visualize this in a very nice way as a kind of discrete bundle. Let me just say that a key underlying assumption here, which is really enforcing the idea that our choice of context is not itself information that influences the outcome, is that marginals are well defined: if you have a smaller set of variables, then whether you marginalize from one larger face or from another, you get the same result. This is a physical principle, generalizing no-signalling in the context of Bell scenarios, which relates to relativistic constraints; in contextuality it's called the no-disturbance condition. Then we say that an empirical model, as we just defined it, is non-contextual if there is a joint distribution over the whole set of variables which marginalizes to give all these observable behaviours; that is, we can glue all the local information together (the things we can directly observe by performing measurements) into a joint distribution from which all the information can be recovered. We call such a thing a global section, and once we put it in categorical, sheaf language, it literally is a global section, as we'll see in a moment. If no such global section exists, the empirical model is contextual. This makes more precise our previous slogan, that contextuality arises where we have a family of data which is locally consistent but globally inconsistent.
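The no-disturbance condition can be checked directly on a table. This Python sketch (table entries assumed from the standard example) verifies that Alice's marginal distribution for each of her measurements is the same whichever of Bob's contexts we marginalize from.

```python
from fractions import Fraction as F

# No-signalling / no-disturbance check for a Bell table:
# Alice's marginal for a given measurement must not depend on which
# measurement Bob performs.  Entries assumed from the standard example.
table = {(0, 0): {(0, 0): F(1, 2), (0, 1): F(0), (1, 0): F(0), (1, 1): F(1, 2)},
         (0, 1): {(0, 0): F(3, 8), (0, 1): F(1, 8), (1, 0): F(1, 8), (1, 1): F(3, 8)},
         (1, 0): {(0, 0): F(3, 8), (0, 1): F(1, 8), (1, 0): F(1, 8), (1, 1): F(3, 8)},
         (1, 1): {(0, 0): F(1, 8), (0, 1): F(3, 8), (1, 0): F(3, 8), (1, 1): F(1, 8)}}

def alice_marginal(x, y):
    """Distribution of Alice's outcome a, marginalized in context (x, y)."""
    return [sum(p for (a, b), p in table[(x, y)].items() if a == out)
            for out in (0, 1)]

for x in (0, 1):
    # Marginalizing from context (x, 0) or (x, 1) must give the same answer.
    assert alice_marginal(x, 0) == alice_marginal(x, 1)
print("no-signalling holds")
```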
The import of Bell's theorem and similar results is that there are empirical models, observable behaviours, indeed experimentally testable behaviours, arising from quantum mechanics, which are contextual. Just to give a more categorical formulation: given one of these scenarios, with its simplicial complex, we can define a presheaf. Firstly, we have the event sheaf, which simply collects, for each context (each simplex in the complex), the local sections giving an outcome for each of the variables in that context; then we compose that with the discrete distributions monad on the category of sets. (This is just to keep things simple: the discrete distributions monad is the reduction to the discrete case of the Giry monad, if you happen to have encountered it, which goes back to ideas of Lawvere and is a basic way of categorifying probability theory; it's a very useful monad that comes up in many applications.) So this is a presheaf, and restriction for this presheaf is exactly marginalization: restriction in the event sheaf is obviously, trivially, just projection, and then from the functorial action of the monad we recover marginalization as the restriction operation. An empirical model, with its built-in no-signalling condition, this physically significant condition, is then exactly a compatible family of sections for this presheaf. So everything fits very neatly into a categorical formulation. There is also a lot of topology at work here, even in this finite setting, but I'd like to first show you this in a visual form, and then say a little bit about cohomological invariants.
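The way the functorial action turns projection into marginalization can be sketched in a few lines of Python: a distribution is a dict from elements to probabilities, and the pushforward below is the action of the discrete distributions monad on a function.

```python
from fractions import Fraction as F

# Sketch of the discrete distributions monad D and its functorial action:
# a distribution is {element: probability}; D(f) pushes a distribution
# forward along f.  Restriction in the presheaf is D applied to the
# projection of outcome tuples, i.e. marginalization.

def pushforward(f, dist):
    out = {}
    for x, p in dist.items():
        out[f(x)] = out.get(f(x), F(0)) + p
    return out

# Local sections for the context {a1, b1}, as pairs (value of a1, value of b1),
# with the distribution from the first row of the Bell table.
d = {(0, 0): F(1, 2), (0, 1): F(0), (1, 0): F(0), (1, 1): F(1, 2)}

# Restriction to the sub-context {a1} is D(projection):
marginal = pushforward(lambda s: s[0], d)
print(marginal)   # {0: Fraction(1, 2), 1: Fraction(1, 2)}
```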
So here is the way we portray these simple scenarios, like the Alice-Bob game or Bell scenario we were looking at earlier, as a kind of bundle structure. We have a base with the variables, and we put an edge between two variables if they can occur in the same context, that is, if they can be measured together: Alice can measure any of her variables together with any of Bob's, and vice versa, but Alice can't measure both of her variables at the same time, and Bob can't measure both of his, so that's why we have the edges that we do. Above each variable in the base we have a fibre of the possible outcomes; in our case these were binary outcomes, so we have these little fibres sitting above the variables in the base. As for the possible local sections, here I'm abstracting away from the probabilities, keeping just those events in the support of the distribution, with positive probability, that could actually happen; so we have edges that indicate those observable events that could actually happen. A global section here would be a path that goes around all the fibres in a consistent way, a closed curve that comes back and assigns a unique value to every variable globally. So that's what a global section is. Okay, so now I'm going to look at another famous example from the quantum literature, the Hardy paradox as it's often called, and the important point is that the contextuality here can be seen at the logical level, without even invoking probabilities: we only need to distinguish between what gets probability zero and what gets positive probability. So I put ticks where there's a positive probability (if you like, it's possible) and crosses for those with probability zero (impossible). So we have a
base as before; it's the same scenario, the same shape, as the Bell test, but a different kind of model. As we said, these are the compatible observables, those are the fibres, and you see that on the first row everything is possible, so we put in all these edges; on the second row only three things are possible, so we put those three edges in; and similarly for the other rows. Now, the point is that there are some global sections in this table. Here's one: it's consistent with what lives in each fibre (the edges have to be things we allow as possible events). So there are some global sections. But suppose we took a different choice of local section here; the question is, can we extend it to a global section? In this case you see we have a problem: we can go forward, and forward, and forward again, but the only choice we have doesn't take us back to where we started. There is no closed curve of this kind, and that corresponds exactly to the fact that there's no global possible event which could account for this observable local event. This is already the signature of contextuality. In fact, we can distinguish in this way different strengths of contextuality: there's the basic probabilistic version we saw earlier; there's what we call logical contextuality, illustrated by Hardy, where there are some global sections but some local sections that can't be extended; and the extreme case, which we call strong contextuality, is illustrated by this discrete Möbius strip, as it were. This discrete Möbius strip corresponds exactly to a famous construction in quantum foundations and quantum information, namely the Popescu-Rohrlich box. Interestingly, this is something that can't be realized in quantum mechanics; it's even more weird than quantum mechanics allows. But on the other hand there are quantum-realizable phenomena, a little bit harder to draw, which exhibit the same strength of contextuality.
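These strengths of contextuality can be tested by brute force at the possibilistic level. In the Python sketch below, the supports for the Hardy model are one standard presentation (an assumption on my part; the slide's ticks and crosses may be arranged differently), and the PR box supports are as just described: correlated in three contexts, anti-correlated in the fourth.

```python
from itertools import product

# Brute-force search for global sections of a possibilistic model:
# a global section is an assignment (a1, a2, b1, b2) whose restriction
# to every context lies in that context's support.

def global_sections(support):
    out = []
    for a1, a2, b1, b2 in product((0, 1), repeat=4):
        row = {('a1', 'b1'): (a1, b1), ('a1', 'b2'): (a1, b2),
               ('a2', 'b1'): (a2, b1), ('a2', 'b2'): (a2, b2)}
        if all(row[c] in support[c] for c in support):
            out.append((a1, a2, b1, b2))
    return out

# One standard presentation of the Hardy supports (assumed here):
hardy = {('a1', 'b1'): {(0, 0), (0, 1), (1, 0), (1, 1)},
         ('a1', 'b2'): {(0, 1), (1, 0), (1, 1)},
         ('a2', 'b1'): {(0, 1), (1, 0), (1, 1)},
         ('a2', 'b2'): {(0, 0), (0, 1), (1, 0)}}

sections = global_sections(hardy)
print(len(sections) > 0)       # True: some global sections exist...
# ...but none extends the local section (a1, b1) = (0, 0):
print(any(s[0] == 0 and s[2] == 0 for s in sections))   # False: logical contextuality

# The PR box: correlated in three contexts, anti-correlated in the fourth.
pr_box = {('a1', 'b1'): {(0, 0), (1, 1)},
          ('a1', 'b2'): {(0, 0), (1, 1)},
          ('a2', 'b1'): {(0, 0), (1, 1)},
          ('a2', 'b2'): {(0, 1), (1, 0)}}
print(global_sections(pr_box))  # []: no global section at all, strong contextuality
```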
exhibit the same strength of contextuality. The point here is that wherever you start, you can never go around and get back to where you came from; you have to go around twice. There is no univocal, unique assignment of values that is consistent and extends any local section whatsoever. So this is a Möbius strip. It's related to a lot of things; let's see how it relates to the Bell table, the XOR game, which was essentially the subject of the Bell test, because those experimental Bell tests were exactly confirming this kind of setup. The winning positions in that game, if we focus just on those, give us exactly the nodes we see in this Möbius strip. So that is what we were doing all along, and remember that the basis of our derivation of the Bell inequality was that the corresponding propositions were logically inconsistent. So we already see the elements of a beautiful connection between physics, probability, logic and topology.

Let's say a little more about logic. There's the famous liar paradox, the sentence that says "I am false", or "this sentence is false". By extension we can take liar cycles, the sort of thing that logicians study, where we have a sequence of sentences, each saying that the next one is true, except the last, which says that the first one is false. These liar cycles can be modelled by systems of equations, and the point about these cycles is that each statement involves a subset of the variables; each statement individually, and in fact any n − 1 of them, are consistent, but if you take all of them you get an inconsistency. In fact, up to rearrangement, the liar cycle of length four corresponds exactly to the PR box, and the usual reasoning that derives a contradiction from the liar cycle corresponds precisely to the attempt to find a univocal path in the bundle diagram. Of course this can all be discussed much more generally, but I think it already shows the connection vividly. For those of you familiar with logic, one of the famous results of first-order logic is Robinson's joint consistency theorem, which is equivalent to the Craig interpolation lemma. It says that if you have theories over different languages which are consistent modulo their common sublanguage, then the union of the theories is consistent; you can turn this around to get Craig interpolation. So it says that two compatible theories can be glued together, compatible in the sense that they agree, or at least don't disagree, on their overlap. In this binary case, local consistency implies global consistency. What you will never see in any logic book is something that goes beyond the binary case, and for a very good reason: beyond the binary case it fails. You get a minimal counterexample, even propositionally, by taking these three theories, which are locally, that is pairwise, consistent but jointly inconsistent. And this is again something that occurs famously in the quantum foundations literature: it is the Specker triangle, from his 1960 paper preceding the famous paper with Simon Kochen.

Now a little bit about the cohomological characterization. There is topology here, and we can witness contextuality by cohomology. I'm not going to go into details, but it's really fairly simple: we are just witnessing the non-existence of global sections. This is meant to be a support presheaf, and we take a relative cohomology, so we focus attention on one context; this is like starting with one of the local sections and seeing what happens. We can assign to each local section an element of the
first cohomology group of the Čech cohomology, which we work with because it's easy to compute with. In fact γ is really just the connecting homomorphism of the long exact sequence. There are some choices here, and these are exactly the non-essential choices you make in proving the snake lemma. The basic results are then as you would expect: the cohomology obstruction vanishing is equivalent to there being a family which extends the given local section. So if the model is extendable, in the sense we were just describing, that from any local section you can always pass to a global one extending it, then the obstruction vanishes. What this is saying is that non-vanishing of the obstruction, the quantity we just defined in the first cohomology group, provides a cohomological witness for contextuality.

Now this sounds perfect, but of course the drawback is that to get going we needed to abelianize, taking not the supports themselves but the free module they generate over the integers. Because of negative coefficients in cochains, weird things we don't really care about, there are false positives. Nevertheless we can effectively compute witnesses for contextuality in many of the cases in the literature, and in cases where the outcomes themselves carry a module structure we obtain very general results, the so-called All-vs-Nothing proofs, which actually account for most of the contextuality arguments in the quantum literature. In particular we can find large classes of concrete examples in stabilizer quantum mechanics, where we have a complete characterization of contextuality. Now, there were counterexamples, but my student Giovanni Carù, in beautiful work in his thesis and in a paper I'll mention at the end, gives a refined cohomological criterion which covers the vast majority of cases, kills all known counterexamples, and is in fact conjectured to be complete. Following our work, Robert Raussendorf, a leading figure in quantum information, and Cihan Okay have developed a related cohomological treatment of contextuality, and my student Sivert Aasnæss has shown that their work also falls under the scope of these sheaf-cohomology invariants. Cihan Okay in particular has done a lot of work applying topological ideas to contextuality and quantum computation. Okay, I hope I still have ten minutes left, is that right, Olivia? Yes? Thank you very much.

So that's something about cohomology; I'll give some references at the end. Now I want to turn to a different aspect, some geometry; this is a multifaceted subject, and another facet is convex geometry, which is also very rich here. We think of these probability tables as vectors, just vectors of real numbers, of probabilities; lay the table out as a vector in a high-dimensional R^n. What we find is that the spaces of these probability models, under various assumptions, form natural convex bodies in this Euclidean space. In particular, because we have Bell inequalities, linear inequalities which bound all the possibilities for the non-contextual models, those models form a polytope. On the other hand, if we only impose the no-signalling conditions, the marginalization conditions, those are also linear constraints, linear equations together with non-negativity inequalities, so they also form a polytope. And these things can in principle all be found by
linear programming, although of course the linear programs get big very rapidly. Sitting in between these two, the non-contextual polytope and the no-signalling polytope, we have the quantum set: those things that are quantum realizable, using quantum states and quantum measurements and computing probabilities via the Born rule. As we mentioned, it's a convex set, but it is very definitely not a polytope. And we can relate the hierarchy of contextuality we've mentioned to the geometry of these polytopes. Obviously the interesting things from our point of view are those that lie outside the non-contextual polytope, the ones that violate some Bell inequalities; the whole space of contextual models lies in that region, but the most interesting for us are the quantum-realizable ones, which lie in the quantum body. Strong contextuality in general lies at the vertices, or at least on faces of the polytope containing only contextual models, in lower-dimensional subspaces, and the logically contextual things, like the Hardy paradox, also lie in faces of this polytope. So we get a strict hierarchy, as we were saying, of these different kinds of contextuality, and this classification in turn feeds into things we would like to do in relation to quantum advantage.

As an interlude, I can't resist mentioning that although you may say everything here is finite, so it can't be that hard, there is a huge amount of complexity here. Consider the question: given a finite probability table of the kind we've been showing, is there a quantum realization, a quantum state and measurements which give rise to it via the Born rule? If we fix the dimension of the Hilbert space, this reduces to the existential theory of real closed fields. It's a finite search problem, although we're searching for complex numbers; but it's a fixed number of complex numbers, which really means a fixed number of real numbers, and that isn't so bad, because we know from Tarski, and then, since we only need existential queries, from later results, that this is a decidable theory, in fact decidable in PSPACE. So at least it's decidable, albeit of high complexity. But if we ask for a realization in a finite-dimensional Hilbert space of unbounded dimension, so we're no longer bounding the size of the matrices we need to find, then the problem is undecidable. Moreover, rather fascinatingly, there are finite tables which are realizable in infinite-dimensional Hilbert space but not in any finite-dimensional Hilbert space. This comes from beautiful results of William Slofstra, published quite recently but going back a few years earlier; he gives a beautiful reduction to computational problems in group theory, and one can use the Higman group to show this latter fact, for example. Even more spectacularly, we have the recent result with a great title, if you're of that taste: MIP* = RE, due to Ji, Natarajan, Vidick, Wright and Yuen, which is simultaneously a major result in complexity theory, quantum foundations and mathematics. We have this interactive-prover paradigm in complexity, which is famously equivalent to PSPACE, and even if you give the prover quantum resources, as long as you only have one prover, it's still PSPACE by a famous previous result. But if you allow multiple quantum provers sharing entanglement, then all semi-decidable problems can be represented, for example the halting problem, provability of statements in Peano arithmetic, and so on; it already becomes really strong. As a consequence it refutes the Tsirelson conjecture, showing in a way that tensor products, at least in infinite dimensions, are not
fully general: if you have two commuting subalgebras of operators, where everything in one subalgebra commutes with everything in the other, then in finite dimensions you can always represent this on a tensor product, but in infinite dimensions this is no longer true. That is essentially the content of the Tsirelson conjecture; and the very famous conjecture of Alain Connes from the 1970s is also refuted by this result, the connection between it and the Tsirelson conjecture having already been established. So there are some very deep and remarkable phenomena lurking in these settings already. Okay, I'm running out of time, so I'll just say that this does reach into issues of quantum advantage: using the tools we've been mentioning to derive general results about where you can do something using quantum resources that you provably can't do using classical resources. This is an emerging technology which, it's pretty clear, is going to have a major impact on our scientific and technological world and beyond, and we have remarkable examples but no general theory; so we are beginning to develop some elements of one, or at least that's the aim. Here we have a way of measuring contextuality, and for a large class of problems we have an inequality showing that to get a better-performing algorithm you need more contextuality, and in various cases you need some of the stronger forms of contextuality we were mentioning. A major current topic of interest is where the line in the sand can be drawn separating quantum advantage, using these non-classical phenomena, from what is efficiently classically simulable, and there have been many surprises in both directions. A very promising recent line of work is in shallow circuits.
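[Editorial note: since the Bell-inequality violation keeps recurring here, a minimal numerical sketch may help; this is an illustration added in editing, not code from the talk, and it assumes numpy and the standard CHSH setup. Deterministic global value assignments, the non-contextual models, are bounded by 2, while the usual quantum strategy on a maximally entangled state, computed via the Born rule, reaches 2√2, the Tsirelson bound.]

```python
import numpy as np
from itertools import product

# Pauli matrices
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

# Maximally entangled state |phi+> = (|00> + |11>) / sqrt(2)
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

def E(A, B):
    """Correlation <phi| A (x) B |phi>, i.e. the Born rule expectation."""
    return phi @ np.kron(A, B) @ phi

# Standard optimal CHSH measurement settings
A0, A1 = Z, X
B0, B1 = (Z + X) / np.sqrt(2), (Z - X) / np.sqrt(2)

S_quantum = E(A0, B0) + E(A0, B1) + E(A1, B0) - E(A1, B1)

# Classical bound: maximise over all global value assignments in {-1, +1}
S_classical = max(a0*b0 + a0*b1 + a1*b0 - a1*b1
                  for a0, a1, b0, b1 in product([-1, 1], repeat=4))

print(S_classical)           # 2, the Bell/CHSH bound
print(round(S_quantum, 3))   # 2.828, i.e. 2*sqrt(2), the Tsirelson bound
```

The gap between 2 and 2√2 is exactly the violation that the experimental Bell tests confirmed, and the resource that the shallow-circuit separations below exploit.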
There is breakthrough work by Bravyi, Gosset and König which gives an unconditional separation; most separations you'll see are based either on the best you can apparently do, or, in theoretical work, on conjectures about separations of complexity classes, but they give an unconditional separation for this shallow-circuit class. The idea of the shallow-circuit class is that the non-locality we saw earlier is weakened to a bounded locality: the circuits can get arbitrarily big, they can get wide but not deep, so each gate can only have a bounded number of previously computed gates feeding into it. This means that in a classical circuit most things can't communicate; they don't have to be spacelike separated, but they can only communicate through the structure of the circuit. In the quantum case, because you have the correlation behaviours we saw earlier, you can achieve a provable quantum advantage; and although the absolute non-locality is somewhat attenuated by the bounded communication you have in the circuit, asymptotically the advantage witnessed by the Bell-inequality violation is recovered. So this really leverages all the tools we've been discussing to prove a striking result about what can be achieved with quantum advantage. The point, of course, is that rather than a single finite case we now have a whole family of instances, a circuit family, so we get an asymptotic calculation; and the same ideas can hopefully be transported to other computational settings.

There are a lot of further developments: there are remarkably good connections with things that have nothing to do with quantum mechanics, for example in relational database theory, and even in linguistics, and so on. And something it has been a pleasure for me to find out recently is that there are strikingly close connections to work done by Daniel and his students and collaborators, from, I think, quite different motivations essentially, but the mathematics is strikingly reminiscent; something we hope to understand better. Let me just give some references to papers that will be published in various places; you can conveniently access them on the arXiv, and if you want more details of anything I've been talking about, you can find them there. And here are some of the people I've had the pleasure of working with on these things. So thank you very much.

Thanks so much for this marvelous talk.