Well, I'm very grateful for this invitation to speak a little bit about this connection between proofs and knots. I think there is still a lot to explore, so I will just explain the basic ideas. And I love this picture, which in a way summarizes the talk: it's about how a proof can be seen as some kind of knotty structure which has to do with the structure of dialogue. There is a long-standing question about what a logical proof is, how we should represent it and describe it, and how we could have a properly mathematical notation for proofs. My starting point in this talk is the idea of game semantics. The idea is that every proof of a formula A initiates a dialogue in which a proponent tries to convince an opponent, and the opponent tries to refute the proponent. This is a really nice, interactive understanding of proofs. Here is a typical proof in, let's say, traditional Gentzen-style notation. It's a little bit cryptic, a bit difficult to understand what lies behind it. But what it proves is the so-called drinker formula, which says that in any open café in Paris with at least one customer, there exists a specific customer y — a very sober customer y — such that if A(y) holds, meaning that y is drinking, then all the other customers in the café are also drinking. Clearly this is counter-intuitive, but it is established by this short proof in classical logic. The property is not valid in intuitionistic or constructive logic, but it is valid in classical logic. And here is the proof. I said that a proof gives an interactive strategy to convince you, so I could play it with you. Imagine I want to convince you of this property, which you find a little bit strange. We enter the café together, and then I say: OK, I know which customer is very sober, so I pick someone.
So for instance — Gérard, if you allow me — I would say: OK, Gérard, you're very sober, so I know that if ever you drink, everybody else in the café will be drinking too. But the point is, I may be wrong: you may be drinking, Gérard, while someone else in the café is not drinking. And so my opponent will refute me and say: come on, Paul, you're just wrong — this customer here, Nicolas say, is the counter-witness; look, Nicolas is not drinking, so he is a counter-example to your claim. What the proof does, interactively, is to allow me to backtrack. This is a little bit hidden here, in the fact that there are two existential introductions among the rules: they allow me, as a prover, to backtrack and say, oh, sorry, I was wrong, I shouldn't have picked Gérard at the beginning, I should have picked Nicolas. So the reason why the existential here is not constructive has to do with the fact that I used a witness coming from my interaction with the opponent. In a way I cheat, but this is exactly the way classical logic works interactively. Instead of speaking about the drinker formula, you can say that this formula says that every proposition is either true or has a counter-example — you can think of the y here as a counter-example. And clearly, even before the property is proved, the formula holds, because if someone finds a counter-example, then we have the counter-example; this is the same story I just told you. So in a way the syntax is a little bit difficult to understand, while the game semantics gives a much more intuitive understanding of it. Let me now speak briefly about the way game semantics and algebra are connected.
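The backtracking strategy just described can be sketched in a few lines of code — a minimal toy model, restricted to a finite café, with hypothetical customer names standing in for the people in the story:

```python
def drinker_witness(customers, drinks):
    """Interactive strategy for the drinker formula on a finite cafe.

    Pick any customer as the 'sober' witness; if the opponent refutes
    us with a non-drinking counter-witness, backtrack and pick them.
    """
    assert customers, "the cafe must have at least one customer"
    w = customers[0]                      # first, optimistic choice
    # Opponent tries to refute: w drinks but someone else does not.
    refutation = next((z for z in customers if not drinks(z)), None)
    if drinks(w) and refutation is not None:
        w = refutation                    # backtrack: new witness
    # Now 'if w drinks then everyone drinks' can no longer be refuted:
    # either w does not drink (implication vacuously true), or nobody
    # could refute us in the first place.
    return w

cafe = ["Gerard", "Nicolas", "Paul"]
drinks = lambda c: c != "Nicolas"         # the opponent's hidden data
w = drinker_witness(cafe, drinks)
assert (not drinks(w)) or all(drinks(c) for c in cafe)
```

The single backtrack suffices precisely because the second witness is chosen *after* seeing the opponent's counter-example — the "cheat" that makes the classical proof non-constructive.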
This goes through linear logic and a number of connectives which are, in fact, really consistent with linear algebra. Here we have negation, which is really like a dual — and I will come back to that in linear algebra — between the game A played by the player and the game ¬A: the same game, but now seen from the point of view of the opponent. So negation permutes the roles of opponent and proponent. Then there is the sum. Start with two games A and B — imagine chess and poker, for instance. The sum of the two games is the game where I, as the player, decide whether I want to play chess or poker; once it's decided, we carry on and never come back to the other board. This is understood as a disjunction: a choice made by the player. But there is a dual connective, which is just the same except that the opponent makes the choice, and this is understood as a form of conjunction. Because if I let the opponent — the environment — choose, then clearly I should be a master in chess but also a master in poker if I want to win the game, since I don't know what the opponent will choose. So this is really a notion of constructive conjunction, the "with" connective, written with the little ampersand symbol &. There is also a tensor product, where the two games are played in parallel but only the opponent is allowed to switch boards, so the player always answers on the board where the opponent has just played. This is understood as a classical conjunction — classical in the sense of classical logic. And there is a dual where, symmetrically, the player is allowed to switch boards, and the nice thing is that this can be understood as a form of classical disjunction. So let me show you how to establish, for every formula — that is, for every game — the excluded middle. Take, for instance, a determined version of chess, with no draws.
Well, call the chess game property A; the claim is A or ¬A, where ¬A, remember, is just the game with the boards swapped. I will give you an interactive strategy which wins in that game. The idea is that I am playing two boards here in parallel, and in front of me there is a tensor product: a counter-strategy of tensor type can be seen as a pair of strategies. So let's say I'm playing against two famous Russian chess masters, and I will show you how I can win by playing white here and black there. Of course — I'm not crazy — I cannot beat both of them; to beat both I would need to be a very strong master myself. But here it works by pure logic: just some kind of logical manipulation, logical truth. So I want to prove A or ¬A, and the strategy is very simple — it's really like cheating. I let Korchnoi start here. Korchnoi plays a move like this, and then I copy what Korchnoi has done onto the other board. I can do that because this connective enables me to switch boards: whenever a move has been played on one board, I am allowed to move on the other board. So I have copied Korchnoi's move; Karpov answers; then I move back to the first board and play Karpov's move; Korchnoi answers, and I again play the move Korchnoi has just played. In this way — by this copycat strategy — Karpov really believes he's playing against Korchnoi, and Korchnoi really believes he's playing against Karpov. In the end, one of the two wins; maybe Karpov wins, which means I lose on his board.
So I lose on this board, but I win on that board — and since I wanted to prove A or ¬A, I only need to win on one of the two boards. So the idea is that we can understand "the property is true or its negation is" as something purely interactive and purely linguistic, which has nothing to do with the outside world. It's not about whether today's weather is cold or hot; it's really a pure linguistic phenomenon. I also mentioned that there is an exponential modality, which is very nice. I will not speak about it much more in this talk, but it's good to know that there is something which enables one to reopen boards whenever one is embarrassed. This is what happened in the drinker formula: at some point I was embarrassed as a player, so I opened a new board and won on that second board using information I had learned on the first — the fact that Nicolas, not Gérard, was the good witness. I learned this on the first board and then used it on the second. By the way — maybe I will have time to mention it — this has to do with cofree constructions of coalgebras, commutative coalgebras in vector spaces. I'll try to come back to this, but now what I will try to show you is that there are connections between these ideas of game semantics and ideas coming from linear algebra and representation theory. Before I do that, there is an important tool coming from category theory, with ideas by Lambek in particular, which is to give a functorial approach to proof invariants, in the same way that, as we will see, there is a functorial approach to knot invariants.
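The copycat strategy above can be sketched abstractly: the two opponents are arbitrary move generators (hypothetical stand-ins for Korchnoi and Karpov), and the player never invents a move, only relays:

```python
def copycat(opponent1, opponent2, num_rounds=4):
    """Play A par not-A by relaying: every move made by one opponent is
    replayed verbatim against the other, so the two boards stay equal.
    opponent1/opponent2 map the move history on their board to a move."""
    board1, board2 = [], []
    for _ in range(num_rounds):
        m = opponent1(board1)     # opponent moves on board 1
        board1.append(m)
        board2.append(m)          # we copy it onto board 2
        r = opponent2(board2)     # opponent answers on board 2
        board2.append(r)
        board1.append(r)          # we copy the answer back to board 1
    return board1, board2

# The two boards always hold the same game, so whoever wins board 1
# loses board 2: in a determined game we are guaranteed one win.
b1, b2 = copycat(lambda h: len(h), lambda h: -len(h))
assert b1 == b2
```

The invariant `b1 == b2` is exactly why the strategy wins: the same game is played on both boards with the roles swapped, so one of the two boards is necessarily a win for the player.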
In particular, I will start from this idea coming from Brouwer, Heyting and Kolmogorov that a proof of a conjunction A ∧ B really is a pair consisting of a proof of A and a proof of B. From the game semantics point of view, if I claim A and B, my environment — my opponent — could attack me on A or on B, so I need to be able to prove both; hence I should have a proof of A and a proof of B. This is fine, and we will see that it is interpreted by the existence of a Cartesian product in categories. But then there is this more mysterious description of a proof of an implication A ⇒ B as an algorithm ψ which is able to turn any proof φ of A into a proof ψ(φ) of B. The question is: what does "an algorithm" mean? This is still a question people discuss — it's not so clear — whereas here it's quite clear what it means: a pair is a pair. But "an algorithm" amounts to saying: I have a notion of algorithm somewhere in the air, but I don't know exactly what it is. The notion of Cartesian closed category is an attempt to answer that question by saying: come on, an algorithm will be a map in a specific category. This category should be Cartesian, so it should have Cartesian products, and it should be closed, in the sense that we have a family of adjunctions between the functor A × − and the functor A ⇒ −. What this means is that we have a natural bijection between the set of maps from A × B to C and the set of maps from B to A ⇒ C, and we can think of A ⇒ C as some kind of implication. A basic example is the category of sets and functions, but there are many, many other examples, and when we study proofs we spend a lot of time constructing Cartesian closed categories of many shapes.
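In the category of sets and functions, the natural bijection just stated — maps A × B → C against maps B → (A ⇒ C) — is ordinary currying; a minimal sketch:

```python
def curry(f):
    """Hom(A x B, C) -> Hom(B, A => C): transpose a two-argument map."""
    return lambda b: lambda a: f(a, b)

def uncurry(g):
    """Hom(B, A => C) -> Hom(A x B, C): the inverse direction."""
    return lambda a, b: g(b)(a)

pair_map = lambda a, b: a + len(b)    # some map A x B -> C
g = curry(pair_map)                   # its transpose B -> (A => C)
assert g("xy")(1) == pair_map(1, "xy") == 3
assert uncurry(g)(1, "xy") == 3      # round trip recovers the map
```

The two directions being mutually inverse is exactly the "natural bijection" of the adjunction A × − ⊣ A ⇒ −.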
A typical example: every topos is Cartesian closed, and so is every category of sheaves, and there are many other examples one can analyze — it's a very rich and interesting topic — but here I want to focus on the free construction, and you will see it is very symbolic. Say you start from a category and want to construct the free Cartesian closed category on it. Thinking about it, you say: I should start with the objects, and the objects should be constructed from the objects of the original category, closed under products and implications. So there is a grammar of objects: the objects of the free Cartesian closed category are generated by this very simple grammar with Cartesian product and implication. You can think of them as formulas, or as types — again, the Cartesian product can be understood as a kind of conjunction, and the arrow as implication. Now, the morphisms of this category are a bit more subtle to define, and I will just say a word about them. They are lambda terms — so let me say a word about lambda terms. Lambda terms are the terms of the lambda calculus, the pure calculus of functions — I will say more about it in a moment — and these terms should then be considered modulo a notion of beta-eta conversion. What I claim is that the situation is very similar to what we have in knot theory, where we have tangle diagrams considered up to deformation — typically the Reidemeister moves. So how is the lambda calculus defined? It is a calculus where you describe functions in a given context.
Typically there is a variable rule: a variable x, in a context where x has been declared of type A, is a term of type A. The two most important rules are abstraction and application. The abstraction rule says that if you have a term P of type B in a context where the variable x has been declared of type A, then you can construct the function written λx.P — you can think of it as the function which to x associates P(x) — and its type is the type of functions from A to B. I said implication, but you can also think of A ⇒ B as describing a function space: all the functions from A to B. Since the variable x is of type A and the output is of type B, the function λx.P is of type A ⇒ B. Once we have constructed such a function from A to B, we can apply it to an argument: this is the application rule, where a term P of type A ⇒ B applied to an argument Q of type A gives a term P Q of type B. And then there are three basic structural rules that deal with the context. So what are the beta and eta rules? They are very cute rules — very beautiful and powerful. The first one, beta, says: if I have constructed the function λx.P, which to x associates P(x), and then apply it to an argument Q, I can rewrite this into P with x substituted by Q — written P[x := Q]. Because it's a pure calculus of functions, the variable x appears in P, and we can substitute each occurrence by Q. Similarly, the eta rule says that every term P of function type can be seen as the function λx.P x, which to x associates the term P applied to the argument x.
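The beta rule can be made completely concrete — a minimal sketch of substitution and one root-level beta step, with terms encoded as nested tuples (capture-naive substitution, fine as long as the substituted term is closed):

```python
# Lambda terms as tuples: ("var", x), ("lam", x, body), ("app", f, a).
def subst(term, x, q):
    """Substitution term[x := q], naive about variable capture."""
    tag = term[0]
    if tag == "var":
        return q if term[1] == x else term
    if tag == "lam":
        _, y, body = term
        return term if y == x else ("lam", y, subst(body, x, q))
    _, f, a = term
    return ("app", subst(f, x, q), subst(a, x, q))

def beta(term):
    """One beta step at the root: (lambda x. P) Q  ~>  P[x := Q]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, (_, x, body), q = term
        return subst(body, x, q)
    return term

identity = ("lam", "x", ("var", "x"))
redex = ("app", identity, ("var", "y"))
assert beta(redex) == ("var", "y")
```

A full evaluator would also need alpha-renaming and a reduction strategy, but this single step is the whole content of the beta rule quoted above.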
Okay, this is completely formal, completely symbolic, but at the same time it is known to be deeply connected to a language of proofs. These lambda terms — the ones here in red — can be seen as proofs of propositions written with implication and conjunction. So there was this little calculus, which you can also think of as a calculus of formulas, and here is the description of the proofs of these formulas. I don't ask you to understand all the details about proofs and formulas; what I want to stress is that there is a completely algebraic construction of the free Cartesian closed category. What this means is that whenever I take a category C and a functor into a Cartesian closed category D, I can lift this functor from C to D — precisely because D is Cartesian closed — to a functor from the free Cartesian closed category into D which preserves the Cartesian product and the implication arrow up to coherent isomorphism. This construction is extremely important in the construction of what we call proof invariants. I work in a computer science lab, and the reason is that this lambda calculus can also be seen as a language of programs: there is a nice correspondence between proofs and programs. You can think of it as a very simple programming language whose morphisms are programs, and you interpret them into some category. Typically the category could be the category of sets and functions, a presheaf or sheaf category, a topos, or whatever; as long as it is a Cartesian closed category, we have this beautiful little functor.
Okay, this is the story of what I do all day — constructing this kind of thing — but people studying knot invariants, and this is the connection with knots, do something extremely similar. There are many ways to construct knot invariants, but here I will describe the functorial approach, which is to say: if I am able to associate to every color an interpretation in a ribbon category — and I will explain briefly how to construct such ribbon categories using the representation theory of quantum groups — then, given such a ribbon category and a functor into it, we can lift the functor to a functor which preserves the structure of ribbon categories, out of the free ribbon category, a category whose morphisms are framed tangles. In particular we can study framed knots and associate an invariant to each of them, because in this category the morphisms from the unit object to itself are closed ribbon diagrams — really framed knots. So you see there is this fascinating analogy between what we do with proofs, in the functorial semantics of proofs, and the functorial invariants of knots. I will try to explain the connection, so let me go very briefly through ribbon categories and string diagrams. String diagrams are a notation for monoidal categories. The idea is that a morphism from A ⊗ B ⊗ C to D ⊗ E is described as a diagram with three inputs and two outputs — A, B, C as inputs and D, E as outputs. The flow goes from bottom to top; this is the arrow here. Composition is described by vertical composition.
So composition in the category is vertical, and the tensor product is horizontal — putting f and g side by side. Typically, here is an example: f ⊗ identity, so f here and an identity wire here, and then (f ⊗ id) composed with (id ⊗ g). If you interpret this morphism in the string diagram notation, you get this picture; and if you interpret the other morphism, you get that picture, where f and g appear in the other order — we have played with the order in which they occur. The point is that in a monoidal category these two morphisms are equal. So we can really trust our eyes, up to deformation of diagrams: indeed, this diagram and that diagram describe the same morphism. This is the beginning of a beautiful story, where you push this topological intuition to the point where you can say: I have a knot, it is described as a morphism in a specific category, and the morphism is invariant up to deformation. This is what I will explain now. A braided monoidal category is just a monoidal category equipped with a braiding: a family of isomorphisms from A ⊗ B to B ⊗ A, which I will draw like this. I think many people here know this whole story, but I felt it was good to tell it again anyway. So this is how the braiding is drawn; of course there is an inverse, drawn as a negative crossing, whereas this one is the positive crossing. And there are coherence diagrams. This one, for instance, says that these two sequences of arrows are equal in a braided monoidal category; diagrammatically, it says that permuting A with B ⊗ C is the same as permuting A with B and then permuting A with C. Another diagram says essentially the same thing for the other configuration.
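The interchange equality just mentioned — sliding f and g past each other along their wires — can be checked concretely in the monoidal category of sets with the Cartesian product, where the tensor acts componentwise on pairs (a toy instance, with arbitrarily chosen f and g):

```python
# Interchange law in (Set, x): (f (x) id) ; (id (x) g) = (id (x) g) ; (f (x) id)
def tensor(f, g):
    """Tensor of morphisms: act componentwise on a pair."""
    return lambda pair: (f(pair[0]), g(pair[1]))

def compose(f, g):
    """Vertical composition, diagram order: first f, then g."""
    return lambda x: g(f(x))

ident = lambda x: x
f = lambda a: a + 1          # some morphism on the first wire
g = lambda b: b * 2          # some morphism on the second wire

lhs = compose(tensor(f, ident), tensor(ident, g))
rhs = compose(tensor(ident, g), tensor(f, ident))
assert lhs((3, 5)) == rhs((3, 5)) == (4, 10)
```

The two composites are equal as functions, which is exactly what the two visually-deformable string diagrams assert.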
So much for braided monoidal categories; but I will be very much interested in the notion of balanced monoidal category. It is just a braided monoidal category with a twist, defined as a family of isomorphisms θ_A from A to A, which I will depict as a twist like this — we twist the ribbon, and this is why I work with ribbons rather than just strings; you can see this little action on the ribbon. The twist should satisfy θ_I = id: when we twist the unit, we do nothing. And then there is this very nice equation which says that twisting a tensor is the same as braiding, twisting, braiding. This is how to see it: A ⊗ B is really A and B in parallel, so if you twist A ⊗ B, you need to twist A and B independently but also braid them twice. You see, this is a typical example where a purely algebraic coherence property — this map should equal this composite of three maps — coincides with a very topological intuition about how we twist ribbons. So now I carry on: what I am really trying to build is what I will call a ribbon category. We need a notion of duality. A dual pair between an object A and an object B, where we say that A is left dual to B, is defined as a pair of morphisms: one from the unit object I to A ⊗ B, and the other from B ⊗ A to I. You can think of the first as a kind of identity that we are creating, and of the second as an evaluation — they are sometimes called the coevaluation map and the evaluation map. Typically, A is a finite-dimensional vector space and B is its dual vector space of linear forms — so V and V*.
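The two twist axioms just described can be written out algebraically — one standard presentation, writing c for the braiding and θ for the twist:

```latex
\theta_I = \mathrm{id}_I,
\qquad
\theta_{A \otimes B}
  \;=\;
  c_{B,A} \circ c_{A,B} \circ (\theta_A \otimes \theta_B).
```

The second equation is exactly the "braiding, twisting, braiding" picture: twist each of A and B separately, then braid the pair twice (the double crossing c_{B,A} ∘ c_{A,B}).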
And we ask that these zigzag equalities are satisfied, represented like this. In that case we say that B is the right dual of A — equivalently, A is the left dual of B. So a ribbon category is now simply defined as a balanced category — which, if you remember, means braiding and twist — in which, moreover, every object A has a right dual A*, with the further requirement that the following two maps from A* ⊗ A to I coincide: twisting A and then evaluating, or twisting A* and then evaluating. The nice thing is that in any ribbon category, the object A* is then also a left dual to A: you can use the twist and the braiding to build a coevaluation map and an evaluation map in which A* now sits on the right — yes, over here. So A* is no longer just a right dual; it is also a left dual, thanks to this twist structure and this equation. In particular, this nice equation holds in every ribbon category: the twist is the same as braiding and then playing between evaluation and coevaluation — this being the original evaluation, and that the one deduced from the twist. Conversely, one can also define the twist from the fact that A* is at the same time a right dual and a left dual of A. I will come back to that later, because a similar phenomenon appears in logic, and I claim that while this is of course topology, in fact — and this is really the purpose of my work, and of this talk — it is interesting to look at such phenomena from a logical point of view: they may be projections of more general, purely logical structures about negation.
Because of course, when I say duality, I have in mind some kind of negation, and we will see that these dualities can be seen as particular instances — extremely interesting and rich, but instances — of a more general pattern where negation is no longer involutive. Something important in a ribbon category is that when we dualize an object twice, we come back to the original object. This is true, for instance, for finite-dimensional vector spaces, or for the representations we will consider, but not for general vector spaces: typically the canonical map from V to its double dual V** is not an isomorphism. So these phenomena — typically this nice reconstruction of the twist from the dualities — these equations can also be played at the logical level, and I will come back to that and explain how. Anyway, we have the free ribbon category: there is a beautiful theorem by Shum which says that the free ribbon category generated by a given category C can be constructed as follows. The objects are signed sequences of objects of C — signed means that ε1, ..., εk are + or −, indicating the orientation of the strands. The morphisms are framed tangles: you can draw them with ribbons, with strands labeled by maps in C. Here is a typical example: a map from A+ to B+ C− D+. You see, A+ is the input, so the map goes in this direction, and the output is B+ ⊗ C− ⊗ D+; the minus on C− means that the flow of the computation goes in the opposite direction. This is typically a map in the free ribbon category.
Okay — remember that in the free Cartesian closed category the maps were proofs, lambda terms, very symbolic objects, whereas here it is purely topological. My purpose in the next fifteen minutes is to show you that there is a way to think of lambda terms, at least in the good situation where the lambda terms are linear — I will explain that, and you can think of it in connection with the next talk by Noam Zeilberger. When the lambda terms are linear, we get a slightly mysterious but also very natural connection between proofs and knots. As I was saying, the free ribbon category has this beautiful universal property, which defines it: every time we have a category with braiding, twist and good dualities, we can take any functor from C to that category — you should really think of this functor as giving an interpretation to each of the strand labels — and lift it to a functor under which the framed tangles, in the topological sense, up to deformation, are interpreted as morphisms of the target category. This is a way to construct many knot invariants in topology. I will try to explain how this can be adapted, but before that I think it's nice, especially here, to spend five minutes explaining how we construct such ribbon categories, before I move back to proofs — because I want to show that the fact that I look at proofs has to do with finite-dimensional versus possibly infinite-dimensional representations of quantum groups. The idea is that one way to construct these ribbon categories is to define them as categories of modules over a Hopf algebra.
So suppose given a symmetric monoidal category V — for instance the category of vector spaces over a field. A bialgebra is an object H of V equipped with a multiplication and a comultiplication. I use blue in the diagrams for the multiplication — remember, I always go from bottom to top — so this is the multiplication of H, the unit, the comultiplication and the counit, and we ask for these equations. Typically this is the bialgebra equation, which says that multiplication and comultiplication are compatible in this way; and similarly for unit and comultiplication, multiplication and counit, and unit and counit. Then an antipode is defined as a morphism from H to H satisfying these two equations. Whenever we have such a Hopf algebra — a bialgebra equipped with an antipode — we can construct a monoidal closed category of left modules, where the action on the internal hom is defined by this formula, which I wrote in Sweedler style. This generalizes the usual construction for groups: thinking of your Hopf algebra as a group, the formula says that you should multiply the input by the inverse of h, apply the function, and then multiply by h. The Hopf algebra formula is just the quantum-group version, and there is a diagrammatic representation of it, which I can show you here. So this object — the right negation — is an H-module. Similarly, when the antipode is invertible, there is a way to define a closure on the left: the same construction, except that we need to use the inverse of the antipode, and we get the required properties.
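The group case of this internal-hom action — multiply the input by h⁻¹, apply the function, multiply the output by h — can be sketched concretely. A toy illustration (my choice, not from the talk): the cyclic group Z/5 under addition, acting on functions from Z/5 to itself.

```python
# Group action on a function space: (h . f)(x) = h . f(h^{-1} . x),
# specialized to the additive group Z/5, so h^{-1} is -h mod 5.
N = 5

def act_on_function(h, f):
    """Translate the argument back by h, apply f, translate forward by h."""
    return lambda x: (h + f((x - h) % N)) % N

f = lambda x: (2 * x) % N
g = act_on_function(3, f)
assert g(0) == 2                     # g(0) = 3 + f(-3 mod 5) = 3 + 4 = 2 mod 5

# The construction is an action: acting by 3 and then by 1 equals
# acting by 1 + 3 = 4 in Z/5.
g2 = act_on_function(1, act_on_function(3, f))
g3 = act_on_function(4, f)
assert all(g2(x) == g3(x) for x in range(N))
```

The Sweedler-style Hopf formula recovers exactly this when H is a group algebra, with the antipode playing the role of the inverse h⁻¹.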
So this is a way to get the implication: what we obtain is a monoidal category closed on both sides, left and right. But now we also want a braided monoidal category. For that purpose we introduce the notion of a braiding on the Hopf algebra, which is in fact an element R of H ⊗ H satisfying a number of properties that can be represented diagrammatically like this. The important point is that every such braiding on the Hopf algebra induces a braiding on the category of left H-modules. The idea is that on a tensor of vectors v ⊗ w, you swap w and v and at the same time multiply by the braiding R of your Hopf algebra. What we then get from this is a way to relate the right negation with the left negation, which is extremely important from my logical angle: it really says that the braiding induces a map from the right negation to the left negation, and this can be understood in a very logical way, as I will explain. If we compute this map, what it does is associate to any linear form the form pre-composed with the action of u, where u is this element here, which can be represented in this way. It is an extremely important element in the theory of quantum groups. The thing is that it has the bad property of not being a grouplike element. In order to obtain a grouplike element of the Hopf algebra, the natural way is to introduce a twist — this is where we get back to ribbons — which is just an element of H satisfying the equations I drew here diagrammatically. And when you multiply u by this twist, or by its inverse, you suddenly get a grouplike element.
And there is an element of magic in this, something I have tried to understand, let's say, from the outside, looking at braided monoidal categories and so on. The reason it is very important is that all the work I am describing here was really developed by Reshetikhin and Turaev in the 90s. Their fundamental observation is that if we take the finite-dimensional modules, this defines a ribbon category. So if I go back to my little picture from before (sorry for that): here I have the category of finite-dimensional representations of my Hopf algebra with this structure, and I can interpret ribbon tangles as morphisms between such representations. Using that, we can construct invariants of ribbons and knots. It is a beautiful recipe, and I have tried to think about its logical meaning. For that purpose, I will introduce the notion of a dialogue category. A ribbon category is about ribbons; since I want to speak about dialogue games, I found it nice to call my categories dialogue categories, but you will see they are extremely simple categories: the way they are defined is absolutely obvious. The important thing here is the connection with game semantics, the thing I was telling you before, and the idea that proofs are based on interactions. So a dialogue category is just defined as a category with an object bottom and a natural bijection: we ask that there is a way to turn any map from A tensor B to this object, which I will call bottom and which you can think of, for instance, as the base field, into a map from B into A implies bottom, okay? This is a very familiar situation in, let's say, linear algebra. We can do that on the left or on the right, and this is just the definition of a dialogue category.
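As a toy illustration of this bijection (my own example, not from the talk), take bottom to be the base field: the currying Hom(A tensor B, k) isomorphic to Hom(B, A implies k) is just currying a bilinear form given by a matrix.

```python
# Toy illustration: in finite-dimensional vector spaces with bottom = the base
# field k, the dialogue-category bijection
#     Hom(A (x) B, k)  ~=  Hom(B, A -o k)
# is currying of a bilinear form.  Vectors are lists, and a bilinear form is a
# matrix M with  phi(a, b) = a^T M b.
M = [[1, 2, 3],
     [0, 1, 4]]          # a bilinear form on A = k^2, B = k^3

def phi(a, b):
    """The uncurried form A (x) B -> k."""
    return sum(a[i] * M[i][j] * b[j] for i in range(2) for j in range(3))

def curry_right(b):
    """The right currying: an element of B is sent to a linear form on A."""
    return lambda a: phi(a, b)

def uncurry(f, a, b):
    """Round trip back to a map A (x) B -> k."""
    return f(b)(a)

a, b = [1, -1], [2, 0, 1]
assert uncurry(curry_right, a, b) == phi(a, b)
```

The currying in the other direction, sending a to the linear form b maps-to phi(a, b), gives the second, symmetric half of the definition.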
So it is very basic and primitive. The important thing is that we can then introduce the notion of a pivotal dialogue category, which is a category where we can play with the inputs of forms: whenever I have a map from A tensor B to bottom, I can turn it into a map from B tensor A to bottom. This is the way I like to represent it, going like this. We then ask for a coherence diagram which says that turning A and then turning B should be the same as turning A tensor B at once. This can be understood as a coherence property of a map between the two negations; the coherence property is here. But what matters is what we get from this. So this was the pivotal dialogue category; we can also define a balanced dialogue category simply as a dialogue category with the two negations, a braiding and a twist. The important property is that every balanced dialogue category defines a pivotal dialogue category, and an important observation is that we really need the twist to do it. The idea is that whenever I have a map from A tensor B to bottom, I can precompose it with a braiding, but also with a twist on A here. And the intuition is that the 'wheel' operation I was describing can be decomposed in this way: you see, here there is a twist and a braiding, and the braiding alone would not be sufficient. This is really related to what happens with Hopf algebras: there, if you remember, we needed the little theta to construct a group-like element of H.
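Under my reading of the construction sketched here (so this is a candidate formula, not a quotation of the talk's definition), the 'turn' precomposes a form with a braiding and a twist, and the standard balanced-category identity shows why the twist cannot be dispensed with:

```latex
% Candidate formula for the pivotal "turn" in a balanced dialogue category:
% given a form  f : A \otimes B \to \bot,  precompose with a twist and a braiding,
\mathrm{turn}(f) \;=\; f \circ \gamma_{B,A} \circ (\mathrm{id}_B \otimes \theta_A)
  \;:\; B \otimes A \longrightarrow \bot .
% The braiding alone would not suffice: in any balanced monoidal category the
% twist on a tensor product is tied to the double braiding by
\theta_{A \otimes B} \;=\; \gamma_{B,A} \circ \gamma_{A,B} \circ (\theta_A \otimes \theta_B),
% so turning twice around differs from the identity precisely by twists.
```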
And here it is the same story: when we have a good ribbon Hopf algebra, the category of all its representations, finite-dimensional and infinite-dimensional, defines such a balanced dialogue category, okay. In particular, it satisfies this pivotal coherence property. And from that, what we get is that when bottom is the unit object of this dialogue category, the finite-dimensional H-modules define a ribbon category. So this property can here be deduced in a purely categorical and formal way. Now, I don't have too much time, so I will just show you a little connection between proofs and knots that makes all this even more meaningful. The observation is that in a ribbon category, every object bottom defines a left and a right negation, where to dualize an object you tensor its dual with this bottom, okay. And now, in the same way as we constructed the free cartesian closed category, we can construct the free balanced dialogue category. I will show you how it is constructed; it is very similar to the construction of the free cartesian closed category. The objects are formulas constructed with a tensor product, a left negation and a right negation, and the maps are proofs: proofs of a logic that I call tensorial logic, because it has to do with tensor algebra. The proofs are constructed in exactly the same way, using the same Gentzen-like constructions of proofs. But of course a little care is needed about the so-called exchange rule, because this is really about manipulating the hypotheses of a proof, and here we manipulate them with knots and ribbons, so we need to keep a little bit of information about that.
But this can be done very nicely; it is just a very basic adaptation of traditional proof theory. And from that, we get the free dialogue category with a ribbon structure. Now, we know that every time we have a ribbon category and we fix an object bottom, this defines a dialogue category, where we have two negations, on the left and on the right. And just because it is the free dialogue category, we can construct this functor, and the main theorem, on which I will stop, is that this functor is faithful, okay. What this means is that two proofs here in this logic (this is the world of logic, this is the world of topology) are equal in this category if and only if the underlying ribbon tangles are the same. And I will just show you an application of that, because this can be seen as a coherence theorem for dialogue categories. Imagine I take an object A in the dialogue category and I map it to its double negation. This map is not involutive and not invertible, since you may, for instance, be working in the category of all representations of a quantum group. Then we can also take the other double negation, where we apply the left and the right negation in the other order. We can then apply the two 'turns' here, which connect these two negations with those two negations. And the point is that if we do that, it is not equal to this one: we also need to twist the output. To see why, imagine we want to prove that this diagram commutes in any dialogue category. It is enough to prove that it commutes in the free dialogue category. And how do we do that? We view the two maps as proofs. So here are the two maps: this one is here, and this composite is here.
Then we look at their images through this functor here. The images are just tangles, okay, and what the tangles do is track the manipulations we perform on the formulas, and in particular on this bottom: they track the bottoms. And the two tangles here are equal, so we know that the two proofs are equal. Yes, I could say more, but I think I am finished with my time.

Yes, maybe you could leave some time for questions.

Maybe I should just say one word about what these tangles represent, and show you this picture. What they represent is the flow of negations in the dialogue between the opponent and the player. If you remember, at the very beginning we had this interaction between the prover and the refuter, and these tangles can be understood as little strategies, where the opponent asks a question and then the player answers here. So there is this interesting relationship between proofs and topology that I think would be worth exploring more. Thank you very much.

Thank you very much, Paul-André. Let us leave some room for the dialogue. I'm not sure we can hear you; at least I cannot.

Do you hear me, Paul-André?

I can hear you.

Okay, good. Does anybody hear me?

Yes.

Sorry, it was my fault.

No, no, it is because now you are in dialogue mode.

Yes, and I realized that during the whole talk I had the sound off; I didn't notice. So if you had wanted to stop me, I was like a raging bull. Okay, so are there questions?

Actually, I have a question; it's Maxim, may I ask it?

Yes.

I got lost a bit when we discussed all this braiding.
Did you introduce the braiding because there is so much literature that is, in a way, polluted by braidings, or does it really appear by itself in this dialogue setting, without being introduced by hand?

Ah, well, the thing is, there is this dream we have of understanding the topology of proofs, yes? In traditional logic we have this so-called exchange rule; this comes from Gentzen, it's an old tradition. We need to permute the hypotheses in a proof. And the thing is, usually we don't track the permutations, okay? We say it's a symmetry. But then it makes sense to say that whenever we have such...

Ah, so it's a kind of generalization, I see.

I would not say it is just a generalization; I would say it is a more refined picture, because this can be done, but now we track, we remember. So now, in the proofs (just to show you a proof), we will remember the permutations. Traditionally we don't, because we don't care. But because we track them, we can get this free construction, which gives a kind of categorical grammar for infinite-dimensional vector spaces with braiding, and the braiding is produced by the Hopf algebra action. That's the idea, yeah.

Maybe I just want to make a comment, because the braiding seems a bit artificial, like a fashion, in a sense. But consider just the notion of a bialgebra, without antipode: unit, counit, and just a very simple product and coproduct. Look at how many maps there are, say from the n-th tensor power to the m-th tensor power: of course there is some kind of normal form, but intrinsically there are also some three-dimensional manifolds, some colorings, yeah. So even without braiding, one can see a three-dimensional picture.

Yeah, yeah, indeed.
And indeed, all this work really started from that more basic, un-braided setting, where I manipulate trees. So I agree: these phenomena may look a bit questionable; what are they doing there? But what I wanted to show is that in fact they can be understood at a logical level, as I meant before. The thing is, when we look at ribbon diagrams, we see a lot of phenomena, and it is difficult to understand what really comes from the topology and what comes from the fact that we can negate and we can turn things. And the story I want to tell, just to say it very briefly, is that this operation of permuting I was describing here would have emerged anyway in proof theory. If we think of a proof as something very concrete, not something in the air, but something that people manipulate in space and time, like we discussed, then we see that this operation is very natural, and then we see that the twist appears here. So there is this surprising connection that we want to understand. But I agree, and I think we need more thought; at some point, maybe, to avoid these braidings.

My suggestion is really to avoid the braiding and make something simpler. And another thing: all these games also relate to logic in a different way, when you consider statements with alternating quantifiers, for all, there exists, for all, there exists; that is like a game too, a different one.

Ah, yes, with the existentials. So maybe I should mention very briefly one last thing I wanted to say: there is an interesting open problem.
I mentioned the exponential modality, which you can think of as the cofree commutative comonoid generated by an object, okay. Recently, people like Daniel Murfet have studied this construction in the category of vector spaces, and in fact I have shown that it is connected to Sweedler's finite dual construction. This construction is far from obvious to me, and I would be interested to see how it, which we now understand on vector spaces, could be lifted to representations. Maybe it exists, but I don't know it. This is the kind of thing I like to discuss with Gerard, together with the connection with automata theory. Now, all the story I have told was about linear arguments, where we cannot repeat hypotheses. In the case of the drinker formula, the point is that we can construct the cofree commutative comonoid above the existential; algebraically, the story is explained like this. We have the existential, and this is what enables us to backtrack and change the witness. If you remember, the first time I took Gerard as the existential witness, and then I moved to Nicolas; algebraically, this corresponds to the fact that I can construct this 'bang', the cofree commutative comonoid, on top of a formula which contains an existential. And honestly, I don't fully understand how this could be expressed in the language of Hopf algebras. But I would love to have some elements, because that would help me understand the material nature of proofs. That is really the point, in a way.

Dear Paul-André, I take the opportunity to speak to you directly. This is perhaps regarding questions, but I will ask for other questions afterwards.
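For the record, here are, roughly, the two standard definitions in play here: the universal property of the exponential as a cofree commutative comonoid, and Sweedler's finite dual. Neither statement is specific to the talk.

```latex
% Universal property of the exponential !A as a cofree commutative comonoid:
% every morphism from a commutative comonoid C to A extends uniquely to a
% comonoid morphism into !A,
\forall\, f : C \to A \quad \exists!\ \text{comonoid morphism } f^{\dagger} : C \to\, !A
\quad \text{such that} \quad \varepsilon_A \circ f^{\dagger} = f,
% where \varepsilon_A : \,!A \to A is the couniversal map (dereliction).
%
% Sweedler's finite dual of an algebra A over a field k:
A^{\circ} \;=\; \{\, f \in A^{*} \mid \ker f \ \text{contains a two-sided ideal
  of finite codimension} \,\},
% which carries the coalgebra structure dual to the multiplication of A.
```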
Paul-André, dear Paul-André, you know that the Sweedler dual, constructed for scalars lying in a field (as you mentioned, vector spaces), encounters problems even when the scalars form, not a division ring, but a domain, a ring without zero divisors. But I have become aware of a paper by P.O.S. Do you know it, maybe?

Ah, yes, so I don't have to send it to you. So, okay, I tried to adapt this construction to a situation where I change the base commutative ring, a kind of presheaf-theoretic construction, and so I needed to generalize this; but it is very categorical.

Yes. And the question is that we would like to better understand the combinatorics behind it.

Ah, okay, and that's what I mean, yes. For very general, kind of abstract reasons, such a cofree commutative comonoid exists over any commutative ring, but it is not clear why. What I find fascinating is all this connection with your work on differentiation, things that I would need to understand better. So I just wanted to mention this to you; we can interact on it.

Yeah, yeah, of course, I would love to.

Other questions, please.

Sorry if I missed it, but why exactly do you need to take a dual here? Do you need it to define exclamation mark A, or do you need to dualize it?

Well, it's about this. Okay, so what happens? It has to do with linear logic. What happens with traditional models of linear logic is that, using topological vector spaces or these kinds of things, we are able to define the cofree commutative comonoid by some kind of variation on the symmetric algebra construction. So we get something like the tensor powers of A to the n, and we symmetrize them.
We do some clever colimit construction (it is not quite the symmetric algebra anymore), and when we dualize it, this gives the cofree commutative comonoid. In the case of plain vector spaces, with no topology, there is this construction, which I find extremely mysterious, where you use the fact that every coalgebra induces an algebra structure on the dual, and then you do some kind of clever restriction on the bidual, so that you get the cofree commutative comonoid. I'm not sure this answers your question, but I just wanted to say that, honestly, I understand it only from the outside; I don't have a completely clear combinatorial picture, and this is what I hope to get from this kind of work by Ferrara and Cristian. Sorry, we could discuss offline, and I will be happy to, but I will need to think about it.

Thank you very much; so we resume at 13:30. Thank you.