First, I wanted to thank you, Gérard, for the kind invitation to present something at the workshop, and I'm happy that it's going forward virtually. I also wanted to thank Paul-André for his nice talk just before this, because he introduced a lot of concepts that should make this talk easier to understand. So, as Gérard just noted, the title of the talk is "Untyped linear lambda calculus and the combinatorics of trivalent graphs." For the specialists in the audience, I just wanted to make clear that when I say "trivalent graph" what I really mean is "trivalent map," and I'm going to spend a little bit of time giving an introduction to say what I mean by "map." Some people might already be familiar with this material, but probably others aren't, and this way we're all on the same page. So I'm going to give an introduction to what maps are, what I mean when I say that, and also a little bit about the combinatorics of maps.

Maps are a classical object of study, and they have many different equivalent definitions. One way to define them is topologically: a map is a 2-cell embedding of a graph into a surface. Whenever I talk about surfaces in this talk, we're assuming that the surfaces are connected and oriented. So here's a picture of the complete graph on four vertices embedded into the torus; that's the topological definition of a map. There's also an algebraic definition, where the idea is that you can assign labels to the half-edges of the graph, and then you can represent all the information about the graph and about its embedding, up to isomorphism, just by a collection of permutations. So a map can be considered as a representation: a set equipped with an action of the particular group that I wrote there. But there's also a very simple combinatorial definition of a map: it's just a connected graph equipped with a cyclic ordering of the half-edges around each vertex.
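Since the combinatorial definition is the one we'll lean on, here is a small illustrative sketch (in Python; the encoding and function names are my own, not anything standard). A map is a set of darts (half-edges) together with an involution `alpha` pairing darts into edges and a permutation `sigma` giving the cyclic order of darts around each vertex; the faces are the orbits of the composite `d -> sigma[alpha[d]]`, and Euler's formula V - E + F = 2 - 2g recovers the genus of the surface:

```python
def face_count(sigma, alpha):
    """Faces = orbits of the permutation d -> sigma[alpha[d]]."""
    seen, faces = set(), 0
    for start in sigma:
        if start in seen:
            continue
        faces += 1
        d = start
        while d not in seen:
            seen.add(d)
            d = sigma[alpha[d]]
    return faces

def genus(sigma, alpha, vertices):
    """Euler's formula V - E + F = 2 - 2g, solved for g."""
    edges = len(alpha) // 2
    return (2 - vertices + edges - face_count(sigma, alpha)) // 2

# one vertex, two loop edges {0,2} and {1,3}: the same underlying graph
alpha = {0: 2, 2: 0, 1: 3, 3: 1}
interleaved = {0: 1, 1: 2, 2: 3, 3: 0}   # rotation (0 1 2 3): the loops cross
nested      = {0: 2, 2: 1, 1: 3, 3: 0}   # rotation (0 2 1 3): the loops are nested
print(genus(interleaved, alpha, 1))      # 1: this map lives on the torus
print(genus(nested, alpha, 1))           # 0: this map is planar
```

The example encodes two embeddings of the same one-vertex graph with two loops: interleaving the loops in the rotation gives a map on the torus, nesting them gives a planar map, which previews the point that one graph can underlie several different maps.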
These are connected graphs because, again, we're thinking of connected surfaces. So that's what a map is. To make the definition clear when I talk about graphs versus maps: these are three different depictions of the same graph. The two on the left are isomorphic as maps, but the two on the right are not isomorphic as maps, even though they're isomorphic as graphs. The reason is that if you look at the cyclic ordering of the half-edges around each vertex, to go from the one in the middle to the one on the right we have to pass one edge of a vertex across another edge, and that changes the cyclic ordering.

Now, some special kinds of maps that I'm going to talk about. A planar map is a map which is embedded on the sphere or on the plane, so we can consider it as a graph drawn on the plane. As I said, all of our graphs are connected, but a bridgeless map is one which remains connected if you remove any single edge. And a trivalent map is a map in whose underlying graph every vertex has degree three. One of the reasons maps have been studied for so long is that they're connected to the famous four color problem, now the four color theorem, which is formally a statement about maps: every bridgeless planar map has a proper face 4-coloring. There's a well-known reduction, going back to Tait in the 19th century, showing that this is equivalent to a statement about trivalent maps, namely that every bridgeless planar trivalent map has a proper edge 3-coloring. So now I want to say just a few words about map enumeration.
The graph theorist Bill Tutte was the first to look at the enumeration of planar maps. I put a quote here from his autobiography, Graph Theory As I Have Known It, where he talks about his motivation for studying this problem, which was actually related to the four color theorem, then still open. He says that it occurred to him that it might be possible to get results of interest in the theory of map coloring without actually solving the four color problem; for example, it might be possible to find the average number of 4-colorings for planar triangulations of a given size. He then explains that to do that, first you have to know how many triangulations there are, then you determine the number of 4-colored triangulations, and then you divide the second number by the first to get an average.

Tutte wrote a series of papers in the 1960s where he attacked this problem and actually managed to get a lot of results. One of his insights was to consider rooted maps. This is an old idea from combinatorics: counting objects can be hard because they can have non-trivial symmetries, and if you don't want to double count, one way to deal with that is to root your objects to get rid of any symmetries. So what is a rooted map? You can see it as a map with a little arrow sticking off of one vertex, the root, and you can show that if you consider these things up to root-preserving isomorphism, then a rooted map has no non-trivial automorphisms. As I said, Tutte managed to make a lot of progress: he got some surprisingly simple formulas for counting different families of planar maps, triangulations, and so on. Here's a screenshot from one of his papers where he gives a formula for the number of rooted planar maps with n edges. And this is a very well developed field: since then, the study of map enumeration has become a very active subfield of combinatorics.
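For the record, the simple formula I mentioned, which I believe is Tutte's count of rooted planar maps with n edges, 2 * 3^n * (2n)! / (n! * (n+2)!), is easy to evaluate (a quick Python sketch; the function name is mine):

```python
from math import comb

def rooted_planar_maps(n):
    """Tutte's formula: 2 * 3^n * (2n)! / (n! * (n+2)!) rooted planar maps with n edges."""
    return 2 * 3**n * comb(2 * n, n) // ((n + 1) * (n + 2))

print([rooted_planar_maps(n) for n in range(1, 6)])  # [2, 9, 54, 378, 2916]
```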
I put a few links at the bottom of the slide which you can go to for more. Okay, so that was a bit of background. During the talk, if you have questions, please ask them; I don't think I can see the chat window, but it's fine if you unmute yourself to ask.

Now I want to talk about something different, which is the lambda calculus, and specifically the linear lambda calculus. Thanks again to Paul-André Melliès, who gave a bit of an introduction in the previous talk; I wanted to give you a little more of the history. The lambda calculus is a formal system that was introduced by Alonzo Church. He originally invented it in 1928 and published the first paper about it in 1932; the history I'm telling you here is from the source at the bottom, by Cardone and Hindley. Church's original goal was to develop a foundation for logic that does not use free variables, and he wanted it to be something more natural than the system of Russell and Whitehead, which had come out a couple of decades earlier. So he came up with a very beautiful system; the only problem was that it was inconsistent. This was discovered by his students, unfortunately. But all was not lost, because Church was able to separate his original system into two pieces. He extracted one piece, which is nowadays called the untyped lambda calculus and is purely about computation; and he extracted another part which was typed, as in Russell and Whitehead's system, a typed lambda calculus, which he used to talk about logic. This was in the '30s and '40s. Since then, the lambda calculus has become very important, especially in programming language theory and in proof theory, among other areas.

Now, I'm going to be talking mostly about the untyped lambda calculus. One of its nice properties is that you can define so-called fixed point combinators, and the first
one was actually due to Alan Turing, who was another student of Church's. I'm not going to get into the formal definition, but the idea is that the lambda calculus is a programming language based around the idea that everything is a function; all terms represent functions in some sense. And a fixed point combinator, in a certain formal sense, given a term that represents a function, finds a fixed point for that term. I just wanted to show you the actual combinator itself, which was originally found by Turing. You don't have to memorize it, but I wanted you to see the syntax of the lambda calculus, and I wanted to point out that the variables x and y appear multiple times. One second, let me see if I can get a little pointer to highlight... okay, can you see my mouse moving along the screen? You can? Okay, good. So here you have these variables: x appears two times on this side, y also appears two times on this side, and then the same thing on the other side. This is actually very important for the construction of fixed point combinators, but I'm not going to dwell too much on what fixed point combinators are, because for the rest of the talk I'm going to be talking about a special subsystem of the lambda calculus called the linear fragment.

A term is said to be linear if every variable occurs exactly once. This is a very well behaved subsystem of the lambda calculus, and it's no longer Turing complete. I guess I didn't mention that, but one of the first results about the lambda calculus is that it's equivalent to other models of computation like Turing machines, and the proof of that involves the construction of these fixed point combinators. The linear subsystem of the lambda calculus is no longer Turing complete; it actually turns out to be complete for polynomial time. So I do want to try to convey to you
the actual formal definition of the lambda calculus in the linear case. I'm going to present it using the logical notation which was also in Paul-André's talk this morning. I'm going to define a so-called judgment, which says that the term t is a linear term with free variables x1 through xn. So this thing here is a judgment, and we're defining the linear terms with a given list of free variables inductively, by the rules at the bottom. First, variables are terms, and a variable just has itself as a free variable; that's what the rule "x entails x" says. Then there's the rule for application: here t is a term, u is a term, and then t applied to u is a term. Intuitively, if we think of t as a function, then t applied to u represents the function t applied to the value u. Now, t also has some free variables. I'm using Greek letters to range over lists of free variables: if t has the list of free variables gamma, and u has the list of free variables delta, then the application of t to u has the list of free variables gamma concatenated with delta, because we combine all of the free variables that occur in t and in u, and each free variable occurs exactly once; that's the linearity restriction. Then there's one more interesting rule, the so-called rule of abstraction. It says that if t is a term with free variables gamma together with some distinguished free variable x, then we can abstract in the variable x to form a term written lambda x dot t, which no longer has x as a free variable; x is now said to be bound inside the expression lambda x dot t. (I accidentally clicked there.) Intuitively, this is the idea of defining a function: lambda x dot t represents a function which for any input x will return t. And finally there's another rule, which is sometimes called
a structural rule, and basically what it says is that we don't care about the order of the free variables inside the context: if t has free variables gamma, x, y, delta, then we can freely permute those in order to derive linear terms. If you were watching carefully in Paul-André's talk, this rule appeared there, along with two other rules, called weakening and contraction. Weakening is basically the ability to say that a variable doesn't have to occur in your term, so you can throw away free variables, and contraction says that you can reuse a variable multiple times. Those rules are present in the full lambda calculus, but in the linear lambda calculus we don't have any structural rules except for this one rule of exchange.

Now, I put some terms at the bottom to remind myself to define them, and again, feel free to interrupt me with questions. A subterm of a term (this is just the standard definition) is any term that appears inside it: if you think of the term as a tree, a subterm appears as a subtree. So for an application of t to u, both t and u are subterms, as are any of the subterms of t and of u; and in the expression lambda x dot t, t is a subterm, and the variable x is also going to be a subterm, since it has to appear somewhere, this being a linear term. Alpha equivalence is a technical term from the lambda calculus which says that we don't care about the labels of variables, so we can freely rename them. And a term is said to be closed if it has no free variables. I'll give you some examples on the next slide, but, for instance, lambda x dot x is a closed term since it has no free variables: it has one variable, x, but that's bound by the lambda. Then there's one more thing I want to define. As I said, in the linear lambda calculus there's only one structural
rule, which is called the rule of exchange, but you can consider dropping even that rule, so that you only have the three rules at the top. Then you treat your free variables as really a list of free variables that occur in a certain order, and if you do that you get a further restriction of the linear lambda calculus, called the ordered linear lambda calculus, which is even more restrictive.

To give you some examples of lambda terms and the syntax: here's an expression, lambda x dot lambda y dot lambda z dot x applied to (y applied to z). I'm using some conventions here, so sometimes the applications are implicit: this is y applied to z even though I didn't write parentheses. This is just an example of a term, and it's actually an ordered term, because we can derive it without using the exchange rule; you can see why we call it ordered, since the variables occur basically in the order in which they're bound by the lambdas. Here is an example of a non-ordered term: lambda x dot lambda y dot lambda z dot (x applied to z) applied to y. If we want to derive this term, then we need the exchange rule. I put the letters B and C next to these because they're actually very special terms, known in logic as the B and C combinators, and they'll come back later on. Now, this is an example of a term which is not closed (the opposite of closed is open): it has one free variable, x. The term is lambda y dot lambda z dot x applied to (y applied to z), and it has the free variable x, so it's not closed. And here is another example of an open term: x applied to (lambda y dot y) is itself open, because it has the free variable x, but it has a closed subterm, namely lambda y dot y.
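Both linearity and orderedness are easy to check mechanically. Here's a toy sketch (Python, with my own tuple encoding of terms: ("var", x), ("app", t, u), ("lam", x, t)). The function free_vars implements the rules from the earlier slide, concatenating contexts at applications and removing the bound variable at abstractions; the orderedness test uses the fact that, under the conventions here, a linear term is derivable without exchange exactly when at every abstraction the bound variable is the last free variable of the body:

```python
def free_vars(t):
    """Free variables of t, left to right; raises if t is not linear.
    Terms are tuples: ("var", x) | ("app", t, u) | ("lam", x, t)."""
    tag = t[0]
    if tag == "var":                           # rule: x |- x
        return [t[1]]
    if tag == "app":                           # rule: contexts are concatenated
        fv = free_vars(t[1]) + free_vars(t[2])
        if len(fv) != len(set(fv)):
            raise ValueError("a variable occurs more than once")
        return fv
    fv = free_vars(t[2])                       # rule: abstraction binds one occurrence
    if fv.count(t[1]) != 1:
        raise ValueError(f"{t[1]} must occur exactly once")
    return [x for x in fv if x != t[1]]

def is_ordered(t):
    """Derivable without exchange: at each lambda, the bound variable
    must be the last free variable of the body."""
    tag = t[0]
    if tag == "var":
        return True
    if tag == "app":
        return is_ordered(t[1]) and is_ordered(t[2])
    return free_vars(t[2])[-1:] == [t[1]] and is_ordered(t[2])

V = lambda name: ("var", name)
# B = lambda x. lambda y. lambda z. x (y z)      -- ordered
B = ("lam", "x", ("lam", "y", ("lam", "z",
        ("app", V("x"), ("app", V("y"), V("z"))))))
# C = lambda x. lambda y. lambda z. (x z) y      -- linear but not ordered
C = ("lam", "x", ("lam", "y", ("lam", "z",
        ("app", ("app", V("x"), V("z")), V("y")))))
print(free_vars(B), is_ordered(B), is_ordered(C))   # [] True False
```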
Okay. It was also mentioned in the previous talk that part of the reason the lambda calculus is interesting is that you can do computation with it, and all computation happens through the so-called rule of beta reduction, which explains the interpretation of the lambda as defining a function. The way to read the rule is that if we have a term lambda x dot t applied to another term u, then it reduces, by beta reduction, to t with u substituted for the variable x. This is the only rule of computation in the lambda calculus, and you can apply it anywhere: whenever you have a subterm of this form, you can apply the beta reduction rule. This gives you a rewriting system on lambda terms which is confluent, so it doesn't matter in what order you apply it, and for the linear lambda calculus it's strongly normalizing, so you always reduce to a unique beta normal form. Sometimes this rule is also considered together with a rule called eta expansion, which takes a term t and rewrites it to lambda x dot (t applied to x). You can think of this as a kind of extensionality principle: if t is a function, we can treat it as the function which, given an argument x, applies t to x.

Here's a little example. We have this big expression, which is actually the B combinator applied to lambda a dot a, applied to t. We apply a beta reduction to this piece, substituting lambda a dot a for the variable x, and we get what's here. At this point we actually have a choice of which subterm to apply beta reduction to, but we'll apply it inside, to lambda a dot a applied to (y applied to z), and we get lambda y dot lambda z dot y applied to z, applied to t. After one more beta reduction we get to lambda z dot t applied to z, and if you then apply the eta rule in reverse, you can go down to t. So sometimes people talk about beta-eta equality, which is the notion of equality you get by quotienting modulo beta reduction and eta expansion.
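In the same home-made tuple encoding as before, beta reduction is a few lines. For linear terms substitution is especially simple: the substituted variable occurs exactly once, so nothing gets duplicated; this sketch also assumes all bound variable names are distinct, so no capture-avoiding renaming is needed:

```python
def subst(t, x, u):
    """t[u/x]: replace the unique free occurrence of x in t by u."""
    tag = t[0]
    if tag == "var":
        return u if t[1] == x else t
    if tag == "app":
        return ("app", subst(t[1], x, u), subst(t[2], x, u))
    return ("lam", t[1], subst(t[2], x, u))

def normalize(t):
    """Repeatedly contract beta redexes; on linear terms this always terminates."""
    tag = t[0]
    if tag == "var":
        return t
    if tag == "lam":
        return ("lam", t[1], normalize(t[2]))
    f, a = normalize(t[1]), normalize(t[2])
    if f[0] == "lam":                       # beta: (lambda x. t) u -> t[u/x]
        return normalize(subst(f[2], f[1], a))
    return ("app", f, a)

# (lambda x. lambda y. x y) (lambda a. a)  beta-reduces to  lambda y. y
ex = ("app",
      ("lam", "x", ("lam", "y", ("app", ("var", "x"), ("var", "y")))),
      ("lam", "a", ("var", "a")))
print(normalize(ex))   # ('lam', 'y', ('var', 'y'))
```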
I want to say one more thing, which is about the typed lambda calculus; this connects more to things that Paul-André Melliès said in his talk. Types are formed in a very basic way: they can either be atomic, written x and y, or they can be function types, or, if we're thinking logically, implication types. I'm using the notation for implication which comes from so-called linear logic, and you just read it as an arrow or as an implication. The judgment form now refines the one we had before for the untyped lambda calculus: we assign types to all of the variables and to the term, and the way you read it is that t is a proof of B under assumptions A1 through An, if you think of the types as logical propositions. Because this is the linear lambda calculus, these hypotheses have to be used in a linear way. So again you can inductively define a type system for the linear lambda calculus, just refining the rules we had before. For the variable case: if we assume that x has type A, then x has type A. For the application case: if t has type A arrow B and u has type A, then the application of t to u has type B, and again we keep all of the assumptions, combining gamma and delta into the assumptions gamma comma delta. And the rule for abstraction says that lambda x dot t, which again we think of as defining a function, gets a function type A arrow B, where we give the variable x the type A and we need to show that the term t has type B. We still have the rule of exchange. As for the connection to category theory: you can see typed linear lambda terms, modulo beta-eta equality, as a presentation of the free symmetric closed multicategory over a set of atomic types. That's one of the reasons why the linear lambda calculus is interesting.
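To see the typing rules in action, here is a toy type inference sketch for linear terms (Python; the tuple encodings, unify, and the printing are all my own construction, not the talk's formalism): each bound variable gets a fresh type variable, each application unifies the head's type with an arrow type, and running it on the B combinator prints its principal type, which is (b ⊸ c) ⊸ (a ⊸ b) ⊸ a ⊸ c up to renaming:

```python
import itertools

counter = itertools.count()

def fresh():
    return ("tv", next(counter))

def find(ty, sub):
    """Chase substitutions on a type variable."""
    while ty[0] == "tv" and ty in sub:
        ty = sub[ty]
    return ty

def unify(a, b, sub):
    a, b = find(a, sub), find(b, sub)
    if a == b:
        return
    if a[0] == "tv":
        sub[a] = b
    elif b[0] == "tv":
        sub[b] = a
    else:                                  # both are arrow types
        unify(a[1], b[1], sub)
        unify(a[2], b[2], sub)

def infer(t, env, sub):
    tag = t[0]
    if tag == "var":
        return env[t[1]]
    if tag == "lam":                       # Gamma, x:A |- t:B  gives  lam x.t : A -o B
        a = fresh()
        return ("arr", a, infer(t[2], {**env, t[1]: a}, sub))
    f = infer(t[1], env, sub)              # t : A -o B and u : A  give  t u : B
    arg = infer(t[2], env, sub)
    res = fresh()
    unify(f, ("arr", arg, res), sub)
    return res

def show(ty, sub):
    ty = find(ty, sub)
    if ty[0] == "arr":
        return f"({show(ty[1], sub)} -o {show(ty[2], sub)})"
    return f"t{ty[1]}"

B = ("lam", "x", ("lam", "y", ("lam", "z",
      ("app", ("var", "x"), ("app", ("var", "y"), ("var", "z"))))))
sub = {}
print(show(infer(B, {}, sub), sub))   # ((t3 -o t4) -o ((t2 -o t3) -o (t2 -o t4)))
```

For simplicity this sketch infers types without re-checking linearity, which is fine once a term is known to be linear.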
If you're not familiar with multicategories, they are a slight generalization of monoidal categories, and you can extend the linear lambda calculus to get a very similar presentation of the free symmetric monoidal closed category over a given set. Okay, so that was a crash course in the lambda calculus and the linear lambda calculus, and now I'm going to move on to something else. Do people have questions about this? Is there anything you'd like me to spend more time on? Okay, if not, I want to go on to what on earth these two things have in common: I started by talking about maps and map enumeration, and then I moved on to the lambda calculus.

Several years ago, back in 2014, I thought that it could be interesting to count untyped linear terms, specifically closed, beta normal, ordered linear terms, and to enumerate them by size, where the size you can just take to be the number of lambdas appearing. There were some reasons I was interested in this, having to do with some joint work, but that's not important for this talk. If you go back to the definitions that I gave you of ordered linear lambda terms and of beta normal terms, it's actually not very hard to come up with a recurrence that will count such terms by size, together with a second variable which is the number of free variables, and then you can start counting these things. It's also maybe more fun to just start listing them and see what sequence you get. At size one you have one such term, lambda x dot x. At size two you have two terms: one reads lambda x dot x applied to (lambda y dot y), and the other is lambda x dot lambda y dot x applied to y. At size three you actually have nine terms, at size four you have 54 terms, and you can keep going like this. So this is what I did, and I entered the sequence into the Online Encyclopedia of Integer Sequences.
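The recurrence I alluded to can be written down directly (a Python sketch; the decomposition and names are mine). A beta normal term is a block of lambdas over a "neutral" term, that is, a variable applied to a spine of normal arguments, and for ordered terms only the number of free variables matters. So we can count N(k, s), normal terms with k free variables and s lambdas, mutually with E(k, s) for neutral terms:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def E(k, s):
    """Neutral terms (variable head + normal arguments) with k free vars, s lambdas."""
    if k < 1 or s < 0:
        return 0
    total = 1 if (k, s) == (1, 0) else 0          # base case: a bare variable
    for i in range(1, k + 1):                      # split the context: head part / argument part
        for s1 in range(s + 1):
            if (i, s1) != (k, s):                  # skip the self-split; its factor N(0, 0) is 0 anyway
                total += E(i, s1) * N(k - i, s - s1)
    return total

@lru_cache(maxsize=None)
def N(k, s):
    """Beta normal terms: either an abstraction or a neutral term."""
    if s < 0:
        return 0
    return (N(k + 1, s - 1) if s > 0 else 0) + E(k, s)

print([N(0, s) for s in range(1, 5)])   # [1, 2, 9, 54]
```

This reproduces the sequence 1, 2, 9, 54 from the slide.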
It was a surprise to me that the sequence was already there, listed as counting something completely different, various things in fact, and you can see there's a comment from Don Knuth saying that this is the number of rooted planar maps with n edges. If you look here there's actually a very simple formula, and it turns out this is the formula computed by Tutte in the '60s for the number of rooted planar maps with n edges. So that's why, in May 2014, I was very surprised about why you would get the sequence counting planar maps from counting these lambda terms. But I got together with Alain Giorgetti, and we managed to find a bijection between this class of lambda terms and rooted planar maps. I'm not going to talk more about that bijection; it's not so natural, for reasons that I don't want to get into now. What I did want to say is that it turned out that a couple of years earlier, another group of combinatorialists, the French combinatorialists Olivier Bodini, Danièle Gardy, and Alice Jacquot, had found a similar connection, between general linear lambda terms and trivalent maps of arbitrary genus. I'm going to talk more about this in a moment. But now, looking at these two connections, found around the same time (first the connection by Bodini, Gardy, and Jacquot between linear terms and trivalent maps, then the connection that we found between beta normal ordered linear terms and rooted planar maps, not necessarily trivalent), you might wonder whether this is a piece of a larger puzzle. Well, it turns out that it is, and there are actually many connections between different natural subsystems of the lambda calculus, actually subsystems of the linear lambda calculus, and different families of rooted maps, both
planar maps and maps of arbitrary genus. I want to point out a couple of things. First, all of the connections in the upper half of the table, between different families of linear terms and different families of trivalent maps, can be explained as restrictions of a single natural bijection, the bijection originally found by Bodini, Gardy, and Jacquot, which I'm going to explain in a bit. The connections in the lower half of the table are more mysterious. It might seem a bit mysterious why, in passing to beta normal terms, we suddenly go from trivalent maps to general maps, and this is still mysterious to me: we don't really have a good bijective explanation for the connections in the lower half of the table. On the other hand, the fact that these connections exist has still been useful in finding links between the lambda calculus and some other areas, via their mutual connection to maps. I just wanted to point out one paper below, which corresponds to the seventh entry of the table. This was a collaboration with Julien Courtiel and Karen Yeats, where we did some combinatorics that had applications to Karen Yeats's work in quantum field theory, and it also had connections to the lambda calculus. Okay, I see that I put a note for myself at the bottom to explain what "unitless" means. I'm not going to go through this whole table, but I did want to mention: I already defined linear and ordered, and "unitless" here means terms with no closed subterms, which we talked about before. It turns out that the unitless restriction on lambda terms corresponds to the property of a map being bridgeless.

Okay, so in most of the rest of the time I wanted to explain this bijection between linear lambda terms and rooted trivalent maps of
arbitrary genus, which, as I said, restricts to explain all of the connections in the upper half of the table: it restricts to a bijection between ordered terms and planar trivalent maps, and to one between unitless terms and bridgeless trivalent maps. Again, this bijection is originally from the paper by Bodini et al.; I wrote another paper a few years later revisiting the bijection and explaining it from another perspective.

There's an old idea in the lambda calculus, which goes back to the '70s at least, maybe earlier (it's kind of folklore), that you can represent lambda terms as a kind of annotated graph. You can think of it as a syntax tree with two kinds of nodes standing for the two operations of application and abstraction, where we draw application with an @ symbol and abstraction with a lambda symbol, but the tree is enriched with pointers going from each lambda node to the corresponding occurrence of the variable that it binds. This idea is especially natural for linear terms, because every lambda has a corresponding unique occurrence of a variable. So if you see it as a tree with pointers, that corresponds to a kind of graph. Here I've shown a picture: this is the representation of a more or less arbitrary term, lambda x dot lambda y dot x applied to (lambda z dot y applied to z). You can see this is a rooted graph. The root is annotated with the term itself, and the other edges of the graph correspond to subterms: this edge is annotated by the subterm starting with lambda y, and so on; this edge is annotated by the variable x, and you can also follow the wire to here; here we have an edge annotated by the subterm lambda z dot y applied to z; and here, at the output of this application node, we have x applied to lambda z dot y applied to z. Okay, so
that's the idea of a diagrammatic representation for lambda terms, which again is kind of folklore from the '70s. You can give some explanation for it in the framework of string diagrams, which Paul-André talked a lot about in the last talk, and it's related to an old idea of Dana Scott for modeling the untyped lambda calculus. I mentioned that the typed lambda calculus, well, the linear typed lambda calculus, is related to symmetric closed multicategories or symmetric monoidal closed categories, and there's a similar connection between the typed non-linear lambda calculus and cartesian closed categories. But for the untyped lambda calculus, a long-standing question was what it means mathematically: not just formally, in terms of rewriting, but whether there is a natural mathematical model where you can really think of terms as functions. The problem, kind of the paradox, that Dana Scott observed was that in a sense you need to find a set U which is isomorphic to the set of functions from U to U, and you can't do that in set theory for cardinality reasons. But you can do it in other, more refined models. Dana Scott came up with the first models, and later he gave an axiomatic explanation in terms of the idea of a reflexive object in a cartesian closed category: an object U in a cartesian closed category equipped with an isomorphism, or maybe just a retraction, to the space of maps from U to U. And you can do the same thing where, instead of cartesian closed categories, you take symmetric monoidal closed categories. Then you have these two operations, which I'm now writing as @ and lambda suggestively, because they correspond to the two operations of application and abstraction, and if you use the formalism of string diagrams you can represent them this way. I drew application over here: it has one incoming
wire at the top (I'm reading the diagrams from top to bottom), and that corresponds to the type U. At the bottom it has two wires, one incoming and one outgoing, and you can think of those as U ⊸ U, the linear function space. More concretely, if you imagine working in a compact closed category, then U ⊸ U could be represented as U tensor U-star, and then we're using the standard conventions for compact closed categories to represent this operation. The lambda, the abstraction operation, is dual to that, because it goes from U ⊸ U to U; in a compact closed category it has the type U tensor U-star into U, which is then represented as a node of this shape. Now, I've written here the rules corresponding to beta reduction and eta expansion. If you're working in a higher category you could see these as 2-cells; otherwise you could see them as equations. Diagrammatically, it's nice to observe that beta reduction (again, the real beta reduction says that lambda x dot t applied to u goes to t with u substituted for x) can be read as a diagrammatic rule which is a kind of unzipping operation: you take these two nodes and unwind them to get just a pair of edges. Eta expansion corresponds to this bubbling operation, where you take a wire and insert these two nodes in this shape.

Okay, so now that we have this diagrammatic representation for lambda terms, and particularly for linear lambda terms, and we have some way of understanding it, maybe categorically, it's not so hard to see how you go from linear lambda terms to rooted trivalent maps. The idea is that you just take the lambda term and represent it diagrammatically, as I've done here with some different examples: this is the B combinator represented diagrammatically, and this is the C combinator. And then what do you do? You just forget
the colors of the vertices and the orientations of the edges, and what do you have? You have a trivalent graph, which is rooted because there was the distinguished root, and it really is a map rather than a graph, because we care about the cyclic ordering: these two trivalent maps are not isomorphic, because they have different cyclic orderings. We can do this for closed terms, as I showed here, and we can also do it for open terms, terms with free variables. In that case, diagrammatically, you can think of the free variables as wires, as inputs coming from the boundary which flow through the term and then go out into the root. If we again apply this forgetful transformation, then we get a rooted trivalent map, but now on a surface with boundary, with some edges attached to the boundary.

So I hope it's pretty clear how you go from a linear term to a rooted trivalent map. What might not be clear is how we go backwards: how do we go from a trivalent map to a linear lambda term? What the existence of this bijection is saying is that, in a sense, this transformation is invertible: given a rooted trivalent map, there's a unique linear lambda term that maps down to that trivalent map when you forget about the colors, that is, when you forget the distinction between application and lambda and the orientations of the wires. So here's an explanation for that. First, as I said, rather than just rooted trivalent maps in the classical sense, we want to consider maps on surfaces with boundary, which can have some free edges attached to the boundary and which have one edge on the boundary marked as the root. Then we observe that any such map with boundary has to have one of the following forms. Again, this is a rooted trivalent map: we go to the root
So, again with a rooted three-valent map, we go to the root and then we look at the three-valent vertex adjacent to it; here I've gone to the root and then to the three-valent vertex, and I ask what happens when we remove that vertex. In this case the map is split into two pieces, which I'm calling t1 and t2. That's one possibility: when we remove the vertex adjacent to the root, the map splits into two pieces. Another possibility is that it stays connected, and in that case we can reroot t1 canonically, just by going down the left: I can reroot t1, move the root over here, and make this edge now be attached to the boundary. So there are these two possibilities, the vertex adjacent to the root is either disconnecting or connecting, or really there is a third possibility, namely that there was no three-valent vertex at all: this is the degenerate three-valent map which just has one edge, attached to the boundary both at the root and on the other side. But now if you look at this characterization of rooted three-valent maps of arbitrary genus, it is exactly like the inductive definition that I showed you for linear lambda terms, with its three cases of application, abstraction, and variable. So that's the basic idea of the correspondence between rooted three-valent maps and linear lambda terms, or at least this is one way of understanding the bijection. I'll just give you a little example to illustrate it. Here is a rooted three-valent map, a rooted embedding of the Petersen graph, and I claim that it corresponds to a unique linear lambda term. How do we compute it? Well, we go to the root, we look at the vertex adjacent to it, and we ask what happens when we remove it. In this case it's connecting, therefore this corresponds to a lambda, and if you remember, I'm drawing the abstraction nodes in red.
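As an aside on this case analysis: since it mirrors the inductive definition of linear lambda terms, the same recursion counts both sides. Below is a sketch of my own (names and size convention are my choices) that counts linear terms with a given number of distinguishable free variables and a given number of lambdas, by the variable, abstraction, and application cases, and cross-checks the closed-term counts 1, 5, 60, 1105, ... against the recurrence recorded in the OEIS for A062980, the sequence that, if I recall the correspondence correctly, also counts rooted three-valent maps on oriented surfaces.

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def terms(free, lams):
    """Linear lambda terms with `free` distinguishable free variables
    and `lams` lambdas, every variable occurring exactly once."""
    n = 1 if (free, lams) == (1, 0) else 0           # variable case
    if lams > 0:
        n += terms(free + 1, lams - 1)               # abstraction case
    for f in range(free + 1):                        # application case:
        for l in range(lams + 1):                    # split variables and lambdas
            if (f, l) != (0, 0) and (f, l) != (free, lams):
                n += comb(free, f) * terms(f, l) * terms(free - f, lams - l)
    return n

def a062980(k):
    """Recurrence a(0)=1, a(n+1) = (6n+4)a(n) + sum_{i+j=n} a(i)a(j)."""
    a = [1]
    for n in range(k):
        a.append((6 * n + 4) * a[n] + sum(a[i] * a[n - i] for i in range(n + 1)))
    return a
```

For example, the five closed linear terms with two lambdas are (λx.x)(λy.y), λx.(x λy.y), λx.((λy.y) x), λx.λy.(x y), and λx.λy.(y x).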
Then we continue: we go down the left side, we place the root here, we move the other edge onto the boundary, and we do the same thing. So now we look at this vertex, and when we remove it it's connecting, so this corresponds to a lambda. We continue, and we keep going along, and you can see that all of these nodes are going to be connecting nodes, so they're going to correspond to lambdas, until we get to this point, where now you can see that we have two edges here, and when we remove this vertex the map is actually going to split into two pieces: there is this little trivial piece here, and then there is the rest of the graph. So this is going to correspond to an application node rather than to an abstraction node, and we color it in blue. Then we can continue, etc., and we traverse the whole graph, and finally we have the diagram of the term; this is now the term in traditional syntax. So that's the linear term corresponding to this embedding of the Petersen graph. I think I'm pretty good on time, and I wanted to say just a couple more things. As I said, we found various connections between the combinatorics of lambda calculus and the combinatorics of maps, particularly three-valent maps, but also some connections to general maps, and I think it's just the beginning of a story that we still don't really understand; I think there are deeper connections to be explored. One is about typing: there is an analogy, which can be made precise, between typing a linear lambda term and coloring the edges of the corresponding graph. Here I've recalled the typing rules that I showed earlier for linear lambda calculus, and now I just want to take the step of supposing that we interpret types more concretely, rather than being just logical formulas or
function types, we're going to interpret them in some concrete group. Given any abelian group, we can interpret the connective "a implies b" as the operation b - a, and now, just for the sake of argument, consider this in the group Z2 x Z2, the Klein four-group. Now I claim that if you take any ordered linear lambda term t, you can type it in this type system, where now you're drawing the types from the Klein four-group under this interpretation of implication, in such a way that any subterm u of t will be given the zero type of the group just in case u is closed, that is, u has no free variables. Put the other way: if u has a free variable, then it will be given a non-zero element of the Klein four-group as its type. And so now I give you a challenge problem: find a direct proof of this claim. That's essentially all I wanted to tell you, but I just wanted to give you a few pointers as well. As I said, the story of how I originally got interested in this was in the spirit of experimental mathematics, and something I like about this subject is that it's easy to play with these lambda terms and generate them, so I'll just leave some links here; since the slides will be available, you can click on the links. This tool, the lambda term visualizer and gallery, will let you put in a lambda term and then get the corresponding rooted three-valent map. This tool, the interactive lambda maps toy, does the converse: it will let you draw a three-valent map, like on the left, and then it will automatically compute the linear lambda term and the string diagram corresponding to it. And then there's a library at the bottom which lets you do some experiments, like generating random linear lambda terms and running some experiments on them.
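Coming back to the Klein four-group claim: as a quick sanity check, note that Z2 x Z2 can be encoded as {0, 1, 2, 3} under XOR, and since every element is its own inverse, b - a is the same as b XOR a; under the interpretation of implication above, the type of a subterm then works out to be simply the XOR of the types assigned to its free variables. The following sketch (my own hypothetical encoding, not from the talk) brute-forces an assignment of non-zero elements to the variables of a small ordered term and checks the claimed property: a subterm's type is zero exactly when the subterm is closed.

```python
from itertools import product

# The Klein four-group Z2 x Z2, encoded as {0, 1, 2, 3} under XOR.
# Under the interpretation "a implies b" := b - a (= b XOR a here),
# the type of a subterm reduces to the XOR of the types of its free
# variables, so checking the claim only needs free-variable sets.

def subterms(t):
    yield t
    if t[0] == 'lam':
        yield from subterms(t[2])
    elif t[0] == 'app':
        yield from subterms(t[1])
        yield from subterms(t[2])

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def good_assignment(term):
    """Search for types a_x in {1, 2, 3} such that every subterm's type
    (the XOR over its free variables) is zero exactly when it is closed."""
    xs = sorted(free_vars(term) |
                {t[1] for t in subterms(term) if t[0] == 'lam'})
    for vals in product([1, 2, 3], repeat=len(xs)):
        a = dict(zip(xs, vals))

        def ty(u):
            acc = 0
            for x in free_vars(u):
                acc ^= a[x]
            return acc

        if all((ty(u) == 0) == (not free_vars(u)) for u in subterms(term)):
            return a
    return None
```

For λx.λy.(x y) the search succeeds, and it must give x and y distinct non-zero elements, since the subterm (x y) is open and its type is the XOR of the two.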
So that's all; thank you, and I'm happy to answer questions. Thank you, dear Noam. No questions, remarks or comments? Did Karol raise his hand? He's speaking, but he's muted. Can I make a small comment, do you hear me? Yes, Gérard, do you hear me? Yes, I hear you perfectly. Okay, so I would like to come back to the first part of your talk, where you quoted some enumeration formulas derived by Tutte. There are of course some other papers which are related to exactly this kind of formulas, and in fact there is a connection between this type of formulas, as well as formulas derived by Mireille Bousquet-Mélou later in a series of papers, which are apparently very much connected to this lambda calculus. These formulas show that numbers like the formula which just appeared on the screen are in fact Hausdorff moments of positive functions; in this case this is an nth Hausdorff moment of an elementary positive function on the segment [0, 12]. Other formulas given by Mireille turn out to be also moments of rather more complicated functions. So my suspicion is that this whole zoo of formulas emanating from this set of theories might be connected with positive definite integer sequences. There is no proof, this is just an experimental observation, and we have published, with my collaborator Wojciech Młotkowski from Wrocław, a series of papers which can also be found via Sloane's OEIS. Dear Karol, just to make it precise: is it Hausdorff or Stieltjes moments? In most cases it is Hausdorff. Okay, so you said the references are in Sloane's OEIS? Yes, in the OEIS, and you can also find all these papers on the arXiv, so I can of course publish the reference list of these papers or just give it to you privately, as you wish; they are available and published. And my suspicion is that this is not just a coincidence but that it might be a rule, so there is probably some
connection with positive definiteness, okay? I have no proof of that. Yeah, I mean, of course I'd be interested in the references, and yes, something that I like a lot is that this field of map enumeration has lots of connections to other areas. Something else that I've been interested in recently is this work of Olivier Bernardi connecting planar map enumeration to counting certain kinds of lattice walks, lattice walks in the upper half plane, and it's also a question whether that has any connection to lambda calculus; that's a question that I'm interested in. Okay, other questions or comments? So thank you so much, dear Noam.