All right, everyone, it's the last talk of the year, so we're happy to have Sam, who's going to talk about how to write mathematics in some eloquent way — with as little content as possible per unit of words. Before we start, I'd like to thank N-Quan, who's watching over the slides over there. Thanks for your service. Sam is in his fifth year, just like Marco Park. So, please welcome Sam. — Thank you. I'd also like to thank the seminar organizers for inviting me to speak tonight; it's a great honor. This lecture is the culmination of five years of work by myself and my colleagues here at Carnegie Mellon in the burgeoning field of Mock Math theory. My aim here is to be more or less self-contained. I know there are some experts in the audience, and some of you are sitting in on your first Mock Math talk; I apologize in advance if I bore the experts today, but I want everyone to be able to enjoy this talk, especially the newcomers. Throughout the talk, I expect some of you will feel a little bit confused, especially if you're new here, but I hope by the end we're all very confused. All right, so today we're on verbosity — verbosity meaning using a lot of words where very few would do. This is a critical part of Mock Math theory: there's very little being said, of course, but very many math terms employed, and our goal here is to elucidate some of that theory. So, right, algorithms usually come with bounds, usually related to their error bounds, right? Mock-mathematically speaking, naturally, when you have an algorithm, you want to see how much you can get out of it — how much juice. Normally, you know, we want high upper bounds in Mock Math: lots of words being said, very few ideas. The French mathematicians have a special place in this theory — the French are very key in the Mock Math literature. The Bourbakis.
The Bourbaki group, well known for very rigorous and tight upper bounds, in particular in Mock Math. Nicolas was one of the founding members of the Bourbaki movement, along with Alexander Grothendieck — smooth Alexander Grothendieck, he's kind of number two. Right. These are papers from the early 90s. This was a burgeoning time for Mock Math. Of course, formally Mock Math started five years ago, but a lot of the ideas were already being developed by this Bourbaki, with Nicolas at the head of the research, and they just piled on the papers. Right. Will a drunk man ever get home? A classic in probabilistic theory, answered by Grothendieck and Bourbaki — and a drunk bird was answered as well, but that's going too fast. Here's a question that follows from the Bourbaki papers; it was open at the time. It was about, you know, coke addicts in LA on their way to Tuscaloosa. Bourbaki was adamant in trying to attack this question, but it was very, very challenging — one of the wide-open problems. Yes — Bourbaki, I believe, did his graduate work at Berkeley, and spent most of his nights traveling around, attending bars; he would end up in different places. And this gave him the idea for this question. Here's the setup: this axis is space, and this other variable, y, is — sorry, this one should be time, and y is an alcohol parameter. Right. So this is the drunk person after an hour. Note there's no upper bound here on the alcohol that you can carry. And home is the whole unit ball here. So we're starting somewhere away from home, as is standard for the drunkard's walk, and we're looking at X_t, which is random, and asking: when is he going to end up in the ball? Now, everyone knows the standard result for the usual random walk.
With a fixed alcohol content, there's a non-zero probability of returning, right? But there's also a possibility of not returning. The problem is then extended with this alcohol parameter, which is more interesting. Yes — what if we allowed negative y? Negative y — well, it's highly non-physical to have negative alcohol. I mean, we all have some alcohol in us most of the time, so I'd call y = 0 a very rare case: almost surely you're always going to have some alcohol — a grad student does, at least. A grad student was working on this, so it's natural that y should really only be considered strictly positive. But y = 0 doesn't cause any problems, because it's almost never going to happen. So he was eating his soup and he worked this out, with very interesting techniques combining forcing with the central limit theorem — the forcing central limit theorem. This might be very familiar to some of the experts in the audience: a critical result in the more complex theory. All sorts of results followed from the seminal paper on these drunkard's walks with alcohol parameters, related to verbosity theory, and M. Garcia, at MIT, expanded on it. MIT — the Mexican Institute of Tacos. It's a very interdisciplinary university: tacos and mathematics. One-star district, of course. And the drunkard's walk can be very unstable in the alcohol: there are fixed, critical levels of alcohol where the drunkard's walk turns from a very decent probability of returning home to virtually zero probability of getting home — more likely you end up under a bridge, or in an alleyway with a dog pissing on you. The type of alcohol is also very relevant, which wasn't included in Bourbaki's original model.
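The classical fact being riffed on here is Pólya's theorem: the simple random walk in the plane is recurrent ("a drunk man will find his way home") while in three dimensions it is transient ("a drunk bird may be lost forever"). A minimal simulation sketch of the setup as described — home is the closed unit ball, the walk starts away from it — where the alcohol parameter y is the talk's invention and is treated here as a purely hypothetical step-scale knob:

```python
import random


def drunkard_walk_returns(start=(5.0, 0.0), alcohol=1.0, max_steps=5000, rng=None):
    """Simulate a planar drunkard's walk X_t started away from home.

    'alcohol' is a hypothetical step-scale stand-in for the talk's y-parameter.
    Home is the closed unit ball around the origin; returns True if the walk
    enters it within max_steps.
    """
    rng = rng or random.Random()
    x, y = start
    for _ in range(max_steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += alcohol * dx
        y += alcohol * dy
        if x * x + y * y <= 1.0:
            return True
    return False


def estimate_return_probability(alcohol, trials=50, seed=0):
    """Monte Carlo estimate of the probability of making it home in time."""
    rng = random.Random(seed)
    hits = sum(drunkard_walk_returns(alcohol=alcohol, rng=rng) for _ in range(trials))
    return hits / trials
```

With a finite step budget the estimate only lower-bounds the true return probability; recurrence in the plane guarantees return eventually, but on a logarithmically slow timescale.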
But it's an easy extension: if you substitute sake with tequila, on large timescales you get a sort of branching process, a Galton-Watson process, where the alcohol just starts exponentially exploding, right? Because sake, everyone knows, induces more sake as you do sake bombs — more bombs will occur. Whereas tequila makes you just give up and want to stop drinking... no, sorry, that's tequila — I was speaking from my own experience. Bourbaki has a different — yeah, he drinks a different drink than myself. Right, but when the explosion happens, they call this the tequila sunrise, and it's related to the Mexican wave equation. If you came to our Mean Field Games Theory group, you would know more about this — we have some experts in the audience, and I don't want to bore them. The Mexican wave equation, for Garcia, was just the wave equation. Wait, so the reproduction process occurs regardless of whether you get home or not? You don't need to be at home to be drinking tequila. Very commonly you're not drinking at home — though Bourbaki drinks at home, and he also does it at the bar. Yeah. At what level does the alcohol parameter need to be set for the reproduction process to be memoryless? Oh, to be memoryless — if you really want to forget history, you really need at least... I think it's very non-trivial: 13π/74 is considered the critical value where the memoryless property kicks in. But it really depends on, you know, your level of drinking experience; in Bourbaki it's a parameter that really depends on the person. It starts at 17π/34 — that's the standard for first-time drinkers of the memoryless property. Good question, very good question. Yeah — is day drinking common in this model? That's a very good question. Day drinking is very common in tequila-based models, so I can't say so much. I would say that in the seminal work by Garcia there was probably no day drinking at all.
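The "sake induces more sake" mechanism just described is a textbook Galton-Watson branching process: with supercritical offspring mean (each drink spawning more than one follow-up on average) the population explodes exponentially with positive probability, and with subcritical mean it dies out almost surely. A sketch, with Poisson offspring as an illustrative assumption — the talk doesn't specify an offspring law:

```python
import math
import random


def poisson(lam, rng):
    """Sample Poisson(lam) via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1


def drink_generations(offspring_mean, generations, rng):
    """Galton-Watson process: each drink in a generation independently
    spawns Poisson(offspring_mean) drinks in the next generation.
    Returns the population size per generation, starting from one sake."""
    sizes = [1]
    for _ in range(generations):
        sizes.append(sum(poisson(offspring_mean, rng) for _ in range(sizes[-1])))
        if sizes[-1] == 0:  # sobered up: extinction is absorbing
            break
    return sizes
```

Running a few trajectories at means 0.5 and 2.0 shows the phase transition the speaker calls the tequila sunrise: subcritical runs sober up quickly, supercritical runs that survive the first generations grow roughly geometrically.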
But I haven't read the paper carefully, so I really can't speak to that. Anything else? What parameter value drives the Brownian motion to stop being a legal mathematical operation? Oh, right — when does it stop being legal? Is there an upper-bound threshold where it stops being legal, or is it just a matter of boundary values? I mean, it's the kind of thing where it's well known that there exists some upper bound above which you can't reasonably drive: you're never going to get home, and you're probably going to cause trouble if you're driving above that limit. But the standard theorems that state this are compactness theorems, so there's no real explicit level for where that barrier is. The legal limit, I think, is something like 0.08, but it's not really a hard-set value. The true value is kind of arbitrary — like a lot of key constants, someone just declared it to be this value, because you need to set a value. The true value at which driving stops being effective is something else entirely. Okay? All right. The axiom of choice should not be taken for granted, and there's a special version of choice that comes into the theory here, called sexy choice. Sexy choice is not implied by ordinary choice, but it has a lot in common with choice — a lot of the same results hold. Here, though, we're looking at six objects in particular. And this is pretty reasonable: no one has any qualms with choosing from six. So if you have a problem with standard choice, sexy choice is pretty agreeable for everyone. And "sex" means six in Latin — this is how it got its name: sex, as in six, choice. And all sorts of reproduction occurs in branching models as a consequence of sexy choice. Right, six seems funny, though; one always seems more important. I mean, I always say that in analysis — what numbers are there? Zero, one, finite, infinity...
That's basically all the numbers I deal with — and now, at last, six. Right, but there's a real combinatorial reason why it's something like six. You know, it's a very discrete operation, so it's pretty normal that there's combinatorics relevant to sexy choice: if you have six objects, then three of them are either pairwise reproductively active or pairwise indifferent. This is, you know, just a Ramsey-type argument. But DT — the well-known set theorist, the real DT, right, this was discovered by DT, and he's always talking about, you know, fake math, that's just DT. He's kind of a fringe mathematician, but we thank him for his sexy choice theorems. Yeah — speaking of the internet, you've been talking about only a discreet model of sex, but I've seen some really indiscreet models of sex on the internet, and I was wondering if you have any experience with those? Personally, I've only had sex with a discrete number of objects. But I have heard of those who — oh, so I think this is a different definition of sex, perhaps, that we're sitting on. Is that the issue? I mean, perhaps sex may be a spectrum, but having sex always seems discrete to me. I haven't read those theorems; perhaps we can talk about it after the talk. We can talk about it after the talk. So, right, following the work of DT, there was a combinatorist who was remodeling his house: "If my contractor keeps destroying the bathroom with his bowels — that's every four days — I can't avoid this." Now, it may seem strange that this is a model of sex, but I mean, it's pretty natural to go to the bathroom after sex — although I don't know if this guy was doing that under the contract. It's just the kind of thing where, you know, they're in your house, and you don't want to tell them they can't use your bathroom, but they're really stinking up the place and it's causing problems for your wife and kids.
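The six-objects claim above — among any six, three are pairwise reproductively active or three are pairwise indifferent — is exactly the statement of the Ramsey number R(3,3) = 6, and at this size it can be checked by brute force over all 2^15 two-colorings of the edges of K6 (the "active"/"indifferent" labels are my gloss):

```python
from itertools import combinations, product


def every_coloring_has_mono_triangle(n):
    """Check whether EVERY two-coloring of K_n's edges contains a
    monochromatic triangle, i.e. whether n >= R(3, 3)."""
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    for bits in product((0, 1), repeat=len(edges)):
        color = dict(zip(edges, bits))
        if not any(
            color[(a, b)] == color[(a, c)] == color[(b, c)]
            for a, b, c in triangles
        ):
            return False  # found a coloring with no monochromatic triangle
    return True
```

For five objects the claim fails (color the edges of a pentagon one way and the pentagram the other), which is why sexy choice really does need six.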
But the way you work this out is by looking at four-colored bigraphs on peanut-shaped vertices with smiley faces. I guess this begs for an image. So we have four colors; here are the peanuts. The smiley faces — the fake news — are formed by the image vector given by throwing a fair coin at each vertex, avoiding the others. Right, so the idea here is that you throw a fair coin for each one of these smiley faces, and then you try to avoid landing the coin on the graph. I mean, I drew this graph vertically, so it's extremely easy to avoid; but if I were to put the graph down on a table and toss the coin at the center, then by avoiding the peanuts — okay, so we didn't really explain this fake news map, but by applying fake news theorem 314, it comes out. This is a poorly explained theorem; the experts know what I mean here, so they can chime in. I'll take the help. Yeah — 314, this must mean something else. Okay. P.F. Chang's is a pretty natural thing to have after you get a DUI. P.F. Chang's is also a natural thing to have before you get a DUI. Right. So this is kind of a sad story: after P.F. Chang's was used to destroy the combinatorist's bathroom, the alcohol forced him back into P.F. Chang's, and there was a wreck over it. Right, so this guy took down all the publications — he was very mad at P.F. Chang. Apparently he's a local mathematician; perhaps one of the professors here would know the combinatorist's past. All right. So, right, we were talking about the 314 theorem — the 314 fake news theorem — and we want to prove this, right, while seeming completely crazy. Well, we want it to be verbose, certainly, but the mayhem could be a problem. We want to seem all there, but with lots of words and lots of confusion. It's pretty hard to pin down what that means, but this is kind of one of the central points of verbosity in mathematics.
Someone has to get convinced that this is mathematics, but you also need to have a lot of words and be very, very confusing. So here's the definition of mayhem. We have two words, A and B, as multisets of letters — so the order might not matter, though for words it usually matters. This may be a little unintuitive, but we look at the letters the words do not have in common, and the mayhem of a proof, right, is the average mayhem over the pairs of words appearing in it. So if the words are almost all the same — if the words were all the same, there would be no mayhem. But if you use lots of different words, there's lots of mayhem. So here's a proof of mayhem 3. This is nice — one of my favorite low-mayhem proofs. k times (n choose k) equals n times (n − 1 choose k − 1). "Selector selects listee, selects secret. Selector selects secret, listee selects rest." I think this needs to be broken down, so let me break it down. The selector selects from the listee and puts one aside — that's the secret one. "Selector selects secret, listee selects rest." Right, so of course on one side you choose k people and then choose one of them at the end: this is one way, choosing k and then choosing one of the k, giving k times (n choose k). And the other way is choosing the secret one first — n ways — and then choosing the remaining k − 1 from the n − 1 left over. So I obviously stumble over these combinatorial arguments — I'm not good at the board — but this is a very nice slide. But how can we optimize these proofs? Right, I mean, this is a pretty optimized proof — extremely low mayhem. Most mayhem-minimizing proofs are of sensible theorems, I would say. The approach here is to use random selection to help optimize, using the ABRACADABRA theorem. Abracadabra. And this monkey theorem is due to Monkey D.L.
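Two things on this slide can actually be machine-checked: the mayhem of a proof (taken here, as an assumption, to be the average multiset letter-difference over unordered pairs of words, since the talk leaves the normalization vague), and the double-counting identity k·C(n,k) = n·C(n−1,k−1) behind "selector selects listee, selects secret". A sketch:

```python
from collections import Counter
from itertools import combinations
from math import comb


def word_mayhem(a, b):
    """Mayhem between two words as multisets of letters: the letters
    they do NOT share, counted with multiplicity."""
    ca, cb = Counter(a), Counter(b)
    return sum(((ca - cb) + (cb - ca)).values())


def proof_mayhem(words):
    """Average pairwise mayhem over the words of a 'proof'
    (assumed normalization: mean over unordered pairs)."""
    pairs = list(combinations(words, 2))
    return sum(word_mayhem(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0


# The low-mayhem proof on the slide: heavy reuse of s-e-l-e-c-t letters.
slide_proof = "selector selects listee selects secret".split()


# The identity, checked by direct double counting: pick a k-committee and
# then the secret member, versus pick the secret member and then the rest.
def committee_then_secret(n, k):
    return sum(k for _ in combinations(range(n), k))  # k secrets per committee


def secret_then_rest(n, k):
    return n * comb(n - 1, k - 1)
```

Under this assumed normalization the slide's proof comes out at mayhem 2.8, close to the claimed 3.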
He was around in the great pirate era — part of the pirate school. A monkey will minimize the mayhem of any theorem, almost surely. This is extremely, extremely unintuitive, because monkeys are very poor at typing sentences like "selector selects" — unless your keyboard only has S's, E's, and L's, maybe T's — but it's true, when they type keystrokes independently. Right: grad students do not behave like monkeys. Grad students don't have independent keystrokes; their advisors are heavily influencing their keystrokes. Like, almost surely, if you take a grad student, they're going to be heavily influenced by their advisor, yeah. What about the undergraduate students? The problem with undergraduate students is their lack of good ideas. They may seem independent, but they're never going to reduce mayhem, because they can't optimize at all. So it's a troubling but true consequence — almost surely, almost surely. I'm sorry, can you remind me what's the difference between a monkey and an undergrad? Right — they're very similar objects. But it really comes down to the fact that monkeys have much more independence than the undergrads; that's what it really is. As you see here, monkeys can make independent keystrokes. Undergrads are heavily influenced, but there's kind of a pseudo-independence: they seem to be independent because they come up with their own ideas, but the ideas are all bad. So it looks like independence, but really it's nothing. Okay, any questions? I might be going a little fast, sorry — I did say I wanted this to be self-contained. Yes — so what would happen to a monkey on a wine-tasting night? Wine-tasting night — you're asking, will it change the independence of the keystrokes? Will it increase or decrease mayhem, for the application of mayhem to monkeys? Well, this leads to something we weren't really going to talk about: the theory of the drunken monkeys.
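The monkey theorem the speaker is gesturing at is the classical infinite-monkey/ABRACADABRA fact: a monkey typing i.i.d. uniform keystrokes eventually produces any given string, and the martingale argument gives the expected waiting time as a sum of alphabet-size powers over the pattern's self-overlaps — e.g. on a two-letter keyboard (my simplification), 2² + 2 = 6 keystrokes on average for "AA" but only 2² = 4 for "AB". A simulation sketch:

```python
import random


def waiting_time(pattern, alphabet, rng):
    """Keystrokes are i.i.d. uniform over the alphabet; return the time at
    which the pattern first appears."""
    typed = ""
    t = 0
    while True:
        typed += rng.choice(alphabet)
        t += 1
        if typed.endswith(pattern):
            return t
        typed = typed[-len(pattern):]  # only the recent suffix matters


def mean_waiting_time(pattern, alphabet="AB", trials=20000, seed=0):
    """Monte Carlo estimate of the expected waiting time."""
    rng = random.Random(seed)
    return sum(waiting_time(pattern, alphabet, rng) for _ in range(trials)) / trials
```

The gap between "AA" and "AB" is the whole content of the ABRACADABRA theorem: patterns that overlap themselves keep you waiting longer, which is exactly why a mayhem-minimizing string like "selector selects" is so expensive for a monkey to produce.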
But we don't really want to talk about that, because bringing these two theories together introduces a lot of technical issues that come with wine, and we're trying to keep it simple here — I said I want this to be self-contained. So drunken monkeys are not something we'll be concerned with. Right, so we don't want to use grad students, but — I mean, they don't have independence, yet you get much faster convergence. Grad students are usually better at getting things out than monkeys; they're sufficiently regular animals and much more efficient with their keystrokes. They're just not going to give you the correct kind of mayhem you're hoping for — maybe something with machine learning. All right, so here we're going to work this out — I want to do this in a little bit of detail, because this is pretty important: a uniform bound for the mayhem of minimizers, uniformly over all theorems. Right, and this is by the Chinese — okay, I guess it's by the Chinese, Japanese, and Koreans. The mayhem of a mayhem-minimizing proof of any theorem is uniformly bounded by 2; so the least mayhem you can get for any proof is at most 2, right there. All right, so here's the proof. Suppose that every word a_1, ..., a_m in a proof has exactly one letter. You might say this is an extremely strong assumption, but I tell you, there are a lot of one-letter words, and if you take a lot of one-letter words and string them together, they look like multi-letter words in the end — with the right procedure, of course, not just any words. All right. And of course, if the letters of two words are different, their mayhem is two; so every proof can be done with mayhem at most 2, just by using one-letter words. So, right, this is a very short proof of a nice, very useful theorem. And these proofs always exist: in Chinese, Yuen Ren Chao had some amazing essays in
Chinese, each word only one syllable. This is a nice application of the theorem — a good example. It reads like "shi shi shi". The experts in the audience really understand this one; to everyone else, I apologize. All right, and then very, very sharp results were developed by the school in the forest, using the blow-up method, and this is the refined theorem: every mayhem-minimizing proof has mayhem zero. This is extremely unintuitive, but you take all the words in a proof and make them one word, just by repeating that word. I'll call this one a, not b_n — this is going to be the same word, with the exception of quoting the f-word, right? So, okay, you take the s-word and leave it out of the original proof, and every other letter is the same in this word. And here's an example of how you do this, we think. All right, I'll entertain this: "selector selects listee selects sequence selector selects secrets listee selects sequence selects rest selector selects listee selects secret sector." As you can see, this is extremely relevant to verbosity, and so on, and so on, probably. Right, so here's the issue with the proof — with mayhem itself. If you take 1/(n − 1) times the sum of the differences between pairs of words, this is only a semi-norm, similar to the BV or W^{1,p} seminorms, but we really want the full power of a norm here. This can be zero on non-trivial objects: we have these mayhem-minimizing proofs which have zero mayhem, but they're non-trivial proofs — really good, very informative proofs. So really it's saying that mayhem minimization is only semi-normal; we should be thinking of minimizing the length, or entropy, or some kind of readability of a proof, to really define what good mayhem is. All right — wait, Sam, I can't make out that 49th word in the Chinese. In the Chinese? It's "shi". Oh, thanks. Is it okay? Yeah, I know, Chinese is not my first language. Right, so we've been spending all this time talking about proof optimization with
respect to mayhem. But just this past month — this went from almost no one talking about this topic to 14 people talking about this topic — an amazing algorithm was put together for reducing unreadability. I say unreadability because, as I said, we don't actually want readability; we just want it to sound really fancy. It goes somewhat as follows: you read the proof and rewrite it without changing anything too quickly — don't make any sudden changes in the language — but make it just as self-contained as possible, no nonsense. And so, you know, this is an adaptive procedure: every single time we go through the proof, we want to reduce the unreadability. I was going to say the contradiction here is that I can read French, but I can't understand it — then just don't do it. It's also well known that grad students don't really enjoy, you know, doing these kinds of computations and flipping around textbooks — equation 2.2.2.4 — so you want to avoid this: you want to make nice proofs, and stop telling people to flip around all over the book. And the last thing you do is prove necessity, by taking all the nonsense and summing it up over the cubes, and then you just keep going like that — it's just switching the universe. You do this inductively until you reduce all the nonsense. Do we need cubes?
It seems like any complex bounded domain is okay, but the periodicity is kind of important here — I mean, you have to have some kind of structure like that, the periodic structure of the cubes. The cubes, or, what do they call them, parallelepipeds. So let's think about a theorem today, and consider all its proofs. Right, we've been interested in the simplest proofs — the simplest to me — but they still have to sound pretty nice, pretty fancy. So take a map on all subsets that measures the kind of unreadability of proofs: this is the frustration map, the extent to which you want to punch the author. "Left as an exercise to the reader," as a whole proof by itself, is infinite frustration; if you put it inside a proof, then, depending on the length and complication of the exercise, the frustration changes. But one of the things we're looking for in frustration theory is to determine the smallest frustration over all proofs — we want the least frustrated proof, of course. Now, of course, this is kind of difficult to compute, because frustration — you know, different people get frustrated by different things — but here Cummings made some good progress; he's very good at coming up with minimally frustrated proofs. And maybe this is not Cummings — this is just J. Cummings, I shouldn't say — there's no asterisk; right, this is probably more related to co-Cummings. There exists a universal constant C such that, for any theorem, if some subset of proofs is written by undergraduates, then of course they're going to be more frustrating. So for some theorems, you know, this constant is not always going to be tight: some theorems are always going to have frustrating proofs, and so this doesn't give a tight bound on those. But there exist theorems which could have a very non-frustrating proof — it's just not going to be provided by an undergrad, especially not in a first analysis class. "Reducing Frustration," by Cummings, is the seminal paper where you get
all the results. The C is unknown; we just know it exists. You can look at papers and see there are numerical bounds — everyone's seen frustrating proofs written by undergrads — but, okay, are there questions? Can you go back to the previous slide, please? So, are there known empirical values for the constant C, at least — I mean, if you go and look in practice? As I said, there have been lots of numerical results for the frustration; the problem is a lot of them don't scale things correctly. There needs to be a standard for what frustration is, and there are a lot of personal ways to measure frustration. So, like, in any given scale there exists some estimate, but the problem is there's no real standard for what frustration is. Anything else? Yeah — so, to get a better sense of this, what would the frustration be of sexy choice in a bargaining game where it is denied? Right — so I think you should read the paper, to reduce the frustration; I mean, most of it's outlined in the paper. When it comes to the sexy — yeah, when it comes to denying sexy choice, there's a lot of frustration. But, all right, I've got to be honest, I haven't read the paper; just read the paper. Also, at this point, have we proven the lower bound — is it not only upper bounds in this frustration theory? No — an undergrad can write unbelievably frustrating proofs; this is probably not interesting. Yeah. Do you know if we have a copy of this paper in the library, or, like, on Google — "Reducing Frustration" by Cummings? You might want to try MathSciNet first, and if you don't get any hits, then you can go to Google. All right, so let's consider a theorem, and take some positive bound, and then we're looking at — right, since
we know what happens with undergraduates, we're interested in what happens after you get your degree. So we're looking at the set of proofs written after the undergrad — after the undergrad years. So, for a fixed theorem, the mapping from how many years after your degree to the frustration — the minimum frustration — is continuous. So, I mean, as you get older, the proofs you can write are going to be continuously changing; there are no sudden age insights, where you turn 30 and all of a sudden you're writing differently. So the frustration after your degree is obviously going to be decreasing, up until the point you get tenure, and then there's exponential growth — and this is just because, after tenure, you don't need to care about the quality of anything you do. I mean, you don't need to care, but this is all on average — we're thinking about average frustration. At the point of retirement, most people reach infinite frustration — at least enough of them to cause a singularity in the average. Okay, so we've been seeing these results between mathematical maturity — how far you are after your degree — and theorems, and this is going to motivate a few definitions. We say a theorem is sexy if and only if its proof contains six mutually exclusive cases — back to six here — and σ_m(t) denotes the number of sexy theorems produced, without partitioning, by mathematician m in year t. So, right, most theorems are not sexy; in fact, theorems usually remain unsexy — that's way too many cases — but people have been interested in sexy theorems for a long time. And the conservation of time-and-energy principle follows. Remember we said θ_m(t) is the number of theorems written by m in year t — I think this is supposed to be θ — oh, okay, no, this σ is the number of sexy theorems produced by the mathematician, and θ the number of theorems. Okay, so we're interested in — right, if you take any year t — this should be years after your bachelor's — the amount of sex times the amount of theorems should be no more than G over t. What's G?
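Before pinning down G: the conservation principle as just stated can be written compactly (this is only my transcription of the slide, with σ_m, θ_m, and t as defined above):

```latex
% Conservation of time and energy: for every mathematician m and every
% year t after the bachelor's degree,
\[
  \sigma_m(t)\,\theta_m(t) \;\le\; \frac{G}{t},
\]
% where \sigma_m(t) counts the sexy theorems and \theta_m(t) all theorems
% produced by m in year t, and G is the constant of reproductivity.
```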
Right, G should be some universal constant, very large — very large to account for the fact that right after you graduate you might produce a lot of theorems, though you probably won't produce that much sex. G is here — oh, this is the constant of reproductivity. Right, so obviously you can have more sex if you have more G, more reproductivity. And here's an open question that I want to leave you guys with: if we have any function f, then the number of sexy theorems over f, with zero sex, must decay really fast. Yes — when you have G constant here, doesn't that contradict the earlier results by Bourbaki, where, with alcohol, reproduction is exponential? Right, it would seem like it, but the thing is, this G can get really large. And I think usually, on average, alcohol is reduced after grad school, so really the regime where the alcohol was relevant is for fairly small t — I mean, not that small, kids around year six, but relatively small parenthood — and on the theorem side, theorems for me is less than two, maybe one, if you call it a theorem. So alcohol might be high, but theorems are still pretty low, so the bound — the bound works out. Is G dependent on m, or universal? Oh, G — right, this G holds for a certain class, but — okay, so, right, if you want to apply it for fairly small t, you're going to have to exclude certain mathematicians that write lots of theorems, but that's a very small subset of mathematicians. So for the vast majority of mathematicians, is it a fixed constant G? It doesn't say here, but there are, you know, some technical assumptions, as always. Okay, that's all I had. Thank you. Okay, any questions — before the big reveal? Oh, quick question, yes: so Adrienne Berry had suitably brought up a question of intimacy and sexy choice, right, and I was curious about the relationship there — so, going into the realm of continuous mathematics now, will Brownian motion
yield a reproductive process? Brownian motion by itself is not reproductive, but — if I understand correctly — you can append a reproductive behavior onto Brownian motion, and, as we said, you need six Brownian motions, right, and when you have them come together, right, that's when you're going to have sex. But there's a very low probability that any Brownian motion will have sex more than a couple of times. I don't know what that is — Brownian motion acting on Brownian motion — this is one of my open problems. If you want, you can scroll back... that's it. Thank you again.