Okay, for this presentation, Camille is going to talk about experiments in ergodicity, and this was supervised by Ole, Ole, Alex, Jonathan and Marc. So please, Camille, go ahead, thank you. Yes, so hello everyone. I'm going to start by explaining the aim of my project. The aim was very simple: it was to improve what we call the Copenhagen experiment. So what is the Copenhagen experiment? It is an experiment on choice, on decision-making, which was conducted a few years ago in Copenhagen, at the Danish Research Centre for Magnetic Resonance. Here is the manuscript from this experiment, which will soon be published. It was done in collaboration with people from around the world, including fellows from the LML. The aim of the Copenhagen experiment was to validate the ergodic theory of decision-making, which I will discuss throughout my talk. Now we can ask why we even care about improving the experiment. The first reason is that we want to address the criticism that appeared after the experiment, after this manuscript was made available. And more importantly, we want to replicate the main finding of the experiment. Before I explain how we can improve the experiment, I have to explain the experiment itself. It is a gambling experiment with real money, and it was held in Denmark, so the money is Danish kroner. It was divided into two separate days, and on each day a subject experienced a different wealth dynamic. On the additive day, a subject's wealth changed additively. There were nine fractals, nine different stimuli, each associated with a different wealth change. So, for example, after applying this fractal, a subject gained 428 kroner. And on the multiplicative day, wealth changed multiplicatively. Again we had nine different fractals, each corresponding to a different wealth change.
So after getting the best fractal, your wealth this time gets multiplied by a factor of 2.23, and by getting the worst possible fractal on this day, you lose more than 50% of your wealth. Each day was then divided into two sessions, a passive session and an active session. In the passive session, a subject just had to passively learn how the fractals work: they observed a sequence of fractals and the wealth changes that came with it, and after many, many trials they learned the meaning, the wealth change, associated with each fractal. The active session, on the other hand, consisted of choice trials where a subject had to make a decision. Here is an example trial. The subject needs to choose between the left gamble and the right gamble, so they can either choose left or right. Each gamble consisted of two fractals, and after the subject made a choice, a coin was tossed and a random fractal from the chosen gamble was selected to possibly affect wealth. Note that each gamble was mixed: one fractal was associated with a gain and the other with a loss. Okay. And the last thing, which will be important for the improvement of the experiment: in the active sessions, the outcomes were hidden from subjects throughout the entire experiment. Only after the experiment ended and the subject had made all of the choices was the final wealth revealed, so they didn't know their exact wealth during the experiment. Moreover, not all trials affected wealth: only 10 out of, I guess, 312 trials were selected and applied to the final wealth. But subjects had to make the best possible decision on every trial nonetheless, because they didn't know which trials would be selected to affect wealth. Okay. So now we can ask how this experiment allowed us to discriminate between models. We have the prevailing model in economics, the model of choice under uncertainty called expected utility theory.
And the new model, which I will discuss, is the model that was created within the framework of ergodicity economics (I don't know if that is its official name). These models make different predictions when it comes to different wealth dynamics. Expected utility theory predicts that the risk attitude, represented here by this parameter, should be stable across dynamics. EUT says that the risk attitude is really a characteristic of the individual: it should be fairly stable, and it can differ between subjects, but not between dynamics. On the other hand, we have ergodicity economics, which states that subjects are really trying to maximize the rate at which their wealth grows, and it gives very specific predictions for the risk attitudes when the dynamic changes. For example, in the additive dynamic it predicts that a subject, to behave growth-optimally, should be risk neutral, while in the multiplicative dynamic they should be risk averse. And the shape of this risk aversion, which I will come back to later, should be very specific to this dynamic in order to maximize the growth rate. And this is the same thing shown in a different way: the expected posterior distribution of risk aversion for both theories, with the additive day on this axis and the multiplicative day on this one. Ergodicity economics postulates that the risk aversion should be zero for the additive day and one for the multiplicative day, so the fitted risk aversion should lie somewhere around this point. EUT, on the other hand, says that it should be the same across dynamics, but that different subjects can have different values; hence this long diagonal, because different subjects may lie in different places along it. And after collecting the data and fitting the Bayesian model, the result showed that the posterior distribution of risk aversion is located here.
So it is, even visually, much closer to the ergodicity economics prediction, and it was shown using rigorous Bayesian methods that the ergodicity economics model is more likely to be the model generating the data. And now let's get back to the aim of my project. At the onset of the project, we tried to figure out how to improve the experiment, and we had these three ideas, which I will walk through during the remainder of my talk. First, very simply, we wanted to add more wealth dynamics, not just have two, and in particular we wanted at least one dynamic which encourages risk-seeking behaviour. Second, we wanted to change the design slightly to show all the outcomes and realize all trials. And finally, we wanted to optimize the design so that we have a greater probability of discriminating between the competing models. Let's start with the wealth dynamics. One of the theorems of ergodicity economics says that there is a connection between a wealth dynamic and an ergodicity transformation; ergodicity transformations and utility functions are really different names for the same mathematical object, so I will use the two terms interchangeably. For a wealth dynamic, we can find a utility function such that a subject using that utility function will be growth-optimal under that dynamic. Conversely, for a utility function, we can find a dynamic in which such a subject behaves growth-optimally. This is exactly what Alex was mentioning during his talk on Wednesday. And, from the psychological perspective, a utility function has an associated risk attitude. Now we want to use this relationship going from right to left: take a well-known class of utility functions and use it to create more wealth dynamics. So that is exactly it: the utility function, or ergodicity transformation, is just a function acting on wealth, transforming it into another value.
And a wealth dynamic is just a function that says how wealth grows over time, and we want to go from one to the other. Okay. So the theorem says that we need to find a wealth dynamic x(t) with the following property: the transformed wealth should grow at a constant rate over time, expressed as the derivative d u(x(t))/dt = gamma. In other words, we need to find a wealth function x(t) that is linearized by applying the ergodicity transformation u. We can now use the definition of the derivative, drop the limit and consider a finite time step to get u(x(t + dt)) = u(x(t)) + gamma dt. Reorganizing this, we get the equation for updating wealth after a time dt, namely x(t + dt) = u^-1(u(x(t)) + gamma dt), which says that we have to do three things: first transform the initial wealth, then add the constant rate times the amount of time elapsed, and then apply the inverse ergodicity transformation to go back to wealth. We decided to use the well-known isoelastic utility, also mentioned on Wednesday, basically because it is the most popular utility function, so it will be easy for everyone to understand. It has some nice properties, and, to be honest, also some bad properties, but we decided to stick with it. If we plug this utility function into the equation at the bottom, we get the update rule for the isoelastic dynamics. Let's pause for a second and try to understand why this is so useful for creating new dynamics. First of all, by changing gamma, which is the growth rate, we can create different wealth changes. These correspond to different fractals, which then allow us to create new gambles. If we set the growth rate to zero, we get this middle fractal, which doesn't affect wealth: after applying it, wealth doesn't change. If the growth rate is positive, we get fractals that increase our wealth, and if it is negative, we get fractals that decrease our wealth.
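As a quick sketch, the three-step update described above could be written like this (a minimal illustration; the function and variable names are mine, not from the project's codebase):

```python
import math

def u(x, eta):
    """Isoelastic ergodicity transformation / utility function."""
    if eta == 1.0:
        return math.log(x)
    return (x**(1.0 - eta) - 1.0) / (1.0 - eta)

def u_inv(y, eta):
    """Inverse of the isoelastic transformation."""
    if eta == 1.0:
        return math.exp(y)
    return ((1.0 - eta) * y + 1.0) ** (1.0 / (1.0 - eta))

def apply_fractal(wealth, gamma, eta, dt=1.0):
    """The three steps: transform wealth, add gamma * dt, transform back."""
    return u_inv(u(wealth, eta) + gamma * dt, eta)

# eta = 0 recovers the additive dynamic: gamma * dt is added to wealth
print(apply_fractal(1000.0, 10.0, 0.0))           # -> 1010.0
# eta = 1 recovers the multiplicative dynamic: wealth is scaled by exp(gamma * dt)
print(apply_fractal(1000.0, math.log(2.0), 1.0))  # -> ~2000.0
# gamma = 0 gives a fractal that leaves wealth unchanged
print(apply_fractal(1000.0, 0.0, 0.5))            # -> ~1000.0
```

The printed examples check the two special cases mentioned in the talk: setting eta to 0 or 1 in the same formula reproduces the additive and multiplicative wealth changes.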
And by changing the other parameter, eta, which we know is the risk attitude in the isoelastic utility, we can manipulate the wealth dynamic itself: we can change the way wealth grows over time when these changes are applied sequentially. The nicest thing about the isoelastic family is that we can recover the two cases that we previously had in the Copenhagen experiment, namely the additive dynamic for a risk attitude of zero and the multiplicative dynamic for a risk attitude of one. We can also get everything in between, as well as a whole range of risk-seeking dynamics for negative values of this parameter. And because we all like symmetry, and we wanted at least one dynamic which encourages risk-seeking behaviour, we chose this set of dynamics for the final experiment, adding a new risk-seeking dynamic with a negative value of this parameter. Okay, so now let's move on to the second point, which is the problem of trial realization. When we design a gambling experiment, any gambling experiment, we basically have two choices to make. First, we have to decide how many trials we realize: we might realize only one, and then see how subjects behave across trials knowing that only one trial will be realized. And we can hide or show the outcomes: we can provide a constant feedback loop for the subject, or we can hide the outcomes and, as in the Copenhagen experiment, show the final wealth after the experiment. The Copenhagen experiment sat here, and with the new experiment we want to move up to this corner: realize all trials and show all the outcomes. Now let me try to explain why this is so useful and what advantages and possible challenges come with this decision about the paradigm. First of all, it is obviously more realistic, because in real life we usually have this constant feedback: we can see how wealthy we are, how many resources we have, and so on.
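To illustrate how eta shapes the dynamic, here is a small self-contained check; eta = -0.5 is just an illustrative risk-seeking value, since the talk does not state which negative value was chosen for the experiment:

```python
import math

def step(x, gamma, eta, dt=1.0):
    """One wealth update under the isoelastic dynamic: u_inv(u(x) + gamma*dt)."""
    if eta == 1.0:
        return x * math.exp(gamma * dt)
    y = (x**(1.0 - eta) - 1.0) / (1.0 - eta) + gamma * dt
    return ((1.0 - eta) * y + 1.0) ** (1.0 / (1.0 - eta))

# eta = 0: additive -- the same gamma adds the same amount at any wealth
assert math.isclose(step(500.0, 100.0, 0.0) - 500.0,
                    step(2000.0, 100.0, 0.0) - 2000.0)
# eta = 1: multiplicative -- the same gamma multiplies wealth by the same factor
assert math.isclose(step(500.0, 0.1, 1.0) / 500.0,
                    step(2000.0, 0.1, 1.0) / 2000.0)
# eta = -0.5 (illustrative risk-seeking value): the same negative gamma
# takes away more wealth when wealth is low than when it is high
loss_when_poor = 200.0 - step(200.0, -100.0, -0.5)
loss_when_rich = 2000.0 - step(2000.0, -100.0, -0.5)
assert loss_when_poor > loss_when_rich
```

The last assertion is the property that makes bankruptcy hard to control in the risk-seeking dynamic, which comes up again below.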
And it is definitely more engaging, because you can imagine that the game the subjects are playing is more exciting if they can see at each trial whether they gained or lost. More importantly, I think, by doing this we can also capture temporal and wealth-dependency effects. What is the wealth-dependency effect? We know from the theory that wealth should influence choice. Imagine an agent with a fixed risk attitude facing some gamble pair: it turns out that, depending on their wealth, their choice can be different. If they have, say, 1,000 kroner, they might choose the left gamble because it is more profitable for them, but if they have 3,000, it might be more beneficial to choose the right gamble. And if we hide the outcomes from participants, it is much harder to believe that this effect will occur, because the subjects really have no idea what wealth they currently possess. Now, what are the challenges of this approach? First, it is very difficult to control the trajectory of wealth, because the changes are so frequent that it is easy to either go bankrupt or exceed the fixed amount that we have for one participant. The bankruptcy problem is especially difficult to control in the risk-seeking dynamic, because there the changes in wealth are steeper when wealth is low. You can imagine a subject who is simply unlucky and hits the worst fractal three times in a row, and then they are bankrupt, and for ethical reasons we cannot, for example, include the possibility of going into debt. Also, in the multiplicative dynamic, for instance, it is very easy to exceed the maximum payout, because if you multiply something many, many times, you can easily imagine how fast it can grow.
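A toy illustration of the wealth-dependency effect, with hypothetical gamble numbers (not taken from the experiment) and an agent with a fixed log utility:

```python
import math

def expected_log_utility(wealth, gamble):
    """Expected log-wealth after a 50/50 gamble of two additive outcomes."""
    return sum(0.5 * math.log(wealth + dx) for dx in gamble)

# Hypothetical gamble pair (illustrative numbers only):
left = (+200.0, -100.0)   # bigger swings
right = (+50.0, -10.0)    # safer

def choose(wealth):
    """Pick the gamble with the higher expected log utility."""
    l = expected_log_utility(wealth, left)
    r = expected_log_utility(wealth, right)
    return 'left' if l > r else 'right'

# The same fixed risk attitude picks differently at different wealth levels:
print(choose(150.0))    # -> 'right' (the safer gamble, when poor)
print(choose(1000.0))   # -> 'left'  (the riskier gamble, when rich)
```

The flip happens because a 100-kroner loss is catastrophic in log terms at low wealth but barely matters at high wealth, which is exactly why hiding the current wealth would suppress the effect.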
There are also some drawbacks of this design, namely potential confounds from emotional processes and from probability matching, where subjects see patterns in randomness that are not there; subjects are prone to that, but this is a general problem in this type of experiment. And we figured out a very, very simple solution, which turns out to work very well with our next idea, which I'll discuss when I get to the third point. We simply said: when a subject goes bankrupt, i.e. hits zero wealth, or exceeds the maximum payout intended for them, the experiment just ends. You can see these simulated wealth trajectories: some of them made it to the end of the experiment, while the red ones went bankrupt at some point and the experiment ended there, and similarly for the green ones, which exceeded 4,000 kroner. So let's move on to the last point, which really brings together all of the previous ideas. What does it mean to optimize the experimental design when the experiment is meant to discriminate between competing models? A good experiment should simply provide data that lets us reliably discriminate between the competing models. Conversely, we can think of a very bad experimental design as one with the property that the competing models produce the same predictions for it; that is a terrible design. Using that simple idea, we can introduce a measure of disagreement. It is based on the simple heuristic that simulated responses from the ergodicity economics agent should differ from the responses of the expected utility theory agent: if we simulate both agents, their responses to the same sequence of stimuli should be different. So here we have some experimental design consisting of, say, 10 trials, each trial consisting of two gambles, a gamble pair. We simulate the EUT responses and the EE responses and just calculate the probability, the frequency, of disagreement.
That is, how often one agent chose left and the other chose right; in this case it would be 30%. We can then tweak the experiment to try to increase that even further. And why does this go so well with the bounds idea? Imagine that we have fractals with very large growth rates, so the agents have a really high chance of going bankrupt very fast. Say the EUT agent went bankrupt after two trials and the EE agent after three. Then we lose all the remaining trials, which previously contributed to the disagreement, so by truncating the experiment we decrease the disagreement. The disagreement measure really combines these two ideas into one: the difference between the models, and the effect of ending the experiment too soon because of large growth rates associated with the fractals. In this case, the disagreement would drop to 0.1, and therefore this design would be worse. And these three things give us the optimization framework. It goes like this: we have some experimental design, consisting of trials, fractals and so on; we simulate responses; we calculate the disagreement between the competing models; and then we improve. By turning the knobs and flipping the switches, we can, for example, change the growth rates of the fractals, the number of trials, the number of fractals, whatever we want. Then we go back to the disagreement and see whether we improved or not. I will briefly discuss one of the possible knobs that we played around with, because the stage the project is at now is that we have code written for all of this, and we are trying to think really hard and figure out which of these knobs and switches are worth turning to see if we improve. So this is the gamble space.
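A minimal sketch of how such a disagreement score could be computed; the agent choices below are made up for illustration and are not from the actual simulations:

```python
def disagreement(choices_a, choices_b, end_a=None, end_b=None):
    """Fraction of the planned trials on which two simulated agents disagree.

    end_a / end_b give the trial at which each agent's run was truncated
    (bankruptcy or max payout); trials after the earlier truncation can no
    longer contribute to disagreement, but the denominator stays the full
    planned length of the experiment."""
    n = len(choices_a)
    horizon = min(end_a if end_a is not None else n,
                  end_b if end_b is not None else n)
    differing = sum(1 for a, b in zip(choices_a[:horizon], choices_b[:horizon])
                    if a != b)
    return differing / n

# Hypothetical 10-trial design, choices differing on 3 trials:
eut = ['L', 'L', 'R', 'R', 'L', 'L', 'R', 'L', 'R', 'R']
ee  = ['L', 'R', 'R', 'R', 'L', 'R', 'R', 'L', 'L', 'R']

print(disagreement(eut, ee))        # 3 of 10 trials differ -> 0.3
print(disagreement(eut, ee, 2, 3))  # truncated at trials 2 and 3 -> 0.1
```

The second call mirrors the example in the talk: one agent bankrupt after two trials, the other after three, so only one of the three disagreements survives and the score drops to 0.1.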
This is our representation of all possible gambles: the x-axis corresponds to the right fractal, or the fractal on the bottom, and the y-axis corresponds to the fractal on the top. These are the growth rates associated with the fractals, and the colour corresponds to the average growth rate of the gamble. An edge in this space, connecting two dots, represents a pair of gambles, so the experiment really consists of a set of possible edges that we can draw in this 2D space. Now we can, for example, take all mixed gambles, as in the original Copenhagen experiment, and just tweak the maximum growth rate for the fractals. We can increase it to 400, which would mean the wealth changes are more drastic, or we can decrease it to, say, 50. After calculating the disagreement for all these possible values of C, which is just the maximum value of the growth rate, we can see that there is a sweet spot somewhere here where the maximal disagreement between the competing models is reached. If we start from zero, the disagreement is really, really low, because all agents agree on the same gambles; then we reach this maximum, and then we decrease again. And we can see from the right plot, which shows the probability of going bankrupt or exceeding the upper bound, that the disagreement goes lower because more and more trajectories are prone to ending early by hitting the bounds. Yeah. So that's just one example, and that's basically it. At the end, I'd like to thank all my supervisors for their help and all the useful advice, with special thanks to Oli Kulmak for really countless hours of discussion on this project, which enabled us to advance the project to its current stage. Yeah. So that's it. Okay. Thank you, Camille, for again, this very nice, very neat presentation. Congratulations to you and to your supervisors for the project. So it is time for questions.
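Here is a rough, self-contained sketch of such a sweep, under simplifying assumptions that are mine and not from the project: a purely multiplicative dynamic, a risk-neutral EUT agent (eta = 0) against a growth-optimal EE agent (eta = 1), randomly drawn rather than mixed gambles, and illustrative wealth bounds of 1 and 4,000:

```python
import math
import random

def gamble_value(wealth, g1, g2, eta):
    """Expected isoelastic utility of a 50/50 gamble whose two fractals
    multiply wealth by exp(g1) or exp(g2) (multiplicative dynamic)."""
    def u(x):
        return math.log(x) if eta == 1.0 else (x**(1.0 - eta) - 1.0) / (1.0 - eta)
    return 0.5 * u(wealth * math.exp(g1)) + 0.5 * u(wealth * math.exp(g2))

def simulate_disagreement(c, n_trials=200, seed=0):
    """Disagreement between a risk-neutral EUT agent and a log-utility EE
    agent on random gambles with growth rates drawn from [-c, c]; a run is
    truncated when either agent's wealth leaves the (1, 4000) bounds."""
    rng = random.Random(seed)
    trials = [[(rng.uniform(-c, c), rng.uniform(-c, c)) for _ in range(2)]
              for _ in range(n_trials)]
    wealth = {'eut': 1000.0, 'ee': 1000.0}
    eta = {'eut': 0.0, 'ee': 1.0}
    disagreements = 0
    for left, right in trials:
        choice = {}
        for agent in ('eut', 'ee'):
            vl = gamble_value(wealth[agent], *left, eta[agent])
            vr = gamble_value(wealth[agent], *right, eta[agent])
            choice[agent] = 'L' if vl >= vr else 'R'
            picked = left if choice[agent] == 'L' else right
            g = rng.choice(picked)           # coin toss selects a fractal
            wealth[agent] *= math.exp(g)
        if choice['eut'] != choice['ee']:
            disagreements += 1
        if not all(1.0 < w < 4000.0 for w in wealth.values()):
            break                            # truncate at the bounds
    return disagreements / n_trials

# Sweep the knob C, the maximum growth rate:
for c in (0.05, 0.2, 0.5, 1.0):
    print(c, simulate_disagreement(c))
```

At c = 0 the agents always agree, and for large c more runs leave the bounds early, which is the mechanism behind the sweet spot described in the talk.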
We have five minutes, so you know the drill, guys. Go ahead. Yeah, I've got one quick question. A very nice presentation, Camille. I was just wondering about the truncation. Is the intention to tell the subjects that there are these upper and lower bounds? And if you do tell them, does this somehow change their behaviour? And if you don't tell them, is there an ethical problem? Yeah. So this is the one issue related to that bounds solution that we are really trying to figure out. Because, as you mentioned, if we tell them, they may change their behaviour: they are no longer trying to maximize growth over a very long period of time, because they know the experiment will be truncated when they hit, say, 4k, and they may start to behave differently when they are close to the bound. On the other hand, we cannot really deceive them and just not mention the bound. So we are trying to come up with some solution in the middle: to say something which is true, but which doesn't directly say that the experiment will end whenever they hit 4k. So it is, in a way, trying to trick them, but if we can get away with that, it should be fine. Okay, so you'll have some sort of politician's statement about it. Yeah, exactly. Very good. I think Colm put it in as well. Yeah, I was going to ask exactly the same question, so Camille has just answered that. Okay, more questions? I see none. If that's so, shall we thank Camille for this fantastic project and the very nice presentation. Thank you very much, Camille.