I was wondering about your views on whether big data and machine learning will make this literature irrelevant. You know, Amazon, Google, and Netflix seem to be much better at knowing my preferences than I am myself. I can easily imagine an app where you put in the relevant characteristics and the app makes the choice for you. And another question I have: there seems to be some similarity with the ambiguity aversion literature, where information overload is modeled with a set of priors instead of a unique prior. I'm wondering whether you can spell out the connection between this and that literature.

You started your presentation saying that it might be optimal for us to observe what the neighbors are doing, and that herding might be optimal. What happens in a world where people are very different? Say my neighbors are very different from me. What would your model predict for information and herding?

So the type of information cost that you put down here, you didn't use these words, but it's relative entropy. And relative-entropy information costs have the feature that the more you know, the cheaper it is to get additional units of precision. I think this is what's driving herding in your model: you already know what it is that other people have done, and that makes it easier to learn more about the products that other people have experienced, the ones you've already seen the high market share for. So my first question is: is that right? Is that what's going on? And second, if so, do we think that's actually a good representation of human cognition? I know you have thought a lot about this in the lab, but it's different from saying that people learn about options that are valuable to learn about and testing that feature of rational inattention. There is this particular form of payoff to additional units of information, with diminishing marginal cost, and that seems really important in this model. So I'd like you to comment on that.

I'd suggest we first let you respond, and then we have a second round.

That actually would be useful, because then things are fresh in my mind. Okay. Let me actually go in reverse order. No and no. That's not what's going on, because what's happening is that watching other people is shifting the prior, right? And having more prior mass over here is unlikely to make you go there less, for pretty much anything with a non-infinite marginal cost of learning: starting off in a given direction, you tend to go more in that direction. Just think about convex adjustment costs or something like that. Basically, we're shifting the prior, and that's what's generating the herding. You see other people do something, you think it's likely to be a good idea, and therefore you start from the premise that it's a good idea and then learn from there.
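One way to see the prior-shifting channel is a hedged sketch borrowed from the standard rational inattention discrete-choice solution with a Shannon cost (the exact objective in the paper may differ): the optimal choice probabilities take a prior-weighted logit form,

\[
\Pr(a \mid v) \;=\; \frac{p_0(a)\, e^{v_a/\lambda}}{\sum_b p_0(b)\, e^{v_b/\lambda}},
\]

where \(p_0(a)\) is the weight the decision maker attaches to option \(a\) before learning, \(v_a\) is its payoff, and \(\lambda\) is the unit cost of information. Observing others choose \(a\) raises \(p_0(a)\), which raises \(\Pr(a \mid v)\) at every payoff realization. On this reading, herding comes from the shifted prior, not from a falling marginal cost of additional precision.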
But on the other bit you raised: it's not what's going on here, but you're perfectly right that Shannon entropy is probably not a universal model of information cost functions. Even the physics literature has moved to other types of entropies, like Tsallis entropy, in which things can get more or less difficult to learn as the situation becomes more complex. If you're trying to learn about two independent random variables, under Shannon the cost of learning about them together or separately is exactly the same, but under Tsallis it can be bigger or smaller; it's essentially the CES extension of the p-log-p form. And a lot of people are thinking along those lines: Woodford is doing a lot with different types of information cost functions, and with correlations in what you learn from the signals, and definitely different sorts of things. So that's definitely where things are going. But so: it's no and no.
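A minimal numerical sketch of that additivity point, assuming only the textbook definitions of Shannon and Tsallis entropy (nothing here is the paper's own cost function):

```python
import numpy as np

def shannon(p):
    """Shannon entropy H(p) = -sum p log p (natural log)."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p))

def tsallis(p, q):
    """Tsallis entropy S_q(p) = (1 - sum p^q) / (q - 1); Shannon as q -> 1."""
    p = np.asarray(p, dtype=float)
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Two independent random variables and their joint distribution.
px = np.array([0.2, 0.8])
py = np.array([0.5, 0.3, 0.2])
pxy = np.outer(px, py).ravel()  # independence: p(x, y) = p(x) p(y)

# Shannon: learning about X and Y jointly costs the same as separately.
print(shannon(pxy), shannon(px) + shannon(py))  # equal

# Tsallis with q != 1: the joint entropy is NOT the sum of the marginals,
# so bundled learning can cost more or less than piecewise learning.
q = 1.5
print(tsallis(pxy, q), tsallis(px, q) + tsallis(py, q))  # not equal

# Exact relation: S_q(X,Y) = S_q(X) + S_q(Y) + (1-q) S_q(X) S_q(Y).
print(tsallis(px, q) + tsallis(py, q)
      + (1 - q) * tsallis(px, q) * tsallis(py, q))
```

The last print reproduces the joint Tsallis entropy, showing exactly how the non-additive cross term breaks the "together or separately, same cost" property that Shannon has.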
Neighbors being very different: right now we don't have anything about that. I mean, we just have types, and we don't have space in the model, so we don't have neighbors; we just have a bunch of people distributed over certain types. Essentially, our model is about seeing a bunch of people doing something and asking: am I like them or am I not? I'm going to use restaurants, it's an easier example. You see a lot of people in a restaurant, you look around: are these my people, or are they some other people? Lots of popular restaurants serve food that I detest, and you're always trying to back out who the other people are. So that's fundamental, and that's in the model. But what you were describing adds another dimension: is the group around me different from the whole population? That we don't do, but it could easily be done. Like, I'm now in a different country where people do completely different things, different customs, different standards. We don't do that, but it's conceivable.

Ambiguity aversion: ambiguity aversion is essentially the same thing as robust control, and robust control is like this but without rational expectations, in which you get to choose your beliefs, subject to an information constraint, in order to maximize the worst-case outcome; and that is mathematically equivalent to Epstein's formulation, which is a form of ambiguity aversion. This is different in the sense that the outcome of the math is completely different. It's the same in the sense that we have multiple posteriors on the table and are choosing among them. But we don't have any of the risk-aversion-type things that pop out of robust control. What we're doing is purely Bayes' rule, and that's the constraint that binds: whenever I learn, I pop out a bunch of posteriors, but, and this is the big mathematical complication, they all have to satisfy Bayes' rule. Not imposing that is what makes robust control so much easier mathematically, and why Hansen and Sargent could write like a 500-page book in like a year.

Big data: there are a couple of problems with big data. One is that these systems need an input; they need to see people making choices in order to know how to interpret the stuff they see. If everybody's just being told what to do, then nobody's doing anything and you can't learn anything, which is why a lot of these mapping programs send people on incorrect routes: a small fraction of people have to be sent on bad routes to make sure that those really are bad routes. If everyone is sent on Interstate 94, then you have no idea what Airport Road looks like, so you have to send somebody there, get them stuck, and find out. So you really need to have an input. You can't have Amazon and Google telling you everything to do, because then they lose their source of information. Our social learning came from private information being reflected in the market shares; then you had something to learn from. It's a complicated problem to sample optimally to figure things out; even Google has a big data problem.

Let's see. I think Bartosz raised two really important points, and the second, I think, is more important than the first; it's less a criticism of us than of the literature. In all of these rational expectations, rational inattention models, we have no way of differentiating what is easy to learn from what is hard to learn. There's just this learning cost, just this lambda in front, so everything is the same difficulty to learn. The way people handle this is, you know, Laura has you see the stock price and then you're learning about fundamentals; we have you see market share and then you're learning about your type; Bartosz, in his paper with Mirko Wiederholt, has people seeing their consumption and interest rates and learning about prices and marginal cost. We want people who are learning about all of these things, but we actually think people know certain things pretty well without putting in a lot of effort. I get my paycheck; it's not hard for me to learn my paycheck. It's a lot harder for me to learn, well, I was going to say something that you can just look up on Google, so actually learning has become a lot easier. But we don't have that.

On the first point, about not being in steady state: you need to read the unreadable appendix, and it will show you that we can say nothing about not being in steady state. You give me some dynamics, I'll construct a model that will deliver them. These models are very ill-behaved on the convergence path.

Do we have more questions? Peter?

You cited retirement as an example at the start. One of the central facts of retirement saving is that large fractions of people make no choice, so the supplier has to create a default. Could you construct a theory, given the diversity of the population, with the default designer knowing something, where the realization of the default is informative? If I'm in the default and I had an 8% return last year, maybe that leads you to go into the default, or maybe that leads you to do something relative to it. You could think of an optimization problem for a default designer.

Yeah, I don't think this model captures that as well as one would like. The way this model would explain it is: a lot of people do the default, therefore people do the default. But when people have ideas about defaults, they have something else in mind; it's more a sin of omission than anything else. It will show up in market share, it will, but I don't think we'd be pinning down anything in particular; I think you still have to seed the whole thing. So the way the model would explain it is that market share gets seeded by the fact that the default is salient and prevalent, and then it kind of stays that way: everyone else is doing the default, they seem to be doing well, I'll do the default. That kind of story.
But I think the fundamental thing that gets it going at the beginning is that that was the default, and we don't have a theory of searching through lists, or of the effort to move off a bias towards the status quo, or something like that. This is not a theory of everything.

There might be some types more likely to end up in the default than other types, and certainly that will then change the nature of the information in the chosen shares. Yeah, I think one thing you could do is have a cost of switching and then a cost of learning. If there were heterogeneity in the cost of learning, you would imagine that the people who found it easier to learn would learn, and then they would learn whether they should switch, whereas other people would just throw up their hands: it's going to be really hard for me to learn, so I'll just stay here, given the switching cost.

The other obvious question to ask of the model is that you've of course got no suppliers in the model, given suppliers' ability to manipulate market shares.

I tried to set you up for that, for tomorrow. But yes, manipulating market shares: one way, right, it gives a story about why firms might want to get big, because once you're big, people tend to look there first, since they have limited attention. The other supplier phenomenon would be just taking advantage of people's confusion. But are you thinking of modeling any of that?

Oh yeah, we're not there yet; that's in the future stuff. The supply side is in the future stuff, definitely.

So I was also thinking about this endogenous supply. You're emphasizing manipulation and exploitation, but as far as I understood it, the key inefficiency is related to the extensive margin. If firms can advertise their products, say I have a killer product that nobody's buying, wouldn't I have huge incentives to just advertise it and make sure to get out of that inefficient equilibrium? So I'm not so sure that adding the supply side...

You have to be able to make money along the way, but you might lose money in the short run and make it in the long run. In the steady state it's clear: we have an equation that tells you exactly whether your product should be in the market or not, and it's a simple entry condition. It's basically whether the exponentiated utilities are above some constant: if that's true you should be in, and if it's not you should be out. So to the extent that suppliers have the same information about utility that the demanders do, a supplier should know that if he can get his product out there and get people to try it, it will survive in steady state. It's a very simple equation. Who gets kicked out of the market along the way is infinitely complicated, but whether this guy should be in the market in steady state is actually a very simple calculation.

So have you thought about writing models of entry and exit and industry dynamics with uncertain consumers?

No, we haven't. My equation is whether they would make profits in the steady state. Then I would add to that the cost of trying to get to the steady state, and the cost of advertising and increasing market share, and then you would have two calculations: you shouldn't ever try if you shouldn't be there in steady state, and then, conditional on that, you say, okay, what's the entry cost, and does that entry cost justify being there? So it would be slightly more complicated than what I said, but not undoable.
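A hedged rendering of that entry condition, with placeholder notation, since the talk only says "exponentiated utilities above some constant": in steady state, product \(j\) belongs in the market if something like

\[
\mathbb{E}_{\theta}\!\left[ e^{u_j(\theta)} \right] \;\ge\; \kappa
\]

holds, where \(u_j(\theta)\) is the utility product \(j\) delivers to a consumer of type \(\theta\) and \(\kappa\) is a constant determined by the rest of the steady state. The two-step entry calculation described above would then read: enter only if this steady-state condition holds, and, conditional on that, only if discounted steady-state profits cover the cost of entering and building market share along the transition, say \(\Pi_j / r \ge C_{\text{entry}}\). Both \(\kappa\) and \(C_{\text{entry}}\) are illustrative symbols, not objects taken from the paper.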
Okay, if there are no further questions: I always forget to do this, but thank you for your comments. I wrote it here at the top so that I wouldn't forget to say it, and I still forgot to say it. Thank you. And let me thank both John and Bartosz, and all of you, for an interesting session. Thank you.