Thank you so much, Louise. Very excited to be here. Thanks to the organizers for inviting me, and thanks for offering the discussion in advance. So I'm going to talk about my paper, Drivers of Digital Attention: Evidence from a Social Media Experiment. For those of you in Toulouse, you might have seen a version of this last year; it's recently been revised, hopefully for the better, so I'm looking forward to your comments. So the starting point for this project was that people spend a lot of time online, and in particular a lot of time on social media services, right? Last year the average American spent nearly 1,300 hours on social media. In any given month, nearly half of the world's population uses one of the most prominent social media applications, Facebook. And because of this increased time and concentration of attention on a select few platforms, there has emerged a broadly global debate about regulation in these markets, right? You open up The Economist, the New York Times, pick your favorite newspaper or pick your article: there have been continual articles over the years on various regulatory issues in this space, and in particular questions of antitrust. And this isn't just something that's been a focus of the press or of academics. The FTC has an active monopolization case against Facebook and is actively rethinking a lot of its merger and competition policy strategy in digital markets more broadly. If you go beneath the surface of the FTC complaints, in the Facebook case in particular, a lot of the back and forth between regulators and Facebook comes down to simple issues of consumer demand, right? The fact that these services are free to consumers complicates the application of a lot of the standard antitrust toolkit, and so a lot of the back and forth is about which set of products are feasible substitutes for a product like Facebook. Now, this question of demand measurement is further compounded by the issue that's best depicted by pictures like this, right? People spend vast amounts of time on these services, but there's this notion of people being glued to their phones and, in a psychological and colloquial sense, potentially having some kind of addiction to these applications, with unclear implications for antitrust, okay? Against that backdrop, what I wanted to do in this paper was to think a little bit carefully about how we can study the demand for these applications. And I'm going to try to answer two broad questions in this talk and paper. The first is: what is the set of relevant substitutes, and how do we think about measuring them? I'm going to have two reduced-form measures of relevant market definition that I'm going to be able to get at. And second: can we quantify the role that this, more broadly, inertia plays in usage? And I'm going to think about how important quantifying this is for substitution patterns as well as for these questions of market definition, okay? Now, because these services are free to consumers, it's not obvious what the relevant unit of demand is, right?
And so this is a quote from the CMA Digital Markets Report in 2020, where there's again this back and forth between Facebook and regulators, where Facebook argues that it competes with a product like YouTube, which isn't technically a social media application, primarily because they compete for the user's time and attention. And so this isn't without controversy, but I'm going to take this notion of time as the relevant dimension of competition and think carefully about substitution patterns in this time dimension, okay? So what I'm going to do in this project is run an experiment where I try, as best as I can, to comprehensively collect data on how people spend their time. I'm going to have people in this study install parental control software on their phone and a Chrome extension on their computer, which gives me minute-by-minute automated monitoring of how they're spending their time, right? And so this is the measure of demand that we're going to be collecting. Now, because we're in this world of collecting our own data, we're going to use the fact that the way we collect data on the phone is through parental control software, and we're going to have two restrictions. One is going to be a restriction of Instagram: we take away access to Instagram on people's phones. The other takes away access to YouTube on people's phones. There's going to be some variation in the length of these restrictions, but the core of it is running an experiment where we collect data on how people spend their time, and in the middle there's a restriction where they're not going to have Instagram or they're not going to have YouTube, okay? Now, the first thing I'm going to do with the resulting data is to characterize substitution patterns of time allocation, right? I'm going to argue that we can think of the substitution that people do during the restriction period as a conservative measure of which kinds of activities people view as relevant substitutes. And so we're able to observe all the different applications that people use on the phone, plus surveys that give us some sense of how people are spending time off the phone, to try to understand which classes of applications are substitutes for these different applications. The second thing we're going to do, because the restrictions are relatively long, there's some variation in them, and we're tracking people afterwards, is ask whether or not there are persistent effects of the restrictions, right? Not just on the applications that are restricted per se, but also on other applications, okay? Now, I'm going to use these more qualitative, reduced-form measures of substitution to guide the assumptions of the demand model that I look at. So I'm going to estimate a demand model of time with consumer inertia. Broadly, the goal here is to answer two questions. The first is just: how much of the overall usage is actually coming from this inertia channel? And the second is that we want concrete estimates of diversion ratios between these different applications. So I'm going to compute these with inertia and without inertia, and then apply both of them to a very simple relevant market definition.
And the purpose here is, one, I'm going to give you some argument that these without-inertia diversion ratios are more core measures of substitution patterns, but we're going to apply both of them to thinking about these relevant market definitions, okay? So, to sum up: I'm running an experiment to collect data on how people spend their time, I'm going to shut off different applications, measure in the reduced form how they substitute, and then use this variation and this data for a demand model, okay? Now, before going into the details, let's stay a little bit at a high level to think about what we're actually getting here, right? If you think of the simplest possible market, the typical antitrust approach is: let's just look at what happens when prices change, and what kinds of applications people substitute to, right? And here the core challenge is that prices are just fixed at zero, in terms of the monetary price that people pay. So how can we think about measuring relevant substitutes, okay? The approach that I take in this paper rests on one unique aspect, I think, of digital goods: these kinds of experiments are substantially easier to run relative to other markets, for instance cereal, or the other things IO economists typically study. And so here, this experimentally generated unavailability is going to allow us to look at relevant substitutes by seeing how people substitute during the restriction, as well as allow us to pick up on more dynamic elements of demand, okay? Now, you can map this back to a standard price-based framework, right? If you look at some of the theoretical literature on media markets in IO, or some of the legal scholarship on these specific kinds of markets, people have posited various sorts of prices on people's attention that these firms can set. The most prominent one is the advertising load, right? So Facebook, if it has more market power, can set a higher advertising load when people go on the platform. And at least within the econ theory space, this has been the most popular modeling tool. The legal scholarship has these attention costs as being broader, where we're also thinking about product quality and privacy concerns. Here I'm not going to directly use these measures, but you can think of the product unavailability variation as asking: if you thought this was the relevant price, the experiment simulates the thought experiment of what happens if we take it to its choke value, right? What if Facebook filled all of its newsfeed with just ads, right? We're measuring how people substitute at that extreme part of the demand curve, okay? And what that's going to give us is a relatively conservative measure of how people substitute, right? Because if we don't observe, for instance, substitution from Facebook to YouTube at this choke price, we likely wouldn't expect it from a smaller advertising increase, okay? And again, here I'm going to be studying social media applications, but I think we can view these kinds of experiments as being more broadly useful across a large range of digital goods, right?
Because again, you have your version of Facebook on your phone, I have my version of Facebook on my phone, and you can experimentally remove mine while keeping yours, okay? Okay, so that's the broad overview. Let me get into the study details. I don't see any questions in the chat, so I'll keep going. Okay, so what is the experiment? We're going to recruit roughly 400 people from a number of university lab pools. There are a few people we recruit from Facebook ads, but they're predominantly from these university lab pools. This is going to lead to demographic selection, right? So we're going to have relatively young participants. Now, in this kind of environment it's hard to get the perfect sample, and so I think we don't have a demographically representative population, but relative to this demographic we have an unbiased measure: the intensity of use of these applications should be broadly representative of people within these demographics. That limits a bit how much you can map the diversion ratios that I get to, for instance, some of these ongoing cases, but I think most of the more qualitative conclusions of this study don't depend too strongly on this. In terms of incentives, people get a baseline $50 for just completing the study: keeping the application installed, doing the surveys. There's a secondary restriction, which they can earn additional money for, which is not going to be in the experimental analysis but helps with the incentives for the participants. I'm going to pool with an earlier, somewhat smaller data collection round for some of my reduced-form results. In terms of what we're going to run: the software is an Android parental control application, which I'll tell you about in a couple of slides, and a very simple custom Chrome extension, which I'll also give you some details on in a couple of slides. Like I said, the application restrictions are going to be either YouTube or Instagram, and the timeline for the study is as follows. The study starts where people set up a calendar slot, set up their software, and meet us on Zoom, and we go through any questions they have about the study and make sure the software is all set up properly. This is important for building trust with the participants, because this is a very privacy-invasive kind of data collection. The study period runs for five weeks. There's a baseline period of one week with no restriction. Now, importantly, participants when joining the study had no knowledge of which particular applications would be restricted, and in particular whether it would be YouTube or Instagram; they just knew that they would broadly have some social media or entertainment application restricted. At the end of this first week, one hour before it ends, they're informed via SMS: you're going to have no YouTube for one week, or no Instagram for one week, or whatever. The restriction period runs for one to two weeks, where again they either have no YouTube, no Instagram, or are in the control group. And after that there's a post-restriction period of two to three weeks without any restrictions, okay? So what does the flow of users through the study look like? There's a lot of coordination across lab pools.
So we have 553 people who express interest; there's some time lag, for some of the lab pools, between when we originally reach out to people and when we can actually get the data collection going. We end up with 410 people who show up on the Zoom and complete the survey. Now, I learned that Android phones vary very widely: some Android phones don't work particularly well with our software. So we lose a decent number of people here because they had older Android phones that didn't work with the software, and we end up with a core sample of 389 participants. Now, of these 389 participants, remember there are no explicit instructions that you have to use Instagram or YouTube. So here, for our main experimental intervention, we're going to condition on the set of users that used either Instagram or YouTube. We block randomize, based on their baseline usage of Instagram and YouTube, into the Instagram group, the YouTube group, and the control group. The block randomization ensures that we have roughly equal usage, at least at baseline, of the restricted applications, but also of the other core set of social media applications. Following this, we bin people into one- or two-week groups using uniform sampling within group. And the final thing to note is, to the extent that there might be differential attrition based on treatment status, we don't really observe that this is the case, right? Only four people dropped from the Instagram group, two people dropped from the YouTube group, and two dropped from the control group. In most cases this was from survey fatigue, or because a couple of people decided to buy iPhones in the middle of the study, which doesn't work well with the data collection. Most of the attrition here is front loaded, and so I'm not going to worry about it too much in the experimental analysis, okay? Cool, so let me tell you a little bit about the time data collection before we jump into an overview of the results. For the phones, we're effectively going to piggyback off the fact that there's this market of applications for parents trying to monitor what their kids are doing on their phones, right? We're going to use this parental control software called Screen Time Labs for phone time. The way these apps work is, effectively: one, they only work on Android, which is why we only recruit Android participants, because the app basically sits on top of the OS, pings the screen every few seconds, and monitors which app is on. Can I stop you for a second? There's a clarifying question which I think is relevant: do you stop people from using Instagram or YouTube on other devices? Yeah, good question. I was supposed to talk about that, but I didn't. So no, and this is part of the reason for the Chrome extension. If you look at the baseline data, especially for Instagram, basically no one uses it on the computer. For YouTube there is a large amount of usage on the computer, but I can precisely measure the substitution patterns toward the computer.
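To make the phone-side measurement concrete: the app polls the foreground screen every few seconds, and those polls get aggregated into the minute-by-minute usage panel. Here is a minimal sketch of that aggregation step; the event format and field names are my own illustrative assumptions, not Screen Time Labs' actual interface.

```python
from collections import Counter

# Hypothetical raw poll events (timestamp in seconds, foreground app);
# the real Screen Time Labs format is not described in the talk, so
# this is an illustrative stand-in.
polls = [
    (1650000000, "com.instagram.android"),
    (1650000005, "com.instagram.android"),
    (1650000010, "com.google.android.youtube"),
]

def to_minute_panel(polls):
    """Aggregate raw screen polls into minute-by-minute app usage.

    Each minute is assigned to the app with the plurality of polls in
    that minute, mirroring the plurality rule the talk later uses for
    the five-minute choice intervals in the demand model.
    """
    by_minute = {}
    for ts, app in polls:
        by_minute.setdefault(ts // 60, Counter())[app] += 1
    return {m: c.most_common(1)[0][0] for m, c in by_minute.items()}

print(to_minute_panel(polls))  # {27500000: 'com.instagram.android'}
```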
And so, to be very clear, one thing that we were very explicit with people about in the setup was: we're going to have you install this Chrome extension, and there are going to be no restrictions on the computer. So you should not need to deviate your usage on the computer during the restriction; if we block YouTube, you are more than free to use it on other devices. And so we can measure the substitution during the restriction period towards the other devices, but there's also a tiny survey where we ask people: how much time off the phone do you think you spent on YouTube or Instagram during that week? As a sneak preview of those results: for the survey-based measures, we don't really find that people think they're substituting. From the Chrome extension, we observe some small amount of substitution, a little bit larger for Instagram, but it's still dwarfed by the amount of time that people spend on the phone. Thanks for asking; that's an important question. Okay, so to go back to the software. This software collects minute-by-minute phone usage of all applications. There are some interesting scaling challenges in getting this to actually work with 400 people, but you can read about that in the paper. So beyond just collecting data, we're going to have this minute-by-minute usage of all the applications on the phone. For the reduced-form measures, we're going to manually collect the application category that the app is assigned in the Google Play Store. Every application you go to in the Google Play Store has some categories up top, so we manually collect those for all the applications that we observe people using. In terms of restrictions, again, this is pretty robust, because it's the reason why these parental control applications exist. We can shut off the application itself, so I can shut off your access to the Instagram app, but we can also shut off any HTTP request to the domain. So you can't go to your browser and go to Instagram.com, or download some secondary Instagram app; we can restrict any request to the Instagram domain. And because it's parental control software, it's really hard for participants to get it off their phone without letting us know about it. So we don't have to worry too much about evasion on the phone, okay? Then there's the question from before: one key concern is that we're only blocking this on the phone, so how do we know people aren't substituting to the computer? So we built this really simple Chrome extension that sits in their Chrome web browser; 90% of participants install it. It has some privacy protections, in that it only monitors time spent on a core set of websites that we care about, and it gives us a measure of how much time people are spending, okay? There are a bunch of surveys; you can read about them all in the paper if you're interested, but I'll bring them up as relevant during the talk, okay? Are there any questions about the data collection before I go forward? I don't see any other ones in the chat, so let me keep going. I'm going to give you a broad overview of what we find in the reduced-form experimental results. The empirical specification here is pretty straightforward: it's a block-randomized experiment, so effectively I'm going to be giving you estimates of the average treatment effect, controlling for the randomization block and baseline usage. Now, the first thing we want to look at is, during the restriction period, broadly, how do people substitute, right? We want to take seriously this notion that there's some cross-category substitution, and so the measure of category that we use is the category in the Google Play Store, right?
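As a stylized illustration of that specification, and not the paper's actual code, the treatment-effect regression might look something like the sketch below, with hypothetical column names and synthetic data standing in for the real participant-by-category panel.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the real data: one row per participant and
# category, with log(1 + minutes/day) outcomes. All names hypothetical.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "block": rng.integers(0, 6, n),            # randomization block
    "log_baseline_time": rng.normal(3.0, 1.0, n),
})
df["log_restriction_time"] = (
    df.log_baseline_time - 0.4 * df.treated + rng.normal(0.0, 0.5, n)
)

# Average treatment effect controlling for randomization block and
# baseline usage; with a log outcome, the coefficient on `treated`
# reads as an approximate percentage change.
fit = smf.ols(
    "log_restriction_time ~ treated + log_baseline_time + C(block)",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
print(fit.params["treated"])
```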
So we're going to measure substitution across the most prominent categories during the restriction period. And the first thing we observe is that in both treatments there's a substantial reduction in category time, right? If you look at the bundle of usage on social, that is lower in the Instagram treatment versus Instagram control. If we look at the bundle of usage on entertainment, that's lower in the YouTube treatment versus control. Now, perhaps the more interesting question is: what happens to the non-YouTube entertainment time, or the non-Instagram social time, right? In the YouTube case, we don't observe much substitution towards the other applications within the own category. For Instagram, we observe an increase in the usage of other social applications. And I should note here that there's skewness in the data, so I'm mainly reporting the log specifications, but you can see the whole thing in the paper. Now, the more interesting question is whether there's any evidence for cross-category substitution, right? This notion that we should think of an application like YouTube as being substitutable for an application like Instagram, okay? In the YouTube treatment, we do observe evidence of an increase in time spent on social applications, which means at least one direction of this seems to be true. When we look at substitution in the Instagram case, it's a little bit more muted: we observe positive but statistically insignificant substitution towards entertainment, we observe some marginal substitution towards communication, but most of the substitution in the Instagram case is within the social category. Now, these seemingly asymmetric substitution patterns may be a little puzzling at first, but you can look at this survey measure that I elicited at the beginning of the study, where I asked people: how do you perceive that you use these different applications? Is it broadly for entertainment purposes? Is it for social purposes, so keeping up with your friends? Is it for direct communication, like directly messaging your friends? Is it for getting information, broadly defined as not just getting news but potentially some niche informational content like economics papers? And what you find is that in the YouTube case, people are predominantly using it for entertainment, but if you look at how people perceive they use the different applications within the social category, there's wide heterogeneity, right? TikTok is heavily viewed as entertainment; Instagram is viewed a lot as an entertainment application but is distributed across these different activities, and similarly with Facebook, right? And so it's not too surprising that these product categories, which are effectively based on functional product characteristics, don't seem to fully capture the extent of substitution. This is important when we're thinking about these debates around social media applications: getting this kind of revealed-preference measure of substitution is really important, because we can't just look at product characteristics, since we don't have prices, okay? A final couple of things on this. I'm sorry, can I just ask a clarifying question? When you say it's a 15.1% increase?
So it's the time spent on social which increases by 15.1%, but do you have any idea of what percentage that makes up of the time which people used to spend on YouTube? Because that would be the relevant issue, I think, in some sense. Yeah, so in this measure we're just looking at the percentage time increase over the baseline. When I get to the diversion ratios, that's effectively exactly what those measure: how much of the time from YouTube is divvied up into these other applications. Okay, thanks. Yep. Okay, and so two final things to note here. Again, you could think about defining a really conservative relevant market where you consider the set of applications that people substitute to, and you would get a qualitatively different measure of market concentration using this multi-category definition versus a single category. Finally, this was brought up before, but just to rehash: if you look at these numbers, they're non-zero, but they're not 100%. A large amount of substitution also goes off the phone, right? Only a small amount of it, judging from the Chrome extension and the surveys, is substitution towards the other devices. In terms of what the non-digital activities are that people substitute to, I don't have precise enough measures to speak to that, but there is substitution towards a broad number of off-device activities. Okay. So that's the brief overview of the reduced-form restriction-period substitution results. Let me give you a broad overview of how people substitute after the restriction period; I'm mainly going to use this to motivate the demand model that I'm going to estimate, okay? So this is a moving-average plot of the time spent on the restricted applications in the Instagram group and the YouTube group. The black line here is the control group, the red line is the one-week group in the Instagram case, and the blue is the two-week restriction group; it's similar here for YouTube. You stare at these and you say: wow, the two-week restriction group seems to have reduced usage relative to both the control group and the one-week restriction group. You can plot this with computer time and you get a roughly similar pattern. So I put this through the empirical specification, and we find that there's a five-minute-per-day reduction in the Instagram two-week group. Here I'm looking at levels instead of logs, and the difference between the two comes from the fact that, if you measure quantile treatment effects, all of this is basically coming from the heaviest users of the application, which is consistent with what our intuition would say in this case, right? If I'm a moderate user of this application, it doesn't seem like the restriction itself leads to persistent reductions; but if I'm a really heavy user, this two-week restriction at least temporarily leads to a reduction in my usage. You can match this with the qualitative data: there's an open-ended survey response after the study, and you can see that the people who are very intense users are affected by this two-week restriction. Now the natural question is: okay, I only really observe people for two weeks after that, so how long does it persist?
So I send a survey one month after the study. Now, this is subject to the caveat that it's not as pristine as the data collected during the study, because it's based on people's time perceptions, but at least people perceive that they're spending less time on Instagram even one month after the study relative to before the study, right? So there's some evidence of a persistent reduction in usage for this group. Now, the second thing is that for YouTube you don't observe as strong an effect: you see a negative point estimate, but it's statistically insignificant. But we can look at whether there are other dimensions of persistence here, right? One cool thing the data collection enables is that I can actually see when people install new applications. So I can measure whether people actively sought out new applications during the restriction period and persisted in using them. And in the case of YouTube, I find both that people spend time on newly installed applications during the restriction period and that they persist in using them, okay? So I'm going to take both of these results as reduced-form evidence for, broadly, consumer inertia driving the usage of these applications. And what I want to do now is get concrete estimates of diversion between these applications, quantify the role this inertia plays, and then use these estimates of diversion to do a simple relevant market test, to see how broadly we should define the market. Okay? So what does the demand model look like? The way I'm going to think about this is that people are making a panel of discrete choices, right? In every time interval, I make some choice of which application to use. For computational purposes I set this at a five-minute interval, and the choice is the application with the plurality of time in the interval. Now, I'm going to model demand, and in particular its dynamic aspect, broadly consistent with the state-dependent demand estimation literature that's mainly used in quantitative marketing, where I'm going to have roughly two assumptions. The first is that only current-period utility matters: effectively, people are myopic; they're not thinking about how their current usage impacts their future usage. But this inertia is going to make a difference, right? When I make my choice today, some factor coming from my past usage enters my current-period utility. And so the typical identification challenge I face here is how to disentangle the role of past usage from the more intrinsic preferences for the products. Okay. And here I'm going to restrict focus to a prominent set of social media applications: Facebook, Instagram, Snapchat, Reddit, Twitter, TikTok, and YouTube. The data I'm going to use is this panel of phone application usage throughout the study. One important thing to note is that I'm going to try to leverage the fact that I actually observe, for most of the time, the core set of applications that people have installed, right?
So I'm going to have a subject-specific choice set at any given point in time, based on the set of installed applications. Okay. And so I'm going to think of a consumer i and application j in market k, where a market here is a distinct choice set, that is, the set of installed applications minus any experimentally restricted applications, in some time period t. I'm going to have that, first, this longer-term inertia comes from some continuous stock of usage over the past two weeks. There's also going to be, because of the discrete-choice formulation, some shorter-term inertia, which I'm going to treat more as a nuisance, right? The idea is that whether I'm spending time on YouTube now might depend on whether I used it five minutes ago: if I did, I might be more likely to use it now. And we might think that, especially in the shorter term, there are diminishing returns, so I'm going to have this enter both linearly and quadratically to take care of any potential satiation effects. Now, the next thing is the intrinsic preferences that people have for these different applications, right? I'm going to have the standard product fixed effect here. One cool thing is that, because I'm running this experimental study and I have this more subjective data, I can also control for these stated activities, right? I can make it so that people get different utility from using Facebook for entertainment purposes versus for broadly social purposes, and I'm going to build that directly into the utility function. Finally, I'm going to have, from their survey, some measure of the self-reported number of connections on that application, so the number of people that they follow, which is some measure of how much content they can potentially get there. Okay. We're going to have the standard type-one extreme value error. And then you notice these q(i) subscripts on everything, right? The way I'm thinking about preference heterogeneity here is that I'm most concerned that power users of these different applications have different intrinsic preferences for these applications relative to non-power users, and this clustering procedure effectively allows me to factor out these more intense users. So I'm going to estimate the model by using the baseline data to assign types to participants using k-means, and then for each type I'm going to estimate a logit model using maximum likelihood estimation, right? Okay, and the final aspect of the model: if you stare closely at this, there really isn't much that depends on the time of day, right? But look at this heat map of usage throughout the day and throughout the week, where this is the hour of the day and this is the day of the week: the yellow is the most intense usage, the blue is the least intense usage. You wouldn't think this is because Facebook, for instance, delivers more utility during the day versus at night; rather, the outside option for using Facebook at night is rather high, because sleep is a very valuable thing, and at lunch you're not really doing much else. And so the outside option is going to be time-varying. Okay. Cool.
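To fix ideas, here is a stylized sketch of that estimation procedure as described: k-means types on baseline usage, then a type-specific logit over subject-specific choice sets estimated by maximum likelihood. The variable names and the exact utility terms are simplified stand-ins for the paper's objects, not its actual code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp
from sklearn.cluster import KMeans

J = 7  # Facebook, Instagram, Snapchat, Reddit, Twitter, TikTok, YouTube

def assign_types(baseline_shares, k=4):
    """Cluster participants on their baseline usage shares (n_people x J)."""
    return KMeans(n_clusters=k, n_init=10).fit_predict(baseline_shares)

def neg_loglik(theta, choices, X, avail):
    """Negative log-likelihood of a type-specific logit with inertia terms.

    choices: chosen alternative per 5-minute interval (0 = outside option;
             assumed to always be in the choice set when chosen).
    X:       (n_obs, J, n_x) covariates per alternative: two-week habit
             stock, short-run recent usage and its square, and the
             stated-activity and connection controls.
    avail:   (n_obs, J) indicator that the app is installed and not
             currently restricted (the subject-specific choice set).
    The outside option's utility is normalized to zero here; the paper's
    time-varying outside option would add hour-of-day terms to it.
    """
    delta, beta = theta[:J], theta[J:]          # app fixed effects, slopes
    v = delta[None, :] + X @ beta               # inside-good utilities
    v = np.where(avail == 1, v, -np.inf)        # exclude unavailable apps
    v = np.column_stack([np.zeros(len(v)), v])  # prepend outside option
    ll = v[np.arange(len(v)), choices] - logsumexp(v, axis=1)
    return -ll.sum()

def fit_type(choices, X, avail):
    """Maximum likelihood estimates for one k-means type."""
    theta0 = np.zeros(J + X.shape[2])
    return minimize(neg_loglik, theta0, args=(choices, X, avail),
                    method="BFGS").x
```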
So that's the demand model and a brief discussion of it; I'm a little bit late on time. So, there are two types of inertia, and I'm mainly interested in the long-term one. This long-term inertia is a reduced form for a bunch of different behavioral mechanisms; I'll talk about this a little bit more when I go through the counterfactuals. The identification here, I think, is pretty straightforward: I'm going to use the experiment to induce exogenous variation in this habit stock. The identification assumption is that the experiment only influences the stock of usage, and I'm helped by the fact that I can observe the consideration set. In terms of incorporating preference heterogeneity, I do the fairly standard thing of having type-specific coefficients, and I can control for the subjective usage of these applications. Okay. In terms of estimates, I find a positive and statistically significant role of this long-term usage stock for each of the types. I see a negative coefficient on the quadratic for the short term, which is consistent with these satiation effects, and overall I seem to get reasonable estimates. I do some model validation by verifying that, in the baseline, the model predicts at least the aggregate market shares in the non-restriction period, but also that, during the restriction period, the model predicts the substitution towards the non-restricted applications. Again, the model is estimated over this data, but these are the validation exercises that I use. What do I get from this? To the question from before: I'm going to get the second-choice diversion ratios between these different applications, right? So I'm going to ask: what would happen if we removed YouTube from people's choice sets? What fraction of the time that was spent on YouTube is substituted towards the other applications and the outside option, okay? Yeah, you should try to finish up in five minutes, more or less. Okay, yeah, I'm almost done, I think. So the main counterfactual we ask is: what happens if this long-term inertia channel is shut down? And again, there are two interpretations that I want to give of this counterfactual. The first is that in a lot of cases here, we're thinking about either a merger, or we're trying to measure diversion between a very prominent application and a potentially smaller application. And if we want to think about the intrinsic substitutability between these applications, we do care about the diversion ratios in the baseline, but by parsing out inertia we can get a more direct measurement of the substitutability of these applications that isn't as dependent on market conditions. The second interpretation is that some of the behavioral mechanisms are loosely consistent with what we might think of as addiction, and there's a whole separate debate, which I haven't discussed, about design features of applications that can actually contribute to this inertia. So you can think of the diversion ratios I measure here as a limit case of how policies aimed at curbing these platform features would shift around the diversion towards these applications, okay?
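To make the diversion computation, and the simple relevant-market construction the talk concludes with, concrete, here is a stylized sketch under my reading of the model, not the paper's actual code. It exploits the logit structure (removing an alternative reallocates its choice probability proportionally to the rest), and the 0.5 threshold is a placeholder, not the paper's value.

```python
import numpy as np

def diversion_ratios(P, j):
    """Second-choice diversion from removing alternative j.

    P: (n_obs, J+1) predicted choice probabilities from the fitted model
    (column 0 is the outside option), computed either with or without the
    long-term inertia term. Within each observation, removing j
    reallocates its probability proportionally, as a logit implies.
    """
    Q = P.copy()
    Q[:, j] = 0.0
    Q /= Q.sum(axis=1, keepdims=True)
    gained = (Q - P).clip(min=0).mean(axis=0)  # time gained by each alt.
    return gained / P[:, j].mean()             # shares of j's time; sums to 1

def relevant_market(diversion, names, threshold=0.5):
    """Greedily add the closest substitutes to the candidate market until
    their cumulative diversion share reaches the threshold."""
    order = np.argsort(diversion)[::-1]        # descending diversion
    market, total = [], 0.0
    for k in order:
        if total >= threshold:
            break
        market.append(names[k])
        total += diversion[k]
    return market

# Hypothetical usage, dropping the outside-option column before the test:
# d = diversion_ratios(P, j=youtube_col)
# market = ["YouTube"] + relevant_market(d[1:], inside_app_names)
```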
And so what do I find? First, I find that the overall usage of these considered applications drops by nearly 30% when you shut off this inertia channel. Consistent with intuition, TikTok seems to be the most impacted application; at least colloquially, people perceive it as the most addictive application. And it's not just that diversion towards the larger applications drops; it's not a uniform reduction, right? Some applications which are more niche within the considered set actually have increased diversion, okay? So now we have these estimates of diversion with inertia and without inertia; what do we do with them? To conclude, I'm going to apply them to a really simple relevant market definition, right? We basically take these estimates of diversion ratios and add each application to the relevant market as long as the sum of the diversion doesn't go above some threshold value, okay? And so what are the main takeaways? My reading of these results is that, consistent with the reduced-form estimates, the relevant market is somewhat broadly defined. This is a little bit in contrast to, for instance, the FTC's definition of the relevant market for Facebook, which was only Snapchat; here I see a broader set of substitutes in both cases. The second is that this inertia channel is actually quantitatively large enough to expand the relevant market definition, right? And there are two takeaways from this. The first is that you might interpret this as saying there's a larger set of substitutes than just those measured in the baseline. The second is that regulatory policies that actually target this inertia channel can actually be pro-competitive, okay? All the details for this are fleshed out a lot more in the paper, okay? So let me conclude. What did I do today? I told you about a field experiment where I collected data on how people spend their time. I used parental control software to shut off access to popular mobile phone applications. I observed fairly dispersed substitution patterns, and some reduced-form evidence for consumer inertia, which I quantified to show that it's roughly 30% of usage. And what do I view as the broader takeaways here? I think the first is that, I hope, I've shown that these particular kinds of experiments in digital markets are super useful for thinking about substitution patterns. There are a lot of challenges in thinking about issues in these markets, but the kind of data that we can collect and the variation that we can generate is super rich, I think, relative to other markets. So this is at least one application of the things you can do in this space. The second is in terms of direct policy implications: I do think my results point to at least a suggestion that relevant markets may be broader than those currently posited by regulatory authorities. It definitely warrants, I think, further investigation based on my results.
The third, and I haven't seen this discussed too much, is that these debates about digital addiction policies may actually be useful from a competition policy perspective, as a tool to soften diversion towards these more popular applications that is a little bit less heavy-handed than some of the alternative approaches. Okay, so that's all I have. I know I went through some of the results a bit fast; hopefully all the details are in the paper, and I'm happy to chat more about them now. And I'm looking forward to the discussion from Pinar. Great, thanks Guy. So next we'll have a five-minute discussion by Pinar Yildirim. Pinar, the floor is yours. All right, thank you so much for having me. This is a great paper that I've seen at a few conferences before, and I'm really excited about the implementations and applications of this particular paper. And it's fairly well done, so there isn't much to quibble with. So I thought that I would just, at a high level, point out a couple of things that we might need to think about to make broader generalizations regarding substitution in the digital economy. So again, it's a really interesting, well-done study in a fairly understudied area. The focus, as Guy has gone into in more detail, is how people substitute time across digital platforms. This is important for understanding or defining relevant markets for different applications and different companies for competition and antitrust purposes, and it's also important in terms of understanding the drivers of usage: behavioral drivers, addiction, and inertia. In this paper, there's a combination of an experimental method with follow-up survey tools: software installed on individual devices allows Guy to stop usage of Instagram and YouTube in particular, and then the Chrome extension, the follow-up surveys, and the parental control app allow him to track usage of different apps as well as some time use outside of these apps. As a result, he comes up with a few interesting findings. Of course, there's no perfect substitution; perhaps this is not very surprising, and there's a decline in overall use time when either Instagram or YouTube is restricted. But I think what might be more interesting is the asymmetry in the degree of within- and cross-category substitution for these two particular apps. For diversion from Instagram, for instance, there is some diversion to other social apps but, at least statistically speaking, not to other categories. And there's no diversion from YouTube to other entertainment applications, which are in the same category, but there is diversion to social apps. He also does a number of things in the paper that he didn't really go over, in terms of looking into diversion to new apps and to more prominent apps, or apps within the same family, for instance Meta-owned apps. And he does find some substitution towards the more prominent apps as opposed to the apps on the long tail. He also finds some evidence of persistent declines in activity with these interruptions, which actually surprised me, given that two weeks is a fairly short amount of time for an interruption. And then he tries to disentangle how much of this is due to inertia versus other factors.
So, just at a high level, I thought there are three things that we could think about as factors of consideration in trying to make generalizations from this. The first relates to experimental design. Then I'm going to think about some of the responses from the firms: what responses might be happening at the same time as you start creating these interventions that might make it harder to interpret those substitution ratios. And then we might also think about what to make of the results; the interpretation will be the last thing. So, on the experimental group: it seems like, in the paper, Guy is spending a lot of time trying to address the issue of selection into the experiment, with the concern that maybe there are people who are generally concerned about their time use and are self-selecting. That doesn't seem to be a concern. But I'm a little more concerned about the recruitment of college subjects, for two particular reasons that don't seem to get much attention in the paper. One is that students in general, compared to the rest of the population, seem to have better or more connected outside options. If I'm a student, all the people that I communicate with over social media, I probably live with them or go to class with them. So I think that the outside options, especially for social apps, might be a lot stronger for a student population compared to the rest of the population. And then these younger cohorts may also substitute less to paid alternatives. This is going to come in, for instance, when we do not find within-category substitution moving from YouTube to, say, Netflix: Netflix is a paid app, and of course that creates a different level of friction in terms of substitution for a younger cohort compared to an older cohort. So I wanted to, first of all, bring up this issue of how much we can generalize from this group, given, again, the level of connectivity that they have as well as the paid alternatives. A second important thing is that many of the goods that we are looking at, at least the digital goods, tend to be network goods. And when I say network goods, I mean that in a number of different ways that bring challenges in interpretation. What does it exactly mean to limit a single user's access? You can't really think of this intervention simply as "I'm blocking your access." The moment I block your access to these user-generated content platforms, it also implies that some of the content on the platform is changing: I'm reducing the content, I'm changing the content, and I'm changing other people's communication as well. So, A, multiple things are changing. B, this is maybe not so much of a problem if these individuals are randomly drawn from across a campus, but in a larger experiment where people tend to communicate with each other, there might be spillovers from the intervention to the control group. And three, if you think about the network effects that might also come up in a bigger experiment of a similar type, how would we interpret those effects? Again, there are a number of things unique to digital goods with network effects or a network structure, as well as to content-generation platforms, that make it hard to interpret these shutting-off-access types of experiments.
And since Guy is making some interpretations, or at least suggesting that there could be some elements that we can take away from this study for those bigger exercises, I wanted to highlight that as well. Another thing, and I didn't see much discussion of this in the paper, maybe I missed it: of course, the moment I decide to go off Facebook, or I don't visit, let's say, Facebook for five days, the firm starts to respond to that, right? They might be sending emails back to me saying, hey, we miss you, or this friend of yours just posted something. Is anything else changing in the way that these firms are trying to communicate and trying to lure users back onto the platforms? Or maybe the restriction also means that the software is blocking the regular communications that these firms send. I wanted to get more clarity on that, but also to try to understand how those effects might change the diversion for the groups whose access was limited. Then finally, and I know I'm taking a lot of time: what exactly are we observing? This relates to the interpretations coming from these diversion ratios. One is, we are of course possibly talking about substitution across devices, and Guy is putting a lot of effort into trying to capture that through the surveys and the Chrome extension and everything. But if I look into, for instance, Table 1 in the paper, which has some summaries of how people use different apps on different devices: for Snapchat, there's a zero mean, which means that nobody is using Snapchat on a desktop device, versus 9.35 minutes on average on mobile. I look at TikTok the same way: 50 minutes on mobile devices, but perhaps one minute on desktop. Whereas when you look into other apps, I think YouTube was one, and Instagram somewhat similar, they're much more regularly used across devices. So when you talk about some of the substitution, you're not only thinking about substitution from a particular app to another app within the same device; you're also adding an additional friction, an additional cost of having to change to a device that you don't normally, regularly use. So I wanted to understand how much substitution across devices adds to the diversion. And then finally... We should be wrapping up. Okay, last minute, last thing: thinking about time or attention as the relevant dimension of substitution. Two minutes on YouTube, in terms of advertising value, is not the same as two minutes on another platform, Instagram or Facebook. How do we interpret these, relative especially to advertising markets and advertising revenue? And finally, of course, just because people are switching across these platforms, we cannot say much about welfare effects. That's the last thing. It's a great paper, I really enjoyed reading it, and I look forward to seeing more of the results. Thank you.