I can just share the screen with your files there. Sure. Okay. And you can flip along the slides as you think they need to be flipped. Is that what you have in mind? Yes. I'll just be a second. So this talk is partly about how to go about explaining what you might call the direction of time, which on this way of thinking is really a set of temporal asymmetries. But it's also partly about how to interpret Reichenbach's claims in The Direction of Time. I'm going to do this partly through a comparison between what Reichenbach has in mind and what David Albert has in mind in his book Time and Chance. They give, I would say, different Boltzmannian explanations both for a set of temporal asymmetries that we might take to underlie the direction of time and for an asymmetry of records. Reichenbach's explanations are often given rather short shrift: they're often criticized for relying too closely on entropy, and they're also criticized for their obscurity. Sorry, just a second. Small puppy. So this is from the Stanford Encyclopedia article on Reichenbach. They say "we do not pretend to understand this argument", which is a pretty damning criticism. So by comparing David Albert's and Reichenbach's accounts, I'm going to argue that there is indeed a Reichenbach-style explanation worth recovering, and it's going to combine features from each program. I'm going to talk briefly about Reichenbach's explanation, then do the comparison, and then talk about the explanation of records at the end. So Reichenbach gives what's called an explication of the direction of time and of records. An explication involves identifying more precisely what the phenomenon is that's to be explained, and then explaining why that phenomenon is temporally asymmetric.
Reichenbach proceeds by appealing to what he calls the hypothesis of branch structure, and even though he sometimes talks about these as logical posits, they're clearly meant to be empirical posits, which he takes to be well verified. The first of these is that the universe is in the midst of a long entropic upgrade. So the entropy is not just currently low; we're in the middle of a long rise. How long? Well, long enough to encompass all the temporally asymmetric phenomena that we're interested in explaining. The second posit is that there are many branch systems. These are subsystems that are energetically isolated from the main system for a certain period of time, but connected to the main system at their two ends. And Reichenbach is very clear that this isolation can only ever be partial; he talks about them as being quasi-isolated with respect to certain variables. Skipping ahead to the fourth posit, this is that the vast majority of branch systems have one end point at higher entropy than the other. In other words, branch systems do not generally start and end at equilibrium. And finally, the third and, I think, strangest of the posits is that the lattice of branch systems is a lattice of mixture. I'm going to try to explain what's going on here, but I'm not going to worry too much about it, because in the end I think we should get rid of this posit. It's a little bit odd, and it's partly based in the way Reichenbach thinks about probabilities. He thinks the probabilities we have for a single system across time mirror what he calls across-systems probabilities. So say we had the conditional probability of an event B, given an earlier event A, at particular positions in a sequence for a single system; that's meant to be matched by the proportion of systems of that same type that also have an event B at that place in the sequence, given that they have an earlier event A at the corresponding point.
That's a conditional formulation; he also gives a formulation in terms of absolute probabilities. So after some time, the absolute probability of finding a single system in a given state is meant to mirror the distribution of systems of that same type being in that same kind of state. Loosely, the probabilities for the behavior of a single system across time are meant to be mirrored in the frequencies of systems of that same type; it's a kind of frequency-matching postulate. Now, notice this kind of frequency matching is meant to work only towards the future for the conditional probabilities, or only after some time for the absolute probabilities, the reason being that individual systems may have started out in particular states such that, if we look at the evolution towards the past, it's going to look quite unlikely. We'll return to the problem of whether this kind of posit smuggles in a temporal asymmetry in a worrying way. But either way, this is the kind of probabilistic postulate that Reichenbach has in mind. Now, from these four posits, and this is Reichenbach's formulation of them, he derives a fifth: in the vast majority of branch systems, the directions towards higher entropy are parallel to one another and to that of the main system. So here we have the entropy rise of the universe being reflected in the entropy rise of quasi-isolated individual subsystems. And if anything like the Boltzmannian program is correct, we have to derive something like five from posits somewhat like one to four, or from rather similar posits about probability and entropy. Now, in terms of assessing the derivation that Reichenbach gives — well, actually, we'll do the assessment in a second. Let's assume for the moment that it just works, that we can do what Reichenbach wants, which is derive five from, in fact, just three and four. How is this going to explain or explicate the record asymmetry?
Reichenbach's idea is this: say we observe an isolated box of gas at mid entropy. We note that the probability of a quasi-isolated system merely fluctuating into such a state is very low, and so we should reason that the system reached this state by being in a lower-entropy state in the past, when it branched off from the main system. And so we can think of the state of the box now as a record of its past state, as well as a record of the state of the main system at the point of interaction. So in this kind of program, record-bearing states are going to be states of quasi-isolated, entropy-increasing subsystems that are improbable given merely the evolution of the isolated subsystem across time, but highly probable given the state of the main system at the point of interaction. And we have records of the past and not the future, then, because in most cases the interaction point that can render the state now probable lies in the past rather than the future. So that's a very Reichenbachian way of thinking about records. I won't talk about how the same kind of explanation is meant to apply at the level of macrostatistics. The essential components of both of these kinds of explanations are that we have some internal dynamics that move systems towards probable states towards the future and not the past, such that deviations from probable states can serve as records of more improbable states in the past but not the future, and therefore can serve as records of past interactions and not future interactions. Now, part of the reason I wanted to lay this out is that I have a clarification in mind that deals with, I think, a lot of the confusion and quite a few objections to Reichenbach's explanation. Sometimes, for Reichenbach, what counts as a record state can actually be quite large, and what we normally call a record might be just a small part of that system.
We might call these things local record states, that is, states concerning sub-parts of your quasi-isolated system. These don't have to be entropy-increasing. On the relevant timescales, you can have local record states that are low entropy and yet very probable given the system's dynamics, and it can be that high-entropy local record states are very improbable. Here's an example — I think it appears in Sklar, as someone else's example. If you have a supermarket with very efficient shelf-stockers, it might be very probable for the cans to be highly ordered on the shelf and very improbable for them to be in a high-entropy state. So if you have a quasi-isolated system like this supermarket, then a state such as cans scattered on the floor can serve as a record of some past interaction, such as a shopper colliding with the cans. So the take-home message here: while it's important for Reichenbach that the whole quasi-isolated subsystem is entropy-increasing, a recording system can have sub-parts that are not entropy-increasing, and whose low-entropy states are very probable given the system's dynamics, and they can still be records. So that's our basic run through Reichenbach's account. Now we get on to evaluation by comparison with David Albert's posits. Firstly, I'm going to deal with this issue about probabilities. The reason why Reichenbach had his unusual posit about frequency matching — probabilities for systems across time mirroring those of systems of the same type — is that Reichenbach didn't like probabilities in the single case. He thought they were only defined with respect to infinite limit sequences. And he thought we couldn't have probabilities for the behavior of the whole universe when it's near its low-entropy starting point, or on its current upgrade, because this universe only occurs once.
You have to have lots of systems if you're going to get probabilities in play. I'm not going to talk in detail about this, but Reichenbach had his philosophical reasons for rejecting single-case probabilities. These days, though, the broad philosophical positions he held, like empiricism and frequentism, are usually thought to be compatible with accepting single-case objective probabilities. Nor do I think there's anything important about Reichenbach's style of explanation that rests on rejecting single-case probabilities. And indeed, his rejection of these severely hampers his attempt to derive the second law in the way that he has in mind. Reichenbach seems to want to say that the behavior of the universe as a whole is what gives rise to the behavior of these isolated subsystems. So he talks about the direction of time as supplied by the direction of the entropy of the whole universe, because the latter is meant to be made manifest in the behavior of a large number of separate systems. (Just checking — am I in the right place in the slides? Where should I go? Keep going. Here we are, thanks. It's hard for me to read from that screen.) What's weird about this is that Reichenbach's actual derivation of the second law makes no use of the general behavior of the universe as such. Next slide. Recall that Reichenbach is attempting to derive five from, in fact, just three and four. But even though five talks about the directions of the entropy of the isolated subsystems being parallel to that of the main system, nothing about the main system actually features in three and four. They're all about the isolated subsystems.
And so, as a very clear logical matter, Reichenbach isn't able to derive the parallelism between the entropy rise of the universe and that of isolated branch systems from posits that are just about the branch systems. Next slide. Now, I mentioned earlier the problem of whether assuming that the lattice of branch systems is a lattice of mixture — this odd probabilistic posit — was assuming a time asymmetry. Strictly speaking, it doesn't: it can be formulated in a temporally neutral way. But it does presuppose what Sklar calls parallelism. The posit simply presumes that the entropy gradients of branch systems are parallel to one another, without explaining why this is so. And that seems like a significant problem. Next slide. Sklar does suggest that maybe Reichenbach is also assuming that the universe is itself part of the lattice of mixture. I don't think that's quite right, because Reichenbach objects to probabilities for the whole universe. And, as Sklar notes, it doesn't actually solve the parallelism problem; it just makes it worse. The alternative fix is to use probabilities for the whole universe — something like the statistical postulate — to then imply something about the probabilities of branch systems. Next slide. So instead of this odd probabilistic postulate in three, about the behavior of the lattice of mixture, we have something like the statistical postulate: something like a flat probability measure over something like initial states of the universe. And then, instead of trying to derive five from just three and four, we're going to try to derive it from one to four. Thanks.
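That derivation can be illustrated with a toy model. To be clear, this is my own minimal sketch, not a model from Reichenbach or Albert: I use Ehrenfest-style two-halves-of-a-box dynamics, and the parameters and function names are all assumptions for illustration. The point is just that, with a flat measure over low-entropy starting macrostates and ordinary stochastic dynamics, branch systems that start at low entropy overwhelmingly increase in entropy towards the future:

```python
import math
import random

def boltzmann_entropy(k, n):
    # log multiplicity of the macrostate "k of n particles in the left half"
    return math.log(math.comb(n, k))

def evolve(k, n, steps, rng):
    # Ehrenfest-style dynamics: each step a uniformly random particle
    # hops to the other half of the box
    for _ in range(steps):
        k += -1 if rng.random() < k / n else 1
    return k

rng = random.Random(0)
n, steps, trials = 100, 500, 1000
k0 = 5  # low-entropy "branch-off" macrostate: 5 of 100 particles on the left
increased = sum(
    boltzmann_entropy(evolve(k0, n, steps, rng), n) > boltzmann_entropy(k0, n)
    for _ in range(trials)
)
print(f"{increased} of {trials} branch systems increased in entropy")
```

Run as written, essentially every simulated branch system ends at higher entropy than it started, which is the kind of parallelism posit five asserts: the entropy gradients of the many branch systems come out pointing the same way.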
So the idea is that, given macrodynamics of the kind that are usual in our world, which form branch systems with entropy gradients, the microstates of such branch systems are likely to be such that their entropy increases towards the future, where the 'likely' involves probabilities derived from the probability postulate. Note that you could have processes that reliably produce branch systems without entropy-increasing behavior; they would just have to involve very atypical macrodynamics. Next slide. So even if we're just attempting to recover a good Reichenbach-style derivation of the second law, we do need to introduce something like universe-wide probabilities, even though Reichenbach himself didn't want to. Next slide. So in the comparison between Reichenbach and Albert on probabilities, the suggestion is just to go with Albert and have something like a statistical postulate in play. Now, the next point of comparison is whether we should have something like a long entropic upgrade of the universe, which is what Reichenbach has, or the past hypothesis — a particular state that the universe started out in — which is Albert's posit. Next slide. The reason I'm going to suggest we should prefer the upgrade posit is basically a requirement on explanation: the requirement that explanations should identify something like difference-makers, things that are relevant in a certain way, rather than filling in the explanans with things that aren't relevant. And it seems that in the case of explaining the second law, what we need is something like an entropic upgrade, or a low-entropy initial state, not the particular macrostate the universe started out in. So for this reason, we should prefer Reichenbach's upgrade posit, or something like it, rather than the particular state that Albert goes for.
Next slide. And it's worth keeping in mind that this is not to say that the past hypothesis, as a particular state, is unimportant. Indeed, you can use Reichenbach's account to explain how we reason to the past hypothesis and why it plays an important role in our reasoning practices. But none of that means that the past hypothesis has to feature in explanations of temporal asymmetries. Next slide. So when it comes to the entropy posit, I'm sticking with Reichenbach. Very quickly — I'll just brush over this — we do need to have some posit about branch systems. Next slide. And next slide again. Essentially, what's going on here is that if we do want something like the second law to be manifest, we need branch systems to exist, and the existence of branch systems does not follow merely from the other posits: the universe starting in a low-entropy state, the dynamical laws, and your probability postulate. So we do have reason to keep a posit about branch systems in there, if we want something like a manifestation of the direction of time in the way that Reichenbach has in mind. Next slide. So we're keeping that. The final comparison is the account of records, which I think is interesting, so I'm leaving some time for that. Next slide — yes, here we are. In Time and Chance, there are various things that Albert says about asymmetries of records, and I've extracted two accounts that you might take to be his account of the asymmetry of records. One comes from his account of what he calls measurement. Measurement is going to involve inferring from the states of a system at two times to the state of that system, or of another, at a time in between.
And the claim is that we have records of the past and not the future because you have access to a ready state and to the past hypothesis, which is prior in time to all records, but no access to an analogous future hypothesis. So the idea is that measurement involves having states of systems at two times and reasoning to the state of the system at a time in between. Now, if this is truly the account of the asymmetry of records, here are two very quick concerns. One is that it seems to be overgeneral, in that we don't want to say that we're measuring or recording the future when we reason from the state a system is in now, and where it's going to end up, to what it's going to be like at times in between. For example, if you know a system is heading towards equilibrium and is far from equilibrium now, you'll know that it's going to pass through some kind of intermediate state in the future. That has the logical form of measurement, but it doesn't seem to be what we really have in mind by measurement, and it would also get rid of there being a temporal asymmetry of records. So this doesn't seem to be quite the logical form of recording that we're after. It's also, in a certain way, incomplete, in that it doesn't capture all the ways in which our reasoning towards the past differs from our reasoning towards the future. In particular, it doesn't seem to capture how we reason to the past hypothesis, which seems to be an important part of reasoning using records. Thanks. A different account of records might be taken from the end of this section, where Albert talks about a kind of strict form of an asymmetry of knowledge. He says: look, take everything we take ourselves to know about the present and past. Using the statistical postulate and dynamical laws, we can derive everything we take ourselves to know about the future. The future direction works well, but not the reverse.
Take everything you take yourself to know about the present and future, and you can't derive everything you take yourself to know about the past; and conditionalizing on the past hypothesis vastly changes what we can know about the past, but not the future. And this is a kind of strict asymmetry. Now, again, if this is meant to be the account of the asymmetry of records, there are a few immediate concerns. It's still kind of incomplete, in that it doesn't tell you anything about how you reason to the past hypothesis, which seems important. Secondly, it doesn't really identify a particular inferential mechanism. One can agree with everything that David Albert says here about there being this asymmetry of knowledge, and still press him: okay, what is the actual inferential mechanism for reasoning or inferring using records? And finally, there's a concern that at least part of the claim is trivial. If we think about what difference conditionalizing on the past hypothesis makes, of course it's going to make no difference to our reasoning towards the future in this formulation, given that the past hypothesis is something you're meant to know. If you're already conditionalizing on everything you know about the past, you're automatically conditionalizing on the past hypothesis, and therefore conditionalizing on it again won't make a difference to what you know. That's not to say the whole claim is trivial, but at least this important aspect of it seems to be. So it shouldn't be surprising that conditionalizing on the past hypothesis doesn't change what you know about the future in this formulation. Next slide. Now, I mentioned a few times that David Albert hasn't really given a mechanism for how we reason to the past hypothesis.
Insofar as he gives an account of this, he says that we reason to it in the same way we reason to other fundamental laws, because he counts the past hypothesis as a law. So this might involve evolution, hardwiring, experience, empirical study, but there's no more detailed account than that; it's just a kind of black-box, Bayesian-style updating story. And you might disagree, but it seems we should prefer an account of records with the following features: it doesn't apply when we reason about systems heading towards equilibrium — that is, it doesn't apply to the future; it captures how we reason to the past hypothesis, given that that's just a particular state in the past that we could reason to in the normal way; and it identifies a particular mechanism at work, so we know what records are. And also, I'm going to suggest, it should be sensitive to the difference between what we know and what is required for the inferential mechanism to work — it should correctly identify what asymmetric property is required for the inferential mechanism to work correctly towards the past and not the future. And even though, when I first learned this stuff from David Albert, he was forever telling me that entropy was not important in the account of records and the past hypothesis, I'm going to suggest that Reichenbach was right to emphasize entropy: entropy is in fact essential for satisfying these desiderata. Next slide. So the Reichenbachian alternative I was suggesting earlier is that we reason using records when we reason to a past state of an isolated system that makes its forward evolution to its improbable current state sufficiently probable. So we already had these cases where we have a system in an improbable state now, given its dynamics when isolated, and we reason to a past state that makes its otherwise improbable current state sufficiently probable.
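That inference pattern can be sketched numerically. Again, a hedge: this is my own toy model, using the same two-halves-of-a-box setup and made-up parameters, not anything from Reichenbach's text. It compares how probable the observed mid-entropy state is (a) under the isolated dynamics alone, as a mere equilibrium fluctuation, versus (b) given a low-entropy branch-off state a short time ago:

```python
import math
import random

def fluctuation_prob(lo, hi, n):
    # equilibrium probability that k of n particles sit in the left half,
    # for k in [lo, hi]: the chance of merely fluctuating into this state
    return sum(math.comb(n, k) for k in range(lo, hi + 1)) / 2 ** n

def branch_prob(k0, lo, hi, n, steps, trials, rng):
    # estimated probability of landing in [lo, hi] a short time after
    # branching off from the main system in low-entropy state k0
    hits = 0
    for _ in range(trials):
        k = k0
        for _ in range(steps):
            k += -1 if rng.random() < k / n else 1
        hits += lo <= k <= hi
    return hits / trials

rng = random.Random(1)
n = 100
# observed state: roughly 15-25 of 100 particles on the left (mid entropy)
p_fluct = fluctuation_prob(15, 25, n)
p_branch = branch_prob(5, 15, 25, n, steps=30, trials=2000, rng=rng)
print(f"fluctuation: {p_fluct:.1e}   branch-off: {p_branch:.2f}")
```

Since the branch-off hypothesis renders the observed state many orders of magnitude more probable than the fluctuation hypothesis, we reason to a past low-entropy state and hence to a past interaction with the main system; that likelihood comparison is the shape of the Reichenbachian record inference.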
Now, this has some nice features. It doesn't apply to systems heading towards equilibrium: they don't end up in states in the future that render their states now probable. If we accept universe-wide probabilities, it captures how we reason to the past hypothesis, so it's nicely general. It gives a particular mechanism, as described above. And as far as I can tell, it relies only on the difference-makers required for the inferential mechanism to work: systems behaving probabilistically towards the future but not towards the past. So we get the right kind of generality for capturing what conditions are required for this asymmetric inferential mechanism to work. And this is broadly the account of records I think we should have. Thanks. Next slide. Now, there is something that I think Albert gets right: there are ways in which the truth of the past hypothesis, or of some kind of entropic upgrade, can allow us more expansive knowledge of the past, even when we're not using Reichenbachian recording precisely. But I think Reichenbach is equally right to seek an account of records that applies in the general case and can apply in more local cases, and we get something important from that kind of account. Next slide. So we end up with a kind of circularity here, but I think it's a good one. The metaphysical posits — or what Reichenbach would call logical posits — are that the universe is on a long entropic upgrade, follows these laws, has these probabilities, and contains relatively isolated systems in which the entropic asymmetry can be manifest. And then we have a kind of epistemic posit, which is the reasoning practice we employ: we reason to past states of systems that make their forwards evolution towards the present state sufficiently probable.
And that can involve reasoning to past interaction points, in a branch-system way, or it can involve reasoning just within the states of a single system. I mentioned there's a kind of circularity going on here. Essentially, what allows us to use this epistemic reasoning practice is the fact that the universe is indeed on one of these entropic upgrades; but we can also reason to the fact that it's on one of these upgrades, with entropy being low in the past, using this very reasoning practice. Again, I don't think there's any viciousness here. One side points out the posits that get this inferential practice going; the other is the asymmetric inferential practice that is then justified and reliable, given that those metaphysical or logical posits are in place. Next slide. Okay, so certainly this hasn't done everything; there remain a lot of other things to do. You might want to explain why records give us more, or more reliable, knowledge. Indeed, you might want to identify only a subset of Reichenbachian records as records in a more ordinary-language sense. You might also want to investigate and allow for other modes of reasoning towards the past and future, and think about how these combine with the record asymmetry, maybe to deliver further asymmetries. I also haven't said a lot about the apparent sharpness of the record asymmetry; you might want to say something about why it feels to us that knowledge of the past is in principle available in a way that knowledge of the future is not, and that might involve some psychological components. And I haven't said much about the link between what Reichenbach calls micro-probabilities and macro-probabilities.
I've sort of taken for granted, in some of the examples I've given, that we can apply the kind of probabilities we get from the statistical postulate to macrostate behavior as well, and people worry about that kind of connection. But the upshot is that I think there is a Reichenbachian account worth preserving here. The important points are these. First, there is a good Reichenbachian explanation of the second law, but it's going to have to involve something like universe-wide probabilities. Reichenbach really wanted to avoid these, but to get his own kind of explanation off the ground, you need to get them in play. Secondly, something like Reichenbach's posit of a long entropic upgrade is to be preferred over Albert's past hypothesis, given this explanatory criterion of what you might call difference-making, or identifying minimal explanatory posits. There's further work to be done here about whether you want an upgrade or just a claim about the initial state — I've done a little bit more work on that since, and I'm happy to talk about it more — but the important point is that whereas the past hypothesis is a claim about a particular state, the posit doesn't need to be about the particular state; it just needs to be about its entropic features. And finally, everyone I talked to about Reichenbach's account of records was very much not in favor of it and thought the branch hypothesis was clearly not a workable program. But I think it really is defensible once you add in these universe-wide probabilities. More clearly than David Albert does, I think it correctly identifies a temporally asymmetric mode of reasoning — a mode of reasoning that works reliably towards the past and not the future, and one that's undergirded by the entropic structure of the world. And I will finish there. There are some references. Thanks. Okay. Thank you very much.
And we are well on time, so we'll also have time for questions. Okay, that's good. I will perhaps stop sharing, because I guess Christian will have something. Thank you for sending me the file in advance. So now we will have up to ten minutes of Christian responding to this; then she will perhaps briefly reply, and then we will take a couple of questions from the public. So, Christian, please — you have up to ten minutes. Okay. First of all, many thanks for inviting me to comment on this paper, and many thanks, Allison, for writing it. I really enjoyed it very much and learned a lot of new things about the relation between Reichenbach and Albert that I was not aware of, so it was really great to read. I also have to say that many of your criticisms of Albert's view are, to my mind, very well taken, so I don't have anything particular to object to there. And the Reichenbachian account does look like a promising and interesting combination of the best of both worlds, right? Taking something from Reichenbach and something from Albert, and putting them together to get a new view that I think is very interesting. So I think this is a very nice neo-Boltzmannian paper on the direction of time, and I was really happy to read it. But my intuitions with respect to the direction of time are not Boltzmannian. I am rather on the opposite side: I tend to be more primitivist, to think that the direction of time is more fundamental and is what explains many time-asymmetric phenomena. For example, you talk about the increase or decrease of entropy; I can only make sense of these terms by assuming some direction of time. So probably many of my comments come from there, from this assumption. I will try to build a middle ground, to understand the proposal better and to make my points in a more neutral way. So I have some very general comments.
The main problem you are addressing in this paper is the Boltzmannian problem of why certain phenomena are temporally asymmetric, and you frame yourself within the Boltzmannian solution: these temporally asymmetric phenomena are somehow related to the increase of entropy in isolated systems. This is the framework in which you want to give an account of the direction of time. So I want to know what you think about three concerns I have about adopting this view. On page two — of course, you say many different things about how these two issues could be related — but on page two you say, in a very simple sentence, that the temporal asymmetry to be explained is always related to a universal or widespread temporal asymmetry in the entropy of the universe. And I have concerns about two words that appear in this formulation, and I would like to hear what you think about them. My first concern is about the word 'related'. What does 'related' mean? Usually, when you have these different accounts of the direction of time, you think that, of course, all these things are related: you have the record asymmetry, many other macroscopic asymmetries, entropy increase, and so on, and you think that all of them are somehow related to the direction of time. But it seems that you mean something stronger than that. You mean something like: we should reduce, or account for, the direction of time in terms of these things. So it's not just a simple relation; it's something like a reduction. So my first worry is: are you assuming something like a strong reduction between whatever we mean by the direction of time and these entropy-increasing behaviors? And if so, I want to hear whether you have any argument or reason why we should assume this reductionist view rather than a more primitivist view, or at least a more non-reductionist view.
So that is about the framework. My second concern is probably more general. I can understand that the increase of entropy may be the basis for the explanation of many higher-order temporally asymmetric phenomena, like the asymmetry of records, or why coffee gets colder, or whatever. I can grant that; that's fair, and the explanation is fair. But there might be other temporally asymmetric, lower-order phenomena whose relation to entropy is less clear. For instance, besides these higher-order temporal asymmetries, we also have lower-order temporal asymmetries like the decays in weak interactions: beta decay, neutral kaon decay, and so on. It seems to me that it's not so clear how these lower-order time asymmetries can be related to the increase of entropy; they seem to be more fundamental. So my point is that the view might work for some cases, the higher-order cases, but it's much less general than it seems at first sight. So again, my intuition is that there could be a more fundamental connection that is not based on entropy. When you relate the direction of time to the increase of entropy, this is probably just a partial explanation; probably we need a more fundamental explanation that also accounts for these lower-order temporal asymmetries. So again, I would like to hear if you have any thoughts about this. My third concern is about your talk, in many parts of the paper, of the entropy of the universe. I understand how this works in a Boltzmannian universe: there we can talk about the entropy of the universe in a more or less clear way.
Now, when we move to general relativity, or the cosmological standard model, as far as I know it's not so clear what we mean by entropy, because, again as far as I know, we don't have a very clear concept of, for example, the entropy of the gravitational field and how it is to be integrated with the other kinds of entropy that we have. So my point would be that probably a more general view should be developed at some point, in which we move from the Boltzmannian universe to the general relativistic or standard-model universe that we actually have. In that case we probably need to modify some of our intuitions about entropy, to integrate these other cases of entropy. So much for the general framework; these are the concerns I usually have when people talk about the direction of time in relation to entropy, and this is an excellent occasion to ask what you think about them. Now to your proposal in concrete. You have this idea that the Reichenbachian account can explain, on the one side, the entropy rise of isolated subsystems; this is the first part of the paper. The second part is about the asymmetry of records, and the Reichenbachian account can give an answer to both. When you put these two things together, you can give an account, or something like an explanation, of the direction of time. So again, this is a general question about the relation between A and B and the direction of time. It's clear that there is some relation between A and B and the direction of time, but it's not clear in the paper what exactly that relation is. It seems to me that it cannot be a reduction, because there are other things, not appearing in this explanation, that could also serve as a basis for the direction of time.
So it seems that at some point you try to explain the direction of time, but you explain a bit less by explaining these two things, and I'm not sure how you can go from these two items to the direction of time. Probably I missed something in your presentation, but it would be nice to hear how you would draw the connection between these two things and the direction of time, and how, by explaining these two things in the Reichenbachian account, you can also explain the direction of time. Moving on to more particular things: I really like your combination of Albert's and Reichenbach's views, and the way you rework the posits. I find absolutely convincing how you fix the derivation of posit five from posits three and four. But I have a worry about posit one. Probably I misunderstood something, but I'm not completely sure that this posit, that the entropy of the universe is at present low and is situated on a slope of the entropy curve, is enough to give you all the things you want it to give. My main worry is that, yes, you have the gradient, but this gradient is working in a statistical framework with time-reversal-invariant laws, so it seems to be quite neutral with respect to direction: you can go both ways, because the background is time-reversal invariant. So I'm not completely sure how this gradient could ground any other temporal asymmetry without adding something to the structure that gives an asymmetry in one direction. As it stands, I would say you have the gradient, but it's pretty much neutral; you need to add some structure to the gradient, besides the other constraints you may have. So again, probably I missed something, but I would like to hear if you have anything particular to say about this.
Finally, about the asymmetry of records: you talk about Albert's past hypothesis, and you compare Reichenbach's view and Albert's view on many points relating to the past hypothesis. You make several criticisms, but one of them is, I would say, a bit unfair to Albert. The argument is something like: we don't have, in Albert's view, an explanation of how we come to know the past hypothesis, but Reichenbach's approach can give this explanation; therefore Reichenbach's approach is to be preferred, because it can explain something that Albert's view cannot. But I think this is probably not fair to Albert, and you probably know this much better than me. I would say that the past hypothesis is just a hypothesis: a proposition within a system of knowledge that is assessed for simplicity and informativeness in accounting for phenomena. In that sense, I don't see that we need to explain how we come to know it. It seems that we have to justify it on different grounds: to say why it is simple, why it is effective, why it captures information about the phenomena. So the criticism shouldn't be to say: you are saying nothing about how we come to know it. It should rather be to propose an alternative explanation, an alternative best system, that can do the work without the past hypothesis. So I'm not sure Albert has to explain how we come to know the past hypothesis; what he has to explain is why the past hypothesis is a good proposition that appears in something like the best system. You mention this idea of the best system at times, but it seems to me that it plays a bigger role than your paper suggests. So that's just a point, if you have something to say about it.
And finally, I'm sorry, this is a point I probably misunderstood in the paper. In the first part, I understood that the past hypothesis was not necessary: the only thing we needed was something like posit one plus some modification of the probabilities, etc. But in the second part, you say that the Reichenbachian view is better because it can explain the past hypothesis, when I had understood that we didn't need the past hypothesis to begin with. So I was a bit confused. Probably you wanted to say something like: we don't need the past hypothesis to explain the rise of entropy in isolated systems, but we do need it to explain the record asymmetry; or perhaps we don't need it in any sense. I don't see why in the first part it seems to be something we can do without, while in the second part it seems that we do want to explain how we come to know the past hypothesis within this Reichenbachian view. So again, I was a bit confused, but probably you can say something about this. I think that's everything I wanted to say. Again, many thanks for writing this paper; I enjoyed it very much. Well, okay. Yes, thank you. I missed a bit because I had to disconnect and reconnect. But just to mention: first, Alison's talk was based on her paper, which was published last year, so if you need more details you can see the paper in Synthese; Christian was using it to prepare his response. And secondly, of course, the idea was to choose respondents who somehow disagree with the speakers, and that's why he was saying that he is more in favor of primitivism, which is not Alison's view; this is supposed to make the discussion more interesting. So Alison, you have a few minutes to reply, and we will try to finish around 10 or 15.
So after you reply, a few questions, and we finish by 10 or 15. Okay. Oh, thanks. Thanks for the comments, Christian. I think you did a very nice job of finding the middle ground, a good place from which to talk about this stuff. I'll try to make my responses very brief, but I hope to the point. So, regarding why I use the word 'relate', the Boltzmannians relate temporal asymmetries to the entropy rise of the universe. It was very deliberately not 'reduction', actually, for a few reasons. One is that people like David Albert think the origin of the entropic asymmetry is the past hypothesis, and the past hypothesis is also the origin of the temporal asymmetries. But for him there's no direct relation between the entropic asymmetry and these other temporal asymmetries. It's almost a kind of common cause structure, though not causation, obviously. That said, one might want to be a bit more careful about how you define the Boltzmannian program, because in a very loose sense all the programs will have some kind of relation between the entropy rise and other temporal asymmetries. So there is interesting stuff there about what direct relation you want. The other thing is that I tend to steer clear of talk of reduction almost entirely in my work, as I'm a bit suspicious of some concepts of metaphysical reduction. But to move on to your questions: what does 'the direction of time' mean? What is it to explain the direction of time? This is where you do run into some very serious problems. Reichenbach at least gestured at a kind of reduction, which he calls explication: once you identify the five or six core temporal asymmetries that we take to be the ways in which the direction of time is manifest, and once you explain those temporal asymmetries, then you've done the work. But you're right: in the paper I didn't give my own account of this.
And I'm actually sort of suspicious that there is such an account to be had, again for broad metaphilosophical reasons. I suspect our concept of the direction of time is so messy that it would be in some sense a mistake to look for an actual relation in the world that mirrors all the qualities our concept of the direction of time has. So in a way I want to divide the project in two. One part is explaining the actual temporal asymmetries that are there, and there's a lot of good work to do there, work that some people are skeptical can be done, so it's not nothing. But then there are also going to be psychological aspects: why we think of it as being a direction of time, and all that. And that's just not necessarily going to be mirrored in worldly structure. So yes, I'm trying to avoid precisely the thing you're asking me: what is the direction of time, and what is it to explain it? The third question: yes, I don't have much to say about extending the concept of entropy into cases like general relativity, and I believe for gravity it's okay. But you're right that this whole program is going to rely on good accounts of entropy being given for these other kinds of systems. My own thinking is that I haven't seen anything to dissuade me from thinking the extension is possible, but it's not my wheelhouse; I don't have much to say about it, though it's a fair question. I do think I have something quick to say about presupposing a direction of time: I don't think you need to presuppose a direction of time in order to rely on something like the entropic upgrade. Essentially, it's part of the Boltzmannian system, and once you have the upgrade in place, that is what then defines what's going to count as towards the past and what's going to count as towards the future.
And you will have done enough if you have explained why various phenomena behave in parallel. Concepts from our ordinary-language thinking, like 'increase means increase towards the future', are things that, if you're drinking the Boltzmannian Kool-Aid, you just have to give up: you give up the thought that you'd have a prior, independently defined direction. I understand that is going to feel uncomfortable if you wanted to keep those things, but again, that's just how it goes within the Boltzmannian camp. And then finally, for clarification purposes: in terms of explaining how we reason to the past hypothesis, there are two ways it functions in the paper. One is a kind of ad hominem move: David Albert thinks it's important that we can reason to the past hypothesis, because if we couldn't do that, then he doesn't think all his claims about the role it plays would be that important for our reasoning practices. So he thinks it's important, so it had better be something we can say something about. But the main way I'm using it is something like this: if what we're on the hunt for is an important inferential mechanism that works towards the past but not towards the future, then there's a clue that we've left something out if we've left out how we reason to a past hypothesis but to no future hypothesis. And I used to have this kind of back and forth about, say, the universe ending in five minutes' time: well, it would end in a relatively low entropy state, but that's not enough to say that there's a future hypothesis. The important point there seems to be that even though the universe is going to be in a particular low entropy state in the future, it's somehow not the relevant kind of low entropy state.
And the important thing seems to be that our mode of reasoning towards future relatively low entropy states is very different from our mode of reasoning towards past low entropy states. So I don't want it to function as 'your program is necessarily better if you explain this posit', but more as a clue that there's some kind of temporal asymmetry of reasoning you've missed out. But yes, okay, that's everything. Thank you, that was very helpful. Many thanks, Alison. Okay, great. So I can still take one or two short questions. You can raise your hand or write in the chat, or else I can ask something. Jo, go ahead. Yeah, yeah. Thanks, Alison. I just wanted to understand better this idea of yours that Albert's account of the knowledge asymmetry is incomplete because it lacks an account of how we reason to the past hypothesis. Here's what I understood from your talk: you point out that Albert himself suggests that it's akin to our reasoning to any of the laws, and you seem to suggest that that's incomplete or inadequate. But there's this other idea that seems buried in Albert's discussion about how we reason toward the past hypothesis, which is the idea that without it we get mired in an unstable epistemic situation. Given just the dynamics and the statistical reasoning, on the basis of those items alone we're led to disbelieve the evidence, the records, we thought we had for those very things. But if we assume the past hypothesis, we avoid getting into that kind of weirdo epistemic muddle. So I was wondering why that isn't a kind of account of how we reason to the past hypothesis that might be acceptable. Good. No, I thought about this one a lot, actually. I have two reasons. One is that I did try to think about how it would actually work: okay, what are the records we take ourselves to have?
And what would be required to underwrite just those records and nothing else? The worry was that the posit you would posit would keep changing depending on what records you had at a given time, and it wouldn't go back particularly far; it would only go back, say, 500 years, enough to underwrite the records since that time, and then we'd push it back a bit. So the first worry was that it would be a messy way to go. The second worry was: okay, suppose you want to add in a very neat posit, something that's going to keep underwriting whatever records we take ourselves to have. How is that inference actually working? The thought was that you end up in the Reichenbachian spirit anyway: you end up reasoning towards a past state such that the stuff we got to now would seem not like weird flukes or conspiratorial records, but such that everything would be well behaved in the way we thought it was. So the thought was something like: once you start generalizing the minimal thing required to underwrite the reliability of a record towards a more well-behaved epistemic system in general, then you get this more temporally asymmetric style of reasoning to past states that make the evolution towards now well behaved. And I will say, this whole project actually started out with me just assuming David Albert is right and asking what's actually going on here, and then I thought the Reichenbachian account was a nice way of generalizing some of the moves I thought David Albert was making, including the one you're describing. Okay, great. Yeah, that's interesting. Thank you. Thanks. Okay. So we can take a last short question; maybe just something you could answer briefly.
So I didn't understand something. The subsystems are supposed to be isolated, and they increase their entropy, and that's why it appears the entropy of the universe increases. Okay, but what about interaction? Is that what you meant when you were speaking about measurements? Was that the account of interaction? Because isolation and interaction are two opposite things, but in the world they are both present: systems are not just isolated, they also interact, so you should account for both. And is this interaction going to destroy this nice picture which arises from isolation? Because it looks like, when you or your author say that the entropy increase of the universe is explained by the entropy increase of the subsystems, you get your result about the universe by telling just half a story: the half about isolation. You don't tell the other half, which is about how interaction also recovers this entropy increase. Yeah, so it's more the other way around in terms of the entropy: for the Reichenbachian story, something like the appropriate probabilistic entropy rise of the universe is then reflected in the behavior of individual isolated subsystems. But to add in some of the details there: yes, certainly it's never going to be complete isolation, and for the most part it's only going to be quasi-isolation with respect to certain variables. So, except in unusual cases, we should never be looking for completely isolated subsystems. In the case of records, for example, we need isolation with respect to epistemically accessible variables of certain kinds. But the idea is very much that we have to be able to model a system's behavior, with respect to certain variables, as it is in isolation.
And given that expected behavior, when we find deviations from how we would probabilistically expect those kinds of systems to be, the deviations point to the ways the system started out, or the interactions it had along the way. So to my mind the Reichenbachian story very much wants to combine elements of interaction with elements of, again, very limited quasi-isolation. I didn't talk about the standard footprint-in-the-sand example. The idea there is meant to be: you have the beach, the sand, with a lot of interaction with the outside environment, with wind, with waves, with lots of stuff going on, but you briefly bracket off the interactions with humans. Then, when you find an unusual structure like a footprint, unusual only relative to the dynamics of the wind and the waves, you look for an interaction with, again, a system that you had bracketed off, namely the humans possibly walking there. There is a lot more to say about that example as well; you need a lot of assumptions to get those inferences off the ground. But as far as I'm concerned, the story is meant to balance interaction and isolation, and a lot of detail needs to be packed into what 'isolation' means; it's certainly not complete isolation for all variables forever. Yeah, the paper does say the isolation is not complete; I was just saying that it looks like isolation is the norm and interaction is the exception. I mean, even in the example of the supermarket with the shoppers, it's not like the cans on the shelf: it's often not worth modeling and reasoning within that kind of system at the level of the cans. It's much more helpful to reason about the shopping center as a whole, with its agents and shelf stockers, and not to think: here's my relatively isolated can, here's another one next to it, with no interaction.
So again, it's going to depend a lot on what you're trying to do with this account. Insofar as you're trying to recover ordinary inferential practices that work towards the past and not the future, often you're interested in modeling quite large systems with lots of interacting parts, not in carving everything into small quasi-isolated subsystems. Okay, thank you very much. So with that, we have finished this slot. Thanks again to our speaker Alison and to our respondent this time. Okay.