Now, for this session, we will have two talks, and the next one will be given by Karen Yan. Thank you. I'm going to speak in English today, on tool-afforded causal reasoning in neurophysiology. Thanks to Eichsender and Charles for organizing this workshop. And I'm very happy with how this morning has been arranged for us, beautifully, because Michelle's talk already showed one way out of this philosophical literature: we can reconceptualize top-down experiments and intervention, and then go beyond that to focus on the scientific exploration of causation. Here I'm going to offer another way out of that literature, which is to focus on the role our experimental tools play in causal reasoning, hence "tool-afforded causal reasoning," the title of my talk. To begin with, if you're familiar with the literature on mechanistic explanation, you'll know that neuroscientists rely on designing experiments to perform their causal reasoning, and a lot of philosophers of neuroscience have focused on analyzing different types of experiments. For example, John Bickle and colleagues published a book in which they categorized different types of experiments, and the type relevant to causal reasoning is the connection experiment. In Craver and Darden's 2013 book, the part specifically relevant to causal reasoning is their analysis of experiments for testing causal relevance. But all this literature focuses more on types of experimental design and less on the experimental tools themselves. The only exception is a book published just last year, which recenters the role of experimental tools in general and their relationship to neuroscientific experiments.
What I want to focus on in this paper, and what is not discussed much in that book, is exactly the role experimental tools play in causal reasoning. I have two theses to propose today. The first is that the affordances of experimental tools enable or constrain neurophysiologists' capacities for causal reasoning. Moreover, some causal norms are intertwined with experimental tools. By "intertwined" I mean that without the detailed affordances of these tools, you cannot specify the causal norm, and you are not able to exercise the capacity to satisfy it. If we can find causal reasoning of this kind, where experimental tools enable and constrain scientists' reasoning in the way I will show you, it has a crucial methodological implication for philosophers, which brings me to my second thesis. If philosophers aim to abstract norms of causal reasoning from neuroscientific practice, in line with the "practice turn" many philosophers of science claim to have taken, how should they proceed? This is the methodological question I want to pose. I want to distinguish a theory-first approach from a tool-first approach. On the theory-first approach, which many philosophers in fact follow, you use some theory or framework of causal reasoning, for example the famous interventionist account of causation, to guide your philosophical analysis of scientific practice. You go into some detailed scientific case and start applying the theory to see whether these practices actually provide causal explanations, or count as acceptable, adequate instances of causal reasoning. The tool-first approach says: wait a second, let's not bring a theory along when we go into the practice, into the field.
We just dive into the experimental details and investigate how tools enable and constrain the scientists' reasoning, and we try to abstract the causal norms from the details of the practice. So here is the second thesis of today's talk: in some experimental contexts, and I'm not saying all, it is more beneficial to use the tool-first approach to understand and abstract the relevant scientific causal reasoning, specifically in fields that do not yet have an established causal investigation strategy. For example, in the systems neuroscience I follow, researchers are still figuring out how to put everything together, from genetic techniques to electrical recording to optogenetics and more, in one set of experiments, and they still need to figure out how it all works. So it's very similar to Ray's idea about the scientific exploration of causation. Today I planned to show three case studies, three causal investigation strategies, but due to the time limit I'm going to skip one, because by my argument the real philosophical meat is in the other two. I'm going to show you how causal reasoning is enabled by a recording tool and how it is enabled by a simulation tool, and use these two cases to support the two theses I mentioned earlier. The first case is causal reasoning enabled by recording tools. Here I want to draw a distinction for those who have read a lot of the mechanisms literature: the case studies Craver provides, for example, come from cellular and molecular experiments. The ones I'm focusing on come from neurophysiological experiments. These researchers do neural recording, single-neuron recording, sometimes combined with optogenetics to intervene on single neurons, and they usually use mice. This is their tool, their method, and it's very different from those cellular and molecular experiments.
I put this up front so people know these are actually very different kinds of experiments. One common challenge for neurophysiologists is that they have to insert an electrode into the brain to record a neuron; by inserting it they can break the membrane, so the signal leaks, the data get messy, and they have to figure out how to deal with those noisy, dirty data. One tool invented to address this, by Nobel Prize winners, is the patch-clamp recording technique. What it does is form a tight seal between the electrode and a patch of the membrane. It looks like this: here is the electrode, it attaches to the membrane, and you apply suction to make a really tight seal so that fluid and ions will not leak, and the recording will be very clean compared to the old-fashioned way. This tool is very important because it enables neurophysiologists to collect low-noise signals with a very low probability of artifactual signals. That is the biggest challenge in neurophysiology: making sure the signal used in the analysis is not an artifact. The specific tool I'm going to talk about today is called whole-cell patch-clamp recording, which I'll abbreviate as WPR. It is one configuration of the patch-clamp tool I just mentioned. The difference is that once the seal is complete, they apply another pulse of suction to break the membrane patch, giving them access to the cell's interior, the whole-cell configuration, which is useful for determining what this one single neuron is doing, and when. And with that tool they have the experiment I'll focus on: in vitro, they record from a brain slice, and they record six neurons at the same time.
The key feature is that they record six neurons simultaneously to test exactly which neuron sends an excitatory electrical signal to which. Because they record at the same time and the signals are very clean, they can inject current to activate neuron one, and if they pick up a response at neuron six, they can infer that there is an excitatory connection from neuron one to neuron six. I'll shorten this by calling it the multi-neuron simultaneous recording experiment: an in vitro recording of six neurons at once. Okay, so here's the philosophical punch line. We have this tool; we have this experiment. I'm going to run through scenarios to show you how the tool enables neuroscientists to conduct a specific type of control. Consider scenario one, a typical recording experiment in which they are not using the tool that can seal the membrane and yield clean signals. What do they usually do? They just insert electrodes into the brain slice and simultaneously pick up multiple signals from multiple neurons. Then they analyze the data afterward, applying statistical software to average the data, do some principal component analysis, and so on, using statistics to make the data better, less noisy. That's the typical situation. With the whole-cell patch-clamp recording tool, by contrast, they can perform their causal reasoning directly on the neural signal they pick up, because that signal is supposed to be exactly what this neuron is doing at that time. If we take the tool away and keep only the experimental design, recording six neurons simultaneously, you can still do the experiment, but if your data are not collected through WPR, you are not able to perform your causal reasoning directly on the data.
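To make the contrast concrete, here is a minimal sketch (mine, not the lab's actual pipeline) of the kind of direct inference a clean whole-cell signal affords: check whether the candidate postsynaptic neuron shows a stimulation-locked depolarization well above baseline noise. The function name, threshold factor, and toy data are all illustrative assumptions.

```python
import numpy as np

def detect_excitatory_connection(post_trace, stim_times, fs, window_ms=10.0, k=4.0):
    """Average the postsynaptic trace in a short window after each
    stimulation of the candidate presynaptic neuron; call it a connection
    if the mean depolarization exceeds k times the baseline noise."""
    win = int(window_ms / 1000 * fs)
    segments = np.array([post_trace[t:t + win] for t in stim_times])
    sta = segments.mean(axis=0)                      # stimulation-triggered average
    baseline = post_trace[:stim_times[0]]            # pre-stimulation noise
    peak = sta.max() - baseline.mean()
    return peak > k * baseline.std()

# Toy data: a low-noise trace with a 1 mV EPSP after each of 20 stimulations,
# standing in for a clean whole-cell recording.
fs = 10_000                                          # samples per second
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(fs)               # ~0.01 mV noise floor
stim_times = np.arange(1_000, 9_000, 400)
for t in stim_times:
    trace[t + 20:t + 60] += 1.0                      # stereotyped EPSP, in mV
print(detect_excitatory_connection(trace, stim_times, fs))  # prints True
```

With noisy extracellular data, by contrast, this direct read-off is unreliable, which is why scenario one needs the after-the-fact statistical cleanup described above.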
So in this sense, the combination of the multi-neuron simultaneous experiment and WPR enables the scientists to do this specific causal reasoning about whether neuron one has an excitatory connection to neuron six. In this sense, I want to argue, the causal reasoning is enabled by the combination of WPR and the multi-neuron experiment. Let me summarize what I said, and then propose the intertwined causal norm in these recording experiments. Normally, and this is 101-level common sense for anyone trained in such a lab, you do not reason like this: you don't see information flow from one neuron to another and immediately say there is a causal connection, because there are so many confounding factors you would have to control in order to make that claim. But with this tool combined with this experiment, in the right context, they actually can use this norm, which would be naive for a beginner elsewhere: if you see the connection, you can safely claim that it is a causal connection. The second case I'm going to show you is causal reasoning enabled by a simulation tool. Before I get there I have to skip some detail, but I still need to tell you what happened in between. Skipping episode two: the team arrived at a causal hypothesis, and now they need to test it. The problem is that the actual brain circuit in the mouse is very complex. How do you test the hypothesis when you don't have a tool to intervene on a very complex neural circuit? Here the simulation tool comes in to help them do their work. I want to show you that this scientific team's causal reasoning is afforded by the tool they use, which allows them to construct actual and non-actual network models of neurons.
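The comparison the team runs on these network models can be sketched roughly as follows. This is my illustration of the logic, not their model: the weights, threshold, and cutoff are invented. For each network's set of unitary EPSP amplitudes, compute the minimum number of synchronously active presynaptic neurons whose summed depolarization reaches firing threshold.

```python
import numpy as np

def min_neurons_to_threshold(uepsps, threshold_mv):
    """Smallest number of synchronously active presynaptic neurons whose
    summed uEPSPs reach threshold (best case: strongest inputs first)."""
    total = 0.0
    for n, w in enumerate(sorted(uepsps, reverse=True), start=1):
        total += w
        if total >= threshold_mv:
            return n
    return None  # the network cannot reach threshold at all

rng = np.random.default_rng(0)
actual = rng.lognormal(mean=-1.0, sigma=1.0, size=200)  # skewed uEPSPs (mV), illustrative
mean_net = np.full_like(actual, actual.mean())          # every connection set to the mean
big_net = actual[actual > np.quantile(actual, 0.9)]     # keep only the strongest connections

threshold = 20.0  # summed depolarization needed, illustrative
for name, net in [("actual", actual), ("mean", mean_net), ("big-uEPSP", big_net)]:
    print(name, min_neurons_to_threshold(net, threshold))
```

With a skewed weight distribution like this, the mean network needs far more synchronously active neurons than the actual one, while the big-uEPSP network tracks the actual network closely, which is the qualitative pattern the team reports.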
The causal norm intertwined here involves a specific kind of simulated intervention they can perform with this tool. Tool-afforded counterfactual reasoning is what I want to elaborate through this simulation case. If you want to intervene on a very complex neural network in the brain, it is too complex; and even if you want to do counterfactual reasoning with the data set they have (they recorded so many brain slices, six neurons per slice, so the data set is massive), our brains are just not powerful enough to do that kind of counterfactual reasoning unaided. You need a tool, and the tool is a simulation tool. So what they do, of course using a computer, is use the original experimental data to construct a model network in the computer, which you can think of as a computational analog of the actual network in the brain. Then they simulate another network, which they call the mean network, in which they reset the excitatory synaptic connection strengths to a specific mean value they set; and a third, the big-uEPSP network, in which they remove many connections and keep only the excitatory connections above a certain strength. Why does this help them test the hypothesis? Because this is the signature feature of the layer-four neurons that they think is probably responsible for driving the activity they found in that brain circuit, and they want to see the differences among the three networks. So they compute the minimum number of synchronously active neurons required to drive further electrical activity. And this is their result: the analog of the actual experiment required roughly 30 neurons; the mean network required at least double that; and the big-uEPSP network
is almost the same, close enough. So they use the simulation result to support their hypothesis: yes, it is mainly the layer-four neurons that drive the further activity in the network. Okay, so now comes the discussion of this specific type of simulated intervention I found in my case study, and how it connects with the traditional philosophical literature on interventionist causation. This causal reasoning, or simulated intervention, appears to have the interventionist spirit, but you actually cannot use that philosophical account to understand it. Why? Let's go back to the famous diagram from Craver. The basic idea is that there is an intervention I, what you want to establish is that X causes Y, and you use the intervention to support that claim; four conditions on the intervention are not allowed to fail. The problem is that in the experiment I just showed you, they change or remove the excitatory synaptic connections in their simulated networks, the mean network and the big-uEPSP network. This looks like it violates one of the conditions, I2, because the intervention operates at the level of the whole network, yet it is critical for them that it instantiates the kind of intervention they want at the level of individual neurons. The intervention is done on the network as a whole: in the big-uEPSP network, for example, they intervene on the whole network so that only the layer-four neurons' signature feature is kept, and then see whether they still observe the effect. That is their reasoning; let me skip this slide. I think the reason they can perform this type of intervention is that the simulation tool affords 100% computationally precise
control, which is not available in so-called wet experiments, and that allows them to do a kind of intervention not typically captured or imagined by the philosophical literature, which tends to think about intervention in more common-sense scenarios. So I want to propose another intertwined causal norm based on this simulation experiment. In this case there is a simulated intervention I acting on X, which here is the amplified uEPSP structure of the network, with respect to Y, the number of neurons required to drive further electrical activity in the network. The simulated intervention I on X is a tool-afforded simulated change in the value of X with respect to Y. Here I am basically readjusting the typical formulation of the interventionist account by making "tool-afforded" part of the necessary features, plus a more contextual condition: the acceptability of the tool-afforded simulated change is a context-dependent feature, depending on the experimental context and the kind of computation involved, because this is just one case of how simulation tools are applied in neurophysiology, and there are many different types you can discover in scientific practice. Nonetheless, the key idea is that this norm builds in the tool-afforded feature. Okay, so the take-home message: if my analysis is correct, then the cases I presented are better approached tool-first. In such cases, philosophy's job is not mere description; philosophers can actually refine philosophical theories of causation by carefully examining the contextual and procedural details of the exploratory path the frontier scientists are engaged in, as they gradually improve the causal norms over time. Those frontier scientists do not have an already-established consensus about how to go about their causal reasoning; they are exploring and figuring things out, and
in that case the tool-first approach is better, and we can help them abstract the general patterns they have found successful in their practice. The theory-first approach still works in some cases; I'm not denying that. For example, if you go into the clinical literature on RCTs, it's pretty clear you can just apply a theory of RCTs to understand the causal reasoning, and that is how they understand themselves as well. But if we keep the two approaches apart, I think it gives more room for philosophers who work on scientific practice to position their work in this direction. Thank you.

Thanks, Karen, very stimulating work. I wonder if you would agree to the following picture, though I hope you won't. Okay, thank you for telling us this is a trap question. Anyway, the picture is the following; it's about the domain-neutrality issue. I wonder if you would agree that there is a set of ideal principles for ideal causal reasoning, about how a causal claim is ideally established, and that this set is domain-neutral and is used in different contexts. For example, statistical software (I assume they use R or something like it) contains algorithms that are basically based on these principles of ideal causal reasoning. But then we have the problem that in various sciences ideal causal reasoning doesn't work, because the ideality conditions are not satisfied: you cannot intervene in the way the principles of ideal causal reasoning would require. So scientists come up with all kinds of fixes for the problems of their causal inferences, problems due to the fact that they are not in the ideal case. However, it could still be, and this is the different part, that the fixes applied by scientists will be, how do you call it, tool-afforded. I think "tool-afforded causal reasoning" is a great concept, fabulous; I'm totally on board with that. I just wonder if you would agree that
there is nonetheless a set of ideal causal reasoning principles to which all these different tool-afforded methods, as it were, aspire. It's a kind of regulative ideal that scientists try to get close to; they will never reach it, their methods will always suffer from defects and always be non-ideal, but still they all use the same ideal as a sort of norm they try to approach. Now, is that something you would agree with? I'm kind of hoping that you're not.

I'm not. You're not? Great. I can give a longer answer, but the answer is no, of course no. Coming from a more practice-oriented position, I want to pose the question back to those who were hoping I'd say yes; I know a lot of philosophers are probably hoping I'd say yes to Michelle's question. We need to think about what the use of having that ideal would be in the first place. Is it good for scientists to have such a cross-domain, very unifying ideal, something really instructive for doing their work and communicating with each other? I doubt it. Okay, maybe it's good for philosophers, because we like unified theories. But if we as philosophers really want a more fruitful relationship with scientists, to show that our philosophy can genuinely engage with, possibly improve, and help their practice, does this unifying ideal really help us do our job as philosophers? I also doubt it.

One quick follow-up. Yeah, that's absolutely fine. I just wonder how you then explain the fact that statistical software like R, and its causal inference tools, can be used in very diverse disciplines. They're used by neurophysiologists as well as by economists and social scientists. So these tools, and their machine implementations of causal reasoning, seem to be very versatile. How would you explain that?

Yeah, "versatile" is the key. When we think about these statistical tools, I think we need to refrain from thinking of them as really
off-the-shelf applications, like regression analysis straight out of the box. An important part of the practice is how scientists interpret the results of their statistical analyses, and that interpretation is where they bring in their background knowledge and their implicit norms, including the specific characteristics of the system they are investigating; a mouse versus C. elegans makes a difference. So the fact that those statistical tools apply in so many places is very welcome, but it doesn't mean the causal reasoning looks the same just because they apply the same software.

May I ask a two-part question? As long as it's not a trap question. The first part is: how do you, as a philosopher, study scientists' working tools? I go to their lab; I directly observe how they use them. That's how I do it. The follow-up, then: I get pretty itchy when someone draws these kinds of conclusions based on the tools, especially when you are careful to say that a lot of this depends on the affordances granted by the tools, by which we mean that the tool, taken from the context in which it was originally invented, is being re-adapted to different experimental contexts. In which case, I don't think you're talking about tools; you're rather talking about methods and methodologies. And this is the point that was made earlier: method and methodology are all talk and art, and have less to do with the nature of the tool itself. So, to your point that they're not just taking some statistical tool off the shelf: the reason they're not taking it off the shelf is that they have conversations and debates about the particular method required to make the tool answer the kinds of questions they want answered. So I'm basically encouraging you to say, yes, I agree with the overall analysis, but the tool needs to fade into the
background a little bit, to avoid accusations of being an instrumental determinist, et cetera.

Instrumental determinist? Can you elaborate on that term a little? It's the view that the instrument determines the outcome of the argument. Because what you're actually saying is that the scientists are engaged in methods talk and have overarching methodological interests or methodological preferences; there are several such nouns floating around. So shifting from "instrument" to "methods" or "methodology" would, I think, put you on safer ground and give you opportunities to look at how, in a given experiment, a tool is wielded.

I see; I have a hard time with that term. You reason from the fact that I said "affordance" to the conclusion that this implies determinism. If there is a real philosophical difference here, I think it is the relation between affordance and the instrumental determinism you worry I might be accused of; the rest may be a merely conceptual issue, namely that what you would like to call a method I like to call a tool, and that's a boring issue. This one is real, though, because I don't want to be accused of committing to instrumental determinism. The way I use the concept of affordance, I think, already shows that I'm not. Take the patch-clamp tool; let's set the debate aside if you think that's a simplistic example. The tool was not invented for neurophysiology, of course. Yes, that's my point: it doesn't have that invented-here status, and it moves between contexts. But that is the nature of neuroscience: neuroscience never had its own tools to begin with; that's just what neuroscience is, and that is what the neuroscientists I work with keep telling me. It's very interesting: even now, neuroscientists don't only have their own tools; they always borrow tools from
other fields. But to say that because a tool comes from another field you are not using a tool, I think that goes way too far. I'm not saying they're not using tools. I'm saying that as they bring in a new tool, a new instrument, let's say, they need to engage in discussions about how to use it, and what they are doing there is methods talk; that's how I would put it. Okay, if I can just reformulate, because there seems to have been a problem for the last two minutes: for you, a tool is an element in the argumentation, in the general discussion; it's not a thing with certain intrinsic properties of its own. So you want me to say that when the neuroscientists adopted this patch-clamp recording tool to record mouse neurons, I'm talking about methods and not about tools anymore? That's what you want to suggest? Think about it, and if you can, send me a reference to clarify this notion of instrument versus method; the differences are very important. Thank you.

Richard? My question is this. I think the tool-first versus theory-first choice gives a very clear picture depending on whether or not you have sufficient deconfounding methods. Scientists bring instruments or tools into, for example, neurophysiology because they don't have a way to deconfound what they want to deconfound; there are a lot of potential confounders. You mentioned confounding once or twice, and I think that's the major thing. Since they have no established way to deconfound those potential confounders, tools or instruments are essential. In fields like RCTs, by contrast, you can use theory first because almost all the potential confounders have been found, so there's no problem using the theory. But neurophysiologists have no way to deconfound, so they create instruments to try to find ways to deconfound, or even to detect potential confounders. So in that sense, I think the tool-first approach is actually a way of trying to find deconfounding methods. And because the field is at such a frontier, finding suitable
instruments, or even inventing them, is essential. Yes, I agree; basically it's for deconfounding, and that fits nicely in my context. Okay, thanks for the suggestion; I'll go home and think about it.

I have a question. I very much like the idea of looking for different kinds of counterfactual reasoning, especially because the counterfactual account faces this problem: what is the relevant kind of information we should be looking at? We have ways of assessing similarity and significance, and then we can talk about the possibilities. But I think there is another side to using counterfactuals in causal reasoning, namely: what about certain possibilities which we cannot measure, because we lack the right tool? If you have some of these counterfactuals, maybe they point beyond what the current tools allow; that would be another use of counterfactuals, in addition to the tool-afforded one. What do you think about that kind of counterfactual without a tool, and what would you as a philosopher say about it?

Interesting point. If I encountered that, I would point it out to the scientists. I don't have a full answer to your question, sorry, but it's an interesting question. I guess the question presupposes that there are such things; if there are, I feel the scientists would usually have to acknowledge that they still need to control for something, still need to do more than what they currently claim, or try to invent new tools, which happens in a lot of cases. I have an example. In neuroscience, a lot of people use one particular statistic for their causal analysis, but it's not really ideal for analyzing neural connections. So some of them, again borrowing as they always do, borrow Granger causality, which was invented by economists, and adapt it; it involves some adjustment, of course, and they apply it to their neural network analysis. So maybe that's a case where they found some counterfactual they needed to deal with, and they just looked for new tools. Thank you for your question.

Just to plug one of our ex-PhD students: if you're interested in this question about the invention of tools and its relation to model empiricism, there is a book from this year by one of our ex-PhDs that is exactly about this: when you have a counterfactual, you try to do an experiment. I was reading that book. What's the name? The book is called Model Empiricism. So, let's thank our speaker.