Yes, thank you very much for this introduction, and thank you for the opportunity to talk here. I see it's a big audience, so I'm really happy that the next generation of people will learn about inversion, data assimilation and, in my case, joint inversion. As Alex said, I have a fairly broad interest in seismology, electromagnetics and gravity, and the idea of joint and constrained inversion is to combine different types of geophysical data and come up with consistent models. It's a bit of a shame that the lecture by Malcolm Sambridge didn't happen yesterday, because I am going to assume a basic knowledge of inversion methods; I'm not going to explain inversion basics, and that would have been a nice refresher for all of us. But I hope it is still understandable enough that you can take something out of this. If you have any questions during the lecture, I will take a few breaks in between, because there are different segments and an hour and a half is quite long to wait with a question. We just discussed this: you can't speak during the lecture, because that would probably be a bit too chaotic, but please write your questions in the chat and I'll take a few breaks to answer them. So, I want to talk about a variety of things in this talk: first of all the basic concepts of joint and constrained inversions, where I want to give you a few recipes or ideas, but also a more philosophical perspective on how we view the results of inversions, and especially these joint and constrained inversions.
As the title maybe gives away: as hypothesis testing tools. Hopefully by the end of this lecture you'll understand what I mean by this. I want to start the presentation with a famous quote from Thomas Henry Huxley, who was also known as Darwin's Bulldog because he vigorously defended Darwin's theory of evolution. He spoke of "the great tragedy of science: the slaying of a beautiful hypothesis by an ugly fact," and I think we've all experienced this, when we thought we had a great idea of how something works in the Earth, and then reality came and told us that this is maybe not the case. But one solution to this problem comes from a theoretical physicist, probably one of the greatest minds of the 20th century. Albert Einstein said: "If the facts don't fit the theory, change the facts," which is a very theoretical-physics way of looking at it. One of the points of this lecture is to show you that if you have a negative outcome in your inversions or joint inversions, if the hypothesis you have been working with has been slain, there is maybe another way than changing the facts, and you can still learn something about the Earth. I think that perspective will hopefully be useful to some of you. But before we get into this, I want to do a basic recap, a basic tutorial. As I said, I'm going to assume that you know something about inversion as such, and that you have hopefully used some inversion code on some sort of data, because going into the full mechanics of inversion would take this a bit too far. So, when we want to integrate different data, we have a variety of geophysical data sets and there are various methods to combine them. Integration is a bit of a buzzword at the moment, and very often it involves
comparing models and plotting them on top of each other. But if we want to go a step further and use formal integration methods, I see three main approaches. What I call joint inversion means that you have one big inversion program, one computational program: you put all your geophysical data sets in their rawest form in there, the inversion algorithm does something, and at the end you get out a model, or a set of models, that is designed to explain all the data simultaneously. The second approach, which I will also talk about in this lecture, is what I call constrained inversion. You might not have one of the data sets; for example, you might not have the seismic data but only electromagnetic data, or the other way around, but you have an existing model. So you take that model, or you extract certain features from it, and steer the inversion of the other data set in a direction that, for example, matches the appearance of this seismic model as much as possible. One of the models is fixed, or one of the inversions has already produced a result; you only use that, and you perform the inversion of one data set with this model as additional input. And then, if you don't have any of the models, or you don't have the technical possibility to do either joint or constrained inversion, you can do what is called post-inversion analysis: you use formal algorithms to identify matching structures in the individual inversion results, and this can also be very fruitful. But I'm not going to talk about that here, because I haven't done it very much myself; that's for somebody else to talk about. So what does a typical joint inversion or inversion approach look like? There's not going to be a lot of maths in this lecture, but here is one of the central equations that I think anybody who has done inversion should have seen.
When you formulate an inversion problem, and here I'm writing it as an electromagnetic inversion problem, specifically magnetotellurics (MT), an electromagnetic technique, you typically have two ingredients that you need to define for an individual inversion before you can write the algorithm and solve your problem. These are the two parts of the so-called objective function, the quantity you want to minimize in the inversion. The first is the data misfit term, which I call φ_MT, named after the method, and which in this case depends on the conductivity σ. Then you have the so-called regularization term, which makes sure that the model doesn't contain any spurious structure; typically we use some sort of smoothness regularization, so you want to find the smoothest model that explains your data. You can balance the two terms, the data misfit and the regularization, with the regularization parameter λ, sometimes also called the Lagrange multiplier. Basically all of you will have seen at least some version of this, because these are the basics of individual inversion. Now, if you want to turn this into a joint inversion problem, you get a slightly more complicated expression. First of all, I'm writing this as a joint inversion of electromagnetics and seismology, or some seismic method that depends on some sort of seismic velocity; I'm intentionally being a bit vague here, and of course you could write the same for any two methods, gravity and magnetics or whatever you want. Our objective function now depends on seismic velocity and conductivity. We again have the misfit term for the electromagnetic data, and we now have another data misfit term for the seismic data. And for simplicity I've written the regularization term as depending on both; typically you have them somewhat separately.
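Written out, the two objective functions just described would look roughly like this. The notation is mine, not taken from the slides; φ_c stands for the coupling term that enters the joint problem:

```latex
% Individual MT inversion: data misfit plus weighted regularization
\Phi_{\mathrm{MT}}(\sigma) = \phi_{\mathrm{MT}}(\sigma)
  + \lambda\, \phi_{\mathrm{reg}}(\sigma)

% Joint inversion: two misfit terms, a shared regularization term, and a
% coupling term \phi_c weighted by its own factor \nu
\Phi_{\mathrm{joint}}(\sigma, v) = \phi_{\mathrm{MT}}(\sigma)
  + \phi_{\mathrm{seis}}(v)
  + \lambda\, \phi_{\mathrm{reg}}(\sigma, v)
  + \nu\, \phi_{c}(\sigma, v)
```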
One thing you can see is that the data misfit terms for each method only depend on one of the parameters: one depends on conductivity and one depends on velocity. What we need to define in addition, and this is really what determines what kind of joint inversion we are doing, is the coupling term. You need to define some criterion that says what your expected relationship is between, in this case, velocity and conductivity. This term is then multiplied by a weighting factor that determines how strongly you want to enforce this similarity, this coupling between the different parameters. There are two broad classes of coupling in common use these days. One is called structural coupling. It is based on the idea that even if the methods are quite different and sense the Earth differently, there is only one geology. So if you have an interface between two different rock formations, then it is likely that there is a change in both properties, velocity and conductivity. You can use that as a way to couple your methods: you might not know exactly how those properties change at that boundary, but you can assume, or it is reasonable to assume, that the boundary is present in all the methods that you use. One method that is very popular at the moment is the so-called cross-gradient, published by Gallardo and Meju in 2003. It makes very weak assumptions about how the structures are connected; in virtually any geological situation in the Earth the cross-gradient is probably a fairly reasonable constraint, so it is very widely applicable, and you don't need to put in a lot of prior knowledge to assume it.
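As a concrete illustration, the cross-gradient of two 2-D models can be computed in a few lines. This is a minimal sketch; the function name and the toy test models are mine, not from the lecture:

```python
import numpy as np

def cross_gradient(m1, m2, spacing=1.0):
    """Cross-gradient field t = grad(m1) x grad(m2) on a 2-D grid.

    Following Gallardo & Meju (2003), t vanishes wherever the two model
    gradients are parallel (or one is zero), i.e. wherever structural
    boundaries coincide, without assuming how the values are related.
    """
    g1y, g1x = np.gradient(m1, spacing)
    g2y, g2x = np.gradient(m2, spacing)
    # In 2-D the cross product has a single out-of-plane component.
    return g1x * g2y - g1y * g2x

# Two models that share the same horizontal layer boundary: even though
# the parameter values are unrelated, the cross-gradient is zero everywhere.
velocity = np.repeat([[1.0], [1.0], [3.0], [3.0]], 4, axis=1)
log_cond = np.repeat([[2.0], [2.0], [0.5], [0.5]], 4, axis=1)
t = cross_gradient(velocity, log_cond)
print(np.allclose(t, 0.0))  # True: the boundaries coincide
```

In a joint inversion, a penalty proportional to the squared cross-gradient is added to the objective function, pushing the two models toward coincident boundaries without dictating the parameter values themselves.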
So it's very commonly used for joint inversion at the moment; if you want to know more about it, you should look at the Gallardo and Meju paper and the literature that cites it. The other possibility is to work directly with the parameters, so you can directly relate velocity and resistivity. You can impose a constraint where you say you want a one-to-one relationship between them, or you can define some sort of clustering. But of course this requires much stronger assumptions about what you put into the inversion as the relationship between those two quantities. In some settings, let's say in exploration geophysics, you might have borehole data, or for some materials we have theoretical models of how velocity and resistivity should behave, and you can put that in. But there is always an inherent danger that you are using a wrong assumption, and part of what I'm going to talk about today, this hypothesis testing approach, is how we can deal with the situation where we want to see whether a certain relationship is reasonable. So a parameter relationship provides a very strong coupling between the different methods, much stronger than structural coupling, but of course if it's wrong, then you also get artifacts. I will now show you a set of very simple, hopefully instructive examples that demonstrate both structural coupling and parameter relationships on some very simple models. The test I'm going to show you is the first type of inversion that you can probably do as a student, maybe in a lecture course. Here on the left-hand side I have an electromagnetic test model of three layers; this is, if you want, near-surface geophysics, but the scale is completely irrelevant here. We have an upper layer of 100 Ωm, then we have an anomaly of 1 Ωm, and then it goes back to 100 Ωm.
On the right-hand side, for those of you who might be interested in electromagnetic data, this is the data that would come out of this model. You can see the so-called apparent resistivity curve, which reflects that model in the sense that we start at 100 Ωm, we see a decreased apparent resistivity, an expression of this conductor, and then we go back to 100 Ωm. The inversion I'm running here is, as I said, extremely simple: I assume that I know the thickness and the resistivity of the uppermost layer and the resistivity of the lowest layer, and all I'm looking for is the thickness and the resistivity of the middle layer, so I have exactly two inversion parameters. The reason I keep it that simple is that I can then explore the whole parameter space; I can show you all the possible solutions to the problem as simple parameter plots in the plane. This is what I'm showing you here: the blue region is the region of acceptable models, that is, all the models that fit the data within a specified uncertainty; I think I used a couple of percent here. The true model is the red dot, and the blue region contains all the acceptable models. Again, I hope you have seen these concepts in principle; we would call this space the null space, and any model that lies within the blue region is, from a data misfit perspective, an acceptable model. If you run an inversion algorithm, depending on where you start you might end up anywhere in this blue area, but our goal typically is to get as close as possible to the true model, this red dot. And we can see the typical behavior: a formal analysis of our inverse problem would say that we have an uncertainty in the layer thickness here between 23 and 59 meters, so actually quite substantial.
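This brute-force scan over the two free parameters can be reproduced with a standard 1-D MT forward model, the impedance recursion for a layered half-space. The layer values and the few-percent acceptance threshold below are my own illustrative choices in the spirit of the lecture, not its exact numbers:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability

def mt_apparent_resistivity(rho, h, freqs):
    """Apparent resistivity of a layered half-space (1-D magnetotellurics).

    rho : layer resistivities in ohm-m (last entry = half-space)
    h   : thicknesses in m of the layers above the half-space
    """
    rho = np.asarray(rho, dtype=complex)
    out = []
    for f in freqs:
        w = 2.0 * np.pi * f
        k = np.sqrt(1j * w * MU0 / rho)       # wavenumber per layer
        Z = 1j * w * MU0 / k[-1]              # half-space impedance
        for j in range(len(h) - 1, -1, -1):   # recurse from bottom to top
            Z0 = 1j * w * MU0 / k[j]
            th = np.tanh(k[j] * h[j])
            Z = Z0 * (Z + Z0 * th) / (Z0 + Z * th)
        out.append(abs(Z) ** 2 / (w * MU0))
    return np.array(out)

# Three-layer model in the spirit of the lecture: 100 ohm-m background
# with a 1 ohm-m conductor (thickness 40 m below a 50 m overburden).
freqs = np.logspace(-1, 3, 25)
d_true = mt_apparent_resistivity([100.0, 1.0, 100.0], [50.0, 40.0], freqs)

# Scan thickness and resistivity of the middle layer; keep every model
# whose rms deviation from the "observed" data is below a few percent.
acceptable = [
    (h2, r2)
    for h2 in np.linspace(10.0, 100.0, 19)
    for r2 in np.linspace(0.5, 3.0, 26)
    if np.sqrt(np.mean(((mt_apparent_resistivity([100.0, r2, 100.0],
                                                 [50.0, h2], freqs)
                         - d_true) / d_true) ** 2)) < 0.05
]
```

The `acceptable` list is exactly the blue region of the slide: every (thickness, resistivity) pair that the MT data alone cannot distinguish from the true model.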
We can also see a trade-off between layer resistivity and layer thickness, and if you know a bit about magnetotelluric data, you know that we actually resolve the conductance, the product of thickness and conductivity, and not the two quantities independently. So that's a very basic individual inversion for magnetotelluric data, and now I'm going to pair it with another data set. I again chose something that hopefully some of you have seen, maybe in an introductory lecture: seismic travel time data. On the right-hand side here we have another model. There are the same three layer thicknesses, but seismic velocities typically increase with depth, so that's what I have mimicked here. On the left-hand side you can see the travel times from those different layers, with color coding. Here is the direct wave, which is the first thing we would record, and then you can see the other waves; the black line is the combined first-arrival travel time curve that we would get if we made a seismic refraction experiment. What you can see, and again you might have heard this in a lecture, is that the second layer is only a first arriver over a very short range of offsets. This is what we would call a hidden layer, so the seismic inversion is not particularly good at finding the parameters of this hidden layer. We can do the same analysis as before and plot the range of acceptable models for this kind of inversion, and this is what we get. We have a similar situation as for the MT: we have the true model here and a region of acceptable models. The range of acceptable thicknesses is a bit smaller than for the MT, only 31 to 49 meters, but we also have a trade-off between layer thickness and velocity, so different combinations of velocity and thickness can match the observations.
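The hidden-layer effect is easy to reproduce with the classical flat-layer refraction travel-time formulas. The velocities and thicknesses below are hypothetical values chosen only to mimic the lecture's geometry:

```python
import numpy as np

def first_arrivals(v, h, offsets):
    """First-arrival times for a stack of flat layers (refraction survey).

    v : layer velocities in m/s, increasing with depth (last = half-space)
    h : thicknesses in m of the layers above the half-space
    Returns the first-arrival time and the index of the winning branch
    (0 = direct wave, j = head wave refracted along interface j).
    """
    x = np.asarray(offsets, dtype=float)
    branches = [x / v[0]]  # direct wave in the top layer
    for j in range(1, len(v)):
        # intercept time of the head wave refracted along interface j
        delay = sum(2.0 * h[i] * np.sqrt(1.0 / v[i] ** 2 - 1.0 / v[j] ** 2)
                    for i in range(j))
        branches.append(x / v[j] + delay)
    branches = np.array(branches)
    return branches.min(axis=0), branches.argmin(axis=0)

v = [1000.0, 2000.0, 4000.0]  # m/s, increasing with depth
h = [50.0, 40.0]              # m
x = np.arange(10.0, 1000.0, 1.0)
t, branch = first_arrivals(v, h, x)
# The head wave from the middle layer (branch == 1) is the first arrival
# only over a narrow offset window (~173-179 m here): a hidden layer.
```

Because the middle layer barely appears on the first-arrival curve, its velocity and thickness are poorly constrained by the refraction data alone, which is exactly why its acceptable-model region stays wide.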
So now we put all this together in a joint inversion, in the classical sense. One data set, here the seismics, constrains the thickness a bit better, so I want to utilize that to improve my results and reduce the space of acceptable models, assuming for example that the layer thicknesses are the same. This is, if you want, the simplest form of structural coupling: you have one model and you say, I don't know anything about the velocities and resistivities in those layers, but the boundaries, for example the horizons in a sedimentary environment, should be the same. So this is what we get. Again I'm plotting the space of acceptable models and the true model, and the blue is the projection of the seismic information onto the magnetotelluric information. We can see that we have successfully reduced the range of acceptable models, because the seismics has the better resolution for the depth, which is really what determines the range of acceptable models here, so we have mainly improved the MT in this case. In reality, when you don't have such a simple inverse problem with only two parameters but have various layers, it becomes more complex: it could be, for example, that the MT is a bit better at resolving the properties of the upper layer while the seismics is better at resolving the properties of the lower layer, and the two sort of help each other, and you get a combined model. But I think in this simple example you can see that the seismics really helps to constrain the thickness of this layer for the MT as well.
Now, instead of doing a joint inversion, we can pick a seismic model as a constraint and say: I want to do an MT inversion where the layer thickness is as close as possible to the seismic model. You have a fixed seismic model, and you make the MT inversion fit the MT data but also adhere to the layer boundaries that you get from the seismic model. This is again the same kind of picture: you can see how the orange region stays the same, because that is the uncertainty from the MT alone, while the blue region has decreased even further. When I first saw this, it was even for me a bit of a surprising result: when you do a constrained inversion and then a formal uncertainty analysis... sorry, I didn't mean to go back there, I was just trying to get the chat up as well. (Max, I am very sorry, you don't need to look in the chat, I will do it; during the break it is better to just go forward and not look, because sometimes the questions are quite general and can be discussed later.) Sorry, yep, I just wanted to make sure that I'm not talking over people or speeding past them. So, I was here on the constrained inversion result, and you can see that the range of models is even tighter, so a formal analysis shows us that we have less uncertainty. It might seem counterintuitive, but the explanation is that we are basically taking the uncertainty of the seismic data out of the equation, because we have picked a single model and said: we assume that this model is representative of the truth.
So we only allow small variations from this model, and as such the uncertainty of the MT inversion under this assumption is reduced even further; we get a very tight boundary on our range of acceptable models from this constraint. If you have a model that you think is a very good representation of the Earth, then a constrained inversion can really focus on these features and show you whether the other method can match them as well. So this is a very simple structural inversion, and we can see that there is some interaction between the methods, but in this case it is all driven by the seismics, because for this one layer the seismics really has the better resolution capabilities, and there is not much information going back from the MT to the seismics. This changes if we assume a parameter relationship. A parameter relationship means that we say we know something about the connection between seismic velocity and resistivity. Here I have generated a hypothetical linear relationship between velocity and log conductivity. If anybody here has ever looked at something like borehole data, or even some lab data, you will know that these things tend to scatter quite wildly at times. The true relationship is this blue line, and the true value for this layer is marked by this red dot. The black dots are generated from the blue line, but with quite significant random scatter on them. The other line, what I call the biased relationship, is constructed by saying: given the scatter of this data, and if you don't know the true relationship, these two lines, the true and the biased relationship, are probably both good representations of this data, if you didn't know the underlying true value.
Compared to the scatter of those black dots, this biased line seems like a reasonable fit: you would say it's quite noisy, but the line goes through the points and is quite close to the center. This reflects the difficulty in really estimating these kinds of relationships: we have some ideas of what they should look like, but the details can be a bit different. Now we can look at this in both spaces, because we can easily project from resistivity to velocity and vice versa. The orange region here is the individual seismic inversion again, the range of acceptable models from the seismic inversion alone, and the blue is now the joint inversion under this parameter relationship. We can see that by making the connection stronger, we have created a stronger interaction between the methods, and we have also reduced the space of acceptable seismic models, because at some point the resistivity just exceeds certain values, or deviates too strongly from the relationship, and the inversion says: this is not compatible with our assumptions anymore. So we restrict this even further. If we go back to the MT plot, we can again see orange, the original range; green, the projected complete seismic uncertainty; and blue, the joint result, the intersection of those two sets, the joint uncertainty. We can see that if we consider this relationship as exact, then we get a very tight range of acceptable models; we really focus in on the true model. This is, if you want, the classical view of joint inversion, or the classical goal of joint inversion: if you have very good information, if the pieces that you put into your puzzle are very good, then with a joint inversion approach and a strong coupling you can get really close to the true model with very little uncertainty.
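A minimal sketch of what such a parameter coupling looks like in code. The relationship coefficients, the noise level, and the function names are all hypothetical; the point is only that a least-squares fit to scattered samples can easily return a biased line, and that the coupling enters the joint objective as a penalty term:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" linear relation: velocity (km/s) -> log10 conductivity
a_true, b_true = -0.8, 1.5
v_samples = rng.uniform(1.0, 5.0, 30)
log_sigma = a_true * v_samples + b_true + rng.normal(0.0, 0.4, 30)  # scatter

# Fitting the scattered samples gives a plausible but generally biased
# relationship - the "biased line" of the lecture.
a_fit, b_fit = np.polyfit(v_samples, log_sigma, 1)

def coupling_penalty(v_model, sigma_model, a, b, weight=1.0):
    """Coupling term for the joint objective: squared deviation of the
    model pair from an assumed linear velocity/log-conductivity relation."""
    return weight * np.sum((np.log10(sigma_model) - (a * v_model + b)) ** 2)
```

An inversion that minimizes data misfit plus this penalty with a large weight is effectively forced onto the assumed line; if the line is biased, the whole acceptable region is pulled away from the true model, which is exactly the failure mode discussed next.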
The issue, though, is that if instead of the true relationship I use the biased relationship in my joint inversion, then the situation looks like this. Again, here are the individual inversions. We can see that the seismic information projects slightly differently from velocity to resistivity, because instead of the true relationship we are using something with a different slope. The whole thing becomes offset, and so does the intersection. If we do the joint inversion, we get this really tight space here, and the algorithm would end up somewhere in this blue region. But the problem is that the true model, the true value, is not contained in the joint inversion result anymore. So we get a joint inversion result, yes, we can fit both data sets, and the intersection is this blue region. If we do a formal analysis, and this could be something fancy like Markov chain Monte Carlo, but it could also be some sort of linearized uncertainty and resolution analysis, it would tell us: you have a very good result, your uncertainty is extremely small. Unfortunately, the true model is not part of your solution; it lies outside your uncertainty bounds, so you cannot recover the true model. In the classical, explorative view of joint inversion, that's a problem: you have put in your information to the best of your knowledge, but the result you get is biased. The classical response is either to say, I need to work with weaker assumptions, and that's why the cross-gradient is so popular, or to explore different kinds of possible parameter relationships. After thinking about this for some time, I thought there is actually a third option, and that is, instead of using the joint inversion with a parameter relationship and hoping that it is true.
We turn it around a bit and use it to exclude certain parameter relationships. We could change our parameter relationship even more, and at some point we would get the result that nothing can fit the data and our parameter relationship at the same time. This also gives us information: it tells us that what we are assuming about the relationship between the parameters cannot be, so we can exclude it, we can say this is not a possibility. This is what I'm going to elaborate on now as part of this hypothesis testing approach. I'll talk about this first and then take a minute or two for any concrete questions about these examples, because while they are quite elementary and related to things you've seen, there might be some questions about what's happening here. So, the conclusions that we can draw: with joint and constrained inversions we restrict the space of acceptable models, and of course that's very nice, that's what we want. If the input that we use as a constraint for our inversion is good, we get stronger restrictions than from a joint inversion, because we are eliminating the variability of the seismic model. The coupling method and the parameters of the coupling are crucial and have very strong effects on the results; you can see how, depending on what I used, I got quite different ranges of acceptable models. So there are two possible views that you can take. You can say: I'm using this in what I would call explorative mode, or exploration mode. If you know an accurate or true relationship, then the models that come out of your joint inversion will also be accurate, or true, or representative of the Earth.
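The exclusion logic just described can be phrased as a classical goodness-of-fit test. A minimal sketch, assuming Gaussian data errors and a chi-squared threshold (the function name and the significance level are my choices, not the lecture's):

```python
from scipy.stats import chi2

def hypothesis_rejected(chi2_min, n_data, n_params, alpha=0.05):
    """Reject the assumed coupling if even the best model satisfying it
    cannot explain the data at significance level alpha.

    chi2_min : minimum sum of squared, noise-normalized residuals reached
               by the joint inversion under the assumed relationship
    """
    dof = n_data - n_params
    return chi2_min > chi2.ppf(1.0 - alpha, dof)
```

If the best coupled model's misfit exceeds the threshold, it is the assumed relationship, not the data, that gets rejected: a negative inversion outcome then still tells us something definite about the Earth.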
Of course that's often quite difficult, and we often don't even know how accurate our relationships are, how accurate the things are that we use to connect the different methods. That's where I would say you can switch into what I call the hypothesis-based mode. We specify a coupling, and this is equivalent to formulating a hypothesis that we want to test. Those of you who have done a bit of statistics know that the most powerful thing you can do is specify the hypothesis in a way that allows you to reject it, because that then tells you conclusively that this is not a viable relationship between those different parameters. Okay. Concrete questions: I see there's one.