All right, good afternoon, ladies and gentlemen. I know you have a very intense schedule this afternoon. This is the fourth lecture. And I'm actually quite terrified to be the last one standing in between you and a pool party. But I'm still happy to be here to talk about this topic, even though it is often considered to be a dry one. It's about measurement and economics. So we will talk about some methodological problems. And it is often considered to be dry, but I'll do my best to make it light and lively. So let's jump into the lecture. This is the plan. I will give a brief introduction to the topic. And then I will talk about the program of modern econometrics, which is one of the movements in economic science that has really pushed this idea that science is measurement. So this is related to the title of the talk. And as I will try to show, one of the most important elements in the program of modern econometrics was this attempt to measure and quantify value, welfare, or utility. So I will then, in the third part, go into some examples of the standard analysis in utility and welfare economics. And then I will, of course, criticize them. And at the end of the lecture, I want to convince you that econometrics is not all bad. There are parts of econometrics that are still useful. Econometrics and econometric methods are particularly useful when we understand and use them as descriptive tools. And the reference for this talk is a recently published paper of mine in the European Journal of the History of Economic Thought. This is about Paweł Ciompa and the meaning of econometrics. Very interesting read, so some elements you will find there in more detail. And this paper is actually a good example of a roundabout production process because I wrote the first draft in 2016 when I was a fellow here at the Mises Institute. And now it got published, so it took some time. But I'm very happy that it is out. All right, so let's jump into the content of this lecture.
So there are basically two main developments in the economics of the 20th century. And this is not me speaking. This is the generally perceived history of economic thought. There was a very influential development in theory, which is, of course, the emergence of Keynesian economics, what is sometimes called the Keynesian Revolution, with John Maynard Keynes and the publication of his General Theory in 1936. And then there is an important development in methodology. And this is related to the emergence and the rise of modern econometrics. Around the same time, also in the 1930s, the Econometric Society was founded, one of the most influential economic societies in the world still today. And the most influential branch of economics in the post-World War II era was really at the intersection of these two developments. We had, for example, the large-scale Keynesian macroeconomic models in the postwar era to improve and plan the economy and rebuild the economies after the war. And in my assessment, the Keynesian Revolution was important, of course. But arguably, the methodological development was even more important and more lasting because it is not only tied to Keynesian economics, but it really has influenced all major branches of economics today. So this is what we want to talk about in this lecture: the methodological changes that happened because of the rise of modern econometrics. These are the two gentlemen who are closely connected to the rise and the emergence of modern econometrics. They are not the only ones. There were many economists, very famous economists, involved in this movement. In America, for example, Irving Fisher was a founding member of the Econometric Society. In Austria, we have Joseph Schumpeter, who was not really active in pushing the agenda himself. He did not change economic analysis towards the ideal that econometrics postulated. But he encouraged, of course, his fellow economists.
And among those, most importantly, were Ragnar Frisch from Norway on the left and Jan Tinbergen from the Netherlands on the right. And they are really the most important economists in this development, which is shown by the fact that they were the first recipients of the Nobel Memorial Prize in Economics when it was first awarded in 1969. Ragnar Frisch was the one who came up with the vision of what econometrics should be. And Jan Tinbergen was really one of the first to apply it in large-scale models in the inter- and post-war period. And so this is why they won the Nobel jointly. According to Olav Bjerkholt, who is the most important Frisch expert these days, Ragnar Frisch deserves a lasting place in the history of economic thought just because of the opening sentences of this 1926 paper that he originally published in French. This is the English introduction, where he first defines in the modern sense what econometrics is all about. And he says that intermediate between mathematics, statistics, and economics, we find a new discipline which, for lack of a better name, may be called econometrics. And then he moves on to declare the essential, most important goal of econometrics, which is to turn economics into a science in the strict sense of the word. And this means, of course, a science modeled after physics and astronomy. So he wanted to apply the natural scientific method, or natural scientific methods, to economics and thus transform economic science. In his 1926 paper, he argues that there are really two aspects to accomplishing this. There is what he calls the theoretical quantitative aspect and then there is the empirical quantitative aspect. So under the theoretical quantitative aspect, he understands the quantification, the mathematical reformulation of economic theory, and the formulation of economic theory in terms of at least potentially measurable magnitudes and variables.
And we cannot really use things that are unmeasurable, unobservable, and not quantifiable. We have to boil economic theory down to something that is measurable, observable, and can be treated mathematically. So that's the theoretical quantitative element, the reformulation of economic theory. And then the related aspect, the empirical quantitative element, is when we try to test empirically the quantitative theoretical propositions that we've come up with. And yeah, a lot of the developments in modern economics are really in one or both of these camps, right? Think about the IS-LM formalization of Keynesian economics, which fits into the theoretical quantitative aspect. Think about the large-scale Keynesian macro models. They are empirical quantitative attempts to test Keynesian theory. And this whole enterprise was really quite successful, according at least to Jan Tinbergen himself, who said, very interestingly, that unlike the attempts around 1838 by Cournot, a French economist, and around 1870 by Walras, Jevons, and Menger, which did not succeed, the third wave of quantification was successful. So this is really interesting, right? There were, of course, attempts before to quantify economics, to turn it into a science in the strict sense of the word. That he lumps Menger in with Jevons and Walras is really surprising. Menger, of course, was not really involved in the business of formalizing economic theory mathematically, not at all. But Jevons and Walras were, so they can be considered precursors of the modern econometric project. And you see that in the works of Frisch, for example, who references both Walras and, most importantly, Jevons. In his 1926 paper, he refers to Jevons' dream as one of the main goals of econometrics. And the dream that Jevons expressed in his main work was the quantitative assessment of the marginal utility of goods, of changes in the marginal utility of goods. That was the goal.
That would be great if we could do that. And after a career as the most successful Norwegian economist, Frisch, after receiving the Nobel Prize, said that, well, yeah, it's not a dream anymore. In 1970, he said, we accomplished Jevons' dream. We are now capable of measuring marginal utility. You might wonder, how did that happen? How did the dream come true? And yeah, if you look into the development of economics in the 20th century, we have, of course, a mathematization of economic theory. We have a mathematical axiomatization of microeconomic theory, for example. And you learn about this under terms like completeness, additivity, and transitivity of preferences in microeconomic theory. And if you study economics at the university level, you learn about utility as a mapping of multi-dimensional bundles of economic goods to a cardinal scale. And it is really a cardinal scale, even though in the first chapters, it's always introduced as ordinal. But once we get to the important applications, it's always cardinal. So don't be fooled by them introducing it as ordinal and then moving on to analyzing it in a cardinal manner. The question is, of course, how do we define or how do we find out about this mapping? And Frisch himself said, well, it's of course difficult, but in principle, we can ask questions, right? We can ask questions, we can interrogate human beings, and we can gain experience par interrogation, as he called it in the paper he published in French. And when people tell us what they prefer, we have at least, in the first instance, an ordinal ranking of what they like more and what they like less. How do we get to a cardinal measure? Well, Frisch appealed to everyday experience. That's what he literally said. By an appeal to everyday experience, we can abstract from the ordinal ranking that we get from interview data and come to a cardinal measurement of utility.
And once we have that, we can, of course, move on and do scientific utility and welfare analysis. So if you look at all of this, you realize that the scientific basis seems to be very thin. It's not very convincing, but it is also clear that if this project were to succeed, what we could do would actually be assessing and maximizing, potentially, total welfare, social welfare, and engaging in economic planning. And the entire econometric project is really tied up with a movement in economic policy towards planning, right? That is what made it so interesting and appealing to policymakers, because it was useful, or it promised to be useful, in economic planning. So we can move on to standard utility and welfare analysis, welfare economics. Let's first define what it is, yeah? These definitions are drawn from Rothbard, but I think mainstream economists would agree with them. Utility theory analyzes the laws of value and choice of individuals. So utility theory is really the basis for all the economic theory we derive from it, for the entire framework of economic theory, if you like. And welfare economics is when we look at the interplay of individual values, individual choices and actions, and try to draw scientific conclusions about the desirability of different alternatives. We want to draw conclusions about whether one economic intervention is preferable over another, whether one institutional setup is preferable over another, and so on, scientifically. That's the goal. And that means, for example, we want to assess the welfare implications, the changes in social utility, that emanate from taxation, from subsidies, from price controls, from monopoly, and from partial and potentially complete economic planning. How do they improve, or not, the social welfare of society?
And since we have to talk about measurable elements, it is no wonder that standard economics focuses not really on subjective value, but on something that is actually measurable, which is money and money prices, and in particular, the reservation prices for goods. Here you have a random selection of male first names, and we think of these men as potential sellers of some economic good. Since you have heard a lecture about minimum wages just before, let's think of these as reservation prices for some labor service. So our Joe here, the first one, has a reservation price of two, and Sean, the last one, has a reservation price of 10. This is a reflection of their opportunity costs. So Joe is the most efficient provider of whatever the service is, and Sean has very high opportunity costs. He has maybe something more important to do, that's why he would charge a higher price, or maybe he's just so unproductive in terms of the time he needs to do it, and so on. And let's add another set of random names, female this time, which represent the reservation prices of potential buyers of the product. We have Marty, Felicia, Pat, Susie, and Christy, with reservation prices of 12, 10, 9, 8, and 6. So the reservation prices of the buyers are the maximum prices they are willing to pay for the service. The reservation prices of the sellers are the minimum prices they would accept when selling the service. And of course from this, we can derive the standard supply and demand schedule, right? The supply schedule is just an ordering of the sellers' reservation prices from lowest to highest. We add all of these reservation prices to our diagram, and we have, magically, our supply schedule. And we can add to this, of course, the demand schedule derived from the reservation prices of the potential buyers, and we do the same thing. We add them, this time in descending order, and we have our demand schedule.
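The construction just described can be sketched in a few lines of Python. The endpoints ($2 for Joe, $10 for Sean) and the buyers' prices come from the example; the middle sellers' names and reservation prices (4, 6, 8) are assumptions chosen to be consistent with the surplus totals quoted shortly afterwards.

```python
# Sellers' reservation prices: minimum prices they would accept.
# Only Joe ($2) and Sean ($10) are from the lecture; the middle
# names and values are hypothetical fill-ins.
sellers = {"Joe": 2, "Bob": 4, "Tom": 6, "Bill": 8, "Sean": 10}
# Buyers' reservation prices: maximum prices they are willing to pay.
buyers = {"Marty": 12, "Felicia": 10, "Pat": 9, "Susie": 8, "Christy": 6}

supply = sorted(sellers.values())                # ascending: the supply schedule
demand = sorted(buyers.values(), reverse=True)   # descending: the demand schedule

# Equilibrium quantity: how many buyer/seller pairs can still trade, i.e.
# how many positions where the q-th most eager buyer values the service
# at least as much as the q-th most efficient seller asks for it.
quantity = sum(1 for d, s in zip(demand, supply) if d >= s)
price = 8  # the lecture's equilibrium price, where demand meets supply

consumer_surplus = sum(d - price for d in demand[:quantity])
producer_surplus = sum(price - s for s in supply[:quantity])

print(quantity, consumer_surplus, producer_surplus,
      consumer_surplus + producer_surplus)       # 4 7 12 19
```

With these numbers the sketch reproduces the figures used in the next step: four trades at $8, a consumer surplus of $7, a producer surplus of $12, and a total of $19.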
So now, from this, we obtain an equilibrium, a market equilibrium. This is where demand and supply intersect. This is the price at which the quantity demanded and the quantity supplied are the same. Here in our situation, we have an equilibrium price of $8 and a quantity of four. And once we have this, we can assess the social utility of the situation, the social utility or the mutual benefit that is generated through these interactions. And this is based, firstly, on the concept of consumer surplus. Consumer surplus is the difference between the reservation prices of the buyers and the actual prices they have to pay for the product. So we go along our demand schedule and look at the difference between the prices that they are willing to pay and the prices they have to pay, and we can quantitatively assess the consumer surplus. We have a consumer surplus for Marty of $4, then of $2, of $1, and so on. And our marginal buyer has no consumer surplus. Her reservation price is equal to the price paid. And of course, the buyer that is not actually buying has no consumer surplus either. And so we end up with a quantitative assessment of the benefit that is generated for the consumers in this situation. And conceptually, we can do exactly the same for the suppliers, right? We look at the difference between their reservation prices and the prices they actually receive, and we can assess the producer surplus in this situation. And we can quantify it. We can give a quantitative measure to the consumer and the producer surplus. The producer surplus would be $12 in this situation, the consumer surplus $7. And we end up with an assessment of total utility, total welfare, which is the sum of consumer and producer surplus. In this situation, it's $19. So now we can move on, right? Assess the welfare implications of, for example, a minimum wage or a price floor. Let's assume that there is a price floor of $11.
So the people are not allowed to trade at $8. They have to trade at $11 or more. In this situation, of course, the quantity that can be exchanged is much smaller, right? The quantity is only one unit. There's only one mutually beneficial trade because we have only one buyer who's willing to pay more than $11. And the equilibrium price will be $11, potentially more, depending on the bargaining situation, but we keep it simple. And we get a welfare loss. Now we have a measure of the welfare loss. We look at the consumer and producer surplus that is lost because of that intervention. And we can quantify it once again. Consumer surplus in this situation is only one. Producer surplus is only nine. Total surplus is 10. And we have a welfare loss of $9. So you can attach a number to that welfare loss. That's the standard analysis of the deadweight loss in a situation where you have a price floor above the equilibrium price. Of course, this assumes that the individuals represented here are still at the same positions on the demand and supply schedules, right? We have now random pictures of a Joe and a Sean represented here on the supply schedule. This assessment of the deadweight loss, the welfare loss, assumes that Joe is still the one making the trade, selling to Marty. Joe makes a producer surplus of nine in that situation. But of course, Sean would be willing to sell as well at a price of 11. He has a reservation price of 10. And now think about a possible situation where Marty actually doesn't like Joe and prefers to buy from Sean because he's so much nicer and funnier, yeah? And then this might happen: Sean makes the trade and Joe is out of the market. And this changes the analysis. We have now a much smaller producer surplus, the difference between the reservation price and the received price. For Sean, it's only one dollar. And so we have an additional welfare loss that comes into play here.
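The same arithmetic, including Schmidtz's twist, can be checked in a short Python sketch. As before, the sellers' middle reservation prices (4, 6, 8) are assumed values chosen to be consistent with the $19 baseline total quoted in the lecture.

```python
# Reservation prices; middle seller values are hypothetical fill-ins.
supply = [2, 4, 6, 8, 10]    # sellers, ascending (Joe first, Sean last)
demand = [12, 10, 9, 8, 6]   # buyers, descending (Marty first)
baseline_total = 19          # total surplus without the intervention

floor = 11
# Only buyers willing to pay at least $11 can still trade.
quantity = sum(1 for d in demand if d >= floor)   # just one trade left

# Best case (standard analysis): the most efficient seller, Joe ($2), trades.
cs = demand[0] - floor            # 12 - 11 = 1
ps_joe = floor - supply[0]        # 11 - 2  = 9
total_best = cs + ps_joe          # 10, so deadweight loss = 19 - 10 = 9

# Schmidtz's point: nothing guarantees Joe gets the sale. If Sean
# (reservation price $10) trades instead, the surplus shrinks further.
ps_sean = floor - 10              # 11 - 10 = 1
total_sean = cs + ps_sean         # 2, so deadweight loss = 19 - 2 = 17

print(total_best, baseline_total - total_best)    # 10 9
print(total_sean, baseline_total - total_sean)    # 2 17
```

The standard $9 deadweight loss is thus only the best case; if discrimination on grounds other than price decides who trades, the loss can be as large as $17 in this example.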
And the deadweight loss is really bigger than the standard analysis would suggest. And it's just by one little reflection on the situation that you can find out that this is actually the case. And you can find out that the standard analysis only gives you the best-case scenario, if we accept the premises, right? And this is an argument that has been made by a philosopher, David Schmidtz, in a publication. And it's very interesting that you need a philosopher to point this out. It's a very simple analysis, and it's true, yeah? So the standard analysis of the deadweight loss from price controls ignores that price controls lead to a situation where not the most efficient sellers sell and not the most willing or eager buyers buy. Because it's now possible for sellers or buyers, depending on whether you have a price floor or a price ceiling, to discriminate based on other aspects than just the willingness to pay or the readiness to provide. So that's one example of how the standard analysis is wrong. And I want to give you another one, which is the analysis of the welfare loss from an excise tax, from taxation. We have here a generic demand and supply schedule and an equilibrium situation. A tax can be interpreted as an additional cost that shifts the supply schedule, right? A tax has to be paid, it's like an additional cost. So it will shift the supply schedule upwards. And in the situation where the demand schedule is relatively price elastic, meaning it's relatively flat, you have a big reduction in the quantity exchanged and hence a big deadweight loss, a big welfare loss, shown here as the red triangle between the demand and the supply schedule. So a price-elastic demand leads to a big welfare loss when you have an excise tax on the product.
And when you have the opposite case of a price-inelastic demand, and you think about the same sort of tax, the same cost added to the supply schedule, you have now a small deadweight loss because the quantity that is exchanged is reduced only very little. This is of course because the producers are now able to transfer the burden of the tax onto the consumers, because the consumers are willing to buy the product anyway, even if it's more expensive. That's what it means to have an inelastic demand. You buy it anyway. And this leads to the standard conclusion in optimal tax theory, right? The standard conclusion is that you should tax markets where the demand is inelastic, and the same goes for supply, right? We could expand on this, but this is not really important now. The standard conclusion is that you have a small welfare loss when you tax markets with a price-inelastic demand, because the quantity exchanged and the mutual benefit that is generated remain relatively high after the tax is imposed. This ignores an important aspect, and this has been pointed out in a publication by Tate Fegley, Kristoffer Hansen, and myself, for the moment only published as a working paper. You can see the problem when you look at the overall expenditure on the product, right? We have, after the taxation, a sharp increase in the price for the buyers and only a small decrease in the quantity. This leads to a situation where the overall expenditure, the overall spending on the product, is actually increased after the tax is imposed. And now you don't have to be a genius to figure out that this obviously has implications for other areas in the economy. When you have to spend more on some product, you have less money for other products. So if you just open the focus a little bit and think about potential other markets, you realize that, well, there must be a reduction in spending. That is a shift in demand for other products.
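A stylized linear example makes the expenditure effect concrete. All the parameters below (demand and supply slopes, the tax rate) are assumed purely for illustration; the only requirement is that demand is steep, i.e. price-inelastic, at the equilibrium.

```python
# Inverse demand: P = 20 - 4Q (steep, i.e. price-inelastic near equilibrium).
# Inverse supply: P = Q. A per-unit tax t shifts supply up to P = Q + t.
def equilibrium(t):
    q = (20 - t) / (4 + 1)      # solve 20 - 4q = q + t for q
    buyer_price = 20 - 4 * q    # price the buyers actually pay
    return q, buyer_price

q0, p0 = equilibrium(0)         # no tax:  q = 4.0, buyer price = 4.0
q1, p1 = equilibrium(5)         # $5 tax:  q = 3.0, buyer price = 8.0

# Quantity falls only 25%, but the buyers' price doubles, so total
# spending on the taxed good rises, leaving less to spend elsewhere.
print(q0 * p0, q1 * p1)         # 16.0 24.0
```

Spending on the taxed good rises from 16 to 24 in this example, and that extra 8 has to come out of demand for other goods, which is exactly the spillover the standard single-market diagram leaves out.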
There is lower demand for other products now and a reduction in the quantities exchanged. So there is, of course, an additional welfare loss in other markets that emerges from that taxation. And if you look only at the taxed market, you do not take that into account. So you underestimate the welfare loss from taxation. That's another example of a criticism of the standard theory that accepts its premises. So both Schmidtz's criticism and the one by Fegley et al. are internal. We look at the premises of the analysis and then we show that the conclusion drawn is actually wrong, or not quite complete, or misleading. So they are internal criticisms, but they point at a very important problem, and that is the fallacious assumption of constancy, the ceteris paribus assumption. What is assumed in this analysis is that everything else remains constant. We impose a tax on some product and everything else remains unchanged. That's, of course, not true, yeah? And so the assumption of constancy is really problematic. But of course, there are more fundamental criticisms of this analysis, and you can find those, for example, in Rothbard's famous 1956 paper, Toward a Reconstruction of Utility and Welfare Economics. That's a paper where Rothbard argues that scientifically there are really only two principles with which we can work in order to draw conclusions about the desirability of alternative situations. Those two principles are the unanimity rule, or the Pareto principle, and the principle of demonstrated preferences. The Pareto principle just states that we can talk about an improvement in social welfare or total utility, whatever you wanna call it, only in a situation where at least one person is made better off and nobody is harmed. That's an improvement in total welfare.
And the principle of demonstrated preferences states that we can know whether or not somebody's situation has improved only to the extent that the preferences have been demonstrated in action, in a given situation, at a given point in time, under given circumstances. That's the only way we can do that. And what is required for that? Well, we have to know that the interaction, the transaction, the exchange was voluntary. There was no rights violation involved. It was an interaction on the free market. And then Rothbard, of course, draws these very strong and provocative conclusions that have aroused a lot of criticism. He states that the free market always increases social utility. That is, of course, in the ex ante sense of the word; of course, people make mistakes in some situations. But in the ex ante sense of the word, a voluntary transaction is, based on these principles, welfare enhancing; it increases social utility. And a government intervention cannot increase social utility because it is essential to a government intervention that somebody is harmed. Somebody has to pay for it. Somebody is prohibited from engaging in a transaction that he would like to engage in. Or he has to make a transaction that he doesn't want to make. So you can, of course, imagine situations where a government action is not a violation of, or not against, the preferences of anyone. But that's very artificial. The essence of government intervention is, of course, coercion. The government is the institution of organized coercion in society, and that is what is essential to government intervention. And in that sense, you can never show scientifically that social utility, social welfare, total welfare is enhanced after a government intervention. And the fundamental problem is, of course, that we cannot really measure utility quantitatively. There is no objective measure of utility. There is no cardinal measure of utility.
Even these money prices, these reservation prices, if we could measure those, are not really a measure of utility, because money itself is a good that is valued differently and subjectively. So it's not an objective measure of utility. And Rothbard argued then very, very convincingly, in my opinion, that there is no such thing as total utility that can be maximized. We don't have the target variable of our operations. It doesn't exist. It cannot be measured. It cannot be maximized. And then he points out that all utilities are really marginal utilities. That's the only thing that we can assess, in a way, scientifically. We can only assess on the margin, given a certain transaction, whether it is welfare enhancing or not. And it is welfare enhancing to the extent that it is voluntary and not a rights violation. Even if you could at least indirectly measure utility from demonstrated preferences in a certain situation, the next problem is that you cannot then use these measurements in another situation, extrapolate from those observed situations, because things change. Things don't remain constant. And that is the fundamental problem for the application of utility and welfare economics in the real world. You cannot hold things constant. Even if you can, in certain situations, maybe get a good estimation of the value assessments of actors, you cannot take that into other situations and then apply it. And so this jeopardizes the application of standard utility and welfare economics in the real world, but it also completely undermines one essential part of the econometric project, which I come to now. So for this, it is important to understand one distinction. And that's the distinction between description and induction in statistical analysis. Description, or descriptive statistics, is just an attempt to measure what we can measure and to describe or account for the evolution of measurable variables.
That's something that can be more or less useful, but it's certainly not problematic from a methodological point of view. But what is problematic is statistical induction, in the social sciences at least. And statistical induction is drawing generalized conclusions from observed data. That's the attempt to falsify, or even verify, certain theoretical propositions on the basis of empirical data. The main idea of inductive econometrics is the following. It always operates on the basis of measurable, observable variables. You postulate a set of measurable variables that are the causes and a set of measurable variables that are the effects. And you postulate some model, some quantitative mathematical relationship, some functional relationship between them. Say you have X1 and X2 as the measurable causes and you have Y as the measurable effect. So the idea of inductive statistics, of econometrics, is now that you put your model to the test; you are willing to revise your model, to falsify your model, in light of new evidence. So for example, you might observe that all of a sudden the same configuration of causal factors leads to a different effect: instead of Y you observe Z. So now you draw the conclusion from that: there must have been some other factor that was important as well that I did not include in my initial model. Maybe there was some factor X3 that I forgot. So you add it to your model. And when you are a really sophisticated statistician, then you might even think, hey, maybe the whole thing is not linear. It's a non-linear relationship. And you revise, you refine your model so that it fits the data again. And now you have a new tentative explanation for your phenomenon that you are still willing to reject in light of new empirical evidence. You are still willing to falsify and refine and revise your model further. That's the project of inductive econometrics.
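The refinement loop just described can be caricatured in a few lines of Python. The data and variable names here are entirely hypothetical; the point is only to show the workflow of fitting a model, finding it wanting, and adding a forgotten factor, not to endorse the inductive interpretation of it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: the "true" process has three causes, but the
# econometrician's first model only includes X1 and X2.
x1, x2, x3 = rng.normal(size=(3, 200))
y = 2.0 * x1 - 1.0 * x2 + 3.0 * x3

def fit(X, y):
    """Ordinary least squares; returns coefficients and residual variance."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.var()

# First model: y ~ x1 + x2. Much of the variation stays unexplained.
_, v2 = fit(np.column_stack([x1, x2]), y)

# Revised model after apparent "falsification": add the forgotten X3.
_, v3 = fit(np.column_stack([x1, x2, x3]), y)

print(v2 > 1.0, v3 < 1e-10)   # True True
```

The revised model fits (almost) perfectly here only because the simulated relationship really is constant; the lecture's argument is precisely that in the social sciences no such constant relationship can be presupposed.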
And the hope is that you approach the truth in this process, that you come closer and closer to the true model. Now the problem, of course, in this whole enterprise is that the process of hypothesis testing, of formulating and refining new hypotheses and then testing those again, presupposes that there is some constant relationship between causes and effects. So it assumes that there is a model that is correct and constant over time. Because if there were no such model, you would be chasing a moving target, and you might actually be correct at one point in time with your model, it might be wrong the next day, and it might be correct again in the future. So you do not really falsify anything. And you do not verify anything either. So you need a constant functional relationship between the variables that you think of as causes and the variables that you think of as effects. And of course Mises very famously said that there are no constants in economics. And that's probably true. A very good justification for this claim was given by Hans-Hermann Hoppe, who, I was very happy to find out, was the mystery speaker of the first evening. So he addressed some of these elements already. In my assessment, the most important publication in relation to this problem in economics is his 1983 German-language book, Kritik der kausalwissenschaftlichen Sozialforschung. Elements of this book have been translated and published in English, but the German is really so much more sophisticated. So you should all learn German and read it. In my opinion, it's really one of the most important contributions of Hans-Hermann Hoppe, which is often overlooked in light of his other, altogether more provocative contributions. But it's really, really good. And he shows in this book that the constancy principle that you have to assume for this scientific project to work does not hold in economics, or in the social sciences in general.
And the constancy principle simply states that equal causes lead to equal effects, and that if you observe unequal effects, different effects, this implies that there has been some configuration of different causes, unequal causes. And only under this assumption does this whole project of falsifying and refining and reformulating your hypotheses make sense. Why does it not hold? That's the important question. And the answer that Hoppe gives is very intriguing. It's very simple, but all the more elegant because of its simplicity. Between these measurable variables that we could think of as causes and the measurable variables that we think of as effects, there is, in economics and the social sciences in general, a human actor, and human actors can learn. That's very important; they have an ability to learn. And Hoppe drew here on an argument interestingly made by Karl Popper against the Marxist theory of history. Popper said that there cannot be a scientific theory of how history will turn out in the future. That's impossible because human beings are capable of learning. So we cannot scientifically predict the course of history. And Hoppe said, thank you, Karl, this is great, but it also means that falsification doesn't work in the social sciences and in economics in particular. If human beings are capable of learning, and the effects that come out are dependent on our state of knowledge and our state of learning, then we cannot scientifically predict what will come out. We cannot scientifically predict the Y, and there is no constant relationship between the X's on the left-hand side and the Y on the right-hand side. Because people find out about new things, and this is learning in a very broad sense of the word. We find out about the circumstances. We maybe change our evaluation of things, and we change our behavior even if the quantitatively measurable configuration of causal factors is the same. So there's no constant relationship.
Let me draw the main conclusion then. The inductive part of modern econometrics is problematic. It is not justified because the constancy principle doesn't hold. The descriptive part, on the other hand, is not problematic. If we just try to describe the quantitatively measurable relationships between certain variables in the economy, this as such is more or less interesting. It can be more or less useful to guide us in our pursuit of truth and knowledge, but it is not methodologically problematic. And this has been pointed out by the Polish economist Paweł Ciompa, who was really the first to define econometrics, 16 years before Ragnar Frisch did, in 1910. And Ciompa said that econometrics, or as he also called it, economography, should just be understood as a kind of descriptive economics. We try to describe what we can see in the real world. And then econometrics just becomes the theory of accounting: how do we account for the evolution, the development of the economy, or of certain elements, aspects, parts of the economy? And understood in this sense of the word, econometrics is entirely compatible with Austrian economics. From the vantage point of Austrian economics, descriptive statistics, econometrics in this sense, is not problematic and can be used. So empirical quantitative methods, statistical methods, understood as tools to describe the situation of the economy, are useful in history, as Mises would also acknowledge. And with that, I want to close the lecture, and I thank you very much for your attention.