 It's a great pleasure for me to be delivering the opening lecture of the academic year at Universitat Pompeu Fabra. I'm grateful to the rector for his invitation, and to Professor Pablo Salvador, who has been a wonderful academic colleague for many years, for having encouraged me to be here today. The last time I was in this auditorium was to chair a dissertation defense, and I have very fond memories of that time at your university and in Barcelona. My talk will focus on four elements of regulatory policy that I believe are necessary for a rational regulatory system. In this talk I will primarily discuss health and safety regulation. Examples of such regulation, which abound in the regulatory systems of advanced economies, include environmental regulation designed to reduce the concentration of contaminants that have an adverse impact on human health, worker safety regulation designed to avoid industrial accidents and injuries in the workplace, and consumer product regulation designed to avoid harms resulting from consumer products. The four elements of a rational regulatory policy that I will discuss in today's talk are the following. First, how to determine the appropriate stringency of regulatory standards designed to reduce adverse impacts on human health, including premature deaths. Second, once we have decided on the level of protection we want as a society, how to operationalize it into requirements that the actors imposing risks have to meet, and what trade-offs that choice presents. Third, how to set the relative stringency of standards for new and existing sources, respectively, to avoid the pernicious consequences of excessive grandfathering. Fourth, how to appropriately allocate regulatory authority in federal or quasi-federal systems. These four elements are not necessarily exhaustive, but they are very important, and regulatory systems often make the wrong choices about them.
Before you think I'll be talking here all day, I am aware that 25 minutes is the prescribed time, and I will try very hard to stick to that limit. The first element is cost-benefit analysis. When the government seeks to reduce risks that people face from exposure to substances that have adverse impacts on their health and safety, how stringently should it regulate? I will argue that in a rational regulatory system, this determination needs to be guided by cost-benefit analysis. Let me start by discussing the regulation of carcinogens, because many of the risks that regulation seeks to reduce are risks of cancer, often from environmental exposure. For regulatory purposes, carcinogens are treated as no-threshold contaminants. This means that they are assumed to impose health risks, which translate into premature deaths, at every concentration. The lower the concentration, the lower the risk, but the risk does not go to zero until the concentration goes to zero. Why is each regulatory standard not set at zero, then? The reason is obvious: we cannot have an industrial society that operates in this manner. As a result, concentrations of zero are not what regulatory systems seek to accomplish across the board. Instead, regulatory regimes have typically indicated that they seek to achieve safe levels of exposure. What is safe in this context? It is quite simply a matter of deciding whether it is appropriate for an individual to be exposed to a lifetime risk of cancer of one in 10,000, one in 100,000, one in a million, or any other number of your choice. The choice among these numbers is simply arbitrary; it is not a scientific inquiry. And the reason I know this is that no leading university such as yours would award a PhD for a dissertation saying that safe means a one in 100,000 probability of death as opposed to a one in a million or a one in 10,000 probability of death.
Translated to the population level, if we're talking about an environmental contaminant that affects 10 million people, we are asked to decide whether we want to have a thousand deaths from lifetime exposure to this carcinogenic contaminant, which we can accomplish by setting the individual probability at one in 10,000, or instead a hundred deaths, by setting the individual probability at one in 100,000, or ten deaths, by setting the individual probability at one in a million. And if you think ten is too many, would you prefer only one death? That too is possible, if the individual probability is set at one in 10 million. And of course, because of the no-threshold nature of carcinogens, there is no stopping point. You have probably figured out by now that there are consequences to these decisions outside the domain of public health. We can have any level of protection we want, but there will be costs attached. The more stringent the protection, the higher the costs. And typically costs rise rapidly as the level of protection increases: the function is not linear, but convex. There is nothing wrong with imposing costs on the sources of carcinogenic pollutants to give them incentives to reduce their emissions. Quite the contrary: polluters are imposing costs on other people that they are not internalizing into their production decisions. So, for example, a steel factory optimizes its use of resources, such as iron, labor, technology, and electricity, so that it can produce steel as cheaply as possible. If the price of labor goes up, it might use more technology, and so on. Otherwise, it will be run out of the market by more efficient competitors. But the factory does not pay for the cost of using the clean air. Economists refer to this pathology by reference to the divergence between the private costs, such as iron, labor, and technology, which the company has to pay, and the social costs, such as the clean air that it can take for free.
As a result, the factory will use a suboptimally large amount of clean air, which is to say that it will pollute at a suboptimally high level. The goal of regulation should be to internalize the externalities, so that the factory sees the harm it imposes on the breathers of the air, and can therefore make appropriate trade-offs among all the resources, private and social, that it consumes. That is good social policy. But should the factory reduce its pollution to a level greater than could be justified in light of the harms that it produces? Some might think that we should always privilege individuals over economic entities, but the steel factory is not a faceless entity. People will bear the cost of the regulation: workers might be paid less, or the factory might close down and cause people to lose their jobs, or the factory might be less profitable and its shareholders, perhaps pension plans for retirees, will be adversely affected. When one looks at the consequences for a no-threshold contaminant, one is inevitably drawn to choose the level of protection that has the highest net benefits, the level that maximizes the difference between its benefits and its costs, understood broadly. And that is what cost-benefit analysis does. In the process of doing so, it leads to the internalization of externalities, which, as I indicated, is a good thing. I drive a car that I believe to be safe. It is heavy and has reinforced sides that enable it to withstand reasonably well a side impact caused, for example, by a negligent vehicle crossing an intersection against a red light. I have paid a premium to buy such a car. But I would not have bought an armored tank, even if one were available on the market and were safer. I traded off my preference for safety against my preference for low costs in deciding how to control a risk, such as driving, that I undertake voluntarily.
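The logic of choosing the level of protection with the highest net benefits can be sketched numerically. The population size and individual risk levels are the illustrative ones used earlier in the talk; the benefit and cost figures are invented for this sketch, with costs rising convexly as protection tightens:

```python
# Expected premature deaths at the population level:
# deaths = exposed population x individual lifetime risk.
# Benefits and costs (in $ millions, relative to the laxest standard)
# are invented for illustration; note the convexly rising costs.
population = 10_000_000

options = [
    # (individual lifetime risk, benefit $m, cost $m)
    (1e-4, 0.0, 0.0),         # laxest standard, taken as the baseline
    (1e-5, 5400.0, 1200.0),   # 900 more deaths avoided than the baseline
    (1e-6, 5940.0, 3600.0),
    (1e-7, 5994.0, 9000.0),   # convexity: costs now swamp benefits
]

for risk, benefit, cost in options:
    deaths = population * risk
    print(f"risk 1 in {round(1 / risk):>10,}: {deaths:7,.0f} expected deaths, "
          f"net benefit {benefit - cost:+8,.0f} $m")

# Cost-benefit analysis picks the option maximizing benefits minus costs.
best = max(options, key=lambda o: o[1] - o[2])
print(f"highest net benefits at an individual risk of 1 in {round(1 / best[0]):,}")
```

Under these made-up figures the intermediate standard wins: tightening from one in 10,000 to one in 100,000 avoids many deaths at moderate cost, while further tightening buys few additional avoided deaths at rapidly escalating cost.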
The government should pay attention to similar trade-offs in deciding how to regulate risks, such as pollution from factories, to which people are exposed involuntarily. There is no compelling reason to suggest that it should act without considering the full consequences of regulation. You might be thinking that we are likely to know the cost of regulation, but how do we know what the benefits are, and how do we value things like reducing premature deaths? Economists have figured out ways of doing this by observing decisions people make in market settings. For example, workers who take riskier jobs, as opposed to less risky jobs that are similar with respect to other characteristics, get higher wages. From this premium, economists calculate a willingness to pay to be free of the additional risks posed by risky jobs. They then perform an extrapolation to calculate what is known in this field as the value of a statistical life. I was part of the process through which the U.S. government in the 1990s determined the value of a statistical life to be used for environmental regulation. It was an outside group. We went to Washington for one day, were paid the $300 the government pays outside experts, and came back having determined that it was going to be $5.9 million in 1997 dollars. I thought we had done a good day's worth of work. That number has persisted: adjusted for inflation, it is now roughly $9 million, and it is used by most federal agencies for this purpose. Now, even though I was part of this process, I'm not going to make any argument here about whether the studies on which this number relies are all great studies or whether they give rise to conceptual problems. That is a complicated area worthy of many lectures. But I do want to say that in the case of a no-threshold contaminant, one cannot defensibly set regulatory standards without considering the trade-off between the benefits of additional protection and the negative consequences associated with such protection.
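The wage-premium extrapolation can be sketched in a few lines. The risk differential and premium below are hypothetical, chosen only so that the result lands near the roughly $9 million figure just mentioned:

```python
# Back-of-the-envelope value of a statistical life (VSL) from a
# compensating wage differential. All numbers here are hypothetical.
extra_annual_fatality_risk = 1 / 10_000  # added yearly risk in the risky job
annual_wage_premium = 900.0              # extra pay workers accept for that risk

# If 10,000 workers each accept a 1-in-10,000 added risk, one statistical
# death is expected, and collectively they accepted $9 million to bear it.
vsl = annual_wage_premium / extra_annual_fatality_risk
print(f"implied value of a statistical life: ${vsl:,.0f}")
```

The resulting figure monetizes reductions in mortality risk so that they can be weighed against compliance costs in the benefit column of such a trade-off.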
That is what cost-benefit analysis seeks to do. Now, you might worry that embracing cost-benefit analysis would lead to lax regulatory standards. In the United States, every president since 1981 has had in place an executive order requiring that major federal rules be justified through cost-benefit analysis. There are some limited exceptions to this rule, for instance, in cases in which a particular statute, or a judicial interpretation of that statute, says otherwise. So we have a natural experiment that allows us to make the comparison. We can compare cases in which regulations were justified through cost-benefit analysis with cases in which the statute was interpreted to preclude such analysis and the regulation was justified in some other way. In a recently published article, a colleague and I show that under the principal provisions for which cost-benefit analysis is not allowed, the resulting regulatory standards were less stringent than those that would have resulted from the use of cost-benefit analysis. So far I have talked about no-threshold contaminants like carcinogens. You might be thinking that the inquiry will be less complex for threshold contaminants, that is, for contaminants that have a level below which there are no adverse health consequences. Non-carcinogens are typically treated as threshold contaminants. For such contaminants, you might think it desirable to set the level of protection below the threshold. Then everyone is protected and there are no adverse health consequences. Even in this situation, the cost necessary to achieve this goal might be higher than could be justified by the health benefit. But let us leave that problem aside for a moment and focus on the threshold itself. Threshold models are problematic for three reasons.
First, in order to get there, scientists make indefensible assumptions by creating a sharp discontinuity in their assessment of the strength of the scientific evidence above and below the threshold, respectively. The probability of an adverse effect is treated as 100% above the threshold and as 0% below it, as if the risk did not exist. But typically the science does not reveal a step function of this sort. Instead, it is more consistent with a continuous function. The probability of an adverse impact below the threshold is not zero, but a positive level that is arbitrarily ignored. Economists, in contrast, have a standard technique for dealing with scientific uncertainty of this sort, which is the concept of expected value. The expected adverse impact at a lower probability is lower than that at a higher probability, but it is not 0% below a certain point and 100% above that point if the function itself is actually continuous. Second, the determination of thresholds often involves making unsupportable value judgments about what counts as an adverse impact. For example, mercury is a very harmful substance. Among its consequences is a very bad impact on the brain development of young children, which translates into the loss of IQ points. When the U.S. Environmental Protection Agency recently regulated mercury, it determined the threshold to be an average loss of two IQ points in the affected population. That is a lot of IQ points when the population is large. There is nothing magic about two points. There is just a continuum of harm, and the agency picked an arbitrary point and called it a threshold. The reason the agency did that was that it was forced to make its decision under a regulatory straitjacket caused by the convention that non-carcinogenic effects should be assumed to have thresholds. This is not science, but arbitrary line drawing. Once again, at a leading university such as yours, one probably cannot get a Ph.D.
by determining that the threshold should be the loss of two IQ points on average, as opposed to one or three or any other number. Third, thresholds are determined by reference to a particular type of individual. With respect to smog, for example, the level that will make it difficult for me to breathe comfortably if I run outdoors is different from the level that an asthmatic will be able to tolerate, which will be much lower than mine. And an average asthmatic will be able to tolerate a higher concentration than a particularly sensitive asthmatic. So even if each individual has a threshold, the population as a whole will not have one, as long as there are sufficiently sensitive individuals, which typically there are. Because populations have different levels of sensitivity, even contaminants that are threshold contaminants for individuals become no-threshold contaminants for the population. In summary, contaminants that are treated for regulatory purposes as no-threshold contaminants and contaminants that are treated for regulatory purposes as threshold contaminants actually exhibit similar characteristics. For both, the decision maker must decide how many premature deaths and how many serious adverse health effects to avoid. There is no intellectually defensible way to make this decision without considering the resources that need to be expended to achieve standards of different stringencies. In broad outline, that is what cost-benefit analysis does. I will now turn from the determination of the stringency of the regulatory standard to the choice of regulatory tools necessary to achieve that standard. Here I will focus on three design elements. The first is cost-minimizing regulatory tools. Once a regulator has chosen the level of regulatory stringency, it must select a regulatory tool that will impose the necessary obligations on the regulated community. Command-and-control regulation imposes specified obligations on each member of the regulated community.
In the environmental context, for example, the regulator might require each polluter to meet the emissions standard that results from the use of the best available technology, a term with a technical definition. Command-and-control standards of this sort are typically not the least-cost way of meeting the prescribed regulatory goal. The reason is that the regulator will not have sufficiently detailed information to figure out how the regulatory goal can be met at least cost. And the difference between the cost of such command-and-control standards and the least-cost standards can be considerable. More flexible approaches provide a solution to this problem. For example, under marketable permit schemes, the regulatory goal determines the total number of emission permits that will be allocated to a region. These permits are then distributed among polluters, generally either by means of an initial auction or through some type of grandfathering. Subsequently, permits are traded in an open market. Assuming that a robust market for permits arises, a tradable emission permit regime reduces aggregate emissions to the chosen level at the least cost. Marketable permit schemes have many design complications, and there is a robust academic literature exploring these properties. But for every possible objection to their use, there is a design solution that preserves their attractive least-cost properties. For example, typical marketable permit schemes control the total amount of permissible pollution but not the distribution of that pollution. That is fine for global pollutants, such as carbon dioxide, where all that matters is the total atmospheric loadings and not their distribution. A ton of emissions in Barcelona has exactly the same impact as a ton of emissions in Beijing. But for local pollutants, such as sulfur dioxide, local concentrations do matter.
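The least-cost property of permit trading can be illustrated with a toy two-firm market. The firms, their abatement costs, and the cap are all invented for this sketch:

```python
# Toy illustration of why tradable permits meet an emissions cap at lower
# total cost than a uniform command-and-control standard.
# Firm names and cost figures are invented.

# Each firm currently emits 100 tons; the cap allows 100 tons in total,
# so 100 tons must be abated. Marginal abatement costs differ by firm
# (held constant here, for simplicity).
cost_per_ton = {"FirmA": 20.0, "FirmB": 50.0}

# Command and control: each firm is ordered to abate 50 tons.
cac_cost = 50 * cost_per_ton["FirmA"] + 50 * cost_per_ton["FirmB"]

# Permit market: abatement migrates to the cheaper abater. FirmA abates
# all 100 tons and sells its unused permits to FirmB.
market_cost = 100 * cost_per_ton["FirmA"]

print(f"command and control: ${cac_cost:,.0f}")
print(f"permit trading:      ${market_cost:,.0f}")
```

With constant marginal costs, all abatement migrates to the cheaper abater; with rising marginal costs, trading instead equalizes marginal abatement costs across firms. Either way the cap is met at the lowest total cost. Note that this toy market is indifferent to where the remaining emissions occur, which is precisely the local-pollutant complication just noted.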
Nonetheless, it is possible to construct more complex trading schemes that respond to this complication, for example by constraining trades that would violate ambient standards. The useful inquiry is how to deal with these design questions, rather than perpetuating regulatory tools, like command-and-control regulation, that have serious inefficiencies. I will now turn to a second design element. A key feature of the regulatory policies of many jurisdictions, including the United States, is the extensive grandfathering of existing sources from standards that apply to new sources. Grandfathering of this sort has bad incentive effects because it distorts the economic analysis that existing plant owners undertake when deciding whether to modernize or replace a plant. Stricter standards for new sources make building a new plant more expensive than it would otherwise be. As a result, existing sources, often dirty and obsolete ones, remain in operation longer than would otherwise be the case, a phenomenon known as the old plant effect. This effect is both economically undesirable and may worsen environmental quality, by delaying the replacement of dirty existing sources with new sources, which would be more efficient and therefore cleaner even absent the regulatory requirement. When I became interested in this area and started writing about it, I discovered that the law and economics literature committed the same error as government practice. In the first step, it determined the optimal level of controls for new sources without taking into account the impact of these standards on existing sources. Then it determined the optimal transition rule for existing sources, in light of the standards it had picked for new sources. Because the costs of retrofitting existing sources to meet a standard are typically much higher than the costs of building new sources with that standard in mind, these transition rules are typically very permissive. This two-step process has a pernicious effect.
Unless one is in an era of great economic growth, with additional demand for the products of the regulated entities, existing sources will continue operating with no additional costs rather than being replaced by new sources. Sources that would otherwise have become obsolete and closed down, because they could no longer produce a product sufficiently efficiently, now stay in operation because of the large additional costs of building new sources that meet the new environmental standard. The result can be very stringent standards on the books that are not applied widely, because new sources are not built. There is a solution to this problem, which involves understanding that the relative stringency of the standards for new and existing sources needs to be considered. The mistake is to optimize the respective standards sequentially, first setting the optimal standards for new sources as if existing sources did not exist, and then setting the optimal transition rules for existing sources. Instead, they need to be optimized jointly, so that the difference between the standards does not undesirably stand in the way of technological innovation. It makes sense for new and existing sources to be subject to different standards for a period of time, in light of the higher compliance costs of existing sources, but the impact of differential standards on the transition from existing to new sources needs to be considered. This problem would not arise if regulatory standards were replaced by the marketable permit schemes I advocated earlier in this lecture. Then new and existing sources would compete for permits in the same market, and existing sources would stay in operation only if it were economically desirable for them to do so. Let me now address the third and last design element that I will focus on today, which involves the allocation of regulatory authority in federal and quasi-federal systems. I have in mind the systems of the United States and the European Union.
Inter-jurisdictional impacts provide the strongest argument for allocating regulatory responsibility at the higher level. For example, a state externalizing its pollution to other states can capture economic benefits in the form of jobs and tax revenues, but it imposes costs in the form of adverse health effects on other states. As a result, the upwind state does not bear the full cost of its actions. Here, too, there is an externality, in this case a divergence between the private costs borne by the state and the social costs that are imposed on downwind states. In the absence of bargaining among states, which is difficult to accomplish, the amount of pollution crossing state lines will be greater than the optimum. Another prominent justification, other than that based on inter-jurisdictional externalities, posits that the harmonization of regulatory standards promotes the establishment of a common market by putting different states on an equal footing in the competition in markets for their products. This justification is prevalent in the European Union, and prevalent in a somewhat analogous form in the United States. It is far less compelling than the justification focusing on the presence of inter-jurisdictional externalities. The harmonization rationale does have force in the case of product standards. Indeed, a product cannot trade freely throughout a common market if states within the market can exclude it on environmental or health and safety grounds. Harmonization arguments, however, have also been invoked to justify the vesting of centralized responsibility over process standards, such as ambient environmental standards and emission standards. But there are several problems with extending the argument in this manner. First, as long as product standards are harmonized, there can be a well-functioning common market regardless of the stringency of the process standards governing a product's manufacture.
Thus, more accurately, the argument must call for the harmonization of a product's production costs, so as to deny a comparative advantage to states with laxer environmental standards. The second problem is that the costs of complying with environmental regulation, or for that matter the costs of complying with any regulation, are only one component of the total cost of production. Other components include a state's investment in infrastructure, healthcare, and education, as well as its wages, labor productivity, and access to raw materials. These factors, which can have a significant effect on production costs, are unlikely to be, or are incapable of being, the subject of the European Union's, or any federal or quasi-federal system's, harmonization efforts. Thus, rather than eliminating cost differences, the harmonization of environmental standards has the effect of conferring a competitive advantage on states with lower non-harmonizable components of costs. Third, the harmonization argument cannot be used, as it has been in the European Union, to justify both uniform ambient standards and uniform emission standards. A centralized regulatory regime consisting only of uniform ambient standards, which permits the states to allocate the pollution control burden among existing and new sources in any way they see fit, would confer a competitive advantage on states with smaller industrial bases. Indeed, states with lower pollution output could offer their sources less stringent emission standards without violating their ambient standards. The addition of centralized emission standards moderates this comparative advantage but does not wholly eliminate it. Highly industrialized states, where the centralized ambient standards constrain further growth, would be unable to attract new sources without imposing additional costs on existing sources.
If a regulated activity does not have inter-jurisdictional effects, then centralized regulation means that local preferences about the level of regulatory stringency are trumped. Typically, then, the resulting regulatory standard reduces social welfare. In summary, cost-benefit analysis, marketable permit schemes, and proper attention to grandfathering and federalism issues are necessary components of a rational regulatory policy. There is much more to be said about all of these topics; obviously, there could be courses on each of them. But I hope that today, in the time that we have had, I could give you a flavor of my thinking in this area. I am very grateful to have been invited to give this lecture, and thank you very much for your attention.