Other Management Indices of Environmental Impact

When the threshold theory holds, the ambient index is a good guide to where abatement action should be taken: abatement is urgent where the threshold value is exceeded, and is unnecessary where the threshold value is being met. But when the linear hypothesis is more nearly valid, the ambient index is of little use as a guide to action, because the index reveals nothing about the payoff to be obtained by improving ambient quality in different areas. Instead, one must look at the other characteristics of the areas competing for abatement resources in order to determine where a dollar spent would accomplish the most good in terms of reducing discharges, improving ambient quality and, finally, reducing damage. For this purpose other management indices are needed.
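A small numerical illustration may make the distinction concrete; it is not drawn from the report, and the damage functions, threshold value, and area figures below are purely hypothetical. The Python sketch compares a threshold damage rule with a linear one for two areas whose ambient readings, taken alone, would rank them very differently:

THRESHOLD = 75.0  # assumed ambient standard (arbitrary units)

def damage_threshold(concentration, population):
    # Damage is counted only where the threshold is exceeded.
    return population * max(0.0, concentration - THRESHOLD)

def damage_linear(concentration, population):
    # Damage is proportional to exposure at every level.
    return population * concentration

# Area A: modest concentration, large exposed population.
# Area B: high concentration, small exposed population.
areas = {"A": (70.0, 1_000_000), "B": (120.0, 20_000)}

for name, (conc, pop) in areas.items():
    print(name,
          "threshold damage:", damage_threshold(conc, pop),
          "linear damage:", damage_linear(conc, pop))

# Under the threshold rule only Area B calls for abatement; under the linear
# rule Area A dominates even though its ambient index looks "good".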

Among the most important management indices when pollution effects are linear in exposure are population exposed and total discharges in an area. With strict linearity of effects on individuals, it makes no difference whether ambient levels are reduced by half or the number of people exposed is reduced by half: what matters is the ambient index weighted by population. And, with a uniformly distributed population, it makes no difference whether discharges (and hence high ambient levels) are concentrated or spread out: using a tall stack to expose twice as many people to half the concentration level accomplishes nothing. Thus, with effects linear in ambient exposure, a pragmatic management system could concentrate on reducing total discharges in an area in a cost-effective way, rather than worrying about high localized concentrations and the precise location of sources, and would work harder to reduce discharges in heavily populated areas whether ambient levels are high or low there. Although this is difficult to accomplish by pure regulatory means, it is technically easy with market systems, especially those based on discharge prices.
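The arithmetic behind the population-weighting argument can be sketched in the same hypothetical terms; the exposure figures below are illustrative only:

def population_weighted_exposure(cells):
    # cells: list of (people exposed, ambient concentration) pairs
    return sum(people * conc for people, conc in cells)

baseline = population_weighted_exposure([(100_000, 80.0)])    # status quo
half_conc = population_weighted_exposure([(100_000, 40.0)])   # concentration halved
tall_stack = population_weighted_exposure([(200_000, 40.0)])  # twice the people, half the level

print(baseline, half_conc, tall_stack)
# half_conc is half of baseline, but tall_stack equals baseline: spreading
# the same discharges over more people accomplishes nothing when effects
# are linear in exposure.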

For some effects of some pollutants, population exposed is not the most important index of environmental impact. For example, reducing visibility in scenic but sparsely populated areas is regarded as a high-cost effect by many, and has led to non-deterioration policies which seek to control discharges even when ambient indices indicate "good" conditions and when few people are directly exposed. For some pollutants which are long-lived, are transported long distances in the environment, and can be harmful at low exposure levels, there may be no good way to decide that some discharges are more or less harmful, pound-for-pound, than others: neither local ambient concentrations nor local population is a reliable guide to impact, and the best assumption may be that every pound is about equally harmful. In fact, given the lack of knowledge about environmental systems, about the long-term effects of pollutants, and about the best ultimate environmental goals for society, the assumption that all discharges of a pollutant are equally harmful may be a useful starting point in many cases.

REGULATING TECHNOLOGY OVER TIME

Ultimately, the success of any program for managing environmental resources depends on how successful it is in changing the capital equipment, the techniques, the knowledge, and the organizational arrangements (the technology, in a broad sense) used by society in going about its business. These processes of change in society's technological base go on continually, usually motivated and guided by market forces. Environmental policy, so long as it does not take the form of simulating market forces, must find some way to accomplish the same things by regulation. Unfortunately, regulatory tools are poor substitutes for market incentives, and are likely to result in costs significantly higher than they need be over time. The reasons for this, and some implications for policy, are discussed in this subsection.

Setting the stage for that discussion are three aspects of the problem taken up in turn below.

Regulating the Development of Technology

The development of pollution control technology under a purely regulatory system proceeds slowly for a simple reason: pollution remains free as long as one can make a plausible case that it is impossible or too expensive to do anything about it. Thus, for a discharger to develop technology and to demonstrate its applicability to his particular situation is to undercut his best defense against demands that he undertake expensive control measures. In such a situation, it is in the dischargers' economic interest that nobody do effective R&D on their problems, and that what work is done show the impossibility or high cost of compliance.

The only mechanism a regulatory system has to try to reverse these incentives is the threat of severe penalties for failure to accomplish some required result, with no excuses allowed. But even in the simplest cases there is uncertainty about what can be accomplished, and events outside the control of the dischargers can always intervene, making it necessary to provide some sort of escape device, such as administrative appeal, judicial review, or political reconsideration.

Under these conditions, the safest course for a discharger to take is to insist that the requirement is totally unreasonable for the foreseeable future and may or may not be achievable some day, but that it would be foolish to rush into use of expensive and untried technologies anytime soon. The regulatory agency must then come forward with some evidence that it is not unreasonable to expect some early progress on the problem, which it can do only by pointing to the most promising candidate method. And since the agency is interested in a solution which might accomplish quick results, is broadly applicable, and can be the subject of administrative and legal determinations, the device or method selected is usually an add-on device of a relatively unsophisticated type. Subtle or fundamental changes in design, solutions which require the adaptation of a general principle to many different situations, or approaches which might accomplish more in the long run at the cost of immediate results, are weeded out early. Thus, even though the discharge standard may have been stated in terms of performance rather than technology, and even though it was intended to stimulate some imaginative R&D, the typical result is an immediate focusing of attention on a crude solution proposed by the regulatory agency, with the dischargers trying to prove that the agency's solution will not work.

Of course, if the agency is strong enough to force adoption of an expensive control method, then the industry will have an incentive to find cheaper ways to accomplish the same result. But this can work only where there are known solutions to the problem cheap enough to be enforceable. And the risks involved in trying something other than what the regulator "suggests" are not insignificant, because "an industry that fails to meet the standard after adopting a technology officially sanctioned by EPA is likely to be in a better position than an industry that fails to meet the standard after adopting some alternative technology." 105 In fact, by adopting EPA's method while protesting that it will not work, a discharger is virtually immune from enforcement actions if the technology later proves disappointing for whatever reason.

It may be that one firm in the industry will develop control technology in an effort to gain a competitive advantage over its competitors. This is risky, however, because the firm that develops the technology is likely to find that it is forced to use the technology itself while its competitors continue to delay taking any action on the grounds that, for them, technology is still unavailable or too expensive. In fact, as the regulators become more sensitive to cost-effectiveness in their actions, this perverse incentive becomes worse: the agency will avoid forcing an expensive technology on one firm when a cheaper technology is being developed by another firm. Then the technically progressive firm bears the R&D costs and is the first to control.

The most visible effort to push technology with regulatory tools has been the attempt to get low-emission automobiles developed. The history of this still-running episode is well-known and is discussed above. It is a classic illustration of the general story: a demanding requirement was established with nothing short of Congressional action available to change it significantly; the manufacturers insisted it was impossible to meet the requirements; EPA, with encouragement from catalyst producers, pointed to the platinum catalyst as the best hope; the industry focused all efforts on the catalyst and demonstrated that the initial deadline could not be met that way; EPA and Congress have given ground steadily and only modest progress toward cleaner automobiles has resulted.

Whether this is a success story or not is a matter of some judgment. One reasonable interpretation is that by focusing attention on the catalyst for 1975 application, the CAA eliminated all incentive to develop either more effective but non-catalyst short-run methods, or more fundamental long-term solutions. The industry has devoted its efforts to pushing back the 1975 date (to 1976, then 1977, then 1978, most recently 1980), with the result that emission standards for 1977 are no better than they could reasonably have been expected to be in 1970 even without the 1975 deadline in the 1970 CAA, and nothing except the catalyst has been given serious consideration for the long run.106

Another classic illustration of the problems involved in using regulatory methods to push technology is offered by the history of flue gas desulfurization (FGD) systems for power plants. Emissions from the combustion of sulfur-containing fuel can be reduced in three ways: clean the fuel before burning; alter the combustion process so that the sulfur (and perhaps other pollutants) is captured immediately; or remove the sulfur from the exhaust gases as they go up the flue.

105 Zener, op. cit., pp. 707-8.

106 The foreign auto makers, not totally dependent on the U.S. market, were in a position to try more risky but more fundamental approaches to the problem, and succeeded.

In many ways, the last of these three is the least satisfactory, since it requires the continuous treatment of a stream of hot gases dilute in the relatively unreactive sulfur dioxide. Fuel desulfurization could be done separately at remote locations, and the resulting low-sulfur fuel could be used for many things, while combustion modifications, e.g., fluidized bed combustion, could be integrated directly into the boiler design, and would improve energy efficiency while reducing several pollutants simultaneously. But FGD systems had the advantage of being add-on devices which, conceivably, could be perfected and built fast enough to accomplish the 1975 deadline for attainment of primary NAAQS. So EPA tried to force technology by setting emission limitations in the state implementation plans and new source performance standards at levels which require use of FGD on most coal-fired power plants.

The action-forcing elements of this strategy were the 1975 deadlines in the SIPs, which were to be met at any cost, and the fact that no new source could be constructed unless it would comply with the new source performance standards. EPA was so confident of its regulatory program in 1971 that its biggest concern was the projected shortage of the skilled labor needed to get FGD systems built fast enough to meet the 1975 goal.107 And, indeed, the general feeling among independent technical experts in 1971 was that, if FGD systems had offered the possibility of improving fuel efficiency by a few percent, the developmental problems would have been worked out and a substantial construction program would have begun. But FGD offered no such economic benefit. It was a pure cost, and a significant one at that, and any utility could convincingly argue that the technology was not "available." So, little happened. The 1975 deadline came and went.

As of January 1976, the New Source Performance Standard requiring FGD was still in remand, 22 FGD systems were in operation, 20 were under construction, and 67 more were "in various stages of planning or consideration." 108 This is what EPA calls "much progress," even though most of the 22 operating systems are small demonstration plants, some were funded by EPA, and experience with them will be used to argue against as well as for building more.

Perhaps this is a success story, "success" being a relative and judgmental term. What is clear is that the five-year running battle over FGD has produced more in the way of adversarial technical disputes than serious efforts to make progress, has concentrated all attention on FGD at the expense of approaches more promising in the long run, and has yet to demonstrate that it can do more than get a few plants built which may or may not get sulfur dioxide emissions reduced. And the control of sulfur dioxide from coal-burning power plants is in many ways an easy problem, involving a relatively simple, uniform technology, at a few hundred large plants. If the regulatory mechanism cannot do better than this, then the prospect is not bright that it will succeed in stimulating the kind of imaginative technical progress needed throughout the economy to protect and enhance environmental resources in the long run, and the cost of environmental protection will not come down over time as rapidly as it should.

107 This discussion is based to some extent on the author's experiences in 1971 while a member of the CEQ-Treasury team designing a sulfur oxide emissions tax. EPA felt such a tax would be no help, and would only interfere with its undoubted regulatory strategy for meeting the 1975 deadline.

108 EPA Progress, 1975, pp. 80-1.


Setting Technological Requirements

The heart of environmental regulation is the setting of discharge limitations, or technological requirements, for individual emission sources. It has long been known that doing this for the many and diverse sources in the U.S. economy would be a difficult, time-consuming and expensive process. What recent experience has demonstrated is that the process is, if anything, even more difficult than anticipated, and that a pure regulatory system has great difficulty accomplishing a reasonable degree of cost-effectiveness in setting its technological requirements.

The inability of a pure regulatory system to be cost-effective in setting discharge limitations has two causes. The first is the sheer impossibility of acquiring in a central agency the information necessary to determine where discharges can be reduced most cheaply, especially when this information is known only dimly to the individual dischargers themselves and changes over time. Not only does this lack of information cause the agency's standards to be inefficient in their substance, but the procedural costs involved in gathering and revising information and the costs of uncertainty and delay are themselves significant.

The second impediment to efficiency in setting regulations is the fact that cost-effectiveness in controlling discharges often requires an uneven distribution of discharge reductions, i.e., discharges should be controlled where control is cheap, and should be allowed to continue where control is expensive. For resources traded in a market, this sort of cost-effective unevenness causes no great inequities, because those who invest more in order to conserve on a valuable resource save money thereby, while those who cannot or do not conserve on the resource must pay to continue using it. But for environmental resources which are given away free by the regulatory agency, unevenness in control can cause large and inequitable differences in control costs among sources. Not only is this inequitable in a way which invites legal and political reversal of agency determinations, but it also provides strong and perverse incentives of the kind which can destroy cost-effectiveness in the long run. As a result, cost-effective distinctions are seldom attempted in a regulatory system even when the agency has the information needed to make them.
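A minimal numerical sketch, again with purely hypothetical cost figures, shows why cost-effectiveness requires uneven control and how sharply the resulting burdens can differ among sources:

# Three hypothetical sources face rising marginal abatement costs (MC = a * q
# dollars per ton, where q is tons abated); together they must remove 90 tons.
slopes = [1.0, 4.0, 10.0]
required = 90.0

def cost(a, q):
    # Total cost of abating q tons when the marginal cost is a * q.
    return 0.5 * a * q * q

# Uniform requirement: every source abates the same amount.
uniform = [required / len(slopes)] * len(slopes)

# Least-cost allocation: equalize marginal cost a_i * q_i across sources,
# which is what a uniform discharge price would induce.
price = required / sum(1.0 / a for a in slopes)
least_cost = [price / a for a in slopes]

for label, plan in (("uniform", uniform), ("least-cost", least_cost)):
    total = sum(cost(a, q) for a, q in zip(slopes, plan))
    print(label, [round(q, 1) for q in plan], "total cost:", round(total))

# The least-cost plan asks the cheapest source to remove roughly 67 of the
# 90 tons, cutting total cost by more than half, but it also distributes
# control costs very unevenly across the three sources.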

Whether these problems will get better or worse with time cannot be reliably predicted now. If "closing the loop" on water polluters becomes a realistic option, then presumably the task of setting technological requirements in the long run will be eased. But it is at least as likely, even in water, that more requirements will be put on more pollutants, the number and diversity of sources will grow, and the costs incurred by society because of the inherent inability of a central regulatory agency to manage technology efficiently will continue growing.

Investment and Maintenance Over Time

U.S. environmental policy since 1970 has been almost exclusively focused on short-term results, often to the detriment of longer-term prospects. For example, as discussed above, the technology-forcing
