Cary Coglianese is the Edward B. Shils Professor of Law, Professor of Political Science, and Director of the Penn Program on Regulation at the University of Pennsylvania Law School. He is the founder of and faculty advisor to RegBlog.
The current spending battles in Washington reveal the deep fault lines between the political parties over the size and role of the federal government. That divide also emerges in the parties’ rhetoric over the value of government regulation. Republicans attack regulations as burdensome job killers, while President Obama and fellow Democrats defend them as common-sense rules that protect Americans.
When it comes to regulation, though, there is something on which Democrats and Republicans agree. Since the early 1980s, both parties have broadly supported “regulatory impact analysis,” a technique used by federal agencies, such as the Environmental Protection Agency or the Department of Transportation, to predict the benefits and costs of new, binding regulations before they adopt them. Every president from Ronald Reagan to Barack Obama has directed federal agencies to conduct such analyses before establishing any rule expected to have an annual impact on the economy of more than $100 million.
However, as important as it is for federal agencies to look before they leap, it is equally important that they also conduct retrospective analysis. Agencies face no comparable obligation from the White House to look back systematically and see what good (or harm) each of their major regulations has done.
At least part of today’s partisan divide over regulation could be bridged by creating credible practices for retrospective evaluation. Just as agencies are currently expected to make their best predictions about the costs and benefits of major regulatory proposals, they could also be expected to inquire after the fact about each such regulation they adopt: How much positive value has society reaped from it? How much did it cost in dollars or jobs? Was there a more effective way of doing it?
President Obama had it exactly right earlier this year when he articulated in an executive order the principle that government “must measure, and seek to improve, the actual results of regulatory requirements.” That may seem self-evident, but it’s really quite striking because this was the first time such language appeared in a formal presidential order on regulation.
The president has also ordered agencies to develop plans to conduct periodic reviews of existing regulations. These plans are due in May, so it remains to be seen how meaningful they will be.
One step forward that agencies, or the president, could take would be to require that any future regulatory proposal that by law needs a prospective impact analysis must also be accompanied by an individualized plan for retrospective evaluation.
To do retrospective evaluation well, agencies must engage in advance planning: making early decisions about how data will be defined and collected over time and what relevant control groups might be used for making comparisons. With such a plan in place before an agency imposes a new rule, it is more likely a few years down the road that evaluators will be able to say something scientifically reliable about how well the regulation worked and what it cost.
Another, complementary idea would be for Congress to require agencies to set aside a small amount of funds from their budgets — say, 1% of the predicted annual impact of a new rule — to be dedicated to retrospective research. After all, if a rule is expected to cost society more than $100 million a year, it’s worth dedicating even a tiny fraction of that amount to find out if, after it’s adopted, the rule is working as expected.
Generating better and more extensive evaluation research on regulation may also help narrow the political divide. The debate over regulation stems in part from insufficiently informed beliefs, on all sides, about regulation’s actual impacts. One way to narrow the political chasm is to generate more facts about what works and what doesn’t.
This post first appeared as an op-ed in the Los Angeles Times.