Trickle-down Regulatory Impact Analysis
Ever since Ronald Reagan’s Executive Order 12291, conservative Republicans have been some of the staunchest defenders of Regulatory Impact Analysis (RIA), and progressive Democrats have been some of its most vocal critics. But does the identity of the party controlling the White House actually affect the quality or use of RIAs?
In a forthcoming article in Regulation & Governance, we use data from the Mercatus Center’s Regulatory Report Card to test for various differences in regulatory analysis between the Bush and Obama administrations. Three years of score data measure the quality and use of regulatory impact analysis over three periods: the final year of the Bush administration, the early transition period in the Obama administration when an acting director headed the Office of Information and Regulatory Affairs (OIRA), and the period from the confirmation of Cass Sunstein as OIRA Administrator on September 10, 2009, through the end of 2010.
For the three-year period we studied, we found little overall difference in the quality or use of regulatory analysis between administrations, after controlling for what we’ve termed the Bush administration’s “rushed midnight” regulations, which had lower-quality analysis. In the Obama administration, regulations whose OIRA reviews were completed after Sunstein’s confirmation had about the same quality of analysis as those whose reviews were completed during the early transition period.
Our analysis revealed the biggest and most statistically significant effects when we controlled for the type of regulation. Across all three periods, almost all types of prescriptive regulations (which contain mandates or prohibitions) had higher-quality analysis than budget regulations (which implement federal spending or revenue collection programs). Significant differences also existed between different categories of prescriptive regulations. Environmental and homeland security regulations, for example, tended to have better analysis than economic, civil rights, and health and safety regulations.
These effects dominated even when we controlled for the federal agency issuing the regulation. Perhaps there are differences in the “state of the art” of regulatory impact analysis for different topics that transcend differences between administrations or departments.
The principal difference between administrations emerged when we included a variable, developed by Joshua Clinton and David Lewis, that measures agencies’ policy preferences. Their survey identified some agencies as more “conservative” by virtue of their missions or cultures (like Defense or Homeland Security) and others as more “liberal” (like Labor or Health and Human Services). In the Bush administration, the more liberal agencies tended to have better analysis than others. In the Obama administration, the more conservative agencies tended to have better analysis than others and made stronger claims that they used it to make decisions. For both administrations, however, the more liberal agencies had better documentation of their data than their more conservative counterparts.
We suspect these data mean that an agency may not have to work as hard on a particular RIA to facilitate OIRA’s approval of a regulation when the agency’s historical mission and policy preferences are more closely aligned with the administration’s. Other studies suggest this was likely the case for homeland security in the Bush administration and health care in the Obama administration.
Taken together, these results suggest that institutional factors play a major role in the quality of regulatory impact analysis. We are more likely to improve the quality and use of RIAs by changing the institutional dynamics than by changing which people are in charge.