A New Measure of the Quality of Regulatory Analysis

Our Report Card project scores the use of regulatory analysis in agency rulemakings.

How well do federal agencies conduct the regulatory analysis required in Executive Order 12866 and use it to make decisions?

Until now, scholars have tried to answer this question using two evaluation methods: objective checklists, such as those advanced by Robert Hahn or supported by Stuart Shapiro and John Morrall, which note the presence or absence of various factors in Regulatory Impact Analyses, and in-depth case studies, which offer detailed assessments of the underlying research. The two methods have forced a tradeoff between breadth and depth: checklists can cover many regulations but only superficially, while case studies probe only a few regulations in detail.

However, in our new article in Risk Analysis, we present a third, middle-ground approach that offers both breadth of coverage and depth of analysis. We developed an expert assessment methodology that uses a zero-to-five Likert scale to rate the quality and use of regulatory analysis on 12 criteria derived from Executive Order 12866, Office of Management and Budget (OMB) guidance, and scholarly research.

Our method groups the criteria into three categories. First, we scored the relative openness of the analyses: how easy is it to find the regulatory analysis, understand it, and trace the research and data back to original sources? Next, we considered the substance of the analysis: how well does the analysis define the market failure or other systemic problem the regulation seeks to solve, identify the outcomes (benefits) the regulation is intended to produce, develop a wide variety of alternatives, and estimate the benefits and costs of alternatives? Finally, we examined the use of the analysis: to what extent did the agency claim to use the regulatory analysis, take net benefits into account, and make provisions for retrospective evaluation of the regulation’s actual benefits and costs after it is implemented?
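To make the structure of the scoring concrete, here is a minimal sketch, in Python, of how a regulation’s 12 criterion scores might be stored and aggregated into category and total scores. The criterion labels and the even split of four criteria per category are illustrative placeholders, not the project’s actual criterion names; only the zero-to-five scale, the 12 criteria, and the three categories come from the description above.

```python
# Illustrative sketch: aggregating 0-5 Likert scores on 12 criteria into
# category scores and a total score (maximum 60 = 12 criteria x 5 points).
# Criterion labels below are placeholders, not the Report Card's actual criteria.
CATEGORIES = {
    "Openness": ["findability", "clarity", "data_sourcing", "documentation"],
    "Analysis": ["systemic_problem", "outcomes", "alternatives", "benefit_cost"],
    "Use": ["claimed_use", "net_benefits", "goals", "retrospective_review"],
}

def category_scores(scores: dict[str, int]) -> dict[str, int]:
    """Sum the 0-5 criterion scores within each of the three categories."""
    return {cat: sum(scores[c] for c in crits) for cat, crits in CATEGORIES.items()}

def total_score(scores: dict[str, int]) -> int:
    """Total score across all 12 criteria (0 to 60)."""
    return sum(scores.values())

# A hypothetical regulation scored at 3 on every criterion:
example = {c: 3 for crits in CATEGORIES.values() for c in crits}
print(category_scores(example))  # {'Openness': 12, 'Analysis': 12, 'Use': 12}
print(total_score(example))      # 36
```

On a 0-to-60 scale of this kind, the four-year mean total score of about 31.6 reported below sits just above the midpoint of the possible range.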

Our article contains results for all economically significant regulations proposed in 2008. The scoring project is ongoing, and subsequent work has expanded the score data through 2011 for “prescriptive” (i.e., non-budget) regulations. We found that regulations that implement budget programs tend to have lower scores than prescriptive regulations. Although neither Executive Order 12866 nor OMB Circular A-4 says that budget regulations are subject to different analytical standards, agencies and OMB apparently treat them differently. For this reason, we discontinued evaluations of budget regulations after 2009.

Although the scoring is not yet complete for all prescriptive economically significant regulations proposed in 2011, seventeen of the twenty-three regulations have been scored. The accompanying graph shows the mean overall scores for each year as well as the mean scores in each of the three categories.

Keeping in mind that the 2011 averages may still change as we finish scoring the remaining 2011 proposed rules, the graph shows that the average scores have not changed much since we began the Regulatory Report Card in 2008. The mean total score for the entire four-year sample is about 31.6, and no single year’s mean score is statistically different from the overall average at the 1 percent or 5 percent significance level. The same is true when examining the means of Openness, Analysis, and Use: no single year stands out as an aberration compared to the overall average of the four-year sample.
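The article does not spell out which test underlies that statement, but a one-sample t-test of a single year’s total scores against the four-year mean is one straightforward reading. The sketch below uses invented scores for the seventeen scored 2011 regulations purely for illustration; the real score data are available on the Report Card site.

```python
# Hypothetical illustration: does one year's mean total score differ from the
# four-year average of ~31.6? (Scores below are invented, not the real data.)
from scipy import stats

FOUR_YEAR_MEAN = 31.6
scores_2011 = [28, 35, 30, 41, 25, 33, 29, 36, 31, 27, 38, 24, 32, 30, 34, 26, 37]

t_stat, p_value = stats.ttest_1samp(scores_2011, popmean=FOUR_YEAR_MEAN)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A p-value above 0.05 (and therefore above 0.01) would match the finding that
# no year's mean differs from the overall average at the 5% or 1% level.
```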

Our ongoing research has used the score data to test whether various political and electoral factors affect the quality and use of regulatory analysis. Perhaps the timeliest finding in this presidential election year is that “midnight regulations”—regulations finalized between Election Day and Inauguration Day—have measurably lower-quality analysis than ordinary regulations. For example, the analyses for the Bush administration’s prescriptive midnight regulations scored 23 percent below ordinary regulations proposed in 2008. We detail this result and others in our article in the Administrative Law Review’s OIRA anniversary issue.

Our scoring process guards against reviewer bias through reviewer training and audit procedures. For the first several years of the project, the evaluation team consisted of several senior scholars at the Mercatus Center and graduate students trained in regulatory analysis. Currently, the evaluation team consists of Mercatus scholars plus a group of economics professors at universities around the country. All evaluators undergo training in the evaluation method, and inter-rater reliability statistics demonstrate that the team of trained evaluators produces consistent evaluations.
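The article does not identify which inter-rater reliability statistic the project reports, but a weighted Cohen’s kappa is a common choice for ordinal zero-to-five ratings and illustrates the idea. The two raters’ scores below are hypothetical.

```python
# Hypothetical illustration of an inter-rater reliability check: weighted
# Cohen's kappa for two evaluators' 0-5 scores on the same 12 criteria.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 2, 5, 3, 1, 4, 0, 2, 3, 4, 5]
rater_b = [3, 4, 3, 5, 2, 1, 4, 0, 2, 3, 5, 5]

# Quadratic weighting gives partial credit for near-misses on an ordinal scale.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")  # values near 1.0 indicate strong agreement
```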

All score data, as well as notes justifying each regulation’s scores on every criterion, are available to the public on the Mercatus Center’s Regulatory Report Card site. As we complete evaluations for each year, complete sets of scoring data are made available in a downloadable format. By combining the analytical benefits of breadth and depth, the Report Card project is a novel way to check the quality of regulatory analysis and its use in agency decisions.