RegBlog

Randomizing Regulation

If legislators disagree about the efficacy of a proposed policy, why not resolve the disagreement with a bet?

One solution would be to impose a policy randomly on some members of the population, but not on others, to determine whether the policy meets its goals. This approach would overcome the measurement problems of conventional regression analysis, provide a useful way to compare regulations, and promote bipartisan agreement. Legislators might agree that once such a test is complete, the winning approach would apply to everyone.

For example, regulators could test the Sarbanes-Oxley Act’s most controversial provisions, such as those requiring public companies to institute internal controls and then to have their CEOs and CFOs certify their financial statements, by randomly repealing one or more of those provisions for some corporations for some period of time. Randomization would enable analysts to determine which regulatory regime is optimal by assessing which test group of corporations has the highest level of success, whether measured by stock price, investor confidence in financial reporting, lack of fraud, or other yardsticks.

Conventional statistical and econometric analytical techniques are often used to measure the efficacy of statutes and regulations, but they face problems that randomized trials would not. Researchers may purposefully or mistakenly omit variables from their regression analyses, leading to incorrect results. Publishers are more likely to feature work that provides statistically significant results, even if those results are not correct, a phenomenon known as publication bias.

Randomized trials avoid variable omission and publication bias, but they do, of course, have their own limitations. The gold standard of randomized trials is the double-blind study, which is often used to determine the effectiveness of medical interventions. In a double-blind study, neither the doctor nor the patient knows which treatment the patient is receiving. Such pure randomization is not possible in legal experiments because test subjects need to know which rules to follow.

There can also be selection problems with randomization. Those who self-select to participate in an experiment may tend to be different from others in ways that affect outcomes. For example, those companies that choose to participate in a Sarbanes-Oxley Act experiment might be those already facing the highest compliance costs. The government can sometimes address this problem by designing tests with mandatory participation.
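The self-selection problem can be illustrated with a small simulation (a hypothetical sketch, not drawn from the authors' article; the firm baselines, effect size, and noise levels are invented for illustration). If high-burden firms opt into a voluntary "repeal" experiment, a naive comparison of volunteers against everyone else confounds the treatment effect with the firms' baselines, while random assignment recovers the true effect:

```python
import random

random.seed(7)

TRUE_EFFECT = -3.0  # hypothetical: exemption lowers compliance cost by 3 units

# Each firm has a baseline compliance burden, drawn from a normal distribution.
baselines = [random.gauss(10.0, 2.0) for _ in range(20_000)]

def observed_cost(baseline, exempted):
    """Observed cost = baseline + treatment effect (if exempted) + noise."""
    return baseline + (TRUE_EFFECT if exempted else 0.0) + random.gauss(0.0, 1.0)

def mean(xs):
    return sum(xs) / len(xs)

# Voluntary design: only high-burden firms (baseline above average) opt in.
volunteer_treated = [observed_cost(b, True) for b in baselines if b > 10.0]
volunteer_control = [observed_cost(b, False) for b in baselines if b <= 10.0]
naive_estimate = mean(volunteer_treated) - mean(volunteer_control)

# Randomized design: a coin flip decides which firms are exempted.
rand_treated, rand_control = [], []
for b in baselines:
    if random.random() < 0.5:
        rand_treated.append(observed_cost(b, True))
    else:
        rand_control.append(observed_cost(b, False))
randomized_estimate = mean(rand_treated) - mean(rand_control)
```

In this simulation the naive estimate is badly biased toward zero, because the volunteers' higher baselines mask the benefit of the exemption, while the randomized estimate lands close to the true -3.0.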

Other limitations of randomization include attrition of test subjects from one test group but not another, crossover of test subjects from one test group to another (such as by well-connected people), and spillover of the experiment's effects onto those outside the test groups.

Randomized tests also need to be designed with the costs of research in mind. But these costs should decrease over time as the public becomes more comfortable with the idea of participating in experiments and policymakers have more experience with them. Also, policymakers should experiment with the best candidate policies first so as to minimize the total cost of testing.

Ethical concerns are important, but may not present a significant barrier to using randomized tests. While legal randomized tests would lack the informed consent provided in medical experiments, the government regularly imposes regulations on the public – within constitutional and other legal bounds. Also, randomization sometimes makes the imposition more equal than regulation imposed using predetermined criteria. We tend to think it is worse to impose rules on people because the selected people are unpopular rather than simply because they were selected randomly.

How should randomized trials work? The experiments should be large enough to produce statistically meaningful results. The unit of randomization, meanwhile, should be as small as possible without the experiment changing outcomes outside the test groups. For example, driving speed limits cannot be randomized at the individual level, because such a test would significantly increase the risk of accidents; the test group could instead be set at the county level.
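What counts as "large enough" can be made concrete with the standard two-sample power calculation (a textbook formula, not one given in the post): the number of units needed per group grows with the outcome's variance and shrinks with the square of the smallest effect worth detecting.

```python
from math import ceil
from statistics import NormalDist

def required_n_per_group(min_effect, sigma, alpha=0.05, power=0.80):
    """Units needed in each group of a two-sample test to detect a
    difference of `min_effect` in an outcome with standard deviation
    `sigma`, at significance level `alpha` with the given power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / min_effect ** 2)

# Detecting an effect half a standard deviation wide takes about 63 units
# per group; halving the detectable effect quadruples the required sample.
```

The quadratic penalty is why small regulatory effects demand experiments with many counties or firms, and why policymakers must weigh the minimum effect worth detecting before committing to a test.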

Experiments should also last long enough that test subjects cannot simply alter their behavior temporarily until the experiment ends. For example, if different income tax rates are imposed on different people to see whether a higher income tax reduces work output, a short experiment would be more likely to be biased: workers could wait out a temporary tax increase by working less, planning to work more once their rate decreases.

There is no problem, under current standards of judicial review, with administrative agencies testing out different regulations on their own. Agencies could put their proposed experimental regulations through the regular notice and comment process. After running the experiment, the agencies could provide a randomization impact statement explaining why the agency decided to test regulations through that process, describing the experiment, and providing its results. Because randomization provides for more objective analysis of policy results, courts should be more deferential in conducting hard look review to agencies that have selected policies through this approach.

Although random tests of laws and regulations need to be designed carefully, they have the potential to help identify which regulatory approaches are optimal. They can also facilitate legislative agreements. If each opposing camp of legislators on a policy question thinks that an experiment will prove it correct, both may be more willing to enact legislation than in a legislative culture in which the ultimate outcome is predictable or left to administrative discretion.

Michael Abramowicz is a Professor of Law at the George Washington University Law School. Ian Ayres is the William K. Townsend Professor at Yale Law School. Yair Listokin is a Professor at Yale Law School. This post draws from the authors’ recent article in the University of Pennsylvania Law Review, “Randomizing Law,” 159 U. Pa. L. Rev. 929 (2011).