Machines as Bureaucrats
We rely on agencies to improve air quality and mitigate climate change, protect public health and safety, and safeguard the integrity of financial markets. Nearly a century ago, Max Weber cogently observed that the modern nation-state depends on bureaucracy—or, in modern parlance, on administrative agencies. He would not have been surprised to see how, even as self-driving cars navigate the streets of Pittsburgh, Pennsylvania, and Mountain View, California, agencies staffed by bureaucrats and overseen by administrators have remained the essential organizational technology of the administrative state. Whether those agency administrators exercise sufficient independent judgment as individuals to ensure the integrity and accountability of a decision has, in turn, been the subject of some classic administrative law cases.
But technological change is creating new dilemmas and opportunities for the administrative state. Agencies today can rely on sophisticated computer programs—programs that agencies could use to make or support decisions, and that could therefore assume an increasingly prominent role in the regulatory process. The smartphones that so many Americans carry around in their pockets are far more powerful––and an order of magnitude cheaper––than the vast computers scientists and the military used a generation ago. In the coming years, computing power and storage will almost certainly become even cheaper, surveillance more pervasive, software architecture more flexible, and the limitations of human decision-makers more salient.
Justice Mariano-Florentino Cuéllar at the Penn Program on Regulation’s annual regulation dinner.
Traditional expert systems used law-like techniques to search through potential options when analyzing how to diagnose certain medical conditions, or how to categorize a particular kind of molecule––but they were cumbersome at best when it came to some of the seemingly simplest things that people could do almost “without thinking,” like classifying visual objects, interpreting idiomatic expressions, or decoding nonverbal communication. As computing power gets cheaper and software improves, expert systems can sift through millions of options ever more quickly. But an even bigger change is underway in the realm of so-called “machine learning,” whose software architecture relies on two interesting techniques.
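To make the contrast concrete, the rule-based search at the heart of a traditional expert system can be sketched in a few lines of Python. Everything here is invented for illustration—the symptoms, the rules, and the diagnoses—and a real system would chain thousands of such hand-written rules rather than three.

```python
# A highly simplified, hypothetical expert system: a list of hand-written
# if-then rules is searched to classify a case from observed features.
# The rules and condition names below are illustrative, not from any real system.

RULES = [
    ({"fever", "cough"}, "influenza"),
    ({"fever", "rash"}, "measles"),
    ({"sneezing"}, "common cold"),
]

def diagnose(symptoms):
    """Return the diagnosis of the first rule whose conditions all hold."""
    for required, diagnosis in RULES:
        if required <= symptoms:          # rule fires when all conditions are present
            return diagnosis
    return "unknown"

print(diagnose({"fever", "cough", "fatigue"}))  # -> influenza
```

The search is transparent—one can replay exactly which rule fired—but, as the essay notes, encoding tasks like visual classification this way proved hopelessly cumbersome.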
One technique involves so-called “neural networks,” which are inspired by the layout of the human brain to spot patterns and leverage “big data.” “Deep learning” systems embody a particular architecture for neural networks that avoids some persistent problems neural networks have had in developing adaptive responses to new data; they have sparked particular interest because of their capacity to solve pattern-recognition problems in computer vision and other fields.
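The pattern-spotting idea behind neural networks can be illustrated, in radically simplified form, by a single artificial “neuron” that learns a pattern—here, the logical AND of two inputs—by nudging its weights whenever it misclassifies a training example. This toy is only a sketch: deep learning systems stack many layers of such units and train on vast datasets, and all names and data below are invented for illustration.

```python
# A single artificial neuron (a perceptron) that learns the logical AND
# function from labeled examples by adjusting its weights after each mistake.
# Deep learning stacks many layers of units like this one.

def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1        # nudge weights toward the target
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x1, x2) for (x1, x2), _ in AND])  # -> [0, 0, 0, 1]
```

Unlike the expert system above, nobody wrote a rule for AND—the weights were learned from data, which is precisely what makes such systems powerful and, at scale, hard to audit.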
Meanwhile, “genetic algorithms” emerge by developing simple algorithms––or baby computer programs––to solve a problem like spotting suspicious financial transactions, allowing those algorithms to mutate slightly over time, and then selecting for the algorithms that beat the others on a given metric. It is a great way to write a pesky computer virus that is nearly impossible to defend against, a topic we will return to later in this series. But it is more generally through machine learning that new progress is underway on many of those apparently simple but devilishly hard technical problems, like vision and speech recognition.
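The mutate-and-select loop just described can also be sketched in a few lines. In this deliberately tiny example, each “baby program” is nothing more than a dollar threshold for flagging a transaction as suspicious, the labeled transactions are invented, and the metric is simply the number of examples classified correctly.

```python
import random

# A toy genetic algorithm: a population of extremely simple "programs"
# (each is just a dollar threshold for flagging a transaction) mutates
# slightly each generation, and the candidates that score best on a
# metric survive. All data and parameters here are invented.

random.seed(0)

# (amount, is_suspicious) -- hypothetical labeled transactions
DATA = [(120, 0), (300, 0), (950, 1), (80, 0), (2000, 1), (640, 1), (55, 0)]

def fitness(threshold):
    """Metric: how many transactions the rule classifies correctly."""
    return sum((amount > threshold) == bool(label) for amount, label in DATA)

def evolve(generations=30, pop_size=20):
    population = [random.uniform(0, 3000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]          # selection
        mutants = [t + random.gauss(0, 50) for t in survivors]  # mutation
        population = survivors + mutants
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))  # the best surviving rule and its score (max 7)
```

No one told the program where to draw the line; the threshold was bred. Real genetic programming evolves far richer structures than a single number, which is what makes the resulting behavior so hard to anticipate.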
Because of these changes, lawyers for regulated industries, citizens facing possible search or arrest from the police, and individuals seeking asylum will find themselves interacting with agency officials who rely heavily on software to make decisions––or perhaps these members of the public will be interacting directly with the software itself. More extensive use of programs designed to supplement—or even replace—human decision-makers will become commonplace as computing power and memory become cheaper, data from surveillance become more pervasive, and economic and military pressures drive adoption. The public already relies on software to recommend romantic partners and investments. As autonomous and elaborate decision-support programs become more common, social norms will continue to change about the propriety of relying on computers to make decisions. Although computer programs analyzing vast amounts of information may hold some promise for making better use of data, enhancing transparency, and reducing inconsistency in bureaucratic justice, such reliance may bring about subtle consequences and deeper questions that merit careful scrutiny.
Cary Coglianese, Director of the Penn Program on Regulation, welcomes guests to Penn Law’s annual regulation dinner.
What should we make of a world where the entities entrusted to exercise administrative power are not agencies but software programs that leverage the fast-developing technology of artificial intelligence? Imagine a series of sleek black boxes—capable of sustaining a cogent conversation with an expert, and networked to an elaborate structure of machines, data, and coded instruction sets—that deliver bureaucratic justice. It could begin innocently enough, with anodyne decision-support programs for administrative law judges adjudicating disability claims, or for hearing examiners at the U.S. Environmental Protection Agency. But as the interfaces became more intuitive and the analytical capacity more sophisticated, the black boxes might steadily climb up the bureaucratic ladder, displacing supervisors, division heads, and even agency administrators. All of which could recast—or even disrupt—legally-sanctioned bureaucratic authority.
It may seem simple enough to determine the expected value of these changes in social welfare terms. Consider the choice, for example, to replace an administrative law judge working on disability determinations, or even an Under Secretary responsible for food safety, with an expert system––one that could replay in exquisite detail the sequence of decision rules it relied on to render a judgment. Any reasonable effort to evaluate the quality of that system's judgments depends mainly on how a statute or regulatory rule defines a domain-specific metric of success. But because such delegation could affect variables that cut across domains––such as perceptions of government legitimacy, cybersecurity risks, and the extent of presidential power––even more important would be an uncontroversial metric of social welfare, along with certain assumptions to minimize the difficult trade-offs across domains.
But more profound challenges would arise in the myriad situations where such an unambiguous metric is not so easily available. Think about the subtle choices involved in drug approval, asylum applications, bioethics, and the protection of endangered species. In all of these areas, heavy reliance on artificially intelligent systems could make it harder for lawmakers, courts, and the public to assess the consequences of automated agency decision-making where the trade-offs are complex.
We may ultimately find that the choices we make about automation will be part of a broader conflict about the role of people in an economy that sheds a large proportion of existing job categories more quickly than expected, even as it continues to enhance automation technologies that humans find, like a sweet-tasting artificial strawberry dessert, occasionally more satisfying than the “natural” alternative. As these questions become more familiar, the administrative state will continue confronting a host of challenges entirely recognizable to Weber––from striking the right balance between agency insulation and responsiveness to the role of tradition in bureaucratic decision-making. But increasingly, the dilemma agencies and the public will face is what to do about the aforementioned sleek black boxes that promise to make governing far simpler and cheaper. Whether those boxes also give us an accurate account of who gains or loses in the process is not something we should take for granted.