Sometimes, it’s the small details that hobble even the most easily explained policies. When California decided to expunge felony records for marijuana offenses, relief for former felons was hampered by a lack of comprehensive recordkeeping and reliance on proactive individual action (the expungement wasn’t automatic; you had to ask for it). These and similar stumbling blocks can be weaponized by opponents, as occurred with the restoration of voting rights to felons in Florida. It’s a technological spin on the well-known legislator’s warning, “If I let you write the substance and you let me write the procedure, I’ll screw you every time.”
In Recoding America, Jennifer Pahlka makes the argument that there doesn’t even have to be a bad guy on the procedure side for this to happen. This is a book by a technocrat with a persuasive argument for a measure of technocracy: America’s ways of lawmaking could be greatly improved by borrowing from the project management concept of agile development, which allows people lower in the hierarchy to make consequential decisions rather than requiring all the rules to be specified in advance. The latter, “waterfall” development, can lead to deadly (sometimes literally) complexity and policy failure. When policy is too rococo and reticulated, such as having nine different definitions of a “group” of doctors for Medicare purposes, throwing money at the problem rarely helps. Neither do outsourcing and oversight, both of which Pahlka believes can help when properly deployed but often end up generating more layers of bureaucracy.
Pahlka argues that teams building tech to implement a government policy should have the authority to alter it as they go. They should build something that works at least a bit as soon as possible, shift edge cases to human review, and automate the easy stuff rather than building software that’s supposed to accommodate all possible situations. The worst part of directives from above, from her view, is that “nowhere in government documents will you find a requirement that the service actually works for the people who are supposed to use it. The systems are designed instead to meet the needs of the bureaucracies that create them—they are risk-mitigation strategies for dozens of internal stakeholders”—even though they also fail at that regularly.
The failure of service and benefit systems is a special problem for government because people interpret their experiences with bureaucracies as evidence of how government works more generally. Involvement with the criminal system, getting a construction permit, or filing taxes can be unpleasant enough that it erodes faith in government and deters political participation. (This observation suggests that making voting complex, as when absentee voters are required to use two envelopes and sign and date one of the envelopes, doesn’t just disenfranchise individual voters—the ultimate effect is to deter citizens from even trying.) Unfortunately, the problem is also worse in government because obsolete tech is paired with obsolete policies—not just obsolete, but accreted over rounds and layers of attempted reform, which is how you get those nine definitions of a “group” of doctors in Medicare.
Pahlka compares current policymaking frameworks to waterfall development in software, where directives come from above. In waterfall development, new data is used only to grade performance after the fact. “For people stuck in waterfall frameworks, data is not a tool in their hands. It’s something other people use as a stick to beat them with.” Naturally, they aren’t that interested in collecting ammunition against themselves. In addition, “[e]ven when legislators and policymakers try to give implementers the flexibility to exercise judgment, the words they write take on an entirely different meaning, and have an entirely different effect, as they descend through the hierarchy, becoming more rigid with every step.” She gives numerous examples.
One thing that Pahlka suggests might be fixable is policymakers’ cultural contempt for implementation. They think/hope/expect/imagine that if they write the right rules, everything will be fine. But it isn’t and won’t be. Pahlka criticizes the Administrative Procedure Act rulemaking process that most of government uses, because it essentially invites and requires interest group lobbying for every rule. The required process is more like a jury trial than an expert evaluation. Leftists, she argues, got really good at suing the government to stop bad stuff, but that contributed to an environment of risk aversion at agencies (and didn’t stop the Supreme Court from harming agency power anyway).
At points, Pahlka is pretty clear that there are some no-win scenarios here: Equity usually requires data, which requires paperwork, which favors the powerful. So, what is to be done? One very concrete recommendation from Pahlka is to focus on making things simpler for most people and devote human resources to the tougher situations. She argues that new programs should be launched when they’re ready to handle 85% of the cases, though the edge cases should eventually be addressed technologically as well. In reality, she points out, policies are launched incrementally anyway, because the systems built under current processes don’t work for a lot of people. Waterfall policymaking merely ensures that rollouts are incremental in the worst possible way.
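The “automate the easy majority, route the hard cases to humans” pattern Pahlka recommends can be sketched in a few lines. This is a hypothetical illustration of the triage idea, not anything from the book; the eligibility fields are invented for the example.

```python
# Hypothetical sketch of the triage pattern: auto-approve the
# straightforward ~85% of applications, queue the rest for a caseworker.
from dataclasses import dataclass

@dataclass
class Application:
    income_verified: bool        # invented fields for illustration
    address_on_file: bool
    has_conflicting_records: bool

def triage(app: Application) -> str:
    """Automate the easy cases; send anything ambiguous to human review."""
    if app.income_verified and app.address_on_file and not app.has_conflicting_records:
        return "auto-approve"
    return "human-review"

print(triage(Application(True, True, False)))   # auto-approve
print(triage(Application(True, False, True)))   # human-review
```

The point of the design is that the automated path never has to encode every rule up front: anything the simple checks can’t resolve falls through to a person, so the software can ship before the long tail is handled.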
As one person quoted in the book says of welfare applications, “Every time you add a question to a form, I want you to imagine the user filling it out with one hand while using the other to break up a brawl between toddlers.” Documentation should be required only when needed and responsive to actual circumstances. Sadly, Pahlka gives short shrift to the idea of just removing eligibility constraints for services and benefits, whether that’s sending money to people with kids or implementing universal health care. More universal safety nets could lead to far less waste and failure in the administration of exceptions.
For an example of successful agile development in government, Pahlka points to free COVID tests—not for nothing, a universal policy. The rule was that each unique address could order only a certain number of free tests. Initially, the postal service just asked for a requester’s address. But it turned out that, occasionally, “one apartment dweller requesting tests would blacklist other units in the same building.” This wasn’t a programming error. It was a problem with the post office’s records, which hadn’t been updated to reflect division of a building into apartments, even though individual mail carriers were compensating for that when they walked their routes. In response, the team added a process involving human review of edge cases, whereby an individual could fill out a short form appealing a denial as error. Pahlka acknowledges that this process disproportionately burdened lower-income individuals. But it also cleaned up around two-thirds of the residential address database as a result.
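The failure mode here—a stale database that treats a subdivided building as one address, patched by a human-review appeal path—can be shown with a toy sketch. This is a hypothetical illustration, not USPS code; the address format and matching rule are invented for the example.

```python
# Toy illustration (hypothetical, not the actual USPS system): the stale
# database collapses apartments into one building record, so the second
# unit's order looks like a duplicate and gets routed to human appeal.

known_addresses = {"12 MAIN ST"}   # building was never split into units
orders_placed = set()
appeal_queue = []

def request_tests(address: str) -> str:
    key = address.upper()
    base = key.split(" APT")[0]    # stale DB only knows the building
    if base not in known_addresses:
        return "address not found"
    if base in orders_placed:
        # Looks like a duplicate order; queue an appeal for a human reviewer.
        appeal_queue.append(key)
        return "denied: appeal queued for human review"
    orders_placed.add(base)
    return "tests shipped"

print(request_tests("12 Main St Apt 1"))  # tests shipped
print(request_tests("12 Main St Apt 2"))  # denied: appeal queued for human review
```

Each granted appeal would then add the missing unit as its own record, which is how working the edge cases ended up cleaning the database as a side effect.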
Even though it sounds scary and even undemocratic to have a random technologist embedded deep in the hierarchy making important distinctions, Pahlka argues that it’s necessary for the success of the actual intended outcomes decided upon by elected representatives. When no one can go ahead and make decisions about how a program should work, but lots of people have the power to add requirements to it—as is now the case—you get lots of paperwork and few good outcomes. Good product management can “reimagine representation and voice so as to honor the values our government is supposed to be founded on.”
To further improve things, Pahlka argues that the government should spend money improving its human resources, especially at the levels of program management/operating expenses. Oversight should ask less about whether a team stuck to a plan, and more about what the team learned in implementation and what user tests are showing now.
There are some things this book misses. Pahlka, who’s not a lawyer, doesn’t suggest that new laws should explicitly allow regulators to easily simplify and even eliminate earlier categories and rules. There are certainly reasons why we don’t do that. For example, lots of systems rely on past categories and rules and changing them could cause a cascade of incongruencies. But the accretion of legal complexity keeps making things worse. In my view, it would be a worthwhile exercise to try to write a law that actually allowed agile development of implementation policies—and then probably a painful exercise to see what happened when courts got their hands on it.
Give people leeway to implement the intent of the overall policy, Pahlka suggests, and you can avoid the layers of bureaucracy that stymie well-intentioned attempts at reform. While there’s merit in the argument, she doesn’t give a lot of weight to the reasons that policymakers try to be comprehensive. Although the perfect shouldn’t be the enemy of the good, it’s also the case that if you get a policy running that works for 90% of people, the 10% excluded are likely to share some demographic characteristics, and historically the policy is unlikely to be revisited to fix it for them. That result is usually worse when it comes from government than when it comes from private software. But if the perfect is not to be the enemy of the good, then oversight that focuses on implementation success, and flexibility to keep working for that 10%, are the proper solutions.