When I entered MIT’s doctoral program in political science, I did not anticipate a career researching legal behavior, but I was already enthralled by the prospect of learning about how people think and behave as political actors. Although it makes dramatically clear how old I am, I will share that my first encounter with empirical research as a Hunter College undergrad involved entering questionnaire responses onto IBM punch cards – rectangular pieces of cardboard with rows and columns of possible positions to enter data – and then watching as a “counter-sorter” machine mechanically sorted the cards according to the patterns of punches. Before my eyes, I could see (and calculate) the percentages of men and women with various demographic characteristics who had responded to the questions I asked them about their political attitudes and voting behavior. I confess I still miss those close encounters with empirical data.
I took that delight in interviewing people about their experiences and attitudes and translating their answers into statistical data with me to MIT. At MIT, however, I had little opportunity to investigate legal behavior. Looking back, it seems strange that the school’s outstanding political science faculty didn’t accord much importance to the structure or operations of the legal system or the consequences of law on society. It was not until RAND – where I landed after graduate school – established a program of policy research on the civil side of the justice system that I had the opportunity to apply empirical research methods to studying courts, legal procedures, judges, lawyers and litigants. RAND was not the first organization to do such research: empirical research on criminal behavior and criminal law stretches back to the 19th century, and by the 1970s researchers at the Federal Judicial Center had pioneered evaluations of court reforms and scholars at the University of Wisconsin Law School had conducted groundbreaking studies of when and why ordinary Americans chose to pursue claims for justiciable harms. Even earlier, researchers at Columbia had studied auto accident victims’ claiming behavior and the consequences of judicial settlement conferences. However, in the early 1980s, when my colleagues and I explained to lawyers and judges that the new RAND program was going to conduct policy analyses based on empirical research, most of them responded that this was a fool’s errand: lawsuits, they agreed, were so individual, so context-specific, that it would be impossible to derive any patterns from dispute outcomes or to link behavior to outcomes. RAND leaders thought otherwise, however, and urged the small group of researchers who had joined this new program to persevere.
We began by reviewing the then-current (and today, still relevant) controversies about the civil justice system: that courts were overrun with cases, that litigation was ridiculously expensive, that juries were “out of control”, that disputants preferred settling in private to public contests in court. Our thought was to uncover existing data and build on that. To our astonishment, there were (virtually) no data to support (or dispute) these claims! Reading reams of commentary, including commentary by leading corporate officials, lawyers and judges, we could find scarcely a number to support the claims about civil justice that were bandied about in the policy arena. Commentators simply “knew” that their observations were correct. I remember our enthusiasm when one of our group actually did find some numbers reported in support of an empirical claim. We eagerly traced the claim to its author, only to find eventually that said author (a well-known public commentator) had explained that he produced the statistic that supported his argument on the “back of an envelope,” and pretty much out of thin air. At that point, we reported to our program managers that it was likely to take a lot longer than they had suggested to the program’s sponsors to produce the sort of empirical evidence that RAND policy analysts were accustomed to using to support policy recommendations.
For me, this began a decade of inquiry trying to pin down accurate information about the basic contours of America’s civil justice system. Over time, U.S. federal and state courts began to report accurate but still woefully incomplete data on lawsuit filings; jury researchers began to publish information on how juries in different locales decided different types of civil suits; procedural justice scholars began to evaluate claims about disputants’ preferences for different types of dispute resolution mechanisms. Although our goal as policy analysts was to promote sound policy, for the first decade we were mostly limited to a “just the facts” approach: we didn’t attempt much analysis because we felt we didn’t know enough about the civil justice system to assess its merits and demerits, much less opine on useful changes.
At first, my reaction to these experiences was simply frustration: how could anyone recommend sound policy if they didn’t know basic facts about the system they were trying to improve? However, as I learned more about law, I became increasingly intrigued by the disconnect between the empirical assumptions that underlie legal doctrine and what was actually known about the accuracy of those assumptions. Early in the development of the RAND program, its managers hired law faculty to teach us researchers – all but one of us non-law trained – something about substantive doctrine and procedure. Listening to one of the leading tort law professors of the day lecture on key tort doctrines, I could not keep myself from interrupting to ask: how do we know this [key doctrine] has that [desired effect] on behavior? What’s the empirical evidence for that? Has anyone ever studied it? The answer was often puzzlement: why would one even ask that, our mentor seemed to think. Without ever intending it, I seemed to have stumbled into a field where policy dicta were all about how rules affect behavior, but research on behavior – much less its connection to rules – was all but absent.
A doctoral candidate might have given up in frustration. However, as RAND researchers we had a platform from which to report our accumulating findings. Courts overloaded with civil lawsuits? Mostly not. Ridiculously expensive litigation? No one was keeping tabs on public costs, and private costs were kept confidential. People suing at the drop of a hat? Rarely. Juries out of control? More likely responding to changes in the types of cases being brought to them to decide. Disputants preferring settlement in private to court proceedings? Sometimes, but often not: defendants, for example, often wanted an opportunity to vindicate themselves in public. As we built datasets derived from court data, corporate records, jury verdict reporters, and survey interviews, we began to confirm and disconfirm – mostly the latter – the many popular allegations about America’s civil justice system, often yielding considerable publicity for our work. This was heady stuff for young researchers.
Over time, many of our program’s sponsors soured on the empirical research initiative they had instigated. Too often, the empirical data did not support their public arguments for policy change. However, while our findings did not always – or even often – carry the day in debates over proposed policy changes, they did help to produce a sea change: now, when contending parties promoted policy reforms to solve perceived problems, they were frequently met with the question “Where are the data to support your position?” And after a while, the idea that legal analysis should include empirical analysis began to make its way into the legal academy.
Fast forward to the 2000s. Law faculties, particularly at elite schools that could afford to add unconventional legal scholars to their ranks, began to hire folks with Ph.D.s in the social sciences as well as J.D.s. (Indeed, the number of “law-and” faculty candidates meant that people like me without J.D.s were no longer competitive.) Among the new recruits were economists and other quantitatively trained social scientists, many of whom had been educated to believe that only numeric data count as “empirical.” Often these empirically inclined legal scholars were more interested in building econometric models that required sophisticated statistical skills than in describing individual or corporate legal behavior and court operations. Rather than starting with a question and seeking data that might help answer it, they started with a database – often one created by other researchers – and then figured out how they might analyze it using their existing tools to produce a law journal article. (An example of the aphorism “to a man with a hammer, everything looks like a nail.”) This approach almost by definition excluded more qualitative methods, such as the case studies that qualitatively trained historians, political scientists and sociologists use to depict events and explore the factors that explain them. Often, non-empiricists on law faculties – influenced by their quantitatively inclined colleagues – dismissed this sort of work as not truly “empirical,” thereby excluding a considerable swath of law-and-society scholarship that has shed light on litigation dynamics and legal culture. I began to think that the turn to data in legal scholarship was contributing to the development and application of new analytic approaches but not revealing much about how the law operates to shape ordinary people’s lives – the question that prompted my own work decades earlier.
Recently, I’ve become somewhat more optimistic about empirical legal studies. Survey experiments, which combine survey research methods and experimental design, have captured the imagination of a new cohort of empiricists, yielding interesting hypotheses about how people respond to different legal rules. These experiments require clever thinking about legal behavior, grounded in observational data, and appropriate application of survey research methods – which are inherently qualitative – not just a command of advanced statistics. The advent of large language models (LLMs) and other AI approaches to text analysis is producing useful findings on contracting preferences, on how police agencies report crime, and on a host of other issues. Using LLMs, we are beginning to be able to draw information about litigation dynamics from court dockets that was previously inaccessible without huge resources. My Stanford law colleagues are not just publishing articles on the situation of unrepresented defendants in state courts but partnering with judges and court administrators to pilot online tools to help these defendants. All of these developments suggest a turn to question-driven, rather than data-first, empirical analysis. I’m eagerly awaiting what the next decade of empirical legal studies will teach us about the role of law in shaping behavior.