Interpreting Motion Analytics

Motion Analytics offers a new view of how judges think and rule. By considering the outcomes of dozens or even hundreds of motions, litigators can better assess the conditions under which judges and courts are most (and least) likely to decide in their favor. Like all analytics tools, Motion Analytics should be interpreted with an understanding of its underlying data, as well as appropriate techniques for drawing well-supported inferences. Here we offer guidance on both components: the nature of the Motion Analytics dataset, and best practices for making inferences from the information it presents.

Motion Analytics is created from caselaw using machine learning.

We construct Motion Analytics from the language of millions of caselaw opinions in the Ravel corpus. To do this, we built a suite of custom machine learning technologies, including natural language processing algorithms. We train (and re-train) them specifically on the language of caselaw by having experts review and annotate representative samples of thousands of opinions with their motions and motion outcomes. In other words, we use these expert-created examples to train our machine learning algorithms, so that we can then identify motions and their outcomes using software instead of experts.

We continually test our algorithms’ performance against new expert-annotated data. While state-of-the-art machine learning systems typically yield 70-75% accuracy on this class of problem, our system returns the same sets of motions and outcomes that human experts do in more than 80% of cases. Our algorithms perform best in the most common scenarios, such as opinions published within the past ten years or adjudicating three or fewer motions. Many of the differences between machine and human annotations are minor: our algorithm might find two requests for judicial notice instead of three in a single opinion, or report a partially granted motion as fully granted. The practical implication is that although any individual motion carries some chance of error, large aggregations of motions will tend to reveal an accurate distribution of grants and denials, per the law of large numbers.
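The aggregation point can be illustrated with a small simulation. All numbers below are hypothetical, chosen only for illustration (they are not drawn from the actual system): a judge with a true 60% grant rate, and a classifier that mislabels any single outcome 10% of the time. Over many motions, the observed grant rate still lands close to the truth, even though each individual label is unreliable.

```python
import random

random.seed(0)

TRUE_GRANT_RATE = 0.60   # hypothetical judge's true grant rate
FLIP_RATE = 0.10         # hypothetical per-motion mislabel rate
N_MOTIONS = 100_000

def observed_outcomes(n):
    """Simulate true outcomes, then apply symmetric label noise."""
    outcomes = []
    for _ in range(n):
        granted = random.random() < TRUE_GRANT_RATE
        if random.random() < FLIP_RATE:  # a classifier error flips this label
            granted = not granted
        outcomes.append(granted)
    return outcomes

obs = observed_outcomes(N_MOTIONS)
estimate = sum(obs) / len(obs)
print(f"observed grant rate over {N_MOTIONS} motions: {estimate:.3f}")
```

Any single label here is wrong 10% of the time, yet the aggregate estimate stays within a few percentage points of the true rate. (Note that label noise does nudge the aggregate slightly toward 50%; the law of large numbers guarantees convergence of the observed rate, not the removal of systematic bias.)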

Evidence from Motion Analytics is one ingredient for inference.

We believe that drawing reliable inferences requires a combination of prior knowledge and directly observed evidence.* For an attorney weighing a particular motion’s odds of success, prior knowledge might include her own assessment of the motion’s merits as well as the logic the judge has articulated in previous opinions. In the context of this prior knowledge, the attorney can now consider direct evidence about the judge’s motion-granting behavior, in the form of counts of previous outcomes in Motion Analytics. This new evidence could serve to strengthen the attorney’s prior expectation: it may be the case that her prior knowledge indicates a forthcoming grant, and the motion data show a long pattern of grants from this judge under analogous circumstances. Or the motion data may challenge the attorney’s prior expectation, perhaps revealing a series of denials. Finally—and importantly—sometimes the motion data should not sway prior expectation particularly strongly. For instance, if only a handful of relevant past decisions exist, making strong inferences is probably not warranted.
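This weighing of prior knowledge against observed counts can be made concrete with a simple Bayesian sketch using a conjugate Beta-Binomial update. The prior strength and motion counts below are purely illustrative assumptions, not real data: the attorney's prior assessment is encoded as a Beta distribution over the judge's grant rate, and the Motion Analytics counts simply add to its pseudo-counts.

```python
def beta_update(prior_a, prior_b, grants, denials):
    """Conjugate Beta-Binomial update: observed counts add to prior pseudo-counts."""
    return prior_a + grants, prior_b + denials

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical prior: attorney leans toward a grant (mean 0.7),
# held with the weight of roughly ten observations.
prior_a, prior_b = 7.0, 3.0

# Scenario 1: many analogous past motions observed -- evidence dominates.
a1, b1 = beta_update(prior_a, prior_b, grants=40, denials=10)

# Scenario 2: only a handful of past decisions -- posterior stays near the prior.
a2, b2 = beta_update(prior_a, prior_b, grants=2, denials=1)

print(f"prior mean:             {beta_mean(prior_a, prior_b):.2f}")
print(f"posterior (50 motions): {beta_mean(a1, b1):.2f}")
print(f"posterior (3 motions):  {beta_mean(a2, b2):.2f}")
```

The two scenarios mirror the guidance above: a long pattern of analogous outcomes shifts the expectation substantially, while a handful of decisions barely moves it, which is exactly why strong inferences from sparse data are not warranted.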

We believe that evidence from Motion Analytics, then, is one ingredient for inference. Counts of previous outcomes, extracted exclusively from caselaw and carrying some degree of noise, should not be construed as statistically significant claims or as predictions in themselves. But weighed appropriately against prior knowledge, they represent a new trove of direct evidence to support powerful, informed, data-driven decisions.


* The technique we describe here—combining prior knowledge with direct evidence—is known as Bayesian inference and is widely used in predictive analytics and scientific modeling.
