Are All Citators Created Equal?

By Kerry Fitz-Gerald, December 3, 2010 5:24 pm

Students often ask whether there’s any reason to use both Shepard’s and KeyCite, or if it is OK to use only one (and of course, they want to know which one). For years, I’ve played it safe with the bland answer that if it really matters, they should consider using both, but that usually one or the other will suffice. Recently, I read a terrific article by Susan Nevelow Mart, “The Relevance of Results Generated by Human Indexing and Computer Algorithms: A Study of West’s Headnotes and Key Numbers and LexisNexis’s Headnotes and Topics,” 102 Law Library Journal 221 (2010), that provides a much more nuanced answer to the question.

To start, Nevelow Mart provides a detailed explanation of the way the two systems create headnotes and place them within hierarchical legal topic classification schemes. The main difference is that in the West system, human editors write the headnotes and choose where to place them within the Digest system, while on Lexis, headnotes are classified into topics by a sophisticated series of computer algorithms. Human editors monitor the process, but Nevelow Mart concludes that the “role of human editors in classifying individual headnotes for each new case seems to be limited in LexisNexis.” (Id. at 225.)

Nevelow Mart then explores two questions: first, whether human or computer classification makes any noticeable difference when searching with the Digest or Topic systems, and second, whether there is a noticeable difference when using citators limited by headnote.

I won’t attempt to reproduce her detailed research approach and results here. Suffice it to say, she concluded that for searching purposes, human-generated headnotes offer a distinct advantage, but that this advantage does not extend to citator systems.

The article is well worth reading for these conclusions alone. But I was particularly struck by her table of relevant cases found in the citator comparisons. There one can see that across her 10 test cases, when she limited by jurisdiction and headnote, the average overlap between the two systems was only 25.7%. In other words, for every one of her test cases, each system returned a number of cases that the other system did not. Moreover, she examined these unique cases for relevance and found that a significant percentage of them were relevant.

Given the small sample size, it’s possible that these results are anomalous. And I am curious whether this disparity persists when one does not restrict by headnote or jurisdiction. That said, Nevelow Mart’s research provides strong support for the conclusion that if finding every relevant case is important, one needs to use both systems.
