I’ve taken a real interest in the development of Technology Assisted Review (TAR) in litigation.
This is an interesting article by George Carry and Mike Lieberman from Crowell & Moring examining a recent US case on the use of TAR.
In the case in question, Livingston v. City of Chicago (N.D. Ill. No. 16 CV 10156), decided in September 2020, the Court allowed the use of TAR across the relevant data population, specifically stating that the party undertaking the discovery exercise “…is best situated to decide how to search for and produce emails responsive to Plaintiffs’ discovery requests…”
The Court also stated that the party “…need not necessarily run TAR against a full ESI universe…” and “…its approval of the City’s TAR process rested, in part, on the City’s offer to validate its TAR results with statistical evidence provided to the opposing party.”
This is obviously an interesting judgement, with relevance regardless of where in the world your case takes place, and it adds to the growing list of cases addressing TAR and how it is being adopted. How TAR should be adopted remains a contentious issue in the cases we see, and I want to highlight two areas of contention we have encountered recently.
In line with the court judgement referred to above, which states that parties “…need not necessarily run TAR against a full ESI universe…”, we have seen the approach to defining the document set that TAR is run against be questioned. There is no doubt that keyword searching still plays a role in the eDiscovery process, but if keyword searching is the sole culling technique used to define your TAR population, then the inherent problems of a keyword approach remain part of the process. The technology behind the TAR process in many eDiscovery platforms allows practitioners to analyse ESI before adopting a keyword approach. For example, machine learning algorithms can group documents by their conceptual content, looking at the patterns of language shared between similar documents rather than the words themselves. This approach can also allow you to analyse which words are closely correlated with any priority keywords you may want to run. A combination of examining the conceptual content of documents and the correlation of words with priority keywords can help define your TAR document population and ensure any keywords you do run are as efficient as possible.
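To make the keyword-correlation idea concrete, here is a minimal sketch of the principle, not any particular platform’s implementation: given a priority keyword, rank the other terms that most often co-occur with it across a document set. The corpus, stopword list, and function name below are all illustrative assumptions; commercial eDiscovery tools use far richer conceptual models than raw co-occurrence counts.

```python
from collections import Counter

# Hypothetical mini-corpus standing in for an ESI population (illustrative only).
documents = [
    "quarterly invoice payment schedule for the vendor contract",
    "vendor contract renewal and payment terms discussion",
    "team lunch plans for friday afternoon",
    "invoice dispute over late payment on the vendor account",
    "friday social event and lunch venue options",
]

# Tiny illustrative stopword list so function words don't dominate the ranking.
STOPWORDS = {"the", "for", "and", "on", "over", "a", "of", "to"}

def correlated_terms(docs, keyword, top_n=5):
    """Rank terms by how often they appear in documents that also
    contain the priority keyword -- a crude proxy for the conceptual
    correlation features found in eDiscovery platforms."""
    co_counts = Counter()
    for doc in docs:
        terms = set(doc.split()) - STOPWORDS
        if keyword in terms:
            # Count every other term in this keyword-bearing document.
            co_counts.update(terms - {keyword})
    return [term for term, _ in co_counts.most_common(top_n)]

print(correlated_terms(documents, "payment"))
```

Running this against the toy corpus surfaces terms such as "vendor", "invoice", and "contract" as candidates correlated with "payment", which could then inform whether a keyword list is too narrow or too broad before TAR is applied.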
Secondly, we have seen the way the statistical evidence of a TAR model is presented and interpreted be contested. With TAR approaches becoming increasingly common, is it now time for a standardised TAR results sheet, with clear definitions of what each statistical test means, to become part of the results process?
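As a sketch of what such a standardised results sheet might define, the function below computes three metrics commonly cited in TAR validation: recall, precision, and elusion. The function name and the sample counts are illustrative assumptions; in practice these figures would come from a reviewed validation sample, as in the City's offer in Livingston.

```python
def tar_validation_stats(true_pos, false_pos, false_neg, true_neg):
    """Compute common TAR validation metrics from a reviewed validation sample.

    true_pos:  responsive documents the model marked responsive
    false_pos: non-responsive documents the model marked responsive
    false_neg: responsive documents the model missed
    true_neg:  non-responsive documents the model correctly excluded
    """
    # Recall: proportion of all responsive documents the model found.
    recall = true_pos / (true_pos + false_neg)
    # Precision: proportion of the model's "responsive" calls that were correct.
    precision = true_pos / (true_pos + false_pos)
    # Elusion: responsive rate within the discard (predicted non-responsive) pile.
    elusion = false_neg / (false_neg + true_neg)
    return {"recall": recall, "precision": precision, "elusion": elusion}

# Illustrative sample: 1,000 reviewed validation documents.
stats = tar_validation_stats(true_pos=80, false_pos=20, false_neg=10, true_neg=890)
print(stats)
```

With agreed definitions like these, both parties would at least be arguing about the same numbers: a recall figure on one sheet would mean the same thing on another, which is precisely the gap a standardised results sheet would close.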