Predictive coding, Technology Assisted Review (TAR), machine learning, artificial intelligence...call it what you will, it is a very useful tool to have in the toolbox and to deploy when appropriate. However, it is not a panacea, and it does need careful supervision and control.
When processes are not properly configured, the risk of facing adverse results in litigation is substantially increased. Based on my experience, implementing the right controls and measures is what enables you to harness the true power of TAR.
The topic continues to attract media attention, as in the Logikcull article below discussing a case involving major airlines, where a glitch led to a large volume of non-responsive documents being produced, with predictable consequences.
It is a useful reminder that however good and powerful TAR tools are, they still require, or rather demand, supervision to ensure that they are operating correctly. That is no different to any other process run during a discovery operation - you should always verify the results.
The following measures help ensure a better outcome when deploying TAR in litigation:
- Ensure that the configuration and boundaries are set up appropriately from a technical perspective – i.e. you know what you are telling the machine to do
- Ensure that the document set is suitably prepared and of a suitable quality (for example, the textual extraction from documents used for the process is appropriate)
- Ensure that a suitably qualified and experienced person (on the case specifics) performs the training cycle(s) or commences the review if Continuous Active Learning (CAL) is used, so that the system learns appropriately
- Have a clear idea of what you expect the process to generate, and monitor against that expectation
- Continually monitor the progress and the statistical variables (e.g. overturns) to ensure it is acting as you expect
- Perform additional tests to verify the results…don’t just trust the machine blindly.
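To illustrate the monitoring and verification points above, here is a minimal Python sketch (the function names, labels, and thresholds are hypothetical, not from any particular TAR platform) of two commonly used quality checks: tracking the overturn rate during review, and running an elusion test on the pile of documents the machine predicted to be non-responsive.

```python
import random

def overturn_rate(decisions):
    """Fraction of reviewed documents where the human reviewer
    disagreed with ('overturned') the machine's prediction.
    `decisions` is a list of (machine_label, human_label) pairs."""
    if not decisions:
        return 0.0
    overturns = sum(1 for machine, human in decisions if machine != human)
    return overturns / len(decisions)

def elusion_rate(discard_pile, sample_size, is_responsive):
    """Elusion test: draw a random sample from the discard pile
    (documents predicted non-responsive), have a human review it,
    and use the responsive fraction as an estimate of what TAR missed."""
    sample = random.sample(discard_pile, min(sample_size, len(discard_pile)))
    if not sample:
        return 0.0
    missed = sum(1 for doc in sample if is_responsive(doc))
    return missed / len(sample)
```

In practice, a rising overturn rate or an elusion estimate above the agreed threshold would prompt further training or review; the thresholds themselves are a matter for the case team and any agreed protocol, not the tool.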
I am a firm believer in the benefits of these technologies, and we now use them on virtually every case, either as part of the main review process or as part of an internal QC process to identify potential undetected issues. Used in the right circumstances, within a defined process with appropriate safeguards and controls, they can be very powerful, and their use should be encouraged.
An AI-driven discovery process went off course recently, spinning out of control in class action litigation against some of the nation’s largest airlines. The result: millions of unresponsive documents produced without any easy way to tell the docs that matter apart from those that don't.