I enjoyed reading this article by Eric Evans at Mayer Brown.
In this piece, Evans lays out a common scenario: A company just received a charge of race discrimination filed by a rejected job applicant. In preparing to defend the charge, the general counsel learns that the company received thousands of applications for the position. To cull the applications, the company used artificial intelligence to review skills and qualifications identified by the applicants. The general counsel is satisfied that the results achieved in part using artificial intelligence were not discriminatory, but she wonders how best to prove this to defend against the race discrimination charge.
Technology-assisted review (TAR) tools are well known and well defined, but the same cannot be said of all AI – and explainability should not be overlooked even for TAR, especially if the process faces any challenges.
The ‘explainability’ point here is also an important part of how we should all approach AI in litigation, arbitration, regulatory and investigation contexts. Take a read of the piece and consider how you would approach this.
For more information see: https://amonsocial.alvarezandmarsal.com/post/102gbrp/a-different-kind-of-turing-test-how-should-organisations-explain-decisions-made and https://amonsocial.alvarezandmarsal.com/post/102fjeh/defensibility-of-ai-in-litigation
The white paper referenced in the article provides a framework for assessing – and ultimately defending – TAR tools. It stresses that a well-designed and well-managed TAR tool, for all its complexity, may be easier to explain and defend than a process relying on unassisted human judgment.