An interesting article by Giles Pratt and Rachel Annear from Freshfields providing guidance for companies on explaining decisions made by artificial intelligence systems.  It draws on recent guidance published by the UK Information Commissioner's Office and the Alan Turing Institute on this topic.  This is going to be an increasingly important area, not only for businesses trying to understand their own processes, but also given the inevitable growth in regulatory interest and future litigation when things go amiss... at least from one side's perspective.  

How businesses think about and document the development and refinement of their AI systems will therefore be crucial.  Companies should consider regulatory and legal risk when designing and implementing AI in their businesses.  This will inevitably revolve around how well they can explain how their systems actually work, which has three key aspects (a brief code sketch follows the list):

  1. Transparency: being able to identify and show the influence of the important factors within the system;
  2. Interpretation: being able to readily identify and explain how the system handles its inputs and relates them to those factors, as well as how it judges and weighs those factors; and
  3. Provenance: being able to readily identify not only the original inputs but also the path by which a decision was reached.
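
To make these three aspects a little more concrete, here is a minimal, hypothetical Python sketch. It is not taken from the Freshfields article or the ICO/Turing guidance; the factor names, weights, and threshold are invented for illustration. The idea is simply that a decision system can be built to record the influence of each factor (transparency), the documented rule mapping factors to an outcome (interpretation), and the original inputs and decision path (provenance):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical factor weights for a simple credit-style scoring model.
# In a real system these would come from the trained model itself.
WEIGHTS = {"income": 0.5, "existing_debt": -0.3, "years_at_address": 0.2}
THRESHOLD = 0.6
MODEL_VERSION = "demo-0.1"

@dataclass
class DecisionRecord:
    """An audit record capturing inputs, factor contributions and outcome."""
    timestamp: str
    model_version: str
    inputs: dict            # provenance: the original inputs
    contributions: dict     # transparency: influence of each factor
    score: float
    decision: str           # interpretation: how factors map to the outcome

def decide(inputs: dict) -> DecisionRecord:
    # Transparency: compute and keep each factor's weighted contribution.
    contributions = {k: WEIGHTS[k] * inputs[k] for k in WEIGHTS}
    score = sum(contributions.values())
    # Interpretation: a documented rule relating the score to the decision.
    decision = "approve" if score >= THRESHOLD else "decline"
    # Provenance: record everything needed to reconstruct the decision path.
    return DecisionRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=MODEL_VERSION,
        inputs=inputs,
        contributions=contributions,
        score=round(score, 4),
        decision=decision,
    )

if __name__ == "__main__":
    record = decide({"income": 0.9, "existing_debt": 0.5, "years_at_address": 0.4})
    # Persist the record so the decision can later be unpicked and explained.
    print(json.dumps(asdict(record), indent=2))
```

Real AI systems are of course far less tractable than a weighted sum, but the point stands: if each decision is logged with its inputs, factor contributions, and the version of the model that produced it, the business has something it can actually show a regulator or a court.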

I think this will be a key battleground in future regulatory action and disputes: understanding how and why decisions were made, being able to unpick them, and ensuring that they comply with the law and regulations.  It is going to be very interesting to see how this develops!

But what is essential is that companies consider this risk as they develop new technologies and embed safeguards into the process, so that they can respond appropriately should they be required to do so.