Artificial intelligence (AI), machine learning and other types of algorithms are increasingly being used within businesses to inform, and sometimes make, decisions. This trend is only going to accelerate. It will therefore not be long before decisions made or influenced by AI become subject to regulatory scrutiny or litigation.
This article by Eric Evans, Alex Lakatos and Brad Peterson of Mayer Brown provides a great explanation of how companies should consider regulatory and legal risk when implementing AI in their business. I particularly agree with their views on the need to focus on explainability and its three aspects: transparency, interpretability and provenance.
I think this will be a key battleground in future regulatory action and disputes: understanding how and why decisions were made, being able to unpick them, and ensuring that they comply with law and regulation. It is going to be very interesting to see how this develops!
But what is essential is that companies consider this risk as they develop new technologies, and embed safeguards into the process so that they can respond appropriately should they be required to do so.
As the authors explain, companies are focusing on explainability. Explainability, in general terms, has three aspects:
- Transparency: easy identification of the important factors in the tool's operation;
- Interpretability: easy identification and explanation of how the tool weights those factors and derives them from its input data; and
- Provenance: easy identification of where input data originated.
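To make these three aspects concrete, here is a minimal sketch of what an "explainable by design" decision tool might record. Everything in it is invented for illustration (the factor names, weights and data sources are hypothetical, not taken from the Mayer Brown article): the point is that naming the factors gives transparency, exposing the per-factor contributions gives interpretability, and tagging each input with its origin gives provenance.

```python
from dataclasses import dataclass

# Hypothetical credit-style scoring tool, used only to illustrate the
# three aspects of explainability. Factor names, weights and sources
# are invented for this sketch.

@dataclass
class Factor:
    name: str        # transparency: the important factor, named explicitly
    weight: float    # interpretability: how the tool weights that factor
    source: str      # provenance: where the input data originated

FACTORS = [
    Factor("payment_history", 0.5, "internal billing system"),
    Factor("account_age_years", 0.3, "CRM export"),
    Factor("recent_inquiries", -0.2, "third-party bureau feed"),
]

def score(inputs: dict) -> float:
    """The decision: a weighted sum over the named factors."""
    return sum(f.weight * inputs[f.name] for f in FACTORS)

def explain(inputs: dict) -> list[tuple[str, float, str]]:
    """Per-factor contribution plus data source, kept for audit or
    regulatory review of how and why a decision was reached."""
    return [(f.name, f.weight * inputs[f.name], f.source) for f in FACTORS]

inputs = {"payment_history": 0.9, "account_age_years": 4.0, "recent_inquiries": 2.0}
print(f"score: {score(inputs):.2f}")
for name, contribution, source in explain(inputs):
    print(f"{name}: {contribution:+.2f} (from {source})")
```

A simple weighted model like this is easy to explain; the legal point the article makes applies with much more force to opaque models, where this kind of audit record has to be engineered in deliberately rather than falling out of the structure of the tool.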