As night follows day, disputes will follow change. Artificial Intelligence ("AI") and related concepts such as machine learning and big data are fundamentally changing how many businesses, and indeed entire industries, operate. This transformational impact will inevitably lead to both investigations and disputes when things go "wrong" or parties believe they have unfairly suffered losses.
How do you assess these technologies when this happens, and what does that have to do with guidance from the ICO? As Mark Young and Sam Jungyun Choi from Covington & Burling set out, the ICO has issued draft guidance on explaining decisions made by AI. The guidance examines how AI is used to make automated decisions and how those decisions need to be explainable to data subjects.
The guidance sets out four key principles:
- Be transparent – explain how decisions are made and which data is used to do so
- Be accountable – ensure there is appropriate oversight
- Consider context – there is no one-size-fits-all approach to dealing with AI and the specifics of each situation need to be taken into account
- Reflect on impacts – consider the impact and results, to mitigate risks of unfair bias/discrimination and to ensure the quality and consistency of the results.
These are very sensible principles and can equally be applied in a legal and regulatory context. When these technologies come under the microscope from a legal or regulatory perspective, businesses will need to be able to explain them in such a way that courts and regulators can understand them. This will not happen by chance, and businesses need to consider these factors and risks when developing these technologies.
The ICO sets out four key principles — guided by the GDPR — in relation to explaining AI decision-making systems. For each principle, the ICO identifies different types of explanations that should be provided to individuals, as set out below.
- Be transparent: Organizations should make it obvious that AI is being used to make decisions and explain the decisions to individuals in a meaningful way.
- Be accountable: Organizations should ensure appropriate oversight of AI decision systems, and be answerable to others.
- Consider context: The guidance recognizes that there is no one-size-fits-all approach to explaining AI-assisted decisions.
- Reflect on impacts: The ICO encourages organizations to ask and answer questions about ethical purposes and objectives at the initial stages of AI projects.