Interesting read from Burges Salmon on AI. Tom Whittaker shares some useful pointers on a new US National Institute of Standards and Technology (NIST) draft report, which identifies three main categories of risk that those designing, developing and using AI should be aware of.
His rundown of each category is useful and I urge you to have a read. His points on human vs machine (or rather the two working together symbiotically) are, in my opinion, important.
He said, “assessments of an AI system or a decision deriving from AI ought to be able to be scrutinised by a human.” As he explains, this is an important aspect of gaining public trust, but I would go one step further: it is more than just garnering trust, it is integral to effectiveness too.
Technology can be a fantastic tool for uncovering and gathering information, especially since so much intelligence and evidence is stored electronically. But ultimately, the key is empowering the human to be more effective.
NIST's development of an AI Risk Management Framework is, in its words, "an ideal opportunity to advance those discussions and forge agreements across organizations and internationally to the benefit of AI design, development, use, and evaluation." This reflects the international dimension of discussions about AI. For example, the EU's proposed AI legislation is drafted with the intention of setting global standards for regulating AI. So whilst any risk assessment (whether required by law or not) must be based on current and expected circumstances, an eye should also be kept on international developments in how best to identify and manage the risks arising from AI.