What happens when instructors train algorithms using the biases inherited from an imperfect system? In the case of BSA/AML (Bank Secrecy Act / Anti-Money Laundering), the answer, unfortunately, is still a work in progress.
BSA/AML is a prime example of how Artificial Intelligence (AI) and Machine Learning (ML) might help humans on a grand scale, albeit after some tweaks to the training data. By any measure of the efficacy of the current BSA/AML apparatus, however, there is clear room for improvement. The open question is whether we can harness advances in technology to our benefit, or whether the bad habits (biases) built into the system will take us further from the goal.
Read on for the specific remarks made by an economist, entrepreneur, adviser, professor, civil servant, and veteran before the House Committee on Financial Services (U.S. House of Representatives) earlier this month, concerning the modernization of BSA/AML amid technological advancement ("Promoting corporate transparency: examining legislative proposals to detect and deter financial crime"). In short, we are moving in the right direction, but some course correction may be needed. (Note: His remarks begin at the 42:55 mark. The full video link appears at the bottom of the linked page.)
To ensure we maintain the balance between the risks and rewards of advancing technologies, I suggest three core principles for the subcommittee to consider as part of any reform or legislative proposal:

1. Encourage information sharing between law enforcement, financial institutions, and regulators. This will enable the sharing of priorities and training data for Machine Learning, help regulators better judge the quality of the data generated (not just the volume), and provide tools for measuring biases in data.

2. Avoid opaque solutions where humans cannot understand the internal processes or outcomes of the machines.

3. Keep humans in the loop; let machines sort and filter data, but let humans adjudicate good vs. bad and right vs. wrong.
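The third principle, "let machines sort and filter data, but let humans adjudicate," can be illustrated with a minimal sketch. All names, fields, and thresholds below (`Transaction`, `risk_score`, the 0.7 cutoff) are hypothetical illustrations, not part of any actual BSA/AML system:

```python
# Human-in-the-loop triage sketch: the machine scores and filters
# transactions; a human makes the final good-vs-bad decision.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Transaction:
    txn_id: str
    amount: float
    risk_score: float  # assumed to come from an upstream ML model

def machine_filter(txns: List[Transaction], threshold: float = 0.7) -> List[Transaction]:
    """Machine role: sort by risk and surface only high-risk candidates."""
    flagged = [t for t in txns if t.risk_score >= threshold]
    return sorted(flagged, key=lambda t: t.risk_score, reverse=True)

def adjudicate(flagged: List[Transaction],
               human_review: Callable[[Transaction], bool]) -> List[str]:
    """Human role: the final judgment stays with a person, not the model."""
    return [t.txn_id for t in flagged if human_review(t)]

if __name__ == "__main__":
    txns = [
        Transaction("T1", 9_900.0, 0.92),
        Transaction("T2", 120.0, 0.10),
        Transaction("T3", 50_000.0, 0.81),
    ]
    flagged = machine_filter(txns)  # machine sorts and filters
    # Stand-in for an analyst's judgment; a real review is interactive.
    reports = adjudicate(flagged, lambda t: t.amount > 5_000)
    print(reports)  # ['T1', 'T3']
```

The design point is the division of labor: the model only narrows the queue, while the decision function is supplied by a human reviewer, keeping accountability with people rather than with an opaque system (principle 2).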