I wrote earlier this week about how countries are using AI technology in numerous ways to help manage and mitigate the effects of the COVID-19 pandemic. While we are finding effective and meaningful ways to use the technology, we risk running too far ahead of the safeguards needed to protect individual rights.
AI is also useful for facial recognition and temperature scanning, applications that can help restart economic activity while reducing the risks of social contact. But those same applications clearly raise important privacy concerns and questions about how individual rights will be protected.
It's possible the rush to put the technology in place could outpace our consideration of the proper ways to use it while still protecting individual privacy rights. I don't know the answer to this dilemma (at least not yet ;-)), but it sure looks and feels like an issue we need to stay alert to and raise as quickly as possible (like right now). What are your thoughts?
The pandemic is opening up a massive opportunity for the tech industry, while also amplifying calls for more scrutiny of AI innovations that are being developed faster than regulators can devise rules to protect citizens’ rights. The quick introduction of AI tools to fight the virus is being done in the name of the greater social good, but it raises important questions about accuracy, bias, discrimination, safety, and privacy.
