BW Cyber has been beating the drum about the risks posed by artificial intelligence technology for many months. But, if you don’t want to take our word for it, you can take the U.S. National Institute of Standards and Technology’s (NIST) word for it, because the body is out with a new publication, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2), that discusses the risks that deploying AI technology can pose.
The report presents a taxonomy of concepts and terminology related to adversarial machine learning and provides methods for mitigating and managing the consequences of attacks. It’s a significant document, both in terms of size (approximately 100 pages) and importance.
The biggest AI-related risk that we see, however, is in the field of generative AI tools, like ChatGPT, Bard, etc. We strongly recommend that asset and wealth managers allow only limited GenAI access, with pre-approval from management. Too often, the information uploaded into these tools includes highly sensitive company or client information, so the use of these technologies should be closely monitored and restricted.