Beware, Your Employees May Be Entering Proprietary Company Data Into ChatGPT!


ChatGPT-related cyber-attacks have already begun, with criminals employing the tool for nefarious ends; in response, BW Cyber is alerting its clients to a second, less obvious risk: the potential compromise of proprietary company data.

While ChatGPT has vast potential to assist managers who are not programmers (and others) with algorithmic trading and other types of automated market signals, it comes with a downside of which most asset managers are not aware: it collects and stores the data submitted to it. This is a serious concern if proprietary company data, private code, or legally restricted information is entered into ChatGPT.

In response, BW Cyber strongly recommends that asset managers immediately create or update their “Acceptable Use” policies to articulate exactly how employees may (or may not) interact with ChatGPT. We are already aware of organizations that discovered, after the fact, that developers and other employees had submitted proprietary code to ChatGPT, and that knowledge of this code could enable a successful cyber-attack.
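
As one illustration of how an Acceptable Use policy might be supplemented with technical controls, the minimal sketch below screens a prompt for sensitive markers before it is sent to an external AI service. The patterns, the screen_prompt function, and the sample text are hypothetical examples, not a BW Cyber tool or a complete data-loss-prevention solution; any real deployment would need patterns tailored to the firm's own data.

import re

# Hypothetical patterns a firm might flag before text leaves its environment.
# These are illustrative only; real rules would reflect the firm's own
# classifications (client identifiers, code repository names, deal terms, etc.).
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\binternal use only\b"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # U.S. SSN-style identifiers
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears safe to submit, False if it matches
    any flagged pattern and should be held for manual review."""
    return not any(pattern.search(prompt) for pattern in BLOCKED_PATTERNS)

if __name__ == "__main__":
    sample = "Please optimize this CONFIDENTIAL trading signal code..."
    # The sample contains a flagged term, so it is blocked.
    print("Allowed" if screen_prompt(sample) else "Blocked: possible proprietary data")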