


A group of insiders at artificial intelligence companies is calling on executives to adopt new transparency guidelines to foster trust in the burgeoning market.
In an open letter Tuesday, a group of current and former employees of AI companies said that while AI has the potential to benefit humanity, the current culture at companies like OpenAI must change.
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter reads. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
In their letter, the employees cite several possible risks involved with developing AI technology, including increased misinformation, entrenchment of inequalities and loss of control of AI systems that they said could lead to “human extinction.”
Because the companies cannot be trusted to disclose possible concerns, the employees argue in the letter, workers should feel comfortable speaking out. However, strict non-disclosure agreements at many AI companies discourage employees from doing so.
“Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated. Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry,” the letter reads.
The employees conclude their letter by offering solutions to the current culture at AI companies: not enforcing agreements that prohibit “disparagement” or criticism of the company; creating an anonymous process for employees to express concerns to the board, regulators and independent organizations; and supporting a culture of open criticism and not retaliating against employees who speak out after other avenues have failed.
OpenAI could not be reached for comment on the open letter. However, OpenAI has touted its Integrity Line, which allows current and former employees to anonymously report issues or possible risks.
• Vaughn Cockayne can be reached at vcockayne@washingtontimes.com.