• OpenAI posted a job listing for an investigator who would “fortify our organization against internal security threats.”
  • Duties would include working with HR to conduct “investigations into suspicious activities.”
  • OpenAI is already at the center of a heated debate about AI and security.

At OpenAI, the security threat may be coming from inside the house. The company recently posted a job listing for a Technical Insider Risk Investigator to “fortify our organization against internal security threats.”

According to the posting, job duties include analyzing anomalous activities, detecting and mitigating insider threats, and working with the HR and legal departments to “conduct investigations into suspicious activities.”

A spokesperson for OpenAI said the company doesn’t comment on job listings.

OpenAI is already at the center of a heated debate about AI and security. Employees of the company as well as US lawmakers have publicly raised concerns about whether OpenAI is doing enough to ensure its powerful technologies aren’t used to cause harm.

At the same time, OpenAI has seen state-affiliated actors from China, Russia, and Iran attempt to use its AI models for what it calls malicious acts. The company says it disrupted these operations and terminated the accounts associated with the parties involved.

OpenAI itself became the target of malicious actors in 2023 when its internal messaging system was breached by hackers, an incident that only came to light after two people leaked the information to the New York Times.

The job posting seems to indicate that, in addition to hacker groups and authoritarian governments, OpenAI is also concerned about threats originating with its own employees, though it’s unclear exactly what manner of threat the company is on the lookout for.

One possibility is that the company is seeking to protect the trade secrets that underpin its technology. According to the job posting, OpenAI’s hiring of an internal risk investigator is part of the voluntary commitments on AI safety it made to the White House, one of which is to invest in “insider threat safeguards to protect proprietary and unreleased model weights.”

In an open letter last June, current and former employees of OpenAI wrote that they felt blocked from voicing their concerns about AI safety. The letter called on OpenAI to guarantee a “right to warn” the public about the dangers of the company’s products. It’s unclear whether this kind of whistleblowing would be caught by the “data loss prevention controls” the risk investigator will be responsible for implementing.