OpenAI discovered that its internal messaging system had been illegally accessed and information about its AI technology stolen, but kept the incident secret, notifying neither the FBI nor the general public.



It has been revealed that OpenAI, the AI company known for developing the chat AI ChatGPT, was hacked in early 2023.

A Hacker Stole OpenAI Secrets, Raising Fears That China Could, Too - The New York Times

https://www.nytimes.com/2024/07/04/technology/openai-hack.html



NYT: Hacker stole details about OpenAI's AI tech, but the company kept it quiet - Neowin
https://www.neowin.net/news/nyt-hacker-stole-details-about-openais-ai-tech-but-the-company-kept-it-quiet/

OpenAI was Hacked But Never Told FBI
https://www.thedailybeast.com/openai-was-hacked-but-never-told-fbi

The New York Times reported that in early 2023, hackers accessed OpenAI's internal messaging system and stole details about the design of the company's AI technology. According to a source familiar with the incident, the hackers gained access to an online forum where OpenAI employees discuss the latest technology and spied on those discussions. However, they were unable to reach the systems where OpenAI stores and builds its AI technology.

OpenAI executives notified employees of the hacking incident during an all-hands meeting at the company's San Francisco office in April 2023, which was also the first time the incident was reported to the board of directors. However, because no AI technology or confidential information about customers or partners was actually stolen, executives apparently decided not to make the incident public. Furthermore, because the hacker was believed to be a private citizen with no ties to any foreign government, the intrusion was not considered a threat to national security, and no report was made to law enforcement agencies, including the Federal Bureau of Investigation (FBI).

However, some OpenAI employees reportedly worried that adversaries of the United States, such as China, could hack the company in an attempt to steal its AI technology. The incident also raised questions about how seriously OpenAI takes security, 'opening a rift within the company,' The New York Times reported.



One such employee was Leopold Aschenbrenner, a technical program manager at OpenAI. He argued to the OpenAI board of directors that 'OpenAI is not taking sufficient measures to prevent the theft of sensitive information by the Chinese government or other hostile foreign actors,' and called for security to be strengthened. However, Aschenbrenner was subsequently fired in the spring of 2024 for leaking confidential information outside the company. In a podcast appearance, Aschenbrenner said, 'OpenAI's security is not strong enough to prevent the theft of important confidential information if a foreign actor were to infiltrate the company,' and claimed that he was fired because executives found his warnings annoying.

Regarding Aschenbrenner's dismissal, OpenAI spokesperson Liz Bourgeois told The New York Times, 'We appreciate the concerns that Leopold raised during his time at OpenAI. However, these did not lead to his departure.' She explained that Aschenbrenner's dismissal was unrelated to his security recommendations.

Bourgeois added, 'While I share Leopold's passion for building safe AGI (artificial general intelligence), I disagree with many of the points he has made about OpenAI's work since then, including his views on OpenAI's security, particularly since this incident was addressed and reported to the board before Leopold joined the company.'

Meanwhile, Daniela Amodei, co-founder of OpenAI competitor Anthropic, is among those who argue that the theft of the latest generative AI designs would not pose a significant threat to national security.

in AI,   Software,   Security, Posted by logu_ii