Cryptopolitan
2025-07-08 14:31:05

OpenAI overhauls security to protect IP

OpenAI has revamped its security measures to guard against foreign threats. The $300B artificial intelligence company, known globally for products like ChatGPT, has rolled out a broad security overhaul that includes stricter access controls, heightened employee vetting, and physical safeguards. Although the company denies that its actions are a response to competition, OpenAI stepped up its security procedures after an incident in January involving the Chinese startup DeepSeek.

OpenAI overhauls security

OpenAI has dramatically tightened its internal security in recent months to safeguard its intellectual property from foreign espionage, particularly from China. The company began implementing stricter protocols last year, but those efforts gained renewed urgency after DeepSeek's January release. OpenAI suspects the rival used "distillation" to develop and release a competing product. Distillation is a technique in which a new model is trained to mimic another model's behavior.

"The episode prompted OpenAI to be much more rigorous," said one source close to the security team. Since then, the company has been aggressively expanding its cybersecurity operations and tightening control over access to sensitive research, model data, and infrastructure. DeepSeek has not yet responded to the allegations.

The DeepSeek incident also sparked a widespread internal clampdown at OpenAI. New policies, known internally as "information tenting," now determine how sensitive data is accessed and shared.
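The "distillation" technique mentioned above can be sketched in a few lines: a student model is trained against a teacher's softened output distribution rather than ground-truth labels. The models, sizes, and data below are purely illustrative, not a description of any real system.

```python
import numpy as np

# Hypothetical sketch of model distillation: a "student" learns to
# reproduce a fixed "teacher's" output distribution on the same inputs.

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))                # inputs queried against the teacher
W_teacher = rng.normal(size=(8, 4))          # fixed, pre-trained "teacher"
T = 2.0                                      # softening temperature
teacher_probs = softmax(X @ W_teacher, T)    # soft targets to imitate

W_student = np.zeros((8, 4))                 # untrained "student"
for _ in range(3000):
    student_probs = softmax(X @ W_student, T)
    # Gradient of the cross-entropy between teacher and student outputs
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= 2.0 * grad                  # plain gradient descent

# After training, the student closely mimics the teacher's predictions
agreement = (softmax(X @ W_student).argmax(axis=1)
             == softmax(X @ W_teacher).argmax(axis=1)).mean()
```

The key point is that the student never sees the teacher's weights, only its responses; that is what makes distillation possible against a model exposed purely through an API.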
Only a small circle of staff members can access top-tier research projects, and those involved in certain developments must confirm that the colleagues they interact with are part of the same "tent." This approach was applied during the development of OpenAI's next-generation model, referred to internally as "Strawberry": staff were instructed not to discuss the project in open office spaces unless they were certain their colleagues were authorized to be part of it. The secrecy has caused some internal friction, with one employee describing it as "very tight": "you either had everything or nothing."

At the system level, OpenAI now stores much of its proprietary technology in isolated, offline environments. This air-gapped infrastructure separates critical systems from broader networks and the public internet, reducing the risk of remote infiltration. Physical security has also been upgraded, with fingerprint biometric access required for certain rooms at the company's San Francisco office.

To prevent leakage of the key parameters that govern how OpenAI's models respond to user prompts, the company has enforced a strict "deny-by-default" egress policy: systems are blocked from connecting to the internet unless a connection is explicitly approved.

AI leadership has become a U.S.-China battleground

There is unrelenting competition between the U.S. and China over leadership in emerging technologies, particularly artificial intelligence. Washington has imposed a series of export controls designed to prevent Beijing from acquiring advanced semiconductors and related technologies. At the same time, U.S. intelligence agencies have warned that foreign actors have stepped up efforts to steal sensitive data from American technology firms. The escalating espionage threat has pushed several major Silicon Valley companies to tighten screening of new hires, and OpenAI is no exception.
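The "deny-by-default" egress policy described earlier can be illustrated with a minimal allowlist check: a destination is blocked unless it was explicitly approved beforehand. The hostnames and ports below are invented for illustration and say nothing about OpenAI's actual configuration.

```python
# Hypothetical sketch of a deny-by-default egress policy: an outbound
# connection is refused unless its destination is on an explicit
# allowlist. Entries below are illustrative only.

APPROVED_EGRESS = {
    ("pypi.org", 443),             # hypothetical approved package index
    ("updates.example.com", 443),  # hypothetical approved update server
}

def egress_allowed(host: str, port: int) -> bool:
    """Deny by default: only explicitly approved (host, port) pairs pass."""
    return (host, port) in APPROVED_EGRESS

# Any destination not on the list -- including ones never seen before --
# is blocked, which is what distinguishes deny-by-default from blocklisting.
requests = [("pypi.org", 443), ("exfil.example.net", 443)]
blocked = [dest for dest in requests if not egress_allowed(*dest)]
```

In practice such a policy would be enforced at the network layer (firewall or proxy rules) rather than in application code, but the logic is the same: absence from the allowlist means no connection.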
AI and tech firms have taken insider threat risks more seriously since 2023. In October last year, OpenAI hired Dane Stuckey as its new chief information security officer. Stuckey, who previously held the same role at Palantir, brings a national security-focused approach to the organization's defense strategy. He works closely with Matt Knight, OpenAI's vice president of security products, who is spearheading efforts to use the company's own large language models as tools to defend against cyber threats.

In addition, OpenAI added retired U.S. Army general Paul Nakasone to its board last year. Nakasone, the former head of U.S. Cyber Command and the National Security Agency (NSA), brings a high-level understanding of cybersecurity threats and defense strategies to OpenAI's leadership.

Despite the scope of these changes, OpenAI has stated that the security overhaul is not a direct response to any single incident. A spokesperson told the Financial Times that the upgrades are part of the company's investments in privacy and security as it aims to lead the industry.

The increased focus on foreign espionage, particularly from China, has raised concerns about stoking xenophobic undercurrents in the industry. Several insiders and observers warn that sweeping security policies could unintentionally alienate employees of Asian descent or lead to undue scrutiny based on nationality rather than actual threat indicators.
