Cryptopolitan
2026-01-29 17:45:22

Hackers are hijacking unprotected AI models to steal computing power

About 175,000 private AI servers are reportedly exposed to the public internet, giving hackers an opportunity to hijack them for illicit activity. The exposure was reported by the security firms SentinelOne and Censys, which logged 7.23 million observations over nearly 300 days.

Hackers exploit Ollama setting

The SentinelOne and Censys report found that the exposed servers run Ollama, open-source software that lets people run powerful AI models, such as Meta's Llama or Google's Gemma, on their own computers instead of using a hosted service like ChatGPT. By default, Ollama only accepts connections from the computer it is installed on. However, a user can change this setting to make the server reachable remotely, which can accidentally expose the entire system to the public internet.

The researchers discovered that while many of these AI "hosts" are temporary, about 23,000 of them stay online almost all the time. These "always-on" systems are prime targets for hackers because they provide free, powerful hardware that is not monitored by any big tech company.

In the United States, about 18% of the exposed systems are in Virginia, likely due to the high density of data centers there, while in China, 30% of hosts are located in Beijing. Surprisingly, 56% of all exposed AI systems run on home or residential internet connections. This is a major problem because hackers can use these home IP addresses to hide their identity: when a hacker sends a malicious request through someone's home AI server, it appears to come from a regular person rather than a criminal botnet.

How are criminals using these hijacked AI systems?

According to Pillar Security, a criminal network known as Operation Bizarre Bazaar is actively hunting for these exposed AI endpoints. It looks for systems listening on Ollama's default port, 11434, that don't require a password.
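The exposure mechanism described above comes down to one bind address. As a minimal sketch (not from the report), the check below flags when an Ollama-style host setting such as the `OLLAMA_HOST` environment variable would listen beyond the loopback interface; the helper name and parsing are illustrative:

```python
import ipaddress

def is_publicly_exposed(ollama_host: str) -> bool:
    """Return True if a bind address would accept connections
    from outside the local machine.

    Ollama's default is loopback-only (127.0.0.1:11434); a value
    like 0.0.0.0 makes the server listen on every interface.
    """
    # Strip an optional scheme and port, keeping only the host part.
    host = ollama_host.split("://")[-1].split(":")[0] or "127.0.0.1"
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # A hostname rather than an IP: assume it resolves beyond loopback.
        return True
    return not addr.is_loopback

print(is_publicly_exposed("127.0.0.1:11434"))  # False: the safe default
print(is_publicly_exposed("0.0.0.0:11434"))    # True: all interfaces
```

A host that passes this check is still only as safe as its network: the servers in the report became reachable precisely because this one setting was loosened without adding authentication in front of it.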
Once they find one, they steal the "compute" and sell it to others who want to run AI tasks cheaply, such as generating thousands of phishing emails or creating deepfake content.

Between October 2025 and January 2026, the security firm GreyNoise recorded over 91,403 attack sessions targeting these AI setups. It found two main types of attack. The first uses a technique called Server-Side Request Forgery (SSRF) to force the AI server to connect to the hacker's own infrastructure. The second is a massive scanning campaign in which hackers send thousands of simple queries to determine exactly which AI model is running and what it is capable of.

About 48% of these systems are configured for "tool-calling," meaning the AI is allowed to interact with other software, search the web, or read files on the computer. If a hacker finds a system like this, they can use prompt injection to trick the AI: instead of asking for a poem, they might tell it to "list all the API keys in the codebase" or "summarize the secret project files." Since no human is watching, the AI often obeys these commands.

The Check Point 2026 Cyber Security Report shows that total cyberattacks increased by 70% between 2023 and 2025. In November 2025, Anthropic reported the first documented case of an AI-orchestrated cyber-espionage campaign, in which a state-sponsored group used AI agents to perform 80% of a hack without human help.

Several new vulnerabilities, such as CVE-2025-1975 and CVE-2025-66959, were disclosed just this month. They are flaws that allow attackers to crash an Ollama server by sending it a specially crafted model file. Because 72% of the exposed hosts use the same quantization format, Q4_K_M, a single successful attack could take down thousands of systems at once.
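The fingerprinting the scanners perform, and the Q4_K_M concentration the report highlights, both hinge on how much an unauthenticated Ollama host reveals about its models. As a defensive sketch under stated assumptions (the sample JSON below is hypothetical data shaped like a model-listing response with per-model `details.quantization_level` fields), a site operator could audit their own host's inventory like this:

```python
import json

# Hypothetical model-listing response from one's own host,
# shaped like {"models": [{"name": ..., "details": {...}}, ...]}.
sample = json.loads("""
{"models": [
  {"name": "llama3:8b",  "details": {"quantization_level": "Q4_K_M"}},
  {"name": "gemma:7b",   "details": {"quantization_level": "Q4_K_M"}},
  {"name": "llama3:70b", "details": {"quantization_level": "Q8_0"}}
]}
""")

def quantization_share(tags: dict, level: str) -> float:
    """Fraction of listed models using the given quantization level."""
    models = tags.get("models", [])
    hits = sum(1 for m in models
               if m.get("details", {}).get("quantization_level") == level)
    return hits / len(models) if models else 0.0

# On this hypothetical host, 2 of 3 models share the Q4_K_M format.
print(f"{quantization_share(sample, 'Q4_K_M'):.0%}")
```

The same visibility that lets a defender run this audit lets a scanner run it from the outside when no password is required, which is why a monoculture of one file format turns a single model-file flaw into a mass-outage risk.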
