Russian entities have launched a large-scale campaign to embed propaganda and disinformation directly into training datasets for artificial intelligence systems, according to InsightNews. Cybersecurity experts warn of a new threat: the “poisoning” of algorithms that underlie popular chatbots and search engines. The aim of this strategy is to make Russian narratives an integral part of AI responses, causing neural networks to present distorted information as objective facts.
The operation relies on large numbers of automated accounts and specialized websites that generate content optimized for web scrapers, the programs that collect data for AI training.
Russia is producing millions of articles, posts, and comments in multiple languages that promote Kremlin-approved narratives. Because AI developers often use publicly available internet data for training, these manipulative texts enter the “brain” of artificial intelligence. As a result, when users ask AI about international conflicts or political events, the algorithm may produce answers based on prepared disinformation, treating it as a widely accepted perspective.
Experts note that this tactic is far more dangerous than traditional troll farms because it targets the very foundation of modern technologies. Whereas propaganda previously targeted social media users, it now seeks to alter the logic of software products used by millions worldwide. This creates a long-term threat, as once AI absorbs “poisoned” information, it is extremely difficult to remove from a trained model. Investigators emphasize that Russian information operations specialists have studied the principles of ranking and data collection algorithms in detail, allowing them to effectively inject intended narratives into the digital space.
Additionally, Russia has created entire networks of fake news outlets that pose as credible regional media. These sites republish each other's material, creating the illusion of independent confirmation from multiple sources. AI systems interpret this repetition as a signal of reliability.
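The mechanism is easy to see in miniature. A minimal sketch (all site names, texts, and thresholds below are hypothetical) shows how a naive "corroboration" signal counts mirror sites as independent sources, and how collapsing near-duplicate articles first removes the inflated signal:

```python
# Illustrative sketch: a naive corroboration score counts distinct sites
# repeating a claim -- exactly what a network of mirrors exploits.
# Collapsing near-duplicates (word-shingle Jaccard similarity) defeats it.

def shingles(text, k=3):
    """Set of k-word shingles used for near-duplicate comparison."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def corroboration(articles, dedup=False, threshold=0.8):
    """Count sources; optionally collapse near-duplicate articles first."""
    kept = []
    for site, text in articles:
        s = shingles(text)
        if dedup and any(jaccard(s, shingles(t)) >= threshold for _, t in kept):
            continue  # near-duplicate of an article already counted
        kept.append((site, text))
    return len(kept)

# Five hypothetical "regional outlets" republishing one article near-verbatim:
story = "officials confirmed the incident near the border on tuesday morning"
mirrors = [(f"site{i}.example", story + f" update {i}") for i in range(5)]

print(corroboration(mirrors))              # naive count: 5 "independent" sources
print(corroboration(mirrors, dedup=True))  # after dedup: 1
```

Production crawlers use far more robust techniques (MinHash, SimHash), but the principle is the same: repetition alone is not corroboration unless the copies are shown to be independent.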
Experts urge tech companies to revise their approaches to data filtering for AI training and implement stricter source verification mechanisms to prevent AI from becoming a tool for global manipulation of public opinion. The situation is further complicated by the accelerating pace of Russian disinformation generation, which outstrips the ability of defensive systems to detect and block it.
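One form such source verification could take is provenance filtering at ingestion time. The sketch below is a hypothetical illustration (the domain names, reputation scores, and threshold are all invented for the example): training documents are admitted only if their domain clears a trust threshold, so raw crawl volume from unknown or low-reputation sites carries no weight.

```python
# Illustrative sketch (domains, scores, and threshold are hypothetical):
# a provenance filter that gates training documents on per-domain trust
# instead of accepting everything the crawler returns.

from urllib.parse import urlparse

REPUTATION = {                    # hypothetical per-domain trust scores
    "archive.example": 0.9,       # long-established, independently verified
    "regionalnews.example": 0.2,  # recently registered mirror site
}

def keep_for_training(url, min_score=0.5):
    """Admit a document only if its source domain clears the threshold."""
    domain = urlparse(url).netloc
    return REPUTATION.get(domain, 0.0) >= min_score  # unknown domains score 0

docs = [
    "https://archive.example/report",
    "https://regionalnews.example/story",
    "https://unknown123.example/post",
]
filtered = [u for u in docs if keep_for_training(u)]
print(filtered)  # only the high-reputation source survives
```

The hard part in practice is maintaining the reputation table itself, which is why the speed mismatch the experts describe matters: new disinformation domains can be registered faster than any manual vetting process can score them.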