From Brazil to the United States, the United Kingdom to the European Union, ransomware entered the middle of the decade as a kind of parallel crime: it organizes itself as a service, outsources steps, and exploits the dependence of companies and governments on connected systems. The novelty is not encryption itself, but how extortion combines with faster operations, more stolen data, and the increasing use of artificial intelligence to reduce cost and increase reach.
The Threat Landscape 2025 report, published by ENISA, the EU Agency for Cybersecurity, ranked artificial intelligence as one of the defining elements of the current threat landscape. The report highlights that AI-backed phishing campaigns have come to represent a majority share of the social engineering initiatives observed last year. The practical impact is direct: more convincing text, language adapted to the victim's profile, automated testing of approaches, and a lower operational cost per attack.
AI does not fully replace the human operator in ransomware, but it reduces effort in steps that historically required time and manual skill. Language models are used to produce highly personalized emails, analyze exfiltrated data to identify sensitive information with greater potential for pressure, and support vulnerability research. The UK's National Cyber Security Centre has already warned that AI tends to increase the efficiency, frequency, and scale of existing tactics, especially in assisted reconnaissance and exploitation.
The role of AI agents
The most sensitive advance, however, is in the use of agent-based architectures. Unlike a text-only model, agents are systems capable of planning tasks, executing calls to external tools, interacting with APIs and maintaining context over multiple steps. When applied in legitimate corporate environments, these agents automate internal processes, integrate systems and reduce operational friction. From an offensive perspective, the same logic can be used to coordinate distributed actions.
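The loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the legitimate corporate pattern: an agent that executes a plan step by step, calls external tools, and carries context across steps. The tool names and the plan are invented for the example, not taken from any real product.

```python
# Minimal sketch of an agent loop: plan -> tool call -> context update.
# Tool names ("fetch_ticket", "close_ticket") are hypothetical stand-ins
# for real external integrations such as internal APIs.

from dataclasses import dataclass, field


@dataclass
class Agent:
    context: dict = field(default_factory=dict)

    def call_tool(self, name: str, arg: str) -> dict:
        # Stand-ins for real external systems the agent would call.
        tools = {
            "fetch_ticket": lambda t: {"ticket": t, "status": "open"},
            "close_ticket": lambda t: {"ticket": t, "status": "closed"},
        }
        return tools[name](arg)

    def run(self, plan: list) -> dict:
        # Execute each planned step, storing results in context so that
        # later steps can depend on earlier ones.
        for tool, arg in plan:
            self.context[tool] = self.call_tool(tool, arg)
        return self.context


agent = Agent()
result = agent.run([("fetch_ticket", "T-1"), ("close_ticket", "T-1")])
```

The point of the sketch is the structure, not the tools: the same plan-execute-remember loop that automates a help desk can, in hostile hands, coordinate reconnaissance and exploitation steps.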
A structured attack may involve one agent dedicated to collecting public and internal information, another focused on credential validation and the exploitation of excessive permissions, and a third responsible for operating cloud service APIs to map resources, tokens, and access keys. From the initial intrusion onward, automation accelerates lateral movement and exfiltration.
In Brazil, CTIR Gov alerts have described, since 2022, the maturation of groups such as BlackCat/ALPHV, which operate with lateral movement techniques and customized encryption. What changes now is the additional layer of intelligent automation, on top of the growing corporate adoption of API-based integrations, service accounts, and automated workflows.
This convergence extends the risk surface. Each integration adds credentials, tokens, and permissions. Each corporate agent with operational autonomy represents a new machine identity. If compromised, these elements can act with apparent legitimacy within the environment. The investigation stops asking only "who accessed" and starts asking "which system performed a given action, and under which chain of decisions".
From a technical point of view, responding to these threats requires an architectural review. Zero trust models, granular segmentation, and strict control of machine identities become priorities. Privilege management needs to cover service accounts and automated integrations. Logs must be centralized and protected from tampering, enabling behavioral analysis based on sequences of events rather than isolated alerts.
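Sequence-based analysis can be sketched simply: instead of alerting on any single event, flag an identity only when it completes a known risky chain. The event names and the chain below are illustrative assumptions, not a real detection rule.

```python
# Sketch: flag a risky *sequence* of events per machine identity,
# rather than alerting on each event in isolation.
# Event names and the sequence itself are hypothetical.

RISKY_SEQUENCE = ["create_token", "list_buckets", "bulk_download"]


def find_risky_identities(events):
    """events: list of (identity, action) tuples, ordered by time.

    Returns the identities that completed RISKY_SEQUENCE in order
    (other events may be interleaved).
    """
    progress = {}   # identity -> index of next expected action
    flagged = set()
    for identity, action in events:
        i = progress.get(identity, 0)
        if action == RISKY_SEQUENCE[i]:
            i += 1
            if i == len(RISKY_SEQUENCE):
                flagged.add(identity)
                i = 0  # reset so repeated chains keep being tracked
            progress[identity] = i
    return flagged


events = [
    ("svc-backup", "create_token"),
    ("svc-ci", "list_buckets"),      # isolated event: not flagged
    ("svc-backup", "list_buckets"),
    ("svc-backup", "bulk_download"),
]
```

Here `svc-backup` is flagged because it completed the whole chain, while `svc-ci`, which produced only one of the events, is not. That is the difference between sequence analysis and isolated alerts.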
Immutable backups remain a key measure against encryption, but they do not address the reputational dimension of leak-based extortion. Continuous exfiltration monitoring and clear incident response policies are now part of strategic planning.
The adoption of artificial intelligence in companies is not, in itself, the problem. On the contrary, it can strengthen detection and response when applied in a structured way. The risk arises when agents operate with excessive permissions, integrations are implemented without an adequate inventory, and automated decisions leave no auditable trail.
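An auditable trail for automated decisions can be as simple as recording, for every action an agent takes, which identity acted, what it did, and when. The decorator below is a minimal sketch; the identity name, action, and log shape are invented for the example.

```python
# Sketch: wrap every automated action so it leaves an audit record
# answering "which system performed this action, and when".
# "agent-finance" and approve_payment are hypothetical.

import time

AUDIT_LOG = []


def audited(identity: str):
    """Decorator that logs each call made by a given machine identity."""
    def wrap(fn):
        def inner(*args, **kwargs):
            AUDIT_LOG.append({
                "ts": time.time(),
                "identity": identity,
                "action": fn.__name__,
                "args": args,
            })
            return fn(*args, **kwargs)
        return inner
    return wrap


@audited("agent-finance")
def approve_payment(invoice_id: str) -> str:
    # Stand-in for a real automated decision.
    return f"approved:{invoice_id}"


outcome = approve_payment("INV-42")
```

In practice such records would go to the centralized, tamper-protected log store discussed above, but the design point stands: the trail is produced at the moment of the decision, not reconstructed afterwards.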
Ransomware has evolved from a technical attack into a structured economic model. The incorporation of AI and autonomous agents accelerates this logic, reduces costs for criminals, and increases complexity for companies, which need to keep their defense strategies up to date. The strategic priority is to govern identities, APIs, and algorithms with the same rigor applied to physical and financial assets.
Companies that treat agents and automations as a central part of their risk architecture will be better positioned to face the next wave. Those that see AI only as a productivity tool may discover too late that unchecked autonomy expands exposure quietly, but decisively.


