In a startling incident, attackers utilized artificial intelligence (AI) to breach an Amazon Web Services (AWS) environment in under 10 minutes. This breach, identified by the Sysdig Threat Research Team (TRT), underscores the increasingly rapid and sophisticated nature of cyberattacks facilitated by AI technologies.
The attack began with the discovery of credentials exposed in public Simple Storage Service (S3) buckets, which gave the threat actor initial access. From there, the attacker swiftly escalated privileges, pivoting across 19 unique AWS principals.
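Finding keys in publicly readable objects is largely a pattern-matching exercise. The following is a minimal, illustrative sketch of that kind of scan; the regexes, sample text, and key values (AWS's own documentation examples) are simplified, and real secret scanners cover many more formats:

```python
import re

# AWS long-term access key IDs start with "AKIA" followed by 16
# uppercase alphanumerics; secret keys are 40 characters long.
ACCESS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")
SECRET_KEY_RE = re.compile(r"aws_secret_access_key\s*[=:]\s*([A-Za-z0-9/+]{40})", re.I)

def find_exposed_keys(text):
    """Return any (access key ID, secret key) pairs found in the text."""
    ids = ACCESS_KEY_RE.findall(text)
    secrets = SECRET_KEY_RE.findall(text)
    return list(zip(ids, secrets))

# Sample of what a leaked credentials file in a public bucket looks like.
leaked = (
    "aws_access_key_id = AKIAIOSFODNN7EXAMPLE\n"
    "aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\n"
)
print(find_exposed_keys(leaked))
```

Once a pair like this is harvested, it can be used with standard AWS SDKs or CLI tooling exactly as a legitimate credential would, which is why exposure alone is effectively a full compromise of that principal.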
AI as a Catalyst for Speed
During the event, which took place on November 28, the attacker employed large language models (LLMs) to automate tasks such as reconnaissance, malicious code generation, and real-time decision-making. Researchers noted that the use of LLMs significantly increased both the attacker's speed and lateral-movement capability.
Sysdig researchers Alessandro Brucato and Michael Clark remarked, "This attack stands out for its speed, effectiveness, and strong indicators of AI-assisted execution." The report highlights how the attacker not only collected and exfiltrated data from the cloud environment but also provisioned GPU instances on Elastic Compute Cloud (EC2), potentially for resource abuse or LLM model development.
Credential Exposure: A Major Vulnerability
While the rapid execution of the attack was alarming, the researchers emphasized the importance of the initial credential exposure as a critical vulnerability. They warned organizations against leaving access keys in public buckets, advocating instead for the use of IAM roles that leverage temporary credentials. If IAM users with long-term credentials must be used, they should be secured and rotated periodically.
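The rotation guidance above is straightforward to automate. A minimal sketch of an age check for long-term IAM keys, assuming a 90-day rotation policy; the key list mirrors the shape of data retrievable via IAM's ListAccessKeys API, and the key IDs shown are hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Assumed rotation policy: flag any long-term key older than 90 days.
MAX_KEY_AGE = timedelta(days=90)

def keys_due_for_rotation(keys, now=None):
    """keys: iterable of (access_key_id, create_date) pairs.
    Returns the IDs of keys older than the rotation threshold."""
    now = now or datetime.now(timezone.utc)
    return [key_id for key_id, created in keys if now - created > MAX_KEY_AGE]

now = datetime(2025, 11, 28, tzinfo=timezone.utc)
keys = [
    ("AKIAEXAMPLEFRESHKEY1", datetime(2025, 11, 1, tzinfo=timezone.utc)),
    ("AKIAEXAMPLESTALEKEY1", datetime(2025, 6, 1, tzinfo=timezone.utc)),
]
print(keys_due_for_rotation(keys, now))
```

A check like this only limits the blast radius of a leak, of course; the stronger fix the researchers recommend is replacing long-term keys with IAM roles issuing temporary credentials, which expire on their own.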
The exposed S3 buckets were reportedly named using familiar AI tool naming conventions, which the attacker exploited during reconnaissance to locate the credentials easily.
Amazon's Response
An AWS spokesperson addressed the incident, clarifying that the issue stemmed from misconfigured S3 buckets and did not affect AWS services or infrastructure. They advised customers to adhere to security best practices, including avoiding public access to S3 buckets, implementing least-privilege access, managing credentials securely, and enabling monitoring services like GuardDuty to mitigate unauthorized activity.
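The least-privilege advice can be made concrete with a scoped IAM policy. A minimal sketch (bucket name and prefix are hypothetical) granting read access to a single prefix, rather than broad ReadOnlyAccess across the account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadSingleAppPrefixOnly",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-app-bucket/app-data/*"
    }
  ]
}
```

Had the compromised credentials been scoped this narrowly, the attacker's reconnaissance across other buckets and services would have been blocked at the authorization layer.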
The Role of AI in Attack Execution
The attackers exhibited sophisticated use of AI and LLMs throughout the breach. They hijacked LLMs and leveraged the cloud for their own model development, which appeared to be a primary objective. Notably, while the compromised credentials initially carried only ReadOnlyAccess, the attackers injected code into a Lambda function to gain access to an admin account labeled "frick." The escalation took a mere eight minutes.
During this phase, the attacker's code contained comments and exception handling suggestive of LLM generation, while its use of the Serbian language hinted at the threat actor's possible origin.
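An escalation like the one above leaves a distinctive trail: a principal whose activity is otherwise read-only suddenly rewrites Lambda function code. A minimal detection sketch over simplified CloudTrail-style records (the event and field names here are illustrative; real CloudTrail events differ in detail):

```python
# Lambda write operations that should never come from a read-only principal.
WRITE_EVENTS = {"UpdateFunctionCode", "UpdateFunctionConfiguration"}

def flag_lambda_tampering(events):
    """Return (principal, eventName) pairs for Lambda write events.
    events: iterable of dicts with 'principal' and 'eventName' keys."""
    return [
        (e["principal"], e["eventName"])
        for e in events
        if e["eventName"] in WRITE_EVENTS
    ]

# Simplified event stream: a read-only user enumerates buckets, then
# modifies a Lambda function -- the escalation pattern seen in the attack.
events = [
    {"principal": "readonly-user", "eventName": "ListBuckets"},
    {"principal": "readonly-user", "eventName": "UpdateFunctionCode"},
]
print(flag_lambda_tampering(events))
```

Given the eight-minute escalation window in this incident, an alert of this kind is only useful if it fires and is acted on in near real time, which is the case for runtime detection below.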
Detecting and Mitigating Future Threats
Experts assert that had the organization not made the error of exposing valid credentials, the breach could have been averted. Jason Soroko, a senior fellow at Sectigo, criticized the failure to secure cloud environments, stating, "It is impossible to defend a cloud environment when the keys are left visible to anyone who bothers to look." The incident demonstrates the pressing need for organizations to master security fundamentals.
As AI becomes an integral part of cyberattacks, experts predict that such incidents will become increasingly common. They highlight that in 2026, AI may reach a critical mass as both a threat enabler and an attack surface. Shane Barney, CISO at Keeper Security, noted, "AI removes hesitation, allowing tasks that once took hours to be executed continuously and decisively, compressing the time defenders have historically relied on to detect and respond to threats."
To combat this evolving threat landscape, researchers recommend that organizations prioritize runtime detection, enforce least-privilege access, and implement other mitigation strategies outlined in their report.
Source: Dark Reading News