AWS is a prime target for attackers: its growing popularity and strategic role make it an attractive target.
To limit the risks, it is crucial to put in place robust security measures. Understanding the types of attack and assessing their impact is also essential.
Several methods can be used to assess the security of an AWS infrastructure. In this article, we present the offensive approach: AWS penetration testing (AWS pentesting). We detail the principles and objectives as well as the methodology of an AWS audit through a concrete example.
An AWS penetration test aims to assess the security of services and resources hosted on an Amazon Web Services (AWS) cloud environment.
This type of audit simulates cyber attacks to identify exploitable vulnerabilities in the infrastructure and configurations of AWS resources, such as EC2 instances, databases, deployed applications and containers.
The aim is to detect potential security vulnerabilities, assess their impact, and provide recommendations for strengthening the security of the target environment.
The scope of an AWS penetration test can be adapted to the specific needs of the organisation. It is possible to test all the services and configurations of your AWS infrastructure or to focus on the most critical elements.
The tests cover (but are not limited to):
For more information, take a look at our article exploring common vulnerabilities in cloud infrastructures.
To present the methodology of an AWS penetration test, let’s put ourselves in the shoes of an attacker seeking to compromise the company ‘EliCorp’.
Our objective is to compromise the database hosted in its AWS infrastructure. Let’s take a step-by-step look.
In this initial phase, the attacker focuses on gathering information on EliCorp, in particular on potentially exposed AWS services.
The aim is to identify exploitable elements without directly interacting with the systems in an intrusive way.
The attacker begins by gathering information on EliCorp’s subdomains. These can reveal potential entry points into the infrastructure.
Using tools such as amass and subfinder, it is possible to identify public subdomains, particularly those that could be hosted on AWS.
# Using amass to search for subdomains
amass enum -d elicorp.com -o subdomains.txt
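The same passive enumeration can be done with subfinder, mentioned above, and the results of both tools merged; the output file names are illustrative:

```shell
# Passive subdomain enumeration with subfinder (same fictitious target domain)
DOMAIN="elicorp.com"
subfinder -d "$DOMAIN" -o subfinder_subs.txt

# Merge and deduplicate the results from amass and subfinder
sort -u subdomains.txt subfinder_subs.txt > all_subdomains.txt
```

Combining several enumeration tools typically yields more subdomains than any single one, since each relies on different passive data sources.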
By scanning the subdomains found with nmap, the attacker seeks to identify open ports and exposed services, such as EC2 instances or APIs on AWS API Gateway, which could provide configuration information or be vulnerable to specific attacks.
# Subdomain scanning with Nmap for open ports
nmap -iL subdomains.txt -p- -T4 -oN open_scan.txt
In addition, tools such as CloudMapper are used to discover public configuration errors on AWS, such as insecure S3 buckets or vulnerabilities in EC2 security groups.
These resources may be accessible to anyone due to overly permissive public policies.
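As a quick manual check, a candidate bucket can be probed anonymously with the AWS CLI; the bucket name below is a hypothetical guess derived from the company name:

```shell
# Probe a candidate bucket without any credentials
# (--no-sign-request sends an unauthenticated, anonymous request)
BUCKET="elicorp-assets"   # hypothetical bucket name
aws s3 ls "s3://$BUCKET" --no-sign-request
```

If the listing succeeds, the bucket allows anonymous reads, which is exactly the kind of misconfiguration exploited in the next phase.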
This phase consists of exploiting configuration errors to gain access to sensitive resources and obtain essential information to progress towards the final objective, which is to compromise the database.
Misconfigured S3 buckets can contain sensitive data such as configuration files, logs or even API keys.
The attacker uses awscli to identify the permissions on each bucket found and see if any objects are publicly accessible.
# Verifying public access permissions on each bucket
for bucket in $(cat buckets_elicorp.txt); do
aws s3api get-bucket-acl --bucket $bucket
aws s3 ls s3://$bucket --recursive
done
When inspecting S3 files, the attacker discovers a configuration file containing an AWS API key which appears to correspond to an IAM user with certain permissions.
By exporting the found API key, the attacker tests its permissions to see if this key gives access to sensitive services such as RDS databases or IAM.
The aim of this phase is to understand how far this API key can be used to escalate privileges.
export AWS_ACCESS_KEY_ID=AKIXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXX
# Testing basic permissions to discover IAM privileges
aws iam get-user
aws iam list-attached-user-policies --user-name compromised_user
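Note that even when `iam:GetUser` is denied, the identity behind a key can still be established, because `sts:GetCallerIdentity` cannot be blocked by an IAM policy:

```shell
# Reveal the account ID, user ID and ARN behind the key.
# sts:GetCallerIdentity always succeeds with valid credentials,
# even when all IAM read permissions are denied.
IDENTITY=$(aws sts get-caller-identity --output json 2>/dev/null)
echo "${IDENTITY:-no valid credentials or CLI available}"
```

This makes it a standard first step when testing any found AWS key.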
The API key has restricted permissions, but authorises actions on IAM and RDS services, a possible entry point for more advanced stages of compromise.
The aim of this phase is to increase the attacker’s access to the AWS infrastructure by exploiting weaknesses in IAM permissions.
Using the limited permissions of the API key, the attacker enumerates IAM roles and policies to identify more permissive permissions, particularly targeting roles with access to RDS instances (databases).
# Retrieving the list of roles to identify potential targets
aws iam list-roles
aws iam list-role-policies --role-name <role_name>
After enumeration, the attacker discovers a role with the rds:DescribeDBInstances permission, enabling him to obtain critical information about the company’s databases.
If the attacker finds a vulnerable IAM role (one whose trust policy allows the compromised user to assume it), he can use the assume role technique (sts:AssumeRole) to obtain more permissions.
# Assume a role (if authorised) to escalate permissions
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/VulnerableRole --role-session-name pentestSession
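If the call succeeds, STS returns temporary credentials that must be exported before they can be used; a sketch using jq, with the role ARN from the example above:

```shell
# Assume the role and capture the temporary credentials returned by STS
ROLE_ARN="arn:aws:iam::123456789012:role/VulnerableRole"
CREDS=$(aws sts assume-role --role-arn "$ROLE_ARN" \
  --role-session-name pentestSession --output json 2>/dev/null)

# Export the temporary keys; the session token is mandatory
# for temporary credentials, unlike long-lived access keys
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r '.Credentials.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r '.Credentials.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r '.Credentials.SessionToken')
```

All subsequent AWS CLI calls then run with the permissions of the assumed role rather than those of the compromised user.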
Having obtained identification information and access permissions to RDS services, the attacker moves on to a direct attack on the databases.
Using rds:DescribeDBInstances, the attacker obtains information about the endpoint of the database instance, the type of engine (MySQL, PostgreSQL, etc.) and the security configurations, including the type of firewall and security group rules.
# Listing database endpoints and their status
aws rds describe-db-instances --query 'DBInstances[*].[Endpoint.Address,DBInstanceStatus]'
Using credentials discovered in the previously obtained configuration file, the attacker connects directly to the RDS instance to extract sensitive data.
# Connecting to the database with MySQL
mysql -h <db_instance_endpoint> -u <db_user> -p<db_password>
Once connected, the attacker can execute queries to list tables and obtain sensitive data, such as client information or financial transactions.
SHOW TABLES;
SELECT * FROM clients WHERE sensitive_data=true;
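Rather than extracting data query by query, an attacker would typically dump the whole database for offline analysis; a sketch with mysqldump, using the same placeholders as above:

```shell
# Dump all databases to a local file for offline analysis
DB_HOST="<db_instance_endpoint>"   # placeholder from the describe-db-instances output
mysqldump -h "$DB_HOST" -u "<db_user>" -p"<db_password>" \
  --all-databases > elicorp_dump.sql
```

A bulk dump like this is also what makes overly broad database permissions so damaging: a single compromised credential can exfiltrate every table at once.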
It is essential to review EliCorp’s security practices in order to strengthen the protection of the AWS infrastructure and prevent similar attacks.
The reconnaissance phase revealed public entry points, such as staging instances accessible on the Internet.
These staging, development or test environments should not be publicly exposed. EliCorp could limit access to staging environments by using AWS security groups to restrict authorised IP addresses, or by setting up VPNs so that only authenticated users could access them.
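Restricting a staging instance to a trusted IP range can be done directly on its security group; the group ID and CIDR below are placeholders:

```shell
# Allow HTTPS only from a trusted corporate range (placeholder values)
SG_ID="sg-0123456789abcdef0"
TRUSTED_CIDR="203.0.113.0/24"
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 443 --cidr "$TRUSTED_CIDR"

# Remove the overly broad rule exposing the service to the whole Internet
aws ec2 revoke-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 443 --cidr 0.0.0.0/0
```

The revoke step matters as much as the authorize step: adding a narrow rule does nothing if the 0.0.0.0/0 rule remains in place.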
The penetration test revealed that certain API keys and IAM roles had excessive permissions, including access to critical resources such as RDS instances.
The principle of least privilege means limiting IAM permissions to those strictly necessary for each user, service or application. In addition, it is essential to regularly review IAM roles and policies and revoke unnecessary access.
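A least-privilege policy explicitly lists only the actions and resources a principal needs; a minimal illustrative example (the user, policy and bucket names are hypothetical):

```shell
# Write a policy granting read access to a single bucket and nothing else
cat > least-privilege-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::elicorp-reports/*"
    }
  ]
}
EOF

# Attach it as an inline policy to the user (names are hypothetical)
aws iam put-user-policy --user-name app_user \
  --policy-name S3ReadOnlyReports \
  --policy-document file://least-privilege-policy.json
```

Had the compromised key in the scenario above carried a policy like this, it could not have been used to enumerate IAM roles or reach RDS.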
Sensitive files were found in public S3 buckets, making it possible to obtain compromising information.
S3 buckets should be configured to allow access only to specific users and services.
EliCorp should also enable access logs to monitor any attempts to access S3 buckets.
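Both measures can be applied with the AWS CLI; the bucket names below are placeholders:

```shell
# Block every form of public access on the bucket (placeholder name)
BUCKET="elicorp-assets"
aws s3api put-public-access-block --bucket "$BUCKET" \
  --public-access-block-configuration \
  BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Enable server access logging to a dedicated logging bucket
aws s3api put-bucket-logging --bucket "$BUCKET" \
  --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"elicorp-logs","TargetPrefix":"s3-access/"}}'
```

With public access blocked at the bucket level, even a mistakenly permissive ACL or bucket policy no longer exposes the objects.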
During the penetration test, secrets (such as API keys) were stored in publicly accessible configuration files.
Secrets and sensitive information should never be stored in plain text in configuration files. AWS Secrets Manager or AWS Parameter Store offer secure management of secrets by limiting access and using encryption.
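With AWS Secrets Manager, the database credentials found in the configuration file would instead be stored and fetched at runtime; a sketch with a hypothetical secret name:

```shell
# Store the database credentials in Secrets Manager
# instead of a plain-text configuration file (placeholder values)
SECRET_ID="prod/elicorp/db"
aws secretsmanager create-secret --name "$SECRET_ID" \
  --secret-string '{"username":"db_user","password":"<db_password>"}'

# The application retrieves the secret at runtime; it is never written to disk
aws secretsmanager get-secret-value --secret-id "$SECRET_ID" \
  --query SecretString --output text
```

Access to the secret is then governed by IAM and encrypted with KMS, and every retrieval is logged, unlike a file sitting in an S3 bucket.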
Direct access to RDS instances was enabled by an overly permissive security group configuration.
By restricting security groups and using VPC (Virtual Private Cloud) options to isolate databases, EliCorp could reduce the risk of compromise.
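Removing the instance's public reachability is a one-line change; the instance identifier below is a placeholder:

```shell
# Make the RDS instance reachable only from inside the VPC (placeholder identifier)
DB_ID="elicorp-prod-db"
aws rds modify-db-instance --db-instance-identifier "$DB_ID" \
  --no-publicly-accessible --apply-immediately
```

After this change, the direct connection performed in the attack scenario above would fail from outside the VPC, even with valid database credentials.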
During one of our missions, one of our clients asked us to examine its CI/CD pipeline on AWS.
Starting with limited access to GitLab, we discovered and exploited vulnerabilities that allowed us to escalate our privileges and access sensitive data.
To find out more, read our dedicated write-up: White box audit of a CI/CD pipeline on AWS.
It is important to assess the level of resistance of your AWS infrastructure to the attacks described in this article.
This assessment can be carried out using one of the audits we offer. Whether black box, grey box or white box, we can identify all the vulnerabilities in your AWS infrastructure and help you fix them.
Author: Amin TRAORÉ – CMO @Vaadata