AWS Dangling Resources

Do you know what lurks behind unattached resources you thought had been destroyed? Not only are there cost implications, since you keep paying for resources you no longer use, but threat actors may also try to exploit them. In our cloud journey, we found this to be a common problem we had to deal with. Here is how we resolved dangling resources at Code42. 

We use Terraform to deploy resources to AWS, and by that same method, Terraform should tear those resources down when they are no longer needed. At the end of last year we found that, despite deploying resources as code, bug bounty researchers occasionally notified us of resources they could squat on. 

One easy example of how you could get into this situation: 

  • Create an S3 bucket
  • Host the contents of that bucket with CloudFront
  • Delete your bucket but not the CloudFront entry 

Because S3 bucket names are globally unique and CloudFront resolves its S3 origin by name, anyone can create an S3 bucket in their own account with the same name as the one you deleted. Their bucket will get attached to your still-live CloudFront entry, and the attacker can claim traffic going to your domain, potentially hosting a malicious site posing as your company. AWS doesn't appear to have an offering that looks for these dangling resources and notifies account owners (interestingly enough, Azure does notify when dangling resources are found within an account). Without an AWS service to handle this, we decided to build an in-house solution.
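To make the takeover surface concrete: the bucket name an attacker would need to claim can be read straight out of a CloudFront distribution's S3 origin domain. The helper below is an illustrative sketch, not part of our script, and the endpoint patterns it matches are the common S3 REST and website endpoint formats:

```python
# Sketch: recover the S3 bucket name embedded in a CloudFront S3 origin domain.
# Because bucket names are globally unique, if this bucket no longer exists,
# anyone can recreate it in their own account and serve content through the
# still-live distribution. (Hypothetical helper, not our production code.)
#
# S3 origin domains typically look like:
#   <bucket>.s3.amazonaws.com
#   <bucket>.s3.<region>.amazonaws.com
#   <bucket>.s3-website-<region>.amazonaws.com
import re

_S3_ORIGIN = re.compile(r"^(?P<bucket>.+?)\.s3[.-][a-z0-9.-]*amazonaws\.com$")

def bucket_from_origin(origin_domain: str):
    """Return the bucket name from an S3 origin domain, or None if it isn't one."""
    match = _S3_ORIGIN.match(origin_domain.lower().rstrip("."))
    return match.group("bucket") if match else None

print(bucket_from_origin("marketing-site.s3.us-east-1.amazonaws.com"))  # marketing-site
print(bucket_from_origin("d1234abcd.cloudfront.net"))                   # None
```

If `bucket_from_origin` returns a name that no longer appears in your account's bucket list, that distribution is a takeover candidate.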

So what are we looking for? As mentioned above, we focused on CloudFront entries that don't have an associated S3 bucket, and Elastic IPs that aren't associated with an EC2 instance. I built a fairly simple script in Python using Boto3, the AWS SDK for Python. Since our EIPs and CloudFront entries are referenced in Route53, we start by listing all hosted zones configured for the account and pull all records within each zone. We are interested in any records that point at CloudFront, since those will be tied to an S3 bucket, and 'A' records, which will contain EIPs.

For records pointing at CloudFront, we compare the name in the record to a list of all S3 buckets within the account. If there isn't an S3 bucket with that name, we've found a dangling resource, and we report it. Similarly for Elastic IPs, we call describe_instances and provide the EIP we found in Route53. If no EC2 instance exists with that IP, we've found a dangling resource. Again, report it. It is important to note that you can have 'A' records pointing to internal resources, which won't have an associated EC2 instance; I've added logic to filter those internal resources out based on matching an IP string.
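The comparison logic can be sketched with plain data structures. The function names, the record-to-bucket mapping, and the sample values below are my own simplifications, not the real script's interface; in practice the inputs would come from Boto3 calls such as `list_resource_record_sets`, `list_buckets`, and `describe_instances`. For the internal-address filter I use the standard library's `ipaddress` module rather than the simple string match mentioned above:

```python
# Sketch of the dangling-resource checks, simplified from the real script.
import ipaddress

def find_dangling_cloudfront(cf_records, bucket_names):
    """Route53 records aliased to CloudFront whose expected bucket is gone.

    cf_records maps a record name to the S3 bucket we expect to back it;
    bucket_names is every bucket that exists in the account.
    """
    buckets = set(bucket_names)
    return [name for name, bucket in cf_records.items() if bucket not in buckets]

def is_internal(ip):
    """True for private (e.g. RFC 1918) addresses, which have no public EC2."""
    return ipaddress.ip_address(ip).is_private

def find_dangling_eips(a_record_ips, instance_ips):
    """Public IPs found in 'A' records with no matching EC2 instance."""
    in_use = set(instance_ips)
    return [ip for ip in a_record_ips if not is_internal(ip) and ip not in in_use]

# Example run with made-up record names, bucket names, and addresses:
dangling_cf = find_dangling_cloudfront(
    {"www.example.com": "www-example-site", "docs.example.com": "docs-example-site"},
    ["docs-example-site", "logs-example"],
)
dangling_ips = find_dangling_eips(
    ["10.0.1.5", "54.210.1.23", "54.210.1.24"],  # IPs pulled from 'A' records
    ["54.210.1.24"],                             # IPs attached to EC2 instances
)
print(dangling_cf)   # ['www.example.com']  -> bucket no longer exists
print(dangling_ips)  # ['54.210.1.23']      -> no instance; 10.0.1.5 filtered as internal
```

One caveat if you adopt the `ipaddress` approach: `is_private` also treats documentation and other non-global ranges as private, so validate it against the address ranges your environment actually uses.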

The linked code is a modified version of what we have deployed; it is a small section of a much larger Python script used for information gathering. The code is meant to be run locally, while authenticated to an account, or from within an AWS account. We have our script deployed as an AWS Lambda function. Amazon EventBridge handles the scheduling, running it every 24 hours, with the results output to Amazon S3. A Sumo Logic S3 collector lets us review and alert on findings.
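Since we deploy with Terraform anyway, the scheduling piece is a small amount of EventBridge configuration. The fragment below is a rough sketch with placeholder resource names and a placeholder Lambda reference, not our actual config:

```hcl
# Run the dangling-resource scan every 24 hours via EventBridge.
resource "aws_cloudwatch_event_rule" "dangling_scan" {
  name                = "dangling-resource-scan"
  schedule_expression = "rate(24 hours)"
}

# Point the schedule at the (hypothetical) scanner Lambda.
resource "aws_cloudwatch_event_target" "dangling_scan" {
  rule = aws_cloudwatch_event_rule.dangling_scan.name
  arn  = aws_lambda_function.dangling_scan.arn
}

# Allow EventBridge to invoke the function.
resource "aws_lambda_permission" "allow_eventbridge" {
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.dangling_scan.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.dangling_scan.arn
}
```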

Deploying infrastructure as code allows for consistent deployments and easy validation that things are correct. While this should add reassurance that resources are torn down appropriately when no longer needed, manual deletions can let resources slip through the cracks. It’s important to verify you’re not leaving resources out there for opportunists to take advantage of. Even if you think your policies and procedures should prevent dangling resources from occurring, review the script and scan your environments.

Associated code: aws_dangling_resources.py
https://github.com/code42/redblue42/blob/master/aws_dangling_resources.py