In many of the projects I've worked on, a messy AWS account was a constant!

Too many people create too many resources, change configurations, and so on. In the end you're left with a pile of resources nobody uses but everyone keeps, based on an “if it works, don’t touch it” philosophy.

  • This could be costing you money
  • Replicating everything from scratch would probably be a nightmare

So let’s start over and try to keep things clean right from the start.


The S3 - Terraform way

Almost everything in your AWS account is created via Terraform, so your entire AWS infrastructure lives in a bunch of config files and a state file.

The manual things

The only manual part of this setup is creating the S3 bucket that stores the Terraform state, with versioning enabled so you can recover earlier state versions:

aws s3api create-bucket --bucket terraform-states-<UNIQUE_ID> --region us-east-1
 
aws s3api put-bucket-versioning --bucket terraform-states-<UNIQUE_ID> --versioning-configuration Status=Enabled

For projects where multiple people might change the Terraform state at the same time, you’ll also want a DynamoDB table alongside the S3 bucket for state locking (see the Terraform documentation on state locking).
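As a sketch, the lock table can be created with the AWS CLI as well. The table name here matches the one used in the backend.tf further down; Terraform’s S3 backend requires the table to have a string partition key named LockID:

```shell
# Create the DynamoDB table used for Terraform state locking.
# "LockID" (string hash key) is the key name Terraform's S3 backend expects.
aws dynamodb create-table \
  --table-name vc-terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-east-1
```

Pay-per-request billing keeps the cost of the lock table negligible, since it only sees a handful of reads and writes per Terraform run.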

Terraform all the way

  • Create everything in your project’s infra using only Terraform.
  • Use the created S3 buckets (& DynamoDB table) to maintain the state.

Have a backend.tf file with the following:

terraform {
  backend "s3" {
    bucket         = "terraform-states-<UNIQUE_ID>"
    key            = "ecr/terraform.tfstate" # Path within bucket
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "vc-terraform-lock"  # Optional: For state locking
  }
}
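With backend.tf in place, wiring Terraform up to the remote state is just an init away (run from the directory containing backend.tf):

```shell
# Initialize the working directory and configure the S3 backend
terraform init

# If you already have local state, migrate it into the S3 backend instead
terraform init -migrate-state
```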

Note: Technically you only need one S3 bucket for all of your account’s projects, but if there’s a need for separation, you can create project-specific buckets as well.
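If you stick with a single bucket, separation can come from the key instead: give each project its own state path within the bucket (the project names below are hypothetical):

```hcl
# Project A's backend.tf
key = "project-a/terraform.tfstate"

# Project B's backend.tf
key = "project-b/terraform.tfstate"
```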