Network Load Balancers that are no longer needed often persist after architecture changes, service decommissioning, or migration projects. Even when no active TCP connections or traffic flow through the NLB, it still accrues hourly charges. Identifying and removing these idle resources helps reduce unnecessary networking expenses without affecting service availability.
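As a rough illustration, the boto3 sketch below flags NLBs that report zero new flows over a lookback window. The 14-day window and the choice of the NewFlowCount CloudWatch metric as the idleness signal are assumptions to tune, not a definitive test.

```python
import boto3
from datetime import datetime, timedelta, timezone

elbv2 = boto3.client("elbv2")
cloudwatch = boto3.client("cloudwatch")

LOOKBACK_DAYS = 14  # illustrative threshold; tune to your environment

def idle_nlbs():
    """Yield names of NLBs with no new flows recorded over the lookback window."""
    end = datetime.now(timezone.utc)
    start = end - timedelta(days=LOOKBACK_DAYS)
    for page in elbv2.get_paginator("describe_load_balancers").paginate():
        for lb in page["LoadBalancers"]:
            if lb["Type"] != "network":
                continue
            # CloudWatch expects the ARN suffix after "loadbalancer/" as the dimension value.
            dimension = lb["LoadBalancerArn"].split("loadbalancer/")[1]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/NetworkELB",
                MetricName="NewFlowCount",
                Dimensions=[{"Name": "LoadBalancer", "Value": dimension}],
                StartTime=start,
                EndTime=end,
                Period=86400,
                Statistics=["Sum"],
            )
            if sum(dp["Sum"] for dp in stats["Datapoints"]) == 0:
                yield lb["LoadBalancerName"]

if __name__ == "__main__":
    for name in idle_nlbs():
        print(f"Candidate for removal: {name}")
```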
Application Load Balancers that no longer serve active workloads may persist after application migrations, architecture changes, or testing activities. When no incoming requests are processed through the ALB, it continues to generate baseline hourly and LCU charges. Identifying and decommissioning unused ALBs helps reduce networking expenses without impacting operational environments.
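A similar check works for ALBs using the RequestCount metric in the AWS/ApplicationELB namespace. In the sketch below, the load balancer dimension value and the 30-day lookback are placeholders.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Dimension value is the ARN suffix, e.g. "app/my-alb/0123456789abcdef" (placeholder).
ALB_DIMENSION = "app/my-alb/0123456789abcdef"

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)  # illustrative lookback

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": ALB_DIMENSION}],
    StartTime=start,
    EndTime=end,
    Period=86400,
    Statistics=["Sum"],
)

total_requests = sum(dp["Sum"] for dp in resp["Datapoints"])
if total_requests == 0:
    print("ALB received no requests in the lookback window; review before deleting.")
```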
Gateway Load Balancers that no longer have active traffic flows can continue to exist indefinitely unless proactively decommissioned. This often happens after network topology changes, security architecture updates, or environment deprecations. Without active packet forwarding, the GLB provides no functional benefit but still incurs hourly charges.
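The same idle-traffic check can be sketched for a Gateway Load Balancer using the ProcessedBytes metric in the AWS/GatewayELB namespace; the dimension value and 30-day window below are placeholder assumptions.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Dimension value is the ARN suffix, e.g. "gwy/my-gwlb/0123456789abcdef" (placeholder).
GWLB_DIMENSION = "gwy/my-gwlb/0123456789abcdef"

end = datetime.now(timezone.utc)
start = end - timedelta(days=30)  # illustrative lookback

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/GatewayELB",
    MetricName="ProcessedBytes",
    Dimensions=[{"Name": "LoadBalancer", "Value": GWLB_DIMENSION}],
    StartTime=start,
    EndTime=end,
    Period=86400,
    Statistics=["Sum"],
)

if sum(dp["Sum"] for dp in resp["Datapoints"]) == 0:
    print("Gateway Load Balancer forwarded no traffic in the window; candidate for decommissioning.")
```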
Oversized instances within Auto Scaling Groups lead to inflated baseline costs, even when scaling adjusts the number of instances dynamically. When workloads consistently use only a fraction of the available CPU, memory, or network capacity, there is an opportunity to downsize to smaller, less expensive instance types without sacrificing performance. Right-sizing helps balance capacity and efficiency, reducing compute spend while preserving workload stability.
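One hedged way to surface right-sizing candidates is to compare a group's average and peak CPU utilization against a floor, as in the sketch below. The 20% average and 40% peak ceilings are arbitrary illustrations, and memory or network headroom (which needs additional agents or metrics) is not covered.

```python
import boto3
from datetime import datetime, timedelta, timezone

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

LOOKBACK_DAYS = 14      # illustrative window
AVG_CPU_CEILING = 20.0  # flag groups averaging below 20% CPU (arbitrary)
MAX_CPU_CEILING = 40.0  # and never peaking above 40% (arbitrary)

end = datetime.now(timezone.utc)
start = end - timedelta(days=LOOKBACK_DAYS)

for page in autoscaling.get_paginator("describe_auto_scaling_groups").paginate():
    for asg in page["AutoScalingGroups"]:
        name = asg["AutoScalingGroupName"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "AutoScalingGroupName", "Value": name}],
            StartTime=start,
            EndTime=end,
            Period=3600,
            Statistics=["Average", "Maximum"],
        )
        points = stats["Datapoints"]
        if not points:
            continue
        avg_cpu = sum(dp["Average"] for dp in points) / len(points)
        max_cpu = max(dp["Maximum"] for dp in points)
        if avg_cpu < AVG_CPU_CEILING and max_cpu < MAX_CPU_CEILING:
            print(f"{name}: avg {avg_cpu:.1f}%, peak {max_cpu:.1f}% -> consider a smaller instance type")
```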
This inefficiency arises when snapshots are retained long after they’ve served their purpose. Snapshots may have been created for backups, migrations, or disaster recovery plans but were never deleted—even after the related workload or volume was decommissioned. Over time, these unused snapshots accumulate, continuing to incur storage costs without providing operational value.
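The sketch below lists account-owned snapshots whose source volume no longer exists and that are older than a cutoff. The 90-day cutoff is illustrative, and the check ignores AMI-backed snapshots and backup-policy obligations, so results are review candidates only.

```python
import boto3
from datetime import datetime, timedelta, timezone

ec2 = boto3.client("ec2")
MAX_AGE_DAYS = 90  # illustrative retention cutoff

# Collect the IDs of volumes that still exist in the account.
existing_volumes = set()
for page in ec2.get_paginator("describe_volumes").paginate():
    existing_volumes.update(v["VolumeId"] for v in page["Volumes"])

cutoff = datetime.now(timezone.utc) - timedelta(days=MAX_AGE_DAYS)

for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        orphaned = snap.get("VolumeId") not in existing_volumes
        if orphaned and snap["StartTime"] < cutoff:
            print(f"{snap['SnapshotId']} ({snap['VolumeSize']} GiB) created {snap['StartTime']:%Y-%m-%d}")
```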
This inefficiency occurs when small files are stored in S3 storage classes that impose a minimum object size charge, resulting in unnecessary costs. Small files under 128 KB stored in Glacier Instant Retrieval, Standard-IA, or One Zone-IA are billed as if they were 128 KB. If these small files are accessed frequently, S3 Standard may be a better fit. For infrequently accessed small files, transitioning them to archival storage classes like Glacier Flexible Retrieval or Deep Archive can optimize storage spend. Poorly tuned lifecycle policies often allow small files to remain in suboptimal storage classes indefinitely.
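As an illustration, the sketch below walks a single bucket with list_objects_v2 and tallies objects under 128 KB sitting in the minimum-charge classes. The bucket name is a placeholder, and for large buckets S3 Inventory or Storage Lens would be a cheaper way to obtain the same signal.

```python
import boto3

s3 = boto3.client("s3")

BUCKET = "example-bucket"            # placeholder name
MIN_BILLABLE_SIZE = 128 * 1024       # 128 KB minimum billable object size
MIN_CHARGE_CLASSES = {"STANDARD_IA", "ONEZONE_IA", "GLACIER_IR"}

small_objects = 0
wasted_bytes = 0

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        if obj.get("StorageClass") in MIN_CHARGE_CLASSES and obj["Size"] < MIN_BILLABLE_SIZE:
            small_objects += 1
            # Each such object is billed as 128 KB regardless of its actual size.
            wasted_bytes += MIN_BILLABLE_SIZE - obj["Size"]

print(f"{small_objects} small objects billed for ~{wasted_bytes / 1024 / 1024:.1f} MB they do not use")
```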
This inefficiency occurs when a DynamoDB table remains in the default Standard table class despite having minimal or infrequent access. In these cases, switching to the Standard-IA table class can significantly reduce monthly storage costs, especially for archival tables, compliance data, or legacy systems that are still retained but rarely queried.
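A minimal sketch of the change using update_table is shown below, with a placeholder table name. Consumed capacity should still be reviewed first, since Standard-IA trades cheaper storage for higher per-request pricing.

```python
import boto3

dynamodb = boto3.client("dynamodb")

TABLE_NAME = "legacy-archive-table"  # placeholder

# Check the current table class before changing anything.
desc = dynamodb.describe_table(TableName=TABLE_NAME)["Table"]
current_class = desc.get("TableClassSummary", {}).get("TableClass", "STANDARD")
print(f"{TABLE_NAME} current table class: {current_class}")

if current_class == "STANDARD":
    # Move rarely accessed tables to Standard-IA to cut storage cost;
    # note that read/write request pricing is higher in this class.
    dynamodb.update_table(
        TableName=TABLE_NAME,
        TableClass="STANDARD_INFREQUENT_ACCESS",
    )
    print("Table class update requested.")
```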
This inefficiency occurs when an RDS instance uses a high-cost storage type such as io1 or io2 but does not require the performance benefits it provides. In many cases, provisioned IOPS are set at or below the free baseline included with gp3 (3,000 IOPS and 125 MB/s). In such scenarios, continuing to use provisioned IOPS storage results in elevated cost with no functional advantage. These misconfigurations often persist due to legacy templates, default settings, or a lack of periodic review.
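A hedged detection sketch follows: list RDS instances on io1 or io2 whose provisioned IOPS sit at or below gp3's included 3,000-IOPS baseline. The migration call is left commented out, since a storage modification should be scheduled deliberately.

```python
import boto3

rds = boto3.client("rds")
GP3_BASELINE_IOPS = 3000  # included with gp3 at no extra charge

for page in rds.get_paginator("describe_db_instances").paginate():
    for db in page["DBInstances"]:
        storage_type = db.get("StorageType")
        iops = db.get("Iops", 0)
        if storage_type in ("io1", "io2") and iops <= GP3_BASELINE_IOPS:
            print(f"{db['DBInstanceIdentifier']}: {storage_type} with {iops} provisioned IOPS "
                  f"<= gp3 baseline; candidate for gp3")
            # Example migration (run deliberately, e.g. in a maintenance window):
            # rds.modify_db_instance(
            #     DBInstanceIdentifier=db["DBInstanceIdentifier"],
            #     StorageType="gp3",
            #     ApplyImmediately=False,
            # )
```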
This inefficiency occurs when a DynamoDB table is no longer accessed by any active workload but continues to accumulate storage charges. These tables often remain after a project ends, a feature is retired, or data is migrated elsewhere. Without any read or write activity, the table provides no functional value and becomes a cost liability.
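One way to approximate "no read or write activity" is to sum the table's consumed capacity metrics over a lookback window, as in the sketch below. The 30-day window is illustrative, and tables touched only by scheduled backups or other indirect access may need a different signal.

```python
import boto3
from datetime import datetime, timedelta, timezone

dynamodb = boto3.client("dynamodb")
cloudwatch = boto3.client("cloudwatch")

LOOKBACK_DAYS = 30  # illustrative window

def consumed_capacity(table_name, metric_name):
    """Total consumed capacity units for a table over the lookback window."""
    end = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName=metric_name,
        Dimensions=[{"Name": "TableName", "Value": table_name}],
        StartTime=end - timedelta(days=LOOKBACK_DAYS),
        EndTime=end,
        Period=86400,
        Statistics=["Sum"],
    )
    return sum(dp["Sum"] for dp in resp["Datapoints"])

for page in dynamodb.get_paginator("list_tables").paginate():
    for table in page["TableNames"]:
        reads = consumed_capacity(table, "ConsumedReadCapacityUnits")
        writes = consumed_capacity(table, "ConsumedWriteCapacityUnits")
        if reads == 0 and writes == 0:
            print(f"{table}: no consumed read/write capacity in {LOOKBACK_DAYS} days")
```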
This inefficiency occurs when legacy volume types such as gp2 or io1 remain in use, even though AWS offers newer types, gp3 and io2, that deliver better performance at lower cost. The gp3 type allows IOPS and throughput to be configured independently of volume size, while io2 provides higher durability and more predictable performance than io1. These newer volumes are generally more cost-effective and can be adopted without re-architecting workloads. Many teams continue using outdated types due to default AMIs, automation templates, or simple oversight.
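Because moving to gp3 is an online ModifyVolume operation, a minimal sketch can list gp2 volumes and optionally request the change. The print-first approach below is an illustration rather than a recommendation to convert every volume automatically; large gp2 volumes with baselines above gp3's default 3,000 IOPS may need explicit IOPS and throughput settings.

```python
import boto3

ec2 = boto3.client("ec2")
APPLY_CHANGES = False  # flip to True only after reviewing each volume

for page in ec2.get_paginator("describe_volumes").paginate(
    Filters=[{"Name": "volume-type", "Values": ["gp2"]}]
):
    for vol in page["Volumes"]:
        print(f"{vol['VolumeId']}: gp2, {vol['Size']} GiB")
        if APPLY_CHANGES:
            # Online modification; set Iops/Throughput explicitly if the workload
            # needs more than gp3's default 3,000 IOPS and 125 MB/s.
            ec2.modify_volume(VolumeId=vol["VolumeId"], VolumeType="gp3")
```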