This inefficiency occurs when workloads are constrained to run only on Spot-based capacity with no viable path to standard nodes when Spot capacity is reclaimed or unavailable. While Spot reduces unit cost, rigid dependence can create hidden costs by requiring standby standard capacity elsewhere, delaying deployments, or increasing operational intervention to keep environments usable. GKE explicitly recommends mixing Spot and standard node pools for continuity when Spot is unavailable.
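One way to avoid the hard dependence is to express Spot placement as a preference rather than a requirement, so pods can fall back to standard node pools when Spot capacity disappears. The sketch below uses the Kubernetes Python client and the `cloud.google.com/gke-spot` label that GKE applies to Spot nodes; the container image name is a placeholder.

```python
from kubernetes import client

# Preferred (not required) affinity toward GKE Spot nodes, so the scheduler
# can fall back to standard node pools when Spot capacity is reclaimed.
spot_affinity = client.V1Affinity(
    node_affinity=client.V1NodeAffinity(
        preferred_during_scheduling_ignored_during_execution=[
            client.V1PreferredSchedulingTerm(
                weight=100,
                preference=client.V1NodeSelectorTerm(
                    match_expressions=[
                        client.V1NodeSelectorRequirement(
                            key="cloud.google.com/gke-spot",
                            operator="In",
                            values=["true"],
                        )
                    ]
                ),
            )
        ]
    )
)

pod_spec = client.V1PodSpec(
    affinity=spot_affinity,
    containers=[
        client.V1Container(
            name="app",
            image="us-docker.pkg.dev/example/app:latest",  # placeholder image
        )
    ],
)
```

Because the affinity is only preferred, the scheduler still places the pod on standard nodes when no Spot node is available, which is the mixed-pool pattern GKE recommends.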
This inefficiency occurs when Kubernetes Jobs or CronJobs running on EKS Fargate leave completed or failed pod objects in the cluster indefinitely. Although the workload execution has finished, AWS keeps the underlying Fargate microVM running to allow log inspection and final status checks. As a result, vCPU, memory, and networking resources remain allocated and billable until the pod object is explicitly deleted.
Over time, large numbers of stale Job pods can generate direct compute charges as well as consume ENIs and IP addresses, leading to both unnecessary spend and capacity pressure. This pattern is common in batch-processing and scheduled workloads that lack automated cleanup.
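Setting `ttlSecondsAfterFinished` on the Job spec is usually the simplest fix, since the TTL controller then removes finished Jobs and their pods automatically. For Jobs created without it, a periodic sweep along the lines of the following sketch (Kubernetes Python client) can release the lingering Fargate capacity; the `batch` namespace and the six-hour retention window are assumptions.

```python
from datetime import datetime, timedelta, timezone
from kubernetes import client, config

MAX_AGE = timedelta(hours=6)  # assumed window kept for log inspection

config.load_kube_config()  # or config.load_incluster_config() when run as a CronJob
v1 = client.CoreV1Api()

cutoff = datetime.now(timezone.utc) - MAX_AGE
for pod in v1.list_namespaced_pod("batch").items:
    owned_by_job = any(o.kind == "Job" for o in (pod.metadata.owner_references or []))
    finished = pod.status.phase in ("Succeeded", "Failed")
    # creation timestamp is used as a rough age proxy for the sketch
    if owned_by_job and finished and pod.metadata.creation_timestamp < cutoff:
        print("deleting", pod.metadata.namespace, pod.metadata.name)
        v1.delete_namespaced_pod(pod.metadata.name, pod.metadata.namespace)
```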
This inefficiency occurs when ElastiCache clusters continue running engine versions that have moved into extended support. While the service remains functional, AWS charges an ongoing premium for extended support that provides no added performance or capability. These costs are typically avoidable by upgrading to a version within standard support.
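A lightweight way to surface affected clusters is to inventory engine versions across the account and compare them against the versions currently in standard support. A minimal boto3 sketch, with the upgrade call left commented because the right target version and change window depend on the workload:

```python
import boto3

elasticache = boto3.client("elasticache")

# List every cache cluster with its engine and version so clusters still on
# extended-support versions can be scheduled for an upgrade.
paginator = elasticache.get_paginator("describe_cache_clusters")
for page in paginator.paginate():
    for cluster in page["CacheClusters"]:
        print(cluster["CacheClusterId"], cluster["Engine"], cluster["EngineVersion"])
        # elasticache.modify_cache_cluster(
        #     CacheClusterId=cluster["CacheClusterId"],
        #     EngineVersion="<target standard-support version>",
        #     ApplyImmediately=False,
        # )
        # Note: clusters that belong to a replication group are upgraded via
        # modify_replication_group instead.
```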
This inefficiency occurs when workloads with predictable, long-running compute usage continue to run entirely on on-demand pricing instead of leveraging Committed Use Discounts (CUDs). For stable environments, such as production services or continuously running batch workloads, failing to apply CUDs results in materially higher compute spend without any operational benefit. The inefficiency is driven by pricing choice, not resource overuse.
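The gap is easy to quantify with back-of-the-envelope arithmetic. The hourly rate and discount percentages below are placeholders; substitute the published on-demand and CUD rates for the actual machine type and region before drawing conclusions.

```python
# Compare on-demand vs committed-use pricing for a workload running 24x7.
HOURS_PER_MONTH = 730
on_demand_rate = 0.095  # assumed $/hour for the instance type
cud_discounts = {"1-year CUD": 0.37, "3-year CUD": 0.55}  # placeholder discounts, verify per SKU

on_demand_monthly = on_demand_rate * HOURS_PER_MONTH
print(f"on-demand: ${on_demand_monthly:,.2f}/month")
for term, discount in cud_discounts.items():
    committed = on_demand_monthly * (1 - discount)
    print(f"{term}: ${committed:,.2f}/month (saves ${on_demand_monthly - committed:,.2f})")
```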
This inefficiency occurs when backup data persists longer than intended due to misaligned or outdated retention policies. It often arises when retention requirements change over time, but older recovery points are not evaluated or cleaned up accordingly. In some cases, manually configured backups or legacy policies remain in place even after operational or compliance needs have been reduced.
As a result, backup storage continues to grow and incur cost without delivering additional recovery value.
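As one concrete example, on AWS Backup the recovery points in a vault can be listed and compared against the current retention target so that stale ones are reviewed before deletion. The vault name and the 35-day window below are assumptions; other backup services expose equivalent listing and deletion operations.

```python
from datetime import datetime, timedelta, timezone
import boto3

backup = boto3.client("backup")
cutoff = datetime.now(timezone.utc) - timedelta(days=35)  # assumed current retention target

# Report recovery points older than the intended retention for review.
paginator = backup.get_paginator("list_recovery_points_by_backup_vault")
for page in paginator.paginate(BackupVaultName="example-vault"):
    for rp in page["RecoveryPoints"]:
        if rp["CreationDate"] < cutoff:
            print("stale:", rp["RecoveryPointArn"], rp["CreationDate"].date())
            # backup.delete_recovery_point(
            #     BackupVaultName="example-vault",
            #     RecoveryPointArn=rp["RecoveryPointArn"],
            # )
```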
This inefficiency occurs when Amazon Aurora database clusters are intentionally stopped to avoid compute costs but are automatically restarted by the service after the maximum allowed stop period (currently seven days). Once restarted, the instances begin accruing instance-hour charges even if the database is not needed.
Because Aurora does not provide native lifecycle controls to keep clusters stopped indefinitely, this behavior can result in recurring, unintended compute spend—particularly in non-production, seasonal, or infrequently accessed environments where clusters are stopped and forgotten.
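A common workaround is a scheduled job that re-stops clusters the service has auto-started. The sketch below uses boto3 and is intended to run on a schedule (for example, a Lambda function triggered by EventBridge); the `auto-stop=true` tag is an assumed convention for marking clusters that should stay stopped.

```python
import boto3

rds = boto3.client("rds")

# Re-stop Aurora clusters that were auto-started after the seven-day stop limit.
for cluster in rds.describe_db_clusters()["DBClusters"]:
    tags = {t["Key"]: t["Value"] for t in cluster.get("TagList", [])}
    if tags.get("auto-stop") == "true" and cluster["Status"] == "available":
        print("stopping", cluster["DBClusterIdentifier"])
        rds.stop_db_cluster(DBClusterIdentifier=cluster["DBClusterIdentifier"])
```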
This inefficiency occurs when automated Cloud SQL backups are retained longer than required by recovery objectives or governance needs. Because backups accumulate over the retention window (and can grow quickly for high-change databases), excessive retention drives ongoing backup storage charges without improving practical recoverability.
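Retention is controlled per instance through the SQL Admin API. A minimal sketch using the Google API discovery client is shown below; the project and instance names and the seven-backup count are assumptions, and the count should be aligned with actual recovery objectives before applying.

```python
from googleapiclient import discovery

sqladmin = discovery.build("sqladmin", "v1")

# Tighten automated backup retention to a fixed number of recent backups.
body = {
    "settings": {
        "backupConfiguration": {
            "enabled": True,
            "backupRetentionSettings": {
                "retentionUnit": "COUNT",
                "retainedBackups": 7,  # assumed target retention
            },
        }
    }
}
sqladmin.instances().patch(
    project="example-project", instance="example-instance", body=body
).execute()
```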
This inefficiency occurs when production and non-production applications are hosted within the same App Service Plan. Production workloads often require higher availability, performance, or scaling characteristics, driving the plan toward larger or higher-cost SKUs. When non-production workloads share that plan, they inherit the higher cost structure even though their availability and performance requirements are typically much lower, resulting in unnecessary spend.
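One way to surface candidates for separation is to list every web app alongside its plan's SKU tier and flag apps whose names suggest non-production environments on premium tiers. The sketch below uses the azure-mgmt-web SDK; the subscription ID, the tier list, and the `-dev`/`-test`/`-staging` naming convention are all assumptions about the environment.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Index plans by resource ID, then flag non-prod-looking apps on costly tiers.
plans = {p.id.lower(): p for p in client.app_service_plans.list()}
for app in client.web_apps.list():
    plan = plans.get((app.server_farm_id or "").lower())
    if plan and plan.sku and plan.sku.tier in ("Premium", "PremiumV2", "PremiumV3", "Isolated", "IsolatedV2"):
        if any(app.name.endswith(suffix) for suffix in ("-dev", "-test", "-staging")):
            print(f"{app.name} (non-prod?) runs on {plan.name} [{plan.sku.tier}]")
```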
This inefficiency occurs when pod resource requests—often inflated by sidecar containers—push total memory or CPU just over a Fargate sizing boundary. Because Fargate adds mandatory system overhead and only supports fixed resource combinations, small incremental increases can force a pod into a much larger billing tier. This results in materially higher cost for marginal additional resource needs, especially in workloads that run continuously or at scale.
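The threshold effect is easiest to see by computing the provisioned configuration from the requests. The sketch below uses an abbreviated and possibly outdated table of Fargate vCPU/memory combinations and a 256 MiB Kubernetes overhead figure; consult the EKS Fargate pod configuration documentation for the authoritative values.

```python
# Illustrates how small request increases can cross a Fargate sizing boundary.
COMBINATIONS = [  # (vCPU, memory GiB) -- partial list, for illustration only
    (0.25, 0.5), (0.25, 1), (0.25, 2),
    (0.5, 1), (0.5, 2), (0.5, 3), (0.5, 4),
    (1, 2), (1, 4), (1, 8),
    (2, 4), (2, 8), (2, 16),
    (4, 8), (4, 16), (4, 30),
]
OVERHEAD_GIB = 0.25  # memory Fargate reserves for required Kubernetes components

def billing_tier(cpu_request, mem_request_gib):
    needed_mem = mem_request_gib + OVERHEAD_GIB
    # pick the smallest combination (by vCPU, then memory) that fits both requests
    return min(
        (c for c in COMBINATIONS if c[0] >= cpu_request and c[1] >= needed_mem),
        key=lambda c: (c[0], c[1]),
    )

print(billing_tier(0.25, 1.75))  # fits the (0.25 vCPU, 2 GiB) configuration
print(billing_tier(0.25, 1.80))  # a small sidecar bump pushes it into a larger tier
```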
This inefficiency occurs when Provisioned Concurrency is enabled for Lambda functions that do not require consistently low latency or steady traffic. In such cases, reserved capacity remains allocated and billed during idle periods, creating ongoing cost without proportional performance or business benefit. This is distinct from standard Lambda execution charges, which are purely usage-based.
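Low utilization of the reservation is visible in CloudWatch. The sketch below pulls a week of the `ProvisionedConcurrencyUtilization` metric with boto3 and flags functions whose peak stayed under an assumed 20% threshold; the function name, alias, threshold, and exact metric dimensions should be verified against the function's actual metrics.

```python
from datetime import datetime, timedelta, timezone
import boto3

FUNCTION, ALIAS = "example-function", "live"  # placeholders

cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="ProvisionedConcurrencyUtilization",
    Dimensions=[
        {"Name": "FunctionName", "Value": FUNCTION},
        {"Name": "Resource", "Value": f"{FUNCTION}:{ALIAS}"},
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=7),
    EndTime=datetime.now(timezone.utc),
    Period=3600,
    Statistics=["Maximum"],
)

peaks = [p["Maximum"] for p in stats["Datapoints"]]
if peaks and max(peaks) < 0.2:  # assumed utilization threshold
    print(f"{FUNCTION}:{ALIAS} peaked at {max(peaks):.0%}; consider removing the reservation")
    # boto3.client("lambda").delete_provisioned_concurrency_config(
    #     FunctionName=FUNCTION, Qualifier=ALIAS)
```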