NAT Gateways are designed to serve private subnets within the same Availability Zone. When subnets in one AZ are configured to route traffic through a NAT Gateway in a different AZ, the traffic crosses AZ boundaries and incurs inter-AZ data transfer charges in addition to the standard NAT processing fees.
This typically happens when:
* NAT Gateways are deployed in multiple AZs (as recommended for resilience), but
* Route tables for all subnets are configured to send traffic to a single NAT Gateway, ignoring AZ placement
In high-throughput environments, this misalignment silently generates excess cost. Ensuring that each subnet routes through the NAT Gateway in its own AZ avoids inter-AZ charges and aligns with AWS architectural best practices.
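As a rough illustration, the boto3 sketch below flags subnets whose default route points to a NAT Gateway in a different AZ. It assumes default credentials, a single region, and small result sets (pagination is omitted), and only covers subnets with explicit route table associations:

```python
"""Minimal sketch: flag subnets whose default route targets a NAT Gateway
in a different Availability Zone (cross-AZ data transfer charges)."""
import boto3

ec2 = boto3.client("ec2")

# Map each subnet to its AZ, and each NAT Gateway to the AZ of the subnet it lives in.
subnet_az = {s["SubnetId"]: s["AvailabilityZone"]
             for s in ec2.describe_subnets()["Subnets"]}
natgw_az = {n["NatGatewayId"]: subnet_az[n["SubnetId"]]
            for n in ec2.describe_nat_gateways()["NatGateways"]
            if n["State"] == "available"}

for rt in ec2.describe_route_tables()["RouteTables"]:
    # Subnets explicitly associated with this route table
    # (implicit associations via the main table are not covered here).
    subnets = [a["SubnetId"] for a in rt.get("Associations", []) if a.get("SubnetId")]
    # Default routes that target a NAT Gateway.
    nat_routes = [r["NatGatewayId"] for r in rt.get("Routes", [])
                  if r.get("NatGatewayId") and r.get("DestinationCidrBlock") == "0.0.0.0/0"]
    for subnet_id in subnets:
        for natgw_id in nat_routes:
            if natgw_id in natgw_az and subnet_az[subnet_id] != natgw_az[natgw_id]:
                print(f"{subnet_id} ({subnet_az[subnet_id]}) routes via "
                      f"{natgw_id} in {natgw_az[natgw_id]} -- cross-AZ charges")
```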
Standard Load Balancers are frequently provisioned for internal services, internet-facing applications, or testing environments. When a workload is decommissioned or moved, the load balancer may be left behind without any active backend pool or traffic, but it continues to incur hourly charges for each frontend IP configuration.

Because Azure does not automatically remove or alert on inactive load balancers, and because they may not show significant outbound traffic, these resources often persist unnoticed. In dynamic or multi-team environments, this can result in a growing number of unused Standard Load Balancers generating silent, recurring costs.
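One way to surface these is to scan for Standard Load Balancers whose backend pools are empty. The sketch below uses the azure-mgmt-network SDK with a placeholder subscription ID and treats "no backend NICs or addresses" as a rough proxy for inactivity, not a definitive test:

```python
"""Minimal sketch: list Standard Load Balancers with no backend targets."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for lb in client.load_balancers.list_all():
    if lb.sku and lb.sku.name != "Standard":
        continue
    # A pool with no NIC configurations and no IP-based addresses has nothing attached.
    has_backends = any(pool.backend_ip_configurations or pool.load_balancer_backend_addresses
                       for pool in (lb.backend_address_pools or []))
    if not has_backends:
        frontends = len(lb.frontend_ip_configurations or [])
        print(f"{lb.name}: no backend targets, {frontends} billable frontend IP config(s)")
```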
In Azure, it’s common for public IP addresses to be created as part of virtual machine or load balancer configurations. When those resources are deleted or reconfigured, the IP address may remain in the environment unassigned. While Basic SKUs are free when idle, Standard SKUs incur ongoing hourly charges, even if the address is not in use.

Unassigned Standard public IPs provide no functional value but continue to generate cost, especially in environments with high churn or inconsistent cleanup practices.
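A similar sweep can identify unattached Standard public IPs. This sketch (again with a placeholder subscription ID) treats a missing ip_configuration back-reference as "unassigned", which is a heuristic rather than a complete test:

```python
"""Minimal sketch: find Standard-SKU public IPs not attached to any IP configuration."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for ip in client.public_ip_addresses.list_all():
    # An unattached public IP has no ip_configuration back-reference.
    if ip.ip_configuration is None and ip.sku and ip.sku.name == "Standard":
        print(f"{ip.name}: unassigned Standard public IP ({ip.ip_address or 'no address'})")
```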
When EC2 instances, Lambda functions, or containerized workloads access AWS-managed services without VPC Endpoints, that traffic exits the VPC through a NAT Gateway or Internet Gateway. This introduces unnecessary egress charges and NAT processing costs, especially for data-intensive or high-frequency workloads.
Workloads in private subnets often access AWS services like S3 or DynamoDB. If this traffic is routed through a NAT Gateway, it incurs both hourly and data processing charges. However, AWS offers VPC Gateway Endpoints (for S3/DynamoDB) and Interface Endpoints (for other services), which provide private access paths that bypass the NAT Gateway entirely. When teams fail to use VPC endpoints — often due to default routing configurations or lack of awareness — they unnecessarily route internal service calls through a costlier, public-facing path. This leads to persistent and avoidable spend.
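For S3 and DynamoDB, Gateway Endpoints carry no hourly or data processing charge, so adding one and attaching it to the private subnets' route tables removes the NAT path for that traffic. A minimal boto3 sketch, with placeholder VPC and route table IDs:

```python
"""Minimal sketch: create an S3 Gateway Endpoint so S3 traffic bypasses the NAT Gateway."""
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC ID
    ServiceName="com.amazonaws.us-east-1.s3",   # Gateway endpoints: S3 or DynamoDB
    RouteTableIds=["rtb-0123456789abcdef0"],    # route tables of the private subnets
)
```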
Elastic IPs are often provisioned but forgotten — left unassociated, or still attached to EC2 instances that have been stopped. In either case, AWS treats the EIP as idle and applies an hourly charge. Although the cost per hour is relatively small, these charges accumulate quietly, especially across environments with frequent provisioning, decommissioning, or ephemeral workloads. Many organizations overlook the fact that even a single EIP attached to a stopped instance is billable. Without periodic review, this creates persistent, low-visibility waste across AWS accounts.
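A periodic sweep like the boto3 sketch below can flag both cases: unassociated Elastic IPs and EIPs attached to stopped instances. It assumes read-only EC2 access and omits pagination:

```python
"""Minimal sketch: flag Elastic IPs that are unassociated or attached to stopped instances."""
import boto3

ec2 = boto3.client("ec2")

for addr in ec2.describe_addresses()["Addresses"]:
    instance_id = addr.get("InstanceId")
    if not addr.get("AssociationId"):
        print(f"{addr['PublicIp']}: unassociated Elastic IP")
    elif instance_id:
        state = ec2.describe_instances(InstanceIds=[instance_id]) \
                   ["Reservations"][0]["Instances"][0]["State"]["Name"]
        if state == "stopped":
            print(f"{addr['PublicIp']}: attached to stopped instance {instance_id}")
```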
In dynamic environments — especially during autoscaling, testing, or infrastructure changes — it's common for load balancers to remain provisioned after their backend resources have been decommissioned. When this happens, the load balancer continues to incur hourly charges despite serving no functional purpose. These inactive resources often go unnoticed, particularly in dev/test environments or when deployment pipelines fail to include proper cleanup logic. Over time, the accumulation of unused load balancers contributes to unnecessary recurring costs with no operational benefit.
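Taking AWS Application and Network Load Balancers as one example, the boto3 sketch below lists load balancers whose target groups have no registered targets, which is a reasonable (though not definitive) signal that the backend has been decommissioned:

```python
"""Minimal sketch: list ALBs/NLBs whose target groups have no registered targets."""
import boto3

elbv2 = boto3.client("elbv2")

for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
    arn = lb["LoadBalancerArn"]
    target_groups = elbv2.describe_target_groups(LoadBalancerArn=arn)["TargetGroups"]
    # Count targets registered across all of this load balancer's target groups.
    registered = sum(
        len(elbv2.describe_target_health(TargetGroupArn=tg["TargetGroupArn"])
            ["TargetHealthDescriptions"])
        for tg in target_groups
    )
    if registered == 0:
        print(f"{lb['LoadBalancerName']}: no registered targets, still billed hourly")
```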
Azure WAF configurations attached to Application Gateways can persist after their backend pool resources have been removed — often during environment reconfiguration or application decommissioning. In these cases, the WAF is no longer serving any functional purpose but continues to incur fixed hourly costs. Because no traffic is routed and no applications are protected, the WAF is effectively inactive. These orphaned WAFs are easy to overlook without regular cleanup processes and can quietly accumulate unnecessary charges over time.
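A sketch along these lines, using azure-mgmt-network with a placeholder subscription ID, flags Application Gateways that have WAF enabled but no backend targets:

```python
"""Minimal sketch: find Application Gateways with WAF enabled but empty backend pools."""
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

for agw in client.application_gateways.list_all():
    waf_enabled = (agw.web_application_firewall_configuration is not None
                   or agw.firewall_policy is not None)
    # A pool with neither backend addresses nor attached IP configurations is empty.
    empty = all(not (pool.backend_addresses or pool.backend_ip_configurations)
                for pool in (agw.backend_address_pools or []))
    if waf_enabled and empty:
        print(f"{agw.name}: WAF enabled but no backend targets")
```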
Classic Load Balancers that no longer serve active workloads will persist if they are not properly decommissioned. This often happens after application migrations, architecture changes, or testing activities. Even if no connections or traffic are passing through the CLB, it continues to incur baseline charges until manually deleted. Identifying and removing unused load balancers helps eliminate waste without impacting operations.
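A simple boto3 check against the Classic ELB API can surface CLBs with no registered instances; a follow-up look at CloudWatch request metrics can confirm they are also receiving no traffic:

```python
"""Minimal sketch: list Classic Load Balancers with no registered instances."""
import boto3

elb = boto3.client("elb")

for clb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    if not clb["Instances"]:
        print(f"{clb['LoadBalancerName']}: Classic Load Balancer with no registered instances")
```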
Network Load Balancers that are no longer needed often persist after architecture changes, service decommissioning, or migration projects. When no active TCP connections or traffic flow through the NLB, it still generates hourly operational costs. Identifying and removing these idle resources helps reduce unnecessary networking expenses without affecting service availability.
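Because an NLB can have registered targets yet still carry no traffic, a CloudWatch check on ActiveFlowCount over a trailing window is a useful idle signal. A boto3 sketch, assuming a 14-day lookback (an NLB with no datapoints at all is also flagged):

```python
"""Minimal sketch: flag NLBs with no active flows over the past 14 days."""
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")
elbv2 = boto3.client("elbv2")

now = datetime.now(timezone.utc)
for lb in elbv2.describe_load_balancers()["LoadBalancers"]:
    if lb["Type"] != "network":
        continue
    # The CloudWatch dimension uses the 'net/<name>/<id>' suffix of the ARN.
    lb_dimension = lb["LoadBalancerArn"].split("loadbalancer/")[1]
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/NetworkELB",
        MetricName="ActiveFlowCount",
        Dimensions=[{"Name": "LoadBalancer", "Value": lb_dimension}],
        StartTime=now - timedelta(days=14),
        EndTime=now,
        Period=86400,
        Statistics=["Sum"],
    )
    total_flows = sum(dp["Sum"] for dp in stats["Datapoints"])
    if total_flows == 0:
        print(f"{lb['LoadBalancerName']}: no active flows in 14 days")
```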