Lack of Deduplication and Change Block Tracking in AWS Backup
Storage
Cloud Provider
AWS
Service Name
AWS Backup
Inefficiency Type
Underutilization

AWS Backup does not natively support global deduplication or change block tracking across backups. As a result, even traditional incremental or differential backup strategies (e.g., daily incremental, weekly full) can accumulate redundant data. Over time, this leads to higher-than-necessary storage usage and cost — especially in environments with frequent backup schedules or large data volumes that only change minimally between snapshots. While some third-party agents can implement CBT and deduplication at the client level, AWS Backup alone offers no built-in mechanism to avoid storing unchanged data across backup generations.
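The redundancy compounds with backup frequency. A rough back-of-envelope sketch of the extra storage per weekly cycle — all figures are illustrative assumptions, and the 10x block-level reduction factor is a stand-in, not an AWS Backup number:

```python
# Rough estimate of extra storage kept per weekly cycle when incrementals
# re-store whole changed objects instead of only changed blocks.
# All ratios are illustrative assumptions, not AWS Backup behavior.

def redundant_gib(full_gib: float, daily_change_ratio: float,
                  incrementals_per_cycle: int,
                  cbt_reduction: float = 10.0) -> float:
    """Extra GiB stored per cycle without change-block-level tracking."""
    file_level = full_gib * daily_change_ratio * incrementals_per_cycle
    block_level = file_level / cbt_reduction  # what CBT would have stored
    return file_level - block_level

# 1 TiB dataset, 5% daily churn, 6 daily incrementals between weekly fulls
print(round(redundant_gib(1024, 0.05, 6), 2))
```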

Excessive Lambda Retries (Retry Storms)
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Retry Misconfiguration

Retry storms occur when a function fails and is automatically retried repeatedly due to default retry behavior for asynchronous invocations (e.g., S3, SNS, EventBridge events). If the error is persistent and unhandled, retries can accumulate rapidly — often invisibly — creating a large volume of billable executions with no successful outcome. This is especially costly when functions run for extended durations or have high memory allocation.
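The billed waste scales multiplicatively with retry count, duration, and memory. A toy model of the GB-second charge, assuming the default of 2 asynchronous retries and illustrative workload numbers:

```python
# Toy model of a retry storm's bill: every event fails, is retried up to the
# async retry limit, and every attempt is billed. Figures are illustrative.

def retry_storm_gb_seconds(events: int, retries: int,
                           duration_s: float, memory_gb: float) -> float:
    """Total billable GB-seconds when no attempt ever succeeds."""
    attempts = events * (1 + retries)
    return attempts * duration_s * memory_gb

# 100k failing events, default 2 async retries, 30 s at 1 GB
print(retry_storm_gb_seconds(100_000, 2, 30.0, 1.0))  # 9,000,000 GB-seconds
```

Capping `MaximumRetryAttempts` and routing persistent failures to a dead-letter destination bounds this cost.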

Excessive Shard Count in GCP Bigtable
Cloud Provider
GCP
Service Name
GCP Bigtable
Inefficiency Type
Inefficient Configuration

Bigtable automatically splits data into tablets (shards), which are distributed across provisioned nodes. However, poorly designed row key schemas or excessive shard counts (caused by high cardinality, hash-based keys, or timestamp-first designs) can result in performance bottlenecks or hot spotting. To compensate, users often scale up node counts — increasing costs — when the real issue lies in suboptimal data distribution. This leads to inflated infrastructure spend without actual workload increase.
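Row-key design, not node count, is often the real lever. A toy illustration of the hot-spotting shape — the two-byte tablet router below is a stand-in for Bigtable's lexicographic range splits, not its actual internals:

```python
import hashlib

# Toy router mimicking lexicographic range splits: a key's leading bytes
# decide its tablet. Not Bigtable internals, just the hot-spotting shape.

def tablet_for(row_key: str, tablets: int = 4) -> int:
    return int.from_bytes(row_key[:2].encode(), "big") % tablets

# Timestamp-first keys: a write burst shares one prefix, so one hot tablet.
hot = {tablet_for(f"20240601-{i:04d}") for i in range(100)}

# Hashed-prefix keys: leading bytes vary, so writes spread across tablets.
spread = {tablet_for(hashlib.md5(f"device{i}".encode()).hexdigest())
          for i in range(100)}

print(len(hot), len(spread))
```

With the timestamp-first scheme every write lands on one tablet regardless of how many nodes are provisioned, which is exactly the case where adding nodes raises cost without raising throughput.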

Inactive Memorystore Instance
Cloud Provider
GCP
Service Name
GCP Memorystore
Inefficiency Type
Inactive Resource

Memorystore instances that are provisioned but unused — whether due to deprecated services, orphaned environments, or development/testing phases ending — continue to incur memory and infrastructure charges. Because usage-based metrics like client connections or cache hit ratios are not tied to billing, an idle instance costs the same as a heavily used one. This makes it critical to identify and decommission inactive caches.
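Because billing ignores usage, detection has to come from monitoring metrics. A minimal sketch, assuming connection-count samples have already been exported from Cloud Monitoring (the metric plumbing is omitted):

```python
# Minimal idleness check over exported metric samples (e.g., hourly client
# connection counts). Fetching from Cloud Monitoring is out of scope here.

def is_inactive(connection_samples: list[int], min_samples: int = 24) -> bool:
    """True when a full window of samples exists and none show activity."""
    return len(connection_samples) >= min_samples and not any(connection_samples)

print(is_inactive([0] * 48))      # two fully idle days: deletion candidate
print(is_inactive([0, 0, 3, 0]))  # short window with activity: keep
```

Requiring a minimum window guards against flagging a freshly created instance that simply has no history yet.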

Underutilized Cloud SQL Instance
Cloud Provider
GCP
Service Name
GCP Cloud SQL
Inefficiency Type
Underutilized Resource

Cloud SQL instances are often over-provisioned or left running despite low utilization. Since billing is based on allocated vCPUs, memory, and storage — not usage — any misalignment between actual workload needs and provisioned capacity leads to unnecessary spend. Common causes include:
* Initial oversizing during launch that was never revisited
* Non-production environments with continuous uptime but minimal use
* Databases used intermittently (e.g., for nightly reports) but kept running 24/7
Without rightsizing or scheduling strategies, these instances generate ongoing cost with limited business value.
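Rightsizing can start from a simple rule: pick the smallest vCPU count that keeps the observed peak under a headroom ceiling. A sketch with illustrative inputs (real peaks come from monitoring):

```python
import math

# Rightsizing sketch: smallest vCPU count that keeps the observed peak
# under a headroom ceiling. Inputs are illustrative monitoring values.

def recommend_vcpus(provisioned: int, peak_cpu_util: float,
                    headroom: float = 0.3) -> int:
    """Smallest vCPU count keeping peak usage below (1 - headroom)."""
    needed = provisioned * peak_cpu_util / (1 - headroom)
    return max(1, math.ceil(needed))

# 8 vCPUs peaking at 15% busy could drop to 2
print(recommend_vcpus(8, 0.15))
```

For intermittent workloads (e.g., nightly reports), scheduling stop/start windows often saves more than rightsizing alone.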

Overprovisioned Memory Allocation in Cloud Run Services
Compute
Cloud Provider
GCP
Service Name
GCP Cloud Run
Inefficiency Type
Overprovisioned Resource Allocation

In Cloud Run, each revision is deployed with a fixed memory allocation (e.g., 512MiB, 1GiB, 2GiB, etc.). These settings are often overestimated during initial development or copied from templates. Unlike auto-scaling platforms that adapt instance size based on workload, Cloud Run continues to bill per the allocated amount regardless of actual memory used during execution. If a service consistently uses significantly less memory than allocated, it results in avoidable overpayment per request — especially for high-throughput or long-running services. Since memory and CPU are billed together based on configured values, this inefficiency compounds quickly at scale.
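The gap between allocated and peak-used memory is billed for every request-second, so the waste is easy to quantify. A sketch with placeholder workload numbers (not GCP pricing):

```python
# Waste model: Cloud Run bills the configured memory for the whole request
# duration, so allocated-minus-used memory is pure cost. Numbers are
# illustrative, not GCP pricing.

def wasted_gib_seconds(allocated_gib: float, peak_used_gib: float,
                       requests: int, avg_request_s: float) -> float:
    """GiB-seconds billed for memory the service never touched."""
    return max(0.0, allocated_gib - peak_used_gib) * requests * avg_request_s

# 2 GiB configured, 0.4 GiB peak, 1M requests averaging 200 ms
print(round(wasted_gib_seconds(2.0, 0.4, 1_000_000, 0.2), 3))
```

Because CPU allocation is coupled to the configured memory tier, trimming memory typically trims the CPU charge as well.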

Idle Cloud NAT Gateway Without Active Traffic
Networking
Cloud Provider
GCP
Service Name
GCP Cloud NAT
Inefficiency Type
Idle Resource with Baseline Cost

Each Cloud NAT gateway provisioned in GCP incurs hourly charges for each external IP address attached, regardless of whether traffic is flowing through the gateway. In many environments, NAT configurations are created for temporary access (e.g., one-off updates, patching windows, or ephemeral resources) and are never cleaned up. If no traffic is flowing, these NAT gateways remain idle yet continue to generate charges due to reserved IPs and persistent gateway configuration. This is especially common in non-production environments or when legacy configurations are forgotten.
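Finding these is a matter of joining gateway inventory with traffic metrics. A sketch over already-collected data — the per-IP hourly price is a placeholder, not GCP's rate:

```python
# Flag NAT gateways with zero observed egress over the lookback window and
# estimate what their reserved IPs cost per month. Placeholder pricing.

def idle_nat_report(gateways: list[dict], ip_hour_price: float = 0.005,
                    hours_per_month: int = 730) -> list[tuple]:
    """(name, monthly_ip_cost) for every gateway that moved no bytes."""
    return [(g["name"], g["external_ips"] * hours_per_month * ip_hour_price)
            for g in gateways if g["bytes_sent_30d"] == 0]

fleet = [
    {"name": "nat-prod", "external_ips": 2, "bytes_sent_30d": 10**12},
    {"name": "nat-patch-window", "external_ips": 4, "bytes_sent_30d": 0},
]
print(idle_nat_report(fleet))
```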

Overprovisioned Node Pool in GKE Cluster
Compute
Cloud Provider
GCP
Service Name
GCP GKE
Inefficiency Type
Overprovisioned Resource

Node pools provisioned with large or specialized VMs (e.g., high-memory, GPU-enabled, or compute-optimized) can be significantly overprovisioned relative to the actual pod requirements. If workloads consistently leave a large portion of resources unused (e.g., low CPU/memory request-to-capacity ratio), the organization incurs unnecessary compute spend. This often happens in early cluster design phases, after application demand shifts, or when teams allocate for peak usage without autoscaling.
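A useful overprovisioning signal is the ratio of scheduled pod requests to node-pool capacity. A sketch with illustrative numbers (real figures come from the Kubernetes API):

```python
# Overprovisioning signal: fraction of node-pool CPU capacity actually
# requested by scheduled pods. Illustrative numbers, not live cluster data.

def pool_request_ratio(pod_cpu_requests: list[float],
                       node_vcpus: int, node_count: int) -> float:
    """Fraction of pool CPU capacity requested by pods."""
    return sum(pod_cpu_requests) / (node_vcpus * node_count)

# 4 x 16-vCPU nodes serving pods that request only 6 cores in total
ratio = pool_request_ratio([2.0, 1.5, 1.5, 1.0], 16, 4)
print(f"{ratio:.0%} requested; consider smaller nodes or autoscaling")
```

A persistently low ratio suggests smaller machine types or cluster autoscaling rather than a standing pool sized for peak.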

Idle Cloud Memorystore Redis Instance
Cloud Provider
GCP
Service Name
GCP Cloud Memorystore
Inefficiency Type
Inactive Resource

Cloud Memorystore instances that remain idle—i.e., not receiving read or write requests—continue to incur full costs based on provisioned size. In test environments, migration scenarios, or deprecated application components, Redis instances are often left running unintentionally. Since Redis does not autoscale or suspend, unused capacity results in 100% waste until explicitly deleted.
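One way to act on this is to rank confirmed-idle instances by what they cost, so the largest 100%-waste instances go first. A sketch over placeholder prices and already-gathered operation metrics:

```python
# Rank idle Redis instances by monthly spend so the costliest waste is
# deleted first. The per-GB-hour price is a placeholder, not GCP's rate.

def deletion_priority(instances: list[dict], gb_hour_price: float = 0.05,
                      hours: int = 730) -> list[tuple]:
    """Idle instances (zero ops over the window), costliest first."""
    idle = [i for i in instances if i["ops_per_min"] == 0]
    return sorted(((i["name"], i["size_gb"] * gb_hour_price * hours)
                   for i in idle), key=lambda t: -t[1])

fleet = [
    {"name": "cache-prod", "size_gb": 16, "ops_per_min": 1200},
    {"name": "cache-legacy", "size_gb": 32, "ops_per_min": 0},
    {"name": "cache-dev", "size_gb": 4, "ops_per_min": 0},
]
print(deletion_priority(fleet))
```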

Excessive Retention of Logs in Cloud Logging
Other
Cloud Provider
GCP
Service Name
GCP Cloud Logging
Inefficiency Type
Excessive Retention of Non-Critical Data

By default, Cloud Logging retains logs for 30 days. However, many organizations increase retention to 90 days, 365 days, or longer — even for non-critical logs such as debug-level messages, transient system logs, or audit logs in dev environments. This extended retention can lead to unnecessary costs, especially when:
* Logs are never queried after the first few days
* Observability tooling duplicates logs elsewhere (e.g., SIEM platforms)
* Retention settings are applied globally without considering log type or project criticality
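The steady-state cost of the extra window is straightforward to estimate. A sketch with a placeholder per-GB-month price (not Cloud Logging's actual retention rate):

```python
# Steady-state cost of holding logs past the default 30-day window.
# The per-GB-month price is a placeholder, not Cloud Logging's rate.

def extra_retention_cost(gb_per_day: float, retention_days: int,
                         default_days: int = 30,
                         gb_month_price: float = 0.01) -> float:
    """Monthly cost of the extra log volume kept beyond the default window."""
    extra_days = max(0, retention_days - default_days)
    return gb_per_day * extra_days * gb_month_price

# 50 GB/day of debug logs kept 365 days instead of 30
print(round(extra_retention_cost(50, 365), 2))
```

Per-bucket retention settings and exclusion filters let this be tuned by log type instead of globally.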
