Overprovisioned Throughput in Pub/Sub Lite
Category: Other
Cloud Provider: GCP
Service Name: GCP Pub/Sub Lite
Inefficiency Type: Overprovisioned Resource Allocation

Pub/Sub Lite is a cost-effective alternative to standard Pub/Sub, but it requires explicitly provisioning throughput capacity. When publish or subscribe throughput is overestimated, customers continue to pay for unused capacity — similar to idle virtual machines or overprovisioned IOPS. This inefficiency is often found in development environments or early-stage production workloads where traffic patterns are unpredictable or have since decreased.
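A lightweight way to surface this is to compare provisioned publish capacity against observed peak throughput. The sketch below is illustrative only: the topic list, the observed peaks (e.g., exported from Cloud Monitoring), and the 30% utilization threshold are assumptions, not a real API.

```python
# Hypothetical right-sizing check for Pub/Sub Lite topics. The input data
# (provisioned MiB/s per partition and observed peak MiB/s) is assumed to be
# exported separately, e.g. from Cloud Monitoring; topic names and the 30%
# utilization threshold are illustrative.

topics = [
    {"name": "projects/p/locations/us-central1-a/topics/events",
     "partitions": 4,
     "publish_mib_per_sec": 16,          # provisioned per partition
     "observed_peak_mib_per_sec": 6.0},  # peak across the whole topic
]

UTILIZATION_THRESHOLD = 0.30  # flag topics using <30% of provisioned capacity

for t in topics:
    provisioned = t["partitions"] * t["publish_mib_per_sec"]
    utilization = t["observed_peak_mib_per_sec"] / provisioned
    if utilization < UTILIZATION_THRESHOLD:
        print(f"{t['name']}: peak {t['observed_peak_mib_per_sec']:.1f} MiB/s "
              f"vs {provisioned} MiB/s provisioned ({utilization:.0%}); "
              f"consider lowering per-partition capacity or partition count")
```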

Idle GKE Autopilot Clusters with Always-On System Overhead
Category: Compute
Cloud Provider: GCP
Service Name: GCP GKE
Inefficiency Type: Inactive Resource Consuming Baseline Costs

Even when no user workloads are active, GKE Autopilot clusters continue running system-managed pods that accrue compute and storage charges. These include control plane components and built-in agents for observability and networking. If Autopilot clusters are deployed in non-production or experimental environments and left idle, they may silently accrue ongoing charges unrelated to application activity. This inefficiency often occurs in:
* Dev/test clusters that are spun up temporarily but not deleted
* Clusters used for one-time jobs or training workloads
* Scheduled workloads that run infrequently but don't trigger downscaling
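One way to start an audit, sketched below assuming the google-cloud-container client library is available, is to enumerate Autopilot clusters and their environment labels; whether a cluster is actually idle still has to be confirmed from its workloads or monitoring metrics.

```python
# Sketch: enumerate GKE Autopilot clusters as candidates for an idle-cluster
# review. Assumes the google-cloud-container client library; the project ID
# and the "env" label convention are placeholders.
from google.cloud import container_v1

PROJECT_ID = "my-project"  # placeholder

client = container_v1.ClusterManagerClient()
# "locations/-" requests clusters across all zones and regions.
response = client.list_clusters(parent=f"projects/{PROJECT_ID}/locations/-")

for cluster in response.clusters:
    if cluster.autopilot.enabled:
        labels = dict(cluster.resource_labels)
        env = labels.get("env", "unlabeled")
        print(f"{cluster.name} ({cluster.location}, env={env}): "
              f"Autopilot cluster; verify it still runs user workloads")
```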

Over-Retained Exported Object Versions in GCS Versioning Buckets
Category: Storage
Cloud Provider: GCP
Service Name: GCP GCS
Inefficiency Type: Over-Retention of Data

When GCS object versioning is enabled, every overwrite or delete operation creates a new noncurrent version. Without a lifecycle rule to manage old versions, they persist indefinitely. Over time, this results in:
* Accumulation of outdated data
* Unnecessary storage costs, especially in Standard or Nearline classes
* Lack of visibility into what is still needed vs. legacy debris
This issue often goes unnoticed in environments with frequent data updates or automated processes (e.g., logs, models, config snapshots).
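A common remediation is a lifecycle rule that caps noncurrent versions. The sketch below uses the google-cloud-storage client; the bucket name, version count, and age threshold are placeholders to adjust to your recovery requirements.

```python
# Sketch: cap how long noncurrent object versions are retained in a versioned
# GCS bucket. Bucket name, version count, and age are illustrative.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("my-versioned-bucket")  # placeholder name

# Delete noncurrent versions once more than 3 newer versions exist,
# or once a noncurrent version is older than 30 days.
bucket.add_lifecycle_delete_rule(number_of_newer_versions=3, is_live=False)
bucket.add_lifecycle_delete_rule(days_since_noncurrent_time=30, is_live=False)
bucket.patch()

for rule in bucket.lifecycle_rules:
    print(rule)
```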

Excessive Data Scanned Due to Unpartitioned Tables in BigQuery
Category: Databases
Cloud Provider: GCP
Service Name: GCP BigQuery
Inefficiency Type: Suboptimal Configuration

If a table is not partitioned by a relevant column (typically a timestamp), every query scans the entire dataset, even if filtering by date. This leads to:
* High costs per query
* Long execution times
* Inefficient use of resources when querying recent or small subsets of data
This inefficiency is especially common in:
* Event or log data stored in raw, unpartitioned form
* Historical data migrations without schema optimization
* Workloads developed without awareness of BigQuery’s scanning model
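The impact is easy to quantify with a dry run, which reports the bytes a query would scan without executing it. The sketch below, with placeholder project, dataset, table, and column names, shows a dry run and the definition of a time-partitioned replacement table using the google-cloud-bigquery client.

```python
# Sketch: measure how much a query would scan (dry run), then define a
# time-partitioned replacement table. Project, dataset, table, and column
# names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

# 1) Dry run: BigQuery reports the bytes that WOULD be scanned.
job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT * FROM `my_project.my_dataset.events` "
    "WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)",
    job_config=job_config,
)
print(f"Query would scan {job.total_bytes_processed / 1e9:.2f} GB")

# 2) Partition by the timestamp column so date filters prune partitions
#    instead of scanning the whole table.
schema = [
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("payload", "STRING"),
]
table = bigquery.Table("my_project.my_dataset.events_partitioned", schema=schema)
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    field="event_ts",
)
client.create_table(table)
```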

Overprovisioned Memory in Cloud Run Services
Category: Compute
Cloud Provider: GCP
Service Name: GCP Cloud Run
Inefficiency Type: Overprovisioned Resource

Cloud Run allows users to allocate up to 8 GB of memory per container instance. If memory is overestimated — often as a buffer or based on unvalidated assumptions — customers pay for more than what the workload consumes during execution. Unlike in VM-based environments where memory might be shared or underutilized without direct cost impact, in Cloud Run, you're billed precisely for what you allocate. This inefficiency often results from:
* Defaulting to high memory values for “safety”
* Not using monitoring tools to assess actual memory usage
* Lack of clear ownership over service tuning
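A first step is simply to inventory configured memory limits and compare them with observed utilization (for example, Cloud Monitoring's container memory utilization metric for Cloud Run). The sketch below assumes the google-cloud-run (run_v2) client library; project and region are placeholders.

```python
# Sketch: report the memory configured on each Cloud Run service so it can be
# compared with observed utilization. Assumes the google-cloud-run (run_v2)
# client library; project and region are placeholders.
from google.cloud import run_v2

PROJECT_ID = "my-project"
REGION = "us-central1"

client = run_v2.ServicesClient()
parent = f"projects/{PROJECT_ID}/locations/{REGION}"

for service in client.list_services(parent=parent):
    for container in service.template.containers:
        memory = container.resources.limits.get("memory", "default")
        print(f"{service.name}: memory limit = {memory}")
```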

Inefficient Use of Reservations in BigQuery
Category: Databases
Cloud Provider: GCP
Service Name: GCP BigQuery
Inefficiency Type: Underutilized Commitment

Teams often adopt flat-rate pricing (slot reservations) to stabilize costs or optimize for heavy, recurring workloads. However, if query volumes drop — due to seasonal cycles, architectural shifts (e.g., workload migration), or inaccurate forecasting — those reserved slots may sit underused. This inefficiency is easy to miss, as the cost remains fixed and detached from usage volume. Unlike autoscaling models, reservations require active monitoring and manual adjustment. In some organizations, multiple projects reserve separate slot pools, exacerbating waste through fragmentation.
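One way to gauge utilization is to compare average slot consumption from the jobs timeline against the reserved slot count. The sketch below is a rough estimate: the region qualifier, the assumed reservation size, and the seven-day lookback window are placeholders.

```python
# Sketch: estimate average slot usage over the last 7 days from the jobs
# timeline and compare it with the reserved slot count. The region qualifier,
# the 500-slot reservation size, and the lookback window are assumptions;
# the actual reservation size can be read from the BigQuery Reservation API
# or the console.
from google.cloud import bigquery

RESERVED_SLOTS = 500  # assumed reservation size

client = bigquery.Client()
query = """
SELECT
  SUM(period_slot_ms) / (1000 * 60 * 60 * 24 * 7) AS avg_slots_used
FROM `region-us`.INFORMATION_SCHEMA.JOBS_TIMELINE_BY_PROJECT
WHERE period_start >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
"""
row = list(client.query(query).result())[0]
avg_used = row.avg_slots_used or 0.0
print(f"Average slots used: {avg_used:.0f} / {RESERVED_SLOTS} reserved "
      f"({avg_used / RESERVED_SLOTS:.0%})")
```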

Excessive Cold Starts in GCP Cloud Functions
Category: Compute
Cloud Provider: GCP
Service Name: GCP Cloud Functions
Inefficiency Type: Inefficient Configuration

Cloud Functions scale to zero when idle. When invoked after inactivity, they undergo a "cold start," initializing runtime, loading dependencies, and establishing any required network connections (e.g., VPC connectors). These cold starts can dramatically increase execution time, especially for functions with:
* High memory allocations
* Heavy initialization logic
* VPC connector requirements
If cold starts are frequent, customers may be paying for unnecessary compute time — particularly in latency-sensitive workloads — without receiving proportional value.
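A common mitigation is to keep expensive setup in the global scope so it runs once per instance and is reused by warm invocations; configuring a minimum number of instances is another documented option for latency-sensitive functions. The sketch below illustrates the global-initialization pattern with a placeholder handler and client.

```python
# Sketch of the global-initialization pattern for Cloud Functions: expensive
# setup runs once per instance (at cold start) and is reused by every warm
# invocation, instead of being repeated inside the handler. The handler name,
# bucket, and client are illustrative.
from google.cloud import storage

# Runs once per instance, during cold start.
storage_client = storage.Client()

def handle_request(request):
    # Warm invocations reuse the already-initialized client.
    bucket = storage_client.bucket("my-bucket")  # placeholder bucket
    blob = bucket.blob("status.txt")
    return blob.download_as_text() if blob.exists() else "missing"
```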

Delete-on-Termination Disabled for EBS Volume
Category: Storage
Cloud Provider: AWS
Service Name: AWS EBS
Inefficiency Type: Misconfiguration Leading to Future Orphaned Resource

When EC2 instances are provisioned, each attached EBS volume has a `DeleteOnTermination` flag that determines whether it will be deleted when the instance is terminated. If this flag is set to `false` — often unintentionally in custom launch templates, AMIs, or older automation scripts — volumes persist after termination, resulting in orphaned storage. While detached volumes are easy to detect and clean up after the fact, proactively identifying attached volumes with `DeleteOnTermination=false` can prevent future waste before it occurs.
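A proactive scan can surface these volumes before instances are terminated. The boto3 sketch below flags attached volumes with the flag disabled and shows, commented out, how to enable it; the region is a placeholder, and some volumes may be intentionally preserved.

```python
# Sketch: find attached EBS volumes whose DeleteOnTermination flag is false
# and (optionally) enable it. Region is a placeholder; review each instance
# before changing the flag, since some volumes are intentionally preserved.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            for mapping in instance.get("BlockDeviceMappings", []):
                ebs = mapping.get("Ebs", {})
                if ebs and not ebs.get("DeleteOnTermination", True):
                    print(f"{instance['InstanceId']} {mapping['DeviceName']} "
                          f"{ebs['VolumeId']}: DeleteOnTermination=false")
                    # Uncomment to fix in place:
                    # ec2.modify_instance_attribute(
                    #     InstanceId=instance["InstanceId"],
                    #     BlockDeviceMappings=[{
                    #         "DeviceName": mapping["DeviceName"],
                    #         "Ebs": {"DeleteOnTermination": True},
                    #     }],
                    # )
```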

Misaligned S3 Storage Tier Selection Based on Access Patterns
Category: Storage
Cloud Provider: AWS
Service Name: AWS S3
Inefficiency Type: Misconfigured Storage Tier

While moving objects to colder storage classes like Glacier or Infrequent Access (IA) can reduce storage costs, premature transitions without analyzing historical access patterns can lead to unintended expenses. Retrieval charges, restore time delays, and early delete penalties often go unaccounted for in simplistic tiering decisions. This inefficiency arises when teams default to colder tiers based solely on perceived “age” of data or storage savings—without confirming access frequency, restore time SLAs, or application requirements. Unlike inefficiencies focused on *underuse* of cold storage, this inefficiency reflects *overuse* or misalignment, resulting in higher total costs or operational friction.
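Before transitioning, a simple break-even check helps confirm that retrieval charges will not erase the storage savings. The numbers in the sketch below are illustrative placeholders, not current AWS pricing.

```python
# Back-of-the-envelope check before transitioning objects to a colder S3
# class: do retrieval charges eat the storage savings? All prices and the
# access rate below are illustrative placeholders, NOT current AWS pricing;
# substitute real numbers for your region and storage classes.

data_gb = 5_000                      # amount of data being transitioned
standard_price_per_gb = 0.023        # assumed Standard $/GB-month
ia_price_per_gb = 0.0125             # assumed Infrequent Access $/GB-month
retrieval_price_per_gb = 0.01        # assumed IA retrieval $/GB
monthly_access_fraction = 0.40       # fraction of data read back each month

storage_savings = data_gb * (standard_price_per_gb - ia_price_per_gb)
retrieval_cost = data_gb * monthly_access_fraction * retrieval_price_per_gb

print(f"Monthly storage savings: ${storage_savings:,.2f}")
print(f"Expected retrieval cost: ${retrieval_cost:,.2f}")
if retrieval_cost >= storage_savings:
    print("Transition likely not worthwhile at this access rate.")
```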

Underutilized VM Commitments Due to Architectural Drift
Category: Compute
Cloud Provider: GCP
Service Name: GCP Compute Engine
Inefficiency Type: Underutilized Commitment

VM-based Committed Use Discounts in GCP offer cost savings for predictable workloads, but they are rigid: they apply only to specified VM types, quantities, and regions. When organizations evolve their architecture — such as moving to GKE (Kubernetes), Cloud Run, or autoscaling — usage patterns often shift away from the original commitments. Because GCP lacks flexible reallocation options like AWS Convertible RIs or Savings Plans, underutilized commitments lead to sustained, silent waste. This is especially common when workload changes go uncoordinated with finance or centralized planning.
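Auditing starts with an inventory of active commitments that can be set against the current VM footprint (from a billing export or the Instances API). The sketch below assumes the google-cloud-compute (compute_v1) RegionCommitmentsClient; the project ID is a placeholder.

```python
# Sketch: inventory active Compute Engine commitments so they can be compared
# with the current VM footprint. Assumes the google-cloud-compute (compute_v1)
# client library; the project ID is a placeholder.
from google.cloud import compute_v1

PROJECT_ID = "my-project"

client = compute_v1.RegionCommitmentsClient()
for region, scoped in client.aggregated_list(project=PROJECT_ID):
    for commitment in scoped.commitments:
        if commitment.status != "ACTIVE":
            continue
        resources = ", ".join(
            f"{r.amount} {r.type_}" for r in commitment.resources
        )
        print(f"{region}: {commitment.name} ends {commitment.end_timestamp} "
              f"-- covers {resources}")
```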
