Overprovisioned or Idle Azure Container Registry Tier
Cloud Provider: Azure
Service Name: Azure Container Registry
Inefficiency Type: Overprovisioned Resource

Azure Container Registry charges a fixed daily fee based on the selected tier — Basic, Standard, or Premium — regardless of whether the registry is actively used. This means a registry with zero image pulls, zero pushes, and no active workloads consuming it still incurs the same daily charge as a heavily utilized one. Teams commonly provision Standard or Premium tiers as a default "production-safe" choice without evaluating whether the advanced capabilities exclusive to those tiers — such as geo-replication, private endpoints, content trust, or zone redundancy — are actually needed. The result is a persistent overspend on tier fees that deliver no incremental value.

This waste pattern is especially prevalent in organizations with decentralized container workflows. Registries created for short-lived projects, development and testing environments, or CI/CD pipelines are frequently left running long after their purpose has ended. Because Azure Container Registry has no free tier and cannot be paused or stopped — deletion is the only way to cease billing — these forgotten registries quietly accumulate fixed charges indefinitely. Across an organization with dozens of registries spread across teams and environments, the compounding effect of idle or over-tiered registries can represent a meaningful and entirely avoidable cost.
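
One way to surface downgrade candidates is to enumerate registries and their tiers across a subscription. The sketch below shells out to the Azure CLI (assuming az is installed and authenticated); the review criteria in the comments are suggestions, not a definitive policy:

```python
import json
import subprocess

# List all container registries in the current subscription and flag
# Standard/Premium registries for review. Assumes the Azure CLI ("az")
# is installed and logged in.
registries = json.loads(
    subprocess.check_output(["az", "acr", "list", "--output", "json"])
)

for reg in registries:
    sku = reg["sku"]["name"]  # Basic | Standard | Premium
    if sku in ("Standard", "Premium"):
        # Flag for manual review: does this registry actually use
        # geo-replication, private endpoints, or zone redundancy?
        print(f"Review {reg['name']} (rg: {reg['resourceGroup']}): tier={sku}")
        # Downgrade candidates can be moved with, e.g.:
        #   az acr update --name <registry> --sku Basic
```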

AWS Marketplace Annual Subscriptions Reverting to Pay-As-You-Go Rates
Cloud Provider: AWS
Service Name: AWS Marketplace
Inefficiency Type: Suboptimal Pricing Model

When organizations purchase third-party software through AWS Marketplace using annual subscriptions, they typically receive meaningful discounts compared to hourly pay-as-you-go (PAYG) pricing. However, when these annual subscriptions expire without active renewal, billing automatically reverts to the default hourly PAYG rate — which can be substantially higher. This is not a renewal at a higher rate; it is the absence of a renewal action that causes the subscription to lapse and the costlier pricing tier to take effect. Because the subscription simply expires silently, many teams do not realize they have lost their discounted rate until the cost increase appears in the next billing cycle.

This inefficiency is especially difficult to manage in enterprise environments where multiple Marketplace subscriptions are purchased at irregular intervals throughout the year, each with its own expiration date. Private offers — which provide custom-negotiated pricing — add further complexity because they cannot auto-renew by design; when a private offer expires, the customer either moves to the product's higher public pricing or loses the subscription entirely. The financial impact can be severe: in some cases, the licensing cost at PAYG rates can exceed the cost of the underlying compute infrastructure itself, as commonly seen with enterprise software such as SUSE Linux for SAP workloads.

Additionally, for AMI-based products, annual subscriptions are tied to specific instance types. Changing instance types during the subscription period causes billing to revert to hourly rates for the new type, creating another avenue for unintended cost increases even before the subscription formally expires.
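
A lightweight defense is tracking expiration dates and the cost of a silent lapse. The sketch below uses an illustrative, manually maintained inventory (product names and dollar figures are hypothetical); in practice the data would come from the Marketplace console or the Agreements API:

```python
from datetime import date, timedelta

# Hypothetical subscription inventory: name, annual end date,
# discounted annual cost, estimated annual cost at PAYG rates.
subscriptions = [
    ("suse-sap-image", date(2025, 9, 30), 48_000, 110_000),
    ("observability-saas", date(2025, 7, 15), 24_000, 36_000),
]

RENEWAL_WINDOW = timedelta(days=60)  # start renewal conversations early
today = date.today()

for name, end, annual_cost, payg_cost in subscriptions:
    if today + RENEWAL_WINDOW >= end:
        lapse_penalty = payg_cost - annual_cost
        print(
            f"{name}: expires {end}; a silent lapse to PAYG would add "
            f"~${lapse_penalty:,}/yr, schedule renewal now"
        )
```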

Idle or Untriggered Azure Logic Apps Generating Continuous Charges
Cloud Provider: Azure
Service Name: Azure Logic Apps
Inefficiency Type: Unused Resource

Azure Logic Apps can quietly accumulate costs even when no workflows are actively executing, but the mechanism differs significantly depending on the deployment model. In the Consumption (multitenant) plan, Logic Apps with polling triggers continue to generate billable trigger executions every time the trigger checks for events — even when no events are found and no workflow runs are initiated. A polling trigger configured to check every 30 seconds produces thousands of billable executions per day, all charged at the per-execution rate, regardless of whether any useful work is performed. Webhook or push-based triggers avoid this particular waste, but retained run history and storage operations can still accrue minor costs over time.

In the Standard (single-tenant) plan, the cost driver is fundamentally different. Customers pay for reserved compute capacity — vCPU and memory — on an hourly basis, whether or not any workflows execute. An idle Standard Logic App incurs the full hosting plan charges around the clock. Disabling a Standard Logic App prevents triggers from firing but does not stop the hosting plan billing; only deletion or consolidation of the underlying plan reduces costs.

These idle Logic Apps commonly arise after application decommissioning, migration projects, or proof-of-concept work that was never cleaned up. At enterprise scale, where dozens or hundreds of Logic Apps may exist across multiple environments, the cumulative waste from untriggered workflows and unused hosting plans can become substantial — particularly when the resources are spread across teams and subscriptions with no centralized review process.
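
The Consumption-plan waste is easy to estimate with simple arithmetic. The sketch below assumes a placeholder per-execution rate; substitute the current figure from the Azure Logic Apps pricing page:

```python
# Back-of-the-envelope cost of an idle Consumption-plan polling trigger.
# PRICE_PER_EXECUTION is an assumed placeholder rate in USD; verify it
# against the current pricing page before relying on the output.
POLL_INTERVAL_SECONDS = 30
PRICE_PER_EXECUTION = 0.000025

polls_per_day = 24 * 60 * 60 // POLL_INTERVAL_SECONDS  # 2,880 per day
monthly_polls = polls_per_day * 30                     # 86,400 per month
monthly_cost = monthly_polls * PRICE_PER_EXECUTION

print(f"{polls_per_day} billable polls/day even with zero events")
print(f"~${monthly_cost:.2f}/month per idle polling trigger")
```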

Overselecting Data and Misusing LIMIT for Cost Control in BigQuery
Cloud Provider: GCP
Service Name: GCP BigQuery
Inefficiency Type: Excessive Data Processed

This inefficiency occurs when analysts use SELECT * (reading more columns than needed) or rely on LIMIT as a cost-control mechanism. In BigQuery, projecting excess columns increases the amount of data read and can materially raise query cost, particularly on wide tables and frequently run queries. Separately, applying LIMIT does not inherently reduce bytes processed on non-clustered tables; it merely caps the result set returned. The “LIMIT saves cost” assumption holds only for clustered tables, where BigQuery may stop scanning early once enough clustered blocks have been read.
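
BigQuery's dry-run mode makes the difference easy to verify before any bytes are billed. A minimal sketch with the google-cloud-bigquery client (table and column names are illustrative):

```python
from google.cloud import bigquery

client = bigquery.Client()
dry_run = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# Table and column names are illustrative.
wide = "SELECT * FROM `project.dataset.events` LIMIT 10"
narrow = "SELECT user_id, event_ts FROM `project.dataset.events` LIMIT 10"

for label, sql in (("SELECT * + LIMIT", wide), ("projected columns", narrow)):
    # A dry-run query returns immediately with the bytes it *would* scan.
    job = client.query(sql, job_config=dry_run)
    gib = job.total_bytes_processed / 1024**3
    print(f"{label}: {gib:.2f} GiB would be billed")
```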

Inactive Licensed Users in Azure DevOps Organization
Cloud Provider: Azure
Service Name: Azure DevOps
Inefficiency Type: Unused Licensed Users

This inefficiency occurs when licensed Azure DevOps users remain assigned after individuals leave the organization or stop using the platform. These inactive users continue to generate recurring per-user charges despite providing no ongoing value, leading to unnecessary spend over time.
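
The User Entitlements REST API exposes each user's license and last-access date, which is enough to flag removal candidates. A rough sketch (the organization name is illustrative, and the api-version and paging details should be checked against the current REST reference):

```python
import os
import requests

# Sketch against the Azure DevOps User Entitlements REST API.
# api-version and paging behavior may differ for your organization.
ORG = "my-org"  # illustrative
url = f"https://vsaex.dev.azure.com/{ORG}/_apis/userentitlements"
resp = requests.get(
    url,
    params={"api-version": "7.1-preview.3"},
    auth=("", os.environ["AZDO_PAT"]),  # personal access token
)
resp.raise_for_status()

for member in resp.json().get("members", []):
    name = member["user"]["displayName"]
    license_ = member["accessLevel"]["licenseDisplayName"]
    last_access = member.get("lastAccessedDate", "never")
    print(f"{name} ({license_}): last access {last_access}")
```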

Non-Qualifying AWS Marketplace SaaS Spend Counting Toward Commitments
Cloud Provider: AWS
Service Name: AWS Marketplace
Inefficiency Type: Commitment Eligibility Misclassification

This inefficiency occurs when teams assume AWS Marketplace SaaS purchases will contribute toward EDP or PPA commitments, but the SaaS product is not eligible under AWS’s “Deployed on AWS” standard. As of May 1, 2025, AWS Marketplace allows SaaS products regardless of where they are hosted, while separately identifying products that qualify for commitment drawdown via a visible “Deployed on AWS” badge.

Eligibility is determined based on the invoice date, not the contract signing date. As a result, Marketplace SaaS contracts signed prior to the policy change may still generate invoices after May 1, 2025 that no longer qualify for commitment retirement. This can lead to Marketplace spend appearing on AWS invoices without reducing commitments, creating false confidence in commitment progress and increasing the risk of end-of-term shortfalls.
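
One way to keep commitment tracking honest is to separate Marketplace spend by eligibility in a Cost and Usage Report extract. The sketch below assumes CUR-style column names and a manually maintained badge list, since the “Deployed on AWS” flag is shown in Marketplace rather than in billing data:

```python
import pandas as pd

# Products carrying the "Deployed on AWS" badge; maintained by hand
# because eligibility is not surfaced in the CUR itself (hypothetical).
DEPLOYED_ON_AWS = {"vendor-product-a"}

cur = pd.read_parquet("cur_extract.parquet")  # illustrative path
mp = cur[cur["bill_billing_entity"] == "AWS Marketplace"]

mp = mp.assign(
    qualifies=mp["product_product_name"].str.lower().isin(DEPLOYED_ON_AWS)
)
# Spend that does vs. does not retire EDP/PPA commitment.
summary = mp.groupby("qualifies")["line_item_unblended_cost"].sum()
print(summary)
```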

Suboptimal Storage for Logs
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Misaligned Storage Destination

Many organizations retain all logs in Cloud Logging’s standard storage, even when the data is rarely queried or required only for audit or compliance. Logging buckets are priced for active access and are not optimized for low-frequency retrieval, which results in unnecessary expense. Redirecting logs to BigQuery or Cloud Storage can provide better cost efficiency, particularly when coupled with lifecycle policies or table partitioning. Choosing the optimal storage destination based on access frequency and analytics needs is essential to control log retention costs.
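
Routing is configured through log sinks. A minimal sketch with the google-cloud-logging client that sends audit logs to a Cloud Storage bucket (names and filter are illustrative; the sink's writer identity must be granted access to the destination after creation):

```python
import google.cloud.logging

# Route rarely queried audit logs to a Cloud Storage bucket instead of
# keeping them in Cloud Logging's standard storage. Bucket name and
# log filter are illustrative.
client = google.cloud.logging.Client()
sink = client.sink(
    "audit-logs-to-gcs",
    filter_='logName:"cloudaudit.googleapis.com"',
    destination="storage.googleapis.com/my-audit-log-archive",
)
if not sink.exists():
    sink.create()
```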

Resources Generating Excessive INFO Logs
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Excessive Log Verbosity

Some GCP services and workloads generate INFO-level logs at very high frequencies — for example, load balancers logging every HTTP request or GKE nodes logging system health messages. While valuable for debugging, these logs can flood Cloud Logging with non-critical data. Without log-level tuning or exclusion filters, organizations incur continuous ingestion charges for messages that are seldom analyzed. Over time, this behavior compounds into a persistent waste driver across large-scale environments.
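
Exclusion filters stop these messages before they are ingested. A sketch using the Logging config client (the resource filter is illustrative; scope it to whatever dominates your ingestion bill):

```python
from google.cloud.logging_v2.services.config_service_v2 import (
    ConfigServiceV2Client,
)
from google.cloud.logging_v2.types import LogExclusion

# Exclude high-volume INFO messages from ingestion. The filter below
# targets load-balancer request logs as an illustrative example.
config = ConfigServiceV2Client()
exclusion = LogExclusion(
    name="drop-lb-info",
    description="Drop INFO-level HTTP request logs from load balancers",
    filter='resource.type="http_load_balancer" AND severity=INFO',
)
config.create_exclusion(parent="projects/my-project", exclusion=exclusion)
```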

Logging Buckets in Non-Production Environments Storing INFO Logs
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Excessive Ingestion of Low-Value Logs

Non-production environments frequently generate INFO-level logs that capture expected system behavior or routine API calls. While useful for troubleshooting in development, they rarely need to be retained. Allowing all INFO logs to be ingested and stored in Logging buckets across dev or staging environments can lead to disproportionate ingestion and storage costs. This inefficiency often persists because log routing and severity filters are not differentiated between production and non-production projects.
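
Because the fix is identical across every non-production project, it can be applied fleet-wide. A sketch that reuses one INFO exclusion across a list of dev and staging projects (project IDs are illustrative):

```python
from google.cloud.logging_v2.services.config_service_v2 import (
    ConfigServiceV2Client,
)
from google.cloud.logging_v2.types import LogExclusion

# Apply the same INFO exclusion to every non-production project so
# dev/staging ingestion stops defaulting to production verbosity.
NON_PROD_PROJECTS = ["team-a-dev", "team-a-staging", "team-b-dev"]

config = ConfigServiceV2Client()
exclusion = LogExclusion(
    name="nonprod-drop-info",
    description="Non-prod: do not ingest INFO logs",
    filter="severity=INFO",
)
for project in NON_PROD_PROJECTS:
    config.create_exclusion(parent=f"projects/{project}", exclusion=exclusion)
```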

Duplicate Storage of Logs in Cloud Logging
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Redundant Log Routing Configuration

Duplicate log storage occurs when multiple sinks capture the same log data — for example, organization-wide sinks exporting all logs to Cloud Storage and project-level sinks doing the same. This redundancy results in paying twice (or more) for identical data. It often arises from decentralized logging configurations, inherited policies, or unclear ownership between teams. The problem is compounded when logs are routed both to Cloud Logging and external observability platforms, creating parallel ingestion streams and double billing.
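
Duplicates can be surfaced by listing sinks at each level of the resource hierarchy and grouping them by destination and filter. A sketch with the Logging config client (parent resource names are illustrative):

```python
from collections import defaultdict

from google.cloud.logging_v2.services.config_service_v2 import (
    ConfigServiceV2Client,
)

# Enumerate sinks across an organization and its projects, then group
# by (destination, filter) to surface likely duplicate routing.
PARENTS = [
    "organizations/123456789",
    "projects/team-a-prod",
    "projects/team-b-prod",
]

config = ConfigServiceV2Client()
routes = defaultdict(list)
for parent in PARENTS:
    for sink in config.list_sinks(parent=parent):
        key = (sink.destination, sink.filter)
        routes[key].append(f"{parent}/sinks/{sink.name}")

for (destination, log_filter), sinks in routes.items():
    if len(sinks) > 1:
        print(f"Possible duplicate routing to {destination!r}:")
        for s in sinks:
            print(f"  {s}")
```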
