Overselecting Data and Misusing LIMIT for Cost Control in BigQuery
Cloud Provider: GCP
Service Name: GCP BigQuery
Inefficiency Type: Excessive data processed

This inefficiency occurs when analysts use SELECT * (reading more columns than needed) and/or rely on LIMIT as a cost-control mechanism. In BigQuery, projecting excess columns increases the amount of data read and can materially raise query cost, particularly on wide tables and frequently run queries. Separately, applying LIMIT does not inherently reduce bytes processed on non-clustered tables; it mainly caps the result set returned. The “LIMIT saves cost” assumption can hold only on clustered tables, where BigQuery may be able to stop scanning earlier once enough clustered blocks have been read.
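
A dry run makes the difference visible before any cost is incurred. The sketch below uses the google-cloud-bigquery Python client; the project, table, and column names are hypothetical:

```
# Minimal sketch with the google-cloud-bigquery client.
# The dataset, table, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

def dry_run_bytes(sql: str) -> int:
    """Return the bytes BigQuery would process, without running the query."""
    config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=config)
    return job.total_bytes_processed

wide = dry_run_bytes("SELECT * FROM `my-project.analytics.events`")
narrow = dry_run_bytes(
    "SELECT event_id, event_ts FROM `my-project.analytics.events`"
)
print(f"SELECT *        : {wide / 1e9:.2f} GB scanned")
print(f"explicit columns: {narrow / 1e9:.2f} GB scanned")

# Adding LIMIT 10 to either statement would leave both dry-run numbers
# unchanged on a non-clustered table: BigQuery still scans the columns.
```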

Inactive Licensed Users in Azure DevOps Organization
Cloud Provider: Azure
Service Name: Azure DevOps
Inefficiency Type: Unused licensed users

This inefficiency occurs when licensed Azure DevOps users remain assigned after individuals leave the organization or stop using the platform. These inactive users continue to generate recurring per-user charges despite providing no ongoing value, leading to unnecessary spend over time.
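
One way to surface these users is the Azure DevOps User Entitlements REST API, whose entries carry a lastAccessedDate. A hedged Python sketch follows; the organization name, PAT environment variable, 90-day threshold, and the exact response shape are assumptions to verify against your API version:

```
import os
from datetime import datetime, timedelta, timezone

import requests

ORG = "my-org"                      # hypothetical organization
PAT = os.environ["AZDO_PAT"]        # personal access token (assumed env var)
URL = f"https://vsaex.dev.azure.com/{ORG}/_apis/userentitlements"

resp = requests.get(
    URL,
    params={"api-version": "7.1-preview.3"},
    auth=("", PAT),                 # basic auth: empty username + PAT
)
resp.raise_for_status()
data = resp.json()

# ISO dates compare correctly as strings; 90 days is an assumed threshold.
cutoff = (datetime.now(timezone.utc) - timedelta(days=90)).strftime("%Y-%m-%d")
# The list key varies by api-version ("members" vs "items").
for member in data.get("members") or data.get("items", []):
    last_access = member.get("lastAccessedDate", "")[:10]
    # Users who never signed in report the zero date 0001-01-01.
    if last_access <= cutoff:
        user = member["user"]["principalName"]
        level = member["accessLevel"]["licenseDisplayName"]
        print(f"candidate for removal: {user} ({level})")
```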

Non-Qualifying AWS Marketplace SaaS Spend Counting Toward Commitments
Cloud Provider: AWS
Service Name: AWS Marketplace
Inefficiency Type: Commitment eligibility misclassification

This inefficiency occurs when teams assume AWS Marketplace SaaS purchases will contribute toward EDP or PPA commitments, but the SaaS product is not eligible under AWS’s “Deployed on AWS” standard. As of May 1, 2025, AWS Marketplace allows SaaS products regardless of where they are hosted, while separately identifying products that qualify for commitment drawdown via a visible “Deployed on AWS” badge.

Eligibility is determined based on the invoice date, not the contract signing date. As a result, Marketplace SaaS contracts signed prior to the policy change may still generate invoices after May 1, 2025 that no longer qualify for commitment retirement. This can lead to Marketplace spend appearing on AWS invoices without reducing commitments, creating false confidence in commitment progress and increasing the risk of end-of-term shortfalls.
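
A first-pass detection can start from the Cost and Usage Report (CUR). The pandas sketch below assumes classic CUR CSV column headers and a hypothetical export file; badge status is not present in the CUR, so flagged spend still needs manual verification against each Marketplace listing:

```
# Rough pandas sketch over a CUR extract. Column names follow the
# classic CUR CSV headers; "Deployed on AWS" eligibility is NOT in the
# CUR, so every flagged product needs a manual check.
import pandas as pd

cur = pd.read_csv("cur_extract.csv")  # hypothetical CUR export

marketplace = cur[cur["bill/BillingEntity"] == "AWS Marketplace"]
post_policy = marketplace[marketplace["lineItem/UsageStartDate"] >= "2025-05-01"]

review = (
    post_policy.groupby("lineItem/ProductCode")["lineItem/UnblendedCost"]
    .sum()
    .sort_values(ascending=False)
)
print("Marketplace spend invoiced after 2025-05-01; verify")
print("'Deployed on AWS' eligibility before counting it toward EDP/PPA:")
print(review)
```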

Suboptimal Storage for Logs
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Misaligned Storage Destination

Many organizations retain all logs in Cloud Logging’s standard storage, even when the data is rarely queried or required only for audit or compliance. Logging buckets are priced for active access and are not optimized for low-frequency retrieval, which results in unnecessary expense. Redirecting logs to BigQuery or Cloud Storage can provide better cost efficiency, particularly when coupled with lifecycle policies or table partitioning. Choosing the optimal storage destination based on access frequency and analytics needs is essential to controlling log retention costs.
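
As an illustration, a log sink can redirect rarely queried entries to a Cloud Storage bucket, where lifecycle rules can then move them to colder storage classes. A minimal sketch with the google-cloud-logging Python client; the sink name, filter, and bucket are hypothetical:

```
# Route audit entries to a Cloud Storage bucket instead of keeping
# them in Logging buckets. Sink, filter, and bucket names are
# hypothetical placeholders.
from google.cloud import logging

client = logging.Client()

sink = client.sink(
    "audit-archive-sink",
    # Only the rarely queried audit entries; adjust to your needs.
    filter_='logName:"cloudaudit.googleapis.com"',
    destination="storage.googleapis.com/my-log-archive-bucket",
)
sink.create()
# After creation, grant the sink's writer identity write access on the
# destination bucket, or exports will fail silently.
print(f"created sink {sink.name} -> {sink.destination}")
```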

Resources Generating Excessive INFO Logs
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Excessive Log Verbosity

Some GCP services and workloads generate INFO-level logs at very high frequencies — for example, load balancers logging every HTTP request or GKE nodes logging system health messages. While valuable for debugging, these logs can flood Cloud Logging with non-critical data. Without log-level tuning or exclusion filters, organizations incur continuous ingestion charges for messages that are seldom analyzed. Over time, this behavior compounds into a persistent waste driver across large-scale environments.
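
Exclusion filters address this directly: excluded entries are never ingested and thus never billed for ingestion, at the cost of losing them entirely. A hedged sketch using the lower-level ConfigServiceV2 client from google-cloud-logging; the project ID, exclusion name, and filter are illustrative:

```
# Project-level log exclusion: entries matching the filter are dropped
# before ingestion. Project ID, exclusion name, and filter are
# illustrative assumptions.
from google.cloud.logging_v2.services.config_service_v2 import (
    ConfigServiceV2Client,
)
from google.cloud.logging_v2.types import LogExclusion

client = ConfigServiceV2Client()

exclusion = LogExclusion(
    name="exclude-lb-info",
    description="Drop INFO-and-below load balancer request logs",
    filter='resource.type="http_load_balancer" AND severity<=INFO',
)
client.create_exclusion(
    parent="projects/my-project",   # hypothetical project
    exclusion=exclusion,
)
```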

Logging Buckets in Non-Production Environments Storing Info Logs
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Excessive Ingestion of Low-Value Logs

Non-production environments frequently generate INFO-level logs that capture expected system behavior or routine API calls. While useful for troubleshooting in development, they rarely need to be retained. Allowing all INFO logs to be ingested and stored in Logging buckets across dev or staging environments can lead to disproportionate ingestion and storage costs. This inefficiency often persists because log routing and severity filters are not differentiated between production and non-production projects.
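
One pattern, sketched below with the high-level google-cloud-logging client, is to tighten the _Default sink's filter in dev and staging projects so only WARNING and above is ingested, while production projects keep their defaults. The project name is hypothetical, and note that this replaces the sink's existing filter wholesale:

```
from google.cloud import logging

client = logging.Client(project="my-dev-project")   # hypothetical project

sink = client.sink("_Default")
sink.reload()                       # pull the sink's current configuration
# Replaces the existing filter wholesale: if your _Default sink already
# carries routing exclusions, merge them into the new expression.
sink.filter_ = "severity>=WARNING"
sink.update()
```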

Duplicate Storage of Logs in Cloud Logging
Cloud Provider: GCP
Service Name: GCP Cloud Logging
Inefficiency Type: Redundant Log Routing Configuration

Duplicate log storage occurs when multiple sinks capture the same log data — for example, organization-wide sinks exporting all logs to Cloud Storage and project-level sinks doing the same. This redundancy results in paying twice (or more) for identical data. It often arises from decentralized logging configurations, inherited policies, or unclear ownership between teams. The problem is compounded when logs are routed both to Cloud Logging and external observability platforms, creating parallel ingestion streams and double billing.
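
Finding these overlaps can be as simple as inventorying sinks and grouping them by destination. A small sketch with the google-cloud-logging Python client, run per project (organization- and folder-level sinks would need the same treatment separately); the project ID is hypothetical:

```
# Inventory every sink in a project and group by destination to spot
# data being exported more than once.
from collections import defaultdict

from google.cloud import logging

client = logging.Client(project="my-project")   # hypothetical project

by_destination = defaultdict(list)
for sink in client.list_sinks():
    by_destination[sink.destination].append((sink.name, sink.filter_))

for destination, sinks in by_destination.items():
    if len(sinks) > 1:
        print(f"multiple sinks route to {destination}:")
        for name, log_filter in sinks:
            print(f"  {name}: {log_filter}")
```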

Excessive AWS Config Costs from Spot Instances
Cloud Provider: AWS
Service Name: AWS Config
Inefficiency Type: Over-Recording of Ephemeral Resources

Spot Instances are designed to be short-lived, with frequent interruptions and replacements. When AWS Config continuously records every lifecycle change for these instances, it produces a large number of configuration items recorded (CIRs), the unit on which AWS Config billing is based. This drives costs significantly higher without delivering meaningful compliance insight, since Spot Instances are typically stateless and non-critical. In environments with heavy Spot usage, Config costs can balloon and exceed the value of tracking these transient resources.
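
Where the churn comes from EC2 instances, one mitigation is AWS Config's recording-frequency override: keep continuous recording overall but record AWS::EC2::Instance daily. A hedged boto3 sketch; the recorder name and role ARN are placeholders:

```
# Keep continuous recording as the default, but drop EC2 instances to
# DAILY so Spot interruptions and replacements stop generating a
# configuration item per change. Recorder name and role ARN are
# placeholders.
import boto3

config = boto3.client("configservice")

config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",
        "recordingGroup": {"allSupported": True},
        "recordingMode": {
            "recordingFrequency": "CONTINUOUS",
            "recordingModeOverrides": [
                {
                    "description": "Spot churn: daily is enough",
                    "resourceTypes": ["AWS::EC2::Instance"],
                    "recordingFrequency": "DAILY",
                }
            ],
        },
    }
)
```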

Continuous AWS Config Recording in Non-Production Environments
Cloud Provider: AWS
Service Name: AWS Config
Inefficiency Type: Excessive Recording Frequency

By default, AWS Config is enabled in continuous recording mode. While this may be justified for production workloads where detailed auditability is critical, it is rarely necessary in non-production environments. Frequent changes in development or testing environments — such as redeploying Lambda functions, ECS tasks, or EC2 instances — generate large volumes of CIRs. This results in disproportionately high costs with minimal benefit to governance or compliance. Switching non-production environments to daily recording reduces CIR volume significantly while maintaining sufficient visibility for tracking changes.
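
The same put_configuration_recorder API can switch a whole non-production account to daily recording. A sketch with placeholder recorder name and role ARN:

```
import boto3

config = boto3.client("configservice")

config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",
        "recordingGroup": {"allSupported": True},
        # One configuration item per resource per day, instead of one
        # per change; usually plenty for dev/test visibility.
        "recordingMode": {"recordingFrequency": "DAILY"},
    }
)
```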

Unnecessary Default Log Retention in Datadog
Cloud Provider: Datadog
Inefficiency Type: Excessive Retention Configuration

Many organizations keep Datadog’s default log retention settings without evaluating business requirements. Defaults may extend retention far beyond what is useful for troubleshooting, performance monitoring, or compliance. This leads to unnecessary storage and indexing costs, particularly in non-production environments or for logs with limited value after a short period. By adjusting retention per project, environment, or service, organizations can reduce spend while still meeting compliance and operational needs.
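
Retention is set per index, so non-production logs can be shortened without touching production. A hedged sketch against Datadog's Logs Indexes API; the index name, filter query, and 3-day value are assumptions, and the update body must restate the index filter:

```
# Shorten retention on a non-production Datadog logs index. Index name,
# filter query, and the 3-day retention are illustrative assumptions.
import os

import requests

headers = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

resp = requests.put(
    "https://api.datadoghq.com/api/v1/logs/config/indexes/non-prod",
    headers=headers,
    json={
        "filter": {"query": "env:(dev OR staging)"},
        "num_retention_days": 3,   # down from a 15- or 30-day default
    },
)
resp.raise_for_status()
print(resp.json())
```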
