Missing Partition Pruning in Delta Lake Table Queries
Databases
Cloud Provider
AWS
Service Name
Databricks
Inefficiency Type
Inefficient Configuration

When Delta Lake tables are partitioned by specific columns — such as date, region, or tenant identifier — the query engine can use partition pruning to limit data scans to only the relevant subset of files. However, when queries against these partitioned tables omit filter predicates on partition columns, the engine is forced to perform a full table scan across all partitions. This means the cluster reads every data file in the table regardless of how much data the query actually needs, directly inflating both execution time and Databricks Unit (DBU) consumption.

This pattern is especially common in several scenarios: legacy SQL queries written before tables were partitioned, dynamically generated queries from applications or BI tools that do not incorporate partition column awareness, and ad-hoc exploratory queries by analysts unfamiliar with the table's partitioning strategy. On large time-series datasets, the difference can be dramatic — a query that should scan only a few gigabytes of recent data may instead process terabytes across the entire table history. Because Databricks bills DBUs per second, a query that runs significantly longer due to scanning unnecessary data consumes proportionally more DBUs, compounding the waste across both the Databricks platform charges and the underlying cloud infrastructure costs.

This inefficiency is distinct from tables that lack partitioning entirely. Here, the partitioning infrastructure exists and is correctly configured, but queries fail to leverage it — making the investment in partitioning effectively wasted while still incurring full-scan costs.
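
As an illustration, the difference comes down to whether the query carries a predicate on a partition column. The sketch below assumes a hypothetical `events` table partitioned by `event_date`; a production check would inspect the query plan (e.g. via `EXPLAIN`) rather than regex-matching SQL text:

```python
import re

# Hypothetical partition column for an `events` table; real tables may
# be partitioned by date, region, tenant identifier, etc.
PARTITION_COLUMNS = {"event_date"}

def references_partition_column(sql, partition_cols):
    """Naive check: does the WHERE clause mention any partition column?"""
    match = re.search(r"\bwhere\b(.*)", sql, flags=re.IGNORECASE | re.DOTALL)
    if not match:
        return False  # no WHERE clause at all: guaranteed full scan
    where_clause = match.group(1).lower()
    return any(col.lower() in where_clause for col in partition_cols)

# Prunes: the engine can skip every partition outside the date range.
pruned = "SELECT * FROM events WHERE event_date >= '2024-01-01' AND status = 'ok'"
# Full scan: no predicate on event_date, so every partition is read.
full_scan = "SELECT * FROM events WHERE status = 'ok'"
```

The first query lets the engine skip all but the requested date partitions; the second forces a read of the entire table history even though it may need only a small subset of rows.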

Continuous Backup Enabled on Non-Production MongoDB Atlas Clusters
Databases
Cloud Provider
Service Name
MongoDB Atlas
Inefficiency Type
Inefficient Configuration

MongoDB Atlas offers two backup mechanisms for dedicated clusters: Cloud Backups (scheduled snapshots using the underlying cloud provider's native snapshot functionality) and Continuous Cloud Backup, which adds point-in-time recovery by continuously capturing the cluster's oplog — a log of all write operations. Continuous Cloud Backup is an optional add-on for M10+ dedicated clusters that stores both snapshots and oplog data, enabling restoration to any specific second within a configurable restore window. While this capability is critical for production workloads with strict Recovery Point Objectives (RPOs), it provides limited value on development, testing, or staging clusters where data is typically transient, synthetic, or easily reproducible.

This inefficiency commonly arises when organizations apply infrastructure-as-code templates or centralized backup policies uniformly across all environments without differentiating between production and non-production recovery requirements. Because Continuous Cloud Backup continuously captures and stores oplog data in object storage, storage charges accumulate based on both the configured restore window and the volume of write activity on the cluster. Clusters with moderate to high write throughput generate proportionally larger oplogs, amplifying the cost impact. MongoDB's own architecture guidance explicitly recommends against enabling backup for development and test environments, recognizing that the cost of continuous oplog storage rarely justifies the recovery benefit for non-critical workloads.
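
A periodic audit can catch this drift. The sketch below operates on cluster records shaped loosely like the Atlas Admin API's cluster resource; the `pitEnabled` field name and the name-based environment heuristic are assumptions for illustration:

```python
# Name fragments assumed to indicate non-production clusters.
NON_PROD_MARKERS = ("dev", "test", "staging", "qa")

def flag_pit_on_non_prod(clusters):
    """Return names of clusters that look non-production but have
    point-in-time (continuous) backup enabled."""
    flagged = []
    for c in clusters:
        name = c["name"].lower()
        is_non_prod = any(marker in name for marker in NON_PROD_MARKERS)
        if is_non_prod and c.get("pitEnabled", False):
            flagged.append(c["name"])
    return flagged

# Sample records; field names mirror the Atlas cluster resource shape.
clusters = [
    {"name": "orders-prod", "backupEnabled": True, "pitEnabled": True},
    {"name": "orders-staging", "backupEnabled": True, "pitEnabled": True},
    {"name": "orders-dev", "backupEnabled": False, "pitEnabled": False},
]
```

In practice a tagging convention is more reliable than name matching, but the principle is the same: continuous backup on anything outside production is a review candidate.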

RDS SQL Server Running Bundled Licensing on Older Instance Families
Databases
Cloud Provider
AWS
Service Name
Amazon RDS
Inefficiency Type
Suboptimal Pricing Model

Amazon RDS for SQL Server has traditionally used a License Included model where the SQL Server license cost is bundled into a single hourly instance price alongside Windows OS licensing, compute resources, and RDS management capabilities. On older generation instance families such as db.r6i, db.m6i, db.r5, and db.m5, this bundled rate offers no visibility into how much of the hourly cost is attributable to licensing versus infrastructure — and the licensing component can represent a substantial portion of the total charge, especially for Standard and Enterprise editions.

Starting with 7th generation instances (db.m7i and db.r7i), AWS introduced an unbundled pricing model that separates infrastructure costs from SQL Server licensing fees, billing them as distinct line items. This structural change can yield significantly lower total costs compared to equivalent previous-generation instances. Additionally, the unbundled model enables the Optimize CPU feature, which allows customers to reduce vCPU count — and therefore licensing charges — while retaining the same physical core count, memory, and IOPS capacity. This is particularly valuable for memory-intensive or IOPS-intensive SQL Server workloads that don't need high vCPU counts but were previously forced to pay for licensing on all provisioned vCPUs.

Organizations running RDS SQL Server on older instance families continue to pay the higher bundled rate unnecessarily. The savings opportunity compounds in Multi-AZ deployments and on larger instance sizes (2xlarge and above), where hyperthreading is disabled by default on 7th generation instances, effectively halving the vCPU count and the associated licensing fees without sacrificing physical core performance.
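
The structure of the unbundled model can be sketched as simple arithmetic. Every hourly rate below is a made-up placeholder rather than an actual AWS price; only the shape (one bundled line versus separate infrastructure and licensing line items) reflects the pricing models described above:

```python
# Hypothetical placeholder rates -- not real AWS prices.
BUNDLED_HOURLY = 4.00             # single License Included rate, older gen
UNBUNDLED_INFRA_HOURLY = 1.20     # 7th-gen infrastructure line item
LICENSE_PER_VCPU_HOURLY = 0.50    # SQL Server license line item, per vCPU

def unbundled_hourly(vcpus):
    """Total hourly cost under the unbundled model: infra + per-vCPU license."""
    return UNBUNDLED_INFRA_HOURLY + vcpus * LICENSE_PER_VCPU_HOURLY

# Because licensing now scales with billed vCPUs, halving the vCPU count
# (for example when hyperthreading is disabled) halves that line item,
# while the bundled rate offers no such lever.
cost_8_vcpu = unbundled_hourly(8)   # 1.20 + 4.00 = 5.20
cost_4_vcpu = unbundled_hourly(4)   # 1.20 + 2.00 = 3.20
```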

Non-Production RDS SQL Server Using Standard or Enterprise Edition Instead of Developer Edition
Databases
Cloud Provider
AWS
Service Name
Amazon RDS
Inefficiency Type
Inefficient Configuration

Amazon RDS for SQL Server uses a License Included pricing model where the hourly instance rate bundles Microsoft SQL Server licensing fees on a per-vCPU basis. When non-production workloads — such as development, testing, staging, QA, or UAT environments — run on Standard or Enterprise editions, they incur these per-vCPU licensing charges even though the workloads do not require a production-grade license. SQL Server licensing is a major component of the total RDS instance cost, and this overhead scales directly with the number of virtual CPUs provisioned.

Since December 2025, Amazon RDS for SQL Server supports Developer Edition, which includes all Enterprise Edition features but is licensed by Microsoft exclusively for non-production use. Developer Edition instances incur only AWS infrastructure costs with no SQL Server licensing fees. Prior to this capability, customers had no option to use Developer Edition on standard RDS and were forced to pay for Standard or Enterprise licenses even in non-production environments. Organizations with multiple non-production environments running Standard or Enterprise editions now have a significant opportunity to eliminate unnecessary licensing costs by migrating to Developer Edition.

Developer Edition on RDS is provisioned through a Custom Engine Version (CEV) approach, which requires a one-time setup per SQL Server version. While this adds initial complexity compared to standard RDS instance creation, the ongoing licensing savings can be substantial — particularly for organizations running several non-production SQL Server instances across development, testing, and staging environments.
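
A simple inventory pass can surface migration candidates. The sketch below filters records shaped like `describe_db_instances` output (`sqlserver-se` and `sqlserver-ee` are the RDS engine identifiers for Standard and Enterprise editions); the name-based environment heuristic is an assumption for illustration:

```python
# Paid editions whose licensing Developer Edition could eliminate
# in non-production environments.
PAID_EDITIONS = {"sqlserver-se", "sqlserver-ee"}
NON_PROD_MARKERS = ("dev", "test", "staging", "qa", "uat")

def developer_edition_candidates(instances):
    """Instances that look non-production but carry a paid SQL Server edition."""
    return [
        i["DBInstanceIdentifier"]
        for i in instances
        if i["Engine"] in PAID_EDITIONS
        and any(m in i["DBInstanceIdentifier"].lower() for m in NON_PROD_MARKERS)
    ]

# Sample records mirroring the describe_db_instances response shape.
instances = [
    {"DBInstanceIdentifier": "billing-prod", "Engine": "sqlserver-ee"},
    {"DBInstanceIdentifier": "billing-qa", "Engine": "sqlserver-se"},
    {"DBInstanceIdentifier": "reports-dev", "Engine": "sqlserver-ex"},
]
```

Note that Express Edition instances (`sqlserver-ex`) are excluded, since they already carry no licensing cost.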

Suboptimal Cache TTL Strategy Causing Repeated Backend Execution
Databases
Cloud Provider
AWS
Service Name
Amazon ElastiCache
Inefficiency Type
Inefficient Configuration

Organizations deploy ElastiCache to reduce load on backend systems — databases, APIs, and compute layers — by serving frequently accessed data from fast in-memory storage. However, when Time-to-Live (TTL) values are misaligned with actual data change patterns, the cache delivers poor hit rates and fails to eliminate backend workload. This creates a particularly costly form of dual waste: the organization pays continuously for ElastiCache infrastructure while simultaneously incurring the full backend compute and database costs that caching was meant to reduce.

This inefficiency is especially insidious because it is not immediately visible in cost reporting. ElastiCache charges appear as expected infrastructure spend, while the failure to meaningfully reduce backend costs goes unnoticed unless teams actively correlate cache hit rates with backend workload. The pattern commonly emerges when caching is deployed with default or arbitrary TTL values without analyzing how frequently the underlying data actually changes. When TTL is set too short relative to data volatility, cache entries expire before they can be reused — a phenomenon known as cache churn — turning the cache into an expensive pass-through layer that adds cost and latency without delivering value.

The cost impact scales directly with traffic volume. High-traffic applications with poor cache hit rates waste significant spend on both caching infrastructure and unnecessary backend processing. Critically, this is distinct from over-provisioning cache capacity; the waste occurs even with properly sized cache nodes if the TTL strategy does not align with data change frequency. Each cache miss incurs three operations — the initial cache check, the backend query, and the cache population step — adding both latency and backend load compared to having no cache at all.
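
The churn effect can be demonstrated with a toy single-key model; the numbers are illustrative, not measurements:

```python
def simulate_hit_rate(ttl_s, request_interval_s, requests):
    """Toy model of one hot key: a request hits if the key was cached
    within the last ttl_s seconds; otherwise it misses, queries the
    backend, and repopulates the cache."""
    hits = 0
    cached_at = None
    for n in range(requests):
        t = n * request_interval_s
        if cached_at is not None and t - cached_at < ttl_s:
            hits += 1
        else:
            cached_at = t  # miss: cache check + backend query + population
    return hits / requests

# TTL shorter than the reuse interval: every request misses (pure churn).
churn = simulate_hit_rate(ttl_s=5, request_interval_s=10, requests=100)
# TTL comfortably longer than the reuse interval: almost every request hits.
healthy = simulate_hit_rate(ttl_s=300, request_interval_s=10, requests=100)
```

With a 5-second TTL against a 10-second reuse interval, every request misses, so the cache adds cost without ever serving a hit; extending the TTL well past the reuse interval flips nearly every request to a hit. The same logic applies in reverse when choosing how long stale data is tolerable.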

Excess vCPU Licensing Costs on RDS for SQL Server Instances
Databases
Cloud Provider
AWS
Service Name
Amazon RDS
Inefficiency Type
Inefficient Configuration

Amazon RDS for SQL Server uses a License Included pricing model where SQL Server and Windows OS licensing costs are bundled into the per-instance-hour rate — and those licensing costs scale directly with the number of vCPUs on the instance. Many SQL Server workloads, particularly OLTP, reporting, and data warehousing scenarios, are constrained by memory and storage throughput rather than raw CPU capacity. Organizations frequently provision large instance types to obtain the memory or IOPS their workloads require, but in doing so they also pay for a high vCPU count that remains largely underutilized. Because SQL Server licensing often represents the single largest cost component of an RDS for SQL Server instance, paying for unnecessary vCPUs translates directly into wasted licensing spend.

AWS offers an Optimize CPU feature on 7th generation instance classes (M7i and R7i) that allows customers to reduce the active core count on their RDS for SQL Server instances while preserving the same memory and IOPS capacity. On these newer generation instances, hyperthreading is disabled by default, and vCPU reduction is achieved by lowering the physical core count. AWS benchmarks demonstrate that instances with reduced vCPU counts can match the transaction throughput of instances with twice as many vCPUs, with utilization remaining within acceptable thresholds. This feature is supported on Enterprise, Standard, and Web editions for instance sizes of 2xlarge and above, with a minimum of 4 vCPUs after optimization. Organizations that have not evaluated or applied this configuration are likely overpaying for SQL Server licensing on every eligible instance in their fleet.
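
The licensing arithmetic can be sketched as follows. The 4-vCPU minimum comes from the constraint described above; the halving rule models hyperthreading being disabled, under which billed vCPUs equal physical cores:

```python
def billed_vcpus(size_vcpus, hyperthreading, active_cores=None):
    """License-bearing vCPUs for a given instance size.

    With hyperthreading off, billed vCPUs equal the physical core count
    (half the nominal vCPU count). Optimize CPU can lower the active core
    count further, subject to the 4-vCPU minimum after optimization.
    """
    MIN_VCPUS = 4
    cores = size_vcpus if hyperthreading else size_vcpus // 2
    if active_cores is not None:
        cores = max(MIN_VCPUS, min(active_cores, cores))
    return cores

# A 2xlarge-class shape nominally exposes 8 vCPUs; with hyperthreading
# disabled (the 7th-generation default on 2xlarge and above), only the
# 4 physical cores carry SQL Server licensing.
with_ht = billed_vcpus(8, hyperthreading=True)      # 8 licensed vCPUs
without_ht = billed_vcpus(8, hyperthreading=False)  # 4 licensed vCPUs
```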

Overprovisioned Azure Cache for Redis Instance
Databases
Cloud Provider
Azure
Service Name
Azure Cache for Redis
Inefficiency Type
Overprovisioned Resource

Azure Cache for Redis is billed at a fixed rate determined entirely by the provisioned tier and cache size — not by actual utilization. A cache instance that consumes only a fraction of its available memory and throughput incurs the same cost as one running at full capacity. This means that when a cache is sized larger than the workload demands, the unused memory and throughput headroom represent pure waste with no corresponding benefit.

Overprovisioning commonly occurs when teams size caches for anticipated peak loads that never materialize, or when workload patterns shift over time — such as after a migration, application refactor, or traffic decline — without a corresponding review of cache sizing. Because there is no option to stop or pause billing on a cache instance, and charges accrue continuously from the moment the cache is created until it is deleted, oversized caches quietly accumulate unnecessary costs around the clock.

An important constraint compounds this issue: scaling down between tiers is not supported. An organization that initially provisions a Premium-tier cache but later determines that a Standard tier would suffice cannot simply downgrade in place — it must create a new cache at the appropriate tier and migrate data. This friction often delays right-sizing efforts and prolongs overspend.
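
A right-sizing review can start from a simple utilization check. The cache names and peak-usage figures below are illustrative sample data rather than Azure Monitor output; the 50% threshold is an arbitrary starting point, not a recommendation:

```python
def is_overprovisioned(provisioned_gb, peak_used_gb, threshold=0.5):
    """True when peak usage over the observation window stays below
    threshold * provisioned capacity."""
    return peak_used_gb < threshold * provisioned_gb

# Sizes follow Azure's published Premium cache sizes (P2 = 13 GB,
# P3 = 26 GB); usage numbers are made up for illustration.
caches = {
    "cache-prod-eu": (26.0, 22.5),   # P3-class 26 GB, well utilized
    "cache-reports": (13.0, 1.8),    # P2-class 13 GB, mostly idle
}
flagged = [name for name, (prov, peak) in caches.items()
           if is_overprovisioned(prov, peak)]
```

Because billing runs continuously regardless of load, any cache this check flags is accruing cost for headroom that is never used.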

Orphaned RDS Backup Storage After Database Deletion
Databases
Cloud Provider
AWS
Service Name
Amazon RDS
Inefficiency Type
Orphaned Backup Storage

This inefficiency occurs when an RDS database instance is deleted but its manual snapshots or retained backups remain. Unlike automated backups tied to a live instance, these backups persist independently and continue generating storage costs despite no longer supporting any active database. This is distinct from excessive retention on active databases and typically arises from incomplete cleanup during decommissioning.
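
Detection is straightforward: list manual snapshots and check whether their source instance still exists. The sketch below uses record shapes loosely mirroring `describe_db_snapshots` and `describe_db_instances` output:

```python
def orphaned_manual_snapshots(snapshots, live_instance_ids):
    """Manual snapshots whose source DB instance no longer exists.

    Automated snapshots are skipped: they are deleted with the instance
    (or converted/retained deliberately) and follow different rules.
    """
    live = set(live_instance_ids)
    return [
        s["DBSnapshotIdentifier"]
        for s in snapshots
        if s.get("SnapshotType") == "manual"
        and s["DBInstanceIdentifier"] not in live
    ]

# Sample records mirroring the describe_db_snapshots response shape.
snapshots = [
    {"DBSnapshotIdentifier": "pre-migration-final",
     "DBInstanceIdentifier": "legacy-db", "SnapshotType": "manual"},
    {"DBSnapshotIdentifier": "orders-pre-upgrade",
     "DBInstanceIdentifier": "orders-db", "SnapshotType": "manual"},
]
live = ["orders-db"]
```

Whether a flagged snapshot should actually be deleted is a retention-policy decision; the check only surfaces candidates left behind by decommissioning.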

Suboptimal Service Tier Selection in Azure SQL Managed Instance
Databases
Cloud Provider
Azure
Service Name
Azure SQL Managed Instance
Inefficiency Type
Suboptimal Service Tier Selection

This inefficiency occurs when Azure SQL Managed Instances continue running on legacy General Purpose or Business Critical tiers despite the availability of the next-gen General Purpose tier. The newer tier enables more granular scaling of vCPU, memory, and storage, allowing workloads to better match actual resource needs. In many cases, workloads running on Business Critical — or on an overprovisioned legacy General Purpose tier — do not require the premium performance or architecture of those tiers and could achieve equivalent outcomes at lower cost by moving to next-gen General Purpose.

Outdated ElastiCache Engine Version Incurring Extended Support Charges
Databases
Cloud Provider
AWS
Service Name
Amazon ElastiCache
Inefficiency Type
Extended Support Surcharge

This inefficiency occurs when ElastiCache clusters continue running engine versions that have moved into extended support. While the service remains functional, AWS charges an ongoing premium for extended support that provides no added performance or capability. These costs are typically avoidable by upgrading to a version within standard support.
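
A version audit can flag affected clusters. The cutoff versions below are hypothetical placeholders; the actual standard-support schedule should be taken from AWS documentation:

```python
# Oldest major.minor version still in standard support, per engine.
# Placeholder values for illustration -- verify against the current
# AWS extended-support schedule before acting on results.
OLDEST_STANDARD_SUPPORT = {"redis": (7, 0), "memcached": (1, 6)}

def in_extended_support(engine, version):
    """True when the engine's major.minor version falls below the oldest
    version still in standard support (i.e. it incurs the surcharge)."""
    major_minor = tuple(int(p) for p in version.split(".")[:2])
    cutoff = OLDEST_STANDARD_SUPPORT.get(engine)
    return cutoff is not None and major_minor < cutoff

flag_old = in_extended_support("redis", "6.2.6")   # below the 7.0 cutoff
flag_ok = in_extended_support("redis", "7.1.0")
```

Since extended support adds cost without adding capability, every flagged cluster is a candidate for an in-place upgrade to a version within standard support.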
