Suboptimal Warehouse Auto-Suspend Configuration
Category: Compute
Cloud Provider: Snowflake
Service Name: Snowflake Virtual Warehouse
Inefficiency Type: Suboptimal Configuration

If auto-suspend settings are too high, warehouses can sit idle and continue accruing unnecessary charges. Tightening the auto-suspend window ensures that the warehouse shuts down quickly once queries complete, minimizing credit waste while maintaining acceptable user experience (e.g., caching needs, interactive performance).
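As a rough illustration, the auto-suspend window can be tightened with an ALTER WAREHOUSE statement. The sketch below uses the snowflake-connector-python driver; the account, user, warehouse name, and the 60-second value are illustrative assumptions, not recommendations for any specific environment.

# Minimal sketch: tighten auto-suspend on a Snowflake warehouse.
# Account, user, warehouse name, and the 60-second value are assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="finops_admin",         # hypothetical user
    authenticator="externalbrowser",
)
try:
    cur = conn.cursor()
    # AUTO_SUSPEND is expressed in seconds; 60 suspends the warehouse
    # one minute after the last query completes.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET AUTO_SUSPEND = 60")
finally:
    conn.close()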

Suboptimal Query Timeout Configuration
Category: Compute
Cloud Provider: Snowflake
Service Name: Snowflake Virtual Warehouse
Inefficiency Type: Suboptimal Configuration

If no appropriate query timeout is configured, inefficient or runaway queries can execute for extended periods (up to the default 2-day system limit). For as long as the query is running, the warehouse will remain active and accrue costs. Proper timeout settings help terminate inefficient queries, free up compute capacity, and allow the warehouse to become idle sooner, making it eligible for auto-suspend once the inactivity timer is reached.
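A minimal sketch of setting a warehouse-level timeout, again via snowflake-connector-python. The warehouse name and the one-hour and ten-minute limits are assumptions; the queued-timeout parameter is shown only as an optional companion setting.

# Minimal sketch: cap query runtime at the warehouse level.
# Warehouse name and timeout values are illustrative assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical
    user="finops_admin",         # hypothetical
    authenticator="externalbrowser",
)
try:
    cur = conn.cursor()
    # Abort any statement running longer than one hour on this warehouse.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET STATEMENT_TIMEOUT_IN_SECONDS = 3600")
    # Optionally also cap time spent waiting in the warehouse queue.
    cur.execute("ALTER WAREHOUSE ANALYTICS_WH SET STATEMENT_QUEUED_TIMEOUT_IN_SECONDS = 600")
finally:
    conn.close()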

Inefficient Workload Distribution Across Warehouses
Category: Compute
Cloud Provider: Snowflake
Service Name: Snowflake Virtual Warehouse
Inefficiency Type: Underutilized Resource

Many organizations assign separate Snowflake warehouses to individual business units or teams to simplify chargebacks and operational ownership. This often results in redundant and underutilized warehouses, as workloads frequently do not require the full capacity of even the smallest warehouse size.

By consolidating compatible workloads onto shared warehouses, organizations can maximize utilization, reduce idle runtime across the fleet, and significantly lower total credit consumption. Cost allocation can still be achieved using Query Billing Attribution.
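One way to preserve chargebacks on a shared warehouse is to attribute credits per query and roll them up by a team identifier such as a query tag. The sketch below assumes Snowflake's ACCOUNT_USAGE query attribution view and a convention of tagging queries with a team name; the view and column names should be verified against your account before use.

# Minimal sketch: roll up attributed compute credits by query tag (e.g., team)
# on a shared warehouse. View and column names assume
# SNOWFLAKE.ACCOUNT_USAGE.QUERY_ATTRIBUTION_HISTORY and should be verified.
import snowflake.connector

ATTRIBUTION_SQL = """
SELECT query_tag,
       SUM(credits_attributed_compute) AS credits
FROM snowflake.account_usage.query_attribution_history
WHERE start_time >= DATEADD(day, -30, CURRENT_TIMESTAMP())
GROUP BY query_tag
ORDER BY credits DESC
"""

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical
    user="finops_admin",         # hypothetical
    authenticator="externalbrowser",
)
try:
    for tag, credits in conn.cursor().execute(ATTRIBUTION_SQL):
        print(f"{tag or 'untagged'}: {credits} credits")
finally:
    conn.close()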

Idle ECS Container Instances Due to ASG Minimum Capacity
Category: Compute
Cloud Provider: AWS
Service Name: AWS ECS
Inefficiency Type: Inefficient Configuration

When ECS clusters are configured with an Auto Scaling Group that maintains a minimum number of EC2 instances (e.g., min = 1 or higher), the instances remain active even when no tasks are scheduled. This leads to idle compute capacity and unnecessary EC2 charges. Instead, ECS Capacity Providers support target tracking scaling policies that can scale the ASG to zero when idle and automatically increase capacity when new tasks or services are scheduled. Failing to adopt this pattern results in persistent idle infrastructure and unnecessary costs in ECS environments that do not require always-on compute.
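A sketch of the pattern with boto3: let the ASG minimum drop to zero and hand desired-capacity management to an ECS capacity provider with managed scaling. The ASG name, ARN, capacity provider name, and cluster name are placeholders.

# Minimal sketch: let ECS manage ASG capacity so it can scale to zero when idle.
# ASG name/ARN, cluster, and capacity provider names are illustrative assumptions.
import boto3

autoscaling = boto3.client("autoscaling")
ecs = boto3.client("ecs")

# Allow the ASG itself to reach zero instances.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="ecs-workers-asg",
    MinSize=0,
)

# Create a capacity provider with managed scaling; ECS then adjusts the ASG's
# desired capacity based on pending and running tasks.
ecs.create_capacity_provider(
    name="ecs-workers-cp",
    autoScalingGroupProvider={
        "autoScalingGroupArn": "arn:aws:autoscaling:...:autoScalingGroup:...",  # placeholder ARN
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 100,   # aim for full utilization of provisioned instances
        },
        "managedTerminationProtection": "DISABLED",
    },
)

# Attach the capacity provider to the cluster as the default strategy.
ecs.put_cluster_capacity_providers(
    cluster="my-ecs-cluster",
    capacityProviders=["ecs-workers-cp"],
    defaultCapacityProviderStrategy=[{"capacityProvider": "ecs-workers-cp", "weight": 1}],
)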

Missing Scheduled Shutdown for Non-Production Compute Engine Instances
Category: Compute
Cloud Provider: GCP
Service Name: GCP Compute Engine
Inefficiency Type: Inefficient Configuration

Development and test environments on Compute Engine are commonly provisioned and left running around the clock, even if only used during business hours. This results in wasteful spend on compute time that could be eliminated by scheduling shutdowns during idle periods. Scheduled shutdowns can be implemented with native tools such as Cloud Scheduler and Cloud Functions, or through Terraform automation. Stopping VMs during off-hours preserves boot disks and instance metadata while halting compute billing.
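As one possible implementation, a small Cloud Scheduler-triggered Cloud Function could stop labeled instances each evening. The project, zone, and the env=dev label convention below are assumptions, not part of any specific environment.

# Minimal sketch: stop non-production Compute Engine VMs, e.g. from a
# Cloud Scheduler-triggered Cloud Function. Project, zone, and the
# env=dev label filter are illustrative assumptions.
from google.cloud import compute_v1

PROJECT = "my-project"       # hypothetical
ZONE = "us-central1-a"       # hypothetical

def stop_dev_instances(event=None, context=None):
    instances = compute_v1.InstancesClient()
    # List instances in the zone and stop the ones labeled as dev.
    for instance in instances.list(project=PROJECT, zone=ZONE):
        if instance.labels.get("env") == "dev" and instance.status == "RUNNING":
            # Stopping halts compute billing but preserves boot disks and metadata.
            instances.stop(project=PROJECT, zone=ZONE, instance=instance.name)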

Missing Scheduled Shutdown for Non-Production EC2 Instances
Category: Compute
Cloud Provider: AWS
Service Name: AWS EC2
Inefficiency Type: Inefficient Configuration

Non-production EC2 instances are often provisioned for daytime-only usage but remain running 24/7 out of convenience or oversight. This results in unnecessary compute charges, even if the workload is inactive for 16+ hours per day. AWS supports automated schedules to stop and start instances at predefined times, allowing organizations to retain data and instance configuration without paying for unused runtime. Implementing a shutdown schedule for inactive periods (e.g., nights, weekends) can reduce compute costs by up to 60% in typical non-prod environments.
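A minimal sketch of one such schedule: a Lambda handler invoked by an EventBridge cron rule that stops running instances carrying a scheduling tag. The Schedule=office-hours tag convention is an assumption, and pagination of describe_instances is omitted for brevity.

# Minimal sketch: stop tagged non-production instances on a schedule,
# e.g. from a Lambda invoked by an EventBridge cron rule.
# The Schedule=office-hours tag convention is an illustrative assumption.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Find running instances tagged for office-hours-only operation.
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        # Stopped instances keep their EBS volumes and configuration;
        # only the compute runtime charge goes away.
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}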

Missing Scheduled Shutdown for Non-Production Azure Virtual Machines
Category: Compute
Cloud Provider: Azure
Service Name: Azure Virtual Machines
Inefficiency Type: Inefficient Configuration

Non-production Azure VMs are frequently left running during off-hours despite being used only during business hours. When these instances remain active overnight or on weekends, they generate unnecessary compute spend. Azure offers built-in auto-shutdown features that allow teams to define daily stop times, retaining disk data and configurations without paying for VM runtime. Implementing scheduled shutdowns in dev/test environments is a simple, low-risk optimization that can reduce compute costs by 30–60%.
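Beyond the portal's built-in auto-shutdown setting, the same effect can be scripted, for example from a scheduled Automation runbook or Function. The sketch below uses azure-mgmt-compute to deallocate tagged VMs; the subscription ID and the env=dev tag convention are placeholders.

# Minimal sketch: deallocate tagged non-production VMs, e.g. from a scheduled
# Azure Automation runbook or Function. Subscription ID and the env=dev tag
# convention are illustrative assumptions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

def shutdown_dev_vms():
    client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    for vm in client.virtual_machines.list_all():
        tags = vm.tags or {}
        if tags.get("env") == "dev":
            resource_group = vm.id.split("/")[4]  # resource group segment of the VM ID
            # Deallocation releases the compute allocation (stopping VM charges)
            # while keeping disks and configuration.
            client.virtual_machines.begin_deallocate(resource_group, vm.name).wait()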

Missing Auto-Termination Policy for Databricks Clusters
Category: Compute
Cloud Provider: Databricks
Service Name: Databricks Clusters
Inefficiency Type: Missing Safeguard

In many environments, users launch Databricks clusters for development or analysis and forget to shut them down after use. When no auto-termination policy is configured, these clusters remain active indefinitely, incurring unnecessary charges for both Databricks and cloud infrastructure usage. This inefficiency is especially common in interactive clusters that are user-managed, ephemeral, or exploratory in nature. While Databricks provides built-in support for cluster auto-termination, teams often overlook it unless it's enforced through workspace policies. Without this safeguard in place, idle clusters can persist unnoticed for hours or days, leading to avoidable cost.
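One way to make the safeguard stick is a cluster policy that bounds autotermination_minutes. The sketch below posts such a policy through the Databricks REST API; the workspace URL, token environment variable, and the 10-to-60-minute range are assumptions, and the policy API path should be verified for your workspace.

# Minimal sketch: enforce auto-termination through a cluster policy via the
# Databricks REST API. Workspace URL, token env var, and the minute range
# are illustrative assumptions.
import json
import os
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace URL
TOKEN = os.environ["DATABRICKS_TOKEN"]

# Require auto-termination: between 10 and 60 idle minutes, defaulting to 30.
policy_definition = {
    "autotermination_minutes": {
        "type": "range",
        "minValue": 10,
        "maxValue": 60,
        "defaultValue": 30,
    }
}

resp = requests.post(
    f"{HOST}/api/2.0/policies/clusters/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "name": "interactive-with-auto-termination",
        "definition": json.dumps(policy_definition),
    },
)
resp.raise_for_status()
print(resp.json())  # returns the new policy_id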

Inefficient Use of Photon Engine in Azure Databricks
Category: Compute
Cloud Provider: Azure
Service Name: Databricks
Inefficiency Type: Suboptimal Configuration

Photon is optimized for SQL workloads, delivering significant speedups through vectorized execution and native C++ performance. However, Photon only accelerates workloads that use compatible operations and data patterns. If a workload includes unsupported functions, unoptimized joins, or falls back to interpreted execution, Photon may be silently bypassed — even on a Photon-enabled cluster. In this case, users are billed at a premium DBU rate while receiving no meaningful speed or efficiency gain. This inefficiency typically arises when teams enable Photon globally without validating workload compatibility or updating their pipelines to follow Photon best practices. The result is higher costs with no corresponding benefit — a classic case of configuration drift outpacing optimization discipline.
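A starting point for validation is simply knowing which clusters are paying the Photon rate. The sketch below lists clusters via the REST API and flags Photon-enabled ones for review; the workspace URL and token env var are placeholders, and the runtime_engine field is assumed to be populated on Photon clusters.

# Minimal sketch: inventory Photon-enabled clusters so their workloads can be
# checked against Photon coverage before paying the higher DBU rate.
# Workspace URL and token env var are placeholders.
import os
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder
TOKEN = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{HOST}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

for cluster in resp.json().get("clusters", []):
    if cluster.get("runtime_engine") == "PHOTON":
        # Candidates for review: confirm their queries actually execute in
        # Photon (e.g., via the query profile) rather than falling back.
        print(cluster["cluster_name"], cluster["cluster_id"])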

Overprovisioned Memory Allocation for Lambda Functions
Category: Compute
Cloud Provider: AWS
Service Name: AWS Lambda
Inefficiency Type: Overprovisioned Resource

Each Lambda function must be configured with a memory setting, which indirectly controls the amount of CPU and networking performance allocated. In many environments, memory settings are defined arbitrarily or left unchanged as functions evolve. Over time, this leads to overprovisioning — with functions running well below their allocated memory and incurring unnecessary compute costs. Systematic right-sizing using performance benchmarks can significantly reduce spend without sacrificing performance or reliability. This is especially important for frequently invoked functions or those with long execution times.
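A sketch of the measurement step: compare a function's configured memory with the peak memory reported in its REPORT log lines before changing anything. The function name and the 14-day window are assumptions, and any MemorySize change should be benchmarked first, since memory also scales CPU.

# Minimal sketch: compare a function's configured memory with the peak memory
# actually used (from Lambda REPORT log lines) before right-sizing it.
# The function name and 14-day window are illustrative assumptions.
import time
import boto3

FUNCTION = "my-function"  # hypothetical
logs = boto3.client("logs")
lam = boto3.client("lambda")

query_id = logs.start_query(
    logGroupName=f"/aws/lambda/{FUNCTION}",
    startTime=int(time.time()) - 14 * 24 * 3600,
    endTime=int(time.time()),
    queryString='filter @type = "REPORT" | stats max(@maxMemoryUsed / 1000 / 1000) as peak_mb',
)["queryId"]

# Poll until the Logs Insights query finishes (simplified; no timeout handling).
while True:
    result = logs.get_query_results(queryId=query_id)
    if result["status"] == "Complete":
        break
    time.sleep(1)

peak_mb = float(result["results"][0][0]["value"])
configured_mb = lam.get_function_configuration(FunctionName=FUNCTION)["MemorySize"]
print(f"configured {configured_mb} MB, observed peak {peak_mb:.0f} MB")

# If there is consistent headroom, lower MemorySize after benchmarking.
# Example only; do not apply blindly:
# lam.update_function_configuration(FunctionName=FUNCTION, MemorySize=512)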
