Unnecessary Costs from Unused Lambda Versions with SnapStart
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Version Sprawl

Many teams publish new Lambda versions frequently (e.g., through CI/CD pipelines) but do not clean up old ones. When SnapStart is enabled, each of these versions retains an active snapshot in the cache, generating ongoing charges. Over time, accumulated unused versions can significantly increase spend without delivering any business value. This problem compounds in environments with high deployment velocity or many functions.
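As a rough sketch of how the waste accumulates: each retained version with SnapStart keeps its own snapshot cached and billed. The rate below is an illustrative placeholder, not a current AWS price; check the Lambda pricing page for actual per-GB-second cache rates.

```python
# Rough estimate of monthly SnapStart snapshot-cache cost from retained
# Lambda versions. The rate is an assumed placeholder, not an official price.
CACHE_RATE_PER_GB_SECOND = 0.0000015  # assumption -- check current AWS pricing
SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_cache_cost(num_versions: int, memory_gb: float) -> float:
    """Each published version with SnapStart enabled retains its own cached
    snapshot, billed continuously whether or not the version is invoked."""
    return num_versions * memory_gb * CACHE_RATE_PER_GB_SECOND * SECONDS_PER_MONTH

# e.g. 50 stale versions of a 2 GB function left behind by CI/CD
cost = monthly_cache_cost(50, 2.0)
```

At the assumed rate, 50 stale 2 GB versions cost a few hundred dollars per month for snapshots nothing ever invokes, which is why pruning old versions (or limiting how many a pipeline retains) pays off quickly.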

Inefficient SnapStart Configuration in Lambda
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Misconfigured Performance Optimization

SnapStart reduces cold-start latency, but when configured inefficiently, it can increase costs. High-traffic workloads can trigger frequent snapshot restorations, each of which carries its own charge. Slow initialization code inflates the Init phase, which is billed at the full duration rate under SnapStart. Suppressed-init conditions, where functions initialize without enhanced resources, can add further inefficiency if memory or timeout settings are misaligned. Together, these factors can cause SnapStart to increase spend without delivering proportional benefit.
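The two cost drivers above (per-restore charges and fully billed Init time) can be sketched as a simple overhead model. Both rates are illustrative placeholders rather than official AWS prices:

```python
# Sketch of SnapStart overhead: restoration charges plus Init time billed
# at the full duration rate. Rates are assumed placeholders, not AWS prices.
RESTORE_RATE_PER_GB = 0.0001          # assumed per-GB restoration charge
DURATION_RATE_PER_GB_SECOND = 0.0000166667  # typical Lambda GB-second rate

def snapstart_overhead(restores: int, memory_gb: float,
                       init_seconds: float) -> float:
    restore_cost = restores * memory_gb * RESTORE_RATE_PER_GB
    # Under SnapStart, the Init phase is billed like regular execution time,
    # so slow initialization code is paid for on every restore.
    init_cost = restores * init_seconds * memory_gb * DURATION_RATE_PER_GB_SECOND
    return restore_cost + init_cost
```

The takeaway is that overhead scales with restore frequency times memory times init time, so a high-traffic function with a heavy Init phase pays the SnapStart tax on every cold path.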

Unmanaged Growth of Athena Query Output Buckets
Compute
Cloud Provider
AWS
Service Name
AWS Athena
Inefficiency Type
Missing Lifecycle Policy

Athena generates a new S3 object for every query result, regardless of whether the output is needed long term. Over time, this leads to uncontrolled growth of the output bucket, especially in environments with repetitive queries such as cost and usage reporting. Many of these files are transient and provide little value once the query is consumed. Without lifecycle rules, organizations pay for unnecessary storage and create clutter in S3.
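A lifecycle rule on the output prefix is the standard fix. The sketch below builds the lifecycle configuration shape accepted by S3's `PutBucketLifecycleConfiguration` API; the prefix and retention window are hypothetical and should match your own Athena result location:

```python
import json

# Minimal S3 lifecycle rule that expires Athena query results after a set
# number of days. Prefix and retention period are hypothetical examples.
def athena_results_lifecycle(prefix: str = "athena-results/",
                             days: int = 7) -> dict:
    return {
        "Rules": [
            {
                "ID": "expire-athena-query-results",
                "Filter": {"Prefix": prefix},
                "Status": "Enabled",
                "Expiration": {"Days": days},  # delete transient results
            }
        ]
    }

policy_json = json.dumps(athena_results_lifecycle(), indent=2)
```

This dictionary can be passed as `LifecycleConfiguration` to `put_bucket_lifecycle_configuration` in boto3, or rendered to JSON for the AWS CLI. For repetitive reporting queries, a short window such as 7 days is usually enough to cover any result reuse.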

Suboptimal Architecture Selection in AWS Fargate
Compute
Cloud Provider
AWS
Service Name
AWS Fargate
Inefficiency Type
Suboptimal Architecture Selection

AWS Fargate supports both x86 and Graviton2 (ARM64) CPU architectures, but by default, many workloads continue to run on x86. Graviton2 delivers significantly better price-performance, especially for stateless, scale-out container workloads. Teams that fail to configure task definitions with the `ARM64` architecture miss out on meaningful efficiency gains. Because this setting is not enabled automatically and is often overlooked, it results in higher compute costs for functionally equivalent workloads.
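The setting in question is the `runtimePlatform` block of the ECS task definition. The fragment below shows the shape (family name and sizing are hypothetical); the container image must also be built for arm64:

```python
# Fragment of an ECS task definition pinning Fargate to Graviton (ARM64).
# Family name and cpu/memory values are hypothetical examples.
task_definition = {
    "family": "web-api",
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "512",
    "memory": "1024",
    "runtimePlatform": {
        "cpuArchitecture": "ARM64",        # tasks default to X86_64 if omitted
        "operatingSystemFamily": "LINUX",
    },
}
```

Because the default is x86 when `runtimePlatform` is omitted, auditing task definitions for a missing or `X86_64` value is a quick way to find candidates for migration.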

Overreliance on Lambda at Sustained Scale
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Suboptimal Pricing Model

Lambda is designed for simplicity and elasticity, but its pricing model becomes expensive at scale. When a function runs frequently (e.g., millions of invocations per day) or for extended durations, the cumulative cost may exceed that of continuously running infrastructure. This is especially true for predictable workloads that don’t require the dynamic scaling Lambda provides.

Teams often continue using Lambda out of convenience or architectural inertia, without revisiting whether the workload would be more cost-effective on EC2, ECS, or EKS. This inefficiency typically hides in plain sight—functions run correctly and scale as needed, but the unit economics are no longer favorable.
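A back-of-envelope comparison makes the break-even visible. The rates below are illustrative placeholders in the ballpark of published Lambda pricing; substitute current prices for a real analysis:

```python
# Back-of-envelope: monthly Lambda cost for a sustained workload, to compare
# against a fixed always-on instance. Rates are illustrative placeholders.
LAMBDA_GB_SECOND = 0.0000166667   # assumed duration rate
LAMBDA_PER_REQUEST = 0.0000002    # assumed per-invocation rate

def lambda_monthly(invocations: int, avg_seconds: float,
                   memory_gb: float) -> float:
    per_invocation = avg_seconds * memory_gb * LAMBDA_GB_SECOND + LAMBDA_PER_REQUEST
    return invocations * per_invocation

# e.g. 10M invocations/day for a month, 200 ms at 1 GB
monthly = lambda_monthly(10_000_000 * 30, 0.2, 1.0)
```

At these assumed rates the example workload lands around a thousand dollars per month, far above what a small always-on instance would cost, which is exactly the "hides in plain sight" scenario: the function works fine, but the unit economics have flipped.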

Excessive Lambda Duration from Synchronous Waiting
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Inefficient Configuration

Some Lambda functions perform synchronous calls to other services, APIs, or internal microservices and wait for the response before proceeding. During this time, the Lambda is idle from a compute perspective but still fully billed. This anti-pattern can lead to unnecessarily long durations and elevated costs, especially when repeated across high-volume workflows or under memory-intensive configurations.

While this behavior might be functionally correct, it is rarely optimal. Asynchronous invocation patterns—such as decoupling downstream calls with queues, events, or callbacks—can reduce runtime and avoid charging for waiting time. However, detecting this inefficiency is nontrivial, as high duration alone doesn’t always indicate synchronous waiting. Understanding function logic and workload patterns is key.
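The decoupling idea can be sketched with two handlers and a queue. The in-memory `queue.Queue` below stands in for something like SQS, and all names are hypothetical; the point is that the first handler returns immediately instead of billing wait time:

```python
import queue

# Sketch of decoupling a synchronous wait: the request handler enqueues work
# and returns at once; a second handler processes it when triggered (e.g. by
# a queue event). The in-memory queue stands in for SQS; names are hypothetical.
work_queue: queue.Queue = queue.Queue()

def request_handler(event: dict) -> dict:
    # Enqueue and return immediately -- no billed duration spent waiting
    # on a downstream response.
    work_queue.put({"order_id": event["order_id"]})
    return {"status": "accepted"}

def downstream_handler() -> dict:
    # Invoked separately when work is available; its duration is billed
    # only for actual processing, not for the original caller's wait.
    job = work_queue.get()
    return {"order_id": job["order_id"], "status": "processed"}
```

In a real system the second handler would be wired to a queue or event trigger; the cost benefit comes from the first function's duration no longer including the downstream round trip.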

Oversized Hosting Plan for Azure Functions
Compute
Cloud Provider
Azure
Service Name
Azure Functions
Inefficiency Type

Teams often choose the Premium or App Service Plan for Azure Functions to avoid cold start delays or enable VNET connectivity, especially early in a project when performance concerns dominate. However, these decisions are rarely revisited—even as usage patterns change.

In practice, many workloads running on Premium or App Service Plans have low invocation frequency, minimal execution time, and no strict latency requirements. This leads to consistent spend on compute capacity that is largely idle. Because these plans still “work” and don’t cause reliability issues, the inefficiency is easy to overlook. Over time, this misalignment between hosting tier and actual usage creates significant invisible waste.
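One way to surface this misalignment is to estimate what the same workload would cost on the Consumption plan. All rates below are illustrative placeholders, not official Azure prices:

```python
# Rough check: would this workload be cheaper on the Consumption plan than
# on an always-on Premium plan? All rates are illustrative placeholders.
PREMIUM_MONTHLY = 150.0            # assumed cost of one Premium plan instance
CONSUMPTION_GB_SECOND = 0.000016   # assumed execution-time rate
CONSUMPTION_PER_MILLION = 0.20     # assumed per-million-executions rate

def consumption_monthly(executions: int, avg_seconds: float,
                        memory_gb: float) -> float:
    execution_cost = executions * avg_seconds * memory_gb * CONSUMPTION_GB_SECOND
    request_cost = executions / 1_000_000 * CONSUMPTION_PER_MILLION
    return execution_cost + request_cost

# e.g. 100k executions/month at 500 ms and 0.5 GB -- a low-traffic function
estimate = consumption_monthly(100_000, 0.5, 0.5)
```

At these assumed rates, the low-traffic example costs well under a dollar per month on Consumption versus a fixed Premium bill, illustrating how much idle capacity an oversized plan can hide.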

Orphaned and Overprovisioned Resources in EKS Clusters
Compute
Cloud Provider
AWS
Service Name
AWS EKS
Inefficiency Type
Inefficient Configuration

In EKS environments, cluster sprawl can occur when workloads are removed but underlying resources remain. Common issues include persistent volumes no longer mounted by pods, services still backed by ELBs despite being unused, and overprovisioned nodes for workloads that no longer exist. Node overprovisioning can result from high CPU/memory requests or limits, DaemonSets running on every node, restrictive Pod Disruption Budgets, anti-affinity rules, uneven AZ distribution, or slow scale-down timers. Dev/test namespaces and short-lived environments often accumulate without clear teardown processes, leading to ongoing idle costs. Preventative measures include improving bin packing efficiency, enabling Karpenter consolidation, and right-sizing node instance types and counts.

These remnants contribute to excess infrastructure cost and control plane noise. Since AWS bills independently for each resource (e.g., EBS, ELB, EC2), inefficiencies can add up quickly. Without structured governance or cleanup tooling, clusters gradually fill with orphaned objects and unused capacity.
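A simple cleanup audit can start from cluster snapshots. The sketch below flags PVCs that no pod mounts; the input shapes are simplified stand-ins for `kubectl get ... -o json` output, and all names are hypothetical:

```python
# Sketch: flag PVCs not mounted by any pod, given simplified cluster
# snapshots. Input shapes are stand-ins for `kubectl get ... -o json` data.
def orphaned_pvcs(pvcs: list, pods: list) -> list:
    mounted = {
        vol["persistentVolumeClaim"]["claimName"]
        for pod in pods
        for vol in pod.get("volumes", [])
        if "persistentVolumeClaim" in vol
    }
    return [name for name in pvcs if name not in mounted]

# Hypothetical snapshot: one pod mounts "data-live"; "data-old" is orphaned.
pods = [{"volumes": [{"persistentVolumeClaim": {"claimName": "data-live"}}]}]
print(orphaned_pvcs(["data-live", "data-old"], pods))  # ['data-old']
```

Each flagged PVC typically maps to a billed EBS volume, so even a small script like this run periodically catches cost that AWS bills independently of the cluster's actual workload.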

Suboptimal Architecture Selection for Azure Virtual Machines
Compute
Cloud Provider
Azure
Service Name
Azure Virtual Machines
Inefficiency Type
Suboptimal Pricing Model

Azure provides VM families across three major CPU architectures, but default provisioning often leans toward Intel-based SKUs due to inertia or pre-configured templates. AMD and ARM alternatives offer substantial cost savings; ARM in particular can be 30–50% cheaper for general-purpose workloads. These price differences accumulate quickly at scale.

ARM-based VMs in Azure (e.g., Dps_v5, Eps_v5) are suited for many common workloads, such as microservices, web applications, and containerized environments. However, not all applications are architecture-compatible, especially those with dependencies on x86-specific libraries or instruction sets. Organizations that skip architecture evaluation during provisioning miss out on cost-efficient options.
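To make the scale effect concrete, a fleet-level estimate helps. The 40% discount below is a mid-range assumption taken from the 30–50% figure above, and the hourly rate is a hypothetical example:

```python
# Back-of-envelope fleet savings from moving eligible VMs to ARM SKUs.
# The 40% discount is a mid-range assumption from the 30-50% range; the
# x86 hourly rate is a hypothetical example, not a quoted Azure price.
def arm_savings(vm_count: int, x86_hourly: float, discount: float = 0.40,
                hours_per_month: int = 730) -> float:
    return vm_count * x86_hourly * discount * hours_per_month

# e.g. 100 general-purpose VMs at an assumed $0.10/hour on x86
savings = arm_savings(100, 0.10)
```

Under these assumptions, a 100-VM fleet saves on the order of a few thousand dollars per month, which is why skipping the architecture evaluation at provisioning time compounds into significant waste.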

Orphaned Kubernetes Resources
Compute
Cloud Provider
AWS
Service Name
AWS EKS
Inefficiency Type
Orphaned Resource

In Kubernetes environments, resources such as ConfigMaps, Secrets, Services, and Persistent Volume Claims (PVCs) are often created dynamically by applications or deployment pipelines. When applications are removed or reconfigured, these resources may be left behind if not explicitly cleaned up. Over time, they accumulate as orphaned resources — not referenced by any live workload.

Some of these objects, like PVCs or Services of type LoadBalancer, result in active infrastructure that continues to incur cloud charges (e.g., retained EBS volumes or unused Elastic Load Balancers). Even lightweight objects like ConfigMaps and Secrets bloat the API server’s object store (adding latency and slowing deployments and scaling), clutter the control plane, and complicate configuration management. This issue is especially common during cluster upgrades, namespace decommissioning, and workload migrations.
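Reference-checking is the core of any cleanup pass: an object is a candidate for removal only if no live workload points at it. The sketch below checks ConfigMap references via volumes and `envFrom`; the input shapes are simplified stand-ins for pod specs, and all names are hypothetical:

```python
# Sketch: find ConfigMaps not referenced by any pod, either as a mounted
# volume or via envFrom. Input shapes are simplified stand-ins for pod specs.
def unreferenced_configmaps(configmaps: list, pods: list) -> list:
    referenced = set()
    for pod in pods:
        for vol in pod.get("volumes", []):
            if "configMap" in vol:
                referenced.add(vol["configMap"]["name"])
        for container in pod.get("containers", []):
            for src in container.get("envFrom", []):
                if "configMapRef" in src:
                    referenced.add(src["configMapRef"]["name"])
    return [cm for cm in configmaps if cm not in referenced]

# Hypothetical snapshot: "legacy-config" is referenced by nothing.
pods = [{
    "volumes": [{"configMap": {"name": "app-config"}}],
    "containers": [{"envFrom": [{"configMapRef": {"name": "feature-flags"}}]}],
}]
print(unreferenced_configmaps(
    ["app-config", "feature-flags", "legacy-config"], pods))  # ['legacy-config']
```

A production version would also need to consider references from init containers, projected volumes, and individual `valueFrom` entries before deleting anything, which is why tooling rather than ad hoc scripts is the safer long-term answer.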
