Using High-Cost Bedrock Models for Low-Complexity Tasks
Compute
Cloud Provider: AWS
Service Name: AWS Bedrock
Inefficiency Type: Overpowered Model Selection

Many Bedrock workloads involve low-complexity tasks such as tagging, classification, routing, entity extraction, keyword detection, document triage, or lightweight summarization. These tasks **do not require** the advanced reasoning or generative capabilities of higher-cost models such as Claude 3 Opus or comparable premium models. When organizations default to a high-end model across all applications—or fail to periodically reassess model selection—they pay elevated costs for work that could be performed effectively by smaller, lower-cost models such as Claude Haiku or other compact model families. This inefficiency becomes more pronounced in high-volume, repetitive workloads where token counts scale quickly.
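
As a minimal sketch of this kind of routing, the snippet below sends a low-complexity classification request to a compact Bedrock model via boto3's Converse API. The model IDs, the low/high tier split, and the `classify_ticket` helper are illustrative assumptions, not a prescribed setup.

```python
# A minimal model-routing sketch with boto3 (region/credentials assumed configured).
# The model IDs and the tier mapping below are illustrative assumptions.
import boto3

bedrock = boto3.client("bedrock-runtime")

MODEL_BY_TIER = {
    "low": "anthropic.claude-3-haiku-20240307-v1:0",   # tagging, routing, extraction
    "high": "anthropic.claude-3-opus-20240229-v1:0",   # reserved for hard reasoning
}

def classify_ticket(text: str) -> str:
    # Classification is a low-complexity task, so it goes to the compact model.
    response = bedrock.converse(
        modelId=MODEL_BY_TIER["low"],
        messages=[{"role": "user",
                   "content": [{"text": f"Classify this support ticket: {text}"}]}],
        inferenceConfig={"maxTokens": 32, "temperature": 0.0},
    )
    return response["output"]["message"]["content"][0]["text"]
```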

Using High-Cost Models for Low-Complexity Tasks
Compute
Cloud Provider: Azure
Service Name: Azure Cognitive Services
Inefficiency Type: Overpowered Model Selection

Some workloads — such as text classification, keyword extraction, intent detection, routing, or lightweight summarization — do not require the capabilities of the most advanced model families. When high-cost models are used for these simple tasks, organizations pay elevated token rates for work that could be handled effectively by more efficient, lower-cost models. This mismatch typically arises from defaulting to a single model for all tasks or not periodically reviewing model usage patterns across applications.
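
One way to avoid the mismatch is to route by task tier at the client. The sketch below assumes Azure OpenAI deployments and uses the `openai` package's AzureOpenAI client; the endpoint variables, API version, and deployment names are placeholder assumptions.

```python
# A minimal routing sketch for Azure OpenAI deployments; endpoint, API version,
# and deployment names are placeholder assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

# Hypothetical deployment names: a small model for simple tasks, a larger one otherwise.
DEPLOYMENT_BY_TIER = {"low": "small-model-deployment", "high": "large-model-deployment"}

def detect_intent(utterance: str) -> str:
    # Intent detection is a low-complexity task, so it uses the cheaper deployment.
    response = client.chat.completions.create(
        model=DEPLOYMENT_BY_TIER["low"],
        messages=[{"role": "user", "content": f"Name the intent of: {utterance}"}],
        max_tokens=16,
        temperature=0,
    )
    return response.choices[0].message.content
```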

Suboptimal Integration Runtime Region Selection in Azure Data Factory
Compute
Cloud Provider: Azure
Service Name: Azure Data Factory V2
Inefficiency Type: Cross-Region Data Movement

When Integration Runtimes are configured with the default “Auto Resolve” region setting, Azure may automatically provision them in a region different from the data sources or sinks. For example, an environment deployed in West Europe may run pipelines in East US. This causes unnecessary cross-region data transfer, increasing networking costs and pipeline latency. The inefficiency often goes unnoticed because data transfer costs are billed separately from pipeline compute charges.
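
A periodic audit can surface auto-resolved runtimes before they generate cross-region traffic. The sketch below uses azure-mgmt-datafactory to list a factory's Integration Runtimes and flag those without a pinned compute location; the subscription, resource group, and factory names are placeholders, and the attribute probing is a hedge for the different runtime types.

```python
# A minimal audit sketch with azure-mgmt-datafactory; subscription, resource
# group, and factory names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<factory-name>"

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for ir in adf.integration_runtimes.list_by_factory(RESOURCE_GROUP, FACTORY_NAME):
    # Managed (Azure) IRs expose a compute location; "AutoResolve" (or no
    # location at all) means the service picks the region at run time.
    compute = getattr(ir.properties, "compute_properties", None)
    location = getattr(compute, "location", None)
    if location in (None, "AutoResolve"):
        print(f"{ir.name}: auto-resolved region - consider pinning to the data region")
    else:
        print(f"{ir.name}: pinned to {location}")
```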

Outdated AWS Glue Version for Python Jobs
Compute
Cloud Provider: AWS
Service Name: AWS Glue
Inefficiency Type: Outdated Runtime Version

Newer AWS Glue versions—such as Glue 5.0—include significant performance optimizations for **Python-based** ETL jobs, often reducing runtime by 10–60%. These improvements do not require any code changes, making version upgrades a simple and impactful optimization. When jobs remain on older runtimes such as Glue 3.0 or 4.0, they execute more slowly, consume more DPUs, and incur unnecessary cost. Additionally, Glue 5.0 offers more worker types (larger standard workers and memory-optimized workers) that can provide additional performance gains for some jobs. This inefficiency does not apply to Scala-based jobs, which do not benefit from the same performance uplift.
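
Because the upgrade needs no code changes, finding candidates is most of the work. The sketch below uses boto3 to list Glue jobs and flag Python jobs still on pre-5.0 runtimes; the version list and the Python-detection heuristic are assumptions worth validating against your own jobs.

```python
# A minimal audit sketch with boto3; the pre-5.0 version list and the
# Python-detection heuristic (Command.PythonVersion) are assumptions to verify.
import boto3

glue = boto3.client("glue")

for page in glue.get_paginator("get_jobs").paginate():
    for job in page["Jobs"]:
        command = job.get("Command", {})
        is_python = "PythonVersion" in command  # Scala jobs omit this field
        version = job.get("GlueVersion", "0.9")
        if is_python and version in ("0.9", "1.0", "2.0", "3.0", "4.0"):
            # Upgrading is an update_job call that sets GlueVersion to "5.0"
            # (the JobUpdate must restate required fields such as Role and Command).
            print(f"{job['Name']}: Glue {version} - candidate for Glue 5.0")
```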

Azure Hybrid Benefit Not Enabled on Virtual Machines
Compute
Cloud Provider: Azure
Service Name: Azure Virtual Machines
Inefficiency Type: Licensing Configuration Gap

Many organizations purchase Software Assurance or subscription-based Windows and SQL Server licenses that entitle them to use Azure Hybrid Benefit (AHB). However, if the setting is not applied on eligible resources, Azure continues charging pay-as-you-go rates that already include Microsoft licensing costs. This oversight results in paying twice—once for the on-premises license and once for the built-in Azure license. The inefficiency often goes unnoticed because licensing configurations are not centrally validated or enforced. Enabling AHB can reduce costs by up to 40% for Windows Server VMs and up to 30% for SQL databases.
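
A lightweight audit can catch the gap. The sketch below uses azure-mgmt-compute to flag Windows VMs whose `license_type` does not claim the benefit; the subscription ID is a placeholder, and eligibility under Software Assurance must be confirmed before enabling anything.

```python
# A minimal audit sketch with azure-mgmt-compute; the subscription ID is a
# placeholder, and license eligibility must be confirmed before enabling AHB.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for vm in compute.virtual_machines.list_all():
    os_type = vm.storage_profile.os_disk.os_type  # str-like enum, e.g. "Windows"
    # A license_type of "Windows_Server" means Azure Hybrid Benefit is applied;
    # no value on a Windows VM means the hourly rate includes the Microsoft license.
    if os_type == "Windows" and vm.license_type != "Windows_Server":
        print(f"{vm.name}: Windows VM without AHB (license_type={vm.license_type})")
```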

Idle Dataflow Workers Running After Pipeline Failure
Compute
Cloud Provider: GCP
Service Name: GCP Dataflow
Inefficiency Type: Unreleased Compute Resources After Failure

When a Dataflow pipeline fails—often due to dependency issues, misconfigurations, or data format mismatches—its worker instances may remain active temporarily until the service terminates them. In some cases, misconfigured jobs, stuck retries, or delayed monitoring can cause workers to continue running for extended periods. These idle workers consume vCPU, memory, and storage resources without performing useful work. The inefficiency is compounded in large or high-frequency batch environments where repeated failures can leave many orphaned workers running concurrently.
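
Detection usually means watching for jobs that stay active far longer than expected. The sketch below queries the Dataflow REST API via google-api-python-client and flags active jobs older than an assumed two-hour threshold; the project, region, and threshold are placeholders.

```python
# A minimal monitoring sketch against the Dataflow REST API; project, region,
# and the two-hour "stuck" threshold are placeholder assumptions.
from datetime import datetime, timedelta, timezone
from googleapiclient.discovery import build

PROJECT, REGION = "<project-id>", "<region>"
MAX_AGE = timedelta(hours=2)

jobs_api = build("dataflow", "v1b3").projects().locations().jobs()
resp = jobs_api.list(projectId=PROJECT, location=REGION, filter="ACTIVE").execute()

for job in resp.get("jobs", []):
    created = datetime.fromisoformat(job["createTime"].replace("Z", "+00:00"))
    age = datetime.now(timezone.utc) - created
    if age > MAX_AGE:
        print(f"{job['name']} ({job['currentState']}): active for {age}")
        # Force-cancelling a stuck job releases its workers, e.g.:
        # jobs_api.update(projectId=PROJECT, location=REGION, jobId=job["id"],
        #                 body={"requestedState": "JOB_STATE_CANCELLED"}).execute()
```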

Pipeline Breaks from Outdated Dependency Images in Dataflow
Compute
Cloud Provider: GCP
Service Name: GCP Dataflow
Inefficiency Type: Operational Overhead from Custom Image Maintenance

In restricted or isolated network environments, Dataflow workers often cannot reach the public internet to download runtime dependencies. To operate securely, organizations build custom worker images that bundle required libraries. However, these images must be manually updated to keep dependencies current. As upstream packages evolve, outdated internal images can cause pipeline errors, execution delays, or total job failures. Each failure wastes worker runtime, increases troubleshooting time, and leads to rebuild cycles that inflate operational and compute costs.
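
A scheduled freshness check on the dependencies pinned into the image can flag rebuilds before pipelines break. The sketch below compares a requirements file against the standard PyPI JSON API; the file path and index URL are placeholders (point the latter at your internal mirror, which is assumed to expose the same API).

```python
# A minimal freshness check for the dependencies pinned into a custom worker
# image; the requirements path and index URL are placeholders, and the index
# is assumed to expose the standard PyPI JSON API.
import requests

REQUIREMENTS_FILE = "worker-image/requirements.txt"
INDEX_URL = "https://pypi.org/pypi"

with open(REQUIREMENTS_FILE) as fh:
    pins = [line.strip() for line in fh
            if "==" in line and not line.lstrip().startswith("#")]

for pin in pins:
    name, pinned = pin.split("==", 1)
    info = requests.get(f"{INDEX_URL}/{name}/json", timeout=10).json()["info"]
    if info["version"] != pinned:
        print(f"{name}: pinned {pinned}, latest {info['version']} - rebuild the image")
```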
