When Marketplace contracts or subscriptions expire or change without visibility, Azure may automatically continue billing at higher on-demand or list prices. These lapses often go unnoticed due to lack of proactive tracking, ownership, or renewal alerts, resulting in substantial cost increases. The issue is amplified when contract records are siloed across procurement, finance, and engineering teams, with no centralized mechanism to monitor entitlement status or reconcile expected versus actual billing.
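A lightweight guardrail is to reconcile Marketplace charges against expected contract pricing on a schedule. Below is a minimal sketch, assuming Python with the requests and azure-identity packages and a placeholder subscription scope, that uses the Azure Cost Management Query API to total Marketplace spend by publisher so that a month-over-month jump after a lapsed contract stands out:

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder scope; a billing account scope works here as well.
SCOPE = "/subscriptions/00000000-0000-0000-0000-000000000000"
URL = f"https://management.azure.com{SCOPE}/providers/Microsoft.CostManagement/query?api-version=2023-03-01"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
body = {
    "type": "ActualCost",
    "timeframe": "MonthToDate",
    "dataset": {
        "granularity": "None",
        "aggregation": {"totalCost": {"name": "Cost", "function": "Sum"}},
        "grouping": [{"type": "Dimension", "name": "PublisherName"}],
        # Restrict to Marketplace charges so first-party Azure spend is excluded.
        "filter": {
            "dimensions": {
                "name": "PublisherType",
                "operator": "In",
                "values": ["Marketplace"],
            }
        },
    },
}
resp = requests.post(URL, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
for row in resp.json()["properties"]["rows"]:
    print(row)  # column order follows resp.json()["properties"]["columns"]
```

Feeding this output into a monthly report gives procurement, finance, and engineering a shared view of entitlement drift instead of three siloed ones.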
In many organizations, AWS Marketplace purchases are lumped into a single consolidated billing line without visibility into individual vendors. This lack of transparency makes it difficult to identify which Marketplace spend is eligible to count toward the EDP cap. As a result, teams may either overspend on direct AWS services to fulfill their commitment unnecessarily or miss the opportunity to right-size new commitments based on existing Marketplace purchases. In both cases, the absence of vendor-level detail hinders optimization.
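Vendor-level detail is recoverable from Cost Explorer without changing the billing setup. As a sketch, assuming boto3 and placeholder dates, the query below isolates Marketplace charges and groups them by seller of record so eligible spend can be mapped against the EDP cap:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-07-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    # Keep only Marketplace charges, then break them out per seller of record.
    Filter={"Dimensions": {"Key": "BILLING_ENTITY", "Values": ["AWS Marketplace"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "LEGAL_ENTITY_NAME"}],
)
for period in resp["ResultsByTime"]:
    for group in period["Groups"]:
        vendor = group["Keys"][0]
        cost = group["Metrics"]["UnblendedCost"]["Amount"]
        print(period["TimePeriod"]["Start"], vendor, cost)
```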
Azure Marketplace offers two types of listings: transactable and non-transactable. Only transactable purchases contribute toward a customer’s MACC commitment. However, many teams mistakenly assume that all Marketplace spend counts, which inflates burn-down forecasts and forfeits opportunities to steer purchases toward eligible listings. Selecting a non-transactable listing when a transactable equivalent exists can mean acquiring an identical service at a higher effective cost due to lost discounts. This confusion is exacerbated when procurement and engineering teams do not coordinate or consult Microsoft's guidance.
Many organizations mistakenly believe that all AWS Marketplace spend automatically contributes to their EDP commitment. In reality, only certain Marketplace transactions (those involving EDP-eligible vendors and transactable SKUs) count, and often only up to a capped portion of the commitment. This misunderstanding can lead to double counting: forecasting on the assumption that both native AWS usage and Marketplace purchases will fully draw down the commitment. If those assumptions are wrong, the organization risks failing to meet its EDP threshold, incurring penalties or losing expected discounts.
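The failure mode is easy to see with a toy drawdown model. All figures below are hypothetical, and the eligibility share and cap should come from your actual EDP terms:

```python
# Hypothetical figures: a $10M annual EDP commitment.
commitment = 10_000_000
native_spend = 7_500_000       # native AWS usage, fully eligible
marketplace_spend = 3_000_000  # total Marketplace purchases

# Naive forecast: assume every Marketplace dollar draws down the commitment.
naive_drawdown = native_spend + marketplace_spend          # 10.5M -> looks "safe"

# Reality: only eligible vendors/SKUs count, and only up to a cap
# (assumed here to be 25% of the commitment; check your agreement).
eligible_share = 0.6           # assumed fraction of Marketplace spend that is eligible
marketplace_cap = 0.25 * commitment
counted = min(marketplace_spend * eligible_share, marketplace_cap)
actual_drawdown = native_spend + counted                   # 9.3M -> shortfall

print(f"naive:  {naive_drawdown:,.0f}")
print(f"actual: {actual_drawdown:,.0f} (shortfall: {commitment - actual_drawdown:,.0f})")
```

In this toy scenario the naive forecast clears the commitment while the actual drawdown falls $700K short, which is exactly the gap that triggers penalties or forfeited discounts.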
Organizations frequently inherit continuous recording by default (e.g., through landing zones) without validating the business need for per-change granularity across all resource types and environments. In change-heavy accounts (ephemeral resources, CI/CD churn, autoscaling), continuous mode drives very high volumes of configuration items recorded (CIRs) with limited additional operational value. Selecting periodic recording for lower-risk resource types and/or non-production environments can maintain necessary visibility while reducing CIR volume and cost. Recorder settings are account- and region-scoped, so you can apply continuous recording in production where required and periodic recording elsewhere.
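AWS Config exposes this choice directly on the recorder. A minimal boto3 sketch, with a placeholder role ARN and an illustrative override list: set the baseline frequency to DAILY and keep continuous recording only for the types that justify it:

```python
import boto3

config = boto3.client("config")
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",  # placeholder
        "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": True},
        "recordingMode": {
            # Periodic (daily) snapshots as the baseline for this account/region.
            "recordingFrequency": "DAILY",
            "recordingModeOverrides": [
                {
                    "description": "Keep per-change history for IAM roles",
                    "resourceTypes": ["AWS::IAM::Role"],
                    "recordingFrequency": "CONTINUOUS",
                }
            ],
        },
    }
)
```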
By default, AWS Config can be set to record changes across all supported resource types, including those that change frequently, such as security group rules, IAM role policies, route tables, and network interfaces, as well as the ephemeral resources common in containerized or auto-scaling setups. These high-churn resources can generate an outsized number of configuration items and inflate costs, especially in dynamic or large-scale environments.
This inefficiency arises when recording is enabled indiscriminately across all resources without evaluating whether the data is necessary. Without targeted scoping, teams may incur large charges for configuration data that provides minimal value, especially in non-production environments. This can also obscure meaningful compliance signals by introducing noise.
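Scoping can also be done at the resource-type level. The sketch below, again with a placeholder role ARN, records all supported types except a high-churn one using the exclusion strategy:

```python
import boto3

config = boto3.client("config")
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",  # placeholder
        "recordingGroup": {
            "allSupported": False,
            # Record all supported types EXCEPT the listed high-churn ones.
            "recordingStrategy": {"useOnly": "EXCLUSION_BY_RESOURCE_TYPES"},
            "exclusionByResourceTypes": {
                "resourceTypes": [
                    "AWS::EC2::NetworkInterface",  # churny in autoscaled/container setups
                ]
            },
        },
    }
)
```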
VPC Flow Logs configured with the ALL filter and delivered to CloudWatch Logs often result in unnecessarily high log ingestion volumes — especially in high-traffic environments. This setup is rarely required for day-to-day monitoring or security use cases but is commonly enabled by default or for temporary debugging and then left in place. As a result, teams incur excessive CloudWatch charges without realizing the logging configuration is misaligned with actual needs.
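Where accept-traffic logging is not actually needed, narrowing the filter and redirecting delivery is a single API call. A boto3 sketch with placeholder identifiers: capture only rejected traffic and deliver to S3, which generally costs less than CloudWatch Logs ingestion at high volume:

```python
import boto3

ec2 = boto3.client("ec2")
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],   # placeholder VPC ID
    ResourceType="VPC",
    TrafficType="REJECT",                    # drop the ALL filter; keep security signal
    LogDestinationType="s3",
    LogDestination="arn:aws:s3:::my-flow-logs-bucket",  # placeholder bucket
)
```

Note that an existing flow log cannot be edited in place: create the replacement first, then delete the over-broad one.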
Teams often overuse Microsoft-hosted agents by running redundant or low-value jobs, failing to configure pipelines efficiently, or neglecting to use self-hosted agents for steady workloads. These inefficiencies result in unnecessary cost and delivery friction, especially when pipelines create queues due to limited agent availability.
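A quick audit of where hosted minutes go helps decide which pipelines justify a self-hosted pool. The sketch below, assuming the Azure DevOps REST API with placeholder organization, project, and PAT values, totals recent build durations per pipeline definition:

```python
import base64
from collections import defaultdict
from datetime import datetime

import requests

ORG, PROJECT, PAT = "my-org", "my-project", "..."  # placeholders
url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/build/builds?api-version=7.0&$top=200"
auth = base64.b64encode(f":{PAT}".encode()).decode()
builds = requests.get(url, headers={"Authorization": f"Basic {auth}"}).json()["value"]

def parse(ts: str) -> datetime:
    # Timestamps look like 2024-01-01T12:00:00.1234567Z; drop the
    # sub-second part and the trailing Z for portable parsing.
    return datetime.fromisoformat(ts.replace("Z", "").split(".")[0])

minutes: dict[str, float] = defaultdict(float)
for b in builds:
    if b.get("startTime") and b.get("finishTime"):
        dur = parse(b["finishTime"]) - parse(b["startTime"])
        minutes[b["definition"]["name"]] += dur.total_seconds() / 60

# The heaviest consumers of hosted minutes are the best self-hosted candidates.
for name, mins in sorted(minutes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {mins:.0f} min")
```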
By default, all Log Analytics tables are created under the Analytics plan, which is optimized for high-performance querying and interactive analysis. However, not all telemetry requires real-time access or frequent querying. Some tables may serve audit, archival, or compliance use cases where querying is rare or unnecessary. Leaving such tables on the Analytics plan results in unnecessary spend—especially when ingestion volumes are high or the table receives data from verbose sources (e.g., diagnostic logs, platform metrics).
Azure now allows users to assign different pricing plans at the table level, including the Basic plan, which offers significantly lower ingestion costs at the expense of reduced query functionality. This provides a valuable opportunity to align cost with access patterns by assigning less expensive plans to tables that are retained for record-keeping or compliance, rather than analysis.
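The plan switch is a per-table update. Below is a sketch against the Azure management REST API, assuming the azure-identity package and placeholder subscription, workspace, and table names; only certain tables (ContainerLogV2 is one commonly cited example) support the Basic plan, so verify support before applying it broadly:

```python
import requests
from azure.identity import DefaultAzureCredential

SUB = "00000000-0000-0000-0000-000000000000"  # placeholders throughout
RG, WORKSPACE, TABLE = "my-rg", "my-law", "ContainerLogV2"

url = (
    f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
    f"/providers/Microsoft.OperationalInsights/workspaces/{WORKSPACE}"
    f"/tables/{TABLE}?api-version=2022-10-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

# Move the table to the Basic plan: cheaper ingestion, reduced KQL and retention.
resp = requests.patch(
    url,
    json={"properties": {"plan": "Basic"}},
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
```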
While high-frequency alerting is sometimes justified for production SLAs, it's often overused across non-critical alerts or replicated blindly across environments. Projects with multiple environments (e.g., dev, QA, staging, prod) often duplicate alert rules without adjusting for business impact, which can lead to alert sprawl and inflated monitoring costs.
In large-scale environments, reducing the frequency of non-critical alerts—especially in lower environments—can yield significant savings. Teams often overlook this lever because alert configuration is considered part of operational hygiene rather than cost control. Tuning alert frequencies based on SLA requirements and actual urgency is a low-friction optimization opportunity that does not compromise observability when implemented thoughtfully.
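A periodic sweep makes the sprawl visible. The sketch below, assuming a placeholder subscription ID and the scheduled query rules API, lists log search alert rules subscription-wide (where evaluation frequency is a direct pricing dimension) and flags one-minute evaluation on rules whose names suggest non-production, a naive heuristic used here only for illustration:

```python
import requests
from azure.identity import DefaultAzureCredential

SUB = "00000000-0000-0000-0000-000000000000"  # placeholder
url = (
    f"https://management.azure.com/subscriptions/{SUB}"
    "/providers/Microsoft.Insights/scheduledQueryRules?api-version=2021-08-01"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
rules = requests.get(url, headers={"Authorization": f"Bearer {token}"}).json()["value"]

NONPROD_HINTS = ("dev", "qa", "test", "staging")  # naive name-based heuristic
for rule in rules:
    freq = rule["properties"].get("evaluationFrequency")  # ISO 8601, e.g. PT1M
    if freq == "PT1M" and any(h in rule["name"].lower() for h in NONPROD_HINTS):
        print(f"review: {rule['name']} evaluates every minute")
```

Flagged rules are candidates for a longer evaluation window, not automatic changes; the point is to make frequency a deliberate, SLA-driven decision rather than a copied default.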