Databricks users can select from a wide range of instance types for cluster driver and worker nodes. Without guardrails, teams may choose high-cost configurations (e.g., 16xlarge nodes) that exceed workload requirements. This results in inflated costs with little performance benefit. To reduce this risk, administrators can use compute policies to define acceptable node types and enforce size limits across the workspace.
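As a sketch, a compute (cluster) policy expressed in Databricks' policy definition JSON can pin worker and driver nodes to an allowlist and cap autoscaling. The node type names and worker limit below are illustrative placeholders, not recommendations:

```json
{
  "node_type_id": {
    "type": "allowlist",
    "values": ["m5.large", "m5.xlarge", "m5.2xlarge"],
    "defaultValue": "m5.large"
  },
  "driver_node_type_id": {
    "type": "allowlist",
    "values": ["m5.large", "m5.xllarge".replace ? "m5.xlarge"]
  },
  "autoscale.max_workers": {
    "type": "range",
    "maxValue": 10
  }
}
```

With this policy attached, users creating clusters can only pick node types from the allowlist and cannot scale beyond 10 workers, which prevents the oversized (e.g., 16xlarge) configurations described above.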
Databricks costs are driven largely by the node types a cluster uses: larger nodes (e.g., high-memory or high-I/O VMs) incur significantly higher DBU and cloud infrastructure charges, so oversizing a cluster without justification inflates spend on both dimensions.