Compute (Clusters & Warehouses)

The Compute section gives you visibility into every Databricks cluster in your environment — what they cost, how well they’re utilized, and where optimization opportunities exist. SQL warehouse visibility is available through the SQL Analysis page.

The cluster list shows all clusters across your connected workspaces. Each row displays:

| Column | What it shows |
| --- | --- |
| Cluster | Cluster name and ID, with a link to Databricks |
| Type | All-purpose, job, or pipeline |
| State | Live state from the Databricks API (running, terminated, pending, restarting, resizing, terminating, deleted, or error), with a freshness indicator |
| Utilization | Sparkline showing CPU utilization trend over the period |
| CPU % | Average CPU utilization over the period |
| Idle (hrs) | Total idle hours with no running commands or jobs |
| Cost (range) | Total cost for the selected time range |
| Uptime (hrs) | Total cluster uptime hours in the period |
| Since | When the cluster configuration was first seen |

Use the column headers to sort by any field. Sorting by cost descending quickly surfaces your most expensive clusters.

Use the text search box to filter clusters by name. Global filters in the header bar (workspace, team, department, org unit, region, and tags) also apply to the cluster list.
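The list behaviors described above, a case-insensitive name filter followed by a cost-descending sort, can be sketched in a few lines of Python. The row dictionaries and field names here are illustrative assumptions, not the product's actual schema.

```python
# Hypothetical cluster rows; fields mirror the list columns above.
clusters = [
    {"name": "etl-nightly", "type": "job", "cpu_pct": 72.0, "cost": 412.50},
    {"name": "adhoc-analytics", "type": "all-purpose", "cpu_pct": 9.5, "cost": 930.00},
    {"name": "dlt-pipeline", "type": "pipeline", "cpu_pct": 55.0, "cost": 120.25},
]

def filter_and_sort(rows, search=""):
    """Case-insensitive name filter, then cost descending."""
    hits = [r for r in rows if search.lower() in r["name"].lower()]
    return sorted(hits, key=lambda r: r["cost"], reverse=True)

top = filter_and_sort(clusters)
print([r["name"] for r in top])  # most expensive cluster first
```

Sorting a filtered copy (rather than mutating in place) keeps the underlying list intact, which matches how the global filters narrow the view without discarding data.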

The Compute page has two tabs:

  • Overview — The cluster list with summary charts showing clusters by type and state distribution
  • Optimization — Oversized driver detection and weekend spend analysis across workspaces

Click any cluster to open its dedicated detail page.

Five cards across the top of the page show key metrics at a glance:

| Card | What it shows |
| --- | --- |
| Live State | Current state from the Databricks API with freshness indicator |
| Workers | Current or configured worker count |
| Cost | Total cost (DBU + cloud) for the selected time range |
| Utilization | Average CPU utilization percentage with a sparkline |
| Auto-terminate | Auto-termination setting in minutes, or “Disabled”, “On job end” (for job clusters), or “Pipeline managed” (for pipeline clusters) |

A daily cost bar chart for the cluster over the selected time range. When cost breakdown mode is enabled, the chart shows DBU cost and cloud infrastructure cost as stacked bars. The chart includes a trend line and average reference line. Hover over any day to see the exact amounts.
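The numbers behind that chart reduce to simple arithmetic: each day's DBU cost and cloud infrastructure cost stack to a daily total, and the mean of the daily totals gives the average reference line. A minimal sketch with illustrative figures:

```python
# Per-day cost components (hypothetical values, in account currency).
daily = {
    "2024-06-01": {"dbu": 40.0, "cloud": 25.0},
    "2024-06-02": {"dbu": 55.0, "cloud": 30.0},
    "2024-06-03": {"dbu": 20.0, "cloud": 10.0},
}

# Stacked bar height = DBU cost + cloud cost for that day.
totals = {day: c["dbu"] + c["cloud"] for day, c in daily.items()}

# The average reference line is the mean of the daily totals.
average_line = sum(totals.values()) / len(totals)
print(totals, average_line)
```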

If the cluster has cost outside the selected time range, an alert banner offers a link to view all-time cost.

For all-purpose clusters with multiple days of data, a daily CPU utilization bar chart shows utilization over time. A summary below the chart shows average idle time and average memory usage.

For job and pipeline clusters with a single run, a compact run summary view shows:

| Metric | What it measures |
| --- | --- |
| CPU | Average CPU usage during the run |
| Memory | Average memory usage during the run |
| Idle | Idle minutes during the run |
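These run metrics can be sketched as aggregations over per-minute utilization samples. The sample shape and the idle definition (a minute with no command running, mirroring the idle-hours column in the cluster list) are assumptions for illustration:

```python
# (cpu %, memory %, command_running) per minute — hypothetical samples.
samples = [
    (80.0, 60.0, True),
    (75.0, 62.0, True),
    (2.0, 40.0, False),
    (1.0, 39.0, False),
]

avg_cpu = sum(s[0] for s in samples) / len(samples)
avg_mem = sum(s[1] for s in samples) / len(samples)
# Assumed idle definition: minutes with no running command.
idle_minutes = sum(1 for s in samples if not s[2])
print(avg_cpu, avg_mem, idle_minutes)
```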

If the cluster was created by a job or pipeline, a link to the parent work unit is shown.

The cluster’s current (or last-known) configuration:

  • Workspace ID
  • Cluster ID (with link to Databricks)
  • Type (all-purpose, job, pipeline)
  • Node type (worker instance type)
  • Driver node type
  • Worker count
  • Creator

SQL warehouse visibility is provided through the SQL Analysis page rather than the Compute page. The SQL Analysis overview tab includes a “Top Warehouses by Volume” section showing:

| Column | What it shows |
| --- | --- |
| Warehouse | Warehouse name and workspace |
| Queries | Number of queries executed in the period |
| Cost | Total cost for the selected time range |
| Error % | Percentage of queries that failed |
| Avg Runtime | Mean query execution time |
| P95 Runtime | 95th percentile query time |
| Avg Queue | Mean time queries wait before execution starts |
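The per-warehouse metrics above can be derived from individual query records. This sketch uses hypothetical records and a nearest-rank P95 (one common percentile definition; the product's exact method is not specified here):

```python
import math

# Hypothetical per-query records; runtimes and queue times in seconds.
queries = [
    {"runtime": 1.2, "queue": 0.1, "failed": False},
    {"runtime": 2.5, "queue": 0.0, "failed": False},
    {"runtime": 0.8, "queue": 0.3, "failed": True},
    {"runtime": 30.0, "queue": 1.5, "failed": False},
]

def p95(values):
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered))
    return ordered[rank - 1]

error_pct = 100.0 * sum(q["failed"] for q in queries) / len(queries)
avg_runtime = sum(q["runtime"] for q in queries) / len(queries)
p95_runtime = p95([q["runtime"] for q in queries])
avg_queue = sum(q["queue"] for q in queries) / len(queries)
```

Note how a single slow query dominates P95 while only nudging the mean, which is why the table reports both.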

For deeper warehouse-level query cost analysis, the SQL Analysis page also provides per-warehouse query cost breakdowns. See SQL Analysis for details.

Use the cluster list to identify optimization opportunities:

  1. Sort by cost to find the most expensive clusters.
  2. Check CPU % — a high-cost cluster with low utilization is a prime optimization target.
  3. Switch to the Optimization tab to find oversized drivers and excessive weekend spend.
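Steps 1 and 2 amount to ranking by cost and flagging low utilization. A minimal sketch, where the 20% CPU cutoff is an assumed example rather than a product default:

```python
# Hypothetical cluster rows matching the list columns above.
clusters = [
    {"name": "adhoc-analytics", "cpu_pct": 9.5, "cost": 930.00},
    {"name": "etl-nightly", "cpu_pct": 72.0, "cost": 412.50},
    {"name": "dlt-pipeline", "cpu_pct": 55.0, "cost": 120.25},
]

LOW_CPU_THRESHOLD = 20.0  # assumed cutoff for "low utilization"

# Cost descending, keeping only clusters below the CPU threshold:
targets = [
    c["name"]
    for c in sorted(clusters, key=lambda c: c["cost"], reverse=True)
    if c["cpu_pct"] < LOW_CPU_THRESHOLD
]
print(targets)  # prime optimization targets, most expensive first
```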