# Compute (Clusters & Warehouses)
The Compute section gives you visibility into every Databricks cluster in your environment — what they cost, how well they’re utilized, and where optimization opportunities exist. SQL warehouse visibility is available through the SQL Analysis page.
## Clusters

### Cluster list

The cluster list shows all clusters across your connected workspaces. Each row displays:
| Column | What it shows |
|---|---|
| Cluster | Cluster name and ID, with a link to Databricks |
| Type | All-purpose, job, or pipeline |
| State | Live state from the Databricks API (running, terminated, pending, restarting, resizing, terminating, deleted, or error), with a freshness indicator |
| Utilization | Sparkline showing CPU utilization trend over the period |
| CPU % | Average CPU utilization over the period |
| Idle (hrs) | Total idle hours with no running commands or jobs |
| Cost (range) | Total cost for the selected time range |
| Uptime (hrs) | Total cluster uptime hours in the period |
| Since | When the cluster configuration was first seen |
Use the column headers to sort by any field. Sorting by cost descending quickly surfaces your most expensive clusters.
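The cost sort is easy to reproduce outside the UI as well. A minimal sketch (the dict keys here are illustrative placeholders, not the product's schema):

```python
# Sort cluster rows by total cost, descending, so the most expensive
# clusters surface first -- mirroring the UI's cost sort.
# Keys ("name", "cost") are illustrative, not the product's field names.
clusters = [
    {"name": "etl-nightly", "cost": 412.50},
    {"name": "adhoc-analytics", "cost": 1280.00},
    {"name": "dev-sandbox", "cost": 37.25},
]

by_cost = sorted(clusters, key=lambda c: c["cost"], reverse=True)
print([c["name"] for c in by_cost])
```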
### Search and filtering

Use the text search box to filter clusters by name. Global filters in the header bar (workspace, team, department, org unit, region, and tags) also apply to the cluster list.
### Overview and Optimization tabs

The Compute page has two tabs:
- Overview — The cluster list with summary charts showing clusters by type and state distribution
- Optimization — Oversized driver detection and weekend spend analysis across workspaces
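The product's exact detection rules aren't documented on this page. One plausible heuristic for oversized drivers, flagging clusters whose driver instance has more memory than its workers, can be sketched like this (the instance-size table and threshold are illustrative assumptions, not the actual detection logic):

```python
# Guessed oversized-driver heuristic: flag a cluster whose driver node
# has more memory than its worker nodes, since the driver rarely needs
# more resources than the workers it coordinates.
NODE_MEMORY_GB = {  # sample instance sizes (illustrative)
    "m5.large": 8,
    "m5.2xlarge": 32,
    "m5.4xlarge": 64,
}

def driver_oversized(driver_type: str, worker_type: str) -> bool:
    return NODE_MEMORY_GB[driver_type] > NODE_MEMORY_GB[worker_type]

print(driver_oversized("m5.4xlarge", "m5.large"))   # flagged
print(driver_oversized("m5.large", "m5.2xlarge"))   # not flagged
```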
### Cluster detail view

Click any cluster to see its dedicated detail page. The detail view includes:
#### Summary cards

Five cards across the top of the page show key metrics at a glance:
| Card | What it shows |
|---|---|
| Live State | Current state from the Databricks API with freshness indicator |
| Workers | Current or configured worker count |
| Cost | Total cost (DBU + cloud) for the selected time range |
| Utilization | Average CPU utilization percentage with a sparkline |
| Auto-terminate | Auto-termination setting in minutes, or “Disabled”, “On job end” (for job clusters), or “Pipeline managed” (for pipeline clusters) |
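The Auto-terminate card's display rule can be summarized as a small mapping. This is a sketch of the behavior described above; the field names are illustrative, though `autotermination_minutes` set to 0 meaning "disabled" matches Databricks cluster semantics:

```python
# Sketch of the Auto-terminate card's display rule described above.
# Field names are illustrative; 0 minutes means auto-termination is
# disabled, as in the Databricks cluster API.
def auto_terminate_label(cluster_type: str, autotermination_minutes: int) -> str:
    if cluster_type == "job":
        return "On job end"
    if cluster_type == "pipeline":
        return "Pipeline managed"
    if autotermination_minutes == 0:
        return "Disabled"
    return f"{autotermination_minutes} min"

print(auto_terminate_label("all-purpose", 30))   # "30 min"
print(auto_terminate_label("all-purpose", 0))    # "Disabled"
print(auto_terminate_label("job", 0))            # "On job end"
```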
#### Cost trend

A daily cost bar chart for the cluster over the selected time range. When cost breakdown mode is enabled, the chart shows DBU cost and cloud infrastructure cost as stacked bars. The chart includes a trend line and average reference line. Hover over any day to see the exact amounts.
If the cluster has cost outside the selected time range, an alert banner offers a link to view all-time cost.
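The data behind the stacked chart is straightforward to model. A sketch with illustrative numbers (not real product data): each bar is the day's DBU cost stacked on its cloud cost, and the reference line is the mean of the daily totals.

```python
# Sketch of the stacked daily cost chart's underlying data: per-day DBU
# and cloud cost, plus the average reference line. Values illustrative.
daily = [
    {"day": "2024-06-01", "dbu": 40.0, "cloud": 25.0},
    {"day": "2024-06-02", "dbu": 55.0, "cloud": 30.0},
    {"day": "2024-06-03", "dbu": 20.0, "cloud": 10.0},
]

totals = [d["dbu"] + d["cloud"] for d in daily]  # stacked bar heights
average = sum(totals) / len(totals)              # average reference line
print(totals, average)
```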
#### Utilization

For all-purpose clusters with multiple days of data, a daily bar chart shows CPU utilization over time. A summary below the chart shows average idle time and average memory usage.
For job and pipeline clusters with a single run, a compact run summary view shows:
| Metric | What it measures |
|---|---|
| CPU | Average CPU usage during the run |
| Memory | Average memory usage during the run |
| Idle | Idle minutes during the run |
If the cluster was created by a job or pipeline, a link to the parent work unit is shown.
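How the idle figure is derived isn't specified here. One reasonable interpretation, offered as an assumption: idle minutes are the portion of the run's wall clock not covered by any executing command.

```python
# Illustrative idle-time calculation for a single run: minutes of the
# run's wall clock not covered by any executing command. "busy" holds
# (start_min, end_min) offsets from run start; sorting and clamping
# handle overlapping intervals. This is a guessed model, not the
# product's documented computation.
def idle_minutes(run_minutes: int, busy: list[tuple[int, int]]) -> int:
    covered = 0
    last_end = 0
    for start, end in sorted(busy):
        start = max(start, last_end)  # skip already-counted overlap
        if end > start:
            covered += end - start
            last_end = end
    return run_minutes - covered

# 60-minute run, commands active 0-10 and 25-40 -> 35 idle minutes
print(idle_minutes(60, [(0, 10), (25, 40)]))
```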
#### Configuration

The cluster’s current (or last-known) configuration:
- Workspace ID
- Cluster ID (with link to Databricks)
- Type (all-purpose, job, pipeline)
- Node type (worker instance type)
- Driver node type
- Worker count
- Creator
## SQL warehouses

SQL warehouse visibility is provided through the SQL Analysis page rather than the Compute page. The SQL Analysis overview tab includes a “Top Warehouses by Volume” section showing:
| Column | What it shows |
|---|---|
| Warehouse | Warehouse name and workspace |
| Queries | Number of queries executed in the period |
| Cost | Total cost for the selected time range |
| Error % | Percentage of queries that failed |
| Avg Runtime | Mean query execution time |
| P95 Runtime | 95th percentile query time |
| Avg Queue | Mean time queries wait before execution starts |
For deeper warehouse-level query cost analysis, the SQL Analysis page also provides per-warehouse query cost breakdowns. See SQL Analysis for details.
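The table's aggregate metrics follow directly from per-query records. A self-contained sketch (field names and the nearest-rank P95 method are assumptions, not the product's documented implementation):

```python
# Sketch of the warehouse metrics above, computed from per-query
# records. Field names ("runtime_s", "failed") are illustrative.
import math

queries = [
    {"runtime_s": 1.2, "failed": False},
    {"runtime_s": 0.8, "failed": False},
    {"runtime_s": 9.5, "failed": True},
    {"runtime_s": 2.1, "failed": False},
]

error_pct = 100 * sum(q["failed"] for q in queries) / len(queries)
avg_runtime = sum(q["runtime_s"] for q in queries) / len(queries)

# Nearest-rank P95: the runtime at the 95th-percentile position of the
# sorted runtimes (one common definition among several).
runtimes = sorted(q["runtime_s"] for q in queries)
p95 = runtimes[math.ceil(0.95 * len(runtimes)) - 1]

print(error_pct, avg_runtime, p95)
```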
## Comparing resources

Use the cluster list to identify optimization opportunities:
- Sort by cost to find the most expensive clusters.
- Check CPU % — a high-cost cluster with low utilization is a prime optimization target.
- Switch to the Optimization tab to find oversized drivers and excessive weekend spend.
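The triage steps above amount to a simple filter: expensive and underutilized. A sketch with illustrative thresholds (the $500 cost and 20% CPU cutoffs are arbitrary examples, not product defaults):

```python
# Illustrative triage filter: surface clusters that cost a lot but run
# mostly idle -- the prime optimization targets described above.
clusters = [
    {"name": "etl-nightly", "cost": 1200.0, "cpu_pct": 12.0},
    {"name": "bi-serving", "cost": 900.0, "cpu_pct": 65.0},
    {"name": "dev-sandbox", "cost": 40.0, "cpu_pct": 5.0},
]

targets = [
    c["name"]
    for c in clusters
    if c["cost"] > 500 and c["cpu_pct"] < 20  # thresholds are examples
]
print(targets)
```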
## Next steps

- SQL Analysis — Query-level cost investigation and warehouse metrics
- Work Units — Job and pipeline cost tracking
- Budgets — Set spending targets for compute resources
- Waste Detection & Insights — How idle and overprovisioned resources are identified
- Insights & Actions — Acting on compute optimization recommendations