Overview Dashboard
The Overview Dashboard is your starting point in LakeSentry — a single page that summarizes your entire Databricks spend, highlights what changed, and surfaces what needs attention.
For a first-time walkthrough of the dashboard layout and navigation, see Understanding the Dashboard.
Dashboard sections
Cost trend chart
The main chart displays total Databricks spend over the selected time range as a bar chart. Use the granularity toggle (Day / Week / Month) to change the aggregation level.
When the cost mode is set to Breakdown, the chart shows stacked DBU vs. cloud costs. In all other modes it shows a single series for the selected cost view.
Hover over any bar to see the exact cost for that period.
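For intuition on what the granularity toggle does, the sketch below rolls a hypothetical table of daily cost records up to weekly and monthly totals with pandas. The column names and figures are assumptions for illustration, not LakeSentry's internal schema.

```python
import pandas as pd

# Hypothetical daily cost records (date, usd) -- not LakeSentry's actual schema.
daily = pd.DataFrame(
    {
        "date": pd.date_range("2024-01-01", periods=90, freq="D"),
        "usd": 120.0,  # flat spend just to keep the example small
    }
).set_index("date")

# "Day" granularity is the raw series; "Week" and "Month" sum the same data
# into coarser buckets, which is what the toggle switches between.
weekly = daily["usd"].resample("W").sum()
monthly = daily["usd"].resample("MS").sum()

print(weekly.head())
print(monthly.head())
```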
Top cost drivers
Below the trend chart, a ranked list shows the resources consuming the most budget. Each entry displays:
| Column | What it shows |
|---|---|
| Resource | Resource name and workspace |
| Type | Resource type (job, pipeline, warehouse, cluster, or serving) |
| Cost | Total cost for the selected time range |
| Activity | Type-specific activity metric (runs, queries, or requests) |
Click any row to expand it and see type-specific details (success rate, duration, error rate, etc.), with a link to the full detail view.
Above the table, a cost category breakdown shows spend grouped into clickable cards. Click a category to filter the table:
- Jobs & Pipelines — Job and DLT pipeline costs
- SQL Warehouses — Classic and serverless warehouse costs
- Interactive Clusters — All-purpose cluster costs
- Model Serving — Inference endpoint costs
- Platform & Other — Platform overhead and unattributed costs
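Conceptually, the category cards and the ranked table above amount to a mapping from resource type to category plus a sum-and-sort. The sketch below illustrates that with invented cost rows; the type-to-category mapping shown is an assumption based on the descriptions above, not LakeSentry's attribution logic.

```python
from collections import defaultdict

# Hypothetical cost rows: (resource name, resource type, cost in USD).
rows = [
    ("nightly-etl", "job", 4200.0),
    ("bronze-to-silver", "pipeline", 3100.0),
    ("bi-warehouse", "warehouse", 2800.0),
    ("dev-cluster", "cluster", 950.0),
    ("churn-model", "serving", 610.0),
]

# Assumed mapping from resource type to the dashboard's category cards.
CATEGORY = {
    "job": "Jobs & Pipelines",
    "pipeline": "Jobs & Pipelines",
    "warehouse": "SQL Warehouses",
    "cluster": "Interactive Clusters",
    "serving": "Model Serving",
}

# Category cards: total spend per category.
by_category = defaultdict(float)
for name, rtype, cost in rows:
    by_category[CATEGORY.get(rtype, "Platform & Other")] += cost

# Top cost drivers: resources ranked by cost, highest first.
top_drivers = sorted(rows, key=lambda r: r[2], reverse=True)

print(dict(by_category))
print(top_drivers[:3])
```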
Stat cards
Summary cards at the top of the page show headline numbers:
- Cost — Aggregate cost for the selected period with percentage change vs. previous period
- Cost of Failure — Total cost attributed to failed job runs
- 30-Day Forecast — Projected spend range with trend direction and confidence level
- Active Jobs — Count of jobs that ran during the period
- Insights — Count of active insights, with critical insight count highlighted
- Attribution Quality — Percentage of spend attributed to a known owner (vs. unallocated)
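Two of these headline numbers, the percentage change on the Cost card and the Attribution Quality percentage, are simple ratios. The sketch below shows the arithmetic with made-up figures; it illustrates the definitions above rather than LakeSentry's own calculation.

```python
# Made-up figures for illustration only.
current_period_cost = 48_500.0   # spend in the selected period
previous_period_cost = 42_000.0  # spend in the preceding period of equal length

# Cost card: percentage change vs. the previous period.
pct_change = (current_period_cost - previous_period_cost) / previous_period_cost * 100
print(f"Cost: ${current_period_cost:,.0f} ({pct_change:+.1f}% vs. previous period)")

# Attribution Quality card: share of spend tied to a known owner.
attributed_cost = 41_200.0
attribution_quality = attributed_cost / current_period_cost * 100
print(f"Attribution Quality: {attribution_quality:.0f}% attributed")
```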
The insights counted on the Insights card are grouped into three categories:
| Category | What it tracks |
|---|---|
| Anomaly | Cost spikes, duration spikes, failure rate spikes, and other statistical deviations |
| Waste | Idle warehouses, zombie model endpoints, runaway jobs, retry storms, unused tables |
| Optimization | Right-sizing suggestions, spot instance candidates, outdated runtimes, overprovisioned clusters |
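As an intuition for the Anomaly category, a cost spike is flagged when a period's spend deviates sharply from its recent baseline. The sketch below uses a simple z-score against the preceding days; it is a generic illustration of statistical deviation, not the detection logic LakeSentry actually runs.

```python
import statistics

# Hypothetical daily spend with a spike on the last day.
daily_usd = [1000, 1020, 980, 1010, 990, 1005, 995, 1015, 985, 2400]

baseline = daily_usd[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

# Flag the latest day if it sits more than 3 standard deviations above the baseline.
z = (daily_usd[-1] - mean) / stdev
if z > 3:
    print(f"Cost spike: ${daily_usd[-1]:,} is {z:.1f} standard deviations above normal")
```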
Savings and budget cards
Below the cost trend chart, additional cards appear:
- Savings summary — Potential and realized savings from insights
- Budget status — Progress against configured budgets (shown only when budgets are active)
Global controls
These controls appear in the page header and affect all sections on the page.
Time range selector
Choose the analysis window for all charts and tables:
| Option | Period |
|---|---|
| 24 hours | Last 24 hours |
| 7 days | Last 7 calendar days |
| 30 days | Last 30 calendar days |
| 90 days | Last 90 calendar days |
| Last year | Last 12 months |
| Month to date | Current calendar month |
| Year to date | Current calendar year |
| All time | Full data history |
| Custom | Any start and end date within your data retention window |
Some presets may be locked depending on your plan’s history limit. The selected time range is always visible in the header. Changing it refreshes all dashboard sections.
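If you want to cross-check dashboard numbers against your own queries, the presets correspond to straightforward date arithmetic, sketched below. Exact boundary handling (time zones, whether the current day is included) is an assumption here and may differ from the product.

```python
from datetime import date, timedelta

today = date.today()

# Rolling presets count back from today.
last_7_days = (today - timedelta(days=7), today)
last_30_days = (today - timedelta(days=30), today)

# "To date" presets anchor to the start of the current month or year.
month_to_date = (today.replace(day=1), today)
year_to_date = (today.replace(month=1, day=1), today)

print(last_30_days, month_to_date, year_to_date)
```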
Workspace filter
A multi-select dropdown to focus on specific Databricks workspaces. By default, all workspaces are included. Select one or more to narrow the view.
Team and department filter
Filter by your organizational hierarchy — org units, departments, or teams. This is useful for managers who want to see only their team’s spend, or for FinOps reviews scoped to a department.
Tag filter
Filter by Databricks resource tags. You can select one or more tag keys, and optionally narrow to specific tag values. This lets you slice costs by any tagging convention used in your workspaces (e.g., environment, cost center, project).
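Tags only appear in the tag filter if they are set on the underlying Databricks resources. As a reminder of what that looks like, the snippet below shows the custom_tags field of a cluster definition; the specific keys and values are an example convention, not a requirement.

```python
# Example fragment of a Databricks cluster spec with custom tags.
# Field names follow the Databricks clusters API; values are illustrative.
cluster_spec = {
    "cluster_name": "etl-prod",
    "spark_version": "15.4.x-scala2.12",
    "node_type_id": "i3.xlarge",
    "num_workers": 4,
    "custom_tags": {
        "environment": "production",
        "cost_center": "data-platform",
        "project": "customer-360",
    },
}

print(cluster_spec["custom_tags"])
```

Once tags like these flow through to billing data, the filter can slice spend by environment, cost center, project, or any other key your teams apply consistently.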
Cost mode toggle
Switch between different cost views:
| Mode | Shows |
|---|---|
| Total | DBU + Cloud combined |
| DBU Only | Databricks license costs |
| Cloud Only | Cloud infrastructure costs |
| Breakdown | DBU and Cloud as separate stacked series |
Data freshness
LakeSentry refreshes data based on your collector’s schedule — configurable to run every hour, every 4 hours, or daily. You can check your collector schedule and run history in the Settings page under the Connector tab.
If dashboard data appears outdated, check your collector health in the Connector settings. See Data Freshness & Pipeline Status for troubleshooting.
Keyboard shortcuts
| Shortcut | Action |
|---|---|
| ⌘K / Ctrl+K | Open command palette |
| / | Open search |
| g then o | Go to Overview |
| g then e | Go to Cost Explorer |
| j / k | Move down / up in lists |
| Enter | Open selected item |
| Esc | Go back or close dialog |
| ? | Show keyboard shortcuts help |
Next steps
- Cost Explorer — Deep-dive into costs with multi-dimensional breakdowns
- Insights & Actions — Review and act on optimization findings
- Understanding the Dashboard — First-time orientation guide