Overview Dashboard

The Overview Dashboard is your starting point in LakeSentry — a single page that summarizes your entire Databricks spend, highlights what changed, and surfaces what needs attention.

For a first-time walkthrough of the dashboard layout and navigation, see Understanding the Dashboard.

The main chart displays total Databricks spend over the selected time range as a bar chart. Use the granularity toggle (Day / Week / Month) to change the aggregation level.
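The granularity toggle amounts to re-bucketing the underlying daily cost rows. A minimal sketch of that aggregation, assuming daily records as (date, cost) pairs (the record shape is illustrative, not LakeSentry's API):

```python
from datetime import date
from collections import defaultdict

def bucket_costs(daily, granularity):
    """Aggregate (date, cost) rows into Day / Week / Month buckets."""
    keyers = {
        "Day": lambda d: d.isoformat(),
        "Week": lambda d: "%d-W%02d" % d.isocalendar()[:2],  # ISO year-week
        "Month": lambda d: d.strftime("%Y-%m"),
    }
    key = keyers[granularity]
    totals = defaultdict(float)
    for d, cost in daily:
        totals[key(d)] += cost
    return dict(sorted(totals.items()))

daily = [(date(2024, 3, 1), 120.0), (date(2024, 3, 2), 80.0), (date(2024, 4, 1), 50.0)]
print(bucket_costs(daily, "Month"))  # {'2024-03': 200.0, '2024-04': 50.0}
```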

When the cost mode is set to Breakdown, the chart shows stacked DBU vs. cloud costs. In all other modes it shows a single series for the selected cost view.

Hover over any bar to see the exact cost for that period.

Below the trend chart, a ranked list shows the resources consuming the most budget. Each entry displays:

Column     What it shows
Resource   Resource name and workspace
Type       Resource type (job, pipeline, warehouse, cluster, or serving)
Cost       Total cost for the selected time range
Activity   Type-specific activity metric (runs, queries, or requests)

Click any row to expand it and see type-specific details (success rate, duration, error rate, etc.), with a link to the full detail view.

Above the table, a cost category breakdown shows spend grouped into clickable cards. Click a category to filter the table:

  • Jobs & Pipelines — Job and DLT pipeline costs
  • SQL Warehouses — Classic and serverless warehouse costs
  • Interactive Clusters — All-purpose cluster costs
  • Model Serving — Inference endpoint costs
  • Platform & Other — Platform overhead and unattributed costs
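The category cards amount to grouping each resource's spend by its type. A hedged sketch, using an illustrative type-to-category mapping (not LakeSentry's internal one):

```python
from collections import defaultdict

# Illustrative mapping from resource type to dashboard category.
CATEGORY = {
    "job": "Jobs & Pipelines",
    "pipeline": "Jobs & Pipelines",
    "warehouse": "SQL Warehouses",
    "cluster": "Interactive Clusters",
    "serving": "Model Serving",
}

def category_breakdown(resources):
    """Sum cost per category; unknown types fall into Platform & Other."""
    totals = defaultdict(float)
    for r in resources:
        totals[CATEGORY.get(r["type"], "Platform & Other")] += r["cost"]
    return dict(totals)

rows = [
    {"type": "job", "cost": 400.0},
    {"type": "warehouse", "cost": 250.0},
    {"type": "system", "cost": 30.0},  # unattributed, lands in Platform & Other
]
print(category_breakdown(rows))
```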

Summary cards at the top of the page show headline numbers:

  • Cost — Aggregate cost for the selected period with percentage change vs. previous period
  • Cost of Failure — Total cost attributed to failed job runs
  • 30-Day Forecast — Projected spend range with trend direction and confidence level
  • Active Jobs — Count of jobs that ran during the period
  • Insights — Count of active insights, with critical insight count highlighted
  • Attribution Quality — Percentage of spend attributed to a known owner (vs. unallocated)
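Two of the headline numbers are plain ratios. A sketch of how the period-over-period change and the attribution-quality percentage could be computed (record shape and field names are assumptions):

```python
def pct_change(current, previous):
    """Percentage change vs. the previous period; None when no baseline."""
    if previous == 0:
        return None
    return round((current - previous) / previous * 100, 1)

def attribution_quality(records):
    """Share of spend carrying a known owner, as a percentage."""
    total = sum(r["cost"] for r in records)
    owned = sum(r["cost"] for r in records if r.get("owner"))
    return round(owned / total * 100, 1) if total else 100.0

print(pct_change(1200.0, 1000.0))  # 20.0
recs = [{"cost": 90.0, "owner": "team-data"}, {"cost": 10.0}]
print(attribution_quality(recs))  # 90.0
```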

Insights are grouped into three categories:

Category       What it tracks
Anomaly        Cost spikes, duration spikes, failure rate spikes, and other statistical deviations
Waste          Idle warehouses, zombie model endpoints, runaway jobs, retry storms, unused tables
Optimization   Right-sizing suggestions, spot instance candidates, outdated runtimes, overprovisioned clusters
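A cost spike in the Anomaly category is a statistical deviation from a recent baseline. As a rough illustration of the idea (LakeSentry's actual detection logic is not documented here), a z-score test against a trailing window looks like this:

```python
from statistics import mean, stdev

def is_cost_spike(history, today, threshold=3.0):
    """Flag today's cost if it sits more than `threshold` standard
    deviations above the trailing baseline."""
    if len(history) < 2:
        return False  # not enough history to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

baseline = [100.0, 105.0, 98.0, 102.0, 101.0, 99.0, 103.0]
print(is_cost_spike(baseline, 250.0))  # True
print(is_cost_spike(baseline, 104.0))  # False
```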

Below the cost trend chart, additional cards appear:

  • Savings summary — Potential and realized savings from insights
  • Budget status — Progress against configured budgets (shown only when budgets are active)

The following controls appear in the page header and affect all sections on the page.

Choose the analysis window for all charts and tables:

Option          Period
24 hours        Last 24 hours
7 days          Last 7 calendar days
30 days         Last 30 calendar days
90 days         Last 90 calendar days
Last year       Last 12 months
Month to date   Current calendar month
Year to date    Current calendar year
All time        Full data history
Custom          Any start and end date within your data retention window
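Each preset resolves to a concrete start/end pair. A sketch of resolving a few of them, assuming windows are inclusive of today (LakeSentry's exact boundary handling may differ):

```python
from datetime import date, timedelta

def resolve_preset(preset, today=None):
    """Translate a time-range preset into an inclusive (start, end) pair."""
    today = today or date.today()
    if preset == "7 days":
        return today - timedelta(days=6), today
    if preset == "30 days":
        return today - timedelta(days=29), today
    if preset == "Month to date":
        return today.replace(day=1), today
    if preset == "Year to date":
        return today.replace(month=1, day=1), today
    raise ValueError(f"unknown preset: {preset}")

print(resolve_preset("Month to date", today=date(2024, 3, 15)))
# (datetime.date(2024, 3, 1), datetime.date(2024, 3, 15))
```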

Some presets may be locked depending on your plan’s history limit. The selected time range is always visible in the header. Changing it refreshes all dashboard sections.

The workspace filter is a multi-select dropdown for focusing on specific Databricks workspaces. By default, all workspaces are included; select one or more to narrow the view.

The organization filter scopes the dashboard to your organizational hierarchy — org units, departments, or teams. This is useful for managers who want to see only their team’s spend, or for FinOps reviews scoped to a department.

The tag filter narrows the view by Databricks resource tags. Select one or more tag keys, and optionally narrow to specific tag values. This lets you slice costs by any tagging convention used in your workspaces (e.g., environment, cost center, project).
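Tag filtering reduces to matching each cost record's tag map against the selected keys and optional value sets. A minimal sketch with an assumed record shape:

```python
def match_tags(record_tags, selected):
    """selected maps tag key -> set of accepted values (empty set = any value)."""
    for key, values in selected.items():
        if key not in record_tags:
            return False
        if values and record_tags[key] not in values:
            return False
    return True

records = [
    {"cost": 120.0, "tags": {"environment": "prod", "project": "etl"}},
    {"cost": 40.0, "tags": {"environment": "dev"}},
]
selected = {"environment": {"prod"}}  # key plus a specific value
filtered = [r for r in records if match_tags(r["tags"], selected)]
print(sum(r["cost"] for r in filtered))  # 120.0
```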

Switch between different cost views:

Mode         Shows
Total        DBU + Cloud combined
DBU Only     Databricks license costs
Cloud Only   Cloud infrastructure costs
Breakdown    DBU and Cloud as separate stacked series
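Each cost mode is a different projection of the same per-period DBU and cloud figures. A sketch of deriving the chart series for each mode (field names are assumptions, not LakeSentry's API):

```python
def series_for_mode(rows, mode):
    """rows: list of {'dbu': float, 'cloud': float}, one entry per period."""
    if mode == "Total":
        return [r["dbu"] + r["cloud"] for r in rows]
    if mode == "DBU Only":
        return [r["dbu"] for r in rows]
    if mode == "Cloud Only":
        return [r["cloud"] for r in rows]
    if mode == "Breakdown":  # two series, stacked in the chart
        return {"dbu": [r["dbu"] for r in rows],
                "cloud": [r["cloud"] for r in rows]}
    raise ValueError(f"unknown mode: {mode}")

rows = [{"dbu": 60.0, "cloud": 40.0}, {"dbu": 30.0, "cloud": 20.0}]
print(series_for_mode(rows, "Total"))  # [100.0, 50.0]
```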

LakeSentry refreshes data based on your collector’s schedule — configurable to run every hour, every 4 hours, or daily. You can check your collector schedule and run history in the Settings page under the Connector tab.

If dashboard data appears outdated, check your collector health in the Connector settings. See Data Freshness & Pipeline Status for troubleshooting.

Shortcut      Action
⌘K / Ctrl+K   Open command palette
/             Open search
g then o      Go to Overview
g then e      Go to Cost Explorer
j / k         Move down / up in lists
Enter         Open selected item
Esc           Go back or close dialog
?             Show keyboard shortcuts help