Understanding the Dashboard

Once data starts flowing from your collectors, the LakeSentry dashboard gives you a complete picture of your Databricks spend. This page walks through the main sections and how to navigate them.

The Overview page is your starting point. It shows a high-level summary of key metrics, cost trends, and top spenders across your entire Databricks environment.

At the top of the page, stat cards summarize key metrics at a glance:

  • Total Cost — Current period spend with percent change from the previous period.
  • Cost of Failure — Spend attributed to failed job runs, linking to waste-related insights.
  • 30-Day Forecast — Projected spend range with trend direction and confidence level.
  • Active Jobs — Count of work units with activity in the selected time range.
  • Insights — Count of active insights (with critical count highlighted), linking to the full Insights queue.
  • Attribution Quality — Percentage of spend that has been attributed to teams or owners.
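
To make the stat-card definitions concrete, here is a minimal sketch of how two of these figures could be computed from raw cost records. The field names (`cost`, `team`) and the record shape are illustrative assumptions, not LakeSentry's actual schema:

```python
# Illustrative only: deriving stat-card figures from hypothetical cost records.

def percent_change(current: float, previous: float) -> float:
    """Period-over-period change, as on the Total Cost card."""
    if previous == 0:
        return 0.0
    return (current - previous) / previous * 100

def attribution_quality(records: list[dict]) -> float:
    """Share of spend attributed to a team or owner, in percent."""
    total = sum(r["cost"] for r in records)
    attributed = sum(r["cost"] for r in records if r.get("team"))
    return attributed / total * 100 if total else 0.0

records = [
    {"cost": 120.0, "team": "data-eng"},
    {"cost": 80.0, "team": None},  # unattributed spend
]
print(percent_change(200.0, 160.0))   # 25.0
print(attribution_quality(records))   # 60.0
```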

Below the stat cards, the cost trend chart shows your total Databricks spend over time, with day, week, and month granularity options. In Breakdown cost mode, the chart splits into DBU and Cloud cost layers. The chart also automatically detects and highlights anomalies — unusual spikes or drops appear as visual markers directly on the chart. Use this to spot trends — is spend growing? Are there periodic spikes? Did something change recently?
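
LakeSentry's actual anomaly model is not documented here, but the idea of flagging unusual spikes can be sketched as a simple rolling z-score over the daily cost series, assuming a deviation threshold of three standard deviations:

```python
# A minimal sketch of spike detection on a daily cost series.
# The window and threshold values are illustrative assumptions.
import statistics

def flag_anomalies(daily_costs: list[float], window: int = 7,
                   threshold: float = 3.0) -> list[int]:
    """Return indices of days whose cost deviates more than
    `threshold` standard deviations from the trailing window."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        recent = daily_costs[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(daily_costs[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

costs = [100, 102, 98, 101, 99, 103, 100, 400, 101]
print(flag_anomalies(costs))  # [7]  (the 400 spike)
```

A rolling baseline like this adapts to gradual growth, so only abrupt departures from recent history are marked on the chart.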

Below the trend chart, you’ll see the top cost drivers — the jobs, pipelines, SQL warehouses, clusters, and serving endpoints consuming the most budget. A cost category breakdown (jobs & pipelines, SQL warehouse, interactive, serving, overhead) lets you filter by compute type. Each entry shows cost, type, and activity metrics.

Click any item to expand a summary inline, or click through to its full detail view.

These controls appear in the header across all pages:

Time Range

Controls the analysis window for all charts and tables on the page. Options include:

  • Preset ranges: 24 hours, 7 days, 30 days, 90 days, 1 year, month to date, year to date, and all time
  • Custom range: Pick any start and end date

The time range is always visible in the header so you know exactly what period you’re looking at.

Workspace Filter

If you have multiple workspaces, use this multi-select filter to focus on specific ones. By default, all workspaces are included.

Org Filter

Filter by your organizational hierarchy (org unit, department, team). This is useful once you’ve set up attribution rules and org hierarchy mappings.

Tag Filter

Filter by Databricks resource tags. Select specific tag keys and values to narrow down cost data to tagged resources.
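
The combined effect of the workspace, org, and tag filters can be modeled as successive predicates over cost records. This is a hedged sketch; the field names (`workspace`, `team`, `tags`) are illustrative, not LakeSentry's schema:

```python
# Illustrative sketch: combining header filters over hypothetical cost records.

def apply_filters(records, workspaces=None, team=None, tags=None):
    """Keep records matching every active filter; a None filter
    means "all" (the default, as in the dashboard header)."""
    out = []
    for r in records:
        if workspaces is not None and r["workspace"] not in workspaces:
            continue
        if team is not None and r["team"] != team:
            continue
        if tags is not None and not all(
            r.get("tags", {}).get(k) == v for k, v in tags.items()
        ):
            continue
        out.append(r)
    return out

records = [
    {"workspace": "prod", "team": "data-eng", "tags": {"env": "prod"}, "cost": 50},
    {"workspace": "dev", "team": "ml", "tags": {"env": "dev"}, "cost": 10},
]
print(apply_filters(records, workspaces={"prod"}, tags={"env": "prod"}))
```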

Cost Mode

Switch between different cost views:

  • Total — DBU + Cloud combined
  • DBU Only — Databricks license costs
  • Cloud Only — Cloud infrastructure costs
  • Breakdown — DBU and Cloud shown separately
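
The four modes map onto the two underlying cost components. A minimal sketch (function and argument names are illustrative assumptions):

```python
# Illustrative sketch: how each cost mode views the DBU and cloud components.

def view_cost(dbu: float, cloud: float, mode: str):
    if mode == "total":
        return dbu + cloud
    if mode == "dbu":
        return dbu
    if mode == "cloud":
        return cloud
    if mode == "breakdown":
        return {"dbu": dbu, "cloud": cloud}
    raise ValueError(f"unknown mode: {mode}")

print(view_cost(300.0, 120.0, "total"))      # 420.0
print(view_cost(300.0, 120.0, "breakdown"))  # {'dbu': 300.0, 'cloud': 120.0}
```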

LakeSentry organizes features into six main areas, accessible from the sidebar:

  • Overview — The dashboard described above: trends, top spenders, anomalies.
  • Cost Explorer — Deep multi-dimensional cost analysis with 8 tabs: Breakdown, Compute Types, Principals, Tables, Comparison, Matrix, Trends, and Attribution. See Cost Explorer for details.
  • Budgets — Track spend against budgets at the team, department, or workspace level. See Budgets.
  • Commitments — Track and manage Databricks commit contracts.
  • Insights — The queue of actionable findings: anomalies, waste detection, and optimization suggestions. Each insight includes evidence, estimated savings, and a path to action. See Insights & Actions.
  • AI Agent — AI-powered cost analysis assistant.

Detailed views for each type of Databricks resource, along with configuration and governance pages:

  • All Workloads — Jobs and pipelines with per-run cost history. See Work Units.
  • Compute — Clusters and SQL warehouses with utilization metrics. See Compute.
  • SQL — Query-level cost analysis for warehouses and serverless. See SQL Analysis.
  • Serving — Model serving endpoint costs and traffic patterns. See Model Serving.
  • Storage — Storage cost breakdown and trends. See Storage.
  • Attribution — Configure attribution rules that map costs to teams and owners. See Attribution Rules.
  • Mappings — Manage organizational hierarchy mappings (org units, departments, teams).
  • Tag Governance — Manage tag policies and compliance. See Tag Governance.
  • Settings — Organization preferences, display options, and configuration.
  • Audit Log — Record of all actions, approvals, and changes. See Audit Log.

LakeSentry includes a command palette (press ⌘K or Ctrl+K) for quick navigation and search. Type the name of any page, resource, or entity to jump directly to it.

Here’s how a typical cost investigation works using the dashboard:

  1. Start on Overview — Notice a cost spike in the trend chart or an anomaly marker.
  2. Identify the driver — Check top spenders to see which workspace, job, or user is responsible.
  3. Drill into details — Click through to the specific resource (e.g., a job’s detail page) to see per-run cost history.
  4. Check insights — Visit the Insights page to see if LakeSentry has already flagged the issue and suggested an action.
  5. Take action — Approve the suggested optimization, or use the information to make changes in your Databricks environment directly.

The Overview page shows data as fresh as your collector’s last run (typically within 15–30 minutes). If data seems stale, check your collector health on the Connector tab in Settings. For details, see Data Freshness & Pipeline Status.
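
A staleness check like the one described above can be sketched as a simple comparison against the collector's last successful run. The 30-minute bound mirrors the typical freshness window mentioned here; the timestamp source and function name are hypothetical:

```python
# Hedged sketch: is the dashboard data older than the expected freshness window?
from datetime import datetime, timedelta, timezone

def is_stale(last_run: datetime,
             max_age: timedelta = timedelta(minutes=30)) -> bool:
    """True if the collector's last run is older than max_age."""
    return datetime.now(timezone.utc) - last_run > max_age

fresh = datetime.now(timezone.utc) - timedelta(minutes=5)
old = datetime.now(timezone.utc) - timedelta(hours=2)
print(is_stale(fresh), is_stale(old))  # False True
```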