Understanding the Dashboard
Once data starts flowing from your collectors, the LakeSentry dashboard gives you a complete picture of your Databricks spend. This page walks through the main sections and how to navigate them.
The Overview page
The Overview page is your starting point. It shows a high-level summary of key metrics, cost trends, and top spenders across your entire Databricks environment.
Stat cards
At the top of the page, stat cards summarize key metrics at a glance:
- Total Cost — Current period spend with percent change from the previous period.
- Cost of Failure — Spend attributed to failed job runs, linking to waste-related insights.
- 30-Day Forecast — Projected spend range with trend direction and confidence level.
- Active Jobs — Count of work units with activity in the selected time range.
- Insights — Count of active insights (with critical count highlighted), linking to the full Insights queue.
- Attribution Quality — Percentage of spend that has been attributed to teams or owners.
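To make the stat-card arithmetic concrete, here is an illustrative sketch of how figures like Total Cost, Cost of Failure, percent change, and Attribution Quality could be derived from raw cost records. The record fields (`cost`, `attributed`, `failed_run`) and the prior-period total are made-up assumptions, not LakeSentry's actual data model:

```python
# Illustrative only: deriving stat-card figures from raw cost records.
# The field names and values here are hypothetical.
records = [
    {"cost": 120.0, "attributed": True,  "failed_run": False},
    {"cost": 45.0,  "attributed": True,  "failed_run": True},
    {"cost": 35.0,  "attributed": False, "failed_run": False},
]

total_cost = sum(r["cost"] for r in records)
cost_of_failure = sum(r["cost"] for r in records if r["failed_run"])

# Share of spend successfully mapped to a team or owner, as a percentage.
attribution_quality = 100 * sum(
    r["cost"] for r in records if r["attributed"]
) / total_cost

previous_period_cost = 160.0  # hypothetical prior-period spend
percent_change = 100 * (total_cost - previous_period_cost) / previous_period_cost
```

With these sample records, total spend is 200.0, of which 45.0 came from failed runs and 82.5% is attributed.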
Cost trends
Below the stat cards, the cost trend chart shows your total Databricks spend over time, with day, week, and month granularity options. In Breakdown cost mode, the chart splits into DBU and Cloud cost layers. The chart also automatically detects and highlights anomalies: unusual spikes or drops appear as visual markers directly on the chart. Use this to spot trends: is spend growing? Are there periodic spikes? Did something change recently?
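An anomaly marker is, conceptually, a point that deviates sharply from recent history. A minimal sketch of that idea, assuming a simple rolling z-score test (LakeSentry's actual detection logic is internal and may differ):

```python
# Flag days whose spend deviates sharply from the trailing window.
# Generic z-score sketch for illustration, not LakeSentry's detector.
from statistics import mean, stdev

def flag_anomalies(daily_spend, window=7, threshold=3.0):
    flagged = []
    for i in range(window, len(daily_spend)):
        history = daily_spend[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(daily_spend[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

spend = [100, 102, 98, 101, 99, 103, 100, 340, 101, 99]
flag_anomalies(spend)  # day 7 (the 340 spike) stands out from the prior week
```

Note that once the spike enters the trailing window, it inflates the standard deviation, so the quiet days after it are not flagged; production detectors typically handle this with more robust statistics.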
Top spenders
Below the trend chart, you’ll see the top cost drivers — the jobs, pipelines, SQL warehouses, clusters, and serving endpoints consuming the most budget. A cost category breakdown (jobs & pipelines, SQL warehouse, interactive, serving, overhead) lets you filter by compute type. Each entry shows cost, type, and activity metrics.
Click any item to expand it inline, or drill through to its full detail view.
Global controls
These controls appear in the header across all pages:
Time range selector
Controls the analysis window for all charts and tables on the page. Options include:
- Preset ranges: 24 hours, 7 days, 30 days, 90 days, 1 year, month to date, year to date, and all time
- Custom range: Pick any start and end date
The time range is always visible in the header so you know exactly what period you’re looking at.
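For intuition, each preset resolves to a concrete start and end date relative to today. A sketch of how a few of the presets above could map to date windows (the `resolve_preset` function is a hypothetical illustration, not a LakeSentry API):

```python
from datetime import date, timedelta

# Hypothetical mapping from preset labels to (start, end) windows;
# LakeSentry's actual preset resolution is internal.
def resolve_preset(label, today):
    presets = {
        "24 hours": today - timedelta(days=1),
        "7 days": today - timedelta(days=7),
        "30 days": today - timedelta(days=30),
        "90 days": today - timedelta(days=90),
        "month to date": today.replace(day=1),
        "year to date": today.replace(month=1, day=1),
    }
    return presets[label], today

start, end = resolve_preset("month to date", date(2024, 3, 15))
# start = 2024-03-01, end = 2024-03-15
```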
Workspace filter
If you have multiple workspaces, use this multi-select filter to focus on specific ones. By default, all workspaces are included.
Organization filter
Filter by your organizational hierarchy (org unit, department, team). This is useful once you’ve set up attribution rules and org hierarchy mappings.
Tag filter
Filter by Databricks resource tags. Select specific tag keys and values to narrow down cost data to tagged resources.
Cost mode toggle
Switch between different cost views:
| Mode | Shows |
|---|---|
| Total | DBU + Cloud combined |
| DBU Only | Databricks license costs |
| Cloud Only | Cloud infrastructure costs |
| Breakdown | DBU and Cloud as separate layers |
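All four modes are views over the same two cost components. A minimal sketch with made-up numbers:

```python
# Each mode is a projection of the same DBU and Cloud components
# (the values here are invented for illustration).
dbu_cost, cloud_cost = 60.0, 40.0

modes = {
    "Total": dbu_cost + cloud_cost,       # combined spend
    "DBU Only": dbu_cost,                 # Databricks license costs
    "Cloud Only": cloud_cost,             # cloud infrastructure costs
    "Breakdown": (dbu_cost, cloud_cost),  # rendered as separate chart layers
}
```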
Main navigation sections
LakeSentry organizes features into five main areas, accessible from the sidebar:
Cost Analysis
- Overview — The dashboard described above: trends, top spenders, anomalies.
- Cost Explorer — Deep multi-dimensional cost analysis with 8 tabs: Breakdown, Compute Types, Principals, Tables, Comparison, Matrix, Trends, and Attribution. See Cost Explorer for details.
- Budgets — Track spend against budgets at the team, department, or workspace level. See Budgets.
- Commitments — Track and manage Databricks commit contracts.
Optimization
- Insights — The queue of actionable findings: anomalies, waste detection, and optimization suggestions. Each insight includes evidence, estimated savings, and a path to action. See Insights & Actions.
- AI Agent — AI-powered cost analysis assistant.
Workloads
Detailed views for each type of Databricks resource:
- All Workloads — Jobs and pipelines with per-run cost history. See Work Units.
- Compute — Clusters and SQL warehouses with utilization metrics. See Compute.
- SQL — Query-level cost analysis for warehouses and serverless. See SQL Analysis.
- Serving — Model serving endpoint costs and traffic patterns. See Model Serving.
- Storage — Storage cost breakdown and trends. See Storage.
Organization
- Attribution — Configure attribution rules that map costs to teams and owners. See Attribution Rules.
- Mappings — Manage organizational hierarchy mappings (org units, departments, teams).
- Tag Governance — Manage tag policies and compliance. See Tag Governance.
System
- Settings — Organization preferences, display options, and configuration.
- Audit Log — Record of all actions, approvals, and changes. See Audit Log.
Quick navigation
LakeSentry includes a command palette (press ⌘K or Ctrl+K) for quick navigation and search. Type the name of any page, resource, or entity to jump directly to it.
Typical investigation flow
Here’s how a typical cost investigation works using the dashboard:
- Start on Overview — Notice a cost spike in the trend chart or an anomaly marker.
- Identify the driver — Check top spenders to see which workspace, job, or user is responsible.
- Drill into details — Click through to the specific resource (e.g., a job’s detail page) to see per-run cost history.
- Check insights — Visit the Insights page to see if LakeSentry has already flagged the issue and suggested an action.
- Take action — Approve the suggested optimization, or use the information to make changes in your Databricks environment directly.
Data freshness
The Overview page shows data as fresh as your collector’s last run (typically within 15–30 minutes). If data seems stale, check your collector health on the Connector tab in Settings. For details, see Data Freshness & Pipeline Status.
Next steps
- Cost Explorer — Deep-dive cost analysis with multi-dimensional breakdowns
- Insights & Actions — Review and act on optimization findings
- How LakeSentry Works — Understand the data pipeline behind the dashboard