Common Issues

Solutions to the most frequently encountered issues with LakeSentry. For collector-specific problems, see Collector Issues. For data pipeline questions, see Data Freshness & Pipeline Status.

| Symptom | Likely cause | Jump to |
| --- | --- | --- |
| Can’t log in or access the app | Authentication or invitation issue | Login and access problems |
| Dashboard shows no data | Connector not set up or collector not running | Missing data |
| Costs don’t match Databricks console | Different time ranges, cost model, or attribution scope | Cost discrepancies |
| Metrics appear stale or outdated | Pipeline lag or collector schedule issue | Stale metrics |
| Some workspaces or resources missing | Region connector or permission gap | Incomplete coverage |
| Insights not appearing | Detection thresholds or insufficient history | Missing insights |

Login and access problems

If you can’t log in or see a “no access” message:

  1. Check your invitation — Invitations expire after 7 days. Ask your organization admin to resend the invitation from Settings > Access.
  2. Verify the email address — You must log in with the exact email address the invitation was sent to. Check for typos or alias differences.
  3. Check your role — If you can log in but can’t access certain features, your role may not have the required permissions. See Roles & Permissions for the permission matrix.

If authentication redirects fail or loop:

  • Clear browser cookies for app.lakesentry.io and try again.
  • Try an incognito/private window to rule out cached credentials.
  • Check your identity provider — If your organization uses SSO, verify your account is active in the IdP.

Missing data

If the dashboard is completely empty after connecting your Databricks account:

  1. Check connector status — Go to Settings > Connector and verify the connector shows “Synced” status.
  2. Check region connectors — Each region needs its own connector. If no region connectors are set up, there’s no data pipeline. See Region Connectors.
  3. Check collector status — The collector must have run at least once. Look at the region connector detail page for “Last ingestion” time. If it shows “Never,” the collector hasn’t pushed data yet. See Collector Deployment.
  4. Wait for pipeline processing — After the first collector run, data takes 15–30 minutes to flow through the processing pipeline before appearing in dashboards.

If dashboards show cost data but certain features or resource types are empty:

| Missing data | Likely cause | Fix |
| --- | --- | --- |
| SQL warehouse data | Warehouse system tables not accessible | Grant SELECT on system.compute.warehouse_events to the service principal |
| Job/pipeline data | Missing system.lakeflow.* permissions | Grant access per Account & Connector Setup |
| MLflow data | Optional tables not enabled | Grant access to MLflow system tables per optional tables |
| Model serving data | Optional tables not enabled | Grant access to serving system tables |
| Storage data | Optional tables not enabled | Grant access to storage system tables |

Check the region connector’s “Tables received” list to see which tables the collector is successfully extracting.
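
If you prefer to apply the grants with a script instead of a notebook cell, the sketch below issues them through the Databricks Python SDK’s statement execution API. This is a minimal sketch, not LakeSentry’s official setup script: the warehouse ID and service principal application ID are placeholders, and the optional schemas (MLflow, serving, storage) follow the same pattern per Account & Connector Setup.

```python
# Minimal sketch: grant the LakeSentry service principal read access to the
# system tables listed above. Assumes the databricks-sdk package is installed
# and already authenticated (env vars or ~/.databrickscfg).
from databricks.sdk import WorkspaceClient

WAREHOUSE_ID = "<sql-warehouse-id>"        # placeholder: any running SQL warehouse
SERVICE_PRINCIPAL = "<sp-application-id>"  # placeholder: LakeSentry's service principal

GRANT_STATEMENTS = [
    # SQL warehouse data (per the table above)
    "GRANT SELECT ON TABLE system.compute.warehouse_events TO `{sp}`",
    # Job/pipeline data (system.lakeflow.*)
    "GRANT SELECT ON SCHEMA system.lakeflow TO `{sp}`",
]

w = WorkspaceClient()
for template in GRANT_STATEMENTS:
    statement = template.format(sp=SERVICE_PRINCIPAL)
    resp = w.statement_execution.execute_statement(
        warehouse_id=WAREHOUSE_ID,
        statement=statement,
    )
    print(statement, "->", resp.status.state)
```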

LakeSentry costs differ from Databricks console

Cost differences between LakeSentry and the Databricks billing console are common and usually explainable:

Time range alignment

  • LakeSentry and Databricks may use different time zone conventions. LakeSentry uses UTC for all cost calculations. Verify both tools are set to the same date range (a conversion sketch follows this list).
  • Databricks billing data has a 1–4 hour delay (see Data Freshness). If you’re comparing “today’s” costs, the most recent hours may not be included yet.
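
Because LakeSentry reports in UTC, a “today” filter in a local-time view covers a different set of hours. The sketch below converts a local date range to the equivalent UTC window; the time zone and date are example values only.

```python
# Convert a local-time date range to the UTC window LakeSentry uses.
# America/New_York and the date are example values; substitute your own.
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

local_tz = ZoneInfo("America/New_York")

# Local "today" as a half-open interval [start, end)
local_start = datetime(2024, 6, 3, tzinfo=local_tz)
local_end = local_start + timedelta(days=1)

# Equivalent UTC window to compare against LakeSentry's UTC-based costs
utc_start = local_start.astimezone(timezone.utc)
utc_end = local_end.astimezone(timezone.utc)
print(f"UTC window: {utc_start.isoformat()} to {utc_end.isoformat()}")
```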

Cost model differences

  • By default, LakeSentry uses Databricks list prices for DBU costs. If your organization has negotiated pricing, the numbers will differ unless you’ve configured DBU price overrides (a quick comparison sketch follows this list).
  • LakeSentry separates DBU costs (compute charges) from cloud infrastructure costs (VM, storage, networking). The Databricks console may show them combined.
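
To gauge how much of a discrepancy negotiated pricing alone can explain, compare list-price and negotiated DBU costs directly. The rates below are made-up illustrations, not actual Databricks prices.

```python
# Illustrative only: how a negotiated DBU rate changes reported cost.
# Both rates are hypothetical; use your contract's actual figures.
dbus_consumed = 12_000       # DBUs for the period being compared
list_rate = 0.55             # $/DBU at list price (example value)
negotiated_rate = 0.44       # $/DBU after a negotiated discount (example value)

list_cost = dbus_consumed * list_rate
negotiated_cost = dbus_consumed * negotiated_rate

print(f"List-price cost:      ${list_cost:,.2f}")
print(f"Negotiated cost:      ${negotiated_cost:,.2f}")
print(f"Expected discrepancy: ${list_cost - negotiated_cost:,.2f}")
```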

Scope differences

  • LakeSentry shows costs only for workspaces and regions with active connectors. If some workspaces aren’t connected, their costs won’t appear.
  • Attribution groupings (by team, project, or tag) may slice costs differently than the Databricks console’s per-workspace view.

Attribution shows “unattributed” costs

If a significant portion of costs appears as “Unattributed”:

  • New resources — Resources without tag or ownership data default to unattributed. Configure Attribution Rules to assign them.
  • Shared compute — Interactive clusters shared across teams may lack clear ownership signals. Use tag-based or pattern-based attribution rules.
  • System overhead — Some Databricks costs (like platform fees or Photon charges) don’t map to specific user workloads.

See Cost Attribution & Confidence Tiers for how attribution works and how to improve coverage.
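
The sketch below is a simplified illustration of how tag-based attribution with an “Unattributed” fallback behaves. It is not LakeSentry’s rule engine or rule syntax; the tag key, team names, and cost records are all hypothetical.

```python
# Simplified illustration of tag-based attribution with an "Unattributed"
# fallback. Not LakeSentry's rule engine; tag keys and teams are hypothetical.
from collections import defaultdict

cost_records = [
    {"resource": "job-cluster-1", "cost": 420.0, "tags": {"team": "data-eng"}},
    {"resource": "shared-interactive", "cost": 310.0, "tags": {}},  # no owner tag
    {"resource": "warehouse-bi", "cost": 150.0, "tags": {"team": "analytics"}},
]

totals = defaultdict(float)
for record in cost_records:
    owner = record["tags"].get("team", "Unattributed")
    totals[owner] += record["cost"]

for owner, cost in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{owner:>14}: ${cost:,.2f}")
```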

Stale metrics

If metrics haven’t updated recently:

  1. Check pipeline status — Go to Settings > Connector and look at the “Last ingestion” timestamp for each region. If it’s more than 2 hours old, there may be a collector or pipeline issue (a staleness check sketch follows this list).
  2. Check collector health — Look at the connector status indicator:
    • Green (Synced) — Collector running normally, data has been received
    • Red (Error) — Connector is in an error state or validation failure
    • Gray (Awaiting data) — Connector is set up but has not received data yet
  3. Check for Databricks-side delays — Some system tables have inherent latency. See Data Freshness for expected lag by data type.
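
If you track the last-ingestion timestamp outside the UI (for example in your own monitoring), the 2-hour rule of thumb from step 1 translates to a simple check. The timestamp below is a placeholder.

```python
# Flag a region connector as potentially stale if its last ingestion is
# more than 2 hours old (the rule of thumb from step 1 above).
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(hours=2)

# Placeholder: substitute the "Last ingestion" value shown in the UI or
# captured by your own monitoring.
last_ingestion = datetime(2024, 6, 3, 9, 15, tzinfo=timezone.utc)

age = datetime.now(timezone.utc) - last_ingestion
if age > STALE_AFTER:
    print(f"Possible collector or pipeline issue: last ingestion {age} ago")
else:
    print(f"Within expected freshness window ({age} old)")
```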

If metrics suddenly drop to zero:

  • Collector stopped running — The Databricks job may have been disabled, deleted, or failed. Check the job status in Databricks (a sketch using the Python SDK follows this list).
  • Permissions revoked — The service principal may have lost access. Re-grant permissions per Account & Connector Setup.
  • Region connector disconnected — Check connector status. If it shows “Error,” investigate the error message.
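
To check the collector job without opening the Databricks UI, the sketch below lists its most recent run with the Python SDK. The job name is an assumption; substitute whatever your collector job is actually called.

```python
# Check whether the collector's Databricks job still exists and how its
# most recent run ended. The job name is an assumption; adjust as needed.
from databricks.sdk import WorkspaceClient

COLLECTOR_JOB_NAME = "lakesentry-collector"  # hypothetical name

w = WorkspaceClient()
jobs = list(w.jobs.list(name=COLLECTOR_JOB_NAME))
if not jobs:
    print("Collector job not found: it may have been deleted or renamed")
else:
    runs = list(w.jobs.list_runs(job_id=jobs[0].job_id, limit=1))
    if not runs:
        print("Job exists but has never run: check its schedule and pause state")
    else:
        state = runs[0].state
        print(f"Latest run: {state.life_cycle_state}, result: {state.result_state}")
```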

Incomplete coverage

If certain Databricks workspaces don’t appear in LakeSentry:

  • Check region mapping — Workspaces in regions without a region connector aren’t monitored. Add a region connector for each active region (a sketch for listing workspaces by region follows this list). See Region Connectors.
  • Check workspace access — The service principal must have workspace-level access in each workspace it should monitor. Account-level access alone isn’t sufficient for all system tables.
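
One way to spot regions that lack a connector is to list every workspace in the account grouped by region and compare against your configured region connectors. A minimal sketch using the account-level Python SDK; the account host and ID are placeholders, and the region field name varies by cloud.

```python
# List all workspaces in the Databricks account grouped by region, to compare
# against the regions that have LakeSentry region connectors.
# Account host/ID are placeholders; the region field differs per cloud.
from collections import defaultdict
from databricks.sdk import AccountClient

a = AccountClient(
    host="https://accounts.cloud.databricks.com",  # AWS accounts console (example)
    account_id="<databricks-account-id>",
)

by_region = defaultdict(list)
for ws in a.workspaces.list():
    region = ws.aws_region or ws.location or "unknown"  # AWS vs. GCP field
    by_region[region].append(ws.workspace_name)

for region, names in sorted(by_region.items()):
    print(f"{region}: {', '.join(names)}")
```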

LakeSentry only shows data from when the collector started running. It cannot backfill data from before the first collection.

  • Billing data — Databricks system tables retain billing history. LakeSentry can extract historical billing data on the first run, covering up to the connector’s lookback setting (default 180 days, subject to plan limits). A sketch for checking the available history follows this list.
  • Compute and workload data — These tables have shorter retention. The collector captures what’s available at first run and tracks incrementally after that.
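
To see how far back the billing history available to the first run actually reaches, you can ask the system tables directly. A minimal sketch using the SQL statement execution API; the warehouse ID is a placeholder, and it assumes your principal can read system.billing.usage and that the query finishes within the default wait timeout.

```python
# Check how far back system.billing.usage goes, which bounds the billing
# history the first collector run can backfill.
from databricks.sdk import WorkspaceClient

WAREHOUSE_ID = "<sql-warehouse-id>"  # placeholder

w = WorkspaceClient()
resp = w.statement_execution.execute_statement(
    warehouse_id=WAREHOUSE_ID,
    statement="SELECT MIN(usage_date), MAX(usage_date) FROM system.billing.usage",
)
if resp.result and resp.result.data_array:
    earliest, latest = resp.result.data_array[0]
    print(f"Billing data available from {earliest} to {latest}")
else:
    print(f"Query not finished or returned no rows: {resp.status.state}")
```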

Missing insights

If the Insights page is empty despite having cost data:

  • Insufficient history — Anomaly detection requires at least 5 runs to establish a baseline. New connections won’t generate anomalies until enough history accumulates. See Anomaly Detection.
  • Below minimum thresholds — Anomaly detection ignores work units with baseline costs under $10 and cost deltas under $50. Low-cost environments may not trigger any anomalies (a sketch of this filtering appears below).
  • No waste conditions detected — Waste detection looks for specific patterns (idle clusters, overprovisioned resources). If your environment is well-optimized, there may genuinely be nothing to flag.
  • Auto-dismiss rules — Check if auto-dismiss rules are filtering out insights. Auto-dismiss rules are configured at the system level and can filter insights based on criteria like savings thresholds, age, or severity.
Insights are regenerated on each detection cycle, which runs after new data is processed, so a delayed data pipeline also delays insight generation. Check the Data Freshness page for pipeline status.
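
The sketch below shows roughly how the minimums described above filter candidate anomalies. It is a simplified illustration of the documented thresholds, not LakeSentry’s actual detection code.

```python
# Simplified illustration of the documented anomaly minimums: a work unit
# needs at least 5 runs of history, a baseline of at least $10, and a cost
# delta of at least $50 before it can surface. Not the real detection engine.
MIN_RUNS = 5
MIN_BASELINE_COST = 10.0
MIN_COST_DELTA = 50.0

def could_surface_anomaly(run_costs: list[float], latest_cost: float) -> bool:
    """Return True if this work unit clears the documented minimums."""
    if len(run_costs) < MIN_RUNS:
        return False  # not enough history to establish a baseline
    baseline = sum(run_costs) / len(run_costs)
    if baseline < MIN_BASELINE_COST:
        return False  # low-cost work unit, ignored
    return abs(latest_cost - baseline) >= MIN_COST_DELTA

# Example: a cheap job never qualifies; a big jump on an expensive job does.
print(could_surface_anomaly([2.0, 2.1, 1.9, 2.2, 2.0], 9.0))              # False
print(could_surface_anomaly([120.0, 118.0, 125.0, 119.0, 122.0], 240.0))  # True
```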

If you’ve worked through the relevant sections above and the issue persists:

  1. Note the specific symptom and any error messages you see.
  2. Check Settings > Connector for connector status and last ingestion times.
  3. Review the FAQ for additional answers.
  4. Contact LakeSentry support with these details.