
Data Freshness & Pipeline Status

LakeSentry’s data flows through several stages before appearing in dashboards. Understanding these stages and their expected latency helps you distinguish between normal pipeline lag and actual issues.

Data moves through four stages, each adding latency:

| Stage | What happens | Typical latency |
| --- | --- | --- |
| 1. Databricks system tables | Databricks writes usage events to system tables | 1 minute – 4 hours (varies by table) |
| 2. Collector extraction | The LakeSentry collector reads system tables and pushes data | Depends on schedule (default: once daily at ~8 AM UTC) |
| 3. Ingestion & validation | LakeSentry validates, deduplicates, and stores raw data | 1–5 minutes |
| 4. Processing & aggregation | Data is transformed into metrics, cost rollups, and insights | 5–20 minutes |

End-to-end latency from a Databricks event occurring to it appearing in LakeSentry dashboards is typically 20 minutes – 5 hours, depending on the data type and collector schedule.
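
For example, assuming an hourly extraction schedule, a cluster event written to the system tables at 07:58 UTC is picked up by the 08:00 UTC extraction, ingested within a few minutes, and aggregated within roughly twenty minutes, so it appears on dashboards by about 08:25 UTC. With the default daily schedule, the same event instead waits for the next 8 AM UTC run.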

Different data types have different inherent latency at the Databricks level:

| Data type | Databricks system table latency | LakeSentry display latency |
| --- | --- | --- |
| Billing / cost data | 1–4 hours | 1.5–5 hours from the actual usage |
| Cluster events | Near real-time | 20–40 minutes (next collector run + processing) |
| Query history | Minutes to 1 hour | 20–90 minutes |
| Job run history | Minutes to 1 hour | 20–90 minutes |
| Warehouse events | Minutes to 1 hour | 20–90 minutes |
| Storage metadata | Hours (updated periodically) | 1–5 hours |
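
If you want to see this stage-1 latency directly, you can check the newest event available in the underlying system tables. A minimal sketch, assuming the standard Databricks system table names and columns (system.billing.usage.usage_end_time and system.query.history.end_time); adjust the tables to the data types you care about:

    -- Newest event available in each source table (stage 1 of the pipeline)
    SELECT 'billing / cost' AS data_type, MAX(usage_end_time) AS newest_event FROM system.billing.usage
    UNION ALL
    SELECT 'query history'  AS data_type, MAX(end_time)       AS newest_event FROM system.query.history;

Anything LakeSentry displays can only be as fresh as these timestamps, plus the collector and processing latency from the table above.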

Go to Settings > Connector to see the health of each connector:

| Indicator | Meaning | Action needed |
| --- | --- | --- |
| Green (Synced) | Data has been received from this connector | None — operating normally |
| Red (Error) | Connector status is “error” or “failed”, or no data in 30+ hours (triggers an email alert to admins) | Investigate — the connector may be broken or misconfigured. See Collector Issues. |
| Grey (Awaiting data) | Connector is configured but no data has been received yet | Wait for the first extraction to complete, or check the collector job. |

Click a region connector to see detailed status:

  • Last ingestion — Timestamp of the last successful data push from the collector
  • Tables received — List of system tables the collector is successfully extracting
  • Extraction checkpoints — Per-table watermarks showing how far the collector has progressed
  • Ingestion history — Recent ingestion events with row counts and durations

Dashboard pages display a “Data as of” indicator showing the most recent data point. If this timestamp seems too old:

  1. Check the connector health (above).
  2. Consider the expected latency for the data type you’re viewing.
  3. If the staleness exceeds expected latency, investigate the collector and pipeline.

Some lag patterns are expected and do not indicate a problem:

  • Morning cost updates — Yesterday’s billing data often finalizes overnight. Expect cost dashboards to update with the previous day’s complete data in the early morning (UTC); a verification query follows this list.
  • Weekend/holiday gaps — If compute usage drops on weekends, there may be less new data to display. The pipeline is still running, but the deltas are smaller.
  • Post-deployment lag — After first deploying the collector, the initial extraction takes longer than incremental runs. The first dashboards may take 30–60 minutes to populate.
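
To confirm that the overnight billing finalization mentioned above has happened, you can count billing rows per day. A sketch, assuming the standard system.billing.usage schema with its usage_date column; yesterday should show a full day of rows by early morning UTC:

    -- Billing rows per day for the last few days
    SELECT usage_date, COUNT(*) AS billing_rows
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 3)
    GROUP BY usage_date
    ORDER BY usage_date DESC;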

These patterns suggest an issue that needs investigation:

| Pattern | Likely cause | What to check |
| --- | --- | --- |
| One region is fresh, another is stale | The stale region’s collector isn’t running | Check the collector job in Databricks for that region |
| All regions are stale | Collector infrastructure issue or LakeSentry pipeline delay | Check multiple collector jobs; if all are running, contact support |
| Specific data type is stale | Permission lost for that system table | Check “Tables received” on the region connector |
| Dashboard shows “No data” for recent dates | Collector checkpoint issue or Databricks table retention | Check extraction checkpoints |

To diagnose stale data, work through the pipeline starting with the collector:

  1. In LakeSentry, open Settings > Connector and note the “Last ingestion” time.
  2. If the last ingestion is recent (within the expected schedule), the collector is fine; the delay is further downstream, so move on to the Databricks-side and processing checks below.
  3. If the last ingestion is stale, check the Databricks job:
    • Is the job running? Has it run recently?
    • Did the most recent run succeed or fail?
    • See Collector Issues for detailed diagnosis, or use the query sketch after this list to inspect recent runs.
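
If you prefer SQL over the Jobs UI for this check, the jobs system tables can list recent collector runs. A sketch, assuming system.lakeflow.job_run_timeline is enabled in your account and that you know the collector job’s ID (the <collector_job_id> placeholder below is hypothetical; column names may vary slightly by release):

    -- Recent runs of the collector job; look for a successful run within the expected schedule
    SELECT period_start_time, period_end_time, result_state
    FROM system.lakeflow.job_run_timeline
    WHERE job_id = '<collector_job_id>'   -- replace with the collector job's ID
      AND period_start_time >= current_timestamp() - INTERVAL 2 DAYS
    ORDER BY period_start_time DESC;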

Databricks system tables sometimes have their own delays, independent of the collector:

  • Check the Databricks System Table Freshness dashboard (if available in your account console).
  • Query the system table directly to see if recent data exists:
    SELECT MAX(usage_end_time) FROM system.billing.usage;
    If the max timestamp is hours behind, the delay is at the Databricks level.
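
To turn that check into a number you can compare against the 1–4 hour expectation, the same query can return the lag in hours (timestampdiff is a built-in Databricks SQL function; the table and column match the snippet above):

    -- Hours between the newest billing row and now; roughly 1–4 is normal
    SELECT timestampdiff(HOUR, MAX(usage_end_time), current_timestamp()) AS hours_behind
    FROM system.billing.usage;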

If the collector is pushing data but dashboards still appear stale:

  • Processing backlog — After large imports (first run or checkpoint reset), the processing pipeline may take longer than usual. This resolves on its own.
  • Pipeline error — Rare, but if processing fails on specific data, it can cause a backlog. The connector detail page shows ingestion errors if any exist.

If the scheduled extraction hasn’t run recently, you can trigger a manual extraction from LakeSentry:

  1. Go to Settings > Connector in LakeSentry.
  2. In the Data Sync panel, click the trigger button to start an immediate extraction.
  3. Wait for the extraction to complete (progress is visible in the panel), then check your dashboards.

The default extraction schedule is once daily at ~8 AM UTC. You can adjust this per connector in Settings > Connector:

| Schedule | Trade-off |
| --- | --- |
| Every hour | Most frequent data updates, higher compute cost |
| Every 4 hours | Good balance of freshness and cost |
| Daily at 8 AM UTC (default) | Lower cost, suitable for daily reporting and non-urgent monitoring |
| Paused | No automatic extraction — useful when temporarily disabling a connector |

Each region has its own collector and schedule. High-priority regions (production workloads) can run more frequently while development regions run less often.

LakeSentry tracks internal pipeline metrics that can help diagnose freshness issues:

| Metric | What it shows |
| --- | --- |
| Extraction duration | How long the collector took to extract data |
| Rows extracted | Number of rows pulled in the last extraction |
| Ingestion duration | How long it took to validate and store raw data |
| Processing duration | How long metric computation and aggregation took |
| End-to-end latency | Time from extraction to data appearing in dashboards |

These metrics are visible on the region connector detail page under the “Performance” tab.