# Clean Rooms
The Clean Rooms page tracks activity for Databricks Clean Rooms — secure environments where multiple organizations collaborate on shared data without exposing their underlying datasets. Notebook runs and collaboration events inside a clean room can be hard to monitor, particularly when activity spans several collaborating organizations. LakeSentry surfaces notebook run counts, durations, and collaboration events so teams can understand how their clean rooms are used.
## Clean rooms overview

The top of the page shows headline metrics:
| Metric | What it shows |
|---|---|
| Active Rooms | Number of clean rooms that have not been deleted |
| Total Runs | Total notebook executions across all clean rooms |
| Successful | Number of successful notebook runs |
| Collaborators | Total number of distinct collaborating organizations |
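These tiles can be cross-checked against the raw system table. The sketch below is a rough PySpark approximation, assuming a Databricks notebook where spark is the ambient SparkSession; the nested run_notebook_completed.state field and its SUCCEEDED value are assumptions, so verify them against the table schema in your workspace.

```python
# Rough reconstruction of the headline tiles from the raw events table.
# Assumes a Databricks notebook/job context, where `spark` is already defined.
headline = spark.sql("""
    SELECT
      COUNT(DISTINCT CASE WHEN event_type = 'CLEAN_ROOM_CREATED'
                          THEN clean_room_name END)
        - COUNT(DISTINCT CASE WHEN event_type = 'CLEAN_ROOM_DELETED'
                              THEN clean_room_name END)                  AS active_rooms,
      COUNT(*) FILTER (WHERE event_type = 'RUN_NOTEBOOK_STARTED')        AS total_runs,
      -- Assumed field/value: run_notebook_completed.state = 'SUCCEEDED'
      COUNT(*) FILTER (WHERE event_type = 'RUN_NOTEBOOK_COMPLETED'
                         AND run_notebook_completed.state = 'SUCCEEDED') AS successful,
      -- Alias count approximates distinct collaborating organizations
      COUNT(DISTINCT initiator_collaborator_alias)                       AS collaborators
    FROM system.access.clean_room_events
""")
headline.show()
```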
## Clean room list

The Overview tab shows all tracked clean rooms in a table, along with a bar chart of notebook runs by room (top 10):
| Column | What it shows |
|---|---|
| Clean Room | Name and truncated central clean room ID |
| Status | Active or Deleted |
| Collaborators | Number of participating organizations |
| Notebook Runs | Total number of notebook executions, with a badge showing failed runs if any |
| Total Duration | Cumulative wall-clock time of all notebook runs |
| Last Activity | When the most recent event occurred, along with the event type |
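A comparable per-room rollup can be sketched directly against the system table. Field names inside the run_notebook_completed struct (state, duration_in_seconds) are assumptions to verify against your workspace:

```python
# Per-room rollup resembling the Overview table, sorted like the default view.
per_room = spark.sql("""
    SELECT
      clean_room_name,
      COUNT(DISTINCT initiator_collaborator_alias)                  AS collaborators,
      COUNT(*) FILTER (WHERE event_type = 'RUN_NOTEBOOK_STARTED')   AS notebook_runs,
      COUNT(*) FILTER (WHERE event_type = 'RUN_NOTEBOOK_COMPLETED'
                         AND run_notebook_completed.state <> 'SUCCEEDED')
                                                                    AS failed_runs,
      -- duration_in_seconds is an assumed field name on the completed-run payload
      SUM(run_notebook_completed.duration_in_seconds)               AS total_duration_s,
      MAX(event_time)                                               AS last_activity
    FROM system.access.clean_room_events
    GROUP BY clean_room_name
    ORDER BY notebook_runs DESC
""")
per_room.show(truncate=False)
```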
### Sorting

The table defaults to sorting by Notebook Runs (descending). Sortable columns include:
- Clean Room — Alphabetical by name
- Collaborators — By collaborator count
- Notebook Runs — By total execution count
## Daily activity

The Daily Activity tab shows per-day, per-room activity metrics for the selected time range. You can filter to a specific clean room using the room_id parameter.
| Column | What it shows |
|---|---|
| Date | Day of recorded activity |
| Clean Room | Name of the clean room |
| Runs | Number of completed runs, with failed runs shown separately if any |
| Assets | Number of asset updates |
| Collaborators | Number of distinct collaborators active on that day |
| Avg Duration | Average notebook run duration for the day |
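The equivalent aggregation against the raw table looks roughly like this. The literal room name stands in for LakeSentry's room_id parameter and is purely illustrative:

```python
# Per-day, per-room activity for the last 30 days, filtered to one room.
# 'marketing_analysis' is a placeholder room name, not a real identifier.
daily = spark.sql("""
    SELECT
      DATE(event_time)                                                 AS activity_date,
      clean_room_name,
      COUNT(*) FILTER (WHERE event_type = 'RUN_NOTEBOOK_COMPLETED')    AS runs,
      COUNT(*) FILTER (WHERE event_type = 'CLEAN_ROOM_ASSETS_UPDATED') AS asset_updates,
      COUNT(DISTINCT initiator_collaborator_alias)                     AS collaborators,
      AVG(run_notebook_completed.duration_in_seconds)                  AS avg_duration_s
    FROM system.access.clean_room_events
    WHERE clean_room_name = 'marketing_analysis'
      AND event_time >= date_sub(current_date(), 30)
    GROUP BY DATE(event_time), clean_room_name
    ORDER BY activity_date
""")
daily.show()
```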
## Tracked event types

LakeSentry ingests the following event types from the system.access.clean_room_events system table:
| Event type | What it records |
|---|---|
| CLEAN_ROOM_CREATED | When the clean room was created and by which collaborator |
| CLEAN_ROOM_DELETED | When the clean room was deleted |
| RUN_NOTEBOOK_STARTED | A notebook execution was initiated — includes which collaborator started it |
| RUN_NOTEBOOK_COMPLETED | A notebook execution finished — includes duration, status, and output schema |
| CLEAN_ROOM_ASSETS_UPDATED | Assets (tables, notebooks) were added, modified, or removed |
| ASSET_REVIEW_CREATED | An asset review was created — approvals, rejections, or auto-approvals |
| OUTPUT_SCHEMA_DELETED | An expired output schema was cleaned up (initiated by the system) |
Each event records the initiating collaborator (identified by their alias — “creator”, “collaborator”, or a custom value) and the timestamp.
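To see which of these event types actually occur in your account, and which aliases generate them, a simple breakdown query works:

```python
# Event-type and initiator-alias distribution over the raw events table.
spark.sql("""
    SELECT
      event_type,
      initiator_collaborator_alias,
      COUNT(*) AS events
    FROM system.access.clean_room_events
    GROUP BY event_type, initiator_collaborator_alias
    ORDER BY events DESC
""").show(truncate=False)
```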
## Common workflows

### Understanding clean room activity

- Sort the clean room list by Notebook Runs (descending).
- Review the bar chart to see which rooms have the most executions.
- Check total duration to identify rooms with long-running notebooks.
- Review failed run counts; failed runs still consume compute time (see the sketch after this list).
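To quantify how much wall-clock time failed runs consume per room, a sketch along these lines can help (the state and duration field names on the completed-run payload are assumptions):

```python
# Wall-clock time of non-successful runs per room: a rough proxy for
# compute wasted on failures. Struct field names are assumptions; verify them.
failed_time = spark.sql("""
    SELECT
      clean_room_name,
      COUNT(*)                                        AS failed_runs,
      SUM(run_notebook_completed.duration_in_seconds) AS failed_duration_s
    FROM system.access.clean_room_events
    WHERE event_type = 'RUN_NOTEBOOK_COMPLETED'
      AND run_notebook_completed.state <> 'SUCCEEDED'
    GROUP BY clean_room_name
    ORDER BY failed_duration_s DESC
""")
failed_time.show()
```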
### Monitoring daily trends

- Switch to the Daily Activity tab.
- Review runs, asset updates, and collaborator counts per day.
- Look at average run durations to spot performance changes over time (a moving-average sketch follows this list).
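One way to make duration drift visible is a rolling average over the daily figures. This is a sketch against the system table, not a LakeSentry feature:

```python
# 7-day moving average of notebook run duration, to surface gradual slowdowns.
trend = spark.sql("""
    WITH daily AS (
      SELECT
        DATE(event_time)                                AS d,
        AVG(run_notebook_completed.duration_in_seconds) AS avg_duration_s
      FROM system.access.clean_room_events
      WHERE event_type = 'RUN_NOTEBOOK_COMPLETED'
      GROUP BY DATE(event_time)
    )
    SELECT
      d,
      avg_duration_s,
      AVG(avg_duration_s) OVER (ORDER BY d
                                ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS avg_7d
    FROM daily
    ORDER BY d
""")
trend.show()
```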
### Investigating failed runs

- In the Overview tab, look for rooms with failed run badges.
- Switch to Daily Activity to see when failures occurred.
- Note the duration — long-running failures are expensive.
- Compare successful vs. failed run counts to identify reliability issues (see the query sketch below).
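A per-room success rate makes the comparison concrete; rooms well below 1.0 deserve a closer look (again, the state field and its SUCCEEDED value are assumptions):

```python
# Success rate per room over completed runs. Rooms with low rates are
# reliability suspects worth investigating first.
reliability = spark.sql("""
    SELECT
      clean_room_name,
      COUNT(*) AS completed_runs,
      COUNT(*) FILTER (WHERE run_notebook_completed.state = 'SUCCEEDED')
        / COUNT(*) AS success_rate
    FROM system.access.clean_room_events
    WHERE event_type = 'RUN_NOTEBOOK_COMPLETED'
    GROUP BY clean_room_name
    ORDER BY success_rate ASC
""")
reliability.show()
```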
## Next steps

- Cost Explorer — See clean room compute costs in the broader spending context
- Cost Attribution & Confidence Tiers — How costs are attributed to teams
- Audit Log — Full audit trail of clean room operations