GOVERNANCE · April 15, 2026 · 11 min read
From one-shot audit to continuous Copilot monitoring
The first Copilot Readiness assessment you run on a tenant is always revealing. You point the scanner at it, six modules light up, the 1.0–4.0 score comes back, and a concrete list of Must-Do-Before-Copilot findings shows you exactly where broken inheritance, oversharing, unowned teams, and unlabelled sensitive data are going to embarrass the client in Copilot’s first week. You deliver the PDF. The client feels informed. The project ends.
Then, about three weeks later, the tenant has changed. Someone created an Anyone link on a sensitive library because it was the fastest way to share a deck. A new team was created without an owner. A user was offboarded, but their OneDrive with 40GB of indexed content was not. New external guests were added to a Teams channel. By the time Copilot actually rolls out to the pilot group, the readiness posture the assessment reported no longer reflects reality.
Point-in-time assessments go stale. A tenant is not a static artefact; it is a stream of changes. The right way to treat Copilot readiness is the same way you treat backup verification or patch compliance — it is a recurring control, not a one-off deliverable. That is what the continuous monitoring module at /governance/monitoring does.
What changes when you move from one-shot to continuous
The one-shot assessment asks one question: “On this day, how ready is the tenant?” Continuous monitoring asks four harder ones:
- Is the score trending up or down over time?
- What changed since the last scan — new findings, resolved findings, persistent findings?
- Are there specific findings appearing repeatedly that suggest a process issue rather than a content issue?
- When a new Must-Do-Before-Copilot finding appears, who gets notified and how fast?
Each of those questions is actionable in a way that a single score is not.
The trend chart
Every scheduled re-run stores its aggregate module scores plus its per-finding fingerprints. The monitoring dashboard plots the six-module scores over time. A stable tenant shows a flat-or-rising line. A tenant with ongoing drift shows a decline.
The default view is 12 months with one data point per run. You can overlay specific event markers (“Copilot pilot launched 2026-02-14”, “Mergers intake completed 2026-03-08”) so the trend is readable in context. When the SharePoint governance score drops 0.4 in a single run, the event marker directly above it is usually the explanation.
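To make the mechanics concrete, here is a minimal sketch of how stored run scores could be plotted with event-marker overlays. This is illustrative Python, not the product’s code; the run and event structures are assumptions.

```python
# Minimal sketch: plotting stored module scores over time with event markers.
# The run/event field names are illustrative, not the tool's actual schema.
from datetime import date
import matplotlib.pyplot as plt

runs = [
    {"date": date(2025, 12, 1), "scores": {"SharePoint": 2.9, "Teams": 3.1}},
    {"date": date(2026, 1, 1),  "scores": {"SharePoint": 3.0, "Teams": 3.2}},
    {"date": date(2026, 2, 1),  "scores": {"SharePoint": 2.6, "Teams": 3.2}},
]
events = [(date(2026, 2, 14), "Copilot pilot launched")]

dates = [r["date"] for r in runs]
for module in runs[0]["scores"]:
    plt.plot(dates, [r["scores"][module] for r in runs], marker="o", label=module)

# Overlay event markers as vertical lines so a score drop can be read in context.
for when, label in events:
    plt.axvline(when, linestyle="--", color="grey")
    plt.text(when, 4.0, label, rotation=90, va="top", fontsize=8)

plt.ylim(1.0, 4.0)  # scores live on the 1.0-4.0 scale
plt.ylabel("Module score")
plt.legend()
plt.show()
```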
The diff view: new, resolved, persistent
This is the workhorse of the monitoring module. Every scan produces a set of findings, each with a stable fingerprint that survives across runs (the fingerprint is a hash of finding-type + affected-resource + specific-attribute, so the same finding on the same site resolves to the same fingerprint). Compare run N with run N-1 and every finding falls into one of three buckets (a code sketch of the diff follows the list):
- New. Present in the current run, absent in the previous one. Something changed in the tenant that created this finding. Worth investigating now while the change is fresh.
- Resolved. Present in the previous run, absent in the current one. Either the finding was fixed (good), or the affected resource no longer exists (also probably fine, but verify). The resolved list is the progress you can show the client.
- Persistent. Present in both. Nothing has been done. These are the findings that need escalation; they are the ones eating your tenant score over time.
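A minimal sketch of that fingerprint-and-diff logic, assuming findings carry the three hash inputs named above. The field names are illustrative, not the tool’s schema.

```python
# Sketch of the stable-fingerprint diff described above. The fingerprint hashes
# finding-type + affected-resource + specific-attribute, so the same finding on
# the same site produces the same ID across runs. Field names are illustrative.
import hashlib

def fingerprint(finding: dict) -> str:
    raw = "|".join([finding["type"], finding["resource"], finding["attribute"]])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def diff_runs(previous: list[dict], current: list[dict]) -> dict:
    prev_ids = {fingerprint(f) for f in previous}
    curr_ids = {fingerprint(f) for f in current}
    return {
        "new": curr_ids - prev_ids,         # appeared since the last scan
        "resolved": prev_ids - curr_ids,    # fixed, or the resource is gone
        "persistent": curr_ids & prev_ids,  # present in both runs: escalate
    }

run_n_minus_1 = [{"type": "AnyoneLink", "resource": "sites/finance", "attribute": "doclib/Q4"}]
run_n = run_n_minus_1 + [{"type": "OwnerlessTeam", "resource": "teams/ops", "attribute": "owners=0"}]
buckets = diff_runs(run_n_minus_1, run_n)
print(len(buckets["new"]), len(buckets["resolved"]), len(buckets["persistent"]))  # 1 0 1
```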
Persistent findings are often the most important category to surface. A finding that has been “Must Do Before Copilot” for four consecutive scans is either blocked by something organisational (policy friction, ownership gap) or blocked by a technical challenge that the responsible team has not been given resources to handle. Surfacing a count like “23 persistent Must-Do findings unchanged for 90+ days” starts a client conversation that a single-run report would never trigger.
Monthly auto-re-runs
The default cadence is monthly. First of the month, 02:00 tenant time. You can change it to weekly (useful during active pilot rollout) or quarterly (useful for steady-state tenants). The re-run uses the same credential set as the original assessment, needs no manual trigger, and completes in the same time the original scan did — typically 5–30 minutes depending on tenant size.
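For reference, the “first of the month, 02:00 tenant time” slot is a simple wall-clock computation. A sketch using Python’s zoneinfo; the function name and time zone are illustrative.

```python
# Sketch: computing the next monthly run slot (first of the month, 02:00 in
# the tenant's primary time zone). Inputs are illustrative.
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def next_monthly_run(now: datetime, tenant_tz: str) -> datetime:
    local_now = now.astimezone(ZoneInfo(tenant_tz))
    candidate = local_now.replace(day=1, hour=2, minute=0, second=0, microsecond=0)
    if candidate <= local_now:
        # Already past this month's slot: roll forward to the 1st of next month.
        candidate = (candidate + timedelta(days=32)).replace(day=1)
    return candidate

print(next_monthly_run(datetime.now(tz=ZoneInfo("UTC")), "Europe/London"))
```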
Each re-run produces:
- A new scored report (all six modules)
- A diff against the previous run
- A changelog entry on the tenant’s monitoring timeline
- An email summary to the subscribed recipient list
Email alerts when new Must-Do findings appear
The email summary has two parts. At the top: “Scan complete, overall score 3.4 (up from 3.3), 12 new findings, 8 resolved, 87 persistent.” Below that: a focused list of new Must-Do-Before-Copilot findings only. Must-Do findings are the subset that actually block Copilot rollout — a new Anyone link on a library labelled Confidential, a new team with no owner, a new external guest granted access to a site containing HR documents, a new Conditional Access gap that opens an MFA exemption.
The intention of the email is simple: the admin does not need to read a full PDF every month to know whether the tenant got worse. The email either says “zero new Must-Do findings, posture stable” or it lists the new ones explicitly. Only the second case needs a human response.
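Sketched as code, and building on the diff sketch above, the summary logic looks roughly like this. The field names and severity label are assumptions, not the product’s schema.

```python
# Sketch of the two-part summary email: a one-line header, then new
# Must-Do-Before-Copilot findings only. Field names and the severity
# label are illustrative, not the product's schema.
def compose_summary(score: float, prev: float, diff: dict, findings: dict) -> str:
    trend = "up from" if score > prev else "down from" if score < prev else "unchanged from"
    header = (
        f"Scan complete, overall score {score:.1f} ({trend} {prev:.1f}), "
        f"{len(diff['new'])} new findings, {len(diff['resolved'])} resolved, "
        f"{len(diff['persistent'])} persistent."
    )
    # Only new findings that actually block Copilot rollout make the focused list.
    must_do = [findings[fid] for fid in diff["new"]
               if findings[fid]["severity"] == "must-do-before-copilot"]
    if not must_do:
        return header + "\nZero new Must-Do findings, posture stable."
    bullets = "\n".join(f"- {f['type']} on {f['resource']}" for f in must_do)
    return header + "\nNew Must-Do-Before-Copilot findings:\n" + bullets
```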
For tenants under active compliance scrutiny (financial services, healthcare, public-sector), the weekly cadence plus the new-finding email is the closest thing the Copilot governance space has to a SOC feed. It is not a replacement for a proper SIEM, but it is a vertical-specific monitoring layer where no general-purpose SIEM would have the fingerprint logic to detect “a new Anyone link on a sensitive library” as an event.
The recurring-revenue shape for teams billing clients
For anyone running Copilot readiness engagements on behalf of clients, continuous monitoring is a natural recurring offering. The pattern we see in the field:
- Sell the initial Copilot Readiness Assessment as a one-off engagement. Deliver the report.
- At delivery, offer continuous monitoring as a recurring service. Typical shape: monthly scan, quarterly client review call, ad-hoc escalation when new blocking findings appear.
- The scan runs automatically. You receive the same email alerts the client does. When something material appears, you reach out to the client before the client reaches out to you.
- Billing is a flat monthly or quarterly retainer. The review call is the touch point. The email alerts are the always-on surface.
The economics work because the human is not re-doing the assessment each month — the tool is. The human is triaging findings, facilitating fixes, and holding the review conversation. That is what the client is actually paying for.
The best retainers sell the outcome (tenant stays Copilot-ready) not the activity (we run a scan). Continuous monitoring is the instrumentation that makes the outcome measurable.
How to set it up
- Open /governance/monitoring. If you have already run a Copilot Readiness Assessment on this tenant, it shows up as the “baseline run.”
- Pick a cadence. Weekly, monthly, quarterly.
- Pick a run time. We default to 02:00 in the tenant’s primary time zone but you can change it.
- Add the recipient list for email alerts. These addresses receive the scan summary and the Must-Do-Before-Copilot new-finding emails. Typically: the tenant admin, the migration lead, the CISO, the Copilot program lead.
- Optionally set event markers for trend annotation (“Pilot group added”, “Phase 2 rollout”).
- Turn on the schedule. The first auto-re-run fires at the next scheduled slot, and its diff is computed against your original assessment as the baseline.
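For teams that document tenant setups, the choices above reduce to a handful of fields. This is purely illustrative; the module is configured through the UI, and the payload below is not a documented API.

```python
# Illustrative only: the setup choices above expressed as a config dict.
# Not a documented API; monitoring is configured in the UI.
monitoring_config = {
    "tenant": "contoso.onmicrosoft.com",  # hypothetical tenant
    "cadence": "monthly",                 # or "weekly" / "quarterly"
    "run_time": {"local_time": "02:00", "timezone": "Europe/London"},
    "recipients": [                       # scan summaries + Must-Do alerts
        "tenant-admin@contoso.com",
        "migration-lead@contoso.com",
        "ciso@contoso.com",
        "copilot-lead@contoso.com",
    ],
    "event_markers": [
        {"date": "2026-02-14", "label": "Pilot group added"},
    ],
    "enabled": True,  # first auto-re-run fires at the next scheduled slot
}
```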
What the trend actually looks like on a real tenant
Anonymised data from a 900-user tenant that has been under monitoring since October 2025. Initial overall score: 2.8. After six months of monitored cycles and active remediation:
| Month | Overall | New findings | Resolved | Must-Do persistent |
|---|---|---|---|---|
| Oct 2025 (baseline) | 2.8 | — | — | 41 |
| Nov 2025 | 2.9 | 8 | 12 | 37 |
| Dec 2025 | 3.0 | 11 | 15 | 33 |
| Jan 2026 | 3.1 | 6 | 18 | 21 |
| Feb 2026 | 3.3 | 7 | 11 | 17 |
| Mar 2026 | 3.4 | 5 | 9 | 13 |
The shape is the useful artefact here. The overall score is creeping up, the persistent Must-Do count is going down, and the new-finding rate is roughly stable (the tenant is generating new issues at a normal pace; remediation is simply faster than creation). Absent the monthly data, none of this would be visible; the client would just have a feeling that things were getting better.
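As a quick sanity check on that reading, the table reduces to a net change per cycle (the numbers below are copied from the table above):

```python
# Net findings per monitored cycle, taken from the table above.
# A negative net change means remediation is outpacing creation.
months   = ["Nov 2025", "Dec 2025", "Jan 2026", "Feb 2026", "Mar 2026"]
new      = [8, 11, 6, 7, 5]
resolved = [12, 15, 18, 11, 9]

for month, n, r in zip(months, new, resolved):
    print(f"{month}: {n} new - {r} resolved = net {n - r:+d}")
print(f"Total: {sum(new)} new vs {sum(resolved)} resolved across the monitored cycles")
```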
Known limits
- Fingerprints are stable for finding types where the affected-resource identity is stable. If a finding depends on a transient attribute (e.g., a specific session token), it cannot be reliably diffed. These are flagged as “not tracked for diff” in the findings list.
- The trend chart needs at least three data points to display a meaningful line. You need to wait through three scans before the chart has shape.
- Email alerts are batched: one email per scan. We do not do per-finding real-time paging — that would require push integration with a SIEM, which is a roadmap item.
- Module scoring methodology is versioned. If we change how a module is scored, the trend line shows a version-bump marker at the affected point so prior scores are not misinterpreted as drift.
Related reading
- Copilot Readiness Assessment: the baseline scan
- What’s new in Copilot Readiness v2
- SharePoint & OneDrive oversharing audit
Get started
Turn on continuous monitoring at app.migrationfox.com/register. The first baseline scan is free; monitoring adds a recurring run against the same tenant with diffs and alerts.