
GOVERNANCE · April 15, 2026 · 11 min read

From one-shot audit to continuous Copilot monitoring

[Dashboard · app.migrationfox.com/governance/monitoring]
Composite Score Trend for contoso.onmicrosoft.com (monthly cadence, trending up): 1.2 (Apr 1) → 1.5 (Apr 15) → 1.8 (May 1) → 2.3 (Jun 1) → 2.7 (Jul 1) → 3.3 (Aug 1). This month: 14 new findings, 28 resolved, 3 regressions. New Must-Do findings: SPO-004 (3 new Anyone links on sensitivity-labelled libraries), TEAM-007 (2 new teams created without owners), IAM-002 (1 new Global Admin added outside PIM).
Anonymised trend · 900-user tenant under monitoring since Oct 2025

The first Copilot Readiness assessment you run on a tenant is always revealing. You point the scanner at the tenant, six modules light up, the 1.0–4.0 score comes back, and a concrete list of Must-Do-Before-Copilot findings shows exactly where broken inheritance, oversharing, unowned teams, and unlabelled sensitive data are going to embarrass the client in Copilot’s first week. You deliver the PDF. The client feels informed. The project ends.

Then, about three weeks later, the tenant has changed. Someone created an Anyone link on a sensitive library because it was the fastest way to share a deck. A new team got created without an owner. A user was offboarded but their OneDrive with 40GB of indexed content was not. A new set of external guests was added to a Teams channel. By the time Copilot actually rolls out to the pilot group, the readiness posture the assessment reported no longer reflects reality.

Point-in-time assessments go stale. A tenant is not a static artefact; it is a stream of changes. The right way to treat Copilot readiness is the same way you treat backup verification or patch compliance — it is a recurring control, not a one-off deliverable. That is what the continuous monitoring module at /governance/monitoring does.

What changes when you move from one-shot to continuous

The one-shot assessment asks: “on this day, how ready is the tenant?” Continuous monitoring asks four harder questions:

  1. Is the posture trending up or down over time?
  2. What changed since the last run — which findings are new, which were resolved?
  3. Which findings persist scan after scan, and for how long?
  4. Did anything just appear that blocks the Copilot rollout?

Each of those questions is actionable in a way that a single score is not.

The trend chart

Every scheduled re-run stores its aggregate module scores plus its per-finding fingerprints. The monitoring dashboard plots the six-module scores over time. A stable tenant shows a flat-or-rising line. A tenant with ongoing drift shows a decline.

The default view is 12 months with one data point per run. You can overlay specific event markers (“Copilot pilot launched 2026-02-14”, “Mergers intake completed 2026-03-08”) so the trend is readable in context. When the SharePoint governance score drops 0.4 in a single run, the event marker directly above it is usually the explanation.
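The drop-detection the trend view relies on is simple to sketch. A minimal Python version, assuming the stored history is a list of (run-date, score) pairs (the names and the 0.4 threshold default are illustrative, not the tool’s actual implementation):

```python
def flag_drops(history: list[tuple[str, float]], threshold: float = 0.4) -> list[str]:
    """Return run dates where a module score fell by >= threshold vs the previous run."""
    drops = []
    # Walk consecutive pairs of runs and compare scores.
    for (_, prev_score), (date, cur_score) in zip(history, history[1:]):
        if prev_score - cur_score >= threshold:
            drops.append(date)
    return drops
```

Each flagged date is where you go looking for the nearest event marker.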

The diff view: new, resolved, persistent

This is the workhorse of the monitoring module. Every scan produces a set of findings, each with a stable fingerprint that survives across runs (the fingerprint is a hash of finding-type + affected-resource + specific-attribute, so the same finding on the same site looks like the same finding). Compare run N with run N-1 and every finding falls into one of three buckets:

  1. New — present in run N but not in run N-1.
  2. Resolved — present in run N-1 but gone in run N.
  3. Persistent — present in both.
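The fingerprint-and-diff mechanic can be sketched in a few lines of Python. This is an illustration of the idea, not the tool’s actual implementation — the field names and the choice of SHA-256 are assumptions:

```python
import hashlib

def fingerprint(finding: dict) -> str:
    """Stable ID: hash of finding-type + affected-resource + specific-attribute."""
    raw = "|".join([finding["type"], finding["resource"], finding["attribute"]])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

def diff_runs(current: list[dict], previous: list[dict]) -> dict:
    """Bucket findings into new / resolved / persistent by comparing fingerprints."""
    cur = {fingerprint(f): f for f in current}
    prev = {fingerprint(f): f for f in previous}
    return {
        "new":        [cur[k] for k in cur.keys() - prev.keys()],
        "resolved":   [prev[k] for k in prev.keys() - cur.keys()],
        "persistent": [cur[k] for k in cur.keys() & prev.keys()],
    }
```

Because the fingerprint ignores incidental attributes (discovery timestamp, scan ID), the same misconfiguration on the same resource keeps the same identity across months of runs — which is what makes the persistent bucket meaningful.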

Persistent findings are often the most important category to surface. A finding that has been “Must Do Before Copilot” for four consecutive scans is either blocked by something organisational (policy friction, ownership gap) or by a technical challenge the responsible team has not been resourced to handle. Surfacing a count like “23 persistent Must-Do findings unchanged for 90+ days” opens a client conversation that would never happen from a single-run report.

Run-over-run diff · Aug 1 vs Jul 1: 28 findings resolved since the last scan, 14 new findings this month, 23 Must-Do findings unchanged for 90+ days.

Monthly auto-re-runs

The default cadence is monthly. First of the month, 02:00 tenant time. You can change it to weekly (useful during active pilot rollout) or quarterly (useful for steady-state tenants). The re-run uses the same credential set as the original assessment, needs no manual trigger, and completes in the same time the original scan did — typically 5–30 minutes depending on tenant size.
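Computing the next monthly slot is the kind of detail that is easy to get wrong around time zones. A minimal sketch using Python’s standard zoneinfo (function name and default zone are illustrative, not the tool’s implementation):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def next_run(now: datetime, tz: str = "Europe/London") -> datetime:
    """Next monthly slot: first of the month, 02:00 in the tenant's time zone."""
    local = now.astimezone(ZoneInfo(tz))
    slot = local.replace(day=1, hour=2, minute=0, second=0, microsecond=0)
    if local < slot:
        return slot  # this month's slot has not fired yet
    # Otherwise roll over to the first of the next month.
    year, month = (slot.year + 1, 1) if slot.month == 12 else (slot.year, slot.month + 1)
    return slot.replace(year=year, month=month)
```

Anchoring the slot in the tenant’s local zone rather than UTC is the point: “02:00 tenant time” stays 02:00 across daylight-saving transitions.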

Each re-run produces:

  1. A new data point on the six-module trend chart.
  2. A run-over-run diff: new, resolved, and persistent findings.
  3. An email summary, with any new Must-Do-Before-Copilot findings called out.

Email alerts when new Must-Do findings appear

The email summary has two parts. At the top: “Scan complete, overall score 3.4 (up from 3.3), 12 new findings, 8 resolved, 87 persistent.” Below that: a focused list of new Must-Do-Before-Copilot findings only. Must-Do findings are the subset that actually blocks Copilot rollout — a new Anyone link on a library labelled Confidential, a new team with no owner, a new external guest granted access to a site containing HR documents, a new Conditional Access gap that opens an MFA exemption.

The intention of the email is simple: the admin does not need to read a full PDF every month to know whether the tenant got worse. The email either says “zero new Must-Do findings, posture stable” or it lists the new ones explicitly. Only the second case needs a human response.
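That either/or contract is easy to express in code. A sketch of the alert body, assuming each finding carries an id, a title, and a severity field (all names here are illustrative, not the tool’s schema):

```python
MUST_DO = "must-do-before-copilot"

def email_body(diff: dict) -> str:
    """Render the alert: either 'posture stable' or the new Must-Do findings, explicitly."""
    blocking = [f for f in diff["new"] if f.get("severity") == MUST_DO]
    if not blocking:
        return "Zero new Must-Do findings, posture stable."
    lines = [f"{len(blocking)} new Must-Do finding(s):"]
    lines += [f"  {f['id']}  {f['title']}" for f in blocking]
    return "\n".join(lines)
```

Filtering to Must-Do severity before rendering is what keeps the alert readable: the admin triages the blocking subset, not the full finding stream.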

For tenants under active compliance scrutiny (financial services, healthcare, public-sector), the weekly cadence plus the new-finding email is the closest thing the Copilot governance space has to a SOC feed. It is not a replacement for a proper SIEM, but it is a vertical-specific monitoring layer where no general-purpose SIEM would have the fingerprint logic to detect “a new Anyone link on a sensitive library” as an event.

The recurring-revenue shape for teams billing clients

For anyone running Copilot readiness engagements on behalf of clients, continuous monitoring is a natural recurring offering. The pattern we see in the field:

  1. Sell the initial Copilot Readiness Assessment as a one-off engagement. Deliver the report.
  2. At delivery, offer continuous monitoring as a recurring service. Typical shape: monthly scan, quarterly client review call, ad-hoc escalation when new blocking findings appear.
  3. The scan runs automatically. You receive the same email alerts the client does. When something material appears, you reach out to the client before the client reaches out to you.
  4. Billing is a flat monthly or quarterly retainer. The review call is the touch point. The email alerts are the always-on surface.

The economics work because the human is not re-doing the assessment each month — the tool is. The human is triaging findings, facilitating fixes, and holding the review conversation. That is what the client is actually paying for.

The best retainers sell the outcome (tenant stays Copilot-ready) not the activity (we run a scan). Continuous monitoring is the instrumentation that makes the outcome measurable.

How to set it up

  1. Open /governance/monitoring. If you have already run a Copilot Readiness Assessment on this tenant, it shows up as the “baseline run.”
  2. Pick a cadence. Weekly, monthly, quarterly.
  3. Pick a run time. We default to 02:00 in the tenant’s primary time zone but you can change it.
  4. Add the recipient list for email alerts. These addresses receive the scan summary and the Must-Do-Before-Copilot new-finding emails. Typically: the tenant admin, the migration lead, the CISO, the Copilot program lead.
  5. Optionally set event markers for trend annotation (“Pilot group added”, “Phase 2 rollout”).
  6. Turn on the schedule. The first auto-re-run fires at the next scheduled slot. Baseline diff is against your original assessment.
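The six steps above boil down to one configuration object. A hypothetical sketch of what the schedule captures — every field name here is illustrative, not the product’s actual schema:

```python
# Hypothetical monitoring schedule for one tenant (illustrative field names).
monitoring_config = {
    "tenant": "contoso.onmicrosoft.com",
    "cadence": "monthly",              # weekly | monthly | quarterly
    "run_at": "02:00",                 # in the tenant's primary time zone
    "recipients": [                    # scan summaries + Must-Do alerts
        "admin@contoso.com",
        "migration-lead@contoso.com",
    ],
    "event_markers": [                 # annotations rendered on the trend chart
        {"date": "2026-02-14", "label": "Copilot pilot launched"},
    ],
    "enabled": True,                   # first auto-re-run fires at the next slot
}
```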

What the trend actually looks like on a real tenant

Anonymised data from a 900-user tenant that has been under monitoring since October 2025. Initial overall score: 2.8. After six months of monitored cycles and active remediation:

Month                 Overall   New findings   Resolved   Must-Do persistent
Oct 2025 (baseline)   2.8       —              —          41
Nov 2025              2.9       8              12         37
Dec 2025              3.0       11             15         33
Jan 2026              3.1       6              18         21
Feb 2026              3.3       7              11         17
Mar 2026              3.4       5              9          13

The shape is the useful artefact here. The overall score is creeping up, the persistent Must-Do count is going down, and the new-finding rate is roughly stable (the tenant is generating new issues at a normal pace; remediation is simply faster than creation). Absent the monthly data, none of this would be visible; the client would just have a feeling that things were getting better.

Get started

Turn on continuous monitoring at app.migrationfox.com/register. The first baseline scan is free; monitoring adds a recurring run against the same tenant with diffs and alerts.
