
ENGINEERING · April 14, 2026 · 13 min read

How MigrationFox cut SharePoint migration time by ~50%

Migration performance is not an abstract benchmark. It is the difference between your cutover finishing on Sunday at 2am versus Monday at 9am with users waiting. This month we shipped an 18-fix performance pass across the SharePoint site migration path, the governance scanner, and the worker HTTP stack. On the tenants where we have measured end-to-end, typical SharePoint migrations are landing in about half the wall-clock time they used to take, and governance scans are 60–70% faster.

This post is the engineering-honest version of what changed. We will walk through the four categories of improvement, show where the wins actually came from, and say out loud which optimisations were theatre and which ones moved the needle.

What “throughput” actually means in migration

Before the fixes, a question: what does a migration tool's throughput actually measure? The intuitive answer is megabytes per second — byte streaming speed. For file-heavy migrations (SMB-to-Blob, big document libraries) that is basically correct. For SharePoint site migrations it is only half the story.

A typical SP site migration includes:

- schema reads: content types, site columns, and list views for every list
- per-item metadata writes: the content type and custom columns on each file
- permission and sharing replay
- the document bytes themselves

Most of the wall-clock time is spent on small requests, not on file bytes. A 5 GB document library can finish the byte transfer in five minutes but still spend twenty-five minutes waiting on 12,000 sequential Graph calls to tag each file with its content type and custom columns. Optimising throughput means optimising the small-request path — the HTTP round trips, the serialisation, the ordering — not just the bytes-per-second number that looks impressive on a marketing slide.
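To see why the small-request path dominates, run the numbers behind the example above. The bandwidth and latency figures here are illustrative, chosen to match the five-minute / twenty-five-minute split:

```typescript
// Back-of-envelope model of a metadata-heavy site migration.
const libraryBytes = 5 * 1024 ** 3;   // 5 GB of file bytes
const bytesPerSec = 17 * 1024 ** 2;   // ~17 MB/s effective transfer rate
const graphCalls = 12_000;            // one metadata write per item
const latencySec = 0.125;             // ~125 ms per sequential round trip

const byteMinutes = libraryBytes / bytesPerSec / 60; // ≈ 5 min of streaming
const callMinutes = (graphCalls * latencySec) / 60;  // 25 min of round trips
```

Five minutes of bytes, twenty-five minutes of waiting on sequential calls: the call count, not the link speed, sets the wall clock.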

ShareGate and BitTitan both have “Insane Mode” / “High Speed” marketing around file transfer. That is real and it applies to file bytes. It is not the same thing as site-migration throughput, where the bottleneck is the Graph call count, not the file bytes.

Fix 1: Graph $batch for the metadata path

The Microsoft Graph /$batch endpoint accepts up to 20 sub-requests in a single HTTP round trip. For metadata-heavy phases — like reading the content types on a site, or applying column metadata to a list of items — that one feature alone can cut network round trips by up to 20x.

Before the audit, our content-type, site-column, and list-view enumeration phases were each issuing one GET per list per attribute type, sequentially. A site with 40 lists was costing us 40 × 3 round trips just to read schema. Post-fix, those enumerations are batched in groups of 20, which on a site with 40 lists drops 120 round trips to 6. Each round trip on Graph is typically 120–400ms, so the wall-clock difference on a mid-size site is roughly 30 to 45 seconds of dead air the user does not have to sit through.
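The read-side batching looks roughly like this. It is a sketch, not the worker's actual code: `buildBatchPayloads` is our name for illustration and the URLs are illustrative Graph shapes.

```typescript
type BatchRequest = { id: string; method: "GET"; url: string };
type BatchPayload = { requests: BatchRequest[] };

const GRAPH_BATCH_LIMIT = 20; // documented $batch sub-request maximum

// Turn N relative Graph URLs into ceil(N / 20) $batch POST bodies.
function buildBatchPayloads(urls: string[]): BatchPayload[] {
  const payloads: BatchPayload[] = [];
  for (let i = 0; i < urls.length; i += GRAPH_BATCH_LIMIT) {
    payloads.push({
      requests: urls.slice(i, i + GRAPH_BATCH_LIMIT).map((url, j) => ({
        id: String(i + j), // ids map sub-responses back to sub-requests
        method: "GET",
        url, // relative to https://graph.microsoft.com/v1.0
      })),
    });
  }
  return payloads;
}

// 40 lists x 3 schema reads = 120 sub-requests -> 6 round trips to /$batch.
const schemaUrls: string[] = [];
for (let list = 0; list < 40; list++) {
  for (const part of ["contentTypes", "columns", "views"]) {
    schemaUrls.push(`/sites/{site-id}/lists/${list}/${part}`);
  }
}
const batches = buildBatchPayloads(schemaUrls); // batches.length === 6
```

Each payload is one POST to /v1.0/$batch; the per-sub-request ids are what let you reassemble the 120 logical responses on the way back out.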

We did not use $batch for item writes. Item creation in Graph has enough quirks (per-item field-stripping for invalid columns, per-item retry on 429, progressive fallback when a field fails) that the sub-request semantics of $batch (partial success, shared throttling bucket) are more hindrance than help. For writes, we went a different direction.

Fix 2: p-limit parallelism with back-pressure

The old item-write path was either fully sequential (safe, slow, used for anything sensitive) or fully parallel with Promise.all (fast, noisy, prone to 429 storms). Neither is what you want for a production migration. What you want is bounded concurrency with back-pressure.

We moved every parallel phase onto p-limit with per-phase concurrency caps, each cap tuned by measurement rather than guesswork.

On top of the cap, the worker watches the Retry-After header from every 429. If throttling kicks in, the affected phase pulls its effective concurrency down and waits for the retry window before resuming. If the tenant is healthy, the cap stays at the configured value. No manual tuning.

The subtle win here is not the top-end speed — it is the consistency. Fully-parallel implementations hit 429 storms that take 30–90 seconds to drain; bounded-parallel implementations barely see 429s at all. The p-limit version finishes faster on average because it does not blow itself up.
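A sketch of the pattern. The worker uses the real p-limit package; `pLimit` below is a minimal reimplementation so the mechanics are visible. The cap of 8 is illustrative, not one of our tuned values, and `postItem` in the usage comment is hypothetical.

```typescript
// Minimal p-limit-style limiter: at most `concurrency` tasks run at once;
// everything else waits in a FIFO queue (the back-pressure).
function pLimit(concurrency: number) {
  let active = 0;
  const queue: Array<() => void> = [];
  const next = () => {
    active--;
    const run = queue.shift();
    if (run) run();
  };
  return function limit<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise<T>((resolve, reject) => {
      const start = () => {
        active++;
        fn().then(resolve, reject).finally(next);
      };
      if (active < concurrency) start();
      else queue.push(start); // excess work waits its turn
    });
  };
}

// Per-phase cap; 8 is illustrative.
const writeLimit = pLimit(8);

// Retry loop that honours Retry-After on a 429 before trying again.
async function writeItemWithBackoff(
  write: () => Promise<Response>,
): Promise<Response> {
  for (;;) {
    const res = await write();
    if (res.status !== 429) return res;
    const waitMs = Number(res.headers.get("Retry-After") ?? "5") * 1000;
    await new Promise((r) => setTimeout(r, waitMs));
  }
}

// Usage shape:
// items.map((it) => writeLimit(() => writeItemWithBackoff(() => postItem(it))));
```

Because the limiter only starts queued work as earlier work finishes, a slow throttled phase naturally drains instead of piling new requests onto a tenant that is already saying no.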

Fix 3: Undici keep-alive and HTTP/2

Every HTTP library has a default connection pool. Node’s built-in https agent historically created a new TCP + TLS connection per request unless you explicitly enabled keep-alive (Node 19 finally turned it on by default for the global agent), and even with keep-alive its pool defaults are modest. For a worker making thousands of requests to graph.microsoft.com in quick succession, that TLS handshake overhead is real.

We moved the Graph client onto undici with keep-alive connection reuse, a wider per-origin pool, and HTTP/2 where the service negotiates it.

The measured win on a 5,000-item list was about 25% of the phase wall-clock time. The first couple of requests still pay TCP+TLS, but everything after that reuses the established connection. Over a 40-minute site migration, that is meaningful.
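To show what connection reuse buys without pulling in a dependency, here is the same idea on Node's built-in agent. The pool sizes are illustrative, not our production values.

```typescript
import { Agent } from "node:https";

// Without keepAlive, the agent tears each socket down after one request
// and re-pays TCP + TLS on the next one.
const graphAgent = new Agent({
  keepAlive: true,    // reuse sockets across requests
  maxSockets: 64,     // cap on concurrent sockets per host
  maxFreeSockets: 16, // idle sockets kept warm between bursts
});

// Passed as { agent: graphAgent } on every request to graph.microsoft.com.
// undici's Agent takes analogous knobs (connections, pipelining,
// keepAliveTimeout) and can negotiate HTTP/2 via allowH2.
```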

Fix 4: Governance scan dedup

This one lives in the Copilot Readiness scanner, not the migration worker, but it matters for the same reason: too many redundant Graph calls.

The governance scan runs six modules (Purview, Identity, SharePoint, Teams, OneDrive, Power Platform) and each one used to hit /users, /sites, or /groups independently. The SharePoint module needed the sites list. The Teams module needed the sites list. The Identity module and OneDrive module each needed the users list.

We added a request cache scoped to a single scan run, keyed on the normalised Graph URL. First module to ask for /users?$select=id,userPrincipalName pays the API cost; every subsequent module gets the cached response. No Graph calls were removed from scan logic; they just stopped happening two or three times.

On a 1,200-user tenant, full six-module scan time dropped from about 5m30s to about 1m50s. The cache is discarded at the end of each scan so the next run still reflects live tenant state.

Fixes 5 through 18: the long tail

Not every fix is worth a section heading. A few worth mentioning because they are generally useful patterns:

None of these by themselves is dramatic. Together they are the difference between “pretty fast” and “done by the time you get back from lunch”.

What we measured

On the three tenants where we have reliable before-and-after numbers:

| Workload | Before | After | Delta |
| --- | --- | --- | --- |
| Mid-size SP site (40 lists, 12k items, 8 GB) | 1h 48m | 52m | ~52% faster |
| Small SP site (6 lists, 800 items, 400 MB) | 9m 20s | 4m 10s | ~55% faster |
| Governance full scan (1,200 users) | 5m 30s | 1m 50s | ~66% faster |
| Azure Blob ingest (SMB, 1 TB) | 4h 40m | 3h 15m | ~30% faster |

The SMB-to-Blob number is smaller because that path was already bottlenecked on the bytes-per-second of the network link, not on per-request overhead. Speed fixes help less when the wall clock is already set by hardware.

We do not claim “10x faster than ShareGate” or similar marketing numbers. We have not run head-to-head benchmarks, and on file-bytes-only workloads ShareGate’s Insane Mode is very probably competitive with or faster than what we do today, because it uses the SPMT stream format that bypasses Graph’s per-item write cost. What we are saying is narrower: MigrationFox got substantially faster relative to itself, and on Graph-based site migrations (as opposed to file-only transfers) the gap to Insane Mode is now small.

What did not help

Worth calling out because we tried them and they were not worth it:

What is next

The roadmap item that could move the needle further on site migrations is switching the largest-list write path to the SharePoint REST _api/web/lists/... endpoint where it supports batched item creation in a single POST. Graph does not expose this shape of batching for list-item writes. It is more work to support two write paths and we have to maintain parity on field stripping, error shapes, and permission replay. We will ship it when the engineering cost is justified — currently that is somewhere on the margin of another 15–25% improvement on write-heavy phases.

Get started

The speed fixes are live on every workspace. Start a free SharePoint migration at app.migrationfox.com/register and watch your first pre-flight report come back in seconds rather than minutes.
