MIGRATION · April 14, 2026 · 10 min read
How to migrate files to Azure Blob Storage (2026 guide)
Most of the writing on Azure Blob Storage migration is either a sales pitch for AzCopy or a vendor comparison that skips the parts that actually break. This guide is the opposite: it walks through a real migration from a typical source (a Windows file share, a SharePoint library, or a Google Drive) into a target Azure Blob container, in enough detail that you can follow it on a Friday and trust the result on Monday.
Why Azure Blob for migration targets
Blob is the cheapest tier of durable storage Microsoft sells. A terabyte of Cool tier is under $10/month at list price; Archive is under $2. That is one to two orders of magnitude cheaper than the same terabyte sitting in SharePoint or OneDrive, which is why nearly every serious file archival strategy on Azure ends up in Blob.
The other reason is compliance. Blob supports immutable storage policies (WORM), customer-managed keys via Key Vault, legal holds, soft delete, point-in-time restore, and a Microsoft-issued SOC 2 / ISO 27001 / HIPAA posture out of the box. If you are migrating a file share under a retention obligation, Blob is usually the defensible landing spot.
The catch with Blob is that it is not a file system. It is a flat namespace with a key/value model. Virtual folders exist only as a naming convention (team-a/project-x/report.pdf). Your migration tool has to be the thing that preserves directory structure as blob names and preserves source metadata as blob metadata — the storage itself does not care.
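To make the flat-namespace point concrete, here is a minimal sketch (not MigrationFox's actual code) of what "preserving directory structure as blob names" means: the tool maps each source path, relative to the share root, onto a `/`-delimited blob name, and the "folders" exist only inside that string.

```python
from pathlib import PurePosixPath, PureWindowsPath

def blob_name_for(source_path: str, share_root: str) -> str:
    """Map a Windows share path to a blob name that preserves the
    directory structure as a '/'-delimited virtual folder prefix."""
    rel = PureWindowsPath(source_path).relative_to(PureWindowsPath(share_root))
    # Blob names use forward slashes; each segment becomes a "virtual folder".
    return str(PurePosixPath(*rel.parts))

# A file at \\fs01\finance\project-x\report.pdf, migrated from the
# \\fs01\finance share, lands as the blob "project-x/report.pdf".
name = blob_name_for(r"\\fs01\finance\project-x\report.pdf", r"\\fs01\finance")
```

The storage account never sees a folder; listing blobs with the prefix `project-x/` is what makes the portal render one.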
Supported sources
MigrationFox writes into Azure Blob from four source types, each with its own authentication and metadata model. Pick whichever matches your reality:
- SMB file shares — Windows file servers, Samba, NetApp. Requires the MigrationFox SMB agent inside your network.
- SharePoint Online — document libraries, via Microsoft Graph with an Azure AD app registration.
- OneDrive for Business — per-user or tenant-wide, same Graph app.
- Google Drive — personal and shared drives via a Google Cloud service account with domain-wide delegation. Native Google Docs are exported to Office formats on the way through.
If your source is not on this list but speaks HTTP/S or SMB, open a ticket — the destination-side work is the same regardless of where the bytes originate.
Step 1: Create a storage account
In the Azure portal, create a Storage Account with these settings for most migration workloads:
- Performance: Standard (Premium block blob trades a much higher per-GB storage price for low latency; rarely justified for archival)
- Redundancy: LRS if the tenant is single-region, GRS if you have an RPO requirement across regions
- Access tier: Hot for working sets; Cool or Archive for retention-only archives (you can set per-blob tier at write time from MigrationFox)
- Hierarchical namespace: Off unless you are specifically using ADLS Gen2 for analytics workloads. For straight migrations, leave it off — it costs more per operation.
- Soft delete: Enable, 7–30 day retention. This is cheap insurance against a bad migration run.
Once created, open the storage account and create a container. The container name is the root of your blob namespace; pick something descriptive like archive-2026 or fileshare-finance. Container names must be lowercase, 3–63 characters, and cannot contain underscores.
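Those naming rules are easy to check before you create anything. A small validator, written here as an illustrative sketch of Azure's documented container rules (lowercase letters, digits, and hyphens; starts and ends with a letter or digit; no consecutive hyphens; 3–63 characters):

```python
import re

def is_valid_container_name(name: str) -> bool:
    """Check a name against Azure's container naming rules:
    3-63 chars, lowercase letters/digits/hyphens, must start and end
    with a letter or digit, no consecutive hyphens."""
    if not 3 <= len(name) <= 63:
        return False
    return bool(re.fullmatch(r"[a-z0-9](?:-?[a-z0-9])*", name))

is_valid_container_name("archive-2026")   # valid
is_valid_container_name("Archive_2026")   # invalid: uppercase and underscore
```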
Step 2: Credentials
MigrationFox authenticates to Blob using one of two credential types:
- Storage account key (Shared Key). The full-privilege key for the whole account. From the portal, go to Security + networking → Access keys → key1 → Show. Copy the key1 value, not the connection string.
- SAS token. A scoped credential that can be restricted to one container, to write-only, and to a specific expiry. Recommended if you want tighter blast radius. Generate via Shared access tokens on the container, grant Write, Create, Add, and set the expiry to one week past your expected migration completion.
Either credential is pasted into the MigrationFox destination wizard. The credential is encrypted at rest with AES-256-GCM, never logged, and never transmitted outside the worker that needs it for the current request.
If you are on Azure US Government, Azure China, or another sovereign cloud, override the endpoint in the wizard. Examples:
Commercial: blob.core.windows.net
Gov Cloud: blob.core.usgovcloudapi.net
Azure China: blob.core.chinacloudapi.cn
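Only the endpoint suffix changes between clouds; the account name and container stay the same. A hypothetical helper (the dictionary keys are made up here; the suffixes are the real ones listed above) shows how the blob service URL is assembled:

```python
# Endpoint suffixes per cloud; only this part varies between environments.
ENDPOINT_SUFFIXES = {
    "commercial": "blob.core.windows.net",
    "usgov":      "blob.core.usgovcloudapi.net",
    "china":      "blob.core.chinacloudapi.cn",
}

def blob_service_url(account: str, cloud: str = "commercial") -> str:
    """Build the blob service URL for a storage account on a given cloud."""
    return f"https://{account}.{ENDPOINT_SUFFIXES[cloud]}"

blob_service_url("contosoarchive", "usgov")
# "https://contosoarchive.blob.core.usgovcloudapi.net"
```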
Step 3: Create the job
In MigrationFox, create a new migration and choose the source platform. Connect the source credential (Graph app for SharePoint/OneDrive, service-account JSON for Google Drive, agent registration for SMB). Then select Azure Blob as the destination and paste your storage key or SAS token, storage account name, container name, and endpoint.
A few job-level options worth knowing:
- Prefix. Every blob name gets a prefix prepended. Useful to land multiple jobs in one container under finance/ and hr/ subfolders without naming collisions.
- Access tier. Hot, Cool, or Archive at write time. Archive is cheapest but blobs are not readable for ~15 hours after write; use it only for true cold archives.
- Delta mode. Re-run the same job to pick up only items modified since the last completion. The engine tracks the source-side modifiedDateTime per item.
- Chunk size. Default 8 MB per block. Leave it alone unless you are on a high-latency link, in which case 4 MB and more concurrency can help.
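The chunk-size trade-off is easy to reason about with Azure's documented ceiling of 50,000 blocks per block blob. A back-of-envelope sketch (the function name is illustrative, not a MigrationFox API):

```python
import math

BLOCK_LIMIT = 50_000  # Azure allows at most 50,000 blocks per block blob

def blocks_needed(file_size_bytes: int, chunk_bytes: int = 8 * 1024 * 1024) -> int:
    """How many block uploads a file requires at a given chunk size."""
    n = math.ceil(file_size_bytes / chunk_bytes)
    if n > BLOCK_LIMIT:
        raise ValueError("file too large for this chunk size; increase chunk size")
    return n

blocks_needed(1 * 1024**3)                    # 1 GiB at 8 MiB chunks: 128 blocks
blocks_needed(1 * 1024**3, 4 * 1024 * 1024)   # 4 MiB chunks: 256 blocks
```

Halving the chunk size doubles the number of blocks (and the opportunities for parallel upload on a slow link), but also doubles the per-file request count.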
Hit Preview. MigrationFox enumerates the source, reports total file count and total size, and gives you a dry-run cost estimate before you spend a byte. Hit Start and the migration begins.
Step 4: Verification
Every upload is verified before it is marked complete. For single-PUT blobs we read back the committed blob’s content MD5 and compare it to the MD5 we computed streaming the source. For block blobs we record each block ID, verify each block’s acknowledgement, and store the block list in the job record.
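Streaming the MD5 alongside the upload is the key trick: the digest is computed chunk by chunk as bytes flow through, so nothing is buffered twice. A minimal sketch of that pattern (note that Azure's Content-MD5 header is base64-encoded, so a real comparison decodes it first; here we compare hex digests of the source stream):

```python
import hashlib

def stream_md5(chunks) -> str:
    """Compute MD5 over an iterable of byte chunks without buffering the file."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

# Feeding the stream in pieces yields the same digest as hashing it whole:
stream_md5([b"hello ", b"world"])
```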
The job record is your audit trail. For every migrated file you get:
- Source path, destination container + blob name, final size in bytes
- MD5 of the streamed content and the Azure ETag returned on commit
- Source metadata (NTFS ACL summary, SharePoint column values, Drive ownership) written as x-ms-meta-* headers
- Start and finish timestamps, worker ID, bytes-per-second
Anything that fails verification or fails to upload at all lands on the Exceptions tab. You do not need to hunt for it; it is surfaced.
Troubleshooting: the three things that waste a weekend
1. Wrong container name
The single most common mistake. You typed Archive_2026; the container is named archive-2026. Azure returns a ContainerNotFound error which the engine surfaces verbatim, but it looks like an auth error if you are tired. Container names are lowercase only, hyphens not underscores, 3–63 characters. Copy-paste from the portal rather than re-typing.
2. Wrong access-key format
People paste the connection string instead of the access key. A connection string looks like DefaultEndpointsProtocol=https;AccountName=... and the actual key is one field inside it (AccountKey=...). MigrationFox expects just the key. If your paste starts with DefaultEndpointsProtocol=, you grabbed the wrong thing.
The other common failure is pasting a SAS token into the Shared Key field or vice versa. The UI validates the format on save, but the underlying failure is almost always a cross-paste.
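The three things people paste are easy to tell apart by shape. A rough heuristic sketch (not the UI's actual validation logic): connection strings start with `DefaultEndpointsProtocol=`, SAS tokens are query strings carrying `sv=` and `sig=` parameters, and an account key is a bare base64 string.

```python
def classify_credential(value: str) -> str:
    """Rough heuristic for the three things people paste into the key field."""
    v = value.strip()
    if v.startswith("DefaultEndpointsProtocol="):
        # The whole connection string; the key is the AccountKey= field inside it.
        return "connection-string"
    if "sig=" in v and ("sv=" in v or v.startswith("?")):
        # SAS tokens are URL query strings with a service version and signature.
        return "sas-token"
    # An account key is a bare base64 string with no structure to detect.
    return "shared-key"

classify_credential("DefaultEndpointsProtocol=https;AccountName=demo;AccountKey=abc==")
# "connection-string"
```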
3. Soft delete eating your re-runs
If you enabled soft delete (good) and then re-ran a migration that overwrites blobs (also good), your bill quietly includes the soft-deleted older versions for the retention period. For a 10 TB archive re-migrated five times during testing, that is an unpleasant line item.
Two fixes. Either shorten the soft-delete retention during the migration window (set it to 7 days, re-enable 30 days after cutover) or use a clean container per run and swap which container is “primary” at the end. Both are cheap; neither is the default.
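The arithmetic behind that line item is worth doing up front. A back-of-envelope sketch, under the simplifying assumption that each full re-run overwrites every blob and leaves one soft-deleted copy billed for the whole retention window:

```python
def soft_delete_overhead_gb_months(archive_gb: float, reruns: int,
                                   retention_days: int) -> float:
    """Approximate extra billable GB-months from soft-deleted overwrites.
    Assumes each re-run replaces every blob, leaving one deleted copy
    billed for the retention window."""
    return archive_gb * reruns * (retention_days / 30)

# 10 TB re-migrated five times with 30-day retention:
soft_delete_overhead_gb_months(10_000, 5, 30)
# 50,000 extra GB-months of soft-deleted data on the bill
```

Even at Cool-tier prices that is real money; at 7-day retention the same testing cycle costs roughly a quarter as much.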
If in doubt, start with a small job — 10 GB of known content — and inspect the first few blobs in the portal before you fire the full migration. Five minutes of paranoia beats five hours of re-running.
What happens at scale
Migrations of several TB or more are common for archive workloads. The engine handles that case with parallel workers per source (up to your plan’s concurrency), parallel block uploads within a single large file, and automatic backoff on Azure-side throttling (503 ServerBusy). A 10 TB SharePoint archive typically completes in 24–48 hours on a standard link without anyone babysitting it.
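The backoff on 503 ServerBusy follows the usual exponential-with-jitter shape. A sketch of that shape only (the retry count, base, and cap here are illustrative, not the engine's actual parameters):

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 60.0):
    """Yield exponential backoff waits with jitter for throttled requests:
    1s, 2s, 4s, ... capped at `cap`, each plus up to 1s of random jitter."""
    for attempt in range(max_retries):
        yield min(cap, base * 2 ** attempt) + random.uniform(0, 1)

delays = list(backoff_delays(4))
# four waits, roughly 1s, 2s, 4s, 8s plus jitter
```

The jitter matters at scale: without it, a fleet of throttled workers retries in lockstep and re-triggers the same throttle.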
If your source is SMB, the MigrationFox agent is a single lightweight Windows service that sits on a jump host, mounts the shares, and streams through. Deploying it is a 10-minute job; the rest is waiting.
Related reading
- Azure Blob Storage migration platform page — the formal capability list
- Migrating an SMB file server to the cloud — the source-side companion to this post
- Delta sync and incremental migration — how re-runs work
Get started
Create a free account at app.migrationfox.com/register and move your first 2 GB to Azure Blob without a credit card. If your storage account is on a sovereign cloud or you need the SMB agent for a locked-down file server, the setup wizard walks you through it.