Prompt
How do I ensure that a file was successfully stored in Azure Blob Storage?
Answer
You generally verify a successful upload to Azure Blob Storage by one or more of:
- Checking the SDK/API response (status code, ETag, request ID)
- Listing/reading the blob back
- Using diagnostics (logs/metrics) for confirmation in production
Below are concrete patterns for common languages and tools.
1. General principles
Whenever you upload a blob (via SDK, REST, or CLI), Azure will:
- Return an HTTP status code: `201 Created` (for new blobs) or `200 OK` (for overwrites/commits)
- Include headers like:
  - `ETag`: a version identifier
  - `Last-Modified`
  - `x-ms-request-id`: for tracking in logs
To “ensure” the file is stored:
- Confirm the call did not throw an exception / error.
- Check the returned status code or SDK-specific result.
- Optionally, do a read-after-write check:
  - Fetch blob properties or download the blob
  - Confirm size / MD5 / content as expected
- In production, also rely on:
  - Application logs
  - Storage diagnostic logs and metrics in the Azure Portal
  - Retry logic and idempotency
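At the REST level, a successful `Put Blob` call comes back with roughly the following response headers (the values shown here are illustrative placeholders, not real ones):

```http
HTTP/1.1 201 Created
ETag: "0x8D..."
Last-Modified: Tue, 07 Jan 2025 10:15:30 GMT
Content-MD5: <base64 MD5, when one was supplied or computed>
x-ms-request-id: <request id to quote when correlating logs>
```

Checking the status line plus the presence of an `ETag` is the REST equivalent of the SDK checks shown below.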
2. .NET (C#) example
```csharp
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;
using System;
using System.IO;
using System.Threading.Tasks;

public async Task UploadAndVerifyAsync()
{
    string connectionString = "<your-connection-string>";
    string containerName = "my-container";
    string blobName = "myfile.txt";
    string localFilePath = @"C:\temp\myfile.txt";

    BlobContainerClient containerClient = new BlobContainerClient(connectionString, containerName);
    await containerClient.CreateIfNotExistsAsync();

    BlobClient blobClient = containerClient.GetBlobClient(blobName);

    // Upload (the using declaration disposes the stream when the method exits)
    using FileStream uploadFileStream = File.OpenRead(localFilePath);
    Response<BlobContentInfo> response = await blobClient.UploadAsync(uploadFileStream, overwrite: true);

    // 1. Check response basics
    if (response.GetRawResponse().Status is 200 or 201)
    {
        Console.WriteLine("Upload call succeeded.");
        Console.WriteLine($"ETag: {response.Value.ETag}, LastModified: {response.Value.LastModified}");
    }
    else
    {
        throw new Exception($"Upload failed with status {response.GetRawResponse().Status}");
    }

    // 2. Optional: verify by fetching properties
    Response<BlobProperties> propsResponse = await blobClient.GetPropertiesAsync();
    long size = propsResponse.Value.ContentLength;
    Console.WriteLine($"Stored size: {size} bytes");
    if (size <= 0)
        throw new Exception("Blob has zero length, upload may not have been correct.");

    // 3. Optional: verify content hash (populated if you set one during upload)
    byte[]? md5 = propsResponse.Value.ContentHash;
}
```
Key checks:
- `response.GetRawResponse().Status`
- Presence of an `ETag`
- `GetPropertiesAsync()` to confirm the blob exists and has the expected size/hash.
3. Python example
```python
from azure.storage.blob import BlobServiceClient
from azure.core.exceptions import ResourceExistsError, ResourceNotFoundError

connection_string = "<your-connection-string>"
container_name = "my-container"
blob_name = "myfile.txt"
file_path = "myfile.txt"

service_client = BlobServiceClient.from_connection_string(connection_string)
container_client = service_client.get_container_client(container_name)
try:
    container_client.create_container()
except ResourceExistsError:
    pass  # container already exists

blob_client = container_client.get_blob_client(blob_name)

# Upload
with open(file_path, "rb") as data:
    result = blob_client.upload_blob(data, overwrite=True)

# 1. Check that the result has an etag
# (upload_blob returns a dict of blob property headers, e.g. etag, last_modified)
if not result.get("etag"):
    raise Exception("Upload result has no ETag; something may be wrong.")
print("Upload call succeeded, ETag:", result["etag"])

# 2. Verify by getting properties
props = blob_client.get_blob_properties()
print("Blob size (bytes):", props.size)
if props.size <= 0:
    raise Exception("Uploaded blob has zero size.")

# 3. Optional: quick existence check via download
try:
    data = blob_client.download_blob().readall()
    print(f"Downloaded {len(data)} bytes from Azure Blob Storage.")
except ResourceNotFoundError:
    raise Exception("Blob not found after upload!")
```
4. Node.js / JavaScript example
```javascript
const { BlobServiceClient } = require("@azure/storage-blob");

async function uploadAndVerify() {
  const connectionString = process.env.AZURE_STORAGE_CONNECTION_STRING;
  const containerName = "my-container";
  const blobName = "myfile.txt";
  const filePath = "./myfile.txt";

  const serviceClient = BlobServiceClient.fromConnectionString(connectionString);
  const containerClient = serviceClient.getContainerClient(containerName);
  await containerClient.createIfNotExists();
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  const uploadResponse = await blockBlobClient.uploadFile(filePath);

  // 1. Basic check (raw HTTP status; 201 typically)
  const status = uploadResponse._response.status;
  if (status !== 201 && status !== 200) {
    throw new Error(`Upload failed with status ${status}`);
  }
  console.log("Upload succeeded with status:", status);
  console.log("ETag:", uploadResponse.etag);

  // 2. Verify blob properties
  const properties = await blockBlobClient.getProperties();
  console.log("Blob size:", properties.contentLength);
  if (!properties.contentLength || properties.contentLength <= 0) {
    throw new Error("Blob size is zero or undefined after upload.");
  }
}

uploadAndVerify().catch(console.error);
```
5. Azure CLI
If you upload with the CLI:

```bash
az storage blob upload \
  --account-name <account> \
  --container-name my-container \
  --name myfile.txt \
  --file ./myfile.txt \
  --auth-mode login
```

The command’s JSON output contains `etag` and `lastModified`. To verify:

```bash
az storage blob show \
  --account-name <account> \
  --container-name my-container \
  --name myfile.txt \
  --auth-mode login \
  --query "{name:name, size:properties.contentLength, etag:properties.etag}"
```

If that returns the blob with the expected size and an `etag`, it’s stored successfully.
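For a quick yes/no check, `az storage blob exists` prints a JSON object with an `exists` flag:

```bash
az storage blob exists \
  --account-name <account> \
  --container-name my-container \
  --name myfile.txt \
  --auth-mode login
```

Note that this confirms existence only, not content; `az storage blob show` above is the stronger check.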
6. Stronger guarantees in production
For critical workflows:
- Checksum validation
  - Compute the MD5 (or other hash) of the file before upload.
  - Provide it as `Content-MD5` / `ContentHash` when uploading.
  - After upload, read properties and compare hashes (see the Python sketch after this list).
- Read-after-write verification
  - After upload, immediately `GetProperties` or `Download` and verify:
    - Size matches your source file
    - Optional content hash matches
- Retries and idempotency
  - Use operation IDs or deterministic blob names; if an upload fails mid-way and is retried, you won’t create inconsistent duplicates.
  - Use the SDK’s built-in retry policies (most Azure SDKs have them by default).
- Monitoring
  - Enable Storage logging/metrics in the Azure Portal.
  - Track `x-ms-request-id` from responses in your app logs so you can correlate issues.
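As a minimal sketch of the checksum approach (reusing `file_path` and `blob_client` from the Python example in section 3), you can store an MD5 via `ContentSettings` at upload time and compare it with the hash the service reports back:

```python
import hashlib

from azure.storage.blob import ContentSettings

# Hash the local file before uploading
with open(file_path, "rb") as f:
    local_md5 = hashlib.md5(f.read()).digest()

# Upload and store the hash on the blob as Content-MD5
with open(file_path, "rb") as data:
    blob_client.upload_blob(
        data,
        overwrite=True,
        content_settings=ContentSettings(content_md5=local_md5),
    )

# Read properties back and compare hashes
props = blob_client.get_blob_properties()
stored_md5 = props.content_settings.content_md5
if stored_md5 is None or bytes(stored_md5) != local_md5:
    raise Exception("Stored Content-MD5 does not match the local file hash.")
print("Content-MD5 verified.")
```

The Python SDK also accepts `validate_content=True` on `upload_blob` / `download_blob`, which adds a transport-level MD5 check per request at some performance cost.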
If you tell me which language/SDK (C#, Java, Python, Node, Go, REST, etc.) and whether you’re uploading from a web app, function, or CLI, I can give you a very targeted snippet and recommended pattern.