CloudWithSingh
AWS
Azure
Best Practices

AWS S3 for Azure Blob Storage Users: What's Different and What's Better

A practical comparison of S3 and Azure Blob Storage from someone who uses both — buckets vs containers, storage tiers, security models, and the things each platform gets right.

Parveen Singh
November 18, 2025
9 min read
TL;DR

S3 is simpler than Azure Blob Storage — no storage account intermediary, flatter structure, easier static hosting. Key differences: bucket names are globally unique, S3 Intelligent-Tiering auto-manages lifecycle, and the two-sided permission model (IAM + bucket policies) requires checking both sides. S3 is easier to start with; Azure Blob Storage integrates better with enterprise identity.

Storage is one of those services where you think "it's just files in the cloud — how different can it be?" And then you spend a week accidentally recreating your Azure Blob Storage mental model in S3 and wondering why nothing behaves the way you expect.

I've been using Azure Blob Storage for years. I understand storage accounts, containers, access tiers, lifecycle management, and the SAS token model inside and out. When I started working with S3 more seriously, I expected a straightforward transition.

Some things mapped directly. Others didn't map at all.

Here's what I found.

The Architecture Is Fundamentally Different

In Azure, storage lives inside a storage account. That account is your management boundary — it controls redundancy, networking, encryption, and billing. Inside the storage account, you create containers, and inside containers you put blobs (files).

Azure: Subscription → Resource Group → Storage Account → Container → Blob

In AWS, S3 has a flatter structure:

AWS: Account → Bucket → Object (with optional prefixes/folders)

There's no "storage account" intermediary. A bucket IS the management unit. Each bucket has its own policies, versioning settings, encryption config, and lifecycle rules.

Why this matters: In Azure, if you want different security settings for different sets of files, you might share a storage account but apply different container-level access policies. In AWS, you'd typically just create separate buckets — they're lightweight and there's no cost to having them exist.

The biggest adjustment: S3 bucket names are globally unique across all AWS accounts. Your bucket name must be unique across the entire S3 namespace worldwide. Azure storage account names are also globally unique, but container names within them aren't. This means in S3, you'll spend more time thinking about naming conventions.

Gotcha

S3 bucket names are globally unique across ALL AWS accounts worldwide. You can't use a name that anyone else has taken. Adopt a naming convention like companyname-environment-purpose to avoid conflicts.
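Because name collisions surface only at creation time, it's worth validating candidate names before you try to create anything. A minimal sketch of the published S3 naming rules (3-63 characters, lowercase letters, digits, hyphens, and dots, starting and ending alphanumeric) plus the convention above; the `suggest_name` helper and its inputs are hypothetical:

```python
import re

# Rough check against S3's published bucket-name rules: 3-63 chars,
# lowercase letters/digits/hyphens/dots, must start and end with a
# letter or digit. AWS applies a few extra rules (no "xn--" prefix,
# no names formatted like IP addresses, etc.).
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    if not BUCKET_NAME_RE.match(name):
        return False
    # S3 rejects names that look like IP addresses.
    if re.match(r"^\d+\.\d+\.\d+\.\d+$", name):
        return False
    return True

def suggest_name(company: str, environment: str, purpose: str) -> str:
    # The companyname-environment-purpose convention, lowercased to
    # satisfy the S3 rules.
    return f"{company}-{environment}-{purpose}".lower()
```

For example, `suggest_name("Acme", "Prod", "Invoices")` yields `acme-prod-invoices`, which passes the check, while `My_Bucket` and `192.168.1.1` do not. Note that passing the check still doesn't guarantee the name is free: someone else may already own it.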

Storage Classes: Simpler but Different

Both platforms have tiered storage, but they approach it differently.

| Use case | Azure tier | AWS storage class | Notes |
| --- | --- | --- | --- |
| Frequently accessed | Hot | S3 Standard | Default on both |
| Infrequently accessed | Cool | S3 Standard-IA | AWS has higher per-request cost, lower storage cost |
| Rarely accessed | Cold | S3 Glacier Instant Retrieval | New Azure Cold tier maps well here |
| Archive / compliance | Archive | S3 Glacier Deep Archive | Both require hours for retrieval |
| Intelligent auto-tiering | — | S3 Intelligent-Tiering | Azure doesn't have a direct equivalent |

The standout AWS feature here is S3 Intelligent-Tiering. You pay a small monitoring fee per object, and S3 automatically moves objects between tiers based on access patterns. Azure has lifecycle management rules that can do something similar, but it's rules-based rather than automatic — you have to define the policies yourself.


Pro Tip

Think of S3 Intelligent-Tiering as "lifecycle management that manages itself." For a small per-object monitoring fee, S3 automatically moves objects between access tiers based on usage patterns — no rules to write or maintain.
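The base Frequent/Infrequent movement is fully automatic once an object uses the Intelligent-Tiering storage class; an optional per-bucket configuration adds the archive tiers. A sketch of that configuration document, roughly what the `put-bucket-intelligent-tiering-configuration` API expects; the configuration id and day counts are placeholders:

```python
import json

# Sketch of an S3 Intelligent-Tiering configuration that opts objects
# into the archive tiers after periods of no access. The Id and the
# Days values below are illustrative placeholders, not defaults.
config = {
    "Id": "archive-after-90-days",
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}
print(json.dumps(config, indent=2))
```

You'd apply something like this with `aws s3api put-bucket-intelligent-tiering-configuration` against the target bucket. Without it, Intelligent-Tiering still moves objects between the Frequent and Infrequent access tiers on its own.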

The Permission Model: This Is Where It Gets Interesting

Azure Blob Storage permissions work through a combination of:

  • RBAC roles (Storage Blob Data Reader, Contributor, Owner)
  • SAS tokens (shared access signatures for time-limited, scoped access)
  • Access keys (full account-level access — equivalent to root)

AWS S3 permissions work through:

  • IAM policies (identity-based, as covered in my previous post)
  • Bucket policies (resource-based JSON policies attached to the bucket)
  • ACLs (legacy, AWS now recommends disabling these)
  • Pre-signed URLs (similar to SAS tokens)

The key difference: S3 has bucket policies that sit on the resource side. A bucket can define who can access it — including users from other AWS accounts. This two-sided permission model means you need to check both the IAM policy AND the bucket policy when troubleshooting access.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:root"},
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}

In Azure, you'd achieve cross-subscription access through Entra ID RBAC assignments. In AWS, the bucket policy approach is more self-contained but requires understanding JSON policy syntax.

My recommendation: Disable ACLs on all new S3 buckets (AWS now does this by default). Use bucket policies for resource-level access and IAM policies for identity-level access. Use pre-signed URLs instead of making buckets public. This mirrors the "use RBAC + SAS tokens, never use access keys" best practice from Azure.
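One widely used bucket-policy pattern that fits this recommendation is an explicit deny for any request not made over HTTPS. A sketch, with the bucket name as a placeholder:

```python
import json

# Sketch of a bucket policy that denies all requests made without TLS,
# using the aws:SecureTransport condition key. "my-bucket" is a
# placeholder name.
bucket = "my-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            # Deny must cover both the bucket itself and its objects.
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Because it's an explicit Deny on the resource side, it wins over any Allow in an IAM policy, which is exactly the two-sided evaluation described above.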

Static Website Hosting — S3 Is Genuinely Easier

Both platforms can host static websites from object storage. But the experience is night and day.

Azure Blob Storage static website hosting:

  1. Enable static website hosting on the storage account
  2. Upload files to the $web container
  3. Get an awkward URL like https://mysite.z13.web.core.windows.net
  4. Set up Azure CDN or Front Door for custom domain and HTTPS
  5. Configure CDN rules for routing

S3 static website hosting:

  1. Create a bucket
  2. Enable static website hosting
  3. Upload files
  4. Optionally put CloudFront in front for HTTPS and caching
  5. Done

S3's approach is simpler because buckets are already web-addressable. The URL format (http://bucket-name.s3-website-region.amazonaws.com) is cleaner, and CloudFront integration is more straightforward than Azure CDN configuration.

This isn't a niche setup — S3 + CloudFront for hosting is one of the most battle-tested patterns in cloud computing.
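Step 2 of the S3 flow boils down to a small configuration document. A sketch of roughly what `aws s3api put-bucket-website` expects; the document names are placeholders:

```python
import json

# Sketch of an S3 static-website configuration: which object serves as
# the index page and which serves error responses. Both file names are
# placeholders.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}
print(json.dumps(website_config, indent=2))
```

Keep in mind the bare website endpoint is HTTP-only and requires public read access; putting CloudFront in front (step 4) gives you HTTPS and lets the bucket stay private.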

Event-Driven Patterns: S3 Notifications vs Azure Event Grid

Both platforms support event-driven architectures triggered by storage events, but the execution differs.

Azure: Blob storage events → Event Grid → Function App / Logic App / Event Hub

AWS: S3 events → S3 Event Notifications → Lambda / SQS / SNS / EventBridge

AWS S3 event notifications feel slightly more direct. You configure them on the bucket itself, and they fire Lambda functions, send messages to SQS queues, or publish to SNS topics. No separate Event Grid resource to manage.

Azure's Event Grid is more powerful as a general-purpose event routing service, but for simple "a file was uploaded, do something" workflows, S3 notifications are quicker to set up.
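A sketch of that "a file was uploaded, do something" configuration, roughly what the bucket notification API expects; the Lambda ARN, prefix, and suffix are placeholders:

```python
import json

# Sketch of an S3 event notification configuration that invokes a
# Lambda function when a .csv object lands under uploads/. The ARN,
# Id, prefix, and suffix are all illustrative placeholders.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "on-upload",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:process-upload",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {
                "Key": {
                    "FilterRules": [
                        {"Name": "prefix", "Value": "uploads/"},
                        {"Name": "suffix", "Value": ".csv"},
                    ]
                }
            },
        }
    ],
}
print(json.dumps(notification_config, indent=2))
```

This lives on the bucket itself — the directness mentioned above — whereas the Azure equivalent routes through a separate Event Grid subscription resource.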

Versioning and Lifecycle

Both platforms support object versioning and lifecycle policies. The implementations are remarkably similar:

  • Versioning: Both keep previous versions of objects. Both charge storage for all versions. Both let you restore previous versions.
  • Lifecycle policies: Both let you transition objects to cheaper tiers and delete objects after a set time.

The main difference: S3 lifecycle rules are slightly more flexible. You can target rules by prefix, tags, or object size. Azure lifecycle management offers similar filtering at the blob level, but the rule syntax is different.

One AWS-specific gotcha: Delete markers in versioned buckets. When you "delete" an object in a versioned S3 bucket, AWS creates a delete marker — the object isn't actually gone. Old versions still exist and still cost money. You need explicit lifecycle rules to clean these up.
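A sketch of a lifecycle rule that does that cleanup, expiring noncurrent versions and removing orphaned delete markers; the 30-day window is a placeholder:

```python
import json

# Sketch of an S3 lifecycle configuration for a versioned bucket:
# delete object versions 30 days after they become noncurrent, and
# remove delete markers once no versions remain behind them. The day
# count is a placeholder.
lifecycle = {
    "Rules": [
        {
            "ID": "clean-old-versions",
            "Status": "Enabled",
            "Filter": {},  # empty filter = applies to the whole bucket
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            "Expiration": {"ExpiredObjectDeleteMarker": True},
        }
    ]
}
print(json.dumps(lifecycle, indent=2))
```

Without a rule like this, every "deleted" object in a versioned bucket keeps billing you for all of its old versions indefinitely.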

The Things S3 Gets Right

  • Simplicity. Creating a bucket and uploading files is genuinely easier than creating a storage account, then a container, then uploading blobs.
  • S3 Select. Query data inside objects using SQL without downloading the whole object. Azure has something similar with query acceleration, but S3 Select is more mature.
  • Transfer Acceleration. Uses CloudFront edge locations to speed up uploads. Simple toggle, noticeably faster for large files from distant regions.
  • Object Lock. For compliance and WORM (Write Once Read Many) requirements, S3 Object Lock is more straightforward than Azure immutable storage.

The Things Azure Blob Storage Gets Right

  • RBAC integration. Using Entra ID roles for storage access is more elegant than writing JSON bucket policies.
  • Azure Data Lake Storage Gen2. If you're building a data lake, ADLS Gen2 (which sits on top of Blob Storage) has hierarchical namespaces that S3 doesn't natively support.
  • Soft delete with recovery. Azure's soft delete for blobs and containers is more user-friendly than S3 versioning + delete markers.
  • Storage account firewall. Network-level access restrictions are cleaner in Azure with service endpoints and private endpoints integrated into the storage account.

Practical Tips for Azure Engineers Using S3

  1. Stop thinking in "storage accounts." Each S3 bucket is independent. Create buckets for different purposes rather than trying to share one.

  2. Enable versioning and encryption from day one. Both are free to turn on (encryption is now on by default in AWS; versioning only costs you the storage the extra versions consume). Don't wait until you need them.

  3. Use aws s3 sync like you'd use AzCopy. The CLI experience is similar. aws s3 sync local-folder/ s3://bucket-name/ is your new best friend.

  4. Block public access at the account level. AWS has an account-level setting that prevents any bucket from being made public. Enable it unless you have a specific reason not to. It's roughly the Azure equivalent of disabling anonymous blob access on every storage account at once.

  5. Get hands-on. The best way to learn S3 patterns is by doing. CloudLearn's S3 lab exercises walk through versioning, lifecycle policies, cross-region replication, and static hosting in a safe environment.

  6. Watch your request costs. S3 storage is cheap, but PUT/GET requests add up with high-volume applications. In Azure, request costs exist too but are less visible. S3 makes them very clear on your bill.

Warning

S3 storage is cheap, but request costs add up fast with high-volume applications. A million PUT requests costs ~$5, and a million GET requests costs ~$0.40. Monitor request counts in CloudWatch, not just storage size.
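A quick back-of-envelope check using the ballpark prices above (real prices vary by region and storage class); the traffic figures are hypothetical:

```python
# Rough request-cost estimate from the ballpark prices in the warning
# above: ~$5 per million PUTs, ~$0.40 per million GETs. Actual pricing
# varies by region and storage class.
PUT_PER_MILLION = 5.00
GET_PER_MILLION = 0.40

def monthly_request_cost(puts: int, gets: int) -> float:
    return (puts / 1_000_000 * PUT_PER_MILLION
            + gets / 1_000_000 * GET_PER_MILLION)

# A hypothetical app doing 2M uploads and 50M reads a month:
print(round(monthly_request_cost(2_000_000, 50_000_000), 2))
```

Thirty-odd dollars a month in requests alone can easily exceed the storage bill for a small dataset, which is why CloudWatch request metrics deserve as much attention as the bucket size metric.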

S3 is one of those services that looks simple on the surface but has genuine depth once you start using it in production patterns. Coming from Azure Blob Storage, the transition is smoother than IAM or networking — but respect the differences in the permission model and cost structure.


Next: CloudWatch vs Azure Monitor

Setting up AWS monitoring when your muscle memory is Azure Monitor.

Read Next

Part 4 of my "Azure to AWS" series. Previously: The $73 Free Tier bill. Next: setting up monitoring with CloudWatch and comparing it to what I'm used to with Azure Monitor.
