CloudWithSingh

AWS VPC Networking: What Azure VNet Experience Does (and Doesn't) Prepare You For

Building production VPC networking in AWS when your muscle memory is Azure VNets — subnets, security groups, NAT Gateways, route tables, and the mental model shifts that matter.

Parveen Singh
December 8, 2025
11 min read
TLDR

AWS VPC networking forces better security practices than Azure VNets — explicit public vs private subnets, per-AZ subnet design. Key differences: subnets are per-AZ (need 6+ for HA vs 2 in Azure), Security Groups + NACLs give two firewall layers, and NAT Gateway costs ~$33/month. Create S3/DynamoDB gateway endpoints immediately (they're free) and leave NACLs at defaults.

Networking is the part of cloud that either makes you feel like an expert or makes you feel like you've never touched a computer before. It depends entirely on which platform you learned first.

I learned networking on Azure. VNets, subnets, NSGs, Azure Firewall, ExpressRoute, VNet peering — I've designed, deployed, and troubleshot all of it. So when I started building production-grade VPC architectures in AWS, I thought networking would be the easiest transition.

I was wrong about that.

The concepts are similar enough to be dangerous. You think you know what's happening because the terms sound familiar. But the defaults are different, the mental models are different, and the places where you'll get stuck are exactly the places where Azure muscle memory misleads you.

VNet vs VPC: The Defaults That Trip You Up

Both Azure VNets and AWS VPCs are logically isolated network environments in the cloud. Same purpose. But the defaults differ in ways that matter:

| Property | Azure VNet | AWS VPC |
|---|---|---|
| Default internet access | Outbound by default (changing in 2025) | Only in public subnets with an Internet Gateway |
| Subnets | All subnets can reach the internet by default | "Public" vs "private" — you choose |
| DNS resolution | Azure-provided DNS, automatic | Amazon-provided DNS, configurable |
| Default security | NSGs (allow all outbound by default) | Security Groups (allow all outbound by default) |
| CIDR flexibility | Can add address spaces later | Primary CIDR set at creation; secondary CIDRs can be added |

The big one: AWS forces you to think about public vs private subnets from day one. In Azure, every subnet can reach the internet by default (outbound). In AWS, a subnet is only "public" if it has a route to an Internet Gateway. Private subnets have no internet access unless you add a NAT Gateway or VPC endpoint.

This is actually better architecture — it forces you to be intentional about internet access. But coming from Azure where outbound just worked, it caught me off guard when my EC2 instance in a private subnet couldn't pull OS updates.

Subnets: Same Word, Different Implications

In Azure, subnets are relatively simple. You create a subnet within a VNet, assign it an address range, optionally associate an NSG, and deploy resources into it. All subnets behave the same unless you add restrictions.

In AWS, subnets have more architectural significance:

  • Public subnet: Has a route to the Internet Gateway. Resources with public IPs can be reached from the internet.
  • Private subnet: No route to the Internet Gateway. Resources are completely isolated unless you add a NAT Gateway for outbound access.
  • Subnets are per-AZ. Each subnet exists in exactly one Availability Zone. In Azure, subnets span all AZs in a region by default.

That last point is critical. In Azure, if you create a subnet 10.0.1.0/24, your VMs in that subnet could be in any AZ. In AWS, if you create a subnet 10.0.1.0/24 in us-east-1a, it only exists in that AZ.

This means you need more subnets in AWS. A typical production VPC looks like:

VPC: 10.0.0.0/16

Public subnets:
  10.0.1.0/24  → us-east-1a
  10.0.2.0/24  → us-east-1b
  10.0.3.0/24  → us-east-1c

Private subnets:
  10.0.11.0/24 → us-east-1a
  10.0.12.0/24 → us-east-1b
  10.0.13.0/24 → us-east-1c

Six subnets minimum for HA. In Azure, you'd accomplish the same with two subnets (public-facing and private).
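The per-AZ layout above can be sketched with Python's standard ipaddress module — a minimal planner, assuming the same /16 VPC and the 10.0.1-3.0/24 (public) / 10.0.11-13.0/24 (private) numbering convention used here:

```python
# Sketch: carving per-AZ subnets out of a VPC CIDR with the stdlib
# ipaddress module. The AZ names and the offset-based numbering scheme
# mirror the layout above; adjust both for your own region and sizing.
import ipaddress

def plan_subnets(vpc_cidr, azs):
    """Return {az: {"public": cidr, "private": cidr}} using the
    10.0.1-3.0/24 (public) and 10.0.11-13.0/24 (private) convention."""
    net = ipaddress.ip_network(vpc_cidr)
    subnets = list(net.subnets(new_prefix=24))  # 256 /24s in a /16
    plan = {}
    for i, az in enumerate(azs):
        plan[az] = {
            "public": str(subnets[i + 1]),    # 10.0.1.0/24, 10.0.2.0/24, ...
            "private": str(subnets[i + 11]),  # 10.0.11.0/24, 10.0.12.0/24, ...
        }
    return plan

azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
print(plan_subnets("10.0.0.0/16", azs))
```

Planning the CIDRs up front like this (rather than ad hoc in the console) is what keeps the scheme predictable when you later add a fourth AZ or a data tier.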

Gotcha

AWS subnets are per-Availability Zone — you need a minimum of 6 subnets (3 public + 3 private across 3 AZs) for high availability. In Azure, 2 subnets would suffice since they span all AZs.

Security Groups vs NSGs: The Stateful/Stateless Trap

This is where most Azure engineers get confused, because both platforms have a concept that sounds the same but works differently.

Azure NSGs (Network Security Groups):

  • Can be attached to subnets OR network interfaces
  • Rules have priorities (100, 200, 300...)
  • Stateful — if you allow inbound traffic, the response is automatically allowed
  • Default: allow VNet-to-VNet, allow outbound, deny inbound

AWS Security Groups:

  • Attached to network interfaces (not subnets — that's NACLs)
  • No priority numbers — all rules are evaluated together
  • Stateful — same as NSGs
  • Default: allow all outbound, deny all inbound
  • Can reference other security groups as sources

AWS NACLs (Network ACLs):

  • Attached to subnets
  • Rules have priorities
  • Stateless — you need explicit rules for both inbound AND outbound
  • Default NACL: allows all traffic (custom NACLs you create deny everything until you add rules)

Here's where it gets confusing: AWS has TWO layers of network security. Security Groups (stateful, at the instance level) AND NACLs (stateless, at the subnet level). Azure essentially just has NSGs, which can work at both levels.

My recommendation: Leave NACLs at their defaults and do all your filtering with Security Groups. This matches the Azure NSG experience most closely. Only use NACLs when you need to explicitly deny traffic from specific IP ranges at the subnet level — something Security Groups can't do (they only have "allow" rules, no "deny").
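The "explicit deny" behaviour is easier to see in code. Here's a toy model of how a NACL evaluates traffic — rules checked in ascending rule-number order, first match wins, implicit final deny. The CIDR match is simplified to a string-prefix check for illustration; real NACLs do proper CIDR matching:

```python
# Toy model of AWS NACL evaluation: rules are checked in ascending
# rule-number order, the first match wins, and an implicit final "*"
# rule denies anything unmatched. Simplified: CIDR matching is faked
# with a dotted-quad prefix check, ports are single values or None.
def nacl_evaluate(rules, src_ip, port):
    """rules: list of (rule_number, action, ip_prefix, port_or_None)."""
    for number, action, prefix, rule_port in sorted(rules):
        if src_ip.startswith(prefix) and (rule_port is None or rule_port == port):
            return action
    return "deny"  # the implicit '*' rule at the end

rules = [
    (100, "deny", "203.0.113.", None),   # explicit deny - SGs can't do this
    (200, "allow", "", 443),             # allow HTTPS from anywhere
]
print(nacl_evaluate(rules, "203.0.113.9", 443))   # deny (rule 100 matches first)
print(nacl_evaluate(rules, "198.51.100.7", 443))  # allow
print(nacl_evaluate(rules, "198.51.100.7", 22))   # falls through: deny
```

Security Groups have no rule numbers and no deny action, which is exactly why the blocked-IP-range use case needs a NACL.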

The Security Group Feature Azure Engineers Love

AWS Security Groups can reference other Security Groups as sources. Instead of saying "allow traffic from 10.0.1.0/24", you say "allow traffic from any instance that has security group sg-abc123 attached."

Inbound Rule:
  Type: MySQL/Aurora
  Port: 3306
  Source: sg-0123456789abcdef0  (web-tier security group)

This means your database security group says "allow MySQL from anything in the web-tier security group" — regardless of IP address. If you scale your web tier from 2 to 20 instances, the security group reference still works.
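In code, the same rule looks like this — a hedged boto3 sketch where the SG IDs are placeholders, but the `IpPermissions` shape and `authorize_security_group_ingress` call are the real EC2 API:

```python
# Sketch: allow MySQL (3306) into the db tier from anything carrying
# the web-tier security group. SG IDs below are placeholders; the
# IpPermissions payload shape is the real boto3/EC2 API structure.
def mysql_from_sg_rule(source_sg_id):
    """Build the IpPermissions payload for an SG-to-SG ingress rule."""
    return [{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "UserIdGroupPairs": [{"GroupId": source_sg_id}],
    }]

# With credentials configured, you would apply it like this:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.authorize_security_group_ingress(
#     GroupId="sg-db-tier-placeholder",
#     IpPermissions=mysql_from_sg_rule("sg-0123456789abcdef0"),
# )
print(mysql_from_sg_rule("sg-0123456789abcdef0"))
```

Note there's no CIDR anywhere in the rule — scaling the web tier never touches the database security group.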

Azure NSGs can reference Application Security Groups (ASGs) for something similar, but the AWS approach feels more natural once you get used to it.

Route Tables: The Hidden Control Layer

Azure VNets have system routes that handle most routing automatically. You add User Defined Routes (UDRs) when you need custom routing — like forcing traffic through a firewall.

AWS route tables require more explicit management:

  • Every subnet must be associated with a route table (there's a "main" route table as default)
  • Public subnets need an explicit route: 0.0.0.0/0 → Internet Gateway
  • Private subnets need an explicit route: 0.0.0.0/0 → NAT Gateway (for internet access)
  • VPC peering routes must be added manually to each route table

In Azure, most of this routing is automatic. In AWS, you're in control — which means you need to understand what routes exist and why.

The mistake I made: I created a VPC with the wizard, which set up route tables correctly. Then I added a new subnet manually and forgot to associate it with the right route table. It defaulted to the main route table, which had no NAT Gateway route. My EC2 instance in that subnet couldn't reach the internet, and it took me 30 minutes of confusion to realise it was a route table association issue.
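The fallback logic that bit me can be sketched in a few lines. The data shapes below loosely follow `ec2.describe_route_tables()` output, with made-up IDs for illustration:

```python
# Sketch of the gotcha above: a subnet with no explicit route table
# association silently falls back to the VPC's main route table.
# Shapes loosely follow ec2.describe_route_tables(); IDs are made up.
def effective_route_table(route_tables, subnet_id):
    """Return the route table a subnet actually uses."""
    main = None
    for rt in route_tables:
        for assoc in rt["Associations"]:
            if assoc.get("Main"):
                main = rt
            elif assoc.get("SubnetId") == subnet_id:
                return rt  # explicit association wins
    return main  # no explicit association: the main route table applies

route_tables = [
    {"RouteTableId": "rtb-main", "Associations": [{"Main": True}],
     "Routes": [{"DestinationCidrBlock": "10.0.0.0/16"}]},  # no NAT route!
    {"RouteTableId": "rtb-private", "Associations": [{"SubnetId": "subnet-aaa"}],
     "Routes": [{"DestinationCidrBlock": "0.0.0.0/0", "NatGatewayId": "nat-123"}]},
]
print(effective_route_table(route_tables, "subnet-aaa")["RouteTableId"])        # rtb-private
print(effective_route_table(route_tables, "subnet-forgotten")["RouteTableId"]) # rtb-main
```

The forgotten subnet lands on `rtb-main`, which routes only within the VPC — exactly the no-internet symptom I hit.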

NAT Gateway: The Service That Costs More Than Your App

I've written about the NAT Gateway cost trap before, but it deserves emphasis in a networking post.

Azure NAT Gateway: Flat rate ~$32/month + per-GB processed. Integrated with VNet subnets through association.

AWS NAT Gateway: ~$33/month (hourly rate) + $0.045/GB processed. Must be placed in a public subnet with an Elastic IP.

The costs are similar, but in Azure you're less likely to accidentally create a NAT Gateway — it's a deliberate resource you deploy. AWS VPC wizards love creating NAT Gateways by default.

Cost-saving alternatives:

  • VPC endpoints for AWS services (S3, DynamoDB) — free for gateway endpoints, and they bypass NAT entirely
  • NAT instances (self-managed EC2 instances doing NAT) — cheaper for low-throughput scenarios but you manage the instance
  • For learning: Just use public subnets and skip NAT entirely until you need production architecture
Warning

NAT Gateway costs ~$33/month just to exist, plus $0.045/GB of data processed. The VPC wizard creates one by default. For learning environments, skip NAT Gateway entirely and use public subnets. See my detailed breakdown in the free tier bill post.
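The arithmetic behind that warning, assuming us-east-1 rates of roughly $0.045/hour and $0.045/GB (check current AWS pricing — rates vary by region and change over time):

```python
# Quick cost arithmetic for a single NAT Gateway. Rates are the
# approximate us-east-1 figures quoted above; verify against current
# AWS pricing for your region before budgeting.
def nat_gateway_monthly_cost(gb_processed, hours=730,
                             hourly_rate=0.045, per_gb=0.045):
    """Estimated monthly cost in USD for one NAT Gateway."""
    return round(hours * hourly_rate + gb_processed * per_gb, 2)

print(nat_gateway_monthly_cost(0))    # idle: ~$32.85 just to exist
print(nat_gateway_monthly_cost(100))  # 100 GB processed: ~$37.35
```

And remember that's per NAT Gateway — a highly available setup with one per AZ triples the baseline.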

VPC Peering and Transit Gateway

VPC peering maps directly to Azure VNet peering. Same concept: connect two VPCs so they can communicate using private IP addresses. Same limitations: non-transitive (if VPC-A peers with VPC-B, and VPC-B peers with VPC-C, VPC-A cannot reach VPC-C through B).

For hub-and-spoke topologies, AWS has Transit Gateway, the rough equivalent of Azure Virtual WAN and its virtual hubs. It acts as a central hub that connects multiple VPCs (and on-premises networks).

The concepts are nearly identical. If you've designed hub-and-spoke in Azure, you'll feel right at home with Transit Gateway.

VPC Endpoints: The Feature I Wish Azure Had Earlier

AWS VPC endpoints let resources in private subnets access AWS services (S3, DynamoDB, etc.) without going through the internet or NAT Gateway.

Two types:

  • Gateway endpoints (S3 and DynamoDB only) — free, adds a route to your route table
  • Interface endpoints (everything else) — creates an ENI in your subnet, costs ~$7.20/month per AZ

Azure's equivalent is Private Endpoints with Private Link. Functionally similar, but AWS gateway endpoints for S3 being completely free is a genuine advantage. In Azure, Private Endpoints always cost money.

My setup now: Every VPC I create gets a gateway endpoint for S3 and DynamoDB immediately. Free, simple, and removes NAT Gateway dependency for AWS service access.

Pro Tip

Create S3 and DynamoDB gateway endpoints immediately in every VPC — they're completely free and route traffic directly to AWS services without going through NAT Gateway. This saves money and reduces latency.
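Setting this up is a one-liner per service. A hedged boto3 sketch — the VPC and route table IDs are placeholders, but `create_vpc_endpoint` and the `com.amazonaws.<region>.<service>` naming are the real EC2 API:

```python
# Sketch: build the kwargs for free S3/DynamoDB gateway endpoints.
# VPC and route table IDs are placeholders; create_vpc_endpoint and
# the service-name format are the real boto3/EC2 API.
def gateway_endpoint_params(vpc_id, route_table_ids, service, region="us-east-1"):
    """Build kwargs for ec2.create_vpc_endpoint() for a gateway endpoint."""
    return {
        "VpcId": vpc_id,
        "VpcEndpointType": "Gateway",
        "ServiceName": f"com.amazonaws.{region}.{service}",
        "RouteTableIds": route_table_ids,  # routes are added to these tables
    }

# With credentials configured:
# import boto3
# ec2 = boto3.client("ec2")
# for svc in ("s3", "dynamodb"):
#     ec2.create_vpc_endpoint(
#         **gateway_endpoint_params("vpc-placeholder", ["rtb-placeholder"], svc))
print(gateway_endpoint_params("vpc-123", ["rtb-abc"], "s3"))
```

Passing the private route table IDs is the important part — the endpoint works by injecting routes into those tables, which is why private-subnet traffic to S3 stops touching the NAT Gateway.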

The VPC Blueprint I Use Every Time

After iterating on different approaches, here's the VPC architecture I've settled on for labs and small projects:

VPC: 10.0.0.0/16

├── Public Subnet 1a (10.0.1.0/24)
│   ├── Route: 0.0.0.0/0 → Internet Gateway
│   └── Used for: ALB, bastion hosts
│
├── Public Subnet 1b (10.0.2.0/24)
│   ├── Route: 0.0.0.0/0 → Internet Gateway
│   └── Used for: ALB (multi-AZ)
│
├── Private Subnet 1a (10.0.11.0/24)
│   ├── Route: 0.0.0.0/0 → NAT Gateway (only when needed)
│   └── Used for: EC2, ECS, Lambda
│
├── Private Subnet 1b (10.0.12.0/24)
│   ├── Route: 0.0.0.0/0 → NAT Gateway (only when needed)
│   └── Used for: EC2, ECS, Lambda
│
├── Data Subnet 1a (10.0.21.0/24)
│   └── Used for: RDS, ElastiCache
│
├── Data Subnet 1b (10.0.22.0/24)
│   └── Used for: RDS, ElastiCache (multi-AZ)
│
├── S3 Gateway Endpoint (free)
├── DynamoDB Gateway Endpoint (free)
└── Flow Logs → CloudWatch Logs

This gives you multi-AZ redundancy, clear separation between tiers, and a predictable CIDR scheme. For production, you'd add a third AZ and potentially more subnet tiers.

If you want to practice building this from scratch, CloudLearn's VPC networking lab walks through the full setup including security groups, route tables, and NAT Gateway configuration.

What I'd Tell Azure Engineers Starting AWS Networking

  1. Draw the diagram first. AWS networking requires more upfront planning than Azure because of the per-AZ subnet requirement. Plan your CIDR blocks and subnet layout before you create anything.

  2. Start without NAT Gateway. Use public subnets for learning. Add private subnets and NAT Gateway when you need production architecture (and budget for the cost).

  3. Security Groups are your primary firewall. Ignore NACLs for now. Use Security Group cross-references to build dynamic rules.

  4. Enable VPC Flow Logs. The equivalent of NSG flow logs in Azure. Invaluable for troubleshooting connectivity issues. Send them to CloudWatch Logs with a 7-day retention.

  5. Tag your subnets clearly. Public-1a, Private-1b, Data-1a — when you're debugging a networking issue at midnight, clear subnet names save your sanity.
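For point 4, a hedged boto3 sketch of turning Flow Logs on — `create_flow_logs` is the real EC2 API call, while the VPC ID, log group name, and IAM role ARN are placeholders you must supply (the role needs permission to write to CloudWatch Logs):

```python
# Sketch: enable VPC Flow Logs to CloudWatch Logs. The create_flow_logs
# call shape is real boto3/EC2 API; the VPC ID, log group, and role ARN
# below are placeholders. Retention is set on the log group itself.
def flow_log_params(vpc_id, log_group, role_arn):
    """Build kwargs for ec2.create_flow_logs() targeting CloudWatch Logs."""
    return {
        "ResourceType": "VPC",
        "ResourceIds": [vpc_id],
        "TrafficType": "ALL",  # ACCEPT, REJECT, or ALL
        "LogDestinationType": "cloud-watch-logs",
        "LogGroupName": log_group,
        "DeliverLogsPermissionArn": role_arn,
    }

# With credentials configured (7-day retention goes on the log group:
# logs.put_retention_policy(logGroupName=..., retentionInDays=7)):
# import boto3
# boto3.client("ec2").create_flow_logs(
#     **flow_log_params("vpc-placeholder", "/vpc/flow-logs",
#                       "arn:aws:iam::123456789012:role/flow-logs-role"))
print(flow_log_params("vpc-123", "/vpc/flow-logs", "arn:aws:iam::111:role/r"))
```

When debugging connectivity, filtering the log group for REJECT records usually points straight at the Security Group or NACL that's dropping the traffic.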

AWS networking is genuinely well-designed. It forces better security practices (explicit public vs private) and gives more control (route tables, VPC endpoints). The trade-off is more complexity and more resources to manage. But coming from Azure, your mental model transfers — you just need to adjust for the differences in defaults and the per-AZ subnet model.

Resource

Azure to AWS: The Full Series

Start from the beginning — getting oriented in AWS when your brain thinks in Azure.


Final post in my "Azure to AWS" series (for now). If you're making the same journey, check out the full series: Getting Started → IAM → Free Tier Cost Traps → S3 vs Blob Storage → CloudWatch vs Azure Monitor → VPC Networking (this post).
