CloudWithSingh

AWS · Monitoring · Azure

CloudWatch vs Azure Monitor: Setting Up AWS Monitoring When You're Used to Azure

A practical guide to AWS CloudWatch for engineers who already know Azure Monitor — what maps cleanly, what doesn't, and the monitoring setup I wish I'd built from day one.

Parveen Singh
December 1, 2025
10 min read
TLDR

CloudWatch is capable but fragmented compared to Azure Monitor's unified experience. Key gotchas: EC2 doesn't report memory metrics by default (you need the CloudWatch Agent), basic monitoring is 5-minute intervals, and log-based alerting requires creating metric filters first. CloudWatch Logs ingestion is cheaper than Azure Log Analytics, but KQL is more powerful than CloudWatch Logs Insights.

Monitoring is one of those things where you don't appreciate what you had until you're rebuilding it in another platform.

In Azure, I had my monitoring dialled in. Azure Monitor collected metrics automatically. Application Insights handled application-level telemetry. Log Analytics Workspace was my central query engine. Alerts fired into Action Groups that sent Teams messages and created ServiceNow tickets. It took months to build, but it worked.

Then I started building out proper monitoring in AWS and realised: I need to rebuild all of this in CloudWatch. And CloudWatch is a very different beast.

The Architecture: One Platform vs Many Services

The first thing you'll notice: Azure Monitor is one unified platform with sub-components (Metrics, Logs, Alerts, Application Insights, Workbooks). Everything queries from the same backend, uses the same KQL language, and lives under one blade in the portal.

AWS splits monitoring across multiple services:

| What You're Monitoring | Azure | AWS |
| --- | --- | --- |
| Infrastructure metrics | Azure Monitor Metrics | CloudWatch Metrics |
| Logs (centralised) | Log Analytics Workspace | CloudWatch Logs |
| Application traces | Application Insights | AWS X-Ray |
| Dashboards | Azure Dashboards / Workbooks | CloudWatch Dashboards |
| Alerts & notifications | Azure Monitor Alerts + Action Groups | CloudWatch Alarms + SNS |
| Distributed tracing | Application Insights (integrated) | X-Ray (separate service) |
| Uptime monitoring | App Insights Availability Tests | CloudWatch Synthetics |
| Cost monitoring | Azure Cost Management | AWS Budgets (separate) |

This fragmentation is both CloudWatch's weakness and, in some ways, its strength. Each service does its job well, but the integration between them isn't as seamless as Azure Monitor's unified experience.

Metrics: Similar Concepts, Different Defaults

Both platforms collect infrastructure metrics automatically. CPU, memory, disk, network — the basics are covered.

But here's where AWS caught me off guard:

Azure provides basic VM metrics at 1-minute intervals by default. Guest OS metrics (like memory usage) require the Azure Monitor Agent, but platform metrics are free and automatic.

AWS provides basic EC2 metrics at 5-minute intervals by default. Want 1-minute intervals? That's "detailed monitoring" and it costs extra. And here's the real kicker — EC2 doesn't report memory metrics by default at all. You need to install the CloudWatch Agent on your instances to get memory and disk usage.

Gotcha

EC2 instances don't report memory or disk metrics by default — only CPU, network, and status checks. You must install the CloudWatch Agent to get memory utilisation. Coming from Azure where memory metrics just exist, this is a common surprise.

```shell
# Install CloudWatch Agent on Amazon Linux 2
sudo yum install amazon-cloudwatch-agent -y

# Create config (or use the wizard)
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

# Start the agent
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -s -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json
```
Pro Tip

Use AWS Systems Manager (SSM) Parameter Store to centrally manage CloudWatch Agent configuration across all your EC2 instances. Change the config once, and all instances pick it up — no SSH required.
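With the SSM approach, the agent pulls its configuration from a Parameter Store parameter instead of a local file. A sketch, assuming the config was stored under the wizard's default parameter name `AmazonCloudWatch-linux` (the name is an assumption — use whatever you chose), and that the instance role has `ssm:GetParameter` permission:

```shell
# One-time: store the agent config centrally in SSM Parameter Store
aws ssm put-parameter \
  --name AmazonCloudWatch-linux \
  --type String \
  --value file://config.json

# On each instance: fetch config from SSM instead of a local file
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -s -c ssm:AmazonCloudWatch-linux
```

Update the parameter once, re-run `fetch-config` (or automate it with SSM Run Command), and every instance picks up the new settings.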

Coming from Azure where memory metrics just... exist, this felt like a step backwards. But the CloudWatch Agent is powerful once configured — it can collect custom metrics, system-level logs, and application logs all in one agent.

The Azure equivalent: Azure Monitor Agent (AMA) + Data Collection Rules. Similar concept, more configuration options in AWS.

CloudWatch Logs: The Good, the Expensive, and the Confusing

Azure Log Analytics uses KQL (Kusto Query Language) for querying logs. It's powerful, expressive, and once you learn it, you can query anything.

CloudWatch Logs uses CloudWatch Logs Insights with its own query syntax. It's... fine. Not as powerful as KQL, but functional.

KQL (Azure):

```
AzureDiagnostics
| where ResourceType == "SERVERS"
| where TimeGenerated > ago(24h)
| summarize count() by bin(TimeGenerated, 1h), Category
| order by TimeGenerated desc
```

CloudWatch Logs Insights:

```
fields @timestamp, @message
| filter @logStream like /server/
| stats count(*) as logCount by bin(1h) as timeWindow
| sort timeWindow desc
```

Both get the job done. KQL is more mature and handles complex joins and aggregations better. CloudWatch Logs Insights is simpler for basic queries but hits limitations faster.
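One practical difference for scripting: Logs Insights queries are asynchronous — you start a query, then poll for results. A minimal CLI sketch, where the log group name is a placeholder:

```shell
# Kick off a Logs Insights query over the last 24 hours
QUERY_ID=$(aws logs start-query \
  --log-group-name /app/server-logs \
  --start-time "$(date -d '24 hours ago' +%s)" \
  --end-time "$(date +%s)" \
  --query-string 'fields @timestamp, @message | stats count(*) by bin(1h)' \
  --query 'queryId' --output text)

# Poll until the status field reports Complete
aws logs get-query-results --query-id "$QUERY_ID"
```

In Azure you'd run the same thing synchronously with `az monitor log-analytics query`; the async model here is part of why CloudWatch bills per GB scanned.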

The cost difference that matters:

CloudWatch Logs charges for:

  • Ingestion: $0.50 per GB
  • Storage: $0.03 per GB per month
  • Queries: $0.005 per GB scanned

Azure Log Analytics charges:

  • Ingestion: ~$2.76 per GB (Pay-As-You-Go) or commitment tiers
  • Retention: 31 days free, then $0.10 per GB per month
  • Queries: Free (included)

So CloudWatch Logs ingestion is significantly cheaper — ingesting 100 GB a month costs roughly $50 in CloudWatch versus ~$276 in Log Analytics at Pay-As-You-Go rates — but Azure includes free queries and more free retention. Depending on your query patterns, either could be cheaper overall.

My experience: For a learning environment, CloudWatch Logs costs are very manageable. But I can see how production environments with heavy logging could see costs escalate with both platforms.

Alarms and Alerts: Two Philosophies

Setting up alerts showed me the philosophical difference between the platforms.

Azure Monitor alerts are flexible:

  • Metric alerts (threshold-based)
  • Log alerts (KQL query results)
  • Activity log alerts (control plane events)
  • Smart detection (AI-based anomaly detection)
  • All routing through Action Groups (email, SMS, webhook, ITSM, Logic App, Function, etc.)

CloudWatch Alarms are simpler:

  • Metric alarms (threshold or anomaly detection)
  • Composite alarms (combine multiple alarms)
  • All routing through SNS Topics → subscribers (email, SMS, Lambda, SQS, HTTP)

The key difference: Azure treats alerts as a unified system. CloudWatch Alarms only work with metrics — if you want to alert on log content, you first need to create a metric filter on a log group that generates a custom metric, and then alarm on that metric.

CloudWatch Logs → Metric Filter → Custom Metric → CloudWatch Alarm → SNS Topic → Email/Lambda

In Azure, you'd just write a log query alert. Done. The CloudWatch approach requires more plumbing.
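Wired up with the CLI, that plumbing might look like this — log group name, filter pattern, thresholds, and topic ARN are all illustrative:

```shell
# 1. Metric filter: turn ERROR lines in a log group into a custom metric
aws logs put-metric-filter \
  --log-group-name /app/server-logs \
  --filter-name error-count \
  --filter-pattern 'ERROR' \
  --metric-transformations \
      metricName=ErrorCount,metricNamespace=App/Logs,metricValue=1,defaultValue=0

# 2. Alarm on that custom metric, routing to an SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name app-error-spike \
  --namespace App/Logs --metric-name ErrorCount \
  --statistic Sum --period 300 \
  --threshold 10 --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --treat-missing-data notBreaching \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:critical-alerts
```

Note the `--treat-missing-data notBreaching` flag: without it, quiet periods with no matching log lines can leave the alarm in INSUFFICIENT_DATA instead of OK.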

The Monitoring Setup I Recommend for AWS

After running AWS environments alongside Azure, here's the monitoring stack I settled on:

1. CloudWatch Agent on All EC2 Instances

Collects memory, disk, and custom metrics. Use the SSM Parameter Store to centrally manage the agent configuration.

2. Basic Alarms for Every Environment

At minimum, set up:

  • CPU utilisation > 80% for 5 minutes
  • Memory utilisation > 85% (requires CloudWatch Agent)
  • Disk usage > 90%
  • Status check failures (this catches hardware issues)
  • Billing alarm for unexpected spend
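The billing alarm from that list deserves a sketch because it has quirks: you must enable billing alerts in the account first, and the `EstimatedCharges` metric only exists in us-east-1 regardless of where your workloads run. Threshold and topic ARN below are placeholders:

```shell
# Billing metrics live only in us-east-1
aws cloudwatch put-metric-alarm \
  --region us-east-1 \
  --alarm-name monthly-spend-over-100usd \
  --namespace AWS/Billing --metric-name EstimatedCharges \
  --dimensions Name=Currency,Value=USD \
  --statistic Maximum --period 21600 \
  --threshold 100 --comparison-operator GreaterThanThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:warning-alerts
```

For anything beyond a single threshold, AWS Budgets gives you forecasting and multiple notification tiers — but this one alarm is the five-minute safety net.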

3. CloudWatch Dashboards

Create one overview dashboard per environment. Include:

  • EC2 instance metrics (CPU, memory, network)
  • RDS metrics (connections, IOPS, freeable memory)
  • S3 request counts and error rates
  • Lambda invocations and errors (if using serverless)

4. Log Groups with Retention Policies

Always set a retention policy on CloudWatch Log Groups. The default is "never expire" which means logs accumulate forever and costs grow silently. Set 30 days for dev, 90 days for staging, 365 days for production.

Warning

CloudWatch Log Groups default to "never expire" retention. Logs accumulate silently and costs grow month over month. Always set an explicit retention policy on every log group — 30 days for dev environments saves significant money.
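Setting retention is a one-liner, and it's easy to sweep every group that still has the "never expire" default. A sketch using the 30-day dev value from above — the log group name in the first command is a placeholder:

```shell
# Set 30-day retention on a single log group
aws logs put-retention-policy \
  --log-group-name /dev/app-logs \
  --retention-in-days 30

# Sweep: apply 30-day retention to every group with no retention set
aws logs describe-log-groups \
  --query 'logGroups[?!retentionInDays].logGroupName' --output text |
  tr '\t' '\n' |
  while read -r lg; do
    aws logs put-retention-policy --log-group-name "$lg" --retention-in-days 30
  done
```

The JMESPath filter `[?!retentionInDays]` matches groups where the field is absent — exactly the ones still on "never expire".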

5. SNS Topics for Alert Routing

Create at least two SNS topics:

  • critical-alerts → email + SMS
  • warning-alerts → email only
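Creating the two topics and wiring subscribers takes a few commands — the email addresses and phone number are placeholders, and each email subscription must be confirmed from the recipient's inbox before it delivers:

```shell
# Create the two routing topics
CRITICAL_ARN=$(aws sns create-topic --name critical-alerts --query 'TopicArn' --output text)
WARNING_ARN=$(aws sns create-topic --name warning-alerts --query 'TopicArn' --output text)

# Subscribe endpoints per severity
aws sns subscribe --topic-arn "$CRITICAL_ARN" --protocol email --notification-endpoint oncall@example.com
aws sns subscribe --topic-arn "$CRITICAL_ARN" --protocol sms --notification-endpoint +15550100000
aws sns subscribe --topic-arn "$WARNING_ARN" --protocol email --notification-endpoint team@example.com
```

These topic ARNs are then what you pass to `--alarm-actions` on every CloudWatch Alarm — the rough equivalent of attaching an Azure Action Group to an alert rule.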

What CloudWatch Gets Right

  • CloudWatch Synthetics — Canary scripts that test your endpoints externally. Azure doesn't have a direct equivalent (you'd use Application Insights availability tests, but Synthetics is more flexible with full Puppeteer/Selenium scripts).
  • Anomaly detection — CloudWatch can learn metric patterns and alert on anomalies automatically. Azure has Smart Detection in Application Insights, but CloudWatch's per-metric anomaly detection is more granular.
  • Embedded metrics format — You can emit custom metrics from application logs without a separate API call. Clever and cost-effective for high-throughput applications.
  • Contributor Insights — Identifies top contributors to a metric (like which IP addresses are generating the most errors). Unique feature.

What Azure Monitor Gets Right

  • Unified experience. One portal blade, one query language, one alert system. CloudWatch feels fragmented by comparison.
  • KQL is superior. For complex log analysis, KQL is more powerful than CloudWatch Logs Insights. Cross-table joins, time-series analysis, rendering charts — KQL does it all.
  • Application Insights is more complete than X-Ray + CloudWatch combined for application-level monitoring. Auto-instrumentation, live metrics, application map, failure analysis — it's a full APM solution.
  • Workbooks for custom reporting are more flexible than CloudWatch Dashboards.
  • Cost predictability. Azure's commitment tiers for Log Analytics make costs predictable. CloudWatch's per-GB-scanned query pricing is harder to forecast.

The Translation Cheat Sheet

| What You Need | Azure | AWS |
| --- | --- | --- |
| Basic infra metrics | Azure Monitor (automatic) | CloudWatch Metrics (automatic, 5-min) |
| Memory/disk metrics | Azure Monitor Agent | CloudWatch Agent |
| Centralised logs | Log Analytics Workspace | CloudWatch Log Groups |
| Query language | KQL | CloudWatch Logs Insights |
| Alerts | Azure Monitor Alerts | CloudWatch Alarms + SNS |
| Application monitoring | Application Insights | X-Ray + CloudWatch |
| Custom dashboards | Azure Workbooks / Dashboards | CloudWatch Dashboards |
| Uptime checks | App Insights Availability Tests | CloudWatch Synthetics |
| Config for agents | Data Collection Rules | SSM Parameter Store configs |

The Bottom Line

CloudWatch is a capable monitoring platform, but coming from Azure Monitor, you'll feel the fragmentation. The lack of a unified query language across metrics and logs, the extra steps needed for log-based alerts, and the need to manually install agents for basic metrics like memory — these are all friction points you won't be used to.

That said, CloudWatch has strengths that Azure Monitor doesn't: Synthetics, per-metric anomaly detection, embedded metrics, and generally lower costs for log storage.

My advice: Don't try to recreate your Azure Monitor setup exactly in CloudWatch. Learn what CloudWatch does well, use it for those things, and accept the architectural differences. The monitoring outcomes — knowing when things break, understanding performance trends, controlling costs — are the same on both platforms. The tooling to achieve them is just different enough to keep you on your toes.


Part 5 of my "Azure to AWS" series. Previously: S3 for Azure Blob Storage Users. Next up: the final piece — VPC networking, and why Azure VNet intuitions both help and hurt.
