I use Claude Code as a lab-building co-pilot every day. It scaffolds Bicep/ARM templates, writes student-facing walkthroughs from infrastructure code, generates certification-style exam questions, and builds validation scripts that check whether students completed a lab correctly. Here's exactly how I do it — prompts, gotchas, and all.
Why This Matters
Building cloud labs is one of those jobs that sounds creative but is actually 80% boilerplate. You're writing the same Bicep parameters, the same "navigate to the portal and click..." instructions, the same multiple-choice distractors. Over and over.
I've been building labs for CloudLearn and running workshops for almost a decade. At QA Ltd, I built hundreds of hands-on scenarios. And the bottleneck was never "do I know the content?" — it was "do I have 6 hours to turn this into a polished lab with instructions, validation, and exam questions?"
Claude Code collapsed that. Not by replacing what I know, but by handling the repetitive parts so I can focus on the parts that actually need an MCT's judgment — the gotchas, the "docs vs reality" moments, the teaching design.
My Workflow: Before vs After
Here's the honest comparison:
| Task | Before | With Claude Code |
|---|---|---|
| Scaffold a Bicep template for a lab environment | 45-60 min (copy from old labs, modify) | 5-10 min (prompt + review) |
| Write step-by-step student instructions | 2-3 hours per lab | 30-45 min (generate + edit for voice) |
| Generate 10 exam-quality MCQs | 1-2 hours (writing distractors is painful) | 15-20 min (generate + verify against docs) |
| Build a validation script | 30-60 min | 10 min |
| Total for a complete lab | 5-8 hours | 1-2 hours |
The time saved is real. But the important part isn't speed — it's that I'm spending my time on the high-value work instead of boilerplate.
Use Case 1: Scaffolding Bicep Templates
This is where I start most labs. I need a deployment template that sets up the environment students will work in — VNets, subnets, App Services, storage accounts, whatever the lab requires.
Here's a real prompt I'd use:
```
Create a Bicep template that deploys a lab environment for practicing
Azure Private Endpoints with App Service. I need:

- A VNet with two subnets (one for the PE, one for a jump VM)
- An App Service Plan (B1 SKU — keep costs low)
- A Web App with a simple default page
- A Private Endpoint connecting the Web App to the VNet
- A Private DNS Zone linked to the VNet
- All resources in canadacentral, prefixed with "lab-pe-"

Use parameter defaults so students can deploy with zero modification.
Add comments explaining what each resource does — students will read this.
```

Claude Code generates the full template. But here's the thing — I don't just accept it blindly. I review for:
- API versions — Claude sometimes uses outdated versions. I always check `Microsoft.Web/sites@2023-12-01` is current.
- SKU choices — I asked for B1, but does the template actually use B1? I've seen it default to S1.
- Subnet sizing — For a lab, `/27` is usually fine. Claude sometimes over-provisions with `/24`.
- Parameter defaults — Students shouldn't need to fill in 15 parameters. Good defaults matter.
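To make the API-version check fast, I'll sometimes scan a template before doing the full review. A minimal sketch — the file path and cutoff year here are illustrative, not part of any real lab:

```shell
#!/bin/bash
# Scan a Bicep file for resource API versions and flag anything older
# than a cutoff year. The sample file and cutoff are placeholders.
scan_api_versions() {
  local file="$1" cutoff="${2:-2024}"
  grep -oE "@[0-9]{4}-[0-9]{2}-[0-9]{2}" "$file" | sort -u | while read -r ver; do
    year="${ver:1:4}"
    if [ "$year" -lt "$cutoff" ]; then
      echo "OLD  $ver"   # worth checking against the Bicep resource reference
    else
      echo "OK   $ver"
    fi
  done
}

# Example: run against a tiny inline sample template
cat > /tmp/sample.bicep <<'EOF'
resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' = {}
resource app 'Microsoft.Web/sites@2022-03-01' = {}
EOF
scan_api_versions /tmp/sample.bicep 2024
```

It only flags candidates — whether a newer stable API version exists is still a docs lookup.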
Tell Claude Code your constraints upfront: "This is for a student lab, keep costs under $5/day, use the cheapest SKUs that still demonstrate the concept." It makes a real difference in the output.
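You can also bake those constraints into the template itself so students can't accidentally pick an expensive tier. A sketch in Bicep — the parameter name is illustrative, not from the template below:

```bicep
@description('App Service Plan SKU — restricted to cheap tiers for lab use')
@allowed([
  'F1'
  'B1'
])
param planSku string = 'B1'
```

The `@allowed` decorator turns a cost policy into a deploy-time validation error instead of a surprise on the bill.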
Here's an example of what the output looks like — this is from a real lab I built for a Private Endpoints module:
```bicep
@description('Location for all resources')
param location string = 'canadacentral'

@description('Prefix for all resource names')
param prefix string = 'lab-pe'

// The VNet is the backbone — two subnets isolate the Private Endpoint
// from the jump box VM that students will use to test connectivity
resource vnet 'Microsoft.Network/virtualNetworks@2024-01-01' = {
  name: '${prefix}-vnet'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: ['10.0.0.0/16']
    }
    subnets: [
      {
        name: 'snet-pe'
        properties: {
          addressPrefix: '10.0.1.0/27'
          // Network policies must be disabled on this subnet for Private Endpoints
          privateEndpointNetworkPolicies: 'Disabled'
        }
      }
      {
        name: 'snet-vm'
        properties: {
          addressPrefix: '10.0.2.0/27'
        }
      }
    ]
  }
}
```

I typically generate the full template, then iterate: "Add a deploymentScript resource that creates a simple index.html on the Web App so students can verify the app is running before they add the Private Endpoint."
The back-and-forth is where Claude Code shines. It has the full context of the template and can make targeted modifications without me re-explaining the architecture.
Use Case 2: Writing Lab Instructions
This is the use case that saves me the most time. I have a Bicep template — now I need student-facing, step-by-step instructions that walk someone through the lab.
```
I have a Bicep template that deploys a Private Endpoint for an Azure
Web App. Read the template and write student-facing lab instructions.
Requirements:

- Assume the student has already deployed the template
- Walk them through verifying the PE is working
- Include CLI commands AND portal steps (students should try both)
- Add "checkpoint" boxes after each major step so students can
  verify they're on track
- Difficulty: intermediate (they know the portal, new to networking)
- Tone: friendly but technical, like a senior colleague walking
  them through it
```

Claude Code reads the Bicep file, understands the architecture, and generates instructions that actually match the deployed resources. No more "Step 3 says to click on a blade that doesn't exist because I renamed the resource."
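For the checkpoint boxes, it helps to pin down one markdown convention and tell Claude Code to reuse it. Something like this — the exact format is my own convention, and the hostname shown is illustrative:

```markdown
> ✅ **Checkpoint:** From the jump VM, run
> `nslookup lab-pe-app.azurewebsites.net`. You should see a 10.x.x.x
> address. If you see a public IP, go back and verify the Private DNS
> Zone is linked to the VNet before continuing.
```

Giving the model a concrete template keeps every checkpoint box visually identical across the lab.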
The generated instructions will use generic Azure portal paths. Always verify these against the current portal UI — Microsoft updates the navigation frequently. I learned this the hard way when 30 students couldn't find the "Networking" blade because it had moved under "Settings" in a portal refresh.
The key edit I always make: voice. Claude Code writes clean instructions, but they don't sound like me. I go through and add the "here's what's actually happening" asides, the "if you see this error, it's because..." gotchas, and the "in production, you'd also want to..." context. That's where the MCT value lives — not in the steps, but in the commentary between them.
Use Case 3: Generating Exam Questions
Writing good multiple-choice questions is an underrated skill. The questions themselves are straightforward — it's the distractors (wrong answers) that take forever. Good distractors need to be plausible but wrong in a specific, educational way.
Here's my prompt pattern:
```
Generate 10 exam-quality multiple-choice questions about Azure Private
Endpoints. Requirements:

- Mix of single-answer and multiple-select (mark which is which)
- Each question should test a DIFFERENT concept (don't repeat DNS
  resolution 3 times)
- Distractors must be plausible — things a student might actually
  believe if they haven't done the hands-on work
- Include the correct answer AND a 1-sentence explanation of WHY
  each distractor is wrong
- Difficulty: AZ-104 level
- Ground every question in official Microsoft Learn documentation —
  no made-up service behaviors
```

The "explain why each distractor is wrong" part is critical. It forces Claude Code to generate distractors that are wrong for specific, teachable reasons — not just random Azure service names.
Example output:
```
Q: A company deploys a Private Endpoint for their Azure SQL Database.
Users on the corporate VNet report they can't connect. The Private
Endpoint shows as "Approved." What is the MOST LIKELY cause?

A) The SQL Database firewall still allows public access ✗
   (This wouldn't prevent private access — it controls public access)
B) The Private DNS Zone is not linked to the VNet ✓
   (Without DNS zone linkage, the FQDN resolves to the public IP)
C) The subnet has Network Security Groups blocking port 1433 ✗
   (NSGs don't apply to Private Endpoint traffic by default)
D) The Private Endpoint needs a managed identity ✗
   (Private Endpoints don't use managed identities for connectivity)
```

Always verify generated questions against current Microsoft Learn documentation. I've caught Claude Code generating questions about service behaviors that were true 2 years ago but have since changed. Azure moves fast — your exam questions need to keep up.
Use Case 4: Validation Scripts
Every good lab needs a way to check whether the student actually completed it. I use Claude Code to generate validation scripts that run after the lab and report pass/fail.
```
Write a bash validation script for the Private Endpoints lab.
It should check:

1. The Private Endpoint exists and is in "Approved" state
2. The Private DNS Zone has an A record pointing to the PE's
   private IP
3. The Web App is NOT accessible from the public internet
   (curl should timeout or return 403)
4. The Web App IS accessible through the VNet (nslookup returns
   a 10.x.x.x address)

Output format: green checkmark for pass, red X for fail, with
a one-line explanation for each check. Exit code 0 only if all pass.
```

The output is a script I can drop directly into the lab's validation folder:
```bash
#!/bin/bash
PASS="\033[0;32m✓\033[0m"
FAIL="\033[0;31m✗\033[0m"
ERRORS=0

RG="lab-pe-rg"
PE_NAME="lab-pe-endpoint"
WEBAPP="lab-pe-app"

# Check 1: Private Endpoint exists and is approved
PE_STATE=$(az network private-endpoint show \
  --name $PE_NAME --resource-group $RG \
  --query "privateLinkServiceConnections[0].privateLinkServiceConnectionState.status" \
  -o tsv 2>/dev/null)
if [ "$PE_STATE" = "Approved" ]; then
  echo -e "$PASS Private Endpoint is provisioned and approved"
else
  echo -e "$FAIL Private Endpoint not found or not in Approved state (got: $PE_STATE)"
  ((ERRORS++))
fi

# Check 2: DNS zone has A record
DNS_IP=$(az network private-dns record-set a show \
  --zone-name "privatelink.azurewebsites.net" \
  --resource-group $RG --name $WEBAPP \
  --query "aRecords[0].ipv4Address" -o tsv 2>/dev/null)
if [[ $DNS_IP == 10.* ]]; then
  echo -e "$PASS DNS A record resolves to private IP ($DNS_IP)"
else
  echo -e "$FAIL DNS A record missing or not pointing to private IP"
  ((ERRORS++))
fi

echo ""
[ $ERRORS -eq 0 ] && echo "All checks passed." || echo "$ERRORS check(s) failed."
exit $ERRORS
```

I always review the `az` commands for correct flag syntax and API behavior. Claude Code gets the logic right almost every time, but occasionally uses deprecated flags or incorrect `--query` JMESPath expressions.
What Will Bite You
Here's the honest part: Claude Code is incredibly useful for this workflow, but it's not magic. These are the gotchas I've hit:
Outdated API Versions
Claude Code sometimes generates Bicep with API versions from 2022 or early 2023. Azure services evolve — newer API versions add required properties, change defaults, or deprecate fields. Always check the Bicep resource reference for the latest stable version.
Hallucinated CLI Flags
I've seen Claude Code generate az commands with flags that don't exist. The command looks right, the flag name sounds plausible, but az will just throw an error. If a command looks unfamiliar, run az <command> --help before putting it in front of students.
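One cheap pre-flight I use before running `--help` on anything: pull every `az` invocation out of a generated script so the flags can be eyeballed in one pass. A minimal sketch — the sample script is inlined purely for illustration:

```shell
#!/bin/bash
# Extract every `az` command (with its flags) from a script so the
# flag names can be reviewed in one place. Sample script is a stand-in.
cat > /tmp/check.sh <<'EOF'
PE_STATE=$(az network private-endpoint show --name pe --resource-group rg -o tsv)
az network private-dns record-set a show --zone-name z --resource-group rg
EOF

# Match "az" followed by lowercase words, hyphens, and spaces
grep -oE "az [a-z -]+" /tmp/check.sh | sort -u
```

Anything unfamiliar in that list gets a quick `az <command> --help` before it goes anywhere near students.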
Portal Navigation Drift
Azure portal navigation changes frequently. Claude Code generates instructions based on its training data, which might reference a blade or menu item that's been moved or renamed. Always click through the steps yourself before publishing.
Over-Confident Exam Questions
This is the subtle one. Claude Code will generate questions that sound authoritative but test behaviors that are technically incorrect or have changed. The distractor explanations are the canary — if the "why it's wrong" explanation doesn't make sense to you, the question probably needs to be rewritten.
My rule: if I can't independently verify a generated exam question against Microsoft Learn in under 2 minutes, I throw it out and write a new one. The time you save generating 10 questions easily covers rewriting 2-3 of them.
The Bigger Lesson
Here's what I tell my coaching clients when they ask about AI tools: Claude Code doesn't replace expertise. It amplifies it.
I can use Claude Code to build labs 10x faster because I've built hundreds of labs by hand. I know what good Bicep looks like. I know what good exam questions test. I know which gotchas students hit. Without that context, you'd just be generating plausible-looking templates and hoping they work.
The real workflow is: expertise in, speed out. Claude Code handles the boilerplate so I can focus on the teaching design — the part that actually matters for students.
If you're building training content, certification prep, or hands-on labs — give this workflow a try. Start with something small: take a Bicep template you've already written and ask Claude Code to generate the student instructions for it. Compare the output to what you'd write manually. That's how you calibrate trust.
My Prompt Patterns — Quick Reference
| Task | Prompt Pattern |
|---|---|
| Scaffold a lab template | "Create a Bicep template that deploys [architecture]. Constraints: [SKU, region, cost]. Add comments for students." |
| Generate instructions | "Read this Bicep template. Write step-by-step lab instructions for [audience]. Include CLI and portal paths. Add checkpoints." |
| Exam questions | "Generate [N] MCQs about [topic] at [cert level]. Explain why each distractor is wrong. Ground in Microsoft Learn." |
| Validation script | "Write a bash script that validates [lab completion criteria]. Output pass/fail with explanations. Exit 0 only if all pass." |
| Iterate on a template | "Add [resource/config] to the existing template. Keep the naming convention and parameter style consistent." |
| Review for gotchas | "Review this Bicep template for common student mistakes, missing dependencies, or cost surprises. List what could go wrong." |
What's Next
- If you're studying for an Azure cert, check out the Azure Certification Roadmap 2026 — it covers which certs to target and in what order.
- Want to learn Bicep from scratch? Start with Getting Started with Terraform on Azure — the IaC concepts transfer directly, and then Bicep will feel natural.
- If you're building your own labs or portfolio projects, the 4 Azure Projects That Cost Under $1 guide is a good starting point for ideas.