Azure Fundamentals for Web Hosting: From App Service to Virtual Machines

Azure Fundamentals for Web Hosting: Deployment workflows and release safety

Chapter 9

Estimated reading time: 9 minutes


Why deployment workflows matter for web hosting

A deployment workflow is the repeatable process that turns a code change into a running, verified version of your web application. “Release safety” means you can ship changes frequently while reducing the risk of outages, data loss, or security exposure. In Azure web hosting, the workflow differs by hosting model:

  • App Service: deploy code packages to an app, use deployment slots for staged releases and quick swap/rollback.
  • Container Apps: deploy container images, use revisions and traffic splitting for gradual rollout.
  • Virtual Machines: deploy by updating OS/app configuration and binaries on servers using scripts and automation; safety relies on repeatability, health checks, and controlled rollout across instances.

A standard release pipeline template (build → test → deploy → validate → rollback)

1) Build

Goal: produce a versioned artifact that can be deployed consistently.

  • App Service: build a ZIP/package or compiled output; include a version identifier (commit SHA, build number).
  • Container Apps: build a container image; tag it immutably (e.g., myapp:1.4.12 and/or myapp:sha-abc123).
  • VMs: build an artifact (ZIP, MSI, tarball) or a VM image (golden image) if you use image-based VM deployments; at minimum, version your deployable package.
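
As a concrete illustration, a build step might derive version labels like these. A minimal sketch, where the commit SHA, build number, and artifact names are placeholders; in CI they would come from your build system's variables:

```shell
#!/usr/bin/env bash
# Sketch: derive immutable version labels for a container image and a package.
# COMMIT_SHA and BUILD_NUMBER are illustrative placeholders; in a pipeline
# they would be supplied by the build system.
set -euo pipefail

COMMIT_SHA="abc1234"
BUILD_NUMBER="1.4.12"

IMAGE_TAG="myapp:${BUILD_NUMBER}-sha-${COMMIT_SHA}"   # immutable image tag
ZIP_NAME="myapp-${BUILD_NUMBER}-${COMMIT_SHA}.zip"    # versioned package name

echo "$IMAGE_TAG"
echo "$ZIP_NAME"
```

Because the tag embeds both the build number and the commit, any deployed version is traceable back to the exact source change.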

2) Test

Goal: catch issues before deployment and prevent unsafe releases.

  • Unit tests: run fast tests on every change.
  • Integration tests: validate dependencies (databases, queues, external APIs) using test environments or mocks.
  • Security checks: dependency scanning, secret scanning, and container image vulnerability scanning (for container workloads).
  • Build gates: fail the pipeline if tests or checks fail.

3) Deploy

Goal: push the artifact to the target environment using a repeatable method.

  • App Service: deploy to a staging slot first, then swap to production.
  • Container Apps: deploy a new revision from a new image tag; optionally split traffic gradually.
  • VMs: run automation to update the app on one VM (or a small subset) first, then roll forward across the fleet.
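
To make the first two deploy paths concrete, here is a hedged sketch of the Azure CLI command shapes. The `az` function below is a dry-run stub that only prints commands, and all resource names are assumptions; the VM path has no single CLI command because it runs your own automation:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the deploy step per hosting model. The az() stub prints
# commands instead of calling Azure; resource and app names are illustrative.
set -euo pipefail

az() { echo "az $*"; }   # remove this stub to run against a real subscription

RG="my-rg"

# App Service: deploy the package to a staging slot, not production
OUT1=$(az webapp deploy --resource-group "$RG" --name my-webapp \
         --slot staging --src-path app.zip --type zip)

# Container Apps: point the app at the new immutable image tag (new revision)
OUT2=$(az containerapp update --resource-group "$RG" --name my-containerapp \
         --image myregistry.azurecr.io/myapp:1.4.12)

echo "$OUT1"
echo "$OUT2"
# VMs: no single command; run your deployment script against a subset first.
```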

4) Validate

Goal: confirm the new version is healthy before sending full traffic.

  • Health endpoints: check /health or /ready for readiness.
  • Smoke tests: basic user journeys (home page, login, critical API call).
  • Observability checks: error rate, latency, CPU/memory, and key business metrics.
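
A validation gate can be as simple as a retry loop around the health endpoint. A minimal sketch, where the URL, attempt count, and delay are assumptions you would tune:

```shell
#!/usr/bin/env bash
# Minimal health-gate sketch: poll a health endpoint a few times before
# declaring the release healthy. URL and retry settings are illustrative.
set -euo pipefail

check_health() {
  local url="$1" attempts="${2:-5}" delay="${3:-2}"
  local i
  for ((i = 1; i <= attempts; i++)); do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    sleep "$delay"
  done
  echo "unhealthy"
  return 1
}

# Example: check_health "https://myapp-staging.azurewebsites.net/health" 5 2
```

A pipeline would fail the stage when `check_health` returns non-zero, which is what triggers the rollback step below.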

5) Rollback

Goal: revert quickly if validation fails or production metrics degrade.

  • App Service: swap back to the previous slot (fast rollback).
  • Container Apps: shift traffic back to the previous stable revision.
  • VMs: redeploy the previous artifact, revert configuration, or restore from a known-good image; rollback is slower unless you design for it.

Deployment approach 1: App Service CI/CD with deployment slots

Concept: staged deployment with slot swap

With App Service, you typically deploy to a non-production slot (often named staging) and validate it. When ready, you swap the staging slot with production. This enables near-zero downtime releases because the platform warms up the target before switching traffic.

Step-by-step workflow (staging → swap)

  • Step 1: Build and package your app and generate a version label (build number/commit SHA).
  • Step 2: Deploy to the staging slot (not production). Ensure the slot runs the new build.
  • Step 3: Configure slot settings so environment-specific values stay in the slot (see “Environment configuration” below).
  • Step 4: Warm up and validate the staging slot by calling health endpoints and running smoke tests against the staging URL.
  • Step 5: Swap staging → production. Use swap preview if you need to verify configuration changes before finalizing.
  • Step 6: Post-swap validation on production endpoints and monitoring signals.
  • Step 7: Rollback if needed by swapping back.
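
The slot workflow above can be sketched with Azure CLI command shapes. The `az` function is a dry-run stub that only prints commands, and the resource names are illustrative:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the staging -> swap workflow. The az() stub prints
# commands instead of calling Azure; names are illustrative.
set -euo pipefail

az() { echo "az $*"; }

RG="my-rg"
APP="my-webapp"

# Step 2: deploy the new build to the staging slot
az webapp deploy --resource-group "$RG" --name "$APP" \
   --slot staging --src-path app.zip --type zip

# Step 5: swap staging into production
az webapp deployment slot swap --resource-group "$RG" --name "$APP" \
   --slot staging --target-slot production

# Step 7 (rollback): running the same swap command again returns production
# to the previous build, because the slots exchange content on swap.
```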

Release safety tips for slots

  • Use “slot settings” for secrets and environment-specific config so they do not move during swap (e.g., connection strings, API keys).
  • Keep a stable “last known good” build: after a swap, the staging slot holds the previous production build, which becomes your rollback target.
  • Plan for database changes: prefer backward-compatible migrations so old and new app versions can run during swap/rollback windows.

Deployment approach 2: Container Apps image-based deployments with revisions

Concept: immutable images and revision-based rollout

In Container Apps, a deployment is typically a new container image tag. Each deployment creates a revision. You can keep multiple revisions and control how traffic is routed between them. This supports staged rollouts and quick rollback by shifting traffic back.

Step-by-step workflow (new revision + traffic control)

  • Step 1: Build the container image and tag it immutably (avoid latest for releases).
  • Step 2: Push the image to your container registry.
  • Step 3: Create a new revision by updating the Container App to reference the new image tag.
  • Step 4: Validate the new revision with health checks and smoke tests. If possible, validate without sending public traffic first.
  • Step 5: Gradual rollout by splitting traffic (e.g., 10% new revision, 90% old revision), then increase as metrics look healthy.
  • Step 6: Promote the new revision to 100% traffic when stable.
  • Step 7: Rollback by shifting traffic back to the previous revision (fast) and optionally deactivating the bad revision.
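
The gradual rollout in Steps 5–7 can be sketched with Azure CLI command shapes. The `az` function is a dry-run stub that only prints commands; revision names and weights are illustrative:

```shell
#!/usr/bin/env bash
# Dry-run sketch of a canary rollout between two revisions. The az() stub
# prints commands; revision names and traffic weights are illustrative.
set -euo pipefail

az() { echo "az $*"; }

RG="my-rg"
APP="my-containerapp"
OLD_REV="${APP}--v1"
NEW_REV="${APP}--v2"

# Step 5: start with a small share of traffic on the new revision
az containerapp ingress traffic set -g "$RG" -n "$APP" \
   --revision-weight "$OLD_REV=90" "$NEW_REV=10"

# Step 6: promote when metrics look healthy
az containerapp ingress traffic set -g "$RG" -n "$APP" \
   --revision-weight "$NEW_REV=100"

# Step 7 (rollback): shift the weight back to "$OLD_REV=100" instead
```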

Release safety tips for revisions

  • Use readiness checks so traffic only reaches containers that are ready to serve.
  • Keep at least one previous stable revision active during rollout for immediate rollback.
  • Prefer small, frequent releases so you can identify problematic changes quickly.

Deployment approach 3: VM deployments with scripts and automation concepts

Concept: configuration-managed, repeatable server changes

On VMs, you control the OS and runtime, so deployments often involve copying artifacts, updating configuration, and restarting services. Release safety depends on automation and consistency: the same steps should run every time, with minimal manual intervention.

Common VM deployment patterns

  • In-place update (single VM): simplest but highest risk; downtime may occur during restart.
  • Rolling update (multiple VMs): update one VM at a time behind a load balancer; reduces downtime and limits blast radius.
  • Blue/green with two VM pools: maintain two sets of VMs (blue=live, green=new). Switch traffic when green is validated; fastest rollback by switching back.

Step-by-step workflow (rolling update example)

  • Step 1: Build and publish an artifact (ZIP/tarball) with a version identifier.
  • Step 2: Pre-deployment checks: ensure disk space, required runtime versions, and connectivity to dependencies.
  • Step 3: Drain one VM (stop sending it new requests) if behind a load balancer.
  • Step 4: Deploy to that VM using a script: download artifact, verify checksum, unpack to a versioned folder, update a symlink/current pointer, update config, restart service.
  • Step 5: Validate on that VM: local health endpoint, service status, logs.
  • Step 6: Re-add VM to rotation when healthy.
  • Step 7: Repeat for the next VM(s).
  • Step 8: Rollback by switching the pointer back to the previous versioned folder and restarting, or redeploying the previous artifact.
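
The rolling loop above can be sketched with stubbed steps; each function below is a placeholder for your load balancer API and per-VM deployment script, and the VM names are illustrative:

```shell
#!/usr/bin/env bash
# Rolling-update orchestration sketch. drain/deploy_to/validate/restore are
# stubs standing in for real load balancer and deployment tooling. The loop
# halts the rollout at the first VM that fails validation (health gate).
set -euo pipefail

VMS=("vm-1" "vm-2" "vm-3")

drain()     { echo "draining $1"; }       # stop sending new requests to the VM
deploy_to() { echo "deploying to $1"; }   # run the per-VM deployment script
validate()  { echo "validating $1"; }     # health endpoint + service status
restore()   { echo "re-adding $1"; }      # put the VM back in rotation

rolling_update() {
  local vm
  for vm in "${VMS[@]}"; do
    drain "$vm"
    deploy_to "$vm"
    if ! validate "$vm"; then
      echo "halting rollout at $vm" >&2
      return 1                            # stop the fleet-wide rollout
    fi
    restore "$vm"
  done
  echo "rollout complete"
}

rolling_update
```

Keeping the loop strictly one VM at a time limits the blast radius; a larger fleet might update in small batches instead.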

Automation concepts to apply on VMs

  • Idempotent scripts: running the script twice should not break the server (check before creating users, folders, firewall rules, etc.).
  • Versioned deployments: keep /opt/myapp/releases/2026-01-16_001 and /opt/myapp/releases/2026-01-15_003 so rollback is a pointer change.
  • Service management: restart only what you must; verify the service is listening and healthy after restart.
  • Health gates: stop the rollout if one VM fails validation.
```shell
# Example Linux deployment skeleton (conceptual)
set -euo pipefail

APP_DIR=/opt/myapp
RELEASES=$APP_DIR/releases
NEW_RELEASE=$RELEASES/$BUILD_ID

mkdir -p "$NEW_RELEASE"
curl -fSL "$ARTIFACT_URL" -o /tmp/app.tgz
tar -xzf /tmp/app.tgz -C "$NEW_RELEASE"
# Update config from environment/secret store here
ln -sfn "$NEW_RELEASE" "$APP_DIR/current"
systemctl restart myapp.service
curl -f http://localhost:8080/health
```

Environment configuration: dev/test/prod without changing code

Principles

  • Separate config from code: the same artifact/image should run in multiple environments.
  • Use environment variables or platform settings for environment-specific values.
  • Prefer immutable artifacts: rebuild only when code changes, not when config changes.

How it maps to each hosting model

  • App Service: use application settings and connection strings; mark environment-specific values as slot settings so they stay with the slot during swap.
  • Container Apps: use environment variables and secrets; keep the same image across environments and inject config at deploy time.
  • VMs: use configuration files generated from templates (e.g., appsettings.Production.json rendered at deploy time) or environment variables managed by your service manager; avoid hand-editing production config.
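
For the VM case, rendering a config file from environment variables at deploy time might look like this sketch; the variable names, defaults, and file layout are illustrative assumptions:

```shell
#!/usr/bin/env bash
# Sketch: render an environment-specific config file from environment
# variables at deploy time, so the same artifact runs in every environment.
# Variable names, the default value, and the file layout are illustrative.
set -euo pipefail

DB_HOST="${DB_HOST:-db.example.internal}"   # normally injected by the pipeline
LOG_LEVEL="${LOG_LEVEL:-Information}"       # default when not overridden

cat > appsettings.Production.json <<EOF
{
  "Database": { "Host": "${DB_HOST}" },
  "Logging":  { "Level": "${LOG_LEVEL}" }
}
EOF

echo "rendered config for ${DB_HOST}"
```

Because only the rendered file differs per environment, the deployable artifact itself never needs to be rebuilt for config changes.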

Secret management basics (what to do in every model)

Core rules

  • Never store secrets in source control (including pipeline YAML and container images).
  • Use a dedicated secret store (commonly Azure Key Vault) and grant access using managed identities where possible.
  • Rotate secrets and design apps to reload secrets without full redeploy when feasible.

Practical guidance by hosting model

  • App Service: reference secrets via Key Vault references in app settings when supported; keep secrets as slot settings if they differ per environment/slot.
  • Container Apps: store secrets in the Container App secret store and/or reference Key Vault; inject as environment variables; avoid baking secrets into the image.
  • VMs: retrieve secrets at deploy time using an identity-based approach, write them to protected locations with least privilege, and restrict file permissions; avoid leaving secrets in shell history or logs.
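
For the VM case, fetching a secret at deploy time might look like this sketch. The `az` function is a stub standing in for a real Key Vault lookup, and the vault, secret, and file names are illustrative:

```shell
#!/usr/bin/env bash
# Sketch: fetch a secret at deploy time and write it with least privilege.
# The az() stub returns a fake value instead of calling Key Vault; vault,
# secret, and path names are illustrative.
set -euo pipefail

az() { echo "stub-secret-value"; }   # stands in for a real Key Vault lookup

umask 077                            # files created below are owner-only
SECRET=$(az keyvault secret show --vault-name my-kv \
           --name db-password --query value -o tsv)

printf '%s' "$SECRET" > /tmp/myapp-db-password
chmod 600 /tmp/myapp-db-password     # restrict file permissions explicitly
# Avoid echoing "$SECRET" to logs or leaving it in shell history.
```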

Minimizing downtime: blue/green and staged rollouts

Blue/green vs staged rollout

  • Blue/green: two environments (blue=live, green=new). Validate green, then switch traffic. Rollback is switching back. Best when supported by platform routing (App Service slots, VM pools behind a load balancer).
  • Staged rollout (canary): send a small percentage of traffic to the new version first, then increase gradually. Best when you can split traffic (Container Apps revisions; can be done with VM load balancer rules or gateway features).

Where each model fits

  • App Service: blue/green via slots and swap; staged rollout is less direct unless you add additional routing components.
  • Container Apps: staged rollout is a first-class pattern via revision traffic splitting; also supports quick rollback.
  • VMs: blue/green requires two VM groups and a traffic switch; staged rollout requires careful load balancer configuration and health-based routing.

Putting it together: a reusable release checklist

Build checklist

  • Artifact/image is versioned and traceable to a commit.
  • Dependencies are pinned (package lockfiles, base image tags).
  • For containers: image scan completed and acceptable.

Test checklist

  • Unit tests pass.
  • Integration/smoke tests pass in a non-prod environment.
  • Basic security checks pass (dependency/secret scanning).

Deploy checklist

  • App Service: deploy to staging slot; Container Apps: create new revision; VMs: deploy to a small subset first.
  • Environment config injected without rebuilding the artifact.
  • Secrets pulled from a secret store; no secrets in logs.

Validate checklist

  • Health endpoints return success.
  • Key user flows work.
  • Monitoring signals are within thresholds (error rate/latency).

Rollback checklist

  • Rollback mechanism is tested (slot swap back, traffic shift to old revision, VM version pointer revert).
  • Rollback does not require emergency manual edits.
  • Post-rollback validation confirms recovery.

Now answer the exercise about the content:

Which deployment practice best enables a gradual rollout with quick rollback in Azure Container Apps?


Container Apps deployments create revisions from immutable image tags. You can split traffic (canary) between revisions and quickly roll back by shifting traffic back to the previous stable revision.

Next chapter

Azure Fundamentals for Web Hosting: Scaling and reliability basics for small-to-medium sites
