Serverless in practical Azure terms
In Azure, “serverless” does not mean “no servers.” It means you do not manage the servers. Azure owns the operational responsibilities of provisioning, patching, capacity management, and much of the scaling logic, while you focus on code and configuration.
Managed infrastructure
With Azure Functions, you deploy code into a managed runtime. You do not create or maintain VMs, OS images, or container hosts. You configure settings (runtime, scaling plan, networking options, app settings), and Azure runs your functions on your behalf.
Event-driven execution
Functions are typically triggered by events: an HTTP request, a message arriving on a queue, a timer schedule, a blob upload, or a database change feed. Instead of running continuously, your code runs when an event occurs, and Azure handles dispatching and concurrency.
Consumption-based billing
In the Consumption plan, you pay primarily for executions and compute time (plus any trigger/service costs). In Premium and Dedicated plans, you pay for reserved capacity (instances) whether or not functions are running, but you gain more control and features (for example, networking and predictable performance).
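To make the Consumption cost model concrete, a back-of-envelope estimate can be sketched in a few lines of Python. The rates and free-grant figures below are illustrative assumptions, not current Azure pricing; consult the official pricing page for real numbers.

```python
# Back-of-envelope Consumption-plan cost estimate.
# All rates and free grants below are ILLUSTRATIVE ASSUMPTIONS,
# not current Azure pricing.

PRICE_PER_MILLION_EXECUTIONS = 0.20   # assumed USD per 1M executions
PRICE_PER_GB_SECOND = 0.000016        # assumed USD per GB-second
FREE_EXECUTIONS = 1_000_000           # assumed monthly free grant
FREE_GB_SECONDS = 400_000             # assumed monthly free grant

def monthly_cost(executions: int, avg_ms: float, memory_gb: float) -> float:
    """Estimate the monthly Consumption cost of a single workload."""
    # Compute time is billed in gigabyte-seconds: duration x memory.
    gb_seconds = executions * (avg_ms / 1000.0) * memory_gb
    billable_exec = max(0, executions - FREE_EXECUTIONS)
    billable_gbs = max(0.0, gb_seconds - FREE_GB_SECONDS)
    return (billable_exec / 1_000_000 * PRICE_PER_MILLION_EXECUTIONS
            + billable_gbs * PRICE_PER_GB_SECOND)

# 3M executions/month at 200 ms average with 0.5 GB memory:
# compute time stays inside the free grant, so only executions are billed.
print(round(monthly_cost(3_000_000, 200, 0.5), 2))
```

The useful takeaway from playing with such a model is that idle time costs nothing, while sustained high-volume workloads eventually make reserved capacity (Premium/Dedicated) worth comparing.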
Mapping serverless concepts to Azure Functions components
- Function: the unit of code that runs in response to a trigger (for example, an HTTP-triggered endpoint or a queue-triggered worker).
- Trigger: the event source that starts the function (HTTP, Timer, Service Bus, Storage Queue, Event Grid, etc.).
- Bindings: declarative connections to input/output data sources (for example, read a blob as input, write to a queue as output) to reduce boilerplate integration code.
- Function App: the deployment and management boundary that hosts one or more functions. It is the container for configuration, scaling, networking, and identity.
- Hosting plan: determines how the Function App is run (Consumption, Premium, Dedicated/App Service), affecting scaling behavior, cold starts, and networking capabilities.
- Application settings: environment variables and connection strings used by functions and bindings.
- Identity: Managed Identity for secure access to Azure resources without storing secrets.
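In the classic (non-decorator) programming model, triggers and bindings are declared in a function.json file that sits next to the function code. The sketch below, with assumed queue, path, and connection-setting names, shows a queue trigger paired with a blob output binding:

```json
{
  "bindings": [
    {
      "name": "orderItem",
      "type": "queueTrigger",
      "direction": "in",
      "queueName": "incoming-orders",
      "connection": "OrdersStorageConnection"
    },
    {
      "name": "receiptBlob",
      "type": "blob",
      "direction": "out",
      "path": "receipts/{rand-guid}.json",
      "connection": "OrdersStorageConnection"
    }
  ]
}
```

The `connection` values are names of application settings, not raw connection strings, which is what lets the same code move between environments unchanged.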
Function Apps vs Functions: what is the boundary?
What a Function is
A Function is a single entry point with a trigger and optional bindings. It should be small and focused: validate input, call domain logic, read/write to dependencies, and return/emit a result. Multiple functions can share code via libraries or shared modules within the same project.
What a Function App is
A Function App is the operational unit you deploy. All functions inside a Function App share:
- Hosting plan and scaling (they scale together).
- Runtime version and language worker (for example, Node.js or .NET isolated).
- Configuration (app settings, connection strings, feature flags).
- Networking (VNet integration, private endpoints where applicable).
- Identity (Managed Identity is assigned at the app level).
- Deployment lifecycle (deploying updates affects the whole app).
Practical implication: choosing “one Function App vs multiple Function Apps” is less about code organization and more about operational isolation, scaling characteristics, security boundaries, and deployment independence.
Hosting plans: Consumption, Premium, Dedicated (App Service)
Consumption plan
Best for: bursty workloads, event-driven tasks with variable traffic, cost-sensitive scenarios where idle time should be near-zero cost.
- Billing: pay per execution and for compute measured in gigabyte-seconds (execution duration × memory), with a monthly free grant in many subscriptions.
- Scaling: automatic scale out based on events; Azure adds instances as needed.
- Cold starts: possible. When the app has been idle, the first request/event may incur startup latency.
- Scaling limits: there are platform limits on concurrent instances and throughput; suitable for many workloads but not all high-throughput/low-latency requirements.
- Networking: more limited compared to Premium/Dedicated for advanced private networking scenarios (capabilities vary by region and feature; plan choice often determines feasibility).
Premium plan
Best for: production workloads needing better performance predictability, VNet integration, higher scale, and reduced cold starts.
- Billing: pay for pre-warmed and scaled instances (reserved capacity), not per execution.
- Cold starts: can be minimized using pre-warmed instances (keeping workers ready).
- Scaling: scales out automatically, typically with higher limits and more control than Consumption.
- Networking: supports more advanced networking scenarios (for example, VNet integration) and is commonly chosen when private access to resources is required.
Dedicated plan (App Service plan)
Best for: steady workloads, organizations standardizing on App Service, or when you want Functions alongside web apps/APIs on the same plan.
- Billing: pay for the App Service plan instances (always on), regardless of usage.
- Cold starts: typically avoided when Always On is enabled (depending on runtime and configuration).
- Scaling: manual or autoscale based on App Service rules; not event-driven scaling in the same way as Consumption/Premium.
- Networking: strong App Service networking features; good fit for enterprise network topologies.
What changes across plans: cold starts, scaling, networking
Cold starts (startup latency)
- Consumption: most likely to experience cold starts after idle periods.
- Premium: reduced cold starts via pre-warmed instances; still possible in some scenarios (for example, scaling to new instances).
- Dedicated: typically minimal when Always On is enabled and capacity is provisioned.
Scaling behavior and limits
- Consumption: event-driven scaling; great for spiky traffic but subject to platform concurrency and throughput constraints.
- Premium: event-driven scaling with higher ceilings and more predictable performance due to reserved capacity.
- Dedicated: scale is tied to App Service instances; you control instance count and autoscale rules.
Networking and private access
Plan choice affects whether you can integrate with VNets, use private endpoints, and meet enterprise routing requirements. If your functions must access private resources (for example, a database reachable only inside a VNet), Premium or Dedicated is often the practical choice.
Guided walkthrough: creating a Function App
This walkthrough focuses on the key decisions you make during creation. You can create a Function App in the Azure portal, via Azure CLI, or using Infrastructure as Code. The portal steps below map cleanly to CLI/IaC parameters.
Step 1: Decide the operational basics
- Subscription and Resource Group: choose where the app lives and who manages it.
- Region: pick the closest region to your users and dependencies to reduce latency.
- Name: the Function App name becomes part of the default hostname (for example, `<name>.azurewebsites.net`), which matters for HTTP triggers.
Step 2: Choose the hosting plan
Select one of: Consumption, Premium, or Dedicated. Make this choice early because it affects networking, scaling, and cost model.
- If you need VNet integration/private networking: start by evaluating Premium or Dedicated.
- If you expect unpredictable bursts and want minimal idle cost: Consumption is often the default starting point.
- If you need predictable latency and want to avoid cold starts: Premium is a common compromise.
Step 3: Choose runtime stack and language
Azure Functions supports multiple language workers. Your choice should align with team skills, existing libraries, and operational requirements (startup time, dependency management, and deployment model).
- JavaScript/TypeScript (Node.js): great for I/O-heavy workloads and fast iteration. TypeScript adds compile-time safety; you typically build to JavaScript during CI/CD.
- .NET: strong tooling and performance. Many teams choose .NET isolated worker for modern .NET versions and clearer dependency isolation.
- Python: productive for data processing and automation. Pay attention to dependency size and startup time; use a clean requirements file and consider packaging strategies.
- Java: good fit for JVM ecosystems and existing enterprise libraries. Consider memory footprint and warm-up time; Premium/Dedicated can help with performance predictability.
Step 4: Configure storage and monitoring
- Storage account: many triggers and runtime features rely on a storage account (for example, checkpointing, host state). Choose a general-purpose storage account in the same region when possible.
- Application Insights: enable it to capture logs, traces, failures, and performance metrics. This is essential for diagnosing cold starts, dependency latency, and scaling behavior.
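The decisions in Steps 1 through 4 map directly to a handful of Azure CLI calls. The following is a provisioning sketch for a Consumption-plan app; the resource group, storage account, app name, and region are all illustrative placeholders, and flag values should be checked against the current CLI reference.

```shell
# Resource group and region (illustrative names).
az group create --name rg-fn-demo --location westeurope

# General-purpose storage account in the same region as the app.
az storage account create --name stfndemo123 --resource-group rg-fn-demo \
  --location westeurope --sku Standard_LRS

# Consumption-plan Function App on the Node.js runtime.
az functionapp create --name fn-demo-app --resource-group rg-fn-demo \
  --storage-account stfndemo123 --consumption-plan-location westeurope \
  --runtime node --functions-version 4
```

The same parameters (plan, runtime, storage account, region) appear under different names in Bicep/Terraform templates, so the portal walkthrough transfers cleanly to Infrastructure as Code.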
Step 5: Configure identity and secrets strategy
- Managed Identity: enable a system-assigned identity so the app can access Azure resources securely (for example, Key Vault, Storage, Service Bus) without embedding credentials.
- App settings: store configuration as environment variables. For secrets, prefer references to a secret store rather than raw values.
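One common way to keep raw secrets out of app settings is a Key Vault reference: the setting's value points at a secret in Key Vault, and the app's Managed Identity is granted read access to the vault. The setting name and vault URI below are illustrative:

```
ServiceBusConnection = @Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/servicebus-connection/)
```

To the function code this looks like an ordinary environment variable; rotation and access control stay in Key Vault.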
Step 6: Create the first function (quick validation)
After the Function App exists, create a simple function to validate deployment and configuration. Examples:
- HTTP trigger: confirm routing, authentication level, and response behavior.
- Timer trigger: confirm scheduling and logging.
- Queue trigger: confirm connectivity to messaging and retry behavior.
Keep the first function minimal: log the trigger payload, call a small helper method, and emit a simple output. This verifies that runtime, permissions, and monitoring are wired correctly.
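Setting the trigger wiring aside (in Python that is done with azure.functions decorators or a function.json file), the body of such a minimal first function can be plain, testable code. The sketch below follows the pattern just described; all names are illustrative:

```python
import json
import logging

def validate(payload: dict) -> None:
    """Small helper: reject payloads missing the required 'id' field."""
    if "id" not in payload:
        raise ValueError("payload is missing required field 'id'")

def handle_event(raw_body: bytes) -> dict:
    """Minimal first-function body: log the payload, call a helper,
    emit a simple output.

    In a real Function App this would be invoked from the trigger entry
    point (an HTTP- or queue-triggered handler)."""
    payload = json.loads(raw_body)
    logging.info("received payload: %s", payload)  # surfaces in App Insights
    validate(payload)
    return {"status": "ok", "id": payload["id"]}
```

Because the logic is separated from the trigger, it can be unit-tested locally before the app is ever deployed, which is exactly the verification this step is after.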
Organizing a solution: one Function App vs multiple Function Apps
When one Function App is enough
- Same scaling profile: functions have similar load patterns and can scale together without causing resource contention.
- Same security/networking needs: all functions can share the same VNet integration and access rules.
- Same release cadence: deploying them together is acceptable.
- Shared configuration: they use the same set of app settings and dependencies.
When to split into multiple Function Apps
- Different scaling characteristics: one set is high-throughput (queue workers) while another is latency-sensitive (HTTP APIs). Splitting prevents one workload from affecting the other’s scaling and resource usage.
- Different networking requirements: some functions must run with private access to internal resources; others are public-facing and should remain isolated.
- Different runtime/language needs: you cannot mix incompatible runtime stacks in the same Function App. If one team needs Python and another needs .NET with different worker settings, separate apps simplify operations.
- Different deployment ownership: separate apps allow independent CI/CD pipelines, rollback strategies, and access control.
- Blast radius control: configuration mistakes or deployments in one app should not impact unrelated functions.
Practical patterns for real projects
- By domain boundary: group functions that serve the same business capability (for example, “orders” vs “billing”).
- By trigger type and workload: separate HTTP APIs from background processors if they have different performance and scaling needs.
- By environment sensitivity: isolate functions that handle regulated data with stricter network and identity controls.
Decision points: choosing a plan and defining app boundaries
Plan selection checklist
- Traffic pattern: spiky and unpredictable favors Consumption; steady or latency-sensitive favors Premium/Dedicated.
- Cold start tolerance: if first-hit latency is unacceptable, evaluate Premium (pre-warmed) or Dedicated (Always On).
- Networking: if you need private connectivity to VNets or strict egress control, start with Premium or Dedicated.
- Scale requirements: high concurrency and throughput often push toward Premium or a carefully sized Dedicated plan.
- Cost model preference: per-execution cost (Consumption) vs reserved capacity (Premium/Dedicated).
App boundary checklist
- Do these functions need to scale together? If not, split.
- Do they share the same runtime and dependencies? If not, split.
- Do they share the same security posture and network access? If not, split.
- Do they require independent deployments and ownership? If yes, split.
- Will a single configuration set remain manageable? If app settings become unwieldy or conflicting, split.
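The app-boundary checklist reduces to a simple predicate: if any isolation question answers "different" (or deployments must be independent), split. A throwaway sketch of that rule, with illustrative parameter names:

```python
def should_split(same_scaling: bool, same_runtime: bool,
                 same_security: bool, independent_deploys: bool) -> bool:
    """Return True if the functions belong in separate Function Apps."""
    return (not same_scaling or not same_runtime
            or not same_security or independent_deploys)

# Queue workers vs a latency-sensitive HTTP API:
# different scaling profiles, so split them.
print(should_split(same_scaling=False, same_runtime=True,
                   same_security=True, independent_deploys=False))
```

The point is less the code than the shape of the decision: any single "no" on sharing is enough to justify a separate app, because each dimension (scaling, runtime, security, deployment) is an operational boundary on its own.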