How Triggers Define a Function’s Entry Point
A trigger is the event source that starts an Azure Function. Your trigger choice determines how the function receives input (payload shape), how quickly it can react (latency), how it scales (concurrency), and what reliability guarantees you can expect. In practice, you design the trigger first, then shape the function around the trigger’s contract and operational behavior.
This chapter focuses on four common triggers and how to choose between them: HTTP (APIs and webhooks), Timer (scheduled jobs), Queue (background processing), and Event Grid (event routing). For each, you’ll see typical payloads, scaling/concurrency characteristics, security considerations (especially for HTTP), and pitfalls that commonly cause production issues.
HTTP Triggers (APIs and Webhooks)
When to use
- Public or internal APIs (request/response).
- Webhooks from SaaS systems (GitHub, Stripe-like patterns) where an external system calls you.
- Low-latency synchronous operations where the caller expects an immediate result.
Expected payload shape
HTTP triggers receive an HTTP request: method, headers, query string, route parameters, and (optionally) a body. The body is commonly JSON, but can be form data or raw bytes.
POST /api/orders/submit?source=mobile HTTP/1.1
Content-Type: application/json
Authorization: Bearer <token>
{
"orderId": "A123",
"customerId": "C456",
"items": [
{"sku": "SKU-1", "qty": 2},
{"sku": "SKU-2", "qty": 1}
],
"requestedAt": "2026-01-16T10:15:00Z"
}

Design tip: document and validate the request contract explicitly (required fields, types, max sizes). Treat the body as untrusted input.
Authentication and authorization considerations
HTTP triggers are the most exposed entry point, so you must decide how callers authenticate and what they are authorized to do.
- Function-level keys: simple shared secret (via query string or header). Useful for internal services, but not ideal for public clients. Rotate keys and avoid embedding them in client apps.
- Azure AD / Entra ID (OAuth2/JWT): recommended for user or service authentication. Validate audience/issuer and enforce scopes/roles for authorization.
- Front with API Management: centralizes auth, rate limiting, quotas, and request shaping. The function can then trust APIM as a gateway and focus on business logic.
- Webhook signature validation: many providers sign payloads (HMAC). Verify signature and timestamp to prevent replay attacks.
Authorization is not just “is the caller authenticated?”; it’s also “can they perform this action on this resource?” For example, ensure the token’s subject can submit orders for the specified customerId.
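Webhook signature validation from the list above can be sketched as follows. This is a minimal sketch of the generic HMAC pattern: the exact header names, encoding, signing scheme, and tolerance window vary by provider, so the `timestamp.body` payload format and the 300-second window here are illustrative assumptions, not any specific provider's contract.

```python
import hmac
import hashlib
import time

# Illustrative tolerance window -- real providers document their own limits.
REPLAY_TOLERANCE_SECONDS = 300

def verify_webhook(secret: bytes, body: bytes, timestamp: str, signature_hex: str) -> bool:
    """Verify an HMAC-SHA256 webhook signature computed over 'timestamp.body'."""
    # Reject stale timestamps to limit replay attacks.
    if abs(time.time() - int(timestamp)) > REPLAY_TOLERANCE_SECONDS:
        return False
    signed_payload = timestamp.encode() + b"." + body
    expected = hmac.new(secret, signed_payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, signature_hex)
```

Note the use of `hmac.compare_digest` rather than `==`: a naive string comparison returns early on the first mismatched byte, which can leak information to an attacker measuring response times.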
Concurrency characteristics
- HTTP triggers scale out based on incoming request rate and platform heuristics. Multiple instances can handle requests concurrently.
- Within a single instance, multiple requests may be processed in parallel depending on runtime and configuration.
- Because the caller is waiting, long-running work can cause timeouts and poor user experience. Prefer offloading heavy work to a queue and returning quickly (202 Accepted).
Common pitfalls
- Doing heavy work synchronously: leads to timeouts and wasted compute. Pattern: validate + enqueue + return.
- Missing idempotency: clients retry on network failures; without idempotency you may create duplicate side effects. Use an idempotency key (e.g., orderId) and deduplicate.
- Not validating payload size: large bodies can cause memory pressure and slow request handling. Enforce limits and consider blob upload patterns for large content.
- Leaking secrets in URLs: function keys in query strings can be logged by proxies. Prefer headers.
- Insufficient rate limiting: public endpoints can be abused. Use API Management or other gateway controls.
Step-by-step design pattern: HTTP to Queue for reliable background work
- Step 1: Validate required fields, schema, and authorization.
- Step 2: Create a work item with a stable idempotency key (e.g., orderId) and minimal necessary data.
- Step 3: Enqueue the work item to a queue for processing.
- Step 4: Return 202 Accepted with a status URL (optional) for polling.
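Steps 1-4 can be sketched as a framework-agnostic function. This is a minimal sketch, assuming an in-memory list as a stand-in for a real queue client and a `submit_order` entry point; none of these names are Azure SDK calls.

```python
import json
import uuid

# Stand-in for a real queue client (e.g., a storage queue) -- illustrative only.
work_queue: list[str] = []

REQUIRED_FIELDS = {"orderId", "customerId", "items"}

def submit_order(body: bytes, authorized: bool) -> tuple[int, dict]:
    """HTTP-to-queue pattern: validate, enqueue, return 202 quickly."""
    # Step 1: authorization and schema validation.
    if not authorized:
        return 401, {"error": "unauthorized"}
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return 400, {"error": "invalid JSON"}
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        return 400, {"error": f"missing fields: {sorted(missing)}"}
    # Step 2: build a minimal work item; orderId doubles as the idempotency key.
    correlation_id = str(uuid.uuid4())
    work_item = {
        "orderId": payload["orderId"],
        "customerId": payload["customerId"],
        "items": payload["items"],
        "correlationId": correlation_id,
    }
    # Step 3: enqueue for background processing.
    work_queue.append(json.dumps(work_item))
    # Step 4: return immediately; the caller can poll a status URL.
    return 202, {"status": "accepted", "correlationId": correlation_id}
```

The caller gets a fast 202 with a correlation id, and all heavy work happens in the queue-triggered worker.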
// Pseudocode outline
// HTTP function:
// - validate request
// - enqueue { orderId, customerId, ... }
// - return 202 + correlationId

Timer Triggers (Scheduled Jobs)
When to use
- Periodic maintenance tasks (cleanup, aggregation, report generation).
- Polling scenarios (when no event source exists), though event-driven is preferred when possible.
- Time-based workflows (e.g., run every day at 02:00).
Expected payload shape
Timer triggers don’t receive a business payload from an external system. Instead, they provide schedule metadata (e.g., current invocation time, next scheduled time, whether the invocation is late).
// Conceptual timer trigger input
{
"scheduleStatus": {
"last": "2026-01-16T02:00:00Z",
"next": "2026-01-17T02:00:00Z",
"lastUpdated": "2026-01-16T02:00:01Z"
},
"isPastDue": false
}

Design tip: because there is no external payload, you typically load work from a data store (e.g., “all invoices due today”) and process in batches.
Concurrency characteristics
- Timers are schedule-driven; concurrency depends on how long the job takes versus how often it runs.
- If a run overlaps the next scheduled time, you can get overlapping executions unless you design to prevent it.
- Timer jobs often need a “single leader” behavior (only one instance should run the job) or a partitioned approach (each instance processes a distinct slice).
Common pitfalls
- Overlapping runs: if the job takes longer than the interval, you can process the same window twice. Use a distributed lock (e.g., blob lease) or store a checkpoint/watermark.
- Unbounded batch size: loading “everything” can exceed memory/time. Use paging and checkpoints.
- Time zone confusion: schedules are typically expressed in UTC. Convert explicitly and store timestamps in UTC.
- Polling when events exist: if the source can emit events (e.g., storage events), prefer event triggers to reduce cost and improve latency.
Step-by-step design pattern: scheduled batch with watermark
- Step 1: Read watermark (last processed timestamp or ID) from durable storage.
- Step 2: Query next page of items greater than watermark, ordered deterministically.
- Step 3: Process items idempotently and update watermark after each page (or each item for higher safety).
- Step 4: Stop when no more items remain for this run.
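The watermark loop above can be sketched concretely. This is a minimal sketch using an in-memory dict as a stand-in for durable watermark storage and integer ids as items; in a real job the store would be a table or blob and `process` would be idempotent business logic.

```python
# In-memory stand-ins for durable storage -- illustrative assumptions only.
watermark_store = {"watermark": 0}
processed = []

def query(items, after, limit):
    """Return a deterministically ordered page of items above the watermark."""
    return sorted(i for i in items if i > after)[:limit]

def run_scheduled_batch(items, page_size=100):
    """Timer job body: process pages above the watermark, checkpointing per page."""
    watermark = watermark_store["watermark"]  # Step 1: load watermark
    while True:
        page = query(items, after=watermark, limit=page_size)  # Step 2: next page
        if not page:
            break  # Step 4: nothing left for this run
        for item in page:
            processed.append(item)  # Step 3: must be idempotent in a real job
        watermark = page[-1]
        watermark_store["watermark"] = watermark  # checkpoint after each page
```

Because the watermark is persisted after every page, a rerun (or an overlapping run that starts late) picks up where the last one stopped instead of double-counting.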
// Pseudocode outline
// Timer function:
// watermark = load()
// while true:
// page = query(after=watermark, limit=100)
// if empty: break
// for item in page: process(item)
// watermark = page.last.id
// save(watermark)

Queue Triggers (Background Processing)
When to use
- Decouple user-facing requests from heavy work (image processing, document conversion, sending emails).
- Buffer bursts of traffic and smooth throughput.
- Increase reliability with retries and dead-letter handling patterns.
Expected payload shape
Queue messages are typically small JSON documents. Keep them minimal: include identifiers and correlation data, not large blobs.
// Example queue message
{
"jobId": "J789",
"type": "ResizeImage",
"blobUrl": "https://<storage>/images/raw/abc.jpg",
"sizes": [256, 1024],
"requestedBy": "C456",
"correlationId": "b2f1c0..."
}

Design tip: include a stable jobId for idempotency and a correlationId for tracing across components.
Concurrency characteristics
- Queue triggers scale out to process many messages in parallel. Each function instance can process multiple messages concurrently.
- Ordering is not guaranteed in most queue processing patterns. Design as if messages can arrive out of order.
- At-least-once delivery is common: a message may be processed more than once (e.g., due to retries or visibility timeouts). Your handler must be idempotent.
Common pitfalls
- Non-idempotent handlers: duplicates cause double charges, double emails, etc. Use deduplication (jobId) and transactional updates where possible.
- Poison messages: malformed or consistently failing messages can retry forever. Implement a dead-letter/poison strategy (move aside after N attempts) and alert.
- Visibility timeout mismatch: if processing takes longer than the message invisibility window, the message can be picked up by another worker, causing duplicates. Set timeouts appropriately or renew locks where supported.
- Large message payloads: queues have size limits; store large content in blob storage and pass references.
- Assuming exactly-once: build for at-least-once and eventual consistency.
Step-by-step design pattern: reliable worker with idempotency
- Step 1: Parse and validate message schema; if invalid, route to poison handling immediately.
- Step 2: Check idempotency store (e.g., a table keyed by jobId) to see if already completed.
- Step 3: Perform work in a way that can be retried safely.
- Step 4: Mark complete in idempotency store and emit any follow-up events.
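The four steps can be sketched as a handler. This is a minimal sketch assuming an in-memory set as the idempotency store; a real worker would use a durable table keyed by jobId, and the return values ("poison", "duplicate", "done") are illustrative, not a framework contract.

```python
import json

# In-memory idempotency store keyed by jobId -- illustrative stand-in for a durable table.
completed_jobs: set[str] = set()
side_effects: list[str] = []

def handle_queue_message(raw: bytes) -> str:
    """Queue worker: parse, deduplicate by jobId, do the work, mark complete."""
    # Step 1: validate the message schema; malformed messages go to poison handling.
    try:
        msg = json.loads(raw)
        job_id = msg["jobId"]
    except (json.JSONDecodeError, KeyError):
        return "poison"
    # Step 2: check the idempotency store; at-least-once delivery means repeats happen.
    if job_id in completed_jobs:
        return "duplicate"
    # Step 3: retry-safe work goes here.
    side_effects.append(f"processed {job_id}")
    # Step 4: record completion so a redelivered message is a no-op.
    completed_jobs.add(job_id)
    return "done"
```

Note the ordering: work happens before the completion mark, so a crash between the two steps causes a retry (safe, because the work is retryable) rather than a silently lost job.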
// Pseudocode outline
// Queue function:
// msg = parse()
// if completed(jobId): return
// doWork(msg)
// markCompleted(jobId)

Event Grid Triggers (Event Routing)
When to use
- React to events from Azure services (storage events, resource changes) or your own applications.
- Fan-out: route one event to multiple subscribers (multiple functions, workflows, or services).
- Near-real-time event-driven architectures where producers and consumers are loosely coupled.
Expected payload shape
Event Grid delivers events as an array. Each event has metadata plus a data object that depends on the event type. Your function should handle batches and iterate through events.
// Example Event Grid event array
[
{
"id": "e1d2...",
"eventType": "Contoso.Orders.OrderCreated",
"subject": "/orders/A123",
"eventTime": "2026-01-16T10:15:00Z",
"dataVersion": "1.0",
"metadataVersion": "1",
"data": {
"orderId": "A123",
"customerId": "C456",
"total": 42.50
}
}
]

Design tip: treat eventType + dataVersion as part of your contract. Version your event schemas intentionally.
Concurrency characteristics
- Event Grid can deliver events quickly and in bursts; your function must handle spikes.
- Delivery is typically at-least-once; duplicates can occur. Implement idempotency using the event id (or a domain-specific key).
- Events can arrive out of order. If ordering matters, add sequence information in the event data and handle reordering, or use a different mechanism.
Common pitfalls
- Not handling subscription validation: Event Grid performs a validation handshake for webhooks/subscribers. Ensure your endpoint responds correctly to validation events when required.
- Assuming single event per invocation: payload is an array; handle batches.
- Schema drift: changing event data without versioning breaks consumers. Use dataVersion and backward-compatible changes.
- Side effects without idempotency: duplicates lead to repeated actions. Deduplicate by event id.
- Overloading events with large data: keep events small; store large payloads elsewhere and include references.
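The subscription validation handshake from the pitfalls above can be sketched as follows. For webhook-style delivery, Event Grid sends an event of type `Microsoft.EventGrid.SubscriptionValidationEvent` containing a validation code, and the endpoint must echo it back in a `validationResponse` field; the `handle_event_batch` function name and the `{"handled": n}` normal-path response are illustrative assumptions.

```python
def handle_event_batch(events: list[dict]) -> dict:
    """Answer the Event Grid validation handshake before normal processing."""
    for evt in events:
        if evt.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echo the validation code back so Event Grid activates the subscription.
            return {"validationResponse": evt["data"]["validationCode"]}
    # Normal path: hand the batch to business logic (illustrative stub).
    return {"handled": len(events)}
```

Function bindings for Event Grid typically handle this handshake for you; the explicit check matters when you expose a plain HTTP endpoint as an Event Grid webhook subscriber.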
Step-by-step design pattern: event handler with deduplication
- Step 1: Iterate through the event array.
- Step 2: Validate eventType and dataVersion; reject or route unknown versions.
- Step 3: Deduplicate using event id (store processed ids with TTL).
- Step 4: Apply business logic and emit downstream events if needed.
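Steps 1-4 can be sketched as a batch handler. This is a minimal sketch assuming an in-memory set of seen ids (a real consumer would persist them with a TTL) and reusing the Contoso.Orders.OrderCreated contract from the example payload above.

```python
# Supported (eventType, dataVersion) pairs -- the contract from the example above.
SUPPORTED = {("Contoso.Orders.OrderCreated", "1.0")}

seen_ids: set[str] = set()  # a real consumer would store these durably, with a TTL
handled = []

def process_events(events: list[dict]) -> None:
    """Step 1: iterate the batch; steps 2-4: filter, deduplicate, apply logic."""
    for evt in events:
        # Step 2: reject or route unknown types/versions.
        if (evt.get("eventType"), evt.get("dataVersion")) not in SUPPORTED:
            continue
        # Step 3: at-least-once delivery -- drop duplicates by event id.
        if evt["id"] in seen_ids:
            continue
        # Step 4: business logic (and any downstream events) goes here.
        handled.append(evt["data"])
        seen_ids.add(evt["id"])
```

Deduplication happens per consumer: with fan-out, each subscriber keeps its own seen-id store, so one subscriber's failures or retries never affect the others.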
// Pseudocode outline
// Event Grid function:
// for evt in events:
// if !supported(evt.eventType, evt.dataVersion): continue
// if seen(evt.id): continue
// handle(evt.data)
// markSeen(evt.id)

Choosing the Right Trigger: Latency, Throughput, Reliability
Quick selection guide
- HTTP: best for synchronous request/response and webhooks; lowest perceived latency to caller; reliability depends on client retries and your design.
- Timer: best for predictable schedules; latency is bounded by schedule frequency; reliability depends on checkpointing and overlap control.
- Queue: best for buffering and background work; high throughput; strong retry story; at-least-once means idempotency is mandatory.
- Event Grid: best for routing events to multiple consumers; low latency and decoupling; at-least-once delivery and schema versioning are key.
Mini design exercises (choose a trigger)
For each scenario, pick a trigger and justify it based on latency, throughput, and reliability. Then compare with the suggested answer.
Exercise 1: Checkout API that must respond in under 300 ms
Scenario: A mobile app calls “SubmitOrder”. The user must get an immediate response. The actual fulfillment (inventory reservation, sending confirmation email) can take seconds. Peak: 200 requests/second. Reliability: don’t lose orders; duplicates must not create double orders.
- Your choice: Which trigger starts the workflow?
- Design notes: What do you return to the caller? Where do you enforce idempotency?
Suggested answer: Use an HTTP trigger to accept the request, validate/authenticate, then enqueue a work item to a Queue trigger for fulfillment. Return 202 Accepted with an order reference. Enforce idempotency using orderId at the queue worker (and optionally at the HTTP layer).
Exercise 2: Nightly billing aggregation
Scenario: Every night at 02:00 UTC, compute daily totals for all tenants. Data volume is large; the job can take 30–60 minutes. Reliability: must not skip days; reruns must not double-count.
- Your choice: Which trigger?
- Design notes: How do you prevent overlap and double-counting?
Suggested answer: Use a Timer trigger with a watermark (date partition) and idempotent aggregation writes (e.g., upsert per tenant/day). Prevent overlap with a distributed lock or by checking if the day’s aggregation is already finalized before processing.
Exercise 3: Image processing burst after marketing campaign
Scenario: Users upload images; each image must be resized into 5 variants. Upload bursts can reach 5,000 images/minute for 10 minutes. Latency: variants can appear within a few minutes. Reliability: must process all images; retries are acceptable.
- Your choice: Which trigger drives resizing?
- Design notes: How do you handle throughput spikes and avoid duplicate processing?
Suggested answer: Use a Queue trigger (or Event Grid to enqueue) so uploads create messages per image. Queue buffering absorbs spikes; workers scale out. Use idempotency keyed by image id + variant to avoid duplicate outputs.
Exercise 4: Notify multiple systems when an order ships
Scenario: When an order is shipped, you must notify analytics, CRM, and a customer notification service. Throughput: steady 50 events/second. Latency: seconds. Reliability: each subscriber should receive the event even if others fail.
- Your choice: Which trigger and why?
- Design notes: How do you version the event and handle duplicates?
Suggested answer: Use Event Grid trigger with an OrderShipped event. Each subscriber is independent, enabling fan-out. Include eventType and dataVersion; deduplicate using Event Grid event id in each consumer.
Exercise 5: External SaaS webhook with occasional retries
Scenario: A SaaS provider calls your endpoint when a payment succeeds. They retry up to 10 times on non-2xx responses. Latency: respond quickly. Reliability: must not process the same payment twice. Security: verify the call is authentic.
- Your choice: Which trigger?
- Design notes: How do you authenticate and ensure idempotency?
Suggested answer: Use an HTTP trigger to receive the webhook, validate the signature/timestamp, then enqueue to a queue for processing. Return 200 quickly after validation (or 202 if the provider accepts it). Deduplicate by payment id in the worker.