Why “secure edge configuration” matters
Your “edge” is the first component that receives untrusted traffic before it reaches application code: typically an Ingress controller, an API gateway, or a service-mesh ingress gateway. Secure edge configuration means enforcing security and safety controls at that boundary so every request is normalized, constrained, and decorated with the right headers before it can interact with upstream services. This chapter focuses on three high-impact areas you can implement at the edge without changing application code: security headers, redirects, and request limits. Done well, these controls reduce common web risks (clickjacking, MIME sniffing, referrer leakage), prevent accidental insecure access paths (HTTP instead of HTTPS, wrong hostnames), and protect upstreams from abusive or malformed traffic (oversized bodies, header floods, slow uploads).
Security headers: what to set at the edge and why
Security headers are response headers that instruct browsers how to handle content. Setting them at the edge is attractive because it centralizes policy across many services. However, some headers are application-specific (especially Content-Security-Policy). A practical approach is to set a safe baseline at the edge and allow per-service overrides where needed.
Baseline headers you can safely standardize
These are commonly safe defaults for most web apps and static sites. You should still test because some legacy behavior (iframes, mixed content, cross-origin needs) can conflict.
- Strict-Transport-Security (HSTS): forces browsers to use HTTPS for future requests. Typical value: max-age=31536000; includeSubDomains. Add preload only if you understand the irreversible nature of browser preload lists.
- X-Content-Type-Options: nosniff prevents MIME-type sniffing.
- X-Frame-Options: DENY or SAMEORIGIN to reduce clickjacking. If you need fine-grained iframe control, prefer CSP frame-ancestors.
- Referrer-Policy: controls referrer leakage. A common default is strict-origin-when-cross-origin.
- Permissions-Policy: restricts powerful browser features (camera, geolocation). Start with a minimal set and expand as needed.
- Cross-Origin-Opener-Policy and Cross-Origin-Resource-Policy: help isolate browsing contexts and reduce cross-origin data exposure. These can break some integrations; roll out carefully.
Headers that require application awareness
Content-Security-Policy (CSP) is the most powerful browser-side mitigation against XSS, but it is also the easiest to break if you set it blindly. CSP often needs knowledge of script sources, inline scripts, third-party CDNs, and embedded frames. A common pattern is: set CSP per application (or per route) while still enforcing other baseline headers globally at the edge.
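One low-risk way to introduce CSP is to ship the policy in report-only mode first, so violations are logged without blocking anything. The policy value and report endpoint below are illustrative, not a recommendation:

```
Content-Security-Policy-Report-Only: default-src 'self'; img-src 'self' data:; report-uri /csp-reports
```

Once the violation reports go quiet, the same value can be promoted to the enforcing Content-Security-Policy header.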
Implementing headers with NGINX Ingress (practical steps)
NGINX Ingress supports adding headers via annotations and snippets. The exact capabilities depend on whether snippet directives are enabled by your cluster policy. If snippets are disabled (common in hardened clusters), you should use supported annotations or a ConfigMap-driven approach. The examples below show a typical pattern; adapt to your controller’s security posture.
Step 1: Decide which headers are global vs per-Ingress
Make a short policy table: “global baseline” (applies to all apps) and “per-app” (CSP, special cross-origin rules). Global settings reduce drift; per-app settings prevent breaking specialized apps.
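Such a table can live next to your deployment tooling so the baseline is rendered mechanically. A sketch as a Helm-style values file (the schema and names here are assumptions, not a standard):

```yaml
# Hypothetical values file consumed by a shared Ingress template
edgeHeaders:
  global:                       # applied to every Ingress the template renders
    X-Content-Type-Options: "nosniff"
    Referrer-Policy: "strict-origin-when-cross-origin"
    Strict-Transport-Security: "max-age=31536000; includeSubDomains"
  perApp:
    checkout:                   # an app that needs its own CSP
      Content-Security-Policy: "default-src 'self'; frame-ancestors 'none'"
```

Keeping both tiers in one file makes drift visible in code review: a per-app override is an explicit diff, not a hidden annotation.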
Step 2: Add baseline headers on an Ingress
This example adds several baseline headers for a single host. If you manage many Ingress objects, consider templating with Helm or Kustomize overlays so the same header set is applied consistently.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  namespace: prod
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "X-Frame-Options: SAMEORIGIN";
      more_set_headers "Referrer-Policy: strict-origin-when-cross-origin";
      more_set_headers "Permissions-Policy: geolocation=(), microphone=(), camera=()";
      more_set_headers "Strict-Transport-Security: max-age=31536000; includeSubDomains";
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```

Notes: the more_set_headers directive comes from the headers-more module, commonly included in NGINX Ingress builds. If your controller does not support it, you may need to use add_header or controller-specific mechanisms. Also, ensure HSTS is only served over HTTPS; if your edge can still serve HTTP, combine HSTS with an HTTPS redirect (covered later).
Step 3: Allow per-app CSP without losing the baseline
A practical method is to keep the baseline in a shared template and add CSP only where needed. For example, a stricter app might add:
```yaml
nginx.ingress.kubernetes.io/configuration-snippet: |
  more_set_headers "Content-Security-Policy: default-src 'self'; img-src 'self' data:; object-src 'none'; frame-ancestors 'none'";
  more_set_headers "X-Content-Type-Options: nosniff";
  more_set_headers "Referrer-Policy: strict-origin-when-cross-origin";
```

In practice, you will want to avoid duplicating baseline headers in every Ingress. Use your deployment tooling to inject a shared snippet, or use a controller-level configuration if your organization allows it.
Implementing headers with a service-mesh ingress gateway (Envoy-based)
In a service mesh, the ingress gateway is often an Envoy proxy. Envoy can add, remove, or rewrite headers at the edge. The exact resource type depends on the mesh (for example, Istio uses EnvoyFilter or higher-level APIs; other meshes provide their own CRDs). The conceptual steps are consistent: define a filter or policy that injects response headers, and scope it to the gateway listener and hosts you want.
Step-by-step approach (mesh-agnostic)
- Step 1: Identify the gateway workload (labels, namespace) and the listener/port handling external HTTP(S).
- Step 2: Decide whether headers are set at the gateway for all virtual hosts, or only for specific hosts/routes.
- Step 3: Apply a policy that adds response headers (HSTS, nosniff, referrer-policy) and optionally removes sensitive headers (such as Server or internal debug headers).
- Step 4: Validate with a real browser and with curl -I against multiple routes to ensure headers are present and not duplicated.
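With Istio, for example, steps 2 and 3 can be expressed with a VirtualService header policy bound to the gateway. This is a sketch: the gateway name, hostnames, and destination service are assumptions you would replace with your own:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: edge-headers
  namespace: istio-system
spec:
  hosts:
  - "app.example.com"
  gateways:
  - public-gateway            # assumed Gateway resource handling external traffic
  http:
  - headers:
      response:
        set:
          Strict-Transport-Security: "max-age=31536000; includeSubDomains"
          X-Content-Type-Options: "nosniff"
          Referrer-Policy: "strict-origin-when-cross-origin"
        remove:
        - Server              # strip an implementation-revealing header
    route:
    - destination:
        host: web-svc.prod.svc.cluster.local
        port:
          number: 80
```

Because the policy is scoped by hosts and gateways, a second VirtualService can carry a different header set for another hostname without snippets.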
One key advantage of Envoy-based gateways is fine-grained routing: you can apply different header policies per hostname or per path without relying on controller-specific snippets. The tradeoff is operational complexity: you must manage and test filter changes carefully because a misconfigured filter can affect all traffic.
Redirects: enforcing canonical hostnames and HTTPS
Redirects are not just about user experience; they are a security control. You want to eliminate ambiguous entry points such as http:// access, alternate hostnames, or mixed-case paths that bypass caching and security rules. A canonical redirect strategy also reduces the chance that cookies are sent to unintended hosts or that users bookmark insecure URLs.
Common redirect policies at the edge
- HTTP to HTTPS: redirect all plaintext requests to HTTPS. Use 301/308 for permanent redirects; 308 preserves the request method (important for POST), but some older clients may not handle it well.
- Non-canonical host to canonical host: redirect example.com to www.example.com (or the reverse), and redirect deprecated subdomains to the primary one.
- Remove trailing slash or enforce it: choose one style for SEO and caching consistency. Be cautious: APIs often treat /v1/resource and /v1/resource/ differently.
- Block or redirect suspicious paths: for example, redirect /.well-known only if you know what you are doing; otherwise you may break standards-based integrations.
NGINX Ingress: HTTPS redirect and canonical host
Many NGINX Ingress deployments support a simple HTTPS redirect annotation. For canonical host redirects, you can create a dedicated Ingress that matches the alternate host and returns a redirect.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirect-http
  namespace: prod
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80
```

For a host redirect (for example, example.com to app.example.com), a common pattern is to use a "redirect" backend or a snippet that returns 301. If your environment forbids snippets, use a small dedicated redirect service (a minimal container that returns redirects) so policy stays explicit and auditable.
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: redirect-host
  namespace: prod
  annotations:
    nginx.ingress.kubernetes.io/permanent-redirect: https://app.example.com$request_uri
spec:
  ingressClassName: nginx
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dummy
            port:
              number: 80
```

This approach ensures that any request to example.com is redirected before it reaches application code. Keep the redirect list small and intentional; too many redirects can create loops and complicate debugging.
Mesh gateway redirects
With a mesh ingress gateway, redirects are typically implemented at the routing layer: match a host or scheme and respond with a redirect. The operational best practice is to keep redirect logic close to the gateway configuration (virtual hosts/routes) rather than embedding it in application services, so you can update canonicalization without redeploying apps.
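As a sketch of this pattern in Istio, a VirtualService can answer for the non-canonical host with a redirect at the gateway. The hostnames and gateway name below are assumptions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: canonical-redirect
  namespace: istio-system
spec:
  hosts:
  - "example.com"                  # non-canonical host being retired
  gateways:
  - public-gateway
  http:
  - redirect:
      authority: app.example.com   # rewrite the Host to the canonical name
      redirectCode: 301
```

Because the redirect lives in routing configuration, retiring or renaming a hostname is a gateway-level change with no application redeploy.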
Request limits: protecting upstreams from oversized or abusive requests
Request limits constrain how much work the edge will accept on behalf of upstream services. They are both a security control (mitigating some denial-of-service patterns) and a reliability control (preventing accidental large uploads or header explosions from consuming memory and CPU). Limits should be chosen based on real application needs, not arbitrary “small” numbers, and should be paired with clear error responses so clients can adjust.
Key limits to configure
- Maximum request body size: caps upload size. This prevents a client from sending a multi-gigabyte body that ties up connections and buffers.
- Header size and header count: mitigates header-based attacks and protects parsers. Some proxies allow tuning max header bytes and max number of headers.
- Request rate limiting: limits requests per client identity (IP, API key, JWT claim). This is often implemented with a shared state (Redis) or proxy-local token buckets.
- Connection and concurrency limits: caps simultaneous connections or requests to protect upstream pools.
- Timeouts: read timeout, send timeout, and upstream timeouts prevent slowloris-style behavior and hung upstream calls from consuming resources.
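In NGINX Ingress, for instance, header limits are controller-wide settings in the controller ConfigMap rather than per-Ingress annotations. A fragment, with illustrative values:

```yaml
# Fragment of the ingress-nginx controller ConfigMap (data section)
data:
  client-header-buffer-size: "1k"      # buffer size for typical request headers
  large-client-header-buffers: "4 8k"  # number and size of buffers for oversized headers
```

Requests whose headers exceed these buffers are rejected before they reach upstream parsers.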
NGINX Ingress: body size and timeouts (step-by-step)
Start by measuring what your applications actually need: typical upload sizes, maximum expected payload, and longest request duration. Then set limits slightly above those values, and tighten over time.
- Step 1: Set a maximum body size per Ingress (or per path if supported). Example: allow up to 10 MB uploads.
- Step 2: Set proxy timeouts to prevent slow clients and hung upstreams from consuming worker resources.
- Step 3: Validate behavior with a large upload test and confirm the response code (often 413 for body too large).
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: uploads
  namespace: prod
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "10m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "5"
spec:
  ingressClassName: nginx
  rules:
  - host: upload.example.com
    http:
      paths:
      - path: /api/upload
        pathType: Prefix
        backend:
          service:
            name: upload-svc
            port:
              number: 8080
```

Be careful with timeouts: too low will break legitimate slow networks; too high increases exposure to slow-client attacks. If you support large uploads, consider direct-to-object-storage patterns and keep edge limits low for application endpoints.
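To confirm the 10 MB cap from step 3, a quick check can POST a payload just over the limit and inspect the status code. The host and path mirror the example above; expect 413 when the limit is enforced:

```shell
# Build an ~11 MB payload, just over the configured 10 MB cap
head -c 11000000 /dev/zero > /tmp/big.bin

# POST it and print only the HTTP status code (413 means the edge rejected it)
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST --data-binary @/tmp/big.bin \
  "https://upload.example.com/api/upload"
```

Run the same check with a payload just under the limit to confirm legitimate uploads still succeed.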
Rate limiting at the edge
Rate limiting can be implemented in several ways depending on your edge technology. The important design decision is the key used to identify a client. IP-based limits are simple but can penalize users behind NAT or proxies. Identity-based limits (API key, JWT subject) are more accurate but require authentication to happen before the limit is evaluated.
When you implement rate limiting, define: the limit (requests per second/minute), the burst behavior, and the response (usually 429). Also decide whether limits are global (shared across replicas) or local (per proxy instance). Local limits are easier but less precise under load balancing.
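If your edge is NGINX Ingress, a simple local (per-replica), IP-keyed limit can be declared with annotations; the numbers here are illustrative and should come from observed traffic:

```yaml
metadata:
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "10"              # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "3"  # allow short bursts up to 3x the rate
    nginx.ingress.kubernetes.io/limit-connections: "20"      # concurrent connections per client IP
```

Note that this controller rejects over-limit requests with 503 by default rather than 429; check your controller's settings if you need a specific status code, and remember these limits are evaluated independently by each replica.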
Header and request normalization
Normalization reduces ambiguity. At the edge, you can remove or overwrite headers that should not be trusted from the internet, such as X-Forwarded-For, X-Forwarded-Proto, and custom “user identity” headers. The edge should be the only component that sets trusted forwarding headers, and upstream services should only trust them if they come from the edge. Similarly, consider stripping response headers that leak implementation details (for example, a verbose Server header) if your policy requires it.
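With NGINX Ingress, several of these normalization rules are controller-level ConfigMap settings. A sketch (the keys are real ConfigMap options; the values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-forwarded-headers: "false"   # ignore client-supplied X-Forwarded-*; the edge sets its own
  server-tokens: "false"           # drop the NGINX version from the Server header
  hide-headers: "X-Powered-By"     # strip upstream response headers that leak implementation details
```

Only enable use-forwarded-headers when a trusted proxy (such as a cloud load balancer) sits in front of the controller; otherwise clients can spoof their apparent source address.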
Putting it together: a repeatable rollout checklist
Secure edge configuration is easiest to maintain when you treat it like a product: versioned, tested, and rolled out gradually. Use this checklist to implement headers, redirects, and limits without surprising downstream teams.
Step-by-step rollout plan
- Step 1: Inventory entry points: list all public hosts, paths, and gateways. Include "legacy" hostnames that still receive traffic.
- Step 2: Define a baseline header policy: choose HSTS (with a cautious max-age), nosniff, referrer-policy, and a minimal permissions-policy. Decide what to do with X-Frame-Options and whether any apps require framing.
- Step 3: Implement redirects: enforce HTTPS and canonical hostnames. Test for redirect loops and ensure query strings are preserved.
- Step 4: Set request limits: body size, timeouts, and (where feasible) rate limits. Start with observability: log 413/429 responses and tune thresholds.
- Step 5: Validate with automated checks: use a script or CI job that runs curl -I against each host/path and asserts required headers and redirect behavior.
- Step 6: Provide an exception process: some apps will need different CSP, framing, or larger body sizes. Make exceptions explicit, reviewed, and time-bounded.
Example: simple edge validation script
This script checks that HTTPS redirects work and that baseline headers are present. Extend it to cover your full host list.
```bash
#!/usr/bin/env bash
set -euo pipefail

HOST="app.example.com"

# Check redirect from HTTP to HTTPS (match case-insensitively: header casing varies by server)
curl -sI "http://${HOST}/" | grep -iE "^location: https://" >/dev/null

# Check security headers on HTTPS
curl -sI "https://${HOST}/" | grep -i "strict-transport-security" >/dev/null
curl -sI "https://${HOST}/" | grep -i "x-content-type-options: nosniff" >/dev/null
curl -sI "https://${HOST}/" | grep -i "referrer-policy" >/dev/null

echo "Edge checks passed for ${HOST}"
```

Automated checks like this prevent regressions when teams change Ingress annotations, gateway policies, or controller versions. Treat the edge as shared infrastructure: changes should be reviewed with the same rigor as application code because they can affect every request.