What “edge routing” and “internal routing” mean in a mesh-integrated cluster
Concept: Edge routing is how traffic enters your platform from outside the cluster (internet, corporate WAN, partner networks). Internal routing is how traffic moves between internal networks, clusters, or segments (for example, between namespaces, between VPCs, or between Kubernetes clusters) while still being governed by the mesh’s routing rules.
Why you need both: In Kubernetes, an Ingress controller (or Gateway API controller) commonly owns north-south entry. A service mesh commonly owns east-west service-to-service traffic. When you integrate them, you can apply consistent routing decisions at the edge (host/path-based routing, header-based routing, canary splits, blue/green, failover between backends) and also control how traffic is forwarded internally through mesh gateways (ingress gateways, east-west gateways, or dedicated internal gateways).
Key idea: Treat “edge” and “internal” as different trust and routing zones. Edge routing terminates external protocols and selects an internal destination. Internal routing uses mesh gateways to move traffic across boundaries (namespace, cluster, network) while keeping routing logic centralized and observable.
Gateway roles: Ingress controller, mesh ingress gateway, and internal mesh gateways
Ingress controller: A Kubernetes component (like NGINX Ingress, HAProxy Ingress, Traefik, or a Gateway API implementation) that watches Ingress/Gateway resources and configures a data plane to accept external traffic. It typically handles TLS termination, host/path routing, and L7 features.
Mesh ingress gateway: A mesh-managed proxy deployment (often Envoy-based) that acts as the entry point into the mesh. It can terminate TLS, apply routing rules, and forward traffic to mesh workloads. It is configured by mesh resources (for example, Istio Gateway/VirtualService, Linkerd Server/HTTPRoute, Consul API Gateway, etc.).
Internal mesh gateways: Gateways dedicated to internal boundaries. Common patterns include an “east-west gateway” for cross-cluster traffic, a “shared services gateway” for accessing platform services, or a “namespace gateway” that enforces how other namespaces reach a team’s services. These gateways are not internet-facing but are crucial for routing across networks and clusters.
Integration options: You can (1) let the Ingress controller forward to a mesh ingress gateway, (2) replace the Ingress controller with the mesh gateway as the edge, or (3) use Kubernetes Gateway API to unify configuration so that the same API drives both edge and mesh routing.
Reference architecture: two-stage routing (edge selection, then mesh routing)
Two-stage routing is a practical way to avoid configuration sprawl. Stage 1 is edge selection: choose the correct internal “entry service” based on host/path and possibly headers. Stage 2 is mesh routing: once traffic is inside the mesh, route to the correct workload version, subset, or cluster-local destination.
- Stage 1 (Ingress): External client → Ingress controller → mesh ingress gateway Service (ClusterIP) or directly to a gateway Deployment via Service.
- Stage 2 (Mesh): mesh ingress gateway → internal services (with mesh routing rules) → optional internal gateway for cross-cluster or cross-network hops.
Why this helps: Your Ingress config stays stable (mostly host/path mapping), while mesh routing rules handle application rollout logic and internal topology changes. You also get a clean place to apply edge-specific concerns like WAF, DDoS protection, or global rate limiting (without re-litigating internal routing each time).
Step-by-step: Integrating a Kubernetes Ingress with a mesh ingress gateway
Goal: Keep your existing Ingress controller, but have it forward traffic to the mesh ingress gateway so that the mesh controls the final routing to services.
Step 1: Deploy or enable the mesh ingress gateway
Most meshes provide a gateway component as a Deployment with a Service of type LoadBalancer or NodePort. In this integration pattern, you usually expose it as a ClusterIP (internal) and let the Ingress controller be the public entry. Ensure the gateway pods run in a dedicated namespace (for example, mesh-gateways) and are configured to accept traffic for your domains.
Step 2: Create an internal Service for the mesh gateway
Create a Kubernetes Service that targets the gateway pods. This Service is what the Ingress controller will forward to. Example (generic Kubernetes Service):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: mesh-ingress-gateway
  namespace: mesh-gateways
spec:
  type: ClusterIP
  selector:
    app: mesh-ingress-gateway
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
```

Pick ports that match your gateway container ports. If your gateway expects plaintext HTTP from the Ingress controller, you can forward only port 80 and terminate TLS at the Ingress controller. If you want TLS passthrough to the mesh gateway, forward port 443 and configure passthrough at the Ingress controller.
Step 3: Configure the Ingress to route to the mesh gateway Service
At the edge, route by host/path to the mesh gateway Service. Example (Kubernetes Ingress):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge-ingress
  namespace: edge
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mesh-ingress-gateway
            port:
              number: 80
```

This makes the Ingress controller a “traffic director” that forwards everything for api.example.com to the mesh gateway. You can add additional hosts and paths that map to different mesh gateways (for example, one gateway per environment or per tenant).
Step 4: Configure mesh routing on the gateway
Now define mesh gateway routing rules that map host/path to internal services. The exact resource names differ by mesh, but the idea is consistent: bind a listener (host, port, protocol) and attach routes that forward to Kubernetes Services.
Example (Istio-style Gateway + VirtualService):
```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: public-gw
  namespace: mesh-gateways
spec:
  selector:
    app: mesh-ingress-gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - api.example.com
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api-route
  namespace: app
spec:
  hosts:
  - api.example.com
  gateways:
  - mesh-gateways/public-gw
  http:
  - match:
    - uri:
        prefix: /v1/
    route:
    - destination:
        host: api.app.svc.cluster.local
        port:
          number: 8080
```

Notice the separation: the Ingress only forwards to the gateway; the mesh decides which internal service receives /v1/. You can add additional matches for /v2/, admin paths, or header-based routing for specific clients.
Step 5: Decide where TLS terminates
TLS at Ingress: The Ingress controller terminates TLS and forwards HTTP to the mesh gateway. This is simple and works well if you already manage certificates at the edge. Ensure you preserve the original host and client IP information (for example, X-Forwarded-For, X-Forwarded-Proto) so the mesh gateway can make correct routing and logging decisions.
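As a concrete illustration with ingress-nginx (assuming that is your edge controller; the ConfigMap name and namespace depend on how it was installed), the controller can be told to trust and propagate X-Forwarded-* headers when another trusted proxy sits in front of it:

```yaml
# Hedged ingress-nginx sketch: trust and propagate X-Forwarded-* headers.
# Enable only when a trusted proxy or load balancer fronts the controller,
# otherwise clients can spoof these headers.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on your install
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
  compute-full-forwarded-for: "true"
```

With these options, the mesh gateway sees the original client IP and protocol rather than only the Ingress controller's address.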
TLS at mesh gateway: The Ingress controller does TCP/TLS passthrough to the mesh gateway, which terminates TLS. This centralizes TLS policy in the mesh gateway and can simplify multi-cluster setups where the mesh gateway is the consistent entry point. It requires your Ingress controller to support passthrough and your mesh gateway to be configured with certificates.
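For ingress-nginx, passthrough is opt-in: the controller must run with the --enable-ssl-passthrough flag, and the Ingress resource carries an annotation. A hedged sketch (resource names are illustrative):

```yaml
# Hedged ingress-nginx sketch: TLS passthrough to the mesh gateway.
# Requires the controller to be started with --enable-ssl-passthrough.
# Passthrough routing is based on SNI, so L7 path matching does not apply here.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: edge-passthrough
  namespace: edge
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  ingressClassName: nginx
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mesh-ingress-gateway
            port:
              number: 443   # forward encrypted traffic to the gateway's TLS port
```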
Internal routing with dedicated mesh gateways (namespace, shared services, and cross-cluster)
Concept: Internal gateways are used when traffic must cross a boundary where you want explicit control: different clusters, different networks, different namespaces with strict ownership, or a shared platform services zone. Instead of letting any pod talk directly to any other pod IP, you force traffic through a gateway that becomes the “choke point” for routing decisions and observability.
Pattern 1: Namespace gateway for team-owned services
In a multi-team cluster, you may want other namespaces to access a team’s services only through a stable gateway address. This allows the team to change internal service names, versions, or ports without breaking consumers, and it creates a single place to apply routing rules for that team’s APIs.
How it works: Consumers call team-a-gateway.platform.svc. The gateway routes to team-a services based on host/path. You can also expose multiple “virtual hosts” on the same internal gateway to represent different APIs.
Practical steps:
- Deploy a gateway Deployment in the team namespace or a shared gateways namespace.
- Create a ClusterIP Service for it (internal only).
- Publish a DNS name (internal) that points to that Service.
- Define mesh routes that map host/path to the team’s services.
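The last step might look like the following Istio-style sketch (the hostname team-a.internal, the gateway reference, and the backing service are illustrative assumptions):

```yaml
# Hedged sketch: routes on a team namespace gateway. Consumers call the
# stable host team-a.internal; the team can re-point backends freely.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: team-a-api
  namespace: team-a
spec:
  hosts:
  - team-a.internal
  gateways:
  - platform/team-a-gw       # internal gateway in a shared gateways namespace
  http:
  - match:
    - uri:
        prefix: /users
    route:
    - destination:
        host: users-v2.team-a.svc.cluster.local   # current backend, hidden from consumers
        port:
          number: 8080
```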
Pattern 2: Shared services gateway (platform APIs)
Platform services like authentication, billing, feature flags, or internal admin APIs often need controlled access. A shared services gateway can present a curated set of routes to these services, while hiding everything else in the platform namespace.
Practical steps:
- Create a dedicated gateway for shared services.
- Expose only the required listeners and hosts (for example, auth.platform.internal and flags.platform.internal).
- Route to the backing services by path or host.
- Optionally require specific headers for routing to admin endpoints (for example, route /admin only when a header is present).
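The header-gated admin route can be sketched in Istio-style resources as follows (the header name x-admin-access and the service names are illustrative; header matching is a routing convenience, not a security control on its own):

```yaml
# Hedged sketch: route /admin only when a specific header is present;
# all other traffic falls through to the regular backend.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: auth-routes
  namespace: platform
spec:
  hosts:
  - auth.platform.internal
  gateways:
  - platform/shared-services-gw
  http:
  - match:
    - uri:
        prefix: /admin
      headers:
        x-admin-access:
          exact: "true"
    route:
    - destination:
        host: auth-admin.platform.svc.cluster.local
        port:
          number: 8080
  - route:                     # default route for everything else
    - destination:
        host: auth.platform.svc.cluster.local
        port:
          number: 8080
```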
Pattern 3: Cross-cluster routing with east-west gateways
When you have multiple clusters (for example, per region or per environment), you typically want service-to-service calls to be able to reach remote services without exposing every service publicly. An east-west gateway is the internal entry point for cross-cluster traffic. Workloads in Cluster A send traffic destined for Cluster B to Cluster B’s east-west gateway, which then routes to the target service.
Practical steps (conceptual):
- Deploy an east-west gateway in each cluster.
- Expose it via an internal load balancer or private network address (not internet-facing).
- Configure service discovery so that services can be resolved across clusters (mesh-specific approach).
- Define routing rules so that requests for remote service hostnames are forwarded to the appropriate east-west gateway.
Operational tip: Keep cross-cluster routing rules minimal and stable. Prefer routing by service identity (service DNS name) rather than by pod IPs or ephemeral endpoints.
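One mesh-specific way to express the "route remote hostnames to the remote east-west gateway" step is an Istio-style ServiceEntry; this follows the older multi-cluster pattern of a synthetic cross-cluster hostname, and the address and port shown are assumptions about your environment:

```yaml
# Hedged sketch: resolve a remote service's stable name to Cluster B's
# east-west gateway. 10.20.0.15 and port 15443 are assumed values.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: orders-cluster-b
  namespace: app
spec:
  hosts:
  - orders.app.global          # stable cross-cluster name, not a public DNS zone
  location: MESH_INTERNAL
  ports:
  - number: 8080
    name: http
    protocol: HTTP
  resolution: STATIC
  endpoints:
  - address: 10.20.0.15        # Cluster B east-west gateway (internal load balancer)
    ports:
      http: 15443              # gateway's cross-network port
```

Note how this routes by service identity (a DNS name) rather than pod IPs, as the operational tip above recommends.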
Ingress integration strategies: choose based on ownership and change frequency
Strategy A: Ingress owns edge, mesh owns internal: The Ingress controller is the only public entry. It forwards to one or more mesh gateways. This is a good fit when a platform team owns edge configuration and application teams own mesh routing rules.
Strategy B: Mesh gateway is the edge: You expose the mesh ingress gateway directly as a LoadBalancer and manage edge routing in mesh resources. This reduces components but shifts certificate and edge routing ownership into the mesh configuration.
Strategy C: Gateway API as the unifying layer: Use Kubernetes Gateway API (Gateway, HTTPRoute, TCPRoute) so that teams define routes in a consistent Kubernetes-native way. Depending on your controllers, the same API can configure the edge gateway and mesh gateways. This reduces the “two different CRD worlds” problem (Ingress vs mesh-specific CRDs).
Step-by-step: Using Gateway API to connect edge Gateway to a mesh gateway backend
Goal: Use Gateway API at the edge, but forward to a mesh gateway Service as the backend. This keeps edge routing in a standardized API while still letting the mesh handle internal routing.
Step 1: Create an edge Gateway
This is implemented by your edge controller (for example, an Envoy-based gateway, NGINX, or cloud provider gateway). Example:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: edge-gateway
  namespace: edge
spec:
  gatewayClassName: edge-gw-class
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    hostname: api.example.com
```

Step 2: Create an HTTPRoute that forwards to the mesh gateway Service
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-to-mesh
  namespace: edge
spec:
  parentRefs:
  - name: edge-gateway
  hostnames:
  - api.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: mesh-ingress-gateway
      namespace: mesh-gateways
      port: 80
```

At this point, Gateway API effectively replaces the Ingress resource for edge routing. The mesh gateway still decides how to route to internal services. Note that a cross-namespace backendRef like this one typically requires a ReferenceGrant in the target namespace (mesh-gateways) that permits the route to reference the Service.
Step 3: Define mesh gateway routes for internal services
Use your mesh’s routing resources (or, if supported, mesh-aware Gateway API routes) to map api.example.com and paths to internal services. Keep the edge route broad (forward all) and the mesh routes specific (service mapping and release logic).
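If your mesh supports mesh-aware Gateway API routes (the GAMMA pattern), the internal mapping can stay in the same API family: the parentRef is a Service rather than a Gateway, so the route governs in-mesh traffic addressed to that Service. A hedged sketch (service names are illustrative; support varies by mesh):

```yaml
# Hedged GAMMA-style sketch: an HTTPRoute attached to a Service, governing
# mesh traffic sent to that Service.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-mesh-route
  namespace: app
spec:
  parentRefs:
  - kind: Service
    group: ""                # core API group for Service
    name: api
  rules:
  - backendRefs:
    - name: api-v2           # illustrative rollout target
      port: 8080
```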
Practical routing examples that combine edge and internal gateways
Example 1: One domain, multiple internal APIs via mesh gateway
Scenario: You expose api.example.com externally, but internally you have multiple services: users, orders, catalog. You want the edge to forward everything to the mesh gateway, and the mesh gateway to split by path.
Implementation approach:
- Edge: a single host rule for api.example.com → mesh gateway Service.
- Mesh gateway: routes /users → users service, /orders → orders service, /catalog → catalog service.
Benefit: You can add a new API path without touching the edge configuration, as long as it stays under the same host and gateway listener.
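The path split described above can be sketched as an Istio-style VirtualService (service names, namespace, and ports are illustrative):

```yaml
# Hedged sketch: one external host fanned out to three internal services
# by path prefix. Adding /payments later touches only this resource.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: api-split
  namespace: app
spec:
  hosts:
  - api.example.com
  gateways:
  - mesh-gateways/public-gw
  http:
  - match:
    - uri:
        prefix: /users
    route:
    - destination:
        host: users.app.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /orders
    route:
    - destination:
        host: orders.app.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /catalog
    route:
    - destination:
        host: catalog.app.svc.cluster.local
        port:
          number: 8080
```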
Example 2: Internal gateway as a stable contract for consumers
Scenario: Team A provides an internal API consumed by many namespaces. You want consumers to call a stable internal hostname, while Team A can reorganize services behind it.
Implementation approach:
- Create a team-a-gateway Service in a shared namespace.
- Publish the internal DNS name team-a.internal pointing to that Service.
- Define routes on the gateway: team-a.internal + / → current backend service(s).
Benefit: Consumers do not need to know whether Team A split a monolith into microservices or changed ports; the gateway remains the contract.
Example 3: Cross-cluster failover using internal gateways
Scenario: You run two clusters in different regions. External traffic enters Region A, but some services live in Region B or need to be reachable if Region A’s service is unavailable.
Implementation approach:
- Edge routes external traffic to Region A mesh ingress gateway.
- Mesh ingress gateway routes to local service by default.
- For specific paths or hosts, route to a remote service via Region B’s east-west gateway (private address), then to the service in Region B.
Operational note: Keep the “remote hop” explicit in configuration and observability. You want to be able to see when traffic leaves the local cluster and why.
Configuration boundaries and ownership: avoiding a routing “tug of war”
Common pitfall: Both the Ingress controller and the mesh gateway can do L7 routing. If you split routing logic arbitrarily, debugging becomes difficult (“Is the path rewrite happening at the edge or in the mesh?”).
Practical guideline: Put stable, coarse-grained routing at the edge (hostnames, top-level paths, tenant selection). Put dynamic, application-specific routing inside the mesh (service mapping, version selection, internal topology). This reduces coordination overhead between platform and app teams.
Change frequency heuristic:
- If it changes weekly or daily during releases, prefer mesh routing resources.
- If it changes rarely and affects public entry, prefer edge routing resources.
Debugging and verification workflow for edge-to-mesh routing
Verify the edge forwards correctly
Start by confirming that the edge component (Ingress or Gateway API controller) is sending traffic to the mesh gateway Service and preserving the host header. Use a simple request and check response headers or logs at the mesh gateway.
```shell
curl -H 'Host: api.example.com' http://<edge-address>/v1/health
```

If the mesh gateway routes by host, an incorrect host header will cause 404s or default backend responses.
Verify the mesh gateway receives and matches routes
Check the mesh gateway logs and route configuration. Most meshes provide a way to dump the proxy config (for example, Envoy config dump) to confirm that the expected virtual hosts and routes are present.
Typical symptoms and what they mean:
- 404 at gateway: host/path match not found in mesh routing rules.
- 503 at gateway: backend service endpoints not available or wrong port.
- Unexpected backend: overlapping route matches; order or specificity needs adjustment.
Verify internal gateway hops (when used)
If you route through an internal gateway (namespace gateway or east-west gateway), verify each hop independently: edge → mesh ingress gateway, mesh ingress gateway → internal gateway, internal gateway → service. This is easier if each gateway adds a distinct header (for example, X-Gateway-Hop) so you can see the path of the request in logs.
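With Gateway API, stamping a hop header can be done with a standard route filter; a hedged sketch (the header name X-Gateway-Hop and the resource names are illustrative):

```yaml
# Hedged sketch: a RequestHeaderModifier filter marks requests that passed
# through this gateway, making the hop visible in downstream logs.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: api-to-mesh
  namespace: edge
spec:
  parentRefs:
  - name: edge-gateway
  rules:
  - backendRefs:
    - name: mesh-ingress-gateway
      namespace: mesh-gateways
      port: 80
    filters:
    - type: RequestHeaderModifier
      requestHeaderModifier:
        set:
        - name: X-Gateway-Hop
          value: edge-gateway
```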
Design checklist: building a clean edge/internal routing model
- Define zones: public edge, internal shared services, team namespaces, cross-cluster networks.
- Choose gateway types per zone: edge Ingress/Gateway API, mesh ingress gateway, internal gateways.
- Decide TLS termination points: edge vs mesh gateway; document it per domain.
- Standardize hostnames: external domains and internal DNS names should be predictable and owned.
- Minimize duplicated routing logic: keep edge coarse, mesh fine-grained.
- Plan for multi-cluster: if cross-cluster is in scope, deploy east-west gateways early and keep routes explicit.