Why a Local Kubernetes Lab Matters for Ingress and Service Mesh Work
A local Kubernetes lab is a reproducible environment you can run on your laptop or workstation to test Ingress controllers, Gateway API resources, and service mesh behavior without waiting on shared clusters or cloud provisioning. The goal is not to mimic production perfectly, but to create a tight feedback loop: apply manifests, observe routing and policy behavior, iterate, and reset quickly. For this course theme, the lab should support multiple namespaces, TLS, load balancer simulation, and enough resources to run an Ingress controller plus a mesh control plane and a few sample services.
Choosing a Local Cluster Option
Several tools can run Kubernetes locally. The best choice depends on your OS, available CPU/RAM, and whether you need built-in load balancer support. Kind (Kubernetes in Docker) is lightweight and excellent for CI-like reproducibility. Minikube is convenient and includes add-ons plus load balancer simulation via the minikube tunnel command. k3d (k3s in Docker) is fast and often simpler for multi-node setups. Docker Desktop Kubernetes is easy to enable but less portable across teams. For portable manifests and Helm charts, the most important factor is that cluster creation is scripted and versioned, so the same cluster topology can be recreated by anyone.
Baseline Lab Goals and Constraints
Before writing manifests or Helm charts, define what “portable” means for your lab. Portability usually includes: no hard-coded node IPs, no reliance on cloud-specific load balancers, minimal assumptions about storage classes, and predictable DNS names. For Ingress testing, you need a stable way to reach the Ingress controller from your host machine. For service mesh testing, you need a predictable way to enable sidecar injection and to observe traffic policies. A practical baseline is: one cluster, two namespaces (apps and platform), an Ingress controller, a mesh, and one or two demo services with HTTP endpoints.
Step-by-Step: Create a Kind Cluster with Ingress-Friendly Port Mappings
Kind is a strong default because it is easy to recreate and works well with GitOps-style workflows. The key for Ingress is mapping host ports 80 and 443 into the Kind control-plane container so you can reach the Ingress controller via localhost. Create a Kind config file and then create the cluster.
cat > kind-ingress.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: lab
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
kind create cluster --config kind-ingress.yaml
kubectl cluster-info --context kind-lab
This mapping allows an Ingress controller Service of type NodePort (or a controller that binds host ports) to be reachable from your browser and curl at http://localhost and https://localhost. If ports 80/443 are already in use on your machine, map them to 8080/8443 instead and adjust your testing URLs.
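If you need the alternate ports, a variant config could look like this (the file name is illustrative); remember to test against http://localhost:8080 and https://localhost:8443 instead.
cat > kind-ingress-alt.yaml <<'EOF'
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: lab
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
EOF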
Step-by-Step: Create Namespaces and a Minimal “Platform” Layout
Keep a consistent namespace layout so manifests and Helm values remain stable. A common pattern is a platform namespace for shared components and an apps namespace for workloads. Apply a small bootstrap manifest that creates namespaces and common labels.
cat > 00-namespaces.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: platform
  labels:
    purpose: platform
---
apiVersion: v1
kind: Namespace
metadata:
  name: apps
  labels:
    purpose: apps
EOF
kubectl apply -f 00-namespaces.yaml
In later steps, you can add network policies, resource quotas, and admission policies, but start with a minimal layout to avoid coupling your lab to features that are not available in every local distribution.
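As one example of such an optional add-on, the sketch below applies a ResourceQuota to the apps namespace; the limits are illustrative, not recommendations. Note that once a quota covers CPU or memory requests, every pod in that namespace must declare those requests or it will be rejected by the quota admission check.
cat > 01-quota.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: apps-quota
  namespace: apps
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    pods: "20"
EOF
kubectl apply -f 01-quota.yaml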
Portable Manifests: Design Principles
Portable manifests are Kubernetes YAML files that behave consistently across environments (local, staging, production) with minimal edits. The main technique is to avoid embedding environment-specific values directly in manifests. Instead, use: ConfigMaps and Secrets for configuration, labels and selectors that don’t depend on generated names, and Service discovery via DNS rather than IPs. Also avoid storage assumptions: if you need persistence, either provide a StorageClass abstraction or make persistence optional via values.
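For example, an environment-specific endpoint can live in a ConfigMap (the names here are illustrative) so the Deployment itself never changes between environments:
cat > echo-config.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: echo-config
  namespace: apps
data:
  UPSTREAM_URL: http://api.apps.svc.cluster.local
EOF
A container then consumes it by adding an envFrom entry with a configMapRef pointing at echo-config; switching environments means swapping the ConfigMap, not the Deployment.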
Use Services and DNS, Not Pod IPs
When you reference another workload, always use the Service DNS name (for example, http://api.apps.svc.cluster.local) rather than Pod IPs. This is essential for portability because Pod IP ranges differ across local clusters and cloud providers.
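To see this resolution in action, you can run a throwaway pod and query cluster DNS. The sketch below assumes the echo Service created later in this lab already exists in the apps namespace; substitute any Service you have deployed.
kubectl run dns-test -n apps --rm -it --restart=Never \
  --image=busybox:1.36 -- nslookup echo.apps.svc.cluster.local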
Prefer ClusterIP Services in Manifests
In local labs, it is tempting to use NodePort everywhere. For portability, keep application Services as ClusterIP and expose them through Ingress or a gateway. This keeps the same manifests valid in cloud clusters where a dedicated load balancer or gateway is used.
Parameterize Hostnames and TLS
Ingress and gateway resources often require hostnames and TLS secrets. In a lab, you might use example.local or localhost-based routing. Make these values configurable so the same chart can be installed with different hostnames in different environments.
Step-by-Step: Deploy an Ingress Controller with Helm (Example: ingress-nginx)
Helm is a practical way to install shared components like Ingress controllers because it packages templates, default values, and upgrade logic. Even if you later manage production via GitOps, Helm charts are often the upstream delivery mechanism. Install ingress-nginx into the platform namespace and configure it for Kind’s port mappings.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace platform --create-namespace \
  --set controller.service.type=NodePort \
  --set controller.hostPort.enabled=true \
  --set controller.hostPort.ports.http=80 \
  --set controller.hostPort.ports.https=443
Different local environments may require different settings. Some setups work better with NodePort, others with hostPort. The portability strategy is to keep the chart installation values in a versioned file (for example, values-kind.yaml) and have a separate values file for other environments (values-minikube.yaml, values-cloud.yaml). That way, the chart stays the same, and only the environment overlay changes.
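If you prefer a file over --set flags, the same settings can live in a values file for the controller. This is a sketch; the file name is illustrative and kept distinct from the app chart overlays used later.
cat > ingress-nginx-values-kind.yaml <<'EOF'
controller:
  service:
    type: NodePort
  hostPort:
    enabled: true
    ports:
      http: 80
      https: 443
EOF
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace platform --create-namespace \
  -f ingress-nginx-values-kind.yaml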
Step-by-Step: Deploy a Sample App with Portable Kubernetes Manifests
To validate your lab, deploy a small HTTP service and expose it through Ingress. The following example uses a simple container that serves HTTP responses. The important part is that the Deployment and Service are environment-agnostic, while the Ingress is parameterized by host and ingressClassName.
cat > 10-echo.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
  namespace: apps
  labels:
    app: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: ealen/echo-server:0.9.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo
  namespace: apps
spec:
  selector:
    app: echo
  ports:
  - name: http
    port: 80
    targetPort: 80
EOF
kubectl apply -f 10-echo.yaml
Now create an Ingress that routes to the Service. In a lab, you can use a host like echo.local and map it in your /etc/hosts file to 127.0.0.1, or you can use a wildcard approach depending on your controller. Keep the hostname configurable in Helm later; for now, use a clear placeholder.
cat > 20-echo-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo
  namespace: apps
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: echo.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
EOF
kubectl apply -f 20-echo-ingress.yaml
Test routing from your host. If you use echo.local, add an /etc/hosts entry pointing echo.local to 127.0.0.1, then run curl.
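On Linux or macOS, the entry can be appended like this (requires sudo; on Windows, edit C:\Windows\System32\drivers\etc\hosts instead):
echo '127.0.0.1 echo.local' | sudo tee -a /etc/hosts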
curl -H 'Host: echo.local' http://localhost/
This Host header technique is useful even without editing /etc/hosts, and it is portable across environments because it tests the Ingress routing logic directly.
Helm for Your Own Workloads: Turning Manifests into a Chart
Once your base manifests work, convert them into a Helm chart so you can parameterize hostnames, image tags, replica counts, resource requests, and optional features like TLS. Helm charts are also a good way to keep “portable defaults” while allowing environment-specific overrides via values files.
Step-by-Step: Create a Chart Skeleton
helm create echo-chart
rm -rf echo-chart/templates/*
Create templates for the Deployment, Service, and Ingress. Use values for the image, replica count, service port, ingress class, and host.
cat > echo-chart/values.yaml <<'EOF'
replicaCount: 1
image:
  repository: ealen/echo-server
  tag: 0.9.2
service:
  port: 80
ingress:
  enabled: true
  className: nginx
  host: echo.local
  path: /
EOF
cat > echo-chart/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
      - name: app
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        ports:
        - containerPort: 80
EOF
cat > echo-chart/templates/service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    app: {{ .Release.Name }}
  ports:
  - name: http
    port: {{ .Values.service.port }}
    targetPort: 80
EOF
cat > echo-chart/templates/ingress.yaml <<'EOF'
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
spec:
  ingressClassName: {{ .Values.ingress.className }}
  rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
      - path: {{ .Values.ingress.path }}
        pathType: Prefix
        backend:
          service:
            name: {{ .Release.Name }}
            port:
              number: {{ .Values.service.port }}
{{- end }}
EOF
Install the chart into the apps namespace and test it the same way as before. If the manifest-based echo resources from earlier steps are still installed, delete them first (kubectl delete -f 10-echo.yaml -f 20-echo-ingress.yaml); Helm refuses to adopt resources it does not own, so the release would otherwise collide with the existing echo Deployment, Service, and Ingress. The key portability win is that you can now change the host or ingress class without editing templates.
helm upgrade --install echo echo-chart --namespace apps
curl -H 'Host: echo.local' http://localhost/
Environment Overlays: Values Files for Kind, Minikube, and “Cloud-Like” Setups
To keep your chart portable, store environment-specific settings in separate values files. For example, Kind might use hostPort and localhost testing, while a cloud environment might use a managed load balancer and a real DNS name. Your application chart should not need to change; only the values should.
cat > values-kind.yaml <<'EOF'
ingress:
  enabled: true
  className: nginx
  host: echo.local
EOF
cat > values-cloud.yaml <<'EOF'
ingress:
  enabled: true
  className: nginx
  host: echo.example.com
EOF
Install with the appropriate overlay.
helm upgrade --install echo echo-chart --namespace apps -f values-kind.yaml
Portable TLS in a Local Lab: Self-Signed Certificates and cert-manager
Ingress and service mesh scenarios often require TLS. In a local lab, you can use self-signed certificates or install cert-manager to issue local certificates. The portability principle is to treat TLS as an optional layer: your chart should work without TLS, and enable TLS via values when needed. If you use cert-manager, keep the Issuer and Certificate resources in a platform chart, and reference the resulting Secret from your app Ingress.
Step-by-Step: Install cert-manager with Helm (Optional)
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace platform --create-namespace \
  --set crds.enabled=true
Then create a self-signed Issuer in the apps namespace and a Certificate for your host. This is lab-friendly and avoids external dependencies.
cat > 30-selfsigned.yaml <<'EOF'
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
  namespace: apps
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: echo-cert
  namespace: apps
spec:
  secretName: echo-tls
  issuerRef:
    name: selfsigned
  dnsNames:
  - echo.local
EOF
kubectl apply -f 30-selfsigned.yaml
Update your Ingress to reference the TLS secret. In Helm, make this conditional via values (ingress.tls.enabled, ingress.tls.secretName).
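Here is a sketch of the chart-side conditional. It assumes you add defaults under the existing ingress block in echo-chart/values.yaml so the template never dereferences a missing key:
ingress:
  tls:
    enabled: false
    secretName: echo-tls
Then, inside the spec: block of echo-chart/templates/ingress.yaml, add:
  {{- if .Values.ingress.tls.enabled }}
  tls:
  - hosts:
    - {{ .Values.ingress.host }}
    secretName: {{ .Values.ingress.tls.secretName }}
  {{- end }}
The plain-manifest equivalent used in this lab looks like this.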
cat > 40-echo-ingress-tls.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-tls
  namespace: apps
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - echo.local
    secretName: echo-tls
  rules:
  - host: echo.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
EOF
kubectl apply -f 40-echo-ingress-tls.yaml
Service Mesh Readiness in a Local Lab: What to Prepare
Even before installing a service mesh, you can prepare your lab for mesh workflows by standardizing labels, namespaces, and traffic entry points. Most meshes rely on namespace labels to enable sidecar injection, and they introduce additional CRDs for traffic policy. Keep your application manifests “mesh-neutral”: do not assume sidecars exist, and do not hard-code ports that conflict with proxy behavior. Also ensure your local cluster has enough resources; meshes can be memory-intensive. A practical approach is to keep the mesh installation in its own Helm release in the platform namespace, and keep app charts independent so you can install them with or without injection enabled.
Step-by-Step: Enable Injection via Namespace Label (Mesh-Dependent)
The exact label depends on the mesh you choose, but the portability pattern is the same: label the namespace rather than modifying every Deployment. For example, you might apply a label like istio-injection=enabled or linkerd.io/inject=enabled. Keep this as a separate, optional manifest so your app chart remains portable.
kubectl label namespace apps istio-injection=enabled --overwrite
When you later test Ingress-to-mesh interactions, you can decide whether the Ingress controller is inside or outside the mesh. In a local lab, it is often simpler to keep the Ingress controller outside the mesh initially, then experiment with meshing it once basic routing works.
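Once a mesh control plane is actually installed, pods must be recreated for injection to take effect. A quick check (Istio shown; the proxy container name differs per mesh) is to restart the workload and list container names, expecting a second container such as istio-proxy:
kubectl rollout restart deployment/echo -n apps
kubectl get pods -n apps -o jsonpath='{range .items[*]}{.metadata.name}: {.spec.containers[*].name}{"\n"}{end}'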
Keeping the Lab Reproducible: Makefile Targets and Scripted Workflows
Portability is not only about YAML; it is also about repeatable commands. A simple Makefile can standardize cluster creation, platform installation, and app deployment. This reduces “it works on my machine” drift and makes it easy to reset the lab when experimenting with Ingress rules or mesh policies.
cat > Makefile <<'EOF'
# NOTE: recipe lines must be indented with a tab character, not spaces.
KIND_CONFIG=kind-ingress.yaml

.PHONY: cluster-up cluster-down platform-up apps-up

cluster-up:
	kind create cluster --config $(KIND_CONFIG)
	kubectl apply -f 00-namespaces.yaml

cluster-down:
	kind delete cluster --name lab

platform-up:
	helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
	helm repo update
	helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
		--namespace platform --create-namespace \
		--set controller.service.type=NodePort \
		--set controller.hostPort.enabled=true \
		--set controller.hostPort.ports.http=80 \
		--set controller.hostPort.ports.https=443

apps-up:
	helm upgrade --install echo ./echo-chart --namespace apps -f values-kind.yaml
EOF
This structure also makes it easier to run the same steps in CI, which is a strong test of portability: if your lab can be created and validated in an automated pipeline, it is likely to be reproducible for other learners and teammates.
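As a sketch of that idea, a GitHub Actions job could create the Kind cluster with the community kind action and drive the same make targets; the action name, inputs, and labels below are assumptions to verify against your CI setup.
mkdir -p .github/workflows
cat > .github/workflows/lab.yaml <<'EOF'
name: lab
on: [push]
jobs:
  lab:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    # Creates a Kind cluster from the same config used locally (assumed action/inputs).
    - uses: helm/kind-action@v1
      with:
        config: kind-ingress.yaml
        cluster_name: lab
    - run: kubectl apply -f 00-namespaces.yaml
    - run: make platform-up apps-up
    # Wait for the ingress-nginx controller, then test routing exactly as on a laptop.
    - run: |
        kubectl wait --namespace platform --for=condition=Ready pod \
          -l app.kubernetes.io/component=controller --timeout=180s
        curl -fsS -H 'Host: echo.local' http://localhost/
EOF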
Common Portability Pitfalls and How to Avoid Them
Local labs often fail to be portable because of hidden dependencies. One common issue is relying on a specific StorageClass name; avoid this by making persistence optional and documenting the required StorageClass in values. Another issue is assuming a LoadBalancer Service works locally; in many local clusters it does not without an add-on. Prefer Ingress with host port mappings or use a local load balancer solution only behind a values flag. A third issue is hard-coding image tags like latest; always pin tags for reproducibility. Finally, avoid relying on kubectl context names or cluster IP ranges in scripts; detect or pass them as variables.
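For the last point, a minimal sketch: derive the context from variables with sensible defaults instead of hard-coding it, so the same script works when someone names their cluster differently.
KIND_CLUSTER_NAME="${KIND_CLUSTER_NAME:-lab}"
KUBE_CONTEXT="${KUBE_CONTEXT:-kind-${KIND_CLUSTER_NAME}}"
kubectl --context "${KUBE_CONTEXT}" get nodes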
Validation Checklist: What to Verify Before Moving On
Before using the lab for more advanced Ingress and service mesh exercises, verify a few basics. Confirm that kubectl can reach the cluster, that the Ingress controller pods are Running, and that you can route HTTP traffic through Ingress using a Host header. If you enabled TLS, confirm that the TLS secret exists and that HTTPS routing works (even if your browser warns about self-signed certificates). Confirm that your Helm releases are idempotent: running helm upgrade --install again should not break anything. Finally, confirm that deleting and recreating the cluster using your scripts reproduces the same working state.
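A scripted version of these checks might look like the sketch below; hostnames, namespaces, and release names match this lab, so adjust them if yours differ, and drop the HTTPS and secret checks if you skipped TLS.
cat > validate.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
kubectl cluster-info
kubectl get pods -n platform -l app.kubernetes.io/name=ingress-nginx
curl -fsS -H 'Host: echo.local' http://localhost/ >/dev/null && echo "HTTP routing OK"
curl -fsSk --resolve echo.local:443:127.0.0.1 https://echo.local/ >/dev/null && echo "HTTPS routing OK"
kubectl get secret echo-tls -n apps >/dev/null && echo "TLS secret present"
helm upgrade --install echo ./echo-chart --namespace apps -f values-kind.yaml >/dev/null && echo "helm upgrade is idempotent"
EOF
chmod +x validate.sh
./validate.sh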