Why TLS termination matters in Kubernetes
TLS (Transport Layer Security) is the mechanism that encrypts HTTP traffic so clients can use HTTPS. In Kubernetes-based web serving, TLS is typically terminated at an edge component (often an Ingress controller or a service mesh gateway) so that the cluster can present a trusted certificate to the outside world. “Termination” means the TLS session is established at that component: it decrypts incoming traffic, optionally applies routing and security policies, and then forwards the request to upstream services either as plain HTTP or as TLS again (re-encryption). Terminating TLS at the edge simplifies certificate management because you manage certificates in one place rather than inside every application container.
TLS termination also enables consistent security controls: you can enforce modern cipher suites, redirect HTTP to HTTPS, attach security headers, and implement mutual TLS (mTLS) at the boundary when needed. The operational challenge is that certificates expire, renewals must be timely, and private keys must be protected. This is where cert-manager fits: it automates certificate issuance and renewal using Kubernetes-native resources.
What cert-manager does (and what it does not)
cert-manager is a Kubernetes controller that manages X.509 certificates as first-class Kubernetes objects. You declare what certificate you want (domain names, issuer, validity, key type), and cert-manager obtains it from an issuer (for example, Let’s Encrypt, an internal CA, or a cloud provider CA), stores it in a Kubernetes Secret, and keeps it renewed before expiration. It integrates with common ACME flows (HTTP-01 and DNS-01 challenges) and supports self-signed and CA-based issuers for internal environments.
cert-manager does not replace your Ingress controller or service mesh gateway; it supplies the certificate material those components reference. It also does not automatically “make your app HTTPS” by itself: you still configure your Ingress or Gateway to use the Secret containing the TLS keypair. Finally, cert-manager is not a full PKI suite; it automates issuance/renewal but relies on an issuer backend for trust and policy.
Core resources: Issuer, ClusterIssuer, Certificate, and Secrets
cert-manager introduces several custom resources. An Issuer is namespaced and can issue certificates only within its namespace. A ClusterIssuer is cluster-scoped and can issue certificates across namespaces. A Certificate declares the desired certificate and points to an Issuer or ClusterIssuer. cert-manager then creates or updates a Kubernetes Secret (type kubernetes.io/tls) containing tls.crt and tls.key. Your Ingress or Gateway references that Secret to terminate TLS.
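For example, once a Certificate has been issued you can confirm that the managed Secret has the expected type and data keys (the names web-tls and your-namespace are the illustrative names used later in this section):

# Expect type kubernetes.io/tls with tls.crt and tls.key data keys.
kubectl -n your-namespace get secret web-tls -o jsonpath='{.type}{"\n"}'
kubectl -n your-namespace describe secret web-tls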
In practice, many teams use a ClusterIssuer for Let’s Encrypt (so any namespace can request public certificates) and separate Issuers for internal CAs in restricted namespaces. This separation helps enforce policy: who can request public certs, which domains are allowed, and which challenge method is used.
Installing cert-manager (Helm-based)
cert-manager is commonly installed via Helm and requires CustomResourceDefinitions (CRDs). The exact commands vary by environment, but the key points are: install CRDs, deploy the controller components, and ensure it has permissions to create Secrets and manage challenge resources. After installation, verify that the cert-manager pods are running and that the CRDs exist.
Step-by-step: install and verify
- Install the cert-manager CRDs and chart into a dedicated namespace (often cert-manager).
- Wait for the deployments: cert-manager, cert-manager-webhook, and cert-manager-cainjector.
- Confirm that CRDs like certificates.cert-manager.io and clusterissuers.cert-manager.io are present.
# Example (illustrative) Helm install flow; adapt versions and repo as needed.
helm repo add jetstack https://charts.jetstack.io
helm repo update
kubectl create namespace cert-manager
helm install cert-manager jetstack/cert-manager -n cert-manager --set crds.enabled=true
kubectl -n cert-manager get pods
kubectl get crds | grep cert-manager

Operational note: the webhook is critical because it validates and mutates cert-manager resources. If the webhook is not healthy, certificate requests may fail with admission errors.
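One quick way to check the webhook is healthy (the deployment and namespace names below are the chart defaults):

# Wait for the webhook deployment to become available, then confirm its
# admission webhook configuration is registered with the API server.
kubectl -n cert-manager rollout status deploy/cert-manager-webhook
kubectl get validatingwebhookconfigurations | grep cert-manager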
Choosing an issuer strategy
Before creating certificates, decide how they will be issued. For public internet domains, Let’s Encrypt via ACME is common. For internal-only services, you may prefer an internal CA (either a corporate PKI integrated via CA Issuer or a dedicated internal CA). The issuer choice affects trust distribution: public CAs are trusted by browsers, while internal CAs require distributing the root certificate to clients.
Also decide the challenge type. ACME HTTP-01 proves domain control by serving a token over HTTP at /.well-known/acme-challenge/. This is simple when your Ingress is internet-reachable and can route that path. ACME DNS-01 proves control by creating a DNS TXT record; it works even when services are not publicly reachable and is often preferred for wildcard certificates. DNS-01 requires credentials to modify DNS records, so it introduces secrets management considerations.
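As a rough illustration of what each challenge type actually checks (the hostname and token below are placeholders):

# HTTP-01: the CA fetches a token over plain HTTP on port 80 at a well-known path.
curl -sI http://app.example.com/.well-known/acme-challenge/placeholder-token

# DNS-01: the CA looks up a TXT record under the _acme-challenge label.
dig +short TXT _acme-challenge.app.example.com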
Public certificates with Let’s Encrypt using HTTP-01
HTTP-01 is a good starting point when you have a public hostname that routes to your cluster. cert-manager will create temporary challenge resources (often an Ingress) so that Let’s Encrypt can reach the token endpoint. Once validated, cert-manager stores the issued certificate in a Secret and keeps it renewed.
Step-by-step: create a ClusterIssuer for Let’s Encrypt (staging first)
Use Let’s Encrypt staging to avoid rate limits while you validate configuration. After it works, switch to production. The ClusterIssuer below uses ACME with an email address and a private key Secret used to register the ACME account.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    email: ops@example.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          class: nginx

The ingress.class (or the newer ingressClassName behavior, depending on your controller) must match the Ingress controller that will serve the challenge. If you run multiple controllers, be explicit to avoid the challenge being created for the wrong one.
Step-by-step: request a certificate with a Certificate resource
Create a Certificate in the same namespace as your application. It specifies the DNS names and the Secret name where the keypair will be stored.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: web-cert
spec:
  secretName: web-tls  # Secret created/managed by cert-manager
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  dnsNames:
  - app.example.com

After applying, cert-manager creates a CertificateRequest, performs the ACME challenge, and eventually populates web-tls. You can watch the status conditions to see progress.
kubectl -n your-namespace get certificate web-cert -o wide
kubectl -n your-namespace describe certificate web-cert
kubectl -n your-namespace get secret web-tls -o yaml

Step-by-step: configure your Ingress to use the TLS Secret
Your Ingress references the Secret under spec.tls. The host must match the certificate’s DNS name. Many controllers also support annotations to force HTTPS redirects; use controller-specific settings as appropriate.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: web-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80

At this point, the Ingress controller terminates TLS using web-tls. When cert-manager renews the certificate, it updates the Secret and the controller reloads it (most controllers watch Secrets and reload automatically).
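To spot-check the result, you can inspect the certificate the edge actually presents (keep -k while you are still on the staging issuer, since staging certificates are not publicly trusted):

# Show the presented certificate's subject and issuer from the TLS handshake.
curl -vkI https://app.example.com 2>&1 | grep -iE 'subject:|issuer:'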
Automating certificate issuance directly from Ingress (Ingress-shim)
cert-manager can also create Certificates automatically from Ingress resources using “ingress-shim.” You annotate an Ingress with the issuer reference, and cert-manager generates a Certificate behind the scenes. This reduces YAML but can hide details, so many teams prefer explicit Certificate resources for clarity and review.
Step-by-step: Ingress annotation approach
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: web-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 80

When you apply this, cert-manager notices the annotation and the TLS block, then creates a Certificate that targets web-tls. You can still inspect the generated Certificate to troubleshoot.
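The generated Certificate is typically named after the Secret in the TLS block, so you can inspect it directly:

# List Certificates created by ingress-shim and check the one backing web-tls.
kubectl -n your-namespace get certificate
kubectl -n your-namespace describe certificate web-tls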
Wildcard and private endpoints with DNS-01 challenges
DNS-01 is the preferred method when you need wildcard certificates (for example, *.example.com) or when your services are not reachable from the public internet on port 80. With DNS-01, cert-manager creates a TXT record under _acme-challenge for the domain. Let’s Encrypt checks DNS, not HTTP routing. This is also useful when you want a single wildcard certificate shared across multiple Ingresses or gateways.
DNS-01 requires a DNS provider integration. cert-manager supports many providers via solver configuration (for example, Route53, Cloud DNS, Cloudflare). The pattern is: store DNS API credentials in a Secret, reference that Secret in the solver, and scope permissions to only the zones you need.
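For example, with the Cloudflare solver shown in the next section, the credentials Secret might be created along these lines; note that Secrets referenced by a ClusterIssuer are read from cert-manager's cluster resource namespace (cert-manager by default), and the Secret name and key must match the solver's apiTokenSecretRef:

# Store the DNS provider API token where the ClusterIssuer can read it.
kubectl -n cert-manager create secret generic cloudflare-api-token \
  --from-literal=api-token='<your-dns-api-token>'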
Step-by-step: example DNS-01 ClusterIssuer (pattern)
The exact fields depend on your DNS provider. The example below illustrates the structure: a ClusterIssuer with a DNS-01 solver referencing a Secret.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns01
spec:
  acme:
    email: ops@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns01-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token

Then request a wildcard certificate:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example
spec:
  secretName: wildcard-example-tls
  issuerRef:
    name: letsencrypt-dns01
    kind: ClusterIssuer
  dnsNames:
  - example.com
  - '*.example.com'

Be careful with wildcard Secrets: if multiple apps share the same wildcard Secret, they also share the same private key. That can be acceptable for a gateway layer but is often avoided for strict isolation. A common compromise is to use wildcard certs only at a shared edge gateway and use per-service certificates internally.
Integrating with a service mesh gateway
In a service mesh, north-south traffic often enters through a dedicated gateway (for example, an Envoy-based ingress gateway). The gateway terminates TLS using a Kubernetes Secret. cert-manager’s role remains the same: create and renew the Secret; the gateway consumes it. The main difference is which resource references the Secret: instead of an Ingress, you may configure the mesh gateway resource to use the Secret (often via a TLS credential reference).
Two practical considerations arise with meshes: first, the gateway may run in a different namespace than the application, so you must ensure the Secret is available where the gateway expects it. Second, some meshes support SDS (Secret Discovery Service) and can dynamically load certificates; others may require a restart or have specific annotations for hot reload. Verify how your gateway watches Secrets and plan rotations accordingly.
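As one concrete illustration (assuming an Istio-style ingress gateway; other meshes use different resources), the gateway references the cert-manager-managed Secret by name, and the Secret must live in the namespace where the gateway workload runs:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-gateway
  namespace: istio-system   # same namespace as the ingress gateway workload
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: web-tls   # Secret created and renewed by cert-manager
    hosts:
    - app.example.com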
End-to-end encryption: termination vs re-encryption
TLS termination at the edge means traffic from the edge to the backend may be plain HTTP. In many clusters, that is acceptable if you have strong network policies and node-level security. If you need encryption all the way to the workload, you can use re-encryption: the edge terminates client TLS and then establishes a new TLS connection to the upstream service. This requires the upstream to present a certificate trusted by the edge.
cert-manager can help with upstream certificates too, especially for internal services. You can issue internal certificates from a private CA (or a self-signed root for development) and configure your edge proxy to trust that CA. In a service mesh, east-west encryption is often handled by mesh mTLS automatically, but you may still use cert-manager for gateway certificates or for non-mesh workloads.
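A minimal re-encryption sketch, assuming the NGINX Ingress controller: the backend-protocol annotation tells the controller to open a new TLS connection to the upstream Service, which must itself serve TLS (for example, with a certificate issued by the internal CA pattern described below).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-reencrypt
  annotations:
    # Speak TLS to the backend instead of plain HTTP (NGINX Ingress specific).
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: web-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc
            port:
              number: 443   # the Service port that serves TLS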
Internal certificates with a private CA (CA Issuer pattern)
For internal-only endpoints, you can create a private CA and have cert-manager issue leaf certificates from it. One common pattern is: create a self-signed root CA (or import an existing corporate CA), store its keypair in a Secret, and define an Issuer of type ca that signs certificates. Clients must trust the root CA certificate.
Step-by-step: create a self-signed root CA and use it to issue internal certs
First, create a self-signed Issuer and a Certificate that becomes your root CA. Mark it as a CA certificate with isCA: true and store it in a Secret.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-root
  namespace: security
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-root-ca
  namespace: security
spec:
  isCA: true
  commonName: internal-root-ca
  secretName: internal-root-ca-secret
  issuerRef:
    name: selfsigned-root
    kind: Issuer

Next, create a CA Issuer that uses the root CA Secret to sign leaf certificates:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: internal-ca
  namespace: security
spec:
  ca:
    secretName: internal-root-ca-secret

Finally, request an internal certificate (in the namespace where it will be consumed) by referencing the CA Issuer. If the Issuer is namespaced, it must exist in the same namespace as the Certificate; alternatively, use a ClusterIssuer if you want cluster-wide issuance.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: internal-service-cert
  namespace: apps
spec:
  secretName: internal-service-tls
  issuerRef:
    # A namespaced Issuer must exist in this (apps) namespace; otherwise use a ClusterIssuer.
    name: internal-ca
    kind: Issuer
  dnsNames:
  - internal-service.apps.svc.cluster.local

This approach is powerful for internal TLS and for re-encryption from gateways to backends. The trade-off is trust distribution: you must ensure clients (or proxies) trust the internal root CA.
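To distribute that trust, you can export the root certificate from the CA Secret and add it to the trust stores of clients or proxies (the output file name here is illustrative):

# Extract the internal root CA certificate for distribution.
kubectl -n security get secret internal-root-ca-secret \
  -o jsonpath='{.data.tls\.crt}' | base64 -d > internal-root-ca.crt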
Certificate lifecycle: renewal, rotation, and key management
cert-manager renews certificates automatically before they expire. Renewal timing is controlled by the certificate’s renewBefore (or defaults based on duration). When renewal happens, cert-manager updates the Secret with a new certificate and potentially a new private key depending on configuration. Your edge component must pick up the updated Secret; most modern controllers and gateways can reload without downtime, but you should test this behavior.
Key management decisions matter. By default, cert-manager generates private keys and stores them in Secrets. You can influence key algorithm and size (RSA vs ECDSA) and rotation behavior. ECDSA certificates can be smaller and faster, but compatibility requirements may push you to RSA. If you have strict security requirements, consider integrating with external key management (where supported) or at least enforce RBAC so only the necessary controllers and operators can read TLS Secrets.
Step-by-step: specify key algorithm and certificate duration
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: web-cert
spec:
  secretName: web-tls
  duration: 2160h    # 90 days
  renewBefore: 360h  # 15 days
  privateKey:
    algorithm: ECDSA
    size: 256
    rotationPolicy: Always
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - app.example.com

Be cautious with rotationPolicy: Always if downstream systems pin public keys or if you have replication delays; it is usually fine for standard HTTPS endpoints but should be validated in your environment.
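To validate renewal behavior, you can compare what cert-manager reports with what the edge actually serves (the hostname and resource names are illustrative):

# Expiry and planned renewal time as tracked by cert-manager.
kubectl -n your-namespace get certificate web-cert \
  -o jsonpath='{.status.notAfter}{"\n"}{.status.renewalTime}{"\n"}'

# Expiry of the certificate currently presented by the edge.
openssl s_client -connect app.example.com:443 -servername app.example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates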
Troubleshooting: reading status and common failure modes
When issuance fails, cert-manager typically records the reason in resource status and events. Start with the Certificate, then inspect the related CertificateRequest, Order, and Challenge resources (for ACME). Most issues fall into a few categories: DNS misconfiguration, HTTP routing problems for HTTP-01, wrong Ingress class selection, blocked inbound traffic to the solver, or insufficient permissions to update DNS for DNS-01.
Step-by-step: a practical debugging checklist
- Check Certificate conditions and events: kubectl describe certificate ....
- Inspect the CertificateRequest: kubectl get certificaterequest and kubectl describe.
- For ACME, inspect Orders and Challenges: kubectl get orders,challenges.
- For HTTP-01, confirm the temporary solver Ingress exists and is served by the correct controller.
- Verify the hostname resolves to the correct external IP and that port 80 is reachable from the internet (HTTP-01 requirement).
- For DNS-01, verify the _acme-challenge TXT record appears and propagates; check provider API credentials and zone permissions.
- Confirm the target Secret exists and is in the namespace expected by the consumer (Ingress/gateway).
# Useful commands
kubectl -n your-namespace describe certificate web-cert
kubectl -n your-namespace get certificaterequest
kubectl -n your-namespace get orders,challenges
kubectl -n cert-manager logs deploy/cert-manager --tail=200

A subtle but common issue is creating a Certificate for a hostname that does not match the Ingress host exactly, or referencing the wrong Secret name in the Ingress. Another is attempting HTTP-01 while forcing HTTP-to-HTTPS redirects globally; the solver path must remain reachable over plain HTTP for validation unless your controller handles the challenge exception correctly.
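If you have the optional cmctl CLI installed, it can summarize a Certificate and its related CertificateRequest, Order, and Challenge state in one view, which often shortens this checklist:

# Consolidated status for a Certificate and its related ACME resources.
cmctl status certificate web-cert -n your-namespace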
Operational practices: rate limits, staging-to-prod promotion, and multi-tenant safety
Public CAs enforce rate limits. Use staging while iterating, then switch to production once stable. Keep separate ClusterIssuers for staging and production so you can switch by changing only the issuer reference. For multi-tenant clusters, restrict who can create Certificates referencing public ClusterIssuers. Without controls, a tenant could request certificates for domains they do not own, which will fail validation but still consumes operational capacity and may trigger rate limits.
Use RBAC to limit access to Secrets containing private keys. Consider namespace boundaries: if a shared gateway terminates TLS for many apps, you may centralize certificate Secrets in the gateway namespace and manage them there. If you want per-team autonomy, allow teams to manage Certificates in their namespaces but ensure the gateway can reference those Secrets only if your gateway technology supports cross-namespace secret references safely (many do not, by design). In those cases, you may replicate Secrets into the gateway namespace using a controlled process.
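As a sketch of the RBAC idea (names are illustrative), a namespaced Role can restrict read access to a specific TLS Secret rather than all Secrets in the namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-web-tls
  namespace: apps
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["web-tls"]   # grant access only to this Secret
  verbs: ["get"]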