In traditional cloud environments, where network load balancers are available on-demand, a single Kubernetes manifest suffices to provide a single point of contact to the Ingress-Nginx Controller to external clients and, indirectly, to any application running inside the cluster. Bare-metal environments lack this commodity, requiring a slightly different setup to offer the same kind of access to external consumers.
The rest of this document describes a few recommended approaches to deploying the Ingress-Nginx Controller inside a Kubernetes cluster running on bare-metal.
MetalLB provides a network load-balancer implementation for Kubernetes clusters that do not run on a supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.
This section demonstrates how to use the Layer 2 configuration mode of MetalLB together with the NGINX Ingress controller in a Kubernetes cluster that has publicly accessible nodes. In this mode, one node attracts all the traffic for the ingress-nginx Service IP. See Traffic policies for more details.
Note
The description of other supported configuration modes is out of scope for this document.
Warning
MetalLB is currently in beta. Read about the Project maturity and make sure you inform yourself by reading the official documentation thoroughly.
MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB was deployed following the Installation instructions, and that the Ingress-Nginx Controller was installed using the steps described in the quickstart section of the installation guide.
MetalLB requires a pool of IP addresses in order to be able to take ownership of the ingress-nginx Service. This pool can be defined through IPAddressPool objects in the same namespace as the MetalLB controller. This pool of IPs must be dedicated to MetalLB's use; you can't reuse the Kubernetes node IPs or IPs handed out by a DHCP server.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3
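MetalLB can be instructed to hand out addresses from a dedicated pool by creating objects similar to the ones below. This is a minimal sketch assuming the MetalLB v0.13+ CRDs, a metallb-system namespace, and that the 203.0.113.10-203.0.113.15 range is unused on the node network; adjust all of these to your environment.

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  # Addresses MetalLB may assign to LoadBalancer Services
  addresses:
  - 203.0.113.10-203.0.113.15
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  # Announce addresses from the pool above in Layer 2 mode
  ipAddressPools:
  - default-pool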
@@ -29,7 +29,7 @@
As soon as MetalLB sets the external IP address of the ingress-nginx LoadBalancer Service, the corresponding entries are created in the iptables NAT table and the node with the selected IP address starts responding to HTTP requests on the ports configured in the LoadBalancer Service:
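For instance, a plain HTTP request to the assigned address should be answered by NGINX. The 203.0.113.10 address below is illustrative and assumes MetalLB picked the first address of the example pool:

$ curl -D- http://203.0.113.10 -H 'Host: myapp.example.com'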
Tip
In order to preserve the source IP address in HTTP requests sent to NGINX, it is necessary to use the Local traffic policy. Traffic policies are described in more detail in Traffic policies as well as in the next section.
Due to its simplicity, this is the setup a user will deploy by default when following the steps described in the installation guide.
Info
A Service of type NodePort exposes, via the kube-proxy component, the same unprivileged port (default: 30000-32767) on every Kubernetes node, masters included. For more information, see Services.
In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports 80 and 443. Instead, the external client must append the NodePort allocated to the ingress-nginx Service to HTTP requests.
You can customize the exposed node port numbers by setting the controller.service.nodePorts.* Helm values, but they still have to be in the 30000-32767 range.
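For example, the following Helm values would pin the HTTP and HTTPS node ports. This is a sketch assuming a Helm-based installation; the port numbers are arbitrary values within the allowed range:

controller:
  service:
    nodePorts:
      http: 30080   # node port for HTTP traffic
      https: 30443  # node port for HTTPS traffic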
Example
Given the NodePort 30100 allocated to the ingress-nginx Service
$ kubectl -n ingress-nginx get svc
a client would reach an Ingress with host: myapp.example.com at http://myapp.example.com:30100, where the myapp.example.com subdomain resolves to the 203.0.113.2 IP address.
Impact on the host system
While it may sound tempting to reconfigure the NodePort range using the --service-node-port-range API server flag to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant kube-proxy privileges it may otherwise not require.
This practice is therefore discouraged. See the other approaches proposed in this page for alternatives.
This approach has a few other limitations one ought to be aware of:
Source IP address
Services of type NodePort perform source address translation by default. This means the source IP of an HTTP request is always the IP address of the Kubernetes node that received the request, from the perspective of NGINX.
The recommended way to preserve the source IP in a NodePort setup is to set the value of the externalTrafficPolicy field of the ingress-nginx Service spec to Local (example).
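With Helm this can be set via --set controller.service.externalTrafficPolicy=Local, or it can be patched onto an existing Service. The following is a sketch assuming the Service created by the default installation is named ingress-nginx-controller in the ingress-nginx namespace:

$ kubectl -n ingress-nginx patch svc ingress-nginx-controller \
    --type merge -p '{"spec":{"externalTrafficPolicy":"Local"}}'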
Warning
This setting effectively drops packets sent to Kubernetes nodes which are not running any instance of the NGINX Ingress controller. Consider assigning NGINX Pods to specific nodes in order to control on what nodes the Ingress-Nginx Controller should be scheduled or not scheduled.
Example
In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3
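and an ingress-nginx-controller Deployment with 2 replicas scheduled on host-2 and host-3 only (the pod names and Pod IPs below are purely illustrative):

$ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS    IP           NODE
ingress-nginx-controller-5b9c8f6d7d-8vvf8   1/1     Running   172.17.0.3   host-3
ingress-nginx-controller-5b9c8f6d7d-pxsds   1/1     Running   172.17.1.4   host-2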
Requests sent to host-2 and host-3 would be forwarded to NGINX and original client's IP would be preserved, while requests to host-1 would get dropped because there is no NGINX replica running on that node.
Other ways to preserve the source IP in a NodePort setup are described here: Source IP address.
Ingress status
Because NodePort Services do not get a LoadBalancerIP assigned by definition, the Ingress-Nginx Controller does not update the status of Ingress objects it manages.
$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80
Despite the fact there is no load balancer providing a public IP address to the Ingress-Nginx Controller, it is possible to force the status update of all managed Ingress objects by setting the externalIPs field of the ingress-nginx Service.
Warning
There is more to setting externalIPs than just enabling the Ingress-Nginx Controller to update the status of Ingress objects. Please read about this option in the Services page of official Kubernetes documentation as well as the section about External IPs in this document for more information.
Example
Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal environments this value is <None>)
$ kubectl get node
NAME     STATUS   ROLES    EXTERNAL-IP
host-1   Ready    master   203.0.113.1
host-2   Ready    node     203.0.113.2
host-3   Ready    node     203.0.113.3
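one could set the spec.externalIPs field of the ingress-nginx Service to the node addresses, as in the following sketch (the IPs are the example node IPs shown above):

spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3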
which would in turn be reflected on Ingress objects as follows:
$ kubectl get ingress -o wide
NAME           HOSTS               ADDRESS                               PORTS
test-ingress   myapp.example.com   203.0.113.1,203.0.113.2,203.0.113.3   80
Redirects
As NGINX is not aware of the port translation operated by the NodePort Service, backend applications are responsible for generating redirect URLs that take into account the URL used by external clients, including the NodePort.
Example
Redirects generated by NGINX, for instance HTTP to HTTPS or domain to www.domain, are generated without NodePort:
$ curl -D- http://myapp.example.com:30100
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.15.2
Location: https://myapp.example.com/  #-> missing NodePort in HTTPS redirect
When the Ingress-Nginx Controller Pods run with the host network (hostNetwork: true), only one controller Pod can bind ports 80 and 443 on a given node; additional replicas cannot be scheduled on that node and fail with an event similar to:

  Type     Reason            From               Message
  ----     ------            ----               -------
  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.
One way to ensure only schedulable Pods are created is to deploy the Ingress-Nginx Controller as a DaemonSet instead of a traditional Deployment.
Info
A DaemonSet schedules exactly one type of Pod per cluster node, masters included, unless a node is configured to repel those Pods. For more information, see DaemonSet.
Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the configuration of the corresponding manifest at the user's discretion.
Like with NodePorts, this approach has a few quirks it is important to be aware of.
DNS resolution
Pods configured with hostNetwork: true do not use the internal DNS resolver (i.e. kube-dns or CoreDNS), unless their dnsPolicy spec field is set to ClusterFirstWithHostNet. Consider using this setting if NGINX is expected to resolve internal names for any reason.
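A minimal sketch of the relevant Pod template fields in such a DaemonSet (only the fields shown are relevant to this point; the rest of the spec is left unchanged):

spec:
  template:
    spec:
      # Bind NGINX directly to the node's network interfaces
      hostNetwork: true
      # Needed so NGINX can still resolve cluster-internal names via kube-dns/CoreDNS
      dnsPolicy: ClusterFirstWithHostNet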
Ingress status
Because there is no Service exposing the Ingress-Nginx Controller in a configuration using the host network, the default --publish-service flag used in standard cloud setups does not apply and the status of all Ingress objects remains blank.
$ kubectl get ingress
NAME           HOSTS               ADDRESS   PORTS
test-ingress   myapp.example.com             80
Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the --report-node-internal-ip-address flag, which sets the status of all Ingress objects to the internal IP address of all nodes running the Ingress-Nginx Controller.
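With the Helm chart, this flag can typically be enabled by setting the controller.reportNodeInternalIp value (a sketch assuming a Helm-based installation; otherwise add the flag to the controller container arguments):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.reportNodeInternalIp=true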
Example
Given an ingress-nginx-controller DaemonSet composed of 2 replicas
$ kubectl -n ingress-nginx get pod -o wide
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: "somebucket"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: "ingress-nginx"
service.beta.kubernetes.io/aws-load-balancer-access-log-emit-interval: "5"
If you don't have Helm or if you prefer to use a YAML manifest, you can run the following command instead:
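A sketch of that command is shown below; the controller version tag in the URL is a placeholder, so substitute the release you intend to install:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.x.x/deploy/static/provider/cloud/deploy.yaml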
The YAML manifest in the command above was generated with helm template, so you will end up with almost the same resources as if you had used Helm to install the controller.
Attention
If you are running an old version of Kubernetes (1.18 or earlier), please read this paragraph for specific instructions. Because of API deprecations, the default manifest may not work on your cluster. Specific manifests for supported Kubernetes versions are available within a sub-folder of each provider.
To check which ports are used by your installation of ingress-nginx, look at the output of kubectl -n ingress-nginx get pod -o yaml. In general, you need:
Port 8443 open between all hosts on which the Kubernetes nodes are running. This is used for the ingress-nginx admission controller.
Port 80 (for HTTP) and/or 443 (for HTTPS) open to the public on the Kubernetes nodes to which the DNS records of your apps point.
A few pods should start in the ingress-nginx namespace:
kubectl get pods --namespace=ingress-nginx
After a while, they should all be running. The following command will wait for the ingress controller pod to be up, running, and ready:
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s
kubectl create ingress demo --class=nginx \
  --rule www.demo.io/=demo:80
You should then be able to see the "It works!" page when you connect to http://www.demo.io/. Congratulations, you are serving a public website hosted on a Kubernetes cluster! 🎉
First, make sure that Kubernetes is enabled in the Docker settings. The command kubectl get nodes should show a single node called docker-desktop.
The ingress controller can be installed on Docker Desktop using the default quick start instructions.
On most systems, if you don't have any other service of type LoadBalancer bound to port 80, the ingress controller will be assigned the EXTERNAL-IP of localhost, which means that it will be reachable on localhost:80. If that doesn't work, you might have to fall back to the kubectl port-forward method described in the local testing section.
Rancher Desktop provides Kubernetes and Container Management on the desktop. Kubernetes is enabled by default in Rancher Desktop.
Rancher Desktop uses K3s under the hood, which in turn uses Traefik as the default ingress controller for the Kubernetes cluster. To use the Ingress-Nginx Controller in place of the default Traefik, disable Traefik from the Preferences > Kubernetes menu.
Once Traefik is disabled, the Ingress-Nginx Controller can be installed on Rancher Desktop using the default quick start instructions. Follow the instructions described in the local testing section to try a sample.
If the load balancers of your cloud provider do active healthchecks on their backends (most do), you can change the externalTrafficPolicy of the ingress controller Service to Local (instead of the default Cluster) to save an extra hop in some cases. If you're installing with Helm, this can be done by adding --set controller.service.externalTrafficPolicy=Local to the helm install or helm upgrade command.
Furthermore, if the load balancers of your cloud provider support the PROXY protocol, you can enable it, and it will let the ingress controller see the real IP address of the clients. Otherwise, it will generally see the IP address of the upstream load balancer. This must be done both in the ingress controller (with e.g. --set controller.config.use-proxy-protocol=true) and in the cloud provider's load balancer configuration to function correctly.
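With Helm, both options could be combined in a single command. The sketch below uses the upstream chart location; adjust the release name, namespace and values to your provider:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.use-proxy-protocol=true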
In the following sections, we provide YAML manifests that enable these options when possible, using the specific options of various cloud providers.
By default, TLS is terminated in the ingress controller. But it is also possible to terminate TLS in the Load Balancer. This section explains how to do that on AWS using an NLB.
For private clusters, you will need to either add a firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to port 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp. More information can be found in the Official GCP Documentation.
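For example, a firewall rule allowing the control plane to reach the admission webhook could be created along these lines (the network name, control-plane CIDR and node tag below are placeholders for your cluster's actual values):

gcloud compute firewall-rules create allow-master-to-ingress-nginx-webhook \
  --network my-network \
  --direction INGRESS \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-node-tag \
  --allow tcp:8443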
By default, the Service object of the ingress-nginx-controller for DigitalOcean only configures one annotation: service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true". While this makes the Service functional, it has been reported that the DigitalOcean load-balancer graphs show no data unless a few other annotations are also configured. Some of these other annotations require values that cannot be generic and hence are not forced in an out-of-the-box installation. These annotations, and a discussion of them, are well documented in this issue. Please refer to the issue to add annotations, with values specific to your setup, to get the DigitalOcean load-balancer graphs populated with data.
This section is applicable to Kubernetes clusters deployed on bare metal servers, as well as "raw" VMs where Kubernetes was installed manually, using generic Linux distros (like CentOS, Ubuntu...)
For quick testing, you can use a NodePort. This should work on almost every cluster, but it will typically use a port in the range 30000-32767.
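For instance, with Helm the controller Service can be switched to a NodePort (a sketch; adjust the release name and namespace to your installation):

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort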
For more information about bare metal deployments (and how to use port 80 instead of a random port in the 30000-32767 range), see bare-metal considerations.
If you are using Ingress objects in your cluster (running Kubernetes older than version 1.22), and you plan to upgrade your Kubernetes version to K8S 1.22 or above, then please read the migration guide here.
To improve security and to follow the desired standards of the Kubernetes API spec, the next release, scheduled for v1.8.0, will include a new, optional feature for validating the value of the key ingress.spec.rules.http.paths.path.
This behavior will be disabled by default on the 1.8.0 release and enabled by default on the next breaking change release, set for 2.0.0.
When "ingress.spec.rules.http.pathType=Exact" or "pathType=Prefix", this validation will limit the characters accepted on the field "ingress.spec.rules.http.paths.path", to "alphanumeric characters", and "/", "_", "-". Also, in this case, the path should start with "/".
When the ingress resource path contains other characters (like on rewrite configurations), the pathType value should be "ImplementationSpecific".
When this option is enabled, the validation will happen on the Admission Webhook. So if any new Ingress object contains characters other than alphanumeric characters, "/", "_" and "-" in the path field, but does not use ImplementationSpecific as its pathType value, the Ingress object will be denied admission.
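For illustration, a rewrite-style Ingress whose path contains regex characters would need pathType ImplementationSpecific to pass this validation. The hostname, service name and rewrite annotation below are illustrative, not prescribed by this feature:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo
  annotations:
    # Rewrite to the second capture group of the regex path below
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /legacy(/|$)(.*)
        # Regex characters in the path require ImplementationSpecific
        pathType: ImplementationSpecific
        backend:
          service:
            name: demo
            port:
              number: 80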
The cluster admin should establish validation rules using mechanisms like "Open Policy Agent", to validate that only authorized users can use ImplementationSpecific pathType and that only the authorized characters can be used. The configmap value is here
A complete example of an Open Policy Agent Gatekeeper rule is available here
If you have any issues or concerns, please do one of the following:
Why is chunking not working since controller v1.10?
If your code is setting the HTTP header "Transfer-Encoding: chunked" and the controller log messages show an error about a duplicate header, it is because of this change in NGINX: http://hg.nginx.org/nginx/rev/2bf7792c262e