<h2 id="a-pure-software-solution-metallb">A pure software solution: MetalLB<a class="headerlink" href="#a-pure-software-solution-metallb" title="Permanent link">&para;</a></h2>
<p><a href="https://metallb.universe.tf/">MetalLB</a> provides a network load-balancer implementation for Kubernetes clusters that do not run on a
supported cloud provider, effectively allowing the usage of LoadBalancer Services within any cluster.</p>
<p>This section demonstrates how to use the <a href="https://metallb.universe.tf/tutorial/layer2/">Layer 2 configuration mode</a> of MetalLB together with the NGINX
Ingress controller in a Kubernetes cluster that has <strong>publicly accessible nodes</strong>. In this mode, one node attracts all
the traffic for the <code class="codehilite">ingress-nginx</code> Service IP. See <a href="https://metallb.universe.tf/usage/#traffic-policies">Traffic policies</a> for more details.</p>
<p><img alt="MetalLB in L2 mode" src="../../images/baremetal/metallb.jpg"/></p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>The description of other supported configuration modes is out of scope for this document.</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>MetalLB is currently in <em>beta</em>. Read about the <a href="https://metallb.universe.tf/concepts/maturity/">Project maturity</a> and make sure you inform
yourself by reading the official documentation thoroughly.</p>
</div>
<p>MetalLB can be deployed either with a simple Kubernetes manifest or with Helm. The rest of this example assumes MetalLB
was deployed following the <a href="https://metallb.universe.tf/installation/">Installation</a> instructions.</p>
<p>MetalLB requires a pool of IP addresses in order to be able to take ownership of the <code class="codehilite">ingress-nginx</code> Service. This pool
can be defined in a ConfigMap named <code class="codehilite">config</code> located in the same namespace as the MetalLB controller. In the simplest
possible scenario, the pool is composed of the IP addresses of Kubernetes nodes, but IP addresses can also be handed out
by a DHCP server.</p>
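<p>As an illustration, a Layer 2 address pool can be declared in that ConfigMap as follows. This is a minimal sketch: the <code class="codehilite">metallb-system</code> namespace matches the upstream installation manifests, and the 203.0.113.10-203.0.113.15 range is an assumption to be replaced with addresses routable in your network.</p>
<div class="codehilite"><pre>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 203.0.113.10-203.0.113.15
</pre></div>
<p>Once applied, MetalLB assigns a free address from this pool to the <code class="codehilite">ingress-nginx</code> Service as soon as its type is set to <code class="codehilite">LoadBalancer</code>.</p>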
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal
environments this value is <em>&lt;None&gt;</em>).</p>
</div>
<p>A Service of type <code class="codehilite">NodePort</code> exposes, via the <code class="codehilite">kube-proxy</code> component, the <strong>same unprivileged</strong> port (default:
30000-32767) on every Kubernetes node, masters included. For more information, see <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport">Services</a>.</p>
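<p>A minimal sketch of such a Service might look as follows; the namespace and selector labels are assumptions and must match those of your NGINX Ingress controller Pods.</p>
<div class="codehilite"><pre># sketch only; selector labels are assumptions
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</pre></div>
<p>Unless <code class="codehilite">nodePort</code> values are set explicitly, <code class="codehilite">kube-proxy</code> picks a free port from the configured range for each entry.</p>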
<p>In this configuration, the NGINX container remains isolated from the host network. As a result, it can safely bind to
any port, including the standard HTTP ports 80 and 443. However, due to the container namespace isolation, a client
located outside the cluster network (e.g. on the public internet) is not able to access Ingress hosts directly on ports
80 and 443. Instead, the external client must append the NodePort allocated to the <code class="codehilite">ingress-nginx</code> Service to HTTP
requests.</p>
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>A client would reach an Ingress with <code class="codehilite">host: myapp.example.com</code> at <code class="codehilite">http://myapp.example.com:30100</code>, where the
myapp.example.com subdomain resolves to the 203.0.113.2 IP address and 30100 is the NodePort allocated to the
<code class="codehilite">ingress-nginx</code> Service.</p>
</div>
<div class="admonition danger">
<p class="admonition-title">Impact on the host system</p>
<p>While it may sound tempting to reconfigure the NodePort range using the <code class="codehilite">--service-node-port-range</code> API server flag
to include unprivileged ports and be able to expose ports 80 and 443, doing so may result in unexpected issues
including (but not limited to) the use of ports otherwise reserved to system daemons and the necessity to grant
<code class="codehilite">kube-proxy</code> privileges it may otherwise not require.</p>
<p>This practice is therefore <strong>discouraged</strong>. See the other approaches proposed in this page for alternatives.</p>
</div>
<p>This approach has a few other limitations one ought to be aware of:</p>
<ul>
<li><strong>Source IP address</strong></li>
</ul>
<p>Services of type NodePort perform <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport">source address translation</a> by default. This means that, from the perspective of
NGINX, the source IP of an HTTP request is always <strong>the IP address of the Kubernetes node that received the
request</strong>.</p>
<p>The recommended way to preserve the source IP in a NodePort setup is to set the value of the <code class="codehilite">externalTrafficPolicy</code>
field of the <code class="codehilite">ingress-nginx</code> Service spec to <code class="codehilite">Local</code> (<a href="https://github.com/kubernetes/ingress-nginx/blob/nginx-0.19.0/deploy/provider/aws/service-nlb.yaml#L12-L14">example</a>).</p>
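<p>As a sketch, the relevant fragment of the Service spec is a single additional field:</p>
<div class="codehilite"><pre>spec:
  type: NodePort
  # preserve the client source IP; see warning below
  externalTrafficPolicy: Local
</pre></div>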
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>This setting effectively <strong>drops packets</strong> sent to Kubernetes nodes which are not running any instance of the NGINX
Ingress controller. Consider <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/">assigning NGINX Pods to specific nodes</a> in order to control on which nodes
the NGINX Ingress controller should or should not be scheduled.</p>
</div>
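<p>As an illustration, such an assignment can be expressed with a <code class="codehilite">nodeSelector</code> in the controller's Pod template; the <code class="codehilite">ingress-ready</code> label is a hypothetical example and must actually exist on the target nodes.</p>
<div class="codehilite"><pre># Pod template fragment; the ingress-ready label is hypothetical
spec:
  nodeSelector:
    ingress-ready: "true"
</pre></div>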
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>In a Kubernetes cluster composed of 3 nodes (the external IP is added as an example, in most bare-metal environments
this value is <em>&lt;None&gt;</em>), requests sent to <code class="codehilite">host-2</code> and <code class="codehilite">host-3</code> would be forwarded to NGINX and the original client's IP would be
preserved, while requests to <code class="codehilite">host-1</code> would get dropped because there is no NGINX replica running on that node.</p>
</div>
<ul>
<li><strong>Ingress status</strong></li>
</ul>
<p>Because NodePort Services do not get a LoadBalancerIP assigned by definition, the NGINX Ingress controller <strong>does not
update the status of Ingress objects it manages</strong>.</p>
<div class="codehilite"><pre><span></span><span class="gp">$</span> kubectl get ingress
</pre></div>
<p>Despite the fact there is no load balancer providing a public IP address to the NGINX Ingress controller, it is possible
to force the status update of all managed Ingress objects by setting the <code class="codehilite">externalIPs</code> field of the <code class="codehilite">ingress-nginx</code>
Service.</p>
<p>There is more to setting <code class="codehilite">externalIPs</code> than just enabling the NGINX Ingress controller to update the status of
Ingress objects. Please read about this option in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips">Services</a> page of the official Kubernetes
documentation as well as the section about <a href="#external-ips">External IPs</a> in this document for more information.</p>
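<p>As a sketch, assuming nodes with the example IP addresses 203.0.113.1, 203.0.113.2 and 203.0.113.3, the Service fragment would look like this:</p>
<div class="codehilite"><pre># example node IP addresses; replace with the addresses of your own nodes
spec:
  externalIPs:
  - 203.0.113.1
  - 203.0.113.2
  - 203.0.113.3
</pre></div>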
<p>As NGINX is <strong>not aware of the port translation operated by the NodePort Service</strong>, backend applications are responsible
for generating redirect URLs that take into account the URL used by external clients, including the NodePort.</p>
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>Redirects generated by NGINX, for instance HTTP to HTTPS or <code class="codehilite">domain</code> to <code class="codehilite">www.domain</code>, are generated without
the NodePort.</p>
</div>
<p>One major limitation of this deployment approach is that only <strong>a single NGINX Ingress controller Pod</strong> may be scheduled
on each cluster node, because binding the same port multiple times on the same network interface is technically
impossible. Pods that are unschedulable due to this situation fail with the following event:</p>
<div class="codehilite"><pre><span></span><span class="gp">$</span> kubectl -n ingress-nginx describe pod <unschedulable-nginx-ingress-controller-pod>
<span class="go">...</span>
<span class="go">Events:</span>
<span class="go">  Type     Reason            From               Message</span>
<span class="go">  ----     ------            ----               -------</span>
<span class="go">  Warning  FailedScheduling  default-scheduler  0/3 nodes are available: 3 node(s) didn't have free ports for the requested pod ports.</span>
</pre></div>
<p>One way to ensure only schedulable Pods are created is to deploy the NGINX Ingress controller as a <em>DaemonSet</em> instead
of a traditional Deployment.</p>
<div class="admonition info">
<p class="admonition-title">Info</p>
<p>A DaemonSet schedules exactly one instance of a given Pod on every cluster node, masters included, unless a node is configured to
<a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/">repel those Pods</a>. For more information, see <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/">DaemonSet</a>.</p>
</div>
<p>Because most properties of DaemonSet objects are identical to Deployment objects, this documentation page leaves the
configuration of the corresponding manifest at the user's discretion.</p>
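<p>For orientation only, the fields that distinguish such a DaemonSet from the standard Deployment can be sketched as follows; everything omitted here is identical to the Deployment manifest.</p>
<div class="codehilite"><pre># fragment only; metadata, labels and the full Pod template are unchanged
apiVersion: apps/v1
kind: DaemonSet
spec:
  template:
    spec:
      hostNetwork: true
</pre></div>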
<p><img alt="DaemonSet with hostNetwork flow" src="../../images/baremetal/hostnetwork.jpg"/></p>
<p>Like with NodePorts, this approach has a few quirks it is important to be aware of.</p>
<ul>
<li><strong>DNS resolution</strong></li>
</ul>
<p>Pods configured with <code class="codehilite">hostNetwork: true</code> do not use the internal DNS resolver (i.e. <em>kube-dns</em> or <em>CoreDNS</em>), unless
their <code class="codehilite">dnsPolicy</code> spec field is set to <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-policy"><code class="codehilite">ClusterFirstWithHostNet</code></a>. Consider using this setting if NGINX is
expected to resolve internal names for any reason.</p>
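<p>As a sketch, the relevant Pod template fragment combines both settings:</p>
<div class="codehilite"><pre># Pod template fragment
spec:
  template:
    spec:
      hostNetwork: true
      # required for in-cluster DNS resolution with hostNetwork
      dnsPolicy: ClusterFirstWithHostNet
</pre></div>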
<ul>
<li><strong>Ingress status</strong></li>
</ul>
<p>Because there is no Service exposing the NGINX Ingress controller in a configuration using the host network, the default
<code class="codehilite">--publish-service</code> flag used in standard cloud setups <strong>does not apply</strong> and the status of all Ingress objects remains
blank.</p>
<div class="codehilite"><pre><span></span><span class="gp">$</span> kubectl get ingress
</pre></div>
<p>Instead, and because bare-metal nodes usually don't have an ExternalIP, one has to enable the
<a href="../../../user-guide/cli-arguments/"><code class="codehilite">--report-node-internal-ip-address</code></a> flag, which sets the status of all Ingress objects to the internal IP
address of all nodes running the NGINX Ingress controller.</p>
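<p>As an illustration, the flag is appended to the controller's arguments in the Pod template; the container name follows the upstream manifests, and the remaining arguments of those manifests are elided here.</p>
<div class="codehilite"><pre># container fragment; other upstream arguments elided
containers:
- name: nginx-ingress-controller
  args:
  - /nginx-ingress-controller
  - --report-node-internal-ip-address
</pre></div>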
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>Given a <code class="codehilite">nginx-ingress-controller</code> DaemonSet composed of 2 replicas</p>
<div class="codehilite"><pre><span></span><span class="gp">$</span> kubectl -n ingress-nginx get pod -o wide
</pre></div>
<p>Alternatively, it is possible to override the address written to Ingress objects using the
<code class="codehilite">--publish-status-address</code> flag. See <a href="../../../user-guide/cli-arguments/">Command line arguments</a>.</p>
</div>
<h2 id="using-a-self-provisioned-edge">Using a self-provisioned edge<a class="headerlink" href="#using-a-self-provisioned-edge" title="Permanent link">&para;</a></h2>
<p>Similarly to cloud environments, this deployment approach requires an edge network component providing a public
entrypoint to the Kubernetes cluster. This edge component can be either hardware (e.g. a vendor appliance) or software
(e.g. <em>HAproxy</em>) and is usually managed outside of the Kubernetes landscape by operations teams.</p>
<p>Such a deployment builds upon the NodePort Service described above in <a href="#over-a-nodeport-service">Over a NodePort Service</a>,
with one significant difference: external clients do not access cluster nodes directly, only the edge component does.
This is particularly suitable for private Kubernetes clusters where none of the nodes has a public IP address.</p>
<p>On the edge side, the only prerequisite is to dedicate a public IP address that forwards all HTTP traffic to Kubernetes
nodes and/or masters. Incoming traffic on TCP ports 80 and 443 is forwarded to the corresponding HTTP and HTTPS NodePort
on the target nodes as shown in the diagram below:</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>This method does not allow preserving the source IP of HTTP requests in any manner; it is therefore <strong>not
recommended</strong> despite its apparent simplicity.</p>
</div>
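<p>For illustration only, a minimal <em>HAproxy</em> configuration for this topology might look as follows; the node addresses and the NodePort 30100 are assumptions, and a similar <code class="codehilite">frontend</code>/<code class="codehilite">backend</code> pair would be needed for port 443.</p>
<div class="codehilite"><pre># haproxy.cfg fragment; addresses and NodePort are examples
frontend http
    bind :80
    mode tcp
    default_backend kubernetes-http

backend kubernetes-http
    mode tcp
    server host-1 203.0.113.1:30100 check
    server host-2 203.0.113.2:30100 check
    server host-3 203.0.113.3:30100 check
</pre></div>
<p>Plain TCP forwarding keeps the edge component protocol-agnostic, but it also means the client source IP is not preserved, as noted in the warning above.</p>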
<p>The <code class="codehilite">externalIPs</code> Service option was previously mentioned in the <a href="#over-a-nodeport-service">NodePort</a> section.</p>
<p>As per the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips">Services</a> page of the official Kubernetes documentation, the <code class="codehilite">externalIPs</code> option causes
<code class="codehilite">kube-proxy</code> to route traffic sent to arbitrary IP addresses <strong>and on the Service ports</strong> to the endpoints of that
Service. These IP addresses <strong>must belong to the target node</strong>.</p>
<div class="admonition example">
<p class="admonition-title">Example</p>
<p>Given the following 3-node Kubernetes cluster (the external IP is added as an example, in most bare-metal