Merge branch 'master' into server-alias

Fernando Diaz 2017-08-17 17:32:48 -05:00 committed by GitHub
commit 47e4dd59a8
157 changed files with 26072 additions and 489 deletions

View file

@ -1,11 +1,11 @@
# Ingress controllers
This directory contains ingress controllers.
# Ingress Controllers
This directory contains Ingress controllers.
Configuring a webserver or loadbalancer is harder than it should be. Most webserver configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part you can apply the same logic to them and achieve a desired result. The Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a loadbalancer, or a more complicated setup of frontends that provide GSLB, DDoS protection etc).
Configuring a webserver or loadbalancer is harder than it should be. Most webserver configuration files are very similar. There are some applications that have weird little quirks that tend to throw a wrench in things, but for the most part you can apply the same logic to them and achieve a desired result.
The Ingress resource embodies this idea, and an Ingress controller is meant to handle all the quirks associated with a specific "class" of Ingress (be it a single instance of a loadbalancer, or a more complicated setup of frontends that provide GSLB, DDoS protection, etc).
## What is an Ingress Controller?
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's `/ingresses` endpoint for updates to the [Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/). Its job is to satisfy requests for ingress.
An Ingress Controller is a daemon, deployed as a Kubernetes Pod, that watches the apiserver's `/ingresses` endpoint for updates to the [Ingress resource](https://kubernetes.io/docs/concepts/services-networking/ingress/). Its job is to satisfy requests for Ingresses.
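For illustration only, here is a minimal, hedged Go sketch of that reconcile loop, assuming a 2017-era client-go (where list calls do not yet take a context) and the `extensions/v1beta1` API group; a real controller uses informers/watches rather than polling:

```go
package main

import (
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Running as a Pod, the controller picks up its service account credentials.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("no in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("could not build client: %v", err)
	}

	for {
		// Ask the apiserver for all Ingresses; each one is a request for
		// ingress that the controller must reconcile into its backend
		// (nginx config, GCE forwarding rules, ...).
		ings, err := client.ExtensionsV1beta1().Ingresses(metav1.NamespaceAll).List(metav1.ListOptions{})
		if err != nil {
			log.Printf("list ingresses: %v", err)
		} else {
			for _, ing := range ings.Items {
				log.Printf("observed Ingress %s/%s", ing.Namespace, ing.Name)
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```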

View file

@ -13,7 +13,7 @@ This is a list of beta limitations:
* [Large clusters](#large-clusters): Ingress on GCE isn't supported on large (>1000 nodes), single-zone clusters.
* [Teardown](README.md#deletion): The recommended way to tear down a cluster with active Ingresses is to either delete each Ingress, or hit the `/delete-all-and-quit` endpoint on GLBC, before invoking a cluster teardown script (eg: kube-down.sh). You will have to manually clean up GCE resources through the [cloud console](https://cloud.google.com/compute/docs/console#access) or [gcloud CLI](https://cloud.google.com/compute/docs/gcloud-compute/) if you simply tear down the cluster with active Ingresses.
* [Changing UIDs](#changing-the-cluster-uid): You can change the UID used as a suffix for all your GCE cloud resources, but this requires you to delete existing Ingresses first.
* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might've leaked due to permature teardown through the GCE console.
* [Cleaning up](#cleaning-up-cloud-resources): You can delete loadbalancers that older clusters might've leaked due to premature teardown through the GCE console.
## Prerequisites
@ -172,6 +172,6 @@ If you deleted a GKE/GCE cluster without first deleting the associated Ingresses
1. Navigate to the [cloud console](https://console.cloud.google.com/) and click on the "Networking" tab, then choose "LoadBalancing"
2. Find the loadbalancer you'd like to delete, it should have a name formatted as: k8s-um-ns-name--UUID
3. Delete it, check the boxes to also casade the deletion down to associated resources (eg: backend-services)
3. Delete it, check the boxes to also cascade the deletion down to associated resources (eg: backend-services)
4. Switch to the "Compute Engine" tab, then choose "Instance Groups"
5. Delete the Instance Group allocated for the leaked Ingress, it should have a name formatted as: k8s-ig-UUID

View file

@ -53,7 +53,7 @@ __Lines 8-9__: Each http rule contains the following information: A host (eg: fo
__Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule. The `port` is the desired `spec.ports[*].port` from the Service Spec -- Note, though, that the L7 actually directs traffic to the corresponding `NodePort`.
__Global Prameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of glbc.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below) in the absence of which requests that don't match a path in the spec are sent to the default backend of glbc.
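As a concrete illustration only (the service names are hypothetical, and the exact import path for these types moved between releases around this time), a spec of this shape can be built from the `extensions/v1beta1` Go types roughly like so:

```go
package example

import (
	"k8s.io/api/extensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// exampleIngress sketches one host rule whose path maps to a service:port
// backend, plus a global default backend that catches requests matching no rule.
var exampleIngress = v1beta1.Ingress{
	ObjectMeta: metav1.ObjectMeta{Name: "echomap"},
	Spec: v1beta1.IngressSpec{
		// Global parameter: traffic that matches no rule goes here instead of
		// glbc's own default backend.
		Backend: &v1beta1.IngressBackend{
			ServiceName: "default-backend", // hypothetical Service name
			ServicePort: intstr.FromInt(80),
		},
		Rules: []v1beta1.IngressRule{{
			Host: "foo.bar.com",
			IngressRuleValue: v1beta1.IngressRuleValue{
				HTTP: &v1beta1.HTTPIngressRuleValue{
					Paths: []v1beta1.HTTPIngressPath{{
						Path: "/foo",
						// A backend is a service:port pair; the GCE L7
						// ultimately targets the Service's NodePort.
						Backend: v1beta1.IngressBackend{
							ServiceName: "echoheaders", // hypothetical Service name
							ServicePort: intstr.FromInt(80),
						},
					}},
				},
			},
		}},
	},
}
```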
## Load Balancer Management
@ -135,7 +135,7 @@ Go to your GCE console and confirm that the following resources have been create
* BackendServices (one for each Kubernetes nodePort service)
* An Instance Group (with ports corresponding to the BackendServices)
The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks; wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes healtchecks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy:
The HTTPLoadBalancing panel will also show you if your backends have responded to the health checks; wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes healthchecks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy:
```shell
$ curl --resolve foo.bar.com:80:107.178.245.239 http://foo.bar.com/foo

View file

@ -212,9 +212,20 @@ func (c *ClusterManager) GC(lbNames []string, nodePorts []backends.ServicePort)
}
func getGCEClient(config io.Reader) *gce.GCECloud {
allConfig, err := ioutil.ReadAll(config)
if err != nil {
glog.Fatalf("Error while reading entire config: %v", err)
getConfigReader := func() io.Reader { return nil }
if config != nil {
allConfig, err := ioutil.ReadAll(config)
if err != nil {
glog.Fatalf("Error while reading entire config: %v", err)
}
glog.V(2).Infof("Using cloudprovider config file:\n%v ", string(allConfig))
getConfigReader = func() io.Reader {
return bytes.NewReader(allConfig)
}
} else {
glog.V(2).Infoln("No cloudprovider config file provided. Continuing with default values.")
}
// Creating the cloud interface involves resolving the metadata server to get
@ -222,7 +233,7 @@ func getGCEClient(config io.Reader) *gce.GCECloud {
// No errors are thrown. So we need to keep retrying till it works because
// we know we're on GCE.
for {
cloudInterface, err := cloudprovider.GetCloudProvider("gce", bytes.NewReader(allConfig))
cloudInterface, err := cloudprovider.GetCloudProvider("gce", getConfigReader())
if err == nil {
cloud := cloudInterface.(*gce.GCECloud)

View file

@ -55,7 +55,7 @@ Wait for the loadbalancer to be created and functioning. When you receive a succ
Websocket example. Connect to /ws%
```
The binary we deployed does not have any html/javascript to demonstrate thwe websocket, so we'll use websocket.org's client.
The binary we deployed does not have any html/javascript to demonstrate the websocket, so we'll use websocket.org's client.
Visit http://www.websocket.org/echo.html. It's important to use `HTTP` instead of `HTTPS` since we assembled an `HTTP` load balancer. Browsers may prevent `HTTP` websocket connections as a security feature.
Set the `Location` to

View file

@ -173,7 +173,7 @@ spec:
```
Please follow [test.sh](https://github.com/bprashanth/Ingress/blob/master/examples/sni/nginx/test.sh) as a guide on how to generate secrets containing SSL certificates. The name of the secret can be different than the name of the certificate.
Check the [example](examples/tls/README.md)
Check the [example](../../examples/tls-termination/nginx)
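For reference, the secret the Ingress points at is an ordinary `kubernetes.io/tls` secret. Below is a hedged client-go sketch of creating one programmatically (names and file paths are placeholders, and older client-go releases expose the same types under `k8s.io/client-go/pkg/api/v1`):

```go
package example

import (
	"io/ioutil"
	"log"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTLSSecret stores a PEM certificate/key pair as a kubernetes.io/tls
// secret; the Ingress then references it by name under spec.tls[].secretName.
func createTLSSecret(client kubernetes.Interface, namespace, name, certFile, keyFile string) error {
	cert, err := ioutil.ReadFile(certFile)
	if err != nil {
		return err
	}
	key, err := ioutil.ReadFile(keyFile)
	if err != nil {
		return err
	}
	secret := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Type:       v1.SecretTypeTLS,
		Data: map[string][]byte{
			v1.TLSCertKey:       cert, // "tls.crt"
			v1.TLSPrivateKeyKey: key,  // "tls.key"
		},
	}
	if _, err := client.CoreV1().Secrets(namespace).Create(secret); err != nil {
		log.Printf("creating secret %s/%s: %v", namespace, name, err)
		return err
	}
	return nil
}
```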
### Default SSL Certificate
@ -297,8 +297,8 @@ To disable this behavior use `hsts=false` in the NGINX config map.
### Automated Certificate Management with Kube-Lego
[Kube-Lego] automatically requests missing certificates or expired from
[Let's Encrypt] by monitoring ingress resources and its referenced secrets. To
[Kube-Lego] automatically requests missing or expired certificates from
[Let's Encrypt] by monitoring ingress resources and their referenced secrets. To
enable this for an ingress resource you have to add an annotation:
```
@ -432,7 +432,7 @@ Description:
### Local cluster
Using [`hack/local-up-cluster.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh) it is possible to start a local Kubernetes cluster consisting of a master and a single node. Please read [running-locally.md](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/running-locally.md) for more details.
Using [`hack/local-up-cluster.sh`](https://github.com/kubernetes/kubernetes/blob/master/hack/local-up-cluster.sh) it is possible to start a local Kubernetes cluster consisting of a master and a single node. Please read [running-locally.md](https://github.com/kubernetes/community/blob/master/contributors/devel/running-locally.md) for more details.
Use of `hostNetwork: true` in the ingress controller is required so that it can fall back to localhost:8080 for the apiserver if every other client creation check fails (eg: service account not present, kubeconfig doesn't exist, no master env vars...).
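A hedged sketch of the fallback this describes (not this repository's exact bootstrap code; the function name is illustrative):

```go
package example

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildClient prefers in-cluster configuration; when every other client
// creation check fails (no service account token, no kubeconfig, no master
// env vars) it falls back to the insecure apiserver at localhost:8080 that
// local-up-cluster.sh exposes. With hostNetwork: true, localhost inside the
// controller pod is the node itself, so the fallback can actually connect.
func buildClient(kubeconfig string) (*kubernetes.Clientset, error) {
	if cfg, err := rest.InClusterConfig(); err == nil {
		return kubernetes.NewForConfig(cfg)
	}
	if kubeconfig != "" {
		if cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig); err == nil {
			return kubernetes.NewForConfig(cfg)
		}
	}
	cfg, err := clientcmd.BuildConfigFromFlags("http://localhost:8080", "")
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(cfg)
}
```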

View file

@ -7,6 +7,7 @@
* [Authentication](#authentication)
* [Rewrite](#rewrite)
* [Rate limiting](#rate-limiting)
* [SSL Passthrough](#ssl-passthrough)
* [Secure backends](#secure-backends)
* [Server-side HTTPS enforcement through redirect](#server-side-https-enforcement-through-redirect)
* [Whitelist source range](#whitelist-source-range)
@ -210,6 +211,13 @@ The annotations `ingress.kubernetes.io/limit-connections`, `ingress.kubernetes.i
If you specify multiple of these annotations in a single Ingress rule, `limit-rpm` takes precedence, then `limit-rps`.
The annotations `ingress.kubernetes.io/limit-rate` and `ingress.kubernetes.io/limit-rate-after` define a limit on the rate of response transmission to a client. The rate is specified in bytes per second. The zero value disables rate limiting. The limit is set per request, so if a client simultaneously opens two connections, the overall rate will be twice the specified limit.
`ingress.kubernetes.io/limit-rate-after`: sets the initial amount of data after which further transmission of a response to a client is rate limited.
`ingress.kubernetes.io/limit-rate`: the maximum rate at which a response is transmitted to a client.
To configure these settings globally for all Ingress rules, the `limit-rate-after` and `limit-rate` values may be set in the NGINX ConfigMap. Values set in an Ingress annotation override the global setting.
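A simplified Go sketch of what these two annotations turn into (it mirrors the template helper added further down in this commit; note the values are emitted with a `k` suffix, i.e. kilobytes):

```go
package example

import "fmt"

// limitRateDirectives maps the limit-rate-after / limit-rate annotation values
// to the NGINX directives rendered into each location block. A zero value
// omits the directive, which disables that limit.
func limitRateDirectives(limitRateAfter, limitRate int) []string {
	var directives []string
	if limitRateAfter > 0 {
		// Throttling starts only after this many kilobytes have been sent.
		directives = append(directives, fmt.Sprintf("limit_rate_after %vk;", limitRateAfter))
	}
	if limitRate > 0 {
		// Cap on the response transfer rate for a single connection.
		directives = append(directives, fmt.Sprintf("limit_rate %vk;", limitRate))
	}
	return directives
}
```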
### SSL Passthrough

View file

@ -268,6 +268,10 @@ func (n NGINXController) BackendDefaults() defaults.Backend {
// printDiff returns the difference between the running configuration
// and the new one
func (n NGINXController) printDiff(data []byte) {
if !glog.V(2) {
return
}
in, err := os.Open(cfgPath)
if err != nil {
return
@ -296,10 +300,9 @@ func (n NGINXController) printDiff(data []byte) {
return
}
if glog.V(2) {
glog.Infof("NGINX configuration diff\n")
glog.Infof("%v", string(diffOutput))
}
glog.Infof("NGINX configuration diff\n")
glog.Infof("%v", string(diffOutput))
os.Remove(tmpfile.Name())
}
}
@ -401,7 +404,7 @@ func (n *NGINXController) UpdateIngressStatus(*extensions.Ingress) []api_v1.Load
return nil
}
// OnUpdate is called by syncQueue in https://github.com/aledbf/ingress-controller/blob/master/pkg/ingress/controller/controller.go#L82
// OnUpdate is called by syncQueue in https://github.com/kubernetes/ingress/blob/master/core/pkg/ingress/controller/controller.go#L426
// periodically to keep the configuration in sync.
//
// convert configmap to custom configuration object (different in each implementation)
@ -410,15 +413,6 @@ func (n *NGINXController) UpdateIngressStatus(*extensions.Ingress) []api_v1.Load
// returning nil implies the backend will be reloaded.
// if an error is returned, the update is requeued
func (n *NGINXController) OnUpdate(ingressCfg ingress.Configuration) error {
var longestName int
var serverNameBytes int
for _, srv := range ingressCfg.Servers {
if longestName < len(srv.Hostname) {
longestName = len(srv.Hostname)
}
serverNameBytes += len(srv.Hostname)
}
cfg := ngx_template.ReadConfig(n.configmap.Data)
cfg.Resolver = n.resolver
@ -465,14 +459,22 @@ func (n *NGINXController) OnUpdate(ingressCfg ingress.Configuration) error {
n.setupMonitor(defaultStatusModule)
}
// NGINX cannot resize the has tables used to store server names.
// NGINX cannot resize the hash tables used to store server names.
// For this reason we check if the size defined is correct
// for the FQDNs defined in the ingress rules, adjusting the value
// if required.
// https://trac.nginx.org/nginx/ticket/352
// https://trac.nginx.org/nginx/ticket/631
nameHashBucketSize := nginxHashBucketSize(longestName)
var longestName int
var serverNameBytes int
for _, srv := range ingressCfg.Servers {
if longestName < len(srv.Hostname) {
longestName = len(srv.Hostname)
}
serverNameBytes += len(srv.Hostname)
}
if cfg.ServerNameHashBucketSize == 0 {
nameHashBucketSize := nginxHashBucketSize(longestName)
glog.V(3).Infof("adjusting ServerNameHashBucketSize variable to %v", nameHashBucketSize)
cfg.ServerNameHashBucketSize = nameHashBucketSize
}

View file

@ -387,6 +387,8 @@ func NewDefault() Configuration {
CustomHTTPErrors: []int{},
WhitelistSourceRange: []string{},
SkipAccessLogURLs: []string{},
LimitRate: 0,
LimitRateAfter: 0,
},
UpstreamKeepaliveConnections: 0,
LimitConnZoneVariable: defaultLimitConnZoneVariable,

View file

@ -407,6 +407,18 @@ func buildRateLimit(input interface{}) []string {
limits = append(limits, limit)
}
if loc.RateLimit.LimitRateAfter > 0 {
limit := fmt.Sprintf("limit_rate_after %vk;",
loc.RateLimit.LimitRateAfter)
limits = append(limits, limit)
}
if loc.RateLimit.LimitRate > 0 {
limit := fmt.Sprintf("limit_rate %vk;",
loc.RateLimit.LimitRate)
limits = append(limits, limit)
}
return limits
}

View file

@ -27,18 +27,15 @@ events {
http {
{{/* we use the value of the header X-Forwarded-For to be able to use the geo_ip module */}}
{{ if $cfg.UseProxyProtocol }}
{{ range $trusted_ip := $cfg.ProxyRealIPCIDR }}
set_real_ip_from {{ $trusted_ip }};
{{ end }}
real_ip_header proxy_protocol;
{{ else }}
{{ range $trusted_ip := $cfg.ProxyRealIPCIDR }}
set_real_ip_from {{ $trusted_ip }};
{{ end }}
real_ip_header X-Forwarded-For;
{{ end }}
real_ip_recursive on;
{{ range $trusted_ip := $cfg.ProxyRealIPCIDR }}
set_real_ip_from {{ $trusted_ip }};
{{ end }}
{{/* databases used to determine the country depending on the client IP address */}}
{{/* http://nginx.org/en/docs/http/ngx_http_geoip_module.html */}}
@ -156,7 +153,7 @@ http {
{{ else }}
map $http_x_forwarded_for $the_real_ip {
default $http_x_forwarded_for;
'' $remote_addr;
'' $realip_remote_addr;
}
{{ end }}
@ -567,13 +564,15 @@ stream {
{{ if not (empty $authPath) }}
# this location requires authentication
auth_request {{ $authPath }};
auth_request_set $auth_cookie $upstream_http_set_cookie;
add_header Set-Cookie $auth_cookie;
{{- range $idx, $line := buildAuthResponseHeaders $location }}
{{ $line }}
{{- end }}
{{ end }}
{{ if not (empty $location.ExternalAuth.SigninURL) }}
error_page 401 = {{ $location.ExternalAuth.SigninURL }};
error_page 401 = {{ $location.ExternalAuth.SigninURL }}?rd=$request_uri;
{{ end }}