git mv Ingress ingress

Prashanth Balasubramanian 2016-02-21 16:13:08 -08:00
parent 34b949c134
commit 3da4e74e5a
2185 changed files with 754743 additions and 0 deletions


@@ -0,0 +1,34 @@
# Copyright 2015 The Kubernetes Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# TODO: use radial/busyboxplus:curl or alpine instead
FROM ubuntu:14.04
MAINTAINER Prashanth B <beeps@google.com>
# so apt-get doesn't complain
ENV DEBIAN_FRONTEND=noninteractive
RUN sed -i 's/^exit 101/exit 0/' /usr/sbin/policy-rc.d
# TODO: Move to using haproxy:1.5 image instead. Honestly,
# that image isn't much smaller and the convenience of having
# an ubuntu container for dev purposes trumps the tiny amounts
# of disk and bandwidth we'd save in doing so.
RUN \
apt-get update && \
apt-get install -y ca-certificates && \
apt-get install -y curl && \
rm -rf /var/lib/apt/lists/*
ADD glbc glbc
ENTRYPOINT ["/glbc"]

17
controllers/gce/Makefile Normal file

@@ -0,0 +1,17 @@
all: push

# 0.0 shouldn't clobber any released builds
TAG = 0.6.0
PREFIX = gcr.io/google_containers/glbc

server:
	CGO_ENABLED=0 GOOS=linux godep go build -a -installsuffix cgo -ldflags '-w' -o glbc *.go

container: server
	docker build -t $(PREFIX):$(TAG) .

push: container
	gcloud docker push $(PREFIX):$(TAG)

clean:
	rm -f glbc

448
controllers/gce/README.md Normal file

@@ -0,0 +1,448 @@
# GLBC
GLBC is a GCE L7 load balancer controller that manages external loadbalancers configured through the Kubernetes Ingress API.
## Disclaimer
- This is a **work in progress**.
- It relies on an experimental Kubernetes resource.
- The loadbalancer controller pod is not aware of your GCE quota.
## Overview
__A reminder on GCE L7__: Google Compute Engine does not have a single resource that represents a L7 loadbalancer. When a user request comes in, it is first handled by the global forwarding rule, which sends the traffic to an HTTP proxy service that sends the traffic to a URL map that parses the URL to see which backend service will handle the request. Each backend service is assigned a set of virtual machine instances grouped into instance groups.
__A reminder on Services__: A Kubernetes Service defines a set of pods and a means by which to access them, such as a single stable IP address and a corresponding DNS name. This IP defaults to a cluster VIP in a private address range. You can direct ingress traffic to a particular Service by setting its `Type` to NodePort or LoadBalancer. NodePort opens up a port on *every* node in your cluster and proxies traffic to the endpoints of your service, while LoadBalancer allocates an L4 cloud loadbalancer.
### L7 Load balancing on Kubernetes
To achieve L7 loadbalancing through Kubernetes, we employ a resource called `Ingress`. The Ingress is consumed by this loadbalancer controller, which creates the following GCE resource graph:
[Global Forwarding Rule](https://cloud.google.com/compute/docs/load-balancing/http/global-forwarding-rules) -> [TargetHttpProxy](https://cloud.google.com/compute/docs/load-balancing/http/target-proxies) -> [Url Map](https://cloud.google.com/compute/docs/load-balancing/http/url-map) -> [Backend Service](https://cloud.google.com/compute/docs/load-balancing/http/backend-service) -> [Instance Group](https://cloud.google.com/compute/docs/instance-groups/)
The controller (glbc) manages the lifecycle of each component in the graph. It uses the Kubernetes resources as a spec for the desired state, and the GCE cloud resources as the observed state, and drives the observed to the desired. If an edge is disconnected, it fixes it. Each Ingress translates to a new GCE L7, and the rules on the Ingress become paths in the GCE Url Map. This allows you to route traffic to various backend Kubernetes Services through a single public IP, which is in contrast to `Type=LoadBalancer`, which allocates a public IP *per* Kubernetes Service. For this to work, the Kubernetes Service *must* have Type=NodePort.
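For concreteness, a `Type=NodePort` Service that such an Ingress could reference might look like the following (a minimal sketch; the name `test` matches the backend used in the example Ingress below, and the selector/port values are illustrative):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: test
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: test
```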
### The Ingress
An Ingress in Kubernetes is a REST object, similar to a Service. A minimal Ingress might look like:
```yaml
01. apiVersion: extensions/v1beta1
02. kind: Ingress
03. metadata:
04. name: hostlessendpoint
05. spec:
06. rules:
07. - http:
08. paths:
09. - path: /hostless
10. backend:
11. serviceName: test
12. servicePort: 80
```
POSTing this to the Kubernetes API server would result in glbc creating a GCE L7 that routes all traffic sent to `http://ip-of-loadbalancer/hostless` to :80 of the service named `test`. If the service doesn't exist yet, or doesn't have a nodePort, glbc will allocate an IP and wait till it does. Once the Service shows up, it will create the required path rules to route traffic to it.
__Lines 1-4__: Resource metadata used to tag GCE resources. For example, if you go to the console you would see a forwarding rule called k8-fw-default-hostlessendpoint, where default is the namespace and hostlessendpoint is the name of the resource. The Kubernetes API server ensures that namespace/name is unique, so there will never be any collisions.
__Lines 5-7__: Ingress Spec has all the information needed to configure a GCE L7. Most importantly, it contains a list of `rules`. A rule can take many forms, but the only rule relevant to glbc is the `http` rule.
__Lines 8-9__: Each http rule contains the following information: A host (eg: foo.bar.com, defaults to `*` in this example), a list of paths (eg: `/hostless`) each of which has an associated backend (`test:80`). Both the `host` and `path` must match the content of an incoming request before the L7 directs traffic to the `backend`.
__Lines 10-12__: A `backend` is a service:port combination. It selects a group of pods capable of servicing traffic sent to the path specified in the parent rule.
__Global Parameters__: For the sake of simplicity the example Ingress has no global parameters. However, one can specify a default backend (see examples below); in its absence, requests that don't match any path in the spec are sent to glbc's default backend. Though glbc doesn't support HTTPS yet, security configs would also be global.
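For illustration, an Ingress that sets a global default backend might look like the following (a sketch; the echoheaders service names are the ones used later in this document and must exist as NodePort services):
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echomap-with-default
spec:
  # requests that match no rule/path below are sent here
  backend:
    serviceName: echoheadersdefault
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: echoheadersx
          servicePort: 80
```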
## Load Balancer Management
You can manage a GCE L7 by creating/updating/deleting the associated Kubernetes Ingress.
### Creation
Before you can start creating Ingress you need to start up glbc. We can use the rc.yaml in this directory:
```shell
$ kubectl create -f rc.yaml
replicationcontroller "glbc" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
glbc-6m6b6 2/2 Running 0 21s
```
A couple of things to note about this controller:
* It needs a service with a node port to use as the default backend. This is the backend that's used when an Ingress does not specify the default.
* It has an intentionally long terminationGracePeriod; this is only required with the --delete-all-on-quit flag (see [Deletion](#deletion) and the sketch after this list)
* Don't start 2 instances of the controller in a single cluster; they will fight each other.
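As an illustration, the parts of rc.yaml these notes refer to might look roughly like the sketch below (not the full manifest from this directory; the `--default-backend-service` flag name is an assumption about how the default backend is wired up):
```yaml
spec:
  template:
    spec:
      # long grace period so --delete-all-on-quit has time to tear down cloud resources
      terminationGracePeriodSeconds: 600
      containers:
      - name: l7-lb-controller
        image: gcr.io/google_containers/glbc:0.5
        args:
        # auto quit requires a high termination grace period
        - --delete-all-on-quit=true
        # assumed flag: namespace/name of the NodePort Service used as the default backend
        - --default-backend-service=default/default-http-backend
```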
The loadbalancer controller will watch for Services, Nodes and Ingress. Nodes already exist (the nodes in your cluster). We need to create the other 2. You can do so using the ingress-app.yaml in this directory.
A couple of things to note about the Ingress:
* It creates a Replication Controller for a simple echoserver application, with 1 replica.
* It creates 3 services for the same application pod: echoheaders[x, y, default]
* It creates an Ingress with 2 hostnames and 3 endpoints (foo.bar.com{/foo} and bar.baz.com{/foo, /bar}) that access the given service
```shell
$ kubectl create -f ingress-app.yaml
$ kubectl get svc
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
echoheadersdefault 10.0.43.119 nodes 80/TCP app=echoheaders 16m
echoheadersx 10.0.126.10 nodes 80/TCP app=echoheaders 16m
echoheadersy 10.0.134.238 nodes 80/TCP app=echoheaders 16m
kubernetes 10.0.0.1 <none> 443/TCP <none> 21h
$ kubectl get ing
NAME RULE BACKEND ADDRESS
echomap - echoheadersdefault:80
foo.bar.com
/foo echoheadersx:80
bar.baz.com
/bar echoheadersy:80
/foo echoheadersx:80
```
You can tail the logs of the controller to observe its progress:
```
$ kubectl logs --follow glbc-6m6b6 l7-lb-controller
I1005 22:11:26.731845 1 instances.go:48] Creating instance group k8-ig-foo
I1005 22:11:34.360689 1 controller.go:152] Created new loadbalancer controller
I1005 22:11:34.360737 1 controller.go:172] Starting loadbalancer controller
I1005 22:11:34.380757 1 controller.go:206] Syncing default/echomap
I1005 22:11:34.380763 1 loadbalancer.go:134] Syncing loadbalancers [default/echomap]
I1005 22:11:34.380810 1 loadbalancer.go:100] Creating l7 default-echomap
I1005 22:11:34.385161 1 utils.go:83] Syncing e2e-test-beeps-minion-ugv1
...
```
When it's done, it will update the status of the Ingress with the ip of the L7 it created:
```shell
$ kubectl get ing
NAME RULE BACKEND ADDRESS
echomap - echoheadersdefault:80 107.178.254.239
foo.bar.com
/foo echoheadersx:80
bar.baz.com
/bar echoheadersy:80
/foo echoheadersx:80
```
Go to your GCE console and confirm that the following resources have been created through the HTTPLoadbalancing panel:
* A Global Forwarding Rule
* An UrlMap
* A TargetHTTPProxy
* BackendServices (one for each Kubernetes nodePort service)
* An Instance Group (with ports corresponding to the BackendServices)
The HTTPLoadBalancing panel will also show you whether your backends have responded to the health checks; wait till they do. This can take a few minutes. If you see `Health status will display here once configuration is complete.` the L7 is still bootstrapping. Wait till you have `Healthy instances: X`. Even though the GCE L7 is driven by our controller, which notices the Kubernetes health checks of a pod, we still need to wait on the first GCE L7 health check to complete. Once your backends are up and healthy:
```shell
$ curl --resolve foo.bar.com:80:107.178.254.239 http://foo.bar.com/foo
CLIENT VALUES:
client_address=('10.240.29.196', 56401) (10.240.29.196)
command=GET
path=/echoheadersx
real path=/echoheadersx
query=
request_version=HTTP/1.1
SERVER VALUES:
server_version=BaseHTTP/0.6
sys_version=Python/3.4.3
protocol_version=HTTP/1.0
HEADERS RECEIVED:
Accept=*/*
Connection=Keep-Alive
Host=107.178.254.239
User-Agent=curl/7.35.0
Via=1.1 google
X-Forwarded-For=216.239.45.73, 107.178.254.239
X-Forwarded-Proto=http
```
You can also edit `/etc/hosts` instead of using `--resolve`.
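For example, adding a line like the following to `/etc/hosts` has the same effect as `--resolve` (use the IP from the Ingress ADDRESS column):
```
107.178.254.239 foo.bar.com bar.baz.com
```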
#### Updates
Say you don't want a default backend and you'd like to allow all traffic hitting your loadbalancer at /foo to reach your echoheaders backend service, not just the traffic for foo.bar.com. You can modify the Ingress Spec:
```yaml
spec:
rules:
- http:
paths:
- path: /foo
..
```
and replace the existing Ingress (ignore errors about replacing the Service; we're using the same .yaml file but we only care about the Ingress):
```
$ kubectl replace -f ingress-app.yaml
ingress "echomap" replaced
$ curl http://107.178.254.239/foo
CLIENT VALUES:
client_address=('10.240.143.179', 59546) (10.240.143.179)
command=GET
path=/foo
real path=/foo
...
$ curl http://107.178.254.239/
<pre>
INTRODUCTION
============
This is an nginx webserver for simple loadbalancer testing. It works well
for me but it might not have some of the features you want. If you would
...
```
A couple of things to note about this particular update:
* An Ingress without a default backend inherits the backend of the Ingress controller.
* An IngressRule without a host gets the wildcard. This is controller specific; some loadbalancer controllers do not respect anything but a DNS subdomain as the host. You *cannot* set the host to a regex.
* You never want to delete then re-create an Ingress, as it will result in the controller tearing down and recreating the loadbalancer.
__Unexpected updates__: Since glbc constantly runs a control loop, it won't allow you to break links that would black hole traffic. An easy link to break is the url map itself, but you can also disconnect a target proxy from the urlmap, or remove an instance from the instance group (note this is different from *deleting* the instance; the loadbalancer controller will not recreate it if you do so). Modify one of the url links in the map to point to another backend through the GCE Control Panel UI, and wait for the controller to sync (this happens as frequently as you tell it to, via the --resync-period flag). The same goes for the Kubernetes side of things: the API server will validate against obviously bad updates, but if you relink an Ingress so it points to the wrong backends the controller will blindly follow.
### Paths
Till now, our examples were simplified in that they hit an endpoint with a catch-all path regex. Most real world backends have subresources. Let's create a service to test how the loadbalancer handles paths:
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: nginxtest
spec:
replicas: 1
template:
metadata:
labels:
app: nginxtest
spec:
containers:
- name: nginxtest
image: bprashanth/nginxtest:1.0
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginxtest
labels:
app: nginxtest
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: nginxtest
```
Running kubectl create against this manifest will give you a service with multiple endpoints:
```shell
$ kubectl get svc nginxtest -o yaml | grep -i nodeport:
nodePort: 30404
$ curl nodeip:30404/
ENDPOINTS
=========
<a href="hostname">hostname</a>: An endpoint to query the hostname.
<a href="stress">stress</a>: An endpoint to stress the host.
<a href="fs/index.html">fs</a>: A file system for static content.
```
You can put the nodeip:port into your browser and play around with the endpoints so you're familiar with what to expect. We will test the `/hostname` and `/fs/files/nginx.html` endpoints. Modify/create your Ingress:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginxtest-ingress
spec:
rules:
- http:
paths:
- path: /hostname
backend:
serviceName: nginxtest
servicePort: 80
```
And check the endpoint (you will have to wait till the update takes effect, this could be a few minutes):
```shell
$ kubectl replace -f ingress.yaml
$ curl loadbalancerip/hostname
nginx-tester-pod-name
```
Note what just happened: the endpoint exposes /hostname, and the loadbalancer forwarded the entire matching url to the endpoint. This means that if you had '/foo' in the Ingress and tried accessing /foo/hostname, your endpoint would've received /foo/hostname and not known how to route it. Now update the Ingress to access static content via the /fs endpoint:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginxtest-ingress
spec:
rules:
- http:
paths:
- path: /fs/*
backend:
serviceName: nginxtest
servicePort: 80
```
As before, wait a while for the update to take effect, and try accessing `loadbalancerip/fs/files/nginx.html`.
#### Deletion
Most production loadbalancers live as long as the nodes in the cluster and are torn down when the nodes are destroyed. That said, there are plenty of use cases for deleting an Ingress, deleting a loadbalancer controller, or just purging external loadbalancer resources altogether. Deleting a loadbalancer controller pod will not affect the loadbalancers themselves; this way your backends won't suffer a loss of availability if the scheduler pre-empts your controller pod. Deleting a single loadbalancer is as easy as deleting an Ingress via kubectl:
```shell
$ kubectl delete ing echomap
$ kubectl logs --follow glbc-6m6b6 l7-lb-controller
I1007 00:25:45.099429 1 loadbalancer.go:144] Deleting lb default-echomap
I1007 00:25:45.099432 1 loadbalancer.go:437] Deleting global forwarding rule k8-fw-default-echomap
I1007 00:25:54.885823 1 loadbalancer.go:444] Deleting target proxy k8-tp-default-echomap
I1007 00:25:58.446941 1 loadbalancer.go:451] Deleting url map k8-um-default-echomap
I1007 00:26:02.043065 1 backends.go:176] Deleting backends []
I1007 00:26:02.043188 1 backends.go:134] Deleting backend k8-be-30301
I1007 00:26:05.591140 1 backends.go:134] Deleting backend k8-be-30284
I1007 00:26:09.159016 1 controller.go:232] Finished syncing default/echomap
```
Note that it takes ~30 seconds to purge cloud resources; the API calls to create and delete are a one-time cost. GCE BackendServices are ref-counted and deleted by the controller as you delete Kubernetes Ingress'. This is not sufficient for cleanup, because you might have deleted the Ingress while glbc was down, in which case it would leak cloud resources. You can delete the glbc and purge cloud resources in 2 more ways:
__The dev/test way__: If you want to delete everything in the cloud when the loadbalancer controller pod dies, start it with the --delete-all-on-quit flag. When a pod is killed it's first sent a SIGTERM, followed by a grace period (set to 10 minutes for loadbalancer controllers), followed by a SIGKILL. The controller pod uses this time to delete cloud resources. Be careful with --delete-all-on-quit, because if you're running a production glbc and the scheduler re-schedules your pod for some reason, it will result in a loss of availability. You can do this because your rc.yaml has:
```yaml
args:
# auto quit requires a high termination grace period.
- --delete-all-on-quit=true
```
So simply delete the replication controller:
```shell
$ kubectl get rc glbc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
glbc default-http-backend gcr.io/google_containers/defaultbackend:1.0 k8s-app=glbc,version=v0.5 1 2m
l7-lb-controller gcr.io/google_containers/glbc:0.5
$ kubectl delete rc glbc
replicationcontroller "glbc" deleted
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
glbc-6m6b6 1/1 Terminating 0 13m
```
__The prod way__: If you didn't start the controller with `--delete-all-on-quit`, you can execute a GET on the `/delete-all-and-quit` endpoint. This endpoint is deliberately not exported.
```
$ kubectl exec -it glbc-6m6b6 -- curl http://localhost:8081/delete-all-and-quit
..Hangs till quit is done..
$ kubectl logs glbc-6m6b6 --follow
I1007 00:26:09.159016 1 controller.go:232] Finished syncing default/echomap
I1007 00:29:30.321419 1 controller.go:192] Shutting down controller queues.
I1007 00:29:30.321970 1 controller.go:199] Shutting down cluster manager.
I1007 00:29:30.321574 1 controller.go:178] Shutting down Loadbalancer Controller
I1007 00:29:30.322378 1 main.go:160] Handled quit, awaiting pod deletion.
I1007 00:29:30.321977 1 loadbalancer.go:154] Creating loadbalancers []
I1007 00:29:30.322617 1 loadbalancer.go:192] Loadbalancer pool shutdown.
I1007 00:29:30.322622 1 backends.go:176] Deleting backends []
I1007 00:30:00.322528 1 main.go:160] Handled quit, awaiting pod deletion.
I1007 00:30:30.322751 1 main.go:160] Handled quit, awaiting pod deletion
```
You just instructed the loadbalancer controller to quit; however, if it had done so, the replication controller would've just created another pod, so it waits around till you delete the rc.
#### Health checks
Currently, all service backends must respond with a 200 on '/'. The content does not matter. If they fail to do so they will be deemed unhealthy by the GCE L7. This limitation is because there are 2 sets of health checks:
1. From the kubernetes endpoints, taking the form of liveness/readiness probes
2. From the GCE L7, which periodically pings '/'
We really want (1) to control the health of an instance but (2) is a GCE requirement. Ideally, we would point (2) at (1), but we still need (2) for pods that don't have a defined health check. This will probably get resolved when Ingress grows up.
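For instance, a backend pod that satisfies both sets of checks might declare a readiness probe against '/' in its pod spec (a minimal sketch; the container name, image and port are illustrative):
```yaml
spec:
  containers:
  - name: echoheaders
    image: gcr.io/google_containers/echoserver:1.0
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        # the GCE L7 health check also expects a 200 from this path
        path: /
        port: 8080
```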
## Troubleshooting:
This controller is complicated because it exposes a tangled set of external resources as a single logical abstraction. It's recommended that you are at least *aware* of how one creates a GCE L7 [without a kubernetes Ingress](https://cloud.google.com/container-engine/docs/tutorials/http-balancer). If weird things happen, here are some basic debugging guidelines:
* Check loadbalancer controller pod logs via kubectl
A typical sign of trouble is repeated retries in the logs:
```shell
I1006 18:58:53.451869 1 loadbalancer.go:268] Forwarding rule k8-fw-default-echomap already exists
I1006 18:58:53.451955 1 backends.go:162] Syncing backends [30301 30284 30301]
I1006 18:58:53.451998 1 backends.go:134] Deleting backend k8-be-30302
E1006 18:58:57.029253 1 utils.go:71] Requeuing default/echomap, err googleapi: Error 400: The backendService resource 'projects/Kubernetesdev/global/backendServices/k8-be-30302' is already being used by 'projects/Kubernetesdev/global/urlMaps/k8-um-default-echomap'
I1006 18:58:57.029336 1 utils.go:83] Syncing default/echomap
```
This could be a bug or quota limitation. In the case of the former, please head over to slack or github.
* If you see a GET hanging, followed by a 502 with the following response:
```
<html><head>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<title>502 Server Error</title>
</head>
<body text=#000000 bgcolor=#ffffff>
<h1>Error: Server Error</h1>
<h2>The server encountered a temporary error and could not complete your request.<p>Please try again in 30 seconds.</h2>
<h2></h2>
</body></html>
```
The loadbalancer is probably bootstrapping itself.
* If a GET responds with a 404 and the following response:
```
<a href=//www.google.com/><span id=logo aria-label=Google></span></a>
<p><b>404.</b> <ins>That's an error.</ins>
<p>The requested URL <code>/hostless</code> was not found on this server. <ins>That's all we know.</ins>
```
It means you have lost your IP somehow, or just typed in the wrong IP.
* If you see requests taking an abnormal amount of time, run the echoheaders pod and look for the client address
```shell
CLIENT VALUES:
client_address=('10.240.29.196', 56401) (10.240.29.196)
```
Then head over to the GCE node with internal ip 10.240.29.196 and check that the [Service is functioning](https://github.com/kubernetes/kubernetes/blob/release-1.0/docs/user-guide/debugging-services.md) as expected. Remember that the GCE L7 is routing you through the NodePort service, and try to trace back.
* Check if you can access the backend service directly via nodeip:nodeport
* Check the GCE console
* Make sure you only have a single loadbalancer controller running
* Make sure the initial GCE health checks have passed
* A crash loop looks like:
```shell
$ kubectl get pods
glbc-fjtlq 0/1 CrashLoopBackOff 17 1h
```
If you hit that it means the controller isn't even starting. Re-check your input flags, especially the required ones.
## GCELBC Implementation Details
For the curious, here is a high level overview of how the GCE LoadBalancer controller manages cloud resources.
The controller manages cloud resources through a notion of pools. Each pool is the representation of the last known state of a logical cloud resource. Pools are periodically synced with the desired state, as reflected by the Kubernetes api. When you create a new Ingress, the following happens:
* Create BackendServices for each Kubernetes backend in the Ingress, through the backend pool.
* Add nodePorts for each BackendService to an Instance Group with all the instances in your cluster, through the instance pool.
* Create a UrlMap, TargetHttpProxy, Global Forwarding Rule through the loadbalancer pool.
* Update the loadbalancer's urlmap according to the Ingress.
Periodically, each pool checks that it has a valid connection to the next hop in the above resource graph. So for example, the backend pool will check that each backend is connected to the instance group and that the node ports match, the instance group will check that all the Kubernetes nodes are a part of the instance group, and so on. Since Backends are a limited resource, they're shared (well, everything is limited by your quota; this applies doubly to backend services). This means you can set up N Ingress' exposing M services through different paths and the controller will only create M backends, as the example below illustrates. When all the Ingress' are deleted, the backend pool GCs the backend.
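As a concrete illustration of backend sharing, the two hypothetical Ingresses below both route to `nginxtest:80` through different paths; the controller would create a single shared BackendService for that nodePort and only garbage collect it once neither Ingress references it:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site-a
spec:
  rules:
  - http:
      paths:
      - path: /a
        backend:
          serviceName: nginxtest
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: site-b
spec:
  rules:
  - http:
      paths:
      - path: /b
        backend:
          serviceName: nginxtest
          servicePort: 80
```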
## Wishlist:
* E2e, integration tests
* Better events
* Detect leaked resources even if the Ingress has been deleted when the controller isn't around
* Specify health checks (currently we just rely on kubernetes service/pod liveness probes and force pods to have a `/` endpoint that responds with 200 for GCE)
* Alleviate the NodePort requirement for Service Type=LoadBalancer.
* Async pool management of backends/L7s etc
* Retry back-off when GCE Quota is done
* GCE Quota integration
* HTTP support as the Ingress grows
* More aggressive resource sharing
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/contrib/service-loadbalancer/gce/README.md?pixel)]()


@@ -0,0 +1,242 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backends
import (
"fmt"
"net/http"
"strconv"
"k8s.io/kubernetes/pkg/util/sets"
"github.com/golang/glog"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
"k8s.io/contrib/ingress/controllers/gce/instances"
"k8s.io/contrib/ingress/controllers/gce/storage"
"k8s.io/contrib/ingress/controllers/gce/utils"
)
// Backends implements BackendPool.
type Backends struct {
cloud BackendServices
nodePool instances.NodePool
healthChecker healthchecks.HealthChecker
snapshotter storage.Snapshotter
namer utils.Namer
}
func portKey(port int64) string {
return fmt.Sprintf("%d", port)
}
// NewBackendPool returns a new backend pool.
// - cloud: implements BackendServices and syncs backends with a cloud provider
// - nodePool: implements NodePool, used to create/delete new instance groups.
func NewBackendPool(
cloud BackendServices,
healthChecker healthchecks.HealthChecker,
nodePool instances.NodePool, namer utils.Namer) *Backends {
return &Backends{
cloud: cloud,
nodePool: nodePool,
snapshotter: storage.NewInMemoryPool(),
healthChecker: healthChecker,
namer: namer,
}
}
// Get returns a single backend.
func (b *Backends) Get(port int64) (*compute.BackendService, error) {
be, err := b.cloud.GetBackendService(b.namer.BeName(port))
if err != nil {
return nil, err
}
b.snapshotter.Add(portKey(port), be)
return be, nil
}
func (b *Backends) create(ig *compute.InstanceGroup, namedPort *compute.NamedPort, name string) (*compute.BackendService, error) {
// Create a new health check
if err := b.healthChecker.Add(namedPort.Port, ""); err != nil {
return nil, err
}
hc, err := b.healthChecker.Get(namedPort.Port)
if err != nil {
return nil, err
}
// Create a new backend
backend := &compute.BackendService{
Name: name,
Protocol: "HTTP",
Backends: []*compute.Backend{
{
Group: ig.SelfLink,
},
},
// Api expects one, means little to kubernetes.
HealthChecks: []string{hc.SelfLink},
Port: namedPort.Port,
PortName: namedPort.Name,
}
if err := b.cloud.CreateBackendService(backend); err != nil {
return nil, err
}
return b.Get(namedPort.Port)
}
// Add will get or create a Backend for the given port.
func (b *Backends) Add(port int64) error {
// We must track the port even if creating the backend failed, because
// we might've created a health-check for it.
be := &compute.BackendService{}
defer func() { b.snapshotter.Add(portKey(port), be) }()
ig, namedPort, err := b.nodePool.AddInstanceGroup(b.namer.IGName(), port)
if err != nil {
return err
}
be, _ = b.Get(port)
if be == nil {
glog.Infof("Creating backend for instance group %v port %v named port %v",
ig.Name, port, namedPort)
be, err = b.create(ig, namedPort, b.namer.BeName(port))
if err != nil {
return err
}
}
if err := b.edgeHop(be, ig); err != nil {
return err
}
return err
}
// Delete deletes the Backend for the given port.
func (b *Backends) Delete(port int64) (err error) {
name := b.namer.BeName(port)
glog.Infof("Deleting backend %v", name)
defer func() {
if utils.IsHTTPErrorCode(err, http.StatusNotFound) {
err = nil
}
if err == nil {
b.snapshotter.Delete(portKey(port))
}
}()
// Try deleting health checks even if a backend is not found.
if err = b.cloud.DeleteBackendService(name); err != nil &&
!utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
if err = b.healthChecker.Delete(port); err != nil &&
!utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
return nil
}
// List lists all backends.
func (b *Backends) List() (*compute.BackendServiceList, error) {
// TODO: for consistency with the rest of this sub-package this method
// should return a list of backend ports.
return b.cloud.ListBackendServices()
}
// edgeHop checks the links of the given backend by executing an edge hop.
// It fixes broken links.
func (b *Backends) edgeHop(be *compute.BackendService, ig *compute.InstanceGroup) error {
if len(be.Backends) == 1 &&
utils.CompareLinks(be.Backends[0].Group, ig.SelfLink) {
return nil
}
glog.Infof("Backend %v has a broken edge, adding link to %v",
be.Name, ig.Name)
be.Backends = []*compute.Backend{
{Group: ig.SelfLink},
}
if err := b.cloud.UpdateBackendService(be); err != nil {
return err
}
return nil
}
// Sync syncs backend services corresponding to ports in the given list.
func (b *Backends) Sync(svcNodePorts []int64) error {
glog.V(3).Infof("Sync: backends %v", svcNodePorts)
// create backends for new ports, perform an edge hop for existing ports
for _, port := range svcNodePorts {
if err := b.Add(port); err != nil {
return err
}
}
return nil
}
// GC garbage collects services corresponding to ports in the given list.
func (b *Backends) GC(svcNodePorts []int64) error {
knownPorts := sets.NewString()
for _, port := range svcNodePorts {
knownPorts.Insert(portKey(port))
}
pool := b.snapshotter.Snapshot()
for port := range pool {
p, err := strconv.Atoi(port)
if err != nil {
return err
}
nodePort := int64(p)
if knownPorts.Has(portKey(nodePort)) {
continue
}
glog.V(3).Infof("GCing backend for port %v", p)
if err := b.Delete(nodePort); err != nil {
return err
}
}
if len(svcNodePorts) == 0 {
glog.Infof("Deleting instance group %v", b.namer.IGName())
if err := b.nodePool.DeleteInstanceGroup(b.namer.IGName()); err != nil {
return err
}
}
return nil
}
// Shutdown deletes all backends and the default backend.
// This will fail if one of the backends is being used by another resource.
func (b *Backends) Shutdown() error {
if err := b.GC([]int64{}); err != nil {
return err
}
return nil
}
// Status returns the status of the given backend by name.
func (b *Backends) Status(name string) string {
backend, err := b.cloud.GetBackendService(name)
if err != nil {
return "Unknown"
}
// TODO: Include port, ip in the status, since it's in the health info.
hs, err := b.cloud.GetHealth(name, backend.Backends[0].Group)
if err != nil || len(hs.HealthStatus) == 0 || hs.HealthStatus[0] == nil {
return "Unknown"
}
// TODO: State transition are important, not just the latest.
return hs.HealthStatus[0].HealthState
}


@@ -0,0 +1,126 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backends
import (
"testing"
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
"k8s.io/contrib/ingress/controllers/gce/instances"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/util/sets"
)
func newBackendPool(f BackendServices, fakeIGs instances.InstanceGroups) BackendPool {
namer := utils.Namer{}
return NewBackendPool(
f,
healthchecks.NewHealthChecker(healthchecks.NewFakeHealthChecks(), "/", namer),
instances.NewNodePool(fakeIGs, "default-zone"), namer)
}
func TestBackendPoolAdd(t *testing.T) {
f := NewFakeBackendServices()
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
pool := newBackendPool(f, fakeIGs)
namer := utils.Namer{}
// Add a backend for a port, then re-add the same port and
// make sure it corrects a broken link from the backend to
// the instance group.
nodePort := int64(8080)
pool.Add(nodePort)
beName := namer.BeName(nodePort)
// Check that the new backend has the right port
be, err := f.GetBackendService(beName)
if err != nil {
t.Fatalf("Did not find expected backend %v", beName)
}
if be.Port != nodePort {
t.Fatalf("Backend %v has wrong port %v, expected %v", be.Name, be.Port, nodePort)
}
// Check that the instance group has the new port
var found bool
for _, port := range fakeIGs.Ports {
if port == nodePort {
found = true
}
}
if !found {
t.Fatalf("Port %v not added to instance group", nodePort)
}
// Mess up the link between backend service and instance group.
// This simulates a user doing foolish things through the UI.
f.calls = []int{}
be, err = f.GetBackendService(beName)
be.Backends[0].Group = "test edge hop"
f.UpdateBackendService(be)
pool.Add(nodePort)
for _, call := range f.calls {
if call == utils.Create {
t.Fatalf("Unexpected create for existing backend service")
}
}
gotBackend, _ := f.GetBackendService(beName)
gotGroup, _ := fakeIGs.GetInstanceGroup(namer.IGName(), "default-zone")
if gotBackend.Backends[0].Group != gotGroup.SelfLink {
t.Fatalf(
"Broken instance group link: %v %v",
gotBackend.Backends[0].Group,
gotGroup.SelfLink)
}
}
func TestBackendPoolSync(t *testing.T) {
// Call sync on a backend pool with a list of ports, make sure the pool
// creates/deletes required ports.
svcNodePorts := []int64{81, 82, 83}
f := NewFakeBackendServices()
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
pool := newBackendPool(f, fakeIGs)
pool.Add(81)
pool.Add(90)
pool.Sync(svcNodePorts)
pool.GC(svcNodePorts)
if _, err := pool.Get(90); err == nil {
t.Fatalf("Did not expect to find port 90")
}
for _, port := range svcNodePorts {
if _, err := pool.Get(port); err != nil {
t.Fatalf("Expected to find port %v", port)
}
}
}
func TestBackendPoolShutdown(t *testing.T) {
f := NewFakeBackendServices()
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
pool := newBackendPool(f, fakeIGs)
namer := utils.Namer{}
pool.Add(80)
pool.Shutdown()
if _, err := f.GetBackendService(namer.BeName(80)); err == nil {
t.Fatalf("%v", err)
}
}


@@ -0,0 +1,145 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backends
import (
"fmt"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/utils"
)
// NewFakeBackendServices creates a new fake backend services manager.
func NewFakeBackendServices() *FakeBackendServices {
return &FakeBackendServices{
backendServices: []*compute.BackendService{},
}
}
// FakeBackendServices fakes out GCE backend services.
type FakeBackendServices struct {
backendServices []*compute.BackendService
calls []int
}
// GetBackendService fakes getting a backend service from the cloud.
func (f *FakeBackendServices) GetBackendService(name string) (*compute.BackendService, error) {
f.calls = append(f.calls, utils.Get)
for i := range f.backendServices {
if name == f.backendServices[i].Name {
return f.backendServices[i], nil
}
}
return nil, fmt.Errorf("Backend service %v not found", name)
}
// CreateBackendService fakes backend service creation.
func (f *FakeBackendServices) CreateBackendService(be *compute.BackendService) error {
f.calls = append(f.calls, utils.Create)
be.SelfLink = be.Name
f.backendServices = append(f.backendServices, be)
return nil
}
// DeleteBackendService fakes backend service deletion.
func (f *FakeBackendServices) DeleteBackendService(name string) error {
f.calls = append(f.calls, utils.Delete)
newBackends := []*compute.BackendService{}
for i := range f.backendServices {
if name != f.backendServices[i].Name {
newBackends = append(newBackends, f.backendServices[i])
}
}
f.backendServices = newBackends
return nil
}
// ListBackendServices fakes backend service listing.
func (f *FakeBackendServices) ListBackendServices() (*compute.BackendServiceList, error) {
return &compute.BackendServiceList{Items: f.backendServices}, nil
}
// UpdateBackendService fakes updating a backend service.
func (f *FakeBackendServices) UpdateBackendService(be *compute.BackendService) error {
f.calls = append(f.calls, utils.Update)
for i := range f.backendServices {
if f.backendServices[i].Name == be.Name {
f.backendServices[i] = be
}
}
return nil
}
// GetHealth fakes getting backend service health.
func (f *FakeBackendServices) GetHealth(name, instanceGroupLink string) (*compute.BackendServiceGroupHealth, error) {
be, err := f.GetBackendService(name)
if err != nil {
return nil, err
}
states := []*compute.HealthStatus{
{
HealthState: "HEALTHY",
IpAddress: "",
Port: be.Port,
},
}
return &compute.BackendServiceGroupHealth{
HealthStatus: states}, nil
}
// NewFakeHealthChecks returns a health check fake.
func NewFakeHealthChecks() *FakeHealthChecks {
return &FakeHealthChecks{hc: []*compute.HttpHealthCheck{}}
}
// FakeHealthChecks fakes out health checks.
type FakeHealthChecks struct {
hc []*compute.HttpHealthCheck
}
// CreateHttpHealthCheck fakes health check creation.
func (f *FakeHealthChecks) CreateHttpHealthCheck(hc *compute.HttpHealthCheck) error {
f.hc = append(f.hc, hc)
return nil
}
// GetHttpHealthCheck fakes getting a http health check.
func (f *FakeHealthChecks) GetHttpHealthCheck(name string) (*compute.HttpHealthCheck, error) {
for _, h := range f.hc {
if h.Name == name {
return h, nil
}
}
return nil, fmt.Errorf("Health check %v not found.", name)
}
// DeleteHttpHealthCheck fakes deleting a http health check.
func (f *FakeHealthChecks) DeleteHttpHealthCheck(name string) error {
healthChecks := []*compute.HttpHealthCheck{}
exists := false
for _, h := range f.hc {
if h.Name == name {
exists = true
continue
}
healthChecks = append(healthChecks, h)
}
if !exists {
return fmt.Errorf("Failed to find health check %v", name)
}
f.hc = healthChecks
return nil
}


@@ -0,0 +1,58 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package backends
import (
compute "google.golang.org/api/compute/v1"
)
// BackendPool is an interface to manage a pool of kubernetes nodePort services
// as gce backendServices, and sync them through the BackendServices interface.
type BackendPool interface {
Add(port int64) error
Get(port int64) (*compute.BackendService, error)
Delete(port int64) error
Sync(ports []int64) error
GC(ports []int64) error
Shutdown() error
Status(name string) string
List() (*compute.BackendServiceList, error)
}
// BackendServices is an interface for managing gce backend services.
type BackendServices interface {
GetBackendService(name string) (*compute.BackendService, error)
UpdateBackendService(bg *compute.BackendService) error
CreateBackendService(bg *compute.BackendService) error
DeleteBackendService(name string) error
ListBackendServices() (*compute.BackendServiceList, error)
GetHealth(name, instanceGroupLink string) (*compute.BackendServiceGroupHealth, error)
}
// SingleHealthCheck is an interface to manage a single GCE health check.
type SingleHealthCheck interface {
CreateHttpHealthCheck(hc *compute.HttpHealthCheck) error
DeleteHttpHealthCheck(name string) error
GetHttpHealthCheck(name string) (*compute.HttpHealthCheck, error)
}
// HealthChecker is an interface to manage cloud HTTPHealthChecks.
type HealthChecker interface {
Add(port int64, path string) error
Delete(port int64) error
Get(port int64) (*compute.HttpHealthCheck, error)
}


@@ -0,0 +1,173 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"fmt"
"k8s.io/contrib/ingress/controllers/gce/backends"
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
"k8s.io/contrib/ingress/controllers/gce/instances"
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/cloudprovider"
gce "k8s.io/kubernetes/pkg/cloudprovider/providers/gce"
)
const (
defaultPort = 80
defaultHealthCheckPath = "/"
// A single instance-group is created per cluster manager.
// Tagged with the name of the controller.
instanceGroupPrefix = "k8s-ig"
// A backend is created per nodePort, tagged with the nodeport.
// This allows sharing of backends across loadbalancers.
backendPrefix = "k8s-be"
// A single target proxy/urlmap/forwarding rule is created per loadbalancer.
// Tagged with the namespace/name of the Ingress.
targetProxyPrefix = "k8s-tp"
forwardingRulePrefix = "k8s-fw"
urlMapPrefix = "k8s-um"
// Used in the test RunServer method to denote a delete request.
deleteType = "del"
// port 0 is used as a signal for port not found/no such port etc.
invalidPort = 0
// Names longer than this are truncated, because of GCE restrictions.
nameLenLimit = 62
)
// ClusterManager manages cluster resource pools.
type ClusterManager struct {
ClusterNamer utils.Namer
defaultBackendNodePort int64
instancePool instances.NodePool
backendPool backends.BackendPool
l7Pool loadbalancers.LoadBalancerPool
}
// IsHealthy returns an error if the cluster manager is unhealthy.
func (c *ClusterManager) IsHealthy() (err error) {
// TODO: Expand on this, for now we just want to detect when the GCE client
// is broken.
_, err = c.backendPool.List()
return
}
func (c *ClusterManager) shutdown() error {
if err := c.l7Pool.Shutdown(); err != nil {
return err
}
// The backend pool will also delete instance groups.
return c.backendPool.Shutdown()
}
// Checkpoint performs a checkpoint with the cloud.
// - lbNames are the names of L7 loadbalancers we wish to exist. If they already
// exist, they should not have any broken links between say, a UrlMap and
// TargetHttpProxy.
// - nodeNames are the names of nodes we wish to add to all loadbalancer
// instance groups.
// - nodePorts are the ports for which we require BackendServices. Each of
// these ports must also be opened on the corresponding Instance Group.
// If in performing the checkpoint the cluster manager runs out of quota, a
// googleapi 403 is returned.
func (c *ClusterManager) Checkpoint(lbs []*loadbalancers.L7RuntimeInfo, nodeNames []string, nodePorts []int64) error {
if err := c.backendPool.Sync(nodePorts); err != nil {
return err
}
if err := c.instancePool.Sync(nodeNames); err != nil {
return err
}
if err := c.l7Pool.Sync(lbs); err != nil {
return err
}
return nil
}
// GC garbage collects unused resources.
// - lbNames are the names of L7 loadbalancers we wish to exist. Those not in
// this list are removed from the cloud.
// - nodePorts are the ports for which we want BackendServies. BackendServices
// for ports not in this list are deleted.
// This method ignores googleapi 404 errors (StatusNotFound).
func (c *ClusterManager) GC(lbNames []string, nodePorts []int64) error {
// On GC:
// * Loadbalancers need to get deleted before backends.
// * Backends are refcounted in a shared pool.
// * We always want to GC backends even if there was an error in GCing
// loadbalancers, because the next Sync could rely on the GC for quota.
// * There are at least 2 cases for backend GC:
// 1. The loadbalancer has been deleted.
// 2. An update to the url map drops the refcount of a backend. This can
// happen when an Ingress is updated, if we don't GC after the update
// we'll leak the backend.
lbErr := c.l7Pool.GC(lbNames)
beErr := c.backendPool.GC(nodePorts)
if lbErr != nil {
return lbErr
}
if beErr != nil {
return beErr
}
return nil
}
func defaultInstanceGroupName(clusterName string) string {
return fmt.Sprintf("%v-%v", instanceGroupPrefix, clusterName)
}
// NewClusterManager creates a cluster manager for shared resources.
// - name: is the name used to tag cluster wide shared resources. This is the
// string passed to glbc via --gce-cluster-name.
// - defaultBackendNodePort: is the node port of glbc's default backend. This is
// the kubernetes Service that serves the 404 page if no urls match.
// - defaultHealthCheckPath: is the default path used for L7 health checks, eg: "/healthz"
func NewClusterManager(
name string,
defaultBackendNodePort int64,
defaultHealthCheckPath string) (*ClusterManager, error) {
cloudInterface, err := cloudprovider.GetCloudProvider("gce", nil)
if err != nil {
return nil, err
}
cloud := cloudInterface.(*gce.GCECloud)
cluster := ClusterManager{ClusterNamer: utils.Namer{name}}
zone, err := cloud.GetZone()
if err != nil {
return nil, err
}
cluster.instancePool = instances.NewNodePool(cloud, zone.FailureDomain)
healthChecker := healthchecks.NewHealthChecker(cloud, defaultHealthCheckPath, cluster.ClusterNamer)
cluster.backendPool = backends.NewBackendPool(
cloud, healthChecker, cluster.instancePool, cluster.ClusterNamer)
defaultBackendHealthChecker := healthchecks.NewHealthChecker(cloud, "/healthz", cluster.ClusterNamer)
defaultBackendPool := backends.NewBackendPool(
cloud, defaultBackendHealthChecker, cluster.instancePool, cluster.ClusterNamer)
cluster.defaultBackendNodePort = defaultBackendNodePort
cluster.l7Pool = loadbalancers.NewLoadBalancerPool(
cloud, defaultBackendPool, defaultBackendNodePort, cluster.ClusterNamer)
return &cluster, nil
}


@@ -0,0 +1,435 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"fmt"
"net/http"
"reflect"
"sync"
"time"
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/apis/extensions"
"k8s.io/kubernetes/pkg/client/cache"
"k8s.io/kubernetes/pkg/client/record"
client "k8s.io/kubernetes/pkg/client/unversioned"
"k8s.io/kubernetes/pkg/controller/framework"
"k8s.io/kubernetes/pkg/fields"
"k8s.io/kubernetes/pkg/runtime"
"k8s.io/kubernetes/pkg/watch"
"github.com/golang/glog"
)
var (
keyFunc = framework.DeletionHandlingMetaNamespaceKeyFunc
// DefaultClusterUID is the uid to use for clusters resources created by an
// L7 controller created without specifying the --cluster-uid flag.
DefaultClusterUID = ""
)
// LoadBalancerController watches the kubernetes api and adds/removes services
// from the loadbalancer, via loadBalancerConfig.
type LoadBalancerController struct {
client *client.Client
ingController *framework.Controller
nodeController *framework.Controller
svcController *framework.Controller
ingLister StoreToIngressLister
nodeLister cache.StoreToNodeLister
svcLister cache.StoreToServiceLister
CloudClusterManager *ClusterManager
recorder record.EventRecorder
nodeQueue *taskQueue
ingQueue *taskQueue
tr *GCETranslator
stopCh chan struct{}
// stopLock is used to enforce only a single call to Stop is active.
// Needed because we allow stopping through an http endpoint and
// allowing concurrent stoppers leads to stack traces.
stopLock sync.Mutex
shutdown bool
}
// NewLoadBalancerController creates a controller for gce loadbalancers.
// - kubeClient: A kubernetes REST client.
// - clusterManager: A ClusterManager capable of creating all cloud resources
// required for L7 loadbalancing.
// - resyncPeriod: Watchers relist from the Kubernetes API server this often.
func NewLoadBalancerController(kubeClient *client.Client, clusterManager *ClusterManager, resyncPeriod time.Duration, namespace string) (*LoadBalancerController, error) {
eventBroadcaster := record.NewBroadcaster()
eventBroadcaster.StartLogging(glog.Infof)
eventBroadcaster.StartRecordingToSink(kubeClient.Events(""))
lbc := LoadBalancerController{
client: kubeClient,
CloudClusterManager: clusterManager,
stopCh: make(chan struct{}),
recorder: eventBroadcaster.NewRecorder(
api.EventSource{Component: "loadbalancer-controller"}),
}
lbc.nodeQueue = NewTaskQueue(lbc.syncNodes)
lbc.ingQueue = NewTaskQueue(lbc.sync)
// Ingress watch handlers
pathHandlers := framework.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
addIng := obj.(*extensions.Ingress)
lbc.recorder.Eventf(addIng, api.EventTypeNormal, "ADD", fmt.Sprintf("%s/%s", addIng.Namespace, addIng.Name))
lbc.ingQueue.enqueue(obj)
},
DeleteFunc: lbc.ingQueue.enqueue,
UpdateFunc: func(old, cur interface{}) {
if !reflect.DeepEqual(old, cur) {
glog.V(3).Infof("Ingress %v changed, syncing",
cur.(*extensions.Ingress).Name)
}
lbc.ingQueue.enqueue(cur)
},
}
lbc.ingLister.Store, lbc.ingController = framework.NewInformer(
&cache.ListWatch{
ListFunc: ingressListFunc(lbc.client, namespace),
WatchFunc: ingressWatchFunc(lbc.client, namespace),
},
&extensions.Ingress{}, resyncPeriod, pathHandlers)
// Service watch handlers
svcHandlers := framework.ResourceEventHandlerFuncs{
AddFunc: lbc.enqueueIngressForService,
UpdateFunc: func(old, cur interface{}) {
if !reflect.DeepEqual(old, cur) {
lbc.enqueueIngressForService(cur)
}
},
// Ingress deletes matter, service deletes don't.
}
lbc.svcLister.Store, lbc.svcController = framework.NewInformer(
cache.NewListWatchFromClient(
lbc.client, "services", namespace, fields.Everything()),
&api.Service{}, resyncPeriod, svcHandlers)
nodeHandlers := framework.ResourceEventHandlerFuncs{
AddFunc: lbc.nodeQueue.enqueue,
DeleteFunc: lbc.nodeQueue.enqueue,
// Nodes are updated every 10s and we don't care, so no update handler.
}
// Node watch handlers
lbc.nodeLister.Store, lbc.nodeController = framework.NewInformer(
&cache.ListWatch{
ListFunc: func(opts api.ListOptions) (runtime.Object, error) {
return lbc.client.Get().
Resource("nodes").
FieldsSelectorParam(fields.Everything()).
Do().
Get()
},
WatchFunc: func(options api.ListOptions) (watch.Interface, error) {
return lbc.client.Get().
Prefix("watch").
Resource("nodes").
FieldsSelectorParam(fields.Everything()).
Param("resourceVersion", options.ResourceVersion).Watch()
},
},
&api.Node{}, 0, nodeHandlers)
lbc.tr = &GCETranslator{&lbc}
glog.V(3).Infof("Created new loadbalancer controller")
return &lbc, nil
}
func ingressListFunc(c *client.Client, ns string) func(api.ListOptions) (runtime.Object, error) {
return func(opts api.ListOptions) (runtime.Object, error) {
return c.Extensions().Ingress(ns).List(opts)
}
}
func ingressWatchFunc(c *client.Client, ns string) func(options api.ListOptions) (watch.Interface, error) {
return func(options api.ListOptions) (watch.Interface, error) {
return c.Extensions().Ingress(ns).Watch(options)
}
}
// enqueueIngressForService enqueues all the Ingress' for a Service.
func (lbc *LoadBalancerController) enqueueIngressForService(obj interface{}) {
svc := obj.(*api.Service)
ings, err := lbc.ingLister.GetServiceIngress(svc)
if err != nil {
glog.V(5).Infof("ignoring service %v: %v", svc.Name, err)
return
}
for _, ing := range ings {
lbc.ingQueue.enqueue(&ing)
}
}
// Run starts the loadbalancer controller.
func (lbc *LoadBalancerController) Run() {
glog.Infof("Starting loadbalancer controller")
go lbc.ingController.Run(lbc.stopCh)
go lbc.nodeController.Run(lbc.stopCh)
go lbc.svcController.Run(lbc.stopCh)
go lbc.ingQueue.run(time.Second, lbc.stopCh)
go lbc.nodeQueue.run(time.Second, lbc.stopCh)
<-lbc.stopCh
glog.Infof("Shutting down Loadbalancer Controller")
}
// Stop stops the loadbalancer controller. It also deletes cluster resources
// if deleteAll is true.
func (lbc *LoadBalancerController) Stop(deleteAll bool) error {
// Stop is invoked from the http endpoint.
lbc.stopLock.Lock()
defer lbc.stopLock.Unlock()
// Only try draining the workqueue if we haven't already.
if !lbc.shutdown {
close(lbc.stopCh)
glog.Infof("Shutting down controller queues.")
lbc.ingQueue.shutdown()
lbc.nodeQueue.shutdown()
lbc.shutdown = true
}
// Deleting shared cluster resources is idempotent.
if deleteAll {
glog.Infof("Shutting down cluster manager.")
return lbc.CloudClusterManager.shutdown()
}
return nil
}
// sync manages Ingress create/updates/deletes.
func (lbc *LoadBalancerController) sync(key string) {
glog.V(3).Infof("Syncing %v", key)
paths, err := lbc.ingLister.List()
if err != nil {
lbc.ingQueue.requeue(key, err)
return
}
nodePorts := lbc.tr.toNodePorts(&paths)
lbNames := lbc.ingLister.Store.ListKeys()
lbs, _ := lbc.ListRuntimeInfo()
nodeNames, err := lbc.getReadyNodeNames()
if err != nil {
lbc.ingQueue.requeue(key, err)
return
}
obj, ingExists, err := lbc.ingLister.Store.GetByKey(key)
if err != nil {
lbc.ingQueue.requeue(key, err)
return
}
// This performs a 2 phase checkpoint with the cloud:
// * Phase 1 creates/verifies resources are as expected. At the end of a
// successful checkpoint we know that existing L7s are WAI, and the L7
// for the Ingress associated with "key" is ready for a UrlMap update.
// If this encounters an error, eg for quota reasons, we want to invoke
// Phase 2 right away and retry checkpointing.
// * Phase 2 performs GC by refcounting shared resources. This needs to
// happen periodically whether or not stage 1 fails. At the end of a
// successful GC we know that there are no dangling cloud resources that
// don't have an associated Kubernetes Ingress/Service/Endpoint.
defer func() {
if err := lbc.CloudClusterManager.GC(lbNames, nodePorts); err != nil {
lbc.ingQueue.requeue(key, err)
}
glog.V(3).Infof("Finished syncing %v", key)
}()
if err := lbc.CloudClusterManager.Checkpoint(lbs, nodeNames, nodePorts); err != nil {
// TODO: Implement proper backoff for the queue.
eventMsg := "GCE"
if utils.IsHTTPErrorCode(err, http.StatusForbidden) {
eventMsg += " :Quota"
}
if ingExists {
lbc.recorder.Eventf(obj.(*extensions.Ingress), api.EventTypeWarning, eventMsg, err.Error())
} else {
err = fmt.Errorf("%v Error: %v", eventMsg, err)
}
lbc.ingQueue.requeue(key, err)
return
}
if !ingExists {
return
}
// Update the UrlMap of the single loadbalancer that came through the watch.
l7, err := lbc.CloudClusterManager.l7Pool.Get(key)
if err != nil {
lbc.ingQueue.requeue(key, err)
return
}
ing := *obj.(*extensions.Ingress)
if urlMap, err := lbc.tr.toUrlMap(&ing); err != nil {
lbc.ingQueue.requeue(key, err)
} else if err := l7.UpdateUrlMap(urlMap); err != nil {
lbc.recorder.Eventf(&ing, api.EventTypeWarning, "UrlMap", err.Error())
lbc.ingQueue.requeue(key, err)
} else if err := lbc.updateIngressStatus(l7, ing); err != nil {
lbc.recorder.Eventf(&ing, api.EventTypeWarning, "Status", err.Error())
lbc.ingQueue.requeue(key, err)
}
return
}
// updateIngressStatus updates the IP and annotations of a loadbalancer.
// The annotations are parsed by kubectl describe.
func (lbc *LoadBalancerController) updateIngressStatus(l7 *loadbalancers.L7, ing extensions.Ingress) error {
ingClient := lbc.client.Extensions().Ingress(ing.Namespace)
// Update IP through update/status endpoint
ip := l7.GetIP()
currIng, err := ingClient.Get(ing.Name)
if err != nil {
return err
}
currIng.Status = extensions.IngressStatus{
LoadBalancer: api.LoadBalancerStatus{
Ingress: []api.LoadBalancerIngress{
{IP: ip},
},
},
}
lbIPs := ing.Status.LoadBalancer.Ingress
if len(lbIPs) == 0 && ip != "" || len(lbIPs) > 0 && lbIPs[0].IP != ip {
// TODO: If this update fails it's probably resource version related,
// which means it's advantageous to retry right away vs requeuing.
glog.Infof("Updating loadbalancer %v/%v with IP %v", ing.Namespace, ing.Name, ip)
if _, err := ingClient.UpdateStatus(currIng); err != nil {
return err
}
lbc.recorder.Eventf(currIng, api.EventTypeNormal, "CREATE", "ip: %v", ip)
}
// Update annotations through /update endpoint
currIng, err = ingClient.Get(ing.Name)
if err != nil {
return err
}
currIng.Annotations = loadbalancers.GetLBAnnotations(l7, currIng.Annotations, lbc.CloudClusterManager.backendPool)
if !reflect.DeepEqual(ing.Annotations, currIng.Annotations) {
glog.V(3).Infof("Updating annotations of %v/%v", ing.Namespace, ing.Name)
if _, err := ingClient.Update(currIng); err != nil {
return err
}
}
return nil
}
// ListRuntimeInfo lists L7RuntimeInfo as understood by the loadbalancer module.
func (lbc *LoadBalancerController) ListRuntimeInfo() (lbs []*loadbalancers.L7RuntimeInfo, err error) {
for _, m := range lbc.ingLister.Store.List() {
ing := m.(*extensions.Ingress)
k, err := keyFunc(ing)
if err != nil {
glog.Warningf("Cannot get key for Ingress %v/%v: %v", ing.Namespace, ing.Name, err)
continue
}
tls, err := lbc.loadSecrets(ing)
if err != nil {
glog.Warningf("Cannot get certs for Ingress %v/%v: %v", ing.Namespace, ing.Name, err)
}
lbs = append(lbs, &loadbalancers.L7RuntimeInfo{
Name: k,
TLS: tls,
AllowHTTP: ingAnnotations(ing.ObjectMeta.Annotations).allowHTTP(),
})
}
return lbs, nil
}
func (lbc *LoadBalancerController) loadSecrets(ing *extensions.Ingress) (*loadbalancers.TLSCerts, error) {
if len(ing.Spec.TLS) == 0 {
return nil, nil
}
// GCE L7s currently only support a single cert.
if len(ing.Spec.TLS) > 1 {
glog.Warningf("Ignoring %d certs and taking the first for ingress %v/%v",
len(ing.Spec.TLS)-1, ing.Namespace, ing.Name)
}
secretName := ing.Spec.TLS[0].SecretName
// TODO: Replace this with a secret watcher.
glog.V(3).Infof("Retrieving secret for ing %v with name %v", ing.Name, secretName)
secret, err := lbc.client.Secrets(ing.Namespace).Get(secretName)
if err != nil {
return nil, err
}
cert, ok := secret.Data[api.TLSCertKey]
if !ok {
return nil, fmt.Errorf("Secret %v has no private key", secretName)
}
key, ok := secret.Data[api.TLSPrivateKeyKey]
if !ok {
return nil, fmt.Errorf("Secret %v has no cert", secretName)
}
// TODO: Validate certificate with hostnames in ingress?
return &loadbalancers.TLSCerts{Key: string(key), Cert: string(cert)}, nil
}
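// exampleTLSSecret is an editor's sketch, not part of the original file: it
// shows the Secret shape loadSecrets expects, i.e. a Secret in the Ingress'
// namespace, named by spec.tls[0].secretName, carrying the standard
// tls.crt/tls.key data keys. The name and PEM contents below are placeholders.
func exampleTLSSecret() *api.Secret {
return &api.Secret{
ObjectMeta: api.ObjectMeta{Name: "my-tls-secret", Namespace: "default"},
Data: map[string][]byte{
api.TLSCertKey:       []byte("-----BEGIN CERTIFICATE-----\n..."),
api.TLSPrivateKeyKey: []byte("-----BEGIN RSA PRIVATE KEY-----\n..."),
},
}
}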
// syncNodes manages the syncing of kubernetes nodes to gce instance groups.
// The instancegroups are referenced by loadbalancer backends.
func (lbc *LoadBalancerController) syncNodes(key string) {
nodeNames, err := lbc.getReadyNodeNames()
if err != nil {
lbc.nodeQueue.requeue(key, err)
return
}
if err := lbc.CloudClusterManager.instancePool.Sync(nodeNames); err != nil {
lbc.nodeQueue.requeue(key, err)
}
return
}
func nodeReady(node api.Node) bool {
for ix := range node.Status.Conditions {
condition := &node.Status.Conditions[ix]
if condition.Type == api.NodeReady {
return condition.Status == api.ConditionTrue
}
}
return false
}
// getReadyNodeNames returns names of schedulable, ready nodes from the node lister.
func (lbc *LoadBalancerController) getReadyNodeNames() ([]string, error) {
nodeNames := []string{}
nodes, err := lbc.nodeLister.NodeCondition(nodeReady).List()
if err != nil {
return nodeNames, err
}
for _, n := range nodes.Items {
if n.Spec.Unschedulable {
continue
}
nodeNames = append(nodeNames, n.Name)
}
return nodeNames, nil
}

View file

@ -0,0 +1,375 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"fmt"
"math/rand"
"testing"
"time"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/api/testapi"
"k8s.io/kubernetes/pkg/apis/extensions"
client "k8s.io/kubernetes/pkg/client/unversioned"
"k8s.io/kubernetes/pkg/util"
"k8s.io/kubernetes/pkg/util/intstr"
)
const testClusterName = "testcluster"
var (
testPathMap = map[string]string{"/foo": defaultBackendName(testClusterName)}
testIPManager = testIP{}
)
// TODO: Use utils.Namer instead of this function.
func defaultBackendName(clusterName string) string {
return fmt.Sprintf("%v-%v", backendPrefix, clusterName)
}
// newLoadBalancerController creates a loadbalancer controller.
func newLoadBalancerController(t *testing.T, cm *fakeClusterManager, masterUrl string) *LoadBalancerController {
client := client.NewOrDie(&client.Config{Host: masterUrl, ContentConfig: client.ContentConfig{GroupVersion: testapi.Default.GroupVersion()}})
lb, err := NewLoadBalancerController(client, cm.ClusterManager, 1*time.Second, api.NamespaceAll)
if err != nil {
t.Fatalf("%v", err)
}
return lb
}
// toHTTPIngressPaths converts the given pathMap to a list of HTTPIngressPaths.
func toHTTPIngressPaths(pathMap map[string]string) []extensions.HTTPIngressPath {
httpPaths := []extensions.HTTPIngressPath{}
for path, backend := range pathMap {
httpPaths = append(httpPaths, extensions.HTTPIngressPath{
Path: path,
Backend: extensions.IngressBackend{
ServiceName: backend,
ServicePort: testBackendPort,
},
})
}
return httpPaths
}
// toIngressRules converts the given ingressRule map to a list of IngressRules.
func toIngressRules(hostRules map[string]utils.FakeIngressRuleValueMap) []extensions.IngressRule {
rules := []extensions.IngressRule{}
for host, pathMap := range hostRules {
rules = append(rules, extensions.IngressRule{
Host: host,
IngressRuleValue: extensions.IngressRuleValue{
HTTP: &extensions.HTTPIngressRuleValue{
Paths: toHTTPIngressPaths(pathMap),
},
},
})
}
return rules
}
// newIngress returns a new Ingress with the given path map.
func newIngress(hostRules map[string]utils.FakeIngressRuleValueMap) *extensions.Ingress {
return &extensions.Ingress{
ObjectMeta: api.ObjectMeta{
Name: fmt.Sprintf("%v", util.NewUUID()),
Namespace: api.NamespaceNone,
},
Spec: extensions.IngressSpec{
Backend: &extensions.IngressBackend{
ServiceName: defaultBackendName(testClusterName),
ServicePort: testBackendPort,
},
Rules: toIngressRules(hostRules),
},
Status: extensions.IngressStatus{
LoadBalancer: api.LoadBalancerStatus{
Ingress: []api.LoadBalancerIngress{
{IP: testIPManager.ip()},
},
},
},
}
}
// validIngress returns a valid Ingress.
func validIngress() *extensions.Ingress {
return newIngress(map[string]utils.FakeIngressRuleValueMap{
"foo.bar.com": testPathMap,
})
}
// getKey returns the key for an ingress.
func getKey(ing *extensions.Ingress, t *testing.T) string {
key, err := keyFunc(ing)
if err != nil {
t.Fatalf("Unexpected error getting key for Ingress %v: %v", ing.Name, err)
}
return key
}
// nodePortManager is a helper to allocate ports to services and
// remember the allocations.
type nodePortManager struct {
portMap map[string]int
start int
end int
namer utils.Namer
}
// getNodePort returns the nodePort previously allocated to the service, allocating a pseudo-random one in [start, end) if needed.
func (p *nodePortManager) getNodePort(svcName string) int {
if port, ok := p.portMap[svcName]; ok {
return port
}
p.portMap[svcName] = rand.Intn(p.end-p.start) + p.start
return p.portMap[svcName]
}
// toNodePortSvcNames converts all service names in the given map to gce node
// port names, eg foo -> k8-be-<foo nodeport>
func (p *nodePortManager) toNodePortSvcNames(inputMap map[string]utils.FakeIngressRuleValueMap) map[string]utils.FakeIngressRuleValueMap {
expectedMap := map[string]utils.FakeIngressRuleValueMap{}
for host, rules := range inputMap {
ruleMap := utils.FakeIngressRuleValueMap{}
for path, svc := range rules {
ruleMap[path] = p.namer.BeName(int64(p.portMap[svc]))
}
expectedMap[host] = ruleMap
}
return expectedMap
}
func newPortManager(st, end int) *nodePortManager {
return &nodePortManager{map[string]int{}, st, end, utils.Namer{}}
}
// addIngress adds an ingress to the loadbalancer controller's ingress store. If
// a nodePortManager is supplied, it also adds all backends to the service store
// with a nodePort acquired through it.
func addIngress(lbc *LoadBalancerController, ing *extensions.Ingress, pm *nodePortManager) {
lbc.ingLister.Store.Add(ing)
if pm == nil {
return
}
for _, rule := range ing.Spec.Rules {
for _, path := range rule.HTTP.Paths {
svc := &api.Service{
ObjectMeta: api.ObjectMeta{
Name: path.Backend.ServiceName,
Namespace: ing.Namespace,
},
}
var svcPort api.ServicePort
switch path.Backend.ServicePort.Type {
case intstr.Int:
svcPort = api.ServicePort{Port: int(path.Backend.ServicePort.IntVal)}
default:
svcPort = api.ServicePort{Name: path.Backend.ServicePort.StrVal}
}
svcPort.NodePort = pm.getNodePort(path.Backend.ServiceName)
svc.Spec.Ports = []api.ServicePort{svcPort}
lbc.svcLister.Store.Add(svc)
}
}
}
func TestLbCreateDelete(t *testing.T) {
cm := NewFakeClusterManager(DefaultClusterUID)
lbc := newLoadBalancerController(t, cm, "")
inputMap1 := map[string]utils.FakeIngressRuleValueMap{
"foo.example.com": {
"/foo1": "foo1svc",
"/foo2": "foo2svc",
},
"bar.example.com": {
"/bar1": "bar1svc",
"/bar2": "bar2svc",
},
}
inputMap2 := map[string]utils.FakeIngressRuleValueMap{
"baz.foobar.com": {
"/foo": "foo1svc",
"/bar": "bar1svc",
},
}
pm := newPortManager(1, 65536)
ings := []*extensions.Ingress{}
for _, m := range []map[string]utils.FakeIngressRuleValueMap{inputMap1, inputMap2} {
newIng := newIngress(m)
addIngress(lbc, newIng, pm)
ingStoreKey := getKey(newIng, t)
lbc.sync(ingStoreKey)
l7, err := cm.l7Pool.Get(ingStoreKey)
if err != nil {
t.Fatalf("%v", err)
}
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(m))
ings = append(ings, newIng)
}
lbc.ingLister.Store.Delete(ings[0])
lbc.sync(getKey(ings[0], t))
// BackendServices associated with the ports of a deleted Ingress should get gc'd
// when the Ingress is deleted, regardless of the service. At the same time
// we shouldn't pull shared backends out from under existing loadbalancers.
unexpected := []int{pm.portMap["foo2svc"], pm.portMap["bar2svc"]}
expected := []int{pm.portMap["foo1svc"], pm.portMap["bar1svc"]}
for _, port := range expected {
if _, err := cm.backendPool.Get(int64(port)); err != nil {
t.Fatalf("%v", err)
}
}
for _, port := range unexpected {
if be, err := cm.backendPool.Get(int64(port)); err == nil {
t.Fatalf("Found backend %+v for port %v", be, port)
}
}
lbc.ingLister.Store.Delete(ings[1])
lbc.sync(getKey(ings[1], t))
// No cluster resources (except the defaults used by the cluster manager)
// should exist at this point.
for _, port := range expected {
if be, err := cm.backendPool.Get(int64(port)); err == nil {
t.Fatalf("Found backend %+v for port %v", be, port)
}
}
if len(cm.fakeLbs.Fw) != 0 || len(cm.fakeLbs.Um) != 0 || len(cm.fakeLbs.Tp) != 0 {
t.Fatalf("Loadbalancer leaked resources")
}
for _, lbName := range []string{getKey(ings[0], t), getKey(ings[1], t)} {
if l7, err := cm.l7Pool.Get(lbName); err == nil {
t.Fatalf("Found unexpected loadbalandcer %+v: %v", l7, err)
}
}
}
func TestLbFaultyUpdate(t *testing.T) {
cm := NewFakeClusterManager(DefaultClusterUID)
lbc := newLoadBalancerController(t, cm, "")
inputMap := map[string]utils.FakeIngressRuleValueMap{
"foo.example.com": {
"/foo1": "foo1svc",
"/foo2": "foo2svc",
},
"bar.example.com": {
"/bar1": "bar1svc",
"/bar2": "bar2svc",
},
}
ing := newIngress(inputMap)
pm := newPortManager(1, 65536)
addIngress(lbc, ing, pm)
ingStoreKey := getKey(ing, t)
lbc.sync(ingStoreKey)
l7, err := cm.l7Pool.Get(ingStoreKey)
if err != nil {
t.Fatalf("%v", err)
}
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(inputMap))
// Change the urlmap directly through the lb pool, resync, and
// make sure the controller corrects it.
l7.UpdateUrlMap(utils.GCEURLMap{
"foo.example.com": {
"/foo1": &compute.BackendService{SelfLink: "foo2svc"},
},
})
lbc.sync(ingStoreKey)
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(inputMap))
}
func TestLbDefaulting(t *testing.T) {
cm := NewFakeClusterManager(DefaultClusterUID)
lbc := newLoadBalancerController(t, cm, "")
// Make sure the controller plugs in the default values accepted by GCE.
ing := newIngress(map[string]utils.FakeIngressRuleValueMap{"": {"": "foo1svc"}})
pm := newPortManager(1, 65536)
addIngress(lbc, ing, pm)
ingStoreKey := getKey(ing, t)
lbc.sync(ingStoreKey)
l7, err := cm.l7Pool.Get(ingStoreKey)
if err != nil {
t.Fatalf("%v", err)
}
expectedMap := map[string]utils.FakeIngressRuleValueMap{loadbalancers.DefaultHost: {loadbalancers.DefaultPath: "foo1svc"}}
cm.fakeLbs.CheckURLMap(t, l7, pm.toNodePortSvcNames(expectedMap))
}
func TestLbNoService(t *testing.T) {
cm := NewFakeClusterManager(DefaultClusterUID)
lbc := newLoadBalancerController(t, cm, "")
inputMap := map[string]utils.FakeIngressRuleValueMap{
"foo.example.com": {
"/foo1": "foo1svc",
},
}
ing := newIngress(inputMap)
ing.Spec.Backend.ServiceName = "foo1svc"
ingStoreKey := getKey(ing, t)
// Adds ingress to store, but doesn't create an associated service.
// This will still create the associated loadbalancer, it will just
// have empty rules. The rules will get corrected when the service
// pops up.
addIngress(lbc, ing, nil)
lbc.sync(ingStoreKey)
l7, err := cm.l7Pool.Get(ingStoreKey)
if err != nil {
t.Fatalf("%v", err)
}
// Creates the service, next sync should have complete url map.
pm := newPortManager(1, 65536)
addIngress(lbc, ing, pm)
lbc.enqueueIngressForService(&api.Service{
ObjectMeta: api.ObjectMeta{
Name: "foo1svc",
Namespace: ing.Namespace,
},
})
// TODO: This will hang if the previous step failed to insert into queue
key, _ := lbc.ingQueue.queue.Get()
lbc.sync(key.(string))
inputMap[utils.DefaultBackendKey] = map[string]string{
utils.DefaultBackendKey: "foo1svc",
}
expectedMap := pm.toNodePortSvcNames(inputMap)
cm.fakeLbs.CheckURLMap(t, l7, expectedMap)
}
type testIP struct {
start int
}
func (t *testIP) ip() string {
t.start++
return fmt.Sprintf("0.0.0.%v", t.start)
}
// TODO: Test lb status update when annotations stabilize

View file

@ -0,0 +1,52 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// This is the structure of the gce l7 controller:
// apiserver <-> controller ---> pools --> cloud
// | |
// |-> Ingress |-> backends
// |-> Services | |-> health checks
// |-> Nodes |
// |-> instance groups
// | |-> port per backend
// |
// |-> loadbalancers
// |-> http proxy
// |-> forwarding rule
// |-> urlmap
// * apiserver: the kubernetes api server.
// * controller: gce l7 controller, watches apiserver and interacts
// with sync pools. The controller doesn't know anything about the cloud.
// Communication between the controller and pools is 1 way.
// * pool: the controller tells each pool about desired state by inserting
// into shared memory store. The pools sync this with the cloud. Pools are
// also responsible for periodically checking the edge links between various
// cloud resources.
//
// A note on sync pools: this package has 3 sync pools: for node, instances and
// loadbalancer resources. A sync pool is meant to record all creates/deletes
// performed by a controller and periodically verify that links are not broken.
// For example, the controller might create a backend via backendPool.Add(),
// the backend pool remembers this and continuously verifies that the backend
// is connected to the right instance group, and that the instance group has
// the right ports open.
//
// A note on naming convention: per the Go style guide for initialisms, Http
// should be HTTP and Url should be URL; however, because these interfaces
// must match their siblings in the Kubernetes cloud provider, which are in turn
// consistent with the GCE compute API, there may be inconsistencies.
package controller

View file

@ -0,0 +1,70 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"k8s.io/contrib/ingress/controllers/gce/backends"
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
"k8s.io/contrib/ingress/controllers/gce/instances"
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/util/intstr"
"k8s.io/kubernetes/pkg/util/sets"
)
const (
testDefaultBeNodePort = int64(3000)
defaultZone = "default-zone"
)
var testBackendPort = intstr.IntOrString{Type: intstr.Int, IntVal: 80}
// ClusterManager fake
type fakeClusterManager struct {
*ClusterManager
fakeLbs *loadbalancers.FakeLoadBalancers
fakeBackends *backends.FakeBackendServices
fakeIGs *instances.FakeInstanceGroups
}
// NewFakeClusterManager creates a new fake ClusterManager.
func NewFakeClusterManager(clusterName string) *fakeClusterManager {
fakeLbs := loadbalancers.NewFakeLoadBalancers(clusterName)
fakeBackends := backends.NewFakeBackendServices()
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
fakeHCs := healthchecks.NewFakeHealthChecks()
namer := utils.Namer{clusterName}
nodePool := instances.NewNodePool(fakeIGs, defaultZone)
healthChecker := healthchecks.NewHealthChecker(fakeHCs, "/", namer)
backendPool := backends.NewBackendPool(
fakeBackends,
healthChecker, nodePool, namer)
l7Pool := loadbalancers.NewLoadBalancerPool(
fakeLbs,
// TODO: change this
backendPool,
testDefaultBeNodePort,
namer,
)
cm := &ClusterManager{
ClusterNamer: namer,
instancePool: nodePool,
backendPool: backendPool,
l7Pool: l7Pool,
}
return &fakeClusterManager{cm, fakeLbs, fakeBackends, fakeIGs}
}

View file

@ -0,0 +1,306 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controller
import (
"fmt"
"strconv"
"time"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/loadbalancers"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/apis/extensions"
"k8s.io/kubernetes/pkg/client/cache"
"k8s.io/kubernetes/pkg/util/intstr"
"k8s.io/kubernetes/pkg/util/wait"
"k8s.io/kubernetes/pkg/util/workqueue"
"github.com/golang/glog"
)
const allowHTTPKey = "kubernetes.io/ingress.allowHTTP"
// ingAnnotations represents Ingress annotations.
type ingAnnotations map[string]string
// allowHTTP returns the allowHTTP flag. True by default.
func (ing ingAnnotations) allowHTTP() bool {
val, ok := ing[allowHTTPKey]
if !ok {
return true
}
v, err := strconv.ParseBool(val)
if err != nil {
return true
}
return v
}
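// exampleAllowHTTPAnnotation is an editor's sketch, not part of the original
// file: it shows how the annotation above would be set on an Ingress to turn
// off the plain HTTP listener. The Ingress name is a placeholder.
func exampleAllowHTTPAnnotation() bool {
ing := &extensions.Ingress{
ObjectMeta: api.ObjectMeta{
Name:        "tls-only",
Annotations: map[string]string{allowHTTPKey: "false"},
},
}
// Returns false for the Ingress above; true when the annotation is absent.
return ingAnnotations(ing.ObjectMeta.Annotations).allowHTTP()
}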
// errorNodePortNotFound is an implementation of error.
type errorNodePortNotFound struct {
backend extensions.IngressBackend
origErr error
}
func (e errorNodePortNotFound) Error() string {
return fmt.Sprintf("Could not find nodeport for backend %+v: %v",
e.backend, e.origErr)
}
// taskQueue manages a work queue through an independent worker that
// invokes the given sync function for every work item inserted.
type taskQueue struct {
// queue is the work queue the worker polls
queue *workqueue.Type
// sync is called for each item in the queue
sync func(string)
// workerDone is closed when the worker exits
workerDone chan struct{}
}
func (t *taskQueue) run(period time.Duration, stopCh <-chan struct{}) {
wait.Until(t.worker, period, stopCh)
}
// enqueue enqueues ns/name of the given api object in the task queue.
func (t *taskQueue) enqueue(obj interface{}) {
key, err := keyFunc(obj)
if err != nil {
glog.Infof("Couldn't get key for object %+v: %v", obj, err)
return
}
t.queue.Add(key)
}
func (t *taskQueue) requeue(key string, err error) {
glog.Errorf("Requeuing %v, err %v", key, err)
t.queue.Add(key)
}
// worker processes work in the queue through sync.
func (t *taskQueue) worker() {
for {
key, quit := t.queue.Get()
if quit {
close(t.workerDone)
return
}
glog.V(3).Infof("Syncing %v", key)
t.sync(key.(string))
t.queue.Done(key)
}
}
// shutdown shuts down the work queue and waits for the worker to ACK
func (t *taskQueue) shutdown() {
t.queue.ShutDown()
<-t.workerDone
}
// NewTaskQueue creates a new task queue with the given sync function.
// The sync function is called for every element inserted into the queue.
func NewTaskQueue(syncFn func(string)) *taskQueue {
return &taskQueue{
queue: workqueue.New(),
sync: syncFn,
workerDone: make(chan struct{}),
}
}
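// exampleTaskQueueUsage is an editor's sketch, not part of the original file:
// it shows the expected wiring of a taskQueue, i.e. create it with a sync
// function, run the worker, and enqueue objects by key. The sync body and the
// service below are placeholders.
func exampleTaskQueueUsage() {
stopCh := make(chan struct{})
q := NewTaskQueue(func(key string) {
glog.V(3).Infof("would sync %v here", key)
})
go q.run(time.Second, stopCh)
// enqueue extracts the ns/name key from the object before adding it.
q.enqueue(&api.Service{ObjectMeta: api.ObjectMeta{Name: "svc", Namespace: "default"}})
// On teardown the owner closes stopCh and calls q.shutdown(), as
// LoadBalancerController.Stop does.
}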
// compareLinks returns true if the 2 self links are equal.
func compareLinks(l1, l2 string) bool {
// TODO: These can be partial links
return l1 == l2 && l1 != ""
}
// StoreToIngressLister makes a Store that lists Ingress.
// TODO: Move this to cache/listers post 1.1.
type StoreToIngressLister struct {
cache.Store
}
// List lists all Ingresses in the store.
func (s *StoreToIngressLister) List() (ing extensions.IngressList, err error) {
for _, m := range s.Store.List() {
ing.Items = append(ing.Items, *(m.(*extensions.Ingress)))
}
return ing, nil
}
// GetServiceIngress gets all the Ingresses that have rules pointing to a service.
// Note that this ignores services without the right nodePorts.
func (s *StoreToIngressLister) GetServiceIngress(svc *api.Service) (ings []extensions.Ingress, err error) {
for _, m := range s.Store.List() {
ing := *m.(*extensions.Ingress)
if ing.Namespace != svc.Namespace {
continue
}
for _, rules := range ing.Spec.Rules {
if rules.IngressRuleValue.HTTP == nil {
continue
}
for _, p := range rules.IngressRuleValue.HTTP.Paths {
if p.Backend.ServiceName == svc.Name {
ings = append(ings, ing)
}
}
}
}
if len(ings) == 0 {
err = fmt.Errorf("No ingress for service %v", svc.Name)
}
return
}
// GCETranslator helps with kubernetes -> gce api conversion.
type GCETranslator struct {
*LoadBalancerController
}
// toUrlMap converts an ingress to a map of subdomain: url-regex: gce backend.
func (t *GCETranslator) toUrlMap(ing *extensions.Ingress) (utils.GCEURLMap, error) {
hostPathBackend := utils.GCEURLMap{}
for _, rule := range ing.Spec.Rules {
if rule.HTTP == nil {
glog.Errorf("Ignoring non http Ingress rule")
continue
}
pathToBackend := map[string]*compute.BackendService{}
for _, p := range rule.HTTP.Paths {
backend, err := t.toGCEBackend(&p.Backend, ing.Namespace)
if err != nil {
// If a service doesn't have a nodeport we can still forward traffic
// to all other services under the assumption that the user will
// modify nodeport.
if _, ok := err.(errorNodePortNotFound); ok {
glog.Infof("%v", err)
continue
}
// If a service doesn't have a backend, there's nothing the user
// can do to correct this (the admin might've limited quota).
// So keep requeuing the l7 till all backends exist.
return utils.GCEURLMap{}, err
}
// The Ingress spec defines an empty path as catch-all, so if a user
// asks for a single host and multiple empty paths, all traffic is
// sent to the last backend in the rules list.
path := p.Path
if path == "" {
path = loadbalancers.DefaultPath
}
pathToBackend[path] = backend
}
// If multiple hostless rule sets are specified, last one wins
host := rule.Host
if host == "" {
host = loadbalancers.DefaultHost
}
hostPathBackend[host] = pathToBackend
}
defaultBackend, _ := t.toGCEBackend(ing.Spec.Backend, ing.Namespace)
hostPathBackend.PutDefaultBackend(defaultBackend)
return hostPathBackend, nil
}
func (t *GCETranslator) toGCEBackend(be *extensions.IngressBackend, ns string) (*compute.BackendService, error) {
if be == nil {
return nil, nil
}
port, err := t.getServiceNodePort(*be, ns)
if err != nil {
return nil, err
}
backend, err := t.CloudClusterManager.backendPool.Get(int64(port))
if err != nil {
return nil, fmt.Errorf(
"No GCE backend exists for port %v, kube backend %+v", port, be)
}
return backend, nil
}
// getServiceNodePort looks in the svc store for a matching service:port,
// and returns the nodeport.
func (t *GCETranslator) getServiceNodePort(be extensions.IngressBackend, namespace string) (int, error) {
obj, exists, err := t.svcLister.Store.Get(
&api.Service{
ObjectMeta: api.ObjectMeta{
Name: be.ServiceName,
Namespace: namespace,
},
})
if !exists {
return invalidPort, errorNodePortNotFound{be, fmt.Errorf(
"Service %v/%v not found in store", namespace, be.ServiceName)}
}
if err != nil {
return invalidPort, errorNodePortNotFound{be, err}
}
var nodePort int
// Note: a plain break inside the switch would only exit the switch, so use a
// labeled break to stop scanning once a matching port is found.
PortLoop:
for _, p := range obj.(*api.Service).Spec.Ports {
switch be.ServicePort.Type {
case intstr.Int:
if p.Port == int(be.ServicePort.IntVal) {
nodePort = p.NodePort
break PortLoop
}
default:
if p.Name == be.ServicePort.StrVal {
nodePort = p.NodePort
break PortLoop
}
}
}
if nodePort != invalidPort {
return nodePort, nil
}
return invalidPort, errorNodePortNotFound{be, fmt.Errorf(
"Could not find matching nodeport from service.")}
}
// toNodePorts converts a pathlist to a flat list of nodeports.
func (t *GCETranslator) toNodePorts(ings *extensions.IngressList) []int64 {
knownPorts := []int64{}
for _, ing := range ings.Items {
defaultBackend := ing.Spec.Backend
if defaultBackend != nil {
port, err := t.getServiceNodePort(*defaultBackend, ing.Namespace)
if err != nil {
glog.Infof("%v", err)
} else {
knownPorts = append(knownPorts, int64(port))
}
}
for _, rule := range ing.Spec.Rules {
if rule.HTTP == nil {
glog.Errorf("Ignoring non http Ingress rule.")
continue
}
for _, path := range rule.HTTP.Paths {
port, err := t.getServiceNodePort(path.Backend, ing.Namespace)
if err != nil {
glog.Infof("%v", err)
continue
}
knownPorts = append(knownPorts, int64(port))
}
}
}
return knownPorts
}

View file

@ -0,0 +1,67 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package healthchecks
import (
"fmt"
compute "google.golang.org/api/compute/v1"
)
// NewFakeHealthChecks returns a new FakeHealthChecks.
func NewFakeHealthChecks() *FakeHealthChecks {
return &FakeHealthChecks{hc: []*compute.HttpHealthCheck{}}
}
// FakeHealthChecks fakes out health checks.
type FakeHealthChecks struct {
hc []*compute.HttpHealthCheck
}
// CreateHttpHealthCheck fakes out http health check creation.
func (f *FakeHealthChecks) CreateHttpHealthCheck(hc *compute.HttpHealthCheck) error {
f.hc = append(f.hc, hc)
return nil
}
// GetHttpHealthCheck fakes out getting a http health check from the cloud.
func (f *FakeHealthChecks) GetHttpHealthCheck(name string) (*compute.HttpHealthCheck, error) {
for _, h := range f.hc {
if h.Name == name {
return h, nil
}
}
return nil, fmt.Errorf("Health check %v not found.", name)
}
// DeleteHttpHealthCheck fakes out deleting a http health check.
func (f *FakeHealthChecks) DeleteHttpHealthCheck(name string) error {
healthChecks := []*compute.HttpHealthCheck{}
exists := false
for _, h := range f.hc {
if h.Name == name {
exists = true
continue
}
healthChecks = append(healthChecks, h)
}
if !exists {
return fmt.Errorf("Failed to find health check %v", name)
}
f.hc = healthChecks
return nil
}

View file

@ -0,0 +1,89 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package healthchecks
import (
compute "google.golang.org/api/compute/v1"
"github.com/golang/glog"
"k8s.io/contrib/ingress/controllers/gce/utils"
"net/http"
)
// HealthChecks manages health checks.
type HealthChecks struct {
cloud SingleHealthCheck
defaultPath string
namer utils.Namer
}
// NewHealthChecker creates a new health checker.
// cloud: the cloud object implementing SingleHealthCheck.
// defaultHealthCheckPath: the HTTP path to use for health checks.
func NewHealthChecker(cloud SingleHealthCheck, defaultHealthCheckPath string, namer utils.Namer) HealthChecker {
return &HealthChecks{cloud, defaultHealthCheckPath, namer}
}
// Add adds a healthcheck if one for the same port doesn't already exist.
func (h *HealthChecks) Add(port int64, path string) error {
hc, _ := h.Get(port)
name := h.namer.BeName(port)
if path == "" {
path = h.defaultPath
}
if hc == nil {
glog.Infof("Creating health check %v", name)
if err := h.cloud.CreateHttpHealthCheck(
&compute.HttpHealthCheck{
Name: name,
Port: port,
RequestPath: path,
Description: "Default kubernetes L7 Loadbalancing health check.",
// How often to health check.
CheckIntervalSec: 1,
// How long to wait before claiming failure of a health check.
TimeoutSec: 1,
// Number of healthchecks to pass for a vm to be deemed healthy.
HealthyThreshold: 1,
// Number of healthchecks to fail before the vm is deemed unhealthy.
UnhealthyThreshold: 10,
}); err != nil {
return err
}
} else {
// TODO: Does this health check need an edge hop?
glog.Infof("Health check %v already exists", hc.Name)
}
return nil
}
// Delete deletes the health check by port.
func (h *HealthChecks) Delete(port int64) error {
name := h.namer.BeName(port)
glog.Infof("Deleting health check %v", name)
if err := h.cloud.DeleteHttpHealthCheck(h.namer.BeName(port)); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
return nil
}
// Get returns the given health check.
func (h *HealthChecks) Get(port int64) (*compute.HttpHealthCheck, error) {
return h.cloud.GetHttpHealthCheck(h.namer.BeName(port))
}
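// exampleHealthCheckerUsage is an editor's sketch, not part of the original
// file: it wires a HealthChecker to the in-package fake and ensures a check
// exists for a node port, roughly how the backend pool is expected to drive
// it. The port and path below are placeholders.
func exampleHealthCheckerUsage() error {
checker := NewHealthChecker(NewFakeHealthChecks(), "/healthz", utils.Namer{})
// Add is idempotent per port; an empty path falls back to the default.
if err := checker.Add(30301, ""); err != nil {
return err
}
_, err := checker.Get(30301)
return err
}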

View file

@ -0,0 +1,35 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package healthchecks
import (
compute "google.golang.org/api/compute/v1"
)
// SingleHealthCheck is an interface to manage a single GCE health check.
type SingleHealthCheck interface {
CreateHttpHealthCheck(hc *compute.HttpHealthCheck) error
DeleteHttpHealthCheck(name string) error
GetHttpHealthCheck(name string) (*compute.HttpHealthCheck, error)
}
// HealthChecker is an interface to manage cloud HTTPHealthChecks.
type HealthChecker interface {
Add(port int64, path string) error
Delete(port int64) error
Get(port int64) (*compute.HttpHealthCheck, error)
}

View file

@ -0,0 +1,102 @@
# This Service writes the HTTP request headers out to the response. Access it
# through its NodePort, LoadBalancer or Ingress endpoint.
apiVersion: v1
kind: Service
metadata:
name: echoheadersx
labels:
app: echoheaders
spec:
type: NodePort
ports:
- port: 80
nodePort: 30301
targetPort: 8080
protocol: TCP
name: http
selector:
app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
name: echoheadersdefault
labels:
app: echoheaders
spec:
type: NodePort
ports:
- port: 80
nodePort: 30302
targetPort: 8080
protocol: TCP
name: http
selector:
app: echoheaders
---
apiVersion: v1
kind: Service
metadata:
name: echoheadersy
labels:
app: echoheaders
spec:
type: NodePort
ports:
- port: 80
nodePort: 30284
targetPort: 8080
protocol: TCP
name: http
selector:
app: echoheaders
---
# This is a replication controller for the endpoint that services the 3
# Services above.
apiVersion: v1
kind: ReplicationController
metadata:
name: echoheaders
spec:
replicas: 1
template:
metadata:
labels:
app: echoheaders
spec:
containers:
- name: echoheaders
image: bprashanth/echoserver:0.0
ports:
- containerPort: 8080
---
# This is the Ingress resource that creates an HTTP Loadbalancer configured
# according to the Ingress rules.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: echomap
spec:
backend:
serviceName: echoheadersdefault
servicePort: 80
rules:
- host: foo.bar.com
http:
paths:
- path: /foo
backend:
serviceName: echoheadersx
servicePort: 80
- host: bar.baz.com
http:
paths:
- path: /bar
backend:
serviceName: echoheadersy
servicePort: 80
- path: /foo
backend:
serviceName: echoheadersx
servicePort: 80

View file

@ -0,0 +1,127 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package instances
import (
"fmt"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/util/sets"
)
// NewFakeInstanceGroups creates a new FakeInstanceGroups.
func NewFakeInstanceGroups(nodes sets.String) *FakeInstanceGroups {
return &FakeInstanceGroups{
instances: nodes,
listResult: getInstanceList(nodes),
namer: utils.Namer{},
}
}
// InstanceGroup fakes
// FakeInstanceGroups fakes out the instance groups api.
type FakeInstanceGroups struct {
instances sets.String
instanceGroups []*compute.InstanceGroup
Ports []int64
getResult *compute.InstanceGroup
listResult *compute.InstanceGroupsListInstances
calls []int
namer utils.Namer
}
// GetInstanceGroup fakes getting an instance group from the cloud.
func (f *FakeInstanceGroups) GetInstanceGroup(name, zone string) (*compute.InstanceGroup, error) {
f.calls = append(f.calls, utils.Get)
for _, ig := range f.instanceGroups {
if ig.Name == name {
return ig, nil
}
}
// TODO: Return googleapi 404 error
return nil, fmt.Errorf("Instance group %v not found", name)
}
// CreateInstanceGroup fakes instance group creation.
func (f *FakeInstanceGroups) CreateInstanceGroup(name, zone string) (*compute.InstanceGroup, error) {
newGroup := &compute.InstanceGroup{Name: name, SelfLink: name}
f.instanceGroups = append(f.instanceGroups, newGroup)
return newGroup, nil
}
// DeleteInstanceGroup fakes instance group deletion.
func (f *FakeInstanceGroups) DeleteInstanceGroup(name, zone string) error {
newGroups := []*compute.InstanceGroup{}
found := false
for _, ig := range f.instanceGroups {
if ig.Name == name {
found = true
continue
}
newGroups = append(newGroups, ig)
}
if !found {
return fmt.Errorf("Instance Group %v not found", name)
}
f.instanceGroups = newGroups
return nil
}
// ListInstancesInInstanceGroup fakes listing instances in an instance group.
func (f *FakeInstanceGroups) ListInstancesInInstanceGroup(name, zone string, state string) (*compute.InstanceGroupsListInstances, error) {
return f.listResult, nil
}
// AddInstancesToInstanceGroup fakes adding instances to an instance group.
func (f *FakeInstanceGroups) AddInstancesToInstanceGroup(name, zone string, instanceNames []string) error {
f.calls = append(f.calls, utils.AddInstances)
f.instances.Insert(instanceNames...)
return nil
}
// RemoveInstancesFromInstanceGroup fakes removing instances from an instance group.
func (f *FakeInstanceGroups) RemoveInstancesFromInstanceGroup(name, zone string, instanceNames []string) error {
f.calls = append(f.calls, utils.RemoveInstances)
f.instances.Delete(instanceNames...)
return nil
}
// AddPortToInstanceGroup fakes adding ports to an Instance Group.
func (f *FakeInstanceGroups) AddPortToInstanceGroup(ig *compute.InstanceGroup, port int64) (*compute.NamedPort, error) {
f.Ports = append(f.Ports, port)
return &compute.NamedPort{Name: f.namer.BeName(port), Port: port}, nil
}
// getInstanceList returns an instance list based on the given names.
// The names cannot contain a '.'; the real gce api validates against this.
func getInstanceList(nodeNames sets.String) *compute.InstanceGroupsListInstances {
instanceNames := nodeNames.List()
computeInstances := []*compute.InstanceWithNamedPorts{}
for _, name := range instanceNames {
instanceLink := fmt.Sprintf(
"https://www.googleapis.com/compute/v1/projects/%s/zones/%s/instances/%s",
"project", "zone", name)
computeInstances = append(
computeInstances, &compute.InstanceWithNamedPorts{
Instance: instanceLink})
}
return &compute.InstanceGroupsListInstances{
Items: computeInstances,
}
}

View file

@ -0,0 +1,165 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package instances
import (
"net/http"
"strings"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/storage"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/util/sets"
"github.com/golang/glog"
)
const (
// State string required by gce library to list all instances.
allInstances = "ALL"
)
// Instances implements NodePool.
type Instances struct {
cloud InstanceGroups
zone string
snapshotter storage.Snapshotter
}
// NewNodePool creates a new node pool.
// - cloud: implements InstanceGroups, used to sync Kubernetes nodes with
// members of the cloud InstanceGroup.
func NewNodePool(cloud InstanceGroups, zone string) NodePool {
glog.V(3).Infof("NodePool is only aware of instances in zone %v", zone)
return &Instances{cloud, zone, storage.NewInMemoryPool()}
}
// AddInstanceGroup creates an instance group if it doesn't exist, or gets the
// existing one, and adds the given port to it.
func (i *Instances) AddInstanceGroup(name string, port int64) (*compute.InstanceGroup, *compute.NamedPort, error) {
ig, _ := i.Get(name)
if ig == nil {
glog.Infof("Creating instance group %v", name)
var err error
ig, err = i.cloud.CreateInstanceGroup(name, i.zone)
if err != nil {
return nil, nil, err
}
} else {
glog.V(3).Infof("Instance group already exists %v", name)
}
defer i.snapshotter.Add(name, ig)
namedPort, err := i.cloud.AddPortToInstanceGroup(ig, port)
if err != nil {
return nil, nil, err
}
return ig, namedPort, nil
}
// DeleteInstanceGroup deletes the given IG by name.
func (i *Instances) DeleteInstanceGroup(name string) error {
defer i.snapshotter.Delete(name)
return i.cloud.DeleteInstanceGroup(name, i.zone)
}
func (i *Instances) list(name string) (sets.String, error) {
nodeNames := sets.NewString()
instances, err := i.cloud.ListInstancesInInstanceGroup(
name, i.zone, allInstances)
if err != nil {
return nodeNames, err
}
for _, ins := range instances.Items {
// TODO: If round trips weren't so slow one would be inclined
// to GetInstance using this url and get the name.
parts := strings.Split(ins.Instance, "/")
nodeNames.Insert(parts[len(parts)-1])
}
return nodeNames, nil
}
// Get returns the Instance Group by name.
func (i *Instances) Get(name string) (*compute.InstanceGroup, error) {
ig, err := i.cloud.GetInstanceGroup(name, i.zone)
if err != nil {
return nil, err
}
i.snapshotter.Add(name, ig)
return ig, nil
}
// Add adds the given instances to the Instance Group.
func (i *Instances) Add(groupName string, names []string) error {
glog.V(3).Infof("Adding nodes %v to %v", names, groupName)
return i.cloud.AddInstancesToInstanceGroup(groupName, i.zone, names)
}
// Remove removes the given instances from the Instance Group.
func (i *Instances) Remove(groupName string, names []string) error {
glog.V(3).Infof("Removing nodes %v from %v", names, groupName)
return i.cloud.RemoveInstancesFromInstanceGroup(groupName, i.zone, names)
}
// Sync syncs kubernetes instances with the instances in the instance group.
func (i *Instances) Sync(nodes []string) (err error) {
glog.V(3).Infof("Syncing nodes %v", nodes)
defer func() {
// The node pool is only responsible for syncing nodes to instance
// groups. It never creates/deletes, so if an instance group is
// not found there's nothing it can do about it anyway. In most cases
// this will happen because the backend pool has deleted the instance
// group; however, if it happens because a user deletes the IG by mistake
// we should just wait till the backend pool fixes it.
if utils.IsHTTPErrorCode(err, http.StatusNotFound) {
glog.Infof("Node pool encountered a 404, ignoring: %v", err)
err = nil
}
}()
pool := i.snapshotter.Snapshot()
for name := range pool {
gceNodes := sets.NewString()
gceNodes, err = i.list(name)
if err != nil {
return err
}
kubeNodes := sets.NewString(nodes...)
// A node deleted via kubernetes could still exist as a gce vm. We don't
// want to route requests to it. Similarly, a node added to kubernetes
// needs to get added to the instance group so we do route requests to it.
removeNodes := gceNodes.Difference(kubeNodes).List()
addNodes := kubeNodes.Difference(gceNodes).List()
if len(removeNodes) != 0 {
if err = i.Remove(
name, gceNodes.Difference(kubeNodes).List()); err != nil {
return err
}
}
if len(addNodes) != 0 {
if err = i.Add(
name, kubeNodes.Difference(gceNodes).List()); err != nil {
return err
}
}
}
return nil
}
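// exampleNodePoolUsage is an editor's sketch, not part of the original file:
// it shows the expected call sequence against a NodePool, here backed by the
// in-package fake. The group name, port, zone and node names are placeholders.
func exampleNodePoolUsage() error {
pool := NewNodePool(NewFakeInstanceGroups(sets.NewString("node-a", "node-b")), "example-zone")
if _, _, err := pool.AddInstanceGroup("k8-ig-example", 30301); err != nil {
return err
}
// Sync adds/removes instances so the group matches the current kube nodes.
return pool.Sync([]string{"node-a"})
}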

View file

@ -0,0 +1,75 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package instances
import (
"testing"
"k8s.io/kubernetes/pkg/util/sets"
)
const defaultZone = "default-zone"
func TestNodePoolSync(t *testing.T) {
f := NewFakeInstanceGroups(sets.NewString(
[]string{"n1", "n2"}...))
pool := NewNodePool(f, defaultZone)
pool.AddInstanceGroup("test", 80)
// KubeNodes: n1
// GCENodes: n1, n2
// Remove n2 from the instance group.
f.calls = []int{}
kubeNodes := sets.NewString([]string{"n1"}...)
pool.Sync(kubeNodes.List())
if f.instances.Len() != kubeNodes.Len() || !kubeNodes.IsSuperset(f.instances) {
t.Fatalf("%v != %v", kubeNodes, f.instances)
}
// KubeNodes: n1, n2
// GCENodes: n1
// Try to add n2 to the instance group.
f = NewFakeInstanceGroups(sets.NewString([]string{"n1"}...))
pool = NewNodePool(f, defaultZone)
pool.AddInstanceGroup("test", 80)
f.calls = []int{}
kubeNodes = sets.NewString([]string{"n1", "n2"}...)
pool.Sync(kubeNodes.List())
if f.instances.Len() != kubeNodes.Len() ||
!kubeNodes.IsSuperset(f.instances) {
t.Fatalf("%v != %v", kubeNodes, f.instances)
}
// KubeNodes: n1, n2
// GCENodes: n1, n2
// Do nothing.
f = NewFakeInstanceGroups(sets.NewString([]string{"n1", "n2"}...))
pool = NewNodePool(f, defaultZone)
pool.AddInstanceGroup("test", 80)
f.calls = []int{}
kubeNodes = sets.NewString([]string{"n1", "n2"}...)
pool.Sync(kubeNodes.List())
if len(f.calls) != 0 {
t.Fatalf(
"Did not expect any calls, got %+v", f.calls)
}
}

View file

@ -0,0 +1,47 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package instances
import (
compute "google.golang.org/api/compute/v1"
)
// NodePool is an interface to manage a pool of kubernetes nodes synced with vm instances in the cloud
// through the InstanceGroups interface.
type NodePool interface {
AddInstanceGroup(name string, port int64) (*compute.InstanceGroup, *compute.NamedPort, error)
DeleteInstanceGroup(name string) error
// TODO: Refactor for modularity
Add(groupName string, nodeNames []string) error
Remove(groupName string, nodeNames []string) error
Sync(nodeNames []string) error
Get(name string) (*compute.InstanceGroup, error)
}
// InstanceGroups is an interface for managing gce instance groups, and the instances therein.
type InstanceGroups interface {
GetInstanceGroup(name, zone string) (*compute.InstanceGroup, error)
CreateInstanceGroup(name, zone string) (*compute.InstanceGroup, error)
DeleteInstanceGroup(name, zone string) error
// TODO: Refactor for modularity.
ListInstancesInInstanceGroup(name, zone string, state string) (*compute.InstanceGroupsListInstances, error)
AddInstancesToInstanceGroup(name, zone string, instanceNames []string) error
RemoveInstancesFromInstanceGroup(name, zone string, instanceName []string) error
AddPortToInstanceGroup(ig *compute.InstanceGroup, port int64) (*compute.NamedPort, error)
}

View file

@ -0,0 +1,438 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package loadbalancers
import (
"fmt"
"testing"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/utils"
)
var testIPManager = testIP{}
type testIP struct {
start int
}
func (t *testIP) ip() string {
t.start++
return fmt.Sprintf("0.0.0.%v", t.start)
}
// Loadbalancer fakes
// FakeLoadBalancers is a type that fakes out the loadbalancer interface.
type FakeLoadBalancers struct {
Fw []*compute.ForwardingRule
Um []*compute.UrlMap
Tp []*compute.TargetHttpProxy
Tps []*compute.TargetHttpsProxy
IP []*compute.Address
Certs []*compute.SslCertificate
name string
}
// TODO: There is some duplication between these functions and the name mungers in
// loadbalancer file.
func (f *FakeLoadBalancers) fwName(https bool) string {
if https {
return fmt.Sprintf("%v-%v", httpsForwardingRulePrefix, f.name)
}
return fmt.Sprintf("%v-%v", forwardingRulePrefix, f.name)
}
func (f *FakeLoadBalancers) umName() string {
return fmt.Sprintf("%v-%v", urlMapPrefix, f.name)
}
func (f *FakeLoadBalancers) tpName(https bool) string {
if https {
return fmt.Sprintf("%v-%v", targetHTTPSProxyPrefix, f.name)
}
return fmt.Sprintf("%v-%v", targetProxyPrefix, f.name)
}
// String is the string method for FakeLoadBalancers.
func (f *FakeLoadBalancers) String() string {
msg := fmt.Sprintf(
"Loadbalancer %v,\nforwarding rules:\n", f.name)
for _, fw := range f.Fw {
msg += fmt.Sprintf("\t%v\n", fw.Name)
}
msg += fmt.Sprintf("Target proxies\n")
for _, tp := range f.Tp {
msg += fmt.Sprintf("\t%v\n", tp.Name)
}
msg += fmt.Sprintf("UrlMaps\n")
for _, um := range f.Um {
msg += fmt.Sprintf("%v\n", um.Name)
msg += fmt.Sprintf("\tHost Rules:\n")
for _, hostRule := range um.HostRules {
msg += fmt.Sprintf("\t\t%v\n", hostRule)
}
msg += fmt.Sprintf("\tPath Matcher:\n")
for _, pathMatcher := range um.PathMatchers {
msg += fmt.Sprintf("\t\t%v\n", pathMatcher.Name)
for _, pathRule := range pathMatcher.PathRules {
msg += fmt.Sprintf("\t\t\t%+v\n", pathRule)
}
}
}
return msg
}
// Forwarding Rule fakes
// GetGlobalForwardingRule returns a fake forwarding rule.
func (f *FakeLoadBalancers) GetGlobalForwardingRule(name string) (*compute.ForwardingRule, error) {
for i := range f.Fw {
if f.Fw[i].Name == name {
return f.Fw[i], nil
}
}
return nil, fmt.Errorf("Forwarding rule %v not found", name)
}
// CreateGlobalForwardingRule fakes forwarding rule creation.
func (f *FakeLoadBalancers) CreateGlobalForwardingRule(proxyLink, ip, name, portRange string) (*compute.ForwardingRule, error) {
if ip == "" {
ip = testIPManager.ip()
}
rule := &compute.ForwardingRule{
Name: name,
IPAddress: ip,
Target: proxyLink,
PortRange: portRange,
IPProtocol: "TCP",
SelfLink: name,
}
f.Fw = append(f.Fw, rule)
return rule, nil
}
// SetProxyForGlobalForwardingRule fakes setting a global forwarding rule.
func (f *FakeLoadBalancers) SetProxyForGlobalForwardingRule(fw *compute.ForwardingRule, proxyLink string) error {
for i := range f.Fw {
if f.Fw[i].Name == fw.Name {
f.Fw[i].Target = proxyLink
}
}
return nil
}
// DeleteGlobalForwardingRule fakes deleting a global forwarding rule.
func (f *FakeLoadBalancers) DeleteGlobalForwardingRule(name string) error {
fw := []*compute.ForwardingRule{}
for i := range f.Fw {
if f.Fw[i].Name != name {
fw = append(fw, f.Fw[i])
}
}
f.Fw = fw
return nil
}
// UrlMaps fakes
// GetUrlMap fakes getting url maps from the cloud.
func (f *FakeLoadBalancers) GetUrlMap(name string) (*compute.UrlMap, error) {
for i := range f.Um {
if f.Um[i].Name == name {
return f.Um[i], nil
}
}
return nil, fmt.Errorf("Url Map %v not found", name)
}
// CreateUrlMap fakes url-map creation.
func (f *FakeLoadBalancers) CreateUrlMap(backend *compute.BackendService, name string) (*compute.UrlMap, error) {
urlMap := &compute.UrlMap{
Name: name,
DefaultService: backend.SelfLink,
SelfLink: f.umName(),
}
f.Um = append(f.Um, urlMap)
return urlMap, nil
}
// UpdateUrlMap fakes updating url-maps.
func (f *FakeLoadBalancers) UpdateUrlMap(urlMap *compute.UrlMap) (*compute.UrlMap, error) {
for i := range f.Um {
if f.Um[i].Name == urlMap.Name {
f.Um[i] = urlMap
return urlMap, nil
}
}
return nil, nil
}
// DeleteUrlMap fakes url-map deletion.
func (f *FakeLoadBalancers) DeleteUrlMap(name string) error {
um := []*compute.UrlMap{}
for i := range f.Um {
if f.Um[i].Name != name {
um = append(um, f.Um[i])
}
}
f.Um = um
return nil
}
// TargetProxies fakes
// GetTargetHttpProxy fakes getting target http proxies from the cloud.
func (f *FakeLoadBalancers) GetTargetHttpProxy(name string) (*compute.TargetHttpProxy, error) {
for i := range f.Tp {
if f.Tp[i].Name == name {
return f.Tp[i], nil
}
}
return nil, fmt.Errorf("Targetproxy %v not found", name)
}
// CreateTargetHttpProxy fakes creating a target http proxy.
func (f *FakeLoadBalancers) CreateTargetHttpProxy(urlMap *compute.UrlMap, name string) (*compute.TargetHttpProxy, error) {
proxy := &compute.TargetHttpProxy{
Name: name,
UrlMap: urlMap.SelfLink,
SelfLink: name,
}
f.Tp = append(f.Tp, proxy)
return proxy, nil
}
// DeleteTargetHttpProxy fakes deleting a target http proxy.
func (f *FakeLoadBalancers) DeleteTargetHttpProxy(name string) error {
tp := []*compute.TargetHttpProxy{}
for i := range f.Tp {
if f.Tp[i].Name != name {
tp = append(tp, f.Tp[i])
}
}
f.Tp = tp
return nil
}
// SetUrlMapForTargetHttpProxy fakes setting an url-map for a target http proxy.
func (f *FakeLoadBalancers) SetUrlMapForTargetHttpProxy(proxy *compute.TargetHttpProxy, urlMap *compute.UrlMap) error {
for i := range f.Tp {
if f.Tp[i].Name == proxy.Name {
f.Tp[i].UrlMap = urlMap.SelfLink
}
}
return nil
}
// TargetHttpsProxy fakes
// GetTargetHttpsProxy fakes getting target http proxies from the cloud.
func (f *FakeLoadBalancers) GetTargetHttpsProxy(name string) (*compute.TargetHttpsProxy, error) {
for i := range f.Tps {
if f.Tps[i].Name == name {
return f.Tps[i], nil
}
}
return nil, fmt.Errorf("Targetproxy %v not found", name)
}
// CreateTargetHttpsProxy fakes creating a target http proxy.
func (f *FakeLoadBalancers) CreateTargetHttpsProxy(urlMap *compute.UrlMap, cert *compute.SslCertificate, name string) (*compute.TargetHttpsProxy, error) {
proxy := &compute.TargetHttpsProxy{
Name: name,
UrlMap: urlMap.SelfLink,
SslCertificates: []string{cert.SelfLink},
SelfLink: name,
}
f.Tps = append(f.Tps, proxy)
return proxy, nil
}
// DeleteTargetHttpsProxy fakes deleting a target http proxy.
func (f *FakeLoadBalancers) DeleteTargetHttpsProxy(name string) error {
tp := []*compute.TargetHttpsProxy{}
for i := range f.Tps {
if f.Tps[i].Name != name {
tp = append(tp, f.Tps[i])
}
}
f.Tps = tp
return nil
}
// SetUrlMapForTargetHttpsProxy fakes setting an url-map for a target http proxy.
func (f *FakeLoadBalancers) SetUrlMapForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, urlMap *compute.UrlMap) error {
for i := range f.Tps {
if f.Tps[i].Name == proxy.Name {
f.Tps[i].UrlMap = urlMap.SelfLink
}
}
return nil
}
// SetSslCertificateForTargetHttpsProxy fakes out setting certificates.
func (f *FakeLoadBalancers) SetSslCertificateForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, SSLCert *compute.SslCertificate) error {
found := false
for i := range f.Tps {
if f.Tps[i].Name == proxy.Name {
f.Tps[i].SslCertificates = []string{SSLCert.SelfLink}
found = true
}
}
if !found {
return fmt.Errorf("Failed to find proxy %v", proxy.Name)
}
return nil
}
// UrlMap fakes
// CheckURLMap checks the URL map.
func (f *FakeLoadBalancers) CheckURLMap(t *testing.T, l7 *L7, expectedMap map[string]utils.FakeIngressRuleValueMap) {
um, err := f.GetUrlMap(l7.um.Name)
if err != nil || um == nil {
t.Fatalf("%v", err)
}
// Check the default backend
var d string
if h, ok := expectedMap[utils.DefaultBackendKey]; ok {
if d, ok = h[utils.DefaultBackendKey]; ok {
delete(h, utils.DefaultBackendKey)
}
delete(expectedMap, utils.DefaultBackendKey)
}
// The urlmap should have a default backend, and each path matcher.
if d != "" && l7.um.DefaultService != d {
t.Fatalf("Expected default backend %v found %v",
d, l7.um.DefaultService)
}
for _, matcher := range l7.um.PathMatchers {
var hostname string
// There's a 1:1 mapping between pathmatchers and hosts
for _, hostRule := range l7.um.HostRules {
if matcher.Name == hostRule.PathMatcher {
if len(hostRule.Hosts) != 1 {
t.Fatalf("Unexpected hosts in hostrules %+v", hostRule)
}
if d != "" && matcher.DefaultService != d {
t.Fatalf("Expected default backend %v found %v",
d, matcher.DefaultService)
}
hostname = hostRule.Hosts[0]
break
}
}
// These are all pathrules for a single host, found above
for _, rule := range matcher.PathRules {
if len(rule.Paths) != 1 {
t.Fatalf("Unexpected rule in pathrules %+v", rule)
}
pathRule := rule.Paths[0]
if hostMap, ok := expectedMap[hostname]; !ok {
t.Fatalf("Expected map for host %v: %v", hostname, hostMap)
} else if svc, ok := expectedMap[hostname][pathRule]; !ok {
t.Fatalf("Expected rule %v in host map", pathRule)
} else if svc != rule.Service {
t.Fatalf("Expected service %v found %v", svc, rule.Service)
}
delete(expectedMap[hostname], pathRule)
if len(expectedMap[hostname]) == 0 {
delete(expectedMap, hostname)
}
}
}
if len(expectedMap) != 0 {
t.Fatalf("Untranslated entries %+v", expectedMap)
}
}
// Static IP fakes
// ReserveGlobalStaticIP fakes out static IP reservation.
func (f *FakeLoadBalancers) ReserveGlobalStaticIP(name, IPAddress string) (*compute.Address, error) {
ip := &compute.Address{
Name: name,
Address: IPAddress,
}
f.IP = append(f.IP, ip)
return ip, nil
}
// GetGlobalStaticIP fakes out static IP retrieval.
func (f *FakeLoadBalancers) GetGlobalStaticIP(name string) (*compute.Address, error) {
for i := range f.IP {
if f.IP[i].Name == name {
return f.IP[i], nil
}
}
return nil, fmt.Errorf("Static IP %v not found", name)
}
// DeleteGlobalStaticIP fakes out static IP deletion.
func (f *FakeLoadBalancers) DeleteGlobalStaticIP(name string) error {
ip := []*compute.Address{}
for i := range f.IP {
if f.IP[i].Name != name {
ip = append(ip, f.IP[i])
}
}
f.IP = ip
return nil
}
// SslCertificate fakes
// GetSslCertificate fakes out getting ssl certs.
func (f *FakeLoadBalancers) GetSslCertificate(name string) (*compute.SslCertificate, error) {
for i := range f.Certs {
if f.Certs[i].Name == name {
return f.Certs[i], nil
}
}
return nil, fmt.Errorf("Cert %v not found", name)
}
// CreateSslCertificate fakes out certificate creation.
func (f *FakeLoadBalancers) CreateSslCertificate(cert *compute.SslCertificate) (*compute.SslCertificate, error) {
cert.SelfLink = cert.Name
f.Certs = append(f.Certs, cert)
return cert, nil
}
// DeleteSslCertificate fakes out certificate deletion.
func (f *FakeLoadBalancers) DeleteSslCertificate(name string) error {
certs := []*compute.SslCertificate{}
for i := range f.Certs {
if f.Certs[i].Name != name {
certs = append(certs, f.Certs[i])
}
}
f.Certs = certs
return nil
}
// NewFakeLoadBalancers creates a fake cloud client. Name is the name
// inserted into the selfLink of the associated resources for testing.
// eg: forwardingRule.SelfLink == k8s-fw-name.
func NewFakeLoadBalancers(name string) *FakeLoadBalancers {
return &FakeLoadBalancers{
Fw: []*compute.ForwardingRule{},
name: name,
}
}

View file

@ -0,0 +1,74 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package loadbalancers
import (
compute "google.golang.org/api/compute/v1"
)
// LoadBalancers is an interface for managing all the gce resources needed by L7
// loadbalancers. We don't have individual pools for each of these resources
// because none of them are usable (or acquirable) stand-alone, unlike backends
// and instance groups. The dependency graph:
// ForwardingRule -> TargetProxies -> UrlMaps
// (a short illustrative sketch follows the interface below).
type LoadBalancers interface {
// Forwarding Rules
GetGlobalForwardingRule(name string) (*compute.ForwardingRule, error)
CreateGlobalForwardingRule(proxyLink, ip, name, portRange string) (*compute.ForwardingRule, error)
DeleteGlobalForwardingRule(name string) error
SetProxyForGlobalForwardingRule(fw *compute.ForwardingRule, proxy string) error
// UrlMaps
GetUrlMap(name string) (*compute.UrlMap, error)
CreateUrlMap(backend *compute.BackendService, name string) (*compute.UrlMap, error)
UpdateUrlMap(urlMap *compute.UrlMap) (*compute.UrlMap, error)
DeleteUrlMap(name string) error
// TargetProxies
GetTargetHttpProxy(name string) (*compute.TargetHttpProxy, error)
CreateTargetHttpProxy(urlMap *compute.UrlMap, name string) (*compute.TargetHttpProxy, error)
DeleteTargetHttpProxy(name string) error
SetUrlMapForTargetHttpProxy(proxy *compute.TargetHttpProxy, urlMap *compute.UrlMap) error
// TargetHttpsProxies
GetTargetHttpsProxy(name string) (*compute.TargetHttpsProxy, error)
CreateTargetHttpsProxy(urlMap *compute.UrlMap, SSLCerts *compute.SslCertificate, name string) (*compute.TargetHttpsProxy, error)
DeleteTargetHttpsProxy(name string) error
SetUrlMapForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, urlMap *compute.UrlMap) error
SetSslCertificateForTargetHttpsProxy(proxy *compute.TargetHttpsProxy, SSLCerts *compute.SslCertificate) error
// SslCertificates
GetSslCertificate(name string) (*compute.SslCertificate, error)
CreateSslCertificate(certs *compute.SslCertificate) (*compute.SslCertificate, error)
DeleteSslCertificate(name string) error
// Static IP
ReserveGlobalStaticIP(name, IPAddress string) (*compute.Address, error)
GetGlobalStaticIP(name string) (*compute.Address, error)
DeleteGlobalStaticIP(name string) error
}
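// exampleEdgeHop is an illustrative sketch, not part of the original source:
// it shows the creation order implied by the dependency graph above, using
// only methods of this interface. The resource names are hypothetical and the
// empty IP lets GCE pick an ephemeral address.
func exampleEdgeHop(cloud LoadBalancers, be *compute.BackendService) (*compute.ForwardingRule, error) {
	um, err := cloud.CreateUrlMap(be, "k8s-um-example")
	if err != nil {
		return nil, err
	}
	tp, err := cloud.CreateTargetHttpProxy(um, "k8s-tp-example")
	if err != nil {
		return nil, err
	}
	// The forwarding rule fronts the proxy, which routes via the url map to the backend.
	return cloud.CreateGlobalForwardingRule(tp.SelfLink, "", "k8s-fw-example", "80-80")
}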
// LoadBalancerPool is an interface to manage the cloud resources associated
// with a gce loadbalancer.
type LoadBalancerPool interface {
Get(name string) (*L7, error)
Add(ri *L7RuntimeInfo) error
Delete(name string) error
Sync(ri []*L7RuntimeInfo) error
GC(names []string) error
Shutdown() error
}
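// examplePoolSync is an illustrative sketch, not part of the original source:
// the controller-side lifecycle of a LoadBalancerPool. The Ingress name and
// runtime info are hypothetical.
func examplePoolSync(pool LoadBalancerPool) error {
	// Sync the pool to the set of Ingresses the controller currently knows about.
	desired := []*L7RuntimeInfo{{Name: "default/my-ingress", AllowHTTP: true}}
	if err := pool.Sync(desired); err != nil {
		return err
	}
	// Then garbage collect any loadbalancer whose Ingress no longer exists.
	return pool.GC([]string{"default/my-ingress"})
}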

View file

@ -0,0 +1,789 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package loadbalancers
import (
"crypto/md5"
"encoding/hex"
"encoding/json"
"fmt"
"net/http"
"strings"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/backends"
"k8s.io/contrib/ingress/controllers/gce/storage"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/util/sets"
"github.com/golang/glog"
)
const (
// The gce api uses the name of a path rule to match a host rule.
hostRulePrefix = "host"
// DefaultHost is the host used if none is specified. It is a valid value
// for the "Host" field recognized by GCE.
DefaultHost = "*"
// DefaultPath is the path used if none is specified. It is a valid path
// recognized by GCE.
DefaultPath = "/*"
// A single target proxy/urlmap/forwarding rule is created per loadbalancer.
// Tagged with the namespace/name of the Ingress.
targetProxyPrefix = "k8s-tp"
targetHTTPSProxyPrefix = "k8s-tps"
sslCertPrefix = "k8s-ssl"
forwardingRulePrefix = "k8s-fw"
httpsForwardingRulePrefix = "k8s-fws"
urlMapPrefix = "k8s-um"
httpDefaultPortRange = "80-80"
httpsDefaultPortRange = "443-443"
)
// L7s implements LoadBalancerPool.
type L7s struct {
cloud LoadBalancers
snapshotter storage.Snapshotter
// TODO: Remove this field and always ask the BackendPool using the NodePort.
glbcDefaultBackend *compute.BackendService
defaultBackendPool backends.BackendPool
defaultBackendNodePort int64
namer utils.Namer
}
// NewLoadBalancerPool returns a new loadbalancer pool.
// - cloud: implements LoadBalancers. Used to sync L7 loadbalancer resources
// with the cloud.
// - defaultBackendPool: a BackendPool used to manage the GCE BackendService for
// the default backend.
// - defaultBackendNodePort: The nodePort of the Kubernetes service representing
// the default backend.
func NewLoadBalancerPool(
cloud LoadBalancers,
defaultBackendPool backends.BackendPool,
defaultBackendNodePort int64, namer utils.Namer) LoadBalancerPool {
return &L7s{cloud, storage.NewInMemoryPool(), nil, defaultBackendPool, defaultBackendNodePort, namer}
}
func (l *L7s) create(ri *L7RuntimeInfo) (*L7, error) {
// Lazily create a default backend so we don't tax users who don't care
// about Ingress by consuming 1 of their 3 GCE BackendServices. This
// BackendService is deleted when there are no more Ingresses, either
// through Sync or Shutdown.
if l.glbcDefaultBackend == nil {
err := l.defaultBackendPool.Add(l.defaultBackendNodePort)
if err != nil {
return nil, err
}
l.glbcDefaultBackend, err = l.defaultBackendPool.Get(l.defaultBackendNodePort)
if err != nil {
return nil, err
}
}
return &L7{
runtimeInfo: ri,
Name: l.namer.LBName(ri.Name),
cloud: l.cloud,
glbcDefaultBackend: l.glbcDefaultBackend,
namer: l.namer,
sslCert: nil,
}, nil
}
// Get returns the loadbalancer by name.
func (l *L7s) Get(name string) (*L7, error) {
name = l.namer.LBName(name)
lb, exists := l.snapshotter.Get(name)
if !exists {
return nil, fmt.Errorf("Loadbalancer %v not in pool", name)
}
return lb.(*L7), nil
}
// Add gets or creates a loadbalancer.
// If the loadbalancer already exists, it checks that its edges are valid.
func (l *L7s) Add(ri *L7RuntimeInfo) (err error) {
name := l.namer.LBName(ri.Name)
lb, _ := l.Get(name)
if lb == nil {
glog.Infof("Creating l7 %v", name)
lb, err = l.create(ri)
if err != nil {
return err
}
}
// Add the lb to the pool so that if we create a UrlMap but run out of
// quota while creating the ForwardingRule, the UrlMap still gets cleaned
// up during GC.
defer l.snapshotter.Add(name, lb)
// Why edge hop for the create?
// The loadbalancer is a fictitious resource, it doesn't exist in gce. To
// make it exist we need to create a collection of gce resources, done
// through the edge hop.
if err := lb.edgeHop(); err != nil {
return err
}
return nil
}
// Delete deletes a loadbalancer by name.
func (l *L7s) Delete(name string) error {
name = l.namer.LBName(name)
lb, err := l.Get(name)
if err != nil {
return err
}
glog.Infof("Deleting lb %v", name)
if err := lb.Cleanup(); err != nil {
return err
}
l.snapshotter.Delete(name)
return nil
}
// Sync loadbalancers with the given runtime info from the controller.
func (l *L7s) Sync(lbs []*L7RuntimeInfo) error {
glog.V(3).Infof("Creating loadbalancers %+v", lbs)
// The default backend is completely managed by the l7 pool.
// This includes recreating it if it's deleted, or fixing broken links.
if err := l.defaultBackendPool.Sync([]int64{l.defaultBackendNodePort}); err != nil {
return err
}
// create new loadbalancers, perform an edge hop for existing
for _, ri := range lbs {
if err := l.Add(ri); err != nil {
return err
}
}
// Tear down the default backend when there are no more loadbalancers
// because the cluster could go down anytime and we'd leak it otherwise.
if len(lbs) == 0 {
if err := l.defaultBackendPool.Delete(l.defaultBackendNodePort); err != nil {
return err
}
l.glbcDefaultBackend = nil
}
return nil
}
// GC garbage collects loadbalancers not in the input list.
func (l *L7s) GC(names []string) error {
knownLoadBalancers := sets.NewString()
for _, n := range names {
knownLoadBalancers.Insert(l.namer.LBName(n))
}
pool := l.snapshotter.Snapshot()
// Delete unknown loadbalancers
for name := range pool {
if knownLoadBalancers.Has(name) {
continue
}
glog.V(3).Infof("GCing loadbalancer %v", name)
if err := l.Delete(name); err != nil {
return err
}
}
return nil
}
// Shutdown logs whether or not the pool is empty.
func (l *L7s) Shutdown() error {
if err := l.GC([]string{}); err != nil {
return err
}
if err := l.defaultBackendPool.Shutdown(); err != nil {
return err
}
glog.Infof("Loadbalancer pool shutdown.")
return nil
}
// TLSCerts encapsulates .pem encoded TLS information.
type TLSCerts struct {
// Key is private key.
Key string
// Cert is a public key.
Cert string
// Chain is a certificate chain.
Chain string
}
// L7RuntimeInfo is info passed to this module from the controller runtime.
type L7RuntimeInfo struct {
// Name is the name of a loadbalancer.
Name string
// IP is the desired ip of the loadbalancer, eg from a staticIP.
IP string
// TLS are the tls certs to use in termination.
TLS *TLSCerts
// AllowHTTP controls whether a :80 (HTTP) forwarding rule is created.
// If TLS is nil and AllowHTTP is false, no loadbalancer is created.
AllowHTTP bool
}
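// Illustrative sketch, not part of the original source: two common runtime
// configurations, mirroring the AllowHTTP/TLS semantics described above.
//
//	httpOnly := &L7RuntimeInfo{Name: "default/http-ing", AllowHTTP: true}
//	httpsOnly := &L7RuntimeInfo{Name: "default/https-ing", AllowHTTP: false,
//		TLS: &TLSCerts{Key: "pem key", Cert: "pem cert"}}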
// L7 represents a single L7 loadbalancer.
type L7 struct {
Name string
// runtimeInfo is non-cloudprovider information passed from the controller.
runtimeInfo *L7RuntimeInfo
// cloud is an interface to manage loadbalancers in the GCE cloud.
cloud LoadBalancers
// um is the UrlMap associated with this L7.
um *compute.UrlMap
// tp is the TargetHTTPProxy associated with this L7.
tp *compute.TargetHttpProxy
// tps is the TargetHTTPSProxy associated with this L7.
tps *compute.TargetHttpsProxy
// fw is the GlobalForwardingRule that points to the TargetHTTPProxy.
fw *compute.ForwardingRule
// fws is the GlobalForwardingRule that points to the TargetHTTPSProxy.
fws *compute.ForwardingRule
// ip is the static-ip associated with both GlobalForwardingRules.
ip *compute.Address
// sslCert is the ssl cert associated with the targetHTTPSProxy.
// TODO: Make this a custom type that contains crt+key
sslCert *compute.SslCertificate
// glbcDefaultBackend is the backend to use if no path rules match.
// TODO: Expose this to users.
glbcDefaultBackend *compute.BackendService
// namer is used to compute names of the various sub-components of an L7.
namer utils.Namer
}
func (l *L7) checkUrlMap(backend *compute.BackendService) (err error) {
if l.glbcDefaultBackend == nil {
return fmt.Errorf("Cannot create urlmap without default backend.")
}
urlMapName := l.namer.Truncate(fmt.Sprintf("%v-%v", urlMapPrefix, l.Name))
urlMap, _ := l.cloud.GetUrlMap(urlMapName)
if urlMap != nil {
glog.V(3).Infof("Url map %v already exists", urlMap.Name)
l.um = urlMap
return nil
}
glog.Infof("Creating url map %v for backend %v", urlMapName, l.glbcDefaultBackend.Name)
urlMap, err = l.cloud.CreateUrlMap(l.glbcDefaultBackend, urlMapName)
if err != nil {
return err
}
l.um = urlMap
return nil
}
func (l *L7) checkProxy() (err error) {
if l.um == nil {
return fmt.Errorf("Cannot create proxy without urlmap.")
}
proxyName := l.namer.Truncate(fmt.Sprintf("%v-%v", targetProxyPrefix, l.Name))
proxy, _ := l.cloud.GetTargetHttpProxy(proxyName)
if proxy == nil {
glog.Infof("Creating new http proxy for urlmap %v", l.um.Name)
proxy, err = l.cloud.CreateTargetHttpProxy(l.um, proxyName)
if err != nil {
return err
}
l.tp = proxy
return nil
}
if !utils.CompareLinks(proxy.UrlMap, l.um.SelfLink) {
glog.Infof("Proxy %v has the wrong url map, setting %v overwriting %v",
proxy.Name, l.um, proxy.UrlMap)
if err := l.cloud.SetUrlMapForTargetHttpProxy(proxy, l.um); err != nil {
return err
}
}
l.tp = proxy
return nil
}
func (l *L7) checkSSLCert() (err error) {
// TODO: Currently, GCE only supports a single certificate per static IP
// so we don't need to bother with disambiguation. Naming the cert after
// the loadbalancer is a simplification.
certName := l.namer.Truncate(fmt.Sprintf("%v-%v", sslCertPrefix, l.Name))
cert, _ := l.cloud.GetSslCertificate(certName)
if cert == nil {
glog.Infof("Creating new sslCertificates %v for %v", l.Name, certName)
cert, err = l.cloud.CreateSslCertificate(&compute.SslCertificate{
Name: certName,
Certificate: l.runtimeInfo.TLS.Cert,
PrivateKey: l.runtimeInfo.TLS.Key,
})
if err != nil {
return err
}
}
l.sslCert = cert
return nil
}
func (l *L7) checkHttpsProxy() (err error) {
if l.sslCert == nil {
glog.V(3).Infof("No SSL certificates for %v, will not create HTTPS proxy.", l.Name)
return nil
}
if l.um == nil {
return fmt.Errorf("No UrlMap for %v, will not create HTTPS proxy.", l.Name)
}
proxyName := l.namer.Truncate(fmt.Sprintf("%v-%v", targetHTTPSProxyPrefix, l.Name))
proxy, _ := l.cloud.GetTargetHttpsProxy(proxyName)
if proxy == nil {
glog.Infof("Creating new https proxy for urlmap %v", l.um.Name)
proxy, err = l.cloud.CreateTargetHttpsProxy(l.um, l.sslCert, proxyName)
if err != nil {
return err
}
l.tps = proxy
return nil
}
if !utils.CompareLinks(proxy.UrlMap, l.um.SelfLink) {
glog.Infof("Https proxy %v has the wrong url map, setting %v overwriting %v",
proxy.Name, l.um, proxy.UrlMap)
if err := l.cloud.SetUrlMapForTargetHttpsProxy(proxy, l.um); err != nil {
return err
}
}
cert := proxy.SslCertificates[0]
if !utils.CompareLinks(cert, l.sslCert.SelfLink) {
glog.Infof("Https proxy %v has the wrong ssl certs, setting %v overwriting %v",
proxy.Name, l.sslCert.SelfLink, cert)
if err := l.cloud.SetSslCertificateForTargetHttpsProxy(proxy, l.sslCert); err != nil {
return err
}
}
glog.V(3).Infof("Created target https proxy %v", proxy.Name)
l.tps = proxy
return nil
}
func (l *L7) checkForwardingRule(name, proxyLink, ip, portRange string) (fw *compute.ForwardingRule, err error) {
fw, _ = l.cloud.GetGlobalForwardingRule(name)
if fw != nil && (ip != "" && fw.IPAddress != ip || fw.PortRange != portRange) {
glog.Warningf("Recreating forwarding rule %v(%v), so it has %v(%v)",
fw.IPAddress, fw.PortRange, ip, portRange)
if err := l.cloud.DeleteGlobalForwardingRule(name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return nil, err
}
}
fw = nil
}
if fw == nil {
parts := strings.Split(proxyLink, "/")
glog.Infof("Creating forwarding rule for proxy %v and ip %v:%v", parts[len(parts)-1:], ip, portRange)
fw, err = l.cloud.CreateGlobalForwardingRule(proxyLink, ip, name, portRange)
if err != nil {
return nil, err
}
}
// TODO: If the port range and protocol don't match, recreate the rule
if utils.CompareLinks(fw.Target, proxyLink) {
glog.V(3).Infof("Forwarding rule %v already exists", fw.Name)
} else {
glog.Infof("Forwarding rule %v has the wrong proxy, setting %v overwriting %v",
fw.Name, fw.Target, proxyLink)
if err := l.cloud.SetProxyForGlobalForwardingRule(fw, proxyLink); err != nil {
return nil, err
}
}
return fw, nil
}
func (l *L7) checkHttpForwardingRule() (err error) {
if l.tp == nil {
return fmt.Errorf("Cannot create forwarding rule without proxy.")
}
var address string
if l.ip != nil {
address = l.ip.Address
}
name := l.namer.Truncate(fmt.Sprintf("%v-%v", forwardingRulePrefix, l.Name))
fw, err := l.checkForwardingRule(name, l.tp.SelfLink, address, httpDefaultPortRange)
if err != nil {
return err
}
l.fw = fw
return nil
}
func (l *L7) checkHttpsForwardingRule() (err error) {
if l.tps == nil {
glog.V(3).Infof("No https target proxy for %v, not created https forwarding rule", l.Name)
return nil
}
var address string
if l.ip != nil {
address = l.ip.Address
}
name := l.namer.Truncate(fmt.Sprintf("%v-%v", httpsForwardingRulePrefix, l.Name))
fws, err := l.checkForwardingRule(name, l.tps.SelfLink, address, httpsDefaultPortRange)
if err != nil {
return err
}
l.fws = fws
return nil
}
func (l *L7) checkStaticIP() (err error) {
if l.fw == nil || l.fw.IPAddress == "" {
return fmt.Errorf("Will not create static IP without a forwarding rule.")
}
staticIPName := l.namer.Truncate(fmt.Sprintf("%v-%v", forwardingRulePrefix, l.Name))
ip, _ := l.cloud.GetGlobalStaticIP(staticIPName)
if ip == nil {
glog.Infof("Creating static ip %v", staticIPName)
ip, err = l.cloud.ReserveGlobalStaticIP(staticIPName, l.fw.IPAddress)
if err != nil {
if utils.IsHTTPErrorCode(err, http.StatusConflict) ||
utils.IsHTTPErrorCode(err, http.StatusBadRequest) {
glog.V(3).Infof("IP %v(%v) is already reserved, assuming it is OK to use.",
l.fw.IPAddress, staticIPName)
return nil
}
return err
}
}
l.ip = ip
return nil
}
func (l *L7) edgeHop() error {
if err := l.checkUrlMap(l.glbcDefaultBackend); err != nil {
return err
}
if l.runtimeInfo.AllowHTTP {
if err := l.edgeHopHttp(); err != nil {
return err
}
}
// Defer promoting an ephemeral IP to a static IP until it's really needed.
if l.runtimeInfo.AllowHTTP && l.runtimeInfo.TLS != nil {
if err := l.checkStaticIP(); err != nil {
return err
}
}
if l.runtimeInfo.TLS != nil {
if err := l.edgeHopHttps(); err != nil {
return err
}
}
return nil
}
func (l *L7) edgeHopHttp() error {
if err := l.checkProxy(); err != nil {
return err
}
if err := l.checkHttpForwardingRule(); err != nil {
return err
}
return nil
}
func (l *L7) edgeHopHttps() error {
if err := l.checkSSLCert(); err != nil {
return err
}
if err := l.checkHttpsProxy(); err != nil {
return err
}
if err := l.checkHttpsForwardingRule(); err != nil {
return err
}
return nil
}
// GetIP returns the ip associated with the forwarding rule for this l7.
func (l *L7) GetIP() string {
if l.fw != nil {
return l.fw.IPAddress
}
if l.fws != nil {
return l.fws.IPAddress
}
return ""
}
// getNameForPathMatcher returns a name for a pathMatcher based on the given host rule.
// The host rule can be a regex; the path matcher name used to associate the two cannot.
func getNameForPathMatcher(hostRule string) string {
hasher := md5.New()
hasher.Write([]byte(hostRule))
return fmt.Sprintf("%v%v", hostRulePrefix, hex.EncodeToString(hasher.Sum(nil)))
}
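// Illustrative sketch, not part of the original source: for a host rule of
// "foo.example.com" the generated name is "host" followed by the 32 hex
// characters of md5("foo.example.com"), which is a valid GCE resource name
// even when the host rule itself contains regex characters.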
// UpdateUrlMap translates the given hostname: endpoint->port mapping into a gce url map.
//
// HostRule: Conceptually contains all PathRules for a given host.
// PathMatcher: Associates a path rule with a host rule. Mostly an optimization.
// PathRule: Maps a single path regex to a backend.
//
// The GCE url map allows multiple hosts to share url->backend mappings without duplication, eg:
// Host: foo(PathMatcher1), bar(PathMatcher1,2)
// PathMatcher1:
// /a -> b1
// /b -> b2
// PathMatcher2:
// /c -> b1
// This leads to a lot of complexity in the common case, where all we want is a mapping of
// host->{/path: backend}.
//
// Consider some alternatives:
// 1. Using a single backend per PathMatcher:
// Host: foo(PathMatcher1,3) bar(PathMatcher1,2,3)
// PathMatcher1:
// /a -> b1
// PathMatcher2:
// /c -> b1
// PathMatcher3:
// /b -> b2
// 2. Using a single host per PathMatcher:
// Host: foo(PathMatcher1)
// PathMatcher1:
// /a -> b1
// /b -> b2
// Host: bar(PathMatcher2)
// PathMatcher2:
// /a -> b1
// /b -> b2
// /c -> b1
// In the context of kubernetes services, 2 makes more sense, because we
// rarely want to lookup backends (service:nodeport). When a service is
// deleted, we need to find all host PathMatchers that have the backend
// and remove the mapping. When a new path is added to a host (happens
// more frequently than service deletion) we just need to lookup the 1
// pathmatcher of the host.
func (l *L7) UpdateUrlMap(ingressRules utils.GCEURLMap) error {
if l.um == nil {
return fmt.Errorf("Cannot add url without an urlmap.")
}
glog.V(3).Infof("Updating urlmap for l7 %v", l.Name)
// All UrlMaps must have a default backend. If the Ingress has a default
// backend, it applies to all host rules as well as to the urlmap itself.
// If it doesn't the urlmap might have a stale default, so replace it with
// glbc's default backend.
defaultBackend := ingressRules.GetDefaultBackend()
if defaultBackend != nil {
l.um.DefaultService = defaultBackend.SelfLink
} else {
l.um.DefaultService = l.glbcDefaultBackend.SelfLink
}
glog.V(3).Infof("Updating url map %+v", ingressRules)
for hostname, urlToBackend := range ingressRules {
// Find the hostrule
// Find the path matcher
// Add all given endpoint:backends to pathRules in path matcher
var hostRule *compute.HostRule
pmName := getNameForPathMatcher(hostname)
for _, hr := range l.um.HostRules {
// TODO: Hostnames must be exact match?
if hr.Hosts[0] == hostname {
hostRule = hr
break
}
}
if hostRule == nil {
// This is a new host
hostRule = &compute.HostRule{
Hosts: []string{hostname},
PathMatcher: pmName,
}
// Why not just clobber existing host rules?
// Because we can have multiple loadbalancers point to a single
// gce url map when we have IngressClaims.
l.um.HostRules = append(l.um.HostRules, hostRule)
}
var pathMatcher *compute.PathMatcher
for _, pm := range l.um.PathMatchers {
if pm.Name == hostRule.PathMatcher {
pathMatcher = pm
break
}
}
if pathMatcher == nil {
// This is a dangling or new host
pathMatcher = &compute.PathMatcher{Name: pmName}
l.um.PathMatchers = append(l.um.PathMatchers, pathMatcher)
}
pathMatcher.DefaultService = l.um.DefaultService
// TODO: Every update replaces the entire path map. This will need to
// change when we allow joining. Right now we call a single method
// to verify current == desired and add new url mappings.
pathMatcher.PathRules = []*compute.PathRule{}
// Longest prefix wins. For equal rules, first hit wins, i.e the second
// /foo rule when the first is deleted.
for expr, be := range urlToBackend {
pathMatcher.PathRules = append(
pathMatcher.PathRules, &compute.PathRule{Paths: []string{expr}, Service: be.SelfLink})
}
}
um, err := l.cloud.UpdateUrlMap(l.um)
if err != nil {
return err
}
l.um = um
return nil
}
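// Illustrative sketch, not part of the original source: the input shape
// UpdateUrlMap consumes, following alternative 2 described above (one
// PathMatcher per host). The backend links are hypothetical.
//
//	rules := utils.GCEURLMap{
//		"foo.example.com": {
//			"/foo": &compute.BackendService{SelfLink: "foosvc"},
//		},
//		"bar.example.com": {
//			"/bar": &compute.BackendService{SelfLink: "barsvc"},
//		},
//	}
//	// l.UpdateUrlMap(rules) then ensures one HostRule and one PathMatcher per
//	// host, each PathMatcher holding that host's PathRules.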
// Cleanup deletes resources specific to this l7 in the right order.
// forwarding rule -> target proxy -> url map
// This leaves backends and health checks, which are shared across loadbalancers.
func (l *L7) Cleanup() error {
if l.fw != nil {
glog.Infof("Deleting global forwarding rule %v", l.fw.Name)
if err := l.cloud.DeleteGlobalForwardingRule(l.fw.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.fw = nil
}
if l.fws != nil {
glog.Infof("Deleting global forwarding rule %v", l.fws.Name)
if err := l.cloud.DeleteGlobalForwardingRule(l.fws.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.fws = nil
}
if l.ip != nil {
glog.Infof("Deleting static IP %v(%v)", l.ip.Name, l.ip.Address)
if err := l.cloud.DeleteGlobalStaticIP(l.ip.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.ip = nil
}
if l.tps != nil {
glog.Infof("Deleting target https proxy %v", l.tps.Name)
if err := l.cloud.DeleteTargetHttpsProxy(l.tps.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.tps = nil
}
if l.sslCert != nil {
glog.Infof("Deleting sslcert %v", l.sslCert.Name)
if err := l.cloud.DeleteSslCertificate(l.sslCert.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.sslCert = nil
}
if l.tp != nil {
glog.Infof("Deleting target http proxy %v", l.tp.Name)
if err := l.cloud.DeleteTargetHttpProxy(l.tp.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.tp = nil
}
if l.um != nil {
glog.Infof("Deleting url map %v", l.um.Name)
if err := l.cloud.DeleteUrlMap(l.um.Name); err != nil {
if !utils.IsHTTPErrorCode(err, http.StatusNotFound) {
return err
}
}
l.um = nil
}
return nil
}
// getBackendNames returns the names of backends in this L7 urlmap.
func (l *L7) getBackendNames() []string {
if l.um == nil {
return []string{}
}
beNames := sets.NewString()
for _, pathMatcher := range l.um.PathMatchers {
for _, pathRule := range pathMatcher.PathRules {
// This is gross, but the urlmap only has links to backend services.
parts := strings.Split(pathRule.Service, "/")
name := parts[len(parts)-1]
if name != "" {
beNames.Insert(name)
}
}
}
// The default Service recorded in the urlMap is a link to the backend.
// Note that this can either be user specified, or the L7 controller's
// global default.
parts := strings.Split(l.um.DefaultService, "/")
defaultBackendName := parts[len(parts)-1]
if defaultBackendName != "" {
beNames.Insert(defaultBackendName)
}
return beNames.List()
}
// GetLBAnnotations returns the annotations of an l7. This includes its current status.
func GetLBAnnotations(l7 *L7, existing map[string]string, backendPool backends.BackendPool) map[string]string {
if existing == nil {
existing = map[string]string{}
}
backends := l7.getBackendNames()
backendState := map[string]string{}
for _, beName := range backends {
backendState[beName] = backendPool.Status(beName)
}
jsonBackendState := "Unknown"
b, err := json.Marshal(backendState)
if err == nil {
jsonBackendState = string(b)
}
existing[fmt.Sprintf("%v/url-map", utils.K8sAnnotationPrefix)] = l7.um.Name
// Forwarding rule and target proxy might not exist if allowHTTP == false
if l7.fw != nil {
existing[fmt.Sprintf("%v/forwarding-rule", utils.K8sAnnotationPrefix)] = l7.fw.Name
}
if l7.tp != nil {
existing[fmt.Sprintf("%v/target-proxy", utils.K8sAnnotationPrefix)] = l7.tp.Name
}
// HTTPs resources might not exist if TLS == nil
if l7.fws != nil {
existing[fmt.Sprintf("%v/https-forwarding-rule", utils.K8sAnnotationPrefix)] = l7.fws.Name
}
if l7.tps != nil {
existing[fmt.Sprintf("%v/https-target-proxy", utils.K8sAnnotationPrefix)] = l7.tps.Name
}
if l7.ip != nil {
existing[fmt.Sprintf("%v/static-ip", utils.K8sAnnotationPrefix)] = l7.ip.Name
}
// TODO: We really want to know *when* a backend flipped states.
existing[fmt.Sprintf("%v/backends", utils.K8sAnnotationPrefix)] = jsonBackendState
return existing
}

View file

@ -0,0 +1,189 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package loadbalancers
import (
"testing"
compute "google.golang.org/api/compute/v1"
"k8s.io/contrib/ingress/controllers/gce/backends"
"k8s.io/contrib/ingress/controllers/gce/healthchecks"
"k8s.io/contrib/ingress/controllers/gce/instances"
"k8s.io/contrib/ingress/controllers/gce/utils"
"k8s.io/kubernetes/pkg/util/sets"
)
const (
testDefaultBeNodePort = int64(3000)
defaultZone = "default-zone"
)
func newFakeLoadBalancerPool(f LoadBalancers, t *testing.T) LoadBalancerPool {
fakeBackends := backends.NewFakeBackendServices()
fakeIGs := instances.NewFakeInstanceGroups(sets.NewString())
fakeHCs := healthchecks.NewFakeHealthChecks()
namer := utils.Namer{}
healthChecker := healthchecks.NewHealthChecker(fakeHCs, "/", namer)
backendPool := backends.NewBackendPool(
fakeBackends, healthChecker, instances.NewNodePool(fakeIGs, defaultZone), namer)
return NewLoadBalancerPool(f, backendPool, testDefaultBeNodePort, namer)
}
func TestCreateHTTPLoadBalancer(t *testing.T) {
// This should NOT create the forwarding rule and target proxy
// associated with the HTTPS branch of this loadbalancer.
lbInfo := &L7RuntimeInfo{Name: "test", AllowHTTP: true}
f := NewFakeLoadBalancers(lbInfo.Name)
pool := newFakeLoadBalancerPool(f, t)
pool.Add(lbInfo)
l7, err := pool.Get(lbInfo.Name)
if err != nil || l7 == nil {
t.Fatalf("Expected l7 not created")
}
um, err := f.GetUrlMap(f.umName())
if err != nil ||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
t.Fatalf("%v", err)
}
tp, err := f.GetTargetHttpProxy(f.tpName(false))
if err != nil || tp.UrlMap != um.SelfLink {
t.Fatalf("%v", err)
}
fw, err := f.GetGlobalForwardingRule(f.fwName(false))
if err != nil || fw.Target != tp.SelfLink {
t.Fatalf("%v", err)
}
}
func TestCreateHTTPSLoadBalancer(t *testing.T) {
// This should NOT create the forwarding rule and target proxy
// associated with the HTTP branch of this loadbalancer.
lbInfo := &L7RuntimeInfo{
Name: "test",
AllowHTTP: false,
TLS: &TLSCerts{Key: "key", Cert: "cert"},
}
f := NewFakeLoadBalancers(lbInfo.Name)
pool := newFakeLoadBalancerPool(f, t)
pool.Add(lbInfo)
l7, err := pool.Get(lbInfo.Name)
if err != nil || l7 == nil {
t.Fatalf("Expected l7 not created")
}
um, err := f.GetUrlMap(f.umName())
if err != nil ||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
t.Fatalf("%v", err)
}
tps, err := f.GetTargetHttpsProxy(f.tpName(true))
if err != nil || tps.UrlMap != um.SelfLink {
t.Fatalf("%v", err)
}
fws, err := f.GetGlobalForwardingRule(f.fwName(true))
if err != nil || fws.Target != tps.SelfLink {
t.Fatalf("%v", err)
}
}
func TestCreateBothLoadBalancers(t *testing.T) {
// This should create 2 forwarding rules and target proxies
// but they should use the same urlmap, and have the same
// static ip.
lbInfo := &L7RuntimeInfo{
Name: "test",
AllowHTTP: true,
TLS: &TLSCerts{Key: "key", Cert: "cert"},
}
f := NewFakeLoadBalancers(lbInfo.Name)
pool := newFakeLoadBalancerPool(f, t)
pool.Add(lbInfo)
l7, err := pool.Get(lbInfo.Name)
if err != nil || l7 == nil {
t.Fatalf("Expected l7 not created")
}
um, err := f.GetUrlMap(f.umName())
if err != nil ||
um.DefaultService != pool.(*L7s).glbcDefaultBackend.SelfLink {
t.Fatalf("%v", err)
}
tps, err := f.GetTargetHttpsProxy(f.tpName(true))
if err != nil || tps.UrlMap != um.SelfLink {
t.Fatalf("%v", err)
}
tp, err := f.GetTargetHttpProxy(f.tpName(false))
if err != nil || tp.UrlMap != um.SelfLink {
t.Fatalf("%v", err)
}
fws, err := f.GetGlobalForwardingRule(f.fwName(true))
if err != nil || fws.Target != tps.SelfLink {
t.Fatalf("%v", err)
}
fw, err := f.GetGlobalForwardingRule(f.fwName(false))
if err != nil || fw.Target != tp.SelfLink {
t.Fatalf("%v", err)
}
ip, err := f.GetGlobalStaticIP(f.fwName(false))
if err != nil || ip.Address != fw.IPAddress || ip.Address != fws.IPAddress {
t.Fatalf("%v", err)
}
}
func TestUpdateUrlMap(t *testing.T) {
um1 := utils.GCEURLMap{
"bar.example.com": {
"/bar2": &compute.BackendService{SelfLink: "bar2svc"},
},
}
um2 := utils.GCEURLMap{
"foo.example.com": {
"/foo1": &compute.BackendService{SelfLink: "foo1svc"},
"/foo2": &compute.BackendService{SelfLink: "foo2svc"},
},
"bar.example.com": {
"/bar1": &compute.BackendService{SelfLink: "bar1svc"},
},
}
um2.PutDefaultBackend(&compute.BackendService{SelfLink: "default"})
lbInfo := &L7RuntimeInfo{Name: "test", AllowHTTP: true}
f := NewFakeLoadBalancers(lbInfo.Name)
pool := newFakeLoadBalancerPool(f, t)
pool.Add(lbInfo)
l7, err := pool.Get(lbInfo.Name)
if err != nil {
t.Fatalf("%v", err)
}
for _, ir := range []utils.GCEURLMap{um1, um2} {
if err := l7.UpdateUrlMap(ir); err != nil {
t.Fatalf("%v", err)
}
}
// The final map doesn't contain /bar2
expectedMap := map[string]utils.FakeIngressRuleValueMap{
utils.DefaultBackendKey: {
utils.DefaultBackendKey: "default",
},
"foo.example.com": {
"/foo1": "foo1svc",
"/foo2": "foo2svc",
},
"bar.example.com": {
"/bar1": "bar1svc",
},
}
f.CheckURLMap(t, l7, expectedMap)
}

245
controllers/gce/main.go Normal file
View file

@ -0,0 +1,245 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
go_flag "flag"
"fmt"
"net/http"
"os"
"os/signal"
"strings"
"syscall"
"time"
flag "github.com/spf13/pflag"
"k8s.io/contrib/ingress/controllers/gce/controller"
"k8s.io/kubernetes/pkg/api"
"k8s.io/kubernetes/pkg/api/unversioned"
client "k8s.io/kubernetes/pkg/client/unversioned"
kubectl_util "k8s.io/kubernetes/pkg/kubectl/cmd/util"
"k8s.io/kubernetes/pkg/util/wait"
"github.com/golang/glog"
)
// Entrypoint of GLBC. Example invocation:
// 1. In a pod:
// glbc --delete-all-on-quit
// 2. Dry run (on localhost):
// $ kubectl proxy --api-prefix="/"
// $ glbc --proxy="http://localhost:proxyport"
const (
// lbApiPort is the port on which the loadbalancer controller serves a
// minimal api (/healthz, /delete-all-and-quit etc).
lbApiPort = 8081
// A delimiter used for clarity in naming GCE resources.
clusterNameDelimiter = "--"
// Arbitrarily chosen alphanumeric character to use in constructing resource
// names, eg: to avoid cases where we end up with a name ending in '-'.
alphaNumericChar = "0"
// Current docker image version. Only used in debug logging.
imageVersion = "glbc:0.6.0"
)
var (
flags = flag.NewFlagSet(
`glbc: glbc --running-in-cluster=false --default-backend-node-port=123`,
flag.ExitOnError)
proxyUrl = flags.String("proxy", "",
`If specified, the controller assumes a kubectl proxy server is running on the
given url and creates a proxy client and fake cluster manager. Results are
printed to stdout and no changes are made to your cluster. This flag is for
testing.`)
clusterName = flags.String("cluster-uid", controller.DefaultClusterUID,
`Optional, used to tag cluster wide, shared loadbalancer resources such
as instance groups. Use this flag if you'd like to continue using the
same resources across a pod restart. Note that this does not need to
match the name of your Kubernetes cluster; it's just an arbitrary name
used to tag/lookup cloud resources.`)
inCluster = flags.Bool("running-in-cluster", true,
`Optional, if this controller is running in a kubernetes cluster, use the
pod secrets for creating a Kubernetes client.`)
resyncPeriod = flags.Duration("sync-period", 30*time.Second,
`Relist and confirm cloud resources this often.`)
deleteAllOnQuit = flags.Bool("delete-all-on-quit", false,
`If true, the controller will delete all Ingress and the associated
external cloud resources as it's shutting down. Mostly used for
testing. In normal environments the controller should only delete
a loadbalancer if the associated Ingress is deleted.`)
defaultSvc = flags.String("default-backend-service", "kube-system/default-http-backend",
`Service used to serve a 404 page for the default backend. Takes the form
namespace/name. The controller uses the first node port of this Service for
the default backend.`)
healthCheckPath = flags.String("health-check-path", "/",
`Path used to health-check a backend service. All Services must serve
a 200 page on this path. Currently this is only configurable globally.`)
watchNamespace = flags.String("watch-namespace", api.NamespaceAll,
`Namespace to watch for Ingress/Services/Endpoints.`)
verbose = flags.Bool("verbose", false,
`If true, logs are displayed at V(4), otherwise V(2).`)
)
func registerHandlers(lbc *controller.LoadBalancerController) {
http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
if err := lbc.CloudClusterManager.IsHealthy(); err != nil {
w.WriteHeader(500)
w.Write([]byte(fmt.Sprintf("Cluster unhealthy: %v", err)))
return
}
w.WriteHeader(200)
w.Write([]byte("ok"))
})
http.HandleFunc("/delete-all-and-quit", func(w http.ResponseWriter, r *http.Request) {
// TODO: Retry failures during shutdown.
lbc.Stop(true)
})
glog.Fatal(http.ListenAndServe(fmt.Sprintf(":%v", lbApiPort), nil))
}
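// Illustrative sketch, not part of the original source: the handlers above can
// be exercised from inside the pod, e.g.
//	curl http://localhost:8081/healthz
//	curl http://localhost:8081/delete-all-and-quit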
func handleSigterm(lbc *controller.LoadBalancerController, deleteAll bool) {
// Multiple SIGTERMs will get dropped
signalChan := make(chan os.Signal, 1)
signal.Notify(signalChan, syscall.SIGTERM)
<-signalChan
glog.Infof("Received SIGTERM, shutting down")
// TODO: Better retries than relying on restartPolicy.
exitCode := 0
if err := lbc.Stop(deleteAll); err != nil {
glog.Infof("Error during shutdown %v", err)
exitCode = 1
}
glog.Infof("Exiting with %v", exitCode)
os.Exit(exitCode)
}
// main function for GLBC.
func main() {
// TODO: Add a healthz endpoint
var kubeClient *client.Client
var err error
var clusterManager *controller.ClusterManager
flags.Parse(os.Args)
clientConfig := kubectl_util.DefaultClientConfig(flags)
// Set glog verbosity levels
if *verbose {
go_flag.Lookup("logtostderr").Value.Set("true")
go_flag.Set("v", "4")
}
glog.Infof("Starting GLBC image: %v", imageVersion)
if *defaultSvc == "" {
glog.Fatalf("Please specify --default-backend")
}
if *proxyUrl != "" {
// Create proxy kubeclient
kubeClient = client.NewOrDie(&client.Config{
Host: *proxyUrl,
ContentConfig: client.ContentConfig{GroupVersion: &unversioned.GroupVersion{Version: "v1"}},
})
} else {
// Create kubeclient
if *inCluster {
if kubeClient, err = client.NewInCluster(); err != nil {
glog.Fatalf("Failed to create client: %v.", err)
}
} else {
config, err := clientConfig.ClientConfig()
if err != nil {
glog.Fatalf("error connecting to the client: %v", err)
}
kubeClient, err = client.New(config)
if err != nil {
glog.Fatalf("Failed to create client: %v.", err)
}
}
}
// Wait for the default backend Service. There's no pretty way to do this.
parts := strings.Split(*defaultSvc, "/")
if len(parts) != 2 {
glog.Fatalf("Default backend should take the form namespace/name: %v",
*defaultSvc)
}
defaultBackendNodePort, err := getNodePort(kubeClient, parts[0], parts[1])
if err != nil {
glog.Fatalf("Could not configure default backend %v: %v",
*defaultSvc, err)
}
if *proxyUrl == "" && *inCluster {
// Create cluster manager
clusterManager, err = controller.NewClusterManager(
*clusterName, defaultBackendNodePort, *healthCheckPath)
if err != nil {
glog.Fatalf("%v", err)
}
} else {
// Create fake cluster manager
clusterManager = controller.NewFakeClusterManager(*clusterName).ClusterManager
}
// Start loadbalancer controller
lbc, err := controller.NewLoadBalancerController(kubeClient, clusterManager, *resyncPeriod, *watchNamespace)
if err != nil {
glog.Fatalf("%v", err)
}
if clusterManager.ClusterNamer.ClusterName != "" {
glog.V(3).Infof("Cluster name %+v", clusterManager.ClusterNamer.ClusterName)
}
go registerHandlers(lbc)
go handleSigterm(lbc, *deleteAllOnQuit)
lbc.Run()
for {
glog.Infof("Handled quit, awaiting pod deletion.")
time.Sleep(30 * time.Second)
}
}
// getNodePort waits for the Service, and returns its first node port.
func getNodePort(client *client.Client, ns, name string) (nodePort int64, err error) {
var svc *api.Service
glog.V(3).Infof("Waiting for %v/%v", ns, name)
wait.Poll(1*time.Second, 5*time.Minute, func() (bool, error) {
svc, err = client.Services(ns).Get(name)
if err != nil {
return false, nil
}
for _, p := range svc.Spec.Ports {
if p.NodePort != 0 {
nodePort = int64(p.NodePort)
glog.V(3).Infof("Node port %v", nodePort)
break
}
}
return true, nil
})
return
}

82
controllers/gce/rc.yaml Normal file
View file

@ -0,0 +1,82 @@
apiVersion: v1
kind: Service
metadata:
# This must match the --default-backend-service argument of the l7 lb
# controller and is required because GCE mandates a default backend.
name: default-http-backend
labels:
k8s-app: glbc
spec:
# The default backend must be of type NodePort.
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
name: http
selector:
k8s-app: glbc
---
apiVersion: v1
kind: ReplicationController
metadata:
name: l7-lb-controller
labels:
k8s-app: glbc
version: v0.5.2
spec:
# There should never be more than 1 controller alive simultaneously.
replicas: 1
selector:
k8s-app: glbc
version: v0.5.2
template:
metadata:
labels:
k8s-app: glbc
version: v0.5.2
name: glbc
spec:
terminationGracePeriodSeconds: 600
containers:
- name: default-http-backend
# Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
requests:
cpu: 10m
memory: 20Mi
- image: gcr.io/google_containers/glbc:0.5.2
livenessProbe:
httpGet:
path: /healthz
port: 8081
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
name: l7-lb-controller
resources:
limits:
cpu: 100m
memory: 100Mi
requests:
cpu: 100m
memory: 50Mi
args:
- --default-backend-service=default/default-http-backend
- --sync-period=300s

View file

@ -0,0 +1,30 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Storage backends used by the Ingress controller.
// Ingress controllers require their own storage for the following reasons:
// 1. There is only so much information we can pack into 64 chars allowed
// by GCE for resource names.
// 2. An Ingress controller cannot assume total control over a project, in
// fact in a majority of cases (ubernetes, tests, multiple gke clusters in
// same project) there *will* be multiple controllers in a project.
// 3. If the Ingress controller pod is killed, an Ingress is deleted while
// the pod is down, and then the controller is re-scheduled on another node,
// it will leak resources. Note that this will happen today because
// the only implemented storage backend is InMemoryPool.
// 4. Listing from cloudproviders is really slow.
package storage

View file

@ -0,0 +1,53 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package storage
import (
"k8s.io/kubernetes/pkg/client/cache"
)
// Snapshotter is an interface capable of providing a consistent snapshot of
// the underlying storage implementation of a pool. It does not guarantee
// thread safety of snapshots, so they should be treated as read only unless
// the implementation specifies otherwise.
type Snapshotter interface {
Snapshot() map[string]interface{}
cache.ThreadSafeStore
}
// InMemoryPool is used as a cache for cluster resource pools.
type InMemoryPool struct {
cache.ThreadSafeStore
}
// Snapshot returns a read only copy of the k:v pairs in the store.
// Caller beware: Violates traditional snapshot guarantees.
func (p *InMemoryPool) Snapshot() map[string]interface{} {
snap := map[string]interface{}{}
for _, key := range p.ListKeys() {
if item, ok := p.Get(key); ok {
snap[key] = item
}
}
return snap
}
// NewInMemoryPool creates an InMemoryPool.
func NewInMemoryPool() *InMemoryPool {
return &InMemoryPool{
cache.NewThreadSafeStore(cache.Indexers{}, cache.Indices{})}
}
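// exampleSnapshot is an illustrative sketch, not part of the original source:
// items are added through the embedded ThreadSafeStore and read back via
// Snapshot. The key and value are hypothetical.
func exampleSnapshot() map[string]interface{} {
	pool := NewInMemoryPool()
	pool.Add("k8s-um-example", "some cached cloud resource")
	return pool.Snapshot()
}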

View file

@ -0,0 +1,21 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// utils contains odd structs, constants etc that don't fit cleanly into any
// sub-module because they're shared. Ideally this module wouldn't exist, but
// sharing these odd bits reduces margin for error.
package utils

View file

@ -0,0 +1,178 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package utils
import (
"fmt"
"strings"
compute "google.golang.org/api/compute/v1"
"google.golang.org/api/googleapi"
)
const (
// Add used to record additions in a sync pool.
Add = iota
// Remove used to record removals from a sync pool.
Remove
// Sync used to record syncs of a sync pool.
Sync
// Get used to record Get from a sync pool.
Get
// Create used to record creations in a sync pool.
Create
// Update used to record updates in a sync pool.
Update
// Delete used to record deletions from a sync pool.
Delete
// AddInstances used to record a call to AddInstances.
AddInstances
// RemoveInstances used to record a call to RemoveInstances.
RemoveInstances
// This allows sharing of backends across loadbalancers.
backendPrefix = "k8s-be"
// Prefix used for instance groups involved in L7 balancing.
igPrefix = "k8s-ig"
// A delimiter used for clarity in naming GCE resources.
clusterNameDelimiter = "--"
// Arbitrarily chosen alphanumeric character to use in constructing resource
// names, eg: to avoid cases where we end up with a name ending in '-'.
alphaNumericChar = "0"
// Names longer than this are truncated, because of GCE restrictions.
nameLenLimit = 62
// DefaultBackendKey is the key used to transmit the defaultBackend through
// a urlmap. It's not a valid subdomain, and it is a catch all path.
// TODO: Find a better way to transmit this, once we've decided on default
// backend semantics (i.e do we want a default per host, per lb etc).
DefaultBackendKey = "DefaultBackend"
// K8sAnnotationPrefix is the prefix used in annotations used to record
// debug information in the Ingress annotations.
K8sAnnotationPrefix = "ingress.kubernetes.io"
)
// Namer handles centralized naming for the cluster.
type Namer struct {
ClusterName string
}
// Truncate truncates the given key to a GCE length limit.
func (n *Namer) Truncate(key string) string {
if len(key) > nameLenLimit {
// GCE requires names to end with an alphanumeric, but allows characters
// like '-', so make sure the truncated name ends legally.
return fmt.Sprintf("%v%v", key[:nameLenLimit], alphaNumericChar)
}
return key
}
func (n *Namer) decorateName(name string) string {
if n.ClusterName == "" {
return name
}
return n.Truncate(fmt.Sprintf("%v%v%v", name, clusterNameDelimiter, n.ClusterName))
}
// BeName constructs the name for a backend.
func (n *Namer) BeName(port int64) string {
return n.decorateName(fmt.Sprintf("%v-%d", backendPrefix, port))
}
// IGName constructs the name for an Instance Group.
func (n *Namer) IGName() string {
// Currently all ports are added to a single instance group.
return n.decorateName(igPrefix)
}
// LBName constructs a loadbalancer name from the given key. The key is usually
// the namespace/name of a Kubernetes Ingress.
func (n *Namer) LBName(key string) string {
// TODO: Pipe the clusterName through, for now it saves code churn to just
// grab it globally, especially since we haven't decided how to handle
// namespace conflicts in the Ubernetes context.
parts := strings.Split(key, clusterNameDelimiter)
scrubbedName := strings.Replace(key, "/", "-", -1)
if n.ClusterName == "" || parts[len(parts)-1] == n.ClusterName {
return scrubbedName
}
return n.Truncate(fmt.Sprintf("%v%v%v", scrubbedName, clusterNameDelimiter, n.ClusterName))
}
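// Illustrative sketch, not part of the original source: names produced by a
// Namer with and without a cluster UID ("uid1" is hypothetical).
//
//	(&Namer{}).BeName(3000)                                     // "k8s-be-3000"
//	(&Namer{ClusterName: "uid1"}).BeName(3000)                  // "k8s-be-3000--uid1"
//	(&Namer{ClusterName: "uid1"}).LBName("default/my-ingress")  // "default-my-ingress--uid1"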
// GCEURLMap is a nested map of hostname->path regex->backend
type GCEURLMap map[string]map[string]*compute.BackendService
// GetDefaultBackend performs a destructive read and returns the default
// backend of the urlmap.
func (g GCEURLMap) GetDefaultBackend() *compute.BackendService {
var d *compute.BackendService
var exists bool
if h, ok := g[DefaultBackendKey]; ok {
if d, exists = h[DefaultBackendKey]; exists {
delete(h, DefaultBackendKey)
}
delete(g, DefaultBackendKey)
}
return d
}
// String implements the fmt.Stringer interface for GCEURLMap.
func (g GCEURLMap) String() string {
msg := ""
for host, um := range g {
msg += fmt.Sprintf("%v\n", host)
for url, be := range um {
msg += fmt.Sprintf("\t%v: ", url)
if be == nil {
msg += fmt.Sprintf("No backend\n")
} else {
msg += fmt.Sprintf("%v\n", be.Name)
}
}
}
return msg
}
// PutDefaultBackend performs a destructive write replacing the
// default backend of the url map with the given backend.
func (g GCEURLMap) PutDefaultBackend(d *compute.BackendService) {
g[DefaultBackendKey] = map[string]*compute.BackendService{
DefaultBackendKey: d,
}
}
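// exampleDefaultBackend is an illustrative sketch, not part of the original
// source: building a GCEURLMap and round-tripping its default backend. The
// backend links are hypothetical.
func exampleDefaultBackend() *compute.BackendService {
	m := GCEURLMap{
		"foo.example.com": {
			"/foo": &compute.BackendService{SelfLink: "foosvc"},
		},
	}
	m.PutDefaultBackend(&compute.BackendService{SelfLink: "defaultsvc"})
	// GetDefaultBackend is destructive: the entry is removed as it is read.
	return m.GetDefaultBackend()
}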
// IsHTTPErrorCode checks if the given error matches the given HTTP Error code.
// For this to work the error must be a googleapi Error.
func IsHTTPErrorCode(err error, code int) bool {
apiErr, ok := err.(*googleapi.Error)
return ok && apiErr.Code == code
}
// CompareLinks returns true if the 2 self links are equal.
func CompareLinks(l1, l2 string) bool {
// TODO: These can be partial links
return l1 == l2 && l1 != ""
}
// FakeIngressRuleValueMap is a convenience type used by multiple submodules
// that share the same testing methods.
type FakeIngressRuleValueMap map[string]string