Initial upload

parent fab682a18f
commit dd928c140a
103 changed files with 31 additions and 14959 deletions
@@ -1,126 +0,0 @@
# Local Backup with Velero and Minio

This example is adapted from the original icpbuilder stack.

Two significant changes were made from the original:

* disabled the `hostPath` mount to persist backups within kind, since backups do not work sufficiently in this example due to PVC issues, see below.
* renamed the `minio` namespace to `minio-backup` so it does not collide with other minio examples.

Within kind, it can only back up Kubernetes objects. Data from PVCs is skipped; see below for why.

[Velero](https://velero.io/) requires a compatible storage provider as its backup target. This local installation uses [MinIO](https://min.io/) as an example.
MinIO is not officially supported by Velero but works due to its S3 compatibility.

The current setup does NOT persist backups but stores them in MinIO's PVCs. For proper backups, configure external storage; see [Supported Providers](https://velero.io/docs/main/supported-providers/).

## Installation

The stack is installed as part of the `./example.sh` run.

In order to persist a local backup you have to mount a local directory within `main.go`:

```yaml
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /some/path/backup # replace with your own path
        containerPath: /backup
```

Kind creates the directory on the host, but you might have to adjust the permissions; otherwise the minio pod fails to start.

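A minimal sketch of preparing that directory up front, assuming the path from the kind config above and that world-writable permissions are acceptable for a throwaway local setup:

```shell
# create the backup directory and relax its permissions so the
# non-root minio container can write to it; 0777 is coarse but
# fine for a disposable local example
BACKUP_DIR=/some/path/backup   # same path as the hostPath in the kind config
mkdir -p "$BACKUP_DIR"
chmod 0777 "$BACKUP_DIR"
```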
## Using it

After the installation, velero and minio should be visible in ArgoCD.

During the installation, credentials for minio are generated and shared with velero. You can access them manually:

```bash
kubectl -n minio-backup get secret root-creds -o go-template='{{ range $key, $value := .data }}{{ printf "%s: %s\n" $key ($value | base64decode) }}{{ end }}'
# example output
# rootPassword: aKKZzLnyry6OYZts17vMTf32H5ghFL4WYgu6bHujm
# rootUser: ge8019yksArb7BICt3MLY9
```

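The `secret-sync` job shown further down decodes the same fields with `jq`'s `@base64d` filter. A self-contained sketch of that filter, with the secret JSON inlined instead of fetched via `kubectl`:

```shell
# decode one field of a secret the way the secret-sync job does;
# the JSON is inlined here for illustration (normally it comes from
# `kubectl get secret ... -o json`)
SECRET_JSON='{"data":{"rootUser":"Z2U4MDE5eWtzQXJiN0JJQ3QzTUxZOQ=="}}'
echo "$SECRET_JSON" | jq -r '.data.rootUser | @base64d'
# prints: ge8019yksArb7BICt3MLY9
```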
A bucket was created in minio, and velero uses it for its backups by default; see the helm `values.yaml` files.

### Backup and Restore

Backups and subsequent restores can be scheduled either by using the velero CLI or by creating CRD objects.

Check the `./demo` directory for equivalent CRD manifests.

Create a backup of the backstage namespace; see the `schedule` task for more permanent setups:

```shell
velero backup create backstage-backup --include-namespaces backstage
```

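The `schedule` task mentioned above can also be expressed as a CRD, in the same style as the `./demo` manifests. A sketch, where the name and cron expression are assumptions to adjust:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: backstage-daily        # placeholder name
  namespace: velero
spec:
  schedule: '0 3 * * *'        # every day at 03:00, adjust as needed
  template:                    # same fields as a Backup spec
    includedNamespaces:
      - 'backstage'
```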
There are more options to create a fine-granular backup and to set the backup storage.
See velero's docs for details.

Check the backup with:

```shell
velero backup get
```

To get more details on the backup you need to be able to connect to velero's backup storage, i.e. minio.
Using `kubefwd` here helps a lot (this is not necessary for restore).

```shell
kubefwd services -n minio-backup
```

More details with `describe` and `logs`:

```shell
velero backup describe backstage-backup --details
velero backup logs backstage-backup
```

Restore the backup into the original namespace; you might want to delete the existing namespace beforehand:

```shell
kubectl delete namespace backstage
velero restore create --from-backup backstage-backup
```


When restoring, velero does not replace existing objects in the backup target.

ArgoCD picks up on the changes and also validates that the backup is in sync.

## Issues with Persistent Volumes

Velero has no issue backing up Kubernetes objects like Deployments, ConfigMaps, etc., since they are just yaml/json definitions.
Volumes containing data are, however, more complex. The preferred type of backup is Kubernetes' VolumeSnapshots, as they consistently store the state
of a volume at a given point in time in an atomic action. Those snapshots live within the cluster and are subsequently downloaded into one of velero's
storage backends for safekeeping.

However, VolumeSnapshots are only possible on storage backends that support them via CSI drivers.
Backends like `nfs` or `hostPath` do NOT support them. Here, velero uses an alternative method
called [File System Backups](https://velero.io/docs/main/file-system-backup/).
In essence, this is a simple copy operation based on the file system. Even though
this uses more sophisticated tooling under the hood, i.e. kopia, it is not
possible to create a backup in an atomic transaction. Thus, the resulting backup
might be inconsistent.

Furthermore, for file system backups to work, velero installs a node-agent as a
DaemonSet on each Kubernetes node. The agent is aware of the node's internal
storage and accesses the directories on the host directly to copy the files.
This is not supported for hostPath volumes as they mount an arbitrary path
on the host. In theory, a backup is possible, but due to extra config and security
considerations it is intentionally skipped. Kind's local-path provisioner uses
a hostPath and is thus not supported for any kind of backup.

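Where file system backups do apply, velero's documented opt-in annotation `backup.velero.io/backup-volumes` selects which pod volumes to copy. A sketch with placeholder pod, image, and volume names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod                          # placeholder
  annotations:
    backup.velero.io/backup-volumes: data    # comma-separated volume names to back up
spec:
  containers:
    - name: app
      image: nginx                           # placeholder
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: data-pvc                  # placeholder
```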
## TODOs

* The MinIO backup installation is only intended as an example and must either
  be configured properly or replaced.

* The current example does not automatically schedule backups.

* The velero chart must be properly parameterized.

@@ -1,9 +0,0 @@
# velero backup create backstage-backup --include-namespaces backstage
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: backstage-backup
  namespace: velero
spec:
  includedNamespaces:
    - 'backstage'

@@ -1,10 +0,0 @@
# velero restore create --from-backup backstage-backup
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: backstage-backup
  namespace: velero
spec:
  backupName: backstage-backup
  includedNamespaces:
    - 'backstage'

@@ -1,33 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: minio
  namespace: argocd
  labels:
    env: dev
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  sources:
    - repoURL: "https://charts.min.io"
      targetRevision: 5.0.15
      helm:
        releaseName: minio
        valueFiles:
          - $values/otc/edp.buildth.ing/stacks/local-backup/minio/helm/values.yaml
      chart: minio
    - repoURL: https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW-CICD/stacks-instances
      targetRevision: HEAD
      ref: values
    - repoURL: https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW-CICD/stacks-instances
      targetRevision: HEAD
      path: "otc/edp.buildth.ing/stacks/local-backup/minio/manifests"
  destination:
    server: "https://kubernetes.default.svc"
    namespace: minio-backup
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      selfHeal: true

@@ -1,17 +0,0 @@
replicas: 1
mode: standalone

resources:
  requests:
    memory: 128Mi

persistence:
  enabled: true
  storageClass: standard
  size: 512Mi
  # volumeName: backup # re-enable this to mount a local host path, see minio-pv.yaml

buckets:
  - name: edfbuilder-backups

existingSecret: root-creds

@@ -1,13 +0,0 @@
# re-enable this config to mount a local host path, see `../helm/values.yaml`
# apiVersion: v1
# kind: PersistentVolume
# metadata:
#   name: backup
# spec:
#   storageClassName: standard
#   accessModes:
#     - ReadWriteOnce
#   capacity:
#     storage: 512Mi
#   hostPath:
#     path: /backup

@@ -1,154 +0,0 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: secret-sync
  namespace: minio-backup
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "-20"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-sync
  namespace: minio-backup
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "-20"
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-sync
  namespace: minio-backup
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "-20"
subjects:
  - kind: ServiceAccount
    name: secret-sync
    namespace: minio-backup
roleRef:
  kind: Role
  name: secret-sync
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-sync
  namespace: velero
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "-20"
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-sync
  namespace: velero
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "-20"
subjects:
  - kind: ServiceAccount
    name: secret-sync
    namespace: minio-backup
roleRef:
  kind: Role
  name: secret-sync
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: secret-sync
  namespace: minio-backup
  annotations:
    argocd.argoproj.io/hook: PostSync
spec:
  template:
    metadata:
      generateName: secret-sync
    spec:
      serviceAccountName: secret-sync
      restartPolicy: Never
      containers:
        - name: kubectl
          image: docker.io/bitnami/kubectl
          command: ["/bin/bash", "-c"]
          args:
            - |
              set -e
              kubectl get secrets -n minio-backup root-creds -o json > /tmp/secret
              ACCESS=$(jq -r '.data.rootUser | @base64d' /tmp/secret)
              SECRET=$(jq -r '.data.rootPassword | @base64d' /tmp/secret)

              echo \
              "apiVersion: v1
              kind: Secret
              metadata:
                name: secret-key
                namespace: velero
              type: Opaque
              stringData:
                aws: |
                  [default]
                  aws_access_key_id=${ACCESS}
                  aws_secret_access_key=${SECRET}
              " > /tmp/secret.yaml

              kubectl apply -f /tmp/secret.yaml
---
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-root-creds
  namespace: minio-backup
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/sync-wave: "-10"
spec:
  template:
    metadata:
      generateName: minio-root-creds
    spec:
      serviceAccountName: secret-sync
      restartPolicy: Never
      containers:
        - name: kubectl
          image: docker.io/bitnami/kubectl
          command: ["/bin/bash", "-c"]
          args:
            - |
              kubectl get secrets -n minio-backup root-creds
              if [ $? -eq 0 ]; then
                exit 0
              fi

              set -e

              NAME=$(openssl rand -base64 24)
              PASS=$(openssl rand -base64 36)

              echo \
              "apiVersion: v1
              kind: Secret
              metadata:
                name: root-creds
                namespace: minio-backup
              type: Opaque
              stringData:
                rootUser: "${NAME}"
                rootPassword: "${PASS}"
              " > /tmp/secret.yaml

              kubectl apply -f /tmp/secret.yaml

@@ -1,31 +0,0 @@
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: velero
  namespace: argocd
  labels:
    env: dev
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  sources:
    - repoURL: "https://vmware-tanzu.github.io/helm-charts"
      targetRevision: 8.0.0
      helm:
        releaseName: velero
        valueFiles:
          - $values/otc/edp.buildth.ing/stacks/local-backup/velero/helm/values.yaml
      chart: velero
    - repoURL: https://forgejo.edf-bootstrap.cx.fg1.ffm.osc.live/DevFW-CICD/stacks-instances
      targetRevision: HEAD
      ref: values
  destination:
    server: "https://kubernetes.default.svc"
    namespace: velero
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true

@@ -1,25 +0,0 @@
resources:
  requests:
    memory: 128Mi
initContainers:
  - name: velero-plugin-for-aws
    image: velero/velero-plugin-for-aws:v1.11.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
      - mountPath: /target
        name: plugins
# snapshotsEnabled: false # create snapshot crd?
# deployNodeAgent: true # install node agent as daemonset for file system backups?
configuration:
  # defaultVolumesToFsBackup: true # backup pod volumes via fsb without explicit annotation?
  backupStorageLocation:
    - name: default
      provider: aws
      bucket: edfbuilder-backups
      credential:
        name: secret-key # this key is created within the minio-backup/secret-sync and injected into the velero namespace
        key: aws
      config:
        region: minio
        s3Url: http://minio.minio-backup.svc.cluster.local:9000 # internal resolution, external access for velero cli via fwd
        s3ForcePathStyle: "true"