You can deploy Velero on IBM Public or Private Cloud, or on any other Kubernetes cluster, and use IBM Cloud Object Storage (COS) as the destination for Velero's backups.
To set up IBM Cloud Object Storage (COS) as Velero's destination, follow these steps:
Download the latest official release's tarball for your client platform.
We strongly recommend that you use an official release of Velero. The tarballs for each release contain the velero command-line client. The code in the main branch of the Velero repository is under active development and is not guaranteed to be stable!
Extract the tarball:
tar -xvf <RELEASE-TARBALL-NAME>.tar.gz -C /dir/to/extract/to
The directory you extracted to is referred to as the "Velero directory" in subsequent steps.
Move the velero binary from the Velero directory to somewhere in your PATH.
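For example, a minimal sketch assuming a Linux amd64 client and Velero v1.14.0 (the version and platform here are only examples; pick the release you actually need from the Velero releases page):
# Download the release tarball for your platform
curl -LO https://github.com/vmware-tanzu/velero/releases/download/v1.14.0/velero-v1.14.0-linux-amd64.tar.gz
# Extract it and move the client binary onto the PATH
tar -xvf velero-v1.14.0-linux-amd64.tar.gz -C /tmp
sudo mv /tmp/velero-v1.14.0-linux-amd64/velero /usr/local/bin/velero
# Confirm the client is installed
velero version --client-only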
If you don't have a COS instance, you can create a new one by following the instructions in Creating a new resource instance.
Velero requires an object storage bucket to store backups in. See instructions in Create some buckets to store your data.
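If you prefer the command line, here is a sketch using the IBM Cloud CLI with the cloud-object-storage plugin (the instance and bucket names are placeholders, and the exact plugin subcommands can vary by plugin version; the console instructions linked above remain the authoritative path):
# Create a Standard-plan COS service instance
ibmcloud resource service-instance-create <YOUR_COS_INSTANCE> cloud-object-storage standard global
# Create a bucket in that instance for Velero's backups
ibmcloud cos bucket-create --bucket <YOUR_BUCKET> --ibm-service-instance-id <YOUR_COS_INSTANCE_ID> --region <YOUR_REGION>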
The process of creating service credentials is described in Service credentials. Keep the following in mind:
The Velero service will write its backups into the bucket, so it requires the "Writer" access role.
Velero uses an AWS S3-compatible API, which means it authenticates using a signature created from a pair of access and secret keys (a set of HMAC credentials). You can create these HMAC credentials by specifying {"HMAC":true} as an optional inline parameter. See the HMAC credentials guide.
After successfully creating a service credential, you can view the JSON definition of the credential. Under the cos_hmac_keys entry there are access_key_id and secret_access_key. Use them in the next step.
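For example, a sketch that creates a Writer credential with HMAC keys from the IBM Cloud CLI and then prints its JSON definition (the key name velero-cos-key is only an example):
# Create a service credential with the Writer role and HMAC keys enabled
ibmcloud resource service-key-create velero-cos-key Writer --instance-name <YOUR_COS_INSTANCE> --parameters '{"HMAC":true}'
# Print the credential; look for access_key_id and secret_access_key under cos_hmac_keys
ibmcloud resource service-key velero-cos-key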
Create a Velero-specific credentials file (credentials-velero) in your local directory:
[default]
aws_access_key_id=<ACCESS_KEY_ID>
aws_secret_access_key=<SECRET_ACCESS_KEY>
where the access key ID and secret access key are the values that you got above.
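For example, if you have exported the HMAC values as shell variables (the variable names here are just placeholders), you can write the file like this:
# Write the credentials file used by the AWS-compatible plugin
cat > credentials-velero <<EOF
[default]
aws_access_key_id=${ACCESS_KEY_ID}
aws_secret_access_key=${SECRET_ACCESS_KEY}
EOF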
Install Velero, including all prerequisites, into the cluster and start the deployment. This will create a namespace called velero, and place a deployment named velero in it.
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--plugins velero/velero-plugin-for-aws:v1.10.0 \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>,checksumAlgorithm=""
Velero does not have a volume snapshot plugin for IBM Cloud, so creating volume snapshots is disabled.
Additionally, you can specify --use-node-agent to enable File System Backup, and --wait to wait for the deployment to be ready.
(Optional) Specify CPU and memory resource requests and limits for the Velero/node-agent pods.
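If you do, velero install accepts flags for this; for example, you could append something like the following to the install command above (flag names as documented for recent Velero releases; the values are only examples):
--velero-pod-cpu-request=500m \
--velero-pod-mem-request=128Mi \
--velero-pod-cpu-limit=1000m \
--velero-pod-mem-limit=512Mi \
--node-agent-pod-cpu-request=500m \
--node-agent-pod-mem-request=512Mi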
Once the installation is complete, remove the default VolumeSnapshotLocation that was created by velero install, since it's specific to AWS and won't work for IBM Cloud:
kubectl -n velero delete volumesnapshotlocation.velero.io default
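To verify the installation, here is a quick check (these commands assume the default velero namespace created above):
# The velero deployment should become Available, and no VolumeSnapshotLocation should remain
kubectl -n velero get deployment velero
kubectl -n velero get volumesnapshotlocation.velero.io
# The backup storage location should report Available once Velero can reach the bucket
velero backup-location get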
For more complex installation needs, use either the Helm chart, or add the --dry-run -o yaml options to generate the YAML representation of the installation.
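For example, to generate the manifests without applying them (a sketch reusing the same flags as the install command above; the output filename is only an example):
velero install \
--provider aws \
--bucket <YOUR_BUCKET> \
--secret-file ./credentials-velero \
--plugins velero/velero-plugin-for-aws:v1.10.0 \
--use-volume-snapshots=false \
--backup-location-config region=<YOUR_REGION>,s3ForcePathStyle="true",s3Url=<YOUR_URL_ACCESS_POINT>,checksumAlgorithm="" \
--dry-run -o yaml > velero-install.yaml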
If you run the nginx example, in file examples/nginx-app/with-pv.yaml:
Uncomment storageClassName: <YOUR_STORAGE_CLASS_NAME> and replace <YOUR_STORAGE_CLASS_NAME> with your StorageClass name.
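To see which StorageClass names are available in your cluster, you can run:
kubectl get storageclass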
To help you get started, see the Velero documentation.