Node Backup and Pod Restore


Our backup script scans a node's local storage location and archives it to the destination folder.

 

Steps for node backup

 

1. Log in to the KD master with root privileges.

 

2. Mount the backup storage on both the master and the nodes by running the command:

 

mount -t nfs <server>:<remote_path> <mountpoint>

 

-t, --types <list>        limit the set of filesystem types

nfs                       the filesystem type

<server>:<remote_path>    the NFS export that contains the backup storage

<mountpoint>              the directory where the backup storage should be mounted
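 

For example, assuming the backup storage is exported by a hypothetical NFS server as backup01:/exports/kd-backups and mounted at /mnt/bak (the mount point used in the examples below), the command might look like this:

 

mount -t nfs backup01:/exports/kd-backups /mnt/bak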

 

For more details on how to mount storage, please refer to the link.

 

3. Add a cron job on the master that runs the following command:

 

kdctl pods batch-dump --target-dir /mnt/bak/pods/

 

Usage: kdctl pods batch-dump [OPTIONS]

 

Options:

  --owner TEXT             If specified, only pods of this user will be dumped

  --target-dir DIRECTORY   If specified, pod dumps will be saved there in the following structure:

                           <target_dir>/<owner_id>/<pod_id>

  -h, --help               Show this message and exit.

 

Please refer to the cron documentation for more details about configuring automated tasks. A sample crontab entry is shown below.
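 

For example, a crontab entry on the master that dumps all pods nightly (the 2:00 AM schedule below is only an illustration) could look like this:

 

0 2 * * * kdctl pods batch-dump --target-dir /mnt/bak/pods/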

 

 

4. Add a cron job on each node that runs the following command:

 

kd-backup-node /mnt/bak/storage/

 

Usage: kd-backup-node [-h] [-v | -q] [-s] [-e CALLBACK] backup_dir

 

Positional arguments:

  backup_dir               destination for all created files

 

Optional arguments:

  -h, --help               show this help message and exit

  -v, --verbose            verbose (debug) logging

  -q, --quiet              silent mode, only log warnings

  -s, --skip               do not stop if one of the steps fails

  -e CALLBACK, --callback CALLBACK
                           callback for the backup file (the backup path is passed as the first argument)
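 

For example, a crontab entry on each node (the 3:00 AM schedule is only an illustration) could look like this:

 

0 3 * * * kd-backup-node /mnt/bak/storage/

 

If a post-backup action is needed, the -e option accepts a callback script; a minimal sketch (the script path and its contents are assumptions for illustration, not part of the product) might be:

 

#!/bin/bash
# /usr/local/bin/backup-done.sh - hypothetical callback script;
# kd-backup-node passes the backup path as the first argument
logger "node backup finished: $1"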

 

Please note that on a CEPH cluster the administrator does not need backups of persistent volumes to restore a pod, because persistent data is stored in CEPH.

So the command to restore a pod on a CEPH cluster looks as follows:

 

kdctl pods restore --file /mnt/bak/pods/<user_id>/<pod_id> --owner <to_whom_to_restore>

 

Note that the command is run without the --pv-backups-location parameter, which points to persistent volume backups, because it is not needed in the CEPH case.
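 

For illustration, with hypothetical IDs (a1b2c3 for the user, d4e5f6 for the pod) and a hypothetical target user admin2, the command would be:

 

kdctl pods restore --file /mnt/bak/pods/a1b2c3/d4e5f6 --owner admin2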

 

Steps for pod restore

 

1. Log in to the KD node with root privileges.

 

2. Make sure the backup storage is mounted on all nodes and on the master; you can verify this as shown below.
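 

Assuming the backup storage is mounted at /mnt/bak as in the examples above, a quick way to verify the mount on each machine is:

 

findmnt /mnt/bak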

 

3. Merge the storage backups from different nodes by running the following command on the node:

 

kd-backup-node-merge /mnt/bak/storage/

 

Usage: kd-backup-node-merge [-h] [-v | -q] [-s] [-d] [-p PRECISION] [-i] backups

 

Positional arguments:

  backups                  target dir which contains all backups

 

Optional arguments:

  -h, --help               show this help message and exit

  -v, --verbose            verbose (debug) logging

  -q, --quiet              silent mode, only log warnings

  -s, --skip               do not stop if one of the steps fails

  -d, --dry-run            do not touch any files

  -p PRECISION, --precision PRECISION
                           maximum time gap to group, in hours. Default: 1 hour.

  -i, --include-latest     set to also include the latest (possibly incomplete) backup folder
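 

For example, to preview a merge without touching any files, grouping backups that are at most two hours apart (the two-hour precision is only an illustration):

 

kd-backup-node-merge --dry-run --precision 2 /mnt/bak/storage/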

 

4. Log in to the KD master with root privileges.

 

5. To restore the pod, run the following command on the master:

 

kdctl pods restore --file /mnt/bak/pods/<user_id>/<pod_id> --pv-backups-location file:///mnt/bak/storage/<version_of_storage_backup_to_restore> --owner <to_whom_to_restore>
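 

For illustration, with the same hypothetical IDs as above and a hypothetical storage backup folder named 2017-06-01 (the actual folder names depend on how the node backups were created):

 

kdctl pods restore --file /mnt/bak/pods/a1b2c3/d4e5f6 --pv-backups-location file:///mnt/bak/storage/2017-06-01 --owner admin2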