The page has been translated by Gen AI.
Kubernetes Engine Migration
Overview
- Guide to the procedures and tasks for using Velero to migrate workloads (stateless and stateful) from SCP (Samsung Cloud Platform) Kubernetes Engine to SCP (Samsung Cloud Platform) V2.
- Velero is an open-source tool used to back up and restore data across Kubernetes clusters or in cloud environments. This guide explains step-by-step how to migrate a Kubernetes cluster using Velero.
Constraints
- Supported Kubernetes versions: Velero only supports specific versions of Kubernetes. Refer to the official documentation to check compatible versions.
- Resource limits: For large clusters, backup and restore times can be long.
- Network settings: The source and target clusters must be able to reach each other and the backup storage.
- Storage support: Velero only supports certain storage plugins. Make sure the storage provider you are using is compatible.
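The network prerequisite above can be sanity-checked from the Bastion Host or a worker node before installing anything. A minimal sketch, assuming a bash shell with `/dev/tcp` support; the endpoint and port values are placeholders:

```shell
# Verify the Object Storage endpoint is reachable on the expected port
# (8443 on SCP, 443 on SCP v2) before installing Velero.
# OBS_ENDPOINT and OBS_PORT are placeholders; substitute real values.
OBS_ENDPOINT=obs.example.com
OBS_PORT=443
if timeout 5 bash -c "exec 3<>/dev/tcp/$OBS_ENDPOINT/$OBS_PORT" 2>/dev/null; then
  echo "reachable: $OBS_ENDPOINT:$OBS_PORT"
else
  echo "NOT reachable: $OBS_ENDPOINT:$OBS_PORT"
fi
```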
Pre-work
1. Install Velero CLI: install on the Bastion Host
- A Virtual Server is required for Velero CLI tasks (OS: Ubuntu 24.04 / vCPU: 2 cores / Memory: 4 GB recommended).
- Download the Velero tarball that matches the work server's OS.
- Extract the downloaded tarball.
wget https://github.com/vmware-tanzu/velero/releases/download/v1.16.2/velero-v1.16.2-linux-amd64.tar.gz
tar -xvzf velero-v1.16.2-linux-amd64.tar.gz
- Copy the extracted velero binary to a directory on the execution PATH.
chmod +x velero
mv velero /usr/local/bin
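With the binary in place, it can be worth confirming that the CLI resolves from PATH. A quick check; the `--client-only` flag prints only the client version and needs no cluster connection:

```shell
# Confirm /usr/local/bin is on PATH, then print the client version.
case ":$PATH:" in
  *:/usr/local/bin:*) echo "PATH OK" ;;
  *) echo "WARNING: /usr/local/bin is not on PATH" ;;
esac
if command -v velero >/dev/null; then
  velero version --client-only
else
  echo "velero not found on PATH"
fi
```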
2. Backup Storage Settings
Create Object Storage for Velero to use as backup storage.
- Create a bucket.
- In the Object Storage details, register the Bastion Host and worker nodes under Access Control (SCP) or Service Resource Allowance (SCP v2).
- When accessing the Object Storage bucket across environments (v1 → v2 or v1 ← v2) through a VPC Endpoint, add a resource that allows private access, then add the corresponding VPC Endpoint under VPC Endpoint.
Grant Security Group access to port 8443 (SCP) / 443 (SCP v2) for the Bastion Host and worker nodes.
Prepare an Object Storage credentials file (authentication for using Object Storage as the backup storage).
cat << EOF > credentials-velero
[default]
aws_access_key_id=xxxx
aws_secret_access_key=xxxxx
EOF
3. Velero server and component installation: install on the clusters
- Prepare the kubeconfig files of the source and target Kubernetes clusters.
- Image preparation
velero/velero-plugin-for-aws:v1.12.1
velero/velero:v1.16.1
velero/velero-restore-helper:v1.15.2
bitnamilegacy/kubectl:1.30.6
quay.io/skopeo/stable:v1.19.0
alpine:3.22
- Register the images in the container registry
docker pull <image name>:<tag name>
skopeo copy docker-daemon:velero/velero:v1.16.1 docker://<registry address>/<repository name>/velero/velero:v1.16.1 --authfile ~/auth.json
* skopeo reads the registry authentication from auth.json
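The pull/copy pair above can be repeated for every image in the list. A sketch that prints the commands for all required images as a dry run (remove the echo to execute; the registry and repository names are placeholders, and ~/auth.json must hold the registry credentials):

```shell
#!/bin/sh
# Mirror all images Velero needs into the private registry.
# REGISTRY and REPOSITORY are placeholders; substitute real values.
REGISTRY="registry.example.com"
REPOSITORY="myrepo"
IMAGES="
velero/velero-plugin-for-aws:v1.12.1
velero/velero:v1.16.1
velero/velero-restore-helper:v1.15.2
bitnamilegacy/kubectl:1.30.6
quay.io/skopeo/stable:v1.19.0
alpine:3.22
"
for IMG in $IMAGES; do
  NAME=${IMG#quay.io/}   # drop the source registry host if present
  echo docker pull "$IMG"
  echo skopeo copy "docker-daemon:$IMG" \
       "docker://$REGISTRY/$REPOSITORY/$NAME" --authfile ~/auth.json
done
```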
- Create the Helm values file: check the image paths, versions, etc.
REGISTRY=<scr registry>
REPOSITORY=<repository> # applies to SCP v2
REGION=<object storage region, kr-west1>
BUCKET=<object storage bucket name>
OBS_ENDPOINT=<object storage access endpoint, do not include protocol scheme like https://>
OBS_VPC_ENDPOINT_IP=<object storage vpc endpoint ip, set only in the v1 environment>
cat << EOF > values-additional.yaml
image:
repository: $REGISTRY/$REPOSITORY/velero/velero
tag: v1.16.1
$(if [[ -n $OBS_VPC_ENDPOINT_IP ]]; then
cat <<INNER
hostAliases:
- ip: $OBS_VPC_ENDPOINT_IP
hostnames:
- $OBS_ENDPOINT
nodeAgent:
hostAliases:
- ip: $OBS_VPC_ENDPOINT_IP
hostnames:
- $OBS_ENDPOINT
INNER
fi)
initContainers:
- name: velero-plugin-for-aws
image: $REGISTRY/$REPOSITORY/velero/velero-plugin-for-aws:v1.12.1
volumeMounts:
- mountPath: /target
name: plugins
kubectl:
image:
repository: $REGISTRY/$REPOSITORY/bitnamilegacy/kubectl
tag: 1.30.6
configuration:
backupStorageLocation:
- name: default
provider: aws
bucket: $BUCKET
config:
region: $REGION
s3ForcePathStyle: true
s3Url: https://$OBS_ENDPOINT
checksumAlgorithm:
defaultVolumesToFsBackup: true
features: EnableAPIGroupVersions
serviceAccount:
  server:
    imagePullSecrets:
    - <secret name>
snapshotsEnabled: false
deployNodeAgent: true
configMaps:
fs-restore-action-config:
labels:
velero.io/plugin-config: ""
velero.io/pod-volume-restore: RestoreItemAction
data:
image: $REGISTRY/$REPOSITORY/velero/velero-restore-helper:v1.15.2
EOF
- Create an image pull secret for pulling the images registered in the Container Registry
kubectl create namespace velero
kubectl create secret generic <secret name> \
--from-file=.dockerconfigjson=$HOME/auth.json \
--type=kubernetes.io/dockerconfigjson -n velero
- Create Credential file for Object Storage access
cat << EOF > credentials-velero
[default]
aws_access_key_id=<accesskey>
aws_secret_access_key=<secretkey>
EOF
- Helm installation
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
- Download the Helm Chart file for installing Velero server and components.
helm repo add vmware-tanzu https://vmware-tanzu.github.io/helm-charts/
helm repo update
helm pull vmware-tanzu/velero --untar
- Create a namespace for installing Velero on the Source and Target Kubernetes clusters.
kubectl create namespace velero
- Install Velero via Helm Chart.
helm install velero -n velero velero \
--set-file credentials.secretContents.cloud=credentials-velero \
-f values-additional.yaml
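After the chart installs, it can be worth confirming that the release deployed and the pods are healthy (with deployNodeAgent: true, a node-agent DaemonSet pod should run on every worker node):

```shell
# Check the Helm release, then the Velero server Deployment and
# node-agent DaemonSet pods in the velero namespace.
helm status velero -n velero
kubectl get deploy,daemonset,pods -n velero
```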
- You can also install the Velero server and components via the Velero CLI (optional).
velero install \
--provider aws \
--plugins <registry address>/<repository name>/velero/velero-plugin-for-aws:v1.12.1 \
--bucket <object storage bucket name> \
--secret-file ./credentials-velero \
--backup-location-config region=<object storage region>,s3ForcePathStyle="true",s3Url=https://<object storage endpoint> \
--use-volume-snapshots=false \
--use-node-agent \
--features=EnableAPIGroupVersions \
--default-volumes-to-fs-backup \
--kubeconfig=kubeconfig
4. Cluster Preparation
- Check the status of the cluster to be migrated.
- Verify that the required resources (e.g., Pod, Service, PersistentVolume, etc.) are operating correctly.
5. Velero cluster deployment verification
kubectl get backupstoragelocation default -n velero
NAME PROVIDER BUCKET/PREFIX PHASE LAST VALIDATED ACCESS MODE DEFAULT
default aws v2migs Available 2025-08-12 12:45:41 +0900 KST ReadWrite true
* PHASE must be Available
Kubernetes Migration Procedure
1. Backup execution
velero backup create mlops --include-namespaces mynamespace --selector helm.sh/chart=mariadb-1.7.1-0
Note
| filter | value | description |
|---|---|---|
| --include-namespaces | ingress | Include only resources in the ingress namespace |
| --exclude-resources | pods,replicasets | Exclude Pods and ReplicaSets |
| --include-cluster-resources | true | Include all cluster-scoped resources in the backup |
| --selector | helm.sh/chart=ingress-nginx-4.12.3 | Include only resources with the label helm.sh/chart: ingress-nginx-4.12.3 |
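The filters compose. For example, a sketch of a namespace backup that skips Pods and ReplicaSets (Deployments recreate them on restore) while keeping cluster-scoped resources; the backup and namespace names are illustrative:

```shell
# Back up mynamespace without Pods/ReplicaSets, including
# cluster-scoped resources.
velero backup create app-backup \
  --include-namespaces mynamespace \
  --exclude-resources pods,replicasets \
  --include-cluster-resources=true
```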
- Backup result check
kubectl get backups -A
kubectl describe backups -n velero
velero backup describe mlops --details
2. Backup Data Check
- Inspecting the backup storage (Object Storage) shows that the Kubernetes resources (including the application workloads) and the volume data are stored compressed and encrypted. Under the backups path, a subfolder is created per backup containing the Kubernetes resource backups; the volume data is backed up under the kopia path.
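This layout can be browsed with any S3-compatible client. A sketch using the AWS CLI (assumed to be installed), reusing the same credentials file with path-style addressing; the bucket name matches the earlier example output and the endpoint is a placeholder:

```shell
# List the per-backup subfolders (Kubernetes resources) and the
# kopia prefix (volume data).
export AWS_SHARED_CREDENTIALS_FILE=./credentials-velero
BUCKET="v2migs"
OBS_ENDPOINT="https://obs.example.com"
aws s3 ls "s3://$BUCKET/backups/" --endpoint-url "$OBS_ENDPOINT"
aws s3 ls "s3://$BUCKET/kopia/"   --endpoint-url "$OBS_ENDPOINT"
```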
3. Restore execution
velero restore create mlops --from-backup mlops --parallel-files-download 4
* Use the --parallel-files-download option to download files in parallel during restore
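Restore progress and any warnings or errors can then be checked with the describe and logs subcommands:

```shell
# Inspect restore status, per-resource results, and logs.
velero restore describe mlops --details
velero restore logs mlops
```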
4. Restore Result Check
kubectl get pod,svc,deploy,cm,sa,secret,pvc -n mynamespace -l helm.sh/chart=mariadb-1.7.1-0
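To wait until the restored workload is actually Ready rather than merely created, kubectl wait can be used; the namespace and label mirror the check above:

```shell
# Block for up to 5 minutes until the restored pods report Ready.
kubectl wait pod -n mynamespace \
  -l helm.sh/chart=mariadb-1.7.1-0 \
  --for=condition=Ready --timeout=300s
```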