Migrating OVN data
Migrate the data in the OVN databases from the original {rhos_prev_long} deployment to ovsdb-server
instances that are running in the {rhocp_long} cluster.
.Prerequisites

- The `OpenStackControlPlane` resource is created.
- `NetworkAttachmentDefinition` custom resources (CRs) for the original cluster are defined. Specifically, the `internalapi` network is defined.
- The original {networking_first_ref} and OVN `northd` are not running.
- There is network routability between the control plane services and the adopted cluster.
- The cloud is migrated to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver.
.Procedure

- Define the following shell variables. Replace the example values with values that are correct for your environment:

----
STORAGE_CLASS_NAME=ocs-external-storagecluster-ceph-rbd
OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9:18.0
SOURCE_OVSDB_IP=172.17.0.15 # For IPv4
----

To get the value to set for `SOURCE_OVSDB_IP`, query the puppet-generated configuration files on a Controller node:

----
grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
----

In this example, `SOURCE_OVSDB_IP` is `172.17.0.15`.
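The output of the `grep` command might look similar to the following; the file path and addresses are illustrative only. The IP address in the `ovn_nb_connection` and `ovn_sb_connection` values is the value to use for `SOURCE_OVSDB_IP`:

----
/var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/ml2_conf.ini:ovn_nb_connection=tcp:172.17.0.15:6641
/var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/ml2_conf.ini:ovn_sb_connection=tcp:172.17.0.15:6642
----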
- Prepare a temporary `PersistentVolumeClaim` and the helper pod for the OVN backup. Adjust the storage request for a large database, if needed:

----
oc apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ovn-data-cert
  namespace: openstack
spec:
  commonName: ovn-data-cert
  secretName: ovn-data-cert
  issuerRef:
    name: rootca-internal
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovn-data
spec:
  storageClassName: $STORAGE_CLASS_NAME
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ovn-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $OVSDB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: ovn-data
    - mountPath: /etc/pki/tls/misc
      name: ovn-data-cert
      readOnly: true
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: ovn-data
    persistentVolumeClaim:
      claimName: ovn-data
  - name: ovn-data-cert
    secret:
      secretName: ovn-data-cert
EOF
----
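Before you continue, you can optionally confirm that the claim is bound and that the helper pod was created; a minimal check might look like this:

----
oc get pvc ovn-data
oc get pod ovn-copy-data
----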
- Wait for the pod to be ready:

----
oc wait --for=condition=Ready pod/ovn-copy-data --timeout=30s
----
- Back up your OVN databases:
** If you did not enable TLS everywhere, run the following commands:

----
oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
----
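Optionally, verify that both backup files exist and are not empty before continuing; this is a minimal sanity check:

----
oc exec ovn-copy-data -- ls -lh /backup/
----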
- Start the control plane OVN database services prior to import, with `northd` and `ovn-controller` disabled:

----
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        replicas: 0
      ovnController:
        networkAttachment: tenant
        nodeSelector:
'
----
- Wait for the OVN database services to reach the `Running` phase:

----
oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-nb
oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-sb
----
- Fetch the OVN database IP addresses on the `clusterIP` service network:

----
PODIFIED_OVSDB_NB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-nb-0" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_OVSDB_SB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-sb-0" -ojsonpath='{.items[0].spec.clusterIP}')
----
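Optionally, confirm that both variables contain an IP address; empty output means the label selector did not match a service in the current namespace:

----
echo "NB: $PODIFIED_OVSDB_NB_IP  SB: $PODIFIED_OVSDB_SB_IP"
----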
- Upgrade the database schema for the backup files:
** If you did not enable TLS everywhere, run the following commands:

----
oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema"
oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
----
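Optionally, confirm that the conversion succeeded. After a successful conversion, `ovsdb-tool needs-conversion` prints `no` for each backup file, assuming the file names used above:

----
oc exec ovn-copy-data -- bash -c "ovsdb-tool needs-conversion /backup/ovs-nb.db /backup/ovs-nb.ovsschema"
oc exec ovn-copy-data -- bash -c "ovsdb-tool needs-conversion /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
----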
- Restore the database backup to the new OVN database servers:
** If you did not enable TLS everywhere, run the following commands:

----
oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
----
- Check that the data was successfully migrated by running the following commands against the new database servers, for example:

----
oc exec -it ovsdbserver-nb-0 -- ovn-nbctl show
oc exec -it ovsdbserver-sb-0 -- ovn-sbctl list Chassis
----
- Start the control plane `ovn-northd` service to keep both OVN databases in sync:

----
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnNorthd:
        replicas: 1
'
----
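Optionally, check that the `ovn-northd` pods are running before you continue. The command assumes that the control plane runs in the `openstack` namespace and that the pod names contain the service name:

----
oc get pods -n openstack | grep ovn-northd
----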
- If you are running OVN gateway services on {OpenShiftShort} nodes, enable the control plane `ovn-controller` service:

----
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnController:
        networkAttachment: tenant
'
----

NOTE: Running OVN gateways on {OpenShiftShort} nodes might be prone to data plane downtime during Open vSwitch upgrades. Consider running OVN gateways on dedicated Networker data plane nodes for production deployments instead.
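Optionally, confirm that `ovn-controller` pods were scheduled; the pod naming below is an assumption based on the service name:

----
oc get pods -n openstack | grep ovn-controller
----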
- Delete the `ovn-data` helper pod and the temporary `PersistentVolumeClaim` that is used to store OVN database backup files:

----
oc delete --ignore-not-found=true pod ovn-copy-data
oc delete --ignore-not-found=true pvc ovn-data
----

NOTE: Consider taking a snapshot of the `ovn-data` helper pod and the temporary `PersistentVolumeClaim` before deleting them. For more information, see About volume snapshots in OpenShift Container Platform storage overview.
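If you want to keep a copy of the backup data before deletion, a `VolumeSnapshot` of the `ovn-data` claim might look similar to the following sketch. The snapshot class name is illustrative and must match a `VolumeSnapshotClass` that exists for your storage backend:

----
oc apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: ovn-data-snapshot
  namespace: openstack
spec:
  volumeSnapshotClassName: ocs-external-storagecluster-rbdplugin-snapclass  # illustrative value
  source:
    persistentVolumeClaimName: ovn-data
EOF
----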
- Stop the adopted OVN database servers:

----
ServicesToStop=("tripleo_ovn_cluster_north_db_server.service"
                "tripleo_ovn_cluster_south_db_server.service")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            echo "Stopping the $service in controller $i"
            if ${!SSH_CMD} sudo systemctl is-active $service; then
                ${!SSH_CMD} sudo systemctl stop $service
            fi
        fi
    done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                echo "ERROR: Service $service still running on controller $i"
            else
                echo "OK: Service $service is not running on controller $i"
            fi
        fi
    done
done
----
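The script relies on `CONTROLLER1_SSH`, `CONTROLLER2_SSH`, and `CONTROLLER3_SSH` variables that are expected to be defined earlier in the adoption procedure. If they are not set in your current shell, define them before running the script; the values below are purely illustrative:

----
CONTROLLER1_SSH="ssh -i ~/.ssh/id_rsa root@192.168.24.10"
CONTROLLER2_SSH="ssh -i ~/.ssh/id_rsa root@192.168.24.11"
CONTROLLER3_SSH="ssh -i ~/.ssh/id_rsa root@192.168.24.12"
----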