Migrating databases to the control plane
To begin creating the control plane, enable back-end services and import the databases from your original {rhos_prev_long} {rhos_prev_ver} deployment.
Retrieving topology-specific service configuration
Before you migrate your databases to the {rhos_long} control plane, retrieve the topology-specific service configuration from your {rhos_prev_long} ({OpenStackShort}) environment. You need this configuration for the following reasons:
-
To check your current database for inaccuracies
-
To ensure that you have the data you need before the migration
-
To compare your {OpenStackShort} database with the adopted {rhos_acro} database
Get the Controller VIP, which you use later in the SOURCE_MARIADB_IP environment variable.
-
On the allinone host, get the internal_api VIP from this file:
[cloud-user@allinone ~]$ cat /home/cloud-user/deployed_network.yaml
[...]
  VipPortMap:
    storage:
      ip_address: 172.18.0.16
      ip_address_uri: 172.18.0.16
      ip_subnet: 172.18.0.16/24
    storage_mgmt:
      ip_address: 172.20.0.16
      ip_address_uri: 172.20.0.16
      ip_subnet: 172.20.0.16/24
    internal_api:
      ip_address: 172.17.0.16
      ip_address_uri: 172.17.0.16
      ip_subnet: 172.17.0.16/24
    # tenant:
    #   ip_address: 172.19.0.2
    #   ip_address_uri: 172.19.0.2
    #   ip_subnet: 172.19.0.2/24
    external:
      ip_address: 172.21.0.16
      ip_address_uri: 172.21.0.16
      ip_subnet: 172.21.0.16/24
[...]
SOURCE_MARIADB_IP is then 172.17.0.16.
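If you prefer to capture the value programmatically, the following one-liner is a sketch that assumes the deployed_network.yaml layout shown above, where internal_api is immediately followed by its ip_address line:
# Sketch: print the first ip_address value that follows the internal_api key
SOURCE_MARIADB_IP=$(awk '/internal_api:/ {f=1} f && /ip_address:/ {print $2; exit}' /home/cloud-user/deployed_network.yaml)
echo "$SOURCE_MARIADB_IP"  # expected: 172.17.0.16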
On the bastion host, copy the overcloud passwords file from the all-in-one node to the bastion node:
scp -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@allinone:/home/cloud-user/tripleo-standalone-passwords.yaml ~/.
-
On the bastion host, define the following shell variables. Replace the example values with values that are correct for your environment:
CONTROLLER1_SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100" MARIADB_IMAGE=quay.io/podified-antelope-centos9/openstack-mariadb:current-podified SOURCE_MARIADB_IP=172.17.0.16 SOURCE_DB_ROOT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }') MARIADB_CLIENT_ANNOTATIONS='--annotations=k8s.v1.cni.cncf.io/networks=internalapi'
-
Export the shell variables for the following outputs and test the connection to the {OpenStackShort} database:
export PULL_OPENSTACK_CONFIGURATION_DATABASES=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "$SOURCE_MARIADB_IP" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" -e 'SHOW databases;')
echo "$PULL_OPENSTACK_CONFIGURATION_DATABASES"
The nova, nova_api, and nova_cell0 databases are included in the same database host.
-
Run mysqlcheck on the {OpenStackShort} database to check for inaccuracies:
export PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysqlcheck --all-databases -h $SOURCE_MARIADB_IP -u root -p"$SOURCE_DB_ROOT_PASSWORD" | grep -v OK)
echo "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK"
-
Get the {compute_service_first_ref} cell mappings:
export PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" nova_api -e \
  'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')
echo "$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS"
-
Get the hostnames of the registered Compute services:
export PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES=$(oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "$SOURCE_MARIADB_IP" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" nova_api -e \
  "select host from nova.services where services.binary='nova-compute' and deleted=0;")
echo "$PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES"
-
Get the list of the mapped {compute_service} cells:
export PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS=$($CONTROLLER1_SSH sudo podman exec -it nova_api nova-manage cell_v2 list_cells)
echo "$PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS"
If any of the exported values are lost after the {OpenStackShort} control plane services are shut down, re-running the commands fails because the services are no longer running on the source cloud and the data cannot be retrieved. To avoid data loss, preserve the exported values in an environment file before you shut down the control plane services.
-
Store the exported variables for future use:
cat > ~/.source_cloud_exported_variables << EOF
PULL_OPENSTACK_CONFIGURATION_DATABASES="$PULL_OPENSTACK_CONFIGURATION_DATABASES"
PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK="$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK"
PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS="$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS"
PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES="$PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES"
PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS="$PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS"
SRIOV_AGENTS="$SRIOV_AGENTS"
EOF
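Before you shut down the source control plane, you can verify that the captured values are non-empty. This check is an optional sketch; SRIOV_AGENTS is omitted because it is not set earlier in this procedure:
. ~/.source_cloud_exported_variables
for v in PULL_OPENSTACK_CONFIGURATION_DATABASES \
         PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS \
         PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES \
         PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS; do
  # Indirect expansion: report any variable that was captured empty
  [ -n "${!v}" ] && echo "OK: $v" || echo "EMPTY: $v"
done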
Deploying back-end services
Create the OpenStackControlPlane custom resource (CR) with the basic back-end services deployed, and disable all the {rhos_prev_long} ({OpenStackShort}) services. This CR is the foundation of the control plane.
-
The cloud that you want to adopt is running, and it is on the latest minor version of {OpenStackShort} {rhos_prev_ver}.
-
All control plane and data plane hosts of the source cloud are running, and continue to run throughout the adoption procedure.
-
The openstack-operator is deployed, but OpenStackControlPlane is not deployed.
For developer/CI environments, the {OpenStackShort} operator can be deployed by running make openstack inside the install_yamls repository. For production environments, the deployment method will likely be different.
-
There are free PVs available for MariaDB and RabbitMQ.
For developer/CI environments driven by install_yamls, make sure you have run make crc_storage.
-
Set the desired admin password for the control plane deployment. This can be the admin password from your original deployment or a different password:
ADMIN_PASSWORD=SomePassword
To use the existing {OpenStackShort} deployment password:
ADMIN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' AdminPassword:' | awk -F ': ' '{ print $2; }')
-
Set the service password variables to match the original deployment. Database passwords can differ in the control plane environment, but you must synchronize the service account passwords.
For example, in developer environments with {OpenStackPreviousInstaller} Standalone, the passwords can be extracted:
AODH_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' AodhPassword:' | awk -F ': ' '{ print $2; }')
BARBICAN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' BarbicanPassword:' | awk -F ': ' '{ print $2; }')
CEILOMETER_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' CeilometerPassword:' | awk -F ': ' '{ print $2; }')
CINDER_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' CinderPassword:' | awk -F ': ' '{ print $2; }')
GLANCE_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' GlancePassword:' | awk -F ': ' '{ print $2; }')
HEAT_AUTH_ENCRYPTION_KEY=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatAuthEncryptionKey:' | awk -F ': ' '{ print $2; }')
HEAT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatPassword:' | awk -F ': ' '{ print $2; }')
HEAT_STACK_DOMAIN_ADMIN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatStackDomainAdminPassword:' | awk -F ': ' '{ print $2; }')
IRONIC_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' IronicPassword:' | awk -F ': ' '{ print $2; }')
MANILA_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' ManilaPassword:' | awk -F ': ' '{ print $2; }')
NEUTRON_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' NeutronPassword:' | awk -F ': ' '{ print $2; }')
NOVA_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' NovaPassword:' | awk -F ': ' '{ print $2; }')
OCTAVIA_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' OctaviaPassword:' | awk -F ': ' '{ print $2; }')
PLACEMENT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' PlacementPassword:' | awk -F ': ' '{ print $2; }')
SWIFT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' SwiftPassword:' | awk -F ': ' '{ print $2; }')
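If you prefer not to repeat the same grep and awk pipeline for every key, a minimal helper is sketched below; get_password is a hypothetical convenience function, not part of the documented procedure:
# Hypothetical helper: print the value of a given key from the passwords file
get_password() {
  grep " ${1}:" ~/tripleo-standalone-passwords.yaml | awk -F ': ' '{ print $2; }'
}
NOVA_PASSWORD=$(get_password NovaPassword)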
-
Ensure that you are using the {rhocp_long} namespace where you want the control plane to be deployed:
oc project openstack
-
Create the {OpenStackShort} secret:
cd /home/lab-user/labrepo/content/files
oc apply -f osp-ng-ctlplane-secret.yaml
-
If the $ADMIN_PASSWORD is different from the password that you set in osp-secret, amend the AdminPassword key in the osp-secret:
oc set data secret/osp-secret "AdminPassword=$ADMIN_PASSWORD"
-
Set service account passwords in osp-secret to match the service account passwords from the original deployment:
oc set data secret/osp-secret "AodhPassword=$AODH_PASSWORD"
oc set data secret/osp-secret "BarbicanPassword=$BARBICAN_PASSWORD"
oc set data secret/osp-secret "CeilometerPassword=$CEILOMETER_PASSWORD"
oc set data secret/osp-secret "CinderPassword=$CINDER_PASSWORD"
oc set data secret/osp-secret "GlancePassword=$GLANCE_PASSWORD"
oc set data secret/osp-secret "HeatAuthEncryptionKey=$HEAT_AUTH_ENCRYPTION_KEY"
oc set data secret/osp-secret "HeatPassword=$HEAT_PASSWORD"
oc set data secret/osp-secret "HeatStackDomainAdminPassword=$HEAT_STACK_DOMAIN_ADMIN_PASSWORD"
oc set data secret/osp-secret "IronicPassword=$IRONIC_PASSWORD"
oc set data secret/osp-secret "IronicInspectorPassword=$IRONIC_PASSWORD"
oc set data secret/osp-secret "ManilaPassword=$MANILA_PASSWORD"
oc set data secret/osp-secret "MetadataSecret=$METADATA_SECRET"
oc set data secret/osp-secret "NeutronPassword=$NEUTRON_PASSWORD"
oc set data secret/osp-secret "NovaPassword=$NOVA_PASSWORD"
oc set data secret/osp-secret "OctaviaPassword=$OCTAVIA_PASSWORD"
oc set data secret/osp-secret "PlacementPassword=$PLACEMENT_PASSWORD"
oc set data secret/osp-secret "SwiftPassword=$SWIFT_PASSWORD"
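To spot-check that a value landed in the secret, you can decode a single key; this verification sketch is optional:
oc get secret osp-secret -o jsonpath='{.data.NovaPassword}' | base64 -d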
-
Deploy the OpenStackControlPlane CR. Ensure that you only enable the DNS, MariaDB, Memcached, and RabbitMQ services. All other services must be disabled:
cd /home/lab-user/labrepo/content/files
oc apply -f osp-ng-ctlplane-deploy-backend.yaml
-
Verify that MariaDB is running:
oc get pod openstack-galera-0 -o jsonpath='{.status.phase}{"\n"}'
oc get pod openstack-cell1-galera-0 -o jsonpath='{.status.phase}{"\n"}'
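Both commands should print Running. If you script this step, a blocking variant is sketched below; it uses the same jsonpath phase check that appears later in this procedure, and the timeout value is an assumption:
oc wait --for=jsonpath='{.status.phase}'=Running pod/openstack-galera-0 --timeout=300s
oc wait --for=jsonpath='{.status.phase}'=Running pod/openstack-cell1-galera-0 --timeout=300s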
Stopping {rhos_prev_long} services
Before you start the {rhos_long} adoption, you must stop the {rhos_prev_long} ({OpenStackShort}) services to avoid inconsistencies in the data that you migrate for the data plane adoption. Inconsistencies are caused by resource changes after the database is copied to the new deployment.
You should not stop the infrastructure management services yet, such as:
-
Database
-
RabbitMQ
-
HAProxy Load Balancer
-
Ceph-nfs
-
Compute service
-
Containerized modular libvirt daemons
-
{object_storage_first_ref} back-end services
-
Ensure that there are no long-running tasks that require the services that you plan to stop, such as instance live migrations, volume migrations, volume creation, backup and restore, attaching, detaching, and other similar operations. On the all-in-one host:
export OS_CLOUD=standalone
openstack server list --all-projects -c ID -c Status | grep -E '\| .+ing \|'
openstack volume list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
openstack volume backup list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
openstack image list -c ID -c Status | grep -E '\| .+ing \|'
-
Collect the services topology-specific configuration. For more information, see Retrieving topology-specific service configuration.
-
On the bastion host, define the following shell variables. The values are examples and refer to a single-node standalone {OpenStackPreviousInstaller} deployment. Replace these example values with values that are correct for your environment:
CONTROLLER1_SSH="ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@allinone"
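As an optional sanity check, confirm that the variable resolves to a working connection before you run the stop scripts:
$CONTROLLER1_SSH hostname  # should print the all-in-one host name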
-
Disable {OpenStackShort} control plane services:
# Update the services list to be stopped
ServicesToStop=("tripleo_aodh_api.service"
                "tripleo_aodh_api_cron.service"
                "tripleo_aodh_evaluator.service"
                "tripleo_aodh_listener.service"
                "tripleo_aodh_notifier.service"
                "tripleo_ceilometer_agent_central.service"
                "tripleo_ceilometer_agent_compute.service"
                "tripleo_ceilometer_agent_ipmi.service"
                "tripleo_ceilometer_agent_notification.service"
                "tripleo_horizon.service"
                "tripleo_keystone.service"
                "tripleo_barbican_api.service"
                "tripleo_barbican_worker.service"
                "tripleo_barbican_keystone_listener.service"
                "tripleo_cinder_api.service"
                "tripleo_cinder_api_cron.service"
                "tripleo_cinder_scheduler.service"
                "tripleo_cinder_volume.service"
                "tripleo_cinder_backup.service"
                "tripleo_collectd.service"
                "tripleo_glance_api.service"
                "tripleo_gnocchi_api.service"
                "tripleo_gnocchi_metricd.service"
                "tripleo_gnocchi_statsd.service"
                "tripleo_manila_api.service"
                "tripleo_manila_api_cron.service"
                "tripleo_manila_scheduler.service"
                "tripleo_neutron_api.service"
                "tripleo_placement_api.service"
                "tripleo_nova_api_cron.service"
                "tripleo_nova_api.service"
                "tripleo_nova_conductor.service"
                "tripleo_nova_metadata.service"
                "tripleo_nova_scheduler.service"
                "tripleo_nova_vnc_proxy.service"
                "tripleo_ovn_cluster_northd.service"
                "tripleo_ironic_neutron_agent.service"
                "tripleo_ironic_api.service"
                "tripleo_ironic_inspector.service"
                "tripleo_ironic_conductor.service")

PacemakerResourcesToStop=("openstack-cinder-volume"
                          "openstack-cinder-backup"
                          "openstack-manila-share")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
  for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
      echo "Stopping the $service in controller $i"
      if ${!SSH_CMD} sudo systemctl is-active $service; then
        ${!SSH_CMD} sudo systemctl stop $service
      fi
    fi
  done
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
  for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
    if [ ! -z "${!SSH_CMD}" ]; then
      if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
        echo "ERROR: Service $service still running on controller $i"
      else
        echo "OK: Service $service is not running on controller $i"
      fi
    fi
  done
done

echo "Stopping pacemaker OpenStack services"
for i in {1..3}; do
  SSH_CMD=CONTROLLER${i}_SSH
  if [ ! -z "${!SSH_CMD}" ]; then
    echo "Using controller $i to run pacemaker commands"
    for resource in ${PacemakerResourcesToStop[*]}; do
      if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
        echo "Stopping $resource"
        ${!SSH_CMD} sudo pcs resource disable $resource
      else
        echo "Service $resource not present"
      fi
    done
    break
  fi
done

echo "Checking pacemaker OpenStack services"
for i in {1..3}; do
  SSH_CMD=CONTROLLER${i}_SSH
  if [ ! -z "${!SSH_CMD}" ]; then
    echo "Using controller $i to run pacemaker commands"
    for resource in ${PacemakerResourcesToStop[*]}; do
      if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
        if ! ${!SSH_CMD} sudo pcs resource status $resource | grep Started; then
          echo "OK: Service $resource is stopped"
        else
          echo "ERROR: Service $resource is started"
        fi
      fi
    done
    break
  fi
done
If the status of each service is OK, then the services stopped successfully.
Migrating databases to MariaDB instances
Migrate your databases from the original {rhos_prev_long} ({OpenStackShort}) deployment to the MariaDB instances in the {rhocp_long} cluster.
-
Ensure that the control plane MariaDB and RabbitMQ are running, and that no other control plane services are running.
-
Retrieve the topology-specific service configuration. For more information, see Retrieving topology-specific service configuration.
-
Stop the {OpenStackShort} services. For more information, see Stopping {rhos_prev_long} services.
-
Ensure that there is network routability between the original MariaDB and the MariaDB for the control plane. A connectivity check is sketched after the variable definitions below.
-
Define the following shell variables. Replace the following example values with values that are correct for your environment:
PODIFIED_MARIADB_IP=$(oc get svc --selector "mariadb/name=openstack" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_CELL1_MARIADB_IP=$(oc get svc --selector "mariadb/name=openstack-cell1" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_DB_ROOT_PASSWORD=$(oc get -o json secret/osp-secret | jq -r .data.DbRootPassword | base64 -d)

# The CHARACTER_SET and collation must match the source DB.
# If they do not, foreign key relationships break for any tables
# that are created in the future as part of db sync.
CHARACTER_SET=utf8
COLLATION=utf8_general_ci

STORAGE_CLASS=ocs-external-storagecluster-ceph-rbd
MARIADB_IMAGE=quay.io/podified-antelope-centos9/openstack-mariadb:current-podified

# Replace with your environment's MariaDB Galera cluster VIP and backend IPs:
SOURCE_MARIADB_IP=172.17.0.16
SOURCE_DB_ROOT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }')
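With these variables set, you can run the routability check mentioned in the prerequisites. This sketch assumes that MARIADB_CLIENT_ANNOTATIONS is still set from Retrieving topology-specific service configuration:
oc run mariadb-client ${MARIADB_CLIENT_ANNOTATIONS} -q --image ${MARIADB_IMAGE} -i --rm --restart=Never -- \
  mysql -rsh "$SOURCE_MARIADB_IP" -uroot -p"$SOURCE_DB_ROOT_PASSWORD" -e 'SELECT 1;'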
-
Prepare the MariaDB adoption helper pod:
-
Create a temporary volume claim and a pod for the database data copy. Edit the volume claim storage request if necessary, to give it enough space for the overcloud databases:
oc apply -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  storageClassName: $STORAGE_CLASS
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $MARIADB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: mariadb-data
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: mariadb-data
    persistentVolumeClaim:
      claimName: mariadb-data
EOF
-
Wait for the pod to be ready:
oc wait --for condition=Ready pod/mariadb-copy-data --timeout=30s
-
-
Check that the helper pod can access the source databases:
oc rsh mariadb-copy-data mysql -h "${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" -e "SHOW databases;"
-
Check that mysqlcheck had no errors:
. ~/.source_cloud_exported_variables
test -z "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" || [ "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" = " " ] && echo "OK" || echo "CHECK FAILED"
-
Test the connection to the control plane databases:
oc run mariadb-client --image $MARIADB_IMAGE -i --rm --restart=Never -- \
  mysql -rsh "$PODIFIED_MARIADB_IP" -uroot -p"$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;'
oc run mariadb-client --image $MARIADB_IMAGE -i --rm --restart=Never -- \
  mysql -rsh "$PODIFIED_CELL1_MARIADB_IP" -uroot -p"$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;'
You must transition {compute_service_first_ref} services that are imported later into a superconductor architecture by deleting the old service records in the cell databases, starting with cell1. New records are registered with different hostnames provided by the {compute_service} operator. All Compute services, except the Compute agent, have no internal state, and their service records can be safely deleted. You also need to rename the former default cell to cell1.
-
Create a dump of the original databases:
oc rsh mariadb-copy-data << EOF
  mysql -h"${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" \
  -N -e "show databases" | grep -E -v "schema|mysql|gnocchi|aodh" | \
  while read dbname; do
    echo "Dumping \${dbname}";
    mysqldump -h"${SOURCE_MARIADB_IP}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD}" \
      --single-transaction --complete-insert --skip-lock-tables --lock-tables=0 \
      "\${dbname}" > /backup/"\${dbname}".sql;
  done
EOF
-
Restore the databases from the .sql files into the control plane MariaDB:
oc rsh mariadb-copy-data << EOF
  # db schemas to rename on import
  declare -A db_name_map
  db_name_map['nova']='nova_cell1'
  db_name_map['ovs_neutron']='neutron'
  db_name_map['ironic-inspector']='ironic_inspector'

  # db servers to import into
  declare -A db_server_map
  db_server_map['default']=${PODIFIED_MARIADB_IP}
  db_server_map['nova_cell1']=${PODIFIED_CELL1_MARIADB_IP}

  # db server root password map
  declare -A db_server_password_map
  db_server_password_map['default']=${PODIFIED_DB_ROOT_PASSWORD}
  db_server_password_map['nova_cell1']=${PODIFIED_DB_ROOT_PASSWORD}

  cd /backup
  for db_file in \$(ls *.sql); do
    db_name=\$(echo \${db_file} | awk -F'.' '{ print \$1; }')
    if [[ -v "db_name_map[\${db_name}]" ]]; then
      echo "renaming \${db_name} to \${db_name_map[\${db_name}]}"
      db_name=\${db_name_map[\${db_name}]}
    fi
    db_server=\${db_server_map["default"]}
    if [[ -v "db_server_map[\${db_name}]" ]]; then
      db_server=\${db_server_map[\${db_name}]}
    fi
    db_password=\${db_server_password_map['default']}
    if [[ -v "db_server_password_map[\${db_name}]" ]]; then
      db_password=\${db_server_password_map[\${db_name}]}
    fi
    echo "creating \${db_name} in \${db_server}"
    mysql -h"\${db_server}" -uroot "-p\${db_password}" -e \
      "CREATE DATABASE IF NOT EXISTS \${db_name} DEFAULT \
      CHARACTER SET ${CHARACTER_SET} DEFAULT COLLATE ${COLLATION};"
    echo "importing \${db_name} into \${db_server}"
    mysql -h "\${db_server}" -uroot "-p\${db_password}" "\${db_name}" < "\${db_file}"
  done
  mysql -h "\${db_server_map['default']}" -uroot -p"\${db_server_password_map['default']}" -e \
    "update nova_api.cell_mappings set name='cell1' where name='default';"
  mysql -h "\${db_server_map['nova_cell1']}" -uroot -p"\${db_server_password_map['nova_cell1']}" -e \
    "delete from nova_cell1.services where host not like '%nova-cell1-%' and services.binary != 'nova-compute';"
EOF
Compare the following outputs with the topology-specific service configuration. For more information, see Retrieving topology-specific service configuration.
-
Check that the databases are imported correctly:
. ~/.source_cloud_exported_variables

# use 'oc exec' and 'mysql -rs' to maintain formatting
dbs=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;')
echo $dbs | grep -Eq '\bkeystone\b' && echo "OK" || echo "CHECK FAILED"

# ensure neutron db is renamed from ovs_neutron
echo $dbs | grep -Eq '\bneutron\b'
echo $PULL_OPENSTACK_CONFIGURATION_DATABASES | grep -Eq '\bovs_neutron\b' && echo "OK" || echo "CHECK FAILED"

# ensure nova cell1 db is extracted to a separate db server and renamed from nova to nova_cell1
c1dbs=$(oc exec openstack-cell1-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" -e 'SHOW databases;')
echo $c1dbs | grep -Eq '\bnova_cell1\b' && echo "OK" || echo "CHECK FAILED"

# ensure default cell renamed to cell1, and the cell UUIDs retained intact
novadb_mapped_cells=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" \
  nova_api -e 'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')
uuidf='\S{8,}-\S{4,}-\S{4,}-\S{4,}-\S{12,}'
left_behind=$(comm -23 \
  <(echo $PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS | grep -oE " $uuidf \S+") \
  <(echo $novadb_mapped_cells | tr -s "| " " " | grep -oE " $uuidf \S+"))
changed=$(comm -13 \
  <(echo $PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS | grep -oE " $uuidf \S+") \
  <(echo $novadb_mapped_cells | tr -s "| " " " | grep -oE " $uuidf \S+"))
test $(grep -Ec ' \S+$' <<<$left_behind) -eq 1 && echo "OK" || echo "CHECK FAILED"
default=$(grep -E ' default$' <<<$left_behind)
test $(grep -Ec ' \S+$' <<<$changed) -eq 1 && echo "OK" || echo "CHECK FAILED"
grep -qE " $(awk '{print $1}' <<<$default) cell1$" <<<$changed && echo "OK" || echo "CHECK FAILED"

# ensure the registered Compute service name has not changed
novadb_svc_records=$(oc exec openstack-cell1-galera-0 -c galera -- mysql -rs -uroot "-p$PODIFIED_DB_ROOT_PASSWORD" \
  nova_cell1 -e "select host from services where services.binary='nova-compute' and deleted=0 order by host asc;")
diff -Z <(echo $novadb_svc_records) <(echo $PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES) && echo "OK" || echo "CHECK FAILED"
-
Delete the mariadb-copy-data pod and the mariadb-data persistent volume claim that contains the database backup:
Consider taking a snapshot of them before deleting.
oc delete pod mariadb-copy-data
oc delete pvc mariadb-data
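If you want to preserve the backup, a VolumeSnapshot sketch for the mariadb-data claim follows; the volumeSnapshotClassName is environment-specific and is an assumption here:
oc apply -f - <<EOF
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mariadb-data-snapshot
spec:
  # Assumed snapshot class; list the available classes with: oc get volumesnapshotclass
  volumeSnapshotClassName: ocs-external-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: mariadb-data
EOF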
During the pre-checks and post-checks, the mariadb-client pod might return a pod security warning related to the restricted:latest security context constraint. This warning is due to default security context constraints and does not prevent the admission controller from creating a pod. You see a warning for the short-lived pod, but it does not interfere with functionality.
For more information, see About pod security standards and warnings.
Migrating OVN data
Migrate the data in the OVN databases from the original {rhos_prev_long} deployment to ovsdb-server instances that are running in the {rhocp_long} cluster.
-
The OpenStackControlPlane resource is created.
-
NetworkAttachmentDefinition custom resources (CRs) for the original cluster are defined. Specifically, the internalapi network is defined.
-
The original {networking_first_ref} and OVN northd are not running.
-
There is network routability between the control plane services and the adopted cluster.
-
The cloud is migrated to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver.
-
Define the following shell variables. Replace the example values with values that are correct for your environment:
STORAGE_CLASS_NAME=ocs-external-storagecluster-ceph-rbd
OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9:18.0
SOURCE_OVSDB_IP=172.17.0.15 # For IPv4
To get the value to set SOURCE_OVSDB_IP, query the puppet-generated configurations on a Controller node:
grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
In this environment, SOURCE_OVSDB_IP is 172.17.0.15.
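To extract the address programmatically, the following one-liner is a sketch that assumes the connection strings use the tcp:IP:port format surfaced by the grep above:
# Sketch: take the IP from the first ovn_nb_connection match
SOURCE_OVSDB_IP=$(grep -rIh 'ovn_nb_conn' /var/lib/config-data/puppet-generated/ | head -1 | sed -E 's/.*tcp:([0-9.]+):.*/\1/')
echo "$SOURCE_OVSDB_IP"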
-
Prepare a temporary PersistentVolumeClaim and the helper pod for the OVN backup. Adjust the storage requests for a large database, if needed:
oc apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ovn-data-cert
  namespace: openstack
spec:
  commonName: ovn-data-cert
  secretName: ovn-data-cert
  issuerRef:
    name: rootca-internal
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovn-data
spec:
  storageClassName: $STORAGE_CLASS_NAME
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ovn-copy-data
  annotations:
    openshift.io/scc: anyuid
    k8s.v1.cni.cncf.io/networks: internalapi
  labels:
    app: adoption
spec:
  containers:
  - image: $OVSDB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: ovn-data
    - mountPath: /etc/pki/tls/misc
      name: ovn-data-cert
      readOnly: true
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: ovn-data
    persistentVolumeClaim:
      claimName: ovn-data
  - name: ovn-data-cert
    secret:
      secretName: ovn-data-cert
EOF
-
Wait for the pod to be ready:
oc wait --for=condition=Ready pod/ovn-copy-data --timeout=30s
-
Back up your OVN databases:
-
If you did not enable TLS everywhere, run the following command:
oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
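If TLS everywhere is enabled, the backup must target ssl: endpoints with the certificates that are mounted at /etc/pki/tls/misc in the helper pod. The flags below are a sketch based on the standard ovsdb-client SSL options and the cert-manager secret key names (tls.crt, tls.key, ca.crt):
oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"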
-
-
Start the control plane OVN database services prior to import, with northd and ovn-controller disabled:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        replicas: 0
      ovnController:
        networkAttachment: tenant
        nodeSelector:
          node: non-existing-node-name
'
-
Wait for the OVN database services to reach the Running phase:
oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-nb
oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-sb
-
Fetch the OVN database IP addresses on the clusterIP service network:
PODIFIED_OVSDB_NB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-nb-0" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_OVSDB_SB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-sb-0" -ojsonpath='{.items[0].spec.clusterIP}')
-
Upgrade the database schema for the backup files:
-
If you did not enable TLS everywhere, run the following command:
oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema"
oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
-
-
Restore the database backup to the new OVN database servers:
-
If you did not enable TLS everywhere, run the following command:
oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db"
oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
-
-
Check that the data was successfully migrated by running the following commands against the new database servers, for example:
oc exec -it ovsdbserver-nb-0 -- ovn-nbctl show
oc exec -it ovsdbserver-sb-0 -- ovn-sbctl list Chassis
-
Start the control plane ovn-northd service to keep both OVN databases in sync:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnNorthd:
        replicas: 1
'
-
If you are running OVN gateway services on {OpenShiftShort} nodes, enable the control plane ovn-controller service by clearing the blocking node selector:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnController:
        networkAttachment: tenant
        nodeSelector: null
'
Running OVN gateways on {OpenShiftShort} nodes might be prone to data plane downtime during Open vSwitch upgrades. Consider running OVN gateways on dedicated Networker data plane nodes for production deployments instead.
-
Delete the ovn-copy-data helper pod and the temporary ovn-data PersistentVolumeClaim that is used to store OVN database backup files:
oc delete --ignore-not-found=true pod ovn-copy-data
oc delete --ignore-not-found=true pvc ovn-data
Consider taking a snapshot of the temporary ovn-data PersistentVolumeClaim before you delete it. For more information, see About volume snapshots in OpenShift Container Platform storage overview.
-
Stop the adopted OVN database servers:
ServicesToStop=("tripleo_ovn_cluster_north_db_server.service" "tripleo_ovn_cluster_south_db_server.service") echo "Stopping systemd OpenStack services" for service in ${ServicesToStop[*]}; do for i in {1..3}; do SSH_CMD=CONTROLLER${i}_SSH if [ ! -z "${!SSH_CMD}" ]; then echo "Stopping the $service in controller $i" if ${!SSH_CMD} sudo systemctl is-active $service; then ${!SSH_CMD} sudo systemctl stop $service fi fi done done echo "Checking systemd OpenStack services" for service in ${ServicesToStop[*]}; do for i in {1..3}; do SSH_CMD=CONTROLLER${i}_SSH if [ ! -z "${!SSH_CMD}" ]; then if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then echo "ERROR: Service $service still running on controller $i" else echo "OK: Service $service is not running on controller $i" fi fi done done