Adopting Red Hat OpenStack Platform control plane services
Adopt your Red Hat OpenStack Platform 17.1 control plane services to deploy them in the Red Hat OpenStack Services on OpenShift (RHOSO) 18.0 control plane.
Adopting the Identity service
To adopt the Identity service (keystone), you patch an existing OpenStackControlPlane
custom resource (CR) where the Identity service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
-
On the bastion host, create the keystone secret that includes the Fernet keys that were copied from the RHOSP environment:
oc apply -f - <<EOF
apiVersion: v1
data:
  CredentialKeys0: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/0 | base64 -w 0)
  CredentialKeys1: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/credential-keys/1 | base64 -w 0)
  FernetKeys0: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/0 | base64 -w 0)
  FernetKeys1: $($CONTROLLER1_SSH sudo cat /var/lib/config-data/puppet-generated/keystone/etc/keystone/fernet-keys/1 | base64 -w 0)
kind: Secret
metadata:
  name: keystone
  namespace: openstack
type: Opaque
EOF
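To confirm that the secret was created with the expected keys, you can inspect it. This is a quick sanity check; the key names are the ones defined in the manifest above:
oc describe secret keystone -n openstack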
-
Patch the OpenStackControlPlane CR to deploy the Identity service:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  keystone:
    enabled: true
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.89
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      secret: osp-secret
'
-
Wait for the openstackclient pod to reach the Running phase:
while ! oc get pods -n openstack | grep openstackclient ; do echo "waiting for osc pod"; sleep 5; done
oc wait --for=condition=Ready pod/openstackclient --timeout=30s
oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=openstackclient
-
Create an alias to use the openstack command in the Red Hat OpenStack Services on OpenShift (RHOSO) deployment:
alias openstack="oc exec -t openstackclient -- openstack"
-
Remove services and endpoints that still point to the RHOSP control plane, excluding the Identity service and its endpoints:
openstack endpoint list | grep keystone | awk '/admin/{ print $2; }' | xargs ${BASH_ALIASES[openstack]} endpoint delete || true

for service in aodh heat heat-cfn barbican cinderv3 glance gnocchi manila manilav2 neutron nova placement swift ironic-inspector ironic; do
  openstack service list | awk "/ $service /{ print \$2; }" | xargs -r ${BASH_ALIASES[openstack]} service delete || true
done
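To verify the cleanup, you can list what remains registered; at this point only the Identity service itself should be left (a quick sanity check):
openstack service list
openstack endpoint list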
-
Verify that you can access the OpenStackClient pod. For more information, see Accessing the OpenStackClient pod in Maintaining the Red Hat OpenStack Services on OpenShift deployment.
-
Confirm that the Identity service endpoints are defined and are pointing to the control plane FQDNs:
openstack endpoint list | grep keystone
Adopting the Networking service
To adopt the Networking service (neutron), you patch an existing OpenStackControlPlane
custom resource (CR) that has the Networking service disabled. The patch starts the service with the
configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
The Networking service adoption is complete if you see the following results:
-
The NeutronAPI service is running.
-
The Identity service (keystone) endpoints are updated, and the same back end of the source cloud is available.
-
Ensure that Single Node OpenShift or OpenShift Local is running in the Red Hat OpenShift Container Platform (RHOCP) cluster.
-
Adopt the Identity service. For more information, see Adopting the Identity service.
-
Migrate your OVN databases to ovsdb-server instances that run in the Red Hat OpenShift Container Platform (RHOCP) cluster. For more information, see Migrating OVN data.
-
Patch the OpenStackControlPlane CR to deploy the Networking service:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  neutron:
    enabled: true
    apiOverride:
      route: {}
    template:
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.89
            spec:
              type: LoadBalancer
      databaseInstance: openstack
      databaseAccount: neutron
      secret: osp-secret
      networkAttachments:
      - internalapi
'
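Before you inspect the pods, you can wait for the Networking service CR to report Ready, in the same style as the Cinder wait used later in this document. A sketch, assuming the operator creates a NeutronAPI resource with the default name neutron:
oc wait --for condition=Ready --timeout=300s NeutronAPI/neutron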
-
Inspect the resulting Networking service pods:
NEUTRON_API_POD=`oc get pods -l service=neutron | tail -n 1 | cut -f 1 -d' '`
oc exec -t $NEUTRON_API_POD -c neutron-api -- cat /etc/neutron/neutron.conf
-
Ensure that the Neutron API service is registered in the Identity service:
openstack service list | grep network

openstack endpoint list | grep network

| 6a805bd6c9f54658ad2f24e5a0ae0ab6 | regionOne | neutron | network | True | public   | http://neutron-public-openstack.apps-crc.testing |
| b943243e596847a9a317c8ce1800fa98 | regionOne | neutron | network | True | internal | http://neutron-internal.openstack.svc:9696       |
-
Create sample resources so that you can test whether the user can create networks, subnets, ports, or routers:
openstack network create net
openstack subnet create --network net --subnet-range 10.0.0.0/24 subnet
openstack router create router
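To confirm that the sample resources exist, you can list them (a quick check against the names created above):
openstack network list
openstack subnet list
openstack router list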
Adopting the Image service
To adopt the Image service (glance), you patch an existing OpenStackControlPlane
custom resource (CR) that has the Image service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
The Image service adoption is complete if you see the following results:
-
The GlanceAPI service is up and running.
-
The Identity service endpoints are updated, and the same back end of the source cloud is available.
To complete the Image service adoption, ensure that your environment meets the following criteria:
-
You have a running director environment (the source cloud).
Adopting the Image service that is deployed with an NFS back end
Adopt the Image Service (glance) that you deployed with an NFS back end. To complete the following procedure, ensure that your environment meets the following criteria:
-
The Storage network is propagated to the Red Hat OpenStack Platform (RHOSP) control plane.
-
The Image service can reach the Storage network and connect to the nfs-server through port 2049.
-
You have completed the previous adoption steps.
-
In the source cloud, verify the NFS parameters that the overcloud uses to configure the Image service back end. Specifically, in your director heat templates, find the following variables that override the default content that is provided by the glance-nfs.yaml file in the /usr/share/openstack-tripleo-heat-templates/environments/storage directory:
GlanceBackend: file
GlanceNfsEnabled: true
GlanceNfsShare: 172.18.0.13:/nfs/glance
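One way to locate these overrides is to search your custom environment files for the Glance NFS parameters. A sketch, assuming your overcloud deployment files live under ~/templates; adjust the path to your environment:
grep -rE 'GlanceBackend|GlanceNfsEnabled|GlanceNfsShare' ~/templates/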
Procedure
-
Adopt the Image service and create a new default GlanceAPI instance that is connected with the existing NFS share:
cat << EOF > glance_nfs_patch.yaml
spec:
  extraMounts:
  - extraVol:
    - extraVolType: Nfs
      mounts:
      - mountPath: /var/lib/glance/images
        name: nfs
      propagation:
      - Glance
      volumes:
      - name: nfs
        nfs:
          path: /nfs/glance
          server: 172.18.0.13
    name: r1
    region: r1
  glance:
    enabled: true
    template:
      databaseInstance: openstack
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:file
        [glance_store]
        default_backend = default_backend
        [default_backend]
        filesystem_store_datadir = /var/lib/glance/images/
      storage:
        storageRequest: 10G
      glanceAPIs:
        default:
          replicas: 1
          type: single
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.89
                spec:
                  type: LoadBalancer
          networkAttachments:
          - storage
EOF
-
Replace <ip_address> with the IP address that you use to reach the nfs-server.
-
Replace <exported_path> with the exported path in the nfs-server.
-
Patch the OpenStackControlPlane CR to deploy the Image service with an NFS back end:
oc patch openstackcontrolplane openstack --type=merge --patch-file glance_nfs_patch.yaml
-
When GlanceAPI is active, confirm that you can see a single API instance:
oc get pods -l service=glance

NAME                      READY   STATUS    RESTARTS
glance-default-single-0   3/3     Running   0
-
Ensure that the description of the pod reports the following output:
Mounts:
...
nfs:
  Type:      NFS (an NFS mount that lasts the lifetime of a pod)
  Server:    {{ server ip address }}
  Path:      {{ nfs export path }}
  ReadOnly:  false
...
-
Check that the mount point that points to /var/lib/glance/images is mapped to the expected nfs server ip and nfs path that you defined in the new default GlanceAPI instance:
oc rsh -c glance-api glance-default-single-0

sh-5.1# mount
...
{{ ip address }}:/var/nfs on /var/lib/glance/images type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.18.0.5,local_lock=none,addr=172.18.0.5)
...
-
Confirm that the UUID is created in the exported directory on the NFS node. For example:
oc rsh openstackclient

sh-5.1$ openstack image list

sh-5.1$ curl -L -o /tmp/cirros-0.5.2-x86_64-disk.img http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
...
sh-5.1$ openstack image create --container-format bare --disk-format raw --file /tmp/cirros-0.5.2-x86_64-disk.img cirros
...
sh-5.1$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 634482ca-4002-4a6d-b1d5-64502ad02630 | cirros | active |
+--------------------------------------+--------+--------+
-
On the nfs-server node, the same uuid is in the exported /nfs/glance directory:
ls /nfs/glance/
634482ca-4002-4a6d-b1d5-64502ad02630
Adopting the Placement service
To adopt the Placement service, you patch an existing OpenStackControlPlane
custom resource (CR) that has the Placement service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
-
You imported your databases to MariaDB instances on the control plane. For more information, see Migrating databases to MariaDB instances.
-
You adopted the Identity service (keystone). For more information, see Adopting the Identity service.
-
Patch the OpenStackControlPlane CR to deploy the Placement service:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  placement:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: placement
      secret: osp-secret
      override:
        service:
          internal:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.89
            spec:
              type: LoadBalancer
'
-
Check that the Placement service endpoints are defined and pointing to the control plane FQDNs, and that the Placement API responds:
alias openstack="oc exec -t openstackclient -- openstack"

openstack endpoint list | grep placement

# Without the OpenStack CLI placement plugin installed:
PLACEMENT_PUBLIC_URL=$(openstack endpoint list -c 'Service Name' -c 'Service Type' -c URL | grep placement | grep public | awk '{ print $6; }')
oc exec -t openstackclient -- curl "$PLACEMENT_PUBLIC_URL"

# With the OpenStack CLI placement plugin installed:
openstack resource class list
Adopting the Compute service
To adopt the Compute service (nova), you patch an existing OpenStackControlPlane
custom resource (CR) where the Compute service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment. The following procedure describes a single-cell setup.
-
You have completed the previous adoption steps.
-
You have defined the openstack command alias:
alias openstack="oc exec -t openstackclient -- openstack"
-
Patch the OpenStackControlPlane CR to deploy the Compute service:
oc patch openstackcontrolplane openstack -n openstack --type=merge --patch '
spec:
  nova:
    enabled: true
    apiOverride:
      route: {}
    template:
      secret: osp-secret
      apiServiceTemplate:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.89
              spec:
                type: LoadBalancer
        customServiceConfig: |
          [workarounds]
          disable_compute_service_check_for_ffu=true
      metadataServiceTemplate:
        enabled: true # deploy single nova metadata on the top level
        override:
          service:
            metadata:
              annotations:
                metallb.universe.tf/address-pool: internalapi
                metallb.universe.tf/allow-shared-ip: internalapi
                metallb.universe.tf/loadBalancerIPs: 172.17.0.89
            spec:
              type: LoadBalancer
        customServiceConfig: |
          [workarounds]
          disable_compute_service_check_for_ffu=true
      schedulerServiceTemplate:
        customServiceConfig: |
          [workarounds]
          disable_compute_service_check_for_ffu=true
      cellTemplates:
        cell0:
          conductorServiceTemplate:
            customServiceConfig: |
              [workarounds]
              disable_compute_service_check_for_ffu=true
        cell1:
          metadataServiceTemplate:
            enabled: false # enable here to run it in a cell instead
            override:
              service:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.89
                spec:
                  type: LoadBalancer
            customServiceConfig: |
              [workarounds]
              disable_compute_service_check_for_ffu=true
          conductorServiceTemplate:
            customServiceConfig: |
              [workarounds]
              disable_compute_service_check_for_ffu=true
'
The local Conductor services are started for each cell, while the superconductor runs in cell0.
Note that disable_compute_service_check_for_ffu is mandatory for all imported Compute services until the external data plane is imported, and until Compute services are fast-forward upgraded. For more information, see Adopting Compute services to the RHOSO data plane and Performing a fast-forward upgrade on Compute services.
-
Check that Compute service endpoints are defined and pointing to the control plane FQDNs, and that the Nova API responds:
openstack endpoint list | grep nova
openstack server list
-
Compare the outputs with the topology-specific configuration in Retrieving topology-specific service configuration.
-
Query the superconductor to check that cell1 exists, and compare it to pre-adoption values:
. ~/.source_cloud_exported_variables
echo $PULL_OPENSTACK_CONFIGURATION_NOVAMANAGE_CELL_MAPPINGS
oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_cells | grep -F '| cell1 |'
The following changes are expected:
-
The cell1 nova database and username become nova_cell1.
-
The default cell is renamed to cell1.
-
The RabbitMQ transport URL no longer uses guest.
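To see these values directly, you can list the cell mappings in verbose mode, which prints the database connection and transport URL for each cell:
oc rsh nova-cell0-conductor-0 nova-manage cell_v2 list_cells --verbose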
-
At this point, the Compute service control plane services do not control the existing Compute service workloads. The control plane manages the data plane only after the data adoption process is completed. For more information, see Adopting Compute services to the RHOSO data plane.
To import external Compute services to the RHOSO data plane, you must upgrade them first. For more information, see Adopting Compute services to the RHOSO data plane, and Performing a fast-forward upgrade on Compute services.
Adopting the Block Storage service
To adopt a director-deployed Block Storage service (cinder), create the manifest based on the existing cinder.conf
file, deploy the Block Storage service, and validate the new deployment.
-
You have reviewed the Block Storage service limitations. For more information, see Limitations for adopting the Block Storage service.
-
You have planned the placement of the Block Storage services.
-
You have prepared the Red Hat OpenShift Container Platform (RHOCP) nodes where the volume and backup services run. For more information, see RHOCP preparation for Block Storage service adoption.
-
The Block Storage service (cinder) is stopped.
-
The service databases are imported into the control plane MariaDB.
-
The Identity service (keystone) and Key Manager service (barbican) are adopted.
-
The Storage network is correctly configured on the RHOCP cluster.
-
Prepare the secret that contains the NFS server connection details that the Block Storage service uses:
oc create secret generic cinder-nfs-config --from-file=nfs-cinder-conf
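For reference, a minimal sketch of what the nfs-cinder-conf file might contain, assuming the NFS server from this example exports a /nfs/cinder path; nas_host and nas_share_path are standard cinder NFS driver options, and you must substitute your own values:
[nfs]
nas_host=172.18.0.13
nas_share_path=/nfs/cinder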
-
Create a new file, for example, cinder_nfs_patch.yaml, with the following configuration:
cat << EOF > cinder_nfs_patch.yaml
spec:
  cinder:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      secret: osp-secret
      cinderAPI:
        override:
          service:
            internal:
              metadata:
                annotations:
                  metallb.universe.tf/address-pool: internalapi
                  metallb.universe.tf/allow-shared-ip: internalapi
                  metallb.universe.tf/loadBalancerIPs: 172.17.0.89
              spec:
                type: LoadBalancer
        replicas: 1
        customServiceConfig: |
          [DEFAULT]
          default_volume_type=tripleo
      cinderScheduler:
        replicas: 1
      cinderBackup:
        networkAttachments:
        - storage
        replicas: 0 # backend needs to be configured
      cinderVolumes:
        nfs:
          networkAttachments:
          - storage
          customServiceConfig: |
            [nfs]
            volume_backend_name=nfs
            volume_driver=cinder.volume.drivers.nfs.NfsDriver
            nfs_snapshot_support=true
            nas_secure_file_operations=false
            nas_secure_file_permissions=false
          customServiceConfigSecrets:
          - cinder-nfs-config
EOF
-
Apply the configuration:
oc patch openstackcontrolplane openstack --type=merge --patch-file=cinder_nfs_patch.yaml
-
Wait for Cinder control plane services' CRs to become ready:
oc wait --for condition=Ready --timeout=300s Cinder/cinder
-
Retrieve the list of the previous scheduler and backup services:
openstack volume service list
Sample output:
+------------------+-------------------------+------+---------+-------+----------------------------+
| Binary           | Host                    | Zone | Status  | State | Updated At                 |
+------------------+-------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | standalone.localdomain  | nova | enabled | up    | 2025-01-17T17:08:12.000000 |
| cinder-backup    | standalone.localdomain  | nova | enabled | up    | 2025-01-17T17:08:12.000000 |
| cinder-volume    | hostgroup@tripleo_nfs   | nova | enabled | up    | 2025-01-17T17:08:12.000000 |
| cinder-scheduler | cinder-scheduler-0      | nova | enabled | up    | 2025-01-17T17:08:23.000000 |
| cinder-volume    | cinder-volume-nfs-0@nfs | nova | enabled | up    | 2025-01-17T17:09:01.000000 |
+------------------+-------------------------+------+---------+-------+----------------------------+
-
Remove services for hosts that are in the down state:
oc exec -it cinder-scheduler-0 -c cinder-scheduler -- cinder-manage service remove cinder-scheduler standalone.localdomain
oc exec -it cinder-scheduler-0 -c cinder-scheduler -- cinder-manage service remove cinder-backup standalone.localdomain
oc exec -it cinder-scheduler-0 -c cinder-scheduler -- cinder-manage service remove cinder-volume hostgroup@tripleo_nfs
-
Replace <service_binary> with the name of the binary, for example, cinder-backup.
-
Replace <service_host> with the host name, for example, cinder-backup-0.
-
Check that all the services are up and running:
openstack volume service list

+------------------+-------------------------+------+---------+-------+----------------------------+
| Binary           | Host                    | Zone | Status  | State | Updated At                 |
+------------------+-------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | cinder-scheduler-0      | nova | enabled | up    | 2025-01-17T17:11:23.000000 |
| cinder-volume    | cinder-volume-nfs-0@nfs | nova | enabled | up    | 2025-01-17T17:11:31.000000 |
+------------------+-------------------------+------+---------+-------+----------------------------+
-
Apply the DB data migrations:
You are not required to run the data migrations at this step, but you must run them before the next upgrade. However, for adoption, it is recommended to run the migrations now to ensure that there are no issues before you run production workloads on the deployment.
oc exec -it cinder-scheduler-0 -- cinder-manage db online_data_migrations
-
Ensure that the openstack alias is defined:
alias openstack="oc exec -t openstackclient -- openstack"
-
Confirm that Block Storage service endpoints are defined and pointing to the control plane FQDNs:
openstack endpoint list --service <endpoint>
-
Replace <endpoint> with the name of the endpoint that you want to confirm.
-
Confirm that the Block Storage services are running:
openstack volume service list
Cinder API services do not appear in the list. However, if you get a response from the openstack volume service list command, at least one of the Cinder API services is running.
-
Confirm that you have your previous volume types, volumes, snapshots, and backups:
openstack volume type list
openstack volume list
openstack volume snapshot list
openstack volume backup list
-
To confirm that the configuration is working, perform the following steps:
-
Create a volume from an image to check that the connection to Image Service (glance) is working:
openstack volume create --image cirros --bootable --size 1 disk_new
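To confirm that the new volume becomes usable, you can check its status, which should reach available once the image is copied; the volume name matches the one created above:
openstack volume show disk_new -c status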
Adopting the Dashboard service
To adopt the Dashboard service (horizon), you patch an existing OpenStackControlPlane
custom resource (CR) that has the Dashboard service disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform environment.
-
You adopted Memcached. For more information, see Deploying back-end services.
-
You adopted the Identity service (keystone). For more information, see Adopting the Identity service.
-
Patch the OpenStackControlPlane CR to deploy the Dashboard service:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  horizon:
    enabled: true
    apiOverride:
      route: {}
    template:
      memcachedInstance: memcached
      secret: osp-secret
'
-
Verify that the Dashboard service instance is successfully deployed and ready:
oc get horizon
-
Confirm that the Dashboard service is reachable and returns a 200 status code:
PUBLIC_URL=$(oc get horizon horizon -o jsonpath='{.status.endpoint}')
curl --silent --output /dev/stderr --head --write-out "%{http_code}" "$PUBLIC_URL/dashboard/auth/login/?next=/dashboard/" -k | grep 200
Adopting the Orchestration service
To adopt the Orchestration service (heat), you patch an existing OpenStackControlPlane custom resource (CR) where the Orchestration service is disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) environment.
After you complete the adoption process, you have CRs for Heat, HeatAPI, HeatEngine, and HeatCFNAPI, and endpoints within the Identity service (keystone) to facilitate these services.
-
The source director environment is running.
-
The target Red Hat OpenShift Container Platform (RHOCP) environment is running.
-
You adopted MariaDB and the Identity service.
-
If your existing Orchestration service stacks contain resources from other services such as the Networking service (neutron), Compute service (nova), Object Storage service (swift), and so on, adopt those services before adopting the Orchestration service.
-
Retrieve the existing auth_encryption_key and service passwords. You use these passwords to patch the osp-secret. In the following example, the auth_encryption_key is used as HeatAuthEncryptionKey and the service password is used as HeatPassword:
HEAT_AUTH_ENCRYPTION_KEY_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatAuthEncryptionKey:' | awk -F ': ' '{ print $2; }')
HEAT_AUTH_ENCRYPTION_KEY_BASE64=$(echo -n "$HEAT_AUTH_ENCRYPTION_KEY_PASSWORD" | base64)
HEAT_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatPassword:' | awk -F ': ' '{ print $2; }')
HEAT_PASSWORD_BASE64=$(echo -n "$HEAT_PASSWORD" | base64)
HEAT_STACK_DOMAIN_ADMIN_PASSWORD=$(cat ~/tripleo-standalone-passwords.yaml | grep ' HeatStackDomainAdminPassword:' | awk -F ': ' '{ print $2; }')
HEAT_STACK_DOMAIN_ADMIN_PASSWORD_BASE64=$(echo -n "$HEAT_STACK_DOMAIN_ADMIN_PASSWORD" | base64)
-
Patch the osp-secret to update the HeatAuthEncryptionKey and HeatPassword parameters. These values must match the values in the director Orchestration service configuration:
oc patch secret osp-secret --type='json' -p="[{'op': 'replace', 'path': '/data/HeatAuthEncryptionKey', 'value': '$HEAT_AUTH_ENCRYPTION_KEY_BASE64'}]"
oc patch secret osp-secret --type='json' -p="[{'op': 'replace', 'path': '/data/HeatPassword', 'value': '$HEAT_PASSWORD_BASE64'}]"
oc patch secret osp-secret --type='json' -p="[{'op': 'replace', 'path': '/data/HeatStackDomainAdminPassword', 'value': '$HEAT_STACK_DOMAIN_ADMIN_PASSWORD_BASE64'}]"
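To confirm that the patched values match the source cloud, you can decode one of them and compare it with the variable that you set earlier (a quick check using bash process substitution):
diff <(oc get secret osp-secret -o jsonpath='{.data.HeatPassword}' | base64 -d) <(echo -n "$HEAT_PASSWORD") && echo "HeatPassword matches"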
-
Patch the OpenStackControlPlane CR to deploy the Orchestration service:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  heat:
    enabled: true
    apiOverride:
      route: {}
    template:
      databaseInstance: openstack
      databaseAccount: heat
      secret: osp-secret
      memcachedInstance: memcached
      passwordSelectors:
        authEncryptionKey: HeatAuthEncryptionKey
        service: HeatPassword
        stackDomainAdminPassword: HeatStackDomainAdminPassword
'
-
Ensure that the statuses of all the CRs are Setup complete:
oc get Heat,HeatAPI,HeatEngine,HeatCFNAPI

NAME                           STATUS   MESSAGE
heat.heat.openstack.org/heat   True     Setup complete

NAME                                  STATUS   MESSAGE
heatapi.heat.openstack.org/heat-api   True     Setup complete

NAME                                        STATUS   MESSAGE
heatengine.heat.openstack.org/heat-engine   True     Setup complete

NAME                                        STATUS   MESSAGE
heatcfnapi.heat.openstack.org/heat-cfnapi   True     Setup complete
-
Check that the Orchestration service is registered in the Identity service:
oc exec -it openstackclient -- openstack service list -c Name -c Type

+------------+----------------+
| Name       | Type           |
+------------+----------------+
| heat       | orchestration  |
| glance     | image          |
| heat-cfn   | cloudformation |
| ceilometer | Ceilometer     |
| keystone   | identity       |
| placement  | placement      |
| cinderv3   | volumev3       |
| nova       | compute        |
| neutron    | network        |
+------------+----------------+
oc exec -it openstackclient -- openstack endpoint list --service=heat -f yaml

- Enabled: true
  ID: 1da7df5b25b94d1cae85e3ad736b25a5
  Interface: public
  Region: regionOne
  Service Name: heat
  Service Type: orchestration
  URL: http://heat-api-public-openstack-operators.apps.okd.bne-shift.net/v1/%(tenant_id)s
- Enabled: true
  ID: 414dd03d8e9d462988113ea0e3a330b0
  Interface: internal
  Region: regionOne
  Service Name: heat
  Service Type: orchestration
  URL: http://heat-api-internal.openstack-operators.svc:8004/v1/%(tenant_id)s
-
Check that the Orchestration service engine services are running:
oc exec -it openstackclient -- openstack orchestration service list -f yaml

- Binary: heat-engine
  Engine ID: b16ad899-815a-4b0c-9f2e-e6d9c74aa200
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:01.000000'
- Binary: heat-engine
  Engine ID: 887ed392-0799-4310-b95c-ac2d3e6f965f
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:00.000000'
- Binary: heat-engine
  Engine ID: 26ed9668-b3f2-48aa-92e8-2862252485ea
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:00.000000'
- Binary: heat-engine
  Engine ID: 1011943b-9fea-4f53-b543-d841297245fd
  Host: heat-engine-6d47856868-p7pzz
  Hostname: heat-engine-6d47856868-p7pzz
  Status: up
  Topic: engine
  Updated At: '2023-10-11T21:48:01.000000'
-
Verify that you can see your Orchestration service stacks:
openstack stack list -f yaml

- Creation Time: '2023-10-11T22:03:20Z'
  ID: 20f95925-7443-49cb-9561-a1ab736749ba
  Project: 4eacd0d1cab04427bc315805c28e66c9
  Stack Name: test-networks
  Stack Status: CREATE_COMPLETE
  Updated Time: null
Adopting Telemetry services
To adopt Telemetry services, you patch an existing OpenStackControlPlane
custom resource (CR) that has Telemetry services disabled to start the service with the configuration parameters that are provided by the Red Hat OpenStack Platform (RHOSP) 17.1 environment.
If you adopt Telemetry services, the observability solution that is used in the RHOSP 17.1 environment, Service Telemetry Framework, is removed from the cluster. The new solution is deployed in the Red Hat OpenStack Services on OpenShift (RHOSO) environment, allowing for metrics, and optionally logs, to be retrieved and stored in the new back ends.
You cannot automatically migrate old data because different back ends are used. Metrics and logs are considered short-lived data and are not intended to be migrated to the RHOSO environment. For information about adopting legacy autoscaling stack templates to the RHOSO environment, see Adopting Autoscaling services.
-
The director environment is running (the source cloud).
-
Previous adoption steps are completed.
-
Create a Subscription CR to deploy cluster-observability-operator:
cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-operators
spec:
  channel: stable
  installPlanApproval: Automatic
  name: cluster-observability-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
-
Wait for the installation to succeed:
oc get clusterserviceversion -n openshift-operators -o custom-columns=Name:.metadata.name,Phase:.status.phase
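If you prefer to block until the Operator reports Succeeded, a small polling loop in the style of the earlier openstackclient wait works (a sketch; it greps the same CSV listing):
while ! oc get clusterserviceversion -n openshift-operators | grep cluster-observability-operator | grep -q Succeeded; do echo "waiting for cluster-observability-operator"; sleep 5; done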
-
Patch the OpenStackControlPlane CR to deploy Ceilometer services:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  telemetry:
    enabled: true
    template:
      ceilometer:
        passwordSelector:
          ceilometerService: CeilometerPassword
        enabled: true
        secret: osp-secret
        serviceUser: ceilometer
      logging:
        enabled: false
        port: 10514
        cloNamespace: openshift-logging
'
-
Optional: Enable the metrics storage back end:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  telemetry:
    template:
      metricStorage:
        enabled: true
        monitoringStack:
          alertingEnabled: true
          scrapeInterval: 30s
          storage:
            strategy: persistent
            retention: 24h
            persistent:
              pvcStorageRequest: 20G
'
-
Verify that the alertmanager and prometheus pods are available:
oc get pods -l alertmanager=metric-storage -n openstack

NAME                            READY   STATUS    RESTARTS   AGE
alertmanager-metric-storage-0   2/2     Running   0          46s
alertmanager-metric-storage-1   2/2     Running   0          46s

oc get pods -l prometheus=metric-storage -n openstack

NAME                          READY   STATUS    RESTARTS   AGE
prometheus-metric-storage-0   3/3     Running   0          46s
-
Inspect the resulting Ceilometer pods:
CEILOMETER_POD=`oc get pods -l service=ceilometer -n openstack | tail -n 1 | cut -f 1 -d' '`
oc exec -t $CEILOMETER_POD -c ceilometer-central-agent -- cat /etc/ceilometer/ceilometer.conf
-
Inspect enabled pollsters:
oc get secret ceilometer-config-data -o jsonpath="{.data['polling\.yaml\.j2']}" | base64 -d
-
Optional: Override default pollsters according to the requirements of your environment:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  telemetry:
    template:
      ceilometer:
        defaultConfigOverwrite:
          polling.yaml.j2: |
            ---
            sources:
            - name: pollsters
              interval: 100
              meters:
              - volume.*
              - image.size
        enabled: true
        secret: osp-secret
'
Adopting autoscaling services
To adopt services that enable autoscaling, you patch an existing OpenStackControlPlane
custom resource (CR) where the Alarming services (aodh) are disabled. The patch starts the service with the configuration parameters that are provided by the Red Hat OpenStack Platform environment.
-
The source director environment is running.
-
You have adopted the following services:
-
MariaDB
-
Identity service (keystone)
-
Orchestration service (heat)
-
Telemetry service
-
Patch the OpenStackControlPlane CR to deploy the autoscaling services:
oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  telemetry:
    enabled: true
    template:
      autoscaling:
        enabled: true
        aodh:
          passwordSelector:
            aodhService: AodhPassword
          databaseAccount: aodh
          databaseInstance: openstack
          secret: osp-secret
          serviceUser: aodh
        heatInstance: heat
'
-
Inspect the aodh pods:
AODH_POD=`oc get pods -l service=aodh -n openstack | tail -n 1 | cut -f 1 -d' '`
oc exec -t $AODH_POD -c aodh-api -- cat /etc/aodh/aodh.conf
-
Check whether the aodh API service is registered in the Identity service:
openstack endpoint list | grep aodh

| d05d120153cd4f9b8310ac396b572926 | regionOne | aodh | alarming | True | internal | http://aodh-internal.openstack.svc:8042 |
| d6daee0183494d7a9a5faee681c79046 | regionOne | aodh | alarming | True | public   | http://aodh-public.openstack.svc:8042   |
-
Optional: Create aodh alarms with the PrometheusAlarm alarm type. You must use the PrometheusAlarm alarm type instead of GnocchiAggregationByResourcesAlarm:
openstack alarm create --name high_cpu_alarm \
  --type prometheus \
  --query "(rate(ceilometer_cpu{resource_name=~'cirros'})) * 100" \
  --alarm-action 'log://' \
  --granularity 15 \
  --evaluation-periods 3 \
  --comparison-operator gt \
  --threshold 7000000000
-
Verify that the alarm is enabled:
openstack alarm list

+--------------------------------------+------------+------------------+-------+----------+---------+
| alarm_id                             | type       | name             | state | severity | enabled |
+--------------------------------------+------------+------------------+-------+----------+---------+
| 209dc2e9-f9d6-40e5-aecc-e767ce50e9c0 | prometheus | prometheus_alarm | ok    | low      | True    |
+--------------------------------------+------------+------------------+-------+----------+---------+