Install RHOSP 17.1 Environment to Adopt

Access Your Lab Environment

  1. Access the bastion by executing the following command:

    ssh {bastion_ssh_user_name}@{bastion_public_hostname} -p {bastion_ssh_port}

    The SSH password is {bastion_ssh_password}. The GUID of your lab is: my-guid

  2. Optionally, copy your public SSH key to the bastion so that you can authenticate with the server without entering a password each time you connect:

    ssh-copy-id -p {bastion_ssh_port} {bastion_ssh_user_name}@{bastion_public_hostname}
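
    To confirm that key-based authentication works, the following should log in and exit without prompting for a password:

    ssh -p {bastion_ssh_port} {bastion_ssh_user_name}@{bastion_public_hostname} true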

    If needed, you can navigate to the OpenShift console URL {openshift_cluster_console_url}[{openshift_cluster_console_url}^] using the user admin and the password {openshift_cluster_admin_password}.

RHOSP 17.1 Installation

This installation method for RHOSP 17.1 is not supported and is intended for lab or testing purposes only.
  1. Clone the Files Repository

    In the bastion terminal, clone the repository and change to the directory containing the files that will be used later in the lab:

    git clone https://github.com/rh-osp-demo/showroom_osp-on-ocp-day2.git labrepo
    cd labrepo/content/files

    Copy the SSH keys to the all-in-one host:

    scp -i /home/lab-user/.ssh/my-guidkey.pem /home/lab-user/.ssh/my-guidkey.* cloud-user@allinone:/home/cloud-user/.ssh/
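
    As a quick sanity check (not part of the official flow), you can verify key-based access to the all-in-one host before continuing:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@allinone hostname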

Install the NFS Server on the NFS Server Node

  1. Configure the NFS server for Glance and Cinder:

    From the bastion, connect to the nfs-server host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@nfsserver

    Escalate to root:

    sudo -i

    Create the export directories, configure the exports and the networking, and enable the nfs-server:

    mkdir -p /nfs/cinder
    mkdir -p /nfs/glance
    chmod 777 /nfs/cinder
    chmod 777 /nfs/glance
    
    cat << EOF > /etc/exports
    /nfs/cinder *(rw,sync,no_root_squash)
    /nfs/glance *(rw,sync,no_root_squash)
    EOF
    
    nmcli co delete 'Wired connection 1'
    nmcli con add con-name "static-eth1" ifname eth1 type ethernet ip4 172.18.0.13/24
    nmcli con up "static-eth1"
    
    systemctl start nfs-server
    systemctl enable nfs-server
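    
    # Optional sanity check: both exports defined above should be listed
    exportfs -v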
    logout
    logout

Install the RHOSP 17.1 Controller

  1. Configure the Standalone Controller

    From the bastion, connect to the all-in-one host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@allinone

    Configure networking:

    eval "$(ssh-agent -s)"
    ssh-add ~/.ssh/my-guidkey.pem
    sudo hostnamectl set-hostname standalone.localdomain
    sudo nmcli co delete 'Wired connection 2'
    sudo nmcli con add con-name "static-eth1" ifname eth1 type ethernet ip4 172.22.0.15/24
    sudo nmcli con up "static-eth1"
    sudo nmcli co delete 'Wired connection 3'
    sudo nmcli con add con-name "static-eth2" ifname eth2 type ethernet ip4 172.17.0.15/24
    sudo nmcli con up "static-eth2"
    sudo nmcli co delete 'Wired connection 4'
    sudo nmcli con add con-name "static-eth3" ifname eth3 type ethernet ip4 172.18.0.15/24
    sudo nmcli con up "static-eth3"
    sudo nmcli co delete 'Wired connection 5'
    sudo nmcli con add con-name "static-eth4" ifname eth4 type ethernet ip4 172.19.0.15/24
    sudo nmcli con up "static-eth4"
    sudo nmcli co delete 'Wired connection 6'
    sudo nmcli con add con-name "static-eth5" ifname eth5 type ethernet ip4 172.21.0.15/24
    sudo nmcli con up "static-eth5"
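    
    # Optional: confirm the static addresses before logging out; the same check
    # can be run on compute02 and compute03 after the next two steps
    ip -br addr show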
    logout
  2. Configure the Compute Node 2

    From the bastion, connect to the compute02 host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@compute02

    Configure networking:

    sudo hostnamectl set-hostname compute02.localdomain
    sudo nmcli co delete 'Wired connection 2'
    sudo nmcli con add con-name "static-eth1" ifname eth1 type ethernet ip4 172.22.0.110/24
    sudo nmcli con up "static-eth1"
    sudo nmcli co delete 'Wired connection 3'
    sudo nmcli con add con-name "static-eth2" ifname eth2 type ethernet ip4 172.17.0.110/24
    sudo nmcli con up "static-eth2"
    sudo nmcli co delete 'Wired connection 4'
    sudo nmcli con add con-name "static-eth3" ifname eth3 type ethernet ip4 172.18.0.110/24
    sudo nmcli con up "static-eth3"
    sudo nmcli co delete 'Wired connection 5'
    sudo nmcli con add con-name "static-eth4" ifname eth4 type ethernet ip4 172.19.0.110/24
    sudo nmcli con up "static-eth4"
    sudo nmcli co delete 'Wired connection 6'
    sudo nmcli con add con-name "static-eth5" ifname eth5 type ethernet ip4 172.21.0.110/24
    sudo nmcli con up "static-eth5"
    logout
  3. Configure the Compute Node 3

    From the bastion, connect to the compute03 host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@compute03

    Configure networking:

    sudo hostnamectl set-hostname compute03.localdomain
    sudo nmcli co delete 'Wired connection 2'
    sudo nmcli con add con-name "static-eth1" ifname eth1 type ethernet ip4 172.22.0.112/24
    sudo nmcli con up "static-eth1"
    sudo nmcli co delete 'Wired connection 3'
    sudo nmcli con add con-name "static-eth2" ifname eth2 type ethernet ip4 172.17.0.112/24
    sudo nmcli con up "static-eth2"
    sudo nmcli co delete 'Wired connection 4'
    sudo nmcli con add con-name "static-eth3" ifname eth3 type ethernet ip4 172.18.0.112/24
    sudo nmcli con up "static-eth3"
    sudo nmcli co delete 'Wired connection 5'
    sudo nmcli con add con-name "static-eth4" ifname eth4 type ethernet ip4 172.19.0.112/24
    sudo nmcli con up "static-eth4"
    sudo nmcli co delete 'Wired connection 6'
    sudo nmcli con add con-name "static-eth5" ifname eth5 type ethernet ip4 172.21.0.112/24
    sudo nmcli con up "static-eth5"
    logout
  4. Install the RHOSP 17.1 Standalone Controller

    From the bastion, connect to the all-in-one host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@allinone

    Register the host with your Red Hat credentials (set the RHN_user and RHN_password variables first), enable the required repositories, and run the installation script:

    sudo subscription-manager register --username $RHN_user --password $RHN_password
    sudo subscription-manager release --set=9.2
    sudo subscription-manager repos --disable=*
    sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms \
    --enable=rhel-9-for-x86_64-appstream-eus-rpms \
    --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
    --enable=openstack-17.1-for-rhel-9-x86_64-rpms \
    --enable=fast-datapath-for-rhel-9-x86_64-rpms
    sudo dnf -y update
    sudo dnf -y install python3-tripleoclient python-jinja2 git
    git clone https://github.com/rh-osp-demo/showroom_osp-on-ocp-day2.git labrepo
    cp labrepo/content/standalone/* /home/cloud-user/.
    sh openstack_standalone.sh
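
    When the deployment completes, you can run a quick sanity check; this assumes the deploy wrote a standalone entry to clouds.yaml, which the next step also relies on:

    export OS_CLOUD=standalone
    openstack compute service list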
  5. Disable the Standalone Compute

    Still on the all-in-one host, disable and remove the standalone nova-compute service:

    export OS_CLOUD=standalone
    openstack compute service set --disable standalone.localdomain nova-compute
    sudo systemctl stop tripleo_nova_compute.service
    sudo systemctl disable tripleo_nova_compute.service
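    # The service ID below (5) matches this lab; if it differs in your
    # environment, find the right ID first with:
    #   openstack compute service list --service nova-compute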
    openstack compute service delete 5
  6. Generate Files for Compute Installation

    Execute the openstack_compute_prepare.sh script and copy the generated files to compute02 and compute03:

    sh openstack_compute_prepare.sh
    scp -i /home/cloud-user/.ssh/my-guidkey.pem oslo.yaml cloud-user@172.22.0.110:
    scp -i /home/cloud-user/.ssh/my-guidkey.pem passwords.yaml cloud-user@172.22.0.110:
    scp -i /home/cloud-user/.ssh/my-guidkey.pem oslo.yaml cloud-user@172.22.0.112:
    scp -i /home/cloud-user/.ssh/my-guidkey.pem passwords.yaml cloud-user@172.22.0.112:
  7. Install the Compute Node 2

    From the bastion, connect to the compute02 host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@compute02

    Register the host, enable the required repositories, and execute the openstack_compute_02.sh script to install compute02:

    sudo subscription-manager register --username $RHN_user --password $RHN_password
    sudo subscription-manager release --set=9.2
    sudo subscription-manager repos --disable=*
    sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms \
    --enable=rhel-9-for-x86_64-appstream-eus-rpms \
    --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
    --enable=openstack-17.1-for-rhel-9-x86_64-rpms \
    --enable=fast-datapath-for-rhel-9-x86_64-rpms
    sudo dnf -y update
    sudo dnf -y install python3-tripleoclient python-jinja2 git
    git clone https://github.com/rh-osp-demo/showroom_osp-on-ocp-day2.git labrepo
    cp -n labrepo/content/standalone/* /home/cloud-user/
    sh openstack_compute_02.sh
  8. Install the Compute Node 3

    From the bastion, connect to the compute03 host:

    ssh -i /home/lab-user/.ssh/my-guidkey.pem cloud-user@compute03

    Register the host, enable the required repositories, and execute the openstack_compute_03.sh script to install compute03:

    sudo subscription-manager register --username $RHN_user --password $RHN_password
    sudo subscription-manager release --set=9.2
    sudo subscription-manager repos --disable=*
    sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms \
    --enable=rhel-9-for-x86_64-appstream-eus-rpms \
    --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
    --enable=openstack-17.1-for-rhel-9-x86_64-rpms \
    --enable=fast-datapath-for-rhel-9-x86_64-rpms
    sudo dnf -y update
    sudo dnf -y install python3-tripleoclient python-jinja2 git
    git clone https://github.com/rh-osp-demo/showroom_osp-on-ocp-day2.git labrepo
    cp -n labrepo/content/standalone/* /home/cloud-user/
    sh openstack_compute_03.sh

You can run openstack_compute_02.sh and openstack_compute_03.sh in parallel to save execution time.

  1. Discover the Compute Hosts

    From the all-in-one host, execute:

    sudo podman exec -it nova_api nova-manage cell_v2 discover_hosts --verbose
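
    You can confirm that compute02 and compute03 were mapped to the cell, for example:

    sudo podman exec -it nova_api nova-manage cell_v2 list_hosts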
  2. Create Some Workloads

    export OS_CLOUD=standalone
    export GATEWAY=172.21.0.1
    export PUBLIC_NETWORK_CIDR=172.21.0.1/24
    export PRIVATE_NETWORK_CIDR=192.168.100.0/24
    export PUBLIC_NET_START=172.21.0.200
    export PUBLIC_NET_END=172.21.0.210
    export DNS_SERVER=172.30.0.10
    openstack flavor create --ram 512 --disk 1 --vcpu 1 --public tiny
    curl -O -L https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
    openstack image create cirros --container-format bare --disk-format qcow2 --public --file cirros-0.6.2-x86_64-disk.img
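
    Optionally confirm that the image upload succeeded (the status should be active):

    openstack image show cirros -c status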
  3. Create and Inject an SSH Key

    ssh-keygen -m PEM -t rsa -b 2048 -f ~/.ssh/id_rsa_pem
  4. Create the Remaining OpenStack Resources

    openstack keypair create --public-key ~/.ssh/id_rsa_pem.pub default
    openstack security group create basic
    openstack security group rule create basic --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
    openstack security group rule create --protocol icmp basic
    openstack security group rule create --protocol udp --dst-port 53:53 basic
    openstack network create --external --provider-physical-network datacentre --provider-network-type flat public
    openstack network create --internal private
    openstack subnet create public-net \
    --subnet-range $PUBLIC_NETWORK_CIDR \
    --no-dhcp \
    --gateway $GATEWAY \
    --allocation-pool start=$PUBLIC_NET_START,end=$PUBLIC_NET_END \
    --network public
    openstack subnet create private-net \
    --subnet-range $PRIVATE_NETWORK_CIDR \
    --network private
    openstack router create vrouter
    openstack router set vrouter --external-gateway public
    openstack router add subnet vrouter private-net
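    
    # Optional: confirm the router received a gateway port on the public network
    openstack router show vrouter -c external_gateway_info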
    
    openstack server create \
        --flavor tiny --key-name default --network private --security-group basic --availability-zone=nova:compute02.localdomain \
        --image cirros test-server-compute-02
    openstack floating ip create public --floating-ip-address 172.21.0.204
    
    openstack server create \
        --flavor tiny --key-name default --network private --security-group basic --availability-zone=nova:compute03.localdomain \
        --image cirros test-server-compute-03
    openstack floating ip create public --floating-ip-address 172.21.0.205
    
    openstack server add floating ip test-server-compute-02 172.21.0.204
    openstack server add floating ip test-server-compute-03 172.21.0.205
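
    As a final check, you can log in to one of the instances through its floating IP (the default user on the CirrOS image is cirros; this assumes the floating network is reachable from the all-in-one host):

    openstack server list
    ssh -i ~/.ssh/id_rsa_pem cirros@172.21.0.204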