If you don't already have an OpenStack cloud available, you can use the
metal/ module to deploy a single-node lab. For multi-node
labs you will need to modify the included Ansible roles, or follow the
Kolla Ansible documentation
to manually deploy your own.
Before we begin deployment, we need to configure the variables files.
Be sure to set these correctly, otherwise the deployment will not work.
Search for the following values in globals.yml, and make sure they are set correctly.

metal/vars/globals.yml
# The desired static IP address of the node.
# The network interface that is connected to your local network.
# The other network interface.
# This one should NOT have an IP address, and doesn't need a connection.
Make sure the highlighted values in main.yml are set correctly.

metal/vars/main.yml
# Name (path) of the venv, using the root user's home as the base.
# Ex. A value of 'kolla-venv' will become '/root/kolla-venv'
# Target disk for the root filesystem.
# Path to your SSH pubkey file. This will be used to access the node.
# The hostname to be assigned to the AIO OpenStack node.
# A list of two public DNS resolvers to use for the network.
public_dns_servers: ['8.8.8.8', '1.1.1.1']
# Desired names (within OpenStack) of your 'public' network and subnet.
# This network is attached to your LAN.
# CIDR of your LAN subnet.
# The IP address of your LAN gateway (your router).
# Range of the floating IP address pool for public OpenStack network.
# This should be OUTSIDE of the DHCP range of your router, and should
# NOT include the IP address of your gateway or your OpenStack node.
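Put together, the file might look something like the sketch below. Note that only public_dns_servers is a key confirmed by this guide; the other variable names are hypothetical placeholders (check metal/vars/main.yml for the actual keys), and the addresses assume a typical 192.168.1.0/24 home LAN.

```yaml
# Hypothetical sketch of metal/vars/main.yml. Only public_dns_servers is a
# confirmed key; the other names and all values here are illustrative.
venv_name: kolla-venv                    # becomes /root/kolla-venv
root_disk: /dev/sda                      # target disk for the root filesystem
ssh_pubkey_path: ~/.ssh/id_ed25519.pub   # used to access the node
node_hostname: openstack-aio             # hostname of the AIO node
public_dns_servers: ['8.8.8.8', '1.1.1.1']
public_network_name: public-net          # 'public' network within OpenStack
public_subnet_name: public-subnet
lan_cidr: 192.168.1.0/24                 # CIDR of your LAN subnet
lan_gateway: 192.168.1.1                 # your router
# Outside the router's DHCP range; excludes the gateway and the node itself.
floating_ip_range: 192.168.1.200,192.168.1.250
```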
Installing the host OS
Run 00-make-kickstart-iso.yml to generate a kickstart ISO file.
Write the ISO file to a USB drive (or use PXE boot), and boot from it.
dd if=<iso-file> of=/dev/<usb-drive> bs=4M conv=fsync oflag=direct status=progress
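Before booting, it is worth verifying that the device contents actually match the ISO. A minimal sketch of the technique, using temporary files in place of the real ISO and /dev/<usb-drive> so it can run anywhere; substitute your actual paths:

```shell
# Sketch: verify a raw image write by comparing source and target byte-for-byte.
# Temp files stand in for the ISO and the USB device here.
iso=$(mktemp)
dev=$(mktemp)
head -c 1048576 /dev/urandom > "$iso"   # stand-in for the kickstart ISO
cp "$iso" "$dev"                        # stand-in for the dd write
size=$(stat -c %s "$iso")               # compare only the first $size bytes,
cmp -n "$size" "$iso" "$dev" \
  && echo "write verified"              # since the real device is larger
```

The `-n "$size"` limit matters with a real USB drive: the device is larger than the ISO, so a full comparison would always report a difference past the end of the image.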
Wait for the automated installer to complete (the system will reboot).
SSH into your new node, and configure an additional LVM Volume Group named cinder-standard on your disk or RAID array.

Create Cinder volume group
# Partition a virtual device / physical disk (one partition spanning the disk).
parted --script /dev/<path-to-vdev> mklabel gpt mkpart primary 0% 100%
# Create a Physical Volume on the partition.
pvcreate /dev/<path-to-vdev>1
# Create a Volume Group with the Physical Volume.
vgcreate cinder-standard /dev/<path-to-vdev>1
Run 10-deploy-openstack.yml against your new node. Note the trailing comma after the IP address: it tells Ansible to treat the value as an inline inventory list rather than an inventory file.
ansible-playbook -i <node-ip-address>, 10-deploy-openstack.yml
Copy the generated clouds.yaml to your OpenStack config directory.
mkdir -p ~/.config/openstack
cp output/clouds.yaml ~/.config/openstack/clouds.yaml
This optional step will deploy a local GitLab Runner, in a Docker container, directly on the OpenStack host. This can be used to run Terraform CI/CD jobs.
Using a self-hosted runner can be dangerous. If a malicious actor were to open a Merge Request containing exploit code, they could execute that code on your OpenStack host (and within your network). To counter this risk, adjust your repository settings so that untrusted users cannot run CI jobs without explicit approval.
Create a new Runner in your GitLab repository settings, with the tag
openstack, and set the token environment variable on your local system.
Go back to the Runners page, find the ID number of the runner, and set the ID environment variable on your local system.

Tip

The ID number will be in the format #12345678. Do NOT include the hash sign when setting the variable, only the digits.
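The two exports above can be sketched as follows. The variable names GITLAB_RUNNER_TOKEN and GITLAB_RUNNER_ID are hypothetical placeholders (check which names the playbook actually reads), and the token value is a dummy; the sketch also shows stripping the leading hash with shell parameter expansion:

```shell
# Hypothetical variable names -- check what 99-gitlab-runner.yml expects.
export GITLAB_RUNNER_TOKEN='glrt-xxxxxxxxxxxxxxxxxxxx'  # dummy token value

raw_id='#12345678'                       # as shown on the Runners page
export GITLAB_RUNNER_ID="${raw_id#\#}"   # strip the leading '#'
echo "$GITLAB_RUNNER_ID"                 # -> 12345678
```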
Run 99-gitlab-runner.yml against your OpenStack host.
ansible-playbook -i <node-ip-address>, 99-gitlab-runner.yml
Refresh the Runners page in GitLab, and make sure your new runner has connected.