Starting with Openstack Administration

Recently I wrote a post about deploying Openstack using kolla-ansible. I finished the article explaining that it was working based on the results of Openstack’s dashboard. Yes, I can say that it is working. However, there is not much we can do with an empty installation of Openstack. So, it is working… What should I do as Openstack admin to make it (a bit) useful?

  • Install Openstack CLI (Command Line Interface)
    So the admin can type commands to Openstack from a terminal. Everything explained in this article can be done with the Web Console, but I think it is more productive to use the Command Line.
  • Add images to Glance
    When a Virtual Machine (a new Instance) is created, the disk of the Instance is built from a pre-existing disk image, something that acts as the base for the instance. We need one or more images to build VMs.
  • Add flavors
    When a new Instance is created, we need to specify its size in terms of Disk, Memory and Virtual CPUs. This is done using Flavors. By default there is no flavor defined:
Listing of flavors is empty.
  • Create Networks
    A new instance is not useful if it can't connect to the internet or we can't log in to the VM. We need to define virtual networks so we can work with our VMs.
  • Create/Manage users
    So different users (at least a non-admin user) should work with the Openstack installation.

On the other hand, what should I do as an Openstack user to start using virtual hosts in my new Openstack Installation?

  • Create a Keypair
    That's the way to access the Virtual Servers once they are running.
  • Define security group
    So I can set up my own “firewall” and open the ports I need to install and run the software I want in my VM.
  • Create a Virtual Machine
    To deploy a new Instance I need to define some things: The image to use, the Flavor of the VM, the network the VM is connected to, the Security groups applied to the VM and the keypair to access the VM.

The user part is out of the scope of this article; however, I'll show how these steps are done using the CLI.

Install the Openstack CLI locally

In order to have (almost) full control of Openstack, I'd recommend installing the Openstack CLI. There are many ways to install it; in this case we'll do it using a python virtualenv.

virtualenv -p python3 .venv/openstack
source .venv/openstack/bin/activate
pip install python-openstackclient

We'll also have to define a few environment variables. While we are at it, we can activate the virtualenv when loading these variables. To do this, I've created a file named keystoneIdmLocal (the name is not important at all) with this content:

unset OS_TENANT_ID
unset OS_TENANT_NAME

export OS_REGION_NAME="RegionDemo"
export OS_USERNAME='admin'
export OS_PASSWORD=DrXMLxtrIDl2MIwZq6hZJTU0wUvyZ2KvWSEwgJy9
export OS_AUTH_URL=http://controller:5000
export OS_PROJECT_NAME=admin

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_IDENTITY_API_VERSION=3

export PS1='[\u@\h \W(keystone_admin)]\$ '
source ~/.venv/openstack/bin/activate

So, before using the Openstack commands I simply load this environment using:

source keystoneIdmLocal
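A quick way to check that the CLI and the credentials are working is asking Keystone for a token (any read-only command, such as openstack endpoint list, would do just as well):

# If the environment is loaded correctly, this returns a token instead of an error
openstack token issue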

Basic Networking

Right after the installation there are no networks defined. We'll need to create at least 2 networks: an internal network allowing communication between all the Virtual Machines, and an external network, connected to a (virtual) router, to connect the Virtual Machines to the internet.

Internal Network

This network will provide internal communication between virtual machines and access to the internal DHCP server and the metadata server. It is important to have a network ready. This network will be shared, so all users will be able to use it. Alternatively, you could skip this step and expect every user to create their own network.

Creating a Network means creating the network itself and at least one Subnet, where we will configure the most relevant parameters for the Network:

openstack network create  --share internal

openstack subnet create --network internal \
--allocation-pool start=10.202.254.3,end=10.202.254.254 \
--dns-nameserver 8.8.8.8 --gateway 10.202.254.1 \
--subnet-range 10.202.254.0/24 sub-int-net

The parameter --share will make this internal network available for every Openstack user. All the VMs will be able to use this network.

External Network

The external network will provide Internet access to the Virtual Machines. It is also the usual way to access the VMs from the Internet or from somewhere outside the Openstack installation. In order to create an external network we can type something like this:

openstack network create --external \
--provider-network-type flat \
--provider-physical-network physnet1 ext-net

openstack subnet create --network ext-net \
--allocation-pool start=172.23.16.16,end=172.23.16.254 \
--dns-nameserver 8.8.8.8 --gateway 172.23.16.1 \
--subnet-range 172.23.16.0/24 sub-ext-net

This will create a new external network named ext-net. It will provide IPs from 172.23.16.16 to 172.23.16.254 and it will use 8.8.8.8 as DNS server and 172.23.16.1 as gateway.

Adding a router

Once we have an internal and an external network, we’ll need to connect them so incoming/outgoing data in the external network can flow from and to the internal network.

Basically, the way to connect 2 different networks is using a router, and we need a (virtual) router to connect our internal network with the external one. This is done this way:

# Create a router named rt-ext
openstack router create rt-ext

# Set the external gateway (gateway to the Internet)
openstack router set rt-ext --external-gateway  ext-net

# Add the subnet (the internal one)
openstack router add subnet rt-ext sub-int-net   

After this last step, the external network and the internal one are connected through this router.

A recap on networking:

We need to create an internal network, an external network and a router to link both networks. Using the Openstack console we can nicely visualize the Network topology we've created:

Network topology after our Openstack commands

A few commands related to networking

Networking is complex and there are many different things to deal with. Anyway, here are a few commands related to networking which could sometimes be useful (apart from the ones already shown):

# Getting help
openstack help network
openstack help router
openstack help subnet

# Show the Networking Agents. Useful to understand the state
# of the different networking components.
openstack network agent list

# Get a list of the networks
openstack network list

# Get the details of a network
openstack network show <network_id_or_name>

# Delete a network
openstack network delete <network_id_or_name>

# Get a list of subnets
openstack subnet list

# Get a list of subnets belonging to a network
openstack subnet list --network <network_id_or_name>

# Show the details of a subnet
openstack subnet show <subnet_id_or_name>

# Delete a subnetwork
openstack subnet delete <subnet_id_or_name>

# List the routers
openstack router list

# Get the details of a router
openstack router show <router_id_or_name>

# Remove the connection of a subnet with a router
openstack router remove subnet <router> <subnet>

# Remove a router's gateway
openstack router unset --external-gateway <router>

Of course, there are tons more options to deal with. However, this should give you an overview of the basic commands.

Adding images to Glance

A new Virtual Machine is usually built from a disk image stored in Glance. We need disk images to create virtual machines. The easiest way to get these images is downloading them from the Internet.

There is a really small image, very useful for testing, which can be downloaded from the Internet: Cirros. The next example shows the 2 steps needed to add a new image to Glance: downloading the image and uploading it to Glance:

wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
    
openstack image create "cirros" \
--file cirros-0.3.4-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public

There are many more images available to be downloaded. A good starting point is: https://docs.openstack.org/image-guide/obtain-images.html
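For instance, an Ubuntu 18.04 (Bionic) cloud image could be added in exactly the same way. The URL and the image name below are just an example; check the image guide for the current links:

wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img

openstack image create "ubuntu-18.04" \
--file bionic-server-cloudimg-amd64.img \
--disk-format qcow2 --container-format bare \
--public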

List the available images

A few commands related to Glance:

# Getting help
openstack help image

# List images:
openstack image list

# Show details from an image
openstack image show <id_or_name>

# Delete an image
openstack image delete <id_or_name>

# Save a local copy of an image
openstack image save --file local_copy.img <id_or_name>

Creating Flavors

The “Flavor” is a set of definitions regarding the number of CPUs, the virtual disk size and the memory which can be used by a virtual machine. By default there are no flavors defined in Openstack but we’ll need a flavor to create a new Virtual Machine.

Let's create a couple of public flavors: a small one with 1Gb of RAM, 1 virtual CPU and 10Gb of disk (named small) and a medium one with 2Gb of RAM, 2 Virtual CPUs and 20Gb of disk (named medium):

openstack flavor create --ram $((1*1024)) \
--disk 10 --vcpus 1 --public small

openstack flavor create --ram $((2*1024)) \
--disk 20 --vcpus 2 --public medium
Listing of the new flavors

A few flavor commands:

# Getting help
openstack help flavor

# Listing of flavors 
openstack flavor list

# Show details of a flavor
openstack flavor show <flavor_name_or_id>

# Delete a flavor
openstack flavor delete <flavor_name_or_id>

Adding a new user

We could always work as admin, but this is not usually a best practice. We would like to create new users and new projects to work with Openstack.

The basic rules to keep in mind are:

  • Resources usually belong to projects
  • A user has one or more roles in one or more projects.

So, we’ll need to create at least a project, at least one user and assign at least one role to the user in the project.

By default there are some roles defined after the Openstack installation:

Default roles after Openstack installation
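They can be listed from the CLI with:

# List the roles defined in the installation
openstack role list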

So, we create a new project, a new user and assign a role to the user in the project this way:

# 1st we create a new project called jicg_project
openstack project create --domain default jicg_project

# 2nd we create a new user called jicg. In this case 
# setting a password. This can be changed in the console.
openstack user create --password mysecretpassword jicg

# 3rd we assign the role "member" to the user jicg 
# in the project jicg_project
openstack role add --user jicg --project jicg_project member
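Besides the web console, the new user can also work from the CLI with their own environment file. A minimal sketch, analogous to the admin's keystoneIdmLocal (adapt the password and the auth URL to your installation):

export OS_REGION_NAME="RegionDemo"
export OS_USERNAME='jicg'
export OS_PASSWORD=mysecretpassword
export OS_AUTH_URL=http://controller:5000
export OS_PROJECT_NAME=jicg_project
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_IDENTITY_API_VERSION=3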

Now the user should be able to log in to the console:

Nova: Our first virtual machine

Everybody wants to know how to deploy a VM and how to use it. Let's say that there are 4 mandatory steps after the Administrator has created the flavors, the networks and uploaded some images to work with (a user can usually create their own Images, networks, etc.):

  • Creating one (or more than one) Security Group with some Security Rules.
  • Creating a Keypair.
  • Allocating a Floating IP to the project
  • Launching our 1st Virtual machine.

The first 3 steps are only mandatory for the 1st VM; the following ones can reuse what was created here.

Creating a Security Group and security rules.

Openstack acts as a closed firewall for the VMs it manages, and the user is responsible for thoroughly creating the rules for that firewall. For the sake of simplicity I'm going to leave out the meaning of "thoroughly" here.

So, the 1st step is creating a Security Group:

# Create a SG with the name demosecgroup
openstack security group create demosecgroup

Add rules to the security group:

# Open tcp port 22 for everybody (network 0.0.0.0/0)
# for incoming traffic.
openstack security group rule create \
--remote-ip 0.0.0.0/0 \
--protocol tcp \
--dst-port 22 \
--ingress \
demosecgroup

# The same can be done for port HTTP (80) and HTTPS (443).
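
# For example, equivalent rules for HTTP and HTTPS
# (a sketch following the same pattern as above):
openstack security group rule create \
--remote-ip 0.0.0.0/0 --protocol tcp \
--dst-port 80 --ingress demosecgroup

openstack security group rule create \
--remote-ip 0.0.0.0/0 --protocol tcp \
--dst-port 443 --ingress demosecgroup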

Creating a new Keypair

In order to access the Virtual Machines, we'll use the ssh protocol. Most of the images we can get from the Internet are prepared to allow only ssh connections using a private-public keypair.

We can generate one easily:

# This makes openstack create a keypair. The output of the
# command is the private key. The public key is stored in 
# Openstack so it can be injected in the VMs.
openstack keypair create demokeypair > demokeypair.pem

As you can see, the content of the file is a private key which can be used to log in to the VMs. There are a few things to consider:

  • The new file has overly open permissions. We have to restrict them before we can use it, with the command "chmod 400 demokeypair.pem" (see the commands below).
  • If we lose this file, we probably won't be able to connect to our VMs. There is no way to recover it.
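A quick sketch of both points; the second command only shows that, alternatively, an already existing public key can be imported into Openstack (the name mykeypair is just an example):

# Restrict the permissions of the private key file before using it
chmod 400 demokeypair.pem

# Alternative: import an existing public key instead of generating one
openstack keypair create --public-key ~/.ssh/id_rsa.pub mykeypair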

Allocate a new Public IP.

We have to allocate a new public IP to the project in order to associate it with the VM (once we have one). The allocation of a new IP can be done like this:

# ext-net is the name of the external network we created
# before.
openstack floating ip create ext-net

We can find the available networks like this:

List of available networks.

And as you can see, we have a new Floating IP (public IP) which can be shown with the command "openstack floating ip list":

Create our 1st Virtual Host

Finally, the long-awaited moment! In this step everything done until now makes sense. In order to create a new VM we need to merge all the things we've done before in a single command: the image, the flavor, the shared (internal) network, the security group, the floating IP, the keypair… Everything!

# Create a new Server which name is demovm
# Using flavor small, from cirros image, using sec. group
# demosecgroup the network internal and the keypair.
# Everything previously created
openstack server create \
--flavor small \
--image cirros \
--security-group demosecgroup \
--network internal \
--key-name demokeypair \
demovm

After a few minutes we’ll have our VM created:

Newly created Server

The last step is assigning the floating IP we allocated before to this server:

# Find my floating IP
openstack floating ip list
....
# My floating IP ID=b43f4537-d28b-4444-a2db-3467500c1900

openstack server add floating ip demovm \
b43f4537-d28b-4444-a2db-3467500c1900

Once this is done, I can ssh into my new VM!

Caveat: In modern Linuxes the key exchange algorithm used by the Cirros image will be disabled, and thus we won't be able to log in to our Cirros VM unless we enable it. We can enable it for this VM by editing the file ~/.ssh/config and adding:

Host 172.23.16.48
    KexAlgorithms +diffie-hellman-group1-sha1
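After that, something like this should work (172.23.16.48 is the floating IP used in this example, and cirros is the default user of the Cirros image):

ssh -i demokeypair.pem cirros@172.23.16.48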

Two more commands:

# Show the console of the VM (the boot output, basically)
openstack console log show <server_name_or_id>

# Get a URL to connect to the VM in the browser.
openstack console url show <server_name_or_id>
Example of Console in the browser

Finally, I’d like to show the new network topology with the VM connected to the internal network:

New network topology with the VM connected

A few commands for Servers

There are at least a few commands we should know in order to work quickly with our Virtual Machines (servers):

# Getting help
openstack help server
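
# List the servers
openstack server list

# Show the details of a server
openstack server show <server_id_or_name>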

# Shutdown a server
openstack server stop <server_id_or_name>

# Restart a server
openstack server start <server_id_or_name>

# Pause / unpause a server
openstack server pause <server_id_or_name>
openstack server unpause <server_id_or_name>

# Suspend / resume a server
openstack server suspend <server_id_or_name>
openstack server resume <server_id_or_name>

# Delete a server (Forever! Destroy everything)
openstack server delete <server_id_or_name>

Deploying Openstack with Ansible-Kolla on LXC

After writing about using LXC and showing its capabilities to run Docker and Virtual Machines (in a first post showing how to start with LXC and a second one showing how to deploy a Kubernetes cluster with LXC), today I'm going to show how I've deployed an Openstack Cluster using LXC. Once more I'll be using ansible as a helper tool to deploy the containers and install the software needed in the cluster before deploying Openstack.

As I already said in my previous post, please be aware that THIS IS NOT A PRODUCTION SOLUTION. IT IS REALLY INSECURE. IT IS ONLY INTENDED TO SIMULATE INFRASTRUCTURE.

Before doing anything, having just started my Ubuntu 20.04 with my Brave Browser to write this POST, my memory consumption is 2.62 Gb.

After starting my Openstack cluster with one controller and 3 compute nodes, my memory consumption is 8.4Gb. Yes, it is not cheap in terms of memory; however, it is rather affordable. You can always make do with 1 compute node anyway.

As you can imagine, this is going to be a very simple Openstack deployment, with only a few basic services: Nova (to deploy VMs), Neutron (to manage networks), Glance (for the VM images) and Keystone (for user identification). Maybe in the future I'll write about deploying more services to this Openstack installation, like Cinder to manage volumes or Ceilometer for Telemetry. But in this case it is going to be a very basic deployment.

Installing the required software on my server

Please, keep in mind that "My Server" is nothing but my own laptop: an HP Pavilion with Ubuntu 20.04 installed, Intel i7 with 16Gb RAM and a 512Gb disk.

First of all, I must have ansible installed. I already explained that in my POST about Kubernetes on LXC, but it is only a few lines, so I can write it again:

# To install LXC:
sudo apt install lxc lxc-utils lxc-templates lxc-dev

# To install other packages needed: 
sudo apt install python3-lxc virtualenv sshpass bridge-utils
# ....
# Create the Virtual environment for ansible:
virtualenv -p python3 ~/.venv/ansible
source ~/.venv/ansible/bin/activate
# ....
# To install Ansible.
pip install ansible

As I'll be using Kolla to deploy Openstack in this installation, I'll also install in my Virtual Environment kolla-ansible (kolla-ansible==8.2.0, the Stein version, in this demo) and the Openstack CLI tools to be able to manage the Openstack Installation:

# Using the Virtual Environment from previous step:
pip install kolla-ansible==8.2.0 python-openstackclient

Bug: There is a critical bug affecting this kolla-ansible version and previous ones, caused by changes in the required packages from other providers (https://bugs.launchpad.net/kolla-ansible/+bug/1888657). The fix will not be released until version 8.2.1 of kolla-ansible (at the moment of this writing, it is not released yet), so a manual patch is needed. Luckily it is really easy to patch: https://opendev.org/openstack/kolla-ansible/commit/bbaa82619ee404d495ec5aef9468ecd52c5d76d3. In our case, the file is ~/.venv/ansible/share/kolla-ansible/ansible/roles/common/defaults/main.yml and you only need to insert this line after line 26:
PYTHONWARNINGS: "ignore::UserWarning"

    environment:                                                                                                                       
      ANSIBLE_NOCOLOR: "1"
      ANSIBLE_LIBRARY: "/usr/share/ansible"
      # The next is the new line to insert...
      PYTHONWARNINGS: "ignore::UserWarning"
    privileged: True

Caveat: Kolla-ansible doesn't support Ubuntu 20.04, so the containers must be Ubuntu 18.04. I'd recommend creating a first container manually before running these ansible playbooks, because LXC downloads the base container and installs the software using a regular, interactive installation. The command is this one:

sudo lxc-create -t ubuntu -n u1 -- -r bionic

Whilst the lxc-create is running and downloading the Ubuntu packages, you can see that apt is running:

And at a certain point of the installation, it will ask you questions that you'll need to answer. If this is done for the first time using ansible, you won't have any way to answer these questions and the installation will be stuck forever.

Once deployed for the first time, the container image will be cached and you won't need to answer these questions again. So, you can answer "yes" this time and, when it finishes, destroy the container:

sudo lxc-destroy -n u1

Creating the containers:

Once more, the configuration files I’ve created for the deployment are available in my github repo: https://github.com/jicarretero/jicgeu-support/tree/master/KollaAnsibleOnLXC. In order to visualize the Interconnection of the containers inside the server, I’ve drawn a very simple graph:

So, the big box is my server. Inside it there are 4 LXC containers connected to 2 bridges (I'll create them during the installation). The br-os bridge is intended for internal communication between the Openstack nodes. The br-ext bridge is intended for the communication of the VMs with the Internet. I've decided not to create VLANs or any other isolation between the different networks for simplicity. However, in a production environment using real servers, these networks should be isolated for security (to prevent VMs from connecting to the Compute or Controller nodes).

The containers will be connected to the bridge br-ext using eth1 and this interface will have no IP configured. The Containers will be connected to the bridge br-os using eth2 and they’ll have an IP here in 172.23.32.0/24.

In order to create the containers, the ansible playbook named CreateLxcContainers.yaml must be run.

ansible-playbook  -i inventory playbooks/CreateLxcContainers.yaml

This playbook ensures the 2 bridges are created, they are up and they have their corresponding IP. Besides this, it ensures a few kernel modules are loaded, because they'll be needed either for the ansible-kolla deployment or for the containers to run properly:
– ebtables (required by the ansible-kolla deployment)
– tap (required in the containers to implement a proper network)
– ip_vs (required by the ansible-kolla deployment)

iptables -t nat -D POSTROUTING -s 172.23.16.0/24 ! -d 172.23.16.0/24 -j MASQUERADE || true
brctl addbr br-os || true
brctl addbr br-ext || true
ip addr add 172.23.16.1/24 dev br-ext  || true
ip addr add 172.23.32.1/24 dev br-os  || true
ip link set br-os up
ip link set br-ext up
iptables -t nat -A POSTROUTING -s 172.23.16.0/24 ! -d 172.23.16.0/24 -j MASQUERADE
modprobe ebtables
modprobe tap
modprobe ip_vs

After this configuration, it deploys the containers and some packages we'll need for the ansible-kolla deployment. You can see the full playbook in the Github repo mentioned above.
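Once the playbook finishes, a couple of commands on the host can be used as a quick sanity check that the containers are up and attached to the right bridges:

# List the LXC containers, their state and their IPs
sudo lxc-ls --fancy

# Show which container interfaces are attached to each bridge
brctl show br-os
brctl show br-ext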

Preparing the containers

Once the containers are running, we must install some software in them to make them ready for Openstack. This is done with the playbook PrepareContainers.yaml.

ansible-playbook  -i containers_inventory playbooks/PrepareContainers.yaml

This playbook creates the user/group kolla and adds it to the sudoers file. It adds an authorized ssh public key to ~kolla/.ssh/authorized_keys (this must be configured in the file playbook/vars/containers.yaml, in the variable public_key; it is not configured with any default key).

The playbook also sets a netplan to configure networking in the container, restarts the network and adds some packages that the kolla installation will need.

The playbook also sets up a new service which runs before the docker service starts when the LXC Container boots. The service runs a script named shared-run.sh:

#!/bin/bash

mount --make-shared /run

[ -d /dev/net ] || mkdir /dev/net
[ -c /dev/net/tun ] || mknod  /dev/net/tun c 10 200
[ -c /dev/kvm ] || mknod  /dev/kvm c 10 232
[ -c /dev/vhost-net ] || mknod  /dev/vhost-net c 10 238

This script is intended to:

  • Make the /run directory shared (this is needed by the Kolla docker containers).
  • Create the /dev/net/tun character device, which will provide better network performance.
  • Create the /dev/kvm character device so Virtual Machines can be created using KVM (instead of QEMU emulated virtualization, which is much slower).
  • Create the /dev/vhost-net character device to be able to create virtual networks and let the VMs communicate with each other.

Kolla-Ansible: The installation.

The installation of Kolla-Ansible is described here. However, I'm going to provide the steps in order to repeat the installation I did.

As I explained before, I installed kolla-ansible (Openstack Stein) in the ansible virtual environment. You can find the kolla-ansible versions here.

pip install kolla-ansible==8.2.0

First of all, we need to create the directory /etc/kolla, where the kolla-ansible configuration files are stored. However, I will simply create a link from the KollaAnsibleOnLXC directory that I've previously downloaded from github, this way:

(ansible) [jicg@corporario KollaAnsibleOnLXC(keystone_admin)]$ sudo ln -s $PWD/etc/kolla /etc/kolla

The next step is generating the passwords for the installation (I've provided the passwords that I got once in one installation, but you should generate new passwords for yourself). This step will overwrite the file /etc/kolla/passwords.yml with randomly generated passwords.

kolla-genpwd

Inventory

The inventory file is needed to deploy Openstack using Kolla. Of course, we must edit it before deploying Openstack. An inventory file named multinode is already provided in the repository; it is tuned for the IPs I'm using in this demo deployment.

[control]
# These hostnames must be resolvable from your deployment host
172.23.32.2 ansible_user=kolla ansible_become=true

# The above can also be specified as follows:
#control[01:03]     ansible_user=kolla

# The network nodes are where your l3-agent and loadbalancers will run
# This can be the same as a host in the control group
[network]
172.23.32.2 ansible_user=kolla ansible_become=true

[compute]
172.23.32.5 ansible_user=kolla ansible_become=true
172.23.32.6 ansible_user=kolla ansible_become=true
172.23.32.7 ansible_user=kolla ansible_become=true
.......

Kolla’s Configuration File

Now we must read and understand the file /etc/kolla/globals.yml. In this file we describe the values that we want to configure for our Openstack deployment. A file tuned for our demo installation is provided in the code you can download from GitHub.

I'm not going to describe all the parameters I used, but I will at least describe a few of them. Let's start with our network interfaces:

network_interface: "eth2"
api_interface: "eth2"
tunnel_interface: "eth2"
dns_interface: "eth2"
neutron_external_interface: "eth1"

All LXC Containers are connected in the same way to the bridges. The interface eth2 will be used for "management" and it is connected to the bridge I named br-os. They'll have an IP in the network 172.23.32.0/24.

The interface eth1 will be used for the Virtual Machines to connect to the Internet. In this case, this interface is connected to br-ext. No IP is needed here for any LXC Container. In fact, in this case, it would be enough if only the controller had this network interface.

Another parameter is

kolla_internal_vip_address: "172.23.32.254"

This is a Virtual IP address to be used with ha-proxy. You may think it is not that interesting, but it is: in this case no ha-proxy should really be needed, but if I don't use it, one task in kolla-ansible will wait until mariadb is reachable through ha-proxy; it would never be ready and the deployment would fail.

Other configurations are:

# What version of Openstack we'll be installing and the type of installation
kolla_base_distro: "ubuntu"
kolla_install_type: "source"
openstack_release: "stein"

# We'll be using rabbitmq for the communications amongst processes.
om_rpc_transport: "rabbit"

# We'll be using LinuxBridge to build networks
neutron_plugin_agent: "linuxbridge"

# The region name will be this RegionDemo:
openstack_region_name: "RegionDemo"

# The services we will use are: Keystone, glance, nova, neutron and rabbitmq
enable_openstack_core: "no"
enable_glance: "yes"
enable_haproxy: "yes"
enable_keystone: "yes"
enable_mariadb: "yes"
enable_memcached: "yes"
enable_neutron: "yes"
enable_nova: "yes"
enable_rabbitmq: "yes"

# We'll be using KVM for virtualization. Not QEMU (The default one)
nova_compute_virt_type: "kvm"

Besides all of these configurations, we will add 3 passwords (the problem is that these passwords are not generated by the kolla-genpwd command, and they are required to complete the installation):

rabbitmq_monitoring_password: "12345678"
redis_master_password: "12345678"
placement_database_password: "12345678"

Anyway, please take a look at the /etc/kolla/globals.yml file and try to understand it.

Finally, the deployment

There are 3 steps to deploy ansible-kolla:

The first step is bootstrapping the LXC Containers, which is done with command:

kolla-ansible -i multinode bootstrap-servers

The second step is optional. This is used to check if everything is ready to install:

kolla-ansible -i multinode prechecks

And finally, the installation itself. The installation will fail because some sysctl configurations can't be applied inside the containers. There is a workaround for this, although it is not the solution I chose; you'd only need to add this line to your /etc/kolla/globals.yml:

set_sysctl: "no"                                                                                                                       

However, I've modified one kolla-ansible file, ~/.venv/ansible/share/kolla-ansible/ansible/roles/neutron/tasks/config-host.yml, adding the line "ignore_errors: yes" at the end of the first task there:

- name: Setting sysctl values
  become: true
  vars:
    neutron_l3_agent: "{{ neutron_services['neutron-l3-agent'] }}"
  sysctl: name={{ item.name }} value={{ item.value }} sysctl_set=yes
  with_items:
    - { name: "net.ipv4.ip_forward", value: 1}
    - { name: "net.ipv4.conf.all.rp_filter", value: "{{ neutron_l3_agent_host_rp_filter_mode }}"}
    - { name: "net.ipv4.conf.default.rp_filter", value: "{{ neutron_l3_agent_host_rp_filter_mode }}"}
    - { name: "net.ipv4.neigh.default.gc_thresh1", value: "{{ neutron_l3_agent_host_ipv4_neigh_gc_thresh1 }}"}
    - { name: "net.ipv4.neigh.default.gc_thresh2", value: "{{ neutron_l3_agent_host_ipv4_neigh_gc_thresh2 }}"}
    - { name: "net.ipv4.neigh.default.gc_thresh3", value: "{{ neutron_l3_agent_host_ipv4_neigh_gc_thresh3 }}"}
    - { name: "net.ipv6.neigh.default.gc_thresh1", value: "{{ neutron_l3_agent_host_ipv6_neigh_gc_thresh1 }}"}
    - { name: "net.ipv6.neigh.default.gc_thresh2", value: "{{ neutron_l3_agent_host_ipv6_neigh_gc_thresh2 }}"}
    - { name: "net.ipv6.neigh.default.gc_thresh3", value: "{{ neutron_l3_agent_host_ipv6_neigh_gc_thresh3 }}"}
  when:
    - set_sysctl | bool
    - (neutron_l3_agent.enabled | bool and neutron_l3_agent.host_in_groups | bool)
  ignore_errors: yes

The fact is that some things don't go well, but the task does other useful things: it sets the variables it can set and which are needed, even though it can't set them all. That's the reason I don't set the variable "set_sysctl" to false to prevent it from executing. I could have done so and added the lines to sysctl.conf in my own playbooks, but I preferred this approach. Anyway, it's up to you.

Once this little patch is applied, I run the deployment:

kolla-ansible -i multinode deploy

Your Openstack Installation

After the installation, you can start using your new Openstack. First, you'll need to know your admin password:

grep keystone_admin_password /etc/kolla/passwords.yml
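Alternatively, kolla-ansible can generate an rc file with all the admin environment variables (using the same inventory as in the deployment):

kolla-ansible -i multinode post-deploy
source /etc/kolla/admin-openrc.sh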

Another thing you'll probably want to do is add a few lines to your /etc/hosts file, just for convenience. Please pay attention to the controller IP: yes, it is the same one we set in the configuration for the variable kolla_internal_vip_address.

172.23.32.254     controller
172.23.32.5       compute-01
172.23.32.6       compute-02
172.23.32.7       compute-03

Once that is done, you can open your browser to have a look at your Openstack installation:

After logging in you'll get to a very meaningful screen:

This screen shows the results of some queries made to Openstack in order to get the resource usage. The Openstack services are needed (so they must be properly running) to render this page without errors.

Final consideration

I'll soon write another POST about what to do with a new and clean installation of Openstack. This POST has grown long, but I will write a crash course to turn your Openstack installation into something which can actually be used.

It is here: https://www.jicg.eu/index.php/2020/06/27/starting-with-openstack-administration/