I3WM installation on Arch Linux

There is a nice site to start learning Linux: the Arch Wiki. In fact, an interesting thing to do is to start by installing Arch Linux following the Arch Wiki's installation guide. Of course, I wouldn't recommend doing this on your laptop hardware right away; I would rather recommend installing it in a virtual machine first, until you feel comfortable with your Arch Linux installation.

In this case, as I'm a Linux user, I'll create a new VM using KVM with the help of virt-manager, which is faster than VirtualBox. However, VirtualBox has a clear advantage over KVM: you can use it on Windows, macOS and, of course, Linux.

Creating the Virtual Machine

You can download the ISO image following the instructions on the Arch Linux download page. You can then create your virtual machine using virt-manager (or whichever tool you prefer). You should configure the virtual hardware properly (in my case I'll use 8 GB of RAM, 4 CPU cores and a new 60 GB hard disk, which is far more than needed).

The CPU configuration for my VM will use host-passthrough, copying my host's CPU configuration. This will be quite performant and will allow me to use nested virtualization whenever I want to use it… and I'm sure I'll want to at some point in the future.

CPU Configuration – Copy CPU configuration.

In order to get good enough performance from the video driver without overloading the CPU on the physical host, I'll configure:

  • Video: Virtio (paravirtualized), allowing 3D acceleration
  • Display: Spice (Intel chipset)

Starting my Arch Linux VM and Installation

Once we have everything configured, we start our virtual machine and begin the installation. The important thing here is to read and understand the wiki's installation guide.

# Load keymap -- Default is "US". Mine is "es"... Use yours
loadkeys es

# Verify your IP link
ip link

# Mine is enp1s0 -- so, I'll get my IP
dhclient enp1s0

# Update date
timedatectl set-ntp true

Now comes a very important part: partitioning the disks. In my case, as I'm using KVM, my disk is named /dev/vda; this is the one where I need to create the partitions with fdisk. I'll do it the simplest way this time:

Using fdisk to partition disk.

After partitioning, we must type a few commands to do the actual installation:

# Format the partition
mkfs -t ext4 /dev/vda1

# Mount the partition in /mnt
mount /dev/vda1 /mnt

# Install the essential packages (and other useful packages)
pacstrap /mnt base base-devel linux linux-firmware grub networkmanager neovim nano sudo git

# Generate the fstab file
genfstab -U /mnt >> /mnt/etc/fstab

# Change the root directory to /mnt to continue the installation.
arch-chroot /mnt

# Configure the timezone (mine is Madrid)
# ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
ln -sf /usr/share/zoneinfo/Europe/Madrid /etc/localtime

# Run hwclock(8) to generate /etc/adjtime:
hwclock --systohc

# Edit /etc/locale.gen and uncomment the needed locales... This can be done with nvim,
# with nano, or with sed as shown below
sed -i 's|#en_US.UTF|en_US.UTF|g' /etc/locale.gen
sed -i 's|#es_ES.UTF|es_ES.UTF|g' /etc/locale.gen

# generate locales
locale-gen

# Configure LANG variable in locale.conf
echo "LANG=es_ES.UTF-8" > /etc/locale.conf

# Configure Console keys
echo "KEYMAP=es" > /etc/vconsole.conf

# Configure a hostname for the server and /etc/hosts
echo "archi3" > /etc/hostname

# This can also be edited with nvim or nano
cat << EOT > /etc/hosts
127.0.0.1    localhost
::1          localhost
127.0.0.1    archi3
EOT

We should now add a new user and allow it to become root using sudo. Arch Linux suggests doing this by allowing the wheel group to run sudo, and that's why we need to change /etc/sudoers as follows.

## Uncomment to allow members of group wheel to execute any command
%wheel ALL=(ALL) ALL
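
The change itself can be made with visudo, which validates the file before saving (it opens vi by default), or non-interactively with sed, assuming the line is commented exactly as shown above; newer sudo versions ship it as %wheel ALL=(ALL:ALL) ALL, so check your file first:

# Interactive and safe: visudo validates the syntax before saving
visudo

# Non-interactive sketch: only if the commented line matches exactly
sed -i 's|^# %wheel ALL=(ALL) ALL|%wheel ALL=(ALL) ALL|' /etc/sudoers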

It is time to create the user and add it to the group "wheel" so it can become root.

# Create the user
useradd -m jicg

# Add it to the group "wheel", so it can become root
usermod -aG wheel jicg

# Add a new password to jicg
passwd jicg

The next step is installing the GRUB bootloader so the operating system can boot on the following reboots, and enabling the NetworkManager service so the system comes up with a simple network configuration. One last step: reboot.

grub-install /dev/vda

grub-mkconfig -o /boot/grub/grub.cfg

systemctl enable NetworkManager.service

exit

reboot

Additional configurations

I’ve created a small script which can be used to install a few important things: https://raw.githubusercontent.com/jicarretero/jicgeu-support/master/ArchLinuxWithI3WM/00_install_basic_software.sh

You can download it and run it as root. It will install X11 with lightdm and openssh, and set the keyboard map to Spanish inside X. So, after running the script we'll have X installed with a very simple greeter (the lightdm default greeter):

lightdm greeter

The first time we start i3wm, it will create a new, very simple configuration file. It will also ask us to choose the "default modifier" key. I think the Windows key is the one to choose.

1st time starting i3wm
Select Mod Key — Win is the one I choose.

In order to be able to do something with i3wm, we must know a couple of key bindings (the snippet after this list shows where they are defined):

  • <win>+<enter> — Opens a terminal
  • <win>+d — Opens a menu. We can type, for example firefox in order to open a browser
  • <win>+1,2,3,…, 0 — Changes to a different “desktop”. We have 10 by default.
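
These bindings are defined in the configuration file that i3 generated on first start (by default ~/.config/i3/config). A quick way to list them, assuming that default path:

grep 'bindsym \$mod' ~/.config/i3/config | head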

A final thought

The windows are not at an optimal resolution at this point. In order to improve the resolution, I would check the available modes with xrandr.
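
Running xrandr with no arguments lists the connected outputs and the modes each one supports (the output name, for instance Virtual-1 on a KVM/virtio guest, depends on your setup):

xrandr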

In my case, 1920×1080 will do the job:

xrandr -s 1920x1080

This is a very lightweight operating system. At boot time, with only a terminal open, it needs less than 140 MB of RAM to run:

Less than 140Mb to run our Arch Linux + basic I3WM

An introduction to GlusterFS

On the GlusterFS web page, they describe their product simply like this: "Gluster is a free and open source software scalable network filesystem". It is interesting because it is designed to scale by adding new resources to the Gluster cluster.

There is a nice description of what GlusterFS is on their web page. There, you can read:

  • Gluster is an easy way to provision your own storage backend NAS using almost any hardware you choose.
  • It’s not just a toy. Gluster is enterprise-ready, and commercial support is available if you need it. It is used in some of the most taxing environments like media serving, natural resource exploration, medical imaging, and even as a filesystem for Big Data.

However, they also warn, among other things, that "Gluster does not support so called 'structured data', meaning live, SQL databases. Of course, using Gluster to backup and restore the database would be fine". Using Gluster to store your database's data might lead to delays, and sharing a volume between different database servers might lead to corruption.

GlusterFS in the Servers

Continuing with my usual Ubuntu 20.04 installations, I'll describe how to install GlusterFS on 2 or more "server" nodes (it doesn't work with only one node). Let's imagine we have 3 nodes on which to install glusterfs-server, named gluster1, gluster2 and gluster3. So, on the 3 nodes we should run:

sudo apt install software-properties-common
sudo apt-add-repository  ppa:gluster/glusterfs-7
sudo apt update
sudo apt install glusterfs-server

# Enable and start the glusterd service:
sudo systemctl enable glusterd
sudo systemctl start glusterd

Great, after running those commands we have GlusterFS installed on 3 servers. We now need to "connect" them so they start working together. So, on only one node (let's say gluster1, and assuming that every node can reach every other node by name) we can simply run:

sudo gluster peer probe gluster2
sudo gluster peer probe gluster3

We can check that the peers are connected with the command:

gluster peer status
gluster peer status output

Once the nodes are connected, we'll be able to create a new volume to be shared through GlusterFS. For testing purposes we are going to create the volumes in the root partition, which is not recommended; the recommendation is to use a separate partition for the volumes, so sorry for the "trick". Anyway, for learning and testing it should be enough.

First, we need to create the directory that will hold the volume on the 3 gluster nodes (we want 3 replicas; with only 2 replicas the directory would only be needed on 2 nodes). So, run this command on the 3 nodes:

sudo mkdir /storage

Once the directory exists on the 3 nodes, we can create the gluster volume with "gluster volume create" (the force parameter is needed because we are using the root partition) and start synchronization between the nodes with "gluster volume start". In this example the name of the volume is "mongodb":

sudo gluster volume create mongodb replica 3 transport tcp gluster1:/storage/mongodb gluster2:/storage/mongodb gluster3:/storage/mongodb force
sudo gluster volume start mongodb

In order to list the volumes or check their status, we can use these commands:

sudo gluster volume list
sudo gluster volume status [volume_name]
gluster volume list and gluster volume status

Caveat: for the demo, I'll install MongoDB using Docker, keeping its data on GlusterFS.

Two other interesting commands stop and delete a volume. A volume can't be deleted while it is started, so we need to stop it before deleting it:

sudo gluster volume stop xxxx
sudo gluster volume delete xxxx

GlusterFS in the Client

We can mount the “shared” GlusterFS directories in the clients where we have the Gluster client installed. In order to install the Gluster Client:

sudo apt install software-properties-common
sudo apt-add-repository  ppa:gluster/glusterfs-7
sudo apt update
sudo apt install glusterfs-client

Great! Our client is now ready. We are going to run MongoDB with Docker, keeping its data on our Gluster cluster. First, we create a mount point directory on the Gluster client and mount the volume:

sudo mkdir /mongodb
sudo mount -t glusterfs gluster1:/mongodb /mongodb

And finally, we only need to run the MongoDB container:

sudo docker run -v /mongodb:/data/db -p 27017:27017 --name mongodb -d mongo

The container is now running (I hope you have Docker installed) and the files are replicated on the 3 Gluster nodes according to the way we created the volume.

Something interesting to do is to configure the volume to be mounted automatically when the Gluster client starts, so a line like this one could be added to /etc/fstab:

gluster1:/mongodb   /mongodb   glusterfs defaults,_netdev,noauto,x-systemd.automount 0 0
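
A quick way to test the entry without rebooting (on boot, systemd will take care of the automount):

sudo mount /mongodb
df -h /mongodb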

Using LXC to Deploy the GlusterFS Cluster

As I've done some other times, I've written a couple of Ansible scripts to deploy the GlusterFS cluster using LXC. These scripts can be found on my GitHub.

In order to get “Ansible” + LXC working in your laptop, you can follow the instructions I gave in my article about deploying Kubernetes on LXC. Under the title: “Prepare my server (laptop)”.

LXC Containers in Ubuntu 20.04

LXC, as you can read in https://en.wikipedia.org/wiki/LXC, is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel.

Basically, this means that you can have multiple containers behaving as if they were virtual machines. This is a different approach from Docker, whose containers are meant to run a single application. For instance, if we wanted to run WordPress using Docker, we would run one container for the database and another one for the HTTP server, and both containers would communicate over the network. However, if we ran the same setup using LXC, we could install the database and the HTTP server in the same container (a kind of lightweight virtual machine). So, I think this is the biggest difference: Docker is designed to run a single application, while LXC is designed to behave much like a virtual machine.

Installation and first Steps

In order to install LXC we could simply run:

apt install lxc lxc-templates lxc-utils

Now we can start working with LXC. Of course, the first thing we'd like to do is create our first LXC container. In order to do this, we can type the following command:

lxc-create -t ubuntu -n u1 -- -r focal

And after a while, we'll have our container. It won't be running, but it will be created. The name of my container, in this case, is u1. In order to run the container we should type:

lxc-start -d -n u1

Once the container is running, we can see it using the command "lxc-ls --fancy":
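
For reference, the exact command, run on the LXC host:

sudo lxc-ls --fancy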

As you can see, when the container is STOPPED, it has no IP. After the container is started, it is assigned an IP address and keeps it for the life of the container (until it is destroyed). By default, a bridge named lxcbr0 is created on the LXC host, which acts as the "gateway" for the containers.

NOTE: The default network is 10.0.3.0/24 instead of 10.0.4.0/24 in the example. It doesn’t matter much at this point.

Two other interesting commands to work with containers are: lxc-stop to stop a container and lxc-destroy to definitely remove the container (forever):

# Stop a running container
lxc-stop -n u1

# Remove a stopped container
lxc-destroy -n u1

Configuration of Containers

Once the container is created, a new directory is created in /var/lib/lxc with the name of our container. In our example, the directory will be /var/lib/lxc/u1.

There is a file in that directory named config, where we can change the container's configuration. There are plenty of settings that can be tuned here.

The default configuration is enough to run most applications. However, some things might require changing this configuration, for example: running nested LXC containers or Docker, or running virtual machines with KVM.

If we want to run these kinds of applications, we must add the following lines to the end of the config file:

lxc.apparmor.profile = unconfined
lxc.cgroup.devices.allow = a
lxc.cap.drop =

In my example, I’m running LXC in LXC. So this is the reason the IPs of my examples are 10.0.4.0/24 instead of 10.0.3.0/24.

Running Docker inside LXC

I create a new container (this time it won't be nested):

lxc-create -n docker-in-lxc -t ubuntu

Then I add the lines shown above to the end of its config file, /var/lib/lxc/docker-in-lxc/config, and I start the container with lxc-start.

I log in to the container and run the command:

sudo apt install docker.io

Once everything is installed, we can test if it works:
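
For example, a simple smoke test is to run the hello-world image, which prints a message if the Docker daemon works:

sudo docker run --rm hello-world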

KVM In LXC

Again, we create another LXC container, change its config file as explained above, and ssh into it. Then we install the virtualization packages:

sudo apt install libvirt-clients libvirt-daemon libvirt-daemon-system qemu-kvm qemu-utils qemu-system-x86

Once everything is installed, libvirt will be ready.

Before we can do anything with this, we'll need to create a device node which is not present by default in our LXC container: /dev/net/tun, which is a must in order to run our virtual machines with networking.
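
A minimal way to create the device node inside the container, assuming the standard character device numbers (major 10, minor 200) for tun:

sudo mkdir -p /dev/net
sudo mknod /dev/net/tun c 10 200
sudo chmod 666 /dev/net/tun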

We also need to be aware that kernel modules are not loaded inside LXC containers: modules are loaded in the kernel, and the kernel is shared by every LXC container. So, if we want virtualization to work, we need the KVM module loaded in the real Linux kernel; that is, we need KVM installed on the real operating system.

It is not mandatory to have nested virtualization enabled in the module, but anyway, I have it.
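
A quick way to check both things on the physical host (this assumes an Intel CPU; with AMD the module is kvm_amd):

lsmod | grep kvm
cat /sys/module/kvm_intel/parameters/nested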

In order to test my "brand-new" kvm-in-lxc, I've downloaded a small Linux image (CirrOS): http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img

Once the image is downloaded, I will start a new VM using it. However, it is a cloud image, which will try to connect to 169.254.169.254 in order to get cloud metadata. As I don't have any cloud provider such as OpenStack, booting a virtual machine from this image would take a long, long time. In order to make it fail faster, I'm going to add the IP 169.254.169.254 to the eth0 device inside the container:

sudo ip addr add 169.254.169.254/32 dev eth0

And now, I boot a VM from my image:

sudo kvm  -no-reboot -nographic  -m 2048 -hda cirros-0.5.1-x86_64-disk.img

This VM can be destroyed with “sudo poweroff“.

NFS Divertimento

Some weeks ago, I wrote something about iSCSI. It is a way to expose a disk on a remote server as if it were a local disk. On the client you need to take care of everything: it is basically just another disk for the initiators.

On the other hand, you can choose NFS, which is a different approach: the server exports the filesystem (a directory, or whatever) and it is mounted remotely by the clients. So the filesystem "internals" are managed on the server, not on the client. I only mean that the approaches are different; I don't want to discuss which one is better. Anyway, the question should rather be "which one is better for what?".

In this example, I'd like to explain how we can install and use NFS, both on the client and on the server.

NFS Server

The installation is simple:

apt-get install -y nfs-common nfs-kernel-server

Once we have installed that software, we only need to decide what we want to export and write it in the /etc/exports file. For example, let's imagine we create a new folder /export/nfs and want to export it; then I should add the following line to /etc/exports:

/export/nfs               *(rw,no_root_squash)

and reload the nfs-kernel-server:

sudo systemctl reload nfs-kernel-server

That was pretty easy.
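
If you want to double-check what the server is actually exporting:

sudo exportfs -v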

NFS Client

The installation of the client side is even easier than the installation in the server:

sudo apt-get install -y nfs-common

And in order to mount the /export/nfs which our NFS server is exporting, we only need to run the following command (my NFS_SERVER_IP is 192.168.192.10):

sudo mount -t nfs ${NFS_SERVER_IP}:/export/nfs /mnt
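
A quick check that the mount worked (it should show a filesystem of type nfs or nfs4):

df -hT /mnt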

Of course, we might want to mount that directory at startup. So we must change our /etc/fstab file adding this line (knowing that my NFS_SERVER_IP is 192.168.192.10):

192.168.192.10:/export/nfs   /mnt    nfs     rw,relatime,addr=192.168.192.10     0 0

We can start working with our NFS.

iSCSI Divertimento

iSCSI (Internet Small Computer Systems Interface) implements the SCSI protocol over a TCP/IP network. SCSI was a standard interface to transfer data over a computer bus, so iSCSI is a way to store and retrieve data on a remote disk as if it were a local disk. It can be used as a kind of Storage Area Network.

It is rather easy to install both the server (called the iSCSI target) and the clients (known as iSCSI initiators). As the title says, this is just some kind of divertimento. I'm quite experienced with OpenStack, so I want to emulate a simplified version of the way OpenStack Cinder uses iSCSI on top of LVM.

So, my first step is to take a disk, use it with LVM and create a volume group, as explained in the first post I wrote about LVM:

pvcreate /dev/vdb
vgcreate krtvolumes /dev/vdb

Install the Target

sudo apt install tgt

Great… the target is working now. That was easy (of course we might want to add further configurations, but not now).

Install the Initiators

sudo apt install open-iscsi

That’s the way to install the initiator software, that is, the software for the clients.

Adding volumes to our target

I've written a simple script which creates an LVM logical volume, adds it as a new target and exports it. This script must be run as root.

# Creates a new Logical Volume with lvcreate.
# The name is krtvol and a generated ID.
volumeGroup=krtvolumes
volId=krtvol-$(uuidgen)
iqn=iqn.2020-01.eu.jicg
size=${1:-1}G

lvcreate -L ${size} -n ${volumeGroup}/${volId}

# lastId is a file containing a number as Last Target ID 
thisId=$(($(cat lastId) + 1))

# Creates a new target using the LV previously created
tgtadm --lld iscsi --mode target --op new --tid ${thisId} -T ${iqn}:${volId}

# Gets the device file name from the volId (changing every - for --)
deviceFile=/dev/mapper/${volumeGroup}-$(echo ${volId} | sed 's/-/--/g')

# Adds the new target so it can be found
tgtadm --lld iscsi --mode logicalunit --op new \
  --tid ${thisId} --lun 1 -b ${deviceFile}

# formats it using ext4
mkfs -t ext4 ${deviceFile}

# Sets the new last target Id in the "counter" file.
echo -n $thisId > lastId

# echoes the name of the target
echo ${iqn}:${volId}

Using that simple script we can add a new volume, formatted with ext4 to our scsi target.

In my example the IP address of my iSCSI target is 192.168.192.10 and, using that small script, I got a volume whose ID is 70f370fc-5954-4d2e-a3ff-fccfb57caf25.

Setting up volumes in our Initiator

I know that my iSCSI target IP is 192.168.192.10, so from my initiator node I can query the tgtd server this way:

sudo iscsiadm -m discovery -t st -p 192.168.192.10

When we run that command, a directory tree accessible only by root is created. We can inspect it with the tree command to get a grasp of it:

sudo tree /etc/iscsi/nodes/

Later, we’ll make some changes in the default file belonging to one of the targets we’ve discovered.

I see that the volume ID was "70f370fc-5954-4d2e-a3ff-fccfb57caf25". So, from the initiator (client) I can type the following command:

sudo iscsiadm --mode node --targetname iqn.2020-01.eu.jicg:krtvol-70f370fc-5954-4d2e-a3ff-fccfb57caf25 \
-p 192.168.192.10 --login

I'll be able to see this kind of log entries in /var/log/syslog:

Great, now I have a new disk /dev/sda there and I’m able to use it:

sudo mount /dev/sda /mnt

If I look for the open connections on the initiator, I will find one to my target node:
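
One way to look for it: iscsiadm lists the active sessions, and ss shows the TCP connection to the default iSCSI port (3260):

sudo iscsiadm -m session
ss -tn | grep 3260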

Connecting automatically at startup

We can connect the disk automatically at startup. Basically, we need to add a new line to /etc/fstab; a good idea is to use the disk's block ID:

sudo blkid /dev/sda

So I edit /etc/fstab to add a new line:

UUID="b3e931d4-a923-4e3d-8c4e-bbd5f5c0a390" /mnt ext4   _netdev,rw        0 0

And finally, I set the connection to automatic in the file describing the initiator, so iscsiadm connects to it when the daemon starts:

sudo sed -i 's|manual|automatic|g' /etc/iscsi/nodes/iqn.2020-01.eu.jicg\:krtvol-70f370fc-5954-4d2e-a3ff-fccfb57caf25/192.168.192.10\,3260\,1/default

This "weird" directory structure was shown earlier in this article.

So, whenever we restart the open-iscsi service, a session to this exported volume will be established. And as it is in our /etc/fstab file, it will be automatically mounted in /mnt.

Disconnecting the Initiator

Of course the remote disk should be disconnected with care. We need to umount the disk and “logout” the initiator:

sudo umount /mnt
sudo iscsiadm --mode node --targetname iqm.2020-1.eu.jicg:krtvol-70f370fc-5954-4d2e-a3ff-fccfb57caf25 -p 192.168.192.10 --logout

Deleting a target

The way to delete a target on our iSCSI target server is this command:

sudo tgtadm --lld iscsi --op delete --mode target --tid 1

Once this is done, we can delete the Logical Volumes we've created in order to clean everything up.

We can use the command lvs to see the volumes we've created, and the command to remove one of the volumes (the one we've been using throughout this example) is:

sudo lvremove krtvolumes/krtvol-70f370fc-5954-4d2e-a3ff-fccfb57caf25

Managing Logical Volumes in Linux: LVM – The basics.

Creating the Logical Volumes

There is an interesting feature in Linux: The implementation of Logical Volumes using LVM (Logical Volume Manager). This implementation manages 3 concepts:

  • Physical Volumes, corresponding, basically, to physical disks or partitions.
  • Volume Groups, which are an aggregation of several Physical Volumes.
  • Logical Volumes, which resemble disk partitions and are contained in a Volume Group.

Let's consider a simple example. Let's imagine we have 3 disks, which Linux names /dev/sdb, /dev/sdc and /dev/sdd.

We can turn those 3 disks into LVM Physical Volumes using the pvcreate command. This command initializes a physical volume (PV) so that LVM can recognize and use it.

sudo pvcreate /dev/sdb
sudo pvcreate /dev/sdc
sudo pvcreate /dev/sdd

Great, we can check our PVs using the command pvscan.
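
Both commands below list the PVs we just initialized (pvs prints a more compact table):

sudo pvscan
sudo pvs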

Once we're done with this initialization, we can create some Volume Groups. In this case, I'll create the groups named "databases" and "documents" (or whatever valid names you can think of) using the command vgcreate.

sudo vgcreate documents /dev/sdb
sudo vgcreate databases /dev/sdc /dev/sdd

So, now we have 2 Volume Groups (VG): documents and databases

Using the command vgs we can get some basic information about our Volume Groups:
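
vgs prints one summary line per Volume Group, with its size and the number of PVs and LVs it contains:

sudo vgs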

There's only one step left before we can use our disks: creating the Logical Volumes (LV) using the command lvcreate. So, let's do it:

# Create a LV for Mongo: 25% of the VG databases
sudo lvcreate -l 25%VG -n databases/mongo

# Create a LV for MySQL, 3 GB (note the alternative syntax: -n with just the LV name and the VG as a separate argument)
sudo lvcreate -L 3G -n mysql databases

# Another 2 LVs for videos and presentations in VG documents
sudo lvcreate -L 10G -n documents/videos
sudo lvcreate -l 20%VG -n documents/presentations

Now we have our Logical volumes in our Volume Groups… One day I’ll learn about drawing good graphics… but not today.

We can see what we have using the lvs command:
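
lvs prints one line per Logical Volume, with the Volume Group it belongs to and its size:

sudo lvs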

Using the Logical Volumes

Yes, it is great to have logical volumes... if we can actually use them. In fact, it is easy to use them and mount them at startup. LVM is built on the device mapper, which means that the Logical Volumes we have created show up as devices in /dev/mapper:

We should format the devices:

for a in databases-mongo  databases-mysql  documents-presentations  documents-videos; do 
sudo mkfs -t ext4 /dev/mapper/${a}
done

Now the devices are formatted and ready to be mounted. Let's mount a couple of those Logical Volumes by adding a couple of lines to our /etc/fstab file:

/dev/mapper/databases-mongo  /var/lib/mongodb   ext4   defaults    0 0
/dev/mapper/databases-mysql  /var/lib/mysql     ext4   defaults    0 0

After adding these 2 lines, we can mount the directories:

sudo mkdir  /var/lib/mongodb /var/lib/mysql 
sudo mount /var/lib/mongodb
sudo mount /var/lib/mysql

The only thing left to do is installing MongoDB and mysql:

sudo apt install mongodb-server mysql-server

Problem with mysql

I had a problem starting mysql after the installation. I had to remove everything in /var/lib/mysql and initialize the database again:

rm -rf /var/lib/mysql/*
mysqld --initialize

That's it. One day I'll write something else about LVM: how to extend Volume Groups and Logical Volumes, snapshots, and some other interesting operations with LVM.