Saturday, April 30, 2016

Returning the last or second-to-last text field with awk

If you pipe '/'-separated text to awk -F "/" '{print $5}',
it will return the 5th field in the string, using / as the field separator character.
But if you want to return the last text field in a string with variable length and number of fields, '{print $5}' might not return the last field; imagine if there are 6 or 7 fields.
One way to deal with this is to use the awk built-in variable NF (number of fields); $NF refers to the value of the last field. The relevant man page content is listed below:

The variable NF is set to the total number of fields in the input record.

To return the last field in a string with field separator '/':

awk -F "/" '{print $NF}'
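
For example, feeding it an arbitrary path (the path here is just an illustration):

echo "/usr/bin/sort" | awk -F "/" '{print $NF}'

This prints "sort".
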
A coworker wanted to return the name of the last sub-directory in some path. He was using a command similar to ls -d */ to list the sub-directories of the current directory.
For example, here are the directories in my $HOME folder on my work laptop:

[archjun@pinkS310 ~]$ ls -d */
bin/        Downloads/         kolla/       'SpiderOak Hive/'  'VirtualBox VMs/'
Desktop/    Dropbox/           MyMachines/  SpiderOak_Hive/
Documents/  Images/            ot353/       tmp/
dotfiles/   jun-vagrantfiles/  playground/  txt2regex/
 

Note the two directory names containing spaces, which ls displays single-quoted above.

Note that each directory name is followed by a trailing '/' character. Thus in this case awk -F "/" '{print $NF}' prints only empty lines, because the field after the final '/' is empty:

[archjun@pinkS310 ~]$ ls -d */ | awk -F "/" '{print $NF}'


However, it is possible to return the next-to-last field by specifying the awk variable $(NF-1):

[archjun@pinkS310 ~]$ ls -d */ | awk -F "/" '{print $(NF-1)}'
bin
Desktop
Documents
dotfiles
Downloads
Dropbox
Images
jun-vagrantfiles
kolla
MyMachines
ot353
playground
SpiderOak Hive
SpiderOak_Hive
tmp
txt2regex
VirtualBox VMs
From Stack Overflow, I also learned of a coreutils utility named basename, which "strips directory and suffix from filenames".
Examples from the basename man page:

       basename /usr/bin/sort
              -> "sort"

       basename include/stdio.h .h
              -> "stdio"

       basename -s .h include/stdio.h
              -> "stdio"

       basename -a any/str1 any/str2
              -> "str1" followed by "str2"

The -a or --multiple option flag supports multiple arguments, but this option is only available in newer versions of GNU coreutils.
I found that on older machines running CentOS 6.X, the bundled coreutils was version 8.4, which doesn't support some of the basename options above.
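
A quick way to check which version of coreutils a machine is running:

basename --version | head -n 1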

The following invocation of basename is almost equivalent to ls -d */ | awk -F "/" '{print $(NF-1)}', with one big catch:

[archjun@pinkS310 ~]$ basename -a $(ls -d */)
bin
Desktop
Documents
dotfiles
Downloads
Dropbox
Images
jun-vagrantfiles
kolla
MyMachines
ot353
playground
SpiderOak
Hive

SpiderOak_Hive
tmp
txt2regex
VirtualBox
VMs

The big drawback of this approach is that the shell splits the directory names on whitespace before basename ever sees them.
In the output above, "SpiderOak Hive" and "VirtualBox VMs" have each been broken across two lines; each should be treated as a single name, space included.
Even double quotes don't help here: quoting the command substitution as "$(ls -d */)" does prevent word splitting, but then basename receives the entire multi-line listing as a single argument.
If you look above at the interactive shell output of ls -d */, you will notice that directories with spaces in their names are single-quoted. But what happens when you store the output of ls -d */ in a variable?

[archjun@pinkS310 ~]$ dirsvar=$(ls -d */)
[archjun@pinkS310 ~]$ echo $dirsvar
bin/ Desktop/ Documents/ dotfiles/ Downloads/ Dropbox/ Images/ jun-vagrantfiles/ kolla/ MyMachines/ ot353/ playground/ SpiderOak Hive/ SpiderOak_Hive/ tmp/ txt2regex/ VirtualBox VMs/
[archjun@pinkS310 ~]$ for i in $dirsvar; do echo $i; done
bin/
Desktop/
Documents/
dotfiles/
Downloads/
Dropbox/
Images/
jun-vagrantfiles/
kolla/
MyMachines/
ot353/
playground/
SpiderOak
Hive/

SpiderOak_Hive/
tmp/
txt2regex/
VirtualBox
VMs/

When the ls -d output is stored in a variable, word splitting on whitespace occurs, and the single quotes around SpiderOak Hive and VirtualBox VMs disappear. That is because the quotes were never part of the data in the first place: ls adds them only when its output goes to a terminal, to make names containing spaces readable. I will try to look for a workaround, but for the time being, it is safer to use awk for the use case of returning the last directory in a path.
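
One likely workaround (a sketch I haven't tested on the machines above) is to skip ls entirely and iterate over the glob, since glob expansion is never subject to word splitting:

for d in */; do basename "$d"; done

Here each d expands to a single directory name with a trailing slash, spaces intact, and basename strips both the trailing slash and any leading path.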

Saturday, April 23, 2016

Syncing Mnemosyne default.db with Mnemosyne Android app

I am an avid user of the open-source spaced repetition software (SRS) Mnemosyne, which is developed by Peter Bienstman, who also released a free mnemosyne app for Android. The app makes Mnemosyne much more useful, as you can now do card reviews while waiting for the bus or riding the subway.

Mnemosyne stores all cards in the sqlite database file default.db which is normally located in $HOME/.local/share/mnemosyne/ on Linux machines and in C:\Users\username\Application Data\Mnemosyne\ or C:\Users\username\AppData\Roaming\Mnemosyne\ on Windows.

If you use mnemosyne on multiple computers (I use mnemosyne on two computers at home and on a laptop at work), it is convenient to place your mnemosyne folder on Dropbox and then create a symlink from Dropbox to the default mnemosyne path. I detail how to do this on Linux in the post Sharing mnemosyne cards across multiple machines using dropbox and symlinks.

If you do not create this symlink and just open a mnemosyne db file from a custom path ($HOME/foo/my.db for example) through the File -> Open menu, you will have no problems when using mnemosyne on that single machine.

Now enable the sync server by selecting the menu tab Settings -> Configure Mnemosyne -> Servers and checking the box "Allow other devices to sync with this computer"; this runs a sync server on TCP port 8512. Then open the Mnemosyne Android app, enter the IP of your machine, and try to sync. The sync will proceed, but 0 cards will appear in the Android app, and the Mnemosyne instance on your computer will also show 0 cards!

This is because the Mnemosyne Android app will always look for default.db in the default path, and will reset the Mnemosyne program on your PC to the default path. The way to avoid this problem is to create a symlink from your custom path to the default path.
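
On Linux, the symlink might look something like this (a minimal sketch -- it assumes your cards live in ~/Dropbox/mnemosyne, so adjust the paths to your setup):

mv ~/.local/share/mnemosyne ~/Dropbox/mnemosyne
ln -s ~/Dropbox/mnemosyne ~/.local/share/mnemosyne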

References:

http://mnemosyne-proj.org/help/backups.php Default paths for mnemosyne

Thursday, April 14, 2016

Setting up Kega Fusion in Fedora 23

Fedora is a great Linux distro to use at work as it offers up-to-date kernels, SELinux, firewalld, and most packages needed for development and sysadmin. It is also pretty stable.

Where it doesn't shine is in multimedia and gaming. To get mplayer and non-free codecs, for instance, you have to enable the RPM Fusion repo. Unfortunately, there are quite a few games that are not available through RPM Fusion (such as ufoai, a 3-D remake of X-Com, and the Sega Genesis/megadrive emulator Kega Fusion).

In the case of Kega Fusion, Steve Snake offers pre-compiled binaries for 32-and-64-bit Linux on his site, so all you have to do is make sure you have the required dependencies installed on your local machine.

In the installation FAQ on Steve's site, he explains what packages need to be installed for Kega Fusion to work:

sudo apt-get install libglu1-mesa:i386 libgtk2.0-0:i386 libasound2:i386 libsm6:i386 libasound2-plugins:i386

Of course, these are the package names for Ubuntu/Debian. For Fedora 23, you will need to install the following 32-bit packages (denoted by i686):

sudo dnf install mesa-dri-drivers.i686 mesa-libGLU.i686 gtk2.i686 alsa-lib.i686 libSM.i686


If you are missing some 32-bit packages you will get errors like the following:

[fedjun@u36jfed23 Fusion]$ ./Fusion
./Fusion: error while loading shared libraries: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory

In such a case, first find the location of the 64-bit version of the file with

find / -type f -name foo

And then find the package which owns the file with

rpm -qf /path/to/foo

and install the 32-bit version of that package.
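
If you want to compress these two steps into a single command, something like the following should work (a sketch; /usr/lib64/libfoo.so stands in for whatever library turned up missing, and rpm's --qf option prints just the package name):

sudo dnf install "$(rpm -qf --qf '%{NAME}' /usr/lib64/libfoo.so).i686"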

Here's an example of these commands in action:

[fedjun@u36jfed23 Fusion]$ ./Fusion
./Fusion: error while loading shared libraries: libasound.so.2: cannot open shared object file: No such file or directory


OK - it looks like I'm missing the 32-bit version of libasound.so.2... First I need to find the location of the 64-bit version of this file by using find:

[fedjun@u36jfed23 Fusion]$ sudo find /usr -type f -name libasound*
/usr/lib64/libasound.so.2.0.0

...

Now that I know the path to the file, I can discover which package the file belongs to:

[fedjun@u36jfed23 Fusion]$ rpm -qf /usr/lib64/libasound.so.2.0.0
alsa-lib-1.1.1-1.fc23.x86_64


Recall that Steve Snake said that Ubuntu/Debian users need to install libasound2 -- well, there is no package with that name in Fedora! Instead, Fedora users need to install the package we found above, alsa-lib (i686):

[fedjun@u36jfed23 Fusion]$ sudo dnf search alsa-lib
Last metadata expiration check: 0:33:28 ago on Sun Apr 10 21:15:55 2016.
...
alsa-lib.x86_64 : The Advanced Linux Sound Architecture (ALSA) library
alsa-lib.i686 : The Advanced Linux Sound Architecture (ALSA) library


Now just install the 32-bit alsa-lib package with

sudo dnf install alsa-lib.i686

Once all dependencies are installed, you can now use the most accurate Sega Genesis / Megadrive emulator on Fedora 23!





Saturday, April 9, 2016

Use LVM physical block device instead of loopback file for storing Docker containers

Today I will show you how to set up Docker to use physical LVM volumes for storing container images and metadata. This post is conceptually similar to my previous post about setting up Cinder to use a physical block device instead of a loopback file.

You can find many guides that discuss the performance downsides of mounting files as loopback devices to use as virtual disks (see References below). Docker and Cinder default to using loopback devices for their storage backends out of convenience; developers just want to get started without mucking around with creating LVM partitions and doing sysadmin work.

Fortunately, setting up Docker to use a real (as opposed to virtual) LVM block device is not hard.

First, stop the docker daemon and remove /var/lib/docker from your disk.
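
On a systemd-based distro that amounts to the following (be warned: this permanently deletes all existing images and containers):

sudo systemctl stop docker
sudo rm -rf /var/lib/docker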

Second, create an LVM Physical Volume that will hold your Volume Group, which will in turn contain your Logical Volumes for Docker data and metadata. For a step-by-step example of creating a new LVM-type partition using gdisk and then creating the PV and VG, refer to my previous post about setting up Cinder. In the case of Docker, however, you will also need to create two Logical Volumes with the lvcreate command.

Assuming your Volume Group is named vg-docker, you can create logical volumes data and metadata as follows:

sudo lvcreate -L xG -n data vg-docker
sudo lvcreate -L yG -n metadata vg-docker

Where:
-L specifies the size of the volume
xG, yG denote the size in Gigabytes for the Logical Volumes
-n specifies the name of the Logical volume
vg-docker is the name of a pre-existing Volume Group
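
For example, to create a 20 GB data LV and a 2 GB metadata LV (the sizes here are only an illustration -- pick values that fit your disk and workload):

sudo lvcreate -L 20G -n data vg-docker
sudo lvcreate -L 2G -n metadata vg-docker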

Note that you SHOULD NOT create a file system on the new LVs with mkfs or any other tool. Docker will manage the container storage directly, without the overhead of a file system.

Now you need to configure the Docker daemon to use /dev/vg-docker/data and /dev/vg-docker/metadata for storing containers and their metadata.

For Archlinux:

$ sudo cp /usr/lib/systemd/system/docker.service /etc/systemd/system/


Edit /etc/systemd/system/docker.service so that it contains the following:

ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=devicemapper --storage-opt dm.datadev=/dev/vg-docker/data --storage-opt dm.metadatadev=/dev/vg-docker/metadata

Since the docker.service systemd unit file has changed, you must make systemd reload its unit files and then restart the docker daemon:
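
sudo systemctl daemon-reload
sudo systemctl restart docker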


For Fedora 23:

The process is a little bit different for f23; instead of editing a systemd service file, you have to edit /etc/sysconfig/docker-storage and specify the LV to use for container data and metadata (note that on F23 I named the Volume Group vgdocker instead of vg-docker):

DOCKER_STORAGE_OPTIONS= --storage-opt dm.metadatadev=/dev/vgdocker/metadata --storage-opt dm.datadev=/dev/vgdocker/data

Finally restart the docker daemon:

[fedjun@u36jfed23 sysconfig]$ systemctl restart docker
[fedjun@u36jfed23 sysconfig]$ systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-04-07 13:50:41 KST; 4s ago
     Docs: http://docs.docker.com
 Main PID: 8420 (sh)
   CGroup: /system.slice/docker.service
           ├─8420 /bin/sh -c /usr/bin/docker daemon            $OPTIONS           ...
           ├─8425 /usr/bin/docker daemon --selinux-enabled --log-driver=journald -...
           └─8426 /usr/bin/forward-journald -tag docker

Apr 07 13:50:39 u36jfed23 forward-journal[8426]: Forwarding stdin to journald usi...r
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.071256..."
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.218922..."
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.661117..."
Apr 07 13:50:41 u36jfed23 forward-journal[8426]:
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.661407..."
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.661936..."
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.661978...1
Apr 07 13:50:41 u36jfed23 forward-journal[8426]: time="2016-04-07T13:50:41.662365..."
Apr 07 13:50:41 u36jfed23 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[fedjun@u36jfed23 sysconfig]$ sudo docker info
[sudo] password for fedjun:
Containers: 0
Images: 0
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-253:9-131604-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file: /dev/vgdocker/data
 Metadata file: /dev/vgdocker/metadata

 Data Space Used: 53.74 MB
 Data Space Total: 96.64 GB
 Data Space Available: 96.58 GB
 Metadata Space Used: 1.09 MB
 Metadata Space Total: 10.73 GB
 Metadata Space Available: 10.73 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Library Version: 1.02.109 (2015-09-22)
Execution Driver: native-0.2
Logging Driver: journald
Kernel Version: 4.4.6-300.fc23.x86_64
Operating System: Fedora 23 (Twenty Three)
CPUs: 4
Total Memory: 7.6 GiB
Name: u36jfed23
ID: 45U7:CXDD:LUT3:NBHX:7L6X:VELC:JBZG:XFY5:SQ6L:6LHX:LBAD:PMG5


You can see in the docker info output above that docker is now using real block devices for data and metadata. If docker were still using loopback devices, docker info would instead contain the lines Data loop file and Metadata loop file.
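
A quick way to check is to grep for the loopback entries; if this prints nothing, no loop files are in use:

sudo docker info | grep -i 'loop file'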

References:

http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/

https://docs.docker.com/engine/userguide/storagedriver/device-mapper-driver/

https://docs.docker.com/engine/admin/systemd/

https://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatbpstorage.htm



Saturday, April 2, 2016

Use a real block storage backend for Cinder instead of mounting a virtual disk as a loopback device

Proof of Concept (PoC) installations of OpenStack made with Packstack use a sparse virtual disk file mounted as a loopback device under /var/lib/cinder. New block storage devices to be attached to instances are created under /var/lib/cinder/volumes.

This might be OK for a test installation, but it is a terrible idea for production deployments because loopback devices have poor I/O performance under load.

The workaround is to first generate a packstack answer file and then edit the settings for Cinder block storage. To generate an answer file named foo.cfg:

$ packstack --gen-answer-file=foo.cfg

Within the answer file change
CONFIG_CINDER_VOLUMES_CREATE=y to CONFIG_CINDER_VOLUMES_CREATE=n
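
Since this is a one-line change, you can also script it (assuming the answer file is named foo.cfg as above):

sed -i 's/^CONFIG_CINDER_VOLUMES_CREATE=y/CONFIG_CINDER_VOLUMES_CREATE=n/' foo.cfg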

Packstack will search for an LVM Volume Group named cinder-volumes. If it finds a VG with that name, it will just use it as the backend for Cinder instead of creating a virtual disk file.

Now that you have edited the answer file telling Packstack not to create a Cinder Volume Group, you need to create an LVM Physical Volume and a Volume Group within the PV.

Before creating the PV, you first need to create a new partition. I will do this using gdisk (because my server uses GPT for its partition table):


[fedgro@fx8350no2 ~]$ sudo gdisk /dev/sdb
GPT fdisk (gdisk) version 1.0.1

Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): p
Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A2744C5B-BBEE-4FEB-9152-853B5E707FF9
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 1743809901 sectors (831.5 GiB)


Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       209717247   100.0 GiB   8E00  Linux LVM


Command (? for help): n
Partition number (2-128, default 2): 2
First sector (34-1953525134, default = 209717248) or {+-}size{KMGTP}:
Last sector (209717248-1953525134, default = 1953525134) or {+-}size{KMGTP}: +206G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'

Command (? for help): p
Disk /dev/sdb: 1953525168 sectors, 931.5 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): A2744C5B-BBEE-4FEB-9152-853B5E707FF9
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1953525134
Partitions will be aligned on 2048-sector boundaries
Total free space is 1311796589 sectors (625.5 GiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048       209717247   100.0 GiB   8E00  Linux LVM
   2       209717248       641730559   206.0 GiB   8E00  Linux LVM

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
Warning: The kernel is still using the old partition table.
The new table will be used at the next reboot or after you
run partprobe(8) or kpartx(8)
The operation has completed successfully.


OK, now I have created a new partition /dev/sdb2 that is 206 GiB in size. After running partprobe so the kernel re-reads the partition table, let's check the partitions with lsblk:

[fedgro@fx8350no2 ~]$ sudo partprobe /dev/sdb
[fedgro@fx8350no2 ~]$ lsblk -f
NAME                      FSTYPE      LABEL UUID                                   MOUNTPOINT
sda                                                                               
├─sda1                    vfat              EAD9-3A00                              /boot/efi
├─sda2                    ext4              09c9cf88-30d9-4d21-8a5f-5723acec3bff   /boot
└─sda3                    LVM2_member       xT7fD3-vdcc-xgde-oZmD-RCUG-zA1k-Go15vd
  ├─fedora_fx8350no2-root xfs               c4e8d596-c9eb-48ce-a967-212c7f7b037e   /
  ├─fedora_fx8350no2-swap swap              a3c25d7c-d6a4-40db-8b43-488a08785a61   [SWAP]
  ├─fedora_fx8350no2-var  xfs               c244b30d-9f9c-47aa-bd17-3b519dd93c6f   /var
  └─fedora_fx8350no2-home xfs               bd9481ac-b484-495f-b45c-5e78616938df   /home
sdb                                                                               
├─sdb1                    LVM2_member       edPpZO-6Kdz-B46f-3umr-UpzZ-OQ80-jP6Lnc
│ └─fedora_fx8350no2-var  xfs               c244b30d-9f9c-47aa-bd17-3b519dd93c6f   /var
└─sdb2


The new partition /dev/sdb2 appears at the bottom of the listing, with no file system or LVM signature yet. 200 GB of it will be used for block storage, while about 3% (6 GB) will be used to store LVM metadata.


Next I have to create a LVM PV on /dev/sdb2:

[fedgro@fx8350no2 ~]$ sudo pvcreate /dev/sdb2
  Physical volume "/dev/sdb2" successfully created




Now I will create the Volume Group cinder-volumes in the PV on /dev/sdb2:

[fedgro@fx8350no2 ~]$ sudo vgcreate cinder-volumes /dev/sdb2
  Volume group "cinder-volumes" successfully created
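
You can confirm that the new Volume Group is visible with vgs:

sudo vgs cinder-volumes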


Now I'm ready to run packstack with the pre-generated answer file foo.cfg:

[fedgro@fx8350no2 ~]$ sudo packstack --answer-file foo.cfg

You might have noticed that I didn't create any LVM Logical Volumes in the VG cinder-volumes. You should not create any LVs inside the VG, nor do you need to format anything with a file system. This is because every time Cinder creates a volume, it automatically creates its own LV within the VG cinder-volumes.
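
Once the deployment is up, you can watch this happen: create a volume through OpenStack and then list the LVs inside the VG. If I remember correctly, the LVM driver names them volume-<UUID>:

sudo lvs cinder-volumes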