## Monday, December 29, 2014

### Review of 1-day LFCS Preparation Course - LFS202

On Friday, Dec. 12th Central Standard Time (CST in the United States), I participated in a one-day Linux Foundation class LFS202 offered through Google Hangouts with an optional dial-in telephone bridge in case the class suffered from audio problems. The class is designed as a prep session to briefly go over each of the topics from the LFCS (Linux Foundation Certified System Administrator) exam syllabus which is as follows (circa Dec. 2014):

Command-line
• Editing text files on the command line
• Manipulating text files from the command line
Filesystem & storage
• Archiving and compressing files and directories
• Assembling partitions as RAID devices
• Configuring swap partitions
• File attributes
• Finding files on the filesystem
• Formatting filesystems
• Mounting filesystems automatically at boot time
• Mounting networked filesystems
• Partitioning storage devices
• Troubleshooting filesystem issues
• Creating backups
• Creating local user groups
• Managing file permissions
• Managing fstab entries
• Managing local users accounts
• Managing the startup process and related services
• Managing user accounts
• Managing user account attributes
• Managing user processes
• Restoring backed up data
• Setting file permissions and ownership
Local security
• Accessing the root account
Shell scripting
• Basic bash shell scripting
Software management
• Installing software packages

Our teacher for the 8-hour class (7 hours of instruction, 1 hour break for lunch) was Kevin C. Smallwood, a Computer Science graduate from Purdue University who began his career in IT back in the days of ARPAnet and Motorola 6800 assembly language and worked on UNIX, BSD, and later Linux machines while on staff at the Purdue University Dept. of Computer Science and the Computing Center. He possesses RHCS, RHCE, LFCS, and LFCE certifications. He is also a certified Red Hat instructor.

In the LFS202 session I participated in, we had a total of 7 people in a Google Hangout (in addition to 1 'participant' which was actually a telephone voice bridge). Here is a screenshot of the Google Hangout:

When we entered the G+ Hangout, Kevin asked everyone to turn off their cameras and microphones so we wouldn't hear ambient noise from each other's microphones. Although all participants were welcome to unmute their mics to ask questions, everyone seemed more comfortable asking questions through the group chat window on the right-hand side of the Hangout.

Kevin went through each of the topics on the syllabus above and showed examples using the Google Hangout screenshare feature. Since the LFCS and LFCE allow candidates to choose among Ubuntu, OpenSUSE, and CentOS as their testing platform, Kevin used several different virtual machines in VMware Player to illustrate how to accomplish various system administration tasks across the three distros.

This class is suited to those with an intermediate level of Linux knowledge. At the beginning of the class, Kevin pointed out that students should be familiar with all the topics covered in the free Linux Foundation LFS101x offered on edX. In fact, Kevin actually developed some of the curriculum for the first edition of the LFS101x course for Autumn 2014, namely, Chapter 13 Manipulating Text (covering sed, awk, grep and piping multiple commands together).

Although I am familiar with most of the GNU coreutils, it was enlightening to (virtually) look over Kevin's shoulder as he illustrated the infinite ways we could pipe output of one command to many others. For example, the LFCS exam might give you the following task:

Print all user accounts along with their home directories, sort them, remove duplicates, and count how many lines remain.

awk -F: '{ print $1, $6 }' /etc/passwd | sort | uniq | wc -l

The awk command prints the 1st field (username) and 6th field (home directory) from /etc/passwd, which is then piped to sort, then uniq, and finally wc -l to count the number of unique lines. Kevin showed us countless other examples, which I found very helpful, as I am not very familiar with sed and awk.
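To practice the same pattern, here is a small self-contained sketch (the sample accounts are invented by me, not from the class) showing the identical sort | uniq | wc -l pipeline with cut, which is often simpler than awk for plain field extraction:

```shell
# Write a tiny sample in /etc/passwd format (hypothetical accounts,
# with the root line duplicated on purpose)
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'alice:x:1000:1000:Alice:/home/alice:/bin/bash' \
  'bob:x:1001:1001:Bob:/home/bob:/bin/bash' \
  'root:x:0:0:root:/root:/bin/bash' > /tmp/passwd.sample

# cut -d: -f1 extracts the username field; sort | uniq drops the
# duplicate root line; wc -l counts the unique accounts
cut -d: -f1 /tmp/passwd.sample | sort | uniq | wc -l   # prints 3
```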

I felt more comfortable with the other topics on the syllabus having to do with filesystem administration tasks like manual partition creation using fdisk, creating LVM partitions, and editing /etc/fstab, as I encounter these tasks day-to-day in my job as a Linux System Engineer.

The class was a real pleasure to take and I think the Linux Foundation is lucky to have access to such experienced instructors. The only gripes I have are administrative.

(1) The class times on the Linux Foundation Training page are incorrectly listed as UTC, but in reality the classes run on American Central Standard Time, which is UTC -6 hours. See the screenshot below:

You can see that Start Time reads 00:00 UTC

This really confused me: since Korea is UTC +9 hours, I thought the class might start at 9 am Korea time. During the winter there is a 15-hour difference between American CST and KST (Korea Standard Time), so I actually started class at midnight (0:00) on Saturday, Dec. 13 after a long day at work. It would be nice if the LF listed the proper time in UTC, or got rid of UTC entirely and just listed times in CST, to avoid confusion.
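GNU date can sanity-check this conversion directly. A sketch (assumes GNU coreutils; the 09:00 Central start time is inferred from the 15-hour difference described above):

```shell
# 09:00 US CST (UTC-6) on Dec. 12 expressed in Korea time (UTC+9):
TZ=Asia/Seoul date -d '2014-12-12 09:00 -0600' '+%Y-%m-%d %H:%M'
# prints 2014-12-13 00:00 -- midnight the next day in Seoul
```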

(2) More Flexible Class Hours for Non-US Students

I know there are some excellent English-speaking engineers in India, which is just a few hours behind the UTC+9 time zone that includes Korea and Japan. Australia is also in roughly the same time zones as East Asia. I think adding Australian or Indian instructors would be one way to expand the reach of Linux Foundation live sessions to more students around the world.

If the time difference wasn't such a big obstacle, I would definitely register for more online courses offered through the Linux Foundation!

## Sunday, December 21, 2014

### Editing /usr/bin/brainworkshop and /usr/share/.../brainworkshop.pyw to share BW data between machines using Dropbox

In January 2013, I wrote a post about sharing Brain Workshop data between several machines using Dropbox. I recently tried to follow these same instructions, only to discover that they no longer work.

I spent some time over the weekend coming up with another way to share my BW training data with other machines using Dropbox, and have found a solution that requires light editing of one bash script and one Python source file. This post assumes that there is only one Brain Workshop user (this user is automatically named default by BW).

First, let's take a look at the important files that brainworkshop package 4.8.4-4 installs on Archlinux:

[archjun@lenovoS310 ~]$ sudo pacman -Ql brainworkshop
[sudo] password for archjun:
brainworkshop /usr/
brainworkshop /usr/bin/
brainworkshop /usr/bin/brainworkshop
...
brainworkshop /usr/share/brainworkshop/
brainworkshop /usr/share/brainworkshop/brainworkshop.pyw
brainworkshop /usr/share/brainworkshop/data/
brainworkshop /usr/share/brainworkshop/data/Readme-stats.txt
...

(By the way, the equivalent commands on other distros to list the files installed by pkgName are rpm -ql pkgName on RHEL/CentOS/Fedora, etc. and dpkg-query -L pkgName on Debian/Ubuntu.)

The file /usr/bin/brainworkshop is simply a bash script that calls /usr/share/brainworkshop/brainworkshop.pyw:

#!/bin/sh
python2 /usr/share/brainworkshop/brainworkshop.pyw

A cautionary note -- on Archlinux, invoking python from the CLI will actually bring up the Python 3 REPL, as Python 3 is the default in Arch. On most other distros, invoking python from the CLI will probably bring up the Python 2 REPL instead. To invoke Python 2 in Arch, you have to explicitly call python2 on the CLI or in scripts.

The particulars of Brain Workshop launch scripts will differ slightly between Linux distros. The official brainworkshop project is hosted on Sourceforge, but looking through the trunk branch of their SVN repo, I cannot find any brainworkshop.sh bash script. Apparently the BW launch scripts for Linux are created separately by each distro.

In order to make Brain Workshop store config files and training data in a Dropbox folder instead of locally, we must make some edits to the launch script above. There are three flags, --configfile, --statsfile, and --datadir, that we need to add to our invocation of brainworkshop.pyw. The last flag was added at the end of March 2013, according to the brainworkshop Github repo for Slackware.
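One precaution of my own (not from the original post): if the Dropbox data directory does not yet exist on a new machine, Brain Workshop may have nowhere to write its files, so it is safest to create it before the first launch. A minimal sketch:

```shell
# Create the Dropbox-backed data directory ahead of time.
# The path follows this post's convention; adjust if your Dropbox root differs.
BRW_DATADIR="$HOME/Dropbox/.brainworkshop/data"
mkdir -p "$BRW_DATADIR"
```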
Below you can see my edited version of /usr/bin/brainworkshop that specifies a Dropbox folder as the location for saving data and configuration files:

#!/bin/sh
BRW_CONFIGFILE=$HOME/Dropbox/.brainworkshop/data/.brainworkshop.ini
BRW_STATFILE=$HOME/Dropbox/.brainworkshop/data/.brainworkshop.stats
BRW_DATADIR=$HOME/Dropbox/.brainworkshop/data/

python2 /usr/share/brainworkshop/brainworkshop.pyw --configfile $BRW_CONFIGFILE --statsfile $BRW_STATFILE --datadir $BRW_DATADIR

But we are still not done, as the Python 2 script brainworkshop.pyw uses slightly different names for the statistics and data files. We will change these manually. The function we need to find is named rewrite_configfile() and it should be around line 691 (Meta-g-g 691 for my fellow Emacs users out there). In my edited version of /usr/share/brainworkshop/brainworkshop.pyw, I have commented out the following three lines:

#statsfile = 'stats.txt'
...
#STATS_BINARY = 'logfile.dat'
...
#newconfigfile_contents = CONFIGFILE_DEFAULT_CONTENTS.replace('stats.txt', statsfile)

and replaced them with the following (to make the stats and data filenames consistent with the settings in our custom /usr/bin/brainworkshop launch script above):

statsfile = '.brainworkshop.stats'
...
STATS_BINARY = 'default-sessions.dat'
...
newconfigfile_contents = CONFIGFILE_DEFAULT_CONTENTS.replace('.brainworkshop.stats', statsfile)

Now when you open Brain Workshop, it will access its data files from Dropbox instead of looking in your home directory!

## Monday, December 15, 2014

### [SOLVED] PXE booting woes on ATCA blade server hardware (ENP & Adlink)

Background

At my current company, we do a lot of work for two of the "Big Three" (SKT, LGU+, KT) mobile telecom providers in Korea. Interestingly, most of the 3G and 4G mobile communications infrastructure here runs on RHEL 5.X and 6.X. In addition to more conventional server hardware like the HP Proliant line, Korean telecoms also use ATCA (Advanced Telecommunications Computing Architecture) blade servers made by ENP (Emerson Network Power) and AdLink. ATCA blades are unique because they do not possess regular back panels with RJ45 network ports. Instead, the ATCA server backplane plugs directly into an ATCA switch.
Another oddity of ATCA hardware is that in many cases there is no VGA port to connect to (as on the ENP ATCA 7350 and 736X series). Instead, to get video output, you must connect via serial console cable and use serial communications through a terminal emulator like minicom or putty. Although the ATCA hardware engineers usually provide console speed settings (e.g. 57600 8N1, 115200 8N2, etc.) for getting video output through a terminal emulator, in many cases you still have to play around with the speed until you get a screen that doesn't show gibberish.

Unlike 4U servers, which have PCI expansion slots for plugging in network cards, everything on ATCA blades is on-board, including the network adapters. This makes it unfeasible to flash updated PXE ROMs onto the NIC itself. Instead, you would need a BIOS upgrade to update an outdated PXE implementation.

Problem

gpxelinux.0 from syslinux 4.05-8.el7 for RHEL 7 / CentOS 7 returns a kernel error when attempting to boot from PXE on various models of ATCA blades from ENP and Adlink. If I use pxelinux.0, at least I can get as far as the PXE menu (menu.c32), but then the boot seems to hang after sending the Linux kernel image vmlinuz and its associated initrd.img.

My PXE server setup using dnsmasq and darkhttpd works just fine when installing via PXE to more conventional server hardware like the HP Proliant series. The PXE config I used for the ATCA blades is as follows:

The syslinux documentation mentions that broken PXE implementations are not uncommon, and presents a list of hardware known to have problems with syslinux PXE. I didn't see any mention of ATCA in the list, but I suspect that differences in PXE implementation are causing problems for me.
Stabs at a solution

Other engineers at my company use relatively old versions of syslinux, generally version 4.X available from the CentOS 6 repos, yet are able to install Linux over PXE using pxelinux.0 and a more conventional PXE server setup with httpd, xinetd, dhcpd, tftp, etc. I plan to recreate their setup and see if that resolves my issues with PXE on ATCA hardware. I also need to make note of what kind of PXE boot agents (e.g. Intel Boot Agent) are used by the ATCA blades, as well as the PXE firmware version numbers.

Starting from syslinux 5.X, lpxelinux.0 became available, which natively supports fetching PXE images over http and nfs instead of tftp. Perhaps trying lpxelinux.0, or the PXE files from the most recent version of syslinux (6.03 as of Dec. 2014), will address my problem.

Postscript 2014-12-26: It turns out that the PXE booting problems I experienced were due to "luser" error, not any problem with the ATCA hardware.

Luser Error 1: Incorrect syntax in the append initrd= block specifying serial console settings

Since most ATCA blades don't have VGA connectors, to get any kind of video output you must connect via remote serial console (through a serial-to-RJ45 cable). The console= parameter must be added to the kernel boot options on the append line along with initrd=. The correct syntax is

console=tty0 console=ttyS0,X

(where X is the serial communication speed in bps). Unfortunately, I specified console=ttyS0 first, which won't work, according to the tldp Remote Serial Console HOWTO:

The Linux kernel is configured to select the console by passing it the console parameter. The console parameter can be given repeatedly, but the parameter can only be given once for each console technology. So console=tty0 console=lp0 console=ttyS0 is acceptable but console=ttyS0 console=ttyS1 will not work.
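Putting the corrected ordering together, a pxelinux.cfg entry would look something like the following sketch (the label and file names are illustrative, not the actual config from my server):

```
label rhel5x
  kernel vmlinuz
  append initrd=initrd.img console=tty0 console=ttyS0,57600
```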
Information from kernel.org regarding console over serial port:

tty0 for the foreground virtual console
ttyX for any other virtual console
ttySx for a serial port
lp0 for the first parallel port
ttyUSB0 for the first USB serial device

You can specify multiple console= options on the kernel command line. Output will appear on all of them. The last device will be used when you open /dev/console. So, for example, console=ttyS1,9600 console=tty0 defines that opening /dev/console will get you the current foreground virtual console, and kernel messages will appear on both the VGA console and the 2nd serial port (ttyS1 or COM2) at 9600 baud.

Since I specify console=ttyS0,57600 last, it will be the device used when /dev/console is opened.

Luser Error 2: Incorrect vmlinuz and initrd.img

The PXE boot .cfg file above indicates that RHEL 5.4 will be installed, but I accidentally used vmlinuz and initrd.img from RHEL 5.6!

In addition to fixing errors 1 and 2, I also tried using lpxelinux.0 from syslinux 6.03 as the dhcp-boot image and am happy to report that it works fine. Now PXE boot and RHEL 5.X installation over http work for me!

## Monday, December 8, 2014

### A quick-and-dirty Python3 script to extract only English or non-English characters from a textfile

When I was a full-time interpreter/translator, once in a while clients would send me proofreading work in the form of dual-language files containing alternating lines of Korean followed by English. The problem is that translators bill their clients by counting the number of words in the source language, but this is not possible when both source and target language are mixed up in the same file! Imagine that you have a 100-page .doc file with alternating lines of English and some foreign language. It is not feasible to manually cut-and-paste all the foreign-language sentences into another file!
Luckily, Python 3 exists and it is UTF-8 friendly, so we can easily manipulate English and all kinds of foreign languages within Python 3 programs. My script is called deleteLanguage.py and it is available on github at https://github.com/gojun077/deleteLanguage. It will take mixed text like the following (I didn't do this horrible translation into English, btw):

기업에서 왜 트리즈를 교육해야 하는가?
Why do the companies should educate TRIZ to their members?
오늘날 특히 기업에서의 연구개발은 문제를 해결하느냐 못하느냐의 문제가 아니다.
Today being able to solve the problems or not being isn’t a real problem in the corporation’s research and development.
얼마나 빨리 새로운 결과를 찾아내는 가에 따라 성공여부가 결정된다.
The success of them depends on how fast they can find the new solutions.
하지만 우리들은 문제를 더 빨리 혁신적으로 해결할 수 있는 방법을 공부한 적이 없다.
But we have never learned to solve problems faster and more innovative.
대부분의 많은 연구개발자들은 창의적인 문제해결이 무엇인지도 모른다.
Most researchers and engineers don’t even know what the creative method to solve the problems is.
오늘날도 많은 연구자들은 각자 기존의 경험과 지식을 바탕으로 열심히 생각하기를 한다.
Today they are thinking hard based on only their own experience and knowledge.

and parse it into separate English output:

Why do the companies should educate TRIZ to their members?
Today being able to solve the problems or not being a real problem in the research and development.
The success of them depends on how fast they can find the new solutions.
But we have never learned to solve problems faster and more innovative.
Most researchers and engineers even know what the creative method to solve the problems is.
Today they are thinking hard based on only their own experience and knowledge.

and non-English output:

기업에서 왜 트리즈를 교육해야 하는가?
오늘날 특히 기업에서의 연구개발은 문제를 해결하느냐 못하느냐의 문제가 아니다.
얼마나 빨리 새로운 결과를 찾아내는 가에 따라 성공여부가 결정된다.
하지만 우리들은 문제를 더 빨리 혁신적으로 해결할 수 있는 방법을 공부한 적이 없다.
대부분의 많은 연구개발자들은 창의적인 문제해결이 무엇인지도 모른다.
오늘날도 많은 연구자들은 각자 기존의 경험과 지식을 바탕으로 열심히 생각하기를 한다.
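A rough shell analogue of the same idea (my own sketch, not the actual deleteLanguage.py logic) is to treat any line containing a non-ASCII byte as non-English; GNU grep's -P mode makes this a one-liner:

```shell
# Sample mixed-language file (two English lines, one Korean line)
printf '%s\n' 'Hello world' '안녕하세요' 'Another line' > /tmp/mixed.txt

# -P enables Perl-compatible regex; [^\x00-\x7F] matches any non-ASCII character
grep  -P '[^\x00-\x7F]' /tmp/mixed.txt > /tmp/nonenglish.txt
grep -vP '[^\x00-\x7F]' /tmp/mixed.txt > /tmp/english.txt
```

This splits whole lines rather than individual words, so it sidesteps (but does not solve) the mixed-word problem described below.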
The version in the initial commit has the following limitations:

The script assumes that characters not included in string.printable (from the Python string module) are non-English characters, so strings containing non-ASCII punctuation such as

‘Awesome’
…noted.

would not be detected as 'English' by the script.

In non-English sentences containing the occasional English word, the script just omits these words entirely. Consider the following Korean sentence:

"철수씨는 IBM에 근무한다."

deleteLanguage.py as it is currently implemented will parse the above snippet into the following when it outputs the non-English-only text file:

"철수씨는 근무한다."

The '에' character adjoining IBM is deleted along with the English word. I haven't yet thought up a sure-fire algorithm to avoid this problem; creating prescriptive rules for dozens of one-off cases doesn't seem to be the solution, either.

## Friday, December 5, 2014

### Fixing an issue with RAM statistics in archey 0.1-11 (archbey)

Archbang banner script

The banner script used in Archlinux is called archey:

[archjun@arch ~]$ sudo pacman -Ss archey
community/archey3 0.5-3
Output a logo and various system information

However Archbang has taken this script and renamed it archbey, adding a 'b' for 'bang', I think. archbey is a python2 script located in /usr/bin that creates the following banner every time a terminal window is opened:

.
#.               OS: Archbang x86_64
/;#               Hostname: arch
#;##              Kernel: 3.17.4-1-ARCH
/###'              Uptime: 0:32
;#\   #;            Window Manager: Openbox
+###  .##            Packages: 1269
+####  ;###           RAM: -2523 MB / 3947 MB
######  #####;         CPU: Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
#######  ######         Shell: Bash
######## ########        Root FS: 9.2G / 24G (ext4)
.########;;########\
.########;   ;#######
#########.   .########
######'           '######
;####                 ####;
##'                     '##
#'                         #

As you can see, however, the 'RAM' value listed above is incorrect. Let's take a look at the output of free -m (memory stats in MB):

[archjun@arch ~]$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3947        1081        1721         155        1145        2467
Swap:           999           0         999

We are using 1081 MB of RAM, but the archbey script is giving us a negative value, which is definitely wrong. Let's take a look at the python2 script /usr/bin/archbey to see how it calculates RAM usage:

#!/usr/bin/env python2
#
# archey [version 0.1-11]
...
# Modified for ArchBang -sHyLoCk
...
import os, sys, subprocess, optparse, re
from subprocess import Popen, PIPE
...
# RAM Function
def ram_display():
 raminfo = Popen(['free', '-m'], stdout=PIPE).communicate()[0].split('\n')
 ram = ''.join(filter(re.compile('M').search, raminfo)).split()
 used = int(ram[2]) - int(ram[5]) - int(ram[6])
 output ('RAM', '%s MB / %s MB' % (used, ram[1]))

We can see that the output of free -m is stored in the variable raminfo, which is then filtered with a regex matching the line that contains 'M' (the Mem: line). This line of text is then turned into a list of strings using the built-in method split(), which splits a line into words at every whitespace character. The variable used is then calculated by accessing specific indexes of the list ram (indexes start at 0). Let's look at the relevant code snippets in action:

[archjun@arch ~]$ python2
Python 2.7.8 (default, Sep 24 2014, 18:26:21)
[GCC 4.9.1 20140903 (prerelease)] on linux2
>>> import subprocess
>>> subprocess.Popen(['free', '-m'], stdout=subprocess.PIPE).communicate()[0].split('\n')
['              total        used        free      shared  buff/cache   available', 'Mem:           3947        1036        1772         149        1139        2518', 'Swap:           999           0         999', '']
>>> raminfo = subprocess.Popen(['free', '-m'], stdout=subprocess.PIPE).communicate()[0].split('\n')
>>> import re
>>> ram = ''.join(filter(re.compile('M').search, raminfo)).split()
>>> print ram
['Mem:', '3947', '1050', '1757', '149', '1139', '2503']

We can see from the output above that

ram[0] is the string 'Mem:'
ram[1] is 3947 (total)
ram[2] is 1050 (used)
ram[3] is 1757 (free)
ram[4] is 149 (shared)
ram[5] is 1139 (buff/cache)
ram[6] is 2503 (available)

It is apparent that the formula
used = int(ram[2]) - int(ram[5]) - int(ram[6])
from /usr/bin/archbey is incorrect.

If we want to see how much memory is being used, we should simply look at ram[2] alone, which would give us 1050.
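Outside of Python, awk can pull the used column straight out of free; here is a self-contained sketch that runs the same parse over the free -m output quoted earlier (captured as sample text, so the numbers are fixed):

```shell
# The free -m output from this post, stored as sample input
sample='              total        used        free      shared  buff/cache   available
Mem:           3947        1081        1721         155        1145        2467
Swap:           999           0         999'

# $3 on the Mem: line is the "used" column -- no arithmetic needed
printf '%s\n' "$sample" | awk '/^Mem:/ {print $3}'   # prints 1081
```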

free comes from the package procps-ng:

[archjun@arch ~]$ sudo pacman -Qo free
[sudo] password for archjun:
/usr/bin/free is owned by procps-ng 3.3.10-1

(FYI, pacman -Qo fileName is equivalent to rpm -qf fileName in RHEL/CentOS.)

Was procps-ng updated recently?

[archjun@arch ~]$ sudo cat /var/log/pacman.log | grep procps-ng
[2013-03-30 02:57] upgraded procps-ng (3.3.5-1 -> 3.3.7-1)
[2013-05-21 15:40] [PACMAN] upgraded procps-ng (3.3.7-1 -> 3.3.7-2)
[2013-06-05 12:57] [PACMAN] upgraded procps-ng (3.3.7-2 -> 3.3.8-1)
[2013-06-27 13:50] [PACMAN] upgraded procps-ng (3.3.8-1 -> 3.3.8-2)
[2013-09-21 13:35] [PACMAN] upgraded procps-ng (3.3.8-2 -> 3.3.8-3)
[2013-12-10 10:38] [PACMAN] upgraded procps-ng (3.3.8-3 -> 3.3.9-1)
[2014-01-29 08:45] [PACMAN] upgraded procps-ng (3.3.9-1 -> 3.3.9-2)
[2014-05-03 11:49] [PACMAN] upgraded procps-ng (3.3.9-2 -> 3.3.9-3)
[2014-11-12 23:36] [PACMAN] upgraded procps-ng (3.3.9-3 -> 3.3.10-1)

I wasn't able to find any mention from the procps-ng project page that fields in free were added or changed. Taking a look at the man page for free, it says the following about the field used:

Used memory (calculated as total - free - buffers - cache)

I think the archbey script was trying to manually replicate the calculation above. In that case,

used = int(ram[2]) - int(ram[5]) - int(ram[6])

should be changed to

used = int(ram[1]) - int(ram[3]) - int(ram[5])

But this is a waste of CPU time because ram[2] already contains the value of used from free -m!

We can see that
int(ram[2])
is almost identical to
int(ram[1]) - int(ram[3]) - int(ram[5])

>>> int(ram[1]) - int(ram[3]) - int(ram[5])
1051
>>> int(ram[2])
1050

I am not sure why the values aren't identical; most likely it is because free -m rounds each field to the nearest MB individually, so recomputing used from the rounded fields can be off by a megabyte.

Conclusion

I have fixed the archbey banner by editing lines 112 and 113 in /usr/bin/archbey as follows:

used = int(ram[2])
output ('RAM Used', '%s MB / %s MB' % (used, ram[1]))

Now the terminal banner properly lists RAM usage (of course, archbey or archey3 must be included in your ~/.bashrc file):

.
#.               OS: Archbang x86_64
/;#               Hostname: arch
#;##              Kernel: 3.17.4-1-ARCH
/###'              Uptime: 0:37
;#\   #;            Window Manager: Openbox
+###  .##            Packages: 1269
+####  ;###           RAM Used: 1114 MB / 3947 MB
######  #####;         CPU: Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
#######  ######         Shell: Bash
######## ########        Root FS: 9.2G / 24G (ext4)
.########;;########\
.########;   ;#######
#########.   .########
######'           '######
;####                 ####;
##'                     '##
#'                         #

This script could use some love; for one thing, all the indentation is non-kosher by Python standards -- the default indentation should be 4 spaces, but this script uses just a single space throughout. By contrast, archey on github is properly indented with 4 spaces for code inside functions and classes.

The archbey script was based on archey 0.1-11, but today archey(3) is at version 0.5-3. Unfortunately, no package owns archbey:

[archjun@arch ~]$ sudo pacman -Qo archbey
[sudo] password for archjun:
error: No package owns /usr/bin/archbey

It is a script that was added on top of the base Archlinux install by the Archbang installer, so it is not updated by pacman -Syyu the way archey would be. I could always just get rid of archbey and install archey3 instead for creating a pretty Archlinux terminal banner.

## Tuesday, December 2, 2014

### Dealing with some new changes in ibus-hangul 1.5.0

Before the recent upgrade to ibus-hangul 1.5.0, a single press of the user-defined hotkey for the next input method toggled between Korean (hangul) and English. After the upgrade, however, a single press of the next-input-method hotkey does not immediately enable hangul input; a new menu item called "hangul mode" must also be toggled separately by mouse on the ibus tray icon for hangul to appear.

To fix this issue, right-click on the ibus tray icon and navigate to ibus-hangul preferences -> Input Method -> Korean - Hangul. Then click Preferences and make sure "Start in hangul mode" is checked in the following dialog box. Now pressing the hangul toggle key once should enable Korean input without having to click "Hangul Mode" with your mouse after right-clicking the ibus tray icon.

Another issue is that Alt_R no longer seems to work as the hangul toggle key; although you can manually specify Alt_R as the next-input-method toggle key in ibus-setup and ibus-hangul, once you enter hangul mode with Alt_R you cannot switch back to English by pressing the same key (Alt_R), as you could before the upgrade. My temporary workaround is to use Shift+space as the IME toggle key in both ibus and ibus-hangul.
Also, there is a typo in the IME toggle key dialog for ibus-hangul 1.5.0: "Press any key which you want to use as hanja key" should be "Press any key which you want to use as hangul key". The hanja key (for Chinese character input in Korean) is usually Ctrl_R (right-side Ctrl) or F9 on Korean keyboards, whereas the hangul key is usually Alt_R.

## Wednesday, November 26, 2014

### Make sure to disable NetworkManager before manually assigning IP addresses to network interfaces

This might be obvious, but I am writing this post as a note to my future self: stop NetworkManager before you do network configuration work in Linux!

The following happened to me several weeks ago. I was sent to install RHEL on several machines during a server-room maintenance window from 2:00~5:00 am. Strangely, however, PXE installation kept freezing in the middle of the installation. tail -f /var/log/messages revealed that NetworkManager was deleting the IP address I had assigned to the PXE server (followed by messages that ntpd was removing the IP from its records). I once again manually added an IP with

ip addr add 192.168.10.100/24 broadcast 192.168.10.255 dev eth0

which enabled the PXE install over http to continue once again. But every 5 minutes, NetworkManager would again delete the IP, requiring me to manually add it once more. Once I stopped the NM service altogether, random IP address deletions no longer occurred.

As my PXE server is a CentOS 6.5 VM that uses SysVinit, to stop the NetworkManager service I use:

service NetworkManager stop

but for those of you on distros using systemd, you would use:

systemctl stop NetworkManager

When adding an IP, I usually just specify addr/CIDR and the broadcast address (which has worked just fine for me), but this tutorial from the Archlinux docs recommends also adding routing information.

Postscript 2015.12.04: Nowadays I am working with Openstack at work, and NetworkManager and openstack are incompatible.
After the openstack installation completes using scripts from packstack or devstack, you will be warned if the NetworkManager daemon is running. For machines intended as openstack nodes, it is a good idea to just disable NetworkManager (or any other network connection management daemon like wicd, connman, etc.) entirely:

systemctl disable NetworkManager
systemctl stop NetworkManager

## Tuesday, November 18, 2014

### Troubleshooting a failure of ntpd.service at system startup

Several months ago, I noticed that journalctl contained messages about ntpd.service failing. systemctl status ntpd also confirmed that systemd had failed to load ntpd. A quick-and-dirty hack (that doesn't solve the underlying problem) is to just run

sudo ntpd -qgd

to manually run the ntp daemon (and update the system time even if there is a difference of over 1000s between local time and ntp server time). This makes a one-time change to the system clock after ntpd queries the Network Time Protocol servers defined in /etc/ntp.conf.

Today I had some free time, so I decided to take a closer look at the problem. I discovered several issues:

1. Manually starting the ntpd daemon conflicts with starting ntpd.service through systemd

I know it sounds like common sense, but at times I seem to lack this resource. This problem is characterized by the following error message in journalctl:

... unable to bind to wildcard address :: - another process may be running - EXITING

By checking running processes, we can see that, sure enough, ntpd is already running:

[archjun@arch ~]$ ps aux | grep ntp
root      1699  0.0  0.3 105200 14588 ?        SLs  10:47   0:00 ntpd
archjun  28055  0.0  0.0  11908  2276 pts/2    S+   11:00   0:00 grep ntp

So problem #1 was solved by doing a kill -15 on pid 1699 shown above.

2. Create user ntp

Invoking systemctl start ntpd still didn't work, however. journalctl -f (equivalent of tail -f /var/log/messages for non-systemd machines) showed the following error:

Nov 18 11:02:59 arch ntpd[1241]: Cannot find user `ntp'
Nov 18 11:02:59 arch systemd[1]: ntpd.service: main process exited, code=exited, status=255/n/a

That's strange. Despite re-installing the ntp package through pacman, user ntp was not created (checked with cat /etc/passwd |grep ntp), although group ntp was created (verified with cat /etc/group |grep ntp).

I tried to create user ntp with a simple useradd ntp, but my system complained that there was already a group with the same name. I thus added user ntp and added them to group ntp all in the same command:
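A useradd invocation of roughly this shape creates the user while reusing the pre-existing group (this is my reconstruction, not verbatim from my shell history; check useradd(8) for your distro's flags):

```
useradd -g ntp ntp    # -g assigns the existing ntp group as the primary group
```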

Now when I run systemctl start ntpd everything looks fine when checked with systemctl status ntpd and in journalctl:

Nov 18 11:07:43 arch systemd[1]: Starting Network Time Service...
Nov 18 11:07:43 arch ntpd[11063]: ntpd 4.2.7p465@1.2483-o Sun Sep  7 07:03:04 UTC 2014 (1): Starting
Nov 18 11:07:43 arch ntpd[11063]: Command line: /usr/bin/ntpd -g -u ntp:ntp
Nov 18 11:07:43 arch systemd[1]: Started Network Time Service.
Nov 18 11:07:43 arch ntpd[11064]: proto: precision = 1.047 usec (-20)
Nov 18 11:07:43 arch ntpd[11064]: Listen and drop on 0 v6wildcard [::]:123
Nov 18 11:07:43 arch ntpd[11064]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 2 lo 127.0.0.1:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 3 wlp12s0 192.168.0.9:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 4 lo [::1]:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 5 wlp12s0 [fe80::21f:3cff:fe46:6467%3]:123
Nov 18 11:07:43 arch ntpd[11064]: Listening on routing socket on fd #22 for interface updates

https://bbs.archlinux.org/viewtopic.php?id=155120

## Tuesday, November 11, 2014

### Setting up OpenDaylight Hydrogen VM - Ubuntu 14.04

On Wednesday and Thursday (Nov. 5 & 6) last week, I participated in my first Hackathon -- Cisco Codefest 2014 -- which was held at the POSCO Engineering and Construction Building main hall 4F in Songdo, South Korea. The track my team chose for the competition was OpenDaylight, a Software Defined Networking (SDN) framework led by the Linux Foundation, Redhat, Cisco, and other partners. This framework enables network functions to be virtualized (NFV - Network Function Virtualization), which opens up the possibility of, among other things, regular PCs with multiple PCI network cards acting as cheap switches.

Although this contest was a coding competition, our team spent a lot of time doing Linux sysadmin work to get the OpenDaylight VMs into a usable state. We also spent a significant chunk of time learning how to use mininet, the network simulator bundled with the OpenDaylight VM, and setting up network visualization (DLUX - Daylight User Interface) provided by Karaf from the ODL Helium container. This post will walk you through the process of customizing the Ubuntu 14.04 VM available for download from the ODL Hydrogen downloads page (you can navigate there from the main OpenDaylight Downloads page).

Virtualbox Settings for Ubuntu 14.04 VM

As of 2014-11-11 the download link for the Ubuntu 14.04 VM appears as odl-test-desktop-3.ova, but once you actually import it into Virtualbox, the name of the VM appears as odl-test-desktop-2, which is the name of the Ubuntu 13.04 VM available at the second download link in the middle. Here's a screenshot of the page:

I thought I had downloaded the wrong VM, but after verifying with uname -a that the guest OS was running kernel 3.13, I was sure I was running Ubuntu 14.04 (instead of 13.04, the version in the other two VMs available for download). Note that all the Ubuntu ODL images are 64-bit, so if you're running a 32-bit version of, say, Windows 7 (or Linux, for that matter), you will not be able to run the VMs even if you have Virtualbox installed (my Codefest teammate actually went out and bought a new laptop because his Core i5 machine with only 2GB RAM wasn't up to the task of running the heavy Ubuntu 14.04 VM).

First, after importing the VM you should tweak some settings for speed. The download page does mention "disable 3D acceleration otherwise left menu may not show" (the Unity bar on the left-hand margin), but in my case, leaving 3D acceleration on in the VM display settings also caused problems with my host machine display, making me unable to minimize the VM's full-screen window. So you definitely want to make sure the 3D acceleration box is not checked in the following window from the Virtualbox Manager:

Second, from the "System" tab above, you should reduce the number of CPU cores allocated to the VM from 4(!) to something more manageable for your system. Likewise with memory: the VM is initially allocated 4096 MB (4 GB), but I found that for simple topologies 2 CPU cores and 2 GB work fine.

Third, from the "Network" tab above, attach Adapter 1 to "Bridged Adapter" and choose the name of your host machine's network interface. Adapter 1 will provide your VM with Internet access through your host and will also give the VM an IP within the same subnet as your host, making ssh possible. You should also enable Adapter 2 as a "Host-only" interface that you can use exclusively for a mininet network. If you do not have any host-only interfaces enabled, click "File" -> "Preferences" -> "Network" from the Virtualbox Manager and select the "Host-only Networks" tab. Click the '+' icon to add one interface (which appears as vboxnet0 on Linux hosts).
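If you prefer the CLI, the same host-only interface can be created with VBoxManage (a sketch; 192.168.56.1 is VirtualBox's usual default host-side address, stated here as an assumption):

```shell
# create a new host-only interface (shows up as vboxnet0 on Linux hosts)
VBoxManage hostonlyif create
# give the host side its usual address on the 192.168.56.0/24 subnet
VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
```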

Fourth, the VM will not be able to go full screen until you install the Virtualbox Guest Additions in the guest OS. Refer to this previous post on my blog for instructions on how to do this in a Linux VM. It would be nice if the people at the ODL foundation actually installed the Virtualbox Guest Additions in the Ubuntu 14.04 VM before exporting it as an .ova file...

Setting up the Application Environment

Fifth, seriously consider installing a lighter Desktop Environment (DE) like XFCE or LXDE. Unity with compiz is installed by default as the Ubuntu 14.04 DE, but its 3D bling, compositing, drop shadows, etc. are superfluous to the job of running network simulations for SDN. In fact, once you launch Karaf (from ODL Helium) and mininet, you will find that even powerful systems lag (my teammate's brand-new Core i7 laptop with a 256GB SSD and 8GB of RAM ran sluggishly with Unity enabled while running mininet et al.). If you run top within the VM while using Unity, it is not uncommon to see compiz taking up 20%+ of CPU.

Don't worry -- installing a new DE does not require the removal of Unity and compiz! Once the new DE is installed through apt-get, you can select what DE to use from the login screen. To install either XFCE or LXDE for Ubuntu, enter one of the following:

sudo apt-get install --no-install-recommends xubuntu-desktop

sudo apt-get install --no-install-recommends lubuntu-desktop

It would also be a good idea to turn off automatic updates... this is a VM, after all, so you should only do point updates for packages that really need it.

Sixth, set up Wireshark. Although it is pre-installed in the VM, running it as the regular user mininet will not allow you to listen to network traffic on any of the network interfaces. That doesn't mean you should run Wireshark as root, however (doing so is a security risk). Instead, run sudo dpkg-reconfigure wireshark-common from the CLI. This will enable regular users to listen on network interfaces as long as they are members of the newly-created group wireshark. To add user mininet to group wireshark:

sudo usermod -a -G wireshark mininet

For the change to take effect, you must log out and log back in.

Seventh, install Karaf from ODL Helium. From the main ODL download page, click the link for either the zip or tar archive and extract the files into a folder inside the ODL Hydrogen VM. Then navigate to the extracted folder's bin directory and invoke karaf as follows:

mininet@mininet-vm:~/distribution-karaf-0.2.0-Helium/bin$ ./karaf

karaf will work with any JVM running Java 1.7 or higher. Fortunately, the Ubuntu VM comes with the OpenJDK 7 runtime already installed. Once karaf is up and running, enter the following at the karaf prompt to install the features needed by DLUX:

opendaylight-user@root>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core

Now when you create a network in mininet and generate traffic between hosts, you will be able to see the topology by opening a browser within the VM, navigating to localhost:8181/dlux/index.html, logging in with admin:admin, and clicking on "Topology" from the left-hand menu within the browser page body. (Note: if you launch mininet with default settings, it will try to create a network controller on localhost:6633, but this conflicts with the controller created by karaf. To avoid this, you need to specify the IP of a remote controller along with a port other than 6633; in our case, we will use port 6653.)

Example of mininet and karaf in action with topology visualization

Open two terminals within the VM and start karaf in one of them. Once karaf is fully loaded (this can take up to 60 seconds), launch mininet in the other terminal with a remote controller residing at our VM's host-only IP, 192.168.56.101:

mininet@mininet-vm:~$ sudo mn --mac --switch=ovsk,protocols=OpenFlow13 --controller=remote,ip=192.168.56.101,port=6653 --topo=tree,2

We have created a tree topology 2 levels deep (switch s1 is the root node; switches s2 and s3 branch from s1) using Open vSwitch with OpenFlow 1.3.

Note that we have changed the controller port to 6653 so that it will not conflict with the karaf controller. Within the mininet command environment, we invoked pingall -c 1 (ping all hosts 1 time) to generate network traffic (if there is no traffic, DLUX will not show any flows). Now navigating to localhost:8181 and clicking on Topology should show the following:

Now your Ubuntu 14.04 VM for ODL Hydrogen should be ready for you to experiment with SDN!

## Wednesday, November 5, 2014

### Time-lapse recording of meditation sessions with motion, ffmpeg, and bash script

Background

I have been doing insight meditation for the past several months, but have a problem with being consistent and disciplined enough to actually meditate when I feel tired or too busy. I have used Beeminder (here are my automatically tracked goals) to good effect to keep me on track for certain goals that can be automatically monitored, i.e. RescueTime, Fitbit step count, Dual N-Back practice sessions, number of cards reviewed in Mnemosyne (similar to Anki but easier to use), etc.

For more free-form goals that require human oversight, StickK is a good tool. Both Beeminder and StickK are commitment devices that help people actually stick to their goals by making them put money on the line; if you go off track (determined by a human referee in the case of StickK, and by a computer in the case of Beeminder), your credit card gets charged some amount (that exponentially increases in Beeminder).

I track weekly pushups and meditation sessions using StickK. In the case of pushups, I prop up my smartphone so that it will record me doing one set of X pushups. The video is automatically uploaded to G+ from where I share a link to the video with my StickK referee. For 25 pushups, the videos are usually about 60 seconds long. But this will not work for meditation, considering the fact that a meditation session can last anywhere from 10 to 30 minutes. The battery life on many smartphones is pretty terrible, and even if you could take a half-hour video of yourself sitting on a mat meditating, who the hell would watch it? I want to save my StickK referee from such torture as well as save memory card space and battery life on my smartphone. Solution: use Linux!

The Tools

motion

motion is a webcam utility that begins taking snapshots when movement is detected in front of the camera. It is included in the package repositories of many Linux distros, and I use the version from the Archlinux community repository. motion is commonly used in DIY CCTV projects using the Raspberry Pi; however, I will use it to take time-lapse photos of my meditation sessions. By default, motion will start taking pictures whenever it senses movement, but we need to change this default behavior so that it will take a photo every N seconds. To do this, you need to edit /etc/motion/motion.conf as follows:

First make sure that motion daemon mode is turned off

############################################################
# Daemon
############################################################

# Start in daemon (background) mode and release terminal (default: off)
daemon off

Although the comment above claims the default is off, in Archlinux the default is actually on.

############################################################
# Snapshots (Traditional Periodic Webcam File Output)
############################################################

# Make automated snapshot every N seconds (default: 0 = disabled)
snapshot_interval n

In our case, we set n to 6.

Since we will be taking time-lapse photos, we need to turn off the feature that takes pictures when motion is detected:

############################################################
# Image File Output
############################################################

# Output 'normal' pictures when motion is detected (default: on)
...
# Can be used as preview shot for the corresponding movie.
output_normal off

After installing the motion package in Archlinux, you also have to edit the permissions on /var/run/motion so that it is writable by the regular user. Something like the following should do the trick:
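For example (a sketch; substitute your own user name for archjun):

```shell
# make motion's runtime directory writable by the regular user
sudo chown archjun:archjun /var/run/motion
```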

If your /usr/local directory is not already writable by the regular user, you will also need to recursively change the permissions on this directory as well because motion outputs all its image files to the path /usr/local/apache2/htdocs/cam1 (at least this is true on Archlinux). You can recursively change ownership of directories and subdirectories using the -R option in chown.
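Concretely, something like this (a sketch using the output path above; again, replace archjun with your own user name):

```shell
# recursively take ownership of the snapshot output directory
sudo chown -R archjun:archjun /usr/local/apache2/htdocs/cam1
```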

There are several other config changes you might want to make to /etc/motion/motion.conf.

If you don't want motion to create partial preview videos every X frames as well as a final video (I think it's a waste of space), you need to edit the following section:

# Use ffmpeg to encode mpeg movies in realtime (default: off)
ffmpeg_cap_new off

ffmpeg

Once you have finished taking time-lapse photos with motion, all the images will be contained in /usr/local/apache2/htdocs/cam1.

Now you need to use ffmpeg to render these time-lapse images into a video (without audio). In our video, we want each image to be shown for 1 second, and the video should be 25 frames per second. We can render such a video as follows:

ffmpeg -framerate 1/1 -pattern_type glob -i "*.jpg" -r 25 filename.mp4

Note:

• The -framerate flag ensures that each input image appears for 1 second (to make each image appear for n seconds, set this flag to 1/n).
• To use wildcards with the -i (input) flag (to render all jpg files in a directory, for example), you need to precede -i with the -pattern_type flag's glob option.
• The -r flag sets the frame rate of the output video file.

bash script

You don't want to have to remember the ffmpeg invocation above every time you want to record a time-lapse video, so I made a script to launch motion, render the images, and then clean up the output directory once video rendering is complete.

My original script didn't have any error trapping before running rm *.jpg, and I bitterly came to regret this when one day ffmpeg exited with an error; despite no .mp4 video being rendered, my silly script deleted all the jpg files. The unsafe lines were as follows:
...
ffmpeg -framerate 1/1 -pattern_type glob -i "*.jpg" -r 25 $DATE.mp4
rm *.jpg

The problem with these lines is that even if ffmpeg exits with an error, the script will not stop; the rm command will still be executed! After learning my lesson the hard way, I read up on error handling from William Shotts' wonderful bash tutorial. I also highly recommend his book, The Linux Command Line.
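With error handling added, the core of my script now looks something like this (a sketch; the timestamp format and the assumption that the script runs inside the image output directory are mine):

```shell
#!/bin/bash
# Render time-lapse jpgs into a video and delete them ONLY on success.
set -e                        # abort the script if any command fails

DATE=$(date +%Y-%m-%d_%H%M)   # timestamped output file name

# if ffmpeg exits non-zero, set -e stops the script before the rm below
ffmpeg -framerate 1/1 -pattern_type glob -i "*.jpg" -r 25 "$DATE.mp4"

# reached only when rendering succeeded
rm -- *.jpg
```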