Monday, December 8, 2014

A quick-and-dirty Python 3 script to extract only English or non-English characters from a text file

When I was a full-time interpreter/translator, clients would occasionally send me proofreading work in the form of dual-language files containing alternating lines of Korean followed by English. The problem is that translators bill their clients by counting the number of words in the source language, but this is not possible when the source and target languages are mixed together in the same file!

Imagine that you have a 100-page .doc file with alternating lines of English and some foreign language. It is not feasible to manually cut-and-paste all the foreign language sentences into another file! Luckily, Python 3 exists and it is UTF-8 friendly, so we can easily manipulate English and all kinds of foreign languages within Python 3 programs.

My script is called deleteLanguage.py and it is available on github at https://github.com/gojun077/deleteLanguage.

It will take mixed text like the following (I didn't do this horrible translation into English, btw)

기업에서 왜 트리즈를 교육해야 하는가?
Why do the companies should educate TRIZ to their members?
오늘날 특히 기업에서의 연구개발은 문제를 해결하느냐 못하느냐의 문제가 아니다.
Today being able to solve the problems or not being isn’t a real problem in the corporation’s research and development.
얼마나 빨리 새로운 결과를 찾아내는 가에 따라 성공여부가 결정된다.
The success of them depends on how fast they can find the new solutions.
하지만 우리들은 문제를 더 빨리 혁신적으로 해결할 수 있는 방법을 공부한 적이 없다.
But we have never learned to solve problems faster and more innovative.
대부분의 많은 연구개발자들은 창의적인 문제해결이 무엇인지도 모른다.
Most researchers and engineers don’t even know what the creative method to solve the problems is.
오늘날도 많은 연구자들은 각자 기존의 경험과 지식을 바탕으로 열심히 생각하기를 한다.
Today they are thinking hard based on only their own experience and knowledge.

and parse it into separate English

Why do the companies should educate TRIZ to their members?
Today being able to solve the problems or not being a real problem in the research and development.
The success of them depends on how fast they can find the new solutions.
But we have never learned to solve problems faster and more innovative.
Most researchers and engineers even know what the creative method to solve the problems is.
Today they are thinking hard based on only their own experience and knowledge.

and non-English output:

기업에서 왜 트리즈를 교육해야 하는가?
오늘날 특히 기업에서의 연구개발은 문제를 해결하느냐 못하느냐의 문제가 아니다.
얼마나 빨리 새로운 결과를 찾아내는 가에 따라 성공여부가 결정된다.
하지만 우리들은 문제를 더 빨리 혁신적으로 해결할 수 있는 방법을 공부한 적이 없다.
대부분의 많은 연구개발자들은 창의적인 문제해결이 무엇인지도 모른다.
오늘날도 많은 연구자들은 각자 기존의 경험과 지식을 바탕으로 열심히 생각하기를 한다.

The version in the initial commit has the following limitations:

The script assumes that any character not included in string.printable (from the Python string module) is a non-English character, so strings such as

‘Awesome’
‘...noted.’

(note the curly quotes, which are not ASCII) would not be detected as 'English' by the script.
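
To make the limitation concrete, here is a minimal sketch (my own illustration, not the actual code from deleteLanguage.py) of the kind of string.printable test described above:

#!/usr/bin/env python3
# Minimal sketch of the string.printable test described above;
# this is an illustration, not the code from deleteLanguage.py.
import string

def is_english_word(word):
    """Return True only if every character in word is ASCII-printable."""
    return all(ch in string.printable for ch in word)

print(is_english_word("Awesome"))      # True  - plain ASCII
print(is_english_word("‘Awesome’"))    # False - curly quotes are not ASCII
print(is_english_word("isn’t"))        # False - curly apostrophe
print(is_english_word("트리즈"))        # False - hangul

A word that fails this test is treated as non-English, which is presumably why the words containing curly apostrophes (isn’t, don’t, corporation’s) were dropped from the English output above.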

In non-English sentences containing the occasional English word, the script just omits these words entirely. Consider the following Korean sentence:


"철수씨는 IBM에 근무한다."

deleteLanguage.py as it is currently implemented will parse the above snippet into the following when it outputs the non-English only text file:

"철수씨는 근무한다."

The '에' character adjoining IBM is deleted along with the English word.

I haven't yet thought up a sure-fire algorithm to avoid this problem; creating prescriptive rules for dozens of one-off cases doesn't seem to be the solution, either.

Friday, December 5, 2014

Fixing an issue with RAM statistics in archey 0.1-11 (archbey) Archbang banner script

The banner script used in Archlinux is called archey:

[archjun@arch ~]$ sudo pacman -Ss archey
[sudo] password for archjun: 
community/archey3 0.5-3
    Output a logo and various system information

However, Archbang has taken this script and renamed it archbey, adding a 'b' for 'bang', I think. archbey is a Python 2 script located in /usr/bin that creates the following banner every time a terminal window is opened:

               .                
               #.               OS: Archbang x86_64
              /;#               Hostname: arch
              #;##              Kernel: 3.17.4-1-ARCH
             /###'              Uptime: 0:32
            ;#\   #;            Window Manager: Openbox
           +###  .##            Packages: 1269
          +####  ;###           RAM: -2523 MB / 3947 MB
         ######  #####;         CPU: Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
        #######  ######         Shell: Bash
       ######## ########        Root FS: 9.2G / 24G (ext4)
     .########;;########\        
    .########;   ;#######       
    #########.   .########`     
   ######'           '######    
  ;####                 ####;   
  ##'                     '##   
 #'                         `#  


As you can see, however, the 'RAM' value listed above is incorrect. Let's take a look at the output of free -m (memory stats in MB):

[archjun@arch ~]$ free -m
              total        used        free      shared  buff/cache   available
Mem:           3947        1081        1721         155        1145        2467
Swap:           999           0         999

We are using 1081 MB of RAM, but the archbey script is giving us a negative value, which is definitely wrong.

Let's take a look at the python2 script /usr/bin/archbey to see how it is calculating RAM usage:

#!/usr/bin/env python2
#
# archey [version 0.1-11]
...
# Modified for ArchBang -sHyLoCk
...
import os, sys, subprocess, optparse, re
from subprocess import Popen, PIPE
...
# RAM Function
def ram_display():
 raminfo = Popen(['free', '-m'], stdout=PIPE).communicate()[0].split('\n')
 ram = ''.join(filter(re.compile('M').search, raminfo)).split()
 used = int(ram[2]) - int(ram[5]) - int(ram[6])
 output ('RAM', '%s MB / %s MB' % (used, ram[1]))

We can see that the output of free -m is being stored in the variable raminfo, which is then filtered with a regex that keeps only the line containing 'M' (in this output, the 'Mem:' line).

This line of text is then turned into a list of strings using the split() string method, which splits the line into words at every whitespace character.

The variable used is then calculated by accessing specific indexes of the list ram (indexing starts at 0).

Let's look at the relevant code snippets in action:

[archjun@arch ~]$ python2
Python 2.7.8 (default, Sep 24 2014, 18:26:21) 
[GCC 4.9.1 20140903 (prerelease)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import subprocess
>>> subprocess.Popen(['free', '-m'], stdout=subprocess.PIPE).communicate()[0].split('\n')
['              total        used        free      shared  buff/cache   available', 'Mem:           3947        1036        1772         149        1139        2518', 'Swap:           999           0         999', '']
>>> raminfo = subprocess.Popen(['free', '-m'], stdout=subprocess.PIPE).communicate()[0].split('\n')
>>> import re
>>> ram = ''.join(filter(re.compile('M').search, raminfo)).split()
>>> print ram
['Mem:', '3947', '1050', '1757', '149', '1139', '2503']

We can see from the output above that

ram[0] is the string 'Mem:'
ram[1] is 3947 (total)
ram[2] is 1050 (used)
ram[3] is 1757 (free)
ram[4] is 149 (shared)
ram[5] is 1139 (buff/cache)
ram[6] is 2503 (available)

It is apparent that the formula
used = int(ram[2]) - int(ram[5]) - int(ram[6])
from /usr/bin/archbey is incorrect.

If we want to see how much memory is being used, we should simply look at ram[2] alone, which would give us 1050.

free comes from the package procps-ng:

[archjun@arch ~]$ sudo pacman -Qo free
[sudo] password for archjun: 
/usr/bin/free is owned by procps-ng 3.3.10-1

(FYI, pacman -Qo fileName is equivalent to rpm -qf fileName in RHEL/CentOS)

Was procps-ng updated recently?

[archjun@arch ~]$ sudo cat /var/log/pacman.log |grep procps-ng
[2013-03-30 02:57] upgraded procps-ng (3.3.5-1 -> 3.3.7-1)
[2013-05-21 15:40] [PACMAN] upgraded procps-ng (3.3.7-1 -> 3.3.7-2)
[2013-06-05 12:57] [PACMAN] upgraded procps-ng (3.3.7-2 -> 3.3.8-1)
[2013-06-27 13:50] [PACMAN] upgraded procps-ng (3.3.8-1 -> 3.3.8-2)
[2013-09-21 13:35] [PACMAN] upgraded procps-ng (3.3.8-2 -> 3.3.8-3)
[2013-12-10 10:38] [PACMAN] upgraded procps-ng (3.3.8-3 -> 3.3.9-1)
[2014-01-29 08:45] [PACMAN] upgraded procps-ng (3.3.9-1 -> 3.3.9-2)
[2014-05-03 11:49] [PACMAN] upgraded procps-ng (3.3.9-2 -> 3.3.9-3)
[2014-11-12 23:36] [PACMAN] upgraded procps-ng (3.3.9-3 -> 3.3.10-1)

Apparently it was upgraded on Nov. 12th, about 3 weeks ago.

I wasn't able to find any mention on the procps-ng project page that fields in free were added or changed. The man page for free says the following about the used field:

Used memory (calculated as total - free - buffers - cache)

I think the archbey script was trying to manually replicate the calculation above. In that case,

used = int(ram[2]) - int(ram[5]) - int(ram[6])

should be changed to

used = int(ram[1]) - int(ram[3]) - int(ram[5])

But this is a waste of CPU time because ram[2] already contains the value of used from free -m!

We can see that
int(ram[2])
is almost identical to
int(ram[1]) - int(ram[3]) - int(ram[5])

>>> int(ram[1]) - int(ram[3]) - int(ram[5])
1051
>>> int(ram[2])
1050

I am not sure why the values aren't identical, however (perhaps a rounding error from buff/cache?).

Conclusion

I have fixed the archbey banner by editing lines 112 and 113 in /usr/bin/archbey as follows:

used = int(ram[2])
output ('RAM Used', '%s MB / %s MB' % (used, ram[1]))
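
For reference, here is a standalone sketch of the patched RAM function (adapted from the snippet quoted above, so not byte-for-byte the file contents; the output() stand-in below just prints, whereas the real archbey helper formats a banner line):

#!/usr/bin/env python2
# Standalone sketch of the patched RAM function, adapted from archbey.
import re
from subprocess import Popen, PIPE

def output(key, value):
    # stand-in for archbey's banner helper, which is defined elsewhere
    print '%s: %s' % (key, value)

def ram_display():
    # capture 'free -m' output and split it into lines
    raminfo = Popen(['free', '-m'], stdout=PIPE).communicate()[0].split('\n')
    # keep the line containing 'M' (the 'Mem:' line) and split it into fields
    ram = ''.join(filter(re.compile('M').search, raminfo)).split()
    # ram[2] is already the 'used' column reported by free -m
    used = int(ram[2])
    output('RAM Used', '%s MB / %s MB' % (used, ram[1]))

ram_display()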

Now the terminal banner properly lists RAM usage (of course, archbey or archey3 must be invoked from your ~/.bashrc for the banner to appear):

               .                
               #.               OS: Archbang x86_64
              /;#               Hostname: arch
              #;##              Kernel: 3.17.4-1-ARCH
             /###'              Uptime: 0:37
            ;#\   #;            Window Manager: Openbox
           +###  .##            Packages: 1269
          +####  ;###           RAM Used: 1114 MB / 3947 MB
         ######  #####;         CPU: Intel(R) Core(TM)2 Duo CPU T7500 @ 2.20GHz
        #######  ######         Shell: Bash
       ######## ########        Root FS: 9.2G / 24G (ext4)
     .########;;########\        
    .########;   ;#######       
    #########.   .########`     
   ######'           '######    
  ;####                 ####;   
  ##'                     '##   
 #'                         `#  

This script could use some loving; for one thing, all the indentation is non-kosher by Python standards -- default indentation should be 4 spaces, but this script uses just a single space throughout. By contrast, archey on github is properly indented, with 4 spaces for code inside functions and classes.

The archbey script was based on archey 0.1-11, but today archey(3) is at version 0.5-3. Unfortunately, no package owns archbey:

[archjun@arch ~]$ sudo pacman -Qo archbey
[sudo] password for archjun: 
error: No package owns /usr/bin/archbey

It is a script that was added on top of the base Archlinux install by the Archbang installer, so it is not updated by pacman -Syyu like archey would be.

I could always just get rid of archbey and install archey3 instead for creating a pretty Archlinux terminal banner.

Tuesday, December 2, 2014

Dealing with some new changes in ibus-hangul 1.5.0

Before the recent upgrade to ibus-hangul 1.5.0, a single press of the user-defined hotkey for next input method toggled between Korean (hangul) and English.

After the upgrade, however, a single press of the next input method hotkey does not immediately enable hangul input; a new menu item called "hangul mode" must also be toggled separately with the mouse via the ibus tray icon for hangul to appear.

To fix this issue, right-click on the ibus tray icon and navigate to ibus-hangul preferences -> Input Method -> Korean - Hangul.



Then click Preferences and make sure "Start in hangul mode" is checked in the following dialog box.



Now pressing the Hangul toggle key once should enable Korean input without having to click "Hangul Mode" with your mouse after right-clicking the ibus tray icon.

Another issue is that Alt_R no longer seems to work as the hangul toggle key; although you can manually specify Alt_R as the next input method toggle key in ibus-setup and ibus-hangul, once you enter hangul mode with Alt_R you cannot switch back to English by pressing the same key, as you could before the upgrade. My temporary workaround is to use Shift+space as the IME toggle key in both ibus and ibus-hangul.

Also there is a typo in the IME toggle key dialog for ibus-hangul 1.5.0:



"Press any key which you want to use as hanja key"

should be

"Press any key which you want to use as hangul key"

The hanja key (for Chinese character input in Korean) is usually Ctrl_R (right side Ctrl) or F9 on Korean keyboards, whereas the hangul key is usually Alt_R.

Wednesday, November 26, 2014

Make sure to disable NetworkManager before manually assigning IP addresses to network interfaces

This might be obvious, but I am writing this post as a note to myself in the future: stop NetworkManager before you do network configuration work in Linux!

I had the following happen to me several weeks ago. I was sent to install RHEL on several machines during a server room maintenance window from 2:00~5:00 am. Strangely, however, the PXE installation kept freezing partway through. tail -f /var/log/messages revealed that NetworkManager was deleting the IP address I had assigned to the PXE server (followed by messages that ntpd was removing the IP from its records). I once again manually added an IP with

ip addr add 192.168.10.100/24 broadcast 192.168.10.255 dev eth0

which enabled the PXE install over HTTP to continue once again. But every 5 minutes, NetworkManager would again delete the IP, requiring me to manually add it once more. Once I stopped the NetworkManager service altogether, the random IP address deletions no longer occurred. As my PXE server is a CentOS 6.5 VM that uses SysVinit, to stop the NetworkManager service I use:

service NetworkManager stop

but for those of you on distros using systemd you would use

systemctl stop NetworkManager

When adding an IP, I usually just specify addr/CIDR and broadcast address (which has worked just fine for me), but this tutorial from the Archlinux docs recommends also adding routing information.

Postscript 2015.12.04
Nowadays I am working with OpenStack at work, and NetworkManager and OpenStack are incompatible. After the OpenStack installation completes using scripts from packstack or devstack, you will be warned if the NetworkManager daemon is running. For machines intended as OpenStack nodes, it is a good idea to just disable NetworkManager (or any other network connection management daemon like wicd, connman, etc.) entirely:

systemctl disable NetworkManager
systemctl stop NetworkManager

Tuesday, November 18, 2014

Troubleshooting a failure of ntpd.service at system startup

Several months ago, I noticed that journalctl contained messages about ntpd.service failing. systemctl status ntpd also confirmed that systemd had failed to start ntpd. A quick and dirty hack (that doesn't solve the underlying problem) is to just run sudo ntpd -qgd to manually run the ntp daemon once (and update the system time even if there is a difference of over 1000s between local time and ntp server time). This makes a one-time change to the system clock after ntpd queries the Network Time Protocol servers defined in /etc/ntp.conf.

Today I had some free time so I decided to take a closer look at the problem. I discovered several issues:

1. Manually starting the ntpd daemon conflicts with starting ntpd.service via systemd

I know it sounds like common sense, but at times I seem to lack this resource. This problem is characterized by the following error message in journalctl:
...
unable to bind to wildcard address :: - another process may be running - EXITING

By checking running processes, we can see that, sure enough, ntpd is already running:

[archjun@arch ~]$ ps aux | grep ntp
root      1699  0.0  0.3 105200 14588 ?        SLs  10:47   0:00 ntpd
archjun  28055  0.0  0.0  11908  2276 pts/2    S+   11:00   0:00 grep ntp

So problem #1 was solved by doing a kill -15 on PID 1699 shown above.


2. Create user ntp

Invoking systemctl start ntpd still didn't work, however. journalctl -f (equivalent of tail -f /var/log/messages for non-systemd machines) showed the following error:

Nov 18 11:02:59 arch ntpd[1241]: Cannot find user `ntp'
Nov 18 11:02:59 arch systemd[1]: ntpd.service: main process exited, code=exited, status=255/n/a

That's strange. Despite re-installing the ntp package through pacman, user ntp was not created (checked with cat /etc/passwd |grep ntp), although group ntp was created (verified with cat /etc/group |grep ntp).

I tried to create user ntp with a simple useradd ntp, but my system complained that there was already a group with the same name. I thus created user ntp and assigned it to group ntp in a single command:

useradd ntp -g ntp

Now when I run systemctl start ntpd everything looks fine when checked with systemctl status ntpd and in journalctl:

Nov 18 11:07:43 arch systemd[1]: Starting Network Time Service...
Nov 18 11:07:43 arch ntpd[11063]: ntpd 4.2.7p465@1.2483-o Sun Sep  7 07:03:04 UTC 2014 (1): Starting
Nov 18 11:07:43 arch ntpd[11063]: Command line: /usr/bin/ntpd -g -u ntp:ntp
Nov 18 11:07:43 arch systemd[1]: Started Network Time Service.
Nov 18 11:07:43 arch ntpd[11064]: proto: precision = 1.047 usec (-20)
Nov 18 11:07:43 arch ntpd[11064]: Listen and drop on 0 v6wildcard [::]:123
Nov 18 11:07:43 arch ntpd[11064]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 2 lo 127.0.0.1:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 3 wlp12s0 192.168.0.9:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 4 lo [::1]:123
Nov 18 11:07:43 arch ntpd[11064]: Listen normally on 5 wlp12s0 [fe80::21f:3cff:fe46:6467%3]:123
Nov 18 11:07:43 arch ntpd[11064]: Listening on routing socket on fd #22 for interface updates

Finally, a helpful thread I referred to from the Archlinux forums:

https://bbs.archlinux.org/viewtopic.php?id=155120


Tuesday, November 11, 2014

Setting up OpenDaylight Hydrogen VM - Ubuntu 14.04

On Wednesday and Thursday (Nov. 5 & 6) last week, I participated in my first Hackathon -- Cisco Codefest 2014 -- which was held at the POSCO Engineering and Construction Building, main hall 4F, in Songdo, South Korea. The track my team chose for the competition was OpenDaylight, a Software Defined Networking (SDN) framework led by the Linux Foundation, Red Hat, Cisco, and other partners. This framework enables network devices to be virtualized (NFV - Network Function Virtualization), which opens up the possibility of, for example, regular PCs with multiple PCI network cards acting as cheap switches, among other possibilities.

Although this contest was a coding competition, our team spent a lot of time doing Linux sysadmin work to get the OpenDaylight VMs into a usable state. We also spent a significant chunk of time learning how to use mininet, the network simulator bundled with the OpenDaylight VM, and setting up network visualization (DLUX - OpenDaylight User Experience), which is provided through the Karaf container from ODL Helium. This post will walk you through the process of customizing the Ubuntu 14.04 VM available for download from the ODL Hydrogen downloads page (you can navigate there from the main OpenDaylight Downloads page).


Virtualbox Settings for Ubuntu 14.04 VM

As of 2014-11-11 the download link for the Ubuntu 14.04 VM appears as odl-test-desktop-3.ova, but once you actually import it into Virtualbox, the name of the VM appears as odl-test-desktop-2, which is the name of the Ubuntu 13.04 VM available at the second download link in the middle. Here's a screenshot of the page:




I thought I had downloaded the wrong VM, but after verifying with uname -a that the guest OS was running kernel 3.13, I was sure I was running Ubuntu 14.04 (instead of 13.04, the version in the other two VMs available for download). Note that all the Ubuntu ODL images are 64-bit, so if you're running a 32-bit version of, say, Windows 7 (or Linux, for that matter), you will not be able to run the VMs even if you have Virtualbox installed (my Codefest teammate actually went out and bought a new laptop because his Core i5 with just 2 GB of RAM just wasn't up to the task of running the heavy Ubuntu 14.04 VMs).

First, after importing the VM you should tweak some settings for speed. The download page does mention "disable 3D acceleration otherwise left menu may not show" (the Unity bar on the left-hand margin), but in my case, leaving 3D acceleration on in the VM display settings also caused problems with my host machine's display, making me unable to minimize the VM's full-screen window. So you definitely want to make sure the 3D acceleration box is not checked in the following window from the Virtualbox Manager:


Second, from the "System" tab above, you should reduce the number of CPU cores allocated to the VM from 4(!) to something more manageable for your system. Likewise with memory; the VM is initially allocated 4096 MB (4 GB), but I found that for simple topologies 2 CPU cores and 2 GB work fine.

Third, from the "Network" tab above, set Adapter 1 to "Bridged Adapter" and choose the name of your host machine's network interface. Adapter 1 will provide your VM with Internet access from your host and will also give the VM an IP within the same subnet as your host, making ssh possible. You should also enable Adapter 2 as a "Host-only" interface that you can use exclusively for a mininet network. If you do not have any host-only interfaces enabled, click "File" -> "Preferences" -> "Network" from the Virtualbox Manager and select the "Host-only Networks" tab. Click the '+' icon to add one interface (which appears as vboxnet0 on Linux hosts).



Fourth, the VM will not be able to go full screen until you install the Virtualbox Guest Additions into the guest OS. Refer to this previous post on my blog for instructions on how to do this in a Linux VM. It would be nice if the people at the ODL foundation actually installed the Virtualbox Guest Additions into the Ubuntu 14.04 VM before exporting it as an .ova file...

Setting up the Application Environment

Fifth, seriously consider installing a lighter Desktop Environment (DE) like XFCE or LXDE. Unity with compiz is installed by default as the Ubuntu 14.04 DE, but its 3D bling, compositing, drop shadows, etc. are superfluous to the job of running network simulations for SDN. In fact, once you launch Karaf (from ODL Helium) and mininet, you will find that even powerful systems will lag (my teammate's new machine, a brand-new Core i7 laptop with a 256 GB SSD and 8 GB of RAM, ran sluggishly with Unity enabled while running mininet et al.). If you run top within the VM while using Unity, it is not uncommon to see compiz taking up 20%+ of the CPU.

Don't worry -- installing a new DE does not require the removal of Unity and compiz! Once the new DE is installed through apt-get, you can select what DE to use from the login screen. To install either XFCE or LXDE for Ubuntu, enter one of the following:

sudo apt-get install --no-install-recommends xubuntu-desktop

sudo apt-get install --no-install-recommends lubuntu-desktop

It would also be a good idea to turn off automatic updates... this is a VM, after all, so you should only do point updates for packages that really need it.

Sixth, set up Wireshark. Although it is pre-installed in the VM, running it as the regular user mininet will not allow you to listen to network traffic on any of the network interfaces. That doesn't mean you should run Wireshark as root, however (doing so is a security risk). Instead, run sudo dpkg-reconfigure wireshark-common from the CLI. This will enable regular users to listen on network interfaces as long as they are members of the newly created user group wireshark. To add user mininet to group wireshark:

sudo usermod -a -G wireshark mininet

For changes to take effect, you must log out and log in once again.

Seventh, install Karaf from ODL Helium. From the main ODL download page, click the link for either the zip or tar archive and extract the files into a folder in the ODL Hydrogen VM. Once you have extracted the files, navigate to that folder, enter the bin directory, and invoke karaf as follows:

mininet@mininet-vm:~/distribution-karaf-0.2.0-Helium/bin\> ./karaf

karaf will work with any JVM providing Java 1.7 or higher. Fortunately, the Ubuntu VM comes with the OpenJDK 7 runtime already installed. Once karaf is up and running, enter the following at the karaf command prompt to install the features necessary for DLUX:

opendaylight-user@root>feature:install odl-restconf odl-l2switch-switch odl-mdsal-apidocs odl-dlux-core

Now when you create a network in mininet and generate traffic between hosts, you will be able to see the topology by opening a browser within the VM and navigating to localhost:8181/dlux/index.html, logging in with admin:admin, and clicking on "Topology" from the left-hand menu within the browser page body. (Note: If you launch mininet with default settings, it will try to create a network controller on localhost:6633, but this conflicts with the controller created by karaf. To avoid this conflict, you need to specify the IP for a remote controller and a port that doesn't conflict with 6633. In our case, we will use port 6653.)


Example of mininet and karaf in action with topology visualization

Open two terminals within the VM and start karaf in one of them. Once karaf is fully loaded (this can take up to 60 seconds), we will launch mininet with a remote controller residing on the host-only IP for our VM, 192.168.56.101:

mininet@mininet-vm:~\> sudo mn --mac --switch=ovsk,protocols=OpenFlow13 --controller=remote,ip=192.168.56.101,port=6653 --topo=tree,2



We have created a tree topology 2 levels deep (switch 1 is the root node, switches 2 and 3 branch from s1) using Open vSwitch (ovsk) with OpenFlow 1.3.

Note that we have changed the controller port to 6653 so that it will not conflict with the karaf controller. Within the mininet command environment, we invoked pingall -c 1 (ping all hosts 1 time) to generate network traffic (if there is no traffic, DLUX will not show any flows). Now navigating to localhost:8181 again and clicking on Topology should show the following:


Now your Ubuntu 14.04 VM for ODL Hydrogen should be ready for you to experiment with SDN!

Wednesday, November 5, 2014

Time-lapse recording of meditation sessions with motion, ffmpeg, and bash script

Background

I have been doing insight meditation for the past several months, but I have a problem with being consistent and disciplined enough to actually meditate when I feel tired or too busy. I have used Beeminder (here are my automatically tracked goals) to good effect to keep me on track for certain goals that can be automatically monitored, e.g. RescueTime, Fitbit step count, Dual N-Back practice sessions, number of cards reviewed in Mnemosyne (similar to Anki but easier to use), etc.

For more free-form goals that require human oversight, StickK is a good tool. Both Beeminder and StickK are commitment devices that help people actually stick to their goals by making them put money on the line; if you go off track (determined by a human referee in the case of StickK, and by a computer in the case of Beeminder), your credit card gets charged some amount (that exponentially increases in Beeminder).

I track weekly pushups and meditation sessions using StickK. In the case of pushups, I prop up my smartphone so that it will record me doing one set of X pushups. The video is automatically uploaded to G+ from where I share a link to the video with my StickK referee. For 25 pushups, the videos are usually about 60 seconds long. But this will not work for meditation, considering the fact that a meditation session can last anywhere from 10 to 30 minutes. The battery life on many smartphones is pretty terrible, and even if you could take a half-hour video of yourself sitting on a mat meditating, who the hell would watch it? I want to save my StickK referee from such torture as well as save memory card space and battery life on my smartphone. Solution: use Linux!

The Tools

motion

motion is a webcam utility that begins taking snapshots when motion is detected in front of the camera. It is included in the package repositories of many Linux distros, and I use the version from the Archlinux community repository. motion is commonly used in DIY CCTV projects using the Raspberry Pi; however, I will use it to take time-lapse photos of my meditation sessions. By default, motion will start taking pictures whenever it senses motion, but we need to change this default behavior so that it will take a photo every N seconds. To do this, you need to edit /etc/motion/motion.conf as follows:

First, make sure that motion's daemon mode is turned off:

############################################################
# Daemon
############################################################

# Start in daemon (background) mode and release terminal (default: off)
daemon off

Although the comment above claims the default is off, in Archlinux the default is actually on.

############################################################
# Snapshots (Traditional Periodic Webcam File Output)
############################################################

# Make automated snapshot every N seconds (default: 0 = disabled)
snapshot_interval n

In our case, we set n to 6, i.e. snapshot_interval 6.

Since we will be taking time-lapse photos, we need to turn off the feature that takes pictures when motion is detected:

############################################################
# Image File Output
############################################################

# Output 'normal' pictures when motion is detected (default: on)
...
# Can be used as preview shot for the corresponding movie.
output_normal off

After installing the motion package in Archlinux, you also have to edit the permissions on /var/run/motion so that it is writable by the regular user. Something like the following should do the trick:

sudo chown username:username /var/run/motion

If your /usr/local directory is not already writable by the regular user, you will also need to recursively change the permissions on this directory, because motion outputs all its image files to the path /usr/local/apache2/htdocs/cam1 (at least this is true on Archlinux). You can recursively change the ownership of directories and subdirectories using the -R option of chown.

There are several other config changes you might want to make to /etc/motion/motion.conf.

If you don't want motion to create partial preview videos every X frames as well as a final video (I think it's a waste of space), you need to edit the following section:

# Use ffmpeg to encode mpeg movies in realtime (default: off)
ffmpeg_cap_new off


ffmpeg

Once you have finished taking time-lapse photos in motion, all the images will be contained in /usr/local/apache2/htdocs/cam1

Now you need to use ffmpeg to render these time-lapse images into a video (without audio). In our video, we want each image to be shown for 1 second, and the output video should be 25 frames per second. We can render such a video as follows:

ffmpeg -framerate 1/1 -pattern_type glob -i "*.jpg" -r 25 filename.mp4

Note: the -framerate flag ensures that each input image appears for 1 second (to make each image appear for n seconds, set this flag to 1/n).

When using the -i (input) flag, if you wish to use wildcards (to render all jpg files in a directory, for example), you need to precede -i with the -pattern_type glob option.

The -r flag sets the frame rate for the output video file.


bash script

You don't want to have to remember the ffmpeg invocation above every time you want to record a time-lapse video. Below is a script I made to launch motion, render the images, and then clean up the output directory once video rendering is complete:





My original script didn't have any error trapping before running rm *.jpg, and I came to bitterly regret this when one day ffmpeg exited with an error; despite no .mp4 video being rendered, my silly script deleted all the jpg files. The unsafe lines were as follows:
...
ffmpeg -framerate 1/1 -pattern_type glob -i "*.jpg" -r 25 $DATE.mp4
rm *.jpg

The problem with these lines is that even if ffmpeg exits with an error, the script will not stop; the rm command will still be executed! After learning my lesson the hard way, I read up on error handling from William Shotts' wonderful bash tutorial. I also highly recommend his book, The Linux Command Line.
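
The guard itself is easy to express; below is a minimal sketch (written in Python 3 for illustration, not the original bash script) that removes the source images only if the ffmpeg render actually succeeded:

#!/usr/bin/env python3
# Minimal sketch (not the original bash script): render the time-lapse
# video, and only delete the .jpg source files if ffmpeg succeeded.
import datetime
import glob
import os
import subprocess

date = datetime.date.today().isoformat()
result = subprocess.run(
    ['ffmpeg', '-framerate', '1/1', '-pattern_type', 'glob',
     '-i', '*.jpg', '-r', '25', date + '.mp4'])

if result.returncode == 0:
    # the render succeeded, so it is safe to remove the source images
    for jpg in glob.glob('*.jpg'):
        os.remove(jpg)
else:
    print('ffmpeg failed; keeping the .jpg files')

In bash, the equivalent check is simply testing the exit status in $? (or chaining the ffmpeg and rm commands with &&) before removing anything.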