Saturday, December 26, 2015

Enabling Brainworkshop 4.8.7 to work with python2.7.9+ in Linux

I have found a way to make Brainworkshop 4.8.7 work with versions of Python 2 newer than 2.7.9. In a previous post from June 2015, I mentioned that Brainworkshop installed from the zip archive downloaded from SourceForge triggers a segfault when executed with Python 2 version 2.7.10 and above.

The problem is that the version of pyglet bundled with the zip archive is older than 1.2.0, and pyglet releases before 1.2.0 are incompatible with Python 2 > 2.7.9.

The solution is not to use the old version of pyglet bundled with brainworkshop.zip, but to install your Linux distribution's packaged version of pyglet for Python 2 instead. In the case of Archlinux, this package is python2-pyglet, which is at version 1.2.4-2 as of Dec. 26, 2015.
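
On Archlinux, installing the packaged pyglet is a one-line pacman command (other distributions should have a similarly named package):

sudo pacman -S python2-pyglet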

Note that if you use a recent version of pyglet (> 1.2), you must edit brainworkshop.pyw around line 2523 and remove the reference to the halign keyword, which is no longer valid in pyglet > 1.2. I discussed this workaround in a post from February 2015.
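
If the line number has drifted in your copy, a quick grep (run from the directory containing brainworkshop.pyw) will locate every reference to the keyword:

grep -n halign brainworkshop.pyw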

This screenshot shows Brainworkshop 4.8.7 working on my Archlinux system:


Saturday, December 19, 2015

Keeping track of pushups with Beeminder and a smartphone

I've been a Beeminder user for the past 3 years and I only recently learned that it's possible to keep track of pushups in a semi-automated way. Apparently this feature has been available since at least the end of 2013.

First, install the Beeminder app on your Android smartphone and make sure you have created a "Do More" goal to track pushups (as of Dec 2015, you have to create new goals through the website). Then tap on your goal in the Android app and select "Tally entry" mode by swiping to the right in the data-entry box at the bottom.



Finally, tap once at the bottom around the zero and you will be taken to the following screen:



In tally entry mode, tapping the screen anywhere will record one data point. This is helpful when doing pushups, because each time your nose or chin touches the smartphone screen, one pushup will be registered! When you are done, press your smartphone's back button and, in the 2nd screen above, tap "Submit" to send your pushup data to Beeminder.

*Note: I find it easier to touch my chin to the smartphone screen, as it lets me look straight ahead instead of down at the floor, which is what happens when you touch your nose to the screen.

References:

http://blog.beeminder.com/beedroid/ (Manual Data Entry section)
http://blog.beeminder.com/faire/ (Pushups header)

Wednesday, December 9, 2015

Using Pipework to enable communication between docker containers and the host

The docker daemon (I currently have version 1.9.1 installed on Archlinux) creates a bridge interface named docker0 on startup, and when containers are spawned, they are all connected to this interface by default. For security reasons, containers have access to the Internet through NAT but are not otherwise externally visible. This is problematic if we would like to ping containers from our host or have containers communicate with machines on our network.

There is a great tool called Pipework (https://github.com/jpetazzo/pipework), a Bash shell script that ties together docker, iproute2, and the Linux bridge tools.

It automates the process of creating network bridges and virtual ethernet ifaces (veth) on the host, as well as additional network interfaces within containers (LXC or Docker).
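
Getting it onto the host is just a matter of dropping the script somewhere on root's PATH and making it executable. Here is a sketch of how I would do it; the destination directory is my choice, and the raw URL assumes the script still sits at the top of the master branch:

sudo curl -L -o /usr/local/bin/pipework https://raw.githubusercontent.com/jpetazzo/pipework/master/pipework
sudo chmod +x /usr/local/bin/pipework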

First let's see what docker images I have available from which I can launch some containers:

[archjun@pinkS310 ~]$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
jess/chromium       latest              c0aed183c970        4 days ago          567.5 MB
jess/gparted        latest              6d1bee229713        7 days ago          212.4 MB
l3iggs/archlinux    latest              0ac34c50f830        10 days ago         365.8 MB
busybox             latest              c51f86c28340        5 weeks ago         1.109 MB

These were all downloaded from Docker Hub using docker pull <repoName>. I will launch two container instances from the busybox image:

[archjun@pinkS310 ~]$ docker run -ti --rm busybox /bin/sh

An explanation of the option flags (from man docker run):

-i or --interactive
Keep STDIN open even if not attached (i.e. connected to the container).

-t or --tty
Allocate a pseudo-TTY (pty).

--rm
Automatically remove the container when you exit it (the same cleanup you would otherwise do with docker rm).

/bin/sh
Finally, you must give a command to run inside the container. The busybox image does not contain bash, but it does have sh.
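
As an aside, docker run also accepts a --name flag if you would rather pick container names yourself instead of getting randomly generated ones (bb1 below is just an example name); I did not use it here:

docker run -ti --rm --name bb1 busybox /bin/sh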

Let's take a look at the network ifaces inside the busybox container:

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever

Only two ifaces exist: loopback and eth0@if8, which is connected to the bridge iface docker0. Note that the IP is in the 172.17.x range, docker's default. Through the docker bridge, containers can communicate with each other. But my local network uses the 192.168.10.x range, so the containers cannot yet be reached at addresses on that subnet, either from the host or from other machines on the LAN.
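
You can also confirm a container's bridge IP from the host with docker inspect; a quick sketch (the container name is a placeholder, and the Go template just narrows the output to the address):

docker inspect -f '{{ .NetworkSettings.IPAddress }}' <container-name-or-id>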

 I will launch one more busybox container:

[archjun@pinkS310 ~]$ docker run -ti --rm busybox:latest /bin/sh

The second container also has only two network ifaces, one of which is mapped to bridge iface docker0.

/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link 
       valid_lft forever preferred_lft forever

From a terminal on my host, let's take a look at the running containers:

[archjun@pinkS310 ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
859be599f53d        busybox:latest      "/bin/sh"           2 hours ago         Up 2 hours                              drunk_perlman
5140cd8079d4        busybox             "/bin/sh"           2 hours ago         Up 2 hours                              stoic_mcnulty

Two busybox containers are running, drunk_perlman and stoic_mcnulty.

The following network ifaces are active on the host:

[archjun@pinkS310 ~]$ ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether f8:a9:63:3c:23:64 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.97/24 brd 192.168.10.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 fe80::faa9:63ff:fe3c:2364/64 scope link 
       valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether b8:ee:65:d8:fd:f7 brd ff:ff:ff:ff:ff:ff
    inet 192.168.40.195/24 brd 192.168.40.255 scope global wlp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::baee:65ff:fed8:fdf7/64 scope link 
       valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 52:54:00:59:95:09 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:3e:81:64:5d brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:3eff:fe81:645d/64 scope link 
       valid_lft forever preferred_lft forever
8: vetha4b3346@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 96:ce:70:36:f9:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::94ce:70ff:fe36:f95e/64 scope link 
       valid_lft forever preferred_lft forever
10: veth04edbf0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default 
    link/ether 92:01:7a:5b:0b:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::9001:7aff:fe5b:b06/64 scope link 
       valid_lft forever preferred_lft forever

docker0 is the bridge interface created by the docker daemon/systemd service. virbr0 is the bridge iface created by libvirt for use with hypervisors like KVM/QEMU. My ethernet interface, enp1s0, has the IP 192.168.10.97/24.

Finally, the virtual ethernet ifaces (veth...) are the host-side ends of the links to each busybox container's single eth0 port; both are attached to docker0.

Now, using the pipework script run as root, I will create a new bridge interface called br1. Each container will be connected to it through a new network iface created inside that container.

[archjun@pinkS310 ~]$ sudo pipework br1 drunk_perlman 192.168.10.101/24
[archjun@pinkS310 ~]$ sudo pipework br1 stoic_mcnulty 192.168.10.102/24
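
As an aside, pipework can also set a default gateway on the new interface by appending it to the address with an @ sign; the gateway 192.168.10.1 below is only an illustration, and I did not need this step here:

sudo pipework br1 drunk_perlman 192.168.10.101/24@192.168.10.1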

Now on the host there are three new ifaces:

[archjun@pinkS310 ~]$ ip a show up
...
11: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 6e:7c:fc:f6:04:f5 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::10c3:eff:fec7:8d89/64 scope link 
       valid_lft forever preferred_lft forever
13: veth1pl26713@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
    link/ether 6e:7c:fc:f6:04:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::6c7c:fcff:fef6:4f5/64 scope link 
       valid_lft forever preferred_lft forever
15: veth1pl23625@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
    link/ether aa:4f:6d:9c:b9:bc brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::a84f:6dff:fe9c:b9bc/64 scope link 
       valid_lft forever preferred_lft forever

You can see that the two new veth ifaces have br1 as their master.

Inside each of the containers you can see one new interface with one of the IP addresses specified above (in the 192.168.10.x range):

/ # ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link 
       valid_lft forever preferred_lft forever
14: eth1@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether 66:5c:ae:2b:26:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.102/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::645c:aeff:fe2b:263a/64 scope link 
       valid_lft forever preferred_lft forever

The new iface in container stoic_mcnulty is eth1@if15, which is connected to bridge br1.

Inside container drunk_perlman, you can see a new iface eth1@if13:

/ # ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.3/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:3/64 scope link 
       valid_lft forever preferred_lft forever
12: eth1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
    link/ether f2:08:19:49:64:4c brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.101/24 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::f008:19ff:fe49:644c/64 scope link 
       valid_lft forever preferred_lft forever

The new iface was created by pipework and has an IP on the same subnet as the host.

So far, so good. With just this setup, however, I will be unable to ping the docker containers from my host. Now I must make my Ethernet port enp1s0 into a slave of bridge br1.

[archjun@pinkS310 ~]$ sudo ip l set enp1s0 master br1
[archjun@pinkS310 ~]$ bridge link
2: enp1s0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 19 
5: virbr0-nic state DOWN : <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100 
8: vetha4b3346 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
10: veth04edbf0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2 
13: veth1pl26713 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2 
15: veth1pl23625 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2

bridge link (show) displays the current port config and flags for Linux bridges. At first I wondered why docker0 does not show up in the output; the reason is that bridge link lists bridge ports (slave interfaces) rather than the bridge devices themselves, and docker0's ports (the two veth ifaces above) do appear with master docker0. You can see that enp1s0 now has master br1.
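
As a side note, if bridge-utils is installed, brctl show lists the bridge devices themselves (docker0, virbr0, br1) together with their attached interfaces, which gives a complementary view:

brctl show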

Since br1 is the master for Ethernet port enp1s0, I have to clear the IP address from enp1s0 and assign it to br1 instead:

[archjun@pinkS310 ~]$ sudo ip a flush enp1s0
[archjun@pinkS310 ~]$ sudo ip a add 192.168.10.97/24 dev br1
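
Note that flushing enp1s0 also drops any routes that depended on its address, so if your default route pointed out of enp1s0 you may need to re-add it via br1; the gateway 192.168.10.1 below is an assumption, so substitute your own:

sudo ip route add default via 192.168.10.1 dev br1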

Now pinging the containers at 192.168.10.101 and ...102 from the host machine works. The host's wired IP is 192.168.10.97 (the address for br1, which is the master iface for enp1s0).

[archjun@pinkS310 ~]$ ping 192.168.10.101
PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.
64 bytes from 192.168.10.101: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 192.168.10.101: icmp_seq=3 ttl=64 time=0.051 ms
^C
--- 192.168.10.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.051/0.067/0.093/0.020 ms

[archjun@pinkS310 ~]$ ping 192.168.10.102
PING 192.168.10.102 (192.168.10.102) 56(84) bytes of data.
64 bytes from 192.168.10.102: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.10.102: icmp_seq=2 ttl=64 time=0.047 ms
^C
--- 192.168.10.102 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.047/0.077/0.107/0.030 ms

Pinging the containers at 101 and 102 from my host machine works. Pinging other machines on the local network also works fine:


[archjun@pinkS310 ~]$ ping 192.168.10.58
PING 192.168.10.58 (192.168.10.58) 56(84) bytes of data.
64 bytes from 192.168.10.58: icmp_seq=1 ttl=64 time=0.817 ms
64 bytes from 192.168.10.58: icmp_seq=2 ttl=64 time=0.448 ms
64 bytes from 192.168.10.58: icmp_seq=3 ttl=64 time=0.483 ms
64 bytes from 192.168.10.58: icmp_seq=4 ttl=64 time=0.447 ms
^C
--- 192.168.10.58 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.447/0.548/0.817/0.158 ms

Now let's see if the containers can ping the host as well as other machines on the LAN (I will only show the terminal for one container, since the two sessions look identical on the CLI):

/ # ping 192.168.10.97
PING 192.168.10.97 (192.168.10.97): 56 data bytes
64 bytes from 192.168.10.97: seq=0 ttl=64 time=0.104 ms
64 bytes from 192.168.10.97: seq=1 ttl=64 time=0.088 ms
64 bytes from 192.168.10.97: seq=2 ttl=64 time=0.107 ms
^C
--- 192.168.10.97 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.088/0.099/0.107 ms
/ # ping 192.168.10.58
PING 192.168.10.58 (192.168.10.58): 56 data bytes
64 bytes from 192.168.10.58: seq=0 ttl=64 time=0.607 ms
64 bytes from 192.168.10.58: seq=1 ttl=64 time=0.585 ms
64 bytes from 192.168.10.58: seq=2 ttl=64 time=0.543 ms
^C
--- 192.168.10.58 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.543/0.578/0.607 ms

Great! You can see the containers along with other machines on my LAN in zenmap (the GUI for nmap) after a ping scan:


101 and 102 are the busybox docker containers, 97 is the Linux bridge br1 which connects enp1s0 and the containers, and 58 is another host on the LAN.
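
For reference, roughly the same ping scan can be run from the command line with plain nmap:

nmap -sn 192.168.10.0/24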

Saturday, December 5, 2015

Sharing mnemosyne cards across multiple machines using dropbox and symlinks

Mnemosyne is available in the Arch User Repository (Archlinux) and uses the following directories by default.

1. default.db and media files:

$HOME/.local/share/mnemosyne

2. config files for syncing with other machines (through desktop app) or devices (through Android app):

$HOME/.config/mnemosyne/

My mnemosyne card database resides in $HOME/Dropbox/mnemosyne, and all changes are automatically synced by the Dropbox daemon. To set up mnemosyne on a new machine, you should not create a symlink to #2 (info like machine.id needs to be unique for every machine mnemosyne is installed on), but you do need to symlink #1:

ln -s $HOME/Dropbox/mnemosyne $HOME/.local/share/

Now when you launch mnemosyne, your card database from Dropbox will be used.
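
For completeness, on the very first machine (the one that already holds the cards in the default location), the one-time migration is roughly the following, assuming Mnemosyne is not running and $HOME/Dropbox/mnemosyne does not already exist:

mv $HOME/.local/share/mnemosyne $HOME/Dropbox/mnemosyne
ln -s $HOME/Dropbox/mnemosyne $HOME/.local/share/mnemosyne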

Note:
If you don't symlink and simply click File->Open->nameOfDatabase.db residing in a cloud sync folder (using Dropbox, SpiderOak, etc.), you will be able to access your cards; but when you try to sync cards from the Android app, the default.db on your local machine will sometimes reset and point back to $HOME/.local/share/mnemosyne instead of the cloud sync folder.