I have found a way to make Brainworkshop 4.8.7 work with versions of Python 2 newer than 2.7.9.
In a previous post from June 2015, I mentioned that Brainworkshop installed from the zip archive downloaded from Sourceforge triggers a segfault when executed with Python 2 version 2.7.10 and above.
The problem is that the version of pyglet bundled with the zip archive is older than 1.2.0, and those versions of pyglet are incompatible with Python 2 releases newer than 2.7.9.
The solution is not to use the old pyglet bundled with brainworkshop.zip, but to instead install your Linux distribution's packaged version of pyglet for Python 2. In the case of Archlinux, this package is python2-pyglet, which is at version 1.2.4-2 as of Dec. 26, 2015.
Note that if you use pyglet > 1.2 you must also edit brainworkshop.pyw (around line 2523) and remove the reference to the keyword halign, which is no longer valid in pyglet > 1.2. I discussed this workaround in this post from February 2015.
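For illustration, here is a hedged sketch of that edit as a sed one-liner, run against a stand-in file (/tmp/bw-demo.pyw with a made-up Label call, not the actual code at line 2523). pyglet 1.2 renamed the multiline Label keyword halign to align, so renaming the keyword, rather than deleting the whole reference, may be enough; inspect the diff against your backup before trusting a blind replace:

```shell
# Stand-in for brainworkshop.pyw; the Label call below is a made-up example.
printf "label = Label('score', halign='center', multiline=True)\n" > /tmp/bw-demo.pyw
cp /tmp/bw-demo.pyw /tmp/bw-demo.pyw.bak       # always keep a backup
# pyglet >= 1.2 rejects halign; 'align' is its replacement name
sed -i "s/halign=/align=/g" /tmp/bw-demo.pyw
cat /tmp/bw-demo.pyw
```

Run the same sed against the real brainworkshop.pyw once you are satisfied the substitution is safe.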
This screenshot shows Brainworkshop 4.8.7 working on my Archlinux system:
Saturday, December 26, 2015
Saturday, December 19, 2015
Keeping track of pushups with Beeminder and a smartphone
I've been a Beeminder user for the past 3 years and I only recently learned that it's possible to keep track of pushups in a semi-automated way. Apparently this feature has been available since at least the end of 2013.
First, install the Beeminder app on your Android smartphone and make sure you have created a "Do More" goal to track pushups (as of Dec 2015, you have to create new goals through the website). Then tap on your goal in the Android app and select "Tally entry" mode by swiping to the right in the data entry box at the bottom.
Finally tap once at the bottom around the zero and you will be taken to the following screen:
In tally entry mode, tapping the screen anywhere will record one data point. This is helpful when doing pushups, because when your nose or chin touches the smartphone screen, one pushup will be registered! When you are done, press your smartphone's back button and, in the 2nd screen above, tap "Submit" to send your pushup data to Beeminder.
*Note: I find it easier to touch my chin to the smartphone screen, as it lets me look straight ahead instead of down at the floor, which is what happens when you touch your nose to the screen.
References:
http://blog.beeminder.com/beedroid/ (Manual Data Entry section)
http://blog.beeminder.com/faire/ (Pushups header)
Wednesday, December 9, 2015
Using Pipework to enable communication between docker containers and the host
The docker daemon (I currently have version 1.9.1 installed on Archlinux) creates a bridge interface named docker0 on startup, and when containers are spawned they are all connected to this interface by default. For security reasons, containers have access to the Internet through NAT but are not otherwise externally visible. This is problematic if we would like to ping containers from the host or have containers communicate with machines on our network.
There is a great tool called Pipework (https://github.com/jpetazzo/pipework) that incorporates commands for docker, iproute2, and Linux bridge. It's written in Bash shell.
It automates the process of creating network bridges and virtual ethernet ifaces (veth) on the host as well as additional network interfaces within containers (LXC or Docker).
First let's see what docker images I have available from which I can launch some containers:
[archjun@pinkS310 ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
jess/chromium latest c0aed183c970 4 days ago 567.5 MB
jess/gparted latest 6d1bee229713 7 days ago 212.4 MB
l3iggs/archlinux latest 0ac34c50f830 10 days ago 365.8 MB
busybox latest c51f86c28340 5 weeks ago 1.109 MB
These were all downloaded from Dockerhub using docker pull repoName. I will launch two container instances from the busybox image:
[archjun@pinkS310 ~]$ docker run -ti --rm busybox /bin/sh
An explanation of the option flags (from man docker run):
-i or --interactive
Keep STDIN open even if not attached (i.e. connected to the container).
-t or --tty
Allocate a pseudo-TTY (pty)
--rm
remove the container with docker rm when you exit the container
/bin/sh
Finally you must give a command to run inside the container. The busybox image does not contain bash, but it does have sh.
Let's take a look at the network ifaces inside the busybox container:
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
Only two ifaces exist: loopback, and eth0@if8, which is connected to the bridge iface docker0. Note that the IP is in the 172.17.0.0/16 range, docker's default. Through the docker bridge, containers can communicate with each other. But my local machine has an IP in the range 192.168.10.x, so direct communication with docker containers through docker0 is not yet possible.
I will launch one more busybox container:
[archjun@pinkS310 ~]$ docker run -ti --rm busybox:latest /bin/sh
The second container also has only two network ifaces, one of which is mapped to bridge iface docker0.
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
From a terminal on my host, let's take a look at the running containers:
[archjun@pinkS310 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
859be599f53d busybox:latest "/bin/sh" 2 hours ago Up 2 hours drunk_perlman
5140cd8079d4 busybox "/bin/sh" 2 hours ago Up 2 hours stoic_mcnulty
Two busybox containers are running, drunk_perlman and stoic_mcnulty.
The following network ifaces are active on the host:
[archjun@pinkS310 ~]$ ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether f8:a9:63:3c:23:64 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.97/24 brd 192.168.10.255 scope global enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::faa9:63ff:fe3c:2364/64 scope link
valid_lft forever preferred_lft forever
3: wlp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether b8:ee:65:d8:fd:f7 brd ff:ff:ff:ff:ff:ff
inet 192.168.40.195/24 brd 192.168.40.255 scope global wlp2s0
valid_lft forever preferred_lft forever
inet6 fe80::baee:65ff:fed8:fdf7/64 scope link
valid_lft forever preferred_lft forever
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 52:54:00:59:95:09 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:3e:81:64:5d brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:3eff:fe81:645d/64 scope link
valid_lft forever preferred_lft forever
8: vetha4b3346@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 96:ce:70:36:f9:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::94ce:70ff:fe36:f95e/64 scope link
valid_lft forever preferred_lft forever
10: veth04edbf0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
link/ether 92:01:7a:5b:0b:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::9001:7aff:fe5b:b06/64 scope link
valid_lft forever preferred_lft forever
docker0 is the bridge interface created by the docker daemon/systemd service. virbr0 is the bridge iface created by libvirt for virtual machines (e.g. KVM guests). My ethernet interface, enp1s0, has the IP 192.168.10.97/24.
Finally, the virtual eth ifaces veth... correspond to the single ports within each of the busybox containers connected to docker0.
Now, using the pipework script run as root, I will create a new bridge interface called br1 and connect each container to it through a new network iface created inside each container.
[archjun@pinkS310 ~]$ sudo pipework br1 drunk_perlman 192.168.10.101/24
[archjun@pinkS310 ~]$ sudo pipework br1 stoic_mcnulty 192.168.10.102/24
Now on the host there are three new ifaces:
[archjun@pinkS310 ~]$ ip a show up
...
11: br1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 6e:7c:fc:f6:04:f5 brd ff:ff:ff:ff:ff:ff
inet6 fe80::10c3:eff:fec7:8d89/64 scope link
valid_lft forever preferred_lft forever
13: veth1pl26713@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
link/ether 6e:7c:fc:f6:04:f5 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet6 fe80::6c7c:fcff:fef6:4f5/64 scope link
valid_lft forever preferred_lft forever
15: veth1pl23625@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br1 state UP group default qlen 1000
link/ether aa:4f:6d:9c:b9:bc brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::a84f:6dff:fe9c:b9bc/64 scope link
valid_lft forever preferred_lft forever
You can see that the two new veth ifaces have br1 as their master.
Inside each of the containers you can see one new interface with one of the IP addresses specified above (in the IP range 192.168.10.x):
/ # ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:2/64 scope link
valid_lft forever preferred_lft forever
14: eth1@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
link/ether 66:5c:ae:2b:26:3a brd ff:ff:ff:ff:ff:ff
inet 192.168.10.102/24 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::645c:aeff:fe2b:263a/64 scope link
valid_lft forever preferred_lft forever
The new iface in container stoic_mcnulty is eth1@if15, which is connected to bridge br1.
Inside container drunk_perlman, you can see a new iface eth1@if13:
/ # ip a show up
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:3/64 scope link
valid_lft forever preferred_lft forever
12: eth1@if13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel qlen 1000
link/ether f2:08:19:49:64:4c brd ff:ff:ff:ff:ff:ff
inet 192.168.10.101/24 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::f008:19ff:fe49:644c/64 scope link
valid_lft forever preferred_lft forever
The new iface was created by pipework and has an IP on the same subnet as the host (192.168.10.0/24).
So far, so good. With just this setup, however, I will be unable to ping the docker containers from my host. Now I must make my Ethernet port enp1s0 into a slave of bridge br1.
[archjun@pinkS310 ~]$ sudo ip l set enp1s0 master br1
[archjun@pinkS310 ~]$ bridge link
2: enp1s0 state UP : <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 19
5: virbr0-nic state DOWN : <BROADCAST,MULTICAST> mtu 1500 master virbr0 state disabled priority 32 cost 100
8: vetha4b3346 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
10: veth04edbf0 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0 state forwarding priority 32 cost 2
13: veth1pl26713 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2
15: veth1pl23625 state UP @(null): <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br1 state forwarding priority 32 cost 2
bridge link (show) displays the current port config and flags for linux bridges. Note that docker0 itself does not appear in the output because bridge link lists bridge ports (slave interfaces) rather than the bridges themselves; the bridges only appear in the master field. You can see that enp1s0 now has master br1.
Since br1 is the master for Ethernet port enp1s0, I have to clear the IP address from enp1s0 and assign it to br1 instead:
[archjun@pinkS310 ~]$ sudo ip a flush enp1s0
[archjun@pinkS310 ~]$ sudo ip a add 192.168.10.97/24 dev br1
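The handover above (enslave the NIC to br1, then move its IP from the port to the bridge) can be rehearsed safely in a throwaway network namespace before touching a real interface. This is a sketch assuming util-linux unshare with unprivileged user namespaces enabled; veth0 is a stand-in for enp1s0, not a real NIC:

```shell
# Rehearse the bridge handover in a disposable network namespace.
# veth0 stands in for the real NIC enp1s0; nothing here touches the host.
unshare -rn sh -c '
  ip link add veth0 type veth peer name veth1
  ip link set veth0 up && ip link set veth1 up
  ip addr add 192.168.10.97/24 dev veth0
  ip link add br1 type bridge && ip link set br1 up
  ip link set veth0 master br1      # enslave the NIC to the bridge
  ip addr flush dev veth0           # the bridge, not the port, should own the IP
  ip addr add 192.168.10.97/24 dev br1
  ip -br addr show br1
'
```

The same three ip commands, run as root against enp1s0 and br1, reproduce the handover on the real interface.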
Now pinging the containers at 192.168.10.101 and ...102 from the host machine works. The host's wired IP is 192.168.10.97 (the address for br1, which is the master iface for enp1s0).
[archjun@pinkS310 ~]$ ping 192.168.10.101
PING 192.168.10.101 (192.168.10.101) 56(84) bytes of data.
64 bytes from 192.168.10.101: icmp_seq=1 ttl=64 time=0.093 ms
64 bytes from 192.168.10.101: icmp_seq=2 ttl=64 time=0.059 ms
64 bytes from 192.168.10.101: icmp_seq=3 ttl=64 time=0.051 ms
^C
--- 192.168.10.101 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.051/0.067/0.093/0.020 ms
[archjun@pinkS310 ~]$ ping 192.168.10.102
PING 192.168.10.102 (192.168.10.102) 56(84) bytes of data.
64 bytes from 192.168.10.102: icmp_seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.10.102: icmp_seq=2 ttl=64 time=0.047 ms
^C
--- 192.168.10.102 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.047/0.077/0.107/0.030 ms
Pinging the containers at 101 and 102 from my host machine works. Pinging other machines on the local network also works fine:
[archjun@pinkS310 ~]$ ping 192.168.10.58
PING 192.168.10.58 (192.168.10.58) 56(84) bytes of data.
64 bytes from 192.168.10.58: icmp_seq=1 ttl=64 time=0.817 ms
64 bytes from 192.168.10.58: icmp_seq=2 ttl=64 time=0.448 ms
64 bytes from 192.168.10.58: icmp_seq=3 ttl=64 time=0.483 ms
64 bytes from 192.168.10.58: icmp_seq=4 ttl=64 time=0.447 ms
^C
--- 192.168.10.58 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.447/0.548/0.817/0.158 ms
Now let's check whether the containers can ping the host as well as other machines on the LAN (I will only show the terminal for one container, since the two look identical on the CLI; they don't have unique hostnames):
/ # ping 192.168.10.97
PING 192.168.10.97 (192.168.10.97): 56 data bytes
64 bytes from 192.168.10.97: seq=0 ttl=64 time=0.104 ms
64 bytes from 192.168.10.97: seq=1 ttl=64 time=0.088 ms
64 bytes from 192.168.10.97: seq=2 ttl=64 time=0.107 ms
^C
--- 192.168.10.97 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.088/0.099/0.107 ms
/ # ping 192.168.10.58
PING 192.168.10.58 (192.168.10.58): 56 data bytes
64 bytes from 192.168.10.58: seq=0 ttl=64 time=0.607 ms
64 bytes from 192.168.10.58: seq=1 ttl=64 time=0.585 ms
64 bytes from 192.168.10.58: seq=2 ttl=64 time=0.543 ms
^C
--- 192.168.10.58 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.543/0.578/0.607 ms
Great! After a ping scan you can see the containers along with the other machines on my LAN in zenmap (the GUI for nmap): 101 and 102 are the busybox docker containers, 97 is the linux bridge br1 which connects enp1s0 and the containers, and 58 is another host on the LAN.
Saturday, December 5, 2015
Sharing mnemosyne cards across multiple machines using dropbox and symlinks
Mnemosyne is available in the Arch User Repository (Archlinux) and installs itself into the following directories by default.
1. default.db and media files:
$HOME/.local/share/mnemosyne
2. config files for syncing with other machines (through desktop app) or devices (through Android app):
$HOME/.config/mnemosyne/
My mnemosyne card database resides in $HOME/Dropbox/mnemosyne and all changes are automatically synced by the Dropbox daemon. To set up mnemosyne on a new machine, you should not create a symlink to #2 (info like machine.id needs to be unique for every machine mnemosyne is installed on), but you do need to symlink #1:
ln -s $HOME/Dropbox/mnemosyne $HOME/.local/share/
Now when you launch mnemosyne, your card database from Dropbox will be used.
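The setup can be sketched as follows, rehearsed under a scratch directory standing in for $HOME so nothing real is touched; substitute "$HOME" on an actual machine:

```shell
# DEMO_HOME stands in for $HOME; on a real machine use $HOME directly.
DEMO_HOME=$(mktemp -d)
mkdir -p "$DEMO_HOME/Dropbox/mnemosyne" "$DEMO_HOME/.local/share"
# Back up any existing local card database before linking
if [ -e "$DEMO_HOME/.local/share/mnemosyne" ]; then
  mv "$DEMO_HOME/.local/share/mnemosyne" "$DEMO_HOME/.local/share/mnemosyne.bak"
fi
ln -s "$DEMO_HOME/Dropbox/mnemosyne" "$DEMO_HOME/.local/share/mnemosyne"
readlink "$DEMO_HOME/.local/share/mnemosyne"   # prints the Dropbox path
```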
Note:
If you don't symlink and simply click File->Open->nameOfDatabase.db residing in a cloud sync folder (Dropbox, SpiderOak, etc.), you will be able to access your cards; but when you try to sync cards from the Android app, the default.db on your local machine will sometimes reset and point back to $HOME/.local/share/mnemosyne instead of the cloud sync folder.
Saturday, November 28, 2015
Installing Dokdo Project ROM v7.3 on Samsung Galaxy S3 SHW-M440S (Korean 3G version)
The latest Dokdo Project ROMs (which use parts of CM 12) bring Android 5.0 to various smartphones from Korean manufacturers such as Samsung, Pantech, and LG.
Updates for my old Samsung Galaxy S3 ended with Android 4.3, but thanks to this ROM, I can now use Android Lollipop with a phone I bought in 2013. Although the stock ROM for SGS3 doesn't support Bluetooth LE, the Dokdo 7.3.0+ ROM supports Bluetooth and Wifi perfectly!
Installing custom ROMs on this phone requires that it first be rooted, however. To root the SHW-M440S, I used the cache and recovery images from CF Auto Root, namely CF-Auto-Root-m0skt-m0skt-shwm440s.zip. After extracting cache.img and recovery.img from the zip archive, you need a way to transfer these files to your Samsung phone's internal flash ROM. (It goes without saying that you must have USB debugging mode enabled in the Developer Options.)
Enter Heimdall. Most people on Windows simply use Samsung's internal tool, Odin, to flash their phones. Unfortunately, Odin is leaked Samsung software: it is not officially supported and runs only on Windows. Heimdall, however, is open source and runs on Linux.
First use Heimdall to dump your Samsung phone's existing profile info with the following command:
heimdall download-pit --output filename.pit
Here is some sample output:
$ heimdall download-pit --output shw-m440s.pit
Heimdall v1.4.1
Copyright (c) 2010-2014 Benjamin Dobell, Glass Echidna
http://www.glassechidna.com.au/
This software is provided free of charge. Copying and redistribution is
encouraged.
If you appreciate this software and you would like to support future
development please consider donating:
http://www.glassechidna.com.au/donate/
Initialising connection...
Detecting device...
Manufacturer: "SAMSUNG"
Product: "Gadget Serial"
length: 18
device class: 2
S/N: 0
VID:PID: 04E8:685D
bcdDevice: 021B
iMan:iProd:iSer: 1:2:0
nb confs: 1
interface[0].altsetting[0]: num endpoints = 1
Class.SubClass.Protocol: 02.02.01
endpoint[0].address: 83
max packet size: 0010
polling interval: 09
interface[1].altsetting[0]: num endpoints = 2
Class.SubClass.Protocol: 0A.00.00
endpoint[0].address: 81
max packet size: 0200
polling interval: 00
endpoint[1].address: 02
max packet size: 0200
polling interval: 00
Claiming interface...
Attempt failed. Detaching driver...
Claiming interface again...
Next, open the .pit file with a hex editor so you can learn the names of internal partitions (I used emacs in hexl-mode). For the SHW-M440S, the partition names are in ALL CAPS:
RECOVERY, CACHE, BOOT, BACKUP, etc
but I have read that international versions of Samsung phones sometimes use lowercase partition names.
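If you'd rather not open a hex editor, the strings utility can pull the same names out of the PIT dump. This is an illustrative run against a fabricated stand-in file (/tmp/fake.pit), since the real dump can't be reproduced here; point the same pipeline at your own .pit instead:

```shell
# Fabricated stand-in for the real PIT dump; use your own .pit file instead.
printf 'RECOVERY\0CACHE\0BOOT\0\1\2junk-bytes' > /tmp/fake.pit
# Keep only printable strings that look like partition names (runs of caps/digits)
strings /tmp/fake.pit | grep -E '^[A-Z][A-Z0-9]+$' | sort -u
```

On phones whose partition names are lowercase, drop the grep filter and eyeball the full strings output.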
Now that you know the partition names, write the CF Auto Root recovery.img to RECOVERY and cache.img to the CACHE partition using heimdall:
heimdall flash --RECOVERY recovery.img --CACHE cache.img
Your SGS3 should then automatically reboot into recovery mode, and a red pirate Android bot will appear in the background as the root exploit is applied. Then the phone's original recovery will be restored and the cache erased before the phone reboots into the stock ROM. You will notice that there is a new app, SuperSU, and that you can now use su in adb shell once you grant permission on your phone's touch screen.
Now turn off the phone and go back into Download Mode (Vol Down + Home + Power). Run heimdall detect to make sure the SGS3 is being detected:
$ heimdall detect
Device detected
Now that the SGS3 is rooted, it is possible to install a custom recovery that will allow you to install custom ROMs to the phone's internal memory. Download recovery-clockwork-6.0.4.6-i9300.img (the i9300 version works just fine with the SHW-M440S) to your linux box and flash it into the recovery partition using heimdall.
heimdall flash --RECOVERY recovery-clockwork-6.0.4.6-i9300.img
Now reboot into recovery mode (Vol Up + Home + Power), wipe data, Dalvik cache, and cache, and select install from zip if you have saved the custom ROM on an external microSD card. Alternatively, you can use adb sideload if your phone in recovery mode is connected to a Linux machine via USB cable. I used sideload to transfer the files from my Linux machine to my SGS3:
adb sideload dokdo-7.3.0-OFFICIAL-m0xx.zip
adb sideload dokdo_gapps_5.1.X.zip
I then selected the dokdo ROM for install from zip, followed by the Google apps package gapps.
Updates for my old Samsung Galaxy S3 ended with Android 4.3, but thanks to this ROM, I can now use Android Lollipop with a phone I bought in 2013. Although the stock ROM for SGS3 doesn't support Bluetooth LE, the Dokdo 7.3.0+ ROM supports Bluetooth and Wifi perfectly!
To install custom ROMs on this phone requires that it be rooted, however. To root the SHW-M440S, I used the cache and recovery images from CF Auto Root, namely CF-Auto-Root-m0skt-m0skt-shwm440s.zip. After extracting cache.img and recovery.img from the zip archive, you need a way to transfer these files to your Samsung phone's internal flash ROM. (It goes without saying that you must have USB debugging mode enabled in the Developer Options)
Enter Heimdall. Most people on Windows OS simply use Samsung's internal tool called Odin to flash their phones. Unfortunately, Odin is software leaked from Samsung and is not officially supported and only runs on Windows. Heimdall, however, is open source and runs on Linux.
First use Heimdall to dump your Samsung phone's existing profile info with the following command:
heimdall download-pit --output filename.pit
Here is some sample output:
$ heimdall download-pit --output shw-m440s.pit
Heimdall v1.4.1
Copyright (c) 2010-2014 Benjamin Dobell, Glass Echidna
http://www.glassechidna.com.au/
This software is provided free of charge. Copying and redistribution is
encouraged.
If you appreciate this software and you would like to support future
development please consider donating:
http://www.glassechidna.com.au/donate/
Initialising connection...
Detecting device...
Manufacturer: "SAMSUNG"
Product: "Gadget Serial"
length: 18
device class: 2
S/N: 0
VID:PID: 04E8:685D
bcdDevice: 021B
iMan:iProd:iSer: 1:2:0
nb confs: 1
interface[0].altsetting[0]: num endpoints = 1
Class.SubClass.Protocol: 02.02.01
endpoint[0].address: 83
max packet size: 0010
polling interval: 09
interface[1].altsetting[0]: num endpoints = 2
Class.SubClass.Protocol: 0A.00.00
endpoint[0].address: 81
max packet size: 0200
polling interval: 00
endpoint[1].address: 02
max packet size: 0200
polling interval: 00
Claiming interface...
Attempt failed. Detaching driver...
Claiming interface again...
Next, open the .pit file with a hex editor so you can learn the names of internal partitions (I used emacs in hexl-mode). For the SHW-M440S, the partition names are in ALL CAPS:
RECOVERY, CACHE, BOOT, BACKUP, etc
but I have read that international versions of Samsung phones sometimes use lowercase partition names.
Now that you know the partition names, write the CF Auto Root recovery.img to RECOVERY and cache.img to the CACHE partition using heimdall:
heimdall flash --RECOVERY recovery.img --CACHE cache.img
Your SGS3 should then automatically reboot into recovery mode, and a red pirate Android bot will appear in the background as the root exploit is applied. The phone's original recovery will then be restored and the cache erased before the phone reboots into the stock ROM. You will notice a new app, SuperSU, and that you can now use su in adb shell once you grant permission on your phone's touch screen.
Now turn off the phone and go back into Download Mode (Vol Down + Home + Power). Run heimdall detect to make sure the SGS3 is being detected:
$ heimdall detect
Device detected
Now that the SGS3 is rooted, it is possible to install a custom recovery that will allow you to install custom ROMs to the phone's internal memory. Download recovery-clockwork-6.0.4.6-i9300.img (the i9300 version works just fine with the SHW-M440S) to your linux box and flash it into the recovery partition using heimdall.
heimdall flash --RECOVERY recovery-clockwork-6.0.4.6-i9300.img
Now reboot into recovery mode (Vol Up + Home + Power), wipe data, Dalvik cache, and cache, and select install from zip if you have saved the custom ROM on an external microSD card. Alternatively, you can use adb sideload if the phone, while in recovery mode, is connected to a Linux machine via USB cable. I used sideload to transfer the files from my Linux machine to my SGS3:
adb sideload dokdo-7.3.0-OFFICIAL-m0xx.zip
adb sideload dokdo_gapps_5.1.X.zip
I then selected the dokdo ROM for install from zip, followed by the Google apps package gapps.
Saturday, November 21, 2015
EFI multiboot for Ubuntu 15.10 Wily and Archlinux
At work I recently had the opportunity to install Ubuntu 15.10 Wily Werewolf (released on Oct 22, 2015) on a company laptop using EFI boot instead of legacy BIOS. Ubuntu installs just fine on post-2011 hardware that has UEFI boot enabled, but note that Ubuntu uses the grub2 bootloader on top of EFI, whereas other Linux distributions use bootctl from systemd-boot or other EFI boot managers.
The company laptop initially had Archlinux installed with an encrypted root inside LVM on a LUKS partition with lots of free space left over for the installation of Ubuntu 15.10. Installing Ubuntu as the second OS on an EFI boot machine ran into two problems:
1. If you choose to install Ubuntu 15.10 into a new LUKS partition on a disk that already contains other LUKS partitions, the Debian Installer will erase the other LUKS headers when it creates the new LUKS partition. This is a known issue, and the cryptsetup developer warns (section 1.2 WARNINGS) against installing Ubuntu with LUKS on a disk that already contains other LUKS partitions:
https://gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions
2. Ubuntu can fail to install grub2 into the ESP (EFI System Partition) if another boot manager has already been installed there. I think this occurs because the path /boot/efi doesn't exist, which can happen if another linux distro (like Archlinux) is first installed and a boot manager like bootctl has created the path /boot/EFI on the ESP. The sole difference is EFI in all-caps or not, but this seemingly-minor issue causes problems for the Ubuntu installer.
Workaround
I therefore strongly recommend installing Ubuntu as your first OS, before any other Linux distros, when creating an EFI multiboot system. Archlinux plays well with other EFI boot managers installed to the ESP.
During the installation of Archlinux as the second OS on an EFI boot machine, the ESP will be mounted at /boot (actually /mnt/boot before chrooting into the new system). To install the bootctl boot manager, simply invoke
bootctl install
inside the chroot, which will then install bootctl into the ESP.
I created Archlinux as my default boot entry at /boot/loader/entries/arch.conf :
title Arch
linux /vmlinuz-linux
initrd /intel-ucode.img
initrd /initramfs-linux.img
options cryptdevice=UUID=b332de40-afe8-47d5-9512-bf03da8d13cc:ARCH root=/dev/mapper/ARCH-rootvol quiet rw
The example above includes boot options for opening a LUKS partition and accessing '/' inside LVM.
Now you also need to create an entry for Ubuntu so that bootctl will load the grub2 menu for 15.10 Wily. Let's call it /boot/loader/entries/ubuntu.conf:
title Ubuntu 15.10
efi /EFI/ubuntu/grubx64.efi
Finally, set the timeout and default entry in /boot/loader/loader.conf (entry files under /boot/loader/entries, including ubuntu.conf, are discovered automatically, so they do not need to be listed here):
timeout 10
default arch
Now when you get to the EFI boot menu, you will see entries for both Archlinux and Ubuntu.
For reference, the cryptsetup FAQ's warning about the Ubuntu installer (problem 1 above) is worth quoting in full:
UBUNTU INSTALLER: In particular the Ubuntu installer seems to be quite willing to kill LUKS containers in several different ways. Those responsible at Ubuntu seem not to care very much (it is very easy to recognize a LUKS container), so treat the process of installing Ubuntu as a severe hazard to any LUKS container you may have...the installer offers to create LUKS partitions in a way that several people mistook for an offer to activate their existing LUKS partition. The installer gives no or an inadequate warning and will destroy your old LUKS header, causing permanent data loss
Saturday, November 14, 2015
How to group chat in Slack through irssi + bitlbee
This tutorial assumes that you already have irssi and bitlbee installed and set up, and therefore only details the process of adding the #general channel from a Slack board so you can group chat in the terminal. It will not cover setting up irssi and bitlbee for XMPP chat services in the terminal; there are already many good guides on the Internet that explain how to do this (the Archlinux bitlbee tutorial, Slack's tutorial on connecting via XMPP or IRC, etc.).
In the bitlbee admin channel (/join &bitlbee), initially adding a slack account is straightforward:
account add jabber userName@myBoardName.xmpp.slack.com [Slack-generated XMPP password]
Note that you can find your Slack-generated XMPP password at myBoardName.slack.com/account/gateways at the bottom of the page with a section that reads "Getting Started: XMPP". This password is different from your main Slack password!
OK, so now you've added your XMPP account for a Slack board. If you do nothing else, you can at least chat with individual users on the board with the /msg userName command, but you won't be able to chat in channels such as #general.
To enable #general in Slack, enter the following from the bitlbee admin channel (all one line):
chat add userName@myBoardName.xmpp.slack.com general@conference.myBoardName.xmpp.slack.com
This step is not clear from the instructions provided by Slack on the .../account/gateways page.
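One more step I found necessary: after the chat add, the room still has to be joined from irssi. The channel name below is what bitlbee derived from the room name in my case; adjust it if your channel was created under a different name:

```
/join #general
```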
But there is one problem -- unless your default irssi nick matches the one you are using on your Slack board, you will be refused access to #general. You can set your default irssi nick in $HOME/.irssi/config in the following section:
settings = {
core = { real_name = "My real name"; user_name = "YourUserNameHere"; nick = "YourNick"; };
"fe-text" = { actlist_sort = "refnum"; };
};
To apply changes, restart irssi. Or you can change your Slack username to match your irssi nick by clicking on Account Settings in the Slack UI.
Some screen shots of irssi + bitlbee in action with a Slack channel (already added).
Tuesday, November 3, 2015
ipTIME N704M router firmware version 9.84 - WAN port detection problem
Problem: After installing N704 firmware version 9.84, released on 2015-10-06, the WAN port is no longer detected at all once the router is power-cycled (the WAN LED does not light up).
Temporary fix: Downgrade to the previous firmware, version 9.78 (2015-08-11), and power-cycle the router; the WAN port is detected again.
Still, I think it is admirable that ipTIME keeps releasing firmware upgrades for the N704M, a product that is now four years old! Phone manufacturers, for example, frequently stop shipping the latest Android updates after only two years.
Sunday, November 1, 2015
Using Emacs within a GNU Screen session
My text editor of choice is Emacs, and it is one of the applications that I automatically launch inside a tab within GNU Screen. One problem, however, is that GNU Screen by default captures Control-s (C-s), because this key combo means XOFF (stop sending input to the terminal; to re-enable input, press C-q). This is undesirable behavior because C-s is a very important key combo in Emacs.
You can manually turn off flow control in Screen by invoking
C-a :flow off
within the tab running Emacs. I wanted to do this through a script, however, so I added the last line shown below to a bash script that launches Screen and creates several tabs:
# launch a GNU Screen session named 'netbook' in detached mode
screen -AdmS netbook -t SHELL
# create separate tabs in session 'netbook' and launch programs in tabs
screen -S netbook -X screen -t HTOP htop
screen -S netbook -X screen -t CMUS cmus
screen -S netbook -X screen -t IRSSI irssi
screen -S netbook -X screen -t EMACS env TERM=xterm-256color emacs
# On tab 0, launch SpiderOakONE and dropboxd daemons for cli
screen -S netbook -p 0 -X stuff "cd $HOME/bin^M"
screen -S netbook -p 0 -X stuff "./tty_startup.sh^M"
# turn off flow control in tab 4 so C-s is passed to emacs
screen -S netbook -p 4 -X 'flow off'
Although that last line doesn't generate a syntax error, it actually has no effect! When I pressed C-a f (Ctrl-a, then f) within the Screen tab running Emacs, I noticed that flow control was not being disabled, despite the command in the script above.
Fortunately, Screen has an automatic flow control mode that is smart enough to figure out when to disable flow control, for example when Emacs is running. To enable it, add the following line to your .screenrc config file:
defflow auto
Now I don't experience any more problems with Screen capturing the C-s key combo.
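For context, here is a minimal .screenrc sketch with the flow setting in place; the startup_message line is just a common convenience of mine, not part of the fix:

```
# ~/.screenrc
startup_message off   # skip the copyright splash screen
defflow auto          # automatic flow control: pass C-s/C-q through to
                      # programs like Emacs that manage it themselves
```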
References:
https://www.gnu.org/software/screen/manual/html_node/Flow-Control-Summary.html
https://bitbucket.org/gojun077/bin-scripts/src/cef6f0a31c087cad85246f494f82a5abea2d0990/screen_netbook.sh?at=d257&fileviewer=file-view-default My GNU Screen launch script
https://github.com/gojun077/jun-dotfiles/blob/master/screenrc My .screenrc file from my personal dotfiles repo
Labels:
Emacs,
GNU Screen,
linux
Sunday, October 25, 2015
Stepmania 5 with USB PlayDance DDR pad (dragonrise 0079:0011) in Linux
For the past 6 weeks, I've been doing sysadmin tasks remotely, working from home. One good thing about not having to go into the office is that I can sleep about 1-2 hours longer in the morning. One bad thing is that I am walking a lot less than I used to. I track my steps with a Fitbit Zip, and I have a Beeminder steps goal linked through Fitbit's API. If I don't walk at least 7,000 steps per day, Beeminder charges a penalty to my credit card.
When working in the office, I routinely surpassed 7,000 steps per day, but working from home, I found I was walking fewer than 4,000. I was inspired by this post, in which a fellow Beeminder user extolled the virtues of DDR (Dance Dance Revolution) as a way to get some exercise.
I searched some Korean shopping sites and found a USB DDR pad for W23,000 (about $20 at the current exchange rate of W1200/$1):
It's a soft, foldable mat that has sensors inside corresponding to 8 buttons. When I plugged the DDR mat into the USB port on my laptop, dmesg -wH gave me the following info:
...
[Oct25 02:34] usb 5-1: new low-speed USB device number 6 using uhci_hcd
[ +0.183611] input: USB Gamepad as /devices/pci0000:00/0000:00:1d.0/usb5/5-1/5-1:1.0/0003:0079:0011.0006/input/input18
[ +0.000565] dragonrise 0003:0079:0011.0006: input,hidraw3: USB HID v1.10 Joystick [USB Gamepad ] on usb-0000:00:1d.0-1/input0
And lsusb returns the following:
[archjun@latitude630 ~]$ lsusb
...
Bus 005 Device 006: ID 0079:0011 DragonRise Inc. Gamepad
From a Google search for this USB ID, I learned that the DragonRise chipset is generally used in handheld game controllers. Apparently it is now also used in DDR dance mats! The mat I purchased is called PlayDance; it is manufactured in China and sold on many Korean online shopping sites.
When I first launched Stepmania 5 (which runs on both Windows and Linux), the DDR pad seemed unresponsive. Before sending it back to the seller, I wanted to verify whether hardware input was being picked up at all.
I referred to the excellent Archlinux documentation on gamepads at the following link:
https://wiki.archlinux.org/index.php/Gamepad
Recent kernels should be able to automatically detect USB joystick hardware. The documentation indicates that in /dev/input/by-id there should be a new device created when the USB gamepad/dancepad is plugged in:
[archjun@latitude630 ~]$ cd /dev/input/by-id
[archjun@latitude630 by-id]$ ls
usb-0079_USB_Gamepad-event-joystick
usb-0079_USB_Gamepad-joystick
usb-KYE_Optical_Mouse-event-mouse
usb-KYE_Optical_Mouse-mouse
After running cat on the gamepad device, gibberish is printed to the screen every time I step on a dancepad button:
[archjun@latitude630 by-id]$ cat usb-0079_USB_Gamepad-event-joystick
:� V0r V0r
=� V�J V0r =� V�J " =� V�J =� VV~
=� VV~
[archjun@latitude630 by-id]$ cat usb-0079_USB_Gamepad-joystick
F=�U �F=�U� F=�Ucat usb-0079_USB_Gamepad-joystick� F=�U� F=�U� F=�U�F=�U� F=�U�F=�UF=�U� F=�U ��F=�U �� �E�U ?G�U I�U �J�U �M�U yN�U �O�U
After verifying that input from the USB pad was being detected by the Linux host, I once again tried Stepmania 5 to see if it would detect the dance pad. I don't know why Stepmania 5 didn't detect input the first time, but the second time the DDR pad worked fine and I was able to map dance pad buttons to actions in the Options menu.
For the first day or two it was hard to break 3,000 steps in an hour, but now that I've got the hang of it (well, kind of -- my best is Easy 5), I find I can record 3,000 steps in 30 minutes and break into a sweat if I do enough songs over 130 bpm. So far, DDR seems to be an effective form of indoor exercise!
Sunday, October 18, 2015
Transferring files without ssh: netcat, darkhttpd, and Python
ssh is indispensable when working on remote machines, but to my surprise (and frustration), many of the big telecoms in Korea have started disabling sshd on most machines due to security-audit recommendations. This is ridiculous when you consider that the audits don't flag the rampant use of telnet (which sends all traffic in cleartext) for managing machines on the internal network. At some sites that disable sshd, sysadmins are using old-fashioned ftp in place of sftp!
To do my work, whether it's applying patches or setting up Apache, at a minimum I need to be able to transfer files between machines. When you aren't given access to ssh (which also means no scp or vsftpd), oftentimes netcat (nc), darkhttpd or Python's built-in webservers will do nicely for file transfers.
Netcat
Lots of old-school sysadmins are familiar with using netcat to transfer files between machines, and there are many tutorials on the Internet. Here is an example of using netcat (both GNU netcat/nc and BSD nc will work with each other) to transfer a file to a machine running firewalld dynamic firewall.
By default, firewalld will keep all ports closed except those necessary for web browsing (port 80 http or 443 https) and certain user-defined services like nfs, ssh, etc. The remote machine in this example is on my local network and has sshd and rpcbind (for NFS) running. firewalld will ignore a regular ping scan from nmap, but if we run nmap -Pn hostname, we can see a list of open ports (-Pn Treat all hosts as online -- skip host discovery).
[archjun@latitude630 playground]$ nmap -Pn 192.168.10.57
Starting Nmap 6.47 ( http://nmap.org ) at 2015-10-16 23:07 KST
Nmap scan report for 192.168.10.57
Host is up (0.85s latency).
Not shown: 996 filtered ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
873/tcp closed rsync
2049/tcp open nfs
Before transferring files to a remote machine using netcat, first I need to temporarily open a port for netcat to use. Let's use tcp port 4444:
[archjun@d257 playground]$ sudo firewall-cmd --zone=internal --add-port=4444/tcp
[sudo] password for archjun:
success
By the way, firewalld has the concept of zones, each with its own security policy; network interfaces can be assigned to zones. The internal zone applies to the local network only. The available zones are:
[archjun@d257 playground]$ firewall-cmd --get-zones
block dmz drop external home internal public trusted work
Running nmap from latitude630 on the remote machine d257 now shows tcp port 4444:
[archjun@latitude630 playground]$ nmap -Pn 192.168.10.57
Starting Nmap 6.47 ( http://nmap.org ) at 2015-10-16 23:21 KST
Nmap scan report for 192.168.10.57
Host is up (0.61s latency).
Not shown: 994 filtered ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
873/tcp closed rsync
2049/tcp open nfs
4444/tcp closed krb524
Now from the remote machine d257 I will start BSD nc and tell it to listen on tcp port 4444 and to redirect all traffic to the file rcv_nc_test.txt:
[archjun@d257 playground]$ nc -l 4444 > rcv_nc_test.txt
From the sending machine, I will start GNU netcat/nc and tell it to send the file test_new_vimrc which contains the following text:
abc
#aabcde
hopefully no more temp files generated in editing path..
One more try... hopefully no more .un~ files will be generated
in the edited file's PATH
[archjun@latitude630 playground]$ nc 192.168.10.57 4444 < test_new_vimrc
Neither the receiving nor the sending nc gives any indication that the transfer is complete, so on the receiving end I pressed Ctrl-C to terminate the netcat session. Let's see if the content of the text file from latitude630 made it into rcv_nc_test.txt on d257:
[archjun@d257 playground]$ ls
1 3 foo-replace orig1 orig2 play-w-bash-functions.sh test_sed_replace.txt
2 foo netcat_file.svg orig1.old orig3 rcv_nc_test.txt
[archjun@d257 playground]$ cat rcv_nc_test.txt
abc
#aabcde
hopefully no more temp files generated in editing path..
One more try... hopefully no more .un~ files will be generated
in the edited file's PATH
The content of test_new_vimrc from latitude630 was successfully redirected to rcv_nc_test.txt on d257! nc can also transfer binary files just fine.
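Since nc gives no transfer status, it is worth comparing checksums on both ends after the transfer, especially for binaries. A quick sketch, using the filenames from the example above:

```shell
# on the sender (latitude630)
sha256sum test_new_vimrc
# on the receiver (d257) -- the hash should be identical
sha256sum rcv_nc_test.txt
```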
Built-in Webservers in Python 2 and Python 3
Nowadays python 2 is installed by default on most of the Linux installations (RHEL/CentOS, Ubuntu) I work on in the field. Python 2 ships with its own webserver module called SimpleHTTPServer. When invoked from the command line, it will by default serve the current directory over http on port 8000. Since the remote machine is running firewalld, I first have to open tcp port 8000:
[archjun@d257 bin]$ sudo firewall-cmd --zone=internal --add-port=8000/tcp
[sudo] password for archjun:
success
[archjun@d257 playground]$ python2 -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
192.168.10.63 - - [16/Oct/2015 23:18:36] "GET / HTTP/1.1" 200 -
192.168.10.63 - - [16/Oct/2015 23:18:36] "GET /favicon.ico HTTP/1.1" 404 -
192.168.10.63 - - [16/Oct/2015 23:18:41] "GET /foo-replace HTTP/1.1" 200 -
Note that in Archlinux, python 2 must be invoked as python2; since 2014, python 3 has been the default python in Arch. In Python 3, invoking the built-in webserver is slightly different -- the module name is http.server:
[archjun@d257 playground]$ python -m http.server
Serving HTTP on 0.0.0.0 port 8000 ...
192.168.10.63 - - [16/Oct/2015 23:31:54] "GET /test_sed_replace.txt HTTP/1.1" 200 -
192.168.10.63 - - [16/Oct/2015 23:31:58] "GET /play-w-bash-functions.sh HTTP/1.1" 200 -
Now if I navigate to 192.168.10.57:8000 from another machine on the local network, I get the following HTML page listing the contents of ~/playground:
Much easier than configuring apache/httpd, isn't it? Although the page reads "Directory listing for /", it is actually serving ~/playground from host d257. The output is the same for SimpleHTTPServer and http.server. To change the port number, simply add it after the module name, e.g. python -m http.server 8080. Note that to use a port below 1024, you must run the webserver as root (and be aware of the security risks).
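You don't need a browser on the receiving end, either; any http client can pull files from the Python webserver. A sketch using the IP, port, and filenames from this example:

```shell
# fetch files served by python -m http.server on d257
wget http://192.168.10.57:8000/test_sed_replace.txt
# or with curl, keeping the original filename
curl -O http://192.168.10.57:8000/play-w-bash-functions.sh
```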
Darkhttpd
I am a big fan of darkhttpd. I normally use it in a PXE server setup with tftpboot and dnsmasq to serve up files over http as I detailed in this post. While python's built-in webserver is fine for serving up a few files in a pinch, darkhttpd will handle tens of GB of transfers without any hiccups, as I can attest to when installing Linux on multiple machines from a PXE server sending files over 1 gigabit Ethernet.
By default, darkhttpd will share the specified directory on tcp port 8080, but you can specify a different port with the --port option:
[archjun@d257 playground]$ darkhttpd . --port 8000
darkhttpd/1.11, copyright (c) 2003-2015 Emil Mikulic.
listening on: http://0.0.0.0:8000/
1445006481 192.168.10.63 "GET /" 200 1024 "" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"
1445006504 192.168.10.63 "GET /netcat_file.svg" 200 1463 "http://192.168.10.57:8000/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"
Navigating to 192.168.10.57:8000 from another machine shows the following HTML page:
By the way, if you launch darkhttpd as root, it will share the specified directory on tcp port 80.
Conclusion
If you ever find yourself on a locked-down machine without ssh, give netcat, python webservers and darkhttpd a try. In the examples above, opening ports using firewalld is quite easy compared to editing and reloading static iptables rules. I am so glad that RHEL 7.x uses firewalld by default!
To do my work, whether it's applying patches or setting up Apache, at a minimum I need to be able to transfer files between machines. When you aren't given access to ssh (which also means no scp or vsftpd), oftentimes netcat (nc), darkhttpd or Python's built-in webservers will do nicely for file transfers.
Netcat
Lots of old-school sysadmins are familiar with using netcat to transfer files between machines, and there are many tutorials on the Internet. Here is an example of using netcat (both GNU netcat/nc and BSD nc will work with each other) to transfer a file to a machine running firewalld dynamic firewall.
By default, firewalld will keep all ports closed except those necessary for web browsing (port 80 http or 443 https) and certain user-defined services like nfs, ssh, etc. The remote machine in this example is on my local network and has sshd and rpcbind (for NFS) running. firewalld will ignore a regular ping scan from nmap, but if we run nmap -Pn hostname, we can see a list of open ports (-Pn Treat all hosts as online -- skip host discovery).
[archjun@latitude630 playground]$ nmap -Pn 192.168.10.57
Starting Nmap 6.47 ( http://nmap.org ) at 2015-10-16 23:07 KST
Nmap scan report for 192.168.10.57
Host is up (0.85s latency).
Not shown: 996 filtered ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
873/tcp closed rsync
2049/tcp open nfs
Before transferring files to a remote machine using netcat, first I need to temporarily open a port for netcat to use. Let's use tcp port 4444:
[archjun@d257 playground]$ sudo firewall-cmd --zone=internal --add-port=4444/tcp
[sudo] password for archjun:
success
btw, firewalld has the concept of zones, each with its own security policy. Network interfaces can be placed in different zones; the internal zone applies to the local network only. The available zones are:
[archjun@d257 playground]$ firewall-cmd --get-zones
block dmz drop external home internal public trusted work
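Changes made with firewall-cmd like the one above are runtime-only unless you add --permanent, so they vanish on a reload or reboot, which is convenient for a one-off netcat port. A sketch of checking and cleaning up afterward (run with sudo, as above):

```shell
# List the ports currently opened in the internal zone.
firewall-cmd --zone=internal --list-ports

# Close tcp port 4444 again once the transfer is finished.
firewall-cmd --zone=internal --remove-port=4444/tcp

# Without --permanent, a reload discards all runtime changes anyway:
firewall-cmd --reload
```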
Running nmap from latitude630 against the remote machine d257 now shows tcp port 4444 (reported as closed rather than filtered, because the firewall now allows the port but nothing is listening on it yet):
[archjun@latitude630 playground]$ nmap -Pn 192.168.10.57
Starting Nmap 6.47 ( http://nmap.org ) at 2015-10-16 23:21 KST
Nmap scan report for 192.168.10.57
Host is up (0.61s latency).
Not shown: 994 filtered ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
873/tcp closed rsync
2049/tcp open nfs
4444/tcp closed krb524
Now from the remote machine d257 I will start BSD nc and tell it to listen on tcp port 4444 and to redirect all traffic to the file rcv_nc_test.txt:
[archjun@d257 playground]$ nc -l 4444 > rcv_nc_test.txt
From the sending machine, I will start GNU netcat/nc and tell it to send the file test_new_vimrc which contains the following text:
abc
#aabcde
hopefully no more temp files generated in editing path..
One more try... hopefully no more .un~ files will be generated
in the edited file's PATH
[archjun@latitude630 playground]$ nc 192.168.10.57 4444 < test_new_vimrc
Neither the sending nor the receiving nc gives any indication that the transfer is complete, so on the receiving end I pressed Ctrl-C to terminate the netcat session. Let's see whether the content of the text file from latitude630 arrived in rcv_nc_test.txt on d257:
[archjun@d257 playground]$ ls
1 3 foo-replace orig1 orig2 play-w-bash-functions.sh test_sed_replace.txt
2 foo netcat_file.svg orig1.old orig3 rcv_nc_test.txt
[archjun@d257 playground]$ cat rcv_nc_test.txt
abc
#aabcde
hopefully no more temp files generated in editing path..
One more try... hopefully no more .un~ files will be generated
in the edited file's PATH
The content of test_new_vimrc from latitude630 was successfully redirected to rcv_nc_test.txt on d257! nc can also transfer binary files just fine.
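For binary files especially, it's worth confirming the bytes arrived unchanged by comparing checksums on both ends. A sketch (the nc commands are shown as comments since they need two machines; sample.txt is a stand-in file created here for illustration):

```shell
# Receiver (d257):       nc -l 4444 > sample.txt
# Sender (latitude630):  nc 192.168.10.57 4444 < sample.txt
# After the transfer, run sha256sum on both ends; matching hashes
# mean the file arrived intact.

# Simulated locally: create a stand-in file and checksum it.
printf 'abc\n#aabcde\n' > sample.txt
sha256sum sample.txt
```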
Built-in Webservers in Python 2 and Python 3
Nowadays, python 2 is installed by default on most of the Linux installations (RHEL/CentOS, Ubuntu) I work on in the field. Python 2 comes with its own webserver module called SimpleHTTPServer. When invoked from the command line, it will by default serve up the current directory over http on port 8000. Since the remote machine is running firewalld, I first have to open tcp port 8000:
[archjun@d257 bin]$ sudo firewall-cmd --zone=internal --add-port=8000/tcp
[sudo] password for archjun:
success
[archjun@d257 playground]$ python2 -m SimpleHTTPServer
Serving HTTP on 0.0.0.0 port 8000 ...
192.168.10.63 - - [16/Oct/2015 23:18:36] "GET / HTTP/1.1" 200 -
192.168.10.63 - - [16/Oct/2015 23:18:36] "GET /favicon.ico HTTP/1.1" 404 -
192.168.10.63 - - [16/Oct/2015 23:18:41] "GET /foo-replace HTTP/1.1" 200 -
Note that in Archlinux python 2 must be invoked with python2. Since 2014, python 3 has been the default python in Arch. In Python 3, invoking the built-in webserver is a bit different. The module name is http.server:
[archjun@d257 playground]$ python -m http.server
Serving HTTP on 0.0.0.0 port 8000 ...
192.168.10.63 - - [16/Oct/2015 23:31:54] "GET /test_sed_replace.txt HTTP/1.1" 200 -
192.168.10.63 - - [16/Oct/2015 23:31:58] "GET /play-w-bash-functions.sh HTTP/1.1" 200 -
Now if I navigate to 192.168.10.57:8000 from another machine on the local network, I get the following HTML page listing the contents of ~/playground:
Much easier than configuring apache/httpd, isn't it? Although the page reads "Directory listing for /", it is actually serving up ~/playground from host d257. The output is the same for SimpleHTTPServer and http.server. If you want to change the port number, simply add the port you wish to use after the module name, e.g. python -m http.server 8080. Note that if you want to use a port below 1024, you must invoke the webserver as root (but be aware of the security risks).
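The steps above can be turned into a quick smoke test. This sketch assumes Python >= 3.4 (for the --bind option) and fetches the directory listing locally to confirm the server is up:

```shell
# Serve the current directory on port 8080, bound to localhost only.
python3 -m http.server 8080 --bind 127.0.0.1 &
SERVER_PID=$!
sleep 1

# Fetch the directory listing; a 200 status means the server is up.
python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8080/').status)"

# Stop the server when done.
kill "$SERVER_PID"
```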
Darkhttpd
I am a big fan of darkhttpd. I normally use it in a PXE server setup with tftpboot and dnsmasq to serve up files over http, as I detailed in this post. While python's built-in webserver is fine for serving up a few files in a pinch, darkhttpd will handle tens of GB of transfers without any hiccups, as I can attest from installing Linux on multiple machines from a PXE server over 1 gigabit Ethernet.
By default, darkhttpd will share the specified directory on tcp port 8080, but you can specify a different port with the --port option:
[archjun@d257 playground]$ darkhttpd . --port 8000
darkhttpd/1.11, copyright (c) 2003-2015 Emil Mikulic.
listening on: http://0.0.0.0:8000/
1445006481 192.168.10.63 "GET /" 200 1024 "" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"
1445006504 192.168.10.63 "GET /netcat_file.svg" 200 1463 "http://192.168.10.57:8000/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.71 Safari/537.36"
Navigating to 192.168.10.57:8000 from another machine shows the following HTML page:
btw if you launch darkhttpd as root, it will share the specified directory on tcp port 80.
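For longer-lived sharing, darkhttpd has a few useful flags beyond --port. A sketch, assuming darkhttpd is installed; /tmp/darkhttpd.log and /srv/share are just example paths:

```shell
# Serve ~/playground on port 8000, detach into the background,
# and append access logs to a file.
darkhttpd ~/playground --port 8000 --daemon --log /tmp/darkhttpd.log

# As root on port 80, --chroot confines the server to the shared
# directory for a little extra safety:
# sudo darkhttpd /srv/share --chroot
```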
Conclusion
If you ever find yourself on a locked-down machine without ssh, give netcat, python webservers and darkhttpd a try. In the examples above, opening ports using firewalld is quite easy compared to editing and reloading static iptables rules. I am so glad that RHEL 7.x uses firewalld by default!