Several months ago, I described how to enable port forwarding with the dynamic firewall firewalld in a post titled Internet connection sharing through a computer with two NIC's. Today I will describe how to achieve the same thing in Ubuntu 15.10 using the Uncomplicated Firewall (ufw), a front-end to iptables.
Keep in mind that the method I am describing requires two NICs on the machine that will be forwarding packets from the internal network to the external network.
1. Make sure IP forwarding is enabled in the kernel
On most Linux distros this has historically been set in /etc/sysctl.conf, but in recent years, with the rise of systemd, the actual setting net.ipv4.ip_forward=1 might instead live in a rules file under /usr/lib/sysctl.d/ or /etc/sysctl.d/. On Ubuntu running ufw, however, the IP forwarding setting should be made in /etc/ufw/sysctl.conf.
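For reference, this is roughly what the relevant line in /etc/ufw/sysctl.conf should look like once enabled. Ubuntu's copy of this file typically writes the key with slashes (net/ipv4/ip_forward=1) and ships with it commented out; sysctl accepts either separator:
# /etc/ufw/sysctl.conf
net.ipv4.ip_forward=1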
2. Edit /etc/ufw/before.rules
Make sure that NAT is enabled with the following setting:
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
And then enable the forwarding of packets from your internal network subnet (mine is 192.168.95.0/24) to the external network interface (enp3s5f0 in my case):
-A POSTROUTING -s 192.168.95.0/24 -o enp3s5f0 -j MASQUERADE
- -A appends a rule
- -s specifies the source address
- -o indicates the output (egress) interface
The internal network at work is on the 192.168.95.X subnet (iface enp5s0), while the external subnet is on 192.168.30.X (iface enp3s5f0).
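Putting it all together, the nat section near the top of my /etc/ufw/before.rules (above the existing *filter section) ends up looking like the block below. The subnet and interface are from my setup, so adjust them to yours, and note that the section must be closed with its own COMMIT line or the nat rules will not be loaded:
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# forward traffic from the internal subnet out through the external NIC
-A POSTROUTING -s 192.168.95.0/24 -o enp3s5f0 -j MASQUERADE
COMMIT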
To apply the changes, run sudo ufw disable && sudo ufw enable
Notes
For some reason, after applying the changes, pinging the Ubuntu 15.10 server worked, but ssh was blocked by ufw. I thus had to manually allow ssh through the ufw firewall with the following command:
sudo ufw allow ssh
It is also possible to enable port forwarding using native iptables commands:
# accept packets arriving on the internal interface for forwarding
iptables -A FORWARD -i enp5s0 -j ACCEPT
# accept forwarded packets leaving on the external interface
iptables -A FORWARD -o enp3s5f0 -j ACCEPT
# masquerade (NAT) everything going out the external interface
iptables -t nat -A POSTROUTING -o enp3s5f0 -j MASQUERADE
But I didn't actually try this method, so it may or may not work on your machine.
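If you do go the raw iptables route, keep in mind that rules entered this way are lost on reboot and that the kernel forwarding switch still has to be turned on. A rough sketch, assuming the iptables-persistent package is installed to provide /etc/iptables/rules.v4:
# turn on forwarding for the current session
sudo sysctl -w net.ipv4.ip_forward=1
# save the running ruleset so iptables-persistent restores it at boot
sudo iptables-save | sudo tee /etc/iptables/rules.v4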
References:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Security_Guide/s1-firewall-ipt-fwd.html
https://gist.github.com/kimus/9315140
Saturday, May 28, 2016
Saturday, May 21, 2016
Setting up Sheepdog v0.9 Distributed Object Storage on Fedora 22/23
While many people have heard of Ceph distributed storage (an open-source project developed by Inktank, which was purchased by Redhat), not so many people have heard of Sheepdog distributed storage (an open-source project developed by NTT of Japan).
I first learned of Sheepdog from watching a 2012 presentation on Windows VDI (Virtual Desktop Infrastructure) made by the CTO (now CEO) of Atlantis. The 68-minute talk is up on YouTube. I was shocked to learn that Software Defined Storage (SDS) in a distributed architecture with 5+ nodes could boast higher IOPS than enterprise SAN hardware.
At work, I have tested Ceph as a storage backend for Openstack, namely as a backend for Nova ephemeral VMs, Glance images, and Cinder block storage volumes.
According to the documentation from various versions of Openstack (from Juno onwards), the Sheepdog storage driver is supported. For example, here's what the Openstack Kilo docs say about Sheepdog and Cinder:
http://docs.openstack.org/kilo/config-reference/content/sheepdog-driver.html
This driver enables use of Sheepdog through Qemu/KVM.
Set the following volume_driver in cinder.conf:
volume_driver=cinder.volume.drivers.sheepdog.SheepdogDriver
In another post, I talk about setting up Sheepdog as a backend for Openstack Kilo.
Of course, Sheepdog can be used as distributed storage on its own without Openstack. In this post I will cover setting up Sheepdog on Fedora 22/23 and mounting an LVM block device using the sheepdog daemon sheep.
Compile Sheepdog v0.9 from Github
As of May 2016, the upstream version of sheepdog from Github is v0.9.0...
By contrast, the sheepdog package provided by the RDO (Red Hat Distribution of Openstack) Kilo repos for Fedora is at version 0.3, which is incompatible with libcpg from corosync 2.3.5 in the default Fedora repos for f22/23 (the sheep daemon fails to start because of a segfault in libcpg).
When trying to start the v0.3 sheep daemon I got the following error in dmesg:
...
[Apr25 14:52] sheep[11897]: segfault at 7fdb24f59a08 ip 00007fdb2ccc7cd8 sp 00007fdb24f59a10 error 6 in libcpg.so.4.1.0[7fdb2ccc6000+5000]
...
As you can see above, the sheep daemon fails to start because of a segfault in libcpg, which is part of corosync.
This issue does not occur, however, when I use the v0.9 sheep daemon.
Here are the steps to compile Sheepdog from the upstream repo on Github:
(1) RENAME OLD SHEEPDOG 0.3 BINARIES
If you have RDO installed on your Fedora machine, sheepdog v0.3 binaries sheep and collie will already exist in /usr/sbin, but when you build sheepdog v0.9, it will install binaries into both /usr/sbin and /usr/bin:
- sheep will be created in /usr/sbin
- dog (the replacement for collie since v0.6) will be created in /usr/bin
To avoid namespace conflicts, it's a good idea to rename the old binaries from sheepdog v0.3. You might wonder why I bother renaming the binaries instead of doing dnf remove sheepdog. The reason you cannot just remove the old package is that sheepdog is one of the dependencies of RDO. Even marking the package as "manually installed" and trying to remove it didn't work for me.
sudo mv /usr/sbin/collie /usr/sbin/collie_0.3
sudo mv /usr/sbin/sheep /usr/sbin/sheep_0.3
(2) BUILD FROM UPSTREAM SOURCE
As of May 2016, the current sheepdog version is 0.9.0 ...
git clone git://github.com/collie/sheepdog.git
sudo dnf install -y autoconf automake libtool yasm userspace-rcu-devel \
corosynclib-devel
cd sheepdog
./autogen.sh
./configure
If you wish to build sheepdog with support for ZooKeeper as the cluster driver (corosync is used by default), configure it like this instead:
./configure --enable-zookeeper
Finally, invoke:
sudo make install
Sheepdog 0.9 binaries will be installed into /usr/bin and /usr/sbin, so make sure the old sheepdog binaries in /usr/sbin have been renamed! BTW, there is no collie command in sheepdog v0.9. It has been replaced with dog.
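After the install, a quick sanity check is to confirm which binaries you are now picking up; assuming the renaming above was done, something like this should point at the new v0.9 binaries:
which sheep dog
# expected: /usr/sbin/sheep and /usr/bin/dog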
Setup Corosync
Before starting corosync, you must ensure that TCP port 7000 has been opened in your firewall on all the machines you plan to use as sheepdog storage nodes. In a simple lab environment, you may be able to get away with temporarily stopping your firewall with systemctl stop firewalld, but don't do this in a production environment!
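For example, with firewalld on Fedora, something like the following should open the sheepdog port permanently (depending on your setup you may also need to allow the corosync multicast ports, UDP 5404-5405):
sudo firewall-cmd --permanent --add-port=7000/tcp
sudo firewall-cmd --permanent --add-port=5404-5405/udp
sudo firewall-cmd --reload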
(1) CREATE COROSYNC CONFIG FILES
sudo vim /etc/corosync/corosync.conf
# Please read the corosync.conf 5 manual page
compatibility: whitetank
totem {
        version: 2
        secauth: off
        threads: 0
        # Note, fail_recv_const is only needed if you're
        # having problems with corosync crashing under
        # heavy sheepdog traffic. This crash is due to
        # delayed/resent/misordered multicast packets.
        # fail_recv_const: 5000
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.95.146
                mcastaddr: 226.94.1.1
                mcastport: 5405
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        # the pathname of the log file
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}
For bindnetaddr, use your local server's IP on a subnet that is reachable from the other Sheepdog storage nodes. In my lab environment, my sheepdog nodes are on 192.168.95.{146,147,148}.
This probably isn't necessary, but if you want a regular user myuser to be able to access the corosync daemon, create the following file:
sudo vim /etc/corosync/uidgid.d/myuser
uidgid {
        uid: myuser
        gid: myuser
}
(2) START THE COROSYNC SERVICE
The corosync systemd service is not enabled by default, so enable the service and start it:
sudo systemctl enable corosync
sudo systemctl start corosync
When you check the corosync service status with systemctl status corosync you should see something like this:
● corosync.service - Corosync Cluster Engine
Loaded: loaded (/usr/lib/systemd/system/corosync.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2016-05-09 11:45:06 KST; 1 weeks 4 days ago
Main PID: 2248 (corosync)
CGroup: /system.slice/corosync.service
└─2248 corosync
May 09 11:45:06 fx8350no3 corosync[2248]: [QB ] server name: cpg
May 09 11:45:06 fx8350no3 corosync[2248]: [SERV ] Service engine loaded: corosync...4]
May 09 11:45:06 fx8350no3 corosync[2248]: [SERV ] Service engine loaded: corosync...3]
May 09 11:45:06 fx8350no3 corosync[2248]: [QB ] server name: quorum
May 09 11:45:06 fx8350no3 corosync[2248]: [TOTEM ] A new membership (192.168.95.14...86
May 09 11:45:06 fx8350no3 corosync[2248]: [MAIN ] Completed service synchronizati...e.
May 09 11:45:06 fx8350no3 corosync[2236]: Starting Corosync Cluster Engine (corosync... ]
May 09 11:45:06 fx8350no3 systemd[1]: Started Corosync Cluster Engine.
May 09 11:48:46 fx8350no3 corosync[2248]: [TOTEM ] A new membership (192.168.95.14...88
May 09 11:48:46 fx8350no3 corosync[2248]: [MAIN ] Completed service synchronizati...e.
Hint: Some lines were ellipsized, use -l to show in full.
(3) REPEAT STEPS 1 & 2 ON ALL MACHINES YOU WISH TO USE AS STORAGE NODES
In /etc/corosync/corosync.conf make sure to change bindnetaddr to the IP for each different machine.
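To confirm that the nodes have actually joined a single corosync membership before moving on to sheepdog, you can query the runtime database on any node. corosync-cmapctl ships with corosync 2.x (the exact key names can differ slightly between versions):
sudo corosync-cmapctl | grep members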
Launch Sheepdog Daemon on LVM Block Device
Sheepdog can use an entire disk for a storage node, but for testing purposes it is easier to create an LVM logical volume, mount it, and point the sheep daemon at the mountpoint.
(1) CREATE A MOUNTPOINT FOR SHEEP TO USE
sudo mkdir /mnt/sheep
(2) CREATE AN LVM BLOCK DEVICE FOR SHEEPDOG
sudo pvcreate /dev/sdxy
sudo vgcreate VGNAME /dev/sdxy
sudo lvcreate -L nG -n LVNAME VGNAME
where x is a letter (a, b, ... z), y is a partition number (1, 2, 3, ...), and n is the size of the logical volume in GB.
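As a concrete (purely illustrative) example, on a node with a spare disk at /dev/sdb the sequence might look like this; the disk, VG, and LV names here are assumptions, not taken from my actual setup:
sudo pvcreate /dev/sdb
sudo vgcreate vg_sheep /dev/sdb
# 100G logical volume named lv_sheep in volume group vg_sheep
sudo lvcreate -L 100G -n lv_sheep vg_sheep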
(3) CREATE A FILE SYSTEM ON THE LV
sudo mkfs.ext4 /dev/VGNAME/LVNAME
In this example, I created an ext4 file system, but you could use XFS or anything else.
(4) MOUNT BLOCK DEVICE ON MOUNTPOINT
sudo mount /dev/VGNAME/LVNAME /mnt/sheep
(5) RUN SHEEP DAEMON ON MOUNTPOINT
sudo sheep /mnt/sheep
To make sure the daemon is running, you can also try pidof sheep, which should return two process IDs.
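If something looks wrong, the daemon's own log is the first place to check; sheep writes a sheep.log file inside the store directory it was given (you can see it in the listing in the next step):
tail -n 20 /mnt/sheep/sheep.log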
(6) VERIFY DEFAULT FILES IN SHEEPDOG MOUNT
cd /mnt/sheep
ls
This should show the following files and directories:
config epoch lock obj sheep.log sock
If you don't see anything in the mount point, the sheep daemon failed to load.
(7) REPEAT STEPS 1-6 ON ALL MACHINES TO BE USED AS STORAGE NODES
(8) CHECK SHEEPDOG NODES
Now that sheep has been launched, you should check whether it can see the other sheepdog nodes. Sheepdog commands can be invoked as a regular user.
dog node list
Id Host:Port V-Nodes Zone
0 192.168.95.146:7000 128 2455742656
1 192.168.95.147:7000 128 2472519872
2 192.168.95.148:7000 128 2489297088
The sheepdog daemon should automatically be able to see all the other nodes on which sheep is running (if corosync is working properly, that is).
You can get a list of valid dog commands by just invoking dog without any arguments:
dog
Sheepdog administrator utility (version 0.9.0_352_g3d5438a)
Usage: dog [options]
Available commands:
vdi check check and repair image's consistency
vdi create create an image
...
(9) DO INITIAL FORMAT OF SHEEPDOG CLUSTER
This step only needs to be done once from any node in the cluster.
dog cluster format
using backend plain store
dog cluster info
Cluster status: running, auto-recovery enabled
Cluster created at Mon Apr 25 19:22:14 2016
Epoch Time Version [Host:Port:V-Nodes,,,]
#2016-04-25 19:22:14 1 [192.168.95.146:7000:128, 192.168.95.147:7000:128, 192.168.95.148:7000:128]
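dog cluster format also accepts a redundancy option if you want to set the number of copies explicitly rather than rely on the default; if I recall the flag correctly (-c / --copies), something like the following would request three copies of every object, which is also what the vdi listing in the next section reports for my cluster:
dog cluster format -c 3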
Convert RAW/QCOW2 Image to Sheepdog VDI Format
(1) INSTALL QEMU
sudo dnf install -y qemu qemu-kvm
(2) CONVERT A VM IMAGE TO SHEEPDOG VDI FORMAT
qemu-img convert -f qcow2 xenial-amd64.qcow2 sheepdog:xenial
In this example, I am converting an Ubuntu 16.04 64-bit cloud image to sheepdog VDI format. Note that cloud images do not ship with a default username and password, so it is impossible to log in without first injecting an SSH keypair with cloud-init. This can be achieved by booting the image in Openstack, selecting a keypair, and then logging into the launched instance through the console in Horizon. Once you are logged in, you can create a user and password. Then take a snapshot of the instance and download it for use in qemu or virt-manager.
NOTE: The format for sheepdog images is sheepdog:imgName
Converting a RAW or QCOW2 image to sheepdog format causes a new sheepdog VDI to be created across the distributed storage nodes. You can verify this by navigating to the sheep mountpoint and running ls, but the image file itself won't appear there, just a bunch of new object chunks, since this is object storage.
(3) VERIFY SHEEPDOG VDI CREATION
dog vdi list
Name Id Size Used Shared Creation time VDI id Copies Tag Block Size Shift
xenial 0 2.2 GB 976 MB 0.0 MB 2016-04-25 20:22 4f6c3e 3 22
(4) LAUNCH VDI VIA QEMU-SYSTEM & X11 FORWARDING
The Sheepdog storage nodes will probably be server machines without Xorg/X11 or a desktop environment installed.
You can still launch qemu-system if you use SSH X11 forwarding.
First, check that the server machine has xauth installed:
rpm -q xorg-x11-xauth
Then, from another machine that has X11 installed, invoke ssh with the -X option to run the remote program against your local X session and start qemu-system:
ssh -X fedjun@192.168.95.148 qemu-system-x86_64 -enable-kvm -m 1024 \
-cpu host -drive file=trusty64-bench-test1
The -m flag designates memory in MB; qemu's default is only 128MB so you need to specify this manually if you need more memory.
Note that qemu uses NAT (user-mode networking) for VMs by default. You cannot directly reach the VM from the host, but you can go from the VM to the host by ssh'ing or pinging 10.0.2.2.
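If you do need to reach the guest from the host, user-mode networking supports port forwarding. A minimal, illustrative variant of the command above (host port 2222 is arbitrary and gets forwarded to the guest's SSH port):
qemu-system-x86_64 -enable-kvm -m 1024 -cpu host \
  -drive file=trusty64-bench-test1 \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=net0
You can then reach the guest from the host with ssh -p 2222 user@localhost.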
Saturday, May 14, 2016
When Ctrl-C fails, use Ctrl-Z: Finding and Killing the pid of a runaway shell script
Recently I was writing a bash shell script with a while-do loop. The loop's exit condition was never satisfied, so the script was stuck in an infinite loop. Checking pidof bash and htop didn't give me conclusive results because I had several bash shells running and didn't know which PID to kill.
Pressing Ctrl-C in the terminal running the script failed to stop the runaway program. In such a case, press Ctrl-Z in the unresponsive terminal to suspend the foreground process and then type ps to get the PIDs of all processes in the current terminal.
Doing this, I found the PID of my script foo.sh as well as the PIDs of all the other commands being run by my script (nova, ping, neutron). Now just do kill -9 pidOfScript and your runaway script will stop.
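As an aside, since Ctrl-Z turns the script into a suspended job in that shell, you can also kill it by job number without looking up the PID at all; a minimal sketch:
# after pressing Ctrl-Z in the stuck terminal:
jobs          # lists jobs in this shell, e.g. [1]+  Stopped  ./foo.sh
kill -9 %1    # kill job 1 (the suspended script) by job spec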
Saturday, May 7, 2016
Launching a QEMU session remotely using X11 Forwarding
This week I needed to launch qemu-system (the QEMU system emulator, which opens a graphical console window) from a server that did not have the xorg/X11 graphical environment installed. Fortunately, as long as qemu is installed on the server and X11 is installed on your local machine, you can use X11 forwarding to display the qemu-system window locally.
When I first tried to execute qemu-system directly on server no3, I got the following error:
$ qemu-system-x86_64 -enable-kvm -drive file=sheepdog:xenial
Unable to init server: Could not connect: Connection refused
...
gtk initialization failed
This is not surprising, since the server doesn't have xorg/X11 installed.
I then tried to use my local Xorg session to run qemu-system remotely using X11 forwarding, but got the following error:
[archjun@pinkS310 dotfiles]$ ssh -X fedjun@192.168.95.146 qemu-system-x86_64 -enable-kvm -drive file=sheepdog:xenial
fedjun@192.168.95.146's password:
X11 forwarding request failed on channel 0
Unable to init server: Could not connect: Connection refused
Apparently this is due to xorg-x11-xauth not being installed on the no3 server running Fedora 23.
You also have to ensure the following is in /etc/ssh/sshd_config on the remote machine:
X11Forwarding yes
X11UseLocalhost no
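On the Fedora 23 server, the fix therefore amounted to something along these lines (install xauth and restart sshd so the sshd_config change takes effect):
sudo dnf install -y xorg-x11-xauth
sudo systemctl restart sshd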
After making these settings, X11 forwarding of qemu-system-x86_64 should work.