Friday, September 30, 2016

Openstack Mitaka - Changing ceilometer default polling period in pipeline.yaml

I am currently working on a four-node installation of Openstack Mitaka, and while testing Ceilometer alarm functionality I ran into the problem of the alarm state always displaying insufficient data. Of the four nodes, mgmt01 is the storage node running glance and cinder; mgmt02 is the control node running nova-manager, keystone, ceilometer, heat, horizon, etc.; and compute01 and compute02 are the two compute nodes.

By default, Ceilometer telemetry gathers data every 600 seconds, but you can change interval: 600 (in seconds) to some smaller value. Here's a link to the default version of /etc/ceilometer/pipeline.yaml for Openstack Mitaka:

https://gist.github.com/gojun077/8d7b9e8afc22c8f5d5014c883f8c1cf9

On my control node, mgmt02, I edited this file so that ceilometer would poll gauges every 60 seconds, changing interval: 600 to interval: 60 in several places throughout the file.
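
For reference, here is roughly what the top of the edited file looks like. The excerpt follows the default Mitaka pipeline.yaml linked above, so check it against your own file:

---
sources:
    - name: meter_source
      interval: 60        # changed from 600
      meters:
          - "*"
      sinks:
          - meter_sink
    - name: cpu_source
      interval: 60        # changed from 600
      meters:
          - "cpu"
      sinks:
          - cpu_sink
          - cpu_delta_sink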

Next I created a new Cirros VM named cirros-test-meter with only an internal network interface:

# openstack server create --image cirros-d160722-x86_64 \
--flavor m1.tiny \
--nic net-id=bc7730ce-80e8-47e1-96e5-c4103ed8e37c cirros-test-meter


To get the UUID of cirros-test-meter:

# openstack server list
...
f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cirros-test-meter | ACTIVE  | private=192.168.95.142

I then created a ceilometer alarm for the cirros VM that would track cpu usage (using the ceilometer gauge cpu_util) and trigger if cpu utilization stayed above 70% for two consecutive 60-second periods.

# ceilometer alarm-threshold-create --name cpu_high \
--description 'CPU usage high' --meter-name cpu_util \
--threshold 70 --comparison-operator gt --statistic avg \
--period 60 --evaluation-periods 2 --alarm-action 'log://' \
--query resource_id=f3280890-1b60-4a6c-8df5-7195dbb00ca3


Note that since Openstack Liberty, the alarm-action 'log://' logs alarms to /var/log/aodh/notifications.log instead of /var/log/ceilometer/alarm-notifier.log, so don't go looking for alarm logs in the wrong path!
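
You can watch the alarm notifications arrive on the control node with:

tail -f /var/log/aodh/notifications.log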

Verify that the alarm cpu_high was created:

# ceilometer alarm-list
...
| Alarm ID                             | Name     | State             | Severity | Enabled | Continuous | Alarm condition                     | Time constraints |
+--------------------------------------+----------+-------------------+----------+---------+------------+-------------------------------------+------------------+
| 23651a53-19cf-4bb0-97e0-09fab14445cd | cpu_high | insufficient data | low      | True    | False      | avg(cpu_util) > 70.0 during 2 x 60s | None             |

Since the alarm was just created, I will have to wait at least two 60-second periods before the alarm has enough data.


I create high cpu load inside the cirros vm with the following while loop:

# generate cpu load: evaluate 13^99 over and over in the shell
while true ; do
  echo $((13**99)) >/dev/null 2>&1
done &


This calculates 13 to the 99th power in an infinite loop. You can later kill this process by running top, finding the PID of the /bin/sh process running the above shell command, and killing it with sudo kill -15 PID.

This will immediately start generating 100% cpu load.
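
To stop the load later using the top/kill procedure described above (4321 is a placeholder PID):

$ top                  # note the PID of the busy /bin/sh process
$ sudo kill -15 4321   # substitute the actual PID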

Just to make sure, let's see what meters are available for the cirros vm:

# ceilometer meter-list --query \
  resource=f3280890-1b60-4a6c-8df5-7195dbb00ca3

...
| Name                     | Type       | Unit      | Resource ID                          | User ID                          | Project ID                       |
+--------------------------+------------+-----------+--------------------------------------+----------------------------------+----------------------------------+
| cpu                      | cumulative | ns        | f3280890-1b60-4a6c-8df5-7195dbb00ca3 | dfb630234e4e4155871611d5e60dc1d4 | ada4ee7cb446439abbe887601c87c900 |
| cpu.delta                | delta      | ns        | f3280890-1b60-4a6c-8df5-7195dbb00ca3 | dfb630234e4e4155871611d5e60dc1d4 | ada4ee7cb446439abbe887601c87c900 |
| cpu_util                 | gauge      | %         | f3280890-1b60-4a6c-8df5-7195dbb00ca3 | dfb630234e4e4155871611d5e60dc1d4 | ada4ee7cb446439abbe887601c87c900 |
...


You will see that the cpu_util ceilometer gauge exists for cirros-test-meter.

After several minutes had passed, I got a list of cpu_util sample values from ceilometer:

# ceilometer sample-list --meter cpu_util --query \
  resource=f3280890-1b60-4a6c-8df5-7195dbb00ca3

+--------------------------------------+----------+-------+---------------+------+----------------------------+
| Resource ID                          | Name     | Type  | Volume        | Unit | Timestamp                  |
+--------------------------------------+----------+-------+---------------+------+----------------------------+
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.75779497  | %    | 2016-09-27T04:56:31.816000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.483886216 | %    | 2016-09-27T04:46:31.852000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 42.2381838593 | %    | 2016-09-27T04:36:31.826000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 4.27827845015 | %    | 2016-09-27T04:26:31.942000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 4.42085822432 | %    | 2016-09-27T04:16:31.935000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 4.41009081847 | %    | 2016-09-27T04:06:31.825000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 3.69494435414 | %    | 2016-09-27T03:56:31.837000 |


But something is not right. Although the latest readings for cpu_util are more than 100% (remember, my alarm cpu_high should be triggered if cpu_util > 70%), you will notice that the polling interval is still every 10 minutes:

2016-09-27T04:56
2016-09-27T04:46
2016-09-27T04:36
2016-09-27T04:26
2016-09-27T04:16
...

I had definitely edited /etc/ceilometer/pipeline.yaml on my control node mgmt02 so that interval: 60 replaced interval: 600, and I had restarted ceilometer on the control node with openstack-service restart ceilometer.

It turns out that I have to edit /etc/ceilometer/pipeline.yaml on both of my Nova compute nodes as well! Running openstack-service status on mgmt02, I get:

[root@osmgmt02 ~(keystone_admin)]# openstack-service status
MainPID=26342 Id=neutron-dhcp-agent.service ActiveState=active
MainPID=26371 Id=neutron-l3-agent.service ActiveState=active
MainPID=26317 Id=neutron-lbaas-agent.service ActiveState=active
MainPID=26551 Id=neutron-metadata-agent.service ActiveState=active
MainPID=26299 Id=neutron-metering-agent.service ActiveState=active
MainPID=26411 Id=neutron-openvswitch-agent.service ActiveState=active
MainPID=26315 Id=neutron-server.service ActiveState=active
MainPID=26265 Id=openstack-aodh-evaluator.service ActiveState=active
MainPID=26455 Id=openstack-aodh-listener.service ActiveState=active
MainPID=26508 Id=openstack-aodh-notifier.service ActiveState=active
MainPID=19412 Id=openstack-ceilometer-central.service ActiveState=active
MainPID=19416 Id=openstack-ceilometer-collector.service ActiveState=active
MainPID=19414 Id=openstack-ceilometer-notification.service ActiveState=active
MainPID=26577 Id=openstack-gnocchi-metricd.service ActiveState=active
MainPID=26236 Id=openstack-gnocchi-statsd.service ActiveState=active
MainPID=26535 Id=openstack-heat-api.service ActiveState=active
MainPID=26861 Id=openstack-heat-engine.service ActiveState=active

(Note: I am missing heat-api-cfn.service, which is necessary for autoscaling with Heat templates) 
MainPID=26781 Id=openstack-nova-api.service ActiveState=active
MainPID=26753 Id=openstack-nova-cert.service ActiveState=active
MainPID=26691 Id=openstack-nova-conductor.service ActiveState=active
MainPID=26316 Id=openstack-nova-consoleauth.service ActiveState=active
MainPID=26603 Id=openstack-nova-novncproxy.service ActiveState=active
MainPID=26702 Id=openstack-nova-scheduler.service ActiveState=active


And on both of my Nova compute nodes, compute01 and compute02, openstack-service status returns:

MainPID=845 Id=neutron-openvswitch-agent.service ActiveState=active
MainPID=812 Id=openstack-ceilometer-compute.service ActiveState=active
MainPID=822 Id=openstack-nova-compute.service ActiveState=active


The compute nodes are running Neutron OVS agent, ceilometer-compute, and nova-compute services.
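
The fix is a one-line change plus a service restart on each compute node. The sed expression below is my own shorthand for the edit described above; review the file by hand if your pipeline.yaml uses intervals other than 600:

# run on compute01 and compute02
sed -i 's/interval: 600/interval: 60/' /etc/ceilometer/pipeline.yaml
openstack-service restart ceilometer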

Once I edited pipeline.yaml on compute01 and compute02 and restarted ceilometer on each Nova node, my cpu_high alarm finally got triggered:

# ceilometer alarm-list
+--------------------------------------+----------+-------+----------+---------+------------+-------------------------------------+------------------+
| Alarm ID                             | Name     | State | Severity | Enabled | Continuous | Alarm condition                     | Time constraints |
+--------------------------------------+----------+-------+----------+---------+------------+-------------------------------------+------------------+
| 23651a53-19cf-4bb0-97e0-09fab14445cd | cpu_high | alarm | low      | True    | False      | avg(cpu_util) > 70.0 during 2 x 60s | None             |
+--------------------------------------+----------+-------+----------+---------+------------+-------------------------------------+------------------+

And you can also see that the cpu_util samples are now taken at one-minute intervals:

# ceilometer sample-list --meter cpu_util --query \
  resource=f3280890-1b60-4a6c-8df5-7195dbb00ca3 -l 10

+--------------------------------------+----------+-------+---------------+------+----------------------------+
| Resource ID                          | Name     | Type  | Volume        | Unit | Timestamp                  |
+--------------------------------------+----------+-------+---------------+------+----------------------------+
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.504582533 | %    | 2016-09-27T06:45:50.867000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.478356119 | %    | 2016-09-27T06:44:50.880000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.424959609 | %    | 2016-09-27T06:43:50.945000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.431860726 | %    | 2016-09-27T06:42:50.872000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.486230975 | %    | 2016-09-27T06:41:50.881000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.458524549 | %    | 2016-09-27T06:40:50.873000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.540472746 | %    | 2016-09-27T06:39:50.878000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.494100804 | %    | 2016-09-27T06:38:50.940000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.499692501 | %    | 2016-09-27T06:37:50.869000 |
| f3280890-1b60-4a6c-8df5-7195dbb00ca3 | cpu_util | gauge | 102.415145802 | %    | 2016-09-27T06:36:50.868000 |
+--------------------------------------+----------+-------+---------------+------+----------------------------+




Saturday, September 24, 2016

Using Korean UTF8 fonts in LaTeX with koTeX

Although I am not an academic, I sometimes use LaTeX to write documents containing mathematical expressions. My resume is also formatted as a .tex document, which I then export to PDF using pdflatex. One inconvenience, however, is that regular LaTeX with pdflatex does not handle UTF8-encoded Korean text.

When I searched the Internet for ways to add Korean fonts to .tex documents, I first came across the package texlive-langcjk, which can be invoked in your document with \usepackage{CJKutf8}, but it fails to correctly render Korean fonts using the pdflatex rendering engine.

I then tried the koTeX package, which can be found in the default Fedora 24 repositories as texlive-kotex-* and in the default Archlinux repositories as texlive-langkorean. In your LaTeX preamble, simply add

\usepackage{kotex}

and you will be able to use any UTF8 Korean fonts you have installed on your system. Note, however, that you must use xetex instead of pdflatex as your TeX-to-PDF rendering engine.
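
Here is a minimal sketch of such a document. Compile it with xelatex; the font name NanumGothic is just an example, so substitute any Korean font installed on your system:

\documentclass{article}
\usepackage{kotex}
% provided by kotex/xetexko; pick any installed Korean font
\setmainhangulfont{NanumGothic}
\begin{document}
안녕하세요, \LaTeX!
\end{document}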

To use xetex on Fedora, you must have the texlive-xetex-bin package installed. On Archlinux, xetex can be found in the package texlive-bin. If you find that you are missing some .sty style files needed to render your LaTeX document to PDF, you will have to install those separately. For example, on Fedora I had to separately install texlive-isodate, texlive-substr, texlive-textpos, and texlive-titlesec before my resume template could render into PDF using xetex.
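
Rendering is then a matter of running, for example (resume.tex is a placeholder filename):

$ xelatex resume.tex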

Saturday, September 17, 2016

Openstack Mitaka multinode install using Packstack

I recently did a four-node install of Openstack Mitaka (RDO 9) using a pre-edited Packstack answer file. Of the four nodes, two are Nova compute nodes, one is a control node (Keystone, Horizon, Nova Controller, Neutron, etc) and one is a storage node (Glance and Cinder).

The external and internal network interfaces are br-ex and br-eno2, respectively. Before running

packstack --answer-file [name of answerfile]

you must create these interfaces manually in /etc/sysconfig/network-scripts.

On my servers, I made eno1 a slave of br-ex and used an Open vSwitch bridge instead of the built-in linux bridge. Here is my network config file for eno1 (ifcfg-eno1):

NAME=eno1
UUID=a5802af4-1400-4d4f-964c-2eae6e20905f
DEVICE=eno1
DEVICETYPE=ovs
ONBOOT=yes
TYPE=OVSPort
BOOTPROTO=none
OVS_BRIDGE=br-ex


And here is my network config file for br-ex (ifcfg-br-ex):

NAME=br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.116
NETMASK=255.255.255.0
GATEWAY=192.168.10.1
PEERDNS=yes
DNS1=168.126.63.1
DNS2=168.126.63.2


You will also need to make similar settings for eno2 and br-eno2 (which will be used for the internal or management network). In my case the external network is on 192.168.10.x, while my internal network is on 192.168.95.x.
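
Here is a sketch of ifcfg-eno2, mirroring ifcfg-eno1 above (UUID omitted):

NAME=eno2
DEVICE=eno2
DEVICETYPE=ovs
ONBOOT=yes
TYPE=OVSPort
BOOTPROTO=none
OVS_BRIDGE=br-eno2


And a sketch of ifcfg-br-eno2; the IPADDR value is a placeholder for whatever address the node should have on the internal subnet:

NAME=br-eno2
DEVICE=br-eno2
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.95.116
NETMASK=255.255.255.0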

To apply the new settings, restart the network service, and make sure that you stop and disable NetworkManager before running Packstack:
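
# apply the new interface configs
systemctl restart network
# NetworkManager interferes with the Packstack-managed interfaces
systemctl stop NetworkManager
systemctl disable NetworkManager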

In the Packstack answer file, I used br-ex and br-eno2 in the following four settings:

CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=extnet:br-ex,physnet1:br-eno2

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno1,br-eno2:eno2

CONFIG_NEUTRON_OVS_BRIDGES_COMPUTE=br-ex,br-eno2

I also specified that extnet and physnet1 use flat networking instead of vxlan or vlan so that instances can communicate with my existing physical network.
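
In the answer file this corresponds to settings along these lines; the parameter names come from the Mitaka Packstack answer file, but double-check the values against your own generated file (my full answer file is linked at the end of this post):

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=flat
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=extnet,physnet1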

In order to install Glance and Cinder on a node separate from the control node, you must enable unsupported parameters as follows:

# Specify 'y' if you want to use unsupported parameters. This should
# be used only if you know what you are doing. Issues caused by using
# unsupported options will not be fixed before the next major release.
# ['y', 'n']
CONFIG_UNSUPPORTED=y


then specify the storage node IP which will host Glance and Cinder:

# (Unsupported!) Server on which to install OpenStack services
# specific to storage servers such as Image or Block Storage services.
CONFIG_STORAGE_HOST=192.168.95.11


It is highly recommended to define hostnames for each node so that Openstack can refer to nodes without using IP addresses (which can change), but in my test installation I didn't specify hostnames and IPs in /etc/hosts on each node. Because I simply use IP addresses as identifiers, after the Packstack install completed I had to edit /etc/nova/nova.conf on both compute nodes and change the following setting to use an IP address instead of a hostname:

vncserver_proxyclient_address=192.168.10.115

Note that this address must be in the same subnet as the VNC base URL (novncproxy_base_url) in nova.conf.
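
For example, on a compute node (the novncproxy address and its default port 6080 below are placeholders; use your control node's address):

# /etc/nova/nova.conf
vncserver_proxyclient_address=192.168.10.115
novncproxy_base_url=http://192.168.10.116:6080/vnc_auto.html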

You can see my entire Openstack Mitaka Packstack answer file for four-node installation at the following link:

https://gist.githubusercontent.com/gojun077/59fbfa56e395853002a65465f91fb691/raw/662fc97ede44d75c02104143909ffe4dda416354/4node-packstack-mitaka.cfg


Saturday, September 10, 2016

Preparing a Win7 VM for use in Openstack Mitaka

In the project I am currently working on, the client wants to use a variety of OS instances on top of Openstack. Running recent Linux VMs on Nova just works out of the box (kernel versions 2.6 and above support the virtio drivers used in Openstack), but running Windows VMs on Openstack requires some preparation -- namely, you must first install Redhat's virtio drivers into a Win7 VM and then import the VM image into Glance.

Since I already have a Win7 VM that I use for testing in Virtualbox (downloaded legally from Microsoft at modern.ie), I thought it would be a simple task to just install the virtio-win drivers into my Virtualbox Win7 instance. This idea didn't pan out, however.

If you follow the Redhat guide for installing virtio drivers into Windows VM's, your VM needs to have the following System Devices in Device Manager:

  • PCI standard RAM Controller (for memory balloon driver)
  • PCI Simple Communication Controller (for vioserial driver)
  • virtio network (for netKVM driver)
  • virtio storage (for viostor driver)

Virtualbox by default supports the virtio-net network adapter type, but it doesn't support virtio storage devices, memory ballooning, or vioserial, so these virtio devices never appear in the guest.

Here is a screenshot of the available System Devices in a Win7 modernie VM running on Virtualbox:

You will notice that the required devices noted above do not exist.

I therefore had to do things the hard way. First I downloaded a Windows 7 installation iso, then I created a new KVM virtual machine using virt-manager. The virtual machine has the following devices (a virt-install sketch follows below):


  • 2 IDE CD-ROM's - the first CD-ROM must mount the virtio-win iso file while the second CD-ROM will mount the Windows 7 iso.
  • 1 VirtIO hard disk
  • 2 VirtIO network adapters (1 for the private subnet, 1 for the public subnet for floating IP's in Openstack)

As of September 2016, you also have to change the video type from QXL to Cirrus, otherwise the VM will get stuck on "Starting Windows". This is a bug in qemu and might have been fixed by the time you read this article.
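
Since I built the VM in the virt-manager GUI, the following virt-install command is only a sketch of an equivalent machine; the paths, disk size, and libvirt network names are placeholders:

virt-install --name win7-virtio --ram 4096 --vcpus 2 \
  --cdrom /isos/Win7-install.iso \
  --disk path=/isos/virtio-win.iso,device=cdrom \
  --disk path=/var/lib/libvirt/images/win7.qcow2,size=40,bus=virtio \
  --network network=private,model=virtio \
  --network network=public,model=virtio \
  --video cirrus --os-variant win7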

The Win7 installer will not be able to find any disks (because the disk uses virtio), so when it asks you for the location of the disk, click the Load Driver button and select the CD-ROM containing the Redhat virtio drivers for Windows.

I followed this guide for these steps:


Although the guide is for VMWare, it works just as well for KVM. However, the last step about adding the virtio drivers to regedit is not necessary for KVM.

Saturday, September 3, 2016

pdfshuffler for adding and removing pages from a PDF file

I used to use pdfmod on Linux to concatenate multiple PDF files or to remove pages from a PDF file. pdfmod depends on the Mono libraries (an open-source implementation of MS .NET), however, and they take up over 100 MB on disk when installed.

I have found a nice, lightweight replacement: pdfshuffler has just a handful of Python dependencies and takes up less than 5 MB on disk. It has all the functionality of pdfmod but is much lighter. It can be found in the default repos on Fedora 24 and in the Arch User Repository. Highly recommended!
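
On Fedora 24, installing it should be as simple as the following (package name as found in the default repos):

$ sudo dnf install pdfshuffler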



Note that you may run into a python bug when running pdfshuffler; you simply need to change ~2 lines of Python source as a workaround:

https://sourceforge.net/p/pdfshuffler/bugs/39/?limit=25#30ad/6f47