Saturday, October 29, 2016

Slow transfer times when creating a local npm package mirror

In the current project I'm working on, the client wants developers to install packages from local mirrors of Ubuntu, PyPI, and npm. Creating local mirrors of the default Ubuntu repositories is easy with apt-mirror, and PyPI can be mirrored just as easily with Bandersnatch. Mirroring the npm registry, however, is not so easy. I followed guides that recommend creating a mirror with CouchDB replication, but I couldn't get that method to work. Instead, I set up Sinopia as an npm cache server, so every time I install an npm package locally it is also saved in Sinopia. The next time I install the same package, it comes from the Sinopia cache instead of from the npm site.

The problem with using Sinopia as a full npm mirror is that it will not automatically download packages. You must manually install npm packages with npm install pkgname. I got a list of all the packages in npm from http://skimdb.npmjs.com/registry/_all_docs and then parsed the file to get only package names. I then wrote the following script to download packages from npm (which will then be stored in Sinopia):
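A minimal sketch of that script, assuming the parsed package names are in a file called packages.txt (one per line) and Sinopia is listening on its default port, 4873:

#!/bin/bash
# Build the package list once from the skimdb dump (assumes jq is installed):
#   curl -s http://skimdb.npmjs.com/registry/_all_docs | jq -r '.rows[].id' > packages.txt
# Install each package through the Sinopia registry so its tarball gets cached there.
while read -r pkg; do
    npm install "$pkg" --registry http://localhost:4873 ||
        echo "$pkg" >> failed.txt
done < packages.txt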


As of October 2016, when I mirrored PyPI using Bandersnatch, downloading the roughly 380GB took about 1.5 days on a 500 Mbit/s connection. Mirroring the Ubuntu 14.04 and 16.04 repos requires about 300GB each and takes almost a day apiece. Using my bash script, however, I am only getting speeds of about 100MB per hour. Considering that npm is currently 1.2 TB in size, a complete npm mirror would take roughly 500 days at the current download speed (1.2 TB ÷ 100 MB/hour ≈ 12,000 hours). Why is npm so slow compared to the Ubuntu repos and PyPI?

Saturday, October 22, 2016

Adding nodes to OpenStack Mitaka with Packstack

In my initial OpenStack Mitaka deployment on four Fujitsu servers, I used a customized answer file specifying the Neutron OVS bridge interfaces and their slave interfaces. The relevant setting in the answer file is:

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eno1,br-eno2:eno2

However, I want to add two additional compute nodes that are a different make of server, namely IBM. Their interfaces are named differently, so the slave interfaces for br-ex and br-eno2 will be different.

I copied my original answer file and renamed the copy mitaka-ibm.cfg.

Inside this answer file, I simply edited the field

CONFIG_COMPUTE_HOSTS=10.10.10.6,10.10.10.7

and added the mgmt network IPs for the two new compute hosts, removing the IPs of the existing compute nodes.

I then edited

CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:enp1s0,br-eno2:enp2s0

to change the slave interface names for the bridge interfaces on the two IBM servers.

Since I don't want Packstack to overwrite the configuration on the existing OpenStack nodes, I also added their mgmt network IPs, as well as the external network IP used for Horizon, to the following field:

EXCLUDE_SERVERS=10.10.10.2,10.10.10.3,10.10.10.4,10.10.10.5,192.168.4.51

However, the field

CONFIG_CONTROLLER_HOST=192.168.4.51

must still be filled in with the external IP of the control node; otherwise the installation fails while running the Puppet manifest nova.pp.
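With the answer file prepared, the deployment run itself is a single command; a sketch, using the file name from above:

packstack --answer-file=mitaka-ibm.cfg

Packstack then applies its Puppet manifests only to the hosts named in the answer file, skipping everything listed in EXCLUDE_SERVERS.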

References:
https://www.rdoproject.org/install/adding-a-compute-node/

Saturday, October 15, 2016

Boot Information Negotiation Layer (BINL) UDP 4011 must be opened for PXE

I wrote a script that opens ports in the firewalld dynamic firewall so I can run a PXE server, but I neglected to add one port: UDP 4011, used by the Boot Information Negotiation Layer (BINL). Once the kernel and initrd have been sent over TFTP, I use HTTP to serve the installation files from a mounted ISO.

The ports I open are as follows:

UDP 69 (TFTP)
UDP 4011 (BINL)
UDP 67, 68 (DHCP)
TCP 5500 (VNC)

In the script I don't pass all the port numbers explicitly; some can simply be passed as service names to firewall-cmd, which figures out which ports to open, as in the sketch below.
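A minimal sketch of the equivalent firewall-cmd calls, assuming the default zone (tftp and dhcp are predefined firewalld service names, so their port numbers don't need to be spelled out):

firewall-cmd --permanent --add-service=tftp    # UDP 69
firewall-cmd --permanent --add-service=dhcp    # UDP 67, 68
firewall-cmd --permanent --add-port=4011/udp   # BINL
firewall-cmd --permanent --add-port=5500/tcp   # VNC
firewall-cmd --reload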



References:

http://www.configmgr.no/2012/03/21/ports-used-by-pxeosd/

Saturday, October 8, 2016

Install Archlinux over UEFI PXE from an existing PXE server

When installing Linux over UEFI PXE, a GRUB2 grub.cfg serves as the PXE boot menu. Archlinux has some idiosyncratic PXE options, which I will detail in this post.

As of Oct. 2016, the Archlinux wiki has a sample Legacy BIOS PXE menu entry but this format cannot be used verbatim for UEFI PXE.

First, download the latest Archlinux installation iso from the following link:

Mount the iso and note the following paths in the mounted image:

/arch/boot/x86_64
contains the Linux kernel and initrd image for 64-bit
  • kernel: vmlinuz
  • initrd: archiso.img
/arch/boot
contains the Intel microcode and memtest images
  • intel_ucode.img
  • memtest
In most other Linux distros, the initrd image on installation ISOs is named initrd.img, but Archlinux uses archiso.img.
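To stage these files on the PXE server, copy them into the paths referenced by the grub.cfg shown further down; a sketch, assuming the 2016.09.03 ISO file name and a TFTP root of /var/lib/tftpboot:

mount -o loop archlinux-2016.09.03-dual.iso /mnt
mkdir -p /var/lib/tftpboot/images/archlinux
cp /mnt/arch/boot/x86_64/vmlinuz     /var/lib/tftpboot/images/archlinux/
cp /mnt/arch/boot/x86_64/archiso.img /var/lib/tftpboot/images/archlinux/
cp /mnt/arch/boot/intel_ucode.img    /var/lib/tftpboot/images/archlinux/
# Serve the mounted ISO root over HTTP so the installer can fetch the rest:
cd /mnt && python -m SimpleHTTPServer 8080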

The Archlinux-specific kernel boot parameters for PXE are as follows:
  • archisobasedir=arch Specifies the root directory of the installation ISO
  • archiso_http_srv=http://ip.ad.d.r:port/ Specifies the installation file location over HTTP (you can also use nbd and nfs instead of http)
  • ip=:::::eth0:dhcp Tells the Arch kernel to bring up the network iface (on the machine being installed) and get an IP address via DHCP. For predictability, the network iface in the Arch install environment is always named eth0
Keep in mind that eth0 is just a temporary name for your wired iface during installation. Once installation is complete and you exit the Arch install environment and reboot, the wired interface will come up with a systemd-style predictable device name.

My grub.cfg for UEFI PXE can be seen here:

The menu entry in grub.cfg for Archlinux UEFI should look like this if you are installing over HTTP:

menuentry 'Archlinux iso 2016.09.03' --class arch --class gnu-linux --class gnu --class os {
        echo 'Loading kernel vmlinuz from installation ISO...'
        linuxefi images/archlinux/vmlinuz archisobasedir=arch archiso_http_srv=http://192.168.95.97:8080/ ip=:::::eth0:dhcp
        echo 'Loading initial ramdisk ...'
        initrdefi images/archlinux/intel_ucode.img images/archlinux/archiso.img
}

For information on how to set up a PXE server that can install to both UEFI and Legacy BIOS machines, refer to my previous post on this topic:


If you don't have UEFI hardware lying around for testing, you can use KVM with OVMF Tianocore UEFI. I detail how to get started in the following post: