According to IETF RFC 4578, Dynamic Host Configuration Protocol (DHCP) Options for the Intel Preboot eXecution Environment (PXE), the client system architecture type is denoted by a code in option 93. The RFC lists the following 10 types:
Type   Architecture Name
----   -----------------
  0    Intel x86PC
  1    NEC/PC98
  2    EFI Itanium
  3    DEC Alpha
  4    Arc x86
  5    Intel Lean Client
  6    EFI IA32
  7    EFI BC
  8    EFI Xscale
  9    EFI x86-64
This option MUST be present in all DHCP and PXE packets sent by PXE-compliant clients and servers.
For my PXE server setup, I use dnsmasq. In its man page and the default /etc/dnsmasq.conf, however, the architecture codes are a bit different:
# Test for the architecture of a netboot client. PXE clients are
# supposed to send their architecture as option 93. (See RFC 4578)
#dhcp-match=peecees, option:client-arch, 0 #x86-32
#dhcp-match=itanics, option:client-arch, 2 #IA64
#dhcp-match=hammers, option:client-arch, 6 #x86-64
#dhcp-match=mactels, option:client-arch, 7 #EFI x86-64
RFC 4578 lists x86-64 EFI architecture as code '9' but most information I've found on the Internet about dhcpd and dnsmasq configs suggests that code '7' for option 93 denotes EFI x86-64...
Within the conf file for dnsmasq it is possible to make dhcp-boot send a specific boot image depending on whether the client architecture is detected as x86 BIOS or x64 EFI as follows:
dhcp-match=bios, option:client-arch, 0  #BIOS x86
dhcp-match=efi32, option:client-arch, 6 #IA32-EFI
dhcp-match=efi64, option:client-arch, 7 #EFI x86-64
...
# Load different PXE boot image depending on 'dhcp-match' tag
pxe-service=tag:bios, x86PC, "Install Linux on x86 legacy BIOS", pxelinux.0
pxe-service=tag:efi32, IA32_EFI, "Install Linux on IA32-EFI", bootx32.efi
pxe-service=tag:efi64, X86-64_EFI, "Install Linux on x86-64 UEFI", bootx64.efi
The client system types (i.e. x86PC, IA32_EFI, X86-64_EFI above) in pxe-service are defined in the dnsmasq man page (and differ from some dnsmasq config examples I've found on the Internet). They are as follows:
The known types are x86PC, PC98, IA64_EFI, Alpha, Arc_x86, Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI; an integer may be used for other types.
The boot images above come from the syslinux project. Install this package on your Linux machine and then navigate to the directories for the bios, efi32, and efi64 boot images. In the case of Archlinux, the syslinux path is /usr/lib/syslinux and under this path are the architecture-specific directories .../bios, .../efi32, .../efi64
pxelinux.0 can be found under the bios directory, but the EFI boot images for 32 and 64-bit architectures are different. They can be found under the efi32 and efi64 directories, respectively, but have the same name, syslinux.efi. Along with the boot image for BIOS architectures, copy the EFI boot images to the tftp-root path for your PXE server and rename them appropriately as bootx32.efi and bootx64.efi.
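On Archlinux, the copy-and-rename step looks roughly like this (the tftp root /srv/tftp is only an example; substitute your own):

```shell
# Paths assume Archlinux's syslinux package and a tftp root of /srv/tftp
cp /usr/lib/syslinux/bios/pxelinux.0    /srv/tftp/pxelinux.0
cp /usr/lib/syslinux/efi32/syslinux.efi /srv/tftp/bootx32.efi
cp /usr/lib/syslinux/efi64/syslinux.efi /srv/tftp/bootx64.efi
```

Note that recent syslinux versions also expect supporting modules (e.g. ldlinux.c32 for BIOS, ldlinux.e32/ldlinux.e64 for EFI) to be copied from the same architecture directory as the boot image.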
Docker containers generally don't come with Xorg installed because of the unnecessary bloat and the fact that most containers run applications that don't need X windows. However, it is possible for docker containers to use the host machine's X11!
Jessica Frazelle from Docker has lots of images on Dockerhub that support running GUI applications from a container. For testing docker + X11, I used the image jess/gparted.
In the container's dockerfile, which is on github, the command for launching the container is as follows:
docker run -v /tmp/.X11-unix:/tmp/.X11-unix \
    --device=/dev/sda:/dev/sda \
    -e DISPLAY=unix$DISPLAY gparted
For some reason the above invocation kept giving me the error:
Unable to find image 'gparted:latest' locally
Pulling repository docker.io/library/gparted
Error: image library/gparted:latest not found
I was able to launch the gparted container with the following commands:
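The original snippet did not survive the copy-paste, but reconstructing it from the notes that follow (the --rm flag, both of my drives passed through, and the fully qualified image name with its tag), the working invocation was along these lines:

```shell
docker run --rm \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY \
    --device=/dev/sda:/dev/sda \
    --device=/dev/sdb:/dev/sdb \
    jess/gparted:latest
```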
Note that I have included the --rm flag so that the container will be deleted on exit instead of hanging around under /var/...
The order of the arguments is also different (which is probably unimportant). You will also notice that I specified two devices (/dev/sda and /dev/sdb) because my laptop has two drives. The crucial difference is that I explicitly specified the docker container name including the tag 'latest'.
For some reason, if I don't specify the tag, docker complains that it can't find the docker image.
After changing the IP address of an Openstack Kilo installation (and editing all the service endpoints manually), trying to add an Openstack Cloud Provider in ManageIQ (miq) / Cloudforms will return a 503 Service Unavailable error.
Solution:
I will introduce the solution in two parts -- first I will explain how to change the service endpoints in Openstack Kilo from an old to new IP. Second, I will illustrate the deletion of non-existent endpoints from the Keystone database in mariadb which is ultimately causing the 503 error in miq.
Change IP for Existing Openstack Kilo Installation
1. Replace all occurrences of old_ip with new_ip in Openstack config files
My Openstack installation was done with Packstack All-in-one on a physical Fedora 21 server. All the Openstack configuration files can be found under /etc in subfolders named after component services like keystone, neutron, nova, cinder, etc. To find all the Openstack config files containing old_ip (which in my case was 192.168.40.198), I changed directories to /etc and ran the following:
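The command itself was lost in formatting, but given the grep flags explained further down, it would have been essentially:

```shell
cd /etc
grep -rl '192.168.40.198' .
```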
You can replace 192.168.40.198 with 192.168.10.168 as follows:
sed -i "s/192.168.40.198/192.168.10.168/g" filename
Note that the old IP address will appear in lots of non-config files as well, such as in various Openstack logs under /var/log/. You do not need to change the IP's in these log files!
In a post by Brad Pokorny from Symantec about changing Openstack service IP's, he pipes grep to xargs and sed to replace IP's in conf files:
$ grep -rl '[old IP address]' /etc | xargs sed -i 's/[old IP address]/[new IP address]/g'
grep's -r option searches recursively (-R does the same, but unlike -r it also follows all symlinks) in the search path
grep's -l option suppresses normal output to stdout and simply prints the name of each file that matches the search pattern.
After editing the IP in all the Openstack and system config files, you are still not done; the old IP's are also saved as service endpoints within the Keystone database in mariadb.
2. Replace all occurrences of old_ip with new_ip in Keystone DB
You can find your mariadb login credentials in your answer file under $HOME (which should have been automatically generated after running the packstack or devstack installation script). You can also find the db login credentials within /etc/keystone/keystone.conf
Log into mariadb:
[fedjun@fedrana ~]$ mysql -ukeystone_admin -p123456789abc
(make sure there isn't a space after the -p flag)
MariaDB [(none)]> use keystone
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

MariaDB [keystone]> show tables;
+------------------------+
| Tables_in_keystone     |
+------------------------+
| access_token           |
| assignment             |
| consumer               |
| credential             |
| domain                 |
| endpoint               |
| endpoint_group         |
| federation_protocol    |
| group                  |
| id_mapping             |
| identity_provider      |
| idp_remote_ids         |
| mapping                |
| migrate_version        |
| policy                 |
| policy_association     |
| project                |
| project_endpoint       |
| project_endpoint_group |
| region                 |
| request_token          |
| revocation_event       |
| role                   |
| sensitive_config       |
| service                |
| service_provider       |
| token                  |
| trust                  |
| trust_role             |
| user                   |
| user_group_membership  |
| whitelisted_config     |
+------------------------+
32 rows in set (0.00 sec)

MariaDB [keystone]> show columns from endpoint;
+--------------------+--------------+------+-----+---------+-------+
| Field              | Type         | Null | Key | Default | Extra |
+--------------------+--------------+------+-----+---------+-------+
| id                 | varchar(64)  | NO   | PRI | NULL    |       |
| legacy_endpoint_id | varchar(64)  | YES  |     | NULL    |       |
| interface          | varchar(8)   | NO   |     | NULL    |       |
| service_id         | varchar(64)  | NO   | MUL | NULL    |       |
| url                | text         | NO   |     | NULL    |       |
| extra              | text         | YES  |     | NULL    |       |
| enabled            | tinyint(1)   | NO   |     | 1       |       |
| region_id          | varchar(255) | YES  | MUL | NULL    |       |
+--------------------+--------------+------+-----+---------+-------+
8 rows in set (0.00 sec)

MariaDB [keystone]> select * from endpoint;
| id | legacy_endpoint_id | interface | service_id | url | extra | enabled | region_id |
| 0824196fa17c446ca255e2978b6541f0 | 27e475394a484791951b539e3a0f933e | internal | 3c15bfa190cf4ea4af059978a71615dd | http://192.168.40.198:8080/v1/AUTH_%(tenant_id)s | {} | 1 | RegionOne |
| 0d871e7d98124e78a452e192636386d7 | 843a20e3a0b84895ba10425f5631580c | admin | c58ff58e9b2249798402afffe880acd3 | http://127.0.0.1:8774/v3 | {} | 1 | RegionOne |
| 2235601f13404931858d15d919bc0c5f | e7149dd44359447fbbd85d0bb502694c | admin | a2cd703896714905afc15ab7e909902d | http://192.168.40.198:9292 | {} | 1 | RegionOne |
| 2c5edf0ae12a47448a92259d0db26ce7 | df679e86729d48fda2d9be9f5b960171 | admin | 8fcfa027b51c4e40aeee09244d6400b7 | http://192.168.40.198:8773/services/Admin | {} | 1 | RegionOne |
| 2e2cbf33223f4c79b88442e2df3f7b6b | ad6f8e52ff1f4708b64dee23098021bd | public | ed420763fd2a4ff3b9a858557bb4b5e0 | http://192.168.40.198:8774/v2/%(tenant_id)s | {} | 1 | RegionOne |
| 2ebacc0d9f0a47b79a8d003a2097756c | 45a8c7dda03344e58f4921aa2bbad62e | admin | 33436af568cf4236936f4b6645a676d4 | http://192.168.40.198:8776/v1/%(tenant_id)s | {} | 1 | RegionOne |
...
33 rows in set (0.01 sec)
Now replace all occurrences of old_ip with new_ip:
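The statement I ran is not shown above, but MariaDB's built-in REPLACE() string function can rewrite every endpoint URL in one pass (using the same example IPs as earlier):

```
MariaDB [keystone]> UPDATE endpoint SET url = REPLACE(url, '192.168.40.198', '192.168.10.168');
```

Then restart Keystone so the change takes effect: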
[fedjun@fedrana ~]$ sudo systemctl restart openstack-keystone
[fedjun@fedrana ~]$ systemctl status openstack-keystone
● openstack-keystone.service - OpenStack Identity Service (code-named Keystone)
   Loaded: loaded (/usr/lib/systemd/system/openstack-keystone.service; enabled)
   Active: active (running) since Thu 2016-01-14 13:58:13 KST; 9s ago
 Main PID: 13518 (keystone-all)
   CGroup: /system.slice/openstack-keystone.service
           ├─13518 /usr/bin/python /usr/bin/keystone-all
           ├─13527 /usr/bin/python /usr/bin/keystone-all
           ├─13528 /usr/bin/python /usr/bin/keystone-all
           ├─13529 /usr/bin/python /usr/bin/keystone-all
           ├─13530 /usr/bin/python /usr/bin/keystone-all
           ├─13531 /usr/bin/python /usr/bin/keystone-all
           └─13532 /usr/bin/python /usr/bin/keystone-all
Horizon UI should now appear and work with your new IP. Note that you will also have to edit your external network (and floating IP ranges) and change the gateway in your virtual router in Neutron, as they will still be using the old IP.
Remove unused endpoints from Keystone DB to solve 503 error in miq
My packstack install of Openstack Kilo does not include Swift Object Storage, but when adding an Openstack Cloud Provider to miq, one of the logs within the Cloudforms / miq appliance indicated that no endpoint for Swift on port 8080 could be found. When I navigated to the Openstack IP on port 8080 in a browser, the request failed.
By contrast, navigating by browser to any other port corresponding to an Openstack service endpoint returns JSON or XML. To enable verbose debugging in miq / Cloudforms, follow the instructions from the link below:
In the ManageIQ UI, go to Configure → Configuration, select the appliance, and go to the Advanced tab. Find the following line and make the change below.
level_fog: info
to
level_fog: debug
Once this is done, wait a few minutes for the changes to take effect in miq / Cloudforms and then log into your miq appliance and navigate to /var/www/miq/vmdb/log/fog.log
Note that the console/SSH login for the miq appliance is root:smartvm, whereas the default user:pass for the miq web interface is admin:smartvm.
Looking through fog.log, I noticed that the 503 error only occurred when a request was made on port 8080.
Port 8080 is the alternate HTTP port for the OpenStack Object Storage (swift) service. The problem is that I never installed Swift in my Openstack Kilo deployment, yet the Keystone DB tells miq that the endpoint for swift is at :8080. Since this endpoint doesn't exist, miq fails when trying to add the Openstack Cloud Provider. The weird thing is, before I changed my Openstack IP, I could create an Openstack Cloud Provider in miq without any problems.
The solution for the problem of non-existent endpoints is to go into the Keystone DB and delete all rows containing port 8080 (for swift) from the endpoint table:
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
| test               |
+--------------------+
3 rows in set (0.05 sec)
MariaDB [(none)]> use keystone
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
I can see a variety of endpoints above, but just want to get rid of all the rows containing port 8080. In mysql/mariadb it is possible to match text patterns as below:
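The original snippet was lost, but a LIKE pattern on the url column is enough here; the statement below deletes every endpoint row whose URL contains port 8080:

```
MariaDB [keystone]> DELETE FROM endpoint WHERE url LIKE '%:8080%';
```

After this, retry adding the Openstack Cloud Provider in miq.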
Recently at work I've been working with Redhat Cloudforms 4 (based on the upstream open-source project ManageIQ). A client has requested some customizations of Cloudforms which require calling some info through the CFME/ManageIQ REST API.
Redhat Korea gave my company an API reference guide for Cloudforms v3 but it is quite incomplete. It is much better to just refer to the upstream manageiq docs on github:
Here are some examples of communicating with the v2.2 REST API.
Creating an openstack cloud provider using cURL:
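The embedded snippet is no longer available, so the request below is a reconstruction; the hostname, credentials, and payload values are placeholders, and the provider "type" string shown here is the short form used by older ManageIQ releases (newer releases use the longer ManageIQ::Providers::... form):

```shell
# -k disables certificate verification (the appliance uses a self-signed cert);
# admin:smartvm is the default miq web login. Adjust all values to your setup.
curl -k -u admin:smartvm \
  -X POST -H "Content-Type: application/json" \
  -d '{"type": "EmsOpenstack",
       "name": "openstack-kilo",
       "hostname": "192.168.10.168",
       "ipaddress": "192.168.10.168",
       "credentials": {"userid": "admin", "password": "secret"}}' \
  https://miq.example.com/api/providers
```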
Creating an openstack cloud provider using python 3:
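The original script is gone, so here is a minimal sketch of the same idea; the appliance URL and OpenStack credentials are placeholders, and the "type" string is the short form used by older ManageIQ releases:

```python
import json


def build_provider_payload(name, host, userid, password):
    """Assemble the JSON body for POST /api/providers.

    "EmsOpenstack" is the short provider type from older ManageIQ releases;
    newer ones use "ManageIQ::Providers::Openstack::CloudManager" instead.
    """
    return {
        "type": "EmsOpenstack",
        "name": name,
        "hostname": host,
        "ipaddress": host,
        "credentials": {"userid": userid, "password": password},
    }


def create_provider(payload, miq_url="https://miq.example.com/api",
                    auth=("admin", "smartvm")):
    """POST the payload to the providers collection of a live appliance."""
    import requests  # third-party: pip install requests
    # verify=False because the appliance presents a self-signed certificate
    resp = requests.post(miq_url + "/providers", auth=auth,
                         data=json.dumps(payload), verify=False)
    resp.raise_for_status()
    return resp.json()


# Example (requires a live appliance):
#   body = build_provider_payload("openstack-kilo", "192.168.10.168",
#                                 "admin", "secret")
#   create_provider(body)
```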
Querying a list of cloud providers (and get detailed info on each provider) using python 3:
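The original script was lost, so this is a sketch of the approach; it assumes the standard shape of the ManageIQ collection response (a "resources" list of hrefs), with placeholder appliance URL and credentials:

```python
def parse_provider_refs(collection_json):
    """Pull the href of each provider out of a GET /api/providers response.

    The collection endpoint returns {"resources": [{"href": ...}, ...]};
    each href can then be fetched individually for full details.
    """
    return [r["href"] for r in collection_json.get("resources", [])]


def list_providers_with_details(miq_url="https://miq.example.com/api",
                                auth=("admin", "smartvm")):
    """Fetch the provider collection, then each provider's detail record."""
    import requests  # third-party: pip install requests
    # verify=False: the appliance presents a self-signed certificate
    top = requests.get(miq_url + "/providers", auth=auth, verify=False).json()
    return [requests.get(href, auth=auth, verify=False).json()
            for href in parse_provider_refs(top)]
```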
Note that when using both cURL and python requests, SSL verification must be turned OFF (using the -k flag in cURL, and verify=False in requests) because the Cloudforms instance uses a self-signed certificate that is not signed by a known Certificate Authority.
Android Debug Bridge (adb) is used to communicate with an Android phone connected by USB cable. It is a native Linux program, although there are ways to run it in Windows. Without some setup, however, regular users cannot run adb and must use sudo. This is bad practice and can pose a security risk.
In July 2015, I submitted an answer to a question on StackOverflow regarding adb permissions. The accepted answer suggests setting the SUID bit (4XXX in octal) on the adb binary in /usr/bin, but this is basically the same thing as sudo adb, because the SUID bit gives all users who run the file the same privileges as the file's owner (root owns the adb binary).
The answer I suggested uses Access Control Lists (ACL) to add the current user to the list of users allowed to run /usr/bin/adb using setfacl. This workaround was valid in July 2015, but stopped working in Archlinux several months later. Now the canonical way to allow the regular user to invoke adb is to make sure a udev rule for your Android device exists.
I use the Archlinux package android-udev which includes the USB ID's for most Android smartphones. If your device doesn't exist in the rules file, you can simply add its USB ID to /usr/lib/udev/rules.d/51-android.rules and send a pull request to the upstream repo of android-udev.
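For reference, an entry in 51-android.rules looks like the line below (18d1 is Google's USB vendor ID; the group name depends on the distro/package — android-udev uses adbusers):

```
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0660", GROUP="adbusers"
```

Remember to add your user to that group (e.g. gpasswd -a $USER adbusers) and re-login for the rule to take effect.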