
30 Dec 16 pygrub: Unable to find partition containing kernel

Introduction:
Lately, after I upgraded many packages in a Xen 4.4 DOMU VM, pygrub could not boot the VM any more.
During the security update, the installed grub2 (grub-pc), which had never caused any problems with pygrub before, got updated, and suddenly the VM would no longer boot. Here is the error message I got when trying to boot it:
Parsing config from /etc/xen/VM.cfg
libxl: error: libxl_bootloader.c:628:bootloader_finished: bootloader failed - consult logfile /var/log/xen/bootloader.32.log
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: bootloader [-1] exited with error status 1
libxl: error: libxl_create.c:1024:domcreate_rebuild_done: cannot (re-)build domain: -3
libxl: error: libxl_dom.c:35:libxl__domain_type: unable to get domain type for domid=32
Unable to attach console
libxl: error: libxl_exec.c:118:libxl_report_child_exitstatus: console child [0] exited with error status 1

I have another VM with the same Debian system in it which boots fine. After comparing the grub.conf etc. of the two, I could not see any differences.
If I launch pygrub with the disk image of the VM as argument, I am normally presented with the Grub menu before it exits with the usual errors. This time I got no menu at all, only the following error message:
/usr/lib/xen-4.4/bin/pygrub /virtual/xen/VM/disk.img
Traceback (most recent call last):
File "/usr/lib/xen-4.4/bin/pygrub", line 839, in
raise RuntimeError, "Unable to find partition containing kernel"
RuntimeError: Unable to find partition containing kernel

After Googling a bit I found the site below, which talks about this problem as well, although with an LVM volume instead of a file disk image. The principle is the same:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=745419
In summary: if pygrub sees something other than zeroes in the first 512 bytes of the disk image, it returns with this error: ‘Unable to find partition containing kernel’
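You can quickly check whether the boot sector of the image really contains non-zero data, for example with hexdump (using the same image path as above):
hexdump -C -n 512 /virtual/xen/VM/disk.img
An all-zero sector shows a single line of zero bytes followed by a ‘*’; anything else means a boot loader was written there and pygrub will refuse to work.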

Cause:
During the upgrade of grub-pc the package script asked me to specify the boot sector where grub should be installed, and I happened to select the proposed one ‘/dev/xvda2’, which was a mistake.

Preventive solution:
I should have left the image partition untouched, continued the upgrade of grub-pc without grub being written to the boot sector, and afterwards run the command:
update-grub
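If grub-pc has already been configured with a boot device, one way to revisit that choice later is to reconfigure the package and deselect all devices, so that nothing gets written to any boot sector:
dpkg-reconfigure grub-pc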

Present Solution:
Overwrite the boot sector(512 bytes) of the image file with zeros.

Command:
dd conv=notrunc if=/dev/zero of=/virtual/xen/domains/VM/disk.img bs=512 count=1
Note: I use the option conv=notrunc to make sure the output file does not get truncated to 512 bytes by the overwrite.
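To verify afterwards that the first 512 bytes are now all zeroes, GNU cmp can be used, for example:
cmp -n 512 /dev/zero /virtual/xen/domains/VM/disk.img && echo "Boot sector is clean"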

Result:
The VM then booted normally again.

03 Dec 16 ‘init: plymouth-upstart-bridge main process ended, respawning’ error messages at boot.

I installed a new Ubuntu 14.04 as a Xen server and found that during boot the following messages repeated themselves many times,
[ 2.811553] init: plymouth-upstart-bridge main process (191) terminated with status 1
[ 2.812789] init: plymouth-upstart-bridge main process ended, respawning
[ 2.874117] init: plymouth-upstart-bridge main process (210) terminated with status 1
[ 2.875167] init: plymouth-upstart-bridge main process ended, respawning
[ 2.904155] init: plymouth-upstart-bridge main process (217) terminated with status 1
[ 2.905289] init: plymouth-upstart-bridge main process ended, respawning
[ 2.928618] init: plymouth-upstart-bridge main process (221) terminated with status 1
[ 2.929713] init: plymouth-upstart-bridge main process ended, respawning
[ 49.975826] Adding 2093052k swap on /dev/mapper/[...]

and the boot then stalled, only resuming normally at least 10-15 seconds later.
To eliminate those messages I searched the net and found the site below, which offers a very good and simple solution.
http://www.unrelatedshit.com/2014/07/30/kvm-too-fast-for-plymouth-upstart-bridge/

Solution:
The solution is quite simple: add a sleep 2 to your /etc/init/plymouth-upstart-bridge.conf file.
Example:
[...]
stop on (stopping plymouth
or stopping plymouth-shutdown)
console output
exec plymouth-upstart-bridge
sleep 2

Reboot and watch the boot… no more silly error messages from plymouth-upstart-bridge.
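Since the messages above carry kernel timestamps, a simple way to confirm after the reboot that the respawn loop is really gone is, for example:
dmesg | grep plymouth-upstart-bridge
It should no longer show the repeated ‘main process ended, respawning’ lines.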

29 Nov 16 Installing Xen Hypervisor 4.8 on Debian Jessie

Introduction:
I was looking for a way to install Xen 4.8 on Jessie because on some of the newest Intel processors (the Skylake series) the default Xen hypervisor version on Jessie (4.4) results in endless boot loops.
NOTE: If you already had Xen 4.4 installed, no worries: version 4.4 will not be uninstalled, but the new version 4.8 will be the only one active.

Howto:
This short howto is based on the link below, which is also recommended by the provider Hetzner in Germany.
http://unix.stackexchange.com/questions/261029/install-xen-4-6-on-debian-jessie

Steps:
You have to pin stretch and stretch-updates to 499, jessie and jessie-updates to 500, then install xen-hypervisor-4.8-amd64 manually from stretch:

cat <<EOF | sudo tee /etc/apt/preferences.d/stretch-manual-only
Package: *
Pin: release n=jessie-updates
Pin-Priority: 500
#
Package: *
Pin: release n=jessie
Pin-Priority: 500
#
Package: *
Pin: release n=stretch-updates
Pin-Priority: 499
#
Package: *
Pin: release n=stretch
Pin-Priority: 499
EOF

Create a sources list for stretch:
sed -e 's/ \(stable\|jessie\)/ stretch/ig' /etc/apt/sources.list > /etc/apt/sources.list.d/debian-stretch.list
aptitude update
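Before installing anything you can confirm that the pinning is in effect, for example with:
apt-cache policy xen-hypervisor-4.8-amd64
The stretch candidate should be listed with priority 499 and the jessie packages with 500.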

Those are the needed packages for Xen 4.8:
aptitude install xen-utils-common/stretch xen-utils-4.8/stretch xen-tools xen-hypervisor-4.8-amd64/stretch libncurses5/stretch libncursesw5/stretch libtinfo5/stretch

A possible output of the command (yours may differ):
The following packages will be upgraded:
libxen-4.8 xen-hypervisor-4.8-amd64 xen-utils-4.8 xen-utils-common
Do you want to continue? [Y/n/?]

Answer ‘Y’ to this one as well.
Make sure all the packages are now up-to-date:
aptitude -y dist-upgrade
Continue with changing the boot order in grub:
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
update-grub

Check grub menu entries in order with:
grep -i "menuentry '" /boot/grub/grub.cfg|sed -r "s|--class .*$||g"|nl -v 0
Now the first line should be
0 menuentry 'Debian GNU/Linux, with Xen hypervisor'
Reboot and have fun ;-)

13 Nov 16 Ubuntu 16.10 : xenconsole: Could not read tty from store: Success

Introduction:

After having had some stability problems running Xen DOMUs under Ubuntu 16.04/Xen 4.6, I decided to upgrade to Ubuntu 16.10/Xen 4.7.
Unfortunately, whenever I tried to start one of the DOMUs with the option -c to see the console output, the following error message was displayed, I got kicked out, and there was no console:
xenconsole: Could not read tty from store: Success
I searched the Internet for hours for a solution. This morning I found an article about a much earlier Xen version with exactly the same problem.

Cause:

The daemon xenconsoled was not running. Starting this daemon beforehand solved the issue, which had gotten me into real trouble, with clients screaming about the long downtime of the servers. For some reason the DOMUs just hung as well.

Solution:

Start the daemon with the command:
/usr/lib/xen-4.7/bin/xenconsoled --pid-file=/var/run/xenconsoled.pid
Note:
You can make sure that this daemon will start automatically by using one of the following 2 methods:
Start the daemon using the @reboot cron job as follows:
crontab -e
Content:
@reboot /bin/sleep 15; /usr/lib/xen-4.7/bin/xenconsoled
OR
Start the daemon using the systemd method.
touch /etc/systemd/system/xenconsoled.service
vim /etc/systemd/system/xenconsoled.service

Content of xenconsoled.service
[Unit]
Description=Xen Console Daemon service
[Service]
Type=forking
ExecStart=/usr/lib/xen-4.7/bin/xenconsoled --pid-file=/var/run/xenconsoled.pid
ExecStop=/usr/bin/killall xenconsoled
Restart=on-failure
RestartSec=3
[Install]
WantedBy=default.target

Execute these commands to register the service to start at boot, and start it now manually.
systemctl enable xenconsoled
systemctl daemon-reload
service xenconsoled start
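You can then verify that the daemon is running and that the console is reachable again, for example (replace VM with one of your DOMU names):
systemctl status xenconsoled
pgrep -a xenconsoled
xl console VM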

05 Feb 16 Creating a new Xen Debian virtual machine from scratch

Introduction:

In this tutorial a new virtual machine based on the Debian Jessie distribution will be created from scratch with minimal components.
Assumption: The Xen hypervisor should already be installed and running on the main system (DOM0).

Creating the Xen Virtual Machine

This virtual machine will be created with the xen-tools package, which bootstraps the creation of the VM.
Bootstrapping:
mkdir -p /virtual/xen/
cd /virtual/xen/
xen-create-image --dir=. --dist=jessie --hostname=mail.myserver.com --size=10Gb --swap=2048Mb --ip=87.176.10.167 --gateway=87.176.10.254 --netmask=255.255.255.0 --memory=4096Mb --arch=amd64 --role=udev

Install the kernel and pyGrub
– Put the produced disk.img and swap.img in the proper path,
e.g. in /virtual/xen/MAIL/
Mount the disk image in loop
mkdir /mnt/MAIL
mount /virtual/xen/MAIL/disk.img /mnt/MAIL -o loop,rw

Mount /sys, /proc, /dev and chroot to it
mount /proc /mnt/MAIL/proc -o bind
mount /sys /mnt/MAIL/sys -o bind
mount /dev /mnt/MAIL/dev -o bind
chroot /mnt/MAIL

Install grub-legacy in the VM
apt-get update
apt-get install grub-legacy linux-image-3.2.0-4-amd64 mc
mkdir /boot/grub
mcedit /boot/grub/menu.lst
CONTENT:
#----------------
default 0
timeout 2
#
title Debian GNU/Linux
root (hd0,0)
kernel /vmlinuz root=/dev/xvda1 ro
initrd /initrd.img
#
title Debian GNU/Linux (recovery mode)
root (hd0,0)
kernel /vmlinuz root=/dev/xvda1 ro single
initrd /initrd.img
#-------------

Leave chroot and unmount all.
exit
umount /mnt/MAIL/dev
umount /mnt/MAIL/sys
umount /mnt/MAIL/proc
umount /mnt/MAIL/

Adjust the VM xen configuration(/etc/xen/mail.server.com.cfg) as follows:
Replace the older kernel and initrd lines in the Xen DOMu configuration file as follows:
Example:
REPLACE:
kernel = '/boot/vmlinuz-2.6.32-5-xen-amd64'
ramdisk = '/boot/initrd.img-2.6.32-5-xen-amd64'

WITH:
For Debian squeeze hypervisor:
bootloader = '/usr/lib/xen-default/bin/pygrub'
For Debian wheezy hypervisor:
bootloader = '/usr/lib/xen-4.1/bin/pygrub'
For Debian jessie hypervisor:
bootloader = '/usr/lib/xen-4.4/bin/pygrub'

Adjust the paths of the disks properly:
Example:
disk = [
'file:/virtual/xen/MAIL/disk.img,xvda2,w',
'file:/virtual/xen/MAIL/disk.swp,xvda1,w',
]

Test the pyGRUB configuration with the VM disk
Note: A GRUB menu should appear for a few seconds and then disappear with an error message. Ignore the error message; what matters is that the Grub menu appears.
For Debian squeeze hypervisor:
/usr/lib/xen-default/bin/pygrub /virtual/xen/MAIL/disk.img
For Debian wheezy hypervisor:
/usr/lib/xen-4.1/bin/pygrub /virtual/xen/MAIL/disk.img
For Debian jessie hypervisor:
/usr/lib/xen-4.4/bin/pygrub /virtual/xen/MAIL/disk.img

Start the VM
The Grub menu should appear and start booting.
xm create /etc/xen/mail.server.com.cfg -c
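Note: On systems where the newer xl toolstack is used instead of xm, the equivalent command would be:
xl create /etc/xen/mail.server.com.cfg -c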

Important note: Normally, after such a bootstrap of a new Xen VM, the VM uses the hypervisor kernel when booting. This means each VM is not capable of updating its kernel independently. The method above makes the VM fully independent of the hypervisor kernel and gives it its own kernel. The only disadvantage I see is that with some kernel updates the /boot/grub/menu.lst file gets automatically replaced during the kernel upgrade; you then NEED to recover the previous /boot/grub/menu.lst, which is normally saved under /boot/grub/menu.lst~, before you reboot the VM. In case you forgot, simply mount the VM image in loop as explained above and replace the file as needed. You should then be able to boot the VM afterwards.
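A short sketch of that recovery, using the same image path and mount point as above:
mount /virtual/xen/MAIL/disk.img /mnt/MAIL -o loop,rw
cp /mnt/MAIL/boot/grub/menu.lst~ /mnt/MAIL/boot/grub/menu.lst
umount /mnt/MAIL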

03 Feb 16 Installing Xen 4.4 on Ubuntu Server 14.04 LTS (Trusty)

Introduction:

This HowTo assumes that Internet access from the VMs via DOM0, as well as the private LAN, is done using the bridge method. In previous Xen versions the bridges were built dynamically via the Xen scripts; in this version the bridges are built permanently when DOM0 boots up.
DOM0:xenbr0(eth0) ---bridging==>> DOMUs:eth0
DOM0:pdummy0(dummy0) ---bridging==>> DOMUs:eth1

IMPORTANT: If you are installing Xen on a Hetzner (Germany) dedicated server and use only the available (max. 3) single extra IPs for the DOMUs, then you need to generate a MAC address for each DOMU IP on the Hetzner Robot site of your server and use that MAC address in your DOMU Xen configuration. If you are using a subnet of 8 IPs or more on a Hetzner server for the DOMUs, this bridging method will not work. Follow the instructions shown here instead: https://wp.me/pKZRY-F9

Install Xen Hypervisor and some useful tools
apt-get install xen-hypervisor-4.4-amd64 xen-utils-4.4 bridge-utils ethtool iptables

Some extra preparations

Since every virtual disk needs to be mounted using a loop device, we need to make sure there are enough of them available in the system.
Edit the file /etc/modules and add:
loop max_loop=64
dummy

We also need to turn on IPv4 forwarding in the kernel.
Edit the file /etc/sysctl.conf (around line 44) and activate the line by removing the ‘#’ as follows:
net.ipv4.ip_forward=1
Then run the following command to activate it:
sysctl -p /etc/sysctl.conf

CONFIGURING THE NETWORK in DOM0

Based on the IP assumptions above, here is the content of the file /etc/network/interfaces.
# Internet Access interface
auto xenbr0
iface xenbr0 inet static
address 85.114.145.5
netmask 255.255.255.0
network 85.114.145.0
broadcast 85.114.145.255
gateway 85.114.145.1
bridge_ports eth0
#
auto eth0
iface eth0 inet manual
#
# Internal LAN between VMs and DOM0
auto pdummy0
iface pdummy0 inet static
address 192.168.100.1
netmask 255.255.255.0
bridge_ports dummy0
#
auto dummy0
iface dummy0 inet manual

In order to make sure the Xen scripts don’t create the normal bridges when a DOMU is started, we need to prevent this by
editing the file /etc/xen/xend-config.sxp and changing the line (around line 176):
FROM:
(network-script network-bridge)
TO:
(network-script none)
reboot
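After the reboot you can check that both bridges came up as configured, for example with:
brctl show
ip addr show xenbr0
ip addr show pdummy0
xenbr0 should list eth0 as a bridge port, and pdummy0 should list dummy0.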

Configuring the DOMUs

DOMUs Configuration

PyGRUB
If your DOMU configurations are set to use pygrub as boot loader,
then make sure the path to pygrub in the DOMU configuration file is correct as follows:
bootloader = '/usr/lib/xen-4.4/bin/pygrub'
In the same DOMU configuration file, make sure you are using non-duplicated MAC addresses for the network interface assignments, and define the bridge that will be used by this DOMU, for example:
vif = [ 'ip=46.7.178.112,mac=00:16:34:D7:9C:12,bridge=xenbr0', 'ip=192.168.100.112,mac=00:16:3E:D7:1C:12,bridge=pdummy0' ]
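If you need extra unique MAC addresses, e.g. for the private LAN interface, one simple way is to generate a random one under the Xen-reserved 00:16:3E prefix (remember that for the public Hetzner IPs the MAC must come from the Robot, as noted above):
printf '00:16:3E:%02X:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))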
NOTE: If you are not using pygrub yet and want to use it as the boot loader for each individual DOMU, which makes the DOMU kernels independent from DOM0, see the following article. Please note that in Ubuntu 14.04 the path to pygrub is different than in the article; each new version of Xen has a different path to pygrub. The rest of the article is fully accurate for Ubuntu as well.
//tipstricks.itmatrix.eu/?s=pygrub&x=0&y=0

DOMus Network Configuration

Each DOMU will get the interfaces lo, eth0 and eth1 with the following configuration:
I’m using the first IP of our subnet for this DOMU, which will therefore be configured as follows:
Note: This configuration is not really standard as it uses each IP with the netmask /32 (255.255.255.255).
This setting allows each IP of the subnet to be usable by any DOMU.
File: /etc/network/interfaces
Content:
# The loopback network interface
auto lo
iface lo inet loopback
#
# The primary network interface
auto eth0
iface eth0 inet static
address 46.7.178.112
netmask 255.255.255.255
gateway 46.7.178.1
#
# The internal LAN interface (will be connected to pdummy0 on DOM0)
auto eth1
iface eth1 inet static
address 192.168.100.112
netmask 255.255.255.0

22 May 15 Extending dynamically Linux RAMs in VMWare VM without rebooting

Situation:
Need to raise the amount of RAM in a VMWare VM without rebooting.

Solution:
– In VMWare interface: Raise the amount of RAM for the VM
– In the Linux VM: Run the following script:

#!/bin/bash
# This script enables (sets online) RAM that is not yet recognized by the system
# Escape sequence used to move the cursor up one line and clear it (removes the question line again)
dellineup="\033[A\033[2K"
deleteline () {
    echo -ne $dellineup
}
### check preconditions ###
if ! type -P modprobe > /dev/null; then
    echo -e "'modprobe' is not installed! \ne.g. apt-get install kmod"
    exit 1
fi
### check if there is any offline RAM ###
RAMOFFLINE=`grep offline /sys/devices/system/memory/*/state | wc -l`
if [ $RAMOFFLINE -gt 0 ]; then
    echo "RAM found that is not yet recognized by the system. Enable live RAM recognition? (y/N)" ; read yesno ; deleteline
    case "$yesno" in
        [yY])
            echo -e "Recognition of unused RAM will now be enabled"
            modprobe acpiphp
            modprobe acpi_memhotplug
            for i in $(grep -l offline /sys/devices/system/memory/*/state); do echo online > $i; done
            echo -e "\n\n"
            free -m
            ;;
        *)
            echo -e "Process cancelled"
            ;;
    esac
else
    echo "No unrecognized RAM present."
fi

Check the new amount of RAM:
free | grep Mem

XEN NOTE: In a Xen environment the system/kernel appears to recognize the new amount of RAM immediately and dynamically, without needing to run the script above.

22 May 15 Update the number of CPU dynamically in a VMWare VM

Situation:
I’ve come across a situation where I needed to live-raise the number of CPUs of a VMware Linux VM without having to reboot.

Solution:
– In VMware raise the number of CPUs
– In the Linux VM do the following:
– Save the following script into /root/bin/ directory
(It was taken from this article: https://communities.vmware.com/docs/DOC-10493)
OR
mkdir -p /root/bin
cd /root/bin
wget https://communities.vmware.com/servlet/JiveServlet/download/10493-2-26560/online_hotplug_cpu.sh

Content of script:
#!/bin/bash
# William Lam
# http://engineering.ucsb.edu/~duonglt/vmware/
# hot-add cpu to LINUX system using vSphere ESX(i) 4.0
# 08/09/2009
#
for CPU in $(ls /sys/devices/system/cpu/ | grep cpu | grep -v idle)
do
    CPU_DIR="/sys/devices/system/cpu/${CPU}"
    echo "Found cpu: \"${CPU_DIR}\" ..."
    CPU_STATE_FILE="${CPU_DIR}/online"
    if [ -f "${CPU_STATE_FILE}" ]; then
        STATE=$(cat "${CPU_STATE_FILE}" | grep 1)
        if [ "${STATE}" == "1" ]; then
            echo -e "\t${CPU} already online"
        else
            echo -e "\t${CPU} is new cpu, onlining cpu ..."
            echo 1 > "${CPU_STATE_FILE}"
        fi
    else
        echo -e "\t${CPU} already configured prior to hot-add"
    fi
done

– Make the script runnable
chmod 755 online_hotplug_cpu.sh

– Run the script
/root/bin/online_hotplug_cpu.sh

Check the number of CPUs:
cat /proc/cpuinfo | grep 'processor'
eg.
processor : 0
processor : 1
processor : 2
processor : 3

05 Apr 15 Installing Xen 4.4 on Ubuntu Server 14.04 LTS (Trusty) in a Hetzner server with 8 IPs subnet

Hetzner Germany offers very fast and inexpensive rentals of hardware servers. In order to communicate internally via a private network between the Xen DOMUs and DOM0, one would normally set up the Xen DOM0 network with bridge networking as follows:
DOM0:xenbr0(eth0) ===bridging===>> DOMUs:eth0

BUT!!!!
PROBLEM:
Because of the configuration of the network switches at Hetzner, one hardware server can have multiple IPs but only one MAC address (the MAC of eth0 in DOM0). This means that bridge networking for the Internet connection (eth0) doesn’t work for multiple DOMUs, each one having its own IP AND MAC address. The situation is quite different if you order 1 to 3 (max.) extra IPs, which can be added to each hardware server: those IPs can be configured in the Xen DOMUs using the bridge method. On the Hetzner Robot site you can generate a MAC address per IP, which you can then use in your Xen DOMU configuration, and the Hetzner switch will route it properly (see this site for those instructions: https://wp.me/pKZRY-OE). BUT this is not (yet?) the case with requested IP subnets from Hetzner. Therefore the following solution is the best found so far.

SOLUTION:
The solution is to use routing for Internet access. DOM0 routes the traffic from the Internet to each DOMU. It also routes the traffic between DOMUs, making this a private connection, since this communication never leaves DOM0.
Note: This solution was presented in the Hetzner documentation at http://wiki.hetzner.de/index.php/KVM_mit_Nutzung_aller_IPs_aus_Subnetz/en for a KVM installation. It offers the possibility of using ALL of the subnet IPs, as opposed to the traditional way of using routing, which prevents the use of the first and last IP of the subnet as DOMU IPs and also needs an extra IP as subnet gateway for the DOMUs. For example:
Traditional way of routing:
CIDR Subnet: 46.5.178.112/29
Network addr: 46.5.178.112 (unusable by DOMUs hosts)
Gateway addr: 46.5.178.113 (used as gateway for DOMUs, unusable by DOMUs hosts)
DOMUs usable IPs: 46.5.178.114 - 46.5.178.118 (5 IPs)
Broadcast addr: 46.5.178.119 (unusable for DOMUs hosts)

This means that with the traditional way, out of an 8-IP subnet (/29) you can only run 5 DOMUs in this Xen environment if each DOMU needs its own Internet-reachable IP.

This specific routing method:
Every IP (8 IPs) of the subnet can be used for DOMUs: 46.5.178.112 – 46.5.178.119
No IP is lost for being the network or broadcast IP or as subnet gateway.
Internet ===>>(DOM0:eth0) --- routing ===>>(DOMu Bridge)===>>(DOMu VIF === DOMu:eth0)
Short explanation:
Each DOMu gets a bridge which contains a private network address(172.30.xx.1) used to link DOM0 to the DOMu Internet address.
Example:
DOM0:eth0 ===routing===>> (Bridge[172.30.112.1]) ===>> (vif1.0 === DOMu:eth0 [46.5.178.112])
DOM0:eth0 ===routing===>> (Bridge[172.30.113.1]) ===>> (vif2.0 === DOMu:eth0 [46.5.178.113])
DOM0:eth0 ===routing===>> (Bridge[172.30.114.1]) ===>> (vif3.0 === DOMu:eth0 [46.5.178.114])
DOM0:eth0 ===routing===>> (Bridge[172.30.115.1]) ===>> (vif4.0 === DOMu:eth0 [46.5.178.115])
DOM0:eth0 ===routing===>> (Bridge[172.30.116.1]) ===>> (vif5.0 === DOMu:eth0 [46.5.178.116])
DOM0:eth0 ===routing===>> (Bridge[172.30.117.1]) ===>> (vif6.0 === DOMu:eth0 [46.5.178.117])
DOM0:eth0 ===routing===>> (Bridge[172.30.118.1]) ===>> (vif7.0 === DOMu:eth0 [46.5.178.118])
DOM0:eth0 ===routing===>> (Bridge[172.30.119.1]) ===>> (vif8.0 === DOMu:eth0 [46.5.178.119])
DOM0:dummy0 ===routing===>> (Bridge:pdummy0) ===>> (vifx.0 === DOMu:eth1 [192.168.100.x])

Please notice that the 3rd number in each bridge IP corresponds to the last number of the subnet IP of its respective DOMU. This is used just to identify the different subnets created on each bridge; they simply need to differ from each other.
The vifx.0 virtual interface is created automatically by the Xen scripts at the start of a DOMU. It is the internal link between the DOMU's eth0 interface and its associated bridge located in DOM0.
The netmask 255.255.255.0 of each bridge's private subnet is only a practical way of limiting the range of each subnet so they don’t overlap each other.
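Once a DOMU is up you can verify that its virtual interface was attached to the right bridge and that the host route exists, for example for the first DOMU:
brctl show br112
ip route show | grep 46.5.178.112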

Note: In this HowTo I also use the virtual interface dummy0 to connect the DOMUs to each other in a private virtual LAN based on the network 192.168.100.0/24. To realize this I set up a dummy0 virtual interface and its attached bridge pdummy0.

Steps to create the virtual private LAN:

Edit /etc/modules and add the following line:
dummy
This will load the dummy module into the kernel automatically at boot time.
To avoid having to reboot now, we load it manually by issuing the command:
modprobe dummy
Interface configuration:
Edit /etc/network/interfaces and add the following lines:
(Replace IPs to your preferred IP Network)
auto dummy0
iface dummy0 inet manual
#
auto pdummy0
iface pdummy0 inet static
address 192.168.100.1
netmask 255.255.255.0
network 192.168.100.0
broadcast 192.168.100.255
bridge_ports dummy0
bridge_stp off
bridge_fd 0
bridge_maxwait 0

Now we bring the dummy0 and bridge pdummy0 interfaces up:
ifup dummy0
ifup pdummy0

Note: Don't worry about the error message at this point; we can ignore it for now.
Check the configuration:
ifconfig dummy0
ifconfig pdummy0

You should get something like this:
dummy0 Link encap:Ethernet HWaddr 76:99:e1:48:64:f5
UP BROADCAST RUNNING NOARP MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:15 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:1230 (1.2 KB)
#
pdummy0 Link encap:Ethernet HWaddr 76:99:e1:48:64:f5
inet addr:192.168.100.1 Bcast:192.168.100.255 Mask:255.255.255.0
inet6 addr: fe80::7499:e1ff:fe48:64f5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:407 errors:0 dropped:0 overruns:0 frame:0
TX packets:530 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:82394 (82.3 KB) TX bytes:57166 (57.1 KB)

The routing table will then look like this:
# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 178.61.78.129 0.0.0.0 UG 0 0 0 eth0
46.5.178.112 0.0.0.0 255.255.255.255 UH 0 0 0 br112
46.5.178.113 0.0.0.0 255.255.255.255 UH 0 0 0 br113
46.5.178.114 0.0.0.0 255.255.255.255 UH 0 0 0 br114
46.5.178.115 0.0.0.0 255.255.255.255 UH 0 0 0 br115
46.5.178.116 0.0.0.0 255.255.255.255 UH 0 0 0 br116
46.5.178.117 0.0.0.0 255.255.255.255 UH 0 0 0 br117
46.5.178.118 0.0.0.0 255.255.255.255 UH 0 0 0 br118
46.5.178.119 0.0.0.0 255.255.255.255 UH 0 0 0 br119
172.30.112.0 0.0.0.0 255.255.255.0 U 0 0 0 br112
172.30.113.0 0.0.0.0 255.255.255.0 U 0 0 0 br113
172.30.114.0 0.0.0.0 255.255.255.0 U 0 0 0 br114
172.30.115.0 0.0.0.0 255.255.255.0 U 0 0 0 br115
172.30.116.0 0.0.0.0 255.255.255.0 U 0 0 0 br116
172.30.117.0 0.0.0.0 255.255.255.0 U 0 0 0 br117
172.30.118.0 0.0.0.0 255.255.255.0 U 0 0 0 br118
172.30.119.0 0.0.0.0 255.255.255.0 U 0 0 0 br119
178.61.78.129 0.0.0.0 255.255.255.255 UH 0 0 0 eth0
192.168.100.0 0.0.0.0 255.255.255.0 U 0 0 0 pdummy0

ASSUMPTIONS in these examples

Current network settings:
DOM0 IP: 178.61.78.140
Default Gateway: 178.61.78.129
IP Net netmask: 255.255.255.240

Extra IPs subnet:
Subnet: 46.5.178.112/29 (46.5.178.112 – 46.5.178.119)
Netmask: 255.255.255.248
Broadcast: 46.5.178.119

Local virtual LAN
Subnet: 192.168.100.0/24
Netmask: 255.255.255.0
Broadcast: 192.168.100.255

XEN INSTALLATION

We will first install XEN on the main hardware server. This means installing the hypervisor, a Xen-aware kernel and the Xen tools. This can be done by installing the following packages plus a few favorite tools:
apt-get install xen-hypervisor-4.4-amd64 xen-utils-4.4 bridge-utils ethtool iptables mc ssh fail2ban

Some extra preparations

Since every virtual disk needs to be mounted using a loop device, we need to make sure there are enough of them available in the system.
Edit the file /etc/modules and add:
loop max_loop=64

We also need to turn on IPv4 forwarding in the kernel.
Edit the file /etc/sysctl.conf (around line 44) and activate the line by removing the ‘#’ as follows:
net.ipv4.ip_forward=1
Then run the following command to activate it:
sysctl -p /etc/sysctl.conf

CONFIGURING THE NETWORK in DOM0

Based on the IP assumptions above, here is the content of the file /etc/network/interfaces:
Note: The configuration of the eth0 below is not standard. Please see the explanation of it at:
http://wiki.hetzner.de/index.php/KVM_mit_Nutzung_aller_IPs_aus_Subnetz/en
# Loopback device:
auto lo
iface lo inet loopback
#
## device: eth0 for normal operation
# The primary network interface for KVM operation
auto eth0
iface eth0 inet static
address 178.61.78.140
netmask 255.255.255.255
gateway 178.61.78.129
pointopoint 178.61.78.129
#
iface eth0 inet6 static
address 2a01:4f8:121:30ea::2
netmask 64
gateway fe80::1
#
auto dummy0
iface dummy0 inet manual
#
auto pdummy0
iface pdummy0 inet static
address 192.168.100.1
netmask 255.255.255.0
network 192.168.100.0
broadcast 192.168.100.255
gateway 192.168.0.1
bridge_ports dummy0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
#
################# Individual bridges for each extra VM #################
auto br112
iface br112 inet static
address 172.30.112.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.112 $IFACE
post-down brctl delbr $IFACE
#
auto br113
iface br113 inet static
address 172.30.113.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.113 $IFACE
post-down brctl delbr $IFACE
#
auto br114
iface br114 inet static
address 172.30.114.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.114 $IFACE
post-down brctl delbr $IFACE
#
auto br115
iface br115 inet static
address 172.30.115.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.115 $IFACE
post-down brctl delbr $IFACE
#
auto br116
iface br116 inet static
address 172.30.116.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.116 $IFACE
post-down brctl delbr $IFACE
#
auto br117
iface br117 inet static
address 172.30.117.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.117 $IFACE
post-down brctl delbr $IFACE
#
auto br118
iface br118 inet static
address 172.30.118.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.118 $IFACE
post-down brctl delbr $IFACE
#
auto br119
iface br119 inet static
address 172.30.119.1
netmask 255.255.255.0
pre-up brctl addbr $IFACE
post-up route add -host 46.5.178.119 $IFACE
post-down brctl delbr $IFACE

In order to make sure the Xen scripts don’t create the normal bridges when a DOMU is started, we need to prevent this by
editing the file /etc/xen/xend-config.sxp and changing the line (around line 176):
FROM:
(network-script network-bridge)
TO:
(network-script none)
Reboot for the new network configuration to take effect:
reboot

DOMUs Configuration

PyGRUB
If your DOMU configurations are set to use pygrub as boot loader,
then make sure the path to pygrub in the DOMU configuration file is correct as follows:
bootloader = '/usr/lib/xen-4.4/bin/pygrub'
In the same DOMU configuration file, make sure you are using non-duplicated MAC addresses for the network interface assignments, and define the bridge that will be used by this DOMU, for example:
vif = [ 'ip=46.5.178.112,mac=00:16:34:D7:9C:F8,bridge=br112', 'ip=192.168.100.112,mac=00:16:3E:D7:1C:13,bridge=pdummy0' ]

NOTE: If you are not using pygrub yet and want to use it as the boot loader for each individual DOMU, which makes the DOMU kernels independent from DOM0, see the following article. Please note that in Ubuntu 14.04 the path to pygrub is different than in the article; each new version of Xen has a different path to pygrub. The rest of the article is fully accurate for Ubuntu as well.
//tipstricks.itmatrix.eu/?s=pygrub&x=0&y=0

DOMus Network Configuration

Each DOMU will get the interfaces lo, eth0 and eth1 with the following configuration:
I’m using the first IP of our subnet for this DOMU, which will therefore be configured as follows:
Note: This configuration is not really standard as it uses each IP with the netmask /32 (255.255.255.255). This setting allows each IP of the subnet to be usable by any DOMU. The pointopoint configuration allows it to reach the gateway.

File: /etc/network/interfaces
Content:
# The loopback network interface
auto lo
iface lo inet loopback
#
# The primary network interface
auto eth0
iface eth0 inet static
address 46.5.178.112
netmask 255.255.255.255
gateway 178.61.78.140
pointopoint 178.61.78.140
#
auto eth1
iface eth1 inet static
address 192.168.100.112
netmask 255.255.255.0

07 Aug 14 Install Xen 4.1 on Debian Wheezy in a Hetzner Dedicated server

Hetzner Germany offers very fast and inexpensive rentals of hardware servers. In order to communicate internally via a private network between the Xen DOMUs and DOM0, one would normally set up the Xen DOM0 network with bridge networking as follows:
DOM0:xenbr0(eth0) --- bridging==>> DOMUs:eth0
DOM0:xenbr1(dummy0) ---bridging==>> DOMUs:eth1

BUT!!!!
PROBLEM:
Because of the configuration of the network switches at Hetzner, one hardware server can have multiple IPs but only one MAC address (MAC of eth0 in DOM0). This means that Bridge networking for Internet connection (eth0) doesn’t work for multiple DOMUs, each one having its own IP AND MAC address.
SOLUTION:
The solution is to use routing for Internet access and bridging for private LAN as follows:
DOM0:eth0 --- routing===>> DOMUs:eth0
DOM0:xenbr1(dummy0) --- bridging==>> DOMUs:eth1

Note: The DISADVANTAGE of this solution is that DOM0 must use one IP from the subnet provided by Hetzner as a gateway for the running DOMUs, to allow them to communicate with the Internet. In this case the IP subnet of 8 IPs provided by Hetzner could for example be:
CIDR Subnet: 140.231.213.168/29
Network addr: 140.231.213.168 (unusable by DOMU hosts)
Gateway addr: 140.231.213.169 (used as gateway for DOMUs, unusable by DOMU hosts)
DOMUs usable IPs: 140.231.213.170 - 140.231.213.174 (5 IPs)
Broadcast addr: 140.231.213.175 (unusable for DOMU hosts)

This means that out of the 8 IPs you got as a subnet from Hetzner you can only run 5 DOMUs in this Xen environment if each DOMU needs its own Internet-reachable IP.

XEN INSTALLATION


We will first install XEN on the main hardware server. This means installing the hypervisor, a Xen-aware kernel and the Xen tools. This can be done by installing the following packages plus a few favorite tools:
apt-get install xen-linux-system xen-tools bridge-utils mc ssh fail2ban ethtool
Debian Wheezy uses Grub 2 as its default boot manager. It lists normal kernels first and then, if the Xen kernel is installed, lists the Xen hypervisor and its kernels. You need to change this to make Grub 2 boot Xen by default. This is done by changing the priority of Grub's Xen configuration script (20_linux_xen) to be higher than the standard Linux config (10_linux), which is most easily done using dpkg-divert:
dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
After any update to the Grub configuration you must apply the configuration by running:
update-grub
Disable Xendomains save & restore
We disable the save and restore feature of DOMUs mostly because in my experience this feature doesn't always work well. I prefer to shut down each DOMU manually before rebooting DOM0, and then, after the reboot of DOM0, restart each individual DOMU using a @reboot cron job, for example:
# This will start 2 virtual machines 60 sec after reboot of DOM0
@reboot /bin/sleep 60; /usr/sbin/xl create /etc/xen/DOMU1.cfg; /usr/sbin/xl create /etc/xen/DOMU2.cfg
This way, if a power failure or anything else forces an unattended reboot of DOM0, all the DOMUs will automatically restart after the reboot.

Now the disabling of the automatic Save/Restore of DOMUs:
Edit /etc/default/xendomains
Content:
#XENDOMAINS_SAVE=/var/lib/xen/save
XENDOMAINS_SAVE=
#
#XENDOMAINS_RESTORE=true
XENDOMAINS_RESTORE=false

NETWORKING:


Add the dummy network interface module
echo dummy >> /etc/modules
modprobe dummy

Network configuration
Edit file: /etc/network/interfaces
(Note: here you’ll need to adapt your own IPs etc. in this file)
Content:
# Loopback device:
auto lo
iface lo inet loopback
#
# device: eth0
auto eth0
iface eth0 inet static
address 123.45.67.89
broadcast 123.45.67.255
netmask 255.255.255.0
gateway 123.45.67.1
#
iface eth0 inet6 static
address 2a01:4f7:192:4213::2
netmask 64
gateway fe80::1
#
# Used exclusively as Gateway for DOMUs for this subnet. Unfortunately losing one IP for Gateway purposes.
auto eth0:gw1
iface eth0:gw1 inet static
address 140.231.213.169
netmask 255.255.255.248
network 140.231.213.168
broadcast 140.231.213.175
#
# Internal private network to DOMUs
iface dummy0 inet manual
#
auto xenbr1
iface xenbr1 inet static
address 192.168.100.1
netmask 255.255.255.0
network 192.168.100.0
broadcast 192.168.100.255
bridge_ports dummy0
#
#other possibly useful options in a virtualized environment
bridge_stp off # disable Spanning Tree Protocol
bridge_waitport 0 # no delay before a port becomes available
bridge_fd 0 # no forwarding delay
post-up ethtool -K xenbr1 tx off
post-up ip link set xenbr1 promisc off

Switch to the XL Xen ToolStack
Edit /etc/default/xen
TOOLSTACK=xl
WARNING: The above entry is ‘xl’ with a lowercase letter L, not ‘x1’ with the digit one!

Edit /etc/xen/xl.conf and make sure the entries are as follows:
# automatically balloon down dom0 when xen doesn't have enough free
# memory to create a domain
autoballoon=1
#
# full path of the lockfile used by xl during domain creation
lockfile="/var/lock/xl"
#
# default vif script.
#vifscript="vif-bridge"
vifscript="/etc/xen/scripts/vif-route_eth0-bridge_dummy0"

Note: Here we use a script which sets up routing for eth0 and bridging for dummy0.
Create it:
touch /etc/xen/scripts/vif-route_eth0-bridge_dummy0
chmod 755 /etc/xen/scripts/vif-route_eth0-bridge_dummy0

Edit the file /etc/xen/scripts/vif-route_eth0-bridge_dummy0
Content:
#!/bin/sh
# Custom vif script which allows combining routing for the Internet interface (eth0) and bridging for the internal LAN (eth1)
dir=$(dirname "$0")
# ${vif} has the form vifX.Y, where Y is the DOMU interface number (0 = eth0, 1 = eth1, ...)
IFNUM=$(echo ${vif} | cut -d. -f2)
if [ "$IFNUM" = "0" ] ; then
    "$dir/vif-route" "$@"
else
    "$dir/vif-bridge" "$@"
fi

Edit the file /etc/xen/xend-config.sxp
and make sure the already existing entries are disabled with ‘#’ and new lines entered as follows:
#.......
#(vif-script vif-bridge)
(network-script dummy)
#
#(vif-script vif-route)
(vif-script vif-route_eth0-bridge_dummy0)
#
# make sure DOM0 has enough memory
(dom0-min-mem 2048)
#.......

Setup the IP forwarding and ARP proxying in kernel:
Edit the file /etc/sysctl.conf
Either un-comment or add the following lines:
# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1
# ARP Proxying
net.ipv4.conf.eth0.proxy_arp = 1

To make this change take effect immediately run:
sysctl -p /etc/sysctl.conf
Finally, before we reboot the system we need to make sure we activate the proper toolstack and related features at boot time by running the following commands:
update-rc.d xendomains defaults
update-rc.d xen defaults
/etc/init.d/xen restart
/etc/init.d/xendomains restart

DOMUs Configuration


PyGRUB
If your DOMUs configurations are set to use pygrub as boot loader,
then make sure the path to pygrub in the DOMU configuration file is correct as follows:
bootloader = '/usr/lib/xen-4.1/bin/pygrub'
In the same DOMU configuration file, make sure you are using non-duplicated MAC addresses for the network interface assignments, for example:
vif = [ 'ip=140.231.213.170,mac=00:16:34:D7:9C:F4' , 'ip=192.168.100.18,mac=00:16:3E:D7:9C:F6,bridge=xenbr1' ]
Note: The first interface doesn’t need any bridge since it is handled by routing; the internal LAN is bridged with xenbr1 though.

NOTE: If you want to use pygrub as the boot loader for each individual DOMU, which makes the DOMU kernels independent from DOM0, see the following article:
//tipstricks.itmatrix.eu/?s=pygrub&x=0&y=0