Download and install the distribution for the amd64 architecture. Then install Xen along with the components needed for HVM and PV guests:
# apt-get install xen-linux-system xen-tools qemu-system-x86 firmware-linux-nonfree lvm2 bridge-utils
After installation, update the GRUB configuration. Append the following line to /etc/default/grub:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=1024M,max:1024M dom0_max_vcpus=1 dom0_vcpus_pin"
This tells GRUB to boot Dom0 with 1 GB of RAM and a single virtual CPU; the rest of the memory and the remaining CPU cores stay available to your DomU guests. Adjust these values as you see fit.
Then change the order in which the boot entries are generated, update GRUB, and reboot:
# dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
# update-grub
# reboot
After the reboot, run:
# xl list
If you see output like
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1017     1     r-----     382.4
then Xen is installed.
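When Xen is healthy, `xl list` always shows Domain-0 with an `r` (running) in the State column. A sketch of how that check can be scripted; the sample text below stands in for the real command output, and on a live host you would pipe `xl list` into awk directly:

```shell
# Sample `xl list` output; on a real host use: xl list | awk ...
xl_output='Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1017     1     r-----     382.4'

# Column 5 is the state; an "r" in the first position means "running"
state=$(printf '%s\n' "$xl_output" | awk '$1 == "Domain-0" { print $5 }')
case "$state" in
    r*) echo "Xen is up: Domain-0 is running" ;;
    *)  echo "Domain-0 not found or not running" ;;
esac
```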
Switch the default toolstack to xl:
sed -i '/TOOLSTACK/s/=.*/=xl/' /etc/default/xen
The /etc/network/interfaces file should look roughly like this:
# The loopback network interface
auto lo
iface lo inet loopback
# physical network interface
auto eth0
iface eth0 inet manual
# bridge public
auto xenbr0
iface xenbr0 inet static
address 192.168.0.9
broadcast 192.168.0.255
netmask 255.255.255.0
gateway 192.168.0.1
bridge_ports eth0
bridge_stp off # disable Spanning Tree Protocol
bridge_waitport 0 # no delay unless port available
bridge_fd 0 # no forwarding delay
Set address, broadcast, gateway, bridge_ports, and so on according to your own network.
Suppose we have a physical disk that the system sees as /dev/sdd. Before a disk or partition can be used as an LVM physical volume, it must be initialized.
For a whole disk:
root@host# pvcreate /dev/sdd
Physical volume "/dev/sdd" successfully created
Create a volume group named vg0; it will hold the disk images of the virtual machines:
root@host# vgcreate vg0 /dev/sdd
Volume group "vg0" successfully created
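The xen-tools configuration later in this guide stores loopback images and guest configs under /mnt/vg0-lv0, so a logical volume has to be created and mounted there as well. A sketch of those steps; the lv0 name and the 20G size are assumptions, and with DRY_RUN=1 (the default here) the script only prints the commands so they can be reviewed before being run for real as root:

```shell
#!/bin/sh
# Create and mount a filesystem for images/configs at /mnt/vg0-lv0.
# The lv0 name and 20G size are illustrative; adjust to your setup.
DRY_RUN=${DRY_RUN:-1}
VG=vg0 LV=lv0 SIZE=20G MNT=/mnt/vg0-lv0

run() {
    # In dry-run mode, just print what would be executed
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

run lvcreate -n "$LV" -L "$SIZE" "$VG"
run mkfs.ext4 "/dev/$VG/$LV"
run mkdir -p "$MNT"
run mount "/dev/$VG/$LV" "$MNT"
```

Running it with DRY_RUN=0 performs the same steps for real; add an /etc/fstab entry if the mount should survive reboots.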
Install vnc4server and the Xfce desktop environment:
# apt-get install xfce4 xfce4-goodies gnome-icon-theme vnc4server
Create /usr/local/bin/myvncserver with the following contents:
#!/bin/bash
PATH="$PATH:/usr/bin/"
DISPLAY="1"
DEPTH="24"
GEOMETRY="1200x800"
OPTIONS="-depth ${DEPTH} -geometry ${GEOMETRY} :${DISPLAY}"
case "$1" in
    start)
        /usr/bin/vnc4server ${OPTIONS}
        ;;
    stop)
        /usr/bin/vnc4server -kill :${DISPLAY}
        ;;
    restart)
        $0 stop
        $0 start
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac
exit 0
Make the file executable:
# chmod +x /usr/local/bin/myvncserver
Check that it works:
# /usr/local/bin/myvncserver start
# /usr/local/bin/myvncserver stop
# /usr/local/bin/myvncserver restart
Now create a systemd unit describing the service, so the system knows how to start, stop, and restart it. Create /lib/systemd/system/myvncserver.service with the following contents:
[Unit]
Description=Manage VNC Server
[Service]
Type=forking
ExecStart=/usr/local/bin/myvncserver start
ExecStop=/usr/local/bin/myvncserver stop
ExecReload=/usr/local/bin/myvncserver restart
User=root
[Install]
WantedBy=multi-user.target
Reload systemd and enable the service:
# systemctl daemon-reload
# systemctl enable myvncserver.service
If everything went well, the service will start automatically at boot.
Below is the global xl configuration file (/etc/xen/xl.conf):
## Global XL config file ##
# Control whether dom0 is ballooned down when xen doesn't have enough
# free memory to create a domain. "auto" means only balloon if dom0
# starts with all the host's memory.
autoballoon="auto"
# full path of the lockfile used by xl during domain creation
#lockfile="/var/lock/xl"
# default output format used by "xl list -l"
#output_format="json"
# first block device to be used for temporary VM disk mounts
#blkdev_start="xvda"
# default option to run hotplug scripts from xl
# if disabled the old behaviour will be used, and hotplug scripts will be
# launched by udev.
#run_hotplug_scripts=1
# default backend domain to connect guest vifs to. This can be any
# valid domain identifier.
#vif.default.backend="0"
# default gateway device to use with vif-route hotplug script
#vif.default.gatewaydev="eth0"
# default vif script to use if none is specified in the guest config
#vif.default.script="vif-bridge"
# default bridge device to use with vif-bridge hotplug scripts
#vif.default.bridge="xenbr0"
# Reserve a claim of memory when launching a guest. This guarantees immediate
# feedback whether the guest can be launched due to memory exhaustion
# (which can take a long time to find out if launching huge guests).
# see xl.conf(5) for details.
#claim_mode=1
Next, adjust the xen-tools configuration. The full file with the values used in this guide:
##
# /etc/xen-tools/xen-tools.conf
##
#
# This is the global configuration file for the scripts included
# within the xen-tools package.
#
# For more details please see:
#
# http://xen-tools.org/
#
##
##
#
# File Format
# -----------
#
# Anything following a '#' character is ignored as a comment.
#
# Otherwise the format of this file "key = value". The value of
# any keys in this file may be constructed via the output of a command.
#
# For example:
#
# kernel = /boot/vmlinuz-`uname -r`
#
##
#
##
# Output directory for storing loopback images.
#
# If you choose to use loopback images, which are simple to manage but
# slower than LVM partitions, then specify a directory here and uncomment
# the line.
#
# New instances will be stored in subdirectories named after their
# hostnames.
#
##
dir = /mnt/vg0-lv0
#
#
##
#
# If you don't wish to use loopback images then you may specify an
# LVM volume group here instead
#
##
lvm = vg0
#
##
#
# Installation method.
#
# There are several distinct methods you may use to install a new copy
# of Linux to use in your Xen guest domain:
#
# - Installation via the debootstrap command.
# - Installation via the rpmstrap command.
# - Installation via the rinse command.
# - Installation by copying a directory containing a previous installation.
# - Installation by untarring a previously archived image.
#
# NOTE That if you use the "untar", or "copy" options you should ensure
# that the image you're left with matches the 'dist' setting later in
# this file.
#
#
##
#
#
# install-method = [ debootstrap | rinse | rpmstrap | copy | tar ]
#
#
install-method = debootstrap
#
# If you're using the "copy", or "tar" installation methods you must
# specify the source location to copy from, or the source
# .tar file to unpack.
#
# You may specify that with a line such as:
#
# install-source = /path/to/copy
# install-source = /some/path/img.tar
#
#
#
##
# Command definitions.
##
#
# The "rinse", and "rpmstrap" commands are hardwired into
# the script, but if you wish to modify the commands which are executed
# when installing new systems by a "copy", "debootstrap", or "tar" method
# you can do so here:
#
# (This allows you to install from a .tar.bz file, rather than a plain
# tar file, use cdebootstrap, etc.)
#
# install-method = copy:
# copy-cmd = /bin/cp -a $src/* $dest
#
# install-method = debootstrap:
# debootstrap-cmd = /usr/sbin/debootstrap
#
# install-method = tar:
# tar-cmd = /bin/tar --numeric-owner -xvf $src
#
#
#
##
# Disk and Sizing options.
##
#
size = 10G # Root disk, suffix (G, M, k) required
memory = 256M # Suffix (G, M, k) required
#maxmem = 256M # Suffix (G, M, k) optional
swap = 256M # Suffix (G, M, k) required
noswap = 1 # Don't use swap at all for new systems.
fs = ext4 # Default file system for any disk
dist = `xt-guess-suite-and-mirror --suite`
# Default distribution is determined by Dom0's distribution
image = sparse # Specify sparse vs. full disk images (file based images only)
#
# See the README for currently supported and tested distributions. You can
# either find it in the root directory of the unpacked source or, on Debian
# and Ubuntu based systems, in /usr/share/doc/xen-tools/README.gz
#
##
# Networking setup values.
##
#
# Uncomment and adjust these network settings if you wish to give your
# new instances static IP addresses.
#
gateway = 192.168.0.1
netmask = 255.255.255.0
broadcast = 192.168.0.255
#
# Uncomment this if you wish the images to use DHCP
#
dhcp = 1
#
# Uncomment and adjust this setting if you wish to give your new
# instances a specific nameserver.
#
# By default, nameserver is not set, and Dom0's /etc/resolv.conf will
# be copied to guest.
#
nameserver = 192.168.0.1 8.8.8.8
#
#
# Setup bridge name for host vif. Useful if you use bridged networking
# for guests.
#
bridge = xenbr0
#
##
# Misc options
##
#
# Uncomment the following line if you wish to disable the caching
# of downloaded .deb files when using debootstrap to install images.
#
# cache = no
#
#
# The default cachedir is, /var/cache/apt/archives/, however if it
# does not exist it will default to /var/cache/xen-tools/archives/
# Uncomment the line below to set it to something else.
#
cachedir = /var/cache/xen-tools/archives/
#
#
# Uncomment the following line if you wish not to generate a new root
# password for the new guest.
#
genpass = 0
#
#
# You can also change the password length by uncommenting and
# changing the line below
#
genpass_len = 4
#
#
# You can also change the hashing method used to encrypt the generated
# password by changing the line below.
# Valid values : md5, sha256 and sha512.
#
# hash_method = sha256
#
#
# Uncomment the following line if you wish to interactively setup a
# new root password for images.
#
passwd = 1
#
#
# If you'd like all accounts on your host system which are not present
# on the guest system to be copied over then uncomment the following line.
#
# accounts = 1
#
#
# Default kernel and ramdisk to use for the virtual servers
#
kernel = /boot/vmlinuz-`uname -r`
initrd = /boot/initrd.img-`uname -r`
#
# Uncomment the following line if you wish to use pygrub by default
# for all distributions.
#
# pygrub = 1
#
#
# The architecture to use when using debootstrap, rinse, or rpmstrap.
#
# This is most useful on 64 bit host machines, for other systems it
# doesn't need to be used.
#
# arch = [i386|amd64]
#
#
# The default mirror for debootstrap to install Debian-derived distributions
#
mirror = `xt-guess-suite-and-mirror --mirror`
#
# A mirror suitable for use when installing the Dapper release of Ubuntu.
#
# mirror = http://gb.archive.ubuntu.com/ubuntu/
#
# If you like you could use per-distribution mirrors, which will
# be more useful if you're working in an environment where you want
# to regularly use multiple distributions. The following are the
# default values used (for Debian the GeoIP feature is used, see
# http://wiki.debian.org/DebianGeoMirror):
#
mirror_sarge = http://archive.debian.org/debian
mirror_etch = http://archive.debian.org/debian
mirror_lenny = http://archive.debian.org/debian
mirror_squeeze = http://http.debian.net/debian
mirror_wheezy = http://http.debian.net/debian
mirror_jessie = http://http.debian.net/debian
mirror_sid = http://http.debian.net/debian
mirror_dapper = http://old-releases.ubuntu.com/ubuntu
mirror_edgy = http://old-releases.ubuntu.com/ubuntu
mirror_feisty = http://old-releases.ubuntu.com/ubuntu
mirror_gutsy = http://old-releases.ubuntu.com/ubuntu
mirror_hardy = http://old-releases.ubuntu.com/ubuntu
mirror_intrepid = http://old-releases.ubuntu.com/ubuntu
mirror_jaunty = http://old-releases.ubuntu.com/ubuntu
mirror_karmic = http://old-releases.ubuntu.com/ubuntu
mirror_lucid = http://archive.ubuntu.com/ubuntu
mirror_maverick = http://old-releases.ubuntu.com/ubuntu
mirror_natty = http://old-releases.ubuntu.com/ubuntu
mirror_oneiric = http://archive.ubuntu.com/ubuntu
mirror_precise = http://archive.ubuntu.com/ubuntu
mirror_quantal = http://old-releases.ubuntu.com/ubuntu
mirror_raring = http://old-releases.ubuntu.com/ubuntu
mirror_saucy = http://old-releases.ubuntu.com/ubuntu
mirror_trusty = http://archive.ubuntu.com/ubuntu
mirror_utopic = http://archive.ubuntu.com/ubuntu
mirror_vivid = http://archive.ubuntu.com/ubuntu
# If this is defined it will be used by debootstrap, and configured as the
# proxy for the guest
#
# apt_proxy =
#
# Filesystem options for the different filesystems we support.
#
ext4_options = noatime,nodiratime,errors=remount-ro
ext3_options = noatime,nodiratime,errors=remount-ro
ext2_options = noatime,nodiratime,errors=remount-ro
xfs_options = defaults
reiserfs_options = defaults
btrfs_options = defaults
#
# Uncomment if you wish newly created images to boot once they've been
# created.
#
boot = 1
#
# If you're using the lenny or later version of the Xen guest kernel you will
# need to make sure that you use 'hvc0' for the guest serial device,
# and 'xvdX' instead of 'sdX' for disk devices.
#
# You may specify the things to use here:
#
# serial_device = hvc0 #default
# serial_device = tty1
#
# disk_device = xvda #default
# disk_device = sda
#
#
# Here we specify the output directory which the Xen configuration
# files will be written to, and the suffix to give them.
#
# Historically xen-tools have created configuration files in /etc/xen,
# and given each file the name $hostname.cfg. If you want to change
# that behaviour you may do so here.
#
#
output = /mnt/vg0-lv0/xen-configs
extension = .cfg
#
#
# Here you can control whether your dom0's /etc/hosts file should be
# appended with the new guest, and also if your dom0's /etc/hosts file
# should be copied to the new guest.
#
# Change the following options to 1 to set them
# nohosts - don't touch the dom0's /etc/hosts file
# copyhosts - copy the dom0's /etc/hosts to the guest
#
# by default new guests ARE added to the dom0's /etc/hosts file
# nohosts = 0 # default
#
# by default the dom0's /etc/hosts IS NOT copied
# copyhosts = 0 # default
#
Parameters worth paying attention to:
dir = /mnt/vg0-lv0 - directory where newly created virtual machine images will be stored
lvm = vg0 - name of the LVM volume group
size = 10G - size of the virtual disks to create
memory = 256M - amount of RAM for new virtual machines
#maxmem = 256M - maximum amount of RAM for new virtual machines
swap = 256M - size of the swap partition
noswap = 1 - 1 = do not use a swap partition, 0 = use one
Network settings for new virtual machines:
gateway = 192.168.0.1
netmask = 255.255.255.0
broadcast = 192.168.0.255
dhcp = 1 - whether the virtual machine should get its network settings via DHCP
nameserver = 192.168.0.1 8.8.8.8 - DNS servers for the virtual machine
bridge = xenbr0 - name of the Xen network bridge
genpass = 0 - whether to generate a root password automatically (0 = no, 1 = yes)
passwd = 1 - whether to enter the root password for a new virtual machine manually (0 = no, 1 = yes)
boot = 1 - whether to boot a newly created virtual machine automatically
output = /mnt/vg0-lv0/xen-configs - where the Xen virtual machine configuration files will be stored
extension = .cfg - extension used for the Xen virtual machine configuration files
Now create a paravirtualized (PV) guest image:
# xen-create-image --hostname=debian8-amd64-pv --memory=256mb --vcpus=1 --lvm=vg0 --dhcp --pygrub --force
Start the virtual machine (if it is not already running):
# xl create /mnt/vg0-lv0/xen-configs/debian8-amd64-pv.cfg
Connect to its console to change the IP address, adjust the SSH settings, install the packages you need, and so on (press Ctrl+] to detach from the console):
# xl console debian8-amd64-pv
The resulting /mnt/vg0-lv0/xen-configs/debian8-amd64-pv.cfg should look like this:
#
# Configuration file for the Xen instance debian8-amd64-pv, created
# by xen-tools 4.5 on Wed Jul 20 16:17:29 2016.
#
#
# Kernel + memory size
#
bootloader = '/usr/lib/xen-4.4/bin/pygrub'
vcpus = '1'
memory = '256'
#
# Disk device(s).
#
root = '/dev/xvda1 ro'
disk = [
'phy:/dev/vg0/debian8-amd64-pv-disk,xvda1,w',
]
#
# Physical volumes
#
#
# Hostname
#
name = 'debian8-amd64-pv'
#
# Networking
#
#dhcp = 'dhcp'
vif = [ 'mac=00:16:3E:5C:A2:A8,bridge=xenbr0' ]
#
# Behaviour
#
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
#
# Optional additions for VNC access
#
sdl = 0
vnc = 1
autoport = 'no'
vnclisten = '0.0.0.0'
vncpasswd = '123456'
vncdisplay = 50
vncunused = 0
keymap = 'en-us'
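With vncunused = 0 and a fixed vncdisplay, the guest console is reachable on TCP port 5900 plus the display number; the Dom0 address 192.168.0.9 below comes from the bridge configuration earlier. A small sketch of the mapping:

```shell
# A VNC display N listens on TCP port 5900 + N,
# so vncdisplay = 50 means port 5950 on the Dom0 address.
display=50
port=$((5900 + display))
echo "connect your VNC client to 192.168.0.9 port $port"
```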
In this example we will create a Windows 7 (x64) guest.
Create a 10 GB LVM volume in the vg0 volume group:
# lvcreate -nwindows7-x64-hvm -L10G vg0
Create /mnt/vg0-lv0/xen-configs/windows7-x64-hvm.cfg with the following contents:
#kernel = "/usr/lib/xen-4.4/boot/hvmloader"
builder='hvm'
memory = 1024
vcpus=1
name = "windows7-x64-hvm"
vif = ['bridge=xenbr0']
#disk = ['phy:/dev/vg0/windows7-x64-hvm,hda,w','file:/mnt/vg0-lv0/disk_images/ru_windows_7_all_with_sp1_x64_dvd_u_677391_20120129.iso,hdc:cdrom,r']
disk = ['phy:/dev/vg0/windows7-x64-hvm,hda,w']
acpi = 1
apic = 1
device_model_version = 'qemu-xen'
boot="d"
serial='pty'
sdl = 0
vnc = 1
autoport = 'no'
vnclisten = '0.0.0.0'
vncpasswd = '123456'
vncdisplay = 51
vncunused = 0
keymap = 'en-us'
usb = 1
usbdevice = 'tablet'
/dev/vg0/windows7-x64-hvm - path to the LVM volume created earlier
/mnt/vg0-lv0/disk_images/ru_windows_7_all_with_sp1_x64_dvd_u_677391_20120129.iso - path to the ISO image of Windows 7 (x64)
During installation, uncomment the first disk = ['phy:/... line (the one that attaches the ISO) and comment out the second one.
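Note also that boot="d" makes the guest boot from the CD-ROM; once Windows is installed, switch it to boot="c" so the guest boots from the hard disk. The comment toggling can be scripted; a sketch with sed, shown here against a shortened sample (on the host you would run sed -i against the .cfg file itself, and the win7.iso name is a stand-in for the full ISO path):

```shell
# Toggle the disk lines: activate the line with the ISO (cdrom) and
# comment out the plain-disk line. Sample text stands in for the .cfg.
cfg="#disk = ['phy:/dev/vg0/windows7-x64-hvm,hda,w','file:/mnt/vg0-lv0/disk_images/win7.iso,hdc:cdrom,r']
disk = ['phy:/dev/vg0/windows7-x64-hvm,hda,w']"

# Uncomment any disk line mentioning cdrom; comment out the others
for_install=$(printf '%s\n' "$cfg" | sed -e '/cdrom/s/^#//' -e '/cdrom/!{/^disk/s/^/#/;}')
printf '%s\n' "$for_install"
```

After installation, run the inverse substitutions (and restore boot="c") before restarting the guest.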
The Windows paravirtual (PV) drivers can be downloaded here: https://github.com/spurious/win-pvdrivers-mirror