====== Citrix XenServer and Xen Cloud Platform (XCP) ======

:!: **Be sure to disable CPU power management (C-States) or stability issues may occur!**

See also **[[virtualization:...]]**

See also **[[virtualization:...]]**

http://...

http://...

  * XenServer is a bare-metal (type 1) hypervisor, available for free
  * XCP is an Open Source version of Citrix XenServer
  * Citrix XenCenter can manage both
  * XenCenter requires Windows to run
  * Use tab-completion for ''xe'' commands

===== Official Citrix Resources =====

Quick Installation Guide:

http://...

Other documentation:

http://...

http://...

===== Updates =====

  * Version upgrades are done with the Rolling Pool Upgrade tool within XenCenter
  * Command-line updates are more reliable and faster

http://...

<code>
wget http://...
wget http://...
wget http://...

wget http://...
wget http://...
wget http://...
wget http://...

for x in XS*.zip; do unzip $x; done

for x in *.xsupdate; do xe patch-upload file-name=$x; done
</code>

Then install the updates one at a time, in order, using the UUIDs printed by the last command:

<code>
xe patch-pool-apply uuid=<patch-uuid>
</code>

Then reboot the XenServer host:

<code>
reboot
</code>

==== Multiple XenServers ====

For a single XenServer, the above commands suffice when run from the command line of the XenServer itself.

To update multiple XenServers, download the updates once, then push them to each server by running additional commands that specify the target server/host, as sketched below.
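
A minimal sketch, assuming the patch has already been uploaded with ''xe patch-upload'' as above; the UUIDs are placeholders taken from ''xe patch-list'' and ''xe host-list'':

<code>
# Apply one patch to one specific host instead of pool-wide
xe patch-apply uuid=<patch-uuid> host-uuid=<host-uuid>
</code>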

===== PV vs. HVM =====

This page has lots of clearly explained info:

http://...

===== Remote Access =====

Remote access is a weak spot in XenServer, since the primary management tool is XenCenter on Windows.

Most XenCenter-to-XenServer communication happens on ports 22 (SSH) and 443 (HTTPS).

Since standard SSH is available, the first management tool to grab is an SSH client.

As for XenCenter, you can get partial functionality by simply forwarding port 443, for example with an SSH tunnel as sketched below.
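
A rough sketch of that tunnel, assuming SSH is reachable on the non-standard port 22222 used later in this section; binding local port 443 usually needs administrative rights, so the local port is only an example:

<code>
ssh -p 22222 -l root -L 443:localhost:443 <firewall-address>
</code>

Then point XenCenter at ''localhost''.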

The easiest way I've found to access the console of a VM running on a XenServer behind NAT is:

  - Enable remote SSH access directly to the XenServer
    * You can forward a non-standard port at the firewall (e.g. 22222) to the standard port 22 at the XenServer
  - To access a VM's console, you must tunnel through localhost (the XenServer)
  - Use SSH port forwarding to forward a VM's VNC port (5901, 5902, 5903, etc.) to your local workstation
    * Each VM runs on a different VNC port
    * Each VM's VNC console is only available to localhost (the XenServer)
  - The IP address of the VM doesn't matter

You can determine which VNC port is assigned to which VM like this:

Log into the XenServer via SSH:

<code>
ssh -p 22222 -l root <xenserver-address>
</code>

Determine the VNC port of your target VM:

<code>
xe vm-list
xe vm-list name-label="<VM name>" params=dom-id
netstat -lp | grep -w <dom-id>
</code>

Now you can forward the port(s) and access the VNC console of the VM from other terminals on your remote workstation:

<code>
# e.g. forward a free local port to the VM's VNC port on the XenServer
ssh -p 22222 -l root -L <local-port>:localhost:<VNC-port> <xenserver-address>

# vncviewer display N corresponds to local port 5900+N (e.g. localhost:1 for port 5901)
vncviewer localhost:<display>
</code>

===== Delete Storage Repository =====

<code>
# Find the UUID of the SR to delete
xe sr-list

# Find the PBD(s) attaching the SR to the host(s)
xe pbd-list sr-uuid=your-SR-uuid

# Unplug and destroy the PBD, then forget the SR
xe pbd-unplug uuid=your-PBD-uuid

xe pbd-destroy uuid=your-PBD-uuid

xe sr-forget uuid=your-SR-uuid
</code>

===== Create New Storage Repository =====

==== Local Storage ====

http://...

http://...

http://...

:!: It is advisable to partition the new disk with one large partition, as opposed to using the bare drive for LVM, because many tools will report an unpartitioned drive as empty and invite accidental overwriting.

:!: Verify the new local disk is visible to the XenServer host OS and create a single large partition of **type 8e (Linux LVM)**.

<code>
gdisk /dev/sdx

cat /proc/partitions

ll /dev/disk/by-id/
</code>

Survey the existing storage repositories:

<code>
xe sr-list
</code>

=== Option 1 - Create a new SR ===

<code>
xe sr-create host-uuid=<host-uuid> content-type=user type=lvm shared=false name-label="Local storage 2" device-config:device=/dev/sdx1
</code>

=== Option 2 - Extend an Existing SR ===

  * One benefit of extending a volume onto a second drive is that you end up with a single large volume to work with. If you need that, it's worth it.
  * One downside is that your volume now depends on two drives and is therefore twice as likely to fail.

After partitioning the drive (x) as per above:

<code>
vgdisplay

vgextend VG_XenStorage-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /dev/sdx1
</code>

==== ISO Storage on CIFS ====

CIFS -> "Centos CIFS ISO library" -> ''\\192.168.0.6\nas_vg_1.nas_vol_1.no_backup\Centos''

The general form is ''\\server\share\folder''

Set username and password.
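
A rough CLI equivalent of the XenCenter wizard step above, assuming the same share; the SR name, username, and password are placeholders:

<code>
xe sr-create name-label="CIFS ISO library" type=iso content-type=iso shared=true \
  device-config:location=//192.168.0.6/nas_vg_1.nas_vol_1.no_backup/Centos \
  device-config:username=<user> device-config:cifspassword=<password>
</code>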

==== ISO Storage on NFS ====

You can't specify sub-folders with NFS, only the exported share, so the .iso files must be in the top-level folder of the export:

No_Backup NFS ISO library -> 192.168.0.6:/...
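
A rough CLI equivalent, assuming an NFS export dedicated to ISOs; the server and path are placeholders:

<code>
xe sr-create name-label="NFS ISO library" type=iso content-type=iso shared=true \
  device-config:location=<nfs-server>:/<export-path>
</code>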

==== ISO Storage on Local Disk ====

http://...

:!: This will only create a small SR, suitable for holding smallish (rescue?) images.

:!: DO NOT FILL the Dom0 partition too full!

:!: Note that this will probably break and have to be fixed after a version upgrade!

<code>
mkdir -p /<local-iso-dir>

xe sr-create name-label="Local ISOs" type=iso content-type=iso device-config:location=/<local-iso-dir> device-config:legacy_mode=true
</code>

=== Create an ISO Image ===

To create an ISO image from a physical CD/DVD:

  * Create the ISO storage repository as above
  * Place the optical disc in the XenServer's drive
  * Use ''dd'' to write the image directly into the ISO SR's directory

<code>
dd if=/dev/dvd of=/<local-iso-dir>/<name>.iso
</code>

===== Introduce a Local Storage Repository =====

http://...

http://...
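
A minimal sketch of re-attaching an existing local LVM SR (for example after a reinstall), assuming the SR's UUID is already known; the device path is a placeholder:

<code>
# Tell XenAPI about the existing SR
xe sr-introduce uuid=<sr-uuid> type=lvm name-label="Local storage" content-type=user

# Connect it to this host, then plug it in
xe pbd-create host-uuid=<host-uuid> sr-uuid=<sr-uuid> device-config:device=/dev/sdx1
xe pbd-plug uuid=<pbd-uuid>
</code>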

===== Install Guest (New VM) =====

==== CentOS 5.5 Minimal Netinstall ====

Start with the 5.4 32-bit template and use mostly defaults.

Install from URL:

http://...

http://...

Do NOT start the VM automatically.

Adjust the Description, Storage Name and Storage Description as needed.

Start the VM - be patient, because some large files are transferred over the 'net.

==== Debian 5 (Lenny) Net-Install ====

URL:

http://...

then:

debian.mirrors.easynews.com

<code>
apt-get install openssh-server
</code>

==== SME Server 8 Netinstall ====

URL:

http://...

===== Auto Start VMs =====

http://...

  * As of XenServer 6, you can't configure a VM to auto-start in the GUI
  * The vApp feature has been introduced instead
  * A vApp gives control over the start order and delays of a group of VMs

In a simple setup, you can configure VMs to auto-start from the CLI:

<code>
xe pool-list
xe pool-param-set uuid=UUID other-config:auto_poweron=true
xe vm-list
xe vm-param-set uuid=UUID other-config:auto_poweron=true
</code>

===== vApp =====

==== Manage from the CLI ====

=== List VMs ===

<code>
xe vm-list
</code>

=== Create a vApp ===

<code>
xe appliance-create name-label=<vapp-name>

xe appliance-list
</code>

=== Add VMs to vApp ===

<code>
xe vm-param-set uuid=<vm-uuid> appliance=<vapp-uuid>
xe vm-param-set uuid=<vm-uuid> appliance=<vapp-uuid>
</code>

=== Delete a vApp ===

<code>
xe appliance-destroy uuid=<vapp-uuid>
</code>

==== vApp Startup ====

http://...

  * The vApp feature was introduced in XenServer 6
  * The vApp feature is keyed to HA
  * Auto-start is not currently configurable via the GUI
  * vApps won't auto-start on a cold boot

Get the UUID of the vApp:

<code>
xe appliance-list
</code>

Append these lines to ''/etc/rc.local'' so the vApp starts after boot:

<code>
sleep 20
/opt/xensource/bin/xe appliance-start uuid=<vapp-uuid>
</code>

===== Install XenServer Tools =====

:!: Don't bother installing the Tools on a Linux guest if you don't have a Xen-aware kernel. Check with:

<code>
uname -a
</code>

In XenCenter, select xs-tools.iso for the VM's DVD drive.

==== Ubuntu/Debian ====

<code>
mount /dev/xvdd /mnt

/mnt/Linux/install.sh

wget -q http://...

/...
</code>

==== SME Server 8 ====

<code>
mount /dev/xvdd /mnt

mv /...
echo "..."

/...

/...

ln -s /...
</code>

===== Change XenServer Hostname =====

<code>
xe host-list
xe host-set-hostname-live host-uuid=<host-uuid> host-name=<new-hostname>
reboot
</code>

===== Networking =====

http://...

Design Guide: http://...

Move a XenServer Pool to a Different IP Subnet: http://...

  * Define a gateway only on the Management interface
  * You can't use a VLAN for the Management interface
    * Unless it's the native (untagged) VLAN
    * Or the tagging is handled by your switch
  * Best practice is no VLAN on the Management interface

==== Dedicated Storage Network ====

:!: See the **Admin Guide** for more info.

http://...

http://...

:!: If you have an IP address set on a NIC that is neither management nor storage, the PIF (physical interface) cannot send or receive traffic on the PIF network.

http://...

First we need the UUID of the PIF (physical interface) that we want to use:

<code>
xe pif-list host-name-label=<hostname>
</code>

Next we reconfigure the PIF:

<code>
xe pif-reconfigure-ip mode=static IP=<ip-address> netmask=<netmask> uuid=<pif-uuid>
xe pif-param-set disallow-unplug=true uuid=<pif-uuid>
xe pif-param-set other-config:management_purpose="Storage" uuid=<pif-uuid>
</code>

Enabling jumbo frames in XenServer simply requires changing the MTU of each pool-wide network from the default of 1500 to 9000 and rebooting each member of the pool.

The following steps need to be performed on only one of the XenServers in the pool to enable jumbo frames:

<code>
xe pif-list
xe network-list
xe network-param-set uuid=[network uuid] MTU=9000
</code>
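
A quick sanity check after the pool members reboot, as a sketch; the bridge name ''xenbr0'' is just an example:

<code>
xe network-list params=name-label,MTU
ip link show xenbr0 | grep -o "mtu [0-9]*"
</code>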

===== To get the UUID of the VDI of a specific VM =====

<code>
xe vm-disk-list vm=<VM name>
</code>

===== Boot Linux to CD-ROM =====

<code>
xe vm-list

xe vm-param-set HVM-boot-policy="BIOS order" uuid=<vm-uuid>
</code>

Next, in the VM's boot options in XenCenter, make sure the DVD drive is first in the boot order.

Before starting the VM, make sure that the ISO you want is in the VM's DVD drive on the VM's "Storage" tab.

Proceed to use the SUSE CD-ROM (or DVD) to upgrade, or Linux rescue media, etc. Once all changes have been made to the VM, revert the VM's parameter with the following command:

<code>
xe vm-param-set HVM-boot-policy="" uuid=<vm-uuid>
</code>

Reboot the VM.

Final caveat: the mouse is unavailable because the VM is running in HVM mode and no PV mouse device has been loaded.

===== Convert HVM <-> PV =====

http://...

http://...

http://...

===== XenServer Backup =====

See [[XenServer Backup]].

===== AoE Storage =====

See [[computing:...]].

http://...

http://...

===== High Availability =====

http://...

  * Bonded NICs
  * Separate network paths for:
    * VMs
    * Storage
    * Management
  * 6 NICs per server!
  * SAN/[[NAS]] storage
===== Firewall =====

XenCenter - Port 443

http://...

<code>
iptables -nL -v --line-numbers
</code>

==== NTP ====

<code>
iptables -I RH-Firewall-1-INPUT 13 -p udp --dport 123 -j ACCEPT
service iptables save
</code>

==== Sample Firewall ====

This sample firewall allows NTP and limits access to ports 22, 80 and 443 by source IP.

/etc/sysconfig/iptables:

<code>
# Generated by iptables-save v1.3.5 on Mon Apr 9 00:15:34 2012
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [135:25337]
:RH-Firewall-1-INPUT - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp -m icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p esp -j ACCEPT
-A RH-Firewall-1-INPUT -p ah -j ACCEPT
-A RH-Firewall-1-INPUT -d 224.0.0.251 -p udp -m udp --dport 5353 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
-A RH-Firewall-1-INPUT -i xenapi -p udp -m udp --dport 67 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m state --state NEW -m udp --dport 694 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.104.9.32/<mask> -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.193.64.248/<mask> -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 72.200.111.140 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.193.64.2 -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.104.9.32/<mask> -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.193.64.248/<mask> -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -s 72.200.111.140 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.193.64.2 -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 123 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.104.9.32/<mask> -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.193.64.248/<mask> -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RH-Firewall-1-INPUT -s 72.200.111.140 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RH-Firewall-1-INPUT -s 209.193.64.2 -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
# Completed on Mon Apr 9 00:15:34 2012
</code>

===== Resize Virtual Disk =====

Assumes:

1. Running LVM in the guest (you can do this without it, but it's more difficult).

2. You're using a filesystem, such as ext3, on the partition you wish to expand which supports online expansion.

3. You're able to perform a backup first, just in case something goes wrong.

Steps:

__Alternative to steps 5-11: if the LVM physical volume occupies the whole virtual disk, just run ''pvresize'' against it after resizing the VDI.__

1. Shut down the virtual machine from XenCenter.

2. Resize the VDI from XenCenter to the appropriate size.

3. Start the VM up.

4. Run "fdisk -l" to locate the virtual disk by looking at the size.

5. Run "fdisk /dev/xvdX" against that disk.

6. Create a new partition (usually primary) using all available space. The default options will be sufficient: type "n", accept the defaults, then "w" to write the partition table.

7. Reboot the VM again to allow udev to create the appropriate /dev/ node for the partition just created.

8. Create a new physical volume by running "pvcreate" against the new partition.

9. Locate the Volume Group containing the partition by running "vgdisplay".

10. Extend your Volume Group with the newly added Physical Volume by running vgextend as shown: "vgextend VolGroupXX /dev/xvdXN"

11. Confirm the Volume Group has free storage by running "vgdisplay" again.

12. Expand the Logical Volume using all available free extents by running: "lvextend -l +100%FREE /dev/VolGroupXX/LogVolYY" Remember to change VolGroupXX and LogVolYY as appropriate.

13. Assuming ext3 or ext2 is being used (other filesystems will have their own tools for this purpose), resize the filesystem by running: "resize2fs /dev/VolGroupXX/LogVolYY"

14. As a precaution, reboot the system again, checking the filesystem for errors: "shutdown -r now -F"

15. Observe the output of "df -h" to confirm the new space is available.
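
The in-guest portion of the above, collected into one sketch; the device and volume names (''/dev/xvda'', partition 3, ''VolGroup00'', ''LogVol00'') are examples and must be adjusted:

<code>
fdisk -l                                    # locate the enlarged virtual disk by its size
fdisk /dev/xvda                             # create one new primary partition (n, defaults, w)
reboot                                      # let udev create the new /dev/xvda3 node

pvcreate /dev/xvda3                         # turn the new partition into a physical volume
vgextend VolGroup00 /dev/xvda3              # grow the volume group
lvextend -l +100%FREE /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00          # grow the ext2/ext3 filesystem online
df -h                                       # confirm the new space
</code>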

===== Guest GUI =====

==== X and VNC on XenServer ====

When you install Red Hat on XenServer, it does not enable a virtual video device by default; you are only given the text console.

Citrix's approach is to run a VNC server (via GDM) inside the guest, which XenCenter then uses for the graphical console.

Check to make sure that vnc-server and gdm are installed:

<code>
rpm -q vnc-server gdm
</code>

If they are not, install them:

<code>
yum install vnc-server gdm
</code>

Modify ''/etc/gdm/custom.conf'' so GDM starts a VNC server instead of a local X server:

<code>
[servers]
0=VNC
[server-VNC]
name=VNC Server
command=/usr/bin/Xvnc -SecurityTypes None -geometry 1024x768 -depth 16
flexible=true
</code>

When GDM is running, it should be listening on port 5900. Make sure that the iptables firewall allows access to this port from any machine running XenCenter or otherwise wanting to connect.

<code>
iptables -N vnc
iptables -I INPUT -p tcp -m state --state NEW -m tcp --dport 5900 -j vnc
iptables -A vnc -s 134.114.0.0/16 -j ACCEPT
iptables -A vnc -p tcp -m tcp -m state --state NEW --dport 5900 -j REJECT
</code>

Now everything is set up, but by default Red Hat on XenServer starts up in runlevel 3, which does not start GDM. Change the default runlevel in /etc/inittab so GDM (and therefore the VNC server) comes up at boot, as shown below.
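
A minimal sketch of that change, assuming a RHEL/CentOS 5 style /etc/inittab:

<code>
# /etc/inittab - boot straight into runlevel 5 so GDM starts
id:5:initdefault:
</code>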

===== NFS =====

http://...

http://...

:!: In an NFS VHD storage repository, VM images are stored as thin-provisioned VHD-format files on a shared NFS target.

:!: XenServer requires NFS version 3 over TCP for remote storage use.
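
A minimal sketch of creating such an SR from the CLI; the NFS server and export path are placeholders:

<code>
xe sr-create name-label="NFS VHD storage" type=nfs shared=true content-type=user \
  device-config:server=<nfs-server> device-config:serverpath=/<export-path>
</code>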

===== Physical to Virtual (P2V) =====

http://...

===== IPMI =====

See also **[[computing:...]]**

First load the kernel modules:

<code>
modprobe ipmi_msghandler
modprobe ipmi_si
modprobe ipmi_devintf
</code>

Then manage:

<code>
ipmitool -v lan print 1
ipmitool -v lan set 1 ipaddr 66.93.163.29
ipmitool -v lan set 1 netmask 255.255.255.224
ipmitool -v lan set 1 defgw ipaddr 66.93.163.1
ipmitool -v lan set 1 defgw macaddr 00:...
ipmitool -I open user set password 2 bad1egg
ipmitool -v lan set 1 access on
</code>