Main: http://www.proxmox.com/products/proxmox-ve
Wiki: http://pve.proxmox.com/wiki/Documentation
Forums: http://forum.proxmox.com/forum.php
Proxmox VE is a 64-bit, free, open-source, bare-metal virtualization platform (hypervisor). While it may not have as full a feature set as some commercial hypervisors, it is nonetheless very attractive and easy to use.
Proxmox VE provides two virtualization technologies in one platform: KVM (full virtualization) and OpenVZ (container-based virtualization).
KVM requires hardware (CPU+mainboard) virtualization support.
apt-get update
apt-get dist-upgrade
If you set a static IP address for the PVE host using the web interface post-installation, you may find the wrong IP address shown at the SSH login prompt. Edit /etc/hosts to fix:
nano /etc/hosts
reboot
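For reference, a corrected /etc/hosts might look like the following. The hostname pve1 and address 192.168.1.10 are placeholders; substitute your host's actual name and static IP:

```
127.0.0.1       localhost
192.168.1.10    pve1.example.lan    pve1
```

The second entry must point at the static IP, not a 127.x loopback address, for the SSH login prompt to show the right address.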
On the Hardware Node (HN):
pveperf
http://c-nergy.be/blog/?p=1004
http://www.linux-kvm.org/page/Main_Page
KVM provides full virtualization and is built into the Linux kernel. It is supported by Red Hat, Ubuntu and other distributions.
Live migration with KVM requires that you store your VM data on NAS/SAN or DRBD.
http://wiki.openvz.org/Main_Page
http://pve.proxmox.com/wiki/OpenVZ_Console
OpenVZ Containers work differently than KVM, VMware or Xen technologies:
When you first create a CentOS 5 container, you must reconfigure a few things. From the PVE host CLI, enter the container:
vzctl enter CTID   # next steps are all inside the container
Enable Proxmox CT console so we can access it from the web management interface:
vim /etc/inittab
Add the following line and reboot:
1:2345:respawn:/sbin/agetty tty1 38400 linux
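Equivalently, the same getty entry can be appended non-interactively from inside the container (a one-liner alternative to the vim edit above):

```shell
# Append the getty entry so the Proxmox web console can attach to tty1
echo '1:2345:respawn:/sbin/agetty tty1 38400 linux' >> /etc/inittab
```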
When you first create a CentOS 6 container, you must reconfigure a few things. From the PVE host CLI, enter the container:
vzctl enter CTID   # next steps are all inside the container
Enable Proxmox CT console so we can access it from the web management interface:
vim /etc/init/tty.conf

# This service maintains a getty on tty1 from the point the system is
# started until it is shut down again.
start on stopped rc RUNLEVEL=[2345]
stop on runlevel [!2345]
respawn
exec /sbin/agetty -8 tty1 38400
Then reboot the VM.
Differences Between venet and veth: http://wiki.openvz.org/Differences_between_venet_and_veth
Detailed veth Networking Info: http://forum.openvz.org/index.php?t=msg&&th=6191&goto=36869#msg_36869
venet Docs: http://openvz.org/Virtual_network_device
veth Docs: http://openvz.org/Virtual_Ethernet_device
If you have problems using the VNC Console links, verify that you have Sun/Oracle Java installed.
It seems to work fine with Java 6 or Java 7.
See also Fedora 16 Notes.
http://pve.proxmox.com/wiki/Backup_and_Restore
If you want to back up to an additional hard drive in your Proxmox server, you'll want to prep it and mount it manually. Then you can configure the backup in the web interface.
Create one large partition on the extra disk and create a filesystem. After that, create a mount point and add an entry in /etc/fstab. Finally, configure the directory /mnt/backup in the storage GUI.
fdisk -l
fdisk /dev/sdd
mkfs.ext3 /dev/sdd1
mkdir /mnt/backup
blkid
Edit /etc/fstab and add:
vim /etc/fstab

# Mount sdd1 as backup storage using the UUID
UUID=549b77bf-be9a-4bed-b56f-ab360004717c /mnt/backup ext3 defaults,noatime 0 0
mount -a
mount
/dev/sdd1 on /mnt/backup type ext3 (rw,noatime)
Configuration → Storage → Add Directory
Configuration → Backup → Create New Job
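Backups can also be triggered from the PVE host CLI with vzdump. A sketch only: CTID 101 is an assumption, and /mnt/backup is the directory mounted above:

```
vzdump 101 --dumpdir /mnt/backup --mode snapshot
```

This is handy for ad-hoc backups outside the scheduled job defined in the GUI.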
apt-get -y update
apt-get -y install build-essential make libncurses5-dev libcurl3-dev libiksemel-dev pve-headers-`uname -r`
cd /usr/src/
wget http://downloads.digium.com/pub/telephony/dahdi-linux-complete/dahdi-linux-complete-current.tar.gz
tar zxfv dahdi-linux-complete-current.tar.gz
cd dahdi-linux-complete<tab>
make all
make install
make config
Edit /etc/dahdi/modules and comment out all modules unless you need them:
nano /etc/dahdi/modules
modprobe dahdi_dummy
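If dahdi_dummy should load automatically at boot, it can also be listed in /etc/modules (a sketch, assuming Debian's standard boot-time module list):

```shell
# Register dahdi_dummy for loading at boot (Debian reads /etc/modules at startup)
echo dahdi_dummy >> /etc/modules
```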
amportal restart
http://pve.proxmox.com/wiki/Storage_Model
Supported storage technologies:
See also napp-it ZFS Storage Server.
http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing
In this configuration, network block devices (iSCSI targets) are used as the physical volumes for LVM logical volume storage. This is a two-step procedure and can be fully configured via the web interface.
Add iSCSI Target on the Storage list:
  - use LUNs directly
Add LVM Group on the Storage list:
  - Base Storage: use the drop-down menu to select the previously defined iSCSI target
  - Base Volume: select a LUN
  - Volume Group Name: give a unique name (this name cannot be changed later)
  - Save
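For reference, the GUI steps correspond roughly to this CLI sequence on the PVE host. This is a sketch only; the portal address 192.168.1.50, device /dev/sdc, and group name vg-iscsi are assumptions:

```
iscsiadm -m discovery -t sendtargets -p 192.168.1.50   # find targets on the storage box
iscsiadm -m node --login                               # attach the LUN as a local block device
pvcreate /dev/sdc                                      # initialize the LUN as an LVM physical volume
vgcreate vg-iscsi /dev/sdc                             # volume group name cannot be changed later
```

Using the web interface is preferred, since it also registers the storage with the cluster configuration.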
http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster
http://pve.proxmox.com/wiki/High_Availability_Cluster
http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster
http://188.165.151.221/threads/9041-Advice-on-Shared-Storage-for-Proxmox-Cluster
http://pve.proxmox.com/wiki/Intel_Modular_Server_HA
Many command line tips here: http://forum.proxmox.com/archive/index.php/t-8624.html
Try logging into the PVE host and deleting the configuration file of the problem CT:
rm /etc/pve/nodes/pve1/openvz/1000.conf