
Proxmox Virtualization Environment

Main: https://www.proxmox.com/en/proxmox-ve

Wiki: http://pve.proxmox.com/wiki/

Documentation: https://pve.proxmox.com/pve-docs/

Forums: http://forum.proxmox.com/

Proxmox VE is a 64-bit, free, open-source, bare-metal virtualization platform (hypervisor).

Proxmox VE provides two virtualization technologies in one platform: KVM for full virtualization and OpenVZ for container-based (OS-level) virtualization.

KVM requires hardware (CPU+mainboard) virtualization support.
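
To check whether the host CPU exposes the required extensions (Intel VT-x or AMD-V), a quick test is to count the relevant CPU flags; a non-zero result means hardware virtualization is available (it may still need to be enabled in the BIOS/UEFI):

egrep -c '(vmx|svm)' /proc/cpuinfo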

Administration

A Proxmox VE cluster can be managed from any of its member hosts via the web interface:

https://<ipaddress>:8006

Installation

:!: Use Etcher to write the ISO to a flash drive.

:!: The installer is graphical and requires a mouse.

Download: https://www.proxmox.com/en/downloads

Updating and Upgrading

:!: Proxmox VE comes pre-configured to use the 'enterprise' (paid subscription) package repository.

No Subscription

If you don't have a subscription, you must reconfigure the repository:

First, comment out the 'enterprise' repo:

vi /etc/apt/sources.list.d/pve-enterprise.list
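
After commenting, the file should contain just the disabled repository line. On a PVE 6.x (buster) install it looks roughly like this; the codename differs on other releases:

# deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise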

Then copy the 'enterprise' repo file and edit it to point at the 'no-subscription' repository (use the Debian codename matching your PVE release; buster corresponds to PVE 6.x):

cp -a /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-no-sub.list

vi /etc/apt/sources.list.d/pve-no-sub.list

deb http://download.proxmox.com/debian/pve buster pve-no-subscription

apt update
apt dist-upgrade
reboot

Windows Guests

Windows 7: https://pve.proxmox.com/wiki/Windows_7_guest_best_practices

Windows 8: https://pve.proxmox.com/wiki/Windows_8_guest_best_practices

Windows 10: https://pve.proxmox.com/wiki/Windows_10_guest_best_practices

Windows 2012: https://pve.proxmox.com/wiki/Windows_2012_guest_best_practices


FIXME All very old below this point!

PVE Host Networking

If you set a static IP address for the PVE host using the web interface post-installation, you may find the wrong IP address shown at the SSH login prompt. Edit /etc/hosts to fix:

nano /etc/hosts
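
A minimal sketch of the entries; the hostname and addresses below are placeholders, so substitute your host's actual name and static IP:

127.0.0.1       localhost
192.168.1.10    pve1.example.lan pve1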

reboot

Quickie Performance Test

On the Hardware Node (HN):

pveperf

Proxmox VE 2.x With Software RAID

http://www.howtoforge.com/proxmox-2-with-software-raid

KVM

http://c-nergy.be/blog/?p=1004

http://www.linux-kvm.org/page/Main_Page

KVM provides full virtualization and is built into the Linux kernel. It is supported by Red Hat, Ubuntu, and other distributions.

Live Migration

:!: Live migration with KVM requires that you store your VM data on NAS/SAN or DRBD.
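
With shared storage in place, an online migration can also be started from the host CLI; a minimal sketch, where 101 and pve2 are a placeholder VM ID and target node:

qm migrate 101 pve2 --online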

Disk Image Format

VirtIO

OpenVZ

http://wiki.openvz.org/Main_Page

http://pve.proxmox.com/wiki/OpenVZ_Console

OpenVZ containers work differently from KVM, VMware, or Xen: they are OS-level virtualization, so all containers share the host's kernel rather than booting their own.

CentOS 5 Container (VM)

When you first create a CentOS 5 container, you must reconfigure a few things. From the PVE host CLI, enter the container:

vzctl enter CTID                                  #next steps are all now inside the VM

Enable Proxmox CT console so we can access it from the web management interface:

vim /etc/inittab

Add the following line and reboot:

1:2345:respawn:/sbin/agetty tty1 38400 linux
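
The reboot can also be done from the PVE host; CTID is the container ID used above:

vzctl restart CTID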

CentOS 6 Container (VM)

When you first create a CentOS 6 container, you must reconfigure a few things. From the PVE host CLI, enter the container:

vzctl enter CTID                                  #next steps are all now inside the VM

Enable Proxmox CT console so we can access it from the web management interface:

vim /etc/init/tty.conf

# This service maintains a getty on tty1 from the point the system is
# started until it is shut down again.

start on stopped rc RUNLEVEL=[2345]

stop on runlevel [!2345]

respawn
exec /sbin/agetty -8 tty1 38400

Then reboot the VM.

Networking

Differences Between venet and veth: http://wiki.openvz.org/Differences_between_venet_and_veth

Detailed veth Networking Info: http://forum.openvz.org/index.php?t=msg&&th=6191&goto=36869#msg_36869

venet Docs: http://openvz.org/Virtual_network_device

veth Docs: http://openvz.org/Virtual_Ethernet_device

venet

veth

Multiple NICs for OpenVZ Containers

http://forum.proxmox.com/threads/3442-Multiple-NICs-for-OpenVZ-containers

venet

veth

Java VNC Console

If you have problems using the VNC Console links, verify that you have Sun/Oracle Java installed.

It seems to work fine with Java 6 or Java 7.

See also Fedora 16 Notes.

Backup

http://pve.proxmox.com/wiki/Backup_and_Restore

If you want to back up to an additional hard drive in your Proxmox server, you'll want to prep it and mount it manually. Then you can configure the backup in the web interface.

Prep the Disk

Create one large partition on the extra disk and create a filesystem on it. Then create a mount point and add an entry in /etc/fstab. Finally, configure the /mnt/backup directory as storage in the web GUI.

fdisk -l              # identify the extra disk (assumed below to be /dev/sdd)

fdisk /dev/sdd        # create one large partition (/dev/sdd1)

mkfs.ext3 /dev/sdd1   # create the filesystem

mkdir /mnt/backup     # create the mount point

blkid                 # note the UUID of /dev/sdd1 for /etc/fstab
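
blkid reports the UUID needed for the fstab entry; output for the new partition looks roughly like this (the UUID shown is the one reused in the example below, yours will differ):

/dev/sdd1: UUID="549b77bf-be9a-4bed-b56f-ab360004717c" TYPE="ext3"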

Edit /etc/fstab and add:

vim /etc/fstab

# Mount sdd1 as backup storage using the UUID
UUID=549b77bf-be9a-4bed-b56f-ab360004717c /mnt/backup ext3 defaults,noatime 0 0

Mount it and confirm it shows up:

mount -a
mount

/dev/sdd1 on /mnt/backup type ext3 (rw,noatime)

Configure the Storage

Configuration → Storage → Add Directory

Configure the Backup Job

Configuration → Backup → Create New Job
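
Backups can also be run by hand with vzdump; a minimal sketch, assuming the directory storage created above was named 'backup' and 101 is a placeholder VM/CT ID:

vzdump 101 --storage backup --mode snapshot --compress gzip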

DAHDI

See DAHDI.

http://nerdvittles.com/?p=645

PBX on Proxmox/OpenVZ Notes

Install DAHDI on the Proxmox (OpenVZ) Host

apt-get -y update
apt-get -y install build-essential make libncurses5-dev libcurl3-dev libiksemel-dev pve-headers-`uname -r`
cd /usr/src/
wget http://downloads.digium.com/pub/telephony/dahdi-linux-complete/dahdi-linux-complete-current.tar.gz
tar zxfv dahdi-linux-complete-current.tar.gz
cd dahdi-linux-complete-*/        # tab-complete or glob the versioned directory name
make all
make install
make config

Edit /etc/dahdi/modules and comment out any modules you don't need:

nano /etc/dahdi/modules
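
After editing, the file is mostly comments; a rough sketch (the module names below are examples and vary between DAHDI versions, keep only what your hardware needs):

# wct4xxp
# wcte12xp
# wctdm24xxp

Then load the dummy timing driver: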

modprobe dahdi_dummy

Enable DAHDI Timing in Guest Containers

FIXME

amportal restart

Storage

http://pve.proxmox.com/wiki/Storage_Model

Supported storage technologies are described on the Storage Model wiki page linked above.

Shared Storage

iSCSI

See also napp-it ZFS Storage Server.

LVM Groups with Network Backing

http://pve.proxmox.com/wiki/Storage_Model#LVM_Groups_with_Network_Backing

In this configuration, network block devices (iSCSI targets) are used as the physical volumes for LVM logical volume storage. This is a two-step procedure and can be fully configured via the web interface (a CLI sketch follows the list below).

  1. Add the iSCSI target. (On some iSCSI targets you need to add the IQN of the Proxmox VE server to allow access.)
    1. Click Add iSCSI Target on the Storage list
    2. As storage name use whatever you want, but take care: this name cannot be changed later
    3. Give the 'Portal' IP address or server name and scan for unused targets
    4. Disable 'use LUNs directly'
    5. Click Save
  2. Add an LVM group on this target
    1. Click Add LVM Group on the Storage list
    2. As storage name use whatever you want, but take care: this name cannot be changed later
    3. For Base Storage, use the drop-down menu to select the previously defined iSCSI target
    4. For Base Volume, select a LUN
    5. For Volume Group Name, give a unique name (this name cannot be changed later)
    6. Enable shared use (recommended)
    7. Click Save
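
The same two steps can also be scripted with pvesm; a rough sketch under the assumption that the command-line option names match the storage.cfg properties, with the storage IDs, portal address, IQN, and volume group name all as placeholders:

# step 1: define the iSCSI storage (placeholders: storage ID 'san1', portal IP, IQN)
pvesm add iscsi san1 --portal 192.168.1.200 --target iqn.2012-01.com.example:storage

# list the LUNs the target exposes, to find the volume name to use as the base
pvesm list san1

# step 2: create the network-backed LVM storage on one of those LUNs and mark it shared
pvesm add lvm vmdata --vgname vmdata --base san1:<volume-from-pvesm-list> --shared 1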

Clustering

http://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster

http://pve.proxmox.com/wiki/High_Availability_Cluster

http://pve.proxmox.com/wiki/Two-Node_High_Availability_Cluster

http://188.165.151.221/threads/9041-Advice-on-Shared-Storage-for-Proxmox-Cluster

http://pve.proxmox.com/wiki/Intel_Modular_Server_HA

User Management

http://pve.proxmox.com/wiki/User_Management

Authentication

http://c-nergy.be/blog/?p=2501

Management

http://c-nergy.be/blog/?p=2588

Templates

TurnKey OpenVZ Templates

http://c-nergy.be/blog/?p=2570

http://www.turnkeylinux.org/blog/openvz-proxmox

Troubleshooting

Many command line tips here: http://forum.proxmox.com/archive/index.php/t-8624.html

Leftover VM from Failed CT Creation

Try logging into the PVE host and deleting the configuration file of the problem CT:

rm /etc/pve/nodes/pve1/openvz/1000.conf
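
If you are not sure which node holds the stale config, list them first (pve1 and 1000 above are just the example node and CTID):

ls /etc/pve/nodes/*/openvz/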