The Proxmox VE storage model is very flexible. Virtual machine images can either be stored on one or several local storages, or on shared storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may configure as many storage pools as you like. You can use all storage technologies available for Debian Linux.
One major benefit of storing VMs on shared storage is the ability to live-migrate running machines without any downtime, as all nodes in the cluster have direct access to VM disk images. There is no need to copy VM image data, so live migration is very fast in that case.
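For example, a running guest can be moved to another node with the qm migrate command; the VM ID 100 and target node node2 below are hypothetical:

# live-migrate VM 100 to node2 while it keeps running (shared storage assumed)
qm migrate 100 node2 --online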
The storage library (package libpve-storage-perl) uses a flexible plugin system to provide a common interface to all storage types. This can easily be adapted to include further storage types in the future.
Storage Types
There are basically two different classes of storage types:
File level storage
File level based storage technologies allow access to a full featured (POSIX) file system. They are in general more flexible than block level storage (see below), and allow you to store content of any type. ZFS is probably the most advanced system, and it has full support for snapshots and clones (see the short illustration after these definitions).
Block level storage
Allows you to store large raw images. It is usually not possible to store other files (ISO images, backups, etc.) on such storage types. Most modern block level storage implementations support snapshots and clones. RADOS and GlusterFS are distributed systems, replicating storage data to different nodes.
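As a quick illustration of the snapshot and clone support mentioned above, a ZFS dataset backing a guest disk can be snapshotted and cloned with the standard ZFS tools; the dataset name below is hypothetical:

# snapshot the dataset backing a (hypothetical) guest disk
zfs snapshot rpool/data/vm-100-disk-0@pre-upgrade
# create a writable clone from that snapshot
zfs clone rpool/data/vm-100-disk-0@pre-upgrade rpool/data/vm-100-disk-0-clone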
Table 1. Available storage types

Description     PVE type     Level   Shared   Snapshots   Stable
ZFS (local)     zfspool      file    no       yes         yes
Directory       dir          file    no       no [1]      yes
NFS             nfs          file    yes      no [1]      yes
CIFS            cifs         file    yes      no [1]      yes
GlusterFS       glusterfs    file    yes      no [1]      yes
CephFS          cephfs       file    yes      yes         yes
LVM             lvm          block   no [2]   no          yes
LVM-thin        lvmthin      block   no       yes         yes
iSCSI/kernel    iscsi        block   yes      no          yes
iSCSI/libiscsi  iscsidirect  block   yes      no          yes
Ceph/RBD        rbd          block   yes      yes         yes
ZFS over iSCSI  zfs          block   yes      yes         yes
[1] On file based storages, snapshots are possible with the qcow2 format.
[2] It is possible to use LVM on top of an iSCSI storage. That way you get a shared LVM storage.
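As a sketch of footnote [2], one could layer a shared LVM pool on top of an iSCSI LUN like this; all names, addresses and the base volume are hypothetical placeholders:

iscsi: mynas
        portal 10.0.0.1
        target iqn.2006-01.example.com:tsn.mytarget
        content none

# the LVM volume group vgsan lives on the LUN exported by "mynas"
lvm: mynas-lvm
        base mynas:0.0.0.scsi-example-lun
        vgname vgsan
        shared
        content images,rootdir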
Thin Provisioning
A number of storages, and the Qemu image format qcow2, support thin provisioning. With thin provisioning activated, only the blocks that the guest system actually uses will be written to the storage.
Say for instance you create a VM with a 32 GB hard disk, and after installing the guest system OS, the root file system of the VM contains 3 GB of data. In that case only 3 GB are written to the storage, even if the guest VM sees a 32 GB hard drive. In this way thin provisioning allows you to create disk images which are larger than the currently available storage blocks. You can create large disk images for your VMs, and when the need arises, add more disks to your storage without resizing the VMs' file systems.
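To see this in practice, one can allocate a qcow2 image on a directory storage and compare its virtual size with the space it actually occupies; the storage name local and VM ID 100 are assumptions:

# allocate a 32G qcow2 disk for a hypothetical VM 100
pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2
# "virtual size" reports 32G, "disk size" only what was actually written
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2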
All storage types which have the “Snapshots” feature also support thin provisioning.
If a storage runs full, all guests using volumes on that storage receive IO errors. This can cause file system inconsistencies and may corrupt your data. So it is advisable to avoid over-provisioning of your storage resources, or carefully observe free space to avoid such conditions.
Storage Configuration
All Proxmox VE related storage configuration is stored within a single text file at /etc/pve/storage.cfg. As this file is within /etc/pve/, it gets automatically distributed to all cluster nodes. So all nodes share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage, because the same “shared” storage is accessible from all nodes. But it is also useful for local storage types. In this case such local storage is available on all nodes, but it is physically different and can have totally different content.
Storage Pools
Each storage pool has a <type>, and is uniquely identified by its <STORAGE_ID>. A pool configuration looks like this:

<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...

The <type>: <STORAGE_ID> line starts the pool definition, which is then followed by a list of properties. Most properties have values, but some of them come with reasonable defaults. In that case you can omit the value.
To be more specific, take a look at the default storage configuration after installation. It contains one special local storage pool named local, which refers to the directory /var/lib/vz and is always available. The Proxmox VE installer creates additional storage entries depending on the storage type chosen at installation time.
Default storage configuration (/etc/pve/storage.cfg)

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
Common Storage Properties
A few storage properties are common among different storage types; a combined example follows the list.
nodes
List of cluster node names where this storage is usable/accessible. One can use this property to restrict storage access to a limited set of nodes.
content
A storage can support several content types, for example virtual disk images, cdrom iso images, container templates or container root directories. Not all storage types support all content types. One can set this property to select what this storage is used for.
images
KVM-Qemu VM images.
rootdir
Allows storing container data.
vztmpl
Container templates.
backup
Backup files (vzdump).
iso
ISO images.
snippets
Snippet files, for example guest hook scripts.
shared
Mark storage as shared.
disable
You can use this flag to disable the storage completely.
maxfiles
Maximum number of backup files per VM. Use 0 for unlimited.
format
Default image format (raw|qcow2|vmdk)
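A sketch combining several of these properties, using hypothetical storage, server and node names:

# NFS pool restricted to two nodes, holding ISO images and backups,
# with at most three backup files kept per guest
nfs: nfs-backup
        path /mnt/pve/nfs-backup
        server 10.0.0.5
        export /srv/pve-backup
        content backup,iso
        nodes node1,node2
        maxfiles 3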
It is not advisable to use the same storage pool on different Proxmox VE clusters. Some storage operations need exclusive access to the storage, so proper locking is required. While this is implemented within a cluster, it does not work between different clusters.
Volumes
We use a special notation to address storage data. When you allocate data from a storage pool, it returns such a volume identifier. A volume is identified by the <STORAGE_ID>, followed by a storage type dependent volume name, separated by colon. A valid <VOLUME_ID> looks like:
local:230/example-image.raw
local:iso/debian-501-amd64-netinst.iso
local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz
iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
To get the file system path for a <VOLUME_ID> use:
pvesm path <VOLUME_ID>
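For instance, resolving the ISO volume listed above on a default installation should yield a path below /var/lib/vz:

pvesm path local:iso/debian-501-amd64-netinst.iso
# expected output: /var/lib/vz/template/iso/debian-501-amd64-netinst.iso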
Volume Ownership
There exists an ownership relation for image type volumes. Each such volume is owned by a VM or Container. For example volume local:230/example-image.raw is owned by VM 230. Most storage backends encode this ownership information into the volume name.
When you remove a VM or Container, the system also removes all associated volumes which are owned by that VM or Container.
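For example, destroying the hypothetical VM 230 from above also deletes its owned volume local:230/example-image.raw:

# removes the VM configuration together with all volumes owned by VM 230
qm destroy 230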
Using the Command Line Interface
It is recommended to familiarize yourself with the concept behind storage pools and volume identifiers, but in real life, you are not forced to do any of those low level operations on the command line. Normally, allocation and removal of volumes is done by the VM and Container management tools.
Nevertheless, there is a command line tool called pvesm (“Proxmox VE Storage Manager”), which is able to perform common storage management tasks.
Examples
Add storage pools
pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
pvesm add dir <STORAGE_ID> --path <PATH>
pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
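Concrete invocations might look like this; storage names, paths and addresses are made-up examples:

pvesm add dir backup --path /mnt/backup --content backup
pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl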
Disable storage pools
pvesm set <STORAGE_ID> --disable 1
Enable storage pools
pvesm set <STORAGE_ID> --disable 0
Change/set storage options
pvesm set <STORAGE_ID> <OPTIONS>
pvesm set <STORAGE_ID> --shared 1
pvesm set local --format qcow2
pvesm set <STORAGE_ID> --content iso
Remove storage pools. This does not delete any data, and does not disconnect or unmount anything. It just removes the storage configuration.
pvesm remove <STORAGE_ID>
Allocate volumes
pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]
Allocate a 4G volume in local storage. The name is auto-generated if you pass an empty string as <name>.
pvesm alloc local <VMID> '' 4G
Free volumes
pvesm free <VOLUME_ID>
This really destroys all volume data.
List storage status
pvesm status
List storage contents
pvesm list <STORAGE_ID> [--vmid <VMID>]
List volumes allocated by VMID
pvesm list <STORAGE_ID> --vmid <VMID>
List iso images
pvesm list <STORAGE_ID> --iso
List container templates
pvesm list <STORAGE_ID> --vztmpl
Show file system path for a volume
pvesm path <VOLUME_ID>
See Also