====== iSCSI vs. NFS ======

http://www.racktopsystems.com/choosing-nfs-over-iscsi/

This is a perennial subject when choosing shared storage for virtualization or VDI. The common wisdom is that **iSCSI is faster than NFS** and **NFS is easier to manage than iSCSI**.

By default, **iSCSI writes are async** (writeback cache enabled) and **NFS writes are sync**. This goes a long way towards explaining the common impression that iSCSI (block storage) is faster than NFS (file storage). (A sketch at the end of this section shows one way to spot Linux NFS exports that override the sync default with ''async''.)

On ZFS storage servers backing iSCSI LUNs for VMs, setting ''sync=always'' is the correct choice for data integrity, although it may be slower. To offset that cost, add a fast SSD (or better) SLOG device per pool, which significantly speeds up sync writes. (A sketch of both steps appears at the end of this section.)

In reality, the decision between the two access methods is not so simple. It may take (somewhat tedious) testing in your own environment, plus significant research on your part, to develop your own understanding.

**Facts**:
  * This author is more familiar with NFS
  * iSCSI is **block storage**
    * Storage is presented to the hypervisor as a block device
  * NFS is **file storage**
    * Storage is presented to the hypervisor as a share
  * NFS is easier to manage
    * Simple snapshots
    * Many admins choose **easier** over **slightly better performance**
  * Transactional applications need better storage performance
    * Sync writes are required
      * Databases
      * Virtualization
        * The integrity of VHDs is critical

**HA Notes**:
  * iSCSI uses MPIO to implement multiple network connections
    * More speed
    * More reliability
    * Higher availability
    * Potentially fully redundant storage networking
  * NFS
    * No internal HA networking
    * Some true experts don't recommend using NFS over bonded NICs
    * Use 10GbE for speed
      * Some do
    * **More research needed on resilient NFS networking**
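
The following is a minimal sketch of the ZFS recommendation above (force sync semantics on the dataset backing the iSCSI LUNs, then attach a SLOG vdev). The pool name ''tank'', dataset ''tank/iscsi-vms'', and device path are hypothetical placeholders; substitute your own before running, and test on a non-production pool first.

<code python>
#!/usr/bin/env python3
"""Sketch: enforce sync writes on a ZFS-backed iSCSI dataset and add a SLOG.

Assumes a hypothetical pool "tank", dataset "tank/iscsi-vms", and a fast
SSD/NVMe device at /dev/disk/by-id/nvme-fast-slog -- all placeholders.
"""
import subprocess

POOL = "tank"                                    # hypothetical pool name
DATASET = "tank/iscsi-vms"                       # hypothetical dataset backing the iSCSI LUNs
SLOG_DEVICE = "/dev/disk/by-id/nvme-fast-slog"   # hypothetical fast log device


def run(cmd):
    """Echo the command before running it so every change is auditable."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# Treat every write as a sync write, regardless of what the initiator requests.
run(["zfs", "set", "sync=always", DATASET])

# Add a dedicated log (SLOG) vdev so those sync writes land on fast flash
# instead of the main pool disks.
run(["zpool", "add", POOL, "log", SLOG_DEVICE])

# Verify the result.
run(["zfs", "get", "sync", DATASET])
run(["zpool", "status", POOL])
</code>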
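
Related to the sync/async point above: on a Linux NFS server, an export marked ''async'' lets the server acknowledge writes before they reach stable storage, which behaves much like iSCSI's writeback cache and carries a similar integrity risk. The sketch below simply flags such exports in ''/etc/exports''; it assumes the standard Linux exports file format and is only a quick audit aid, not a substitute for reviewing your NFS server configuration.

<code python>
#!/usr/bin/env python3
"""Sketch: flag Linux NFS exports that use the "async" option."""

EXPORTS_FILE = "/etc/exports"   # standard location on Linux; adjust for other NFS servers

with open(EXPORTS_FILE) as f:
    for lineno, line in enumerate(f, start=1):
        line = line.split("#", 1)[0].strip()   # drop comments and surrounding whitespace
        if not line:
            continue
        # Each export line looks like:  /path  client1(opt,opt,...) client2(...)
        parts = line.split(None, 1)
        if len(parts) < 2:
            continue
        path, clients = parts
        for spec in clients.split():
            if "(" not in spec:
                continue                        # no explicit options; server defaults apply
            opts = spec[spec.find("(") + 1 : spec.rfind(")")].split(",")
            if "async" in opts:
                client = spec.split("(", 1)[0]
                print(f"{EXPORTS_FILE}:{lineno}: {path} exported async to {client}")
</code>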