====== Napp-IT ZFS Storage Server ======
//**As the saying goes: "With RAID, you can have performance, capacity, and integrity - pick any two."**//
See also **[[computing:storage:zfs|ZFS]]**
See also **[[computing:storage:zfs_snapshot|ZFS Snapshots and Napp-IT Replication]]**
See also **[[https://www.napp-it.org/doc/downloads/flash_lsi_sas.pdf|Flash LSI IR to IT Mode]]**
**License Swap**: http://napp-it.org/extensions/swap_en.html
**Official Handbook**: http://www.napp-it.org/doc/downloads/napp-it.pdf
**Mini-HOWTO**: ftp://ftp.uni-duisburg.de/Solaris/napp-it/napp-it.pdf
**All-in-One**: http://www.napp-it.org/doc/downloads/all-in-one.pdf
**Best HBAs**: https://www.servethehome.com/buyers-guides/top-hardware-components-napp-omnios-nas-servers/top-picks-for-napp-it-and-omnios-hbas-host-bus-adapters/
**Best Hardware**: https://www.servethehome.com/buyers-guides/top-hardware-components-napp-omnios-nas-servers/
https://forums.servethehome.com/index.php?threads/performance-tuning-three-monster-zfs-systems.3921/
http://www.overclockers.com/forums/showthread.php?t=705879
http://researchblogs.cs.bham.ac.uk/specialist-computing/2012/05/building-an-openindiana-based-zfs-file-server/
Supported sharing protocols:
* AFP
* CIFS/SMB
* NFS
* iSCSI
* FTP
* rsync
* FCoE
===== OmniOS =====
See also **[[computing:unix:omnios|OmniOS]]**
See also **[[http://www.napp-it.org/downloads/omnios_en.html|Napp-IT on OmniOS]]**
* Recommended host OS for Napp-IT
* Networking not configured during installation
==== Configure the Primary (Management) Network Interface ====
**Don't mess with networking** in the Napp-IT web interface unless you are prepared to manually fix things that go wrong using the CLI!
:!: Other interfaces, hostname, domain name and DNS servers can be configured with Napp-IT after it's installed, including MTU for Jumbo Frames.
**System -> Network**
Disable ''nwam'' (Network AutoMagic):
svcs nwam # check status of nwam
svcs network/physical:default # check status of the default physical service
svcadm disable svc:/network/physical:nwam
svcadm enable svc:/network/physical:default
dladm show-phys # show existing
dladm show-link # show available
ipadm show-if # show existing IP configurations
ipadm create-if igb0 # create the IF
ipadm create-addr -T static -a 192.168.1.4/24 igb0/v4 # set a static IP (give the netmask as a prefix length)
route -p add default 192.168.1.1 # set default route to your gateway
echo 'nameserver 8.8.8.8' >> /etc/resolv.conf # or edit resolv.conf as needed
echo 'search yourdomainname' >> /etc/resolv.conf # or edit resolv.conf as needed
echo 'yourdomainname' > /etc/defaultdomain # used by NIS
cp /etc/nsswitch.dns /etc/nsswitch.conf # use DNS for name resolution
dig google.com # to test
reboot
===== Install Napp-IT =====
* Install OmniOS normally from CD or USB flash drive
* Update OmniOS per the OmniOS wiki page before installing Napp-IT
As root:
wget -O - www.napp-it.org/nappit | perl
:!: The Napp-IT install changes the root password so we need to change it back.
Before rebooting type:
passwd root # reset the root password to set SMB password
passwd napp-it # for the Napp-IT web interface 'admin' user
reboot
==== Updating to the Latest Napp-IT Version ====
wget -O - www.napp-it.org/nappit | perl
* Reboot
* Re-enter the root password via ''passwd root''
* Check and/or recreate your jobs, since newer releases use a new job-management system with editable timers and job settings
* Re-enable auto-service
==== Napp-IT Configuration ====
http://napp-it.org/doc/downloads/napp-it.pdf
:!: Napp-IT should only be used on secure networks. SSL is not enabled by default on the web interface.
:!: Delete ''/var/web-gui/_log/napp-it.cfg'' to reset to defaults.
Using a web browser:
http://<server-ip>:81
https://<server-ip>:82
=== First Login ===
**admin**, **no password**
:!: Change the password now.
=== Configure Networking ===
Also configure hostname, DNS servers, domain name, etc.:
**System -> Network -> IP Address**
=== Configure E-Mail ===
:!: Typically Napp-IT is used in a secure network. However, if your mail server is external, you will want to configure TLS mail encryption.
perl -MCPAN -e shell # open an interactive CPAN shell
notest install Net::SSLeay
notest install IO::Socket::SSL
notest install Net::SMTP::TLS
exit
Enable and test SMTP mail without TLS encryption:
**About -> Settings**
**Jobs -> E-mail -> SMTP Test**
Once standard SMTP mail works, configure TLS (encrypted connections):
**Jobs -> TLS E-mail -> Enable TLS Mail**
**Jobs -> TLS E-mail -> TLS Test**
=== Enable Services ===
* Auto-Service
* This is the master service required for other scheduled jobs
* **Jobs -> Auto-Service -> Enable auto 15min**
* Scrubs
* Create a scrub job for each pool you create (the equivalent manual commands are shown after this list)
* **Jobs -> Scrub -> Create auto scrub job**
* Create jobs for e-mailed status and alerts
* **Jobs -> E-mail -> Status**
* **Jobs -> E-mail -> Alert**
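A scrub job runs the standard ZFS scrub on a schedule; you can also start and monitor one manually. A minimal sketch, assuming a hypothetical pool named ''tank'':
zpool scrub tank # start a scrub of the pool
zpool status tank # shows scrub progress and any errors found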
=== Configure Storage ===
http://www.overclockers.com/forums/showthread.php?t=705879
:!: Here we create a pool of mirrored vdevs (RAID10); the equivalent ''zpool'' commands are sketched after the GUI steps.
Create a pool with the first vdev being a pair of mirrored drives:
**Pools -> Create Pool**
* Name the pool
* Choose the first two disks to be mirrored and choose **Mirror**
* **Submit**
Add more mirrors to this pool until done:
**Pools -> Extend Pool**
* Select the proper pool
* Select two more disks and choose **Mirror**
* **Submit**
Add L2ARC or ZIL (per pool):
**Pools -> Extend Pool**
* Select the proper pool
* Select the SSD drive you want to use
* Choose the ''read-cache'' or ''write-log'' function
* **Submit**
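For reference, the GUI steps above map onto standard ''zpool'' commands. A minimal sketch, assuming a hypothetical pool named ''tank'' and hypothetical disk names:
zpool create tank mirror c0t0d0 c0t1d0 # create the pool with the first mirrored vdev
zpool add tank mirror c0t2d0 c0t3d0 # extend the pool with another mirror (RAID10)
zpool add tank cache c0t4d0 # add an SSD as L2ARC read cache
zpool add tank log c0t5d0 # add an SSD as ZIL write log (mirror it for safety)
zpool status tank # verify the resulting layout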
=== iSCSI ===
http://blog.allanglesit.com/2012/02/adventures-in-zfs-configuring-iscsi-targets/
http://docs.oracle.com/cd/E19963-01/html/821-1459/fncpi.html
http://blogs.citrix.com/2011/06/01/sizing-luns-a-citrix-perspective/
* Approximately 20 VMs per LUN maximum
* iSCSI provides block storage
* Host Groups - which initiators/clients can see the LU
* Target Groups - which targets will be able to provide access to the LU
== Example Naming Conventions ==
^iSCSI Device ^Name ^Example ^
|LU |lu-name |lu-vhd-infra |
|Target Name |iqn.2010-09.org.openindiana:t-name |iqn.2010-09.org.openindiana:t-vhd-infra |
|Target Alias |t-name |t-vhd-infra |
|Target Group |tg-name |tg-vhd-infra |
|Host Group |hg-name |hg-vhd-infra |
== Set Up a Running Configuration ==
http://docs.oracle.com/cd/E23824_01/html/821-1459/fnnop.html
:!: Verify that the Comstar and iSCSI services are running first:
**Services -> Comstar**
Once we verify the services are running, we configure a LU for use (CLI equivalents are sketched after these steps):
- You must **define a logical unit**
- **Comstar -> Logical Units -> Create Thin Prov LU**
- File-based thin-provisioned LUs are suggested
* You can copy/move virtual hdd files
- Disable the writeback cache on your LU if you prefer data security via sync writes
* This lowers performance unless you have a fast ZIL drive
- You must **define a target**
- **Comstar -> Targets -> Create iSCSI Target**
- The target is what clients connect to
- A target can contain multiple logical units
- You must **define a target-group** and **add your target(s) as a member**
- **Comstar -> Target Groups -> Create Target Group**
- **Comstar -> Target Groups -> Add Members**
- Views cannot be set to targets, only to target groups
- You must **set a view** for each logical unit to a target-group or the LU remains invisible
- **Comstar -> Views - Add a View**
* Set the target group to ''all'' for visibility to all clients
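For comparison, the same workflow can be done with the standard Comstar CLI tools. A minimal sketch, assuming a file-backed LU on a hypothetical pool ''tank'' and the naming conventions from the table above:
zfs create tank/iscsi # filesystem to hold file-based LUs
mkfile -n 100g /tank/iscsi/lu-vhd-infra # sparse (thin-provisioned) backing file
stmfadm create-lu /tank/iscsi/lu-vhd-infra # define the LU; prints its GUID
itadm create-target -n iqn.2010-09.org.openindiana:t-vhd-infra # define the target
stmfadm create-tg tg-vhd-infra # define the target group
stmfadm offline-target iqn.2010-09.org.openindiana:t-vhd-infra # a target must be offline to join a group
stmfadm add-tg-member -g tg-vhd-infra iqn.2010-09.org.openindiana:t-vhd-infra
stmfadm online-target iqn.2010-09.org.openindiana:t-vhd-infra
stmfadm add-view -t tg-vhd-infra lu-guid # set a view; use the GUID printed by create-lu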
===== Tuning =====
http://www.napp-it.org/manuals/tuning.html
Disable ''atime'':
zfs set atime=off yourpool # apply per pool or dataset
===== Useful Commands =====
http://docs.oracle.com/cd/E19963-01/html/821-1459/fncpi.html
LUN:
svcs stmf
stmfadm list-lu
stmfadm list-view -l lu-guid # list views for the given LU GUID
Target:
svcs -l iscsi/target
tail /var/svc/log/network-iscsi-target:default.log # inspect the target service log
itadm list-target -v
===== S3 =====
http://www.napp-it.org/doc/downloads/cloudsync.pdf
https://forums.servethehome.com/index.php?threads/amazon-s3-compatible-zfs-cloud-with-minio.27524/
* Update ''napp-it'' (**About -> Update**)
* Use menu **Services -> minIO S3 Services** to install ''minIO''
* In menu **ZFS Filesystems -> S3cloud**, click ''unset'' to activate sharing (ex: on port 9000)
You can share the same filesystem via SMB and S3, but there is no file locking between the two.
Via SMB, you will find folders ''S3_data'' and ''S3_config'' holding the S3 data.
Open a browser (or any S3 client) with the address ''http://<server-ip>:9000''.
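Any S3-compatible client can then talk to minIO. For example, with the AWS CLI (hypothetical server IP and bucket name; enter the minIO access and secret keys via ''aws configure'' first):
aws --endpoint-url http://192.168.1.4:9000 s3 mb s3://backups # create a bucket
aws --endpoint-url http://192.168.1.4:9000 s3 cp backup.tar s3://backups/ # upload a file
aws --endpoint-url http://192.168.1.4:9000 s3 ls s3://backups # list the bucket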
===== NFS =====
**Mini-HOWTO**: ftp://ftp.uni-duisburg.de/Solaris/napp-it/napp-it.pdf
**NFS Share Permissions**: http://forums.freebsd.org/showthread.php?t=26801
**Using NFS for VMware ESX**: http://mordtech.com/2008/12/11/leveraging-zfs-nfs-for-esxi/
- Create a data pool to store your files on
- Create ZFS Filesystem (dataset) on your pool
- Set ACL permissions of pool/dataset to something like
- ''root=full'' with inheritance on for newly created files and folders
- ''everyone@=full'' or ''everyone@=modify'' with inheritance on for newly created files and folders
- Share the dataset via NFS
- For virtualization hosts, you may need to specify options when enabling NFS sharing:
- ''sharenfs=root=ip.of.virt.host''
- Use the new NFS share with a URL like the following (a CLI sketch of all these steps follows the list):
- ''hostnameorip:/pool/dataset''
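A minimal CLI sketch of the steps above, assuming a hypothetical pool ''tank'', dataset ''vmstore'', and a virtualization host named ''esxhost'':
zfs create tank/vmstore # create the dataset
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /tank/vmstore
/usr/bin/chmod -R A+user:root:full_set:file_inherit/dir_inherit:allow /tank/vmstore
zfs set sharenfs=root=esxhost tank/vmstore # share via NFS, granting the host root access
The share is then reachable as ''hostnameorip:/tank/vmstore''.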
{{ :computing:storage:napp-it_zfs_folder_acl.png?direct&750 |ZFS Folder ACLs}}
{{ :computing:storage:napp-it_zfs_smb_acl.png?direct&750 |ZFS Share ACLs}}
===== CIFS/SMB =====
:!: You have to restart the SMB service after making changes!
==== HowTo ====
**Mini-HOWTO**: ftp://ftp.uni-duisburg.de/Solaris/napp-it/napp-it.pdf
- Create/add at least one SMB user (not ''root'') and add it to the SMB group ''administrators''
- Set all needed ACLs from Windows (right-click on folder -> Properties -> Security)
- If you SMB-share a ZFS folder via Napp-IT, these settings are applied:
- Set Unix permissions to 777 (or some ACL and share options will not work)
- Set the folder ACL from "nearly everything is denied" to the following ACL:
- ''root=full access''
- ''everyone@=modify''
- Set share-level ACLs to ''everyone=full access''
- See the linked PDF above for various permissions and ACL settings
- You mostly want to connect from Windows and manage ACLs/Permissions from there
==== Folder ACLs ====
**ZFS Filesystems -> ACL Extension -> ACL on Folders -> **
Remove all ACLs:
/usr/bin/chmod -R A- /pool/share
Add ACLs (recursively) for SMB shares (order is important - ''root'' rule will be number ''0''):
/usr/bin/chmod -R A=everyone@:modify_set:file_inherit/dir_inherit:allow /pool/share
/usr/bin/chmod -R A+user:root:full_set:file_inherit/dir_inherit:allow /pool/share
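To verify the resulting ACL order (the ''root'' entry should be number ''0''):
/usr/bin/ls -V /pool/share # -V lists the NFSv4 ACL entries (Solaris ls)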
{{ :computing:storage:acl_on_folders.png?direct&800 |Folder ACLs on SMB Shares}}
==== CIFS Troubleshooting ====
- Don't use numeric (octal) ''chmod'' commands on SMB shares or the ACLs get messed up
- Set Unix permissions to ''777'' on top level (share) folder
- Reset the ''root'' password with the ''passwd'' command after enabling CIFS
* This sets the ''root'' SMB user password too
- In Windows, log in as the ''root'' user to set permissions, if necessary
===== Block or File Based Storage =====
See also: **[[computing:storage:iscsi_vs_nfs|iSCSI vs. NFS]]**
http://searchservervirtualization.techtarget.com/tip/ISCSI-vs-NFS-for-virtualization-shared-storage
http://www.brentozar.com/archive/2012/05/storage-protocol-basics-iscsi-nfs-fibre-channel-fcoe/
**NFS or iSCSI?**
:!: The bottom line is that iSCSI may perform better but will be more trouble to manage.
* File (NFS) or block (iSCSI) storage
* Proxmox (OpenVZ) Containers must be on NFS (or local storage)
* VMware ESXi supports both; NFS is the simpler choice to manage
===== Disk Errors =====
**Napp-IT -> System -> Basic Statistics -> Disks**
Every five seconds:
iostat 5
Extended disk statistics:
iostat -xtc
iostat -xn 1 2 # first report is the since-boot average; use the second
**Soft/Hard/Transfer errors** are warnings from ''iostat''. Treat them as warnings rather than hard failures (unlike ZFS checksum errors), but they do indicate problems.
Also look at the **wait value on writes (%w)**. Significantly high wait or busy values on a single disk indicate a problem as well.
{{ :computing:storage:napp-it_zfs_diskinfos.png?750 |ZFS Disk Errors}}
===== Replace a Failed Disk =====
In the web GUI (the CLI equivalent is shown below):
**Disks -> Replace**
* Select the failed drive
* Select the new drive you already installed
* Click 'Replace'
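The same operation from the CLI, with hypothetical pool and disk names:
zpool replace tank c0t3d0 c0t9d0 # failed disk -> new disk
zpool status tank # watch the resilver progress
zpool set autoreplace=on tank # optional: replace automatically when a new disk appears in the same slot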
===== File Manager =====
Use Midnight Commander with the command ''mc''.
* Works well with PuTTY
* May have issues at console with function keys or character sets.
===== Troubleshooting =====
In case of problems, use the console and check:
* poolstatus: ''zpool status''
* zfsstatus: ''zfs list''
* controller: ''cfgadm -avl''
* diskstatus: ''format'' or ''iostat -En''
* processes: ''prstat''; failed services: ''svcs -xv''
* napp-it webserver restart: ''/etc/init.d/napp-it restart''
* or try a reboot or reinstall via ''wget''
===== SFTP =====
- Add the SFTP user to SSH ''AllowUsers'' or ''AllowGroups'' (see the sketch after these steps)
- ''AllowUsers root backupuser''
- Modify ''/etc/passwd'' SFTP user home directory and restricted shell
* ''/bin/rbash''
- Restart SSH service
* ''svcadm restart svc:/network/ssh:default''
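A sketch of the relevant configuration, assuming a hypothetical user ''backupuser'' restricted to ''/tank/backup'':
# /etc/ssh/sshd_config
AllowUsers root backupuser
# /etc/passwd entry: home directory on the pool, restricted shell
backupuser:x:1001:10::/tank/backup:/bin/rbash
# apply the change
svcadm restart svc:/network/ssh:default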
===== FTP =====
http://www.napp-it.org/extensions/proftpd_en.html
http://www.proftpd.org/docs/howto/
:!: Note that **FTP is an add-on** not included in the base Napp-IT installation.
:!: Try **Passive Mode** if you have connection problems.
==== Install ProFTPd Add-On ====
wget -O - www.napp-it.org/proftpd | perl
The installer starts the service; the following menu entry (or ''svcadm'') restarts ProFTPd after configuration changes:
**Services -> FTP -> Enable proftpd**
svcs proftpd
svcadm enable proftpd
svcadm disable proftpd
==== Add a User for FTP Access ====
**User -> Add Local User**
:!: No home directory is created.
:!: Users must be deleted with the ''userdel'' command at the CLI.
==== Virtual Server Configuration ====
**ZFS Filesystems -> FTP (link per filesystem)**
:!: You must manually define the Virtual Server by clicking the **set other manually** link.
:!: You can have only one Virtual Server defined per port (e.g. port 21).
ServerName "FTP Server"
Port 21
Umask 022
DenyAll
User mybackupuser
Group nobody
AnonRequirePassword on
AllowOverwrite on
AllowUser mybackupuser
DenyAll
AllowUser mybackupuser
DenyAll
===== Notes From Others =====
Mostly stolen from https://hardforum.com where the user ''_Gea'' is the primary developer of Napp-IT.
**On link aggregation:**
Link aggregation may help a little with many parallel connections, but mostly it just adds complexity, with plenty of possible problems and little or no performance advantage.
I always follow these rules:
1. Keep it simple.
2. Use 10 Gb / FC if you need speed.
3. If you have an All-In-One ESXi/SAN solution, use one VLAN uplink
from ESXi to your physical switch (1 Gb or 10 Gb) and divide your LANs there.
Use high-speed virtual vnics to internally connect your VMs with your SAN.
4. On ESXi, use virtual software switches rather than physical NICs, except for failover.
1 Gb aggregation is outdated. 10 Gb is on the way to becoming cheap.
Currently 2 x 10 Gb cards are about 300 Euro, but you can expect them to
be onboard on better mainboards in 2012, or as cheap as good 1 Gb NICs were 5 years ago.
10 Gb on switches is currently available for about 250 Euro per port.
I use HP 2910 switches with up to 4 x 10 Gb ports. They are not really cheap
(about 1300 Euro with 24 x 1 Gb ports, plus 2 x 10 Gb for about 500 Euro)
but affordable if you need the speed.
If you only need high speed between one server and a few clients
(for example a small video-editing pool), you do not need a switch immediately;
connect them directly and buy the 10 Gb switch later.
Gea
**More on link aggregation:**
With link aggregation, you complicate things for often no or minimal benefit, and you add an extra
problem area, for example together with jumbo frames (mostly not working at all).
In my opinion, it's not worth the effort today.
About your pools:
If you need the best I/O and speed for VMs, always use mirrors, so your pool 1 is perfect.
- Add at least one hot spare!!
- You may add an SSD read cache and eventually a mirrored write cache (hybrid storage).
Pool 2:
A hot spare on a RAID-Z is not very efficient:
if you have a failure, you need a rebuild with an (at that moment untested) disk.
Use the next RAID-Z level, RAID-Z2, instead and you have a 'hot' spare built in.
Use hot spares on mirrors, and on RAID-Z3 if needed.
Depending on your workload, SSD cache drives can help to improve performance.
For my own use, I switched to SSD-only pools as ESXi datastores.
(Although they are not as reliable as good SAS disks, so I use 3 x mirrors now.)
The time of expensive 15k SAS disks is over for new installations (imho).
Gea
**On networking:**
I have problems with Solaris 11 and link aggregation. I have a Dell PowerConnect switch; I put it in LAG mode and set up LACP on Solaris. With a static IP I get no connection to the local network. ''dladm'' says the aggregation is up, and with DHCP I get an IP from the router, but still no connection to the local network. What's up?
Reply from user ''ChrisBenn'':
The Dell PowerConnect switches require an L2 policy and a static aggregation (no LACP).
If you have two interfaces, say ''ige0'' and ''ige1'':
ipadm delete-if ige0
ipadm delete-if ige1
dladm create-aggr -P L2 -l ige0 -l ige1 aggr1
ipadm create-addr -T dhcp aggr1/v4
This should work for you, assuming you have disabled the nwam service and are using the physical service:
svcadm disable svc:/network/physical:nwam
svcadm enable svc:/network/physical:default
**On disk failures:**
So if you have a bunch of disks in a ZFS pool, how can you move them to a new server?
If you have done it as suggested (use an HBA controller, never use hardware RAID),
you can just plug your disks into your new computer with any disk controller
and import your pool - no problem.
How does rebuilding work when you lose a drive?
If you have set the ZFS pool property ''autoreplace=on'', you just need to replace
a failed drive; otherwise plug in a new disk and do a replace (failed drive -> new disk).
If your controller does not support hot-plug, you need a reboot after plugging in new disks.
Gea
**On SMB file sharing:**
Hello, I run Solaris 11 Express and napp-it for stuff like SMB and zpools.
I was wondering how I can make custom SMB users or groups, like users that can only access some files, or can read in one place and write in another. How does this work?
With napp-it you can create users and SMB groups in menu **User**.
Connect from Windows as user ''root'' and set the desired file and folder ACLs
(works from Win XP Pro, Win 2003 and Win 7 Pro; problems are reported with Home editions
and Win 7 Ultimate).
Problem: Solaris ACLs are order-sensitive, Windows ACLs are not,
so non-trivial ACLs should be set from Solaris.
From Solaris you can set ACLs via the CLI or via the napp-it ACL extension
(in development; currently you can set share-level ACLs and ACLs on shared folders,
not on other files and folders).
Gea
Exactly what Gea said, but to give you specific commands. To list all the pools found:
zpool import
To import a pool:
zpool import poolname
If you had a failure and didn't do a clean export, you just need to add the ''-f'' (force) switch:
zpool import -f poolname
**About overflow protection:**
It's always bad if you fill your pool up to 100% with data.
Overflow protection sets a 10% reservation on the pool's ZFS filesystem itself,
so the available space for your ZFS folders is then 90%.
That means if you fill your folders to the max, 10% of the pool always remains free.
You can check or modify this at any time in menu **ZFS Folder** under ''reservation''.
Gea
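Under the hood this is a plain ZFS reservation; a sketch with a hypothetical 1 TB pool named ''tank'':
zfs get reservation tank # check the current reservation
zfs set reservation=100G tank # keep roughly 10% of a 1 TB pool always free
zfs set reservation=none tank # remove overflow protection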