Build Storage Backend
I am moving our iSCSI target over to FreeBSD. Our iSCSI servers are generally used as the “disk drives” for Xen virtual servers. NFS with File Backed Devices (FBD) is really too slow when several virtuals are running at the same time, but iSCSI appears to fix this issue.
We also export NFS for use by the Xen hypervisor and, in a few cases, for the virtual machines themselves.
This document will describe how it is built.
- NFS service used for common files
- xen-configs stores the Xen configuration files shared by all DOM0's. It is generally mounted under /etc/xen/iscsi-configs
- xen-store stores common files for all DOM0's. This includes ISO images (for install and/or maintenance).
- xen-images stores FBD's, allowing quick and dirty testing where necessary.
- NFS service is also used for some virtual storage backends. In some cases the storage requirements are large, but the access rates are low enough that NFS is quite capable of handling them.
- message store for mail servers
- file storage for NextCloud/Owncloud servers
- file storage for some web servers
- iSCSI exports for the virtual images. Each virtual may have one or two images exported by the iSCSI target.
In many cases, I use a small image for the operating system, then a larger image for the data. On Linux, the second image may be a simple partition, or set up as a Physical Volume for LVM2 during installation. For FreeBSD, it would be ZFS. For Windows, we generally use only one image. However, for larger storage requirements, we may use NFS so long as the overhead is acceptable.
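The two-image layout above might be built as ZFS volumes on the pool. A minimal sketch; the pool name (storage), dataset layout, and host name (vm1) are illustrative assumptions, not names taken from this build:

```
# illustrative names: pool "storage" and host "vm1" are assumptions
zfs create storage/iscsi                      # parent dataset for the exports
zfs create -V 20G storage/iscsi/vm1-os        # small volume for the OS
zfs create -s -V 200G storage/iscsi/vm1-data  # larger, sparse data volume
```

The `-s` flag makes the data volume sparse, so space is only consumed as the virtual actually writes to it.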
- Do a basic install with any utilities you might commonly need. My personal choices are in the article Basic FreeBSD Installation, which is still a work in progress.
- Install HAST and CARP. See Setting up HAST
- Create the zfs pool
- Install NFS
- Create a directory tree under /media/nfs
- Populate NFS directory tree
- Install iSCSI
- Create some ZFS volumes to store the images and configure iSCSI to export them
- Start the iSCSI service
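The last two steps end up as entries in /etc/ctl.conf. A minimal sketch, assuming a zvol at /dev/zvol/storage/iscsi/vm1-os and the shared iSCSI address from the network section below; the IQN, pool name, and open auth settings are illustrative assumptions:

```
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 10.19.209.64
}

target iqn.2015-01.lan.example:vm1 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/storage/iscsi/vm1-os
        }
}
```

With that in place, `service ctld onestart` exports the volume. As with NFS below, ctld should not be auto-started at boot, since the zvols only exist on the current HAST master.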
In our case, we will use four network ports. Two will be set up as a Link Aggregation (LAGG) and used to sync changes from the master server to the slave. We will also use this pair for the control interface, so it carries two vlan's: the control interface on vlan 30, and the sync interface on vlan 60.
The other pair is set up as an additional LAGG and used for iSCSI and NFS communication with the clients. This is done over vlan 50, which is set up with an alias to allow fast switching between the primary and secondary servers when HAST decides it is necessary.
```
# .. there is more code above defining the server

# set up the networks
# bring all NIC's up
ifconfig_bge0="up"
ifconfig_bge1="up"
ifconfig_bge2="up"
ifconfig_bge3="up"

# define the lagg's, ie the cloned interfaces
cloned_interfaces="lagg0 lagg1"

# now, set up the lagg's
ifconfig_lagg0="laggproto lacp laggport bge0 laggport bge1 up"
ifconfig_lagg1="laggproto lacp laggport bge2 laggport bge3 up"

# define the vlans
vlans_lagg0="50"
vlans_lagg1="30 60"

# and set up the vlans
ifconfig_lagg0_50="inet 10.19.209.67/24"
ifconfig_lagg1_60="inet 10.128.50.1/30"
ifconfig_lagg1_30="SYNCDHCP"
# NOTE: must use SYNCDHCP so dhcp will complete before something modifies
# the network

# set up the shared interface for HAST
ifconfig_lagg0_50_alias="inet vhid 1 pass iscsilan alias 10.19.209.64/24"
```
HAST and CARP
Now, set up your disks for High Availability. See Setting up HAST
As mentioned earlier, we use NFS to store some files common to all of the virtuals. Thus, we can store Xen configuration files, installer and utility ISO's, etc. via NFS so they are usable by all machines. We can even store FBD's (File Backed Devices) here. Basically, anything that does not get a lot of data access.
See Build NFS Server for notes on how to set up NFS server.
However, you will need to disable NFS autostart (since it relies on ZFS, which in turn relies on HAST). nfsd and rpcbind will need to be managed by our failover script.
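A sketch of the "become master" half of such a failover script; the HAST resource name (disk0) and pool name (storage) are assumptions, and a real script would also need error checking and the matching "become slave" branch:

```
#!/bin/sh
# sketch: promote this node to master and bring up the storage services
# resource name "disk0" and pool name "storage" are assumptions
hastctl role primary disk0   # take over the HAST resource
zpool import -f storage      # import the pool living on /dev/hast/disk0
service rpcbind onestart     # rpcbind/nfsd are not enabled in rc.conf,
service nfsd onestart        # so use onestart to bring them up here
service ctld onestart        # resume the iSCSI exports as well
```

The slave-side branch does the reverse: stop the services, export the pool, then drop the HAST resource back to secondary.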