====== Build Storage Backend ======
  
I am moving our iSCSI target over to FreeBSD. Our iSCSI servers are generally used as the "disk drives" for Xen virtual servers. NFS with File Backed Devices (FBD) is really too slow when several virtuals are running at the same time, but iSCSI appears to fix this issue.

We also export NFS for use by the Xen hypervisor and, in a few cases, for the virtual machines themselves.

This document will describe how it is built.
===== Basic Requirements =====
  * NFS service used for common files
    * xen-configs stores the Xen configuration files shared by all DOM0's. It is generally mounted under /etc/xen/iscsi-configs
    * xen-store stores common files for all DOM0's. This includes ISO images (for install and/or maintenance).
    * xen-images stores FBD's, allowing quick and dirty testing where necessary.
  * NFS service used for some virtual storage backends also. In some cases, the storage requirements are large but the access rates are low enough that NFS is quite capable of handling them.
    * message store for mail servers
    * file storage for NextCloud/Owncloud servers
    * file storage for some web servers
  * iSCSI exports for the virtual images. Each virtual may have one or two images exported by the iSCSI target.

In many cases, I use a small image for the operating system, then a larger image for the data. The second image for Linux may be a simple partition, or set up as a Physical Volume for LVM2 during installation. For FreeBSD, it would be ZFS. For Windows, we generally use only one image. However, for larger storage requirements, we may use NFS so long as the overhead is acceptable.
  
===== The Setup =====
  
===== Details =====

==== Networking ====

In our case, we will use four network ports. Two will be set up as a Link Aggregation (LAGG) and used to sync changes from the master server to the slave. We will also use this pair for the control interface, so it carries two VLANs: the control interface is on VLAN 30, and the sync interface on VLAN 60.

The other pair is set up as an additional LAGG and used for iSCSI and NFS communications to the clients. This will be done over VLAN 50. It is set up with an alias to allow fast switching between the primary and secondary servers when HAST decides it is necessary.

<code bash rc.conf>
# .. there is more code above defining the server

# set up the networks
# bring all NIC's up
ifconfig_bge0="up"
ifconfig_bge1="up"
ifconfig_bge2="up"
ifconfig_bge3="up"

# define the lagg's, ie the cloned interfaces
cloned_interfaces="lagg0 lagg1"

# now, set up the lagg's
ifconfig_lagg0="laggproto lacp laggport bge0 laggport bge1 up"
ifconfig_lagg1="laggproto lacp laggport bge2 laggport bge3 up"

# define the vlans
vlans_lagg0="50"
vlans_lagg1="30 60"

# and set up the vlans
ifconfig_lagg0_50="inet 10.19.209.67/24"
ifconfig_lagg1_60="inet 10.128.50.1/30"
ifconfig_lagg1_30="SYNCDHCP"
# NOTE: must use SYNCDHCP so dhcp will complete before something modifies
# the network

# set up the shared interface for HAST
ifconfig_lagg0_50_alias="inet vhid 1 pass iscsilan alias 10.19.209.64/24"
</code>
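
The shared ''vhid''/''pass'' alias above is a CARP address, so the carp kernel module must be available before the interface comes up. A minimal sketch of the extra configuration, assuming nothing else already loads the module:

<code bash /boot/loader.conf>
# load CARP at boot so the vhid/pass alias in rc.conf works
carp_load="YES"
</code>

<code bash /etc/sysctl.conf>
# optional: let a recovered master reclaim the shared address
net.inet.carp.preempt=1
</code>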

==== HAST and CARP ====

Now, set up your disks for High Availability. See [[unix:freebsd:system_builds:hast|]]
  
 ==== NFS ==== ==== NFS ====
See [[unix:freebsd:system_builds:nfsserver|]] for notes on how to set up the NFS server.
  
However, you will need to disable NFS autostart (since it relies on ZFS, which relies on HAST). nfsd and rpcbind will need to be managed by our failover script.

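The failover details belong on the HAST page, but as a rough, hypothetical sketch only (the HAST resource ''disk0'' and pool ''tank'' are placeholders, not from this install), the become-master path has to sequence HAST, ZFS, and the service daemons something like this:

<code bash failover.sh>
#!/bin/sh
# HYPOTHETICAL become-master sequence; names are placeholders
hastctl role primary disk0    # promote our copy of the HAST device
zpool import -f tank          # import the pool living on /dev/hast/disk0
service rpcbind onestart      # NFS support daemons, disabled in rc.conf,
service nfsd onestart         # so start them manually here
service ctld onestart         # iSCSI target
</code>
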
==== Set up iSCSI server ====
==== Create iSCSI volumes and export them ====
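
As a minimal sketch only, assuming FreeBSD's native ctld target backed by ZFS zvols (the pool name, target IQN, and sizes below are assumptions, not from this install), the volume creation and export might look like:

<code bash>
# create the backing volumes: a small OS image and a larger data image
zfs create -V 10G tank/vm1-os
zfs create -V 100G tank/vm1-data
</code>

<code bash /etc/ctl.conf>
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 10.19.209.64
}

target iqn.2019-09.net.example:vm1 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/vm1-os
        }
        lun 1 {
                path /dev/zvol/tank/vm1-data
        }
}
</code>

As with nfsd, ctld would be started by the failover script rather than enabled in rc.conf, since the zvols only exist on the current master.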
unix/freebsd/system_builds/xenstoragebackend.1562738957.txt.gz · Last modified: 2019/07/10 01:09 by 127.0.0.1