====== Setting up HAST ======
This setup assumes we are building the primary and will add the secondary at a later time. The machines are named iscsi-0 (primary) and iscsi-1 (secondary). This is non-standard in that we will set up one machine (iscsi-0), let it run for a while, then set up the secondary later.
The system we are building is a storage server, with zvols (ZFS volumes) for iSCSI exports and an NFS directory tree for NFS storage. That is mostly irrelevant to this document, but some of the instructions reference it.
===== CARP =====
This document was getting too long, so I created the carp setup at [[unix:freebsd:system_builds:carp|]]
===== Configuration File =====
Create the configuration file, /etc/hast.conf. The following configuration assumes:
* Servers will be named iscsi-0 and iscsi-1 (from hostname)
* Servers have a separate subnet to synchronize over. In our case, they will be on a separate vlan, but they could also be on a crossover cable. Note that synchronization traffic is minimal compared to normal data traffic. (A sample rc.conf sketch for these addresses follows the configuration file below.)
  * iscsi-0: 10.128.50.1/255.255.255.252
  * iscsi-1: 10.128.50.2/255.255.255.252
* Disks to be synchronized are:
  * iscsi-0: /dev/da0 through /dev/da9 (10 disks)
  * iscsi-1: unknown at this time
* We'll use the defaults for most items, but will explicitly include them in the configuration for documentation.
# global section simply sets values explicitly
# using the default values for hast
replication memsync
timeout 20
control uds:///var/run/hastctl
pidfile /var/run/hastd.pid
metaflush on

# tell each system what to listen on
on iscsi-0 {
    listen tcp4://10.128.50.1
}
on iscsi-1 {
    listen tcp4://10.128.50.2
}

# now, set up our resources
# Note that since we don't have the second node
# set up yet, we use "remote none" to get around
# our timeout issues
resource disk0 {
    on iscsi-0 {
        local /dev/da0
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da0
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk1 {
    on iscsi-0 {
        local /dev/da1
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da1
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk2 {
    on iscsi-0 {
        local /dev/da2
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da2
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk3 {
    on iscsi-0 {
        local /dev/da3
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da3
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk4 {
    on iscsi-0 {
        local /dev/da4
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da4
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk5 {
    on iscsi-0 {
        local /dev/da5
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da5
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk6 {
    on iscsi-0 {
        local /dev/da6
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da6
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk7 {
    on iscsi-0 {
        local /dev/da7
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da7
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk8 {
    on iscsi-0 {
        local /dev/da8
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da8
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
resource disk9 {
    on iscsi-0 {
        local /dev/da9
        source tcp4://10.128.50.1
        # remote tcp4://10.128.50.2
        remote none
    }
    on iscsi-1 {
        local /dev/da9
        source tcp4://10.128.50.2
        remote tcp4://10.128.50.1
    }
}
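The sync addresses above have to exist before hastd can bind to them. Here is a minimal /etc/rc.conf sketch for iscsi-0; the interface name (em0) and vlan tag (50) are assumptions for illustration, so adjust them to your hardware.
# sync vlan on iscsi-0 (em0 and tag 50 are assumptions;
# use 10.128.50.2 on iscsi-1)
vlans_em0="50"
ifconfig_em0_50="inet 10.128.50.1 netmask 255.255.255.252"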
===== Initial Setup =====
The initial setup requires that we initialize all the block devices, one by one. We'll then start the service just to make sure it works, then set this server to primary.
# create the HAST metadata on each disk
foreach i ( 0 1 2 3 4 5 6 7 8 9 )
    hastctl create disk$i
end
# start the daemon (one-time start; see Make Permanent below)
service hastd onestart
# make this node the primary for every resource
foreach i ( 0 1 2 3 4 5 6 7 8 9 )
    hastctl role primary disk$i
end
hastctl status
The output should show all 10 disks, though they will be degraded since the second system is not yet set up. The new devices are now available as /dev/hast/disk0 through /dev/hast/disk9.
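A quick way to confirm the providers showed up:
ls /dev/hast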
===== Make Permanent =====
To ensure hastd starts automatically at boot, add **hastd_enable="YES"** to /etc/rc.conf.
echo 'hastd_enable="YES"' >> /etc/rc.conf
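Alternatively, sysrc(8) makes the same change without risking a quoting mistake:
sysrc hastd_enable="YES"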
===== Set up secondary =====
Setting up the secondary is a lot like setting up the primary, just changing the role and using different IPs.
I'll fill this in more after we do it.
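Until then, here is a sketch of what it will probably look like on iscsi-1, assuming it ends up with the same ten disks and a copy of the same /etc/hast.conf (unverified until we actually build it):
# on iscsi-1: create the metadata, start hastd,
# and take the secondary role for each resource
foreach i ( 0 1 2 3 4 5 6 7 8 9 )
    hastctl create disk$i
end
service hastd onestart
foreach i ( 0 1 2 3 4 5 6 7 8 9 )
    hastctl role secondary disk$i
end
On iscsi-0, the **remote none** lines in /etc/hast.conf then get switched to **remote tcp4://10.128.50.2**, followed by a **service hastd restart** so the nodes begin synchronizing.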
===== Utilize the new devices =====
You can do anything with the hast devices that you can with your normal block devices. In our case, we want to set up a ZFS Pool (zpool) named storage. Note that this might take a minute or two, so using //screen// is probably a good idea.
zpool create storage raidz2 /dev/hast/disk0 /dev/hast/disk1 /dev/hast/disk2 /dev/hast/disk3 /dev/hast/disk4 /dev/hast/disk5 /dev/hast/disk6 /dev/hast/disk7 /dev/hast/disk8 /dev/hast/disk9
Now, you can check your zpool with
zpool list -v
The -v gives more detail.
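As mentioned in the intro, this pool will hold zvols for iSCSI exports and an NFS directory tree. A sketch of those first datasets, where the names and size are placeholders rather than part of this build:
# a zvol for an iSCSI export (name and size are examples)
zfs create -V 1T storage/vol0
# a regular filesystem for the NFS tree
zfs create storage/nfs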
===== References =====
Following are some of the places I consulted to put this together.
* https://www.freebsd.org/cgi/man.cgi?query=hast.conf&sektion=5&manpath=freebsd-release-ports
* https://forums.freebsd.org/threads/zfs-hast-with-different-disk-configurations.71045/
* https://www.freebsd.org/doc/handbook/disks-hast.html
* https://forums.freebsd.org/threads/vlans-over-lagg.7668/
* https://forums.freebsd.org/threads/zfs-and-no-zil-log-reported-usage.38483/