Setting up HAST

This setup assumes we are building the primary first and will add the secondary later. The machines are named iscsi-0 (primary) and iscsi-1 (secondary). Again, this is non-standard: we will set up one machine (iscsi-0), let it run for a while, then set up the secondary machine.

The system we are building is a storage server, with zvols (ZFS volumes) for iSCSI exports and an NFS directory tree for NFS storage. That is mostly irrelevant to this document, but some of the instructions refer to it.

CARP

This document was getting too long, so the CARP configuration is documented separately at Setting up CARP.

Configuration File

Create the configuration file, /etc/hast.conf, with the following contents:

hast.conf
# global section simply sets values explicitly
# using the default values for hast
replication memsync
timeout 20
control uds:///var/run/hastctl
pidfile /var/run/hastd.pid
metaflush on
 
# tell each system what to listen on
on iscsi-0 {
   listen tcp4://10.128.50.1
}
on iscsi-1 {
   listen tcp4://10.128.50.2
}
 
# now, set up our resources
# Note that since we don't have the second node 
# set up yet, we use remote none to get around
# our timeout issues
resource disk0 {
   on iscsi-0 {
      local /dev/da0
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da0
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk1 {
   on iscsi-0 {
      local /dev/da1
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da1
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk2 {
   on iscsi-0 {
      local /dev/da2
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da2
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk3 {
   on iscsi-0 {
      local /dev/da3
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da3
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk4 {
   on iscsi-0 {
      local /dev/da4
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da4
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk5 {
   on iscsi-0 {
      local /dev/da5
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da5
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk6 {
   on iscsi-0 {
      local /dev/da6
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da6
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk7 {
   on iscsi-0 {
      local /dev/da7
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da7
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk8 {
   on iscsi-0 {
      local /dev/da8
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da8
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
 
resource disk9 {
   on iscsi-0 {
      local /dev/da9
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da9
      source tcp4://10.128.50.2    
      remote tcp4://10.128.50.1
   }
}
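The ten resource stanzas differ only in the disk number, so rather than typing them by hand, a small generator script can emit them. This is just a convenience sketch (plain sh, not part of the setup itself); redirect its output into the resource section of /etc/hast.conf if you use it.

```shell
#!/bin/sh
# Sketch: emit the ten near-identical resource stanzas for hast.conf.
# Only the disk number varies between them.
gen_stanzas() {
    for i in 0 1 2 3 4 5 6 7 8 9; do
        cat <<EOF
resource disk$i {
   on iscsi-0 {
      local /dev/da$i
      source tcp4://10.128.50.1
      # remote tcp4://10.128.50.2
      remote none
   }
   on iscsi-1 {
      local /dev/da$i
      source tcp4://10.128.50.2
      remote tcp4://10.128.50.1
   }
}
EOF
    done
}

gen_stanzas
```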

Initial Setup

The initial setup requires that we initialize each block device, one by one. We'll then start the service just to make sure it works, and finally set this server to primary.

initialize.csh
foreach i ( 0 1 2 3 4 5 6 7 8 9 )
   hastctl create disk$i
end
 
service hastd onestart
 
foreach i ( 0 1 2 3 4 5 6 7 8 9 )
   hastctl role primary disk$i
end
 
hastctl status

The output should show all 10 disks, though they are degraded (second system is not set up). The new devices are now available as /dev/hast/disk0 through /dev/hast/disk9.
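If csh is not your shell, the first loop above translates directly to plain sh. The sketch below uses echo as a dry run so you can review the commands it would issue; remove the echo to actually run hastctl.

```shell
#!/bin/sh
# sh version of the csh create loop; echo makes this a dry run.
for i in 0 1 2 3 4 5 6 7 8 9; do
    echo hastctl create disk$i
done
```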

Make Permanent

To ensure hast comes up automatically in the future, add hastd_enable="YES" to /etc/rc.conf.

echo 'hastd_enable="YES"' >> /etc/rc.conf
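A blind append adds a duplicate line every time it runs. An idempotent variant is sketched below: it only appends when the variable is not already set. On a real system RC would be /etc/rc.conf; a temp file is used here so the sketch can be run harmlessly.

```shell
#!/bin/sh
# Idempotent sketch: append hastd_enable only if it is not already set.
# RC is a temp file here; use /etc/rc.conf on the real system.
RC=$(mktemp)
grep -q '^hastd_enable=' "$RC" || echo 'hastd_enable="YES"' >> "$RC"
grep -q '^hastd_enable=' "$RC" || echo 'hastd_enable="YES"' >> "$RC"  # re-run is a no-op
cat "$RC"   # prints the single line: hastd_enable="YES"
```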

Set up secondary

Setting up the secondary is much like setting up the primary, just changing the role and using different IPs.

I'll fill this in more after we do it.

Utilize the new devices

You can do anything with the hast devices that you can do with normal block devices. In our case, we want to create a ZFS pool (zpool) named storage. Note that this might take a minute or two, so using screen is probably a good idea.

zpool create storage raidz2 /dev/hast/disk0 /dev/hast/disk1 /dev/hast/disk2 /dev/hast/disk3 /dev/hast/disk4 /dev/hast/disk5 /dev/hast/disk6 /dev/hast/disk7 /dev/hast/disk8 /dev/hast/disk9

Now, you can check your zpool with

zpool list -v

The -v gives more detail.
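The long device list in the zpool create command can also be built in a loop. The sketch below echoes the assembled command as a dry run so you can inspect it first; drop the echo (or pipe to sh) to actually create the pool.

```shell
#!/bin/sh
# Build the ten-device list instead of typing every path.
# echo makes this a dry run; remove it to create the pool.
devs=""
for i in 0 1 2 3 4 5 6 7 8 9; do
    devs="$devs /dev/hast/disk$i"
done
echo zpool create storage raidz2$devs
```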

References

Following are some of the places I went to come up with this information.