
Sunday, December 14, 2014

Solaris Unix ZFS File System II

Getting Started

You need:

  • An operating system with ZFS support:
    • Solaris 10 6/06 or later 
    • OpenSolaris (discontinued)
    • FreeBSD 7 or later
    • Mac OS X with MacZFS
    • Linux with ZFS kernel module
  • Root privileges (or a role with the appropriate ZFS rights profile)
  • Some storage, either:
    • 256 MB of disk space on an existing partition
    • Two spare disks of the same size

Using Files

To use files on an existing filesystem, create two 128 MB files, e.g.:
# mkfile 128m /home/ocean/disk1
# mkfile 128m /home/ocean/disk2
# ls -lh /home/ocean
total 1049152
-rw------T   1 root     root        128M Mar  7 19:48 disk1
-rw------T   1 root     root        128M Mar  7 19:48 disk2
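mkfile is Solaris-specific. On Linux or FreeBSD you can create equivalent backing files with truncate or dd instead; this is a rough sketch (the /tmp/ocean path is just for illustration, substitute your own location):

```shell
# A rough equivalent on systems without mkfile (Linux, FreeBSD).
# truncate creates a sparse file instantly; dd writes all the blocks out in full.
mkdir -p /tmp/ocean
truncate -s 128M /tmp/ocean/disk1
dd if=/dev/zero of=/tmp/ocean/disk2 bs=1M count=128 2>/dev/null
ls -lh /tmp/ocean
```

Either style of file works as a ZFS vdev; sparse files just defer the disk allocation until blocks are actually written.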

Using Disks

To use real disks in the tutorial, make a note of their names (e.g. c2t1d0 or c1d0 under Solaris). You will be destroying all the partition information and data on these disks, so be sure they're not needed.
In the examples I will be using a pair of 146 GB disks named c3t2d0 and c3t3d0; substitute your disks or files for them as appropriate.

ZFS Filesystems

ZFS filesystems within a pool are managed with the zfs command. Before you can manipulate filesystems you need to create a pool (you can learn about ZFS pools in part 1). When you create a pool, a ZFS filesystem is created and mounted for you.

ZFS Filesystem Basics

Create a simple mirrored pool and list filesystem information with zfs list:
# zpool create salmon mirror c3t2d0 c3t3d0

# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
salmon                  136G   84.5K    136G     0%  ONLINE     -

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                75.5K   134G  24.5K  /salmon
We can see our filesystem is mounted on /salmon and is 134 GB in size.
We can create an arbitrary number (up to 2^64) of new filesystems within our pool. Let's add filesystems for three users with zfs create:
# zfs create salmon/kent
# zfs create salmon/dennisr
# zfs create salmon/billj

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                 168K   134G  28.5K  /salmon
salmon/billj          24.5K   134G  24.5K  /salmon/billj
salmon/dennisr        24.5K   134G  24.5K  /salmon/dennisr
salmon/kent           24.5K   134G  24.5K  /salmon/kent
Note how all four filesystems share the same pool space and all report 134 GB available. We’ll see how to set quotas and reserve space for filesystems later in this tutorial.
We can create arbitrary levels of filesystems, so you could create a whole tree of filesystems inside /salmon/kent.
We can also see our filesystems using df (output trimmed for brevity):
# df -h
Filesystem             size   used  avail capacity  Mounted on
salmon                 134G    28K   134G     1%    /salmon
salmon/kent            134G    24K   134G     1%    /salmon/kent
salmon/dennisr         134G    24K   134G     1%    /salmon/dennisr
salmon/billj           134G    24K   134G     1%    /salmon/billj
You can remove filesystems with zfs destroy. User billj has stopped working on salmon, so let’s remove him:
# zfs destroy salmon/billj
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                 138K   134G  28.5K  /salmon
salmon/dennisr        24.5K   134G  24.5K  /salmon/dennisr
salmon/kent           24.5K   134G  24.5K  /salmon/kent

Mount Points

It’s useful that ZFS automatically mounts your filesystem under the pool name, but this is often not what you want. Thankfully it’s very easy to change the properties of a ZFS filesystem, even when it’s mounted.
You can set the mount point of a ZFS filesystem using zfs set mountpoint. For example, if we want to move salmon under the /projects directory:
# zfs set mountpoint=/projects/salmon salmon
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                 142K   134G  27.5K  /projects/salmon
salmon/dennisr        24.5K   134G  24.5K  /projects/salmon/dennisr
salmon/kent           24.5K   134G  24.5K  /projects/salmon/kent
On Mac OS X you need to force an unmount of the filesystem (using umount -f /Volumes/salmon) before changing the mount point, as it will be in use by fseventsd. To mount it again after setting a new mount point, use 'zfs mount salmon'.
Mount points of filesystems are not limited to those of the pool as a whole, for example:
# zfs set mountpoint=/fishing salmon/kent
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                 148K   134G  27.5K  /projects/salmon
salmon/dennisr        24.5K   134G  24.5K  /projects/salmon/dennisr
salmon/kent           24.5K   134G  24.5K  /fishing
To mount and unmount ZFS filesystems you use zfs mount and zfs unmount (old-school Unix users will be pleased to know 'zfs umount' works too). ZFS filesystems are entirely managed by ZFS by default, and don't appear in /etc/vfstab. In a future tutorial we will look at using 'legacy' mount points to manage filesystems the traditional way.
For example (mount output trimmed for brevity):
# zfs unmount salmon/kent

# mount | grep salmon
/projects/salmon on salmon
/projects/salmon/dennisr on salmon/dennisr

# zfs mount salmon/kent

# mount | grep salmon
/projects/salmon on salmon 
/projects/salmon/dennisr on salmon/dennisr 
/fishing on salmon/kent

Managing ZFS Filesystem Properties

Other filesystem properties work in the same way as the mount point (which is itself a property). To get and set properties we use zfs get and zfs set. To see a list of all filesystem properties we can use ‘zfs get all’:
# zfs get all salmon/kent
NAME             PROPERTY       VALUE                      SOURCE
salmon/kent      type           filesystem                 -
salmon/kent      creation       Fri Apr  6 13:14 2007      -
salmon/kent      used           24.5K                      -
salmon/kent      available      134G                       -
salmon/kent      referenced     24.5K                      -
salmon/kent      compressratio  1.00x                      -
salmon/kent      mounted        yes                        -
salmon/kent      quota          none                       default
salmon/kent      reservation    none                       default
salmon/kent      recordsize     128K                       default
salmon/kent      mountpoint     /fishing                   local
salmon/kent      sharenfs       off                        default
salmon/kent      checksum       on                         default
salmon/kent      compression    off                        default
salmon/kent      atime          on                         default
salmon/kent      devices        on                         default
salmon/kent      exec           on                         default
salmon/kent      setuid         on                         default
salmon/kent      readonly       off                        default
salmon/kent      zoned          off                        default
salmon/kent      snapdir        hidden                     default
salmon/kent      aclmode        groupmask                  default
salmon/kent      aclinherit     secure                     default
The first set of properties, with a SOURCE of '-', are read-only and give information about your filesystem; the rest of the properties can be set with 'zfs set'. The SOURCE value shows where a property gets its value from. Other than '-', there are three sources for a property:
  • default - the default ZFS value for this property
  • local - the property is set directly on this filesystem
  • inherited - the property is inherited from a parent filesystem
The mountpoint property is shown with a local source because we set the mountpoint for this filesystem above. We'll see an example of an inherited property in the section on compression (below).
I’m going to look at three properties in this section: quota, reservation and compression (sharenfs will be covered in a future tutorial). You can read about the remaining properties in the Sun ZFS Administration Guide.
Quotas & Reservations
All the filesystems in a pool share the same disk space. This maximises flexibility and lets ZFS make the best use of the resources; however, it does allow one filesystem to use all the space. To manage space utilisation within a pool, filesystems can have quotas and reservations. A quota sets a limit on the pool space a filesystem can use. A reservation reserves part of the pool for the exclusive use of one filesystem.
To see how this works, let’s consider our existing pool:
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                 148K   134G  26.5K  /projects/salmon
salmon/dennisr        24.5K   134G  24.5K  /projects/salmon/dennisr
salmon/kent           24.5K   134G  24.5K  /fishing
For example, let's say we want to set a quota of 10 GB on dennisr and kent to ensure there's space for other users to be added to salmon (if you are using disk files or small disks, just substitute a suitable value, e.g. quota=10M):
# zfs set quota=10G salmon/dennisr
# zfs set quota=10G salmon/kent

# zfs get quota salmon salmon/kent salmon/dennisr
NAME             PROPERTY       VALUE                      SOURCE
salmon           quota          none                       default
salmon/dennisr   quota          10G                        local
salmon/kent      quota          10G                        local
You can see how we used zfs get to retrieve a particular property for a set of filesystems. There are some useful options we can use with get:
  • -r recursively gets the property for all child filesystems.
  • -p reports exact values (e.g. 9437184 rather than 9M).
  • -H omits header fields, making the output easier for scripts to parse.
  • -o <fields> specifies the fields you wish to get (avoids having to use awk or cut).
An example (excluding headers and not showing the source field):
# zfs get -rHp -o name,property,value quota salmon
salmon          quota   0       
salmon/dennisr  quota   10737418240
salmon/kent     quota   10737418240
As an example of reservations, let's add a new filesystem and reserve 1 GB of space for it. This ensures that however full the pool gets, there will be space when someone comes to use it.
# zfs create salmon/jeffb
# zfs set reservation=1G salmon/jeffb

# zfs get -r reservation salmon
NAME             PROPERTY       VALUE                      SOURCE
salmon           reservation    none                       default
salmon/dennisr   reservation    none                       default
salmon/jeffb     reservation    1G                         local
salmon/kent      reservation    none                       default
If we look at our list of filesystems with zfs list we can see the effect of the quotas and reservation:
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
salmon                1.00G   133G  27.5K  /projects/salmon
salmon/dennisr        24.5K  10.0G  24.5K  /projects/salmon/dennisr
salmon/jeffb          24.5K   134G  24.5K  /projects/salmon/jeffb
salmon/kent           24.5K  10.0G  24.5K  /fishing
As expected, the space available to salmon/dennisr and salmon/kent is now limited to 10 GB, but there appears to be no change to salmon/jeffb. However, if we look at the used space for salmon as a whole we can see it has risen to 1 GB. This space isn't actually used, but because it has been reserved for salmon/jeffb it isn't available to the rest of the pool. Reservations can therefore lead you to over-estimate the space used in your pool. The df command always displays actual usage, so it can be handy in such situations.
Compression
ZFS has built-in support for compression. Not only does this save disk space, but it can actually improve performance on systems with plenty of CPU and highly compressible data, as it saves disk I/O. An obvious candidate for compression is a logs directory.
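For example, compression could be enabled for a tree of log filesystems like this (a sketch against the example pool; salmon/logs and salmon/logs/web are hypothetical names invented for illustration):

```
# zfs create salmon/logs
# zfs set compression=on salmon/logs
# zfs create salmon/logs/web

# zfs get -r compression salmon/logs
NAME             PROPERTY     VALUE   SOURCE
salmon/logs      compression  on      local
salmon/logs/web  compression  on      inherited from salmon/logs
```

Note that salmon/logs/web picks up compression from its parent, so its SOURCE shows as inherited; this is the inherited source mentioned in the properties section above. Only data written after compression is enabled is compressed; existing blocks are left as they are.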
