| .\" |
| .\" CDDL HEADER START |
| .\" |
| .\" The contents of this file are subject to the terms of the |
| .\" Common Development and Distribution License (the "License"). |
| .\" You may not use this file except in compliance with the License. |
| .\" |
| .\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE |
| .\" or http://www.opensolaris.org/os/licensing. |
| .\" See the License for the specific language governing permissions |
| .\" and limitations under the License. |
| .\" |
| .\" When distributing Covered Code, include this CDDL HEADER in each |
| .\" file and include the License file at usr/src/OPENSOLARIS.LICENSE. |
| .\" If applicable, add the following below this CDDL HEADER, with the |
| .\" fields enclosed by brackets "[]" replaced with your own identifying |
| .\" information: Portions Copyright [yyyy] [name of copyright owner] |
| .\" |
| .\" CDDL HEADER END |
| .\" |
| .\" Copyright (c) 2009 Sun Microsystems, Inc. All Rights Reserved. |
| .\" Copyright 2011 Joshua M. Clulow <josh@sysmgr.org> |
| .\" Copyright (c) 2011, 2019 by Delphix. All rights reserved. |
| .\" Copyright (c) 2013 by Saso Kiselkov. All rights reserved. |
| .\" Copyright (c) 2014, Joyent, Inc. All rights reserved. |
| .\" Copyright (c) 2014 by Adam Stevko. All rights reserved. |
| .\" Copyright (c) 2014 Integros [integros.com] |
| .\" Copyright 2019 Richard Laager. All rights reserved. |
| .\" Copyright 2018 Nexenta Systems, Inc. |
| .\" Copyright 2019 Joyent, Inc. |
| .\" |
| .Dd June 30, 2019 |
| .Dt ZFSCONCEPTS 7 |
| .Os |
| . |
| .Sh NAME |
| .Nm zfsconcepts |
| .Nd overview of ZFS concepts |
| . |
| .Sh DESCRIPTION |
| .Ss ZFS File System Hierarchy |
| A ZFS storage pool is a logical collection of devices that provide space for |
| datasets. |
| A storage pool is also the root of the ZFS file system hierarchy. |
| .Pp |
The root of the pool can be accessed as a file system, supporting operations
such as mounting and unmounting, taking snapshots, and setting properties.
| The physical storage characteristics, however, are managed by the |
| .Xr zpool 8 |
| command. |
| .Pp |
| See |
| .Xr zpool 8 |
| for more information on creating and administering pools. |
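.Pp
For example, a mirrored pool with the hypothetical name
.Em tank
might be created from two disks, after which its root file system is
immediately available:
.Bd -literal -compact -offset Ds
# zpool create tank mirror sda sdb
# zfs list tank
.Ed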
| .Ss Snapshots |
| A snapshot is a read-only copy of a file system or volume. |
| Snapshots can be created extremely quickly, and initially consume no additional |
| space within the pool. |
As data within the active dataset changes, the snapshot consumes more disk
space by continuing to reference the old data, thereby preventing that space
from being freed.
| .Pp |
Snapshots can have arbitrary names.
Snapshots of volumes can be cloned or rolled back; their visibility is
determined by the
.Sy snapdev
property of the parent volume.
| .Pp |
| File system snapshots can be accessed under the |
| .Pa .zfs/snapshot |
| directory in the root of the file system. |
| Snapshots are automatically mounted on demand and may be unmounted at regular |
| intervals. |
| The visibility of the |
| .Pa .zfs |
| directory can be controlled by the |
| .Sy snapdir |
| property. |
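.Pp
For example, a snapshot of a hypothetical file system
.Em pool/home ,
mounted at
.Pa /pool/home ,
could be created and then browsed through the
.Pa .zfs/snapshot
directory:
.Bd -literal -compact -offset Ds
# zfs snapshot pool/home@monday
# ls /pool/home/.zfs/snapshot/monday
.Ed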
| .Ss Bookmarks |
A bookmark is like a snapshot: a read-only reference to a file system or
volume as it existed at a particular point in time.
Bookmarks can be created extremely quickly, compared to snapshots, and they
consume no additional space within the pool.
Bookmarks can also have arbitrary names, much like snapshots.
| .Pp |
Unlike snapshots, bookmarks cannot be accessed through the file system in any
way.
From a storage standpoint, a bookmark is merely a distinct object that records
the point in time at which a snapshot was created.
Bookmarks are initially tied to a snapshot, not the file system or volume, and
they will survive even if the snapshot itself is destroyed.
Since they are very lightweight, there is little incentive to destroy them.
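.Pp
For example, a bookmark of a hypothetical snapshot can be created with the
.Nm zfs Cm bookmark
subcommand and later used as the incremental source of a
.Nm zfs Cm send ,
even after the snapshot itself has been destroyed:
.Bd -literal -compact -offset Ds
# zfs bookmark pool/home@monday pool/home#monday
# zfs destroy pool/home@monday
# zfs send -i pool/home#monday pool/home@tuesday
.Ed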
| .Ss Clones |
| A clone is a writable volume or file system whose initial contents are the same |
| as another dataset. |
| As with snapshots, creating a clone is nearly instantaneous, and initially |
| consumes no additional space. |
| .Pp |
| Clones can only be created from a snapshot. |
| When a snapshot is cloned, it creates an implicit dependency between the parent |
| and child. |
| Even though the clone is created somewhere else in the dataset hierarchy, the |
| original snapshot cannot be destroyed as long as a clone exists. |
| The |
| .Sy origin |
| property exposes this dependency, and the |
| .Cm destroy |
| command lists any such dependencies, if they exist. |
| .Pp |
| The clone parent-child dependency relationship can be reversed by using the |
| .Cm promote |
| subcommand. |
| This causes the |
| .Qq origin |
| file system to become a clone of the specified file system, which makes it |
| possible to destroy the file system that the clone was created from. |
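.Pp
For example, a clone of a hypothetical snapshot could be created and later
promoted, allowing the original file system to be destroyed:
.Bd -literal -compact -offset Ds
# zfs clone pool/project@version1 pool/project-work
# zfs promote pool/project-work
# zfs destroy pool/project
.Ed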
| .Ss "Mount Points" |
Creating a ZFS file system is a simple operation, so the number of file
systems per system is likely to be large.
| To cope with this, ZFS automatically manages mounting and unmounting file |
| systems without the need to edit the |
| .Pa /etc/fstab |
| file. |
| All automatically managed file systems are mounted by ZFS at boot time. |
| .Pp |
| By default, file systems are mounted under |
| .Pa /path , |
| where |
| .Ar path |
| is the name of the file system in the ZFS namespace. |
| Directories are created and destroyed as needed. |
| .Pp |
| A file system can also have a mount point set in the |
| .Sy mountpoint |
| property. |
| This directory is created as needed, and ZFS automatically mounts the file |
| system when the |
| .Nm zfs Cm mount Fl a |
| command is invoked |
| .Po without editing |
| .Pa /etc/fstab |
| .Pc . |
| The |
| .Sy mountpoint |
| property can be inherited, so if |
| .Em pool/home |
| has a mount point of |
| .Pa /export/stuff , |
| then |
| .Em pool/home/user |
| automatically inherits a mount point of |
| .Pa /export/stuff/user . |
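.Pp
For example, with the hypothetical file systems above, setting the mount point
once on the parent is sufficient:
.Bd -literal -compact -offset Ds
# zfs set mountpoint=/export/stuff pool/home
# zfs get -r mountpoint pool/home
.Ed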
| .Pp |
| A file system |
| .Sy mountpoint |
| property of |
| .Sy none |
| prevents the file system from being mounted. |
| .Pp |
| If needed, ZFS file systems can also be managed with traditional tools |
| .Po |
| .Nm mount , |
| .Nm umount , |
| .Pa /etc/fstab |
| .Pc . |
| If a file system's mount point is set to |
| .Sy legacy , |
| ZFS makes no attempt to manage the file system, and the administrator is |
| responsible for mounting and unmounting the file system. |
| Because pools must |
| be imported before a legacy mount can succeed, administrators should ensure |
| that legacy mounts are only attempted after the zpool import process |
| finishes at boot time. |
For example, on machines using systemd, the mount option
.Dl x-systemd.requires=zfs-import.target
will ensure that the ZFS pool import completes before systemd attempts to
mount the file system.
| See |
| .Xr systemd.mount 5 |
| for details. |
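.Pp
For example, a hypothetical file system
.Em pool/data
could be switched to legacy management and mounted through
.Pa /etc/fstab
with an entry such as:
.Bd -literal -compact -offset Ds
# zfs set mountpoint=legacy pool/data
# cat /etc/fstab
pool/data  /mnt/data  zfs  defaults,x-systemd.requires=zfs-import.target  0 0
.Ed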
| .Ss Deduplication |
Deduplication is the process of removing redundant data at the block level,
reducing the total amount of data stored.
| If a file system has the |
| .Sy dedup |
| property enabled, duplicate data blocks are removed synchronously. |
| The result |
| is that only unique data is stored and common components are shared among files. |
| .Pp |
| Deduplicating data is a very resource-intensive operation. |
| It is generally recommended that you have at least 1.25 GiB of RAM |
| per 1 TiB of storage when you enable deduplication. |
The exact requirement depends heavily on the type of data stored in the pool.
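.Pp
For example, by this guideline a pool expected to hold 8 TiB of deduplicated
data would call for roughly 8 \(mu 1.25 GiB = 10 GiB of RAM for the
deduplication tables.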
| .Pp |
Enabling deduplication on an improperly designed system can result in
performance issues (slow I/O and administrative operations).
It can potentially lead to problems importing a pool due to memory exhaustion.
Deduplication can consume significant processing power (CPU) and memory, as
well as generate additional disk I/O.
| .Pp |
| Before creating a pool with deduplication enabled, ensure that you have planned |
| your hardware requirements appropriately and implemented appropriate recovery |
| practices, such as regular backups. |
| Consider using the |
| .Sy compression |
| property as a less resource-intensive alternative. |
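.Pp
For example, either property can be enabled per dataset on a hypothetical
pool, and the resulting deduplication ratio inspected with
.Xr zpool 8 :
.Bd -literal -compact -offset Ds
# zfs set dedup=on pool/data
# zfs set compression=on pool/data
# zpool list pool
.Ed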