-o encryption=on from off might be a useful thing to support, now that we allow unencrypted children. (Without knowing implementation details, it sounds like it could be a two-phase process, like scrubs?)

> A future³ pr might add a way to migrate between crypto

Given enough space, yes, a transparent ZFS send/receive would be a way to go. All new writes go to the new dataset, and any read not yet available in the new dataset would fall back to the old dataset. Once the entire dataset is received, the old dataset is destroyed. Theoretically, we could almost do it without enough space for the whole dataset: once one file is entirely copied to the new dataset, the file could be deleted from the source dataset. If something like this were implemented, a resume after zpool export would also have to be part of the work; otherwise, the pool would remain in a partially migrated state.

This does have the advantage of re-striping the data. Simple example: you have 1 vDev, and when it gets fullish, you add a second vDev. The data from the first (if not changed) remains only on the first vDev, and even newly written data may have to favor the second vDev, as it has the most free space. Something like what is suggested above can help balance data, even if we don't need to change checksum, compression, or encryption algorithms. Back to reality: snapshots, and possibly even bookmarks, would be a problem. Even clones of snapshots that reference the old dataset would still reference the old data & metadata (be it compression, checksum, or encryption changes). My interest is in copies= changes as per the above-mentioned ticket (2 to 1, in particular).
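A manual version of the encryption migration discussed above is already possible today with a plain send received into a newly encrypted dataset. The pool/dataset names, cipher, and key location below are illustrative placeholders, and this needs enough free space for a second copy while both datasets exist:

```shell
# Hypothetical names/settings; requires a live pool and free space
# for a full second copy of the dataset.
zfs snapshot tank/plain@migrate

# Receive into a new, encrypted dataset; since this is not a raw send,
# blocks are encrypted (per the receive-side properties) as they land.
# -s makes the receive resumable if it is interrupted.
zfs send tank/plain@migrate | \
    zfs receive -s \
        -o encryption=aes-256-gcm \
        -o keyformat=passphrase \
        -o keylocation=file:///etc/zfs/tank.key \
        tank/encrypted

# If the stream is interrupted (e.g. by an export), resume from the token:
# zfs send -t "$(zfs get -H -o value receive_resume_token tank/encrypted)" \
#     | zfs receive -s tank/encrypted
```

The resume-token dance in the last step is the manual equivalent of the "resume after zpool export" requirement mentioned above; a native migration would have to track the same state internally.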
This also has the added benefit of making us able to force it if we deprecate/replace/remove an algorithm. This feature would enable us to go beyond the requested deprecation in #9761.

I personally would be fine if this feature initially behaved like/leveraged an auto-resumed local send/receive and some clone/upgrade-like switcheroo (and obeyed the same constraints, even if that unavoidably means temporarily using twice the storage of the dataset being 'transformed') in the background, with the user interface of a scrub (i.e. trigger it through a zfs subcommand, appears in zfs/zpool status, gets resumed after reboots, can be paused, stopped, etc.). The applications for this go beyond just applying a different compression algorithm:

- AFAIK this also applies to checksum algorithms.
- Shouldn't this also convert xattrs to the sa format?
- If there's sufficient free space on the pool, this can also be a form of defragmentation, right?

One could hack something like this together using zfs send/recv; it'd probably involve a clone receive and some upgrade shenanigans, but it would definitely not be the same as having a canonical zfs subcommand with the above-mentioned UX, especially since it would somewhat cleanly resolve some "please unshoot my foot" situations that inexperienced and/or sleep-deprived users might get themselves into, for example choosing the wrong compression algorithm/level a year before realizing it, without the need to figure out and possibly script (recursive) zfs send and receive. Also, zfs is probably in a better position to do a much cleaner in-place swap of the two versions of the dataset when the 'rewrite' is done, probably like a snapshot rollback, and will most likely not forget to delete the old version afterwards, unlike my hacky scripts, which break all the time.

Defrag mode: only rewrite fragmented datasets, for some definition of fragmented.
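As for "some definition of fragmented": the only fragmentation metric ZFS exposes today is pool-wide, so a hacked-together defrag gate could at best key off that. A rough sketch, where the pool name "tank" and the 30% threshold are made up:

```shell
#!/bin/sh
# Rough sketch: only trigger a (hypothetical) rewrite pass when pool
# fragmentation exceeds a threshold. "tank" and 30 are placeholders.
FRAG=$(zpool list -H -o fragmentation tank | tr -d '%')
if [ "$FRAG" -gt 30 ] 2>/dev/null; then
    echo "tank is ${FRAG}% fragmented; a rewrite pass may help"
fi
```

Note that the zpool `fragmentation` property measures free-space fragmentation, not how fragmented the stored data is, so it is at best a proxy; a native defrag mode would presumably want a better per-dataset definition.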
Currently one can change the compression setting on a dataset, and this will compress new blocks using the new algorithm. This works perfectly fine for many people during normal use. However, there are 3 scenarios where we would want an easy way to recompress a complete dataset:

- If one wants to change decompression speed for currently stored write-once-read-many data.
- If one wants to increase the compression ratio of currently compressed data.
- If we remove (or deprecate) a compression algorithm.

While it's perfectly possible to send data to a new dataset and thus trigger a recompression, this has a few downsides:

- It's not very accessible for the simplest of users, for example (future) FreeNAS home users.

A preferred way to handle this would be a feature which recompresses current data on the drive, "in the background", just like a scrub or resilver.
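The send/receive workaround mentioned above can be sketched as follows; the pool/dataset names and the choice of zstd are illustrative placeholders, and the pool temporarily needs enough free space for a second copy:

```shell
# Hypothetical names; requires a live pool with free space for a full copy.
# 1. Snapshot the source dataset.
zfs snapshot tank/data@recompress

# 2. Send it to a new dataset that carries the desired compression setting.
#    The receive-side property applies as blocks are written, so every
#    block is recompressed with the new algorithm on the way in.
zfs send tank/data@recompress | zfs receive -o compression=zstd tank/data-new

# 3. Once the copy is verified, swap the datasets and clean up.
zfs rename tank/data tank/data-old
zfs rename tank/data-new tank/data
zfs destroy -r tank/data-old
```

The manual rename/destroy swap at the end (and forgetting to do it) is exactly the error-prone part that a native background rewrite, with scrub-like pause/resume semantics, would remove.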