Is dump really deprecated?

What's the story?

Back in April and May 2001, a Linux 2.4 kernel bug was being discussed on the Linux kernel mailing list. The bug caused an incoherent view of a disk device when it was mounted read-write and you looked at it through its device file (such as /dev/hda1), even if you had synced the disks. There was no disagreement that this bug existed (or that it should be fixed, as it were); there was disagreement on its consequences. Linus Torvalds asked why anyone would open the device file of a live filesystem. One of the answers was: to dump it. Linus responded with the following:

I think all these arguments are fairly bogus. Doing things like "dump" on a live filesystem is stupid and dangerous (in my opinion it is stupid and dangerous to use "dump" at _all_, but that's a whole 'nother discussion in itself), and there really are no valid uses for opening a block device that is already mounted.

Here's Linus's message, and here's a related followup.

Naturally these statements by Linus caused concern among system administrators who were using dump. Later, Red Hat added to the worries by deprecating dump in the Red Hat Linux System Administration Primer for Red Hat Linux 9 (section 8.4.2.3):

[M]any system administrators with UNIX experience may feel that dump and restore are viable candidates for a good backup program under Red Hat Linux. Unfortunately, the design of the Linux kernel has moved ahead of dump's design.

The manual then quotes one of Linus's messages and concludes that the use of dump is discouraged.

However, the dump developers and many dump users believe otherwise.

What is the problem when dumping live filesystems?

The problem is that the filesystem may be changing while you are dumping it. You have this problem with all backup utilities, but with dump it is more serious. When you are using tar, for example, a file could be changed at the moment it is read by tar; in that case, that particular file would be corrupted in the resulting tar file. But whereas for tar this is a problem only if the file happens to change at the instant it is read, dump could back up corrupted versions of files even if they changed some time before dump attempts to read them. Let's see why.

The kernel caches write operations to the disk. You can see this for yourself if you make some experiments with a floppy. Insert a floppy in the drive, mount it, and copy a file to the floppy; the operation, especially with recent 2.4 kernels, will appear to finish instantly. You can then do something like ls /mnt/floppy and see that your file is on the disk. But your file is not really on the disk; the drive's light hasn't been on at all. If you looked at the disk through /dev/fd0, you wouldn't find your file there.

You can force the file to be actually written to the diskette by unmounting the diskette, or with the sync command; if you don't, the kernel will write the file to the disk when it sees fit to do so. What's more, the kernel might actually write only half the file to the disk, and I suspect this is common with hard disks: when there are lots of pending write operations at approximately the same physical area of the disk, the kernel will probably choose to flush them, but it will probably choose not to flush other pending operations at distant areas, so as to minimize head movement. Thus, when dump reads the filesystem through the block device, it will get corrupted versions of some files if there are pending write operations; even worse, the metadata (the filesystem structure) could be corrupt, in which case (a part of) the filesystem could become entirely unreadable.
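
If you want to try the floppy experiment for yourself, here is a rough sketch; the device and mount point names are just examples and may differ on your system:

    mount /dev/fd0 /mnt/floppy
    cp somefile /mnt/floppy    # appears to finish instantly; the drive light stays off
    ls /mnt/floppy             # the file is listed, but it is only in the cache
    sync                       # now the drive light comes on and the data is written
    umount /mnt/floppy         # unmounting would also have flushed everything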

Is that problem peculiar to Linux?

No. Red Hat is in error when they claim that the design of the Linux kernel has moved ahead of dump's design. Dump has always had this problem, on all operating systems and filesystems, by the very nature of its design. Linus's arguments are correct, but they are true for any operating system, not just Linux. Linus only said that with 2.4 the problem is worse than with 2.2; he did not imply that with 2.4 the problem is worse than it has ever been on any operating system.

Actually in the first versions of the 2.4 kernel, the problem was worse, but this, as already mentioned, was a bug.

Can I use dump, then?

Dump is a really popular backup solution among Unix system administrators worldwide, and it is not because those administrators are ignorant of the problems.

First, you can safely use dump on unmounted and read-only filesystems. You can also safely use dump on idle filesystems if you sync before dumping (but can you be sure they are idle? A solution is to remount them read-only before dumping).
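
Here is a sketch of the remount-read-only approach; the device, mount point and tape names are arbitrary examples:

    mount -o remount,ro /dev/hda1 /home   # fails if some process has a file open for writing
    dump 0f /dev/nst0 /dev/hda1           # the filesystem cannot change under dump's feet
    mount -o remount,rw /dev/hda1 /home   # back to normal operation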

You can also use dump on non-idle filesystems, but with caution. You must take care to dump when the filesystem is not heavily loaded; for example, I dump during the night, when only logfiles and mailboxes are modified, and not heavily. If your filesystem is always under heavy load, maybe you shouldn't use dump. In addition, you should verify your backups; see below.
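
For example, a pair of crontab entries along the following lines would do a full (level 0) dump early on Sunday mornings and incremental (level 1) dumps the other nights; the hour, the tape device and the filesystem are arbitrary examples:

    # m h dom mon dow  command
    0 3 * * 0    /sbin/dump 0uf /dev/nst0 /dev/hda1   # full dump on Sunday nights
    0 3 * * 1-6  /sbin/dump 1uf /dev/nst0 /dev/hda1   # incremental dumps the rest of the week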

Why not prefer one of the other utilities, if dump has these problems?

The fact that dump reads the block device directly gives it several advantages. First, you can dump unmounted filesystems. It has been reported that this is particularly useful in case of a filesystem error that renders the filesystem unmountable; in such cases, it is useful to dump the filesystem (to the extent possible) before attempting to fsck it, in case fsck causes further data loss.
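
The order of operations would be something like the following sketch, with /dev/hda2 standing in for the damaged, unmounted filesystem:

    dump 0f /dev/nst0 /dev/hda2   # save whatever dump can still read
    fsck /dev/hda2                # only after the dump has been taken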

Second, dump never changes the filesystem while dumping it. The problem with tar and cpio is that they change a normally mounted read-write filesystem while reading it. The filesystem keeps three timestamps for each file: the last modification time (mtime), the last access time (atime), and the last i-node modification time (ctime). When you read a file through a normal system call, its atime is set to the time of the access. You could then issue another system call to revert atime to its original value, as GNU tar does when given the --atime-preserve option, but in that case ctime changes to indicate an i-node modification. There is no system call to change ctime.
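
You can observe this with stat and GNU tar; the file and archive names below are just examples:

    stat somefile                                     # note the Access, Modify and Change times
    tar --atime-preserve -cf /tmp/test.tar somefile   # read the file, then put atime back
    stat somefile                                     # atime looks untouched, but ctime has changed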

If you choose to leave the atime modified, you lose valuable information. I use atime to determine whether my users have lots of unused files occupying unnecessary disk space; I can tell a user off for occupying 3 GB of disk space, 2 GB of which they have neither read nor written for the last three years. In addition, when I'm lost in configuration files, such as XF86Config and XF86Config-4, I can tell from the atime which one is actually being used. If you choose to restore atime and change ctime instead, you'll have other problems. Backup utilities normally consider files with changed ctime to be changed, and thus save them in incremental backups. In addition, security monitoring tools may signal possible changes in system and configuration files.
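
For example, something like the following (the path, the three-year threshold and the output format are arbitrary) lists the largest files under /home that no one has read for three years:

    find /home -type f -atime +1095 -printf '%s %p\n' | sort -rn | head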

In Linux, the "noatime" mount option causes the kernel not to alter atimes, so you can remount the filesystem with that option before tarring or cpioing, and remount it normally again after the backup. With this workaround you can use other backup utilities without altering the filesystem.
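
A sketch of that workaround; the device, mount point and tape names are assumptions:

    mount -o remount,noatime /dev/hda1 /home
    tar cf /dev/nst0 /home                   # the reads no longer update atimes
    mount -o remount,atime /dev/hda1 /home   # back to the default behaviour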

Dump's third advantage is that it works faster, because it bypasses the kernel's filesystem interface. I don't have any experience with this, but I suspect that now that machines are faster and filesystem caches are much larger, this advantage is less important than it used to be.

The above three advantages, namely the ability to back up unmounted filesystems, the fact that dump never changes the filesystem no matter how it is mounted, and speed, are rooted in the fact that dump reads the filesystem through the block device rather than through the normal filesystem calls. In addition, dump has some more advantages, namely ease of use and reliability. It's much easier to get dump's options right than GNU tar's. And, of course, dump's companion utility, restore, does interactive restores whose user-friendliness and efficiency are unmatched. It is also frequently reported that dump flawlessly handles all kinds of strange things, such as files with holes, files with unbelievably long filenames, files with unbelievably strange symbols in the filenames, and so on; this reliability is probably due to the fact that it has been used so much. However, GNU tar has lately also been reported to be exceptionally reliable.

A drawback of dump is that it must know some filesystem internals. As a result, you can't find dump for all filesystems. There is currently no dump for ReiserFS. However, many administrators only choose among filesystems supported by dump (currently ext2 and ext3), because they won't consider using another backup utility.

Should I always verify the backups?

Suppose you have set the system to back up the files at two o'clock each morning. Suppose, further, that among the tens of thousands of files unknown to you, there's a file, /home/yourboss/unbelievably_important_file, that happens to always change while you are backing it up. Maybe your boss has set up a cron job that does something tricky at two o'clock in the morning.

Let's see another example. Among the files you back up at two o'clock each morning, there is a relational database which is live 24 hours a day. You obviously don't want to back up its data files directly, because they would be in an inconsistent state. For this reason, you perform a database export and back up the export file together with the rest of the filesystem instead. You know the export takes about two minutes, so it's really safe if you do it a whole hour before dumping/tarring. But instead of "0 1 * * * dbexport", your finger slipped and typed "0 2 * * * dbexport", and your eye failed to register the error (I always have to look up crontab syntax). Exporting takes place at two o'clock each morning. You have no backup of your database.

Or, maybe you typed the crontab alright, but your assistant changed it. Or maybe your crontabs are perfect, but for some strange reason, there's always a transaction happening at one o'clock in the morning, and it stays open until two, and it causes the exporting to pause for exactly an hour.

These things may be far-fetched, but you have seen stranger things than those happening, haven't you? And even if you somehow take care that none of the above happens, you can't guard against a hundred other things which I can't think of and you can't think of, but will happen.

As these scenarios apply equally whether you use dump or one of the other utilities, you must always verify the backups. And not only verify them; you must also test them. You must keep a recovery plan, and regularly follow it to bring up a copy of your system from scratch. Unfortunately, even this does not always work. You'll certainly know you can bring up your database, but you probably won't know you failed to bring up /home/yourboss/unbelievably_important_file, because you won't know it exists until your boss comes and asks for it after a real data loss.

If you use "restore -C" to verify the backups, you may want to remount the filesystem with the "noatime" option, otherwise "restore -C" will change the atimes of your files.
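
A sketch, assuming the dump is on /dev/nst0 and the dumped filesystem is /dev/hda1 mounted on /home:

    mount -o remount,noatime /dev/hda1 /home
    restore -C -f /dev/nst0                  # compare the dump with the live filesystem
    mount -o remount,atime /dev/hda1 /home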

All this shows that there are issues with backup that are much more important than dump's problems. Backup is one of the trickiest parts of system administration, and although Linus is technically correct when he points out dump's problems, he fails to see that these problems are but a small part of the big mess the administrator has to sort out in the real world.

Will dump be more reliable in the future?

In the future, dump's problems will be solved by snapshots. A snapshot is a way to atomically get a read-only copy of a filesystem, frozen at the time the snapshot is made, while leaving the original still mounted read-write. A pseudo-code example:

    mount -o rw /dev/hda1 /mnt
    ... now you can start using /mnt for whatever you need ...

    # time to make a backup
    snap -create /dev/hda1 /dev/snap1 # create a snapshot of /dev/hda1
    dump 0f /dev/nst0 /dev/snap1      # backup the snapshot instead of the real fs
    snap -delete /dev/snap1

Today some solutions exist for creating snapshots of filesystems which are part of an LVM or an EVMS installation, but for most filesystems you can't have snapshots, as the Linux kernel doesn't support them yet.
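
With LVM, for example, the pseudo-code above translates into something like the following sketch; the volume group, logical volume and snapshot names are assumptions:

    lvcreate -L 1G -s -n homesnap /dev/vg0/home   # create a snapshot of the home LV
    dump 0f /dev/nst0 /dev/vg0/homesnap           # back up the frozen snapshot
    lvremove -f /dev/vg0/homesnap                 # release the snapshot when done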


written by Antonios Christofides, September 2003.

Questions, suggestions, and comments should go to the dump-users mailing list. Despite appearances, I'm not a dump expert; I just wrote this up. But if you really want to email me, my address is anthony@itia.ntua.gr.

Copyright (C) 2003 Antonios Christofides

Permission is granted to reproduce and modify this document, provided that this notice remains intact and that no name is removed from the list of copyright holders (add your name if you make substantial changes).

(It is recommended, but not required, that you send modifications to the author/maintainer instead of creating a modified version yourself.)