On Wed, 2005-28-12 at 07:57 +0800, John Summerfied wrote:
...snip...
Might I observe that the many-partitions layout so often recommended
gives you all the disadvantages of a fragmented drive from day one?
Just plain wrong. Keeping relatively static files separated from
relatively dynamic files can keep the "static" files less fragmented.
And since spool files are very dynamic and command files are usually
very static, it makes a great deal of sense to keep /usr separate
from /var. There are many other security and performance reasons to
keep other directories in separate partitions, not necessarily related
to fragmentation. There is also a good reason for leaving adequate
"head room" on your partitions to alleviate fragmentation.
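A quick way to keep an eye on head room is df; the ~90% threshold below is a common rule of thumb, not something from the original post:

```shell
# How full is the root filesystem? Fragmentation tends to get worse
# as free space runs out; many admins treat ~90% as the warning line.
usage=$(df -P / | awk 'NR==2 {gsub(/%/, "", $5); print $5}')
echo "/ is ${usage}% full"
```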
As mentioned, mail spools are notorious for fragmenting. There are lots
of files, many small, but some often very large. Using hashed mail
directories can help, by keeping the size of the directories themselves
from adding to the problem. Discouraging large mailboxes is also
effective in minimizing fragmentation.
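To illustrate the hashing idea, here is a minimal sketch (the "spool-demo" path and the bucketing scheme are hypothetical, standing in for something like /var/spool/mail):

```shell
# Bucket each user's mailbox by the first letter(s) of the username,
# so no single spool directory grows huge.
user="alice"
bucket="${user:0:1}/${user:0:2}"     # alice -> a/al
mkdir -p "spool-demo/$bucket"
touch "spool-demo/$bucket/$user"
echo "spool-demo/$bucket/$user"      # prints spool-demo/a/al/alice
```

With thousands of users, this keeps each directory small, so directory lookups and expansions stay cheap even as the spool grows.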
Two busy partitions is one too many. In these times of cheap disks and
USB2 enclosures, I'd rather have one partition for everything (except
maybe /boot and maybe other low-use stuff), and if an upgrade is
contemplated, back up /home to a USB drive. At worst, almost anything
can be backed up overnight. According to dd, I can back up /dev/hda (80
GB) to a USB2 disk at 14 Mbytes/sec on my laptop.
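For the curious, a whole-disk image with dd looks something like the commented command below (the device and mount paths are examples, not from the post); the runnable part demonstrates the same copy-and-verify idea on an ordinary file:

```shell
# Whole-disk image to a file on a USB2 drive (example paths; check
# device names with fdisk -l before running anything like this):
#   dd if=/dev/hda of=/media/usb/hda.img bs=1M
# Runnable demo of the same idea on an ordinary file:
printf 'some data\n' > source.bin
dd if=source.bin of=backup.img bs=4k 2>/dev/null
cmp -s source.bin backup.img && echo "image matches source"
```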
Arguably, I should be doing something of the sort regardless. As should
anyone with many computers in their care.
fwiw, I used to use OS/2, and it was IBM's recommendation that one
should not defrag HPFS (which, btw, predates NTFS) partitions, because
HPFS allocates space in bands (take your pad, divide it into eight
columns and you have eight bands), and so takes an initial performance
hit. File expansions are done within the same band where possible,
reducing the impact of further fragmentation. Performance was pretty
uniform up to, I think, about 95% full.
Defragging an HPFS drive would involve putting all the files together
into a single block, and the chances were good that you'd soon find
files occupying extents both inside and outside that block and
consequent bad performance.
I've always assumed that, since the algorithms have existed for a long
time, Linux filesystems are also good in that respect. The fact that no
defragger is included in popular distros supports my (underinformed)
view.
Not sure what point you're making.
Journalling introduces a complication, but its effect depends on where
the journal is. Also, journalling only has an effect when files are
written.
Finally, the ultimate defrag tool is backup and restore. It might not
be necessary, but it won't do any harm either.
That is likely true.
Stuffing a lot of files into a directory is a bad practice,
and Red Hat is well known for it. Check /usr/bin, /etc and
a few others. Many of the files would normally be located
under /usr/local. If you are short of space on a partition,
or are writing a huge file after creating and deleting lots
of small files, then you can get considerable fragmentation.
Copying or archiving the files to a different partition, then
deleting them and copying or restoring them back, will "defrag"
a partition.
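The archive/delete/restore trick can be sketched with tar (the demo/src and demo/spare directories are stand-ins for the fragmented partition and a spare partition or USB disk with enough free space):

```shell
# "Defrag" by archiving elsewhere, deleting, and restoring.
mkdir -p demo/src demo/spare
echo "payload" > demo/src/file1
tar -C demo/src -cf demo/spare/src.tar .   # copy off to the spare area
rm -rf demo/src/*                          # free the original blocks
tar -C demo/src -xf demo/spare/src.tar     # restore; files are rewritten afresh
cat demo/src/file1                         # prints payload
```

On restore, the filesystem gets to allocate each file in one pass, so files that had grown in scattered extents come back largely contiguous.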