Regarding an HDD: in theory, if you had a folder full of many tiny files and the partition was properly defragmented, they would all be stored contiguously, would they not?  And as such, if you were to copy or move the entire folder, then despite the fact that it technically contains many tiny files, and would thus normally be subject to the terrible random I/O performance of HDDs, you could theoretically read the entire block of data that represents the contents of the folder sequentially, for a massive performance improvement, could you not?  If anyone knows better, I'd like to hear why this wouldn't be the case, or if it is correct, why such an optimization hasn't been created by now.
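
For what it's worth, this is checkable: on Linux you can ask the filesystem where each file's data physically starts.  Below is a minimal sketch, assuming the `filefrag` tool from e2fsprogs is installed and that its `-v` output matches the usual format (which can vary between versions); it prints a folder's files in physical-block order, with the gaps between them.  Small gaps would mean the folder's contents really could be read near-sequentially; large ones mean each file is contiguous on its own but the set is scattered.

    #!/usr/bin/env python3
    # Rough Linux-only check of where the files in a folder physically
    # live on disk.  Shells out to `filefrag -v` (e2fsprogs) and pulls
    # the first physical block of each file; the expected output format
    # is an assumption, so adjust the regex if your version differs.
    import re
    import subprocess
    import sys
    from pathlib import Path

    # Matches the first extent row printed by `filefrag -v`, e.g.
    #    0:        0..       0:      34816..     34816:      1: ...
    EXTENT_RE = re.compile(r"^\s*\d+:\s*\d+\.\.\s*\d+:\s*(\d+)\.\.")

    def first_physical_block(path):
        """Return the physical block where the file's first extent starts."""
        out = subprocess.run(["filefrag", "-v", str(path)],
                             capture_output=True, text=True)
        for line in out.stdout.splitlines():
            m = EXTENT_RE.match(line)
            if m:
                return int(m.group(1))
        return None  # unreadable file, or unexpected output format

    def main(folder):
        blocks = []
        for p in sorted(Path(folder).iterdir()):
            if p.is_file():
                blk = first_physical_block(p)
                if blk is not None:
                    blocks.append((blk, p.name))
        blocks.sort()  # physical order on the platter, not name order
        prev = None
        for blk, name in blocks:
            gap = "" if prev is None else "  (gap: %d blocks)" % (blk - prev)
            print("%12d  %s%s" % (blk, name, gap))
            prev = blk

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else ".")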

  1. Sauron

    Not necessarily, because defragmentation only guarantees that individual files are stored contiguously, not that they are all stored next to each other.

  2. vanished

    I might be misremembering, as it was over 20 years ago now, I suppose, but I could have sworn the tool used to talk about fragmented files and folders, back when it actually showed the details of what it was doing.  Regardless, wouldn't it make sense to try to keep all the files within a folder together?  They are potentially more likely to be accessed together.

  3. Sauron

    Quote

    Regardless, wouldn't it make sense to try to keep all files within a folder together?

    I could be wrong, but as far as I know defragmenting utilities don't do this because it takes longer for arguably no benefit.  Reading all the files in a given folder contiguously isn't a very realistic scenario (if you're doing that often, they should probably be a single file, or in an archive; see the sketch below).  Making every file contiguous with its neighbours on the drive would also take a lot of time, since you might have to copy every single file (and sort them to respect the folder structure), and it would place a lot of unneeded stress on the platters and heads.
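
    For illustration, here is a minimal Python sketch of the archive idea, with made-up paths: packing the folder into a single uncompressed tar file up front turns a later copy or move into one large sequential read instead of thousands of tiny random ones.

        import tarfile
        from pathlib import Path

        def pack_folder(folder, archive):
            # Plain "w" mode writes an uncompressed tar: the goal is one
            # contiguous file on disk, not a smaller one, and skipping
            # compression keeps packing and unpacking cheap.
            with tarfile.open(archive, "w") as tar:
                tar.add(folder, arcname=Path(folder).name)

        def unpack_folder(archive, dest):
            with tarfile.open(archive, "r") as tar:
                # filter="data" (Python 3.12+, backported to some older
                # releases) refuses unsafe archive members; drop the
                # argument on versions that lack it.
                tar.extractall(dest, filter="data")

        # Example usage with hypothetical paths:
        pack_folder("many_tiny_files", "many_tiny_files.tar")
        unpack_folder("many_tiny_files.tar", "restored")

    The tar file itself can still fragment, of course, but then a defragmenter only has to make that one file contiguous rather than thousands.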

  4. TopHatProductions115

    Wouldn't this be considered an FS-level HDD optimisation? 

  5. vanished

    Perhaps, but I could imagine it being done at the HDD firmware level, or even by the OS itself.  It was probably naïve of me to think folders would already be laid out like this, and thus that the optimization should be common and easy to benefit from; I can see that now.  That said, I still think such an optimization would be nice.  The opportunity to use it might not come up often, and it might only end up applying on a large scale, or by chance to files that seemed unrelated, but when and if it could be done it would certainly help.
