a few observations (in my own opinion):
- used % of disk instead of clusters - who cares much about cluster details?
Perhaps having a percentage of disk space used would be useful for spacehogs in some cases, but cluster details are important to people who use other disk management tools, who like to rely on the most accurate measure of disk usage and want a more accurate monitor of changes in system files like the MFT and the Journal.
Sorry, my purpose in using % was to indicate where on the disk the file was going. Do you know whether cluster 43564856 is near the start or the end?
But I suppose a simple decimal would be better, since I mean to indicate a place rather than an amount.
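That decimal is easy to derive (a minimal sketch; the 60,000,000-cluster volume size below is a made-up value, only the cluster number comes from this thread):

```python
# Hypothetical example: express a cluster number (LCN) as a fractional
# position on the disk, so a reader can tell start from end at a glance.
def disk_position(lcn: int, total_clusters: int) -> float:
    """Return where a cluster lies on the disk as a 0.0-1.0 fraction."""
    return lcn / total_clusters

# With an assumed volume of 60,000,000 clusters, cluster 43564856 sits
# roughly 73% of the way in -- nearer the end than the start.
print(round(disk_position(43564856, 60_000_000), 2))  # 0.73
```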
Density, as a measure of the "goodness" for fragmented files, seems highly subjective to me; and we already have in place (and coming in v4) a better measure from the user perspective: Zones.
Using Zones in v3 does not appear to completely solve the problem as you describe it, but you are unlikely to have a file fragmented across 45% of a disk if you have already applied the -a, -f, and -u options as needed to defrag those particular files along with, or after, the rest of the disk.
While Density wouldn't matter much for sequential file access, if the file is being accessed randomly, like a database or a large mail file, then Density would matter a great deal. Ten thousand seeks are faster over 10% of the disk than over 60%.
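That claim can be checked with a back-of-the-envelope simulation, under the assumed model that seek cost is roughly proportional to the distance the head travels:

```python
import random

def avg_seek_distance(span: float, seeks: int = 10_000, seed: int = 1) -> float:
    """Average head travel for random seeks confined to a fraction of the disk.

    Positions are modelled as uniform random fractions of the full disk;
    'span' is the fraction of the disk the file's fragments occupy.
    """
    rng = random.Random(seed)
    pos = rng.uniform(0, span)
    total = 0.0
    for _ in range(seeks):
        nxt = rng.uniform(0, span)
        total += abs(nxt - pos)
        pos = nxt
    return total / seeks

# The expected distance between two uniform points in [0, s] is s/3, so a
# file spread over 60% of the disk costs about 6x the head travel of one
# packed into 10%.
print(avg_seek_distance(0.10), avg_seek_distance(0.60))
```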
Perhaps v4 should allow users to name groups of filetypes (or of files matching other criteria, like size), like:
or whatever the user wants.
In the analysis, the groups can be colored in the display.
Then the user can decide where to try to put them, e.g.:
- .4 of disk
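A rough sketch of what such user-defined groups might look like as a data structure (every group name, pattern, and placement below is invented for illustration, not a real defragmenter's configuration format):

```python
from fnmatch import fnmatch

# Hypothetical user-named groups, each mapped to a target position on the
# disk expressed as a 0.0-1.0 fraction (".4 of disk" becomes 0.4).
groups = {
    "Media":    {"match": ["*.avi", "*.mp3"], "place_at": 0.4},
    "Mail":     {"match": ["*.pst"],          "place_at": 0.1},
    "Archives": {"match": ["*.zip", "*.iso"], "place_at": 0.9},
}

def target_position(filename: str):
    """Return the target disk position for a file, or None if no group matches."""
    for spec in groups.values():
        if any(fnmatch(filename.lower(), pat) for pat in spec["match"]):
            return spec["place_at"]
    return None

print(target_position("holiday.avi"))  # 0.4
```

The analysis pass could then color files by the group they fall into, as suggested above.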
Zones in v4 should allow the user to specify their own areas of greater or lesser Density by specifying the priority of their own files and folders in the defrag/optimization process.
Edit: I do like the idea of the option to run an additional analysis at the end of the defrag process to append to the log.