Thanks for sharing your idea, I appreciate it. Placing a zone (or zones) at the end of the disk is a long-standing request from many users. If it were easy I would have implemented it a long time ago... It will come sometime in the future.
The only difficulty you currently have is that you manage a single counter/position for the zone start (which is increased after processing each zone).
Just add a second counter (starting from the end of the disk) that is decreased after processing zones allocated from the end of the disk, instead of assuming that files in the current zone can be allocated all the way up to the end of the disk.
Make sure that the MakeGap() primitive also allows specifying gaps from the end of the disk.
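For illustration, here is a minimal C sketch of the two-cursor idea, assuming zones are tracked by logical cluster numbers; the names (ZoneCursors, AllocZoneFromStart, AllocZoneFromEnd) are placeholders, not MyDefrag's actual internals:

```c
#include <stdint.h>

/* Hedged sketch: hypothetical names, not MyDefrag's real data structures. */
typedef struct {
    uint64_t ZoneBegin;   /* next free LCN when allocating zones from the start of the disk */
    uint64_t ZoneEnd;     /* first LCN past the free area when allocating from the end      */
} ZoneCursors;

/* Reserve 'Clusters' for the next zone anchored at the start of the disk. */
static uint64_t AllocZoneFromStart(ZoneCursors *z, uint64_t Clusters) {
    uint64_t Lcn = z->ZoneBegin;
    z->ZoneBegin += Clusters;     /* increased after processing, as today                */
    return Lcn;
}

/* Reserve 'Clusters' for the next zone anchored at the end of the disk. */
static uint64_t AllocZoneFromEnd(ZoneCursors *z, uint64_t Clusters) {
    z->ZoneEnd -= Clusters;       /* decreased after processing zones placed at the end  */
    return z->ZoneEnd;            /* the zone occupies [ZoneEnd, ZoneEnd + Clusters)     */
}
```

MakeGap() would then just need an extra flag (or a signed anchor) telling it which of the two cursors the gap grows from.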
Also, I strongly suggest that, when moving files out of a location, they not only be moved to after the expected end of the zone before being relocated later: allow files to be moved out temporarily into other existing free areas BEFORE the current location (including the existing gaps), notably because this is a faster part of the disk.
Note also that gaps between zones absolutely DON'T NEED to be emptied before processing the next zone: just leave the files there until they are caught in a later pass. Clearing gaps preemptively is a complete waste of time; you JUST need to free the area where you'll place the final selected files, and you don't need to do even that for FastFill().
These prior gaps are valuable fast workspaces for completing the rest of the defragmentation: they are faster than the end of the disk, where files are currently moved temporarily during a full defragmentation. So instead of looking for free space starting at the end of the zone, look for free space starting from the start of the disk, or possibly within the 16 megabytes (or max chunk size?) before the current position of the file to move out. If the free area is beyond this threshold, the I/O access time will typically be identical regardless of the distance, but choosing a free area near the beginning of the disk has the benefit of faster throughput, notably when defragmenting the large files of the last zone for space hogs.
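As a hedged illustration of that "move out toward the start of the disk" policy (the free-run list and PickTempRun are hypothetical, and the ~16 MB figure is the threshold mentioned above):

```c
#include <stdint.h>
#include <stdbool.h>

#define SEEK_THRESHOLD_CLUSTERS ((16ULL * 1024 * 1024) / 4096)  /* ~16 MB in 4 KB clusters */

typedef struct { uint64_t Lcn; uint64_t Length; } FreeRun;

/* Hedged sketch: pick a temporary destination for a fragment of 'Needed' clusters.
 * 'Runs' is assumed to be sorted by Lcn (ascending).  Prefer any run that starts
 * before the fragment's current position (the faster part of the disk), and accept
 * runs slightly past it as long as they stay within the seek threshold. */
static bool PickTempRun(const FreeRun *Runs, int Count, uint64_t CurrentLcn,
                        uint64_t Needed, uint64_t *OutLcn) {
    for (int i = 0; i < Count; i++) {
        if (Runs[i].Length < Needed) continue;
        if (Runs[i].Lcn < CurrentLcn ||
            Runs[i].Lcn <= CurrentLcn + SEEK_THRESHOLD_CLUSTERS) {
            *OutLcn = Runs[i].Lcn;   /* earliest suitable run = fastest throughput */
            return true;
        }
    }
    return false;                    /* fall back to the end-of-disk workspace */
}
```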
The default gap size (about 0.1% of free disk space, i.e. typically around 200MB on modern disks) can fit the very large majority of files (including most space hogs like photos, MP3...), and it can certainly fit at least the large chunks of very large files (like DivX videos, ISO images for CDs/DVDs, virtual hard disk images used by Virtual Server or Virtual PC, or system backup files).
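For concreteness, a tiny sketch of that default-gap arithmetic (the helper name is made up):

```c
#include <stdint.h>

/* 0.1% of the free space, as described above: e.g. ~200 GB free -> ~200 MB gap. */
static uint64_t DefaultGapBytes(uint64_t FreeBytes) {
    return FreeBytes / 1000;
}
```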
For now, MyDefrag takes too much time to complete the defragmentation of very large files; in fact, it does not use the max chunk size appropriately. It is certainly better to defragment and move them chunk by chunk, as if they were already split into separate files, instead of trying to move large files in full: very large files can tolerate moderate fragmentation as long as each fragment is at least the max chunk size. These chunks will be coalesced only if there is space to make them contiguous, and this will occur naturally (it should not be a blocking situation).
Suppose that the max chunk size is 128MB (or a multiple of it; note that 128MB has already been noted as a good size, because a chunk of that size exactly fills one 4KB cluster of the NTFS allocation bitmap). Such chunks should then be aligned to this multiple to minimize I/O on the NTFS bitmap and reduce memory page swaps on this bitmap: in order to minimize operations on the NTFS bitmap, most bitmap clusters should be fully allocated, reducing the partially allocated bitmap clusters to a minimum.
That's why I've personally rounded up ALL zone start positions to 128MB multiples (i.e. 32768 clusters), instead of just using the default (0.1% of free space): effectively, this reduces the memory footprint of the Windows disk I/O kernel, with fewer swaps!
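A quick sanity check of the 128MB figure, plus a sketch of the rounding (RoundZoneStart is a hypothetical helper): with 4KB clusters, one 4KB page of the NTFS allocation bitmap holds 4096 × 8 = 32768 bits, one bit per cluster, and 32768 clusters × 4KB = 128MB, so zones and chunks aligned to 128MB boundaries touch only whole bitmap pages.

```c
#include <stdint.h>

#define CLUSTER_SIZE       4096ULL                 /* typical NTFS cluster size          */
#define BITMAP_PAGE_BITS   (CLUSTER_SIZE * 8)      /* bits in one 4 KB bitmap page       */
#define ALIGN_CLUSTERS     BITMAP_PAGE_BITS        /* 32768 clusters = 128 MB            */

/* Round a zone start (in clusters) up to the next 128 MB multiple, as described above. */
static uint64_t RoundZoneStart(uint64_t Lcn) {
    return (Lcn + ALIGN_CLUSTERS - 1) / ALIGN_CLUSTERS * ALIGN_CLUSTERS;
}
```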
On my disk (rated 5.9 in WinSAT), writing about 1GB takes about 2 seconds. There's no benefit in having larger chunk sizes.
Suppose now that a very large file takes 150GB on a 250GB partition with about 50GB of free space. You can defragment this file as if it were 150 distinct files of 1GB each. As there is 50GB of free space, the default gap (0.1% of free space rounded up to a 128MB multiple) will be 128MB. But with gaps rounded up to the next 1GB multiple, you only have to move about 150 fragments up and down, each taking about 2s per move, so this file will be defragmented in about 5 minutes. With the current processing it actually takes more than 30 minutes: the main reason is that such a large file is defragmented entirely by placing most of its data at the end of the disk (where it is considerably slower to write and read).
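Spelled out as a rough estimate (purely illustrative, using the ~2s per 1GB figure measured above and ignoring seek time):

```c
/* Rough estimate of chunk-by-chunk defragmentation time, using the ~2 s per 1 GB
 * move measured above (seek time and read-back ignored; purely illustrative). */
static double EstimateDefragSeconds(double FileGB, double ChunkGB, double SecsPerGB) {
    double Moves = FileGB / ChunkGB;      /* 150 GB / 1 GB = 150 moves            */
    return Moves * ChunkGB * SecsPerGB;   /* 150 * 1 GB * 2 s/GB = 300 s = 5 min  */
}
```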
Note also: the default max chunk size is MUCH TOO LARGE (each I/O at this size takes several minutes to complete, even on fast disks): MyDefrag should be able to complete ALL of its I/O operations in no more than 5 seconds. This I/O completion time SHOULD BE MONITORED:
* when the I/O time is excessive, reduce the max chunk size (see the sketch after this list). This is currently not checked, and it causes MyDefrag to remain in memory and keep running for several minutes even after it has been closed: this prevents, for example, terminating the Windows session and rebooting when we need the PC for something more urgent.
* the other reason is that MyDefrag should be able to work more transparently in the background. The I/O completion time is a key factor in keeping the PC responsive for other tasks (otherwise we will NOT let it run in the background).
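A hedged sketch of the monitoring suggested in the first bullet (all names are placeholders: MoveChunk() stands for whatever routine actually relocates the data, and the 5-second budget is the one proposed above): time each chunk move and halve the max chunk size whenever a move exceeds the budget.

```c
#include <windows.h>
#include <stdint.h>

#define IO_BUDGET_SECONDS 5.0                      /* completion target proposed above   */
#define MIN_CHUNK_BYTES   (16ULL * 1024 * 1024)    /* arbitrary illustrative floor       */

/* MoveChunk() stands in for whatever routine actually relocates 'ChunkBytes' of
 * data; it is a placeholder, not a real MyDefrag function. */
extern void MoveChunk(uint64_t ChunkBytes);

static uint64_t MoveWithBudget(uint64_t ChunkBytes) {
    LARGE_INTEGER Freq, T0, T1;
    QueryPerformanceFrequency(&Freq);
    QueryPerformanceCounter(&T0);
    MoveChunk(ChunkBytes);
    QueryPerformanceCounter(&T1);

    double Elapsed = (double)(T1.QuadPart - T0.QuadPart) / (double)Freq.QuadPart;
    if (Elapsed > IO_BUDGET_SECONDS && ChunkBytes / 2 >= MIN_CHUNK_BYTES)
        ChunkBytes /= 2;                           /* I/O too slow: shrink the next chunk */
    return ChunkBytes;                             /* caller uses this as the new max     */
}
```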
So I suggest that you change your default script to use 128MB as the max chunk size instead of 1GB. (Of course users can change it by editing the script, but most will forget to do that when upgrading from one version to the next, since the upgrade will ignore their existing customized scripts...)