Why? Yes, for the current implementation...
But what if wrapping status does not depend on temporaries?
And what if wrapping is done only when it is appropriate to do so?
And, once done, it is taken as appropriate to keep?
Why should it be a benefit to wrap around temporary files? After deletion there are GAPS which WILL DEGRADE PERFORMANCE!
For unmovable files, this approach is a good idea. But not for temporary files that are in the way.
There is no general way to tell whether a file is temporary.
The performance of accessing a wrapped-around file does not depend on what it is wrapped around, and it does not decrease when temporaries/unmovables are deleted or become movable.
Gaps that are small compared to the whole file do NOT affect access performance, whatever the gaps are, so wrapping could be allowed unconditionally.
Sure, if such a small obstacle is movable, it is better to move it. If not, it does not matter.
Unless the file itself is small: small files should never be wrapped around anything.
Gaps that are big compared to the whole file DO affect access performance, whatever the gaps are.
Unless the fragments are big enough: a file whose fragments are all bigger than, let's say, 64 MB can be wrapped around anything.
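The rule above can be sketched in a few lines. This is only an illustration of the argument in this thread, not any real defragmenter's API: the function name and the 64 MB threshold are taken from the discussion, and the seek/transfer figures in the comments are typical HDD ballpark numbers, not measurements.

```python
# Sketch of the wrapping rule discussed above (illustrative only).
# Idea: one extra seek (~10 ms) is negligible next to the transfer
# time of a 64 MB fragment (~0.4 s at 150 MB/s), so wrapping is
# harmless as long as every fragment stays large.

MIN_FRAGMENT_BYTES = 64 * 1024 * 1024  # "let's say 64 MB"

def may_wrap(file_size: int, smallest_fragment: int) -> bool:
    """Allow wrapping a file around an obstacle only if every
    resulting fragment stays large relative to the seek cost."""
    # Small files should never be wrapped around anything.
    if file_size <= MIN_FRAGMENT_BYTES:
        return False
    # Gaps are harmless only while each fragment is big enough
    # that the extra seek is a tiny fraction of its transfer time.
    return smallest_fragment >= MIN_FRAGMENT_BYTES

# A 10 GB file split into 128 MB fragments may be wrapped;
# a 4 MB file may not.
print(may_wrap(10 * 1024**3, 128 * 1024**2))
print(may_wrap(4 * 1024**2, 4 * 1024**2))
```

The threshold is a judgment call, of course; the point is only that the decision can be made per file from its size and its smallest fragment, with no need to know whether the obstacle is temporary.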
How do you want to manage this? By telling Windows "please write only at the partition end for a while, or better, postpone all writes"?
It could significantly affect system performance, which is already partially affected by the disk optimization itself.
And as for not writing to disk for hours: without a UPS (and partially even with one) that would be playing Russian roulette.
A server must be paused while defragmenting. There are times when optimization or maintenance is needed; then a monthly defrag can be done.
I know servers where an on-demand full AV background scan lasts 10-14 days. How long would a monthly defrag last, even in the foreground?
There are systems that can barely afford a few hours of outage, and that time is reserved for maintenance more important than a monthly optimization.
I guess such an approach would give much less gain than it would cost (figuratively, and often literally).
But that is what a multitasking environment is about: multiple people writing on the same paper.
I guess you are not going back to the good old DOS age, with exclusive access to all system resources.
This has nothing to do with multitasking. While optimizing a system, there must not be any write access by other programs. That's my opinion.
A special file system filter could block all write access while defragmenting, enabled for example by a SuspendOS(TillFinished) command.
I am not talking about programs, but processes. All of Windows is about multitasking.
Even Windows' own processes would not be happy with that. And you would have to rewrite Windows from scratch.
I am afraid your approach would have serious consequences and impacts while giving a very slight gain, one that would disappear with subsequent disk activity.
An often neglected part of system optimization is improving system performance without limiting the system itself.
If the system has to be suspended, that is in fact a degradation of the overall system optimization.