MyDefrag Forum

Author Topic: How to treat hybrid drives?  (Read 15155 times)
hjacobson
JkDefrag Junior
« on: August 10, 2010, 06:56:01 pm »

I've recently begun using Seagate Momentus XT hybrid drives. These drives combine a standard Seagate Momentus 7200.4 mechanical hard drive with a 4 GB solid-state disk and 32 MB of cache memory. The drive's electronics copy frequently used files (or is it sectors?) to the SSD, from which they then load orders of magnitude faster than from the mechanical platters. The improvement in performance is not instantaneous; the drive builds up file/sector usage history over time before it copies anything to the SSD, and therein lies my uncertainty about whether the common sense of defragmentation applies to hybrid disk drives.

When a disk is defragmented, files are moved around. Defragmentation therefore resets the file/sector usage history, and performance drops until the history is rebuilt and files/sectors are copied back to the 4 GB SSD.

It appears that defragmentation common sense for hybrid drives is the opposite of that for purely mechanical drives: defragmenting a hybrid drive reduces performance, albeit temporarily. I'm inclined to think hybrid drives should be defragmented occasionally, but not as frequently as mechanical drives. I've avoided defragmenting hybrid system disks, at least on a daily or weekly basis, for just this reason.

How to treat hybrid drives?



jeroen
Administrator, JkDefrag Hero
« Reply #1 on: August 11, 2010, 04:47:26 am »

I have no personal experience with such hybrid disks, but I think you are right that defragmenting/optimizing them will mess up the statistics of the flash cache and therefore (temporarily) reduce performance. I have no idea how fast the cache recovers, though; I guess it depends on how the computer is used. If it takes days, then daily defragmentation/optimization is nonsense. But if it takes only an hour or so, then daily defragmentation/optimization is still a good idea, because ultimately the speed will be higher than before the defragmentation/optimization. At the moment, based on no hard data and gut feeling only, I would say that a weekly + monthly defragmentation schedule is probably best for these kinds of hybrid disks, using the standard MyDefrag Weekly and Monthly scripts.
hjacobson
JkDefrag Junior
« Reply #2 on: January 27, 2011, 10:27:38 pm »

Jeroen,

That's exactly what I did. My laptop runs the monthly script but not the daily or weekly script.

After the MyDefrag monthly and weekly scripts run, there's a noticeable lag in performance: the drive performs about the same as the non-hybrid Seagate Momentus 7200.4 drive it replaced in this laptop.

The drive returns to its hybrid performance within a few hours, faster if the laptop is rebooted.

Regards,
Harry

P.S. Regarding whether Seagate's Adaptive Memory deals in files or sectors, I've decided it must be sectors; otherwise Adaptive Memory would be OS-specific, which the specs say it is not.
« Last Edit: January 27, 2011, 10:34:41 pm by hjacobson »
Thaliur
JkDefrag Hero
« Reply #3 on: May 04, 2011, 09:36:02 pm »

I've read a lot about the Momentus drives. Yes, the hard disk firmware just observes sector usage and copies frequently read sectors into the SSD part. Any time one of those sectors is requested, it's loaded from the SSD instead of from the hard disk.

Since I'm thinking about buying one of those for my netbook (and maybe even for my desktop; too bad only notebook-sized disks are available), I've spent some time thinking about how to adapt the MyDefrag scripts for such a drive.

The easiest solution I've found so far would be to check the file change date and exclude from defragmentation all files that were not changed in the last month (or some other period of time).
Maybe once in a while you could do a full optimization, but for daily and weekly use this should be a good start.
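Something like this rough sketch of the selection rule, in plain Python rather than the MyDefrag script language (the 30-day window and the root path are just placeholders, not anything MyDefrag itself defines):

Code:
# List files changed in the last 30 days as defragmentation candidates;
# everything older is left alone so its cached sectors stay put.
import os, time

CUTOFF = time.time() - 30 * 24 * 3600   # "changed in the last month"

def recently_changed(root):
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) > CUTOFF:
                    yield path
            except OSError:
                pass                      # locked or vanished files are skipped

if __name__ == "__main__":
    for path in recently_changed("C:\\"):
        print(path)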

[edit] Actually, I think defragmenting/optimizing at all might be a bad idea (except maybe after large installations or once a month), because I just realised that things like Outlook archives (or any kind of database) usually consist of a lot of unchanged data with a few changes appended to the file, so most of it would come from the SSD, even if it changed recently.
So, I've run out of ideas now.
« Last Edit: May 08, 2011, 11:58:23 am by Thaliur »
Kasuha
JkDefrag Hero
« Reply #4 on: May 11, 2011, 07:06:19 pm »

Quote from: Thaliur
Actually, I think defragmenting/optimizing at all might be a bad idea (except maybe after large installations or once a month), because I just realised that things like Outlook archives (or any kind of database) usually consist of a lot of unchanged data with a few changes appended to the file, so most of it would come from the SSD, even if it changed recently.
So, I've run out of ideas now.
I believe the best strategy is to minimize file moves while still keeping files somewhat compact and separated into zones. A good approximation may be to run just the daily optimization from time to time, but it can be improved even further, e.g. by putting the MFT at a fixed place.
If nothing else, I think it's better than not optimizing at all.
skozzy
JkDefrag Senior
« Reply #5 on: May 20, 2011, 07:58:42 pm »

How about sending an email to the manufacturer to ask for some technical details?

If the drive takes a few hours to build back up to a good performance level, one thing that might help restore the cache faster after a defrag is to re-read the blocks of recently accessed files over and over in a background process. So, after the defrag is finished, build a list of recently accessed files up to the size of the cache, then access those files several times (either copy them to nul or just open, read, and close them). I guess the number of times a sector needs to be re-read is something to find out from the manufacturer.
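Something like this rough Python sketch (the 4 GB budget, the pass count, and the root path are assumptions, and whether the drive's firmware actually sees these reads rather than the OS file cache serving them is exactly the kind of detail to ask the manufacturer about):

Code:
# After a defrag, re-read the most recently accessed files a few times so the
# drive sees them as "hot" again. Reads are discarded, like copying to nul.
import os

CACHE_BUDGET = 4 * 1024**3   # assumed size of the flash cache to refill
PASSES = 3                   # assumed re-read count; the real figure is unknown

def files_by_recent_access(root):
    entries = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                entries.append((os.path.getatime(path), os.path.getsize(path), path))
            except OSError:
                pass
    entries.sort(reverse=True)           # most recently accessed first
    return entries

def warm_cache(root):
    targets, used = [], 0
    for _atime, size, path in files_by_recent_access(root):
        if used + size > CACHE_BUDGET:
            break
        targets.append(path)
        used += size
    for _ in range(PASSES):
        for path in targets:
            try:
                with open(path, "rb") as fh:
                    while fh.read(1024 * 1024):   # read and discard
                        pass
            except OSError:
                pass

if __name__ == "__main__":
    warm_cache("C:\\")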

It will be interesting to see how the experimenting works out.


Thaliur
JkDefrag Hero
« Reply #6 on: May 23, 2011, 12:08:47 pm »

Maybe the drive firmware offers some kind of interface for asking which sectors are mirrored in the SSD part.
If that's the case, MyD could just mark those sectors as unmovable and try to build the corresponding files around them, where possible.

I could imagine that, if such an interface exists, Seagate won't publish that information, since it might be a security risk.
BloodySword
Global Moderator, JkDefrag Hero
« Reply #7 on: May 23, 2011, 01:00:13 pm »

I think that's impossible, because the SSD part acts as a cache and the files are updated frequently. This means the files are copied into the SSD part and back.

Greetings from Germany!
Thaliur
JkDefrag Hero
« Reply #8 on: May 23, 2011, 05:44:31 pm »

Quote from: BloodySword
I think that's impossible, because the SSD part acts as a cache and the files are updated frequently. This means the files are copied into the SSD part and back.
As far as I understood the description on Seagate's homepage, the hybrid drive writes exclusively to the HDD part, not to the SSD. The SSD only contains information that is also present on the HDD.
Based on the Seagate description, there should be no actual data transfer from the SSD to the HDD, only the other way around. So I guess that if MyD could determine the cached sectors and avoid moving the disk sectors that are preloaded onto the SSD, the hybrid drive could keep its optimisation effect.
poutnik
JkDefrag Hero
« Reply #9 on: May 24, 2011, 06:56:03 am »

Quote from: Thaliur
As far as I understood the description on Seagate's homepage, the hybrid drive writes exclusively to the HDD part, not to the SSD. The SSD only contains information that is also present on the HDD. Based on the Seagate description, there should be no actual data transfer from the SSD to the HDD, only the other way around...

I have the same information. HDD writes use only the standard 32 MB DRAM cache, bypassing the 4 GB flash read cache.
That is logical: a small flash memory serving as a write cache would be a disaster for flash lifetime, even if it were SLC.

It can be fast, good or easy. You can pick just 2 of them....
Treating Spacehog zone by the same effort as Boot zone is like cleaning a garden by the same effort as a living room.
William1Will
Newbie
« Reply #10 on: June 06, 2011, 03:13:36 pm »

Really helpful information; hybrid drives are always complicated.

I consider myself a very down to earth person but at the same time I try to live my dreams every moment I can. Without dreams, we are nothing.
mimarsinan
Newbie
« Reply #11 on: June 28, 2011, 09:14:11 pm »

I have used traditional SSDs (both the SLC and the MLC kind), and am now discovering the joys of the hybrid drive.

It's suicide to defrag traditional SSDs: it takes much longer than on a regular drive to complete a defrag cycle (your first clue that something "fishy" is going on), it significantly shortens the life of the drive, I have seen data actually get corrupted, and, last but not least, there is absolutely zero benefit to be had from the defragmentation, since the entire disk area is read at the same speed and there is zero lag in jumping from one sector to another on an SSD (that's why they're so fast, right?).

Now, the Momentus XT is NOT an SSD. It's a mechanical drive with an SSD cache. This means that ALL of the benefits of defragmentation DO apply to the Momentus XT. Case in point: on a partition with more than six million NTFS indexes, chkdsk took 30-45 minutes to run before a full defrag. After a full defrag, which consolidated the MFT and also sorted data by frequency of access, the same chkdsk took about 3-5 minutes.

This is BEFORE the cache had time to rebuild. This isn't the cache optimizing anything yet; it's the pure benefit of the defrag showing itself in action. Instead of the drive head moving randomly all across the drive to seek all the scattered sectors that contained the MFT, the drive read them in what was apparently a more or less sequential read operation, which resulted in chkdsk completing nearly as fast as on a traditional, full-blown, pure SSD.

I think a good defrag to keep the disk data optimized is well worth losing a few hours while the drive re-caches the data into the 4 GB SLC cache. Only so much will fit in your cache - that's the bottom line. Do you really want to lose all defragmentation benefits for everything else on your drive?

I have to add, though, that the OS image currently running on the Momentus XT comes from my traditional SSDs - meaning it hadn't been defragged at all since 2008, when I first transitioned. So I definitely see a MASSIVE performance improvement after defragging a drive that had fragmented relentlessly for the past three years and could never be defragged because of the SSD controller "magic". I don't know how much benefit there is to be had from defragging your drive every week, or even every month - I'd imagine your fragmentation would be a whole lot less than mine.

But I'd definitely say, defrag at least once every three or six months, with full file sorting based on access patterns! It took my Momentus XT about a week (yep!) to complete the initial defrag. You don't want that!
« Last Edit: June 28, 2011, 09:17:50 pm by mimarsinan »
Kasuha
JkDefrag Hero
« Reply #12 on: June 28, 2011, 10:26:47 pm »

Quote from: mimarsinan
...and, last but not least, there is absolutely zero benefit to be had from the defragmentation, since the entire disk area is read at the same speed and there is zero lag in jumping from one sector to another on an SSD.
I can't argue with your personal experience, but I don't agree with this. Every file fragment has a separate MFT record, so if your drive becomes very fragmented your MFT also becomes very large, and reading and analyzing the MFT records needed to read a file can become a significant part of the process.
That doesn't mean you need to defragment an SSD frequently; it just means occasional defragmenting may be a good idea.
Also, an occasional disk rewrite may help certain SSD controllers cope better with disk aging - but that's very model-specific.
mimarsinan
Newbie
« Reply #13 on: June 29, 2011, 02:13:10 am »

Quote from: Kasuha
Every file fragment has a separate MFT record, so if your drive becomes very fragmented your MFT also becomes very large, and reading and analyzing the MFT records needed to read a file can become a significant part of the process.
That doesn't mean you need to defragment an SSD frequently; it just means occasional defragmenting may be a good idea.
Also, an occasional disk rewrite may help certain SSD controllers cope better with disk aging - but that's very model-specific.

No. This is because, regardless of how fragmented your SSD may be, access times NEVER slow down. Again, the SSD accesses EACH part of the disk at the same speed. If you bench SSDs with, for example, HDTune, you will see the access time is as low as 0.01 msec - for all parts of the disk. Try that on a regular disk: you are lucky if you get anything at all below 0.17 msec. The same goes for jumping from one part of the disk to the other.

That's why SSDs are so fast - tens of times faster than ordinary drives. Because there is virtually zero access time, it is as if they are running at infinite RPM vs. the finite RPM of spinning drives.

No matter how large or scattered the MFT is, it will always be read at the same speed from the "surface" of the SSD. Nothing is spinning, and no drive head needs to dance all over the place to access the MFT data. What you think of as "reading and analyzing MFT records" is really just the drive head going crazy all over the mechanical disk platters, slowing your access down dramatically.

For compatibility reasons, SSDs present the OS with a logical disk structure similar to a mechanical HD's, but the reality is that the controller is running the show and who knows where any given piece of data is physically stored. This is why defragmenting SSDs is pointless and can even lead to physical harm. Even if the controller never lied about which data is physically stored on which part of the "spinning platter" (say, which NAND cell), having the data contiguous would still be pointless, because all parts of the SSD "surface" (the NAND cells) are accessed at the same speed. There is no mechanical platter that has to spin under a drive head before data can be read.

This is all for traditional SSDs, though. Hybrid drives don't count as SSDs for the above analysis, although you can do some interesting research. For example, repeat the HDTune disk speed test a few times; you might see the access time drop to 0.03 msec, at which point you know the areas HDTune is reading have been cached inside the 4 GB NAND buffer. Of course, just like defragging your hybrid drive, that might have forced out other data.
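A rough stand-in for that kind of access-time measurement, sketched in Python (the test file name, read count, and block size are placeholders, and the OS file cache can still mask the drive, so the absolute numbers will only be indicative, not HDTune-accurate):

Code:
# Time small random reads inside one large existing file and report the
# average latency, loosely imitating an access-time benchmark.
import os, random, time

TEST_FILE = "C:\\testfile.bin"   # any large, existing file
READS = 200
BLOCK = 4096

def average_access_time(path):
    size = os.path.getsize(path)
    total = 0.0
    with open(path, "rb", buffering=0) as fh:   # unbuffered at the Python level
        for _ in range(READS):
            offset = random.randrange(0, max(size - BLOCK, 1))
            start = time.perf_counter()
            fh.seek(offset)
            fh.read(BLOCK)
            total += time.perf_counter() - start
    return total / READS

if __name__ == "__main__":
    print("average access time: %.3f ms" % (average_access_time(TEST_FILE) * 1000))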
quanthero
JkDefrag Hero
« Reply #14 on: June 29, 2011, 04:46:32 am »

Mimarsinan, I respectfully disagree with the idea that fragmentation has no impact on performance. Performance degrades not because reading the data itself is slower, but because the file system is less efficient when files are fragmented (I think that's what Kasuha was trying to explain). I'll explain below.

NTFS is an extent-based file system. An extent is a block of contiguous clusters belonging to a file. Pointers to the file's data are stored inside the MFT, and each extent is represented by a [starting cluster, extent length] pair. If a file on an NTFS volume becomes fragmented enough (tens of fragments), the pointers to its location no longer fit into its MFT record. In that case an attribute called $ATTRIBUTE_LIST is created to hold records pointing to the file's locations, and the original MFT record stores a pointer to that $ATTRIBUTE_LIST. If the file then becomes even more fragmented, the space allocated for pointers inside the $ATTRIBUTE_LIST fills up, and additional records are created to hold the remaining pointers.
So, for sufficiently fragmented files, the file system has to walk through the MFT and all these $ATTRIBUTE_LIST entries to collect the file's locations before it can access the file. That means reading a lot of metadata just to reach the file (if the file were unfragmented, scanning a single MFT record would suffice). Things therefore slow down regardless of how low the SSD's access time is, not to mention the CPU and memory overhead of processing all these extents.
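Here is a toy model of that metadata overhead in Python (the record sizes and overflow threshold are illustrative assumptions, not the real NTFS on-disk limits):

Code:
# Model each fragment as one extent pointer and count how much mapping
# metadata must be scanned before the file's data can be read at all.
EXTENT_RECORD_BYTES = 16     # assumed size of one [start cluster, length] pointer
MFT_RECORD_BYTES = 1024      # assumed size of one MFT/attribute-list record

def metadata_bytes(fragments):
    extent_bytes = fragments * EXTENT_RECORD_BYTES
    if extent_bytes <= MFT_RECORD_BYTES:
        return MFT_RECORD_BYTES                      # everything fits in one MFT record
    overflow = extent_bytes - MFT_RECORD_BYTES
    extra_records = -(-overflow // MFT_RECORD_BYTES) # ceiling division
    return MFT_RECORD_BYTES * (1 + extra_records)

for frags in (1, 10, 100, 1000, 10000):
    print("%6d fragments -> %8d bytes of mapping metadata" % (frags, metadata_bytes(frags)))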

Also, even good SSDs suffer from poor random write performance (because a NAND flash block has to be erased before it can be written). Consolidating free space at the file system level (it doesn't matter how the free space is distributed inside the SSD itself) lets the OS write in larger chunks, because it no longer has to scatter writes across tiny free-space gaps.
« Last Edit: June 29, 2011, 04:50:48 am by quanthero »