Why are we using NTFS in 2020?

I have done thousands of chkdsk /r runs on Windows systems, and they take forever: four hours minimum on average, upwards of 12+ hours for 12 TB drives. The equivalent on Linux with ext4 takes a quarter of the time, and I bet if someone created a better file system for Windows, that could be halved on current hardware.

Do you know what the /r switch does? And if so, do you understand that its speed is largely determined by actual disk speed?

I don't know why anyone would ever run chkdsk /r anyway. If you're in a situation where you feel it's needed, that more or less indicates the drive has used up all its spare sectors, and/or you'd be better off cloning the disk to recover the data.
 
TRIM is a drive thing, not a filesystem thing, if I recall.

TRIM is a combination of the drive (or Storage Spaces pool) and the filesystem working together. TRIM is the filesystem telling the drive, "I no longer need the data at locations X, Y, and Z. Feel free to destroy the data at these locations as part of your optimization process." It needs the filesystem to send that information, and the drive to know how to use it (if at all).
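That handshake can be sketched in a few lines of Python. ToySSD and ToyFilesystem are made-up names for illustration, not any real API; the point is just that a delete only updates filesystem metadata, and the TRIM hint is what lets the drive learn the blocks are garbage:

```python
# Toy model of the TRIM handshake between a filesystem and an SSD.

class ToySSD:
    def __init__(self, num_blocks):
        # Which logical blocks the drive still considers "in use".
        self.valid = set(range(num_blocks))

    def trim(self, blocks):
        # The drive is now free to erase these blocks during garbage
        # collection instead of copying their stale contents around.
        self.valid -= set(blocks)

class ToyFilesystem:
    def __init__(self, drive):
        self.drive = drive
        self.allocated = {}  # filename -> list of block numbers

    def write(self, name, blocks):
        self.allocated[name] = list(blocks)

    def delete(self, name):
        # Deleting a file only updates filesystem metadata; without the
        # TRIM hint the drive would never learn these blocks are free.
        freed = self.allocated.pop(name)
        self.drive.trim(freed)
        return freed

ssd = ToySSD(num_blocks=8)
fs = ToyFilesystem(ssd)
fs.write("a.txt", [0, 1, 2])
fs.delete("a.txt")
# Blocks 0-2 are now fair game for the SSD's garbage collector.
```

If the drive ignores the hint (old SSDs, some USB bridges), nothing breaks; the drive just does more copying than it needs to.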

They can take a while, and they shouldn't even be necessary. These [chkdsk-like] kinds of tasks should be built into the filesystem automatically, practically at the driver level.

Well, what happens if the repair attempt fails? There are multiple cases where running fsck or chkdsk further damages the filesystem instead of repairing it. There is no way to work around this; it is not possible to "just build a better freaking chkdsk". Any modification of a filesystem in an inconsistent state carries an inherent risk of making things worse.
  • On NTFS, CHKDSK with offlining allows you to schedule exactly when you want to take the risk of destroying the entire filesystem. You can even make a backup of what is still readable beforehand.
  • On ReFS, the driver will attempt a repair on first access. If the attempt crashes the entire filesystem, that's it: you don't get a chance to decline or cancel the attempt. Which seems like the option you describe, actually.
  • On ZFS, there is no CHKDSK equivalent. In the unlikely event something goes wrong, there is nothing you can do about it.

However, if backups are available, the three options are generally equivalent: basically, let the damn thing crash and then restore from backup.
 
The problem with chkdsk is that its definition of "fixed" and yours might not agree - much like your definition and your dog's might not agree. This is presumably most pronounced when you're trying to do data recovery, because (oversimplified) chkdsk's target is "no structure errors", and if deleting files or directory entries clears those structure errors, then hey, it's fixed! That's why you want to be working against a drive image, not the original drive.
 
This is presumably most pronounced when you're trying to do data recovery, because (oversimplified) chkdsk's target is "no structure errors" and if deleting files/directory entries clears those structure errors then hey, it's fixed! That's why you want to be working against a drive image not the original drive.

If you have ReFS-style recovery on first access (same with NTFS SpotFix for minor repairs), there is no original drive. The first time you learn there is a problem is after the repair attempt has already completed, to whatever definition of "repair" the driver has.
 
As I've not had any issues with NTFS, I personally have no reason to criticize it...(yet)

The only issues I've had are with customers' actual failing individual drives, not with the file system.
 
I don't know why anyone would ever run chkdsk /r anyway

It's the only way I know of to reset the dirty bit on a drive once it's been tripped. If someone else has a suggestion other than manually editing the boot sectors or hex strings, I'd like to hear it.
 
It's the only way I know of to reset the dirty bit on a drive once it's been tripped. If someone else has a suggestion other than manually editing the boot sectors or hex strings, I'd like to hear it.

I don't think you need to run with the /r switch for that, though; /f will do?

You can disk-edit the dirty bit, though what to edit depends on the filesystem: for FAT it's in the boot sector if I remember correctly, for NTFS it's in $Volume(?). You can also suppress the check-disk run even when the dirty bit is set using chkntfs /x <drive letter>:, see https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/chkntfs.
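For the FAT case, the flag is a single bit in the boot sector, and a disk editor just flips it. A toy Python sketch of what that edit amounts to, under my recollection of the on-disk layout (flags byte at offset 0x41 for FAT32, 0x25 for FAT12/16, dirty = bit 0) - verify against the real format before touching a live volume, and note NTFS keeps its equivalent in $Volume's $VOLUME_INFORMATION attribute, which is not shown here:

```python
# Assumed offsets for the NT flags byte in a FAT boot sector (verify!).
FAT32_FLAGS_OFFSET = 0x41   # FAT12/16 would be 0x25
DIRTY_BIT = 0x01            # bit 0 = volume is dirty

def is_dirty(boot_sector):
    """Check the dirty bit in a 512-byte boot sector buffer."""
    return bool(boot_sector[FAT32_FLAGS_OFFSET] & DIRTY_BIT)

def clear_dirty(boot_sector):
    """Clear the dirty bit in place (what the disk editor would write)."""
    boot_sector[FAT32_FLAGS_OFFSET] &= ~DIRTY_BIT & 0xFF

# Simulate an unclean shutdown on a synthetic boot sector:
sector = bytearray(512)
sector[FAT32_FLAGS_OFFSET] |= DIRTY_BIT
assert is_dirty(sector)
clear_dirty(sector)
assert not is_dirty(sector)
```

In practice you would read the real sector 0 of the partition, flip the bit, and write it back - which is exactly why chkdsk /f (or fsutil dirty query to just inspect it) is the saner route on a live system.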

But, is it wise?
 
Have you ever just opened up cmd.exe and run chkdsk /?

chkdsk by itself just checks the disk; it doesn't fix anything.
chkdsk /f tells it to fix the errors it finds.
chkdsk /r actually implies /f, but tosses in that hugely time-consuming surface scan.

I've never used /x. But ALL of this is explained if you run /?

Which brings me back to...
 
I had the same initial response of "why scan the drive for errors with /r?" as other folks, but I'll note that now the description for /r is "Locates bad sectors and recovers readable information (implies /F, when /scan not specified)." That sounds to me like it could be a different behavior.

Edit: The Windows Server documentation describes /f as fixing logical errors and /r as attempting to address physical issues. Some of this looks like it's from the New NTFS Health Model introduced with Windows 8. I can see reasons to run /r, but I also think that if you have cause to believe /r is merited then you should image the drive.
 
Yep, if you are in a situation where /r is needed, you're in a place where the drive is failing and it's time for it to go; there are a few rare circumstances otherwise. Also, it's hard to call something "new" when it's almost a decade old now. All of my comments were based on the assumption that we were using a version of chkdsk made this side of the decade marker... if anyone's brain is still stuck before that point, and they are on this board...

Well, it's tar and feather time to be honest. There's being a bit behind, and then there's being a dinosaur!

Saying that, I do feel it, though... it's hard to unlearn old junk. That's probably the hardest part of this job. And a TON of the new guys just never had the experience with the old systems to know how the new ones actually work. All the experience working with hardware platforms in general is fading fast, and in this VM/SaaS-happy world, it's only going faster.
 
Right, but when I think of "Chkdsk /r" I think "Scan the entire disk for bad sectors" not "Attempt to recover data from in-use bad sectors."

I honestly can't remember the last time I used /r, almost certainly not since the days of XP or maybe even 2000.
 
Same here, other than occasionally running it just to confirm a drive is dead - when I'm being lazy, the data isn't important, and I just want to burn some time.
 
One machine I got in showed 98% fragmentation on a 4 TB mechanical drive and took over 10 minutes to load. After spending 5 hours defragging the drive, boot time dropped to 25 seconds. Some machines with NTFS, especially ones used for video editing, slow right down without monthly defragging.

Just hoping that someone creates a much better filesystem that is supported in Windows and that supports tiered hierarchical storage, to make appropriate use of solid-state disks.
 
One machine I got in showed 98% fragmentation and took over 10 mins to load

You have to be doing something really special to manage something like that, and doing it over a pretty long period, since most of what's required for startup isn't going to be moving anyway. Moving the Windows system files needed for startup to be contiguous with the rest of Windows would make a difference, but most of those files aren't replaced that often.

My immediate suspicion if faced with something like that would be that it was something that was upgraded to Windows 10 (so the Win10 install was scattered on the disk), used heavily after the Windows.old directory was removed (filling the now-empty "premium" space where Windows was originally installed) and probably had at least one Win10 feature upgrade to further scatter Windows files all over the drive.
 
Yeah, and that "really special" translates to "really stupid". That level of fragmentation only happens when you're using one of those minimal builds of Windows that rather stupidly turned off the automated weekly defrag jobs.

I suppose a stock install can do that too, if someone killed defrag manually... but far more frequently I run into that sort of thing from "professionals", doing junk at home they don't have any business doing. That's pizza tech damage...

Oh, and one more thing... a full boot requires almost half an hour on any system with a platter; 10 minutes to desktop is NORMAL for most systems in that class. The system in question must be on a platter, because SSDs don't care about fragmentation! There's no head to move to the next block of clusters; the thing just reads them as needed.
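The head-movement argument can be put in rough numbers with a toy timing model. The seek and transfer constants below are illustrative assumptions, not measurements from any real drive:

```python
# Back-of-the-envelope model of why fragmentation hurts platter drives
# but not SSDs: one seek per contiguous run, fixed cost per block read.

AVG_SEEK_MS = 9.0    # assumed average seek for a 7200 rpm platter drive
TRANSFER_MS = 0.05   # assumed per-block sequential transfer time

def read_time_ms(block_runs, seek_ms):
    """block_runs: list of contiguous run lengths; one seek per run."""
    total_blocks = sum(block_runs)
    return len(block_runs) * seek_ms + total_blocks * TRANSFER_MS

# Same 10,000 blocks, three layouts:
contiguous     = read_time_ms([10_000], AVG_SEEK_MS)    # 1 seek
fragmented     = read_time_ms([10] * 1_000, AVG_SEEK_MS)  # 1,000 seeks
ssd_fragmented = read_time_ms([10] * 1_000, 0.0)        # no head to move
```

With these assumed constants, 1,000 fragments turn a roughly half-second sequential read into several seconds of mostly seeking, while zeroing the seek term (the SSD case) makes the fragmented and contiguous layouts cost exactly the same - which is the whole point.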

Blaming the filesystem for a poorly configured, obviously out of date system that's been left to run until it's in the dumpster is illogical.
 