Why are we using NTFS in 2020?

NTFS on extremely large drives does the same over-allocation and free-space tricks EXT4 and ZFS do to keep fragmentation away.

Not to mention every OS has defragmentation scheduled at least weekly by default. I haven't defragged a drive in a decade! The OS does all that junk for me!
 
Not to mention every OS has defragmentation scheduled at least weekly by default.

I consider the weekly defrag in Windows to be overkill and usually set it to monthly on the computers I set up. I also don't like that it defrags external drives by default. Most people don't know this and either shut down or yank their external drive right in the middle of a defrag.
 
I haven't defragged a drive in a decade! The OS does all that junk for me!

Same here, and exactly, at least for permanently attached drives. I will defrag my backup drives every once in a while, though.

I'd rather have the frequency set too high than too widely spaced. The effort to determine that a drive "is not quite there yet" as far as actually needing a defrag takes so few resources, and runs so quickly, as to be negligible. But if things go too long and the fragmentation gets bad for whatever reason, it takes a lot longer to clean up.
 
It's all a matter of personal preference. For myself, I still prefer the weekly check. You and others will differ, and neither of our preferences is wrong.
 
The default is a weekly check, and I haven't felt the need to muck about with it. I like leaving settings alone as much as possible.
 
The default is a weekly check, and I haven't felt the need to muck about with it. I like leaving settings alone as much as possible.

There is that, too.

I am a big believer in, "If it ain't broke, don't fix it," and in way more arenas than computers. And when it comes to computers, a great deal of the business I get has its roots in random tweaking/button pushing without any awareness of what each tweak/button does, let alone how they might interact with each other.
 
It just occurred to me that I lied... I did have a fragmentation issue this time last year, on one of my aging Server 2012 Hyper-V hosts. The server in question was installed in the fall of 2014, when Server 2012 was just a bit more than a year old but still considered rather new; everyone back then was saying to stay off 2012 and stick with 2008. So by the fall of 2018, the server had been in place for four solid years.

The two hypervisors in question started alerting via my RMM tools late last November that fragmentation on the volumes holding the virtual hard disks was above 30%. On one server I just powered down the VMs and ran defrag from the command line, and that cleaned it right up. The other wouldn't... That weekend I noticed the reason why: I had over-provisioned the storage, and the volume was running out of space. I had several VMs at that point slated for removal, since the client in question had long since moved all core functionality to the new Server 2016 host. Once those vhdx files were cleared, defrag worked and my RMM shut up.
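For the curious, the command-line pass was nothing exotic; something along these lines, with a hypothetical drive letter (Optimize-Volume being the PowerShell route on Server 2012 and later):

    :: Classic command-line defrag with progress and verbose output
    defrag D: /U /V

    # PowerShell equivalent on Server 2012 and later
    Optimize-Volume -DriveLetter D -Defrag -Verbose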

But even during all of that, I noticed ZERO performance lost or gained in the process. I wasn't getting IO warnings, drive queue lengths were fine, access times were fine. So I spent time "fixing" what amounted to nothing more than a statistical footnote in my logs.
 
I'm not clear on why the hate for NTFS - it's still a pretty modern filesystem: it does journaling, has a very nice range of features, and has a much better range of options for granular security than I'm aware of on *nix filesystems, though it's been a while since I looked at any of them on a technical basis. Disliking it because of its performance under Linux makes just as much sense as declaring EXT4 or ZFS dead because of their performance on Windows.
 
I don't dislike NTFS, but I do dislike that Windows can only read three file systems, as if they were the only ones in the world. You'd think after all this time they'd broaden the scope of file systems Windows can work with. After all, Linux can read and write more than a dozen.
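To the point about breadth: on Linux you can list what the running kernel will mount (output varies with which modules are loaded):

    # Filesystems the running kernel can mount right now
    cat /proc/filesystems
    # typically ext4, xfs, btrfs, vfat, ntfs, iso9660, and more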
 
And yet, in this very thread we've discussed how poorly *nix handles one of the most commonly used file systems on the planet. All of those drivers have varying quality levels.

And once you get past the consistency issues of *nix in this realm, you're right back in the thick of the fact that all of these filesystems were designed for very different things. NTFS is fine for endpoint use, as is EXT4. But I've seen people try to use ZFS on the boot partitions of Linux servers, blissfully ignorant of the fact that it will slow things down. At least Microsoft was smart enough to disable application execution on ReFS volumes; it forces admins to use that filesystem for what it's good at: bulk storage.

Linux, all the rope in the world to hang yourself. Sometimes that's great, sometimes not so much. :(
 
Also, no modern filesystem does any kind of wear levelling in software; as of now, it is done in hardware. Software support for TRIM is available in all major filesystems; I don't know about FAT, or about the NTFS driver for Linux, but the native drivers for all modern filesystems support TRIM.

TRIM is a drive thing, not a filesystem thing, if I recall. I could be wrong.
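For what it's worth, it's a bit of both: the filesystem (or OS) tells the drive which blocks are no longer in use, and the drive's firmware does the actual wear levelling and erasing. A quick illustration, with a hypothetical mount point and drive letter:

    # Linux: one-off TRIM of free space on a mounted filesystem
    fstrim -v /mnt/data
    # or let the distro's periodic timer handle it
    systemctl enable fstrim.timer

    # Windows: NTFS sends TRIM on delete automatically; manual retrim:
    Optimize-Volume -DriveLetter C -ReTrim -Verbose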
 
While NTFS is still OK, we are moving away from mechanical hard drives now, so let's talk speed:
https://openbenchmarking.org/result/1608041-LO-LINUX44BT99

Notice NTFS is dead last in ALL of the tests. Not to mention it has no native wear-leveling support; most devices have that built into the controller, which adds to the cost. Why has there been no real innovation toward a better file system, one that not only allows more performance for gamers but also for business applications, and that supports SSD/NVMe drives natively?

Even EXT4 is better. I have done extensive testing using a Plex media server, transferring huge amounts of video files. With huge numbers of files, NTFS performance degrades very fast versus EXT4, not to mention fragmentation is not an issue on EXT4, whereas on NTFS, transferring, deleting, and replacing large numbers of files causes HUGE performance hits on mechanical drives, requiring very long defrag times.
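Anyone who wants to check this on their own media volume can measure it; ext4 ships tools for exactly that (paths hypothetical):

    # Show how many extents a single large file occupies
    filefrag -v /srv/media/movie.mkv
    # Score a whole directory tree and report whether defrag would help
    e4defrag -c /srv/media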

FAT32 vs Ext4 vs NTFS
  • FAT32 is the oldest of these drive formats, NTFS is newer, and Ext4 is the newest.

  • The FAT family was originally designed in 1977 (FAT32 itself arrived in 1996 with Windows 95 OSR2). NTFS was introduced in July 1993, and the stable version of Ext4 was released on 21 October 2008.

  • FAT32 is read/write compatible with a majority of recent and recently obsolete operating systems, including DOS, most flavors of Windows (up to and including 8), Mac OS X, and many flavors of UNIX-descended operating systems, including Linux and FreeBSD.

  • NTFS is fully read/write compatible with every version of Windows from Windows NT 3.1 up to and including Windows 8. Mac OS X 10.3 and beyond can read NTFS, but writing to an NTFS volume requires a third-party utility like Paragon NTFS for Mac.

  • Ext4 is one of the latest and greatest Linux file formats.

  • Compared to ext3, Ext4 modifies important on-disk data structures of the filesystem, such as the ones used to store file data (extents replace the older indirect block maps).

  • An ext4 filesystem that does not use extents can still be read from other distributions/operating systems without ext4 support, by mounting it as ext3.

  • Ext3/4 is by far the best filesystem format, but it's not supported natively by Windows or Macs. A good option is to create a small FAT32 partition for exchanging files, install an application such as Ext2Fsd for access from Windows, and format the rest as ext4.

  • Ext4 has very large limits on file and partition sizes, allowing you to store files much larger than the 4 GB allowed by FAT32.

  • Use Ext4 when you need a bigger file size and partition limits than FAT32 offers and when you need more compatibility than NTFS offers.

  • NTFS is ideal for internal Windows drives, while FAT32/exFAT is generally ideal for flash drives; Ext4 fills the internal-drive role on Linux.

  • Ext4 filesystems are complete journaling filesystems, and thanks to their allocator they do not need defragmentation utilities run on them the way FAT32 and NTFS do.

  • The ext4 filesystem can support volumes with sizes up to 1 exbibyte (EiB) and files with sizes up to 16 tebibytes (TiB).

  • The maximum possible size for a file on a FAT32 volume is 4 GiB.

  • The design of the FAT32 file system does not include direct built-in support for long filenames.

  • Ext4 is backward-compatible with ext3 and ext2, making it possible to mount ext3 and ext2 filesystems as ext4 (see the mount example after this list).

  • Ext4 uses a performance technique called allocate-on-flush (also known as delayed allocation).

  • Ext4 allows an unlimited number of subdirectories.

  • The ext4 file system does not honor the "secure deletion" file attribute, which is supposed to cause overwriting of files upon deletion.

  • Windows uses hard links to support short (8.3) filenames in NTFS.

  • NTFS is a journaling file system and uses the NTFS Log to record metadata changes to the volume. This is a feature FAT does not provide, and it is critical for ensuring that NTFS's complex internal data structures remain consistent after system crashes or data moves performed by the defragmentation API, and for allowing easy rollback of uncommitted changes to these critical data structures when the volume is remounted.

  • When it comes to file checking, EXT4 is quicker because unallocated blocks of data are marked as such and are simply skipped during disk check operations.

  • The Encrypting File System (EFS) provides the core file encryption technology used to store encrypted files on NTFS volumes.

  • FAT is a simple file system that is supported for reading and writing on all major operating systems (which is why it's a good choice for external drives), but it has no security and does not perform well with large files. NTFS improves on FAT with security and, in many cases, contiguous reads, but it still suffers some similar ailments. Ext is generally a good choice for working with most files; however, small files would benefit more from contiguous allocation.
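Since the backward-compatibility bullet above may sound abstract, here's what creating and mounting ext4 looks like in practice; device names and mount points are hypothetical:

    # Create an ext4 filesystem and mount it
    mkfs.ext4 /dev/sdb1
    mount -t ext4 /dev/sdb1 /mnt/data

    # Backward compatibility: the ext4 driver also mounts older ext2/ext3 volumes
    mount -t ext4 /dev/sdc1 /mnt/legacy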

The 4 GiB figure is the maximum file size, not the maximum volume size. FAT32 volumes can actually go to about 2 TB on most operating systems. I know Windows 10 limits its own format tool to 32 GB, though it can read/write much larger volumes. I have seen memory sticks factory-formatted around 256 GB as FAT32 or exFAT, depending on which OS you get the report from.
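As an aside, that 32 GB cap lives only in Windows' own formatting tools; the on-disk format goes much bigger, and other tools will happily create large FAT32 volumes. A sketch, with a hypothetical device name:

    # Linux: create a FAT32 filesystem on a partition well over 32 GB
    mkfs.vfat -F 32 /dev/sdb1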

Ext4 has some issues of its own. For example, out of the box it has no permission inheritance; it's still stuck with only the Read, Write, and Execute bits, historically changed with chmod. In that regard it is inferior to NTFS. That said, NTFS needs to be revamped for a lot of reasons besides wear leveling and performance. Those are minor.
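To make the contrast concrete, here's a minimal sketch of that mode-bit model; the user, group, and paths are made up:

    # ext4 out of the box: three permission sets (owner, group, everyone else)
    mkdir /srv/project
    chown alice:engineering /srv/project
    chmod 770 /srv/project             # rwx owner, rwx group, nothing for others

    # New files do NOT inherit those settings; they follow the creator's umask
    touch /srv/project/report.txt
    ls -l /srv/project/report.txt      # -rw-r--r-- alice alice ...
    # (setgid bits and POSIX default ACLs can fake inheritance, but it's bolted on)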

What's really broken (the fundamental design is entirely broken):

You should be able to set a permission or ownership and have it flow downstream to all files and sub-directories, yet making the change shouldn't cause a dialog box to pop up and process on screen for 15 minutes. It should be instant, like a SQL query, instead of being written to the MFT (or wherever it writes it) for each and every file. It should be impossible to end up with inconsistent permissions (i.e. from canceling said dialog). If you don't have permissions to make a change on all files below, it shouldn't pause with an error on those; instead, it should simply deny your request up-front, reporting exactly what the error is (i.e. "You requested to make a change inherited by all files/folders below 'here,' but you don't have rights to 'here\blah\blah'").
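For reference, this is what that downstream rewrite looks like today with icacls; the domain, group, and path here are hypothetical:

    :: Add an inheritable Modify grant and rewrite every existing child ACL
    icacls "D:\Shares\Projects" /grant "CONTOSO\Engineering:(OI)(CI)M" /T
    :: (OI)(CI) makes new children inherit the ACE; /T walks the existing tree,
    :: stamping each file's ACL one by one - the slow on-screen part described above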

There should be some new dialog boxes that are role-based, like allow full-control, full-control without the ability to change ownership or permissions, read-only without execute, read-only with execute, a drop box to be able to place items yet not see other dropped items.

The file system should be integrated with a new file sharing protocol, so there aren't separate share vs file permissions. It should be one unified system. Auditing should be enabled by default, and instead of looking at the audit logs in the event viewer like a needle in a haystack, it should have an easy comprehensive reporting system to report who made what ownership/permission changes, where, how (i.e. what tool/application), when, and from what networked system!

Deny statements should be simplified because right now it's a poor design. There should be rules like allow/deny, and they should be ordered and processed in order like how a firewall processes its rules. Once a match is made, that should be the end of processing. Reporting on rights should be able to show that "Jane Smith" is granted access to "secret.docx" by virtue of being in Active Directory group "Secret Clearance." The tool for building rules should show all rules matched (or not) and tell which rule takes precedence, so you can easily see if re-ordering helps.

You should be able to disable an Active Directory group much like you disable a user account!

Lastly, ANY and all member computers and servers in an Active Directory domain should have ALL their volumes show up as volume objects in the Domain like they did in the tree with Novell back in the day.

You should be able to run a report on any domain user account for example, and it should show a comprehensive list of everything they have rights to on every volume of every server via every group etc. This is the kind of stuff that should be built-in.
 
I have done thousands of chkdsk /r runs on Windows systems. They take forever: 4 hours minimum on average, upwards of 12+ hours for 12 TB drives. The equivalent for Linux on EXT4 takes a quarter of the time, and I bet if someone created a better file system for Windows, that could be halved with current hardware.
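Part of the speed gap is what each tool actually does: /r reads every sector looking for bad ones, while a default ext4 check only walks metadata. Roughly, with hypothetical drive/device names:

    :: Windows: full check including a surface scan of every sector
    chkdsk D: /r

    # Linux/ext4 equivalent (volume unmounted); metadata-only, so much faster
    e2fsck -f /dev/sdb1
    # Add -c to also scan for bad blocks - that's the slow, /r-like part
    e2fsck -f -c /dev/sdb1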
 
I have done thousands of chkdsk /r runs on Windows systems. They take forever: 4 hours minimum on average, upwards of 12+ hours for 12 TB drives. The equivalent for Linux on EXT4 takes a quarter of the time, and I bet if someone created a better file system for Windows, that could be halved with current hardware.

They can take a while, and they shouldn't even be necessary. These kinds of tasks should be built into the filesystem practically at the driver level. An Administrator should be able to check the health at any time without ever taking the filesystem offline.
 
When it comes to data recovery, I'll take NTFS over FAT, exFAT, EXT(x), ZFS, XFS, and so on. Most Linux-based file systems have their file structure somewhat evenly distributed throughout the entire drive, so even if the drive is only 1% full, you pretty much have to take a 100% clone in order to recover the full file tree.
 
They can take a while, and they shouldn't even be necessary. These kinds of tasks should be built into the filesystem practically at the driver level. An Administrator should be able to check the health at any time without ever taking the filesystem offline.

NTFS has had many self-healing features built in for many years.

"Self-healing NTFS attempts to correct corruptions of the NTFS file system online, without requiring Chkdsk.exe to be run. The enhancements to the NTFS kernel code base help to correct disk inconsistencies and allow this feature to function without negative impacts to the system."

https://docs.microsoft.com/en-us/pr...71388(v=ws.10)#what-does-self-healing-ntfs-do
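In the same vein, since Windows 8 / Server 2012 you can scan a volume online and defer the fix-ups, so a full offline chkdsk is rarely needed; the drive letter is hypothetical:

    :: Online scan while the volume stays mounted; problems get flagged
    chkdsk C: /scan
    :: Apply the flagged fixes with only a brief offline window
    chkdsk C: /spotfix

    # PowerShell equivalents
    Repair-Volume -DriveLetter C -Scan
    Repair-Volume -DriveLetter C -SpotFix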
 