This is a badly written and ignorant article. FAT32 supports partitions of up to 16 TB (depending on sector and cluster size, anywhere from 2 TB to 16 TB; rough math in the sketch below).
It's Microsoft's Windows tools that arbitrarily only let users create 32 GB FAT32 partitions, and it is that limit that is being changed. This is not a change to FAT32; this is a change to Windows. Third-party tools on Windows, and other systems like Linux, have long offered bigger partition sizes.
That it's taken until 2024 for Microsoft to fix the command-line tool (and still not the GUI tools) is ridiculous.
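A rough sketch of where those numbers come from, in Python (back-of-the-envelope only; the 32-bit total-sector field and the common sector sizes are the assumptions):

```python
# Back-of-the-envelope FAT32 limits. The 32 GB figure is purely a policy of
# Windows' format tools, not anything in the on-disk format.

MAX_SECTORS = 2**32        # 32-bit total-sector count in the FAT32 boot sector

for sector_size in (512, 4096):            # bytes per sector
    volume_bytes = MAX_SECTORS * sector_size
    print(f"{sector_size}-byte sectors: up to {volume_bytes / 2**40:.0f} TiB per volume")

# The file size cap is separate and much harder to dodge:
# the directory entry stores the file size in a 32-bit field.
print(f"max file size: {(2**32 - 1) / 2**30:.2f} GiB")
```

Which is also why the 4 GB file cap mentioned further down is the limit that actually hurts: it lives in the on-disk format itself.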
That’s what I thought!
The real issue with FAT32 is the 4 GB file size limit.
About 10 years ago, my USB drive was FAT32 by default. I changed it to NTFS because of FAT32's 4 GB cap; 1080p movies over 4 GB were getting more widespread then. I'm still using NTFS now.
Problem is NTFS isn’t as widely supported across alternative operating systems.
exFAT if you wanna use your USB drive on macOS or Linux.
I have Windows, so I'm OK with NTFS.
From the department of temporary fixes becoming permanent solutions. This guy made FAT32: https://youtu.be/bikbJPI-7Kg?si=orQCjxmnOPAhKIeu
I love how the Ars Technica article words it as if you'll never need FAT32 and it's silly to even consider it.
I've had to download fat32format I don't know how many times because I needed to format an extra-large SD card or USB drive for some device. Microsoft really shafted exFAT's adoption with their licensing.
FAT32 is also really simple to implement. Supporting exFAT may require a larger microcontroller with more memory, which results in a more expensive product.
FAT32 is the Java of file systems. Works everywhere, on anything. But everyone hates it.
Even my speaker can read FAT32, but I never format any storage as that filesystem myself.
I personally haven’t had to touch it in over a decade, but I guess there’s probably some uses for it still, yeah.
Personal computers and flagship phones? Yeah, you can probably use exFAT.
Video game consoles and handhelds? Dashcams? Car entertainment centers? Cheap Android devices? 100% going to be FAT32 with a Master Boot Record partition table.
Low-end motherboard BIOS flashing
I just flashed my mobo last night and it wanted a FAT32 filesystem. It's not low-end at all.
You think high-end motherboards are going to flash from XFS?
I've seen a few that can read exFAT.
Yup, I had to download a program to format my 64 GB microSD card for my 3DS last year.
I needed it for a printer the other day!
Yep, many smart TVs still only accept FAT32. Because of the 4 GiB size limit, I have to split my HDR videos into multiple files to be able to watch them on the TV.
Rufus is your friend
Microsoft can suck my FAT32mm Micropenis
Ha! Gotem!
I think there was some kind of tool that let you go bigger. I had a 512 GB drive on FAT32, but it sucked so much I just reformatted it to ext4 and it performed much better.
Yeah, GUIFormat can do that. FAT32 has its limitations, but I pretty much always use it, since the stuff I put microSD cards into requires it.
Finally, Microsoft caught up to Linux.
> Microsoft caught up to Linux.
They cannot even read (let alone write) any of the FOSS file systems used in Linux.
Thankfully. I wouldn't trust Windows with a mounted foreign filesystem if I dual-booted.
That’s an odd statement. I had an ext4 partition mounted on a Windows 11 machine just a week ago.
Natively somehow, or via LFS? If you have LFS set up, Explorer lets you use it to mount Linux disks.
Well, I was referring to FAT32. Probably should've stated that before, lol. But yeah, I absolutely agree.
Linux is still unable to catch up with NTFS when it comes to filename length, sadly. 255 bytes in an era of Unicode is ridiculous.
NTFS also has a 255 limit, but it's 255 UTF-16 code units, so for Unicode you get more out of it. That's a high price to pay for UTF-16, though: Windows is basically moving stuff between UTF-16 and ASCII all the time. Most apps are ASCII, but Windows is natively UTF-16. All other currently maintained OSes do UTF-8, which "won" Unicode.
The fact that all major Unix (not just Linux) filesystems are limited to 255 bytes says it's not a feature in demand.
I'd much rather have the CoW subvolume snapshotting and incremental backups of btrfs or ZFS. Plus all the other things Linux has over Windows, of course.
> NTFS also has a 255 limit, but it's 255 UTF-16 code units, so for Unicode you get more out of it.
I think this is a biased way of putting it. The NTFS way is easy to understand and therefore easy to manage. What's more important is that ASCII basically means English only. I've seen enough of that kind of "discrimination" (stuff breaking, etc.) based on the language used in software and technology, and it should end for good.
> All other currently maintained OSes do UTF-8, which "won" Unicode.
UTF-8 is Unicode. UTF-8 characters can take more than one byte.
> Plus all the other things Linux has over Windows, of course.
There are also encryption methods that slash the maximum length of each filename even further.
Of course UTF-8 is Unicode. The cool thing about UTF-8 is that it is ASCII, until it isn't. It covers all of Unicode, but doesn't need any bloat if you are just doing Latin characters. Plus, UTF-8 passes seamlessly through ASCII-oriented code: things that understand it do, and the rest just show patches of gibberish but otherwise keep working. It's a way better approach: better legacy handling and more efficient packing for Latin-script languages, which is why it "won" out. UTF-16 pretty much only exists on Windows because of legacy that will be hard for it to escape.
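To make the "ASCII until it isn't" point concrete, here's a small Python sketch (the sample filenames are made up):

```python
# UTF-8 is a strict superset of ASCII: pure-ASCII text encodes to the very same
# bytes, and non-ASCII characters expand to 2-4 bytes as needed. UTF-16 spends
# 2 bytes on everything in the Basic Multilingual Plane.
samples = ["report_2024.txt", "résumé.txt", "ファイル名.txt"]

for name in samples:
    print(name)
    print("  pure ASCII:  ", name.isascii())
    print("  UTF-8 bytes: ", len(name.encode("utf-8")))
    print("  UTF-16 bytes:", len(name.encode("utf-16-le")))
```

For the first name the UTF-8 bytes are literally the ASCII bytes; the other two grow, which is exactly where a 255-byte limit starts to bite sooner than a 255-code-unit one.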
LUKS is by far the most common encryption setup on Linux. It's done at the block layer and the filesystem doesn't even know about it. No effect on filename length, or anything else.
None of that helps or discards anything I've said above. But it does let one say that the NTFS limit works out to far more than 255 bytes' worth of UTF-8 (on the order of 765 bytes for BMP characters). Just because you like what UTF-8 offers doesn't mean it solves the hurdles with Linux's limits.
LUKS is commonly used, but it's not the only option.
Linux's VFS is where the 255-byte limit is hard. Some Linux filesystems, like ReiserFS, go way beyond it on disk. If it were a big deal, it would be patched and widely adopted. The magic of Linux is you can try it yourself, run your own fork, and submit patches.
LUKS is the one to talk about, as the others aren't as good an approach in general. LUKS is the recommended approach.
Edit: oh, and NTFS is 510 bytes. UTF-16 = 16 bits = 2 bytes per code unit, and 255 × 2 = 510.
> The magic of Linux is you can try it yourself, run your own fork, and submit patches.
Well, it should probably go further and offer more of another kind of magic: the kind where stuff works the way the user expects it to.
As for submitting patches, it sounds like you're suggesting people play around with the core parts responsible for filesystem operations. That advice is not going to work for everyone. Open-source software is not ideal. It can be ideal in theory, but that's it.
> LUKS is the one to talk about, as the others aren't as good an approach in general. LUKS is the recommended approach.
It looks like there are enough use cases where some people would prefer something other than LUKS.
Linux might have a similar filename restriction, but what's more important, IMO, is the obnoxious path length restriction NTFS (or rather, the Win32 API) has.
Keeping a filename under 255 characters is a lot easier than keeping its whole path short.
Limiting the filename is one thing, but dealing with limited path lengths when trying to move a customer's folder full of subdirs upon subdirs is obnoxious, especially when the share name it's being transferred to makes the path just too long.
Can't you work around that with the extended-length prefix \\?\ (as in \\?\C:\whateverlongpathhere\)? Though admittedly, it is a pain in the ass to use. (edited for clarity and formatting)
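For anyone curious, a minimal Python sketch of the idea (the directory names are made up; Windows only):

```python
import os

# The \\?\ prefix tells the Win32 API to skip its MAX_PATH (260 character)
# parsing, so fully qualified paths up to roughly 32,767 characters work.
base = r"\\?\C:\temp\deep_tree"                      # hypothetical location
path = os.path.join(base, *["quite_a_long_directory_name"] * 20)

os.makedirs(path, exist_ok=True)                     # well past 260 characters
with open(os.path.join(path, "hello.txt"), "w") as f:
    f.write("made it past MAX_PATH\n")
```

The catch is that the prefix only works on fully qualified paths and skips normalization (no . or .. resolution), which is part of why it's a pain to use by hand.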
You can also enable long paths in Windows 10/11 (30,000+ characters). Instructions are here:
https://learn.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation?tabs=registry
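The gist of that page, as a Python/winreg sketch (needs to run as Administrator, and applications still have to opt in via their manifest):

```python
import winreg

# Enable Win32 long paths, per the Microsoft docs linked above.
key_path = r"SYSTEM\CurrentControlSet\Control\FileSystem"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "LongPathsEnabled", 0, winreg.REG_DWORD, 1)
```

The same switch exists as the "Enable Win32 long paths" group policy, which is exactly what trips up the GPO situation mentioned below.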
That would unfortunately require me to edit a GPO I have no control over. I could temporarily knock it out with regedit, but I don't know if it'd get tossed at the next gpupdate; I'd have to check.
Bummer. The \\?\ prefix will work regardless of the registry setting, though it's a pain to remember each time.
True. Problem is, moving from a more restricted system to a less restricted one is a breeze, but it's painful the other way around. Linux is in a position where it would benefit from any little thing. People trying to switch to Linux will find the path length feels like an upgrade, but the filename limitation is clearly a downgrade.
What are you guys naming your files anyways? No more than four words in lower snake case, as the Machine Spirit intended.
I guess something like
ようこそ『追放者ギルド』へ ~無能なSランクパーティがどんどん有能な冒険者を追放するので、最弱を集めて最強ギルドを創ります~ 1 (ドラゴンコミックスエイジ) - 荒木 佑輔.epub
That's 92 characters, but 246 bytes. Where on Windows this file hits about 35% of the limit, on Linux it hits 96%. The file is not some rare case; it's from a torrent, uploaded somewhere just today. There are tons of files like this with slightly or much longer names. As of 2024, they can't be served by Linux. Not in plain file form, that is.
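If you want to check the arithmetic, a quick Python sketch using the filename above (Linux counts the bytes of the name as stored, usually UTF-8; NTFS counts UTF-16 code units):

```python
# The filename from the comment above, measured against both limits.
name = ("ようこそ『追放者ギルド』へ ~無能なSランクパーティがどんどん有能な冒険者を追放するので、"
        "最弱を集めて最強ギルドを創ります~ 1 (ドラゴンコミックスエイジ) - 荒木 佑輔.epub")

utf8_bytes = len(name.encode("utf-8"))
utf16_units = len(name.encode("utf-16-le")) // 2

print(f"{len(name)} characters")
print(f"{utf8_bytes} UTF-8 bytes   -> {utf8_bytes / 255:.0%} of the 255-byte Linux limit")
print(f"{utf16_units} UTF-16 units -> {utf16_units / 255:.0%} of the 255-unit NTFS limit")
```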
Yeah I suppose that would get in the way.
[deleted by creator]
The Linux file system is shit? Otherwise I don't get why you used the word "because". NTFS is certainly not shit.
I re-read your comment and I completely misunderstood it. Sorry, it's 4 am.
I don't know how much it matters, though? If I try it on my Windows XP machine, I'll still be stuck with the old limit, right?
If someone still uses win-dos, the 4 GB file and 32 GB partition caps are what they deserve.