Optimal Clustersize for Trainz

CitMusJoe

Trainz User
I'm considering moving my TRS19 Windows installation not just to a drive other than C:, but to a dedicated drive where TRS19 (and its database) will be stored _exclusively_. Nothing else on that drive.

So my thoughts are focused on: which filesystem is best, e.g. NTFS or just FAT32? I lean towards NTFS. The next question is: what is the optimal block size / cluster size for my SSD, and for Trainz 2019 in particular?

It will matter for the read/write performance of the database, which is SQL-based, isn't it? Some time ago experts recommended a cluster size of 64K; is this still true for Trainz? It also depends on things like whether the database mostly uses random file access or sequential access. And so on, and so on.

Maybe N3V can shed some light on this?
Any hints and/or recommendations are very much appreciated.
 
You definitely want to stick with NTFS these days. There are many reasons, among them support for large drives well beyond the 32 GB partition limit of a FAT32 file system.

Cluster size depends upon what you are seeking. Larger clusters, such as 64K, waste disk space and can become very inefficient when working with small files, because each file occupies whole clusters and the unused tail of its last cluster is lost. In our case, we're best off leaving the system defaults: we do have some large files, such as assets.tdx and some assets, but not all assets are large, and not all files are large. Trainz assets may display large sizes, but that's the sum total of all the smaller files that exist within the asset.
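To illustrate the waste described above, here's a rough Python sketch of how much slack space whole-cluster allocation costs at 4K versus 64K. The file sizes are made-up examples, not real Trainz data:

```python
def on_disk_size(file_bytes: int, cluster_bytes: int) -> int:
    """Space actually consumed on disk: rounded up to whole clusters."""
    clusters = -(-file_bytes // cluster_bytes)  # ceiling division
    return clusters * cluster_bytes

# A hypothetical asset set: many small files plus one large one.
file_sizes = [1_200, 3_500, 8_000, 45_000] * 500 + [150_000_000]

for cluster in (4 * 1024, 64 * 1024):
    total = sum(on_disk_size(s, cluster) for s in file_sizes)
    wasted = total - sum(file_sizes)
    print(f"{cluster // 1024:>2}K clusters: "
          f"{total / 1e6:.1f} MB on disk, {wasted / 1e6:.1f} MB slack")
```

With mostly small files, the slack at 64K is roughly fifty times that at 4K, which is why the defaults are a sensible choice for a content database made of many small files.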

My recommendation is to experiment if you have a spare hard drive to play around with. Replicate your User Data folder on that spare drive and see which way it goes. If you find you are getting better performance with larger clusters, let us know.
 
Microsoft's recommended size is just 4K on NTFS; on FAT32 it depends on the size of the partition. As TRS19, TANE, etc. have thousands of small files loaded or streamed in real time, optimising for a database that is only used for managing content isn't, IMO, advisable. And with an SSD, which is fast enough as it is, I doubt you will see any benefit from larger clusters other than wasted space.

FAT32 is now mainly used on thumb drives, SD cards, etc. It has a 32GB partition limit when formatted in Windows (more if third-party tools are used) and a 4GB file-size restriction.
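As a quick illustration of the 4GB restriction mentioned above: FAT32 records a file's length in a 32-bit field, which caps a single file at 2^32 − 1 bytes, just under 4 GiB. The sizes below are only examples:

```python
# FAT32 stores a file's size in a 32-bit field, so a single file
# cannot exceed 2**32 - 1 bytes (4,294,967,295 bytes).
FAT32_MAX_FILE = 2**32 - 1

def fits_on_fat32(size_bytes: int) -> bool:
    """True if a file of this size can be stored on a FAT32 volume."""
    return size_bytes <= FAT32_MAX_FILE

for gib in (1, 4, 8):
    size = gib * 1024**3
    print(f"{gib} GiB file fits on FAT32: {fits_on_fat32(size)}")
```

A 4 GiB file already fails the check, which is why large archives or disk images won't copy to a FAT32-formatted drive.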

I'm inclined to go with Microsoft's default and leave things as they are.
 
I tested further using an SSD (500GB): good results with NTFS at the default cluster size. I also tried a simple SD(XC) card (256GB) inserted into the PC's SD slot.
Surprisingly, even this combination gave good results (also NTFS, default size); I didn't notice any bad performance. So I'll keep it at least as an additional backup drive.

BTW, I guess FAT32 is still used, but mainly for data exchange or compatibility between different OSs, e.g. Windows, Mac, Linux, etc. I only use FAT32 for exchange between my Windows 10 (NTFS) and Linux Mint (EXT3) installations.

Thanks for your comments and recommendations.
 