I have been doing some experimenting with ZFS recordsize and other dataset settings after opening photos in PM from my new NVMe server was absurdly slow (3+ minutes to open a folder and finish gathering sort data, vs. 45 seconds to open the same folder stored on a Windows machine on the same network). What I found is that recordsize plays a huge role in the time it takes PM to gather sort data.
Typically when it comes to ZFS datasets that store media, the general wisdom is to use a 1M recordsize since media files are large, but what I've found on two separate systems and OSes is that 1M makes PM very slow.
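For anyone who wants to try this on their own dataset, it's a single property change; just keep in mind that recordsize only applies to data written after the change, so existing files have to be rewritten to pick it up. Pool/dataset names below are placeholders:

```
# Check the current value, then change it ("tank/photos" is a placeholder).
zfs get recordsize tank/photos
zfs set recordsize=128K tank/photos
# recordsize only affects newly written blocks; copy files off and back
# (or otherwise rewrite them) so existing photos use the new size.
```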
I'm still doing some testing to find the optimal recordsize, but so far recordsizes between 16K and 512K seem to perform well enough; anything outside that range gets very, very slow when gathering sort data. I will do some more testing and see which recordsize results in the lowest load time on average.
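If anyone wants to reproduce the test, a minimal setup looks something like this: one throwaway dataset per candidate recordsize, the same sample folder copied into each, then time PM against each copy. The pool name and source path here are assumptions:

```
# Create one test dataset per candidate recordsize and seed each with
# the same sample folder ("tank" and the source path are placeholders).
for rs in 16K 32K 64K 128K 256K 512K 1M; do
  zfs create -o recordsize=$rs tank/pmtest-$rs
  cp -a /tank/photos/sample/. /tank/pmtest-$rs/
done
# Then open each /tank/pmtest-<size> folder in PM and time how long
# gathering sort data takes.
```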
I also have compression off on this dataset because the data is incompressible to begin with, so there's no need to waste CPU cycles on it.
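If you want to sanity-check that assumption on your own data, ZFS reports how well existing blocks actually compressed (dataset name is a placeholder):

```
# On a dataset that previously had compression enabled, a compressratio
# near 1.00x confirms the data doesn't compress, so turning compression
# off loses nothing in capacity.
zfs get compressratio tank/photos
zfs set compression=off tank/photos
```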
Another thing I'm going to test once I've identified the best recordsize is primarycache=metadata for this dataset, since it sits on NVMe drives. With that setting ZFS only caches metadata in ARC, so file data is always read from the pool; it may be faster to serve reads straight from NVMe instead of also copying file data through ARC, but I'm not sure if this workload is one of those cases.
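That test is one property flip either way, and it's easy to revert if ARC turns out to win (dataset name is a placeholder):

```
# Cache only metadata in ARC; file data is read from the NVMe vdevs.
zfs set primarycache=metadata tank/photos
# Revert to the default if caching file data in ARC is still faster.
zfs set primarycache=all tank/photos
```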
A question I'm hoping a PM dev will pop in and answer: how much of each RAW file is PM reading when gathering sort data? And then when going through the preview window, how much of each file is PM reading? That could help inform the recordsize choice.
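In the meantime, ZFS can show a request-size histogram while PM is gathering sort data, which should give a rough idea of how big PM's reads end up at the pool level (pool name is a placeholder):

```
# Print request-size histograms for the pool every 5 seconds while PM
# is opening a folder; run this on the ZFS server.
zpool iostat -r tank 5
```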