SSDs are much more expensive than last year, at least in Europe. I found a cheap 4 TB SSD on the Ediloca web site and gave it a try. The SSD works, and I verified its capacity with f3write/f3read; all good.
But it appears to have a quick 1 TiB flash zone followed by the remaining 2.7 TiB / 2.9 TB of slow flash.
(See https://en.wikipedia.org/wiki/Byte#Multiple-byte_units for the difference between TiB and TB)
Speed varies with the file system and mount options, but in any case I managed to reach ~350-400 MiB/s on the first TiB and then 50-60 MiB/s on the remaining space.
I asked support whether they had a clue: magical mkfs or mount options, a firmware upgrade... They just told me that all SSDs are like this, and that I should wait ~15 minutes for the quick zone to recover. I suspected this was untrue, but I still tried writing 250 GB files while waiting half an hour between each file. Same result!
Has anybody seen similar behaviour? Is there something I could do to make the performance more consistent? I tried mounting a BTRFS filesystem with ssd_spread; that was worse (just slower everywhere).
Below is my last test, with redundant or verbose lines suppressed. It ran on an N5105 mini PC with 16 GiB of RAM. I had previously tried on another machine with a completely different SATA controller, and got the same results.
# mkfs.btrfs -f -d single -m dup --csum xxhash64 -O extref,no-holes,block-group-tree,free-space-tree,squota /dev/sda1
# mount -o ssd,lazytime,nobarrier,nodiratime /dev/sda1 /m/
# for I in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ; do
sync; date "+%s $I" >> /tmp/ssd.log;
dd if=/dev/zero of=/m/$I bs=128k count=1907349 status=progress
date "+%s $I" >> /tmp/ssd.log ; sync; sleep 1800
done
250000048128 bytes (250 GB, 233 GiB) copied, 711.98 s, 351 MB/s
# There was some activity on the machine; dd probably fought for the buffer cache with other processes on the first file
250000048128 bytes (250 GB, 233 GiB) copied, 550.064 s, 454 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 629.865 s, 397 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 541.539 s, 462 MB/s
# All this was written at full speed, 1 TB = 932 GiB
# It seems that the quick zone is 1 TiB, so there were still 1024 - 932 = 92 GiB left in the quick zone
250000048128 bytes (250 GB, 233 GiB) copied, 2097.48 s, 119 MB/s
# This speed is consistent with 92 GiB at full speed and 233 - 92 = 141 GiB at the slow speed
# All the remaining files were written at slow speed.
250000048128 bytes (250 GB, 233 GiB) copied, 4228.37 s, 59.1 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3756.49 s, 66.6 MB/s
# At some point I ran "fstrim -v -a" just in case, although BTRFS should TRIM the SSD in the background if needed (discard=async)
# fstrim did not help
250000048128 bytes (250 GB, 233 GiB) copied, 3731.19 s, 67.0 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3823.29 s, 65.4 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3852.8 s, 64.9 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3789.42 s, 66.0 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3870.74 s, 64.6 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3847.66 s, 65.0 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 3997.83 s, 62.5 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 4095.57 s, 61.0 MB/s
250000048128 bytes (250 GB, 233 GiB) copied, 4169.98 s, 60.0 MB/s
In the end:
# df -h /m
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 3.8T 3.7T 75G 99% /m
# ls -sh /m
total 3.7T
233G 1 233G 11 233G 13 233G 15 233G 2 233G 4 233G 6 233G 8
233G 10 233G 12 233G 14 233G 16 233G 3 233G 5 233G 7 233G 9
#