Benchmarking Disk Throughput With Samsung 840 120GB

So today I decided to see what speed I could get out of my Samsung 840 (not to be confused with the 840 Pro or the 840 Evo). I ran a simple test that writes 100MB and then 1GB to the disk, each with a block size of 1MB. Note that I made sure to run the tests outside of my home directory, which is encrypted (and probably shouldn't be on 12.04 with an SSD). The results are below:

# Script that was used
rm -f 100MB.img
rm -f 1GB.img

echo "writing 100 MB of 0's to non home directory file"
dd if=/dev/zero of=100MB.img bs=1M count=100

# Just in case
sleep 10

echo "writing 1 GB of 0's to non home directory file"
dd if=/dev/zero of=1GB.img bs=1M count=1000

Obviously something fishy is going on here: 2.5 GB/s would be mind-blowing throughput for a single consumer-level SSD. Digging further, I found that the kernel uses free RAM as a page cache, so dd's writes land in memory and return immediately, before the data has actually been physically written to the drive. This is why it's always nice to have lots of free memory in a Linux machine, something you are unlikely to find on a VPS host.
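You can watch the page cache at work yourself (Linux-only sketch; the filename and 8MB size are just for illustration): do a small buffered write, then look at the kernel's dirty-page counter, which shows data still waiting to be written back to the disk.

```shell
# Write 8 MB with no sync flags: dd returns as soon as the data is in
# the page cache, even though it may not have reached the disk yet.
dd if=/dev/zero of=buffered.img bs=1M count=8

# "Dirty" counts bytes sitting in RAM that still need writing back.
grep Dirty /proc/meminfo

rm -f buffered.img
```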

Take RAM Out Of The Equation

We want to test how good our drive is, not how good the system is, so we need to ensure we know how long it took to actually write to the disk. Luckily, the dd command has options/flags for this.

  • conv=fdatasync - dd calls fdatasync() once at the end, so the reported time includes flushing all of the file's data to the drive before finishing.
    This does not include metadata; to flush that as well, use conv=fsync instead.
  • oflag=dsync - dd opens the output file with O_DSYNC, so each individual write must reach the drive before the next one starts.
    To synchronise metadata on each write as well, use oflag=sync instead.
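As a quick sanity check before the full benchmark, the two flags can be compared on a small file (16MB here so it is cheap to run anywhere; scale bs/count up to reproduce the 1GB tests):

```shell
# dd prints its own timing and throughput on stderr, so the three
# runs can be compared directly.
dd if=/dev/zero of=flagtest.img bs=1M count=16                  # cached write
dd if=/dev/zero of=flagtest.img bs=1M count=16 conv=fdatasync   # flush once at the end
dd if=/dev/zero of=flagtest.img bs=1M count=16 oflag=dsync      # sync every write
rm -f flagtest.img
```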

No Cache Results

# Updated script
echo "writing 1GB of 0's (no cache)"
dd if=/dev/zero of=1GB.img bs=1M count=1000 conv=fdatasync

rm 1GB.img
sleep 3

echo ""
echo "writing 1GB sequentially"
dd if=/dev/zero of=1GB.img bs=1M count=1000 oflag=dsync

rm 1GB.img

Poorer Performance Than Expected

Well, that was a massive blow to my ego. Thank goodness for caching, although caching can lead to data loss or corruption if the power suddenly cuts out, so invest in a UPS!

Get the Evo Version!

It's worth noting that the Amazon UK product page for the Samsung 840 Evo states that the drive has a sequential write speed of 410MB/s!

This is made possible by a little bit of extra hardware in the SSD that acts as an on-board cache. So that "sequential write" is sequential in the sense that the data is going to the drive, but not in the sense of it actually having been committed to flash. This feels like a massive "cheat": the buffer has simply moved from your DRAM to the drive's own cache. But it will boost your sequential write benchmarks, and it helps on VPS hosts where RAM is heavily utilised or even oversold. The size of this buffer varies with the capacity of the drive: 3GB, 3GB, 6GB, 9GB and 12GB for the 120GB, 250GB, 500GB, 750GB, and 1TB models respectively. [ source ]

As for "RAPID Mode", this is a feature in Samsung's "Magician" software for Windows that essentially implements the DRAM write cache Linux already provides, as described earlier.

Last Note On Performance

These tests were performed on Ubuntu 12.04, which does not enable TRIM support by default, nor have I enabled it manually. Secure-erasing the drive just beforehand, or having TRIM enabled, might have improved performance. However, I last secure-erased the drive 3 weeks ago, it is at 74% capacity, and it is over-provisioned by 25%, so I'm fairly sure it's still in a peak performance state. This all comes down to whether the SSD's controller needs to spend time erasing blocks before it can write new ones.
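For reference, where the kernel and filesystem support it, TRIM can be run by hand with fstrim (part of util-linux), or enabled continuously via the discard mount option. A sketch, assuming an ext4 root filesystem on /dev/sda1 (both steps require root, and the device name is an assumption for illustration):

```shell
# One-off: discard all unused blocks on the root filesystem.
#   sudo fstrim -v /

# Continuous: add the discard option to the filesystem's /etc/fstab entry.
#   /dev/sda1  /  ext4  discard,errors=remount-ro  0  1
```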
