All about hard drive cache

How does a hard drive cache work EXACTLY

The short answer is that, EXACTLY, no one knows: how a hard drive cache works is a manufacturer secret, and it differs from drive to drive depending on the drive's purpose. BUT we have a lot of clues, some through the SATA specification (and PATA before it), others through industry standard commands, and it is also not so hard to get what we want from black-box reverse engineering. We might not recover the actual algorithm (or variant of the algorithm) from such an endeavor, but we can learn enough to predict how it will behave.

Hard drives are not simple machines in any sense of the word. Once you are familiar with them, and if you are familiar with computer science, specifically algorithms, you will start to see where the complexities lie! And it is not all in the hardware; much of it is in the hard drive's software (firmware).

The hard drive’s raison d’être

You see, a hard drive spins at a certain speed (most commonly 5400 or 7200 rpm; some spin even faster), and it has to do whatever it is asked in the most efficient way possible. For example, it allows the OS (through the controller's driver) to tell it in advance about all the data it wants, so that it can plan the heads' shortest path to fetching all that data (Native Command Queuing, and before it Tagged Command Queuing). But let us not get carried away here, we are here to find out how cache works! NCQ is a topic for a different day (or is it?)

I'm here for the recipes

There are very few recipes and interactions you can actually make use of, but let me list the most common ones you will probably want.

IMPORTANT: note that all these settings are lost when you switch your computer off. To make them permanent, you will need to add them to /etc/rc.local or use udev rules.
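If you go the udev route, here is a minimal sketch of what such a rule could look like; the file name, the matched devices, and the hdparm option are all examples you should adapt:

# /etc/udev/rules.d/69-hdparm.rules -- re-apply a setting whenever a rotational disk appears
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", RUN+="/usr/sbin/hdparm -W1 /dev/%k"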

write caching

First, here are the commands to probe for state, enable and disable

# Check status (=0 means disabled)
sudo hdparm -W /dev/sdX
# Enable
sudo hdparm -W1 /dev/sdX
# Disable
sudo hdparm -W0 /dev/sdX
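To double-check that the drive actually honors the setting, you can ask it to identify itself; a quick sketch (the grep just narrows down the output):

# Look for "Write cache" in the enabled-features list (a leading * means enabled)
sudo hdparm -I /dev/sdX | grep -i "write cache"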

read ahead caching

First, here are the commands to probe for state, enable and disable

# Check the current read-ahead (-a reports the kernel's read-ahead for the
# device, in 512-byte sectors; zero means disabled)
sudo hdparm -a /dev/sdX
# Set a 256-sector read-ahead
sudo hdparm -a 256 /dev/sdX
# Disable the drive's own read-lookahead feature (-A talks to the drive itself, not the kernel)
sudo hdparm -A 0 /dev/sdX

Operating system level caching for a device

# Set the kernel read-ahead for a disk (unit: 512-byte sectors)
blockdev --setra xxx /dev/sda
# Limit the amount of dirty (not yet written) data held in system memory (percentage of RAM)
echo 10 > /proc/sys/vm/dirty_ratio
# Fstab entry to create a RAM-backed filesystem (size is a percentage or an absolute value, e.g. 20G)
tmpfs /mnt/tmpfs tmpfs size=50%,rw,nosuid,nodev 0 0
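Before changing anything, it helps to read the current values back; a quick sketch:

# Current kernel read-ahead for the device, in 512-byte sectors
blockdev --getra /dev/sda
# Current dirty ratio (percent of RAM)
cat /proc/sys/vm/dirty_ratio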

In this day and age, do we still need spinning hard drives anyway?

Well, yes and no. In my case, I burn through hard drives and SSDs very quickly, but with a little tweaking, hard drives live a bit longer (this can only be achieved by also managing the vibration of multiple disks with a heavy computer case, but that is a topic for a different post). My use case is all about continuous writing, and SSDs don't seem to like that.

If this does not apply to you, and SSD cost is what is stopping you from going all-in on SSDs, then maybe you would be interested in a post about adding an SSD caching layer in front of your inexpensive spinning disk.

Why this is important to me (and you)

It is important to me because I have a MySQL database spread across a big bunch of spinning disks. Those disks are being written to ALL THE TIME, and this is precisely why using SSDs here is a bad idea: the data is short-lived, but the drive is hammered with writes continuously!

I am not saying that hard drives don't take a considerable hit when they are hammered with writes continuously, but a disk constantly seeking while writing and a disk writing sequentially do not bear the same kind of penalty. In fact, from my experiments, a hard disk under a write load designed to destroy it will die much sooner than an SSD! And the hit on SSDs also depends on the workload (look up write amplification), so yeah, this subject can get out of hand quickly.

Is a hard drive's cache used for reading or writing?

Both. You will be told online (in some very authoritative, popular places) that it is mostly for reading, but I fail to see what that means; it is mostly for whatever you are doing more of! Here is a bad example: it's as if you are asking whether a dolly is more concerned with carrying goods to the truck or bringing them from the truck to the warehouse; it depends on whether you are loading or unloading.

Why is this a bad example, you ask? Well, because a hard drive is not a dolly being used to unload a truck. Operating systems, database engines, and hard drives are not a sheet of metal on 4 wheels (more like a sheet of oxidized metal on one bearing, but that is beside the point). A database operation will typically require many reads before it does any writes, and those reads are also handled by the database engine's cache and the operating system's cache; you get the idea and the complexity... but this still doesn't mean that the cache is concerned with reads more than writes or the other way around. It will depend on your workload, and on having the right disk firmware for that workload (WD Purple vs. WD Blue vs. WD Black, for example).

The firmware will always determine the disk's caching priorities, so certain firmware will lean towards caching writes over reads, while others will do the opposite.

NCQ already!

Well, since my big mouth already got us into NCQ, let me start with that and get it out of the way.

NCQ is not possible without a cache; the cache is used to:

  • Store the operating system's requests, reorder them according to their locations on the disk, and fetch them
  • Serve some requests immediately from the cache, before that cache is overwritten
  • Write coalescing and deferred writes: writes can be "acknowledged" before being physically written, wait their turn, and are only committed to disk once combined into a larger write for efficiency (NCQ includes a feature that lets the OS know whether data reached the disk or just the cache, but you don't need that in your applications; you shouldn't care)
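A side note you can act on: the kernel exposes the NCQ queue depth it negotiated with the drive, and you can shrink it. A quick sketch, with the device name being an example:

# SATA NCQ allows up to 32 outstanding commands
cat /sys/block/sda/device/queue_depth
# Reduce it (setting it to 1 effectively disables NCQ)
echo 31 | sudo tee /sys/block/sda/device/queue_depth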

Okay, so let us get back to what we were saying….

Hard drive cache for reading

Hard drive designers are certainly well aware of the operating system's cache in RAM, so what good could come from caching in a measly 64 MB on the disk?

This is a very good question. You see, the operating system will not attempt to read neighboring areas of the disk just because they happen to have zero overhead, but the disk will; it is free potential prefetch, so why wouldn't it fill its cache with it?

There are many reasons why it would and why it would not. The cache size is limited, so there are priorities for what gets done with it. Also, the required processing is not trivial, and you don't want to push the hard drive's processor into becoming a bottleneck. Remember when Western Digital came out with their Black series and promoted them as having 2 processors (micro-controllers is probably the correct term, but why complicate the jargon)? That is because there are plenty of processing tasks to be done.

So let us get down to the reading business. If you ask an AI, you will get very outdated or irrelevant answers; when I asked, it returned advantages that are nulled by the operating system's disk-to-RAM caching. So let me tell you what is still true and what is not:

  1. Prefetching and read-ahead optimization, also known as the read-lookahead feature or read-ahead caching: since the hard drive has knowledge of its own physical layout and access patterns, it can intelligently prefetch adjacent data into cache. Unlike the operating system, which only caches frequently used files or blocks, the hard drive itself can anticipate sequential reads and load data preemptively at very little to no overhead (because it is mostly reading data that is in the head's way anyway). This is particularly useful for sequential (mostly contiguous) reads. The drive can detect whether a read is sequential from the request addresses, so TO AVOID LOST SPINS, DON'T COMPLETELY DISABLE IT; make it lower if you must. Experimenting to find the best size is key (see the sketch after this list).
  2. Interaction with OS-Level Caching: While the operating system also caches data in RAM, the drive’s internal cache is the first line of defense against performance bottlenecks. The OS might not always know the drive’s specific access patterns, whereas the drive’s firmware can optimize for known workloads in real-time.
  3. Adaptive Algorithms: Some hard drives (probably all modern ones) employ adaptive caching techniques, where they analyze access patterns over time and adjust caching strategies accordingly. For example, a drive may increase its read-ahead buffer if it detects frequent sequential reads but prioritize different caching strategies when dealing with random access patterns.
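Since the best read-ahead size depends on the workload, here is a rough benchmarking sketch; /dev/sdX is a placeholder, and hdparm -t only measures sequential throughput, so treat the numbers as a starting point rather than a verdict:

#!/bin/bash
# Try a few kernel read-ahead sizes and time a sequential read for each
for ra in 64 128 256 512 1024; do
    sudo hdparm -a "$ra" /dev/sdX > /dev/null
    echo "read-ahead = $ra sectors:"
    sudo hdparm -t /dev/sdX    # timed buffered disk reads
done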

Hard drive cache for writing

Writing to a hard drive is not as straightforward as it might seem. The cache plays a crucial role in optimizing write performance and improving the overall lifespan of the drive. When data is written to a hard drive, it doesn’t necessarily go straight to the platters. Instead, the cache temporarily holds this data before it is written in an optimized manner.

This is beneficial for a few reasons:

  1. Write Coalescing: The hard drive can combine multiple small write requests into a single, larger, more efficient write operation. This reduces the number of disk rotations required to complete a task.
  2. Reducing Latency: If an application writes small amounts of data frequently, the cache allows the drive to acknowledge the write operation almost instantly before the data is physically committed to the disk.
  3. Deferring Writes: Some writes can be held in cache temporarily, allowing the drive to prioritize more urgent tasks before actually writing the data to disk.

However, this raises an important issue: data integrity. Since data is often held in volatile cache before being written permanently, there is always a risk of data loss in the event of a power failure or unexpected system shutdown. To mitigate this, many enterprise-grade drives implement write-through caching or battery-backed cache systems that ensure data is not lost before it is written.
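If you ever need to make sure nothing is left sitting in that volatile cache (before pulling power for a hardware change, say), you can ask the drive to flush it. A minimal sketch, assuming /dev/sdX and a reasonably modern drive (older drives may not implement the command):

# Flush OS buffers first, then send the drive a FLUSH CACHE command
sync
sudo hdparm -F /dev/sdX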

Does Cache Improve Write Speed?

Yes, but only under certain conditions. For bursty, short writes, the cache significantly improves performance because the hard drive doesn’t have to immediately seek and rotate to a specific position on the disk. Instead, it temporarily holds the data and commits it at an optimal time. However, for sustained, sequential writes that exceed the cache size, the drive eventually has to flush the cache and write directly to disk, which means the cache offers diminishing returns.
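You can see this for yourself by toggling the drive's write cache and timing small direct writes; a sketch where the target file and sizes are examples, and oflag=direct bypasses the OS page cache so the drive's cache is what you are measuring (keep the write size small, the cache-off run can be painfully slow):

# With the drive's write cache off
sudo hdparm -W0 /dev/sdX
dd if=/dev/zero of=/mnt/hdd/testfile bs=4k count=2500 oflag=direct conv=fsync
# With it back on
sudo hdparm -W1 /dev/sdX
dd if=/dev/zero of=/mnt/hdd/testfile bs=4k count=2500 oflag=direct conv=fsync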

Another critical aspect to consider is firmware tuning. Some manufacturers optimize their firmware for different workloads. Consumer drives often prioritize read-heavy workloads, while enterprise drives optimize caching strategies for sustained writes and improved data integrity.

Cache Eviction and Management

Since cache size is limited (typically between 8 MB and 256 MB on modern drives), the firmware must decide what stays in cache and what gets discarded. The general approach is as follows:

  • Least Recently Used (LRU): Frequently accessed data is kept in cache, while older, less-used data is replaced.
  • Write Prioritization: If a large sequential write is detected, the drive may flush other cache contents to prioritize this operation.
  • Predictive Read-Ahead: The drive may determine patterns in disk access and prefetch data into cache for anticipated future reads.

The Role of the OS in Caching

The operating system also plays a major role in caching, with its own layer of RAM-based disk caching. It can reorder and batch disk operations before passing them to the hard drive. This means that even if a hard drive’s cache is relatively small, the OS can compensate by managing frequently accessed data in RAM, which is significantly faster than any onboard hard drive cache.
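A quick way to feel this effect, assuming some large file /mnt/hdd/bigfile exists:

# The first read comes from the disk, the second from the OS page cache
time cat /mnt/hdd/bigfile > /dev/null
time cat /mnt/hdd/bigfile > /dev/null
# Drop the page cache (as root) and the next read hits the disk again
sync
echo 3 > /proc/sys/vm/drop_caches
time cat /mnt/hdd/bigfile > /dev/null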

When Cache Doesn’t Help

While cache is incredibly useful for many workloads, there are scenarios where it does little to nothing:

  • Purely Sequential Writes: If you are writing large files that exceed the cache size, the drive will quickly bypass the cache and write directly to disk.
  • Heavy Random Workloads: If your workload is entirely random writes that do not benefit from coalescing or deferred writes, the cache provides minimal advantage.
  • Database Applications (like MySQL): Many database engines already perform their own caching and optimizations, sometimes making certain types of hard drive caching redundant, and making other caching mechanisms more valuable (which is why I research hard drive caching).

Final Thoughts

Hard drive cache is a critical but often misunderstood component. It plays a dynamic role in both read and write operations, helping to bridge the performance gap between slow spinning platters and fast system memory. While the actual caching algorithms remain proprietary, we can infer their behavior from real-world testing and performance characteristics.

For database-heavy workloads like MySQL, tuning both the database and disk caching mechanisms can lead to significant performance gains. Understanding when and how a hard drive’s cache is utilized can help in selecting the right drive for your specific use case.

Hard drive power draw at startup

The maximum power draw of a PC with many hard drives happens at boot time. In my case, the PC is an Intel Atom D525MW, which hardly draws any power.

What this means is that I need an oversized power supply that only earns its keep at startup, then runs inefficiently right after. This is particularly important because this computer runs on a UPS, and the number of minutes it can stay up matters a lot.

The solution is to enable PUIS (Power-Up In Standby). This allows the disks not to spin up as soon as they get power, but instead to spin up upon receiving a command from the controller, so in effect the disks are spun up sequentially (in turn).
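On drives that support it, PUIS can be toggled with hdparm. A sketch, with a loud warning: if your controller or BIOS cannot issue the spin-up command, the drive will look dead at boot, which is why hdparm hides this behind a confirmation flag:

# Enable Power-Up In Standby
sudo hdparm -s1 --yes-i-know-what-i-am-doing /dev/sdX
# Disable it again
sudo hdparm -s0 --yes-i-know-what-i-am-doing /dev/sdX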


Over provisioning SSD in linux

Over provisioning a Samsung 1TB 850 EVO

Mind you, don't follow this tutorial step by step unless you have a 1TB Samsung 850 EVO; if you have a smaller disk, you need to adapt the numbers to your SSD 😉

Over provisioning a flash disk is simply some un-partitioned space at the end of the disk, but you need to tell the SSD's controller about that free space so it can use it for its housekeeping. You also need to find out whether Tejun Heo's on-demand HPA unlocking patch applies to your distro; if it does, you need to get kernel patching out of the way first.

First of all, the controller will usually use its cache RAM to do the over provisioning, or at least this is what I understood from some text on the Samsung website; you can make things faster by allowing it to use flash space while it erases a 1.5MB flash area to put the data in.

1- How big should the over provisioning area be?

Samsung recommends 10% of the disk's space. Somewhere, hidden in a PDF on their website, they explain that OP space can be anywhere between 7% and 50%! We will use 10% since our writing patterns are not that harsh. Mind you, a database that alters a few rows every second can probably make the most of such OP space.

2- Won't that 10% wear out before the rest?

No. There is a mapping function inside the controller, so that space is in fact wherever the controller thinks is appropriate. The wear leveling algorithm kicks in at a stage after the logical stage of partitions etc.; it is blind to the file system and to the over provisioning area. It will simply remap any address you give it to an address that is not already mapped; at flash erase, those mappings are deleted, and other areas of the disk get assigned to that range. I have no idea whether it uses a random algorithm or simply keeps a record of flash chip usage (at this sample size, that won't make any difference).

3- Are you sure we are informing the controller and not just telling Linux what the last address is?

Sure I’m sure, ask the controller DIRECTLY yourself with the command

smartctl -i /dev/sdb

Before the operation described in this article, it will report 1,000,204,886,016 bytes; after it, it will say

User Capacity:    900,184,411,136 bytes [900 GB]

Meaning that now, smartctl tells us that this much space is available to the user after the over provisioning operation.

So, how do we over provision in linux

First, see the last sector of your SSD:

hdparm -N /dev/sdb

In my case, my Samsung 850 EVO reports the following; notice that the number is repeated twice (the current maximum equals the native maximum), and HPA is disabled:

max sectors = 1953525168/1953525168, HPA is disabled

Now, 1953525168 * 512 = 1,000,204,886,016 bytes (1 TB!)

Now we want to set a maximum address; anything after this address becomes a PROTECTED AREA that the controller knows about. I will multiply the number above by 0.9 to get the maximum address, keeping only the integer part.

hdparm -Np1758172678 --yes-i-know-what-i-am-doing /dev/sdb

(A plain "hdparm -Np1758172678 /dev/sdb" will refuse to run and ask whether you know what you are doing.)

 setting max visible sectors to 1758172678 (permanent)
 max sectors   = 1758172678/1953525168, HPA is enabled

Now again, hdparm -N /dev/sdb

max sectors = 1758172678/1953525168, HPA is enabled

Now, to make sure we are not suffering from that dreaded bug, let's reboot the system and check again. I am using Debian Jessie, so it is unlikely that I am affected.

Yup, hdparm -N /dev/sdb still gives us a smaller maximum address than the actual physical one.

Now, we seem to be ready to talk fdisk business.

fdisk /dev/sdb

Now, if you press o (create a new, empty partition table), then p (print), you should get a line such as

Disk /dev/sdb: 838.4 GiB, 900184411136 bytes, 1758172678 sectors

This means that fdisk understands, and asking it to create a partition (the n command) will yield this:

/dev/sdb1 2048 1758172677 1758170630 838.4G 83 Linux

Aren't we happy people?

Now, let's mount with TRIM support and enjoy all the beautiful abilities an SSD will bless us with. First, a couple of tune2fs tweaks to cut down journal writes:

tune2fs -o journal_data_writeback /dev/sdb1
tune2fs -O ^has_journal /dev/sdb1
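The TRIM part itself happens at mount time. A sketch of an fstab entry (the mount point is an example; if continuous discard proves slow for your workload, a periodic fstrim in cron does the same job):

# /etc/fstab
/dev/sdb1  /mnt/ssd  ext4  defaults,noatime,discard  0  2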

NOTE: in the event that you are presented with an error such as the following

/dev/sde:
 setting max visible sectors to 850182933 (permanent)
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 40 01 21 04 00 00 a0 14 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 max sectors   = 1000215216/1000215216, HPA is disabled

The most likely cause is the SATA controller (try executing the hdparm -Np command through a different SATA controller); another possible cause is that some disks require being trimmed before this action.

Disk spindown in linux, specifying spindown idle time

Update/2025: the old info below (starting at the title "Disk Spin down") no longer behaves the way it used to back then; the expected behavior has changed. Today, if you try this on your drives, it may or may not work, due to a bug-fix for a bug that was 15 years old!

So there are 2 conditions

  • the device is not attached via USB or Firewire
  • the device supports APM

For the impatient, a workaround is to use udev rules. Start by running "grep . /sys/class/block/sdb/device/power/*" to find out what is currently set for autosuspend control; if it returns "control:on" or an empty autosuspend_delay_ms, proceed. So let us get down to business.

So, to test this out without making it permanent, the following lines should spin down your disk after 5 seconds of idling (the delay is in milliseconds):

echo 5000 | sudo tee /sys/class/block/sdc/device/power/autosuspend_delay_ms
echo auto | sudo tee /sys/class/block/sdc/device/power/control

If the above worked for you, you can simply add the following rules file to make those settings permanent

Create the file “/etc/udev/rules.d/99-spindown-disks.rules” and put the following contents in it

ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", TEST=="device/power/autosuspend_delay_ms", ATTR{device/power/autosuspend_delay_ms}="15000"
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sd[a-z]", TEST=="device/power/control", ATTR{device/power/control}="auto"

A different workaround is using the package "https://github.com/adelolmo/hd-idle" 😉

Spinning the disks down manually (hdparm -Y /dev/sdc) works instantly, no problems there !

Setting this directly with, for example, "hdparm -S 240 /dev/sdb" should work (I still need to verify this), but not through hdparm.conf!

So how do I know this criterion is what is stopping me from using the config file to spin things down? I tried this command:

hdparm -B /dev/sda

/dev/sda:
APM_level = not supported

Disk Spin down (Tested with Bullseye 2022)

Even though everything concerning block devices in linux has shifted to unique identifiers, hdparm has not, and will still happily use the old /dev/sdX naming.

To control disk spindown and to issue commands manually, you will need to have the package installed:

apt-get install hdparm

There is a problem with disk spindown via hdparm: you must address a disk as /dev/sdc, a name which can change when USB media and other disks are present, or when you add more drives.

hdparm -Y /dev/sdb will spin a disk down instantly
hdparm -S 240 /dev/sdb will set this disk to sleep after 20 idle minutes (for values up to 240, the unit is 5 seconds)

or adding at the bottom of the file /etc/hdparm.conf a section such as

/dev/sdc {
spindown_time = 240
}

to make those changes persistent across reboots.

The new way of doing this is using the disk ID, to find the disk ID, run the command

ls -l /dev/disk/by-id

once you know your disk ID, the block should look like this

# My 3TB WD green 
/dev/disk/by-id/ata-WDC_WD30EZRX-00MMMB0_WD-WMAWZ0299541 {
spindown_time = 240
}

To check the status of a disk, here is what you do:

hdparm -C /dev/sde

You could get one of the following results. When spun down:

drive state is: standby

When active:

drive state is: active/idle

Don't make your disks spin down too often; 20 minutes is good for me in almost all circumstances.

If the disks don’t spin down, chances are that selftest is enabled…

Check if it is enabled with:

smartctl -a /dev/sdb

If it reads "Auto Offline Data Collection: Enabled." then you need to disable it with:

smartctl --offlineauto=off /dev/sdb

Then wait for the disks to finish (if a test is running), and they will spin down.

DD_RESCUE (GDDRESCUE's ddrescue) for disks with Advanced Format (AF) 4KiB (4096 byte) sectors

1- Before using dd, ddrescue, or dd_rescue, you need to know which disk is which; you can do that by simply using the command "fdisk -l". In my case, the old disk turned out to be /dev/sdb and the new, un-partitioned disk is /dev/sdc.

So, I have been cloning a 2TB hard drive (WD20EARS) to a WD20EARX; same disk, but with a few differences.

WD20EARS is SATA 2 and the other is SATA 3. Another difference is that, per "hdparm -I /dev/sdb", the older WD20EARS reports the following (which should not be true):

WD20EARS

Logical/Physical Sector size:           512 bytes

while with "hdparm -I /dev/sdc" the newer WD20EARX reports

        Logical  Sector size:                   512 bytes
        Physical Sector size:                  4096 bytes
        Logical Sector-0 offset:                  0 bytes

The first clone did not work, for a reason unknown to me. I cloned my NTFS disk with ddrescue (gddrescue) on Linux (because I don't know how to clone on Windows) and then plugged it into Windows, where it simply did not work; Disk Management reported the disk as un-partitioned space. So now I want to do the whole thing again, but without the slow performance, so I increased the block size to 4KiB. (UPDATE: the new copy with 4KiB DID work, but I don't know whether the 4KiB size is what mattered; maybe you should take a look at the second difference between the disks at the beginning of this post.)

For now, I will try the cloning with the command below (only change the block size for advanced format hard drives).

Note: the --block-size option no longer exists; it is now called --sector-size, but the short option -b is the same, so the first line below becomes the second:
ddrescue --block-size=4KiB /dev/sdb /dev/sdc rescue2.log
ddrescue -b 4096 /dev/sdb /dev/sdc rescue2.log

And if all of your data is important, you can ask ddrescue to retry every bad block 3 times (or as many times as you wish) with the -r option:

ddrescue --block-size=4KiB -r3 /dev/sdb /dev/sdc rescue2.log
ddrescue -b 4096 -r3 /dev/sdb /dev/sdc rescue2.log

And what do you know, the disk now works on my WINDOWS machine 😀 no errors, nothing. Great; now to some details about the copy.

The result so far is that I am reading at a maximum of 129 MB/s, while the average (over the first 60 GB) is 93018 kB/s; if this keeps up, I will be done in less than 6 hours.

The part that does not make any sense to me is that Western Digital clearly states in the specs that the maximum sustained host-to/from-drive rate is 110 MB/s for both drives; I probably need to wait a bit and see what that number actually means.

Initial status:

rescued:         0 B,  errsize:       0 B,  errors:       0

Current status:

rescued:    74787 MB,  errsize:       0 B,  current rate:     119 MB/s
   ipos:    74787 MB,   errors:       0,    average rate:   93018 kB/s
   opos:    74787 MB,     time from last successful read:       0 s
Copying non-tried blocks...

Now, once done, you can have the OS reload the partition table without having to restart; simply use the command partprobe:

partprobe
or
partprobe /dev/sdc

To use partprobe, you need to install parted

apt-get install parted

If it were a Linux drive, an advanced format drive would not have its first partition start at sector 63 but rather at sector 2048, which is at exactly 1 MiB; it could (but usually does not) start at any other value divisible by 8.

Windows probably does something similar for our AF disk. Asking parted about our NTFS disk, this is what it says:

Model: ATA WDC WD20EARS-00M (scsi)
Disk /dev/sdb: 2000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  2000GB  2000GB  primary  ntfs

parted's 1049kB is a rounded display of 1,048,576 bytes (exactly 1 MiB), i.e. the partition starts at sector 2048, which is divisible by 8, so it is aligned to the 4KiB physical sectors.
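If you would rather not do that arithmetic by hand, sysfs exposes the start sector directly; a small sketch (device and partition names are examples):

# The partition's start sector, straight from the kernel
cat /sys/block/sdb/sdb1/start
# A start sector divisible by 8 means the partition is 4KiB-aligned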

NOTES:
- There is a tool specifically for cloning NTFS volumes called ntfsclone; I am not sure what extra NTFS-specific features it provides, as I have never used it. With my disk that has bad blocks, I can only rely on gddrescue.
- A block is 512 bytes on regular drives and 4096 bytes on newer ones. If you want to back up the hard drive's geometry, you can do one of the following.

Backup the first 63 blocks (MBR + bootloader) on a "non advanced format" drive:

dd if=/dev/sda of=/mnt/storage/sda.vbr bs=512 count=63

On an advanced format drive, we can try

dd if=/dev/sda of=/mnt/storage/sda.vbr bs=4096 count=63

This will make us read 258048 bytes rather than the traditional 32256 bytes (around 250K rather than 32K).

Checking if SSD trim is working (discard)

Note that if your kernel is older than 2.6.33, you can run the check, but TRIM won't be working!

In case you don't want to update your kernel and just want to trim your disk, try wiper.sh or fstrim; both are command line tools that you can run manually or put in a cron job. If you do want to update your kernel, here is how on Debian Squeeze.

For example, if you are on Debian 6 Squeeze, you need a kernel from the backports: add the line "deb http://backports.debian.org/debian-backports squeeze-backports main" to your /etc/apt/sources.list, then apt-get update, then apt-get -t squeeze-backports install linux-image-3.2.0-0.bpo.2-amd64. TRIM will then work.

I assume you already have an ext4 file system with discard option in fstab as described on this website
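Before testing, it is worth checking that the drive advertises TRIM at all, and you can also trigger a manual trim; a quick sketch (the mount point matches the one used in the test below):

# Does the drive report TRIM support?
sudo hdparm -I /dev/sdb | grep -i trim
# Trim a mounted filesystem manually
sudo fstrim -v /hds/ssd300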

Also note: many modern SSDs will not reclaim the TRIMmed space immediately. So, with the test below: if you see zeros, discard (TRIM) is working 100%; if you don't, it may or may not be working. But if you wait a significant amount of time and then reboot, the zeros should appear in that exact location even if the disk does not reclaim instantly. Happy trimming; now to the procedure.

Now, write a file (of random numbers) to the SSD:

dd if=/dev/urandom of=/hds/ssd300/myfile.bin bs=1M count=3

Find the location where the file begins:

hdparm --fibmap /hds/ssd300/myfile.bin

Now, take note of the start address and use it in this command, replacing xxxxxx:

hdparm --read-sector xxxxxx /dev/sdb

You should see random numbers.

Delete the file:

rm /hds/ssd300/myfile.bin

Sync with the command:

sync

Wait for 2 minutes, then issue the same command to read again:

hdparm --read-sector xxxxxx /dev/sdb

You should now see all zeros; if you do not, the disk has not been trimmed 🙂