It depends on the type of the second hard drive and its interface. A drive on a USB 3.0 connection (5 Gbit/s) will generally be slower than one on a newer USB-C port (often USB 3.1/3.2, 10 Gbit/s or more) or an internal connection. If they are conventional spinning hard drives, rotational speed (RPM) and access time also come into play, and such drives need to be defragmented regularly, otherwise access times will increase substantially.
Drives meant for RAID and NAS devices may be slower overall even though they spin quickly, because of their internal structures and how the data is written. These drives tend to slow down over time as data becomes scattered across the disk. The better choice is internal drives meant for servers and NAS use, such as Seagate IronWolf drives with large caches. These drives also have the added benefit of 5-year warranties, unlike most consumer-grade storage.
Try to get as much cache on platter drives as you can find; I believe 256 MB is the largest commonly available.
From en.wikipedia.org:

[Image caption: On this hard disk drive, the controller board contains a RAM integrated circuit used for the disk buffer.]
[Image caption: A 500 GB Western Digital hard disk drive with a 16 MB buffer.]
In computer storage, a disk buffer (often ambiguously called a disk cache or a cache buffer) is the embedded memory in a hard disk drive (HDD) or solid-state drive (SSD) acting as a buffer between the rest of the computer and the physical hard disk platter or flash memory that is used for storage.[1] Modern hard disk drives come with 8 to 256 MiB of such memory, and solid-state drives come with up to 4 GB of cache memory.[2]
Since the late 1980s, nearly all disks sold have embedded microcontrollers and either an ATA, Serial ATA, SCSI, or Fibre Channel interface. The drive circuitry usually has a small amount of memory, used to store the data going to and coming from the disk platters.
The disk buffer is physically distinct from, and used differently from, the page cache typically kept by the operating system in the computer's main memory. The disk buffer is controlled by the microcontroller in the hard disk drive, while the page cache is controlled by the computer to which that disk is attached. The disk buffer is usually quite small, ranging from 8 MB to 4 GB, whereas the page cache is generally all unused main memory. While data in the page cache is reused multiple times, the data in the disk buffer is rarely reused.[3] In this sense, the terms disk cache and cache buffer are misnomers; the embedded controller's memory is more appropriately called the disk buffer.
Note that disk array controllers, as opposed to disk controllers, usually have normal cache memory of around 0.5–8 GiB.
Uses
Read-ahead/read-behind
When a disk's controller executes a physical read, the actuator moves the read/write head to (or near to) the correct cylinder. After some settling time, and possibly some fine positioning, the read head begins to pick up track data, and all that is left to do is wait until platter rotation brings the requested data under the head. The data read ahead of the request during this wait is unrequested but free, so it is typically saved in the disk buffer in case it is requested later. Similarly, data can be read for free behind the requested data if the head can stay on the track, either because there is no other read to execute or because the next actuation can start later and still complete in time.[4]
If several requested reads are on the same track (or close by on a spiral track), most of the unrequested data between them will be read both ahead and behind.
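The read-ahead/read-behind idea can be sketched in Python. This is a toy model, not a real drive interface: the `Platter` class, block numbers, and window sizes are all illustrative assumptions; the point is only that one physical positioning of the head stashes neighbouring blocks in the buffer for free.

```python
from collections import OrderedDict

class Platter:
    """Toy backing store standing in for the physical media."""
    def read_block(self, n):
        return f"data-{n}"  # stand-in for the bytes on the track

class ReadAheadBuffer:
    def __init__(self, platter, ahead=2, behind=1, capacity=16):
        self.platter = platter
        self.ahead, self.behind = ahead, behind
        self.capacity = capacity
        self.buf = OrderedDict()  # block number -> data, in LRU order

    def _stash(self, n, data):
        self.buf[n] = data
        self.buf.move_to_end(n)
        while len(self.buf) > self.capacity:
            self.buf.popitem(last=False)  # evict least recently used

    def read(self, n):
        if n in self.buf:              # buffer hit: no mechanical work
            self.buf.move_to_end(n)
            return self.buf[n]
        # One head positioning serves the request plus the nearby
        # blocks that pass under the head "for free".
        for m in range(n - self.behind, n + self.ahead + 1):
            if m >= 0:
                self._stash(m, self.platter.read_block(m))
        return self.buf[n]

cache = ReadAheadBuffer(Platter())
cache.read(10)           # physical read: stashes blocks 9..12
assert 11 in cache.buf   # read-ahead block now served from the buffer
```

A later request for block 11 or 9 is then satisfied from the buffer without waiting for the platter at all, which is exactly the payoff the article describes.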
Speed matching
The speed of the disk's I/O interface to the computer almost never matches the speed at which the bits are transferred to and from the hard disk platter. The disk buffer is used so that both the I/O interface and the disk read/write head can operate at full speed.
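The decoupling can be sketched with a bounded queue standing in for the drive's buffer, a fast thread standing in for the host interface, and a slower thread standing in for the platter. All the names and sizes here are illustrative, not a real drive mechanism:

```python
import queue
import threading

buf = queue.Queue(maxsize=8)   # stand-in for the drive's RAM buffer
written = []

def platter_writer():
    """Slow side: drains the buffer at 'platter' speed."""
    while True:
        block = buf.get()
        if block is None:      # sentinel: no more data
            break
        written.append(block)  # pretend this is a slow media write
        buf.task_done()

t = threading.Thread(target=platter_writer)
t.start()

# Fast side: the host interface bursts data into the buffer.
# put() only blocks when the buffer is full, i.e. when the
# speed-matching headroom is exhausted.
for n in range(32):
    buf.put(f"block-{n}")
buf.put(None)
t.join()

assert len(written) == 32
```

Each side runs at its own pace; the buffer only stalls the faster side when it is completely full, which is the same role the disk buffer plays between interface and platter.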
Write acceleration
The disk's embedded microcontroller may signal the main computer that a disk write is complete immediately after receiving the write data, before the data is actually written to the platter. This early signal allows the main computer to continue working even though the data has not actually been written yet. This can be somewhat dangerous, because if power is lost before the data is permanently fixed in the magnetic media, the data will be lost from the disk buffer, and the file system on the disk may be left in an inconsistent state.
On some disks, this vulnerable period between signaling the write complete and fixing the data can be arbitrarily long, as the write can be deferred indefinitely by newly arriving requests. For this reason, the use of write acceleration can be controversial. Consistency can be maintained, however, by using a battery-backed memory system for caching data, although this is typically only found in high-end RAID controllers. Alternatively, the caching can simply be turned off when the integrity of data is deemed more important than write performance. Another option is to send data to disk in a carefully managed order and to issue "cache flush" commands in the right places, which is usually referred to as the implementation of write barriers.
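The same ordering discipline can be sketched at the application level in Python, with `os.fsync` playing the role of the "cache flush" command: the data is forced to stable storage before the commit record that describes it. The file names and record format are illustrative; note also that whether `fsync` flushes the drive's own write cache, and not just the OS page cache, depends on the operating system and file system.

```python
import os
import tempfile

def durable_append(data_path, payload, commit_path):
    """Write data, barrier, then commit, so the commit record can
    never reach stable storage ahead of the data it describes."""
    with open(data_path, "ab") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())   # barrier: data must be stable first
    with open(commit_path, "ab") as f:
        f.write(b"COMMIT\n")
        f.flush()
        os.fsync(f.fileno())   # commit record becomes stable second

tmp = tempfile.mkdtemp()
data_file = os.path.join(tmp, "data.log")
commit_file = os.path.join(tmp, "commit.log")
durable_append(data_file, b"record-1\n", commit_file)
```

If power fails between the two flushes, recovery sees data without a commit record and can discard it; it can never see a commit record pointing at missing data. Journaling file systems rely on the same ordering guarantee from the drive.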
Command queuing
Newer SATA and most SCSI disks can accept multiple commands while any one command is in operation through "command queuing" (see NCQ and TCQ). These commands are stored by the disk's embedded controller until they are completed. One benefit is that the commands can be re-ordered to be processed more efficiently, so that commands affecting the same area of the disk are grouped together. Should a read reference the data at the destination of a queued write, the to-be-written data will be returned.
NCQ is usually used in combination with enabled write buffering. In the case of a read/write FPDMA command with the Force Unit Access (FUA) bit set to 0 and write buffering enabled, an operating system may see the write operation reported as finished before the data is physically written to the media. With the FUA bit set to 1 and write buffering enabled, the write operation returns only after the data has been physically written to the media.
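Two of the queuing behaviours described above, grouping commands by disk position and serving a read from a still-queued write, can be sketched with a small in-memory model. The `DiskQueue` class and LBA numbers are hypothetical illustrations, not a real NCQ or TCQ interface:

```python
class DiskQueue:
    """Toy model of an embedded controller's command queue."""
    def __init__(self):
        self.pending_writes = {}   # lba -> data not yet on the media
        self.media = {}            # lba -> data on the platter

    def queue_write(self, lba, data):
        self.pending_writes[lba] = data   # accepted, not yet written

    def read(self, lba):
        # A read that hits a queued write must return the
        # to-be-written data, not the stale media contents.
        if lba in self.pending_writes:
            return self.pending_writes[lba]
        return self.media.get(lba)

    def flush(self):
        # Service queued writes in LBA order so commands affecting
        # the same area of the disk are handled together
        # (an elevator-style reordering).
        for lba in sorted(self.pending_writes):
            self.media[lba] = self.pending_writes[lba]
        self.pending_writes.clear()

q = DiskQueue()
q.queue_write(90, b"z")
q.queue_write(10, b"a")
assert q.read(90) == b"z"   # served from the queue, not the media
q.flush()                   # writes land in LBA order: 10, then 90
assert q.read(10) == b"a"   # now on the media
```

Real drives reorder by actual head position and rotation, not a simple sort, but the two invariants shown here (position-aware reordering and read-after-queued-write consistency) are the ones the article describes.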