IOPS counter: FAQ
What is IOPS counter?
Terms
What types of RAID controllers are supported by IOPS counter?
What drive types are supported by IOPS counter?
Why do drive vendor and model influence the IOPS value?
Why does IOPS counter measure only random I/O operations?
What is queue depth?
How does the queue depth influence performance and latency?
How does the RAID type influence performance?
How does the controller's embedded cache influence RAID performance?
How does the drive's cache influence RAID performance?
Why is the RAID usable capacity lower than expected?
IOPS counter helps you measure the performance of your server's storage subsystem.
RAID 0/10/5/6 performance, capacity, and latency are calculated from the controller type, the type and number of drives used, the cache mode, the queue depth, and the load type.
IOPS – Input/Output Operations Per Second, the number of input/output operations performed by the disk subsystem per second.
Input/output operation – a read or write of data by the disk subsystem, in either sequential or random mode.
Sequential I/O operation – reading or writing data blocks that are located one after another. Performance is measured in megabytes per second (MB/s).
Random I/O operation – reading or writing data blocks that are located in random order. Performance is measured in IOPS.
Request – a request to the disk subsystem to perform a read or write operation.
Queue depth – the number of simultaneous requests.
Latency – the average time it takes to handle a request.
Strip – the data block written to a single disk within the RAID. The strip size is set when the RAID is created.
Stripe – the total size of a single write across all disks of the RAID, excluding the parity block.
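As a worked example of the last two terms, a minimal sketch assuming a hypothetical 4-drive RAID 5 with a 64 KB strip size:

```python
# Hypothetical example: a 4-drive RAID 5 with a 64 KB strip size.
drives = 4           # disks in the array
strip_kb = 64        # strip: data block written to a single disk
parity_strips = 1    # RAID 5 reserves one strip per stripe for parity

# Stripe: total size of a single write across all disks, excluding parity.
stripe_kb = (drives - parity_strips) * strip_kb
print(stripe_kb)     # -> 192 (KB of user data per full stripe)
```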
What types of RAID controllers are supported by IOPS counter?
IOPS counter supports all types of RAID controllers used in our server systems:
- Intel RAID controllers and modules
- LSI RAID controllers
- Adaptec RAID controllers
- Embedded Intel C612 chipset SATA controller in ESRT2 and RSTe modes
- Embedded Intel SCU chipset SAS/SATA controller in ESRT2 and RSTe modes
RAID controllers with cache are supported in both Write Through and Write Back modes.
The choice of a particular RAID controller may limit the maximum number of drives and their interface.
For example, embedded controllers support up to 6 or 8 drives, limited by the number of available ports. Although a port expander can be installed, such configurations are rarely used.
SATA controllers do not support SAS drives. In addition, certain controllers may not support all RAID types.
What drive types are supported by IOPS counter?
IOPS counter supports all types of drives used in our current server hardware solutions: 2.5" or 3.5" drives with SAS or SATA interfaces at 7200/10000/15000 RPM. SSD support in IOPS counter is under development.
Why do drive vendor and model influence the IOPS value?
Drives of the same type but from different vendors (for example, 2.5” SAS drives at 7200 RPM) may deliver substantially different performance. The IOPS value grows significantly if the drive is equipped with flash-based cache memory.
Why does IOPS counter measure only random I/O operations?
In the vast majority of applications, the disk subsystem performs random I/O operations, reading or writing data blocks located in random order.
When sequential I/O performance is measured, throughput is proportional to the number of drives (provided the data block size exceeds the stripe size). This holds true even for RAID levels with parity, because in that case the whole stripe and its parity block are written without first reading the “old” data.
Queue depth is the number of simultaneous read or write requests sent by the server to the disk subsystem. A queue depth of one means that the next request is sent only after the previous one has completed. A queue depth above one means the server keeps multiple requests outstanding at all times throughout the operation cycle.
How does the queue depth influence performance and latency?
For a single drive:
The higher the queue depth, the more requests per second the drive can complete. This happens because the drive reorders the pending requests so that its heads travel the shortest route in terms of time. The higher the queue depth, the more freedom the drive has in choosing the optimal route for its heads.
SATA drives can handle up to 32 simultaneous requests, SAS drives up to 64. At maximum queue depth, performance can be up to three times higher than in single-request mode.
On the other hand, increasing the queue depth also increases latency. For example, at a queue depth of 32, each request completes only after all the requests ahead of it have been handled. So while the maximum queue depth may triple throughput, latency may grow roughly tenfold. Latency includes both the waiting time and the time needed to handle the operation itself. Hence the optimal setting is not simply the maximum queue depth, because the latency then becomes too high.
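This tradeoff can be sanity-checked with Little's Law (average latency = queue depth ÷ IOPS). The drive figures below are illustrative assumptions, not measured values:

```python
# Little's Law: average latency = queue depth / throughput (IOPS).
# The IOPS figures are illustrative assumptions, not measurements.
def avg_latency_ms(queue_depth: int, iops: float) -> float:
    """Average time a request spends waiting plus being handled, in ms."""
    return queue_depth / iops * 1000

print(avg_latency_ms(1, 200))   # QD=1,  200 IOPS ->  5.0 ms
print(avg_latency_ms(32, 600))  # QD=32, 600 IOPS -> ~53 ms: 3x the IOPS, ~10x the latency
```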
With a RAID at a queue depth of one, there is no performance gain over a single drive, because only one drive of the RAID is busy at any moment. But with the queue depth equal to the number of drives in the RAID, performance would ideally grow in proportion to the number of drives.
In practice, however, requests are not distributed evenly between the disks, so the real gain is lower. For example, in a two-drive RAID only 50% of request pairs land on different drives, which gives a real performance gain of about 50% (slightly more, to be precise, because the 50% of cases where both requests hit the same drive still let that drive optimize its head route). IOPS counter takes this effect into account precisely, for any number of drives and any queue depth.
The optimal setting is one outstanding request per drive in the RAID. Increasing the queue depth further yields ever smaller performance gains against latency growth (latency is proportional to the queue depth). Hence the default queue depth equals the number of drives in the RAID. IOPS counter also lets you see how deviating from the default queue depth affects performance and latency.
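A minimal model of this uneven distribution, treating each request as landing on a uniformly random drive (an illustration of the effect; the exact model inside IOPS counter is more detailed):

```python
def expected_busy_drives(drives: int, queue_depth: int) -> float:
    """Expected number of distinct drives hit by `queue_depth`
    uniformly random requests (a balls-into-bins estimate)."""
    return drives * (1 - (1 - 1 / drives) ** queue_depth)

# Two requests on a two-drive RAID keep only 1.5 drives busy on average,
# i.e. roughly the 50% gain over one drive described above.
print(expected_busy_drives(2, 2))   # -> 1.5
print(expected_busy_drives(8, 8))   # -> ~5.25 of 8 drives busy
```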
How does the RAID type influence performance?
For random reads, when the queue depth equals the number of drives and requests are distributed evenly:
In RAID 0, reads are served in parallel by every drive, so performance grows in proportion to the number of drives.
The same holds true for RAID 5 and RAID 6: since the parity blocks are distributed evenly, reads involve all the drives.
RAID 10 on a hardware controller, however, outperforms RAID 0/5/6, because each read is served by whichever drive in the mirror pair has the better head position.
Random writes, in contrast, scale quite differently. RAID 0 is the only level where performance grows in proportion to the number of drives (as it does for reads). RAID 10 delivers half its read performance, because every write is duplicated on two drives.
RAID 5 turns a single write into four operations: reading the “old” data block, reading the corresponding parity block, writing the “new” data block, and writing the new parity block. In theory, therefore, RAID 5 should be four times slower than RAID 0 with the same number of drives. In practice the controller type matters: Adaptec controllers deliver real performance in line with this logic, while LSI controllers show no performance growth as drives are added (although they outperform Adaptec in arrays with few drives).
The same holds true for RAID 6, except that each write triggers six operations instead of four: three reads and three writes.
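The write-penalty logic above can be summarized in a rough estimator. It assumes ideal, even request distribution and a hypothetical per-drive IOPS figure, so real controllers (as noted for LSI) will deviate:

```python
# Write penalty: physical operations caused by one logical random write.
WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def raid_random_iops(level: str, drives: int, drive_iops: float,
                     write_fraction: float) -> float:
    """Idealized random-I/O throughput, assuming even request distribution."""
    total = drives * drive_iops
    reads = (1 - write_fraction) * total
    writes = write_fraction * total / WRITE_PENALTY[level]
    return reads + writes

# Example: 8 drives at an assumed 200 IOPS each, pure random writes.
for level in WRITE_PENALTY:
    print(level, round(raid_random_iops(level, 8, 200, write_fraction=1.0)))
# RAID 0: 1600, RAID 10: 800, RAID 5: 400, RAID 6: 267
```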
How does the controller's embedded cache influence RAID performance?
The Write Back mode of the controller's embedded cache increases RAID performance roughly tenfold, regardless of RAID type, because writes go first to fast cache memory. The data is then transferred to the drives in the background, and the controller maximizes the IOPS value by choosing the optimal request order (latency does not limit throughput there).
In a production server, Write Back mode is recommended only if the controller cache has power-loss protection (based on a battery or flash module). Otherwise there is a high risk of losing a significant amount of data.
How does the drive's cache influence RAID performance?
All current drives have embedded “fast” cache memory, usually 64 or 128 MB. When the disk cache is enabled, data is first written to the cache and the request is reported as complete; the data is then transferred to permanent storage in the background. Using the cache increases drive performance substantially (several-fold), again thanks to head-route optimization during the background transfer.
A drive's cache is usually volatile, so power problems mean losing the data held in it. The cache should therefore be enabled only when power redundancy is available.
Enabling both the controller's and the drives' caches at the same time does not necessarily improve performance compared to using the controller cache alone. In some configurations it can even reduce performance, because the drives' background activity may interfere with the controller's optimization algorithms.
Why is the RAID usable capacity lower than expected?
Drive vendors state the capacity of their products in GB or TB, where 1 GB means 10⁹ bytes and 1 TB means 10¹² bytes.
RAID capacity is also reported in GB or TB, but there 1 GB means 1024³ bytes and 1 TB means 1024⁴ bytes.
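For example, a drive sold as 1 TB holds 10¹² bytes, which a RAID utility counts in 1024-based units:

```python
vendor_tb = 10**12            # 1 TB as drive vendors define it
binary_tb = 1024**4           # 1 TB as RAID utilities count it
print(vendor_tb / binary_tb)  # -> ~0.909: about 9% of "missing" capacity
```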