
Impact of IOPS on IT System Performance

IOPS (input/output operations per second) is a standard unit of measurement for the maximum number of reads and writes to non-contiguous storage locations. IOPS figures have gotten a bad rap lately because they can be wildly exaggerated when measured under controlled conditions. That criticism is fair: IOPS quoted as a sales specification should not be trusted unless the environment variables (read/write ratio, block size, and latency) are included. This article addresses the importance of IOPS when evaluated under real-world conditions.

IOPS: You Know More Than You Think

Whether you are a guru on the subject of IOPS or not, you are certainly familiar with the problems that occur when your storage device doesn’t deliver enough of them. Before solid-state drives (SSDs) went mainstream, old laptops were renowned for their sluggish performance. It wasn’t so much that they had slower CPUs or less RAM than their desktop counterparts; it was that their hard disk drives (HDDs) were only capable of about 50 IOPS, on top of their inherent latency. At the time, the HDDs in comparable desktop computers delivered around 75 IOPS, which provided a much better user experience. People who needed faster performance turned to SCSI storage, which nearly doubled the IOPS of a desktop drive. The point here is that IOPS have always been an integral part of performance… even when storage was local.
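Those HDD numbers follow directly from the mechanics: a random I/O must wait for an average seek plus, on average, half a platter rotation. A quick sketch (the seek times and RPM values below are illustrative assumptions, not specs for any particular drive) reproduces figures close to the 50 and 75 IOPS mentioned above:

```python
# Rough theoretical random IOPS for a spinning disk:
# IOPS ≈ 1 / (average seek time + average rotational latency).
# Inputs are illustrative assumptions, not measurements of a specific drive.

def hdd_iops(avg_seek_ms: float, rpm: int) -> float:
    """Approximate random IOPS from seek time and rotational speed."""
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

# A laptop-class drive (assumed 5400 RPM, ~14 ms seek) vs. a
# desktop-class drive (assumed 7200 RPM, ~9 ms seek):
print(round(hdd_iops(14, 5400)))  # → 51
print(round(hdd_iops(9, 7200)))   # → 76
```

This also makes clear why no amount of CPU or RAM could rescue those old laptops: the ceiling was set by the disk’s mechanics.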


IOPS In Your Network

Fast-forward to modern network environments with centralized storage, and IOPS become more important than ever, because poor performance from network storage affects everyone in the company. Many of the storage area networks (SANs) in use today are still filled with those same spinning HDDs because of their affordability in dollars per terabyte. In a RAID 6 array, the IOPS of the individual disks combine for better aggregate performance, but each disk still carries its inherent mechanical latency, and that adds up as well. It is remarkable that companies still make purchase decisions based on the storage capacity of a SAN instead of its performance. The array might be affordable, but the performance hit is not! The extra fraction of a second it takes to complete every I/O operation turns into many person-hours of lost productivity per day when multiplied across 50–100 employees.
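The productivity claim is simple arithmetic. As a back-of-the-envelope sketch (the latency penalty, per-employee I/O count, and headcount below are all illustrative assumptions):

```python
# Back-of-the-envelope cost of storage latency: a small extra delay per I/O,
# multiplied across daily operations and employees. All inputs are
# illustrative assumptions, not measurements.

def daily_hours_lost(extra_latency_s: float,
                     ios_per_employee_per_day: int,
                     employees: int) -> float:
    """Total person-hours lost per day to added per-I/O latency."""
    return extra_latency_s * ios_per_employee_per_day * employees / 3600

# Assume 50 ms of extra latency on 5,000 user-facing I/Os per employee
# per day, across 100 employees:
print(round(daily_hours_lost(0.05, 5000, 100), 1))  # → 6.9
```

Even under these modest assumptions, a "cheap" array costs nearly a full person-day of productivity every day.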

Testing Network Performance

The solution for sluggish storage/server performance is to increase IOPS and throughput for the type of data loads your business generates… while also minimizing latency across the disks. Microsoft offers a free, open-source tool called Diskspd that can help you diagnose problems in your storage system without having to run a full end-to-end workload. Diskspd can generate a wide variety of I/O loads to test the performance of files, partitions, and physical disks. It can also simulate SQL Server I/O activity, as well as more complex, changing access patterns. Test results are returned as detailed XML output that can be fed into automated results-analysis software.

How Many IOPS Are Enough?

But knowing how many IOPS your storage server delivers is only useful if you also know how many IOPS you need! That number depends on the size of the company, the types of applications you run, the amount of activity, the number of concurrent users, daily usage patterns, and other parameters. Once again, Microsoft has assembled excellent resources for planning the database and storage tier in the context of its SharePoint Server environment, including information on how to estimate core storage and IOPS needs and determine other hardware requirements. Gathering all this information and evaluating the results will likely take several weeks or more, but it is essential to tuning your storage environment to meet the needs of your workplace.
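To make the sizing exercise concrete, here is a deliberately simplified estimate: steady-state demand (concurrent users times per-user IOPS) scaled by a peak-load multiplier. This is an illustrative formula of my own, not Microsoft's sizing method; real planning should follow the vendor guidance described above.

```python
# Simplified IOPS sizing sketch (illustrative formula, not vendor guidance):
# steady-state demand scaled by a peak-load safety multiplier.

def required_iops(concurrent_users: int,
                  iops_per_user: float,
                  peak_factor: float = 2.0) -> float:
    """Estimate required IOPS as users * per-user demand * peak multiplier."""
    return concurrent_users * iops_per_user * peak_factor

# Assume 100 concurrent users averaging 1.5 IOPS each, doubled for peaks:
print(required_iops(100, 1.5))  # → 300.0
```

The per-user figure is the hard part in practice; it has to come from measuring real daily usage patterns, which is why the assessment takes weeks rather than minutes.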

Be aware that there is more than hardware involved in the success of a storage system. SSDs are often touted as the savior of network storage, and while they do eliminate the mechanical latency of HDDs, they are also considerably more expensive and offer lower capacities. SSDs also require specialized controllers that perform write optimization to avoid prematurely wearing out the drives. An obvious compromise is a hybrid storage model that pairs the performance of SSDs with the capacity of HDDs. With proper attention paid to intelligent data tiering, the hybrid approach involves no real compromise at all, unless you consider paying less for the same performance (on the order of 200,000 IOPS) to be a compromise.
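The core idea behind that tiering can be sketched in a few lines: frequently accessed ("hot") data lives on SSD, everything else on HDD. The threshold, block names, and access counts below are illustrative assumptions; real hybrid arrays use far more sophisticated heuristics.

```python
# Minimal sketch of hybrid-storage tiering (illustrative only): place blocks
# accessed at or above a "hot" threshold on SSD, the rest on HDD.

def assign_tier(access_counts: dict, hot_threshold: int = 100) -> dict:
    """Map each block ID to 'ssd' if it is hot, otherwise 'hdd'."""
    return {block: ('ssd' if count >= hot_threshold else 'hdd')
            for block, count in access_counts.items()}

# Hypothetical workload: a busy database index, a cold archive, home dirs.
tiers = assign_tier({'db-index': 5000, 'archive-2019': 3, 'home-dirs': 120})
print(tiers)
```

The design point is that the SSD tier only needs to hold the hot fraction of the data, which is why the hybrid model can deliver SSD-class IOPS at close to HDD-class cost.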

Look at Storage First

Any chain is only as strong as its weakest link, so it is essential to identify those places in your storage network environment where IOPS are getting choked off. The bottleneck is almost always between the server and storage, which is why it is essential to understand the needs of the business and ensure that the hardware you own is capable of the performance you require. In the end, it’s about getting the most IOPS with respect to the unique workload your business environment presents. Whatever the culprit is in your network, one thing is for sure: storage performance should always be looked at first, not last.

To learn more about the latest storage technology that is available to improve network performance across your entire company, contact the IT professionals at CNS Partners. We enjoy helping businesses improve operational performance and growth. 

For an in-depth look at how to diagnose and solve network performance issues, download our eBook: How to Diagnose Network Performance Issues.