For anyone who has been in IT for a while, the phrase “latency” probably conjures up images of spinning disks, spindles, and actuators… the things that HDDs are made of. These were all obvious mechanical forms of latency – limited by the physical constraints of a rotating system – which is probably why so many of us still think of latency in that way. But in this era of flash drives and SSDs, what is latency? As it turns out, latency didn’t go away; it just got exposed in new areas of the storage system after the 800-lb gorilla (the HDDs) got out of the way.
The Cast of Characters
A few key terms are always bandied about in any discussion of a storage network’s performance, so it is important to understand the role of each.
Latency is generally defined as the round-trip time required to process a transaction (a ping) within the storage network. Throughput is the actual amount of data that flows through the pipe. Bandwidth is the maximum throughput that is possible given the size of that pipe, and IOPS is the sustainable number of transactions that the storage array can pass through the sum of all its ports in a given second.
Among these terms, note that only latency is purely about time. It’s not a how much, a how many, or a how capable. Anything that inordinately affects the transaction time of a piece of data can be attributed to some form of latency. And only latency touches every layer of the storage environment, from servers and compute components to storage media and software.
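To make the distinctions concrete, here is a minimal Python sketch (a hypothetical workload, not from any vendor tool) that times a batch of fixed-size reads and derives all three measurable quantities from the same run: average latency per transaction, IOPS, and throughput. It uses ordinary buffered file I/O, so the operating system’s page cache will make the absolute numbers far better than anything a real storage network would show; the point is only how the terms relate to one another.

```python
import os
import tempfile
import time

BLOCK = 4096   # 4 KiB per transaction
COUNT = 1000   # number of transactions to sample

# Create a scratch file to read back. (Buffered I/O: the page
# cache makes these numbers optimistic versus real storage.)
fd, path = tempfile.mkstemp()
os.write(fd, os.urandom(BLOCK * COUNT))
os.fsync(fd)
os.close(fd)

start = time.perf_counter()
with open(path, "rb") as f:
    for _ in range(COUNT):
        f.read(BLOCK)  # one "transaction"
elapsed = time.perf_counter() - start
os.unlink(path)

latency_ms = (elapsed / COUNT) * 1000            # time per transaction
iops = COUNT / elapsed                           # transactions per second
throughput_mb = (COUNT * BLOCK) / elapsed / 1e6  # data actually moved

print(f"avg latency: {latency_ms:.4f} ms, "
      f"IOPS: {iops:.0f}, throughput: {throughput_mb:.1f} MB/s")
```

Notice that throughput here is simply IOPS × block size – which is why bandwidth, the ceiling that product cannot exceed, is a property of the pipe rather than of any single run.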
Why Latency Matters
Too often, when system performance gets sluggish, the focus falls purely on IOPS – and yes, the storage should be the first thing you look at – but it should be examined with an eye for latency. Why? The less latency in your storage network, the faster it can act on requests. That means more data gets delivered in less time. That also means applications run faster and servers can do their jobs with greater efficiency. This is why latency absolutely matters.
Latency in the Modern Day
It was mentioned earlier that when HDDs ruled, you knew exactly where the latency was. A single read/write action against a hard disk platter took longer than the rest of the data’s round-trip journey combined. The surrounding hardware and software that managed I/O to the HDDs never risked underperforming them. Even a single-core microprocessor, like an 80386 series CPU, was enough to drive the storage controller and keep those sluggish HDDs running at capacity. The software on the controller that managed I/O operations never had to be optimized for speed because it was sufficient for the HDD environment. But those days are gone! SSDs perform read/writes in microseconds rather than milliseconds – practically “latency free” – which means the surrounding hardware/software layer is now underperforming the storage media, making it the new culprit for storage latency.
But wait… If the problem is in the storage controller, why not just fix it? One reason this remains a problem, particularly among the major storage vendors, is that fixing it would require a complete redesign of their storage software architecture, one that would not be compatible with their current offerings. Storage vendors have a certain loyalty to their major clients and, understandably, they cannot just phase out an aging product line every time a new technology presents itself. This forces vendors to streamline their product offerings, so they can keep the clients happy with minimal stress and drama. But this allegiance to backwards compatibility also prevents them from taking full advantage of the latest technology, like 64-bit, multi-core, multithreaded CPUs. Ironically, this is the processing power you need to keep pace with a tray full of blazing SSDs, particularly if you want to add functionality like inline deduplication and compression. But for a storage vendor with an established customer base content with the status quo that provides a steady stream of income, there is no rush to get there.
Latency Is a Moving Target
Latency is the enemy of performance, and the starting point for troubleshooting any network performance issue should be the storage. From there, you may opt to chase down the causes of latency in other parts of your network chain. To this extent, latency is a moving target! Just as the introduction of SSDs exposed latency in the controllers, so might a modern storage solution expose latency in other parts of your network and even the application layer.
The current generation of SANs that are driven by the latest processors, intelligent software, and flash storage, can generate impressive IOPS that might challenge the bandwidth limitations of your older servers, switches, and compute components. But before you set off on a latency witch hunt, remember that there IS an acceptable amount of latency. How much? Whatever amount no one else notices.
To learn more about modern storage technology and how it can eradicate latency and improve network performance across your entire company, contact the IT professionals at CNS Partners. We help manufacturing businesses streamline and improve their IT performance so that it enables business growth rather than getting in the way of it.