The practice of differentiating storage for I/O-intensive workloads has roots in traditional IT, but it is also becoming less important as hardware and software advances change enterprise storage.
Essentially, the philosophy of storing I/O-intensive workloads differently rests on the idea that the demanding management of large volumes of transient data doesn’t fit the storage model used for less dynamic data sets. Put another way, companies and stakeholders have traditionally managed “hot” and “cold” data differently, partly because of the limitations of their systems. Going back to the days of tape vaults and analog storage, a company might have used tape for archived or cold data, and some faster medium for more I/O-intensive data sets and workloads.
The term “tiered storage” emerged to describe many of these practices. A tiered storage architecture places one storage system in front of I/O-intensive workloads and another behind more static data-handling processes. Such a setup might move data between different RAID (redundant array of independent disks) levels, or span multiple media as mentioned above. Over time, engineers introduced “automated storage tiering,” which moves data between tiers on its own based on how frequently it is accessed, making the process more agile; a simplified sketch of that logic appears below.
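To make the idea concrete, here is a minimal sketch of the kind of policy an automated tiering engine might run in the background. The tier names, thresholds, and Block structure are illustrative assumptions, not the design of any particular product; real systems use more sophisticated heat maps and move data at the sub-volume level.

```python
from dataclasses import dataclass

# Illustrative thresholds -- real tiering products tune these per workload.
HOT_THRESHOLD = 100   # accesses per day that qualify a block as "hot"
COLD_THRESHOLD = 5    # accesses per day below which a block is demoted

@dataclass
class Block:
    block_id: str
    tier: str                 # "ssd" (hot tier) or "hdd" (cold tier)
    accesses_today: int = 0

def retier(blocks: list[Block]) -> None:
    """Promote frequently accessed blocks to fast media, demote idle ones.

    This mimics what automated storage tiering does continuously,
    without administrator intervention.
    """
    for block in blocks:
        if block.tier == "hdd" and block.accesses_today >= HOT_THRESHOLD:
            block.tier = "ssd"    # promote: I/O-intensive data moves to flash
        elif block.tier == "ssd" and block.accesses_today <= COLD_THRESHOLD:
            block.tier = "hdd"    # demote: static data moves to cheaper media
        block.accesses_today = 0  # reset counter for the next sampling window

# Example: a busy database block gets promoted, an idle archive block demoted.
blocks = [Block("db-0001", "hdd", 450), Block("archive-9", "ssd", 2)]
retier(blocks)
print([(b.block_id, b.tier) for b in blocks])
# [('db-0001', 'ssd'), ('archive-9', 'hdd')]
```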
Some newer advances are, in some ways, making even automated storage tiering obsolete. Technologies like software-defined storage and solid-state engineering allow managers to store hot and cold data in the same way. Flash storage, built on solid-state media, is helping to remove the performance limitations that made tiered storage necessary in the first place. In combining storage for more and less I/O-intensive processes, companies have to be sure that the single system they design or procure can handle all of the higher volumes of activity associated with I/O-intensive processes; a rough capacity check like the one sketched below can help.
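One way to sanity-check such a consolidation is a back-of-the-envelope comparison of combined peak demand against the rated capacity of the single array. The workload names and IOPS figures below are hypothetical placeholders; a real assessment would use measured peaks from the actual environment and the vendor’s published figures.

```python
# Hypothetical workload figures -- substitute measurements from your own
# environment; these numbers are illustrative only.
workloads = {
    "oltp_database": 30_000,   # peak IOPS of an I/O-intensive workload
    "file_shares":    4_000,   # peak IOPS of a more static workload
    "backups":        1_500,
}

array_rated_iops = 50_000      # vendor-rated IOPS of the single flash array
headroom = 0.30                # keep 30% spare capacity for bursts

required = sum(workloads.values())
usable = array_rated_iops * (1 - headroom)

print(f"Combined peak demand: {required:,} IOPS")
print(f"Usable capacity:      {usable:,.0f} IOPS")
print("OK to consolidate" if required <= usable
      else "Undersized -- keep tiers or scale up")
```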