Storage Performance Platform

Definition - What does Storage Performance Platform mean?

A storage performance platform is a storage architecture paradigm for virtualized environments. The premise of this approach is the use of server-side storage media controlled by hypervisor-integrated software, which provides distributed or clustered platforms with accelerated storage read and write I/O operations. These platforms use flash- or RAM-based storage inside the server itself rather than slower disk-based storage implemented over a network fabric.

Techopedia explains Storage Performance Platform

A storage performance platform relies on two elements to deliver better performance than conventional network-based storage. The first is server-side storage resources such as RAM or flash. Because these resources sit close to the applications themselves, the applications can fully exploit the performance of the media. Compared with network-based storage that may reside in a different room, building or geographic location entirely, server-side resources offer much shorter I/O paths. Shared storage accessed across a fabric cannot compete with RAM that sits right next to the CPU, or with flash storage connected over high-bandwidth interconnects. As an example, current PCIe flash devices can deliver on the order of 250,000 IOPS with microsecond-scale latency.
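
To make the "shorter I/O path" argument concrete, here is a minimal back-of-envelope sketch in Python. The latency figures are illustrative assumptions (not measurements from any specific product); the point is only that per-I/O latency directly caps the IOPS a single outstanding request can achieve.

# Back-of-envelope comparison of per-I/O latency and single-queue IOPS.
# The latency values below are illustrative assumptions, not measurements.

LATENCIES_US = {
    "server-side RAM": 0.1,          # ~100 ns access, rounded up
    "server-side PCIe flash": 20.0,  # tens of microseconds
    "network-attached array": 500.0, # fabric hops + array controller + media
}

def max_iops_single_queue(latency_us: float) -> float:
    """IOPS achievable with one outstanding I/O at the given round-trip latency."""
    return 1_000_000 / latency_us

for medium, latency in LATENCIES_US.items():
    print(f"{medium:>25}: {latency:>7.1f} us -> "
          f"{max_iops_single_queue(latency):>12,.0f} IOPS at queue depth 1")

Higher queue depths raise the achievable IOPS for all three cases, but the ordering stays the same: the closer the media sits to the CPU, the lower the latency floor.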

The second element of this storage platform is hypervisor-integrated software installed as kernel modules. This means I/O access to the accelerated storage resources happens inside the hypervisor kernel rather than in a separate filter driver or in a dedicated storage virtual machine. Scheduling and contention issues are thereby avoided, so more of the available throughput goes to actual I/O traffic. And because the software is a kernel module, the I/O path from the application to the requested data is kept as short as possible, which yields the best possible performance. Shared storage accessed over a network fabric cannot match the throughput of media that sits inside the host/server itself.
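
The sketch below illustrates why the in-kernel path wins: it simply sums per-hop latencies for two hypothetical I/O paths. All of the per-hop costs are assumed values chosen for illustration, not figures from any real hypervisor or array.

# Illustrative latency budget for two I/O paths; every per-hop cost below is
# an assumption, used only to show how extra hops and scheduling add up.

IN_KERNEL_PATH_US = {
    "guest issues I/O request": 5.0,
    "hypervisor kernel module": 5.0,
    "local flash access": 20.0,
}

STORAGE_VM_FABRIC_PATH_US = {
    "guest issues I/O request": 5.0,
    "hypervisor schedules storage VM": 30.0,
    "storage VM / filter driver": 10.0,
    "network fabric round trip": 200.0,
    "remote array service time": 250.0,
}

def total_latency_us(path: dict) -> float:
    """Total one-request latency as the sum of the per-hop costs."""
    return sum(path.values())

print(f"in-kernel, server-side path: {total_latency_us(IN_KERNEL_PATH_US):6.1f} us")
print(f"storage-VM, fabric path:     {total_latency_us(STORAGE_VM_FABRIC_PATH_US):6.1f} us")

Even with generous assumptions for the fabric path, the extra scheduling and network hops dominate the budget, which is the core argument for keeping the data path inside the hypervisor kernel and the media inside the server.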
