Definition - What does Server Redundancy mean?
Server redundancy refers to the provisioning of backup, failover or duplicate servers in a computing environment. It describes a computing infrastructure's ability to bring additional servers online at runtime for backup, for load balancing, or to stand in for a primary server that is temporarily taken down for maintenance.
Techopedia explains Server Redundancy
Server redundancy is implemented in enterprise IT infrastructures where server availability is of paramount importance. To enable server redundancy, a replica of the primary server is created with the same computing power, storage, applications and other operational parameters.
A redundant server is typically kept on standby. That is, it is powered on and has network/Internet connectivity, but it does not serve live traffic. In case of failure, downtime or excessive traffic at the primary server, the redundant server can be brought online to take the primary server's place or to share its traffic load.
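As a minimal sketch of how such a failover might be triggered, the following Python script polls a primary server's health endpoint and switches to the standby after several consecutive failures. The hostnames, the /health endpoint, and the promote_standby action are all hypothetical stand-ins; in practice the failover step would update DNS, a load-balancer pool, or a virtual IP.

```python
import time
import urllib.error
import urllib.request

PRIMARY = "http://primary.example.com/health"  # hypothetical health-check URL
STANDBY = "http://standby.example.com"         # hypothetical standby server
CHECK_INTERVAL = 5   # seconds between health checks
MAX_FAILURES = 3     # consecutive failures before failing over

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the server answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def promote_standby() -> None:
    """Placeholder for the real failover action (DNS update, load-balancer
    pool change, or virtual-IP reassignment to the standby server)."""
    print(f"Primary unreachable; routing traffic to {STANDBY}")

def monitor() -> None:
    failures = 0
    while True:
        if is_healthy(PRIMARY):
            failures = 0  # primary recovered; reset the failure counter
        else:
            failures += 1
            if failures >= MAX_FAILURES:
                promote_standby()
                break
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```

Requiring several consecutive failures before promoting the standby avoids failing over on a single dropped packet or momentary slowdown, a common design choice in health-check-driven failover.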