Server Redundancy

Definition - What does Server Redundancy mean?

Server redundancy refers to the number and capacity of backup, failover or redundant servers in a computing environment. It describes the ability of a computing infrastructure to provide additional servers that can be brought online at runtime for backup, for load balancing, or to allow a primary server to be taken down temporarily for maintenance.

Techopedia explains Server Redundancy

Server redundancy is implemented in enterprise IT infrastructures where server availability is of paramount importance. To enable server redundancy, a replica of the primary server is created with the same computing power, storage, applications and other operational parameters.

A redundant server is kept on standby. That is, it is powered on with network/Internet connectivity but does not serve live traffic. In case of failure, downtime or excessive traffic at the primary server, the redundant server can be brought into service to take the primary server's place or to share its traffic load.
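The standby pattern described above can be sketched as a simple routing decision driven by a health check. This is a minimal illustration, not any vendor's implementation; the host names and the set-based health check are assumptions made for the example (a real check would probe the host, e.g. with a TCP connection or an HTTP health endpoint):

```python
# Hypothetical server pool: one primary and one redundant (standby) replica.
SERVERS = {"primary": "app1.example.com", "standby": "app2.example.com"}


def is_healthy(host, failed_hosts):
    """Stand-in health check: a host is healthy unless it is in the
    set of hosts known to have failed. A real implementation would
    actively probe the host instead."""
    return host not in failed_hosts


def pick_active(failed_hosts):
    """Route traffic to the primary; fail over to the standby replica
    only when the primary's health check fails."""
    if is_healthy(SERVERS["primary"], failed_hosts):
        return SERVERS["primary"]
    return SERVERS["standby"]


# Normal operation: the standby is online but receives no traffic.
print(pick_active(failed_hosts=set()))
# Primary failure: traffic fails over to the redundant server.
print(pick_active(failed_hosts={"app1.example.com"}))
```

In practice this decision is made by a load balancer or failover daemon rather than application code, but the logic is the same: the redundant server sits idle until the primary's health check fails.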
