What is the difference between cloud computing and grid computing?
Cloud computing and grid computing are similar concepts that are easy to confuse. Fortunately, a few key differences set the two apart.
Grid computing is a loose network of computers that can be called into service for a large-scale processing task. The network operates over the Internet, but only computers that have opted into the grid are called upon. Although its machines are distributed geographically, grid computing allows for parallel processing on a massive scale. In short, grid computing is what you want when you have one big processing job to tackle. The Search for Extraterrestrial Intelligence (SETI), for example, uses a grid computing scheme to analyze radio frequency data, taking advantage of volunteers’ idle processing power.
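The grid idea above can be sketched in miniature: one big job is split into chunks, the chunks are farmed out to workers, and the partial results are combined. This is only a toy illustration with hypothetical names (`analyze_chunk`, `grid_style_analysis` are not from any real grid framework); threads on one machine stand in for volunteer computers on the Internet.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_chunk(chunk):
    """Stand-in for real analysis work, e.g. scanning one slice of radio data."""
    return sum(x * x for x in chunk)

def grid_style_analysis(data, workers=4):
    """Split one big job into chunks and farm them out to workers.

    In a real grid, each chunk would be shipped to a different opted-in
    machine over the Internet; here, local threads stand in for them.
    """
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(analyze_chunk, chunks)
    # Combine the partial results, as the grid's coordinator would.
    return sum(partials)
```

The parallel result matches what a single machine would compute serially, e.g. `grid_style_analysis(list(range(1000)))` equals `sum(x * x for x in range(1000))`; the grid's advantage is that the chunks run at the same time on many machines.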
Cloud computing, in contrast, usually involves accessing resources on an as-needed basis from clusters of servers. These clusters can handle large processing loads, but they are intended to provide scalable processing to users on a smaller scale. Instead of handling one huge task from a single user, cloud computing handles many smaller requests from multiple users. This lets users scale up their computing resources to handle a temporary processing spike without investing in physical servers of their own, which might be needed only rarely.
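The on-demand scaling described above can be illustrated with a toy autoscaling rule: rent just enough servers for the current load, and release them when the spike passes. The function name, capacity figure, and limits below are all invented for illustration, not taken from any real cloud provider's API.

```python
import math

def desired_servers(pending_requests, capacity_per_server=100, floor=1, ceiling=20):
    """Toy autoscaling rule: rent only as many servers as the load requires.

    pending_requests: current number of queued user requests (assumed metric).
    capacity_per_server: requests one rented server can absorb (assumed figure).
    floor/ceiling: minimum servers kept warm and maximum the budget allows.
    """
    needed = math.ceil(pending_requests / capacity_per_server)
    # Clamp between the always-on minimum and the budget cap.
    return max(floor, min(needed, ceiling))
```

Under this rule, a quiet period (`desired_servers(0)`) keeps only the one-server floor rented, a spike of 250 requests rents three servers, and an extreme surge is capped at the ceiling of twenty, which is exactly the pay-for-what-you-use behavior that makes cloud computing attractive for rare spikes.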
So, while grid computing uses sprawling networks of computers to handle a limited number of big tasks, cloud computing uses large clusters of servers, accessed over the Internet, to handle smaller processing tasks from many sources. These are not the only differences, but they are the ones from which all the others (interfaces, standards, services offered, and so on) follow.