Google has been less than forthcoming about the details of its data centers, which it regards as a competitive advantage. Here are three pages of information about Google’s data centers and their locations: http://www.datacenterknowledge.com/archives/2008/03/27/google-data-center-faq/
Here’s a peek inside Google’s inner workings, in surprising detail, from a Google Fellow: http://news.cnet.com/8301-10784_3-9955184-7.html?tag=mncol;title Highlights:
- Google distributes its data and processing across many thousands of servers.
- They buy comparatively little software, preferring open source (such as Linux) and software they write themselves.
- They build fault tolerance into software rather than (expensive) hardware, and run it on lots of cheap expendable hardware.
- Commercial databases such as Oracle lack the needed horsepower, so Google built its own: GFS (the Google File System) and BigTable.
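The “fault tolerance in software, not hardware” idea above can be sketched in a few lines. This is a hypothetical illustration, not Google’s actual code: the server, its failure rate, and the retry policy are all invented for the example. The point is that when a cheap machine dies mid-task, the software simply reschedules the work on another machine.

```python
import random

class ServerDiedError(Exception):
    """Raised when a (simulated) cheap server fails mid-task."""

def flaky_server(task, failure_rate=0.3):
    # Simulate commodity hardware: any given call may fail outright.
    if random.random() < failure_rate:
        raise ServerDiedError("server lost mid-task")
    return f"result of {task}"

def run_with_retries(task, replicas=5):
    # Fault tolerance lives in the software layer: on failure,
    # reschedule the task on another machine instead of paying
    # for expensive fault-tolerant hardware.
    for attempt in range(replicas):
        try:
            return flaky_server(task)
        except ServerDiedError:
            continue  # pick another server and try again
    raise RuntimeError("all replicas failed")

print(run_with_retries("index shard 42"))
```

With enough cheap replicas, the odds of every machine failing on the same task become vanishingly small, which is the economic bet behind this design.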
Fault-tolerant microprocessor-based servers originated (I think) at Novell.
Google’s chairman and CEO is Eric Schmidt, who served as CEO of Novell some years after Ray Noorda. One of Novell’s better pieces of server software was NetWare SFT III (System Fault Tolerance), which debuted around 1991. It ran on two identical servers and kept running even when one of them died. It worked like magic: when a server failed, users had no indication that the system had been severely wounded. Novell later developed this into server clustering, and it sounds like Google has implemented the same idea on a broad scale, atop a Linux core. If Google were to market such a server operating system (and who knows what they have planned?), it could seriously affect server software sales for Microsoft and Sun.
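The SFT III trick described above — two identical servers kept in lockstep, with the survivor transparently taking over — can be modeled in miniature. This is a toy sketch of the concept, not Novell’s actual design; the class and field names are invented for the example.

```python
class MirroredServer:
    """Toy model of SFT III-style mirroring: two identical servers,
    and the survivor serves requests if its twin dies.
    (Hypothetical illustration; not Novell's actual design.)"""

    def __init__(self):
        # Both servers hold identical state, kept in lockstep.
        self.primary = {"state": {}, "alive": True}
        self.secondary = {"state": {}, "alive": True}

    def write(self, key, value):
        # Every write is mirrored to both machines.
        for server in (self.primary, self.secondary):
            if server["alive"]:
                server["state"][key] = value

    def read(self, key):
        # Clients never notice a failover: reads transparently
        # fall through to whichever mirror is still alive.
        for server in (self.primary, self.secondary):
            if server["alive"]:
                return server["state"].get(key)
        raise RuntimeError("both mirrors down")

pair = MirroredServer()
pair.write("volume", "SYS:")
pair.primary["alive"] = False   # the primary dies...
print(pair.read("volume"))      # ...but service continues
```

From the user’s point of view nothing changes when the primary dies, which is exactly the “magic” the paragraph describes. Scale the same principle from one mirrored pair to thousands of replicated machines and you have the shape of Google’s approach.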