Take a look inside Microsoft’s Quincy, Wash. data center

Microsoft's data center in Quincy, Wash.
Credit: Microsoft
The growth of a data center

For some IT people, looking into one data center is like looking into a thousand different data centers: They all look, feel and even smell the same. Rows and rows of racks, raised floors, cables everywhere, and millions of blinking lights make up the typical data center experience, and you might think the art of data center design had been refined to an exact science by now.

For the cloud giants, however -- Microsoft, Amazon, Google, and the like -- data centers are not set in stone. Rather, they evolve in generations as servers grow more powerful, power gets cheaper, computers become less hungry for it, and scale goes hyper. Data centers that contain millions of servers operate on a different level, and their operation is closely guarded; each of these giants considers the way it runs its millions of servers a competitive advantage.

I was invited to peek behind the scenes, however, as part of a group touring Microsoft's growing Quincy, Wash. data center last month to see some of what Microsoft calls its data center evolution. Our group was not permitted to take photos, but Microsoft provided images that exactly match what I saw on the tour. Follow along as we look at how cloud and hyperscale change the way data centers are put together.

Microsoft's Quincy, Wash. data center
Credit: Microsoft
Quincy: Crops and clouds

Quincy, Wash. is home not only to Washington state agriculture but also to one of Microsoft's older public data centers, opened in 2006. Pictured in this photo is just a portion of the 270 acres that make up the massive Quincy campus, which hosts a variety of workloads. (In addition, Microsoft Azure facilities are being constructed on acreage adjacent to the current facility. Since it was under construction at the time, we were not permitted to tour the Azure addition.)

Microsoft chose Quincy because of its relatively mild climate, low humidity and its proximity to the Columbia River -- a source of very cheap and plentiful hydroelectric power for its growing den of servers. The climate itself provides an opportunity to use adiabatic cooling, allowing outside air mixed with varying amounts of water to meet the cooling needs of much of the data center campus. (Other providers also have data centers in the area, including Yahoo.)
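To see why the dry climate matters, here is a minimal sketch using the standard direct-evaporative-cooling approximation; the temperatures and effectiveness figure are hypothetical illustrations, not measurements from Quincy or figures Microsoft shared.

```python
# Simplified sketch of direct evaporative (adiabatic) cooling.
# Numbers are hypothetical and chosen only to show why low humidity helps;
# they are not measurements from the Quincy site.

def supply_air_temp(dry_bulb_c: float, wet_bulb_c: float, effectiveness: float = 0.85) -> float:
    """Standard direct-evaporative-cooler approximation: the cooler pulls the
    incoming air part of the way down toward its wet-bulb temperature."""
    return dry_bulb_c - effectiveness * (dry_bulb_c - wet_bulb_c)

# A hot, dry afternoon: low humidity puts the wet-bulb temperature far below the thermometer reading.
print(supply_air_temp(dry_bulb_c=35.0, wet_bulb_c=18.0))  # ~20.6 C supply air

# The same dry-bulb temperature on a humid day: wet bulb sits close to dry bulb.
print(supply_air_temp(dry_bulb_c=35.0, wet_bulb_c=28.0))  # ~29.1 C -- far less useful for cooling servers
```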

FACT: Microsoft opened its first data center in 1989, right on its Redmond, Wash. corporate headquarters and campus.

Traditional data center - Microsoft
Credit: Microsoft
The traditional data center approach, or "generation 1"

Part of the Quincy data center looks just as you would expect: rows and rows of racks, servers and networking gear. These rooms are generally cooled by large air conditioners, because hundreds of servers with fans spinning get hot. (This is why many data center operators say they primarily run a large air conditioner with some computers thrown in.)

In what Microsoft calls its "generation 1" data center, hot aisles were created in the otherwise traditional setup to contain the exhaust heat from the racked equipment, while traditional chillers cool the room by passing conditioned air up through the raised floor. Those chillers, of course, use a lot of power. The hot aisles are sealed off with insulating plastic to limit heat bleed; the fronts of the racks face normal, uninsulated air and so are more easily accessed for service work and troubleshooting. The result is a traditional server room format with a decent reduction in cooling and energy costs, thanks to the isolation of the warm air.

Interestingly enough, the footprint of the machinery within these walls is shrinking. Each portion of the Quincy campus is considered a data center rated for a certain number of megawatts. As server and networking hardware gets more efficient, the equipment consolidates over time into fewer and fewer machines, and that hardware needs fewer and fewer megawatts in aggregate. The result is that a portion of the facility built for a certain number of megawatts is nowhere close to capacity anymore. Large portions of the bays we were allowed to see were simply empty, with raised floor tiles supporting nothing but air.
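To make the consolidation effect concrete, here is a back-of-the-envelope sketch with entirely hypothetical numbers (Microsoft did not disclose the actual megawatt ratings or server counts of these bays):

```python
# Illustrative only: hypothetical numbers showing why a fixed-megawatt room
# empties out as hardware gets more efficient. Not actual Quincy figures.

room_capacity_mw = 10.0      # power envelope the room was built for (assumed)
servers_deployed = 20_000    # servers racked in the room (assumed)

# Assumed average draw per server, before and after a hardware refresh.
watts_per_server_old = 450
watts_per_server_new = 250

def utilization(servers: int, watts_each: float, capacity_mw: float) -> float:
    """Fraction of the room's power envelope the servers actually use."""
    return servers * watts_each / (capacity_mw * 1_000_000)

print(f"Before refresh: {utilization(servers_deployed, watts_per_server_old, room_capacity_mw):.0%}")
print(f"After refresh:  {utilization(servers_deployed, watts_per_server_new, room_capacity_mw):.0%}")
# Before refresh: 90% of the rated megawatts; after refresh: 50%.
# The same workload now strands roughly half the room's rated capacity,
# which is why whole bays of raised floor can end up holding nothing but air.
```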

FACT: Microsoft's data centers deliver 200+ online services on a 24x7x365 basis.

Microsoft containerized servers
Credit: Microsoft
Moving into containerized servers and networking

Around the turn of the decade, Microsoft began experimenting with containerizing its server loads when building out new data centers. Microsoft basically designed a specification for a data center in a box and invited large compute vendors to compete to provide a "pod" filled with servers, networking gear, power supplies and UPSes. All Microsoft would have to do when a pod was delivered to the data center site was plug it into power, provide any upstream networking required, and literally hook up a garden hose for cooling.

There are various versions and revisions of this container concept, which Microsoft dubs "ITPACs," and two of them are shown above.

This was the beginning of viewing data centers as single units, not as thousands of discrete servers to be managed -- an important point when you are managing at hyperscale, a classification for which Microsoft Azure surely qualifies. We see Microsoft pushing software-defined networking and data centers in its products, but that's primarily because software-defined networking is how the company manages the millions of servers that make up its online services.

Ordering thousands of servers and pieces of networking gear by the "ITPAC" makes it easy not to care what individual pieces of hardware are doing -- you can easily redefine routes, workloads, failover procedures and more with system management software.
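As a purely hypothetical sketch of what pod-level management can look like -- this is not Microsoft's tooling, and every name and threshold here is invented -- the unit you drain, fail over or retire becomes the whole container rather than any single box:

```python
# Hypothetical sketch of pod-level management: the unit of operations is an
# entire container of servers, not an individual machine. This does not
# represent Microsoft's internal tooling; names and fields are invented.

from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    healthy_servers: int
    total_servers: int
    in_service: bool = True

@dataclass
class Fleet:
    pods: list = field(default_factory=list)

    def drain(self, pod_name: str) -> None:
        """Take a whole pod out of rotation; its workloads get rescheduled elsewhere."""
        for pod in self.pods:
            if pod.name == pod_name:
                pod.in_service = False
                print(f"{pod.name}: drained; rescheduling workloads onto remaining pods")

    def failure_rate(self, pod: Pod) -> float:
        return 1 - pod.healthy_servers / pod.total_servers

    def retire_unhealthy(self, threshold: float = 0.2) -> None:
        """If enough servers inside a pod have died, retire the pod as a unit
        instead of dispatching technicians to fix individual boxes."""
        for pod in self.pods:
            if pod.in_service and self.failure_rate(pod) > threshold:
                self.drain(pod.name)

fleet = Fleet(pods=[
    Pod("ITPAC-07", healthy_servers=1890, total_servers=2000),   # ~5% failed: keep running
    Pod("ITPAC-12", healthy_servers=1450, total_servers=2000),   # ~27% failed: retire the whole pod
])
fleet.retire_unhealthy()
```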

FACT: Microsoft confirms it has more than one million individual physical servers across its data centers.

Microsoft data-centers-in-a-box
Credit: Microsoft
ITPACs

At first, Microsoft ordered ITPACs (from large OEMs I'm not allowed to identify) and planned to house them within unassuming structures with concrete floors, secure walls, and a simple roof to help keep temperatures and humidity stable. The photo here shows the initial facility where ITPACs were placed. It is not a cooled facility; in fact, the ITPACs here were in operation before the roof was installed, open to the elements and the weather. That works because each ITPAC has its own cooling integrated; each one is essentially a self-contained data center. One funny story: During the cold Quincy winter, snow would fall and cover up this part of the facility while the roof was still open. The baffles you see on the left were added to keep nature out -- most of the time.

FACT: Microsoft has more than 100 data centers in its global portfolio.

Microsoft data center pods, v. 2
Credit: Microsoft
"Get rid of the damned building"

If you could classify the first couple of generations of ITPACs as "containers of servers," then the lesson learned in the second wave was "get rid of the damned building." There were rows and rows of the latest-generation ITPACs on the Quincy campus, literally placed outside on concrete pads, open to the elements. Since the units are self-contained, there is little risk in having data center containers like this exposed to weather and nature in a place with a relatively temperate climate like Quincy.

As Microsoft has built up the Azure cloud service and moved more and more of its individual online services onto the Azure platform, the company has shifted away from the ITPAC model. The model simply couldn't scale to the growth Microsoft is seeing in its cloud services. Instead, the company is looking more toward buying servers by the tens of thousands, with custom designs developed as part of the Open Compute Project. (More on that a little later in this slideshow.)

FACT: Microsoft's data centers store more than 30 trillion individual pieces of data.

Microsoft ITPAC data center
Credit: Microsoft
What’s inside an ITPAC?

Microsoft originally specified the ITPAC infrastructure for its Chicago data center. That first generation of ITPAC was never in service at Quincy, so we were unable to examine it or compare it with later generations. The concept, however, stayed the same. Microsoft's message to vendors was, in effect: "Ship us some servers, networking, and power equipment in a box so all we have to do is hook it up to power, network, and water, and you guys deal with all of the details on the inside."

The interior of an ITPAC looks like a normal data center, albeit compressed. The ITPACs separate their interiors into hot and cold aisles, much like the "regular" data center shown earlier in this slideshow. Each vendor's overall ITPAC design, however, was different: one vendor built ITPACs based on shipping containers, while another delivered units that were more like temporary construction trailers or large doublewide trailers.

FACT: Microsoft's data centers process over 1.5 million requests every single second.

Microsoft's Quincy, Washington data center blending in
Credit: Microsoft
Security is paramount

Security is a vital part of Microsoft's data center strategy -- so much so that there is no external sign visible to the naked eye that this data center is in fact a massive Microsoft property. From the road, it looks like a collection of industrial buildings and generators without any corporate names, logos or identifying marks; you'd have to know where you were going to find it.

To get into the facility, you must be pre-approved by the company's data center team and submit to a number of agreements limiting what you can and cannot say about what you see. You must show government identification as well, so there is a record of all visitors, their time in and their time out. There are also a number of procedures within the facility that I can't talk about but that were rigidly enforced during our visit. Perhaps the coolest feature was the giant disk shredder, where every single hard drive goes to die before obsolete or unneeded equipment is recycled. No data leaves the facility on drives. Ever.

I've been in a number of data centers before, but none with this depth of security and dedication to carrying it out. It leaves me wondering why some folks are reluctant to trust the security of major cloud players; are your data center's perimeter and interior protected as well and as diligently as Microsoft's? Food for thought.

FACT: The number of times Microsoft's fiber optic network, one of North America's largest, could stretch to the moon and back? Three.

Microsoft server design
Credit: Microsoft
The Open Compute Project and Open CloudServer

Even though many of Microsoft's data center operational specifics remain confidential, the company is sharing some best practices in server design through the Open Compute Project. The OCP is an industry effort launched by Facebook to create best-of-breed server designs that integrate power, compute and networking into a forward-looking configuration -- one that works well in today's data center environments but can also take advantage of future innovations in power and network delivery. Microsoft contributed the Open CloudServer design, a chassis system that makes use of taller-than-normal 19" racks. According to the project, "the 12U chassis has dedicated, hard-wired out of band management, phase-balanced power, and high efficiency cooling."

Ultimately, Microsoft found that to extend the life of its data centers and enhance their software-defined nature, moving back toward standard racked servers was the way to go. It is easier to swap equipment in and out and to upgrade gear to take advantage of the latest in power, cooling and networking. In fact, the portion of the Quincy campus that will open as an Azure region in the coming months and years is designed in a "future-proof" way so that advances in power delivery can be fully harnessed within the building. In addition, in-line batteries are being developed so that large rooms full of giant batteries aren't required to keep the servers up in the event of a power failure.

The details of the Open CloudServer configuration Microsoft contributed are available through the Open Compute Project.

FACT: Microsoft and its data centers have been carbon neutral since 2012, and the company is continuing to increase the mix of renewable energy sources such as wind, solar and hydropower.