In 1973, Pete Townshend and The Who wrote and sang about Quadrophenia. And although it took another 34 years for quad-core servers to be counted as a commercial success, by all accounts, multicore server evolution is just beginning.
As the decade draws to a close, x86-based servers will have eight or even 16 cores in a single chip, said Nathan Brookwood, an analyst at Insight 64. The reason: Adding more cores is the fastest way to performance gains.
Improving memory technology can add 5 percent to 10 percent to system performance, and an updated processor architecture might provide an additional 10 percent boost, Brookwood said. But doubling core density within a processor can instantly add 50 percent or more in performance.
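Brookwood's comparison of gain sources can be sketched as simple arithmetic. The figures below are the illustrative percentages from his comments, not benchmark data, and the multiplicative combination is an assumption for illustration:

```python
# Relative performance gains described above (illustrative figures, not benchmarks).
baseline = 1.0

memory_gain = 1.10         # improved memory technology: roughly 5-10%
arch_gain = 1.10           # updated processor architecture: roughly 10%
core_doubling_gain = 1.50  # doubling core density: 50% or more

print(f"memory alone:  {baseline * memory_gain:.2f}x")
print(f"architecture:  {baseline * arch_gain:.2f}x")
print(f"core doubling: {baseline * core_doubling_gain:.2f}x")

# If the gains compounded multiplicatively, all three together would yield:
combined = baseline * memory_gain * arch_gain * core_doubling_gain
print(f"all combined:  {combined:.2f}x")
```

Even under these rough assumptions, the core-doubling term dominates the other two combined, which is the point Brookwood is making.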
"Compare the level of performance gain we are seeing with quad-core processors to what Intel was able to provide in the move from Pentium 3 to Pentium 4," Brookwood said. Even though the Pentium 4 was a whole new microarchitecture, the move boosted performance by only around 20 percent, he explained. Intel's first quad-core Xeons, by contrast, are promising a 40 percent or greater increase.
For mainstream servers, there is no foreseeable point at which doubling cores every two years will hit diminishing returns. Eight-core designs in 2009, 16 cores in 2011 and 32 cores in 2013 will be the route to processor performance gains just about indefinitely, most observers agree.
"There is always more work to be done," said Martin Reynolds, a Gartner Inc. analyst. "With more cores, you can get more work done."
For his part, Brookwood said, the multicore era is at its earliest stages. "We are not running into walls there."
That said, there's no definite word on how the industry will get there. Intel Corp. and Advanced Micro Devices Inc. have taken different paths to their quad-core designs. Some analysts believe, though, that AMD might ultimately have to take a more Intel-like approach to catch up with, and then pass, Intel in the multicore market.
Microprocessor makers turned to multicore designs to solve some fundamental problems. Semiconductor technology continues along the path defined by Intel co-founder Gordon Moore in 1965. Moore's Law says that the number of transistors on a given chip will double roughly every two years. But the heat generated by packing so much in one tiny space has demanded a new approach to achieving incremental performance gains.
Each Moore's Law doubling comes as the width of transistor lines within a chip shrinks, allowing more transistors to fit in a given chip area. Today, leading semiconductor vendors are producing chips at either 90- or 65-nanometer line widths, and some vendors will begin the move to 45 nm later this year.
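The relationship between line width and transistor count can be sketched with back-of-the-envelope geometry: to a first approximation, transistor area scales with the square of the feature size, so density scales with its inverse square. This is a simplification (real scaling depends on many other design factors), using the 90-, 65- and 45-nm nodes named above:

```python
# Approximate transistor-density gain from each process shrink, assuming
# density scales with the inverse square of the line width (a simplification).
nodes_nm = [90, 65, 45]

for old, new in zip(nodes_nm, nodes_nm[1:]):
    density_gain = (old / new) ** 2
    print(f"{old} nm -> {new} nm: ~{density_gain:.1f}x transistor density")
```

Each step works out to roughly a 2x density gain, which is why successive process nodes track the Moore's Law doubling.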
But while transistor budgets continue to grow, microprocessor designs hit a wall a few years ago: vendors could no longer keep accelerating clock frequencies while holding the heat produced to a manageable level. Digital Power Group, a Washington-based energy research firm, estimates that computers now consume about 10 percent of all the electricity generated in the U.S., a figure that could double by 2015. Legislation is being considered to force businesses and technology providers to reduce energy consumption.
By moving to multiple cores inside a single chip, processor manufacturers can hold or even reduce clock speeds, containing the heat generated. Doubling the available processing engines inside the same silicon real estate dramatically boosts overall performance while keeping power levels stable.
"It's really providing amazing new performance levels," said David Tuhy, a general manager at Intel's Business Client Group. "We're offering 50 percent more performance than our best dual-core processors, and it's four and a half times the performance of our original single-core Xeon. And the power didn't go up."