Over the past twenty years, power management in processors and computer systems has advanced at a substantial rate. Though many of the changes may seem evolutionary, a number of them have been quite remarkable. Arguably, the greatest changes have been driven by the need to sustain Moore's law.
At the turn of the century, only two decades ago, the most advanced processors were still largely monolithic; the multi-core processor on a single die had yet to arrive. With process nodes in the 350 and 250 nm generations, the world had yet to see the incredible scaling that was to come. Single-core processing with single-threaded operation was still the norm. But communication systems like the Internet were beginning to drive an insatiable need for more processing power, especially in data centers and servers. Mobile computing was not the only force behind many of these changes; the backend processing in large server systems enabled them as well. As commercial systems increased their ability to process data and mine that information for monetary benefit, so grew the requirements for more advanced processing. Take, for example, Amazon, which built advanced software systems that not only tracked users' purchases and secured financial data but also tracked consumers' buying habits and needs. These systems in turn required advanced databases and search engines that would eventually lead to AI subsystems. As more and more companies learned to utilize such processing power and built software around it, the microprocessor became more and more advanced.
Intel's and AMD's server businesses grew from less than 40% of their revenue to nearly 90% today. This massive change in the silicon industry also enabled more processing power per unit area than ever before. And as processors became more powerful, power consumption, and the silicon- and platform-level management of that power, became more of a problem. A simple view of the changes in voltage and current at the processor level can be seen in the graph below (Figure 1).
Note that in 1994 the current per processor was typically about a factor of 1000 lower than in modern times (black curve), and supply voltages sat at 3.3 V. Today, currents in an advanced processor can exceed 400 A at roughly 1 V.
What is more astonishing is the change in current density, shown in the next graph (Figure 2). As stated earlier, processors went from a single core to, for some suppliers, 64 or more. This amazing increase has produced not only very dense processing capability on silicon but also very dense currents, as shown by the current densities on the left axis of Figure 2 (black curve).
The result has been not only a massive increase in processing power but also a host of system-level challenges that have consumed the processor industry.
The question that often comes up when discussing these amazing computational advancements is: what drove these changes? The answer is complex and multi-faceted, but looking back in time, it is easy to see why such changes took place.
In the 1980s and 1990s, a few companies believed they could challenge IBM for supremacy in the database computer market. Until this time, IBM had dominated the industry through its hardware and software offerings, backed by a massive support infrastructure. Though IBM was still a player in the smaller personal-computer market at the time, the majority of its revenue came from computer systems that crunched numbers securely for large businesses such as banks and financial-services firms. The up-and-coming retail market was also starting to use these systems to help run its businesses. One retailer growing dramatically was Walmart, which was trying to use its systems not only to place items properly on shelves but also to tap into buyer trends. This was before the proliferation of mobile technologies (smartphones) that would further revolutionize the industry. With companies in these large markets leading the way, new computing models started to emerge, using large databases for data mining and other activities to surface trends in buyer attitudes. This created a huge opportunity for increasing revenue and resulted in new challenges for the server and database-software industry.
In the early to mid-1980s, Intel was one such company, transitioning from being a large player in the silicon memory business to developing microprocessors. With generational performance improvements driven by process scaling, frequency gains, and compute density, Intel began to supplant many of the major players in silicon, such as IBM. Soon, Intel was targeting the server market and enabling other suppliers to go after the growing server-systems industry.
In the late 1990s and early 2000s, the race was on to create higher-performance silicon, driven by the dot-com boom. And while process scaling continued, so did architectural changes, which is when the multi-core processor started to rise to prominence.
With increasing densities come increased challenges. Intel and others discovered that the increase in power density and multi-core processing not only resulted in higher power, it eventually began to limit the performance gains they were after. Within the first decade of the 21st century, engineers were implementing circuits and techniques to address these power management issues.
One such technique was the advent of power gates. These are simply large power transistors on the die that allow power management designers to turn cores off and on rather than let them sit and leak power when not doing useful work. Though the power gates took up valuable silicon real estate, the result was a more efficient processing device and a better overall performance-per-watt equation.
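The payoff of power gating can be sketched with a toy accounting model. This is purely illustrative; the leakage and gate-overhead numbers below are hypothetical, chosen only to show the bookkeeping, and do not come from any vendor.

```python
# Illustrative model of leakage savings from power-gating idle cores.
# All numbers are hypothetical, for demonstration only.

LEAKAGE_W_PER_CORE = 1.5   # hypothetical leakage of an ungated idle core, in watts
GATE_OVERHEAD_W = 0.05     # hypothetical residual power of the gate itself, in watts

def idle_power(num_cores: int, active: int, power_gated: bool) -> float:
    """Total power wasted by idle cores, with and without power gates."""
    idle = num_cores - active
    if power_gated:
        # A gated core is cut off from the rail and leaks almost nothing;
        # only the small overhead of the gate circuitry remains.
        return idle * GATE_OVERHEAD_W
    # Ungated idle cores sit on the rail and leak continuously.
    return idle * LEAKAGE_W_PER_CORE
```

With eight cores and only two active, the six idle cores waste 9 W ungated versus about 0.3 W gated in this toy model, which is the performance-per-watt argument in miniature.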
The second big change was to segment the power even further. By isolating the power delivered to a core or a group of cores, one could control that power more efficiently. And by changing the power states (voltages) of these cores, a power management engineer could perform workload-aware computation rather than operating at the highest frequency when certain operations did not require it. This was not only very useful for server cores; it also helped immensely in mobile cores, which had limited battery life.
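Workload-aware voltage and frequency selection can be sketched as follows. The operating-point table here is hypothetical; in a real part these pairs come from silicon characterization, and the policy logic lives in firmware and the operating system.

```python
# A minimal sketch of workload-aware voltage/frequency selection.
# (frequency_GHz, voltage_V) pairs, slowest/lowest-power first. Values are
# illustrative, not from any real processor.
OPERATING_POINTS = [(1.2, 0.70), (2.0, 0.85), (2.8, 1.00), (3.5, 1.15)]

def pick_operating_point(utilization: float):
    """Choose the slowest operating point that still covers the observed load.

    utilization is the fraction of the fastest frequency the workload needs.
    Running faster than necessary wastes power, since dynamic power scales
    roughly with C * V^2 * f, and the higher frequencies demand higher voltage.
    """
    f_max = OPERATING_POINTS[-1][0]
    for freq, volt in OPERATING_POINTS:
        if freq >= utilization * f_max:
            return freq, volt
    return OPERATING_POINTS[-1]
```

A lightly loaded core lands at the low-voltage point, while a saturated core gets the full frequency; the quadratic voltage term is what makes dropping to a lower point so profitable.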
Further improvements came from controlling a processor's performance states, or P-states, within a system. Techniques such as guardbanding also helped give margin back to these systems: by knowing a processor's operational voltage limits, including power-supply and process tolerances, a designer could set the voltage at the optimal point for the device, limiting power consumption while maintaining performance throughout the frequency range of the core and silicon.
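The guardbanding arithmetic can be sketched in a few lines. This is a simplified model under stated assumptions: a fractional supply tolerance and a flat additive aging margin; real guardband methodologies involve many more terms (droop events, temperature, test margin).

```python
# A hedged sketch of guardband arithmetic: find the lowest nominal rail
# voltage whose worst case still meets the minimum voltage the silicon
# needs at a target frequency. The model and numbers are illustrative.

def min_nominal_voltage(vmin_silicon: float,
                        supply_tolerance: float,
                        aging_margin: float) -> float:
    """Smallest nominal voltage whose worst case stays above vmin_silicon.

    vmin_silicon: minimum voltage the core needs at the target frequency (V)
    supply_tolerance: fractional regulator/droop tolerance (e.g. 0.03 for 3%)
    aging_margin: additive margin reserved for device aging (V)
    """
    # Worst case: nominal * (1 - tolerance) - aging_margin >= vmin_silicon.
    # Solving for the nominal voltage at equality:
    return (vmin_silicon + aging_margin) / (1.0 - supply_tolerance)
```

Every basis point of tolerance or margin pushes the nominal voltage up, and power rises roughly with the square of voltage, which is why tightening these tolerances gives margin, and power, back to the system.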
And while these changes have certainly helped processor designers and manufacturers keep pace with the insatiable performance requirements of today's server chips, the next decade appears even more fraught with challenges for power and power management engineers, because these very techniques are beginning to become roadblocks to further advances in processor performance.
In the next blog update, we will cover the major technological changes that have occurred in the processor arena to combat the aforementioned challenges, and how the previous innovations have now become somewhat of an impediment to performance in the advanced processor and computing market.