A Parallel Future? What Bulldozer Means for the Future of Computing.
As technology advances, single-threaded performance becomes more and more difficult to increase. However, overall performance can still be improved through multi-threaded applications running on multi-core processors. Intel and AMD were pushing dual-core chips, even in laptops, years ago. Now, quad-core, and even octo-core, CPUs are available.
Bulldozer: What is it?
Perhaps one of the boldest moves toward multi-threaded, multi-core processing was AMD’s Bulldozer architecture. Bulldozer eschews the traditional definition of a multi-core processor: it is composed of modules, each of which contains two "cores". Each module features two integer units, one per core, and a single floating point unit that is shared between both cores. Such an architecture greatly favors parallel, multi-threaded workloads over single-threaded ones. Each core has less single-threaded performance than even some of AMD’s older processors, and it certainly falls short of Intel’s excellent Sandy Bridge and Ivy Bridge processors.
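To make the module idea concrete, here is a toy sketch in Python. This is purely conceptual, not how real hardware schedules anything: the shared floating point unit is modeled as a lock that the two "cores" must take turns holding, while each core’s integer work proceeds without any contention.

```python
import threading

# Toy model of one Bulldozer module: two "cores" share one FPU.
# The shared FPU is modeled as a lock -- only one core may run
# floating point work at a time -- while integer work needs no lock
# because each core has its own integer unit.
# (A conceptual sketch only; real hardware is far more clever.)

shared_fpu = threading.Lock()
results = {}

def integer_work(core_id, n):
    # Each core has its own integer unit: no contention here.
    total = 0
    for i in range(n):
        total += i * 3 + 1
    results[f"int-{core_id}"] = total

def fp_work(core_id, n):
    # Both cores funnel FP ops through the one shared unit.
    with shared_fpu:
        total = 0.0
        for i in range(1, n):
            total += 1.0 / i
        results[f"fp-{core_id}"] = total

threads = [
    threading.Thread(target=integer_work, args=(0, 100_000)),
    threading.Thread(target=integer_work, args=(1, 100_000)),
    threading.Thread(target=fp_work, args=(0, 100_000)),
    threading.Thread(target=fp_work, args=(1, 100_000)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))
```

In this model, two integer-heavy threads scale cleanly across the module, while two floating-point-heavy threads serialize behind the shared unit, which is exactly the trade-off Bulldozer made.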
Why? The answers to this question are numerous. One thought is that AMD wished to differentiate itself from Intel; with its limited resources, AMD struggled to keep up with Intel’s single-threaded performance. Another idea, and the one I favor, is that with the experience gained from its acquisition of ATI, a graphics company, AMD is moving toward a parallel, optimized future – one that could truly change the entire CPU landscape.
APUs: A Genius Idea?
AMD has championed the acronym APU, for Accelerated Processing Unit. An APU combines a CPU, GPU, and various other goodies like a memory controller onto a single chip, allowing for quick interconnects and lower overall power usage. Most APUs pair a mid-range CPU with a mid-range GPU at an attractive price. These units do not match the brute force of a high-end Intel CPU or a high-end AMD or nVidia GPU, but an APU meets most computing needs at a low price.
Consider the architecture of a GPU. A graphics unit has many parallel units working in unison to produce visuals at exceptionally quick speeds. Each unit is not very powerful on its own, but it doesn’t have to be. Now, think back to Bulldozer. Bulldozer makes a CPU more parallel, though nowhere near as parallel as a GPU. However, its strength is multi-threaded integer operations, and a GPU’s strength is multi-threaded floating point operations. Combining both ideas on a single chip could allow for great parallel computing power.
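The GPU model described above – many simple units each handling one element – can be sketched in plain Python. The `saxpy_kernel` and `launch` names below are hypothetical stand-ins for what OpenCL or CUDA actually provide; saxpy (y = a·x + y) is a classic data-parallel floating point workload.

```python
# A GPU runs one small "kernel" per data element, across thousands
# of simple units at once. This sketch mimics that programming model
# in pure Python (names are invented for illustration).

def saxpy_kernel(global_id, a, x, y):
    # Each "work item" touches exactly one element: no loops,
    # no dependence on any other work item.
    return a * x[global_id] + y[global_id]

def launch(kernel, n, *args):
    # On real hardware the n work items run in parallel; here we
    # simply iterate to show the model.
    return [kernel(i, *args) for i in range(n)]

x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 10.0, 10.0, 10.0]
result = launch(saxpy_kernel, len(x), 2.0, x, y)
print(result)  # → [12.0, 14.0, 16.0, 18.0]
```

Because no work item depends on another, a GPU can throw thousands of weak units at the problem at once – which is why even a mid-range GPU can dwarf a CPU in raw floating point throughput.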
A future Bulldozer-based CPU combined with a future GCN (Graphics Core Next) GPU could deliver excellent integer and floating point performance on a single, low-cost, efficient chip. The trick is optimization. Without significant uptake of GPU/GPGPU acceleration in software, the GPU sits idle much of the time. With clever programming and efficient utilization of its assets, one of these theoretical APUs could stomp Intel. Even a mid-range GPU offers several times the floating point performance of a high-end Intel CPU, and Bulldozer even defeated Sandy Bridge Core i5s and Core i7s in certain heavily multi-threaded workloads.
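What that "clever programming" might look like is anyone’s guess, but the core idea is routing branchy integer tasks to the CPU cores and uniform per-element floating point work to the GPU. A hypothetical sketch follows – every name in it is invented for illustration, and real code would dispatch through OpenCL or similar:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical APU scheduler sketch: integer-heavy tasks go to CPU
# "module" threads, while bulk per-element floating point arrays go
# to a data-parallel "GPU" path. Purely illustrative.

def cpu_int_task(values):
    # Branchy, modular integer work: Bulldozer's strong suit.
    return sum(v * v % 97 for v in values)

def gpu_fp_launch(values):
    # Uniform per-element FP work: a GPU's strong suit.
    return [v * 0.5 + 1.0 for v in values]

def run_on_apu(int_jobs, fp_arrays):
    # Dispatch both kinds of work concurrently.
    with ThreadPoolExecutor(max_workers=4) as pool:
        int_results = list(pool.map(cpu_int_task, int_jobs))
        fp_results = list(pool.map(gpu_fp_launch, fp_arrays))
    return int_results, fp_results

ints, fps = run_on_apu([[1, 2, 3], [4, 5, 6]], [[2.0, 4.0]])
print(ints, fps)  # → [14, 77] [[2.0, 3.0]]
```

The hard part is not writing something like this for one program – it is convincing an entire software ecosystem to structure its workloads this way.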
Conclusion: The Barrier
Here’s the issue: who is going to take the time to optimize for 10% of the market? With Intel owning both the high-end and mainstream x86 markets, most optimization will target Intel, not AMD. The optimizations required for an APU to overtake an Intel CPU could be tricky and time consuming, and most current GPU/GPGPU work relies on CUDA, a technology proprietary to nVidia, AMD’s GPU rival. Compounding the problem, AMD does not have the resources to drive development on its platform; they struggle just to keep pumping out revisions and improvements. Their hope at the moment lies with open standards like OpenCL, which support both AMD and nVidia GPUs. Without wider adoption of GPU acceleration, AMD is stuck undercutting Intel on price, a strategy that is tough to sustain long term as AMD teeters on profitability. Their future hinges on this bet, and it could become a huge flop or a genius success in the long run.
Disclaimer: I am not affiliated with AMD or Intel. I also do not have any experience in chip design. I got the idea to write this piece when I realized the possibilities of Bulldozer in an APU package. This is simply my opinion.
Feel free to add your opinion in the comments. I look forward to what you have to say. Any Computer Engineers or people in the industry? Any insight to provide?