Interesting articles by Mark B. Friedman:
Part 1 - Parallel Scalability Isn't Child's Play
Part 2 - Amdahl's Law vs. Gunther's Law
Part 3 - The Problem with Fine-Grained Parallelism
Part 1 - Parallel Scalability Isn't Child's Play
Quote:
In a recent blog entry, Dr. Neil Gunther, a colleague from the Computer Measurement Group (CMG), warned about unrealistic expectations being raised with regard to the performance of parallel programs on current multi-core hardware. Neil's blog entry highlighted a dismal parallel programming experience publicized in a recent press release from Sandia Labs in Albuquerque, New Mexico. Sandia Labs is a research facility operated by the U.S. Department of Energy. According to the press release, scientists at Sandia Labs simulated key algorithms for deriving knowledge from large data sets. The simulations show a significant increase in speed going from two to four cores, but an insignificant increase from four to eight cores. Exceeding eight cores causes a decrease in speed. Sixteen cores perform barely as well as two, and after that, a steep decline is registered as more cores are added. They concluded that this retrograde speed-up was due to deficiencies in memory bandwidth as well as contention between processors over the memory bus available to each processor.
Part 2 - Amdahl's Law vs. Gunther's Law
Quote:
This blog entry investigates Gunther's model of parallel programming scalability, which, unfortunately, is not as well known as it should be. Gunther's insight is especially useful in the current computing landscape, which is actively embracing parallel computing using multi-core workstations and servers. Gunther's scalability formula for parallel processing is a useful antidote to any overly optimistic expectations developers might have about the gains to be had from applying parallel programming techniques.
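The excerpt does not reproduce the formulas themselves, so here is a minimal sketch of the two models being contrasted: Amdahl's Law, which caps speedup based on the serial fraction of the work, and Gunther's Universal Scalability Law, whose extra coherency term can make throughput peak and then decline, much like the retrograde behavior described in the Sandia press release. The coefficient values (p, sigma, kappa) are illustrative assumptions, not figures from the articles.

```python
def amdahl_speedup(n, p):
    """Amdahl's Law: speedup on n processors when a fraction p of the
    work is parallelizable. Speedup never exceeds 1 / (1 - p)."""
    return 1.0 / ((1.0 - p) + p / n)

def usl_speedup(n, sigma, kappa):
    """Gunther's Universal Scalability Law: sigma models contention
    (serialization on shared resources), kappa models coherency delay
    (e.g., memory-bus and cache-coherency traffic). Any nonzero kappa
    makes the curve peak and then turn retrograde as n grows."""
    return n / (1.0 + sigma * (n - 1) + kappa * n * (n - 1))

if __name__ == "__main__":
    # Illustrative parameters: 95% parallel work, 5% contention,
    # 2% coherency penalty.
    for n in (1, 2, 4, 8, 16, 32):
        print(f"n={n:2d}  Amdahl={amdahl_speedup(n, 0.95):5.2f}  "
              f"USL={usl_speedup(n, 0.05, 0.02):5.2f}")
```

With these assumed coefficients the Amdahl curve climbs monotonically toward its 20x ceiling, while the USL curve peaks near eight processors and then falls, so 32 processors do worse than 2, which is the qualitative shape the Sandia simulations reportedly exhibited.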
Part 3 - The Problem with Fine-Grained Parallelism
Quote:
Developers experienced in building parallel programs recognize that Gunther's formula echoes an inconvenient truth, namely, that the task of achieving performance gains using parallel programming techniques is often quite arduous. For example, in a recent blog entry entitled "When to Say No to Parallelism", Sanjiv Shah, a colleague at Intel, expressed similar sentiments. One very good piece of advice Sanjiv gives is that you should not even be thinking about parallelism until you have an efficient single-threaded version of your program debugged and running.