Parallel Scalability Isn’t Child’s Play

Saturday, April 18, 2015

Interesting articles by Mark B. Friedman:

Part 1 - Parallel Scalability Isn’t Child’s Play


Quote:
In a recent blog entry, Dr. Neil Gunther, a colleague from the Computer Measurement Group (CMG), warned about unrealistic expectations being raised with regard to the performance of parallel programs on current multi-core hardware. Neil’s blog entry highlighted a dismal parallel programming experience publicized in a recent press release from the Sandia Labs in Albuquerque, New Mexico. Sandia Labs is a research facility operated by the U.S. Department of Energy.

According to the press release, scientists at Sandia Labs simulated key algorithms for deriving knowledge from large data sets. “The simulations show a significant increase in speed going from two to four multicores, but an insignificant increase from four to eight multicores. Exceeding eight multicores causes a decrease in speed. Sixteen multicores perform barely as well as two, and after that, a steep decline is registered as more cores are added.” They concluded that this retrograde speed-up was due to deficiencies in “memory bandwidth as well as contention between processors over the memory bus available to each processor.”
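The peak-then-decline pattern Sandia reported is exactly what a scalability model with a coherency (crosstalk) penalty produces. A minimal sketch using Gunther’s Universal Scalability Law, with hypothetical contention (σ) and coherency (κ) coefficients chosen for illustration, not fitted to the Sandia data:

```python
# Gunther's Universal Scalability Law:
#   C(N) = N / (1 + sigma*(N - 1) + kappa*N*(N - 1))
# sigma models contention (serialization); kappa models coherency
# (point-to-point crosstalk). The coefficients below are hypothetical.

def usl_speedup(n, sigma=0.10, kappa=0.01):
    """Predicted speedup on n cores under the USL."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

if __name__ == "__main__":
    for n in (2, 4, 8, 16, 32):
        print(f"{n:2d} cores -> speedup {usl_speedup(n):.2f}")
```

With these coefficients the curve peaks near N ≈ √((1−σ)/κ) ≈ 9.5 cores and then declines, mirroring the press release’s observation that sixteen cores perform barely better than a handful.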



Part 2 - Amdahl’s Law vs. Gunther’s Law


Quote:
This blog entry investigates Gunther’s model of parallel programming scalability, which, unfortunately, is not as well known as it should be. Gunther’s insight is especially useful in the current computing landscape, which is actively embracing parallel computing using multi-core workstations and servers.

Gunther’s scalability formula for parallel processing is a useful antidote to any overly optimistic expectations developers might have about the gains to be had from applying parallel programming techniques.
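To see why Gunther’s formula tempers Amdahl-style optimism, it helps to compare the two side by side: Amdahl’s Law only saturates, while the USL’s extra coherency term eventually bends the speedup curve downward. A sketch with illustrative parameters (the parallel fraction and the σ/κ coefficients are assumptions for demonstration, not measurements):

```python
def amdahl_speedup(n, parallel_fraction=0.95):
    """Amdahl's Law: speedup is capped at 1/(1 - parallel_fraction)."""
    serial = 1 - parallel_fraction
    return 1 / (serial + parallel_fraction / n)

def usl_speedup(n, sigma=0.05, kappa=0.005):
    """Gunther's USL: a coherency term makes speedup retrograde at scale."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

if __name__ == "__main__":
    print(f"{'cores':>5} {'Amdahl':>8} {'USL':>8}")
    for n in (1, 8, 64, 512):
        print(f"{n:5d} {amdahl_speedup(n):8.2f} {usl_speedup(n):8.2f}")
```

With these numbers Amdahl’s curve climbs monotonically toward its ceiling of 1/(1 − 0.95) = 20, while the USL curve peaks around N ≈ 14 cores and at 512 cores is actually slower than a single core, which is precisely the kind of retrograde behavior the Sandia simulations exhibited.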



Part 3 - The Problem with Fine-Grained Parallelism


Quote:
Developers experienced in building parallel programs recognize that Gunther’s formula echoes an inconvenient truth, namely, that the task of achieving performance gains using parallel programming techniques is often quite arduous. For example, in a recent blog entry entitled “When to Say No to Parallelism,” Sanjiv Shah, a colleague at Intel, expressed similar sentiments. One very good piece of advice Sanjiv gives is that you should not even be thinking about parallelism until you have an efficient single-threaded version of your program debugged and running.







