
Talk:Amdahl's law


Suggestion for a simpler/more general introduction


Hi, may I suggest a simpler version of the introduction, which explains the significance of the law and also sets it in a more general context:

Amdahl's law, also known as Amdahl's argument,[1] is named after computer architect Gene Amdahl, and defines a theoretical maximum for the ability of systems to scale up. "Scaling up" refers to the ability to do more work by adding more resources. The critical parameter in Amdahl's law is the percentage of work that cannot be parallelized - in other words, work that must be performed by one central component and cannot be divided between resources. The basic finding of the law is that as long as there is a non-zero percentage of work that cannot be parallelized, the system will not be able to scale beyond a certain theoretical maximum. The law also shows how to compute this theoretical maximum scale from that percentage and a few other parameters.

Amdahl's law is most often applied to parallel computing. In this context, the speedup of a program using multiple processors is limited by the time needed for the "sequential fraction" of the program to run - this is the percentage of work which cannot be parallelized. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×, as the diagram illustrates.

Anne.naimoli (talk) 12:27, 13 September 2011 (UTC)[reply]

References

  1. ^ Rodgers 1985, p. 226.
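
As a quick numerical check of the 20-hour example above, here is a minimal Python sketch (an editorial illustration, not part of the proposed text or the cited reference) that evaluates the usual form of the law, speedup(N) = 1/((1 - p) + p/N), with p = 0.95:

    # Minimal sketch of the 20-hour example: 1 hour serial, 19 hours (p = 0.95)
    # parallelizable.
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    p = 0.95
    for n in (1, 2, 8, 64, 1024, 10**6):
        print(n, round(amdahl_speedup(p, n), 2))
    # As n grows, the speedup approaches 1 / (1 - p) = 20; the run time can
    # never drop below the single non-parallelizable hour.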

The application of this would tie in well with the concept of the Critical Path Method for project management: tracing the critical path identifies the shortest possible time of execution. Please note a link to http://en.wikipedia.org/wiki/Critical_path_method — Preceding unsigned comment added by JimDelRossi (talk | contribs) 22:17, 23 January 2012 (UTC)[reply]

The explanations in this article are pretty horrible. In fact other articles are also not so good because of errors, but this one isn't too bad - http://software.intel.com/en-us/articles/amdahls-law-gustafsons-trend-and-the-performance-limits-of-parallel-applications/ - though there is an incorrect sign in one part. If we use P for the proportion of time spent on the parallelisable part of the task, then we can write P = 1 - S, so that the expression is really quite easily seen to be Speedup(N) = 1/(S + P/N) - offset terms. Then the details do become really rather obvious. The first section of this article is far too confusing, and if needed at all, should come after the basic expression shown here. If the offset terms (due to communications) are ignored, then if N is allowed to "proceed to infinity" it is easy to see that the maximum Speedup is 1/S. As commented, the performance/price ratio deteriorates if attempts are made to use more than 1/S processors/threads/cores. I don't have time to sort this myself, but the article really could be tidied up a lot. David Martland (talk) 09:37, 9 March 2012 (UTC)[reply]
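
For readers following the expression in the comment above, the sketch below (Python, illustration only; the optional overhead term is an assumed per-processor communication cost, just one possible way the "offset terms" might be modelled) shows the speedup flattening towards 1/S:

    # Speedup(N) = 1 / (S + P/N) with P = 1 - S, as written above.
    def speedup(s, n, overhead=0.0):
        p = 1.0 - s
        return 1.0 / (s + p / n + overhead * n)  # overhead defaults to zero

    s = 0.05
    print("upper bound 1/S =", 1.0 / s)  # 20.0
    for n in (1, 5, 10, 20, 40, 100, 1000):
        print(n, round(speedup(s, n), 2))
    # The curve flattens towards 1/S = 20; beyond roughly 1/S processors each
    # added processor contributes little, which is the performance/price point
    # made above.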

Irony


This article (itself) is longer, has more math, and has more graphs than Amdahl's own original article. 198.123.56.223 (talk) 18:52, 12 April 2012 (UTC)[reply]

Error in graph scales


Depending on your perspective, the graph is either scaled incorrectly or the Y axis is mislabeled. It states that a single processor is twice as fast as a single processor, i.e. speedup and speed multiplier have been confused. Justin Urquhart Stewart (talk) 02:28, 29 August 2014 (UTC)[reply]

Please clarify. Each curve on the graph starts at (1,1) [and this was true even when you posted your comment]. According to Speedup, a "speedup" value of 1 represents no improvement at all. So what are you talking about? - 72.182.55.186 (talk) 00:02, 27 February 2018 (UTC)[reply]

Limitations of Amdahl's law, and ways to mitigate its effects using frequency scaling


It seems like it would be appropriate to discuss turbo boost (and other frequency scaling techniques) in this article, since one of their purposes is to mitigate the effects of Amdahl's law. See http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=1431565; here's a quote from the abstract: "In order to mitigate the effects of Amdahl's law, in this paper we make a compelling case for varying the amount of energy expended to process instructions according to the amount of available parallelism."

I personally do not know how successful these methods currently are, as I have seen some more recent papers that show only a modest performance gain (around 5%) for applications with both limited and copious amounts of parallelism. It seems worthwhile to mention these techniques in this article, however.
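
To make the connection concrete, here is one possible toy model (the boost factor is an assumed parameter for illustration, not a figure from the cited paper): if the serial fraction can be executed at a higher clock rate while the parallel fraction runs on n ordinary cores, the asymptotic bound rises from 1/s to boost/s.

    # Hypothetical illustration of frequency-boosting the serial phase.
    def boosted_speedup(s, n, boost=1.0):
        return 1.0 / (s / boost + (1.0 - s) / n)

    s = 0.05
    print(round(boosted_speedup(s, 64), 1))             # no boost: about 15.4
    print(round(boosted_speedup(s, 64, boost=1.3), 1))  # assumed 30% boost: about 18.8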

In addition, Amdahl's law is not considered to be as useful as "work/span analysis" in the analysis of parallel computations. See chapter 2 of Structured Parallel Programming (http://books.google.com/books?id=2hYqeoO8t8IC). For more information about work/span analysis you can also see Intel's brief tutorial here: https://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/GUID-83D8631F-8ECA-498A-A091-CF952FE77A24.htm

I think that we need a separate article on work/span analysis on Wikipedia. If I get some time I'll try and write one, but if someone else in the Wikipedia world is interested in doing so, let me know and I'll be happy to lend a hand. -Tim Kaler 20:24, 15 October 2014 (UTC) — Preceding unsigned comment added by Timkaler (talk | contribs)
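
For readers unfamiliar with the terms, here is a minimal sketch of work/span analysis using the standard definitions and toy numbers (not taken from the cited book):

    # "Work" T1 is the total time of all tasks run one after another; "span"
    # Tinf is the time along the longest chain of dependent tasks.
    work = 100.0   # toy numbers
    span = 10.0

    print("parallelism =", work / span)    # upper bound on speedup: 10.0
    for n in (1, 2, 4, 8, 16, 64):
        greedy_time = work / n + span      # Brent's bound for a greedy scheduler
        print(n, round(work / greedy_time, 2))  # speedup guaranteed by that bound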

Inequality


Isn't there some value in pointing out that Amdahl's law functions as a limit, or at least an inequality? The speedup is really <= the right-hand side. — Preceding unsigned comment added by Jimperrywebcs (talk | contribs) 15:55, 13 November 2015 (UTC)[reply]
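
Stated as an inequality (standard notation; this is a suggestion for wording, not text currently in the article), the observed speedup on N processors with parallelizable fraction p satisfies

    S(N) \;\le\; \frac{1}{(1 - p) + \frac{p}{N}} \;\le\; \frac{1}{1 - p}

with equality in the first step only under perfect load balancing and zero parallelization overhead.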

Corrections


Problem size (or workload) does not appear in the Amdahl's Law formulation. Assuming that Amdahl's Law is only applicable to solving problems of fixed size is incorrect. Amdahl's Law can scale the number of processors (N) without affecting the serial percentages. Thus Amdahl's speedup can converge to 1/(1-p), which can asymptotically approach infinity or zero, depending upon the parallel percentages (p). Therefore, it is mathematically incorrect to assert pessimism on parallel processing. The main page's Amdahl's graph shows only half of the Law's power. Gustafson's Law also does not include problem size in its formulation. However, when one tries to scale the number of processors (N) using Gustafson's Law, it actually increases the problem sizes (workload) and the parallel percentages, since Gustafson's formulation is based on the workload using a fixed number of processors. The Gustafson's Law scaling graph is also incorrect. Both pages will need corrections. Justin Y. Shi (talk) 20:26, 16 September 2024 (UTC)[reply]

I don't think you've fully understood what's meant by "problems of fixed size". Amdahl's law describes the performance of a system as the number of processors is changed. It doesn't describe the performance of a system as the size of the problem changes; that behavior depends on the problem and is ordinarily described in asymptotic notation, e.g. for something like merge sort. This doesn't mean that Amdahl's law can't be used except for problems that are inherently fixed-size; it simply means that Amdahl's law works by taking the problem size as fixed. Sneftel (talk) 12:26, 17 September 2024 (UTC)[reply]
Amdahl's Law works for all problem sizes. This is why the problem size is not in the formula at all. Justin Y. Shi (talk) 18:51, 17 September 2024 (UTC)[reply]
Taking the number of processors to infinity yields Amdahl's speedup upper bound. Your last sentence does not make sense to me: "it simply means that Amdahl's law works by taking the problem size as fixed." It contradicts your previous sentence, "This doesn't mean that Amdahl's law can't be used except for problems that are inherently fixed-size". The law should work for solving any problem with any size. No? Justin Y. Shi (talk) 04:08, 18 September 2024 (UTC)[reply]
Amdahl's law describes a *relative* change in performance. It does not have units of time. But that means that it needs to be proportional to something: most conventionally, the algorithm with no parallel speedup. If the problem size is not taken as fixed, then the time taken to execute the algorithm with no parallel speedup is not fixed, so Amdahl's law can't say anything meaningful. The law does work for problems of any size, because any problem has a specific size. For instance, if you want to talk about how adding processors affects, say, parallel merge sort, you might talk specifically about a particular implementation of parallel merge sort with an input size of one million elements. Or one thousand. The input size can be whatever you want, but since p and s depend strongly on what that input size is, you need to decide on an input size in order to do the calculations. Sneftel (talk) 13:06, 18 September 2024 (UTC)[reply]
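
A small illustration of that last point, using a toy cost model assumed purely for this example (a fixed serial setup cost plus parallelizable work proportional to n log n): the serial fraction s, and with it the Amdahl bound 1/s, changes as the input size changes.

    import math

    c = 1000.0  # hypothetical fixed serial cost
    for n in (10**3, 10**6, 10**9):
        parallel_work = n * math.log2(n)      # assumed parallelizable work
        s = c / (c + parallel_work)           # serial fraction at this input size
        print(n, "s =", round(s, 8), "bound 1/s =", round(1.0 / s, 1))
    # Larger inputs give a smaller serial fraction and a larger speedup bound,
    # which is why p and s must be pinned to a particular input size first.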