Back to ComputerTerms

Normalize: Given a reference execution time A, take the execution time you have (B) and divide it by A. What you get is the normalized execution time of B with respect to A.

$$$Normalized(B)=\frac{Execution~time~of~B}{Reference~execution~time~A}$$$
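As a minimal sketch of this definition (the times here are made up for illustration):

```python
def normalized(exec_time, reference_time):
    """Normalized execution time: the given time divided by the reference time."""
    return exec_time / reference_time

# Hypothetical times: B runs in 6 s, the reference machine A runs it in 3 s.
print(normalized(6.0, 3.0))  # -> 2.0 (B takes twice as long as A)
```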

Given several programs P1, P2, ..., Pn, the average (GeometricMean) normalized execution time is

 P = 1
 For (i = 1; i <= n; i++) {
   P *= Normalized(Pi)
 }
 return P^(1/n)

or

$$$\sqrt[n]{\prod_{i=1}^{n}Normalized(P_i)}$$$
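The loop and formula above can be written as a short runnable function (the input list of normalized times is hypothetical):

```python
def geometric_mean(normalized_times):
    """Geometric mean: the nth root of the product of n normalized times."""
    product = 1.0
    for t in normalized_times:
        product *= t
    return product ** (1.0 / len(normalized_times))

# Hypothetical normalized times for programs P1..P3.
print(geometric_mean([0.5, 2.0, 1.0]))  # -> 1.0
```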

Using the geometric mean gives a consistent ranking ("Computer A is faster than Computer B") no matter which machine you normalize to. The arithmetic mean (average) does not give a consistent ranking when we normalize to different machines. However, the arithmetic mean does give you an average execution time, while the geometric mean does not! Instead, it gives you a composite measure.
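The inconsistency of the arithmetic mean can be demonstrated with a small experiment. The raw execution times below are hypothetical, chosen so the effect is visible:

```python
import math

# Hypothetical raw execution times (seconds) for two programs on two machines.
times_A = {"P1": 1.0, "P2": 1000.0}
times_B = {"P1": 10.0, "P2": 100.0}

def normalize(times, reference):
    """Normalize each program's time against the reference machine's time."""
    return [times[p] / reference[p] for p in times]

def geo_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

def arith_mean(xs):
    return sum(xs) / len(xs)

# Normalizing to machine A: by the arithmetic mean, A looks faster.
print(arith_mean(normalize(times_A, times_A)), arith_mean(normalize(times_B, times_A)))
# Normalizing to machine B: by the arithmetic mean, B now looks faster -- the
# ranking flipped just because we changed the reference machine.
print(arith_mean(normalize(times_A, times_B)), arith_mean(normalize(times_B, times_B)))
# The geometric mean gives the same verdict (here, a tie) under both references.
print(geo_mean(normalize(times_B, times_A)), geo_mean(normalize(times_A, times_B)))
```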

We note that the geometric mean violates our fundamental principle of performance measurement - it does not predict execution time. Also, the geometric mean rewards percentage improvement, not running-time improvement. So if one benchmark in a suite takes only 2 seconds and can be improved to 1 second, while another benchmark takes 10000 seconds, the percentage improvement could be the same, but the actual time saved matters very differently to people. Think savings here. If you need to run a program that takes 2 ms several times and you can cut it down to 1 ms, great, but that has no huge impact on your life. On the other hand, if you can cut a process taking 100 seconds down to 50 seconds, that is a great improvement because it lies in the "sweet spot" of our temporal existence.
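This effect is easy to check numerically. In the hypothetical suite below, halving the 2-second benchmark (saving 1 second) and halving the 10000-second benchmark (saving 5000 seconds) improve the geometric mean by exactly the same factor:

```python
import math

def geo_mean(xs):
    return math.prod(xs) ** (1 / len(xs))

# Hypothetical two-benchmark suite (times in seconds).
before      = [2.0, 10000.0]
after_short = [1.0, 10000.0]   # halve the 2 s benchmark: saves 1 second
after_long  = [2.0, 5000.0]    # halve the 10000 s benchmark: saves 5000 seconds

base = geo_mean(before)
# Both speedup factors come out identical (sqrt(2)), even though the
# absolute time saved differs by a factor of 5000.
print(base / geo_mean(after_short))
print(base / geo_mean(after_long))
```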

Back to ComputerTerms

AverageNormalizedExecutionTime (last edited 2020-01-26 18:56:30 by scot)