The Problem: "The input is a vector X of N real numbers; the output is the maximum sum found in any contiguous subvector of the input." A zero-length subvector is defined to have sum 0.0, so the answer is never negative.
[From Programming Pearls by Jon Bentley.]
In the previous TidBits we came up with solutions whose execution times were Order N**3, N**2, N*Log(N), and N respectively.
Jon Bentley coded up these solutions, and for the particular compiler and CPU he used, the run times were approximately:
Run time (µsec) | 3.4*N**3    | 13*N**2  | 46*N*Log(N) | 33*N
----------------+-------------+----------+-------------+--------------
N = 10**2       | 3.4 sec     | 0.13 sec | 0.03 sec    | 0.003 sec
N = 10**3       | 0.94 hr     | 13 sec   | 0.45 sec    | 0.033 sec
N = 10**4       | 39 days     | 22 min   | 6.1 sec     | 0.33 sec
N = 10**5       | 108 yrs     | 1.5 days | 1.3 min     | 3.3 sec
N = 10**6       | 108,000 yrs | 5 months | 15 min      | 33 sec
Max N in 1 sec  | 67          | 280      | 2,000       | 30,000
Max N in 1 min  | 260         | 2,200    | 82,000      | 2,000,000
Max N in 1 hr   | 1,000       | 17,000   | 3,500,000   | 120,000,000
Max N in 1 day  | 3,000       | 81,000   | 73,000,000  | 2,800,000,000
So, worrying about the Order of an algorithm may be in order.
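For reference, the Order-N scanning solution from the previous TidBit can be sketched as follows. This is a Python sketch, not Bentley's original code, and the function name is mine; it keeps a running best-sum-ending-here and resets it to zero whenever it goes negative:

```python
def max_subvector_sum(x):
    """Maximum sum over all contiguous subvectors of x.

    The empty subvector counts, so the result is at least 0.0.
    Runs in a single O(N) pass.
    """
    max_so_far = 0.0       # best sum seen in any subvector so far
    max_ending_here = 0.0  # best sum of a subvector ending at the current element
    for v in x:
        # Either extend the best subvector ending just before v,
        # or start fresh (the empty subvector, sum 0.0).
        max_ending_here = max(max_ending_here + v, 0.0)
        max_so_far = max(max_so_far, max_ending_here)
    return max_so_far
```

On Bentley's example vector [31, -41, 59, 26, -53, 58, 97, -93, -23, 84] this returns 187 (the subvector 59, 26, -53, 58, 97), and on an all-negative input it returns 0.0, the empty subvector.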