Algorithm analysis problem turned into a math problem

AI Thread Summary
The discussion revolves around analyzing an algorithm's complexity mathematically, by spotting patterns in a table of values and turning them into a closed-form expression. The original poster found a pattern involving triangular numbers that gives the closed form (n^3+n^2)/2, and hence O(n^3) growth, but expressed frustration that there is no straightforward method for deriving such patterns under pressure. The replies point out that exact functional forms are not needed for big-O notation: what matters is the growth rate, with larger exponents dominating smaller ones. A method for determining a polynomial's degree through successive differences is also presented, since the degree of the polynomial gives the algorithm's complexity. Ultimately, the focus remains on understanding growth relationships rather than finding exact solutions.
r0bHadz
Homework Statement
I was analyzing an algorithm for my data structures/algorithms class and came upon the following numbers: for n = 1, 2, 3, 4, 5, 6, ... the counts are 1, 6, 18, 40, 75, 126, ... respectively.
Relevant Equations
(n^3+n^2)/2
The equation listed is the closed form. I arrived at it in a somewhat roundabout way.

$$\begin{array}{|c|c|c|}
\hline 1 & 1 & 1 \cdot 1 \\
\hline 2 & 6 & 2 \cdot 3 \\
\hline 3 & 18 & 3 \cdot 6 \\
\hline 4 & 40 & 4 \cdot 10 \\
\hline 5 & 75 & 5 \cdot 15 \\
\hline 6 & 126 & 6 \cdot 21 \\
\hline
\end{array}$$

Focusing on the third column: going down the rows, the left factor of each product is ##n##, and I noticed the right factor can be expressed in terms of ##n## as well. So I have ##n \cdot## something.

I looked at row 2, column 3, and thought: hmm, 3 = (2*3)/2, 6 = (3*4)/2, 10 = (4*5)/2, and I noticed the pattern ##n(n+1)/2##. Multiplying that by the ##n## gives ##(n^3+n^2)/2##, so I got my answer that it is ##O(n^3)##.
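
Here's a quick sanity check I can do in Python (my own snippet, using only the numbers above) that the closed form really matches:

```python
# Sanity check: the closed form n * T_n = (n^3 + n^2) / 2 should
# reproduce the observed counts 1, 6, 18, 40, 75, 126 for n = 1..6.
observed = [1, 6, 18, 40, 75, 126]

for n, count in enumerate(observed, start=1):
    triangular = n * (n + 1) // 2      # T_n = 1 + 2 + ... + n
    assert n * triangular == count     # (n^3 + n^2) / 2 in factored form

print("closed form matches for n = 1..6")
```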

The thing is... sure, I got the right answer, but I hate this process. It's like you have to get lucky for your brain to spot the pattern. There isn't a sure-fire way to do it, and if you can't find it under pressure on the test, you're screwed. There's no step-by-step recipe for finding the exact formula behind an arbitrary pattern, nothing like step 1: do this, step 2: do that, the way there is a cookie-cutter method for going from ##ax^2 + bx + c## to ##y = a(x-h)^2 + k##.

My question is: is there maybe a cookie-cutter method for finding any pattern, or at least a more efficient way to do these kinds of problems? Not going to lie, they're kind of fun, but they're also exhausting because there's no systematic way to just get the answer.
 
The thing is, for big-O notation you don't need the exact functional form. What you're trying to do is bound it. It helps to know how some functions grow. For instance log (to any base) grows slower than linear, so if you have log terms and linear terms, the linear terms dominate and it's ##O(n)##.

Larger exponents grow faster than smaller exponents. If you can tell that one part of an algorithm grows proportional to ##n^2## and another proportional to ##n^3## it doesn't matter what the coefficients are. The ##n^3## term dominates and this algorithm is ##O(n^3)##.
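
As a small illustration of that (my own snippet, reusing the closed form from the first post): the lower-order term quickly becomes irrelevant.

```python
# For T(n) = (n^3 + n^2) / 2, the ratio T(n) / n^3 settles toward the
# constant 1/2, so the n^3 term alone determines the growth: O(n^3).
for n in (10, 100, 1_000, 10_000):
    T = (n**3 + n**2) / 2
    print(n, T / n**3)    # 0.55, 0.505, 0.5005, 0.50005
```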

In real life, if you had that set of run times, I'd plot them. It's obviously growing faster than linear, so if I suspected a power law ##T = a n^b##, i.e. ##\log T = \log a + b \log n##, I'd plot ##\log T## vs ##\log n## and, if the points appeared to be settling onto a line for larger ##n##, read off the slope ##b##. I don't care what ##a## is, and I don't care if there are lower-order terms.
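
A rough sketch of that log-log fit (assuming NumPy is available; the data are the six values from the first post):

```python
import numpy as np

n = np.array([1, 2, 3, 4, 5, 6], dtype=float)
T = np.array([1, 6, 18, 40, 75, 126], dtype=float)

# Fit log T = log a + b log n; the slope b estimates the power-law exponent.
b, log_a = np.polyfit(np.log(n), np.log(T), 1)
print(f"estimated exponent b = {b:.2f}")   # about 2.7 here, drifting toward 3 as n grows
```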

In a test situation I'd be surprised if you were given a problem like that. I'd think you would be more likely to analyze the growth on paper. If you can tell an operation is run ##n^2## times from the code, then you don't care what the runtime of one operation is. The ##n^2## is the complexity.
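
For example (a made-up loop nest, not the original problem's code), the count can be read straight off the code:

```python
def example(n):
    """The body of the inner loop runs exactly n * n times, so whatever one
    iteration costs, this function is O(n^2) in that cost."""
    count = 0
    for i in range(n):
        for j in range(n):
            count += 1        # stand-in for a constant-time operation
    return count

print(example(6))             # -> 36
```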
 
Having said all that, there is a simple way you can tell if it's polynomial in ##n##. (This came up in connection with a totally different question I answered elsewhere today). Take the differences of successive terms.

You have these numbers for successive ##n##: 1, 6, 18, 40, 75, 126
Take the differences: 5, 12, 22, 35, 51
Take the differences of those: 7, 10, 13, 16
Take the differences of those: 3, 3, 3
You end up with a constant on the 3rd difference. So the original numbers follow a 3rd-degree polynomial. This algorithm is ##O(n^3)##.

I could use the difference information to find out exactly what the polynomial is, but the point is that I don't need to if all I want is the complexity.
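
A minimal sketch of that differencing check in code (my own, written to mirror the steps above):

```python
def polynomial_degree(values):
    """Difference the sequence until it becomes constant; the number of
    differencing steps is the degree of the underlying polynomial
    (assuming the input really is polynomial and has enough terms)."""
    degree = 0
    row = list(values)
    while len(row) > 1 and len(set(row)) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        degree += 1
    return degree

print(polynomial_degree([1, 6, 18, 40, 75, 126]))   # -> 3, so O(n^3)
```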
 
Here's a way that works for any series in which the n-th term is a polynomial in n.

Construct a difference table as follows:

First row is the series of numbers s(1), ..., s(n).

For every row after that, each entry is the number above and to its right minus the number directly above it. Each row will end one place earlier than the row above. The k-th row is called the series of (k-1)th differences, so the second row is the first differences, the third row is the second differences, and so on.
Keep going until you get a row that is all zeros or you reach about ten rows. The number of rows needed to reach a zero row is two more than the degree of the polynomial, and you are unlikely to have been given a series based on a polynomial of degree greater than eight. If you don't reach a zero row by then, it's probably not polynomial. It could be exponential, trig, combinatoric or something else.

Let d be the number of rows, including the final row of zeroes, minus 2. That is the degree of your polynomial.
The leading coefficient of your polynomial is the constant value in the penultimate row, divided by d!

To get the other coefficients, you work backwards up the table. See if you can figure out how. But as @RPinPA points out, if you only want the big-O measure, the degree of the polynomial tells you all you need to know.
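
Here is one way that recipe could look in code, as a sketch under the assumption that the series really is polynomial; it reports only the degree and the leading coefficient, since that's all the big-O question needs:

```python
from math import factorial

def difference_table(series):
    """Build the rows of successive differences, stopping at the all-zero row."""
    table = [list(series)]
    while any(table[-1]):
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    return table

s = [1, 6, 18, 40, 75, 126]
table = difference_table(s)
degree = len(table) - 2                     # rows including the zero row, minus 2
leading = table[-2][0] / factorial(degree)  # constant difference divided by degree!
print(degree, leading)                      # -> 3 0.5, consistent with (n^3 + n^2)/2
```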
 
As already noted, in algorithm analysis we're interested in the growth of a function that relates the length of an algorithm's input to the number of steps it takes or (more rarely) to the number of storage locations it uses, so we're looking for bounds. It's good to find a closed form, but that is not always viable or, in general, easy.

In addition to what has already been said about the polynomial case: if you have a recurrence relation for your algorithm, you can take a look at the small summary and problems ##1## and ##2## in February's Math Challenge. It may also help to take a look at my Intro to Data Structures for programming tutorial.
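
To make the recurrence route concrete with a standard textbook example of my own (not one of the linked problems): mergesort splits its input in half and merges in linear time, so its running time satisfies $$T(n) = 2\,T\!\left(\tfrac{n}{2}\right) + cn,$$ and unrolling the recursion ##\log_2 n## times gives ##T(n) = cn\log_2 n + n\,T(1)##, i.e. ##O(n \log n)##.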
 
QuantumQuest said:
… It may also help to take a look at my Intro to Data Structures for programming tutorial.

That is one amazing resource, man. I'm only able to skim through it right now, but I can already tell I'm going to be coming back to it a LOT! Much love for that.
 
Perhaps it's worth noting that the second factors in column 3 are triangular numbers, so you could write the general term as $$n\sum_{k=1}^{n} k$$
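
Spelling that out (my own step, just to tie it back to the closed form from the first post): $$n\sum_{k=1}^{n} k = n\cdot\frac{n(n+1)}{2} = \frac{n^3+n^2}{2},$$ which is the ##O(n^3)## expression found earlier.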
 
