# For every finite integer sequence there's a pattern - source?

1. Dec 26, 2017

I have in the back of my head the statement that for every finite sequence of positive integers there exists a pattern (i.e., a generating formula). While this sounds reasonable, I am not sure whether it is true, and if it is, what the source for the statement is and how it is correctly formulated. A somewhat stronger statement would be that for every finite sequence of integers there is a pattern that one can theoretically find; comments on that would also be appreciated, assuming the first statement is correct.

2. Dec 26, 2017

### Staff: Mentor

Like you, I'm not sure it would be true in general, but for some sequences of integers a generating polynomial can be found. For the sequence $\{a_1, a_2, \dots, a_n\}$, plot the points $(1, a_1), (2, a_2), \dots, (n, a_n)$. In many cases it is possible to find the $n$ coefficients of a polynomial of degree $n - 1$ that passes through these points.

3. Dec 26, 2017

### Stephen Tashi

If we interpret "generating formula" to mean a polynomial, it is true that there exists a polynomial $f(x)$ such that $f(k)$ gives the $k$-th member of the sequence. This can be done for any finite sequence of real numbers by using the Lagrange interpolating polynomial: http://mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html. For example, in the notation of that article, the $(x_i, y_i)$ could be $(1,3), (2,5), (3,13)$ to represent the sequence $\{3, 5, 13\}$.
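As a minimal sketch of that construction (my own illustration, not from the post; the helper name `lagrange_poly` is made up), here is the Lagrange polynomial through $(1,3), (2,5), (3,13)$ computed with exact rational arithmetic:

```python
from fractions import Fraction

def lagrange_poly(ys):
    """f(k) for the Lagrange polynomial through (1, ys[0]), ..., (n, ys[n-1])."""
    n = len(ys)
    xs = list(range(1, n + 1))
    def f(k):
        total = Fraction(0)
        for i in range(n):
            # Basis term that is 1 at xs[i] and 0 at every other node.
            term = Fraction(ys[i])
            for j in range(n):
                if j != i:
                    term *= Fraction(k - xs[j], xs[i] - xs[j])
            total += term
        return total
    return f

f = lagrange_poly([3, 5, 13])
print([int(f(k)) for k in (1, 2, 3)])  # [3, 5, 13] -- recovers the sequence
print(int(f(4)))                       # 27, the polynomial's extrapolation
```

For this sequence the interpolant works out to $f(x) = 3x^2 - 7x + 7$, which is why $f(4) = 27$.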

The function that generates a given sequence is not unique. For example, given the sequence $\{y_1, y_2, \dots, y_n\}$, you could find a polynomial that generates the sequence $\{y_1, y_2, \dots, y_n, 8\}$ and another polynomial that generates the sequence $\{y_1, y_2, \dots, y_n, 9\}$. Both polynomials generate $\{y_1, y_2, \dots, y_n\}$. If we want to make a claim about a unique generating function, we have to impose some condition, such as seeking a polynomial of smallest degree. There are also non-polynomial functions that generate a given sequence.
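A quick sketch of this non-uniqueness (my own illustration; the helper name `interpolate` is made up): fit one polynomial through $\{3, 5, 13, 8\}$ and another through $\{3, 5, 13, 9\}$. Both reproduce the first three terms but disagree from the fourth on.

```python
from fractions import Fraction

def interpolate(points):
    """Lagrange polynomial through the given (x, y) points, as a function."""
    def f(k):
        total = Fraction(0)
        for xi, yi in points:
            term = Fraction(yi)
            for xj, _ in points:
                if xj != xi:
                    term *= Fraction(k - xj, xi - xj)
            total += term
        return total
    return f

g8 = interpolate([(1, 3), (2, 5), (3, 13), (4, 8)])
g9 = interpolate([(1, 3), (2, 5), (3, 13), (4, 9)])
print([int(g8(k)) for k in (1, 2, 3)])  # [3, 5, 13]
print([int(g9(k)) for k in (1, 2, 3)])  # [3, 5, 13] -- same first three terms
print(int(g8(4)), int(g9(4)))           # 8 9 -- they diverge at k = 4
```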

4. Dec 26, 2017

### StoneTemplePython

Another approach:

If you want to interpret your finite positive integer sequence as un-normalized power sums, i.e.

$s_1 = \sum_{i=1}^n \lambda_i$
$s_2 = \sum_{i=1}^n \lambda_i^2$
$\vdots$
$s_n = \sum_{i=1}^n \lambda_i^n$

and in general:
$s_k = \sum_{i=1}^n \lambda_i^k$

(whereas a proper power sum would be $s_k^* = \frac{1}{n}\sum_{i=1}^n \lambda_i^k$ )

you can directly find a monic polynomial that fits these via Newton's identities.
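A sketch of that recovery step (my own illustration, assuming exact power sums of $n$ unknown roots): Newton's identities give the elementary symmetric polynomials via the recursion $k\, e_k = \sum_{i=1}^{k} (-1)^{i-1} e_{k-i}\, s_i$ with $e_0 = 1$, and the $e_k$ are, up to sign, the coefficients of the monic polynomial whose roots produce the sums.

```python
from fractions import Fraction

def elementary_from_power_sums(s):
    """Recover e_1, ..., e_n from power sums s_1, ..., s_n via Newton's
    identities: k * e_k = sum_{i=1}^{k} (-1)^(i-1) * e_{k-i} * s_i."""
    e = [Fraction(1)]  # e_0 = 1
    for k in range(1, len(s) + 1):
        acc = sum((-1) ** (i - 1) * e[k - i] * s[i - 1] for i in range(1, k + 1))
        e.append(Fraction(acc, k))
    return e[1:]

# Roots {1, 2, 3} have power sums s_1 = 6, s_2 = 14, s_3 = 36.
e = elementary_from_power_sums([6, 14, 36])
print([int(x) for x in e])  # [6, 11, 6]
# Monic polynomial: x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
```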

- - - -
Questions of uniqueness get tied into this via a Hankel matrix formulation. The original result that tells you the number of roots and the mixture of real vs. complex ones is due to Sylvester. If you normalize these sums into proper power sums, you can view this as an instance of the Hamburger moment problem:

https://en.wikipedia.org/wiki/Hamburger_moment_problem

5. Dec 26, 2017

### Staff: Mentor

It is always possible to do that: you only have to solve a system of linear equations, which always has a unique solution (or infinitely many solutions if you allow polynomials of higher degree).
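A sketch of that linear system (my own illustration; the function name is made up): the matrix is a Vandermonde matrix in the evaluation points $1, \dots, n$, and since those points are distinct it is invertible, so Gauss-Jordan elimination over the rationals gives the unique coefficients.

```python
from fractions import Fraction

def poly_coeffs(seq):
    """Coefficients c_0, ..., c_{n-1} of the degree n-1 polynomial p with
    p(k) = seq[k-1], found by solving the Vandermonde system V c = y."""
    n = len(seq)
    # Augmented rows: [k^0, k^1, ..., k^(n-1) | y_k] for k = 1..n.
    A = [[Fraction(k) ** j for j in range(n)] + [Fraction(seq[k - 1])]
         for k in range(1, n + 1)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [v / A[col][col] for v in A[col]]   # normalize pivot row
        for r in range(n):
            if r != col and A[r][col] != 0:          # eliminate the column
                A[r] = [a - A[r][col] * b for a, b in zip(A[r], A[col])]
    return [row[-1] for row in A]

c = poly_coeffs([3, 5, 13])
print([int(x) for x in c])  # [7, -7, 3], i.e. p(x) = 3x^2 - 7x + 7
```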

6. Dec 26, 2017

Thanks, Stephen Tashi, StoneTemplePython, mfb and Mark44. The Lagrange interpolating polynomial (indicated by Stephen Tashi) appears to be the most direct of these methods [I presume (?) this is what mfb is referring to in saying "it is always possible..."]; StoneTemplePython's method is also interesting. If I understand correctly (which is far from guaranteed), the Lagrange interpolation is theoretically more direct, whereas the Newton's identities approach might be more computationally efficient in practice.

7. Dec 26, 2017

### StoneTemplePython

Actually, both the Lagrange interpolation approach and Newton's identities have similar complexity, and both are known for numerical stability issues.

It's quite common, and desirable, to try to characterize probability distributions by their moments, graphs by their cycles, and matrices by their traces (i.e. their spectral properties, if we're dealing with scalars in $\mathbb R, \mathbb C, \mathbb Q$), among other things. That is what I was getting at.
- - - -
If you know linear algebra, a unifying theme in all of this is to look into Vandermonde matrices. Any Hankel matrix can be decomposed as $\mathbf V^T \mathbf {DV}$ (where $\mathbf D$ is diagonal and $\mathbf V$ is Vandermonde -- note that the transpose sign, not $\mathbf V^*$, is deliberate: it remains a non-conjugate transpose even in the case of complex scalars).
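A tiny numerical check of that decomposition (my own illustration): for the un-normalized power sums of the roots $\{1, 2, 3\}$ the weights are all 1, so $\mathbf D = \mathbf I$ and the Hankel matrix $H_{ij} = s_{i+j}$ equals $\mathbf V^T \mathbf V$ entrywise, since $s_{i+j} = \sum_r \lambda_r^i \lambda_r^j$.

```python
# Hankel matrix of power sums vs. V^T V with V[r][j] = lambda_r^j.
lams = [1, 2, 3]
n = len(lams)
s = [sum(l ** k for l in lams) for k in range(2 * n - 1)]   # s_0 .. s_{2n-2}
H = [[s[i + j] for j in range(n)] for i in range(n)]        # Hankel matrix
VtV = [[sum(l ** i * l ** j for l in lams) for j in range(n)]
       for i in range(n)]                                   # (V^T V)_{ij}
print(H == VtV)  # True
```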

If you are interested, you can skip the theory of Lagrange interpolation and just look up Vandermonde interpolation. (After playing around with the basis, it's basically the same thing.) E.g. see this: https://math.stackexchange.com/ques...olation-formula-from-vandermondes-determinant

8. Dec 26, 2017

Super! Thanks again, StoneTemplePython.