Is There Merit in Experimentation for Mathematical Discoveries?

In summary, this conversation discusses a theorem stating that for natural numbers n, m, and s, the equation (1) holds true if and only if n+1=(m+1)(s+1). It then goes on to explore the implications of this theorem on factorization and algorithms for polynomial multiplication. It also introduces the concept of shift operators and how they can transform certain base a expansions into a form that allows for easy factorization. The conversation concludes by posing an important question about the existence of a base a that satisfies certain conditions for a given natural number N.
  • #36
...need to know those primes. So we trade some of the work necessary to find those primes for a task we can do in a knowable number of steps.

Some theorems that can be proved as exercises.
 
  • #37
(10a) [tex]rrs_{k+1}=\{x \Pi_{i=1}^k p_i+r \mid r \in rrs_k,\ x = 0,\dots,p_{k+1}-1\} \setminus \{p_{k+1}r \mid r \in rrs_k\}[/tex]
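A minimal Python sketch of this recurrence (an editor's illustration, not part of the original post; it takes x = 0, ..., p_{k+1}-1 as in (10a), and the helper names are mine): lift rrs_k by multiples of the primorial, discard the multiples of p_{k+1}, and check the result against a direct gcd computation.

Code:
# Sketch (not from the post): build the reduced residue system of the
# (k+1)-st primorial from that of the k-th, per formula (10a), and check it
# against a direct gcd computation.
from math import gcd

primes = [2, 3, 5, 7, 11]

def rrs_direct(m):
    return {r for r in range(1, m + 1) if gcd(r, m) == 1}

P, rrs = 2, {1}                                   # rrs mod 2
for p_next in primes[1:]:
    lifted = {x * P + r for r in rrs for x in range(p_next)}
    rrs = lifted - {p_next * r for r in rrs}      # remove the multiples of p_next
    P *= p_next
    assert rrs == rrs_direct(P), P

print(sorted(rrs_direct(30)))                     # [1, 7, 11, 13, 17, 19, 23, 29]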

Write this set out in ordered pairs that sum to [itex]\Pi_{i=1}^k p_i[/itex]:

[tex](1, 5)_6 \rightarrow 1+5=6[/tex]. Six is an example of a number which is the sum of two of its own divisors, 3+3 = 6; this never happens again in the sequence we are looking at, since half of any element is greater than its largest prime factor.

[tex](6\cdot 0+1,\ 6\cdot 4+5)_{30} = (1,29)[/tex]
[tex](6\cdot 1+1,\ 6\cdot 3+5)=(7,23)[/tex]
[tex](6\cdot 1+5,\ 6\cdot 3+1)=(11,19)[/tex]
[tex](6\cdot 2+1,\ 6\cdot 2+5)=(13,17)[/tex]

Here we have three sums of odd primes adding to 30. Notice that in each pair the multipliers of six sum to four, which is one less than the largest prime factor of 30; this pattern holds as we climb the ladder.

[tex](30\cdot 0+1,\ 30\cdot 6+29)_{210} = (1,209)[/tex]
[tex](30\cdot 0+11,\ 30\cdot 6+19)=(11,199)[/tex] etc.

So for any element [itex]\Pi_{i=1}^k p_i[/itex] of the sequence p#, at least one Goldbach ordered pair must exist, and it will have the form

[tex]\left(x\Pi_{i=1}^{k-1} p_i+r,\ (p_k-x-1)\Pi_{i=1}^{k-1} p_i + \left(\Pi_{i=1}^{k-1} p_i-r\right)\right)[/tex]

So that translates into a system of equations like this

[tex](xP+r,\ (p-x-1)P+(P-r))[/tex]

where P is the product of the first k-1 primes, p is the kth prime, r is a reduced residue mod P, and x is greater than or equal to zero and less than or equal to p-1.

The binding equation is

[tex]xP+r+(p-x-1)P+(P-r)=Pp[/tex]

which is true. However, every Goldbach pair for this sequence of which P is an element is then given by


(10c) [tex](Px+r,\ P(p-x)-r)[/tex]

where

(11c) [tex]x \in \{0,\dots,p-1\} \text{ and } r \in rrs_P[/tex]
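Here is an editor's sketch (Python, not from the post) of the parametrization (10c)/(11c) for P = 30, p = 7: it generates all p·phi(P) pairs, checks that each sums to Pp = 210, and checks that the first coordinates run exactly over the residues coprime to P below Pp. Note the pairs cover every splitting of 210 into parts coprime to 30; the Goldbach pairs are a subset of these.

Code:
# Sketch (not from the post): generate the pairs (Px + r, P(p - x) - r) of
# (10c)/(11c) for P = 30, p = 7 and confirm the binding equation a + b = Pp.
from math import gcd

P, p = 30, 7
rrs_P = [r for r in range(1, P) if gcd(r, P) == 1]          # rrs mod 30

pairs = [(P * x + r, P * (p - x) - r) for x in range(p) for r in rrs_P]
assert all(a + b == P * p for a, b in pairs)
# the first coordinates are exactly the residues coprime to P below Pp
assert sorted(a for a, b in pairs) == [n for n in range(1, P * p) if gcd(n, P) == 1]
print(len(pairs), "pairs, e.g.", pairs[:3])                 # 56 pairs, e.g. (1, 209), ...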

Where in that array do these ordered prime pairs appear? Well, if they lie between p and p^2, the sieve has already told us they are prime, because P contains all prime factors less than their square root. So any pairs (10c) that can be formed where both elements are between p and p^2 must be Goldbach ordered pairs. This result can only be a guarantee of existence until P > 2p^2; past that point no pair with both elements between p and p^2 can sum to P. This happens for 2*3*5*7 = 210. However, the elements of the rrs between 11 and 49 (left endpoint included) must be prime, those between 49 and 121 must either be prime or divisible by 11, those between 121 and 169 prime or divisible by 11 or 13, those between 169 and 210 prime or divisible by 11, 13 or 17... So that covers the set if I form the pairs and test each one, but notice I can write these ordered pairs down.

The rrs elements divisible by eleven are {11, 11*11, 11*13, 11*17, 11*19}; those divisible by 13 are {13, 13*13}. That covers it, and it gives us a clue: if I can calculate the size of the set of composites in the rrs mod P lying between p^2 and P, I then know how many primes lie on that interval and are within the rrs mod Pp.

(Note: the only composites in the rrs mod Pp come from multiplying elements of the rrs mod P.)
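An editor's sketch of that note for P = 30, p = 7 (Python, not from the post; the primality test is a plain trial division of my own): list the composite members of rrs mod 210 with their smallest prime factor, confirming they are exactly the products of units of rrs mod 30 that avoid the factor 7, namely 11*11, 11*13, 13*13, 11*17 and 11*19.

Code:
# Sketch (not from the post): the composite members of rrs mod 210 and their
# factorizations; every one is a product of units of rrs mod 30 (all prime
# factors exceed 7), as the note above claims.
from math import gcd

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

N = 210
rrs = [r for r in range(2, N) if gcd(r, N) == 1]      # skip the unit 1
composites = [r for r in rrs if not is_prime(r)]

for c in composites:
    # smallest prime factor; it must be 11 or 13, since 17*17 > 210
    spf = next(d for d in range(2, c + 1) if c % d == 0)
    print(c, "=", spf, "*", c // spf)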

Now synthesize all this into a table that sets up the possible Goldbach ordered pairs. If you crank out a few of these and make sure you have all the indexes correct, you see that for small prime products it is not amazing at all that Goldbach should be true, because most of the elements of the rrs mod Pp are going to be prime. However, as we go big it might be possible that the rrs mod P contains no primes at all. But then we are saved by a theorem (Bertrand's postulate) which says that there is always at least one prime between p and 2p, so the interval defined by P and Pp is sure to contain at least one new prime. This is very loose and needs to be made tighter with a proof.

An interesting question here is whether the density of primes in the rrs actually gets larger, approaching one, goes to zero in the limit, or settles down to some fixed number. If it settles down to a fixed number or approaches one, Goldbach can probably be proved on this sequence; if it goes to zero, it would be quite striking that Goldbach should hold for very large P. I will come back and flesh this out later also.
 
  • #38
Playdo said:
So anyways, smooth numbers, numbers which have only prime divisors less than n are n-smooth, this corresponds to which types of FFT's can be used to factor them.

I'm not sure what you mean here. If you're trying to do an FFT of length N, you can break this up into smaller FFTs whose lengths are divisors of N, so I guess it turns out quicker if N only has small factors. This is how FFTs whose length was a power of 2 worked, at least.

Playdo said:
Ok, nice, I'm guessing then there is some analytic advantage to thinking of everything in the real base e?

e is a convenient way to write nth roots of unity; it turns their group operation (multiplication) into addition. Orbits of nth roots of unity are used to create the set of orthonormal vectors used in the DFT.

Playdo said:
DFT's I do know are convolutes allowing the writing of a product in terms of a sum.

Well, they allow you to write a convolution as a product of the sequences' DFTs. The novelty is that we can compute these DFTs in significantly fewer steps than we could the convolution.

Playdo said:
Riemann's famous function is the result of an observation that an infinite product including the primes can be expanded and reworked into the zeta function.

That the Dirichlet series defining the zeta function to the right of 1 can be written as an Euler product was known before Riemann. The massively important leap he made was proving analytic continuation (and the corresponding functional equation) to the rest of the complex plane (pole at 1, of course). This opened up a huge bag of analytical tools.

Playdo said:
The smooth numbers, yes, that is like what I am saying with the one-algorithm idea, that we can produce sequences that contain easily factorable numbers and then just map them back to the number we are interested in.

Have you looked into how rare smooth numbers are yet? Your S[rsa] sequence isn't likely to contain many smooth numbers at all.

Playdo said:
I know that it has been said that there exists no general method for solving all diophantine equations; in fact I thought Dirichlet proved that.

Do you mean Matiyasevich's proof that there's no general method to decide whether a given diophantine equation has solutions or not? (I don't know much about this; Hurkyl tends to know stuff like this.)

Playdo said:
However, I have never seen anyone take the time to painstakingly investigate the implications of base a arithmetics in our quest for solutions

There's a reason for this. Diophantine equations don't care what base you use to represent your integers. You don't have to prove something astonishing to convince people otherwise; you can aim at a special case of Fermat's, like a specific exponent n, or really any other diophantine equation, solved or unsolved.

Playdo said:
However as we go big it might be possible that the rrs mod P contains no primes at all. But then we are saved by a theorem which says that there is always at least one prime between p and 2p. Well the interval defined by P and Pp is sure to contain at least one new prime. This is very loose and needs to be made tighter with a proof.

The primes in the rrs mod P are exactly the primes less than P, excluding the first k-1 primes (the ones dividing P). In absolute terms, you have piles upon piles of them; remember P is enormous compared to the (k-1)st prime. I don't see anything anywhere above that points to both elements in any pair having to be prime, though.

However:

Playdo said:
An interesting question here is whether the density of primes in the rrs actually gets larger, approaching one, goes to zero in the limit, or settles down to some fixed number.

With p the kth prime, the density is just (pi(p#)-k)/phi(p#). I worked this asymptotic out in an earlier post; it goes to zero (the k in the numerator has essentially no effect).

And no, this wouldn't mean I'd be at all surprised if Goldbach turns out to be true for the sequence p# (or actually in general). Even a set with asymptotic density of zero can be "numerous" in some ways (like the sum of the reciprocals of the primes diverging).

ps. use [] instead of {} for your latex tags
 
Last edited:
  • #39
shmoe said:
There's a reason for this. Diophantine equations don't care what base you use to represent your integers. You don't have to prove something astonishing to convince people otherwise, you can aim at a special case of fermats, like a specific exponent n, or really any other diophantine equation, solved or unsolved.

That of course is not entirely true. Diophantine equations either have or do not have solutions based upon the properties of the natural numbers. However, base a representations allow different perspectives (coefficients) on those equations and sometimes make finding solutions, or confirming that none exist, easier. The first theorem here is such a theorem: if there exists a base a representation of a number that satisfies the condition, the number is composite. Other such implications undoubtedly exist. Also, the way multiplication proceeds as an algorithm differs from base to base. So if we think of the mechanics of the algorithm, the base a arithmetics are quite distinct, and understanding the diophantine problem over all bases could be very important.

But this stuff is already known; I am just trying to look at the relationship more directly. In fact I have professor friends who have tried to sort out all the implications of base a arithmetics as part of PhD dissertations, and years later they are still unable to get it all down. I think the key connection is threefold: first, the way in which a base a alphabet is used to form words that represent natural numbers in a one-to-one fashion (a grammar that is unique to the base); second, the way two of these grammars are related; and third, how those facts limit or shape arithmetic calculation algorithms over the base. From there it is quite clear that diophantine equations have rather interesting properties. If they are true for any natural number they must have a solution over every base, but the word used to express the solution can have unique multiplicative properties over its base, so that addition and multiplication algorithms are conceptually related for all bases, yet adding the same two integers proceeds in different ways in different bases.

For instance, 15 = 2^3+2^2+2+1, or 1111 in base 2, a special form in any base. However, 15 is also equal to 3^2+2*3 in base 3 and is clearly divisible by three. In a structural sense the change of base has simplified the task of factoring 15. Understanding IN DETAIL how the base a arithmetics are related gives us hope of simplifying any diophantine equation, though not by applying the same steps in a mechanical fashion. Rather, there probably exists a grab bag of steps that, when allowed to happen in whatever order they are needed, suffices to solve every diophantine equation. Just as Abel showed us with polynomials: there is no general fifth order solution, yet any fifth order polynomial with natural number solutions can be solved by some method.

Until I see a complete exposition of the base a arithmetics with respect to these various questions, I won't accept that there is nothing worth looking at. In fact, what I have already found is quite interesting enough to warrant further investigation. No offense, but I have seen too many middling intellects dismiss hard questions because they require more than a few blackboards' worth of chalk, and many of these fellows have PhDs. I despise lazy thinkers. You and Hurk do not seem to be of that ilk.

On the Goldbach issue, take the rrs mod p#[k] and set it up in ordered pairs that sum to p#[k]. Now, using some facts about the rrs mod p#[k-1], count all of the composite numbers in the set. If this is smaller than phi(p#[k]-2)/2, Goldbach must be true by the pigeonhole principle. That takes us some way up into the atmosphere. The problem will come if the prime density within the rrs mod p#[k] gets much below that, not because the proof cannot be done, but because now we have to count the composites and verify where they are in order to argue that not all of the ordered pairs can have at least one composite element. But once that proof were verified, it should be extendable to any sequence of the form of p#, meaning a sequence formed by multiplication of primes. Getting past the square-free sequences will be the real hurdle, I think.

There is a lot so far in this thread I need to come back to. With some luck I will find the time to do these ideas justice.
 
  • #40
Just a quick note for now:

Playdo said:
On the Goldbach issue, take the rrs mod p#[k] and set it up in ordered pairs that sum to p#[k]. Now, using some facts about the rrs mod p#[k-1], count all of the composite numbers in the set. If this is smaller than phi(p#[k]-2)/2, Goldbach must be true by the pigeonhole principle. That takes us some way up into the atmosphere. The problem will come if the prime density within the rrs mod p#[k] gets much below that,...

I've mentioned a couple of times already: this density goes to zero (just use the prime number theorem and Mertens' asymptotic for the product of (1-1/p)). At k=7, p[k]=17, you already have phi(17#)=92160 and pi(17#)=42331, so you can't use pigeon hole for long at all.
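An editor's sketch (Python, not from the thread; the sieve helper is mine): a plain sieve lets you check the quoted values of phi(17#) and pi(17#) directly and tabulate the density (pi(p#)-k)/phi(p#) from post #38 for the first few primorials.

Code:
# Sketch (not from the thread): verify phi(17#) and pi(17#) with a plain sieve
# and print the density of primes among the reduced residues for small primorials.
primes = [2, 3, 5, 7, 11, 13, 17]

def primes_up_to(n):                      # sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

P, phi = 1, 1
for k, p in enumerate(primes, start=1):
    P, phi = P * p, phi * (p - 1)
    pi = len(primes_up_to(P))
    print(f"k={k} p={p} p#={P} phi={phi} pi={pi} density={(pi - k) / phi:.3f}")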
 
  • #41
Playdo said:
Rather there probably exists a grab bag of steps, that when allowed to happen in whatever order they are needed, suffices to solve every diophantine equation.
We already know an algorithm capable of finding a solution to a solvable Diophantine equation: simply search exhaustively through all possible ways of assigning values to the variables.1 :wink: (Of course, more clever ways are needed if we want to see the answer in our lifetime!)
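A minimal sketch of that exhaustive search (an editor's illustration in Python, not from the post; the Pell equation x^2 - 2y^2 = 1 is just a stand-in example): enumerate positive-integer assignments in order of increasing x + y, so every assignment is eventually reached, and stop at the first one that satisfies the equation. It halts only if a solution exists, which is exactly the asymmetry discussed in the rest of the post.

Code:
# Sketch (not from the post): exhaustive search over variable assignments,
# enumerated by increasing x + y so that every pair is eventually visited.
from itertools import count

def search(f):
    """Return the first positive-integer pair (x, y) with f(x, y) == 0."""
    for total in count(2):
        for x in range(1, total):
            y = total - x
            if f(x, y) == 0:
                return x, y

print(search(lambda x, y: x * x - 2 * y * y - 1))   # (3, 2), a Pell solution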

But there are Diophantine equations for which no algorithm -- no "grab bag of steps" -- is capable of saying "this Diophantine equation has no solution".

The idea of the proof is actually quite simple: we can write down a Diophantine equation and then prove that the statement:

"This Diophantine equation has a solution"

is logically independent of the axioms of integer arithmetic.

It's sort of like the parallel postulate from geometry -- you can assume the parallel postulate is true and do Euclidean geometry. Or, you can assume the parallel postulate is false, and do hyperbolic geometry. Either way, you are working in a perfectly consistent2 mathematical theory.

The same is true with certain Diophantine equations3; you can assume it does have a solution and do one kind of number theory, or you can assume it has no solutions and do another kind of number theory.

(I know I keep harping on this -- I think it's important, and I can't really tell for sure if the ramifications of this fact have sunk in.)


Playdo said:
However 15 is also equal to 3^2+2*3 base 3 and clearly divisible by three. In a structural sense the change of base has simplified the task of factoring 15.
When you say it like this, it sounds strikingly similar to one of the few techniques I've learned for solving Diophantine equations: look at them modulo n for various integers n.



I was thinking about how likely it is that you can multiply two n-digit base b numbers so that there are no "carries" involved.

The product will have less than 2n digits, and for each of those, there will be fewer than b nonzero digit products that contribute to it. So, if our original numbers each had k nonzero digits, then the number of nonzero products will be k², so we must have:

k² < 2bn

(This is a very generous bound)

So, in general, we see that if we can multiply large base b numbers so that there are no overflows, then (at least one of) the numbers have to be very sparse -- for a large n digit number, only about the square root of n digits may be nonzero.
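A small editor's sketch of the count behind this bound (Python, not part of the original post; the digit vectors and helper name are mine): multiply two base-b digit vectors by schoolbook convolution and check whether every column sum stays below b, i.e. whether any carry would be needed. The sparse example has k = 3 nonzero digits across n = 8 positions, so k² = 9 is well under 2bn = 160.

Code:
# Sketch (not from the post): column sums of a base-b schoolbook product;
# a carry is forced as soon as some column sum reaches b.
def column_sums(xs, ys, b):
    """xs, ys: little-endian digit lists in base b."""
    cols = [0] * (len(xs) + len(ys) - 1)
    for i, xi in enumerate(xs):
        for j, yj in enumerate(ys):
            cols[i + j] += xi * yj
    return cols, all(c < b for c in cols)

b = 10
sparse = [1, 0, 0, 2, 0, 0, 0, 1]            # 1 + 2*10^3 + 10^7, little-endian
print(column_sums(sparse, sparse, b)[1])     # True: squares with no carries
dense = [7, 8, 9, 6]                         # 6987, little-endian
print(column_sums(dense, dense, b)[1])       # False: carries are unavoidable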



1: Okay, okay, there's a technical detail. We have to assume that it is possible to enumerate the natural numbers. I.E. any particular natural number will (eventually) be printed by the following program:

Code:
n <-- 0
begin loop
   print n
   n <-- n + 1
end loop

2: Technically speaking, they are relatively consistent. Euclidean geometry is consistent if and only if hyperbolic geometry is consistent.

3: While we have discovered some Diophantine equations with this property, there must exist Diophantine equations for which we cannot prove they have this property.
 
  • #42
Hurkyl said:
But there are Diophantine equations for which no algorithm -- no "grab bag of steps" -- is capable of saying "this Diophantine equation has no solution".

The idea of the proof is actually quite simple: we can write down a Diophantine equation and then prove that the statement:

"This Diophantine equation has a solution"

is logically independent of the axioms of integer arithmetic.

That's neat. Can I have a name, a reference, a link, or something so I can learn more about this?
 
  • #43
CRGreathouse said:
That's neat. Can I have a name, a reference, a link, or something so I can learn more about this?
The question of the solvability of Diophantine equations is Hilbert's tenth problem, which Matiyasevich answered negatively.

What I stated follows naturally from a formal logic/computability theory standpoint. Briefly, the idea is that if there is no algorithm for deciding whether a given Diophantine equation has a solution, then we cannot derive a contradiction by assuming it does or does not.
 
Last edited by a moderator:
  • #44
Hurkyl said:
What I stated follows naturally from a formal logic/computability theory standpoint. Briefly, the idea is that if there is no algorithm for deciding whether a given Diophantine equation has a solution, then we cannot derive a contradiction by assuming it does or does not.

Hilbert's 10th I knew, but Matiyasevich's theorem I had heard of only once before, and I couldn't recall the name. Thanks.
 
  • #45
Davis, Robinson, and Putnam are usually credited as well; Matiyasevich just put in the last brick in the wall.
 
  • #46
Hurkyl said:
When you say it like this, it sounds strikingly similar to one of the few techniques I've learned for solving Diophantine equations: look at them modulo n for various integers n.



I was thinking about how likely it is that you can multiply two n-digit base b numbers so that there are no "carries" involved.

The product will have less than 2n digits, and for each of those, there will be fewer than b nonzero digit products that contribute to it. So, if our original numbers each had k nonzero digits, then the number of nonzero products will be k², so we must have:

k² < 2bn

(This is a very generous bound)

So, in general, we see that if we can multiply large base b numbers so that there are no overflows, then (at least one of) the numbers have to be very sparse -- for a large n digit number, only about the square root of n digits may be nonzero.

Yes, and you can get a feel for it by using the systems outlined in the second post, because the coefficient equation for each power in the solution has to stay less than the base. So you end up looking at the number of points under some bounding hyperboloid in a multidimensional space. For example, the trailing coefficient has to obey

[tex]C_0=x_0y_0 < a[/tex]

The first order coefficient must obey

[tex]C_1=x_0y_1+x_1y_0 < a[/tex]

In general the nth order coefficient has to obey

[tex]C_n=\sum_{i+j=n}x_iy_j < a[/tex]

For polynomials of high degree it becomes exceptionally unlikely that they will multiply without carrying. However, once we have identified this space where no carrying happens, we can begin categorizing the carrying spaces. For instance, it is known that for any base a and residues x and y,

[tex]xy\leq(a-1)^2[/tex]

So the zeroth order carry is between zero and a-2. This can be done for each order. In general the carry L (for leap) obeys

[tex]0\leq L_n \leq ||\{(i,j)|i+j=n\}||(a-1)^2[/tex]

As you start looking at how the various possible carries cover the full set of possibilities, they cover it very quickly, so that most numbers are factorable starting with the no-carry space and simply walking through the possibilities. In fact, if we started with, say, a cubic polynomial and ordered all the possible carries in terms of the size of the factor set they cover (with respect to a particular number to be factored and a given base a expansion), we could arrive at a solid estimate of the probability of factoring a number in a given number of steps. My thought is that these things I have been talking about are not new per se, but may simply be a different way of looking at the same mathematical objects. The particular shape of those objects does not really change; we just develop different projections of it. In fact it should be easy to see that the solutions to any and all congruences are governed by the unchanging relationship of primes and composites within the natural numbers. Of course that should be clear from the fundamental homomorphism theorem taught in first semester abstract algebra.
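To make the coefficient conditions concrete, here is a short editor's sketch (Python, not from the post; the digit vectors are arbitrary examples of mine): it computes the convolution coefficients C_n = sum_{i+j=n} x_i y_j for two base-a digit vectors and then the carry ("leap") passed out of each order when the C_n are renormalized; a carry appears exactly where C_n plus the incoming carry reaches a.

Code:
# Sketch (not from the post): convolution coefficients C_n of two base-a digit
# vectors, and the carries produced when they are reduced back to base-a digits.
def coefficients(xs, ys):
    cs = [0] * (len(xs) + len(ys) - 1)
    for i, xi in enumerate(xs):
        for j, yj in enumerate(ys):
            cs[i + j] += xi * yj
    return cs

def carries(cs, a):
    """Carry passed out of each position when the C_n are reduced mod a."""
    out, carry = [], 0
    for c in cs:
        carry = (c + carry) // a
        out.append(carry)
    return out

a = 10
x, y = [5, 1, 2], [7, 3]                 # 215 and 37 in base 10, little-endian
cs = coefficients(x, y)                  # [35, 22, 17, 6]
print(cs, carries(cs, a))                # the nonzero carries mark the "leaps"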

My thing, which I do not have time now to think much about, is wondering if there is not some discrete embedding space which opens this onion up even more. I realize we may start running into Gödel-type issues, as you mentioned above. But I am still thinking that there may exist a further insight here that allows us to recognize the path of the carrying operation. At first glance that seems unlikely, because it is not one-to-one. But we have the ability to jump between bases as often as needed; perhaps a unique signature can be deduced in that fashion. Perhaps by some series of base changes we might arrive at a polynomial whose solutions are in fact the carrying values for each order. Polynomials which do not allow such a mapping would be very special indeed, or they may be the norm. But alas, now I must seek to make money, and number theory is no way in which to do that, at least for now!

Also, the algorithm I mentioned above, where we map a number n onto other numbers base ten with similar multiplicative structures, seems quite plausible to me. But that one opens up a lot of time questions, especially if we are choosing larger numbers. Schmoe already mentioned most of that.

Later
 
Last edited:
  • #47
shmoe said:
Just a quick note for now:



I've mentioned a couple of times already- this density goes to zero (just use the prime number theorem and merten's asymptotic for the product of (1-1/p)). At k=7, p[k]=17, you already have phi(17#)=92160 and pi(17#)=42331 so you can't use pigeon hole for long at all.


I know, schmoe, I can see that plainly. But there is still more detail to be had in the relationship between successive rrs for p#; just use the theorem I mentioned above for generating the rrs for consecutive elements in p#. You'll see that the non-primes are all found by taking products of units in the smaller element. This pattern can be worked out, of that I am certain, but we will have to pay attention to discrete details that most analytical minded folks try to avoid. The density is a truth, but it is a bounding type of thing; it does not prove that no path to Goldbach exists using the outline I have provided. It's not going to be easy, but few things worth doing are. I just don't have time right now. Maybe in a year or two, if I decide to go for the PhD, I will do number theory. Right now I have to try and find a job. Once that is done I will have more leisure time. Then I can get a bottle of Courvoisier and really think about it.
 
Last edited:
  • #48
Playdo said:
I know schmoe I can see that plainly.

That wasn't at all clear to me. You had asked about the density a few times, even after I had given you the asymptotic. You then suggested using pigeon hole would get you "some way up into the atmosphere"; it works up to 13#, which is hardly the atmosphere, let alone eye level. I keep getting the impression that you are vastly underestimating the number of composites that are less than p# but relatively prime to p#.

You might want to take the time to work out phi(p#) and pi(p#) to some distance. pi(p#) gets difficult to find pretty quick, but the prime number theorem will work well. I think Dusart's work is the latest on concrete bounds for error terms, so it might be worth looking into that if you don't like the big-O term with unspecified constant in the prime number theorem.

Playdo said:
But there is still more detail to be had in the relationship between successive rrs for p#; just use the theorem I mentioned above for generating the rrs for consecutive elements in p#. You'll see that the non-primes are all found by taking products of units in the smaller element. This pattern can be worked out, of that I am certain...

Wait, are you saying that the composites of the rrs of p# are found by taking products of elements in the rrs of q# (where q is the largest prime less than p)? That's pretty simple, a composite in the rrs of p# has only prime factors greater than p, so it can't have a non-trivial divisor greater than q#. What makes you certain that any sort of pattern can be worked out here?

Playdo said:
... but we will have to pay attention to discrete details that most analytical minded folks try to avoid.

Just curious what makes you think you can say what "most analytical minded folks" will or will not do. I don't get the impression you've studied analytic number theory in any depth, so on what grounds are you making this statement?

Playdo said:
The density is a truth but it is a bounding type of thing, it does not prove that no path to Goldbach exists using the outline I have provided.

Sure, but it puts a damper on any hopes of using some sort of pigeon hole argument, even if you can narrow things down significantly.

Playdo said:
Maybe in a year or two, if I decide to go for the PhD, I will do number theory.

If that's a possible goal, then you might want to start reading some elementary number theory texts. You will be in a heap of trouble if you hope to enter a PhD program in number theory and you don't have elementary facts about things like the divisor function down, as well as the basic big-O and other asymptotic notations.
 
  • #49
shmoe said:
That wasn't at all clear to me. You had asked about the density a few times, even after I had given you the asymptotic. You then suggested using pigeon hole would get you "some way up into the atmosphere"; it works up to 13#, which is hardly the atmosphere, let alone eye level. I keep getting the impression that you are vastly underestimating the number of composites that are less than p# but relatively prime to p#.

You might want to take the time to work out phi(p#) and pi(p#) to some distance. pi(p#) gets difficult to find pretty quick, but the prime number theorem will work well. I think Dusart's work is the latest on concrete bounds for error terms, so it might be worth looking into that if you don't like the big-O term with unspecified constant in the prime number theorem.



Wait, are you saying that the composites of the rrs of p# are found by taking products of elements in the rrs of q# (where q is the largest prime less than p)? That's pretty simple, a composite in the rrs of p# has only prime factors greater than p, so it can't have a non-trivial divisor greater than q#. What makes you certain that any sort of pattern can be worked out here?



Just curious what makes you think you can say what "most analytical minded folks" will or will not do. I don't get the impression you've studied analytic number theory in any depth, so on what grounds are you making this statement?



Sure, but it puts a damper on any hopes of using some sort of pigeon hole argument, even if you can narrow things down significantly.



If that's a possible goal, then you might want to start reading some elementary number theory texts. You will be in a heap of trouble if you hope to enter a PhD program in number theory and you don't have elementary facts about things like the divisor function down, as well as the basic big-O and other asymptotic notations.

Yeah, you're right, schmoe, I really don't want to become an expert in number theory. Analytical number theory glosses over the details of elementary number theory. The prime number theorem is powerful and important, but it does not tell you what pi(n) actually is for any specific n. Remember the program is 'remain true to the bounds of elementary number theory' whenever possible. To me that means pay attention to detail and stay discrete.

Now it is quite possible that the number of composites relatively prime to p# is quite large and I will look at it eventually. The density going to zero means as we go further out it becomes quite amazing to suppose that Goldbach should be true for this sequence. So I expect to find some very interesting things as I pursue the details. Conceded, pigeon hole only works to p#[13], but more details of the relationship between p#[k] and p#[k+1] remain to be understood. You are an expert in number theory, surely you realize that all of the composites in rrs p#[k+1] are generated by multiplying units in p#[k]? Meaning set up the multiplication table for rrs p#[k] and simply count the points less than the hyperbola xy=p#[k+1]. Think about that graphically and useful patterns beyond the plain gray of the pigeon hole principle start to emerge.
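An editor's sketch of the count being described (Python, not from the post, and only feasible for tiny k): form the multiplication table of the units of rrs mod 30, keep the products lying under the hyperbola xy = 210, and compare with the composites actually present in rrs mod 210. The table also contains multiples of 7, which drop out of rrs mod 210, so those composites form a subset of the points under the hyperbola.

Code:
# Sketch (not from the post): points of the rrs-mod-30 multiplication table
# under the hyperbola xy = 210, versus the composites present in rrs mod 210.
from math import gcd

P_k, P_k1 = 30, 210
units = [u for u in range(2, P_k) if gcd(u, P_k) == 1]          # 7, 11, ..., 29

under_hyperbola = {x * y for x in units for y in units if x * y < P_k1}
composites_k1 = {n for n in range(2, P_k1) if gcd(n, P_k1) == 1
                 and any(n % d == 0 for d in range(2, int(n ** 0.5) + 1))}

print(sorted(under_hyperbola))
print(sorted(composites_k1))                 # {121, 143, 169, 187, 209}
print(composites_k1 <= under_hyperbola)      # True: a subset, once multiples of 7 drop out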

I really do not have time now to get into it. But I will, and perhaps it will fail. If it does then it does. You know sometimes instead of using big powerful theorems to avoid the details, we actually have to go in and start looking at things on a case by case basis until patterns start to emerge. I know it's a method not to be generally recommended, experimental number theory is only advocated by fools who believe Chaitin is correct.
 
  • #50
Playdo said:
Yeah, you're right, schmoe, I really don't want to become an expert in number theory.

Forget about a PhD in number theory, then.

Playdo said:
Analytical number theory glosses over the details of elementary number theory.

You didn't answer before: what are you basing this on? You don't seem to have studied analytic number theory in any depth, so on what grounds can you say what is and isn't studied by analytic number theorists?

Playdo said:
The prime number theorem is powerful and important but it does not tell you what pi(n) actually is for any specific n.

Sure, but for many (many, many) applications, it works handily even with our weak error terms (weak relative to the sort of things believed true). There are also algorithms based on analytic techniques to calculate pi(x) exactly for a given x.

I'm not sure I see the relevance of this comment? Is it a supposed failing of analytic techniques?

Playdo said:
Remember the program is 'remain true to the bounds of elementary number theory' whenever possible. To me that means pay attention to detail and stay discrete.

Why would you ignore analytic results, especially when they handily answer some of the questions you've asked? In any case, the prime number theorem can be proven with elementary techniques, in case you were thinking in that direction.

Playdo said:
Now it is quite possible that the number of composites relatively prime to p# is quite large and I will look at it eventually.

See, when you say things like this it makes me think you don't "plainly see" what the density is doing at all. It's not "quite possible"; it most definitely is the case that this happens. If you don't believe the asymptotic I gave, is there any chance at all that you will try to get some bounds on pi(p#)/phi(p#) for some values of p yourself? If not, just say so and I'll do it for you.

Playdo said:
The density going to zero means as we go further out it becomes quite amazing to suppose that Goldbach should be true for this sequence.

It seems you should then be surprised if Goldbach were true in general: the density of the primes in the integers is also going to zero (meaning pi(n)/n->0 as n->infinity).

This shouldn't be surprising. Density zero sets can still be 'dense enough' for 'stuff' to happen. For example, if a sequence has positive upper density then it contains arbitrarily long arithmetic progressions (Szemerédi). The primes have density zero, yet arbitrarily long progressions also exist in the primes (Green, Tao).
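A quick numeric illustration of that parenthetical (an editor's sketch in Python, not from the post; the sieve helper is mine): the prime density pi(x)/x shrinks while the partial sums of 1/p keep growing, roughly like log log x.

Code:
# Sketch (not from the post): the prime density pi(x)/x shrinks while sum 1/p
# keeps growing, so a density-zero set can still be "numerous".
from math import log

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    ps = primes_up_to(x)
    print(x, len(ps) / x, sum(1 / p for p in ps), log(log(x)))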

Playdo said:
You are an expert in number theory, surely you realize that all of the composites in rrs p#[k+1] are generated by multiplying units in p#[k]?

I said as much in my last post.

Playdo said:
Meaning set up the multiplication table for rrs p#[k] and simply count the points less than the hyperbola xy=p#[k+1]. Think about that graphically and useful patterns beyond the plain gray of the pigeon hole principle start to emerge.

If it's 'simple' then do it. Count these points, show what the 'useful patterns' are.

Playdo said:
You know sometimes instead of using big powerful theorems to avoid the details, we actually have to go in and start looking at things on a case by case basis until patterns start to emerge.

Seriously now, who is avoiding details?

Playdo said:
I know it's a method not to be generally recommended, experimental number theory is only advocated by fools who believe Chaitin is correct.

What I would call "experimental number theory" happens all the time, and is in fact a large driving force behind many conjectures. Packages like pari/gp weren't developed by number theorists to take up disk space. What does the phrase "experimental number theory" mean to you?
 
  • #51
shmoe said:
What I would call "experimental number theory" happens all the time, and is in fact a large driving force behind many conjectures. Packages like pari/gp weren't developed by number theorists to take up disk space. What does the phrase "experimental number theory" mean to you?

Just got back from finishing up my MS and moving. I PMed Kurk to see if this thread was deleted. He sent me what I interpreted to be a terse PM for criticizing the establishment. So that sort of feeds into your question about experimental number theory.

I will probably have to open a separate thread in the general discussion area about the culture in mathematics. There is much to be discussed there. Having been around the community for some amount of time, I too have had my share of dealing with folks who think that .999... ≠ 1. What can you do. But not everyone who has a unique approach is of that vein. Of course it is rare to meet the exception, and I do not claim to be the one who can reinvent number theory or that it even needs to be reinvented. But I do chafe, and begin gnawing at the bit, when I see a mathematician who is effectively a social retard making others feel bad because they don't get it. Those types of things lead to Bastille Day. Being smart is not enough; the mindless masses pay your salary through taxes. Either get your attitude in check with reality or take your chalk and go home! That is a rebuke I am willing to accept myself. I spent many years after my BS puffed up by the 'mentoring' of various elements in the math community.

So yes, experimentation. I mean free and creative thinking, which De Morgan felt was essential to all mathematical progress. That does not remove the precedent of established mathematical truths. However, techniques are still growing. We all should aspire to do something where we can be original, in my opinion. If math isn't that for someone, then maybe they would be happier somewhere else. At any rate, we need library books. Some folks function as wetware, library collections of facts about math and science. I have no problem with that, but a jerk is a jerk is a jerk. A saying I really like is by Plutarch: "Be kind, everyone you meet is fighting a hard battle."

I came back because I am working out a partial proof of the Goldbach Conjecture, at least I have a solid idea. I will post the key question in the number theory thread.
 
Last edited:
  • #52
Playdo said:
Just got back from finishing up my MS and moving. I PMed Kurk to see if this thread was deleted. He sent me what I interpreted to be a terse PM for criticizing the establishment. So that sort of feeds into your question about experimental number theory.

I don't know who Kurk is, but it's rather disingenuous of you to give what looks to be a criticism of his private communication to you when we have no idea what he said.

Playdo said:
So yes experimentation. I mean free and creative thinking, which De Morgan felt was essential to all mathematical progress. That does not remove the precedent of established mathematical truths. However techniques are still growing. We all should aspire to do something where we can be original, in my opinion. If math isn't that for someone, then maybe they would be happier somewhere else. At any rate, we need library books. Some folks function as wetware, library collections of facts about math and science. I have no problem with that but a jerk is a jerk is a jerk. A saying I really like is by Plutarch , "Be kind, everyone you meet is fighting a hard battle."

By your explanation of 'experimentation', that's pretty much what mathematicians do all the time. They aren't just walking piles of information and facts. Coming up with original ideas is what mathematics is all about, whether it's in the form of completely new techniques or novel ways of combining old ideas. So I really don't know what you're going on about, except that it looks to me like you are trying to justify ignoring the massive body of literature and 'do your own thing', even when some very elementary parts of the theory would serve you very well.
 
  • #53
shmoe said:
I don't know who Kurk is, but it's rather disingenuous of you to give what looks to be a criticism of his private communication to you when we have no idea what he said.



By your explanation of 'experimentation', that's pretty much what mathematicians do all the time. They aren't just walking piles of information and facts. Coming up with original ideas is what mathematics is all about, whether it's in the form of completely new techniques or novel ways of combining old ideas. So I really don't know what you're going on about, except that it looks to me like you are trying to justify ignoring the massive body of literature and 'do your own thing', even when some very elementary parts of the theory would serve you very well.

Fair enough, Schmoe, on all counts.
 