What Is the Largest Cyclic Subgroup of S_n?

Summary:
The discussion centers on determining the largest cyclic subgroup of the symmetric group S_n, which involves finding partitions of n that maximize the least common multiple (LCM) of their parts. Participants note that computing this is complex and may not yield a closed formula, particularly for values of n beyond certain thresholds. The relationship between the largest order of elements in S_n and prime partitions is highlighted, with examples illustrating how different partitions can yield varying LCMs. Additionally, there are challenges in programming accurate calculations due to limitations in integer precision. Overall, the problem remains open-ended, with no simple solution currently available.
JoanBraidy
I became interested in this question a few weeks ago, and I couldn't find much on it.

Basically, I've realized it's equivalent to finding, for each n, a partition of n, say

x_1, x_2, ..., x_k with x_1 + x_2 + ... + x_k = n, such that lcm(x_1, ..., x_k) is maximal

(because you can then take the cyclic subgroup generated by a product of disjoint cycles of lengths x_1, ..., x_k, e.g. <(1 2 ... x_1)(x_1+1 ... x_1+x_2) ... >, which has order lcm(x_1, ..., x_k)).

However, computing this seems pretty much impossible.

Does anyone have any insights on this problem?
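The maximum over all such partitions is the largest order of any element of S_n (known as Landau's function g(n)). A minimal brute-force sketch of the reformulation (illustrative only: it enumerates partitions in non-increasing part order and keeps the largest lcm, so it is only practical for small n):

Code:
// gcd/lcm helpers
function gcd(a, b) { return b ? gcd(b, a % b) : a; }
function lcm(a, b) { return a / gcd(a, b) * b; }

// largest lcm over partitions of n whose parts are all <= m
function maxLcm(n, m) {
  if (n === 0) return 1;                 // lcm of the empty partition
  if (m === undefined) m = n;
  var best = 1;
  for (var part = Math.min(n, m); part >= 1; part--) {
    best = Math.max(best, lcm(part, maxLcm(n - part, part)));
  }
  return best;
}

// e.g. maxLcm(5) === 6 (from 2+3) and maxLcm(10) === 30 (from 2+3+5)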
 
I don't think there is a closed form. Check out this paper: http://www.cs.ust.hk/mjg_lib/Library/Miller87.pdf
 
EDIT: Both the formula and programme in this post are incorrect (see sequel)

For the numbers ##P_n=\sum_{r=1}^{n}p_r##, the largest order of an element of ##S_{P_n}## would be ##\prod_{r=1}^{n}p_r##. It's also clear that the largest order for ##S_n## is monotonically non-decreasing in ##n##.

In the gaps between the ##P_n##, the best way to distribute the "remainder" among the existing cycles, and how effective that can be, will depend both on the remainder and on the differences between ##p_{n+i}##, ##i=1,2,\dots## and ##p_{n-i}##, ##i=0,1,2,\dots##. Because of the irregularity of the sequence of primes, I would be surprised if a handy formula could be found for these values.

I've put together a cheap and nasty JavaScript routine (below) to calculate the first few values, but it bows out after n = 270 because JavaScript's exact integer range (up to 2^53) is exceeded. Even if arbitrary-precision arithmetic were included, I wouldn't hold out much hope of a brute-force routine like this getting very far, owing to the eventual size of the numbers if nothing else. (I haven't attempted to quantify this either.)

Code:
<script>
big=Math.pow(2,53)
toobig=false
// greatest common divisor (Euclid's algorithm)
function hcf(p,q){
  var u=Math.max(p,q)
  var v=Math.min(p,q)
  while(1){
    h=v
    v=u%v  
    if(!v)break
    u=h
  }
  return h  
}
// least common multiple; returns 0 if the exact result would exceed 2^53
function lcm(p,q){
  var wk=Math.min(p,q)/hcf(p,q)
  if(big/wk>=Math.max(p,q))return Math.max(p,q)*wk
  return 0
}
function cpy(u){
  if(!(u instanceof Array)){if(typeof(u)!='object')return u;wobbler()}
  var r=[]
  for(var i in u){
    r[i]=cpy(u[i])
  }
  return r
}
// memoised search for a partition of n intended to maximise the lcm of its parts
function part(n){
  if(!('done' in this))done=[]
  if(n in done)return cpy(done[n])
  var max=[n]
  max.v=n
  for(var i=n-1;i>=1;i--){
    var rest=n-i 
    rest=part(rest) 
    if(rest)rest.v=lcm(i,rest.v)             
    if(!(rest&&rest.v)){toobig=true;return null}
    if(rest.v>max.v){
      max.v=rest.v
      max.length=0
      var mx=0
      var idone=0
      for(var j=0;j<rest.length;j++){
        if(!idone&&i>=rest[j]){
          max[mx]=i
          mx++
          idone=1          
        }
        if(i!=rest[j]){
          max[mx]=rest[j]
          mx++
        }
      }
      if(!idone)max[mx]=i
    }    
  }
  done[n]=cpy(max) 
  return max
}
// page driver: tabulate n, the maximal order found, and a partition achieving it
function doit(n,testlim,parm){
  if(!parm)parm=[100,10000]
  var pglim=parm[0]?parm[0]:100
  var gap=parm[1]?parm[1]:10000  
  if(!testlim)testlim=n
  var upper=n-(n%pglim)+pglim
  if(!('upto' in this))upto=document.getElementById('upto')
  if(!('blurb' in this))blurb=document.getElementById('blurb')
  var show=[]
  sp='&nbsp;';sp+=sp;sp+=sp;
  for(;n<Math.min(upper,testlim+1);n++){
    var maxpart=part(n)
    if(toobig){show[show.length]='<br>accuracy limit exceeded - terminating';break}
    show[show.length]=n+sp+maxpart.v+sp+'('+maxpart+')'
  }
  upto.innerHTML=n
  blurb.innerHTML=(n>pglim?'...<br>':'')+show.join('<br>')+(n<testlim?'<br>...':'')
  if(n<testlim&&!toobig)setTimeout('doit('+(n+1)+','+testlim+','+gap+')',gap)
}
</script>
<body onload="doit(1,1000000,[1000,1])">
<div>up to:</div>
<div id=upto></div>
<div>===========<br>n&nbsp;&nbsp;&nbsp;max_order&nbsp;&nbsp;&nbsp;max_cycle</div>
<div id=blurb></div>
 
Interesting.

I don't get why it's obvious that ##S_{\sum p_i}## has ##\prod p_i## as its largest order.

Couldn't there exist another partition of ##\sum p_i## whose least common multiple is bigger?

Say ##2+3+5=10=7+3##, but ##2\cdot 3\cdot 5=30## and ##7\cdot 3=21##, so ##S_{7+3}=S_{10}## would have largest order 21 according to you, but then it also has an element of order 30: just take (1 2)(3 4 5)(6 7 8 9 10).

So any time you can write ##n## as a sum of primes in two different ways, this would probably happen.
 
By ##p_i## I meant the ##i^{th}## prime, not just any old prime, so ##P_3=10## in that notation, and the maximal order for ##S_{10}## would be given by ##\prod_{i=1}^{3}p_i=2\cdot 3\cdot 5=30##.

If ##n=14##, then ##P_3<n<P_4##, so you have a "remainder" of 4 to distribute over the factors ##2\cdot 3\cdot 5##. The best way is to add 2 to each of the parts 2 and 5 or, equivalently, 1 to each of 2 and 3 and 2 to 5; either way the parts become 3, 4, 7. In each case the new parts are relatively prime to the rest, but this rests partly on the fact that ##7-5=2##, and on similar relations between primes for larger numbers, so I wouldn't be optimistic about finding a simple formula for the values of ##n## between the ##P_n##.
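A quick numerical check of the n = 14 case (an illustrative snippet, not part of the programs in this thread):

Code:
// lcm of a list of parts
function gcd(a, b) { return b ? gcd(b, a % b) : a; }
function lcmAll(parts) {
  return parts.reduce(function (a, b) { return a / gcd(a, b) * b; }, 1);
}
console.log(lcmAll([2, 3, 5, 4]));  // 60 - keep the remainder 4 as a cycle of its own
console.log(lcmAll([2, 5, 7]));     // 70 - add the whole remainder to the 5
console.log(lcmAll([3, 4, 7]));     // 84 - distribute it as described above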

But you're right that the formula is not obvious. In fact it breaks down for ##P_k## when ##k\geq 13##, possibly before. In that case, splitting ##2+41## as ##1+8+9+25## still gives parts that are relatively prime to everything except 2, 3, 5 (which simply get absorbed into 8, 9, 25 in the lcm), but ##1\cdot 8\cdot 9\cdot 25>2\cdot 41\cdot 3\cdot 5##, hence ##8\cdot 9\cdot 25\cdot 7\cdot 11\cdot 13\cdots 37>2\cdot 3\cdot 5\cdots 41##. So all I can actually say is:

##n\geq P_k\Rightarrow\text{max element order in }S_n\geq \prod_{i=1}^{k}p_i##

and this bound will no doubt get progressively worse as ##n## increases beyond ##8+9+25+7+11+13+\dots +37=229## (at most).
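That counterexample is easy to check numerically (illustrative snippet; both products stay well below 2^53, so ordinary doubles are exact here):

Code:
var primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41];
var times = function (a, b) { return a * b; };
var primorial = primes.reduce(times, 1);                      // 2*3*5*...*41 from the first 13 primes, sum of parts = 238
var altered = primes.slice(3, 12).reduce(times, 8 * 9 * 25);  // 8*9*25*7*11*...*37, sum of parts = 229
console.log(altered > primorial);                             // true: the prime-power parts give a larger lcm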

The programme posted is also flawed (I think for similar reasons).

Goes to show it would probably be a good idea if I thought before posting.
 
I'm attaching a (hopefully) corrected version of the program previously posted. This version slows to a snail's pace after about n = 60 and would probably take all day to get up to the limit of JavaScript integer accuracy (I haven't tried it).

The first difference from the previous program is at n = 29.

The challenge is to get it to run at a reasonable speed, or to prove that it never will; a faster approach is sketched after the code below.

Code:
<script>
big=Math.pow(2,53)
toobig=false
done=[];done.maxix=1
show=[]
function hcf(p,q){
  var u=Math.max(p,q)
  var v=Math.min(p,q)
  while(h=v){    
    v=u%v  
    if(!v)break
    u=h
  }
  return h  
}
function lcm(p,q){
  var wk=Math.min(p,q)/hcf(p,q)
  if(big/wk>=Math.max(p,q))ret=Math.max(p,q)*wk
  else{
    toobig=true
    ret=0
  }
  return ret
}
// memoised: for each n keep a list of partitions whose lcm values don't divide one another; cycs.max indexes the best
function part(n){ 
  if(!done[n]){
    var cycs=[[n]]
    cycs[0].v=n
    cycs.max=0
    for(var i=n-1;i>=Math.floor((n+1)/2);i--){
      var icycs=part(i),jcycs=part(n-i)
      for(var ic in icycs)for(var jc in jcycs){
        if(ic!='max'&&jc!='max'){
          var icyc=icycs[ic],jcyc=jcycs[jc]
          var vtry=lcm(icyc.v,jcyc.v)            
          if(toobig)break;
          else{
            cycins=cycs.length
            for(var cx in cycs){
              if(cx!='max'){
                var cyc=cycs[cx]
                if(!(cyc.v%vtry)){cycins=undefined;break}              
                if(!(vtry%cyc.v)){cycins=cx;delete cycs[cx]}
              }
            }
            if(cycins!=undefined){ 
              if(cycins>done.maxix)done.maxix=cycins   
              if(!cycs[cycs.max]||cycs[cycs.max].v<vtry)cycs.max=cycins
              cyc=cycs[cycins]=[]
              cyc.v=vtry
              cyc.length=0
              var ix,jx; ix=jx=cx=0
              while(ix<icyc.length||jx<jcyc.length){
                if(ix<icyc.length&&(jx>=jcyc.length||icyc[ix]>=jcyc[jx]))cyc[cx++]=icyc[ix++]
                else cyc[cx++]=jcyc[jx++]
              }
            }
          }
        }    
      }
      if(toobig)break;
    }    
    done[n]=cycs    
  }
  return done[n]
}
// page driver: display n, the largest order found, and a partition achieving it, then schedule the next n
function doit(n,lim,showix){
  var cyc=part(n)
  cyc=cyc[cyc.max]
  sp='&nbsp;';sp+=sp;sp+=sp;    
  if(toobig)show[show.length]='<br>accuracy limit exceeded - terminating'
  else show[show.length]=n+sp+cyc.v+sp+'('+cyc+')'    
  upto=document.getElementById('upto')
  upto.innerHTML=n+(showix?' max cycle index='+done.maxix:'')  
  blurb=document.getElementById('blurb')  
  blurb.innerHTML=show.join('<br>')
  n++
  if(!(toobig||(lim&&n>lim)))setTimeout('doit('+n+','+lim+','+showix+')',1)
}
</script>
<body onload="doit(1,1000)">
<div>up to:</div>
<div id=upto></div>
<div>===========<br>n&nbsp;&nbsp;&nbsp;max_order&nbsp;&nbsp;&nbsp;max_cycle</div>
<div id=blurb></div>
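For what it's worth, a much faster route (a sketch of a standard dynamic-programming approach, not a repair of the program above) uses the fact that a partition maximising the lcm can always be taken to consist of powers of distinct primes, with any leftover budget spent on parts equal to 1. Processing one prime at a time keeps the work down to roughly n^2/log n simple operations:

Code:
// Sketch: largest element order in S_n by dynamic programming over prime powers.
// best[j] = largest lcm achievable with budget j using only the primes processed so far.
function landau(n) {
  // sieve the primes up to n
  var composite = [], primes = [];
  for (var p = 2; p <= n; p++) {
    if (!composite[p]) {
      primes.push(p);
      for (var q = p * p; q <= n; q += p) composite[q] = true;
    }
  }
  var best = [];
  for (var j = 0; j <= n; j++) best[j] = 1;
  primes.forEach(function (p) {
    for (var j = n; j >= p; j--) {           // descending, so each prime is used at most once
      for (var pk = p; pk <= j; pk *= p) {   // try spending p, p^2, p^3, ... of the budget
        best[j] = Math.max(best[j], pk * best[j - pk]);
      }
    }
  });
  return best[n];
}
// e.g. landau(10) === 30 and landau(14) === 84; like the programs above it is only exact
// while the answer stays below 2^53 (the first post hit that limit around n = 270).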
 