How to create a tri-diagonal matrix and find its dominant eigenvalue

Summary
The discussion focuses on creating a tri-diagonal matrix using vectors in Matlab and finding its dominant eigenvalue. Users share their approaches to constructing the matrix and calculating the determinant, noting discrepancies in their results. The conversation highlights the importance of using the correct commands in Matlab, particularly the distinction between obtaining eigenvalues and eigenvectors. It is clarified that the dominant eigenvalue can also be determined using the 2-norm of the matrix, which simplifies the process. Ultimately, the correct dominant eigenvalue is confirmed to be 12.4586.
ver_mathstats

Homework Statement


Consider the following vectors, which you can copy and paste directly into Matlab.

x = [2 2 4 6 1 5 5 2 6 2 2];
y = [3 3 3 6 3 6 3 2 3 2];

Use the vectors x and y to create the following matrix.

2 3 0 0 0 0 0 0 0 0 0
3 2 3 0 0 0 0 0 0 0 0
0 3 4 3 0 0 0 0 0 0 0
0 0 3 6 6 0 0 0 0 0 0
0 0 0 6 1 3 0 0 0 0 0
0 0 0 0 3 5 6 0 0 0 0
0 0 0 0 0 6 5 3 0 0 0
0 0 0 0 0 0 3 2 2 0 0
0 0 0 0 0 0 0 2 6 3 0
0 0 0 0 0 0 0 0 3 2 2
0 0 0 0 0 0 0 0 0 2 2

Such a matrix is called a tri-diagonal matrix.
Hint: Use the diag command three times, and then add the resulting matrices.

To check that you have correctly created the matrix A, verify that det(A) = 1.3995e+06.

Find the dominant eigenvalue of A.

2. Homework Equations

The Attempt at a Solution


So I started by copying the vectors into Matlab. I did a = diag(x), then b = diag(y,1), and lastly c = diag(y,-1), and I added the matrices together: d = a + b + c. I get the same matrix in Matlab, but my determinant is completely different and I am not sure why.

Thank you.
 
ver_mathstats said:
I get the same matrix in Matlab, but my determinant is completely different and I am not sure why.

what do you mean you "get the same matrix"?

I followed along in Julia, which is close enough to Matlab in structure and commands... I get a determinant of 1.3994640000000002e6

what are you getting as the determinant?
 
StoneTemplePython said:
what do you mean you "get the same matrix"?

I followed along in Julia, which is close enough to Matlab in structure and commands... I get a determinant of 1.3994640000000002e6

what are you getting as the determinant?

What I meant by that was that I obtain the same 11 x 11 matrix as stated in the question. The determinant I get is 1399464; I am completely confused as to where I am going wrong.
 
ver_mathstats said:
What I meant by that was that I obtain the same 11 x 11 matrix as stated in the question. The determinant I get is 1399464; I am completely confused as to where I am going wrong.

but ##\frac{1.3994640000000002e6}{1399464} = 1##

i.e. they are the same except for possible numeric precision nits.
 
StoneTemplePython said:
but ##\frac{1.3994640000000002e6}{1399464} = 1##

i.e. they are the same except for possible numeric precision nits.

Okay, thank you. However, when I went to find the dominant eigenvalue of A I used

x = magic(11)
[P D] = eig(x)
evalues = diag(D)
abs_evalues = abs(evalues)
sorted_abs_evalues = sort(abs_evalues)
dominant_evalue = sorted_abs_evalues(end)

I got the value 671.0000, yet this is incorrect, so I am very confused as to why that is. I am unsure of how to go about this now.
 
ver_mathstats said:
Okay, thank you. However, when I went to find the dominant eigenvalue of A I used

x = magic(11)
[P D] = eig(x)
evalues = diag(D)
abs_evalues = abs(evalues)
sorted_abs_evalues = sort(abs_evalues)
dominant_evalue = sorted_abs_evalues(end)

I got the value 671.0000, yet this is incorrect, so I am very confused as to why that is. I am unsure of how to go about this now.

sorry what is magic? there should be an eigenvalues command you can call, possibly one specialized to real symmetric (or hermitian) matrices... in Julia the command is "eigvals(A)" and in python/numpy it would be "eigvalsh(A)"... you don't need the eigenvectors here, they just clutter things up.

as such all eigenvalues are real -- no need to put absolute values around anything
- - - -
btw, I don't want to confuse things, but if you know a good deal of linear algebra: you may skip all this eigenvalues stuff and just call a command to get the operator 2 norm of the matrix which is the dominant eigenvalue (why?)

in Julia, the command is ##\text{norm}(A)##. However, it varies by language... sometimes norm() will give the frobenius norm.
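Both routes above can be sketched in Python/NumPy, which the post alludes to: eigvalsh is the routine specialized to real symmetric (Hermitian) matrices, and norm(A, 2) is the operator 2-norm rather than the Frobenius norm (an illustrative sketch using the tri-diagonal matrix from the problem):

```python
import numpy as np

x = np.array([2, 2, 4, 6, 1, 5, 5, 2, 6, 2, 2], dtype=float)
y = np.array([3, 3, 3, 6, 3, 6, 3, 2, 3, 2], dtype=float)
A = np.diag(x) + np.diag(y, 1) + np.diag(y, -1)

# Route 1: the matrix is real symmetric, so all eigenvalues are real;
# the dominant one is the largest in magnitude.
evals = np.linalg.eigvalsh(A)            # real eigenvalues, ascending order
dominant = evals[np.abs(evals).argmax()]

# Route 2: for a real symmetric matrix the operator 2-norm
# (largest singular value) equals the largest |eigenvalue|.
two_norm = np.linalg.norm(A, 2)

print(dominant, two_norm)  # both ≈ 12.4586
```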
 
Show us what MATLAB gives you as a result of [P D] = eig(x)
 
StoneTemplePython said:
sorry what is magic? there should be an eigenvalues command you can call, possibly one specialized to real symmetric (or hermitian) matrices... in Julia the command is "eigvals(A)" and in python/numpy it would be "eigvalsh(A)"... you don't need the eigenvectors here, they just clutter things up.

as such all eigenvalues are real -- no need to put absolute values around anything
- - - -
btw, I don't want to confuse things, but if you know a good deal of linear algebra: you may skip all this eigenvalues stuff and just call a command to get the operator 2 norm of the matrix which is the dominant eigenvalue (why?)

in Julia, the command is ##\text{norm}(A)##. However, it varies by language... sometimes norm() will give the frobenius norm.
I think magic in Matlab returns an n x n matrix. Okay, sorry, that's how we were taught to do it, but I understand what you are saying. For Matlab I'm quite sure it is just eig(A).

So the 2-norm we obtain would be the same as the operator 2-norm? In Matlab, to obtain the 2-norm we use n = norm(X).
 
The Electrician said:
Show us what MATLAB gives you as a result of [P D] = eig(x)
d =

68 81 94 107 120 1 14 27 40 53 66
80 93 106 119 11 13 26 39 52 65 67
92 105 118 10 12 25 38 51 64 77 79
104 117 9 22 24 37 50 63 76 78 91
116 8 21 23 36 49 62 75 88 90 103
7 20 33 35 48 61 74 87 89 102 115
19 32 34 47 60 73 86 99 101 114 6
31 44 46 59 72 85 98 100 113 5 18
43 45 58 71 84 97 110 112 4 17 30
55 57 70 83 96 109 111 3 16 29 42
56 69 82 95 108 121 2 15 28 41 54

P =

Columns 1 through 8

0.3015 0.3788 0.1975 0.4365 -0.2767 0.2973 -0.1327 -0.1049
0.3015 0.4223 -0.0219 0.2814 0.1539 -0.4298 -0.1890 0.3451
0.3015 0.3423 -0.2642 -0.1837 -0.0109 0.1281 0.0748 0.3861
0.3015 0.1506 -0.3914 -0.4163 -0.1002 0.3416 0.1817 -0.0037
0.3015 -0.0951 -0.4239 -0.1529 0.1865 -0.3839 -0.0790 -0.3764
0.3015 -0.2950 -0.2950 0.2811 -0.2863 0.2373 -0.3593 -0.2811
0.3015 -0.4239 -0.0951 0.3764 0.3879 -0.0574 0.3458 0.1529
0.3015 -0.3914 0.1506 0.0037 -0.4361 -0.3268 0.3224 0.4163
0.3015 -0.2642 0.3423 -0.3861 0.4168 0.4856 -0.5796 0.1837
0.3015 -0.0219 0.4223 -0.3451 -0.3766 -0.1184 -0.0447 -0.2814
0.3015 0.1975 0.3788 0.1049 0.3416 -0.1736 0.4596 -0.4365

Columns 9 through 11

0.4596 0.3416 0.1736
-0.0447 -0.3766 0.1184
-0.5796 0.4168 -0.4856
0.3224 -0.4361 0.3268
0.3458 0.3879 0.0574
-0.3593 -0.2863 -0.2373
-0.0790 0.1865 0.3839
0.1817 -0.1002 -0.3416
0.0748 -0.0109 -0.1281
-0.1890 0.1539 0.4298
-0.1327 -0.2767 -0.2973

D =

Columns 1 through 8

671.0000 0 0 0 0 0 0 0
0 214.7427 0 0 0 0 0 0
0 0 -214.7427 0 0 0 0 0
0 0 0 -111.9042 0 0 0 0
0 0 0 0 -61.1221 0 0 0
0 0 0 0 0 -66.5104 0 0
0 0 0 0 0 0 -80.0530 0
0 0 0 0 0 0 0 111.9042
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0

Columns 9 through 11

0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
0 0 0
80.0530 0 0
0 61.1221 0
0 0 66.5104

evalues =

671.0000
214.7427
-214.7427
-111.9042
-61.1221
-66.5104
-80.0530
111.9042
80.0530
61.1221
66.5104

abs_evalues =

671.0000
214.7427
214.7427
111.9042
61.1221
66.5104
80.0530
111.9042
80.0530
61.1221
66.5104

sorted_abs_evalues =

61.1221
61.1221
66.5104
66.5104
80.0530
80.0530
111.9042
111.9042
214.7427
214.7427
671.0000

dominant_evalue =

671.0000

Here is what I got when I did it; I just copied and pasted it.
 
  • #10
Apparently you have been given the "correct" value for the dominant eigenvalue; what is it? What do you get for norm(x)?
 
  • #11
ver_mathstats said:
I think magic in Matlab returns an n x n matrix. Okay, sorry, that's how we were taught to do it, but I understand what you are saying. For Matlab I'm quite sure it is just eig(A).

So the 2-norm we obtain would be the same as the operator 2-norm? In Matlab, to obtain the 2-norm we use n = norm(X).
yes, the docs say the "2 norm" of a matrix X is the largest singular value (aka the operator norm aka the largest magnitude eigenvalue for a normal matrix -- or if you prefer: for a real symmetric matrix). I'm a little concerned about going down this more slick path unless you understand why this path works... and you should also be able to call the eig function...
- - - -
so from the docs, you want:

"e = eig(A)"
not
"[V,D] = eig(A)"

these are the docs I'm referencing: https://www.mathworks.com/help/matlab/ref/eig.html
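The distinction between the two call forms in the docs has a direct NumPy parallel (an illustrative sketch, not the Matlab session itself): eigvals returns only the eigenvalues, while eig also returns the eigenvectors.

```python
import numpy as np

x = np.array([2, 2, 4, 6, 1, 5, 5, 2, 6, 2, 2], dtype=float)
y = np.array([3, 3, 3, 6, 3, 6, 3, 2, 3, 2], dtype=float)
A = np.diag(x) + np.diag(y, 1) + np.diag(y, -1)

# Eigenvalues only -- the analogue of Matlab's e = eig(A).
e = np.linalg.eigvals(A)

# Eigenvalues and eigenvectors -- the analogue of [V,D] = eig(A),
# satisfying A @ V == V @ diag(w) up to floating-point error.
w, V = np.linalg.eig(A)

print(np.allclose(A @ V, V @ np.diag(w)))
```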
 
  • #12
StoneTemplePython said:
yes, the docs say the "2 norm" of a matrix X is the largest singular value (aka the operator norm aka the largest magnitude eigenvalue for a normal matrix -- or if you prefer: for a real symmetric matrix). I'm a little concerned about going down this more slick path unless you understand why this path works... and you should also be able to call the eig function...
- - - -
so from the docs, you want:

"e = eig(A)"
not
"[V,D] = eig(A)"

these are the docs I'm referencing: https://www.mathworks.com/help/matlab/ref/eig.html

Okay, so when I did e = eig(d), I obtained:
-4.6061
-2.1522
-1.4833
-0.6932
1.9811
2.8707
3.5153
6.3973
8.1851
10.5267
12.4586

and then when I did f=norm(d), I obtained:

12.4586.

Would that just be it?
 
  • #13
The Electrician said:
Apparently you have been given the "correct" value for the dominant eigenvalue; what is it? What do you get for norm(x)?
Well, for norm(x), I obtained the value 12.4586.
 
  • #14
ver_mathstats said:
Okay, so when I did e = eig(d), I obtained:
-4.6061
-2.1522
-1.4833
-0.6932
1.9811
2.8707
3.5153
6.3973
8.1851
10.5267
12.4586

and then when I did f=norm(d), I obtained:

12.4586.

Would that just be it?
Yes. 12.4586 is right.

It's worth mentioning that I could eyeball your matrix and know the dominant eigenvalue is positive (why? non-negative matrix with a connected graph underneath -> Perron-Frobenius theory)
 
  • #15
The docs StoneTemplePython linked says:

[V,D] = eig(A) returns diagonal matrix D of eigenvalues and matrix V whose columns are the corresponding right eigenvectors, so that A*V = V*D

It would seem you should have gotten your desired eigenvalues in the diagonal matrix D. Perhaps there was something wrong with the argument x you gave to the command [P D] = eig(x)
 
  • #16
StoneTemplePython said:
Yes. 12.4586 is right.

It's worth mentioning that I could eyeball your matrix and know the dominant eigenvalue is positive (why? non-negative matrix with a connected graph underneath -> Perron-Frobenius theory)
Okay, thank you. And okay, I understand. That other method was way more straightforward than what I had initially done.
 
  • #17
The Electrician said:
The docs StoneTemplePython linked says:

[V,D] = eig(A) returns diagonal matrix D of eigenvalues and matrix V whose columns are the corresponding right eigenvectors, so that A*V = V*D

It would seem you should have gotten your desired eigenvalues in the diagonal matrix D. Perhaps there was something wrong with the argument x you gave to the command [P D] = eig(x)
I definitely think I did something wrong; I am trying to find my mistake right now. Thank you for the help, though.
 
  • #18
ver_mathstats said:
Okay, thank you. And okay, I understand. That other method was way more straightforward than what I had initially done.

just be careful with the norm command. Singular values are always non-negative and in general

##\sigma_{max} \geq \big \vert \lambda\big \vert_{max} \geq \lambda_{max}##

it's just that I knew this was an equality in both cases here because your matrix is real symmetric, and then by Perron-Frobenius.

edit:
(a technical nit I suppose is that the final leg of the inequality is only well defined if the eigenvalues are real, so the way that this is shown is perhaps double dipping on the fact that the matrix here is real symmetric...)
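The strict case of that inequality is easy to exhibit with a hypothetical non-symmetric matrix (not the matrix from this problem): a nilpotent matrix has only zero eigenvalues, yet its 2-norm is positive, so norm() would not give the dominant eigenvalue there.

```python
import numpy as np

# A nilpotent (hence non-normal) 2x2 matrix: every eigenvalue is 0,
# yet its largest singular value -- the operator 2-norm -- is 1.
B = np.array([[0.0, 1.0],
              [0.0, 0.0]])

print(np.max(np.abs(np.linalg.eigvals(B))))  # largest |eigenvalue|: 0
print(np.linalg.norm(B, 2))                  # 2-norm: 1
```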
 
  • #19
StoneTemplePython said:
just be careful with the norm command. Singular values are always non-negative and in general

##\sigma_{max} \geq \big \vert \lambda\big \vert_{max} \geq \lambda_{max}##

it's just that I knew this was an equality in both cases here because your matrix is real symmetric, and then by Perron-Frobenius.
Okay yes I will keep that in mind when using it. Thank you very much for the help.
 
