Recent content by akerman
-
MHB Master algorithm design and upper bound proof
All I have right now is the initial setting where we do not have $k$ classes but just use $\{0,1\}$. For this setting I have that the mistake count or loss for $S$ (the online sequence of data) is: $Loss(S) = \sum_{t=1}^{m} |y_{t} - \hat{y}_{t}|$ From there I don't know how to progress or solve this...- akerman
- Post #3
- Forum: Programming and Computer Science
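The loss above can be made concrete with a short sketch. This is a minimal, hypothetical setup (the `majority_predict` rule is an illustrative stand-in, not the algorithm from the exam question): the learner predicts, then sees the true label, and the cumulative 0/1 loss is exactly $\sum_t |y_t - \hat{y}_t|$ since both labels are in $\{0,1\}$.

```python
# Hypothetical sketch of the online {0,1} setting:
# Loss(S) = sum_t |y_t - yhat_t| accumulated round by round.

def online_loss(sequence, predict):
    """Run the online protocol: predict, observe the true label, tally loss."""
    loss = 0
    history = []
    for x, y in sequence:
        y_hat = predict(history, x)
        loss += abs(y - y_hat)      # 0/1 loss since y, y_hat are in {0, 1}
        history.append((x, y))      # the learner then sees the true label
    return loss

def majority_predict(history, x):
    """Illustrative rule: predict the majority label seen so far (ties -> 0)."""
    ones = sum(y for _, y in history)
    return 1 if ones > len(history) - ones else 0

S = [(None, 1), (None, 1), (None, 0), (None, 1)]
print(online_loss(S, majority_predict))  # 2
```

The mistake-bound analysis asked for in such questions usually bounds this quantity for a specific `predict` rule against the best expert or hypothesis in hindsight.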
-
MHB Master algorithm design and upper bound proof
Hello, I am currently preparing myself for exams and I have a past exam question which I can't solve. This question concerns online learning and the following picture illustrates it: Is anyone able to help me out and propose a solution to this question?- akerman
- Thread
- Algorithm Bound Design Master Proof Upper bound
- Replies: 2
- Forum: Programming and Computer Science
-
MHB How to Solve Condition Number and LU Decomposition Problems?
For (c), would it be enough to give the proof that the product of two lower triangular matrices is still lower triangular, and the same for upper triangular matrices?- akerman
- Post #3
- Forum: Linear and Abstract Algebra
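A quick numeric sanity check (not a substitute for the proof) of the claim in that post: the entries below the diagonal of a product of lower triangular matrices vanish because $(LM)_{ij} = \sum_k L_{ik} M_{kj}$ and, for $i < j$, every term has either $k > i$ (so $L_{ik} = 0$) or $k \le i < j$ (so $M_{kj} = 0$). The matrices here are arbitrary examples.

```python
# Sanity check: the product of two lower triangular matrices
# is again lower triangular.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_lower_triangular(A):
    # all entries strictly above the diagonal must be zero
    return all(A[i][j] == 0 for i in range(len(A)) for j in range(i + 1, len(A)))

L1 = [[2, 0, 0], [1, 3, 0], [4, 5, 6]]
L2 = [[1, 0, 0], [7, 2, 0], [0, 1, 3]]
P = matmul(L1, L2)
print(is_lower_triangular(P))  # True
```

The upper triangular case is the same argument with the inequality reversed (or follows by transposing).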
-
MHB How to Solve Condition Number and LU Decomposition Problems?
I have two questions; one of them I have solved, but a bit differently, and the second is something I need more help with. The first question I have solved previously, but differently, and I am not too sure how it should be solved in part (b) given above. Here is my similar solution. Can you comment...- akerman
- Thread
- Condition
- Replies: 3
- Forum: Linear and Abstract Algebra
-
MHB Answer: Newton-Cotes Formula: Proving $\omega_j=\omega_{n-j}$ & $(b-a)$ Sum
OK, that makes sense. What about the second proof, that $\sum_j\omega_j=(b-a)$? I need to prove it considering the quadrature formula (the constant function 1). -
MHB Answer: Newton-Cotes Formula: Proving $\omega_j=\omega_{n-j}$ & $(b-a)$ Sum
We interpolate $f$ using a Lagrange interpolation polynomial of the form $$ p_n(x)=\sum_{k=0}^nL_k(x)f(x_k). $$ We obtain $$ \int_a^bf(x)\,dx\approx \int_a^b\sum_{k=0}^nL_k(x)f(x_k)\,dx=\sum_{k=0}^nf(x_k)\int_a^bL_k(x)\,dx=:\sum_{k=0}^nf(x_k)\omega_k, $$ where the $\omega_k:=\int_a^bL_k(x)\,dx$ are... -
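The definition $\omega_k = \int_a^b L_k(x)\,dx$ can be computed exactly in rational arithmetic, which lets one check both claimed identities numerically before proving them. This is an illustrative sketch for the closed Newton-Cotes rule on equally spaced nodes; the function name and interval defaults are my own choices.

```python
from fractions import Fraction

def newton_cotes_weights(n, a=Fraction(0), b=Fraction(1)):
    """Exact weights w_k = integral_a^b L_k(x) dx for the closed
    Newton-Cotes rule on n+1 equally spaced nodes."""
    h = (b - a) / n
    nodes = [a + k * h for k in range(n + 1)]

    def mul_linear(p, c):
        # polynomial product p(x) * (x - c), coefficients low-to-high
        out = [Fraction(0)] * (len(p) + 1)
        for i, coef in enumerate(p):
            out[i + 1] += coef
            out[i] -= c * coef
        return out

    weights = []
    for k in range(n + 1):
        num = [Fraction(1)]          # numerator of L_k, expanded
        denom = Fraction(1)
        for j in range(n + 1):
            if j != k:
                num = mul_linear(num, nodes[j])
                denom *= nodes[k] - nodes[j]
        # integrate the expanded numerator monomial by monomial
        integral = sum(c * (b ** (i + 1) - a ** (i + 1)) / (i + 1)
                       for i, c in enumerate(num))
        weights.append(integral / denom)
    return weights

w = newton_cotes_weights(2)   # Simpson's rule on [0, 1]: weights 1/6, 2/3, 1/6
print(sum(w))                 # 1, i.e. b - a
print(w == w[::-1])           # True: the symmetry w_j == w_{n-j}
```

The symmetry check `w == w[::-1]` is exactly the $\omega_j=\omega_{n-j}$ statement, and `sum(w)` equals $b-a$ because the rule is exact for the constant function $1$.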
MHB Answer: Newton-Cotes Formula: Proving $\omega_j=\omega_{n-j}$ & $(b-a)$ Sum
So in this context the $\omega_j$ are weights; actually, I think they are called quadrature weights. So I would like to show that for corresponding weights it holds that $\omega_j = \omega_{n-j}$. I am sure that for this proof odd functions need to be considered on the interval from $-1$ to $1$. Does that explain it any better? -
MHB Answer: Newton-Cotes Formula: Proving $\omega_j=\omega_{n-j}$ & $(b-a)$ Sum
The first proof should consider odd functions on the interval from $-1$ to $1$, and I would like to show that $\omega_j=\omega_{n-j}$. The second should consider the quadrature formula (the constant function 1), based on which I would like to prove $\sum_j\omega_j=(b-a)$, where the $\omega_j$ are the weights. However, I don't know how... -
MHB Answer: Newton-Cotes Formula: Proving $\omega_j=\omega_{n-j}$ & $(b-a)$ Sum
I have been given this question and I have no idea how to answer it. I know that the answer will contain two small proofs, one of which uses the quadrature formula. So I have been asked to show that it holds that $\omega_j=\omega_{n-j}$ and that $\sum_j{\omega_j}=(b-a)$. Knowing that Newton... -
MHB Finding smallest magnitude eigenvalue
So just by using $A^{-1}$, the inverse, I should be getting the smallest magnitude eigenvalue. But are you sure the rest of the calculations won't change?- akerman
- Post #8
- Forum: Linear and Abstract Algebra
-
MHB Finding smallest magnitude eigenvalue
So, I have found a useful solution for unsymmetric matrices, but it finds the $k$ largest eigenvalues. So I have this: for $k=1,2,\dots$ $\quad Z^{(k)} = AQ^{(k-1)}$ $\quad Q^{(k)}R = Z^{(k)}$ (QR decomposition) $\quad A^{(k)} = [Q^{(k)}]^HAQ^{(k)}$ end It finds the largest eigenvalues. How...- akerman
- Post #6
- Forum: Linear and Abstract Algebra
-
MHB Finding smallest magnitude eigenvalue
Re: finding smallest magnitude eigenvalue I am still searching for something; it's exciting stuff, but it's very difficult to actually find an outline of the whole process. So do you think this method will work for unsymmetric matrices? I found this; it's the best so far. The inverse power...- akerman
- Post #5
- Forum: Linear and Abstract Algebra
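The inverse power iteration mentioned in that post does work for unsymmetric matrices (as long as a unique smallest-magnitude eigenvalue exists and the matrix is invertible): each step applies $A^{-1}$ by solving $Az = q$, so the iteration converges to the dominant eigenvector of $A^{-1}$, i.e. the eigenvector of $A$ with smallest-magnitude eigenvalue. A minimal sketch, assuming a small dense matrix; the example matrix, starting vector, and iteration count are my own illustrative choices.

```python
# Inverse power iteration for the smallest-magnitude eigenvalue.
# Works for unsymmetric matrices; each step solves A z = q rather
# than forming A^{-1} explicitly.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]   # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def smallest_eigenvalue(A, iters=50):
    q = [1.0, 0.0]                      # starting vector (must not be orthogonal
    for _ in range(iters):              # to the target eigenvector)
        z = solve(A, q)                 # one power step on A^{-1}
        norm = sum(x * x for x in z) ** 0.5
        q = [x / norm for x in z]
    # Rayleigh quotient q^T A q of the unit vector q estimates the eigenvalue
    return sum(qi * yi for qi, yi in zip(q, matvec(A, q)))

A = [[4.0, 1.0], [2.0, 3.0]]            # unsymmetric; eigenvalues are 5 and 2
print(round(smallest_eigenvalue(A), 6)) # 2.0
```

For a general unsymmetric matrix the more robust variant is shifted inverse iteration, solving $(A - \sigma I)z = q$ with a shift $\sigma$ near the wanted eigenvalue; the unshifted version above is the $\sigma = 0$ case.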
-
MHB Finding smallest magnitude eigenvalue
Re: finding smallest magnitude eigenvalue I think the iteration method can only be used for symmetric matrices, but I might be wrong... So in fact my question is: what would be the fastest and best process to execute in order to find the smallest-magnitude eigenvalue?- akerman
- Post #3
- Forum: Linear and Abstract Algebra
-
MHB Finding smallest magnitude eigenvalue
Hello, I have been asked to implement an algorithm which will find the smallest magnitude eigenvalue of a matrix. I have already seen many implementations of it. However, all of them are for symmetric matrices. My problem is that I need to do it for non-symmetric matrices, which makes it...- akerman
- Thread
- Magnitude Value
- Replies: 7
- Forum: Linear and Abstract Algebra
-
MHB How to Prove Certain Matrix Operations and Understand Their Properties?
Oh yes, thank you. I just added it incorrectly, but the rest of it is really useful. Thanks again.- akerman
- Post #14
- Forum: Linear and Abstract Algebra