Checking Solutions to Linear Algebra Problems

In summary, the conversation discusses the accuracy of solutions to two linear algebra problems. The first involves a basis-extension proof using Zorn's Lemma and transfinite induction; the second concerns the definition of L(V1, ..., Vk; W) and the notion of natural isomorphism. Respondents summarize each problem and point out several errors in the posted solutions.
  • #1
andytoh
Could someone please double-check the accuracy of my solutions to two linear algebra problems. Thank you.
 
  • #2
Problem 1.5:

There are definite problems with the first solution. First, you can't just pick I0 contained in G; it must also contain I. Also, you say V is partially ordered by set inclusion, but V is the vector space, so this isn't right. In general, the proof is a little confusing and overly long.

Here is a cleaner argument. Let B be the collection of linearly independent sets between I and G. This is nonempty (since it contains I) and is partially ordered by inclusion. If C is a totally ordered subset of B, we show that the union of the elements of C, denoted W, is an upper bound of C in B. As the union of the elements of C, W clearly includes every element of C, so it is an upper bound. Is it an element of B? Since each element of C lies between I and G, so does W. And W is linearly independent: if we could pick some finite collection of vectors in W with a non-trivial linear combination equal to 0, then by the total ordering of C, some single element of C would contain all of these vectors, making that element of C linearly dependent, a contradiction.

By Zorn's Lemma, B has a maximal element; call it M. If M spans V, then we're done. Otherwise, there is some v in G \ span(M). But then M ∪ {v} lies between I and G, is linearly independent (i.e., it is an element of B), and properly contains M, contradicting the maximality of M. So G \ span(M) is empty, hence span(M) contains span(G) = V.
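For reference, the two objects in the argument above can be written out explicitly (B is the poset fed to Zorn's Lemma, C is an arbitrary chain in it, and W is its proposed upper bound):

[tex]\mathcal{B} = \{\, J \;:\; I \subseteq J \subseteq G,\ J \text{ linearly independent} \,\}, \qquad W = \bigcup_{J \in \mathcal{C}} J.[/tex]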

Your proof seems to do something like the above in parts, but it is missing some steps while also doing some unnecessary extra work. One definite error: you don't show that EVERY totally ordered subset has an upper bound; you construct one particular totally ordered subset and then attempt to prove that it has an upper bound. Moreover, you don't actually show that it has an upper bound; you just assert that it does because it is contained in G.

Also, you don't explicitly say what collection you're applying Zorn's Lemma to. In my proof, I defined a collection B, and so I get a maximal element in B. What is your maximal element a maximal element of?

Problem 2.14:

Could you define L(V1, ..., Vk; W)?
 
  • #3
Thank you so much! L(V1, ..., Vk; W) is the set of all multilinear mappings from V1 × V2 × ... × Vk to W.
 
  • #4
First problem:

You're actually mixing paradigms here. You need to invoke transfinite induction to construct the chain

[tex]I_0 \subset I_1 \subset \cdots \subset I_\alpha \subset \cdots \subset G.[/tex]

(There's no reason to think G \ I is countable.) But once you have it, you switch over to Zorn's lemma. I think your proof would flow much better if you stuck with one or the other: either use transfinite induction to construct your basis directly, or gear your proof towards Zorn's lemma right from the beginning. (Either way, your proof will be very similar to the proof that any vector space has a basis.)

Incidentally, the above chain is not a chain of linearly independent sets: it contains G!


For the second problem, you can make it notationally similar by noticing that the general case is essentially an immediate consequence of the case that p = 2, s = 1, i1 = 1, and i2 = 2.


What is your definition of natural in natural isomorphism? Is it meant informally, or is there actually a technical condition that it implies? In the context of category theory, the isomorphism

L(A, B ; C) ---> L(A ; L(B, C))

is natural in A iff, for any map A --> A', the following diagram is commutative.

Code:
L(A, B ; C)  ------> L(A ; L(B, C))
     ^                     ^
     |                     |
     |                     |
     |                     |
     |                     |
L(A', B ; C) ------> L(A' ; L(B, C))

Where the horizontal maps are the natural isomorphisms, and the vertical maps are the linear maps induced by the map A --> A'.

And, we have similar diagrams for naturality in B and C. (The vertical arrows point downwards for naturality in C)
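Concretely, if we write Phi_A for the horizontal isomorphism (a name chosen here just for illustration; it is the currying map, so (Phi_A T)(a)(b) = T(a, b)) and f : A --> A' for the inducing linear map, commutativity of the square above says that for every T in L(A', B; C),

[tex]\Phi_A\bigl(T \circ (f \times \mathrm{id}_B)\bigr) = \Phi_{A'}(T) \circ f,[/tex]

since both sides send a to the map b |--> T(f(a), b).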
 
  • #5
Natural isomorphism means an isomorphism independent of the choice of a basis. So I take it that I screwed up the first question but got the second question right (or almost right)?
 
  • #6
Your solution to 2.14 looks good.
 

1. What does it mean to "check" a solution to a linear algebra problem?

Checking a solution to a linear algebra problem means verifying that the proposed solution satisfies all of the given equations or conditions in the problem. This can involve substituting the solution into the equations and evaluating both sides to ensure they are equal, or plugging the solution into a matrix equation and carrying out the matrix operations to confirm the result.
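As a minimal sketch of this substitution check (the system and the candidate solutions below are illustrative, not taken from the thread):

```python
# Verify a candidate solution x to the system A x = b by substitution.

def mat_vec(A, x):
    """Multiply matrix A (given as a list of rows) by vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def check_solution(A, x, b, tol=1e-9):
    """Return True if A x equals b entrywise, up to a small tolerance."""
    return all(abs(lhs - rhs) <= tol for lhs, rhs in zip(mat_vec(A, x), b))

A = [[2.0, 1.0],
     [1.0, 3.0]]
b = [5.0, 10.0]

x_good = [1.0, 3.0]  # 2*1 + 1*3 = 5 and 1*1 + 3*3 = 10
x_bad = [1.0, 2.0]   # 2*1 + 1*2 = 4, which is not 5

print(check_solution(A, x_good, b))  # True
print(check_solution(A, x_bad, b))   # False
```

The tolerance parameter matters when the entries are floating-point numbers, since exact equality can fail due to rounding even for a correct solution.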

2. Why is it important to check solutions to linear algebra problems?

It is important to check solutions to linear algebra problems because it ensures that the solution is correct and satisfies all of the given conditions. This is crucial in fields such as engineering, physics, and computer science where small errors in calculations can lead to significant consequences.

3. What are some common mistakes to watch out for when checking solutions to linear algebra problems?

Some common mistakes to watch out for when checking solutions to linear algebra problems include algebraic errors, incorrect application of matrix operations, and missing or transposed entries in a matrix. It is important to double-check all calculations and ensure that every step is carried out correctly.

4. Can a solution to a linear algebra problem be correct even if it does not pass the "check"?

No, a solution to a linear algebra problem must pass the "check" in order to be considered correct. If the solution does not satisfy all of the given equations or conditions, then it is not a valid solution to the problem. However, it is possible for a solution to pass the check and still be incorrect if the initial problem was set up incorrectly.

5. Are there any tips or strategies for effectively checking solutions to linear algebra problems?

One tip for effectively checking solutions to linear algebra problems is to work backwards from the solution. This means starting with the solution and substituting it into the equations or performing matrix operations to see if it satisfies all of the conditions. It is also helpful to break the problem down into smaller steps and check each step along the way to catch any errors early on.
