Consistency of non-homogeneous systems of linear equations

Este

Hello,

Is it possible to find out whether a non-homogeneous system of linear equations has:
- a single solution, or
- infinitely many solutions, or
- no solution,
without applying any methods/rules (such as Gaussian elimination, determinants, etc.), just by judging from the values and the number of the given equations and unknowns?

Thanks for your time.
 
I'm not sure what you mean by "judging by the values" etc. All of the methods you want to "not apply" do just that. If you mean "just by looking at the values (of the coefficients?) and the number of given equations and unknowns" without doing any computation at all, then only the obvious holds: if there are fewer equations than unknowns, there cannot be a unique solution (there can be either no solution or infinitely many solutions). If you have the same number of equations as unknowns, or more equations than unknowns, then no solution, a unique solution, or infinitely many solutions are all possible, depending on the determinant of the coefficients.
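Here is a minimal sketch of the square case in Python/NumPy; the matrix A and right-hand side b below are made up purely for illustration:

```python
import numpy as np

# Made-up 3x3 example: square coefficient matrix A and right-hand side b.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [1.0, 0.0,  0.0]])
b = np.array([8.0, 13.0, 2.0])

if not np.isclose(np.linalg.det(A), 0.0):
    # Nonzero determinant: exactly one solution, which solve() returns.
    print("unique solution:", np.linalg.solve(A, b))
else:
    # Zero determinant: either no solution or infinitely many;
    # the determinant alone cannot tell which (that depends on b).
    print("det(A) = 0: no solution or infinitely many")
```

A nonzero determinant settles the question immediately; a zero determinant only rules out uniqueness.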
 
Well, I meant the values of the coefficients. To be more clear, the question I encountered was:
"Without solving, determine which of the following linear systems have a solution and which do not."
...list of systems goes here..

I guess I'm allowed to calculate the determinant after all, as it doesn't solve the system by itself.

Well, I read this in an online preview of some book:
Nonhomogeneous system:
(a) If ##\det(A) \neq 0##, a unique solution exists.
(b) If ##\det(A) = 0##, either no solution exists or infinitely many solutions exist.
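For example, the system ##x + y = 1##, ##2x + 2y = 2## has infinitely many solutions, while ##x + y = 1##, ##2x + 2y = 3## has none, even though both have the same coefficient matrix with determinant ##0##.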

So even the determinant of the matrix doesn't classify the system into "has a solution" or "has no solution". Is there a rule/method that can do that without solving the system?

Thanks for your response and I apologize for the ambiguity.
 
Yes, you can determine whether or not a system of equations has a solution by looking at the determinant, which is NOT "solving the equation".

But be careful: if Ax = b and A is a coefficient matrix with zero determinant, you know the system cannot have a unique solution. Whether the system has no solution or an infinite number of solutions, however, depends on b as well as A. I think the simplest way to answer such a question is to row-reduce the augmented matrix (the coefficient matrix A with b as an added column) to upper triangular form (all 0's below the main diagonal). If there are no "all zero" rows, you know Ax = b has a unique solution no matter what b is. If A reduces to some "all zero" rows, there are infinitely many solutions if all the "b" entries in those rows also become 0, and no solution if any of those "b" entries are not 0.

That is not completely "solving" the system since you don't clear the upper triangle.
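A rough sketch of that check in Python, using SymPy's rref() on the augmented matrix (the matrices are invented for illustration; rref() reduces fully rather than stopping at upper triangular form, but the zero-row test works the same way):

```python
from sympy import Matrix

# Made-up singular coefficient matrix (second row is twice the first).
A = Matrix([[1, 1],
            [2, 2]])

for b in (Matrix([1, 2]), Matrix([1, 3])):
    aug, pivots = A.row_join(b).rref()   # row-reduce [A | b]
    # Inconsistent iff some row reduces to [0 ... 0 | nonzero],
    # i.e. the last (augmented) column contains a pivot.
    if A.cols in pivots:
        print(b.T, "-> no solution")
    elif len(pivots) < A.cols:
        print(b.T, "-> infinitely many solutions")
    else:
        print(b.T, "-> unique solution")
```

With b = (1, 2) the zero row of [A|b] also has a 0 in the b column, so there are infinitely many solutions; with b = (1, 3) it does not, so there is no solution.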
 
Thanks a lot HallsofIvy! You've been very helpful. See you in another question :))
 
You mentioned the word "determinant": if the system is NOT square, you cannot talk about a determinant, hence you can't use the determinant rule to determine whether a solution exists. The best (perhaps only) scheme is to reduce the system [A|b] to RREF.
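An equivalent way to read off the same information, which also covers non-square systems, is to compare the rank of A with the rank of [A|b] (the Rouché–Capelli criterion). A hedged NumPy sketch with an invented 2-equation, 3-unknown system:

```python
import numpy as np

# Made-up non-square system: 2 equations, 3 unknowns.
A = np.array([[1.0, 2.0, -1.0],
              [2.0, 4.0, -2.0]])   # second row is twice the first
b = np.array([3.0, 7.0])           # right-hand side chosen to be inconsistent

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
n_unknowns = A.shape[1]

if rank_A < rank_Ab:
    print("no solution")
elif rank_A == n_unknowns:
    print("unique solution")
else:
    print("infinitely many solutions")
```

Here rank(A) = 1 but rank([A|b]) = 2, so the sketch reports no solution; replacing 7.0 with 6.0 would make the ranks equal and give infinitely many solutions.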
 
