Completeness of a basis function

In summary, the main topic of discussion is the concept of completeness in relation to a given space and metric. The conversation touches on the different ways completeness can be achieved, such as through topological or algebraic completion, and the need for more information in order to pin down the specific context and definition of completeness being used. The conversation also discusses the application of completeness in solving an ODE, where it is important that any ##L^2(0,1)## function in the given space be expandable as a linear combination of chosen basis functions, and a possible connection between completeness and the process of recombining trial functions.
  • #1
member 428835
Hi PF!

I'm somewhat new to the concept of completeness, but from what I understand, a particular basis function is complete in a given space if it can create any other function in that space. Is this correct?

I read that the set of polynomials is not complete (unsure of the space, since Taylor series can represent all continuous functions) but that Legendre polynomials are complete. Can anyone correct, finesse, or provide a working example of this?

Thanks!
 
  • #2
joshmccraney said:
Hi PF!

I'm somewhat new to the concept of completeness, but from what I understand, a particular basis function is complete in a given space if it can create any other basis function in that space. Is this correct?
No.
At least your question isn't specific enough to be answered clearly. Completeness usually relates to a certain set, and there is more than one way to complete such a set. E.g. ##\mathbb{R}## is the topological completion of ##\mathbb{Q}##, whereas ##\mathbb{C}## is the algebraic completion (better: closure) of ##\mathbb{R}##.
I read that the set of polynomials is not complete (unsure of the space, since Taylor series can represent all continuous functions) ...
Taylor series are series, not polynomials.
... but that Legendre polynomials are complete. Can anyone correct, finesse, or provide a working example of this?

Thanks!
What you wrote sounds to be the completion of a few linear independent functions to a basis in a given space; or the change to an orthonormal basis; or the description of the solution space to some equations. This usage of completion is misleading, at least if not explicitly stated what has to be completed to achieve what.
 
  • #3
Thanks for the response.

fresh_42 said:
What you wrote sounds to be the completion of a few linear independent functions to a basis in a given space; or the change to an orthonormal basis; or the description of the solution space to some equations. This usage of completion is misleading, at least if not explicitly stated what has to be completed to achieve what.
Hmmmm so how would you interpret the statement "complete basis functions"? Application here is with a functional, and evidently the input functions must be complete (nothing further given, I guess in physics/engineering it's assumed the reader knows the space and norm)?
 
  • #4
joshmccraney said:
so how would you interpret the statement "complete basis functions"?
This depends on the context. In (this) topological forum, I would assume a topological completeness, which is the standard usage of the term. It means to expand a given metric space with all possible limits of its Cauchy sequences. Functions of various types form topological, metric spaces, like smooth functions, or square integrable functions, or just continuous functions. So my first question would be: Which space? Which metric? As you see by my examples, there is no unique answer to this, even for engineers. Smooth and continuous are both important and very different. In quantum physics, I would assume the square integrable Lebesgue space.

However, you mentioned a single function to be completed, which doesn't fit in here. You also mentioned Legendre polynomials, which point to some differential equation, in which case the solution space might be meant and completion would indicate a basis of said space. "The input functions must be complete" doesn't make sense to me, except for "the [set of all] input functions must be complete [in order to ...]" and the terms in brackets are clear by context. In any case, I would ask for the area the formulation is taken from. Completion has an inherent meaning of adding something new to something given, so two questions have to be answered beforehand: what is given, and how is new defined. In the usual sense, the given is some metric space and the new are limits of Cauchy sequences. Since this appears not to be the case here, more information (context) is definitely needed.
 
  • #5
Thanks for the response, and sorry it's taken me so long to reply. I'll do my best to give you precise insight on what I'm doing.

fresh_42 said:
So my first question would be: Which space? Which metric?
I am trying to solve an ODE that looks like this $$L(\phi_n(s)) = \lambda \phi(s):s\in[0,1]$$ where ##L(\phi_n)\equiv \phi_n''(s)+\phi_n(s)## and the subscript ##n## denotes a normal derivative to a surface (rather than go into details here, let's just think of that subscript as being one more derivative with some extra complications). I can't solve the ODE exactly, but I can solve the weak formulation, which looks like this $$(L(\phi_n),\phi_n) = \lambda(\phi,\phi_n) : (f,g) = \int_0^1 fg\, ds.$$ I will solve the weak form through an eigenfunction expansion, so I'll let ##\phi = f_i## for some predetermined ##f_i##, so we could think of ##f_i = \sin(i \pi x)## or perhaps ##x^i(1-x)##. Then we see the weak formulation is now an algebraic eigenvalue problem with matrices.
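For concreteness, here is a minimal sketch of the Ritz procedure on the textbook model problem ##-\phi'' = \lambda \phi##, ##\phi(0)=\phi(1)=0## (exact eigenvalues ##n^2\pi^2##), not the thread's operator with the normal-derivative subscript. The weak form ##\int_0^1 \phi' v'\,ds = \lambda \int_0^1 \phi v\,ds## with trial functions ##f_i = x^i(1-x)## reduces to the generalized matrix eigenvalue problem ##Kc = \lambda Mc##:

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import eigh

# Model problem: -phi'' = lambda * phi on (0,1), phi(0) = phi(1) = 0.
# Trial functions f_i(x) = x^i (1 - x), i = 1..N, satisfy both BCs.
N = 4

def f(i, x):
    return x**i * (1 - x)

def df(i, x):  # d/dx [x^i (1 - x)] = i x^(i-1) - (i+1) x^i
    return i * x**(i - 1) - (i + 1) * x**i

idx = range(1, N + 1)
# Weak form: integral(phi' v') = lambda * integral(phi v)  ->  K c = lambda M c
K = np.array([[quad(lambda x: df(i, x) * df(j, x), 0, 1)[0] for j in idx] for i in idx])
M = np.array([[quad(lambda x: f(i, x) * f(j, x), 0, 1)[0] for j in idx] for i in idx])

lam = eigh(K, M, eigvals_only=True)
print(lam[0])  # close to pi^2 ~ 9.8696, the exact lowest eigenvalue
```

The Ritz values approach the true eigenvalues from above as the trial space grows; even four polynomial trial functions resolve the lowest mode very accurately here.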

I didn't mention BC's and this is where my question of completeness enters. With some BC's it's obvious how to formulate the trial functions, such as ##\phi(0)=\phi(1) = 0##; in this case we can let ##f_i = x(1-x)^i##. However, in general it's not so simple how to build the BCs into the function space, so I have a technique, where basically I superimpose combinations of my trial functions to automatically solve the BCs.

For example, if I'm trying to solve the BCs ##\phi(0)=\phi(1)=0## and I'm using 3 trial functions ##f_i = x^i##, then I need to take linear combinations of ##\{x,x^2,x^3\}## to satisfy the BCs, perhaps ##\{x-x^2, x-x^3\}##. In this scenario, these two new functions will be my trial functions, specifically ##f_1 = x-x^2## and ##f_2 = x-x^3##.
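The recombination step just described can be automated as a nullspace computation: collect the boundary values of the original trial functions into a constraint matrix and take its nullspace. A minimal sketch, using the specific functions ##x, x^2, x^3## and the remaining constraint ##\phi(1)=0## from the example above:

```python
import numpy as np
from scipy.linalg import null_space

# Original trial functions f_i(x) = x^i, i = 1, 2, 3; each already has f_i(0) = 0.
# Remaining constraint phi(1) = 0: c1 x + c2 x^2 + c3 x^3 needs c1 + c2 + c3 = 0.
A = np.array([[1.0, 1.0, 1.0]])  # row of endpoint values f_i(1)
C = null_space(A)                # columns = coefficient vectors of admissible combinations
print(C.shape)                   # (3, 2): one constraint removes one function

x = np.linspace(0, 1, 5)
vals = np.vander(x, 4, increasing=True)[:, 1:]  # columns x, x^2, x^3
recombined = vals @ C                           # sampled recombined trial functions
print(np.abs(recombined[-1]).max())             # ~0: both combinations satisfy phi(1) = 0
print(np.linalg.matrix_rank(C))                 # 2: the recombined set is still independent
```

The particular pair returned (an orthonormal nullspace basis) will differ from ##\{x-x^2,\, x-x^3\}##, but it spans the same admissible subspace.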

All of this for my final question: looking at the weak formulation, it seems what I am looking for is assurance that any ##L^2(0,1)## function can be expanded as a linear combination of the chosen (linearly independent) trial functions.

I should specify that I am using a computer algebra package to determine the ways I should superimpose my selected trial functions. On problems where I do not have to recombine the trial functions, I get good results. However, sometimes when I recombine basis functions everything goes wrong. Do you think this is at all related to the completeness of the trial functions, once recombined?
 
  • #7
fresh_42 said:
This depends on the context. In (this) topological forum, I would assume a topological completeness, which is the standard usage of the term. It means to expand a given metric space with all possible limits of its Cauchy sequences. Functions of various types form topological, metric spaces, like smooth functions, or square integrable functions, or just continuous functions. So my first question would be: Which space? Which metric? As you see by my examples, there is no unique answer to this, even for engineers. Smooth and continuous are both important and very different. In quantum physics, I would assume the square integrable Lebesgue space.

However, you mentioned a single function to be completed, which doesn't fit in here. You also mentioned Legendre polynomials, which point to some differential equation, in which case the solution space might be meant and completion would indicate a basis of said space. "The input functions must be complete" doesn't make sense to me, except for "the [set of all] input functions must be complete [in order to ...]" and the terms in brackets are clear by context. In any case, I would ask for the area the formulation is taken from. Completion has an inherent meaning of adding something new to something given, so two questions have to be answered beforehand: what is given, and how is new defined. In the usual sense, the given is some metric space and the new are limits of Cauchy sequences. Since this appears not to be the case here, more information (context) is definitely needed.
I believe some of these function spaces are not metrizable.
 
  • #8
joshmccraney said:
Thanks for the response, and sorry it's taken me so long to reply. I'll do my best to give you precise insight on what I'm doing.I am trying to solve an ODE that looks like this $$L(\phi_n(s)) = \lambda \phi(s):s\in[0,1]$$ where ##L(\phi_n)\equiv \phi_n''(s)+\phi_n(s)## and the subscript ##n## denotes a normal derivative to a surface (rather than go into details here, let's just think of that subscript as being one more derivative with some extra complications). I can't solve the ODE exactly, but I can solve the weak formulation, which looks like this $$(L(\phi_n),\phi_n) = \lambda(\phi,\phi_n) : (f,g) = \int_0^1 fg\, ds.$$ I will solve the weak form through an eigenfunction expansion, so I'll let ##\phi = f_i## for some predetermined ##f_i##, so we could think of ##f_i = \sin(i \pi x)## or perhaps ##x^i(1-x)##. Then we see the weak formulation is now an algebraic eigenvalue problem with matrices.

I didn't mention BC's and this is where my question of completeness enters. With some BC's it's obvious how to formulate the trial functions, such as ##\phi(0)=\phi(1) = 0##; in this case we can let ##f_i = x(1-x)^i##. However, in general it's not so simple how to build the BCs into the function space, so I have a technique, where basically I superimpose combinations of my trial functions to automatically solve the BCs.

For example, if I'm trying to solve the BCs ##\phi(0)=\phi(1)=0## and I'm using 3 trial functions ##f_i = x^i##, then I need to take linear combinations of ##\{x,x^2,x^3\}## to satisfy the BCs, perhaps ##\{x-x^2, x-x^3\}##. In this scenario, these two new functions will be my trial functions, specifically ##f_1 = x-x^2## and ##f_2 = x-x^3##.

All of this for my final question: looking at the weak formulation, it seems what I am looking for is assurance that any ##L^2(0,1)## function can be expanded as a linear combination of the chosen (linearly independent) trial functions.

I should specify that I am using a computer algebra package to determine the ways I should superimpose my selected trial functions. On problems where I do not have to recombine the trial functions, I get good results. However, sometimes when I recombine basis functions everything goes wrong. Do you think this is at all related to the completeness of the trial functions, once recombined?
The space of polynomials, as you said, is not complete because, e.g., the truncated Taylor series of ##e^x## will converge to (the non-polynomial) ##e^x##. So you have a sequence of polynomials that converges to a non-polynomial. It is a nice exercise to show ##e^x## is not a polynomial. Edit: Sorry, this is supposed to address your first post and not the last one.
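A quick numerical illustration of this point: the Taylor partial sums of ##e^x## are all polynomials, yet they converge uniformly on ##[0,1]## to something that is not one. The grid and the chosen degrees below are arbitrary:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x about 0."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

xs = [i / 100 for i in range(101)]  # sample grid on [0, 1]
errs = {}
for n in (2, 5, 10):
    errs[n] = max(abs(math.exp(x) - taylor_exp(x, n)) for x in xs)
    print(n, errs[n])  # sup-norm error shrinks; the limit e^x is not a polynomial
```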
 
  • #9
joshmccraney said:
Hi PF!

I'm somewhat new to the concept of completeness, but from what I understand, a particular basis function is complete in a given space if it can create any other function in that space. Is this correct?

I read that the set of polynomials is not complete (unsure of the space, since Taylor series can represent all continuous functions) but that Legendre polynomials are complete. Can anyone correct, finesse, or provide a working example of this?

Thanks!

The Legendre Polynomials, being polynomials, obviously are not a complete basis.

What is true is that in spherical polar coordinates, eigenfunctions of the laplacian operator depend on the colatitude [itex]\theta \in [0, \pi][/itex] as [itex]P(\cos\theta)[/itex] where [itex]P[/itex] is a polynomial. These polynomials are known as the Legendre polynomials.

Now since we don't care what happens for [itex]\theta \in (\pi, 2\pi)[/itex] we can assume that any function we are interested in is both even and periodic with period [itex]2\pi[/itex]. That means that it can be expanded as a cosine series, so the functions [itex]\cos(n\theta)[/itex] for integer [itex]n \geq 0[/itex] are a complete basis. Using the Legendre polynomials composed with [itex]\cos\theta[/itex] amounts to a change of basis to a set of functions which are orthogonal eigenfunctions of the laplacian operator. It is in this sense that the Legendre polynomials are complete.
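As a concrete check of this completeness claim in the ##L^2## sense, here is a sketch that expands ##f(x) = |x|## (which has a corner, so it lies in no finite polynomial span) in Legendre polynomials and watches the ##L^2## error fall. The Gauss-Legendre quadrature size, grid, and truncation levels are arbitrary choices:

```python
import numpy as np
from scipy.special import eval_legendre

f = lambda x: np.abs(x)  # target in L^2(-1, 1); the corner keeps it out of any finite span

# Legendre coefficients c_l = (2l+1)/2 * integral of f P_l on [-1,1],
# computed with Gauss-Legendre quadrature
nodes, weights = np.polynomial.legendre.leggauss(200)
coeffs = [(2*l + 1) / 2 * np.sum(weights * f(nodes) * eval_legendre(l, nodes))
          for l in range(33)]

xs = np.linspace(-1, 1, 401)
errs = []
for L in (2, 8, 32):
    approx = sum(coeffs[l] * eval_legendre(l, xs) for l in range(L + 1))
    errs.append(np.sqrt(np.mean((approx - f(xs)) ** 2)))
    print(L, errs[-1])  # RMS error decreases: completeness in the L^2 sense
```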
 
  • #10
WWGD said:
The space of polynomials, as you said, is not complete because, e.g., the truncated Taylor series of ##e^x## will converge to (the non-polynomial) ##e^x##. So you have a sequence of polynomials that converges to a non-polynomial.
Yes - what of it?
The Stone-Weierstrass theorem (https://en.wikipedia.org/wiki/Stone–Weierstrass_theorem) states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function.
 
  • #11
  • #12
WWGD said:
Sorry, I think I may have misunderstood the question. But the answer is correct, isn't it? A Cauchy sequence of polynomials may converge to a non-polynomial, so the space of polynomials is not closed.
I agree with that. But the OP mentions complete basis, so I wonder if it is asking about something else.
 
  • #13
FactChecker said:
I agree with that. But the OP mentions complete basis, so I wonder if it is asking about something else.
Yes, my bad, I think I misread the question. Please see my last post in this thread.
 
  • #14
WWGD said:
Yes, my bad, I think I misread the question. Please see my last post in this thread.
I saw that. I don't think it is possible to answer a question about the basis of a space unless the OP is clear which space is being talked about. I don't think that the powers of x (and so the polynomials) can be a complete basis for any set that includes functions where there is a point interior to the domain without a derivative.
 
  • #15
FactChecker said:
I don't think that the powers of x (and so the polynomials) can be a complete basis for any set that includes functions where there is a point interior to the domain without a derivative.
See post #10!
 
  • #16
Svein said:
See post #10!
An arbitrarily close approximation does not make them equal, which is required by a complete basis. In fact, a power series which converges will also have a derivative within its circle of convergence. Suppose a continuous function has no derivative at a point within (0,1). A power series can not equal that function on (0,1). Therefore, the functions ##f_n(x) = x^n, n\in N## (##N##, non-negative integers) can not be a complete basis for the continuous functions on (0,1).
 
  • #17
FactChecker said:
An arbitrarily close approximation does not make them equal, which is required by a complete basis. In fact, a power series which converges will also have a derivative within its circle of convergence. Suppose a continuous function has no derivative at a point within (0,1). A power series can not equal that function on (0,1). Therefore, the functions ##f_n(x) = x^n, n\in N## (##N##, non-negative integers) can not be a complete basis for the continuous functions on (0,1).
So you are thinking of a Hamel basis and I guess the polys are a schauder basis?
 
  • #18
WWGD said:
So you are thinking of a Hamel basis and I guess the polys are a schauder basis?
I think there are problems with either one. I am struggling with reconciling the Stone-Weierstrass theorem with the fact that a convergent power series can not equal a function that does not have a derivative at a point within the radius of convergence. I guess the issue is that restricting the basis to ##\{x^n\}## has the disadvantage that the coefficient ##a_{n_0}## of any ##x^{n_0}## is eventually fixed in the limit ##\sum_{n=0}^{\infty}a_n x^n##, and so the series can not converge to a continuous function that does not have a derivative at a point in (0,1). I will have to think about this some more with respect to the OP question.
 
  • #19
FactChecker said:
I think there are problems with either one. I am struggling with reconciling the Stone-Weierstrass theorem with the fact that a convergent power series can not equal a function that does not have a derivative at a point within the radius of convergence. I guess the issue is that restricting the basis to ##\{x^n\}## has the disadvantage that the coefficient ##a_{n_0}## of any ##x^{n_0}## is eventually fixed in the limit ##\sum_{n=0}^{\infty}a_n x^n##, and so the series can not converge to a continuous function that does not have a derivative at a point in (0,1). I will have to think about this some more with respect to the OP question.
But it goes beyond that in terms of extreme cases: the polynomials will also approximate nowhere-differentiable continuous functions. It seems like the difference between the approximation and the function provides enough wiggle room to go between differentiable polynomials and the nowhere-differentiable functions. Strange. Maybe it has to do with the rate of convergence.
 
  • #20
WWGD said:
But it goes beyond that in terms of extreme cases: the polynomials will also approximate nowhere-differentiable continuous functions. It seems like the difference between the approximation and the function provides enough wiggle room to go between differentiable polynomials and the nowhere-differentiable functions. Strange. Maybe it has to do with the rate of convergence.
Being restricted to a basis of the powers of x prevents later terms from adjusting the coefficients of the lower powers of x, whereas a convergent sequence of polynomials can keep adjusting the lower coefficients as needed. Clearly there is something wrong with the original question, since the set of polynomials includes the Legendre polynomials: any space where the Legendre polynomials form a complete basis would also have the polynomials complete. The polynomials would not be a basis, though, because the representations would not be unique.
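That "fixed coefficients" distinction is easy to see numerically: re-fitting with a larger basis changes the low-order monomial coefficients, something a single power series cannot do. A small sketch; the target ##|x|## and the degrees are arbitrary choices:

```python
import numpy as np

xs = np.linspace(-1, 1, 2001)
coefs = {}
for deg in (2, 6, 10):
    # least-squares polynomial fit of |x|; coefficients returned lowest degree first
    coefs[deg] = np.polynomial.polynomial.polyfit(xs, np.abs(xs), deg)
    print(deg, coefs[deg][:3].round(4))  # constant and quadratic coefficients keep shifting

# a Taylor-style expansion would have to keep these low-order coefficients fixed forever
```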
 
  • #21
I read both your posts, but I think it's a little beyond me. However, after reading, I think I have a little clarity I can shed. Referencing post 5, I'll say the basis functions ##\phi## must be harmonic and satisfy ##d_x\phi(x=\pm 1) = 0, d_y\phi(y=-1) = 0##. These are easy to find, specifically they are sines and cosines multiplied with hyperbolic cosines. The last BC requires ##\phi_n(x=\pm 1) = 0##, where ##n## denotes a normal derivative to a curve ##\Gamma## (see picture). The ODE ##L\phi_n(s) = -\lambda \phi(s)## I described above is only a function of ##s## (rather than ##x,y##) since it is evaluated on the curve ##\Gamma##.

My question is, when I evaluate the weak formulation and enforce the BCs ##\phi_n(x=\pm 1)=0## I get different results than when I do not enforce them. Again, I enforce BCs by superimposing the harmonics. Any ideas why this would happen?
 

Attachments

  • IMG_1166.jpg
  • #22
Svein said:
Yes - what of it?
The Stone-Weierstrass theorem (https://en.wikipedia.org/wiki/Stone–Weierstrass_theorem) states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function.

But this happens more generally, not just here. You do not have the space fully spanned by a Hamel basis. Example: take an infinite-dimensional Hilbert space. By a Baire category argument it must be uncountably infinite-dimensional, but orthogonal (Hamel) bases must be countable. So, as I understand it, we then use a Schauder basis, where the sum _converges_ to the value in the ambient topology but does not _equal_ it after finitely many terms. EDIT: I think the clear explanation for this "paradox" is that there are Hamel bases that are not orthogonal, but these are not countable either. Sometimes it is more convenient to work with Schauder bases than with uncountable Hamel bases. Example: for ##L^2[a,b]## we use the standard Schauder basis ##\{1, \sin(nx), \cos(nx) : n = 1, 2, \ldots\}##. But I am not sure what completeness would mean here, nor whether we require uniqueness of the representation.
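To illustrate the converges-but-never-equals point for that Fourier-type Schauder basis: the partial sums for a square wave approach it in the ##L^2## norm, while at the jumps they never equal it. A sketch with an arbitrary grid and truncation levels:

```python
import numpy as np

xs = np.linspace(-np.pi, np.pi, 4001)
f = np.sign(xs)  # square wave, a genuine L^2[-pi, pi] element

def partial_sum(x, N):
    # Fourier partial sum of sign(x): (4/pi) * sum_{k<N} sin((2k+1)x) / (2k+1)
    return (4 / np.pi) * sum(np.sin((2*k + 1) * x) / (2*k + 1) for k in range(N))

l2s, sups = [], []
for N in (5, 50, 500):
    s = partial_sum(xs, N)
    l2s.append(np.sqrt(np.mean((s - f) ** 2)))  # discrete L^2-type error
    sups.append(np.max(np.abs(s - f)))
    print(N, l2s[-1], sups[-1])  # L^2 error -> 0; sup error stays ~1 at the jumps
```

This is exactly the Schauder-basis situation: every partial sum is a finite combination of the basis functions, the ##L^2## distance goes to zero, but no finite sum equals the target.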
 
  • #23
FactChecker said:
Being restricted to a basis of the powers of x prevents later terms from adjusting the coefficients of the lower powers of x, whereas a convergent sequence of polynomials can keep adjusting the lower coefficients as needed. Clearly there is something wrong with the original question, since the set of polynomials includes the Legendre polynomials: any space where the Legendre polynomials form a complete basis would also have the polynomials complete. The polynomials would not be a basis, though, because the representations would not be unique.

Is the uniqueness also required for Schauder basis or just for Hamel bases?
 
  • #24
joshmccraney said:
I read both your posts, but I think it's a little beyond me. However, after reading, I think I have a little clarity I can shed. Referencing post 5, I'll say the basis functions ##\phi## must be harmonic and satisfy ##d_x\phi(x=\pm 1) = 0, d_y\phi(y=-1) = 0##. These are easy to find, specifically they are sines and cosines multiplied with hyperbolic cosines. The last BC requires ##\phi_n(x=\pm 1) = 0##, where ##n## denotes a normal derivative to a curve ##\Gamma## (see picture). The ODE ##L\phi_n(s) = -\lambda \phi(s)## I described above is only a function of ##s## (rather than ##x,y##) since it is evaluated on the curve ##\Gamma##.

My question is, when I evaluate the weak formulation and enforce the BCs ##\phi_n(x=\pm 1)=0## I get different results than when I do not enforce them. Again, I enforce BCs by superimposing the harmonics. Any ideas why this would happen?

Sorry to keep hammering this, but when you use a Schauder basis this is an ordered basis ( because convergence is not always absolute) , so you may be changing the order of the terms of the sum? Are you doing finite sums ( Hamel bases, equality) or infinite sums ( Schauder bases, convergence).
 
  • #25
WWGD said:
Is the uniqueness also required for Schauder basis or just for Hamel bases?
Sorry. I wasn't very clear. I was just correcting a misconception that I had. I was misusing a fact about convergent Taylor series.
 
  • #26
WWGD said:
Sorry to keep hammering this, but when you use a Schauder basis this is an ordered basis ( because convergence is not always absolute) , so you may be changing the order of the terms of the sum? Are you doing finite sums ( Hamel bases, equality) or infinite sums ( Schauder bases, convergence).
I'm using finite sums, so Hamel bases. Have you ever encountered anything like what I describe: superimposing basis functions to create new basis functions that are still linearly independent but now also satisfy a new property?
 
  • #27
joshmccraney said:
I'm using finite sums, so Hamel bases. Have you ever encountered anything like what I describe: superimposing basis functions to create new basis functions that are still linearly independent but now also satisfy a new property?

Not really, sorry. I am a bit confused; your new functions are linear combinations of the previous, so I don't see what you may gain from it. Why not keep the original basis and just use combinations of it?
 
  • #28
fresh_42 said:
No.
At least your question isn't specific enough to be answered clearly. Completeness usually relates to a certain set, and there is more than one way to complete such a set. E.g. ##\mathbb{R}## is the topological completion of ##\mathbb{Q}##, whereas ##\mathbb{C}## is the algebraic completion (better: closure) of ##\mathbb{R}##.
Taylor series are series, not polynomials.

I think he is referring to the partial sums. But, yes, it does not seem clear.
 
  • #29
WWGD said:
Not really, sorry. I am a bit confused; your new functions are linear combinations of the previous, so I don't see what you may gain from it. Why not keep the original basis and just use combinations of it?
I'm approximately solving a differential eigenvalue problem. The basis functions I use must satisfy the BCs with the EVP. Since they also must be harmonic and satisfy the derivatives I mentioned earlier, I find them generally for those conditions, and then superimpose to satisfy the final BCs. Then I apply a Ritz procedure to them to approximately solve the ODE.

There are two ways to solve: solve the inverse EVP, which introduces a Green's function. In that case, I don't have to recombine basis functions since the Green's function accounts for these. This approach is correct. The second way to solve the EVP is the direct approach. In this approach, I recombine the basis functions, and in this approach I get an incorrect solution for some (not all) parameter values. I just don't know why.

Basis functions work, but when recombined, suddenly the technique can fail. You've never seen anything like this?
 
  • #30
joshmccraney said:
I'm approximately solving a differential eigenvalue problem. The basis functions I use must satisfy the BCs with the EVP. Since they also must be harmonic and satisfy the derivatives I mentioned earlier, I find them generally for those conditions, and then superimpose to satisfy the final BCs. Then I apply a Ritz procedure to them to approximately solve the ODE.

There are two ways to solve: solve the inverse EVP, which introduces a Green's function. In that case, I don't have to recombine basis functions since the Green's function accounts for these. This approach is correct. The second way to solve the EVP is the direct approach. In this approach, I recombine the basis functions, and in this approach I get an incorrect solution for some (not all) parameter values. I just don't know why. EDIT: Sorry too, for jumping in and sort of getting into a side-discussion. Hope you got something out of it or at least refreshed the material. I hope to get myself back into PDEs and Green's functions some time soon.

Basis functions work, but when recombined, suddenly the technique can fail. You've never seen anything like this?
No, sorry, it has been a while since I have done PDEs or worked with Green's function in general.
 
  • #31
WWGD said:
No, sorry, it has been a while since I have done PDEs or worked with Green's function in general.
No worries, thanks for taking the time!
 
  • #32
WWGD said:
Not really, sorry. I am a bit confused; your new functions are linear combinations of the previous, so I don't see what you may gain from it. Why not keep the original basis and just use combinations of it?
It is often nice to represent a function in terms of a basis that gives a clean interpretation of the components in the specific current context. So there are reasons to convert from one basis to another, as appropriate.
 
  • #33
FactChecker said:
It is often nice to represent a function in terms of a basis that gives a clean interpretation of the components in the specific current context. So there are reasons to convert from one basis to another, as appropriate.
Yes, the input to the Ritz procedure I employ must satisfy all BCs. As is, the basis functions don't satisfy all the BCs. So we take linearly independent combinations of basis functions to form new basis functions such that the additional BC is satisfied.
 

1. What is the definition of completeness of a basis function?

The completeness of a basis function refers to the ability of a set of basis functions to accurately represent any function within a given space. In other words, the basis functions can be combined in various ways to create a wide range of functions.

2. How is completeness of a basis function measured?

Completeness of a set of basis functions is typically assessed using the L2 norm: one measures the L2 distance between a function and its best approximation by the basis functions. The set is complete if this approximation error can be made arbitrarily small for every function in the space.

3. What does it mean if a set of basis functions is not complete?

If a set of basis functions is not complete, it means that there are certain functions within the given space that cannot be accurately represented using those basis functions. This can result in errors or inaccuracies in calculations or models that rely on those basis functions.

4. Can a set of basis functions be both complete and orthogonal?

Yes, a set of basis functions can be both complete and orthogonal. Orthogonality refers to the independence of the basis functions from each other, while completeness refers to the ability to accurately represent any function within the given space. These two properties are not mutually exclusive.

5. How does the completeness of a basis function affect the accuracy of calculations or models?

The completeness of a basis function is crucial for accurate calculations and models. A more complete set of basis functions allows for a wider range of functions to be accurately represented, resulting in more accurate calculations and models. In contrast, a less complete set of basis functions can lead to errors and inaccuracies in the results.
