Exploring Non-Standard Calculus: Benefits and Drawbacks

  • Thread starter khemix
  • Tags
    Calculus
In summary: the mathematical community regards non-standard calculus as perfectly valid, if little used, mathematics; it proves the same theorems about the reals as standard analysis. The debate in this thread is mainly over whether infinitesimals make calculus more intuitive, and whether they can be set up without heavy machinery such as ultrafilters and the Axiom of Choice.
  • #1
khemix
I'm wondering, how does the mathematical community view non-standard calculus/analysis as developed by Robinson? The idea of infinitesimals revived seems like a breakthrough in my eyes. But the fact that it has not yet been adopted by any major institution leads me to skepticism. From those of you who have done it, are there logical flaws in the theory?

I am really interested in learning it but I fear it may limit me a lot because every mathematical field uses the standard theories.
 
  • #2
There is nothing at all controversial about it. However, it gives no results different from "regular", limit-based calculus, and it requires some pretty sophisticated logic to introduce "infinitesimals" and infinite numbers, so it isn't really appropriate for people just starting to study mathematics.
 
  • #3
Nonstandard analysis and standard analysis prove exactly the same theorems about real analysis; if one is consistent, then so is the other. (And conversely.)
 
  • #4
khemix said:
I'm wondering, how does the mathematical community view non-standard calculus/analysis as developed by Robinson? The idea of infinitesimals revived seems like a breakthrough in my eyes. But the fact that it has not yet been adopted by any major institution leads me to skepticism.

HallsofIvy said:
There is nothing at all controversial about it.

I remember the time when I thought that there would be controversy about non-standard calculus, too. There are lots of different kinds of people talking about infinitesimals, and it can be difficult to make sense out of everything. khemix, my message to you is that non-standard calculus is perfectly valid mathematics, but not everything you will hear about infinitesimals is.

I have a question about this topic too. I've never studied non-standard calculus in detail, but I've understood that it relies on the axiom of choice. It was something about the existence of certain filters. Right? Isn't it strange that such a simple idea must rely on the AoC? I already believe the claim that the AoC will never become relevant for any practical application (is the axiom of choice applicationless?). It would follow that non-standard calculus will never turn out to be useful either. Does anyone want to prove me wrong here? :smile:
 
  • #5
jostpuur said:
I already believe the claim that the AoC will never become relevant for any practical application (is the axiom of choice applicationless?).
Gak! I suppose this might be true if you have a very narrow definition of 'practical application'... I didn't notice your thread before; I'll say something.
 
  • #6
khemix said:
I am really interested in learning it but I fear it may limit me a lot because every mathematical field uses the standard theories.

Jerome Keisler has written a couple of calculus textbooks based on this stuff,

http://www.math.wisc.edu/~keisler/.

The first link (Free Online Calculus Book (PDF files)) is longer and more elementary than the second link (Foundations of Infinitesimal Calculus (2007)).
 
  • #7
George Jones said:
Jerome Keisler has written a couple of calculus textbooks based on this stuff,

http://www.math.wisc.edu/~keisler/.

The first link (Free Online Calculus Book (PDF files)) is longer and more elementary than the second link (Foundations of Infinitesimal Calculus (2007)).

Thank you George for the links!

I must confess: I'd never heard of Infinitesimal Calculus before this thread.:blushing: I quickly browsed through the beginning of the first link, and to me it seems more intuitive than standard calculus. I wonder why it isn't at least introduced to high school students. It seems to me that offering a course in infinitesimal calculus as an alternative might help some students who have difficulty grasping limit-based calculus.
 
  • #8
Yes, but...

I will confess that is the way I first saw Calculus: if [itex]y= x^2[/itex] then [itex]dy= (x+ dx)^2- x^2= x^2+ 2xdx - x^2[/itex] (since (dx)^2 = 0), so [itex]dy= 2xdx[/itex] and, of course, dy/dx= 2x. But for that to make sense, you have to really understand "infinitesimals". You can, of course, simply learn formulas that way, but learning the concepts requires, as I said before, deep results from symbolic logic.
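Here is a quick symbolic check of that bookkeeping, as a SymPy sketch of my own: expand, divide by dx, then discard the leftover infinitesimal by setting dx = 0.

[code=python]
# Sketch of the manipulation above: expand (x+dx)^2 - x^2, divide by dx,
# then discard the remaining infinitesimal term by setting dx = 0.
import sympy as sp

x, dx = sp.symbols('x dx')
dy = sp.expand((x + dx)**2 - x**2)   # 2*x*dx + dx**2
quotient = sp.cancel(dy / dx)        # 2*x + dx
print(quotient.subs(dx, 0))          # prints 2*x
[/code]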
 
  • #9
I cannot understand this talk about infinitesimals being intuitive :confused:

The first examples of limits are examples where you have some continuous function [itex]f:[a,b]\to\mathbb{R}[/itex] restricted to a set [itex][a,c[\;\cup\;]c,b][/itex], so that we get a hole in the graph. Then the hole can be removed with a limit [tex]\lim_{x\to c} f(x).[/tex]

Usually the function is written in some funny way, like
[tex]
f(x) = \frac{x^2-1}{x+1}.
[/tex]

The big idea behind the derivative is already contained in this idea that you can remove a hole from a graph. The rule

[tex]
h\mapsto \frac{f(x+h) - f(x)}{h}
[/tex]

defines some mapping [itex][-\delta, 0[\;\cup \;]0,\delta]\to\mathbb{R}[/itex], and the derivative [itex]f'(x)[/itex] is obtained when you fill the hole in the graph at the point [itex](0,f'(x))[/itex]. IMO the idea that a derivative can be defined by filling a hole in the graph is pretty much as intuitive as it can get.
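To make the hole visible numerically, here is a small illustration of my own with f(x) = x² at x = 3: the quotient is undefined at h = 0, but its values home in on f'(3) = 6, the number that fills the hole.

[code=python]
# The difference quotient ((x+h)^2 - x^2)/h has a hole at h = 0;
# its values approach 6 = f'(3), which is the value that fills the hole.
def difference_quotient(f, x, h):
    return (f(x + h) - f(x)) / h

f = lambda x: x**2
for h in (0.1, 0.01, 0.001, -0.001, -0.01, -0.1):
    print(h, difference_quotient(f, 3.0, h))
[/code]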

Now some people claim that this conventional idea of defining the derivative with a limit, which is the same thing as filling a hole in the graph, is not intuitive enough and can confuse students, so a more intuitive idea should be used in teaching. And what is the more intuitive idea supposed to be? ... Some axiom of choice demanding hyperreal numbers... Holy .... well okay... :uhh: To put it mildly, I disagree. I would prefer staying with the graphs with holes, and the procedure of filling the holes.
 
  • #10
The general picture of nonstandard analysis is as follows:

1. We start with the standard model of real analysis.
2. We build another model of real analysis that has exactly the same theorems.
3. We compare the two models.

AoC is only used in the theoretical foundations: to show that nonstandard models exist. Actual application of NSA doesn't make use of the AoC. (And, I believe, you can still do NSA in a set theory that rejects the AoC but still offers a non-standard model of real analysis)


IMO the idea that a derivative can be defined by filling a hole in the graph is pretty much as intuitive as it can get.
It's a useless idea at the practical level, though, because it doesn't tell you what a hole is, how to tell when it can be filled in, or how to compute the value once it is filled in. Doing the latter still requires either an epsilon-delta argument with difference quotients, or an infinitesimal approximation with them (the infamous ghosts of departed quantities!).


The differences become more pronounced as things get more sophisticated, e.g. standard analysis has to talk about tangent spaces and differentiable charts to do calculus on manifolds; NSA just keeps using its infinitesimals.
 
  • #11
The usual approach to nonstandard (or infinitesimal) calculus (based on Abraham Robinson's work, Non-standard Analysis, 1962) does indeed depend on ultrafilters and the Axiom of Choice, and is usually reserved for third year university level or even post graduate courses. This is a shame, because everything one needs to establish a rigorous "hyperreal" extension of the real numbers, complete with (nonzero) infinitesimals and their infinite reciprocals, can be found in high school level algebra.

An infinitesimal is a number with an absolute value (or size) less than every positive standard real number, no matter how small. The only such standard number is zero, itself. One needs to think a bit outside the standard box to find a rigorous model of nonzero infinitesimals.

Consider the set of rational functions (ratios of one polynomial to another) in a single positive-integer variable (an index n) with real coefficients, where f <, =, or > g iff, for some n0, n > n0 implies f(n) <, =, or > g(n), respectively. This set (call it †R) is an ordered field that is a superset of R, the "standard" real numbers, and a proper subset of Robinson's hyperreals, *R, with the constant functions identified with the standard reals.

Does †R contain any infinitesimals? Yes: j(n) = 1/n is less than every positive constant function (for all values of n greater than the reciprocal of the constant), and is still greater than zero. In fact, every polynomial ratio with a denominator of greater degree than its numerator is an infinitesimal, and every such ratio with a numerator of greater degree than its denominator is infinitely large.

Ratios with numerators of the same degree as their denominators are finite numbers that differ from some unique constant by at most an infinitesimal. (The constant is just the ratio of the leading coefficients. Robinson calls it the "standard part" of the finite number. Finding it is just "rounding off" that finite number to the "nearest real number," a concept much more intuitive than finding derivatives and integrals as limits using the standard epsilon/delta definition.)
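For anyone who wants to play with this model, here is a rough SymPy sketch of the ordering, the infinitesimal test, and the standard-part "rounding" (the function names are my own and purely illustrative):

[code=python]
# Rough sketch of the rational-function model: elements are rational functions
# of the index n, ordered by their eventual behaviour as n grows, and the
# standard part of a finite element is found by letting n go to infinity.
import sympy as sp

n = sp.symbols('n', positive=True)

def eventual_sign(f):
    """Sign of f(n) for all sufficiently large n (this is the field's order)."""
    p, q = sp.fraction(sp.together(f))
    if sp.simplify(p) == 0:
        return 0
    return sp.sign(sp.Poly(p, n).LC() * sp.Poly(q, n).LC())

def is_infinitesimal(f):
    """Nonzero, yet smaller in size than every positive standard real."""
    return sp.simplify(f) != 0 and sp.limit(f, n, sp.oo) == 0

def standard_part(f):
    """Round a finite element off to the nearest standard real."""
    return sp.limit(f, n, sp.oo)

j = 1 / n
print(is_infinitesimal(j))                       # True
print(eventual_sign(sp.Rational(1, 10**6) - j))  # 1, i.e. j < 10**-6 in this order
g = (2*n**2 + 3*n) / (n**2 + 1)                  # a finite, nonstandard element
print(standard_part(g))                          # 2, the ratio of leading coefficients
[/code]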

Robinson represents the "standard part of x," where x is a finite, but nonstandard, number, as °x, which I prefer to read as "the nearest standard value of x."

The set of indices greater than n0 contains all but some finite set of positive integers, and when one defines a mathematical statement about members of †R as holding iff it holds for all but some finite set of positive-integer values of the index, it is easy to prove the most important theorem of nonstandard analysis, the Transfer Principle: every "first order" statement about elements of R has the same truth value as the corresponding statement about elements of †R. This can be done without recourse to ultrafilters or the AoC, and proofs using infinitesimals are equivalent to proofs using limits, just as Leibniz claimed 300 years ago.

The earliest example of this model (the polynomial ratios) that I have found is in Edwin Moise's Elementary Geometry from an Advanced Standpoint, 1962, Chapter 28, "An Example of an Ordered Field which is not Archimedean." Moise lets the index variable range through all real values, and does not mention the use of this field in calculus, but I think the coincidence of dates with Robinson's work suggests that there is a link.

So Silvanus Thompson (Calculus Made Easy, 1910) was right to thumb his nose at the "mostly clever fools" who insisted that calculus had to be hard. It only took half a century for Rigorous Mathematics to catch up with him. And there is no need to hide the advantages of Nonstandard Calculus from the rest of us.
 
  • #12
High school algebra isn't enough to give you standard analysis, let alone nonstandard analysis!

It's easy to construct ordered fields with infinite and infinitesimal numbers. But that just lets you do arithmetic -- it's not enough to do calculus.

e.g. in +R, sin(n) doesn't exist. Heck, [itex]\sqrt{n}[/itex] doesn't even exist!

Your example doesn't even get first-order algebra correct -- you need a real closed field for that.




Hrm. You're doing something odd with the semantics, though -- I haven't fully thought it through yet.
 
  • #13
Let f be the element of +R with f(n) = n².

Let P(x) be the proposition "x is an integer that is an odd square".

Then P(f) does not hold.
However, (not P)(f) does not hold either.
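A finite spot-check of why that happens (just an illustration): with f(n) = n², the statement "f(n) is an odd square" holds exactly at the odd indices, so neither the set of indices where it holds nor its complement is cofinite, and an "all but finitely many n" semantics certifies neither P(f) nor (not P)(f).

[code=python]
# Spot-check: f(n) = n**2 is an odd square exactly when n is odd, so the truth
# pattern alternates forever -- neither the True-set nor the False-set is cofinite.
def P(k):
    # for k = m**2 (already a square), "k is an odd square" reduces to "k is odd"
    return k % 2 == 1

print([P(m**2) for m in range(1, 13)])   # [True, False, True, False, ...]
[/code]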
 
  • #14
It's not clear to me why you are focusing just on the rational functions in n. Certainly it's a subset on which the ordering is unambiguous -- but in the most straightforward way I can interpret what you're arguing, the Dedekind reals constitute the arbitrary real-valued functions in n.
 
  • #15
Okay, my thoughts have cohered.


The problem with +R is that, as a structure, it doesn't really support anything but arithmetic -- not even algebra. Given the semantics you defined, "[itex]\forall x \geq 0 \exists! y \geq 0: y^2 = x[/itex]" does indeed hold. However, most elements of +R do not have square roots in +R.


Now, we can remove that problem by going to R^N -- but the new problem is that the standard part function is no longer well-defined on the finite terms. Heck, "finite" is not even well-defined. Also, you get complications from the logic not being two-valued -- e.g. my example in post 13.


ATM, I simply don't see how one can actually use either of these things as a foundation for non-standard analysis.
 
  • #16
All I am trying to do is to show that making dx an "infinitely small bit of x" (to paraphrase Thompson's Calculus Made Easy) does not lead to any contradictions with any of the first order axioms that define the real numbers.

Robinson did this with equivalence sets of all possible real sequences and ultrafilters. I'm doing it with a less extensive set of sequences that only requires the cofinite (or Fréchet) filter on the index set, and avoids invoking the Axiom of Choice. Robinson used ultrafilters and the AoC to prove the compactness theorem so he could assert that models that include infinitesimals exist, but there is no reason why such a model also needs ultrafilters.

The fact that my "lesser extension" of R, unlike Robinson's *R, does not include such infinitesimals as 10^(-n) does not make the infinitesimal polynomial ratios any less useful.

Can one "do calculus" with high school algebra? Yes. For example, to find dy/dx for y = x2:

Let x change by ±dx = ±1/n.
Then y+∆y = (x ±1/n)2 = x2x± 2/n + 1/n2.
Subtracting y = x2 from both sides,
y = ±2/n + 1/n2. (Notice that ∆y is NOT the same as dy, but both values are within a second order infinitesimal of each other, and of dy.)
Dividing ∆y by ∆x = dx = ±1/n, we get a finite result that comes within an infinitesimal of our goal:
2x ± 1/n.
We have carried our calculation to a higher precision than the infinite precision of standard reals, (more than good enough for government work!). We round of directly to the standard part of that nonstandard result:
dy/dx = °∆y/∆x = 2x.

After a few calculations using polynomial ratios as hyperreals, the student should become confident that she can do them staying within the Leibniz notation, and never have to bother with justifying the infinitesimals again.
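As a SymPy check of the calculation above (a sketch of my own): take dx = 1/n, form ∆y/∆x exactly, and then "round to the nearest real" by letting n go to infinity.

[code=python]
# Check of the Delta-y / Delta-x computation with dx = 1/n in the
# rational-function model; the standard part is recovered as the limit n -> oo.
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', positive=True)

dx = 1 / n
Dy = (x + dx)**2 - x**2
quotient = sp.expand(Dy / dx)        # 2*x + 1/n: finite, but not standard
print(quotient)
print(sp.limit(quotient, n, sp.oo))  # standard part: 2*x
[/code]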

As for square roots of members of †R (that's a superscripted dagger, not a superscripted plus, BTW), it is true that half of them are not elements of †R: those roots of negative hyperreals are elements of †C, polynomial ratios with complex coefficients! Let us save nonstandard complex analysis for a later course.
 
  • #17
My complaint is that you're not talking about calculus -- you're talking about real closed fields.

You can do a lot with algebraic geometry, and more with real algebraic geometry, including things like defining (algebraic) differential forms.

But that approach is limited to algebra -- it cannot say anything about transcendental functions (e.g. the trigonometric functions) nor anything that requires use of integers (e.g. power series, Riemann sums), let alone stuff that involves more set theory (e.g. measures).


By the way, you missed a lot of other elements in +R that don't have square roots. For example, the square roots of the number defined by the function f(n) = n live in R^N but not in +R.

(The + is much easier to type than the dagger)
 
  • #18
f(n) = n is a rational function of n, so that f is an element of †R. It is also true that √f is not. You could call √f an irrational function; it is an element of *R − †R.

But, just as one may approximate √2 to any desired precision with rational numbers, so one may approximate √f to any desired precision with rational functions.

When one does calculus with such approximations, the errors become higher order infinitesimals that "drop out" when one finds the standard part.

By the way, notice that f and all polynomials (of degree greater than zero) with integer coefficients are integer-valued functions. As nonstandard numbers, they are unlimited (or infinite) integers.

Is f even or odd? Neither the odd terms of the sequence nor the even ones form a finite or a cofinite subsequence, so there is no way to decide within †R, since a statement is defined to hold only if it is true on a cofinite set of indices.

Even within *R, where it takes an ultrafilter to decide, the answer depends upon the choice of the ultrafilter.

While "every integer is either odd or even but not both" is true within R, that property, as simple as it appears, is not first order within either the standard reals or the hyperreals.

(First order logic within a set demands that quantified variables be elements of the set. Parity is a property of integers, not all reals (or hyperreals), and need not transfer.)

This example goes beyond mere calculus, and is a peek "under the hood" of the foundations of calculus, the stuff of standard and nonstandard analysis.
 
  • #19
Alan Fisher said:
But, just as one may approximate √2 to any desired precision with rational numbers, so one may approximate √f to any desired precision with rational functions.

When one does calculus with such approximations, the errors become higher order infinitesimals that "drop out" when one finds the standard part.
What do you mean? These statements are patently false in what I think is the most obvious interpretation:
Theorem: If g is an element of +R, then [itex]|g - \sqrt{f}|[/itex] is a transfinite hyperreal​
The proof sketch is simple -- either:
  • the limit of g is +infinity and thus it grows like O(n) (or faster!)
  • the limit of g is constant, and thus it grows like O(1)
  • the limit of g is -infinity, and thus g has the wrong sign
However, [itex]\sqrt{f}[/itex] grows like [itex]O(\sqrt{n})[/itex].
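A numeric illustration of that growth-rate gap (a sketch of my own, not a proof): for a few sample rational functions g, the difference |g(n) − √n| just keeps growing.

[code=python]
# For rational g the gap |g(n) - sqrt(n)| is unbounded: g eventually stays O(1)
# or grows at least like O(n), while sqrt(n) does neither.
import math

candidates = {"g(n) = n": lambda n: n,
              "g(n) = 5": lambda n: 5,
              "g(n) = (n + 3)/2": lambda n: (n + 3) / 2}

for name, g in candidates.items():
    gaps = [abs(g(n) - math.sqrt(n)) for n in (10**2, 10**4, 10**6, 10**8)]
    print(name, [f"{gap:.3g}" for gap in gaps])
[/code]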



(First order logic within a set demands that quantified variables be elements of the set. Parity is a property of integers, not all reals (or hyperreals), and need not transfer.)

This example goes beyond mere calculus, and is a peek "under the hood" of the foundations of calculus, the stuff of standard and nonstandard analysis.
You can't do first-order logic with a set. You need a language and probably want a theory. And since you want a set involved, you need an interpretation too.

What is your language? Your theory? As far as I can tell, you are trying to work with the theory of real-closed fields -- the language generated from 0, 1, +, *, <, with the theory being exactly the statements satisfied by the real numbers. However, your field is just formally real and not real-closed -- because, e.g., you don't have square roots of positives. Its real closure, however, will be real-closed (and still a proper subfield of the hyperreals).

The fact that the theory you're working with can't talk about integers means it cannot be used to do calculus -- this example doesn't go beyond calculus because it hasn't even gotten as far as calculus yet.

Roughly speaking, you're doing non-standard arithmetic.
 
  • #20
All I am doing is trying to establish the relative consistency of an ordered field of hyperreal numbers, †R, with that of its subfield, the Dedekind-complete set of standard real numbers, R. To this end, I am using the non-Archimedean ordered field of real rational functions in a single variable as a model of my hyperreal set, with such numbers interpreted as equivalence sets of such functions.

As a bonus, by refining my interpretation of those functions as sequences (functions of positive integers rather than of members of a continuous interval of real numbers), I am able to demonstrate Robinson-style transfer of first order statements (or "well formed formulae") between †R and R, but without using the Axiom of Choice.

I use a slightly modified version of Jerzy Łoś's proof of transfer for ultrapower models, one which does not use any of the properties that distinguish ultrafilters from mere general logical filters.

This is more than arithmetic, algebra, or even calculus. It's analysis.
 
  • #21
I don't see anything in your argument that suggests what you are claiming. So show me what I'm missing -- demonstrate that you can do nonstandard analysis this way.


Let's start with something normally very simple. Let's define the predicate "f is continuous at a". (f ranges over real-valued functions of the reals)

This is easy in non-standard analysis. Transfer gives us a hyperreal-valued function of the hyperreals [itex]{}^\star f[/itex] and a hyperreal [itex]{}^\star a[/itex], and we can define
"f is continuous at a" if and only if "[itex]\forall x \in {}^\star \mathbb{R}: x \sim {}^\star a \implies {}^\star f(x) \sim {}^\star f({}^\star a)[/itex]"​

How can this possibly make sense in your approach?
 
  • #22
And a more complex example: any continuous real-valued function of a closed interval has a maximum.

Here is a beautiful proof sketch of this basic fact via non-standard analysis:

Let [a,b] be the interval, and f be the function.

Choose any transfinite hyperinteger H.

A standard theorem is that any real-valued function on a finite set has a maximum value. By the transfer principle, we have the theorem that any hyperreal-valued (internal) function on a hyperfinite set has a maximum value.

The set [itex]S = \{ a + (b-a)n/H | n \in {}^\star \mathbb{N} \wedge 0 \leq n < H\}[/itex] is hyperfinite. Therefore, [itex]{}^\star f[/itex] has a maximum on S. Let's say the maximum occurs at the point c of S.

Then f must have a maximum at [itex]\mathop{std} c[/itex] by the following argument: for any standard number x in [a,b], there is a point [itex]y \in S[/itex] such that [itex]x \sim y[/itex]. Then:
[tex]f(x) = {}^\star f({}^\star x) \sim {}^\star f(y) \leq {}^\star f(c) \sim f(\mathop{std} c)[/tex]​
Which implies either
  • f(x) is less than f(std c)
  • f(x) is infinitesimally greater than or equal to f(std c). But they are both standard numbers, and are therefore equal.
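A finite caricature of that argument, just to illustrate the mechanism (my own sketch, with an arbitrarily chosen f): sample f on the grid a + (b-a)k/H and take the biggest sample. As H grows this approaches the true maximum, which is exactly what the transferred "finite sets have maxima" theorem delivers in one step once H is a transfinite hyperinteger.

[code=python]
# Finite stand-in for the hyperfinite grid: sample f at H+1 equally spaced
# points of [a, b] and take the largest sample; larger H gets closer to the
# true maximum of f on [a, b].
import math

def grid_max(f, a, b, H):
    return max(f(a + (b - a) * k / H) for k in range(H + 1))

f = lambda x: math.sin(x) + 0.5 * x      # a continuous function on [0, 3]
for H in (10, 100, 10_000, 1_000_000):
    print(H, grid_max(f, 0.0, 3.0, H))
[/code]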
 
  • #23
First example: (f is continuous at a) iff (x ≈ a implies f(x) ≈ f(a)), where a ≈ b iff a − b is infinitesimal. Your definition of continuity within *R is the same as mine within †R, both simpler (and much more intuitive!) than the standard definition within R: (f is continuous at a) iff (lim_{x→a} f(x) = f(a)). (Look Ma, no LaTeX!:smile:) Within †R, a − b is infinitesimal when it is a polynomial ratio with a denominator of higher degree than its numerator.

I prefer to use ≈ (alt x on the Mac) for "infinitesimally close."

Second example: Transfer holds between *R and R. It also holds (as I shall eventually prove) between †R and R.

Therefore, transfer holds between *R and †R, and my proof is the same as your proof! QED
 
  • #24
Alan Fisher said:
(f is continuous at a) iff (x ≈ a implies f(x) ≈ f(a))
More detail please. If x is supposed to be a variable ranging over +R, then you've made a syntax error: the domain of f is merely R, so f(x) is nonsense.

(And, of course, if x ranges over R, then by your definition, all real-valued functions of the reals are continuous)


It also holds (as I shall eventually prove) between †R and R.
I already showed an example which serves to disprove the transfer principle:
[itex]\forall x \in \mathbb{R} : x > 0 \implies \exists y \in \mathbb{R} : y^2 = x[/itex] is a theorem, but [itex]\forall x \in {}^\dagger \mathbb{R} : x > 0 \implies \exists y \in {}^\dagger \mathbb{R} : y^2 = x[/itex] is not.​
This latter statement is true in the odd semantics you defined*, but it is patently false as an external statement. Incidentally, this also proves that we do not have an elementary embedding: for the real-valued function of the reals [itex]f(x) = \sqrt{|x|}[/itex], there does not exist a corresponding +R-valued function of +R.

*: Actually, you only partially defined it, because you've been fairly silent on the language you meant



Therefore, transfer holds between *R and †R, and my proof is the same as your proof! QED
Please take careful note that I didn't apply the transfer principle to a statement in the first-order language of real closed fields -- my statement references N, has a variable S that ranges over P(R), and another variable that ranges over P(SxS)!

(P(_) is the power-set function)
 
  • #25
There was a reply to the effect that "non-standard analysis is too complex for practical use". This is wrong -- it confuses the existence proof of a construct with the construct itself.

It is the *existence proof* of the actual construct -- the one given by Robinson -- which is complex. In contrast, the actual construct (namely, the one given in the 1920s by the Löwenheim-Skolem theorem) is extremely simple, as is the idea behind it.

One merely adjoins the infinite set of axioms 0 < omega, 1 < omega, 2 < omega, 3 < omega, ... to those governing an ordered linear field and you're set to go.

So, non-standard analysis is simply the real numbers plus an extra infinite number, omega.
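Spelled out via the compactness theorem (a sketch), the construction is: let
[tex]\Sigma \;=\; \mathrm{Th}(\mathbb{R};\,0,\,1,\,+,\,\cdot,\,<) \;\cup\; \{\,\omega > 1,\ \omega > 1+1,\ \omega > 1+1+1,\ \ldots\,\}.[/tex]
Every finite subset of [itex]\Sigma[/itex] is satisfied by the real numbers themselves (interpret omega as a large enough real), so by compactness [itex]\Sigma[/itex] has a model. In that model omega is greater than every standard natural number, and 1/omega is a nonzero infinitesimal.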

What makes the process work is just that the theory of the real numbers (i.e. the theory of complete ordered fields) is not a first-order theory. The axiom of completeness is inextricably a second-order axiom that can only be approximately represented in first-order logic. So, as long as you're working in first-order logic, there is no inconsistency in adopting the axioms for omega.

Now, granted, when you're doing analysis (i.e. calculus) rather than merely algebra, you're working in second-order logic, since the distinction between analysis and algebra essentially IS the distinction between second- and first-order logic. But the fact remains that the construct for non-standard analysis is simple. To get second-order logic, you use whatever first-order approximation to the axiom of completeness you already use for the real numbers anyhow. But the account given for calculus is simpler, more straightforward, and more consistent with the spirit of the Founding Fathers. Therefore it will pass muster in the Supreme Court.
 
  • #26
To be fair, it's a competing trade-off. On the good side, a well-behaved calculus of non-zero infinitesimals is a great convenience. On the bad side, a student has to be at least superficially aware of some sticky issues of formal logic (although an unambitious student is unlikely to be imaginative enough to run into problems).

And there's a pedagogical issue -- okay, so a student manages to take a calculus class taught with non-standard analysis. But how likely is he to find, say, a differential geometry or real analysis course suited to his background rather than the standard one? :frown:

I like NSA; I'm just trying to point out that it's not all benefits and no drawbacks.



Anyways, this thread is fairly old, and I'm not inclined to leave it open to invite discussion on Alan Fisher's ideas.
 

1. What is non-standard calculus?

Non-standard calculus, also known as non-standard analysis, is a branch of mathematics that extends the principles of traditional calculus to include infinitesimal and infinite numbers. It was first developed by mathematician Abraham Robinson in the 1960s as a way to address some of the limitations of traditional calculus.

2. How is non-standard calculus different from traditional calculus?

Non-standard calculus differs from traditional calculus in that it includes the use of infinitesimal and infinite numbers, which are considered to be infinitely small and infinitely large respectively. This allows for a more precise and intuitive understanding of certain mathematical concepts, such as continuity and convergence, which are often difficult to grasp using traditional methods.

3. What are some applications of non-standard calculus?

Non-standard calculus has a wide range of applications in various fields of mathematics, physics, and engineering. Some examples include the study of fractals, optimization problems, chaos theory, and differential equations. It has also been used in the development of computer algorithms and in the analysis of complex systems.

4. Are there any controversies surrounding non-standard calculus?

While non-standard calculus has been widely accepted and used by many mathematicians and scientists, there are still some controversies surrounding its use. Some argue that the concept of infinitesimals is not well-defined and can lead to paradoxes and inconsistencies. Others argue that the use of non-standard methods is unnecessary and that traditional calculus is sufficient for most applications.

5. How can non-standard calculus benefit my understanding of mathematics?

Non-standard calculus can offer a deeper and more intuitive understanding of mathematical concepts and can provide new insights and approaches to problem-solving. It can also help bridge the gap between pure mathematics and its applications in fields such as physics, engineering, and economics. Additionally, learning about non-standard calculus can broaden one's perspective and appreciation for the beauty and complexity of mathematics as a whole.
