# Line count in a simple program

In summary, the conversation discusses a question that was posted on a math forum and the confusion surrounding a specific construct in the question. The question involves defining two functions, F and f, and their relationship to a fundamental sequence for a specific ordinal value. The confusion arises when considering a similar construct for an infinite case, where a universal program can be used to simulate a program with a large index value. The individual is seeking clarification and wondering if the construct for the infinite case has any significant differences from the finite case.
SSequence
Preface:
I also posted this question on mathoverflow (link: https://mathoverflow.net/questions/324110). It was a bit surprising to me that it has gotten votes to close, mostly because all of the dozen or so questions (after my very first one) that I have asked there have been received positively.
But maybe this question has some basic issue that I wasn't able to see before posting?

I would guess that it is one of my better questions there, unless the mistake I made is a trivial one. But I haven't been able to see anything clearly wrong with the question yet (hence my reason for re-posting it here ... and I suppose an extended discussion might help my understanding anyway). If there is a trivial error/mistake, I would like to delete the question there. I suppose perhaps someone will point it out there in comments anyway.

Here is the copy-paste of the question:

I thought of a certain point that seems to point to an apparent incongruity. Hopefully someone will promptly point out the logical mistake and/or some implicit assumption that may not hold under a closer look. Even though the question is quite long, the actual point is very short. I have thought about it for a good number of hours, but I am likely suffering from a blind spot (possibly an easy one, though I would hope it isn't trivial).

So here is the rather short construct, so to speak. Let's first consider the case of ##\omega_{CK}##. Now we want to define two functions ##F:\omega \rightarrow \omega_{CK}## and ##f:\omega \rightarrow \omega## (the way I consider ##F## and ##f## in the actual question is almost exactly the same). For ##F##, we simply define ##F(x)## to be the ordinal whose well-order relation (for a well-order of ##\mathbb{N}##) is generated by the program with index ##x##. If the program with index ##x## doesn't generate a well-order relation, we can set ##F(x)=\omega_{CK}## (just as a matter of convention really).

Now we want to define the function ##f##. We define ##f(0)## to be the smallest value ##a \in \mathbb{N}## that satisfies the property ##F(a)<\omega_{CK}##. Now we recursively define ##f(x+1)## to be the smallest value ##a \in \mathbb{N}## such that:
(1) ##F(a)>F(f(x))## AND (2) ##F(a)<\omega_{CK}##. It isn't difficult to see that: (a) ##f## is a strictly increasing function (b) The function ##G:\omega \rightarrow \omega_{CK}## defined by the equation ##G(x)=F(f(x))## forms a (fundamental) sequence for ##\omega_{CK}##.

Now before proceeding, an elementary point that is relevant to the question. Consider an ordinary program where all variables start from value ##0##. We also assume that the only way to increase the value of a variable is by the command of form ##v:=v+1##. Now suppose we wanted to set a variable value to say ##1000,000##. We don't have to write the same number of lines for increment. We can just write a function (##\mathbb{N}## to ##\mathbb{N}##) such as ##x \rightarrow 10^x##. Suppose hypothetically that this function takes ##100## lines. Now all we have to do is increment a variable (call it ##v##) six times in the beginning and then place the body of function calculating ##10^x## afterwards (replacing the input variable by ##v## in the body of course). This will set the output variable to ##1000,000## in ##106## lines.
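For concreteness, the line-count comparison above can be sketched in Python (the register-machine pseudocode isn't executable, so Python stands in; the figure of ##100## lines for the ##10^x## routine is the hypothetical cost assumed in the text, not a derived value):

```python
# Line-count comparison for setting a variable to 10**k, following the
# sketch above. BODY_LINES = 100 is the hypothetical length of the
# routine computing 10**x, taken from the text.
BODY_LINES = 100

def direct_lines(target):
    # one `v := v + 1` command per unit of the target value
    return target

def indirect_lines(k):
    # k increments to set the input variable, then the 10**x body
    return k + BODY_LINES

print(direct_lines(10 ** 6))  # 1000000 lines the naive way
print(indirect_lines(6))      # 106 lines via the 10**x routine
```

The point of the sketch is only the growth rate: the direct method costs as many lines as the target value, while the indirect method costs logarithmically many plus a constant.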

---

And now here is my confusion. Let ##c## be the supremum of clockable values for an ordinal-register program (or a program model that is quite similar to it). Now define a function ##F:\omega \rightarrow c## so that ##F(x)## returns the value clocked by the program with index ##x##. If the program with index ##x## doesn't halt, set ##F(x)=c## (as a convention). Now define functions ##f:\omega \rightarrow \omega## and ##G:\omega \rightarrow c## in a manner completely identical to how they were defined in the first part of the question.

Now here is my confusion, which I assume has a simple answer that I am failing to see. I will use some specific numbers just so that the main point is immediately clear (I really doubt the specific numbers really play any important role here ... so I have chosen some easy numbers for convenience).

Consider the family of programs that, for some given value ##a \in \mathbb{N}##, simulate the program with index ##f(a)##. These programs halt in their simulation when the program with index ##f(a)## halts. These programs all have the same structure. For convenience choose ##f## to be a very simple function, say ##x \rightarrow 10^x## (obviously ##f## can't be recursive, but the main thing is just that ##f## is strictly increasing). The main point that is confusing me is this: for a very large value ##a \in \mathbb{N}## (and hence also a very large ##f(a)##), why can't we just set a variable equal to ##f(a)## rapidly, in a few lines, and then simulate the program with index ##f(a)## using essentially a universal program?

And unless the specific construction for the infinite case (since I have never written it in full detail) has some drastic difference from the finite case here, wouldn't this be a constant overhead in terms of number of lines? I suppose I should write out the details (unless the error is something simpler).

It sounds too abstract, but a simple example would help. Suppose writing the universal program takes about ##200## lines. Now as I mentioned above, suppose that ##f(x)=10^x##. In particular, we have ##f(6)=1000,000##. But couldn't we just set a variable whose value equals ##1000,000## in ##106## lines (as I mentioned in the first part) and then place the body of the universal program afterwards (to simulate the program with index ##1000,000##)? Wouldn't this take a total of just ##200+106=306## lines? Sure, its index will be a bit higher, but I am not clear that this makes any real difference. The program with index ##1000,000## clocked the value ##G(5)=F(f(5))##. And now somehow the program with smaller index is clocking a value greater than or equal to ##G(5)## [assuming that simulating a program will always be at least as expensive as directly running the program]?

Now given that our indexing is well-behaved, in the sense that it assigns programs of greater length greater indexes, what I wrote in the above paragraph simply shouldn't hold (otherwise there seems to be an incongruity). Hopefully someone will promptly point out the error.

Last edited:
@stevendaryl
Could you take a look and point out where there is an issue? Thanks. If anything is unclear (or some point requires some explanation), or requires a bit more extended discussion, let me know.

@Deedlit
Any thoughts on where an important step could be (plausibly) missing or incorrect?

SSequence said:
So here is the rather short construct, so to speak. Let's first consider the case of ##\omega_{CK}##. Now we want to define two functions ##F:\omega \rightarrow \omega_{CK}## and ##f:\omega \rightarrow \omega##
I get that you have functions that are maps between two sets, but what is ##\omega_{CK}##? This is not a good practice, using terms that you haven't defined.

SSequence said:
Now before proceeding, an elementary point that is relevant to the question. Consider an ordinary program where all variables start from value ##0##. We also assume that the only way to increase the value of a variable is by the command of form ##v:=v+1##.

SSequence said:
Now suppose we wanted to set a variable value to say ##1000,000##. We don't have to write the same number of lines for increment. We can just write a function (##\mathbb{N}## to ##\mathbb{N}##) such as ##x \rightarrow 10^x##. Suppose hypothetically that this function takes ##100## lines.
You said earlier (quoted in the 2nd paragraph above) that the only way to increase the value of a variable is by an assignment statement that increments the variable by 1. So how is it that you can write some magical function (of 100 lines or less!) to set a variable to an arbitrary value, when the only way you can do this is by incrementing its value by 1 each time?

I looked at your post on mathoverflow, and saw that you didn't get any response, which is not a surprise to me. Your question appears to me to be very theoretical, but without a reasonable explanation of what you're trying to do.

Mark44 said:
I get that you have functions that are maps between two sets, but what is ##\omega_{CK}##? This is not a good practice, using terms that you haven't defined.
##\omega_{CK}## just means the Church–Kleene ordinal.

I just used it as an example. The actual ordinal important in the question is ##c##, which I also described in the question.

Mark44 said:
You said earlier (quoted in the 2nd paragraph above) that the only way to increase the value of a variable is by an assignment statement that increments the variable by 1. So how is it that you can write some magical function (of 100 lines or less!) to set a variable to an arbitrary value, when the only way you can do this is by incrementing its value by 1 each time?
Well, this is also somewhat standard, I think. The point of disallowing arbitrary commands of the form ##v:=constant## (example: ##v:=10000##) is so that we have a finite number of programs of a given size (which simply makes it easier to organise them).

Here is an example of what I was saying. Suppose we wanted to set a variable ##v## to a value like ##1000,0000## or greater. One option is to just put exactly that many increment commands. The other is to write something along the lines of:
a:=0
a:=a+1 (10 times say)
v:=0
n:=0
while(n!=a) {
m:=0
while(m!=a) {
v:=v+1
m:=m+1 }
n:=n+1
}

A few more nested loops would essentially do the same job. What I meant was that setting ##v:=1000,0000## this way will take substantially fewer lines (basically, the faster-growing the underlying function used, the more difference it creates).
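As a sanity check, the loop sketch above can be transcribed literally into Python (just a transcription for checking the arithmetic; Python stands in for the register-machine pseudocode):

```python
# Direct transcription of the register-machine sketch above, to check
# what the two nested loops actually compute.
a = 0
for _ in range(10):  # `a := a + 1` written ten times
    a += 1
v = 0
n = 0
while n != a:
    m = 0
    while m != a:
        v += 1       # v := v + 1
        m += 1
    n += 1
print(v)  # 100: two nested loops give v = a * a
```

Two levels of nesting give ##v = a^2 = 100##; each additional level of nesting multiplies by ##a## again, which is what the "a few more nested loops" remark relies on.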

I'm not sure I understand your definition of ##F(x)##. What does "clock" mean here?

stevendaryl said:
I'm not sure I understand your definition of ##F(x)##. What does "clock" mean here?
It just means that given a (transfinite) program model when we start with empty input the program halts exactly at time ##\alpha \in Ord##. In that case we say that the given program clocks ##\alpha##.

The main thing, though, is that with powerful enough models there is a notion of simulation quite similar to the one for ordinary programs (the actual construction is more sophisticated, but that would be beside the point).

So, in a way, I was thinking about a paradox of program-length of sorts. What I was thinking was that for a given ##a \in \mathbb{N}## a program with a structure along the lines of:
c1:=value of f(a)
c2:=length of program with index f(a)
while(n!=c2) {
simulate the program with index f(a)
}

Note that c1, c2, n are program variables. The basic idea was that if we can somehow set the value of c1 equal to f(a) in very few lines (if possible), we would clock a value beyond or equal to F(f(a)) with a program that has index smaller than f(a) [some thought will show that this shouldn't be possible consistently].

=====

You might also look at the answer given to the linked question. It seems that the main related issue (given the answer to the linked question in the OP) might be related to Kolmogorov complexity (though I don't know much about it).

The main issue I am having is this: suppose a function ##f:\mathbb{N} \rightarrow \mathbb{N}## (##f## itself non-recursive) is recursively bounded (meaning another, recursive function ##g:\mathbb{N} \rightarrow \mathbb{N}## strictly dominates it) ... in fact ##f## will normally be exponentially bounded, I think. Now, can we create a really big difference between the length ##L## of an (ordinary) program that sets one of its variables to ##f(a)## (for arbitrary ##a \in \mathbb{N}##) and ##f(a)## itself? Meaning, can we make ##f(a)-L## grow at a very large rate (very roughly speaking)?

Personally, I will need to think about it. But I think the answer given suggests it is probably not possible.

SSequence said:
##\omega_{CK}## just means the Church–Kleene ordinal.
This is only slightly helpful, since now I need to look up that term.

SSequence said:
The point of disallowing arbitrary commands of the form such as ##v:=constant## (example: ##v:=10000##) is so that we have finite number of programs of a given size (which simply makes easier to organise them). This is also somewhat standard I think.
Not amongst people who write actual code that runs on computers.
In any case, the programs you're trying to describe have different sizes, depending on the value you're trying to assign.

SSequence said:
Here is an example of what I was saying. Suppose we wanted to set a variable ##v## to a value like ##1000,0000## or greater. One option is to just put exactly that many increment commands. The other is to write something along the lines of:
Code:
a:=0
a:=a+1 (10 times say)
v:=0
n:=0
while(n!=a) {
m:=0
while(m!=a) {
v:=v+1
m:=m+1
}
n:=n+1
}
Since you seem to be stuck on the sizes of these programs, why not start with, say, assigning a value of 100 to the variable v, and then work up to 1000, 10,000, and so on, until you detect a pattern?
For your example above, the block of code above the outer while loop will always contain 10 lines of the form a := a + 1, since you are working with powers of 10. If you want to assign 100 to v, you need two nested while loops. To assign the value 1000 to v, you need three nested while loops, and so on. It should be a fairly simple matter to count the number of lines of these loops, keeping in mind that for each higher power, you need another loop counter variable, which will make each of the nested loops longer.
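Under one possible set of counting conventions (assumed here for illustration, not fixed anywhere in the thread: each loop level contributes a counter initialisation, a while header, a counter increment, and a closing brace), the pattern described above can be tallied as:

```python
# Rough line count for k nested loops reaching 10**k, under the assumed
# conventions stated above. The exact constants depend on how lines are
# counted, but the growth is linear in k either way, versus 10**k
# increment lines for the direct method.
def lines_for_power(k):
    setup = 1 + 10 + 1  # a := 0, ten `a := a + 1` lines, v := 0
    per_loop = 4        # counter init, while header, counter increment, closing brace
    innermost = 1       # the single v := v + 1
    return setup + per_loop * k + innermost

for k in (2, 3, 4):
    print(k, lines_for_power(k))  # 21, 25, 29 lines vs 100, 1000, 10000 increments
```

Whatever the exact per-loop constant, the count grows linearly in the number of loop levels, i.e. logarithmically in the target value.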

Minor nit. In the US, we generally separate groups of three 0's, not four. One million would be written as 1,000,000, not as 1000,0000. European countries use a period as a separator, but they also do so in groups of three digits.

Mark44 said:
Since you seem to be stuck on the sizes of these programs, why not start with, say, assigning a value of 100 to the variable v, and then work up to 1000, 10,000, and so on, until you detect a pattern?
For your example above, the block of code above the outer while loop will always contain 10 lines of the form a := a + 1, since you are working with powers of 10. If you want to assign 100 to v, you need two nested while loops. To assign the value 1000 to v, you need three nested while loops, and so on. It should be a fairly simple matter to count the number of lines of these loops, keeping in mind that for each higher power, you need another loop counter variable, which will make each of the nested loops longer.
This wouldn't be of much help. Choosing a computable ##f## isn't of much use since, by the construction in the question, ##f## can't be computable (I just used it as an easy example and also mentioned that it was an artificial choice).

The main point comes down to what I described in the second half of post #6. As I said, I am not sure about it either way, since it occurred to me only after the answer was posted (in the link) in response to my question (which was basically a few hours ago) ... so I need to think about it, unless the person who answered the question (who seems quite knowledgeable) describes it as possible or impossible (I asked about that specific point in the comments there).

SSequence said:
This wouldn't be of much help. Choosing a computable ##f## isn't of much use since, by construction in question, ##f## can't be computable (I just used it as an easy example and also mentioned that it was an artificial choice).
If f isn't computable, how can you determine the number of lines of code it would take? What am I missing here?
SSequence said:
The main point it comes down to is what I described in second half of post#6. As I said, I am not sure about it either way, since it occurred to me after the answer posted (in link) in response to my question (which is basically few hours) ... so I need to think about it, unless the person who answered the question (who seems quite knowledgeable) describes it as possible or impossible (I asked that specific point in comments there).

Here is part of what you wrote in post #6.
So, in a way, I was thinking about a paradox of program-length of sorts. What I was thinking was that for a given ##a \in \mathbb{N}## a program with a structure along the lines of:
c1:=value of f(a)
c2:=length of program with index f(a)
while(n!=c2) {
simulate the program with index f(a)
}
This makes almost no sense to me for the following reasons.
1. If f is not computable, how do you determine f(a)?
2. "Program with index f(a)" -- ? It seems that you have a list of programs.
3. "Simulate the program with index f(a)" -- If you can't compute f(a), how will you be able to find the program at index f(a)?

I seem to remember you saying in some other thread that you had no or very little knowledge of programming. It seems to me that becoming more knowledgeable with some actual programming language would make it easier for you to communicate the meta-programming concepts you're interested in.

Mark44 said:
If f isn't computable, how can you determine the number of lines of code it would take? What am I missing here?
If you denote by ##L_a## the minimum number of lines of code it would take to reach ##f(a)## (for ##a \in \mathbb{N}##), then we want to analyze the difference function ##f(a)-L_a##. Basically we just want to make it grow fast enough ... in terms of ##a##, that is. If that function can be made to grow fast enough, there is an incongruity. If not, there isn't any incongruity.

Again, the only real condition that can apparently be meaningfully placed on ##f## is that it should be recursively bounded (well exponentially bounded really).

Mark44 said:
Here is part of what you wrote in post #6.

This makes almost no sense to me for the following reasons.
1. If f is not computable, how do you determine f(a)?
2. "Program with index f(a)" -- ? It seems that you have a list of programs.
3. "Simulate the program with index f(a)" -- If you can't compute f(a), how will you be able to find the program at index f(a)?
Sorry, there is some confusion. This specific sketch was for an ordinal program. Since stevendaryl asked about clocking, I gave a very brief background relevant to the main idea/construction of the question (in the first half of post #6).

The part from post#6 that I was referring to was the "second half". I will just quote it:
SSequence said:
The main issue I am having is that suppose a function ##f:\mathbb{N} \rightarrow \mathbb{N}## (but ##f## itself non-recursive) is recursively bounded (meaning another recursive function ##g:\mathbb{N} \rightarrow \mathbb{N}## strictly dominates it) ... in fact ##f## will normally be exponentially bounded I think. Now can we create a really big difference in length of (ordinary) program (call it ##L##) that sets one of its variables to ##f(a)## (for arbitrary ##a \in \mathbb{N}##) and ##f(a)## itself? Meaning can we make ##f(a)-L## grow at a very large rate (very roughly speaking).

Personally, I will need to think about it.

Anyway, here is a toy model of sorts I made. I think it reflects the actual situation reasonably well (of course there is probably quite a bit needed to make it more complete).

If we say ##f## is exponentially bounded by a function ##g:\mathbb{N} \rightarrow \mathbb{N}## then we should be able to write ##g## as something like ##g(x)=2^x## without much loss of generality. Basically, as mentioned, for a given ##a##, we want to be able to reach the value ##f(a)## quickly enough.

As a (very) very crude first approximation, we can say that a program of length ##x## can set a variable equal to ##2^x##. One idea is similar to binary search sort of thing. We know that for any ##f(a)## we will have ##f(a) \leq 2^a## by definition of ##g##. Now assuming that we have ##2^{b-1} \leq f(a) \leq 2^{b}## (where ##0 \leq b \leq a##).

First we basically just use the program that reaches ##2^{b-1}## (to close the difference as quickly as possible). That program would be of length ##b-1##. Then we calculate ##f(a)-2^{b-1}## and close this difference as quickly as possible by placing another program next to it. Essentially, continuing this way, we can already make the difference ##f(a)-L_a## grow roughly exponentially.
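This binary-style bound can be sketched numerically, under the crude assumption stated above that a program of length ##x## can set a variable to ##2^x## (so a block of length ##b## contributes ##2^b##):

```python
# Under the crude assumption that a block of length b can add 2**b to a
# variable, reaching n costs roughly the sum of the bit positions of n.
# Worst case (all N bits set) is about N*(N+1)/2, i.e. quadratic in N.
def sketch_length(n):
    total, bit = 0, 0
    while n:
        if n & 1:
            total += bit  # append a block of length `bit` contributing 2**bit
        n >>= 1
        bit += 1
    return total

print(sketch_length(8))              # 3: a single block of length 3
print(sketch_length((1 << 20) - 1))  # 190: worst case, 0 + 1 + ... + 19
```

So even in the worst case the length stays quadratic in ##N = \lfloor \log_2 f(a) \rfloor##, while ##f(a)## itself is exponential in ##N##; that is the sense in which ##f(a)-L_a## grows roughly exponentially here.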

My feeling is that an argument of this sort might be generalisable. But again, what I wrote above is a very rough idea. I am going to try to think about it in more detail/precision (to see whether it actually generalises or falls short somewhere).

EDIT:
I thought of one other point. It seems that ##BB## functions might be of some relevance.

Essentially, for a program that starts from empty input, I think the highest variable value reached (or, say, the value of a fixed variable) is basically a form of busy beaver function (https://en.wikipedia.org/wiki/Busy_beaver) ... or otherwise we can switch from the ordinal register model to the ordinal TM model anyway. Note that there are two kinds of such functions: "time-based" and "space-based". I am not quite familiar with the latter ones (but anyway, those are the ones that might have possible relevance here).

I have thought about it some more and I think I understand the gist of it. The answer given in the link seems right to me. It helps to simplify things by actually modifying them a bit (and using a simple approximation).

Instead of how ##F## was defined in the OP, if we simply define ##F(x)## as the maximum value clocked by an (infinite) program of length ##x##, then it becomes easier to see things. We can define the functions ##G## and ##f## in the obvious way now.

If we use a simple approximation (without much loss of generality) that all programs of length ##N## have indexes between ##2^N## and ##2^{N+1}##, say, then the problem is simply whether we can reach the value ##f(n)## using a program of length less than ##N##. Since the only thing given to us is that ##2^{N} \leq f(n) \leq 2^{N+1}##, it seems that we can't assume a priori that we can. It certainly says something about the function ##f##, but not anything more.
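Under this approximation (purely illustrative, of course), the length of the program with a given index is recoverable from the index alone:

```python
import math

# If programs of length N occupy indexes [2**N, 2**(N+1)), then knowing
# only that 2**N <= f(n) < 2**(N+1) determines the length N of the
# program with index f(n), but says nothing about shorter programs.
def length_of_index(i):
    return math.floor(math.log2(i))

print(length_of_index(1_000_000))  # 19, since 2**19 <= 1_000_000 < 2**20
```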

P.S.
The problem with a binary-style method similar to the one described in post #11 is that, in the worst-case scenario, the length of the resulting program that reaches ##f(n)## will roughly get to the order of ~##N^2## ... a value certainly much bigger than ##N##.

Edit: It seems the same construction that is applied to ##c## in the OP can be applied to ##\omega_{CK}## too, I think (with appropriate modifications for the functions ##F##, ##f## and ##G##). In hindsight, I should have noticed this (since for the construction to be even close to reasonable, it really ought not to be applicable to ##\omega_{CK}##). I guess that's the problem with posting a question the same day that it occurs to you.

SSequence said:
Basically the minimum number of lines of code it would take to reach ##f(a)## (for ##a \in \mathbb{N}##) if you denote it by ##L_a##, then we want to analyze the difference function ##f(a)-L_a##. Basically we just want to make it grow fast enough ... in terms of ##a## that is. If that function can be made to grow fast enough there is an incongruity. If not, there isn't any incongruity.
When you are talking about impossible constructs, words like "basically" are not going to work. We need a mathematical foundation from which to work, not hand waving.

jbriggs444 said:
When you are talking about impossible constructs, words like "basically" are not going to work. We need a mathematical foundation from which to work, not hand waving.
"Mathematical foundation" is a rather loaded term. I don't want to go into detail about it (unless you want to discuss it), since I think it wouldn't be relevant to the topic.

Anyway, the problem is settled (for example, see the post before this one). Also, as mentioned in the answer (in the link), the construction shows that the function ##f## (as constructed in the question) isn't compressible.

Let me see if I can restate the assumptions behind the question.

Let ##p_0, p_1, ...## be some computable enumeration of all possible programs that take no arguments.
Let ##F(j)## be some measure of the complexity of program ##p_j##, such as the amount of time ##p_j## runs before halting, or undefined if it never halts.
Let ##f(j)## be the noncomputable function defined this way:
• ##f(0)## = the smallest ##k## such that ##F(k)## is defined.
• ##f(j+1)## = the smallest ##k## such that ##F(k)## is defined and ##F(k) > F(f(j))##
Let ##G(j)## be ##F(f(j))##.

Is that sort of the idea? So in terms of these definitions, ##G(j)## is some monotonically increasing function of naturals. What properties are you assuming that ##G(j)## has? I think you can prove that it grows faster than any total computable function. That is, for any program of one variable, ##p(j)##, there is a number ##n## such that ##G(n) > p(n)## (and ##G(n+1) > p(n+1)##, etc.)
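The definitions above can be made concrete with a toy table standing in for the (noncomputable) ##F##; all specific values below are invented purely for illustration, with `None` marking "undefined" (a non-halting program):

```python
# Toy version of the f and G defined above. F is a hand-made stand-in
# for the complexity measure; None means F(j) is undefined.
F = [None, 5, 3, None, 7, 7, 2, 9]

def f(j):
    # f(0): first index where F is defined; f(j+1): first index whose
    # F-value strictly exceeds F(f(j)).
    if j == 0:
        return next(k for k, v in enumerate(F) if v is not None)
    prev = F[f(j - 1)]
    return next(k for k, v in enumerate(F) if v is not None and v > prev)

def G(j):
    return F[f(j)]

print([f(j) for j in range(3)])  # [1, 4, 7] -- strictly increasing indexes
print([G(j) for j in range(3)])  # [5, 7, 9] -- strictly increasing F-values
```

The toy run shows the two properties claimed in the thread: ##f## is strictly increasing, and ##G## enumerates the defined ##F##-values in increasing order.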

SSequence
@stevendaryl Not quite. To be clear, it was a plausibility construction that I wrote down after a good number of hours of thinking, where I thought/felt some implicit assumption was probably being made [[... this becomes quite clear when one notices that the construction works on ##\omega_{CK}##, or even ANY reasonably large ordinal beyond it: ##\omega_2^{CK}##, ##\omega_3^{CK}##, etc.]]

Even though the problem is settled, if you are interested, you might read the first half of post #6 to better understand the main point.

##G## is just a function that represents the fundamental sequence for a large enough ordinal. We can simply define ##G:\omega \rightarrow \omega_{CK}## by taking ##G(x)## to be the maximum value clocked by an (infinite) program of length ##x## (such that the clocked value is less than ##\omega_{CK}##). With this you can define the function ##f: \omega \rightarrow \omega## that gives the index of the program (of length ##x##) that goes the furthest (if two programs reach the same point, that's not a problem; we can have a convention of selecting the smaller index). The programs under consideration are not ordinary programs but ordinal register programs (it is just that their behaviour before ##\omega## won't be any different from that of ordinary programs), so we are considering their indexes.

P.S. Also note that in post #12 there is a typo. Where I wrote ##F## or ##F(x)##, it should have been ##G## or ##G(x)##. Secondly, note that the way I defined the functions ##G## and ##f## in the OP is a bit different (it was via the function ##F##), but it was a bit more complicated than necessary. The way I have defined them here is simpler.

Also note that when we write "index" above, it is implicitly assumed we are talking about the indexing of programs with no input/output, etc. (in case there was any confusion about this).

Edit: Also, if it makes things clearer, an analogous construction can be done for ordinary programs too, but I really didn't have it in mind while writing the question (or even up until now), because that wasn't the perspective of the question.

@stevendaryl
I think what you have written is basically correct (if we consider the question in the context of finite programs). If you look at post #16 in the context of ordinary programs, then essentially ##G:\omega \rightarrow \omega## will be the busy beaver function, and ##f: \omega \rightarrow \omega## will give the indexes of the programs that "win the race", so to speak.

But when we appropriately generalise what was considered in post #16, then for all limit values ##\alpha \leq c## (##c## being defined as in the OP) we have a function ##G_\alpha## that is somewhat analogous to the BB function for ##\alpha## (the function ##G_\alpha : \omega \rightarrow \alpha## will form a (fundamental) sequence for ##\alpha##). The analogy with BB is indeed quite clear when we consider the case ##\alpha=c## itself (though it would be slightly off-topic).

While it is obvious there are many values ##\alpha## for which ##f_\alpha## will be incompressible, a good question to ask is whether there will possibly be values ##\alpha## such that the function ##f_\alpha:\omega \rightarrow \omega## is compressible (beyond a constant number), and whether we can say something about the basic pattern of when these values occur. While this was not the actual question, it is a reasonable side question, I think (I don't know the answer).

The discussion has run its course (so I was debating whether to bump the topic or not ...), but I thought it wouldn't be a bad idea to add this (since I can't edit the previous post).


