Is computation synonymous with functions in computer science?

In summary: In computability theory, what things seem like, or what is practical, is irrelevant. Even processes which would require more memory than could be constructed with all of the materials in the universe, and more time than the age of the universe, may be considered computable, while something we can very easily approximate with extreme precision may not be considered computable.
  • #1
Mr Davis 97
So I've delved into programming and gotten experienced with the fundamentals. However, the more I learn, the more I question the central object of comp. science, computation, and its foundation. According to Wikipedia, "Computation is any type of calculation that follows a well-defined model understood and expressed as, for example, an algorithm." This is all well and good, but I am interested in the concept of the mathematical function and its relation to computation. During the incipient phase of computer science, Turing and many others seemed to characterize computability in terms of functions. For example, the Church-Turing thesis is stated as saying that there is no effective model of computing that can compute more mathematical functions than a Turing machine. This seems to imply that all computations involve the mathematical idea of functions (perhaps because computing involves algorithms, and functions can effectively model algorithms).

Essentially, my question is, is the notion of computation wholly related to functions? Are the two terms somehow synonymously related?
 
  • #2
Mr Davis 97 said:
Essentially, my question is, is the notion of computation wholly related to functions? Are the two terms somehow synonymously related?

An awful lot of it certainly is, especially as you think of it in regard to solving physics problems. Most would agree that a good computer program should give the same output every time for a given input.

But certain probability-related computer "functions" actually fail to fit the math definition of a function. Consider a random number generator. You can pretty easily use one to generate a sequence with a well-defined mean and standard deviation, but unpredictable regarding what the output of the next call will be. It may be doing exactly as expected, but no longer satisfying the strict math definition of a function (one output for any given input in the domain).
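As a minimal C sketch of the point (using the standard library's rand(); the call takes no argument at all, yet returns a different value each time, because hidden generator state, not the call site, determines the output):

C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    srand((unsigned)time(NULL));           /* seed the generator from the clock */
    printf("first call:  %d\n", rand());   /* same "input" (none)... */
    printf("second call: %d\n", rand());   /* ...different output */
    return 0;
}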
 
  • #3
Dr. Courtney said:
Consider a random number generator. You can pretty easily use one to generate a sequence with a well-defined mean and standard deviation, but unpredictable regarding what the output of the next call will be. It may be doing exactly as expected, but no longer satisfying the strict math definition of a function (one output for any given input in the domain).

But there is no known true random number generator. And all computer algorithms for random number generation are only pseudo-random number generators, and indeed do have only one output for each input. But this is a good example, because I believe that random number generation is something that a Turing machine cannot do; it may well be considered computation, but the question remains whether it can be done at all.

And this is a fascinating topic, because if true randomness is not possible, then it could imply that every event that has or will ever take place has always been predetermined.
 
  • #4
tAllan said:
But there is no known true random number generator. And all computer algorithms for random number generation are only pseudo-random number generators, and indeed do have only one output for each input.
For all practical purposes, this is not true. Not all random number generators are started with a known seed. Even the pseudo-random number generators are often not used in a way that makes their number stream predictable or repeatable.
 
  • #5
All chess programs I know of that can use multiple cores are non-deterministic when using more than one thread; i.e., given a chess position, they will return a different evaluation every single time they are run. On the other hand, when using one thread they are completely deterministic.

Now I don't know if the randomness is "true" or not. I believe that the output of these chess programs somehow depends on the temperature of the processors (and this isn't deterministic), because it depends on the speed of the processors, which in turn depends on their temperature. I would tend to think that multithreading, in chess programs at least, leads to true randomness in the output (for example the evaluation, or the total nodes searched for a given depth, etc.). I ran some randomness tests on these kinds of data and they couldn't spot any predictive behavior, though of course that doesn't mean the data was truly random.
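For illustration, here is a minimal sketch (assuming POSIX threads; it is not taken from any chess engine) of how thread scheduling alone can change a program's output from run to run, even though every thread executes deterministic code:

C:
#include <stdio.h>
#include <pthread.h>

/* Each thread records its id in a shared log; the slot it lands in
   depends on how the OS schedules the threads, so repeated runs can
   print different orderings. */
#define NTHREADS 4
static int order[NTHREADS];
static int next_slot = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int id = *(int *)arg;
    pthread_mutex_lock(&lock);
    order[next_slot++] = id;   /* position depends on scheduling */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    int ids[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    for (int i = 0; i < NTHREADS; i++)
        printf("%d ", order[i]);   /* e.g. "2 0 3 1" one run, "0 1 2 3" the next */
    printf("\n");
    return 0;
}

A parallel chess search is the same phenomenon at scale: which thread reports a result first steers the rest of the search.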
 
  • #6
tAllan said:
But there is no known true random number generator.
I think ERNIE is a true random number generator.
 
  • #7
FactChecker said:
For all practical purposes, this is not true. Not all random number generators are started with a known seed. Even the pseudo-random number generators are often not used in a way that makes their number stream predictable or repeatable.

In computability theory, what things seem like, or what is practical, is irrelevant. Even processes which would require more memory than could be constructed with all of the materials in the universe, and more time than the age of the universe, may be considered computable, while something we can very easily approximate with extreme precision may not be considered computable.

And mathematically, even ERNIE's random number generation is a function, as are the chess moves in a multi-threaded chess game. All of the input is input, including voltages, temperatures, or whatever else has an influence on the outcome.

The question of whether or not true random number generators exist is much more fundamental, and ultimately is a potentially unanswerable physics question.
 
  • #8
William White said:
However, if we define a certain process to be random, say Brownian motion, then the position of particle(s) after time t can be said to be random, and if we use that position as a variable in a programme, it is random, according to our definition.

If you want to learn more about computability and random numbers, then I suggest reading this excellent post from the theoretical computer science stack exchange.

LaurentBienvenu said:
I am joining the discussion fairly late, but I will try to address several questions that were asked earlier.

First, as observed by Aaron Sterling, it is important to first decide what we mean by "truly random" numbers, and especially if we are looking at things from a computational complexity or computability perspective.

Let me argue however that in complexity theory, people are mainly interested in pseudo-randomness, and pseudo-random generators, i.e. functions from strings to strings such that the distribution of the output sequences cannot be told apart from the uniform distribution by some efficient process (where several meanings of efficient can be considered, e.g. polytime computable, polynomial-size circuits etc). It is a beautiful and very active research area, but I think most people would agree that the objects it studies are not truly random, it is enough that they just look random (hence the term "pseudo").

In computability theory, a consensus has emerged as to what should be a good notion of "true randomness", and it is indeed the notion of Martin-Löf randomness which prevailed (other ones have been proposed and are interesting to study but do not bear all the nice properties Martin-Löf randomness has). To simplify matters, we will consider randomness for infinite binary sequences (other objects such as functions from strings to strings can easily be encoded by such sequences).

An infinite binary sequence α is Martin-Löf random if no computable process (even if we allow this process to be computable in triple exponential time or higher) can detect a randomness flaw.

(1) What do we mean by "randomness flaw"? That part is easy: it is a set of measure 0, i.e. a property that almost all sequences do not have (here we talk about Lebesgue measure, i.e. the measure where each bit has a 1/2 probability of being 0, independently of all the other bits). An example of such a flaw is "having asymptotically 1/3 of zeroes and 2/3 of ones", which violates the law of large numbers. Another example is "for every n, the first 2n bits of α are perfectly distributed (as many zeroes as ones)". In this case the law of large numbers is satisfied, but not the central limit theorem. Etc., etc.
(2) How can a computable process test that a sequence does not belong to a particular set of measure 0? In other words, what sets of measure 0 can be computably described? This is precisely what Martin-Löf tests are about. A Martin-Löf test is a computable procedure which, given an input k, computably (i.e., via a Turing machine with input k) generates a sequence of strings ##w_{k,0}, w_{k,1}, \ldots## such that the set ##U_k## of infinite sequences starting with one of those ##w_{k,i}## has measure at most ##2^{-k}## (if you like topology, notice that this is an open set in the product topology on the set of infinite binary sequences). Then the set ##G = \bigcap_k U_k## has measure 0 and is referred to as a Martin-Löf nullset. We can now define Martin-Löf randomness by saying that an infinite binary sequence α is Martin-Löf random if it does not belong to any Martin-Löf nullset.

This definition might seem technical but it is widely accepted as being the right one for several reasons:

  • it is effective enough, i.e. its definition involves computable processes
  • it is strong enough: any "almost sure" property you may find in a probability theory textbook (law of large numbers, law of iterated logarithm, etc) can be tested by a Martin-Löf test (although this is sometimes hard to prove)
  • it has been independently proposed by several people using different definitions (in particular the Levin-Chaitin definition using Kolmogorov complexity); and the fact that they all lead to the same concept is a hint that it should be the right notion (a little bit like the notion of computable function, which can be defined via Turing machines, recursive functions, lambda-calculus, etc.)
  • the mathematical theory behind it is very nice! See the three excellent books An Introduction to Kolmogorov Complexity and Its Applications (Li and Vitanyi), Algorithmic Randomness and Complexity (Downey and Hirschfeldt), and Computability and Randomness (Nies).
What does a Martin-Löf random sequence look like? Well, take a perfectly balanced coin and start flipping it. At each flip, write a 0 for heads and a 1 for tails. Continue until the end of time. That's what a Martin-Löf random sequence looks like :-)

Now back to the initial question: is there a computable way to generate a Martin-Löf random sequence? Intuitively the answer should be NO, because if we can use a computable process to generate a sequence α, then we can certainly use a computable process to describe the singleton {α}, so α is not random. Formally this is done as follows. Suppose a sequence α is computable. Consider the following Martin-Löf test: for all k, just output the prefix ##a_k## of α of length k, and nothing else. This has measure at most (in fact, exactly) ##2^{-k}##, and the intersection of the sets ##U_k## as in the definition is exactly {α}. QED!

In fact a Martin-Löf random sequence α is incomputable in a much stronger sense: if some oracle computation with oracle β (which itself is an infinite binary sequence) can compute α, then for all n, ##n - O(1)## bits of β are needed to compute the first n bits of α (this is in fact a characterization of Martin-Löf randomness, which unfortunately is rarely stated as such in the literature).

Ok, now the "edit" part of Joseph's question: Is it the case that a TM with access to a pure source of randomness (an oracle?), can compute a function that a classical TM cannot?

From a computability perspective, the answer is "yes and no". If you are given access to a random source as an oracle (where the output is presented as an infinite binary sequence), with probability 1 you will get a Martin-Löf random oracle, and as we saw earlier Martin-Löf random implies non-computable, so it suffices to output the oracle itself! Or if you want a function ##f : \mathbb{N} \to \mathbb{N}##, you can consider the function f which for all n tells you how many zeroes there are among the first n bits of your oracle. If the oracle is Martin-Löf random, this function will be non-computable.

But of course you might argue that this is cheating: indeed, for a different oracle we might get a different function, so there is a non-reproducibility problem. Hence another way to understand your question is the following: is there a function f which is non-computable, but which can be "computed with positive probability", in the sense that there is a Turing machine with access to a random oracle which, with positive probability (over the oracle), computes f? The answer is no, due to a theorem of Sacks whose proof is quite simple. Actually it has mainly been answered by Robin Kothari: if the probability for the TM to be correct is greater than 1/2, then one can look, for all n, at all the possible oracle computations with input n and find the output which gets the "majority vote", i.e. which is produced by a set of oracles of measure more than 1/2 (this can be done effectively). The argument even extends to smaller probabilities: suppose the TM outputs f with probability ##\epsilon > 0##. By Lebesgue's density theorem, there exists a finite string σ such that if we fix the first bits of the oracle to be exactly σ, and then get the other bits at random, then we compute f with probability at least 0.99. By taking such a σ, we can apply the above argument again.

http://cstheory.stackexchange.com/questions/1263/truly-random-number-generator-turing-computable
 
  • #9
tAllan said:
All of the input is input, including voltages, temperatures, or whatever else has an influence on the outcome.
The result of an unknown function of unknown inputs must be considered unknown and, to some extent, random.
 
  • #10
In regards to the @Mr Davis 97 post:

So I've delved into programming and gotten experienced with the fundamentals. However, the more I learn, the more I question the central object of comp. science, computation, and its foundation. According to Wikipedia, "Computation is any type of calculation that follows a well-defined model understood and expressed as, for example, an algorithm." This is all well and good, but I am interested in the concept of the mathematical function and its relation to computation. During the incipient phase of computer science, Turing and many others seemed to characterize computability in terms of functions. For example, the Church-Turing thesis is stated as saying that there is no effective model of computing that can compute more mathematical functions than a Turing machine. This seems to imply that all computations involve the mathematical idea of functions (perhaps because computing involves algorithms, and functions can effectively model algorithms).

Essentially, my question is, is the notion of computation wholly related to functions? Are the two terms somehow synonymously related?


Wikipedia is a tertiary source, and as such, I think you have to take their definition with a grain of salt. Computation is better defined, IMHO, as any system which embodies the information-processing cycle: typically, information is input into an information-tight system, stored and changed (or processed) inside, and eventually output. This is an important distinction because I consider "well-defined" a rather strict formality which tends to undermine the inclusion of the human brain as a computer. (Admittedly an ontological position I have chosen.)

The real question of why functions are used as a model in computing should probably start with noting that the first computer scientists were mathematicians. The ancient Greeks, Babbage, Lovelace, Zuse, von Neumann (just to list some of the earliest pioneers) were all mathematicians to various degrees. In fact, the formalism that describes the basis of digital computation in use (1's and 0's) was laid out by the mathematician George Boole in the mid-nineteenth century (for which he has been immortalized every time one of us code-slingers declares Dim bln_Temp As Boolean or some such thing... :) ) But a previous poster was spot on when he said that functions by definition are mathematical constructs that reliably produce the same output given the same input. While you can certainly point to non-deterministic computation to show that not all computers are deterministic and predictable (who wants a calculator where 2 + 2 maps to 4 only some of the time?), most of us practical schmoes prefer to know that what we get out of our code is logically consistent with our inputs. This is in contrast to the notion of a non-functional relation, in which an input can return two or more outputs.

In fact, in common parlance function is sometimes used interchangeably with procedure or routine, but they are not the same, and some programming languages strictly enforce a difference in definition. To be precise, a routine is any group of instructions that is stored in a separate place and can be called repeatedly. Routines come in two flavors: the procedure (a proc or sub) and the function. Some programming languages, such as Visual Basic, strictly enforce the difference, requiring a distinct definition for each (Sub and Function in this example). Procedures usually accept input but return no output, whereas functions do return a value. So, I might create a procedure called void PrintOutput(int Flag) which accepts an integer called Flag as input and then takes data and sends it out to the screen, but doesn't return anything to the code from which it's called. Contrast that with int Add(int Addend, int Adder), which accepts two integers and returns the sum. The former is a procedure and the latter is a function, both of which are routines; both are written out in the sketch below.
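Here is a minimal C sketch of the two routines just described (the bodies are illustrative; only the signatures come from the paragraph above):

C:
#include <stdio.h>

/* A procedure: accepts input, produces a side effect (printing),
   and returns nothing to the caller. */
void PrintOutput(int Flag)
{
    printf("flag = %d\n", Flag);
}

/* A function: accepts input and returns a value, with no side effects. */
int Add(int Addend, int Adder)
{
    return Addend + Adder;
}

int main(void)
{
    PrintOutput(Add(2, 2));   /* prints "flag = 4" */
    return 0;
}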

So, it's not hard to see how someone who is used to doing ##2 + 2 = 4## and then ##f_{+}(2,2) = 4## would want to generalize that to int Add(int Addend, int Adder) and call both of them functions.

Moderator action: Some off-topic content was redacted from this post.
 
  • #11
Moderator action: Some off-topic content was redacted from this post.

A function maps a specific point in the domain to one and only one point in the co-domain. This is something that random number generators seem to violate. My point is that all influences, or variables which play a role in the calculation, must be considered input. One perceived point in the domain may truly be two different points in the domain, which map to different outputs. This is trivial when it comes to the pseudo-random number generators used in modern computers.

Now the question is whether there could exist a random number generator which violates the rules of a function, and how far to go in including any possible influencing factors as components of points in the domain. Well, the first question is actually a very bad question, because any real random number generator will violate the rules of a function. So the question is really whether a random number generator could exist at all. As for the second question, you can go as far as considering the entire state of the universe as the point in the domain, and the question ultimately becomes (when taken to the full extent, to find the most fundamental, hard, mathematical truth) whether or not the universe is deterministic. I hope this makes it more clear what I was trying to say.

Here is a simple example that should help clarify my main point:

C:
int someFunc( int x ) {
    static int z = 0;   /* hidden state that persists across calls */
    z += x;             /* the result depends on x AND on z */
    return z;
}

The above is a function mathematically, even though it will produce different output for different calls with the same argument x: the static variable z is hidden state, so the true point in the domain is the pair (x, z), not x alone.

A lot of physicists hide behind jargon for that reason.

Jargon is necessary, as English is ambiguous, and Mathematics requires precise definitions in specific contexts.
 
  • #12
Probability and random behavior are intimately tied to information theory. Any time there is incomplete information and/or knowledge, there can be "random" behavior. Even given the most information possible, the quantum level presents uncertain, random behavior that cannot be removed. At the larger, non-quantum level, there are certainly ways to computer-generate random numbers where the user does not have enough information to predict them.
 
  • #13
Thread reopened.

The concept of computation is highly technical. Demands for an "explain it like I'm five" explanation are off-topic.
 
  • #14
@tAllan

Actually most function calls to random number generators are not mathematical functions because they don't have informational inputs. The physical inputs of the system are irrelevant when considering the flow of information into and out of the information system.

jtv
 
  • #15
aikismos said:
@tAllan
Actually most function calls to random number generators are not mathematical functions because they don't have informational inputs. The physical inputs of the system are irrelevant when considering the flow of information into and out of the information system.
jtv

I don't follow. If information derived from a physical process doesn't flow into the system, then how does it affect the way the system calculates? Besides, all input to any computer is physical at the lowest level anyway. In fact, your argument implies that mathematical functions don't exist at all in the physical world, which might be an interesting metaphysics topic, but is a hard concept to incorporate into a discussion about mathematical models of computation.

Pure mathematical functions are independent of the constraints of physical reality. When considering whether a random number generator is a function, it comes down to whether it is a deterministic process. If it is, then you can certainly model the process as a function. None of this depends on syntax and code, on how the process is expressed in a programming language, or on the form in which information is stored or transferred.
 
  • #16
tAllan said:
When considering whether the random number generator is a function, it comes down to whether it is a deterministic process. If it is, then you can certainly model the process as a function. All of this doesn't depend on syntax and code, how the process is expressed in a programming language, or in what form information is stored or transferred.
People as smart as Einstein believed that everything is deterministic. He was only proven wrong by experiments in quantum theory. The assumption that computer calculations are deterministic functions has less certainty now that quantum effects are becoming important in computer design. Error-correcting codes, fault-tolerant circuits, and quantum computers are important now and in the future.

But even if we assume that everything is a deterministic function, there is a mathematical distinction between functions where we know all the inputs and functions where we do not. We often have incomplete information. There are mathematical fields (Bayesian statistics, for one) that study the adjustments that should be made to probabilities when our information is incomplete and new information is obtained.

Getting back to the original question, there is a lot of computer Monte Carlo simulation where we are trying to model random, non-deterministic behavior. We want to get a representative sample of results given the same known inputs. So if the simulation behaves as a pure function of those inputs, that is a failure of the computer model.
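For example, here is a minimal sketch of that kind of simulation: a Monte Carlo estimate of pi, seeded from the clock so that each run draws a different representative sample (the specifics are illustrative; any stochastic model would do):

C:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Estimate pi by sampling random points in the unit square and
   counting the fraction that land inside the quarter circle. */
int main(void)
{
    srand((unsigned)time(NULL));   /* different seed, different sample */
    long hits = 0;
    const long n = 1000000;
    for (long i = 0; i < n; i++) {
        double x = (double)rand() / RAND_MAX;
        double y = (double)rand() / RAND_MAX;
        if (x * x + y * y <= 1.0)
            hits++;
    }
    printf("pi is approximately %f\n", 4.0 * hits / n);
    return 0;
}

Each run prints a slightly different estimate, which is exactly the representative-sample behavior the simulation wants.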
 
  • #17
@FactChecker, Quantum theory uses a non-deterministic model, but does not prove the universe is non-deterministic. At least that is my understanding from what I have read. There are some good threads on this forum about this issue you might want to read.

Regarding what the user does or doesn't know, this is not relevant in pure maths and theoretical computer science. It may be relevant in applied statistics.

Last, you still are not recognizing the difference between pseudo-random and random, which is fundamental to the discussion we are trying to have.

Most pseudo-random number generators are seeded with some number, and given the seed, a sequence of numbers is generated. The entire sequence is a function of the seed value. Grabbing the next pseudo-random number is then just indexing into the predetermined sequence.
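A minimal sketch of that point, using the classic Park-Miller linear congruential generator (the constants are the standard "minimal standard" ones; any LCG illustrates it):

C:
#include <stdio.h>
#include <stdint.h>

/* One step of the Park-Miller LCG: the next state is a pure function
   of the current state, so the whole stream is a function of the seed. */
static uint64_t lcg_next(uint64_t state)
{
    return (state * 16807u) % 2147483647u;   /* multiplier 7^5, modulus 2^31 - 1 */
}

int main(void)
{
    uint64_t s = 42;   /* the seed determines the entire sequence */
    for (int i = 0; i < 5; i++) {
        s = lcg_next(s);
        printf("%llu\n", (unsigned long long)s);   /* identical on every run */
    }
    return 0;
}

Run it twice with the same seed and you get the same five numbers; the "randomness" is entirely in not knowing the seed.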
 
  • #18
tAllan said:
I don't follow. If information derived from a physical process doesn't flow into the system, then how does it affect the way the system calculates? Besides, all input to any computer is physical at the lowest level anyway. In fact, your argument implies that mathematical functions don't exist at all in the physical world, which might be an interesting metaphysics topic, but is a hard concept to incorporate into a discussion about mathematical models of computation.

Pure mathematical functions are independent of the constraints of physical reality. When considering whether a random number generator is a function, it comes down to whether it is a deterministic process. If it is, then you can certainly model the process as a function. None of this depends on syntax and code, on how the process is expressed in a programming language, or on the form in which information is stored or transferred.

@tAllan,

Your question about the relationship between mathematical functions and the physical layer which gives rise to them is challenging because it rests on some philosophically challenging ideas. In the most general sense, yours is a question about how what we perceive as physical is interlinked with what we perceive as abstract; I specifically say 'perceive' because what we discuss about the universe is subject to infinite regress, since as biological computers we are essentially trying to model how we ourselves function. (See https://en.wikipedia.org/wiki/Gödel,_Escher,_Bach). I'm stating this as a preface because it has become somewhat chic to view the physical universe as information itself or some large simulation (see https://en.wikipedia.org/wiki/Programming_the_Universe), which rubs me the wrong way since it is not a falsifiable hypothesis, and I consider Lloyd's book an imaginative piece of science fiction until someone otherwise persuades me I'm a simulation. I also see such navel-gazing as mistaking introspection about our thoughts, which represent the physical universe, for the physical universe itself, and I personally draw a very sharp line between what is symbol and what is medium, because I believe abstraction to be an emergent property and information to be irreducible to the medium it is encoded in. This makes me a dualist, but not one who believes there is no interaction between the physical and the abstract. So let's get into my response now that you understand where I come from.

First, functions are a very specific type of mathematical construct, which is by definition a symbol to represent relationships between symbols in a very specific way. A function is a relation or rule between sets, the domain and codomain, where each informational input is mapped to exactly one output. Note well that this is a very specific set of words whose meaning deals in abstract entities (see https://en.wikipedia.org/wiki/The_Meaning_of_Meaning). In other words, 'function' is a symbol which we as thinkers (reference) use to name the definition above (referent). You say that my denial of your potential characterization of a function as allowing for physical inputs creating informational outputs might preclude functions from existing, but the opposite is true. I oppose extending the definition of a function to include physical inputs precisely because doing so muddies the definitions of what is physical and what is informational, and it is such a definition that would lead to contradictions. Follow along, if you please, to see why.

Let us say we are aware of physical and abstract entities and that each is subject to stasis and dynamics. In fact, they exist in parallel forms. Let's examine form one, the 'physiochemical processing system'. The car is an easy instantiation of this, whereby gasoline can be added (physical and chemical input) to a gas tank (physical and chemical storage), where a fuel pump moves it to the cylinders for combustion (chemical processing), which moves the pistons (physical processing), which ultimately results in the circular motion of the flywheel and tires (physical output) and exhaust (chemical output). This entire system is understood very well by automotive engineers who are expert in the physics, the chemistry, and the applied sciences (materials, e.g.) needed to prototype and perfect a reliable car. Now, form two, the 'information processing system', is analogous. As I sit here typing away at my keyboard, the physical keystrokes are converted to electrical signals (input) and are fed by a satellite microprocessor into the laptop's RAM (storage), where they are eventually packaged and sent through the IP protocol stack down to the physical layer (processing), and transmitted out my Wi-Fi card to the switch/firewall/router (output) via EM waves. Both of these systems, or in the abstract, cycles, have a basic conceptual symmetry to them: 'change' (processing) and 'stasis' (storage) with regard to the boundaries of a system. But as a dualist, I have to call attention to a fundamental difference in their nature. I cannot put gasoline into a car and get information as a direct output (though I could measure and gather data on the process), and I cannot put keystrokes into my computer and get a laptop to move (though I could use a computer to control the car). More simply put, cars deal in physical inputs (gasoline, air, electrons for ignition), and the arithmetic, logical, and control units of my CPU deal in '1's and '0's as encoded by electrons. The first system deals with a medium, but the second system deals with a medium and an encoded message!

So, if you now accept the dualist premise that the abstract is more than just physical, that it somehow emerges from the physical, and is not reducible in the sense that it has the same meaning (a positive voltage may represent a '1', but it is not a '1', because '1' is an abstract idea that our brains have evolved to use for communication), then you're ready to see my objection to blurring the definitions of the two to consider physical inputs into a computer actual 'symbols' for output. Not to sound contrarian to McLuhan's political statement, but the medium (the physical stuff) is not the message (the informational stuff). If it were, cryptologists wouldn't be needed, right? Capturing the encrypted disc (medium) would yield the contents (message), because they're really the same thing! Fortunately for privacy's sake, this is not true. In fact, the practicalities of encoding information onto a medium have been mathematized quite nicely by Claude Shannon and others. (See https://en.wikipedia.org/wiki/Information_theory). Now, let me address your idea of physical inputs into an information system, which is possible but contradictory to the ontology that undergirds information theory.

Some may not appreciate the extent to which information theory and work on noise in channels has influenced computer science, but the modern digital computer is predicated on ideas that come from it, primarily in the form of encoding and error-correcting codes (ECC). Were we not able to move the bits (the word Shannon popularized) around inside the computer reliably, then we wouldn't be able to get reliable output from a digital computer at all. Bits are moved around inside a machine quickly and in large volumes, so even small amounts of error in the input, storage, processing, and output of bits would be disastrous for even a primitive OS. Remember that even a single wrong bit in an opcode in a short assembly of instructions would cause a system to crash. But what is it that ECC does, exactly? It fights informational entropy, or the tendency of physical entropy and other irregularities to spontaneously change the medium and thereby alter the message against our wishes. Communication theorists call this 'noise'. Noise is nothing more than physical perturbation in a medium which affects the encoded information.
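As a minimal illustration of the idea (a single parity bit, the simplest ancestor of real ECC; the example is mine, not from any particular standard):

C:
#include <stdio.h>
#include <stdint.h>

/* Compute even parity over the bits of a byte. Storing the parity
   alongside the data lets us detect (not correct) a single flipped
   bit -- real ECC such as Hamming codes goes further. */
static int parity(uint8_t byte)
{
    int p = 0;
    while (byte) {
        p ^= byte & 1;
        byte >>= 1;
    }
    return p;
}

int main(void)
{
    uint8_t data = 0x5A;
    int stored_parity = parity(data);

    data ^= 0x08;   /* "noise": one bit flips in storage */

    if (parity(data) != stored_parity)
        printf("corruption detected\n");
    return 0;
}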

So, at last we come to the mechanics of my objection. Your conception of the physical characteristics of the medium which encodes a function is intuition-wise accurate. Yes, all information is embodied by a physical phenomenon (encoding). Yes, those physical phenomena are responsible for the input, storage, processing, and output of information (physical processing underlies information processing). Yes, even without physical input, information can be created or changed (noise). True random number generators use physical processes (such as thermodynamic changes) to generate information, so an entire industry already exists in using physical inputs to generate informational outputs, and yes, the uniqueness of the state of the system could be used to produce a single output, but it appears to me to be a semantic issue insofar as it is a violation of the definition of a function. "A function is a relation or rule between sets, the domain and codomain, where each informational input is mapped to exactly one output." The inputs have to be informational; that is to say, the inputs have to be encoded before they enter the system, not created within the boundaries of the system. It may seem like hair-splitting, but I think it goes to the question of intentionality of design and the definitions of noise and message.

If you start considering all physical perturbations of a system as informational inputs, then all sorts of semantic contradictions begin to arise. If physical entities are information, then information is the same as the physical, and no difference exists between the physical and the informational. All stuff is idea, and all idea is stuff, and that invalidates the analytical definitions of everything pertaining to the physical and the abstract, and renders all sorts of reasoning meaningless, contradictory, and/or tautological. It muddles the logical edifice that all of these disciplines are built on, because fundamentally it ignores the phenomenon of encoding.

To wit, examine the following claims: The quantity of mass of a rock isn't encoded by measurement, but exists inherent in the rock itself. The temperature of the granite isn't something measured, but just exists and is self-evident. The length of the rock is part of the rock, and not a number derived from comparison with a standard such as a meter stick. It's hard for people who believe in an external reality not to object to these ambiguous statements, because we say quantity, temperature, and length are objective measures constantly in science; but if they are part of the object, if the temperature of Das Ding an sich (the thing-in-itself) is inherent, how is it possible for two people to measure it and get different numbers? It is better to shed such certainty and realize that external reality comes to us through a perceptual filter itself built of external reality and not accessible to introspection. It's better to model external reality by realizing that our measurement of 'temperature' is an internal process, that labeling it 'temperature' is an informational process, and that what we perceive as 'temperature' is a symbol for a referent which is the average kinetic energy of the molecules in a mass; 'temperature' is a practical abstraction liable to errors in measurement (hence the concepts of accuracy and precision). Everything you might think you observe, taste, feel, and touch is nothing more than psychological perception subject to distortion and bias, and that is precisely why science, empiricism, skepticism, and related ilk are recent and highly fruitful innovations. They recognize this. The whole scientific method is built around the dichotomy of physical (external) and abstract (internal), and the moment you blur them together in theory and practice, you set thinking back to magic.

So, I hope you see that I object to your characterization not because it is immediately far-fetched or contradictory. It actually seems like a pragmatic way to conceptualize a function, but the problem is, it puts a small leak in the hull of thinking that eventually sinks the ship. In principle, when you start muddling the nature of that which is physical with that which is abstract, you undermine the intersubjective process by which STEM arrives at reliable knowledge.
 
  • #19
aikismos said:
In principle, when you start muddling the nature of that which is physical with that which is abstract, you undermine the intersubjective process by which STEM arrives at reliable knowledge.

But it's not necessary to even acknowledge any notion of the physical in the abstract model of the process, when it is specified as a function. The "physical input" is not physical in its mathematical form; it's simply a collection of values, real numbers.
 
  • #20
tAllan said:
But it's not necessary to even acknowledge any notion of the physical in the abstract model of the process, when it is specified as a function. The "physical input" is not physical in its mathematical form; it's simply a collection of values, real numbers.

Precisely why physical inputs cannot be defined as inputs of a function. Physical input cannot become mathematical input unless it is either encoded by the system or is noise in the system. That's why, when you said "A function maps a specific point in the domain to one and only one point in the co-domain. This is something that random number generators seem to violate," you were correct. A true hardware random number generator (HRNG) does not need to embody a mathematical function; it can be designed as a routine which outputs information with no input of "information" per se. It creates information from physical entropy (which is NOT information). That's what my response asserts: what you keep calling "information" is NOT information until it is encoded into an information system. The equations of thermodynamics are NOT what they represent. They are a set of rules (and hence information) only in our minds. Particles do NOT have information; rather, information exists in our minds to describe particles. To think otherwise is to confuse thinking for external reality (a confusion which is quite popular these days).
 
  • #21
tAllan said:
@FactChecker, Quantum theory uses a non-deterministic model, but does not prove the universe is non-deterministic. At least that is my understanding from what I have read. There are some good threads on this forum about this issue you might want to read.

Regarding what the user does or doesn't know, this is not relevant in pure maths and theoretical computer science. It may be relevant in applied statistics.

Last, you still are not recognizing the difference between pseudo-random and random, which is fundamental to the discussion we are trying to have.

Most pseudo-random number generators are seeded with some number, and given the seed, a sequence of numbers is generated. The entire sequence is a function of the seed value. Grabbing the next pseudo-random number is then just indexing into the predetermined sequence.

The problem with those discussions about whether the universe is or is not deterministic is that they can't be falsified, and thus are useless in discussing science. To see it, imagine I had a black box which was by definition non-deterministic: its outputs have no correlation with its inputs in any way, shape, or form. You could always argue that there is some set of laws which describes the outputs given the inputs, we just don't know what they are, and the argument would be plausible without the a priori definition of non-determinism. Similarly, assume we are in a by-definition deterministic universe. To prove that it actually is requires omniscience: one must show that in every case, the same precedent state always leads to the same outcome. To totally rule out all non-deterministic events, we must be omniscient.

So what we do or do not know does matter in terms of practical philosophy. Newtonian physics, for instance, is still a really wonderful and powerful system, if you just explicitly acknowledge what it doesn't cover, what it leaves us not knowing. But in terms of absolutes, it's wrong because it's not 100% right, like every other modern physics theory. But when I see the Mars Rover, I am struck by how much scientists have achieved with "wrong" theories like QM and relativity. Making space for what we don't know, for what theories don't cover, seems wise in science.

My preferred definition of randomness is the algorithmic randomness of Chaitin, explored also by Kolmogorov:
https://www.cs.auckland.ac.nz/~chaitin/sciamer.html

"Although randomness can be precisely defined and can even be measured, a given number cannot be proved to be random. This enigma establishes a limit to what is possible in mathematics."

That quote from the top makes explicit the relationship between what we don't know (including pseudorandom/chaotic sequences we don't understand the generators for) and true, absolute randomness: We can't, in many cases, tell them apart.
 
  • #22
A computation takes an input and produces an output. So this is a function.
 
  • #23
tAllan said:
Jargon is necessary, as English is ambiguous, and Mathematics requires precise definitions in specific contexts.

English is as precise as you want it to be.

If you think English is ambiguous, that is because you are not using it properly.
I accept there is (sometimes) a need for technical terms if it makes a statement more concise and/or easier to understand.

However, some people use jargon for the opposite effect: to make a statement less concise and more ambiguous, or just plain nonsense.

Somebody responded to a post of mine with

"...any non-organically compromised Homo sapien sapien..."

It is absolutely meaningless. This sort of nonsense needs to stop if scientists want the public, and other scientists, to understand what they are working on. If you are trying to convince somebody that your opinion or point of view is the correct one, or even worth contemplating, is it not better to ensure clarity of thought?
This is not about the grammar police or the spelling nazis; it is about clarity of thought and clarity of communication. The best communicators (in any field) use plain language and straight talking. Why can the OED define computation in five words?
 
  • #24
William White said:
English is as precise as you want to it be.

If you think English is ambiguous, that is because you are not using it properly.
All language is relative to the parties using it.

Consider a sequence of digits that gets changed from 0328761954 to 0123456789. What happened, in one sentence?

Computer scientist: The sequence went from chaotic to ordered.
Physicist: The sequence went from ordered to chaotic.

So without a paragraph defining what chaotic and ordered mean, it's not possible to convey what happened easily to everyone using the same sentence. The programmer considers something "ordered" only if it has some defined, interesting order. The physicist defines the starting state as ordered, regardless of whether that order is interesting or not. When you have words with multiple definitions, a language becomes ambiguous. That's why the tax code fills a library and the standard model fits on an index card. Before about 1600, a lot of math was written in natural language; it was codified for a reason.
 
  • #25
You have made an example that does not need jargon, though.

You could explain the meaning of those two sequences (the meaning you want to convey) using plain language very easily. If you think a sentence you write is ambiguous, that is your fault and not the language's.

Clear communication is a skill that a lot of people in the "STEM community" lack.
 
  • #26
FactChecker said:
For all practical purposes, this is not true. Not all random number generators are started with a known seed. Even the pseudo-random number generators are often not used in a way that makes their number stream predictable or repeatable.
Incorrect. All such algorithms use variables that are known to the algorithm, and as such the result of each computation is 100% predictable by the author of the algorithm. The randomness of any number generated by any algorithm is entirely illusory.

In the quantum world the velocity or location of a particle cannot be precisely determined, which gives an impression of randomness. However, in the absence of a full understanding of the mechanism that causes a quantum particle to move from location A to location B, it cannot currently be said to be truly random.
 
  • #27
JonathanCollins said:
Incorrect. All such algorithms use variables that are known to the algorithm, and as such the result of each computation is 100% predictable by the author of the algorithm. The randomness of any number generated by any algorithm is entirely illusory.

In the quantum world the velocity or location of a particle cannot be precisely determined, which gives an impression of randomness. However, in the absence of a full understanding of the mechanism that causes a quantum particle to move from location A to location B, it cannot currently be said to be truly random.

@JonathanCollins, that simply is not true. You need to separate what the algorithm knows from what the author knows. A coder can write an algorithm which calls on information that makes the author unable to predict the output. A simple example of that would be an MD5 hash whose input comes from a HRNG.
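As a sketch of the idea (reading from /dev/urandom as a stand-in for a HRNG; on Linux it is fed by hardware entropy sources, and the MD5 step is omitted for brevity):

C:
#include <stdio.h>

/* The algorithm's code is fully known, yet its output is unpredictable
   to its author, because the bytes come from entropy gathered outside
   the program. */
int main(void)
{
    unsigned char buf[16];
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f)
        return 1;
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) {
        fclose(f);
        return 1;
    }
    fclose(f);
    for (size_t i = 0; i < sizeof buf; i++)
        printf("%02x", buf[i]);   /* different on every run, by design */
    printf("\n");
    return 0;
}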
 
  • #28
aikismos said:
@JonathanCollins, that simply is not true. You need to separate what the algorithm knows from what the author knows. An coder can write an algorithm which calls on information which makes the author unable to predict the output. A simple example of that would be an MD5 hash whose input comes from a HRNG.
Yes, but information taken from any external source will also be predictable by the author of the algorithm producing that information. With full knowledge of all the algorithms utilised to generate a number, that number can always be predicted, and so it is not random. In short, no binary computer is capable of executing a single instruction it has not been programmed to execute.
 
  • #29
JonathanCollins said:
Yes, but information taken from any external source will also be predictable by the author of the algorithm producing that information. With full knowledge of all the algorithms utilised to generate a number, that number can always be predicted, and so it is not random. In short, no binary computer is capable of executing a single instruction it has not been programmed to execute.
Ahh, I see what you're trying to communicate, but there are some points which need clarification. The first is that an algorithm can always take in unknown or random information, and that can prevent someone who has full knowledge of the algorithm from predicting its output. Secondly, remember that if an algorithm is written in a third-generation language which is itself compiled, then the instructions actually executed may vary from machine to machine based on the hardware, the software libraries, etc. What you're fundamentally trying to state is that traditional PCs are deterministic machines which can be conceived of mathematically as a series of mappings of inputs to outputs. I'd agree with that statement as long as you acknowledge the amount of complexity that is involved in a contemporary computer system. For instance, what if there is a failure in the ECC used in RAM, or a floating-point error in the CPU's ALU? What happens if the algorithm calls a function in a library, and that library gets upgraded between runs? In my previous post, I mentioned a hardware random number generator (HRNG); if it were called on by the algorithm even once as part of the computation, that would make the entire algorithm, all other factors considered, incapable of being predicted. But I get what you're trying to say.
 
  • #30
A pity, I don't know whether it helps you, but all a computer can really do is addition... Subtraction is addition with the opposite sign, multiplication is repeated addition, and division is repeated subtraction. That's what a computer really does.
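A minimal C sketch of that reduction (the helper names are mine, and the multiply loop assumes a non-negative multiplier for simplicity):

C:
#include <stdio.h>

/* Subtraction as addition of the opposite sign. */
static int subtract(int a, int b)
{
    return a + (-b);
}

/* Multiplication as repeated addition ("many times plus"). */
static int multiply(int a, int b)
{
    int result = 0;
    for (int i = 0; i < b; i++)   /* assumes b >= 0 */
        result += a;
    return result;
}

int main(void)
{
    printf("%d %d\n", subtract(7, 3), multiply(6, 7));   /* prints "4 42" */
    return 0;
}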
 
  • #31
Yes, even in the case of hardware failure causing corruption of the variables generated by algorithms, the end result is deterministic based on the exact sequence of events that occurred.

The wider and more interesting debate on this topic is the consideration of whether any event in the universe can be said to be random. If cause and effect can be pinpointed at the quantum level, the entire universe becomes deterministic. Whilst observations of quantum particles appear to suggest random behaviour, I am of the opinion that in time such observations will become explicable through an understanding of the mechanisms that drive such behaviour. After all, at present there is, to my knowledge, no coherent model of the qualitative nature of any of the known forces (e.g. what actually causes one particle to be tightly bound to another in the nucleus of an atom?).
 
  • #32
Some random number generators have been built which derive their outputs by sampling physical phenomena.

Seen as a black-box computer, the input is the ON switch and the output is random numbers. ERNIE worked this way.

https://en.wikipedia.org/wiki/Premium_Bond
 
  • #33
JonathanCollins said:
Yes, even in the case of hardware failure causing corruption of the variables generated by algorithms, the end result is deterministic based on the exact sequence of events that occurred.

The wider and more interesting debate on this topic is the consideration of whether any event in the universe can be said to be random. If cause and effect can be pinpointed at the quantum level, the entire universe becomes deterministic. Whilst observations of quantum particles appear to suggest random behaviour, I am of the opinion that in time such observations will become explicable through an understanding of the mechanisms that drive such behaviour. After all, at present there is, to my knowledge, no coherent model of the qualitative nature of any of the known forces (e.g. what actually causes one particle to be tightly bound to another in the nucleus of an atom?).

I think to have that conversation, one has to make things clear: events in this universe are religiously deterministic if we believe the same complete state of a system is always followed by the same following state, each time. Events in this universe are meaningfully deterministic if the same complete set of observables for a system is always followed by the same set of observables afterwards. The difference between the former and the latter is that in the former, we suppose a godlike, all-knowing observer who can determine the outcome knowing the current state, but not us; in the latter, WE can determine the outcome (or what we can observe of it) somehow from what we can observe of the current state.

I believe there's a parable in the Islamic Hadith that says in the beginning God told a pen to write the history of everything from the beginning till the end of time, and it did, and God has that transcript. That's the religiously deterministic universe: everything is predetermined, we just don't know what will happen, and never can. It's faith-based, it's rooted in the middle ages, and it isn't useful for science. So that definition of determinism must be set aside.

That leaves us with the useful definition: when the set of observables available isn't sufficient to predict the set of observables that follow, you experience randomness, uncertainty about the outcome. Even if we do come up with a deterministic model of QM, that randomness will still be there: there will be large, chaotic systems whose details are too numerous to fit into the largest computers of the future, and some elements will be unknown. Or just take the Halting Problem from computer science: computers are physical systems, so we should be able to determine, with our deterministic physics, whether any computer program will halt without running it, creating some crazy paradoxes.

So we will always have unknowns, always have randomness. The only alternative view is that of the omniscient viewer who won't tell us, which we have set aside.

edit: A word about observables: what this generation can observe, the last generation could not, so the set of things that can be determined is always growing, and science is in the business of growing it more and more. What's random is random precisely until it isn't... What we mustn't do is make the mistake of replacing the middle-ages God with our present concept of our future selves.
 
  • #34
mac_alleb said:
A pity, I don't know whether it helps you, but all a computer can really do is addition... Subtraction is addition with the opposite sign, multiplication is repeated addition, and division is repeated subtraction. That's what a computer really does.
Actually, at a fundamental level, all a computer can do is represent binary 0 or 1.
Fooality said:
I think to have that conversation, one has to make things clear: events in this universe are religiously deterministic if we believe the same complete state of a system is always followed by the same following state, each time. Events in this universe are meaningfully deterministic if the same complete set of observables for a system is always followed by the same set of observables afterwards. The difference between the former and the latter is that in the former, we suppose a godlike, all-knowing observer who can determine the outcome knowing the current state, but not us; in the latter, WE can determine the outcome (or what we can observe of it) somehow from what we can observe of the current state.

I believe there's a parable in the Islamic Hadith that says in the beginning God told a pen to write the history of everything from the beginning till the end of time, and it did, and God has that transcript. That's the religiously deterministic universe: everything is predetermined, we just don't know what will happen, and never can. It's faith-based, it's rooted in the middle ages, and it isn't useful for science. So that definition of determinism must be set aside.

That leaves us with the useful definition: when the set of observables available isn't sufficient to predict the set of observables that follow, you experience randomness, uncertainty about the outcome. Even if we do come up with a deterministic model of QM, that randomness will still be there: there will be large, chaotic systems whose details are too numerous to fit into the largest computers of the future, and some elements will be unknown. Or just take the Halting Problem from computer science: computers are physical systems, so we should be able to determine, with our deterministic physics, whether any computer program will halt without running it, creating some crazy paradoxes.

So we will always have unknowns, always have randomness. The only alternative view is that of the omniscient viewer who won't tell us, which we have set aside.

edit: A word about observables: what this generation can observe, the last generation could not, so the set of things that can be determined is always growing, and science is in the business of growing it more and more. What's random is random precisely until it isn't... What we mustn't do is make the mistake of replacing the middle-ages God with our present concept of our future selves.
It is feasible that a full understanding will be reached for all phenomena (including QM), at which stage the universe becomes deterministic, albeit still very complex, in the same way that weather becomes increasingly deterministic as we gradually unravel the detailed variables. A future explanation of the apparently random behaviour of quantum particles may yet prove to be very simple; future computers may become exponentially more powerful. There is no reason to suppose that something (however apparently chaotic) cannot ultimately be determined. What are the paradoxes to which you refer regarding computers?

 
  • #35
Mr Davis 97 said:
Essentially, my question is, is the notion of computation wholly related to functions? Are the two terms somehow synonymously related?
It seems a lot of the answers have gone rogue on the recent hype revival of functional programming. But I would like to get back to your question.

First, I think there is the same tension between Math and Physics as there is between Math and Computing, each of those fields trying to claim ownership of the other.
I don't think that is the case. There is an intersection between them, but some parts are clearly distinct.

I think the reason is simple: both physics and computers have to handle "practical" rules and facts that mathematics could probably describe, but to no end.

In computing there are indeed "pure" functions that would probably match the math notion pretty closely. Hence the same word is used. But where the math function is a mapping between input and output, a computer function is an instance of such a process, one that transforms input into output.
I like to see the math function as pure symbolism, and the computer function as a realization that has to consume "energy". It's not pure, it's not abstract, and it can fail.
Even in its symbolic form, the computer function is deeply related to the fact that your CPU has CALL / RET in its instruction set (that's why GPUs are quite different to program).

So at one end of the spectrum, there are the more pure functions, like Add. Really, the pure functional language paradigm has a great many advantages, but they are often limited to the parts of the logic that are in fact closest to pure math. SelectFirstElementOf(sequence) would be another example. But those functions are not going anywhere. The purest math method I know of is the Mandelbrot set function, and really, it only gets its real beauty when computers take care of it; a sketch of it follows below. I really do think math and computing are the very same thing for all mathematics of iteration. I hope this will someday bring some new insight into the chaotic/fractal non-linear mathematics, which really needs a revolution.
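For the curious, a minimal C sketch of that function: the escape-time iteration ##z \to z^2 + c##, rendered as crude ASCII art (the bounds and step sizes are arbitrary choices of mine):

C:
#include <stdio.h>

/* Map the point c = (cr, ci) to the number of iterations of
   z -> z^2 + c before the orbit escapes the radius-2 disc,
   or max_iter if it appears to stay bounded. */
static int mandel(double cr, double ci, int max_iter)
{
    double zr = 0.0, zi = 0.0;
    for (int n = 0; n < max_iter; n++) {
        double zr_next = zr * zr - zi * zi + cr;   /* Re(z^2 + c) */
        double zi_next = 2.0 * zr * zi + ci;       /* Im(z^2 + c) */
        zr = zr_next;
        zi = zi_next;
        if (zr * zr + zi * zi > 4.0)
            return n;
    }
    return max_iter;
}

int main(void)
{
    for (double y = -1.2; y <= 1.2; y += 0.1) {
        for (double x = -2.0; x <= 0.6; x += 0.04)
            putchar(mandel(x, y, 100) == 100 ? '#' : ' ');
        putchar('\n');
    }
    return 0;
}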

At the other end of the spectrum, there is the notion of process. There is a bunch of things (let's call them data), and those things will be peeked, poked, deleted, and created, following a long range of procedures/recipes, producing a lot of intermediate data along the way. I don't think math could care less about [if then else]. But in every sense a computer only computes to compare and feed an "if" decision (let's be honest, it's 99.9% of the time). I mean by that that computers are more like state machines (which can also be described/abstracted by math), not linked to functions/mappings. A browser is not a function. It does not map the internet to pages with URLs, even if some mathematicians will pretend that is just what it does. What it does is follow a program, a procedure; it is only a bunch of if-then-else, loops, and branches.
Instead of speaking of random(), the paramount example here is open(). It is not math; it does not map a closed file to an open file. Rather, that method opens a file. Call it twice, and you are dead meat (even one call may fail). At this end of the spectrum I very much consider computing to be biology. Functions are just strands of DNA, processed in semi-chaotic order, putting some "life" into your machine.

Right in the middle of the spectrum, there is the notion of algorithm. Take quicksort, for example. Is it a function that maps an unsorted set to a sorted set? Is it a slowly growing state change? It is both, and it is at this interface of math and computing that I personally find the input of pure math the most important (I hope mathematicians feel something symmetric). I could also mention information theory and the ranking algorithms that everybody is so fond of nowadays. Cryptography/compression is also in this middle range, in my view.

Well, back to work ;-)
 