Open vs. Closed System: Putnam's Definition

In summary, Putnam's book "Representation and Reality" states that every ordinary open system realizes every abstract finite automaton. This controversial claim has drawn attention from philosophers such as Chalmers and Bishop, yet debate and confusion remain over what Putnam means by "open system" and how it bears on computation and cognition. Some suggest he is using the term in the thermodynamic sense, while others see no clear connection. Either way, Putnam's argument challenges computational functionalism as a foundation for studying the mind.
  • #1
Q_Goest
In his book, "Representations and Reality", Putnam states:
every ordinary open system realizes every abstract finite automaton
How would you define an "open system" in this context? How would you define a closed system?
 
  • #2
You haven't provided enough context for my liking, but that's just me.
 
  • #3
I've read Putnam's account and even in context it is unclear. However, it must be clear to philosophers, as Putnam's contention has gained considerable attention, primarily from Chalmers on the opposing side and Bishop on Putnam's side. I'll include relevant discussion from Davenport as well.

From Chalmers.
http://consc.net/papers/rock.html
The latter principle holds that every ordinary system is in different maximal states at different times. (A "maximal" state is a total state of the system, specifying the system's physical makeup in perfect detail). Putnam argues for this on the basis that every such system is exposed to electromagnetic and gravitational radiation from a natural clock. I will accept the principle in the discussion that follows. Even if it does not hold across the board (arguably, signals from a number of sources might cancel each other's effects, leading to a cycle in behavior), the more limited result that every noncyclic system implements every finite-state automaton would still be a strong one.

From Bishop:
(Mechanical Bodies, Mythical Minds 2004)
instead of seeking to justify Putnam's original claim that, "every open system implements every finite state automaton", (FSA), and hence that psychological states of the brain cannot be functional states of a computer, I will seek to establish the weaker result that, over a finite time window every open system implements the trace of a particular FSA Q, as it executes with known input (x). That this result leads to panpsychism is clear as, equating Q(x) to a specified program that is claimed to instantiate phenomenal states as it executes, and following Putnam's procedure, identical computational (and ex-hypothesi phenomenal) states can be found in every open physical system.

From Davenport:
(Computationalism: The very idea)
Before moving on to look at cognition and how this view of computation is related to it, it is important to dispose of the argument, put forward by Putnam (1988), to the effect that computation is all pervasive. According to Putnam’s proof (in the Appendix of his Reality and Representation), any open system, for example, a rock, can compute any function. If true, this would render void the computationalist’s claim that cognition is simply a particular class of computation, since everything, even a rock, would be capable of cognition!
The essence of Putnam’s argument is as follows: Every ordinary open system will be in different maximal states, say s1, s2, … sn, at each of a sequence of times, t1, t2, … tn. If the transition table for a given finite state automaton (FSA) calls for it to go through a particular sequence of formal states, then it is always possible to map this sequence onto the physical state sequence. For instance, if the FSA is to go through the sequence ABABA, then it is only necessary to map A onto the physical state s1 ∨ s3 ∨ s5, and B onto s2 ∨ s4. In this way any FSA can be implemented by any physical system.
Fortunately (for cognitive science), the argument is not as good as it may at first appear. Putnam’s Principle of Non-Cyclical Behaviour hints at the difficulty. His proof relies on the fact that an open system is always in different maximal states at different times. In other words, it is possible to perform this mapping operation only once (and then probably only with the benefit of hindsight!)
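To make Davenport's summary concrete, here is a minimal Python sketch of Putnam's trace-mapping trick. The state names s1–s5 and the trace ABABA are just the illustrative ones from the quote; nothing else here is from Putnam's text.

```python
# Putnam-style trace mapping (illustrative sketch, not from the book).
# Five successive maximal physical states of an open system:
physical_trace = ["s1", "s2", "s3", "s4", "s5"]

# The formal state sequence the FSA is supposed to traverse:
fsa_trace = ["A", "B", "A", "B", "A"]

# Identify each formal state with the disjunction (set) of the physical
# states that co-occur with it: A -> {s1, s3, s5}, B -> {s2, s4}.
mapping = {}
for phys, formal in zip(physical_trace, fsa_trace):
    mapping.setdefault(formal, set()).add(phys)

# Under this mapping, the physical history "replays" the formal trace:
decoded = [next(f for f, states in mapping.items() if p in states)
           for p in physical_trace]
assert decoded == fsa_trace
assert mapping == {"A": {"s1", "s3", "s5"}, "B": {"s2", "s4"}}
```

Nothing in the construction depends on what the physical states actually are; any noncyclic run of distinct states admits such a mapping, which is why it seems to trivialize "implementation".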
 
  • #4
I think he is using the term in the normal thermodynamic sense. If I had to guess at a reason why he specifies the argument to only open systems, it is because a completely closed system can theoretically reach permanent homeostasis and cease to go into changing physical states as time passes. Don't quote me on that, though. Ask a chemistry expert.
 
  • #5
loseyourname said:
I think he is using the term in the normal thermodynamic sense. If I had to guess at a reason why he specifies the argument to only open systems, it is because a completely closed system can theoretically reach permanent homeostasis and cease to go into changing physical states as time passes. Don't quote me on that, though. Ask a chemistry expert.

This is correct, except the stable state is known as "thermal equilibrium", not homeostasis. Homeostasis is the property some open systems have of maintaining an approximately stable internal state in the face of varying inputs. An example of that would be a mammalian body in varying temperatures. Homeostasis requires some "programming" on the part of the system that exhibits it.
 
  • #6
Thanks, Adjoint.
 
  • #8
Thanks for the input Lose & Adjoint. I'd like to think that's what he's referring to, but after reading through a number of papers on the topic, I can't see any connection between Putnam's use of the term "open physical system" and the thermodynamic notion of an open system (http://en.wikipedia.org/wiki/Open_system_%28system_theory%29). Unfortunately, I've lost my copy of the book. I had it a day or two ago <sigh> Regardless, take for example Chalmers:

Putnam has argued that computational functionalism cannot serve as a foundation for the study of the mind, as every ordinary open physical system implements every finite-state automaton. …

The theory of computation is often thought to underwrite the theory of mind. In cognitive science, it is widely believed that intelligent behavior is enabled by the fact that the mind or the brain implements some abstract automaton: perhaps a Turing machine, a program, an abstract neural network, or a finite-state automaton. The ambitions of artificial intelligence rest on a related claim of computational sufficiency, …

In an appendix to his book Representation and Reality (Putnam 1988, pp. 120-125), Hilary Putnam argues for a conclusion that would destroy these ambitions. Specifically, he claims that every ordinary open system realizes every abstract finite automaton. He puts this forward as a theorem, and offers a detailed proof. If this is right, a simple system such as a rock implements any automaton one might imagine. Together with the thesis of computational sufficiency, this would imply that a rock has a mind, and possesses many properties characteristic of human mentality. If Putnam's result is correct, then, we must either embrace an extreme form of panpsychism or reject the principle on which the hopes of artificial intelligence rest.

I don't see the importance of using the phrase "open physical system" here if it is used in the thermodynamic sense, unless I'm applying more importance to it than it deserves. Perhaps that's all there is to it. Perhaps 'open system' here is misleading, but I don't think so.

I say this because the argument Chalmers uses against Putnam is to say that "strong conditionals" and counterfactuals are important features of a computational mind. Chalmers's argument seems to suggest the 'open system' is one which is disconnected somehow, such that one can't have a simple one-to-one mapping between an FSA with a mind and an FSA without one. Chalmers claims that without the ability to duplicate all potential I/O, you can't preserve mentality. Bishop comes back and suggests "counterfactuals can't count". In each case the focus seems to be on proving whether or not this 'mapping' of an alleged conscious FSA onto an 'ordinary open system' is a valid argument.

Take Bishop for instance, who claims that "over a finite time window, every open system implements the trace of a particular FSA … lead[ing] to panpsychism" and, by a reductio, "a suitably programmed computer qua performing computation can never instantiate genuine phenomenal states".

The references to "open systems" seem to indicate only some stationary system in which states can be mapped. If they mean open system in the conventional sense, it seems misleading.

If this is all sounding like nonsense, then you probably understand it as well as I do. I think I'm going to shove a water hose in my ear now, I can smell brain cells burning…
 
  • #9
From Chalmers' paper at

http://consc.net/papers/rock.html

Chalmers said:
The argument is general. Given any inputless FSA and any noncyclic physical system, we can map physical states of the system over an arbitrarily long period of time to formal states of the FSA by the same method. We associate the initial physical state of the system with an initial state of the FSA and associate subsequent physical states of the system with subsequent states of the FSA, where the FSA state-evolution is determined by its state-transition rules. The implementation mapping is determined by taking the disjunction of associated physical states for each state of the FSA, and mapping that disjunctive state to the FSA state in question. Under this mapping, it is easy to see that the evolution of the physical states precisely mirrors evolution of the FSA states. Therefore every noncyclic physical system implements every (inputless) FSA.
The argument seems to rest on the fact that any given open system (such as a rock exposed to the rest of the universe) is in different maximal states at different times (assuming Putnam’s Principle of Non-Cyclic Behaviour) - and Putnam argues for this on the basis that every such system is exposed to electromagnetic and gravitational radiation from a natural clock.

Thus (my understanding of Putnam's argument would be) it follows that over an infinite length of time such an open system would occupy every possible maximal state, and if its physical states can be mapped onto the states of any given FSA, then it follows that it would implement all possible FSAs?

I find it hard to follow Putnam's argument, but is this what Putnam is basically claiming?

Are each of the FSAs supposed to be implemented entirely within the rock itself, or do we have to take the environment into account as part of the enabling of the FSAs (after all, part of the basis for arguing that the rock occupies every possible maximal state is that it IS an open system, exposed to electromagnetic and gravitational radiation etc)?

I would also certainly (at least) challenge his Principle of Non-Cyclic Behaviour.

Finally - funny things happen when you play around with infinity. Remember Hilbert's hotel? I can prove that there are twice as many natural numbers as there are natural numbers, for example. Just because we let the experiment run for an infinite time does not (imho) entail that every possible maximal state will be visited, nor does it entail that every FSA will be implemented.

Best Regards
 
  • #10
Hi MF. After reading Putnam, Chalmers, Bishop, and a few others on this argument, I still wasn't following Putnam's original reasoning. He's even more difficult to understand than Chalmers and Bishop. I thought perhaps I was simply missing something to do with a philosophical open system, because the arguments also focus on a disconnected or disjointed system. I thought perhaps there was a relationship there that I was missing. On further review, I've concluded the term "open system" is exactly as Adjoint states, and I must be putting too much emphasis on the term. I don't see it as being crucial to Putnam's argument at this point.

I'm not sure if Putnam's argument requires an infinite time period, though Chalmers mentions something about this. I'm also unfamiliar with Putnam's principle of non-cyclic behavior. I'll open a new thread to discuss Putnam's work in more detail soon. It's difficult to understand, but it seems to have drawn considerable attention from the philosophical community.
 
  • #11
I think this quote from your post #3:

From Davenport:
(Computationalism: The very idea)


Before moving on to look at cognition and how this view of computation is related to it, it is important to dispose of the argument, put forward by Putnam (1988), to the effect that computation is all pervasive. According to Putnam’s proof (in the Appendix of his Reality and Representation), any open system, for example, a rock, can compute any function. If true, this would render void the computationalist’s claim that cognition is simply a particular class of computation, since everything, even a rock, would be capable of cognition!
The essence of Putnam’s argument is as follows: Every ordinary open system will be in different maximal states, say s1, s2, … sn, at each of a sequence of times, t1, t2, … tn. If the transition table for a given finite state automaton (FSA) calls for it to go through a particular sequence of formal states, then it is always possible to map this sequence onto the physical state sequence. For instance, if the FSA is to go through the sequence ABABA, then it is only necessary to map A onto the physical state s1 ∨ s3 ∨ s5, and B onto s2 ∨ s4. In this way any FSA can be implemented by any physical system.
Fortunately (for cognitive science), the argument is not as good as it may at first appear. Putnam’s Principle of Non-Cyclical Behaviour hints at the difficulty. His proof relies on the fact that an open system is always in different maximal states at different times. In other words, it is possible to perform this mapping operation only once (and then probably only with the benefit of hindsight!)

pretty much settles the issue. You can't get more thermodynamically open than a rock, and it's just the ordinary successive states of the rock that Putnam is referring to. He says these successive states (he seems to need that external "clock", the flow of time; hence the system has to be open) can be used as a surrogate for any finite state machine, in other words as the instantiation of a Turing machine. Hence a computational intelligence cannot be unique to human minds, since such computers would be ubiquitous in Nature. I am just restating what Davenport says here. I can see problems with mapping any old succession of environmental states as a Turing machine; how do you erase?

But Davenport also says he "disposes" of Putnam's thesis. How does he do that?
 
  • #12
But Davenport also says he "disposes" of Putnam's thesis. How does he do that?
Ref: http://www.cs.bilkent.edu.tr/~david/papers/computationalism.doc" [Broken]

I wouldn't say that Davenport manages to do this. His paper isn't particularly convincing IMO, and I don't see anyone referencing him. I'll try and get something together on the issue shortly.
 
  • #13
Q_Goest said:
Ref: http://www.cs.bilkent.edu.tr/~david/papers/computationalism.doc" [Broken]

I wouldn't say that Davenport manages to do this. His paper isn't particularly convincing IMO, and I don't see anyone referencing him. I'll try and get something together on the issue shortly.

I don't see why you're not convinced. Here is his argument:

His proof relies on the fact that an open system is always in different maximal states at different times. In other words, it is possible to perform this mapping operation only once (and then probably only with the benefit of hindsight!) But this is of no use whatsoever; for computation, as we have seen, is about prediction. Not only is Putnam’s “computer” unable to repeat the computation, ever, but also it can only actually make one “prediction” (answer one trivial question.) The problem is that the system is not really implementing the FSA in its entirety. A true implementation requires that the system reliably traverse different state sequences from different initial conditions in accordance with the FSA’s transition table. In other words, whenever the physical system is placed in state si it should move into state sj, and whenever it is in sk it should move to sl, and so on for every single transition rule. Clearly, this places much stronger constraints on the implementation. Chrisley (1995), Copeland (1996) and Chalmers (1996) all argue this point in more detail. Chalmers also suggests replacing the FSA with a CSA (Combinatorial State Automata), which is like a FSA except that its states are represented by vectors. This combinatorial structure is supposed to place extra constraints on the implementation conditions, making it even more difficult to find an appropriate mapping. While this is true, as Chalmers points out, for every CSA there is a FSA that can simulate it, and which could therefore offer a simpler implementation!

In other words, given some fixed set of transitions in the transition table, you can find some set of rock states at some time that implements that string of transitions. If you have some other set of transitions, you have to find some completely different states of the rock, which might be at some immense gap in time from the first. So just showing that some typical string of transitions can be simulated doesn't do the job. What has not been shown is that you can implement the whole table and every path through it. So you can't use the rock to simulate the automaton.
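The counterfactual objection can be sketched the same way. The checker below is my own toy, not anything from the papers: it demands that every rule in the transition table be mirrored by the physical system, and a mapping built from a single observed trace fails it.

```python
# Toy inputless FSA given by its full transition table: A -> B -> C -> A.
fsa = {"A": "B", "B": "C", "C": "A"}

def implements(fsa, phys_transitions, phys_to_formal):
    """True iff every FSA rule is honored: whenever the physical system
    is in a state mapped to X, it moves to a state mapped to fsa[X]."""
    for src, dst in fsa.items():
        phys_srcs = [p for p, f in phys_to_formal.items() if f == src]
        if not phys_srcs:
            return False  # formal state never physically realized
        for p in phys_srcs:
            successors = [q for a, q in phys_transitions if a == p]
            if not successors or any(phys_to_formal.get(q) != dst
                                     for q in successors):
                return False  # rule src -> dst not (always) mirrored
    return True

# A trace-built mapping only covers the history that actually happened:
observed = {("p1", "p2")}        # only the A -> B transition was exercised
trace_map = {"p1": "A", "p2": "B"}
assert not implements(fsa, observed, trace_map)  # B -> C, C -> A unhonored

# A system that reliably cycles through three states does implement it:
full = {("p1", "p2"), ("p2", "p3"), ("p3", "p1")}
full_map = {"p1": "A", "p2": "B", "p3": "C"}
assert implements(fsa, full, full_map)
```

This is the extra demand Chrisley, Copeland, and Chalmers press: implementation requires the right transition structure under all conditions, not just along the actual trajectory.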
 
  • #14
I'm not sure your representation of Putnam's argument is correct, but assuming it is, I'll quote from http://www.macrovu.com/CCT6/CCTMap640.html:
Imagine 2 machines are engaged in the same physical activity and are running the same consciousness program. One of these machines supports counterfactual states, whereas the other doesn't. The computationalist must claim, based on the nontriviality condition, that the machine capable of supporting counterfactual states is conscious and the other isn't. But this contradicts the supervenience thesis: each machine exhibits the same physical activity, but according to computationalism only one is conscious.

Also, see Bishop (http://www.goldsmiths.ac.uk/departments/computing/staff/MB.html), about whose work Chalmers (http://consc.net/responses.html#bishop) remarks:
Bishop advances a version of Putnam's argument that every ordinary system implements every computation, uses this to argue that computation can't be the basis of consciousness, and then addresses my response to Putnam. I've argued (e.g. in "Does a Rock Implement Every Finite-State Automaton") that Putnam's systems are not true implementations, since they lack appropriate counterfactual sensitivity. Bishop responds that mere counterfactual sensitivity can't make a difference to consciousness: surely it's what actually happens to a system that matters, not what would have happened if things had gone differently. He runs a version of the fading qualia argument, suggesting that we can remove unused state-transitions one-by-one, thus removing counterfactual sensitivity, while (he argues) preserving consciousness.

Anyway, I've read most of these through, but I'm still having trouble deciphering the exact meaning of some of this. There's a lot of work focused on this and plenty of debate on both sides. I don't see it as being as simple as Davenport makes it out to be. In any case, I still need a better understanding of the prior work.
 
  • #15
Frankly I think the whole rock thing is ridiculous. And anyway there are available alternatives to finite state automata as models of the mind. Note that if you include recursion you can reach an arbitrary number of states with a finite resource: "It is not true (that I am disturbed (that (Chalmers comments that (Bishop presents...)...)...))", the point being that you can't code these into a static transition matrix, or I don't think so (someone who knows can correct me). Chomsky claims that anything expressible in a natural language can be simulated by a small automaton with recursion.
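The point about recursion exceeding finite-state power can be illustrated with the standard textbook example (my addition, not from the thread): checking arbitrarily deep balanced nesting is provably beyond any fixed FSA, since the machine would need a distinct state for every possible depth, yet a procedure with one unbounded counter (or, equivalently, recursion) handles it trivially.

```python
def balanced(s: str) -> bool:
    """Check balanced nesting of parentheses -- the canonical task no
    finite-state automaton can perform for unbounded depth, because each
    extra level of nesting would require a fresh state."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:       # closed something that was never opened
                return False
    return depth == 0

# Nested clauses like "It is not true (that ... (that ...))" have this shape:
assert balanced("((()))")
assert not balanced("(()")
assert not balanced(")(")
```

Any fixed-state machine can only track nesting up to some bound, which is the sense in which a static transition matrix cannot encode arbitrarily deep embedding.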
 
  • #16
A "closed system" can be defined as a system of which the internal properties in investigative question can not be substantively modified through existing external influence.
In reality this is impossible.
As such, a "closed system" is a fictional, idealized state of isolation, though very useful in certain examinations.
 

What is an open system according to Putnam's definition?

An open system, according to Putnam's definition, is one that allows for the exchange of matter and energy with its surroundings. This means that the system is not completely isolated, and can interact with its environment.

What is a closed system according to Putnam's definition?

A closed system, according to Putnam's definition, is one that does not allow for the exchange of matter and energy with its surroundings. This means that the system is completely isolated and does not interact with its environment. (Strictly speaking, thermodynamics reserves "isolated" for a system that exchanges neither matter nor energy; a "closed" system may still exchange energy.)

What are the main differences between open and closed systems?

The main difference between open and closed systems is the exchange of matter and energy with the surroundings. Open systems allow for this exchange, while closed systems do not. Additionally, open systems can sustain a higher degree of complexity and organization, since the throughput of matter and energy lets them maintain states far from equilibrium.

What are some examples of open systems?

Examples of open systems include living organisms, ecosystems, and the Earth's biosphere. These systems all exchange matter and energy with their surroundings, allowing for a continuous flow of resources and energy.

What are some examples of closed systems?

Examples of approximately closed systems include a sealed jar, a thermos, and a closed terrarium. These systems exchange little or no matter with their surroundings, and their internal conditions remain nearly constant. However, it is important to note that truly closed systems do not exist in nature, as all systems are affected by external factors to some degree.
