Is Chomsky's View on the Mind-Body Problem Redefining Materialism?

  • Thread starter: bohm2
AI Thread Summary
Chomsky critiques traditional views on the mind-body problem, arguing that it can only be sensibly posed with a clear conception of "body," which has been undermined by modern physics. He suggests that the material world is defined by our scientific theories rather than a fixed notion of physicality, leading to the conclusion that the mind-body problem lacks coherent formulation. Chomsky posits that as we develop and integrate theories of the mind, we may redefine what is considered "physical" without a predetermined concept of materiality. Critics like Nagel argue that subjectivity and qualia cannot be reduced to material entities, regardless of future scientific advancements. Ultimately, Chomsky advocates for a focus on understanding mental phenomena within the evolving framework of science, rather than getting bogged down in the elusive definitions of "mind" and "body."
  • #201
bohm2 said:
What about this reductionist argument:

Where there is discontinuity in microscopic behavior associated with precisely specifiable macroscopic parameters, emergent properties of the system are clearly implicated, unless we can get an equally elegant resulting theory by complicating the dispositional structure of the already accepted inventory of basic properties. Sydney Shoemaker has contended that such hidden-micro-dispositions theories are indeed always available. Assuming sharply discontinuous patterns of effects within complex systems, we could conclude that the microphysical entities have otherwise latent dispositions towards effects within macroscopically complex contexts alongside the dispositions which are continuously manifested in (nearly) all contexts. The observed difference would be a result of the manifestation of these latent dispositions.

So I'm guessing a reductionist can claim that we simply haven't identified these "latent dispositions" yet because we don't have a complete physical theory?

http://plato.stanford.edu/entries/properties-emergent/
This is an interesting point. That sounds like an "intrinsicality" argument since anything can be a symbol. What determines what is a symbol comes from the subject. That seems like an argument against symbolic or semiotic function?

Whether it can be a symbol or not depends on the system context, as it should. If everything were red, we'd all effectively be blind. If the universe were all one temperature, nothing would happen.
 
  • #202
bohm2 said:
So I'm guessing a reductionist can claim that we simply haven't identified these "latent dispositions" yet because we don't have a complete physical theory?

So far as I recall, Shoemaker takes a fairly systems view of causality. It is not clear that this is his own argument rather than him musing on what a reductionist might say.

But anyway, the systems answer is that it works the other way round.

What this idea of latent dispositions appears to be saying is that the parts of a system have some set of properties. Some of these are used or apparent at one level of development, while other, unseen ones may come into play with more complex forms of organisation.

The reductionist of course wants the parts to be as simple as possible. Really, it is hard to explain why there should be anything rather than a nothing. But to be fundamental, a part should at least have as few properties as decently possible. Every new property is an addition to a growing collection. It seems troublesome that a part could both have many properties, and also that some of these are subtle enough to be hidden until some kind of complexity harnesses them and brings them to the fore.

The systems approach views the situation the other way round. Reality at root is vague. Any locale in an undeveloped state will have an unlimited number of degrees of freedom. While things are indeterminate, the "properties" of the local scale are infinite because unbounded - but also not really properties as such because, being everything at the same time, this adds up to nothing definite.

So the first point is that a "part" has a potential infinity of properties, and then has to become some actual part by becoming bounded in its freedoms. It is no surprise that a part has many "latent dispositions" as it starts with an infinity. The task then is to constrain these dispositions so that they do something useful in the context of a system.

Which is what hierarchy theory is about. How the global scale constrains the freedoms of the local scale, limiting local freedoms to turn infinite potential into crisply bounded actuality.

So latently, anything is possible. But due to downwards acting constraints, this freedom becomes increasingly constrained. Parts become ever more definite and particular as complexity or global organisation increases.

bohm2 said:
This is an interesting point. That sounds like an "intrinsicality" argument since anything can be a symbol. What determines what is a symbol comes from the subject. That seems like an argument against symbolic or semiotic function?

No, rather it is the basis of semiosis and the epistemic cut. The whole point of symbols is that they are as detached as possible from any physical considerations. Rate independent information needs to be separate from rate dependent dynamics for there to be a semiotic relation between syntax (the realm of symbols) and semantics (the real world they refer to).
 
  • #203
apeiron said:
The reductionist of course wants the parts to be as simple as possible. Really, it is hard to explain why there should be anything rather than a nothing. But to be fundamental, a part should at least have as few properties as decently possible. Every new property is an addition to a growing collection. It seems troublesome that a part could both have many properties, and also that some of these are subtle enough to be hidden until some kind of complexity harnesses them and brings them to the fore.

Some reductionists argue that, in fact, it is quite possible, in physics, to have a fundamentally important new property, completely different from any that had been contemplated hitherto, hidden unobserved in the behaviour of ordinary matter. Although not the best example, one can argue that general relativistic effects "would have totally escaped attention had that attention been confined to the study of the behaviour of tiny particles." (Penrose).

apeiron said:
No, rather it is the basis of semiosis and the epistemic cut. The whole point of symbols is that they are as detached as possible from any physical considerations. Rate independent information needs to be separate from rate dependent dynamics for there to be a semiotic relation between syntax (the realm of symbols) and semantics (the real world they refer to).

This is the part that confuses me when trying to understand Chomsky. He favours an internalistic semantics:

The internalist denies an assumption common to all of the approaches above: the assumption that in giving the content of an expression, we are primarily specifying something about that expression's relation to things in the world which that expression might be used to say things about. According to the internalist, expressions as such don't bear any semantically interesting relations to things in the world; names don't, for example, refer to the objects with which one might take them to be associated. Sentences are not true or false, and do not express propositions which are true or false; the idea that we can understand natural languages using a theory of reference as a guide is mistaken. On this sort of view, we occasionally use sentences to say true or false things about the world, and occasionally use names to refer to things; but this is just one thing we can do with names and sentences, and is not a claim about the meanings of those expressions.

http://plato.stanford.edu/entries/meaning/#ChoIntSem

http://www.lainestranahan.com/wp-content/uploads/2010/12/Stranahan_Thesis.pdf

Actually, I thought this is what Pythagorean was arguing for. Looks like I misinterpreted his/her post.
 
  • #204
apeiron said:
So latently, anything is possible. But due to downwards acting constraints, this freedom becomes increasingly constrained. Parts become ever more definite and particular as complexity or global organisation increases.

In case this is too abstract, think of the classic substance~form argument. A lump of clay is a formless material. It could be potentially formed into an infinity of designs. Its "latent dispositions" are unbounded.

Humans can come along and impose constraints on that potential. A potter might make a vase. Or more interestingly, an engineer might impose even more "logical" form on matter to create screws, pistons, cams, ratchets, valves.

A lump of metal might be said to have these mechanical qualities as hidden dispositions - but only in the sense that just about any form could have been imposed on the substance. And it is plain how that form actually emerged - by a person with an idea, by an external source of information that cannot be called a hidden disposition of the metal.
 
  • #205
bohm2 said:
Some reductionists argue that, in fact, it is quite possible, in physics, to have a fundamentally important new property, completely different from any that had been contemplated hitherto, hidden unobserved in the behaviour of ordinary matter. Although not the best example, one can argue that general relativistic effects "would have totally escaped attention had that attention been confined to the study of the behaviour of tiny particles." (Penrose).

This was phizzicsphan's argument - hidden microproperties are always conceivable. A reductionist is free to make any claim. But why would we take such a claim seriously unless there is a theory and data to show this to be so?

A reductionist would at least have to come up with a compelling instance of the kind of thing that they are talking about - show that it has been true of at least one system.

Special relativity at least might have been derived from time dilation of muon decay (or that would have been an observable demanding some explanation).
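For concreteness, the standard textbook numbers here: a muon's proper lifetime is about 2.2 microseconds, so without dilation it would travel only about 660 m before decaying, yet cosmic-ray muons created roughly 15 km up are routinely detected at ground level because their lab-frame lifetime is stretched by the Lorentz factor,

\[
t_{\text{lab}} = \gamma\,\tau_0, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
\]

which for muons moving near c is of order tens - more than enough to account for the observation.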

I'm not sure how Penrose might have argued that general relativity shows "no observable effects" at the microscale. Perhaps you can give the source.

But then also the systems argument is that the global shapes the local, so it is not even necessary that the global be visible as local properties. The argument would in fact seem the reverse. It would be a proof that GR is a maximally global description because it so purely resides at a global level in modelling.

It is of course the central project of current fundamental physics to unite GR and QFT. And the lack of success could be due to this point. Shrink GR to the limits of the microscale and instead of arriving at crisp micro-observables, you get the radical indeterminacy of singularities.

bohm2 said:
The internalist denies an assumption common to all of the approaches above: the assumption that in giving the content of an expression, we are primarily specifying something about that expression's relation to things in the world which that expression might be used to say things about.


The whole page you linked to is a result of the confusion of following reductionist approaches to reality.

As I keep arguing, the systems/semiotic approach says reality starts in vagueness, in radical indeterminacy, and then has to be constrained in its unbounded freedoms to become a something, a crisply definite entity or state.

A symbol stands for a constraint to be applied to naked meaning. It limits the freedom that the world can have.

So a word like "cat" is a token that constrains your thoughts. But there is still plenty of freedom that exists in what you might be thinking about. It could be a Persian, a lynx or Krazy Kat.

Further words can syntactically constrain the meaning, reducing the freedom of your thoughts. So a "fluffy cat". A "fluffy, white cat". etc.

A reductionist thinks meaning is constructed atomistically so therefore words somehow have to stand for some definite entity. But symbols work not by representing but by constraining. It is the limits that they can construct which are the causal source of their power.

So it is not about externalism or internalism, but about top-downism (which - the remarkable bit - is constructible from atomistic elements, discrete symbols).

It is the fact that symbols are global constraints, yet look like reductionist atoms, that probably does cause so much confusion. But anyway, to construct constraints you do also need rules - actual syntax. Which leads us even further towards modelling, semiosis and hierarchy theory.
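To make the "words as constraints" picture concrete, here is a toy sketch of the cat example above (nothing more than an illustration, and all the names and candidate entries are my own invention): each word acts as a filter that shrinks the pool of candidate referents, rather than pointing at one definite thing.

Code:
# Toy model: each word is a constraint that narrows the space of candidate
# "thoughts", rather than a label standing for one definite referent.
candidates = [
    {"kind": "persian", "fluffy": True,  "colour": "white"},
    {"kind": "lynx",    "fluffy": True,  "colour": "tawny"},
    {"kind": "krazy",   "fluffy": False, "colour": "black"},  # Krazy Kat
    {"kind": "sphynx",  "fluffy": False, "colour": "pink"},
]

# Each "word" is a predicate; a phrase applies them as successive constraints.
constraints = {
    "cat":    lambda c: True,                   # all of the above count as cats
    "fluffy": lambda c: c["fluffy"],
    "white":  lambda c: c["colour"] == "white",
}

def constrain(words, pool):
    """Apply each word's constraint in turn, shrinking the candidate pool."""
    for w in words:
        pool = [c for c in pool if constraints[w](c)]
    return pool

print(len(constrain(["cat"], candidates)))                     # 4 - still lots of freedom
print(len(constrain(["cat", "fluffy"], candidates)))           # 2
print(len(constrain(["cat", "fluffy", "white"], candidates)))  # 1 - nearly pinned down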
 
  • #206
apeiron said:
This was phizzicsphan's argument - hidden microproperties are always conceivable. A reductionist is free to make any claim. But why would we take such a claim seriously unless there is a theory and data to show this to be so?

The same argument could be made about semiotics, I think? I didn't fully understand his paper, but I think phizzicsphan argues, at least in part, that the new "novel" property that may offer insight into how consciousness can emerge from matter is the non-locality/non-separability implied at the micro-level by the Bell experiments (e.g. Aspect, etc.) and/or the entanglement of QM? Maybe he can elaborate on how?

With respect to hidden microproperties vs semiotics, consider applying the semiotic approach to pre-quantum physics. A reductionist at that time would have argued that the reason we can't get Newtonian physics to spit out chemical stuff is that there are hidden microproperties that have yet to be discovered. They would have been right, I think. Would the semiotic approach have predicted QM via a different route? I can't see how, except maybe as a model to describe the stuff after the fact. But again, I might be confused, as I have a bit of trouble understanding the practical implications and predictions, although you have done a good job describing the general perspective. Moreover, I've seen these weaknesses acknowledged even by those who support the systems/semiotic approach. I'm not sure if you agree with this assessment, but here is what Marcello Barbieri writes about biosemiotics:

Biosemiotics is a new continent whose exploration has just begun, and it is not surprising that people have gone off in different directions. In addition to the difficulties that arise in any new field, however, biosemiotics is also having problems of its own. Today, the major obstacles to its development come from three great sources of confusion.

1. The first handicap is that biosemiotics is wrongly perceived as a philosophy rather than a science, and in particular as a view that promotes physiosemiotics, pansemiotics, panpsychism and the like. Here, the only solution is to remind people that biosemiotics is a science because it is committed to exploring the world with testable models, like any other scientific discipline.

2. The second handicap is that biosemiotics appears to be only a different way of looking at the known facts of biology, not a science that brings new facts to light. It is not regarded capable of making predictions and having an experimental field of its own, and to many people all this means irrelevance. Here the only solution is to keep reminding people that the experimental field of biosemiotics is the study of organic codes and signs, that biosemiotics did predict their existence and continues to make predictions, that codes and signs exist at all levels of organization and that the great steps of macroevolution are associated with the appearance of new codes. This is what biosemiotics is really about.

3. The third handicap is the fact that biosemiotics, despite being a small field of research, is split into different schools, which gives the impression that it has no unifying principle. Here we can only point out that a first step towards unification has already been taken and that the conditions for a second, decisive, step already exist. When biosemioticians finally accept that the models of semiosis must be testable, they will also acknowledge the existence of all types of semiosis that are documented by the experimental evidence and that is all that is required to overcome the divisions of the past. At that point, the old divides will no longer make sense and most schools will find it natural to converge into a unified framework.

Biosemiotics must overcome all the above obstacles in order to become a unified science, but this process of growth and development has already started and there is light at the end of the tunnel.


http://www.biosemiotica.it/internal_links/pdf/Marcello%20Barbieri%20(2009)%20A%20Short%20History%20of%20Biosemiotics.pdf
 
Last edited by a moderator:
  • #207
bohm2 said:
Would the semiotic approach have predicted QM via a different route?

Well, it does predict reality is fundamentally indeterminate (vague) and requires constraints (measurement) to make the local crisp (collapse). So in fact yes, it always argued against simple atomism.

1. The first handicap is that biosemiotics is wrongly perceived as a philosophy rather than a science, and in particular as a view that promotes physiosemiotics, pansemiotics, panpsychism and the like. Here, the only solution is to remind people that biosemiotics is a science because it is committed to exploring the world with testable models, like any other scientific discipline.

I don't call this a weakness. Do you?

2. The second handicap is that biosemiotics appears to be only a different way of looking at the known facts of biology, not a science that brings new facts to light. It is not regarded capable of making predictions and having an experimental field of its own, and to many people all this means irrelevance. Here the only solution is to keep reminding people that the experimental field of biosemiotics is the study of organic codes and signs, that biosemiotics did predict their existence and continues to make predictions, that codes and signs exist at all levels of organization and that the great steps of macroevolution are associated with the appearance of new codes. This is what biosemiotics is really about.

Yes, biosemiosis actually won't achieve much except give a more principled understanding of facts already discovered unless it comes up with mathematical-level models.

There is a lot to do to turn philosophy into actual science.

3. The third handicap is the fact that biosemiotics, despite being a small field of research, is split into different schools, which gives the impression that it has no unifying principle. Here we can only point out that a first step towards unification has already been taken and that the conditions for a second, decisive, step already exist. When biosemioticians finally accept that the models of semiosis must be testable, they will also acknowledge the existence of all types of semiosis that are documented by the experimental evidence and that is all that is required to overcome the divisions of the past. At that point, the old divides will no longer make sense and most schools will find it natural to converge into a unified framework.

Again, this is a weakness only in the sense that biosemiosis is a field that is still new and hopeful.

So I don't dispute Barbieri's assessment at all.
 
  • #208
apeiron said:
Well, it does predict reality is fundamentally indeterminate (vague) and requires constraints (measurement) to make the local crisp (collapse). So in fact yes, it always argued against simple atomism.

So I'm guessing it doesn't much favour the Everett or Bohmian interpretations of QM.

apeiron said:
I don't call this a weakness. Do you?

No, assuming it is indeed wrongly perceived as a philosophy. What is interesting are the attempts by Barbieri's group to form a synthesis with biolinguistics and with linguists like Chomsky (see link below), given Chomsky's nativism and his premise that syntax determines meaning. This seems inconsistent with the systems view, on which meaning is determined by "the pragmatic context".

http://www.biosemiotica.it/internal_links/pdf/2010-%20Group%20Discussion%20of%20On%20the%20Origin%20of%20Language.pdf

On the Origin of Language: A bridge between Biolinguistics and Biosemiotics

http://www.biosemiotica.it/internal_links/pdf/Barbieri%20M%20(2010)%20On%20the%20Origin%20of%20Language

I think Chomsky would agree with Barbieri that:

animals do not interpret the world but only representations of the world. Any interpretation, in short, is always exercised on internal models of the environment, never on the environment itself.

So perception of "external reality" is always mediated/filtered through our mental organs. But I'm not sure Chomsky would be sympathetic to the view that:

the environment (in an objective sense) necessarily represents the final/ultimate object of any perception.
 
Last edited by a moderator:
  • #209
bohm2 said:
So I'm guessing it doesn't much favour the Everett or Bohmian interpretations of QM.

That is certainly true for me. :smile:

bohm2 said:
What is interesting are the attempts by Barbieri's group to form a synthesis with biolinguistics and with linguists like Chomsky (see link below), given Chomsky's nativism and his premise that syntax determines meaning. This seems inconsistent with the systems view, on which meaning is determined by "the pragmatic context".

It is hardly Barbieri's "group". Quite a few are hostile to his view of what biosemiosis is, let alone his attempts to make a connection with Chomsky.

Barbieri himself calls his approach code-semiosis and distinguishes it from a number of approaches including Pattee's physical-semiosis, or the more strictly Peircean sign-semiosis.

Having read his papers, my main reaction is not that he is wrong (and others right) but that he overcomplicates the analysis, whereas others (principally Pattee and Salthe) are seeking to strip things down to their barest bones. And these two are also seeking the pan- view where semiosis is described with such generality that it can be appreciated as a universal process (as Peirce envisaged).
 
  • #210
apeiron said:
Having read his papers, my main reaction is not that he is wrong (and others right) but that he overcomplicates the analysis, whereas others (principally Pattee and Salthe) are seeking to strip things down to their barest bones. And these two are also seeking the pan- view where semiosis is described with such generality that it can be appreciated as a universal process (as Peirce envisaged).

This is my main problem with most semiosis theories too. I've read some Peirce and Sebeok and some others, and Barbieri's posted attempt to bridge the two fields. Again, I've always found that even the simplest models are debatable, while the more complex models rest on so many assumptions and leaps of faith that they can only be incorrect, and given those observations it looks like most analyses deflate into too many words conveying gibberish.

It's nice with a glass of wine, though.
 
  • #211
PhizzicsPhan said:
matter/energy behaves according to the dual influences of the implicate order (described by Bohm and Hiley as the quantum potential or guiding wave) and explicate order (classical forces)

Here's a difficulty with Bohm's scheme that some mention. Assume a mixed ontology like his. You have:

1. A 3-dimensional space in which the N particles evolve.
2. A 3N-dimensional space in which the wave function evolves.

They argue that you have 2 seemingly "disconnected spaces with no apparent causal connection between the particles in one space and the field in the other space, and yet the stuff in the two spaces is evolving in tandem." How is this possible? It seems to have an interaction problem equivalent to the Cartesian mind-body problem?
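Just to make the two-space point explicit (this is the standard textbook formulation, nothing beyond it): the wave function lives on configuration space, the particles live in ordinary space, and the only bridge between them is the guidance equation evaluated at the actual configuration,

\[
\psi = \psi(q_1,\dots,q_N,t) \ \text{ on } \mathbb{R}^{3N},
\qquad
\frac{dX_k}{dt} = \frac{\hbar}{m_k}\,\operatorname{Im}\!\left(\frac{\nabla_k \psi}{\psi}\right)\Bigg|_{(X_1(t),\dots,X_N(t))},
\]

while ψ itself evolves by the Schrödinger equation, which contains no term referring back to the actual positions X_k. That one-way dependence is exactly what generates the "evolving in tandem with no reciprocal causation" worry.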
 
  • #212
bohm2 said:
Here's a difficulty with Bohm's scheme that some mention. Assume a mixed ontology like his. You have:

1. A 3-dimensional space in which the N particles evolve.
2. A 3N-dimensional space in which the wave function evolves.

They argue that you have 2 seemingly "disconnected spaces with no apparent causal connection between the particles in one space and the field in the other space, and yet the stuff in the two spaces is evolving in tandem." How is this possible? It seems to have an interaction problem equivalent to the Cartesian mind-body problem?

I envision it as akin to a boat on an ocean - normal physical forces constitute the wind and other surface events. The quantum potential of the implicate order constitutes the ocean currents.

More abstractly, I envision the implicate order/apeiron/ether/ground of being as the realm of pure potentiality. It is only when a particle bubbles up from potentiality into actuality that it becomes conscious and it is only when it becomes conscious that it becomes subject to the normal physical forces.

For yet another model, I envision the implicate order as an infinite grid of 3-d pixels. When these pixels constitute empty space, it is because consciousness has not risen from implicate to explicate and thus matter has not manifested from pure potentiality to actuality. Wolfram has suggested a cellular automata model of physics in A New Kind of Science and I think some of his ideas may have some merit. One idea I've played with a tiny amount is to extend the proximity model of cellular automata to two, three or more degrees of proximity, providing what seems to be a more natural model of how reality works, in terms of causal influences.
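A minimal sketch of what such an extended-proximity rule might look like (my own toy construction, not anything from Wolfram's book; the thresholds and function names are arbitrary): a one-dimensional automaton whose update counts live neighbours out to a configurable radius, so radius 1 is the usual nearest-neighbour model and radius 2 or 3 gives the extra "degrees of proximity".

Code:
import random

def step(cells, radius=2, born=(3,), survive=(2, 3)):
    """One update of a 1-D totalistic CA; the neighbourhood extends `radius`
    cells to each side (radius=1 is the usual nearest-neighbour case)."""
    n = len(cells)
    nxt = []
    for i in range(n):
        live = sum(cells[(i + d) % n]              # periodic boundary
                   for d in range(-radius, radius + 1) if d != 0)
        if cells[i]:
            nxt.append(1 if live in survive else 0)
        else:
            nxt.append(1 if live in born else 0)
    return nxt

cells = [random.randint(0, 1) for _ in range(64)]
for _ in range(10):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, radius=2)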
 
  • #213
apeiron said:
The whole page you linked to is a result of the confusion of following reductionist approaches to reality.

I don't think it has anything to do with the reductionist stance. Chomsky favours the “internalist” perspective with respect to linguistics because:

In symbolic systems of other animals, symbols appear to be linked directly to mind-independent events. The symbols of human language are sharply different. Even in the simplest cases, there is no word-object relation where words are mind-independent entities. There is no reference relation, in the technical sense familiar from Frege and Peirce to contemporary externalists.

Thus,

Much of Chomsky’s scepticism about externalist semantics is a scepticism about the possibility of making any scientific use of truth and reference in linguistic semantics. His scepticism about truth and reference in turn seems to stem from some deep metaphysical puzzles that he likes to raise about the existence of things in the world for words to refer to. In several places, Chomsky argues that names of cities, e.g., 'London' can refer both to something concrete and abstract, animate and inanimate.

He provides a number of examples if you read his stuff; convincingly, in my opinion. This seems to be one dividing line that separates his model from those of Peirce, Bateson, etc., who argue that “such operations fundamentally derive their referential and semiotic power from a system of relations external to, though including, the individual agent.” From what I recall, Chomsky debated Bateson/Piaget on this point years ago.
 
Last edited:
  • #214
bohm2 said:
Chomsky favours the “internalist” perspective with respect to linguistics because: In symbolic systems of other animals, symbols appear to be linked directly to mind-independent events. The symbols of human language are sharply different.


I think the problem here comes from taking an either/or approach. Either symbolic language is all innate/internal/whatever, or all learnt/referential/external/whatever.

My argument is about how both are true, and what that looks like.

So what is external to the "mind" is clearly the social construction of meaning. A word like London refers to something in the collective mind, if you like - a semiosis on a much larger scale. And unless you believe in telepathy, that's not an "internalist" story in the sense intended here.

And where Chomsky is really wrong (IMO, having studied the evolution of human language) is in thinking that syntax cannot be quite simply explained in "externalist" terms.

The nested hierarchical design (the recursiveness) which he claims to be such a special feature of syntax is in fact just how the whole brain works. It is the natural architecture for cognition. The key evolutionary event was in fact the development of a further constraint on the motor output of this hierarchy. That is, the development of a throat, mouth and lips designed for chunking a flow of vocalisation. Once sound was chopped into a sequence of articulate syllables (proto-words), then it was ready to be taken over by a code with rules.

Even the rules of grammar are no big deal. Animal minds (through evolution of natural brain architecture) already model the world in terms of paying attention to the levers of control - analysing who did what to whom. Rudimentary cause and effect logic.

Once an actual coding became possible, it is no surprise that the code emphasised this underlying epistemology, strengthening through rules (or rather, socially evolved habit) a universal logical format based on the triadic relation of subject, verb and object.

So Chomsky makes the evolution of language seem far more unnatural than it actually is (just as extreme nativists go the other way and think the story is so much simpler).

So nothing about human speech is internal in the sense that it arises mysteriously in "a mind", or any kind of mental realm.

But as I say, we shouldn't be too hasty, like the blank slate guys or behaviourists, and deny that anything else is in play here.

And this is where the epistemic cut comes in. There really is something different going on when we compare what we could call (for the sake of familiarity) the realms of hardware and software. The physical basis of symbols is a vexed issue. Symbols do open up a new world of causality. And that is what semiosis is trying to acknowledge. There are causes at the symbol level that are not present (except as vague potentials) at the brute material level of analysis.

So semiosis must arise out of the material, but symbols do seem to come from some other place, a wee bit Platonic.

Putting it all together, the systems approach (based on good old fashioned Aristotelian causality) says this is a local construction vs global constraints deal. The sharp division is not between matter and mind, or outer and inner, but between the local and global, between constructive freedoms and the order imposed by top-down constraints. So there is a real divide to talk about.

But then holding it all still together is the epistemic cut - the understanding that it is a divide that arises via development and has to be inserted into nature. Underneath, all is still one, even though equally, nothing definite can exist until vague monadicity has been sharply separated into the dichotomies that allow the triadic relationships which are the hierarchies.

Anyway, the power of symbols is that they code for constraints. You can construct a constraint in serial fashion (as a syntactic sentence), which in turn creates a mental state within the hierarchical architecture of a brain (as I argued with the example of a white persian cat).

Acting this way, symbols have the power that we call machine-like - mechanical or computational. They can construct constraints to order (according to the "mental" habits that we have learnt). Constraints normally come from the "outside" of a system - they are imposed from levels of organisation beyond a system's control. As I said about liquidity and pressure. But through genes and words, constraints can be constructed from the "inside" of the system - the material inside rather than some immaterial inside, although still an emergently experiential "material inside".

So again, the view I'm arguing is complex - far beyond the simplicities of a Chomsky or a Skinner. But it is also what the literature supports. It is the story you can see in the neuroscience and paleoanthropology. And it is the story which can be explained causally in the kind of systems science, semiotics and hierarchy theory that have arisen out of biology dealing with essentially the same problem when talking about "life".
 
  • #215
bohm2, I'm still waiting for you to elaborate on your question about telepathy vis a vis Bohmian QM. Specifically, can you point me toward the source of your suggestion that each particle's wave function must be entirely isolated (I think this is what you suggested)?
 
  • #216
PhizzicsPhan said:
bohm2, I'm still waiting for you to elaborate on your question about telepathy vis a vis Bohmian QM. Specifically, can you point me toward the source of your suggestion that each particle's wave function must be entirely isolated (I think this is what you suggested)?

This is from Mike Towler's course slides on the properties of the wave field:

Comparison with other field theories

•No ‘source’ of ψ-field in conventional sense of localized entity whose motion ‘generates’ it. ψ thus not ‘radiated’.
•At this level no ‘ether’ introduced which would support propagation of ψ. As with electromagnetism, think of ψ as state of vibration of empty space.
•Influence of wave on particle, via Q, independent of its intensity.
•Initial velocity of particle fixed by initial wave function and not arbitrarily specified as in electromagnetic/gravitational theories.
•Schrodinger eqn. determines wave evolution and particle equation of motion (unlike electromagnetism where Maxwell equations and Lorentz force law logically distinct).
•Wave equation describes propagation of complex amplitude ψ , or equivalently two coupled real fields. Complex waves often used in other field theories for mathematical convenience, but always take real part in the end. In QM two real fields required.
• ψ-field finite and carries energy, momentum and angular momentum throughout space, far from where particle located (as in classical field theories). However conservation laws obeyed by field independent of particle since latter does not physically influence former.

Furthermore, there is no action-reaction symmetry:

in classical physics there is an interplay between particle and field - each generates the dynamics of the other. In pilot wave theory ψ acts on positions of particles but, evolving as it does autonomously via Schrodinger’s equation, it is not acted upon by the particles.

So, it seems that even the particle cannot "influence" the ψ field. Bohm writes (p. 30, The Undivided Universe):

the Schrodinger equation for the quantum field does not have sources, nor does it have any other way by which the field could be directly affected by the condition of the particles.

http://www.tcm.phy.cam.ac.uk/~mdt26/pilot_waves.html

Anyway, given the points above I can't see how a ψ field in one particle system can have an effect on another ψ field/particle system. A system of particles may be guided by a pool of information common to the whole system but that's not the same thing. It's possible that I'm mistaken, but I don't think I'm misinterpreting his model? As an aside, telepathy seems totally irrational to me.
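For reference, the structure behind those bullet points, in the usual single-particle form: writing the wave function in polar form, the Schrödinger equation splits into a continuity equation for R² and a modified Hamilton-Jacobi equation,

\[
\psi = R\,e^{iS/\hbar},
\qquad
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R},
\qquad
v = \frac{\nabla S}{m}.
\]

Q depends on the form of R rather than its magnitude (hence "independent of its intensity"), and nothing in the Schrödinger equation refers to the actual particle position, which is the missing action-reaction symmetry Towler and Bohm describe.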
 
Last edited:
  • #217
apeiron said:
So Chomsky makes the evolution of language seem far more unnatural than it actually is (just as extreme nativists go the other way and think the story is so much simpler).

I'm not sure what you mean by "unnatural". If you mean his skepticism of accounting for the evolution of language via natural selection, then I agree. I believe he thinks the evolution of the language faculty has more to do with the laws of physics than the principle of natural selection. He writes:

A very strong proposal, sometimes called “the strong minimalist thesis,” is that all phenomena of language have a principled account in this sense, that language is a perfect solution to interface conditions, the conditions it must satisfy if it is to be usable. If that thesis were true, language would be something like a snowflake, taking the form it does by virtue of natural law. Genetic endowment is the residue when this thesis is not satisfied. An account of the evolution of language will have to deal with the property of unbounded Merge, and whatever else remains in the genetic endowment. Emergence of unbounded Merge at once provides a kind of “language of thought,” an internal system to allow preexistent conceptual resources to construct expressions of arbitrary richness and complexity.

The core principle of language, unbounded Merge, must have arisen from some rewiring of the brain, presumably not too long before the “great leap forward,” hence very recently in evolutionary time. Such changes take place in an individual, not a group. The individual so endowed would have had many advantages: capacities for complex thought, planning, interpretation, and so on. The capacity would be transmitted to offspring, coming to dominate a small breeding group. At that stage, there would be an advantage to externalization, so the capacity would be linked as a secondary process to the sensorimotor system for externalization and interaction, including communication – a special case, at least if we invest the term “communication” with substantive content, not just using it for any form of interaction. It is not easy to imagine an account of human evolution that does not assume at least this much, in one or another form. And empirical evidence is needed for any additional assumption about the evolution of language.

Biolinguistic Explorations: design, development, evolution

http://www.law.georgetown.edu/faculty/mikhail/documents/Noam_Chomsky_Biolinguistic_Explorations.pdf
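For what "unbounded Merge" amounts to formally, here is a bare toy rendering (just the standard definition Merge(X, Y) = {X, Y}; the function names and example words are my own, nothing specific to the paper): applying the operation recursively to its own outputs yields hierarchical structures of arbitrary depth.

Code:
# Toy rendering of binary Merge: take two syntactic objects and form the
# (unordered) set containing them. Applying it recursively to its own
# outputs gives unboundedly deep structures.
def merge(x, y):
    return frozenset([x, y])

# Lexical atoms ("human concepts as computational atoms" in the quote).
the, fluffy, cat, saw, it = "the", "fluffy", "cat", "saw", "it"

np = merge(the, merge(fluffy, cat))   # {the, {fluffy, cat}}
vp = merge(saw, it)                   # {saw, it}
clause = merge(np, vp)                # {{the, {fluffy, cat}}, {saw, it}}

def depth(obj):
    """Nesting depth - there is no upper bound, hence 'unbounded' Merge."""
    return 0 if isinstance(obj, str) else 1 + max(depth(o) for o in obj)

print(depth(clause))  # 3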

In a recent paper he writes,

At some time in the very recent past, maybe about 75,000 years ago, an individual in a small group of hominids in East Africa underwent a minor mutation that provided the operation Merge – an operation that takes human concepts as computational atoms, and yields structured expressions that provide a rich language of thought. These processes might be computationally perfect, or close to it, hence the result of physical laws independent of humans. The innovation had obvious advantages, and took over the small group. At some later stage, the internal language of thought was connected to the sensorimotor system, a complex task that can be solved in many different ways and at different times, and quite possibly a task that involves no evolution at all.

The Biolinguistic Program: The Current State of its Evolution and Development

http://www.punksinscience.org/klean...L/material/Berwick-Chomsky_Biolinguistics.pdf

The highlighted metaphors above suggest some level of simplicity, elegance or optimal design, given the conditions under which language developed. I don't know whether this is possible, but Cedric Boeckx and Massimo Piatelli-Palmarini argue that:

The ultimate goal of the Minimalist Program...is for the discovery of the points of variation to yield the linguistic equivalent of the periodic table of elements that would ‘bring linguistics closer to the goals and methods of the natural sciences, enriching both linguistics and biology with intimations of deductive power that might one day become not too dissimilar from that of physics.’

http://www.springerlink.com/content/j336q00qw84g3461/fulltext.pdf
http://dingo.sbs.arizona.edu/~massimo/publications/PDF/BoeckxMPPLingReview2005.pdf
 
Last edited by a moderator:
  • #218
bohm2 said:
Such changes take place in an individual, not a group.

This is the problem with Chomsky - the oracular statements that just fly in the face of the mainstream. It is as if he doesn't understand how evolution works.

He speaks about language as a hopeful monster mutation when evolutionary change is a population genetics story. Steady tinkering with a general package. Dramatic change comes by fine-tuning of developmental growth gradients, not by sudden inventions de novo.

It is the same as his arguing that internal speech - to control thoughts - came before external speech to control social behaviour. Social animals use signs (indexical rather than symbolic) to communicate. Chimps can offer, direct, indicate (in an unstructured, unformalised way). There is no reason to suppose that human language did not start off like this - especially as speech continues to have a primarily social function even if that function is self-regulation.

The use of inner speech to think (in that philosophical/rational fashion that Chomsky is treating as paradigmatic) is pretty modern - and itself clearly socio-cultural in its development. An education is first required. Just having speech does not create a rational style of thought (as cross-cultural studies easily demonstrate).

So I just don't "get" Chomsky. What is his appeal? Why is he the most cited living scientist?

I've always thought it must be because he was the one to give the Behaviourists the bashing they deserved. Or that his politics were right on. Or just that his prophetic style inspires disciples. Because I have yet to read a version of his theories concerning language evolution that makes any sense, or bears any realistic connection to the probable facts.

It is not just me either, but the general opinion of those studying language evolution.

Bickerton wrote this amusing account which seems revealing of Chomsky's character...

http://www.radicalanthropologygroup.org/old/pub_bickerton_on_chomsky.pdf

On October 14, 2005, Chomsky disembarked on Long Island for one of the few conferences he has attended in the last several decades: the Morris Symposium on the Evolution of Language at S.U.N.Y Stony Brook. He arrived too late for any of the presentations given by other scholars on that date, gave his public lecture, gave his conference presentation at the commencement of the next morning’s session, and, despite the fact that all of the morning’s speakers and commentators were expected to show up for a general discussion at the end of that session, left immediately for the ferry back without having attended a single talk by another speaker. For me, and for numerous others who attended the symposium, this showed a lack of respect for everyone involved. It spelled out in unmistakable terms his indifference to anything anyone else might say or think and his unshakable certainty that, since he was manifestly right, it would be a waste of time to interact with any of the hoi polloi in the muddy trenches of language evolution.

He then goes on to pick apart the holes in Chomsky's notion of evolution.
 
Last edited by a moderator:
  • #219
bohm2 said:
That's what makes us unique among the other animals or so argues Chomsky:

Yes, but that does not mean Chomsky is saying anything useful about how the difference arose.

The highlighted bit about the fact that internal speech cannot be shut down at will is yet another example of how Chomsky is out of touch with basic brain science.

The brain is designed to generate potential motor action whenever anything is in focal attention.

So see a door knob and your hands are already getting primed with an anticipatory sense of what to do. Speech output is just another form of motor action tacked on to the brain hierarchy in this sense. Whatever is your current focus of attention, your brain (through many years of training) will be seeking to form a verbal response.

And as he mentions, you in fact continue talking to yourself all through sleep too. Ruminative chatter runs through slow wave sleep.

So the brain is just doing what it was evolved to do - respond to attentional focus with at least preparatory responses. There is no off-button. The whole evolutionary point of a mind is to react to the world, not contemplate it in some abstract, conceptual, rational fashion.

So this is not a proof that speech is intrinsically "internal" any more than responding kinesthetically to the sight of a doorknob - then not opening the door. With speech, the urge is to say it aloud - a normal communicative action. But through socialisation, we have learned to keep our thoughts to ourselves and so speak silently in our heads - conscious of only the anticipated auditory image of saying something aloud.

So "will" can quite easily stop us blurting out our internal dialogue, but it cannot simply switch off the trained response of generating some urge say something about any focus of attention.

As to the general fact that language is special and the key to human difference, that is already agreed. That is the basis of the semiotic position. It is a symbolic coding system that animals lack and humans evolved/invented.

But Chomsky's notions about the "how" of that evolution/invention are just woefully out of touch with the science on everything I have read from him so far.

[Edit: What happened? Your last post seems to have disappeared?]
 
Last edited:
  • #220
Sorry, I jumped the gun in my last post. My ADD? I didn't read your post carefully. I deleted the post because it doesn't really affect your argument. I also had trouble understanding his thoughts on evolution vs natural selection. But I think he wants to maximize the physical/chemical laws that guide evolution over natural selection, kind of in the same way that helium came after hydrogen, as Jacob notes:

Chomsky's naturalism is based on the Galilean assumption that we ought to look for deep physical explanations, which in turn leads him to maximize the contribution of physical laws and downplay the role of natural selection in the evolution of complex biological systems. He seems to assume that time is not ripe yet for providing explanations of cognitive phenomena based on natural selection for we still miss basic insights into the physical constraints under which natural selection must operate. I certainly am in no position to judge whether he is right. Still, what is not always clear from Chomsky's writings is whether he thinks that naturalistically inclined externalist philosophers and evolutionary psychologists are merely guilty of neglecting the role of physical constraints in evolution or whether they are more seriously mistaken in assuming that natural selection is involved in explaining why the behavior of human beings exemplifies the law of universal gravitation.

http://hal.archives-ouvertes.fr/docs/00/05/32/33/PDF/ijn_00000027_00.pdf
 
Last edited:
  • #221
bohm2 said:
Sorry, I jumped the gun in my last post. My ADD? I also had trouble understanding his thoughts on evolution vs natural selection. But I think he wants to maximize the physical/chemical laws that guide evolution over natural selection, kind of in the same way that helium came after hydrogen, as Jacob notes:

This bit of his argument is then like what I am saying about the epistemic cut. Computational mechanism - the causal power of serial codes - is something that is beyond the usual ideas about material causality. So we should describe this aspect of systems in suitably universal terms to do it justice.

Codes are special. They seem to come from "beyond the normal" (being a variety of "imposed constraint"). The problem for biologists/neurologists/anthropologists is then to explain how codes can arise via natural evo-devo processes.

And this is not hard at all. It is the constraint over dimensionality that creates codes. In a world of processes of higher dimensionality, constraining dimensionality puts certain processes "outside" the system (even while they are inside).

So a membrane is a way to constrain the dimensionality of a chemical reaction from 3D to 2D, dramatically altering its rate and other material conditions.

We call cells "machinery" because they are full of these kinds of internal dimensional constraints.

And a code is what you get with maximal constraint (the situation where a process is now most completely "outside" what it is still "inside", or most completely shifted to the rate independent information side of Pattee's epistemic cut).

That is, constrain a process to a 1D line (like a DNA molecule or a flow of vocalisation), then constrain it further to a 0D sequence of points (like a 3-base codon or syllabic utterances) and you have the material basis of a code. All that has to happen next is the colonisation of this code by "information". A semiotic relationship must develop where a coding potential actually becomes used as a code, a memory mechanism, that controls a larger space of dynamical processes for some meaningful end.
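As a concrete instance of that last step, here is a handful of entries from the standard genetic code (the table values are the real codon assignments; the function itself is just my toy illustration of a rate-independent mapping riding on the constrained 1D/0D medium).

Code:
# A few entries of the standard genetic code: a rate-independent mapping
# from 0D symbol triplets (codons) to amino acids - the "material basis
# of a code" once it has been colonised by information.
CODON_TABLE = {
    "AUG": "Met",  # also the start signal
    "UUU": "Phe", "UUC": "Phe",
    "GGU": "Gly", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Read the 1D strand as discrete 3-base chunks and decode each one."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE.get(mrna[i:i + 3], "?")
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("AUGUUUGGCUAA"))  # ['Met', 'Phe', 'Gly']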

Genes control a biochemical milieu. Words control a sociocultural milieu. A new level of code, a new level of organisational complexity.

So the general causal story is there in systems science/semiotics/theoretical biology. It is a naturalistic explanation that fits with the facts. It does appeal to a body of ideas beyond simplistic, reductionist, Darwinian selection and so does - as Chomsky wants - arrive at a more physically general level of explanation. But biologists already know that evolution is a much more complex story than Darwinism. That's why evo-devo is what they talk about these days.
 
  • #222
Stoljar’s argument against Strawson’s realist monism/panpsychism:

Strawson: (people) think they know a lot about the nature of the physical...this is a very large mistake.

Stoljar agrees with Strawson on this statement. Stoljar then summarizes Strawson’s argument:

1. If an experiential fact e is wholly dependent on a non-experiential fact n, then n must be intrinsically suitable (i.e. be intrinsically such to wholly yield an experiential fact).
2. There is no non-experiential fact n such that it is intrinsically suitable.
Conclusion: No experiential fact e is wholly dependent on any non-experiential fact n.
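Schematically, with D(n) for "e is wholly dependent on n" and S(n) for "n is intrinsically suitable", the argument has the form

\[
\forall n\,\bigl(D(n) \rightarrow S(n)\bigr),\qquad \neg\exists n\,S(n) \ \;\vdash\; \ \neg\exists n\,D(n),
\]

which is logically valid, so everything turns on whether the premises (premise 2 in particular) are true.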

But the emergentist will deny 1: e is wholly dependent on n, and yet insist also that this tells us nothing about the intrinsic or essential nature of n. The eliminativist denies that experiences even exist. Stoljar “agrees with Strawson that both eliminativism and emergentism are things to be avoided if possible.” But Stoljar points out that premise 2 is the more faulty one, by giving a parallel argument:

4. If a liquidity fact l is wholly dependent on a non-liquidity fact m, then m must be intrinsically suitable (i.e. be intrinsically such to wholly yield a liquidity fact).
5. There is no non-liquidity fact m such that it is intrinsically suitable.
Conclusion: No liquidity fact l is wholly dependent on any non-liquidity fact m.

The same type of argument can be used against Strawson using apeiron's earlier examples (e.g. how one gets gas from liquid, how one gets acidity from hydrogen, etc.). Given the liquidity argument above it is obvious that:

his argument is clearly unsound because its conclusion is false: the facts about something’s being a liquid—for example the facts about water’s being a liquid—do indeed depend on facts not about liquid, for example facts about the nature of various chemical elements and their properties.

Stoljar thus argues that:

if the liquidity argument is unsound, and (premise) 4 is true, the culprit must be 5... So, just as the second premise of the liquidity argument is false or without foundation, so too is the second premise of the experience argument.

But Strawson agrees that 5 is false, yet insists that 2 is true. Stoljar then goes on to use Strawson’s statement that “they think they know a lot about the nature of the physical...this is a very large mistake” against Strawson:

But isn’t Strawson’s claim about non-experiential facts directly analogous to this claim about physical facts? Isn’t he simply insisting that he knows enough to know about non-experiential facts that they are not intrinsically suitable? Why then isn’t his position on non-experiential facts directly analogous to the mistaken position about physical facts that he himself so correctly identifies and criticizes?

So Strawson is guilty of that same kind of error he accuses his opponents of making:

For, as we have seen, Strawson insists on 2, and 2 is the claim that no non-experiential fact is intrinsically such as to yield an experiential fact. When we ask what grounds this insistence, however, all we seem to find is that we know enough to know.

http://philrsss.anu.edu.au/sites/default/files/people/Strawson.pdf
 
Last edited:
  • #223
I know this is a stretch but I find the concept of non-separability and many-dimensional configuration space (as implied by QM) interesting with respect to 2 major issues:

1. Explaining emergence/novelty. Consider this author’s argument previously posted talking about the possibility for "real systemic or emergent properties" when discussing the results of the Bell test (Aspect) experiments:

"The classical picture offered a compelling presumption in favour of the claim that causation is strictly bottom up-that the causal powers of whole systems reside entirely in the causal powers of parts. This thesis is central to most arguments for reductionism. It contends that all physically significant processes are due to causal powers of the smallest parts acting individually on one another. If this were right, then any emergent or systemic properties must either be powerless epiphenomena or else violate basic microphysical laws. But the way in which the classical picture breaks down undermines this connection and the reductionist argument that employs it. If microphysical systems can have properties not possessed by individual parts, then so might any system composed of such parts...

Were the physical world completely governed by local processes, the reductionist might well argue that each biological system is made up of the microphysical parts that interact, perhaps stochastically, but with things that exist in microscopic local regions; so the biological can only be epiphenomena of local microphysical processes occurring in tiny regions. Biology reduces to molecular biology, which reduces in turn to microphysics. But the Bell arguments completely overturn this conception."

http://faculty-staff.ou.edu/H/James.A.Hawthorne-1/Hawthorne--For_Whom_the_Bell_Arguments_Toll.pdf
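The concrete form of "the Bell arguments", for anyone who wants it: any local hidden-variable model of the correlations must satisfy the CHSH bound

\[
|S| = \bigl|E(a,b) - E(a,b') + E(a',b) + E(a',b')\bigr| \le 2,
\]

whereas quantum mechanics predicts, and Aspect-type experiments observe, values up to 2√2 for suitable measurement settings. That violation is the "breakdown of the classical picture" the quote leans on.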

2. The information short-fall and “spatial” problem for mind: These are the arguments made by Fitch and McGinn.

Fitch: How can a single cell (the fertilized egg), with two copies of a few gigabytes of DNA, contain within itself the basis for a newborn’s body with 100 trillion cells and a brain with a trillion synapses? How can 25,000 genes possibly possess enough information to specify this process? Alternatively, how could the environment in utero provide this information? How could evolution have encoded it? Where does all this information come from?

McGinn: That is the region in which our ignorance is focused: not in the details of neurophysiological activity but, more fundamentally, in how space is structured or constituted. That which we refer to when we use the word 'space' has a nature that is quite different from how we standardly conceive it to be; so different, indeed, that it is capable of 'containing' the non-spatial (as we now conceive it) phenomenon of consciousness.

http://www.punksinscience.org/kleanthes/courses/UCY10S/IBL/material/Fitch_Prolegomena.pdf
http://www.nyu.edu/gsas/dept/philo/courses/consciousness97/papers/ConsciousnessSpace.html
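Rough numbers behind Fitch's question, just as a back-of-envelope (using the figures in his quote, so order-of-magnitude only):

Code:
# Back-of-envelope version of Fitch's "information shortfall", using the
# figures in the quote; everything here is order-of-magnitude only.
base_pairs   = 3.2e9            # human genome, roughly 3.2 billion base pairs
genome_bits  = base_pairs * 2   # 2 bits per base (A/C/G/T), per copy
synapses     = 1e12             # "a brain with a trillion synapses"
bits_per_syn = 8                # even a crude one-byte spec per synapse

print(f"genome capacity : {genome_bits / 8 / 1e9:,.1f} GB")             # ~0.8 GB
print(f"explicit wiring : {synapses * bits_per_syn / 8 / 1e9:,.0f} GB")  # ~1,000 GB

So even on a generous reading the genome cannot be anything like an explicit wiring diagram; whatever the answer is, it has to be generative rather than a lookup table, which is exactly the shortfall Fitch is pointing at.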

If one takes the ontology of the 3N-dimensional space in QM seriously then maybe there is some hope to meet Fitch’s and McGinn’s demands? Consider the many-dimensional configuration space, its properties and the arguments suggested in these 3 papers:

http://philsci-archive.pitt.edu/1272/
http://spot.colorado.edu/~monton/BradleyMonton/Articles_files/qm%203n%20d%20space%20final.pdf
http://philsci-archive.pitt.edu/4621/1/ststaterealism.pdf

I know it’s a stretch but it just seems that this space is “rich” enough to allow for the possibility to meet Fitch’s and McGinn’s demands. Especially when combined with the non-separability/contextuality suggested in all interpretations of QM (e.g. the whole is greater than the sum of the parts) so that emergence/novelty would not appear so “brute”?
 
Last edited by a moderator:
  • #224
bohm2 said:
So Strawson is guilty of that same kind of error he accuses his opponents of making:

For, as we have seen, Strawson insists on 2, and 2 is the claim that no non-experiential fact is intrinsically such as to yield an experiential fact. When we ask what grounds this insistence, however, all we seem to find is that we know enough to know.

So what changes? This is the standard charge against the Hard Problem.

Those who believe there is a problem will say "I see no convincing tale of micro-causes".

Those who argue against will reply, well, that does not prove that such a tale does not exist. And if a tale does exist, then we only have a regular "easy" problem.

So the hard problem needs to be bolstered in the face of this reasonable sounding doubt by further conceivability tests, such as the zombie argument, or Mary's "knowing everything about colour" argument.

People can argue about how convincing they find that.
 
  • #225
bohm2 said:
I know it’s a stretch, but it just seems that this space is “rich” enough to allow for the possibility of meeting Fitch’s and McGinn’s demands, especially when combined with the non-separability/contextuality suggested in all interpretations of QM (e.g. the whole is greater than the sum of its parts), so that emergence/novelty would not appear so “brute”?

But at the QM collapse level, the actual novelty generated is tiny, micro-physical. What we can measure is "random" - maximally entropic so far as we are concerned as observers.

If there is a kind of holistic, contextual choice being made (outcomes being neither determined, nor random, but entangled and then decohered), then this evidence of creative spontaneity is the least amount possible.

Whereas at the level of life, events like the growth of a cell into an individual are strongly a matter of choice (what a genome chose to happen). They are powerfully negentropic. The coded information is robust enough to overcome all sorts of vagaries of circumstance to still produce the same overall end.

So comparing emergence/novelty at the QM scale and at the living-systems scale is talking about opposite ends of a huge spectrum of systems causality, or holism.

With QM, the holism could not be more fragile or minimal. With life and mind, it is physically robust and hugely negentropic.

So apples and oranges as a comparison. Even if both are varieties of holism, they are not woven of the same cloth - stuff of the same material description.

If you compare the actual "configuration space" of a pair of entangled electrons with that of a human mind, with all its hopes, plans, and expectations about both the near perceptual future and the more distant intentional future, you can see that the electrons inhabit a realm only trivially larger than Newtonian 3D, whereas for complex systems the future is a really vast configuration space, if you had to specify it in terms of countable material trajectories.
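To give a crude sense of that scale gap, the configuration space of N particles has 3N spatial dimensions (ignoring spin, and treating the particle counts below as round assumptions):

Code:
# Spatial configuration-space dimension for N particles (spin etc. ignored).
def config_dim(n_particles):
    return 3 * n_particles

print(config_dim(2))        # entangled electron pair: 6 dimensions
print(config_dim(10**23))   # a gram-scale lump of matter: ~3e23 dimensions
# A brain-scale system sits near the second figure, which is one way of stating
# why the two cases occupy opposite ends of the spectrum.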
 
  • #226
apeiron said:
So what changes? This is the standard charge against the Hard Problem.

Yes, but Stoljar is arguing that Strawson is contradicting himself.
 
  • #227
bohm2 said:
Yes, but Stoljar is arguing that Strawson is contradicting himself.

The base complaint is still just that Strawson needs to give us reason to believe the truth of 2). And it would be inconsistent for Strawson to claim that some physical facts are just obvious when he also admits we can never be sure we know the full truth about physical facts.

So the logic of the argument is not self-contradictory, but the standards set for the test of the truth of its premises seem hypocritical.

Of course it is still all "angels on the head of a pin" stuff as it is based on a faux reductionist notion of emergence, not one that I find logical or coherent in the first place.

As already argued, for there to be emergence, the micro-scale must certainly have the potential to yield what emerges. But equally, it cannot have those properties "in miniature" - present as an already realized actuality - otherwise there would be no emergence as such.

So the deep self-contradiction lies in insisting that there can be emergence without actual change. Arguing about whether you think you see, or think you positively don't see, the macro already present in the micro is a false dichotomy.

Logic says at a more fundamental level of development, things are inherently vague. If you are reducing the actual to the potential, by definition you only have the potential (the larger unconstrained state) and not the constraints necessary for some actuality to emerge.

You can't make definite statements (that something is either there, or not there) about that which is indefinite.
 
  • #228
apeiron said:
But at the QM collapse level...

Do you take the collapse as "real"? Do you favour collapse-type interpretations?
 
  • #229
bohm2 said:
Do you take the collapse as "real"? Do you favour collapse-type interpretations?

Yes of course. Constraints emerge with scale. That is basic to systems causality. So thermal decoherence would be taken as the mechanism that collapses things to effective classicality at a level far below those involved in brain function (even if decoherence is still an essentially reductionist no-collapse formalism).
 
  • #230
apeiron said:
Yes of course. Constraints emerge with scale. That is basic to systems causality. So thermal decoherence would be taken as the mechanism that collapses things to effective classicality at a level far below those involved in brain function (even if decoherence is still an essentially reductionist no-collapse formalism).

Maybe I'm misunderstanding you, but decoherence cannot solve the problem of definite outcomes in quantum measurement (i.e. the measurement problem). So given that it is generally agreed that decoherence cannot do this, what interpretation do you (and systems theorists) favour?

With respect to Strawson, Stoljar is arguing that Strawson is contradicting himself because:

1. Strawson argues that we make the mistake of assuming we know enough about the non-experiential stuff but we don't...this is a fatal mistake.

But then Strawson, himself, makes that same mistake because:

2. Strawson argues (we know) that non-experiential stuff is not intrinsically suitable to accommodate the experiential.
 
  • #231
bohm2 said:
Maybe I'm misunderstanding you, but decoherence cannot solve the problem of definite outcomes in quantum measurement (i.e. the measurement problem). So given that it is generally agreed that decoherence cannot do this, what interpretation do you (and systems theorists) favour?

As I agreed, even with decoherence, collapse is not in the formalism. But ontologically, decoherence is a systems-style approach because collapse is put out in the real world and tied to general thermodynamic principles rather than being either placed in a conscious human observer, or simply unplaced.

If you are asking my personal opinion, I don't hold to any strong definition of "collapse" here because again, that is the jargon of an either/or approach where something either is, or it isn't. Those are the only possibilities due to the law of the excluded middle.

The systems view would instead talk about limits. So collapse is something that would be approached asymptotically as a boundary state rather than a state actually achieved. But by the same token, this is still "as near properly collapsed as dammit" and not in some nebulous forever-Schrodinger's cat state, or any of the other interpretations like MWI that are justified by an inability to point to where the epistemic cut gets made in reality.

bohm2 said:
With respect to Strawson, Stoljar is arguing that Strawson is contradicting himself because:

1. Strawson argues that we make the mistake of assuming we know enough about the non-experiential stuff but we don't...this is a fatal mistake.

But then Strawson himself, so Stoljar argues, makes that same mistake because he

2. Says that he knows that non-experiential stuff is not intrinsically suitable to accommodate the experiential.

So you said. And who is arguing against that?

Once you accept ontic doubt, it applies to all claims of knowledge. But the consequence of this is that any claims have to be argued for in a way people find reasonable and convincing.

So has Strawson done that? Clearly not to Stoljar's satisfaction.

The Stoljar/Strawson discussion is about motivations for panpsychism, is it not?

The ordinary view of material reality is that it lacks any material basis (by way of localised properties) to construct experiential states. So there is a Hard Problem. But the panpsychist wants to fix things for reductionism by positing experience itself as a material property that is pan-natural. This then would give a material basis to a materialistic production of consciousness.

So someone can both say that we cannot see any causes for something so extraordinary as consciousness in our regular view of nature, and also that, because we cannot know everything about nature at this level, there is always room for anything to be the case - including that panpsychic experience is inherent as a fundamental property of matter.

If we know what we don't know, then that is definitely still knowing something. That is not strictly self-contradictory, though certainly runs into all the problems associated with hierarchically self-referential statements.

Now the panpsychic argument proceeds, as we have seen, along the lines that having considered all possible alternatives for how consciousness might arise in a fully-material world, we are left with only the improbable answer (one for which there is no observational evidence, for a start) that it is inherent as a fundamental property of matter.

But panpsychists have to first dismiss the systems argument, not the kind of lightweight notions of emergence being bandied about by Kim, for instance.

It is in fact quite easy to believe that a reductionist approach to consciousness (as a construction from a material) is not up to the task of accounting for its causes.

So now move on to tackling the much stronger systems view of complex reality before getting desperate, talking about the invisible properties of inaccessible regions - the very places your claims can never be checked against model and observation.
 
  • #232
If this opinion on semiotics by Lynn Nadel and Massimo Piattelli-Palmarini (see below) is pervasive among biolinguists, I don't see how Barbieri and others can hope to form some type of bridge between Biolinguistics and Biosemiotics.

What is Cognitive Science?

A special position in this debate between continuists and modularist-innatists was occupied by the influential biosemiotician Thomas Sebeok. He rejected wholesale all the experiments on the alleged linguistic abilities of apes, claiming a much deeper, more universal and more meaningful underlying substrate: the “semiotic function”. He described incremental steps of complexification in this universal underlying substratum and insisted that a unified theory could range from the “syntactic” (sic) nature of Mendeleeff’s table of the chemical elements (Sebeok, 1995/2000), up to all systems of human communication, be they vocal, gestural, graphic or pictorial, passing through the genetic code, the immune code, the systems of communication between cells, between unicellular organisms (microsemiotics), plants (phytosemiotics) and the circuits of neurotransmitters in the nervous system (neurosemiotics). These incremental steps in the quality and complexity of signaling were analyzed as accruing to a common semiotic substrate, displaying a universal “perfusion of signs” which, according to Sebeok, authorizes a unified conceptualization, a semiotic “ecumenicalism” (Sebeok, 1977). Sebeok’s conceptualization and his alleged semiotic “theorems” and “lemmas” have found attentive ears in some literary quarters, and in some schools of communication (notably in Italy), but have remained, in the main, alien to cognitive science. The semantics of natural language has developed a radically different approach (for a textbook synthesis, see Larson and Segal, 1995).

http://www.biolinguistics.uqam.ca/Nadel&Piattelli-Palmarini_2003.pdf
 
  • #233
bohm2 said:
If this opinion on semiotics by Lynn Nadel and Massimo Piattelli-Palmarini (see below) is pervasive among biolinguists, I don't see how Barbieri and others can hope to form some type of bridge between Biolinguistics and Biosemiotics.

But why would we take this opinion seriously?

The paper itself concedes that naive innatism/modularism has been superseded (along with its corollary, naive blank slate behaviourism/connectionism).

But it does not deal with what is replacing this old dichotomy. And semiotics is central to that.
 
  • #234
Does anyone here have an answer to "Plato's Problem", regarding Chomsky?
 
  • #235
Willowz said:
Does anyone here have an answer to "Plato's Problem", regarding Chomsky?

If you mean the gap between knowledge and experience, as outlined here?

http://en.wikipedia.org/wiki/Plato's_Problem

Then a pretty good summary of Chomsky's innatist stance on solving Plato's problem is given in these paragraphs:

I think we are forced to abandon many commonly accepted doctrines about language and knowledge. There is an innate structure that determines the framework within which thought and language develop down to quite precise and intricate details. Language and thought are awakened in the mind, and follow a largely predetermined course, much like other biological properties. They develop in a way that provides a rich structure of truths of meaning. Our knowledge in these areas, and I believe elsewhere - even in science and mathematics - is not derived by induction, by applying reliable procedures and so on; it is not grounded or based on "good reason" in any useful sense of the notion. Rather it grows in the mind, on the basis of our biological nature, triggered by appropriate experience, and in a limited way shaped by experience that settles options left open by the innate structure of mind. The result is an elaborate structure of cognitive systems of knowledge and belief, that reflects the very nature of the human mind, a biological organ like others with its scope and limits.

This conclusion, which seems to me well-supported by the study of language and I suspect holds true far more broadly, perhaps universally in domains of human thought, compels us to rethink fundamental assumptions of modern philosophy and our general intellectual culture, including assumptions about scientific knowledge, mathematics, ethics, aesthetics, social theory and practice and much else, questions too broad and far-reaching for me to try to address here, but questions that should, I think, be subjected to serious scrutiny from a point of view rather different than those that have conventionally been assumed.


http://sammelpunkt.philo.at:8080/1284/1/Chomsky.pdf
 
Last edited by a moderator:
  • #236
Chomsky versus Peirce:

I found these passages quoted by Chomsky regarding Peirce's views on scientific theory construction interesting, because it's something that has always concerned me since I was a kid studying science:

Peirce holds that theories are constructed by a “guessing instinct” (abduction) that provides hypotheses to test. Successful theory construction can be explained only by assuming that “Man’s mind has a natural adaptation to imagining correct theories of some kinds.” This innate property of mind “puts a limit upon admissible hypotheses.” It accounts for the fact that “men of surpassing genius” had to make only a few guesses “before they rightly guessed the laws of nature” despite highly inadequate data, including often disconfirming data that are shelved. The very rapid success results from the fact that the “natural beliefs” are true, Peirce held, a “logical necessity” because “the mind is a product of nature.”


After quoting Peirce, Chomsky goes on to argue that this view on some accounts ("truth") is misconceived:

But...the history of science shows that most theories (perhaps all) are false, not true, and the fact that mind is a product of nature tells us nothing about the validity of “natural beliefs”.

So Chomsky being an internalist/innatist is arguing that our various systems of knowledge and belief do not resemble the “real” properties of the world, in any sense of the word, any more than our physical organs reflect our environment. But there's one thing I still don't understand:

If science cannot explain "a single effect in nature", how do we explain the sense of deep understanding, of genuine explanation, in some instances of science - like theoretical physics - that do seem to convey a strong sense of "truth", a view that we are in some sense discovering "the real properties of the natural world"?

Is this also an illusion?
 
  • #237
bohm2 said:
After quoting Peirce, Chomsky goes on to argue that this view on some accounts ("truth") is misconceived:

Chomsky wants to employ Peirce as a cite for his innate "scientific theory-forming faculty", but never mind...

So Chomsky being an internalist/innatist is arguing that our various systems of knowledge and belief do not resemble the “real” properties of the world, in any sense of the word, any more than our physical organs reflect our environment. But there's one thing I still don't understand:

Yes, but Chomsky just says these things. Clearly they are fallacious.

It is a false dichotomy to demand that either our models of reality are completely true, or they must be completely untrue (just an illusion). They can be relatively true (or untrue).

And then, far more importantly, it is not even a natural purpose of a model to be "true". The reason for constructing models of the world is to gain pragmatic control over events. (Peirce was of course the father of pragmatist philosophy.)

And it is then pretty easy to judge a model on its utility. If it works, it ain't an illusion.

So Chomsky is simply using a false measure to judge science. And brains too. Both exist for the purpose of actively controlling reality, not passively knowing reality (even if rationalist ideology says otherwise).
 
  • #238
apeiron said:
Chomsky wants to employ Peirce as a cite for his innate "scientific theory-forming faculty", but never mind...

Yes, but Peirce seems to be arguing that our innate cognitive structures would have to have a considerable degree of correspondence to "external" reality (either because they are a product of natural law or for reasons of 'natural selection'):

e.g. a “logical necessity” because “the mind is a product of nature”

Chomsky isn't sympathetic to this argument, at least not to the extent that Peirce is. He writes:

This partial congruence between the truth about the world and what the human science-forming capacity produces at a given moment yields science. Notice that it is just blind luck if the human science-forming capacity, a particular component of the human biological endowment, happens to yield a result that conforms more or less to the truth about the world.

Chomsky seems more sympathetic to the skeptical arguments put forth by Pyrrhonian Skeptics, Hume, etc. Citing Richard Popkin, Chomsky writes:

‘the secrets of nature, of things-in-themselves, are forever hidden from us.’ Thus, we revert to the ‘mitigated scepticism’ of even pre-Newtonian English science, acknowledging the impossibility of finding ‘the first springs of natural motions’
 
Last edited:
  • #239
So Chomsky being an internalist/innatist is arguing that our various systems of knowledge and belief do not resemble the “real” properties of the world, in any sense of the word, any more than our physical organs reflect our environment.

apeiron said:
Yes, but Chomsky just says these things. Clearly they are fallacious.

I don't understand why they are clearly fallacious. They might be, but if one treats mental "organs" on par with physical "organs", shouldn't the processes be similar? Here's a longer version of this skeptical/innatist argument:


Thus, like physical growth and development (i.e. humans are designed to grow arms and legs, not wings - to use one of N. Chomsky’s well-known examples), human mental development (including our systems of belief and knowledge) largely reflects our particular, biological endowment (i.e. a consequence of the organizing activity of the mind) and not the properties of our physical environment; consequently, there is no guarantee that any of our “knowledge” (including our mathematical and scientific knowledge) will conform to the “real” properties of the world...Thus, environmental input may act only as a trigger to set off a rich and highly articulated system of beliefs that, to a large extent, is intrinsically determined, following a predetermined course (in the same way that oxygen and nutrition are required for cellular growth to take place). Thus, our various systems of knowledge and belief do not resemble the “real” properties of the world, in any sense of the word, any more than our physical organs reflect our environment.
 


Last edited:
  • #240
But our physical organs do play a huge role in reflecting our environment. Quite literally, in some cases!

I agree that our intrinsic systems play a significant role, too. The intrinsic systems were, however, shaped by the environment over billions of years.

(I speak now, only of physical organs.)
 
  • #241
Pythagorean said:
But our physical organs do play a huge role in reflecting our environment. Quite literally, in some cases!

Maybe we're talking about different stuff but I'm not understanding? So let me give a very simple example. Consider the same environment. Why does a human placed in that environment grow hands and arms whereas a bird placed in that same environment grows wings? Does it have anything to do with the environment (other than adequate nutrition, oxygen, etc.)?
 
  • #242
bohm2 said:
Maybe we're talking about different stuff but I'm not understanding? So let me give a very simple example. Consider the same environment. Why does a human placed in that environment grow hands and arms whereas a bird placed in that same environment grows wings? Does it have anything to do with the environment (other than adequate nutrition, oxygen, etc.)?

It only has to do with the environment. Of course, it's the environment of their ancestral history (two species diverge from one when the one is isolated into two separate environments). The ancestral form from which both birds and humans split, of course, had the "master code" that was able to adapt in both situations, and we talk about these master codes a lot (gene conservation across species). This is common knowledge, assuming that you agree with the mainstream view that all life descends from a single ancestor.

You can also see this in phenotype, for instance, in the desert vull, comparing the volcanic region with the desert region. In general, the vull has several coat colours, but you will find a larger proportion of dark-furred vulls in the volcanic region and a larger proportion of brown-furred vulls in the desert region, simply because each environment reinforces particular alleles of the pigment gene by hiding their bearers from predators (increasing the chances that such phenotypes will survive and that the progeny will carry them).
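As a toy illustration of that last point (the fitness numbers below are hypothetical, and the vull is just the example species named above), a one-locus haploid selection recursion in Python shows the same starting allele frequency being pushed in opposite directions by the two environments:

Code:
# Deterministic one-locus selection: update the dark-allele frequency each
# generation according to relative fitness in a given environment.
def select(p_dark, w_dark, w_light, generations):
    for _ in range(generations):
        mean_w = p_dark * w_dark + (1 - p_dark) * w_light
        p_dark = p_dark * w_dark / mean_w
    return p_dark

start = 0.5  # same starting frequency of the dark-pigment allele in both places
print("volcanic rock:", round(select(start, w_dark=1.0, w_light=0.9, generations=100), 3))
print("desert sand:  ", round(select(start, w_dark=0.9, w_light=1.0, generations=100), 3))
# Dark fur is hidden from predators on dark rock (favoured) and exposed on light
# sand (disfavoured), so each environment reinforces a different allele.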
 
  • #243
Pythagorean said:
Of course, it's the environment of their ancestral history (two species diverge from one when the one is isolated into two separate environments).

Maybe we're not disagreeing? Nobody is questioning evolution but what is being questioned is the premise that just because something is a product of evolution or natural selection or of natural law, it will somehow give us access to the "real" properties of the world. This is what Peirce was suggesting: that knowledge of mind-independent reality is possible, I think. Chomsky and many innatists disagree. Some of the reasons (e.g. poverty of stimulus, etc.) were mentioned. Consider Pinker's argument:

We are organisms, not angels, and our minds are organs, not pipelines to the truth. Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not to commune with correctness. Thus, it's argued that our minds like most other biological systems/organs are likely poor solutions to the design-problems posed by nature. They are "the best solution that evolution could achieve under existing circumstances, but perhaps a clumsy and messy solution." Thus, it seems we cannot have direct knowledge of what the world is like, as the knowledge has to be routed in terms of the resources available to our theory-building abilities/mental organs and these are not likely to be "pipelines to the truth".
 
Last edited:
  • #245
bohm2 said:
Maybe we're not disagreeing? Nobody is questioning evolution but what is being questioned is the premise that just because something is a product of evolution or natural selection, it will somehow give us access to the "real" properties of the world. This is what Peirce was suggesting: that knowledge of mind-independent reality is possible, I think. Chomsky and many innatists disagree. Some of the reasons (e.g. poverty of stimulus, etc.) were mentioned. Consider Pinker's argument:

We are organisms, not angels, and our minds are organs, not pipelines to the truth. Our minds evolved by natural selection to solve problems that were life-and-death matters to our ancestors, not to commune with correctness. Thus, it's argued that our minds like most other biological systems/organs are likely poor solutions to the design-problems posed by nature. They are "the best solution that evolution could achieve under existing circumstances, but perhaps a clumsy and messy solution." Thus, it seems we cannot have direct knowledge of what the world is like, as the knowledge has to be routed in terms of the resources available to our theory-building abilities/mental organs and these are not likely to be "pipelines to the truth".

Nobody (at least not I) is claiming that these so-called "mental organs" are "pipelines to the truth"...

I do, however, believe that knowledge of mind-independent reality is possible; that's not the same as saying that one can have a complete understanding of mind-independent reality. It's also not the same as saying that we aren't easily misled by our brain's clunky, and sometimes primal, way of processing.
 
  • #246
apeiron said:
You do know Peirce was the father of pragmatism?

http://en.wikipedia.org/wiki/Pragmatic_maxim

Here are some of the relevant quotes/interpretations:

It is somehow more than a mere figure of speech to say that nature fecundates the mind of man with ideas which, when those ideas grow up, will resemble their father, Nature...This is in line with Peirce’s synechism (which he developed especially after 1890s), according to which everything is continuous...Mind and matter are not entirely distinct elements but ‘all phenomena are of one character, though some are more mental and spontaneous, others more material and regular’... Similarly, it can be argued that there is no sharp line between instinct and inference; ‘instinct and reason shade into one another by imperceptible gradations’...The metaphysical ground is a rather vague argument for the idea that if the human mind is developed under those laws that govern the universe, it is reasonable to suppose that the mind has a tendency to find true hypotheses concerning this universe...In this way, general considerations concerning the universe, strictly philosophical considerations, all but demonstrate that if the universe conforms, with any approach to accuracy, to certain highly pervasive laws, and if man's mind has been developed under the influence of those laws, it is to be expected that he should have a natural light, or light of nature, or instinctive insight, or genius, tending to make him guess those laws aright, or nearly aright.

http://www.helsinki.fi/science/commens/papers/instinctorinference.pdf

For whatever reason, there are times when I want to be sympathetic to some of Peirce's ideas. I think there's a part of me that would like to think Peirce is right (at least in the views quoted above). But my skeptical part blocks me. I still have a hard time understanding how we are able to arrive at some seemingly far-reaching results in disciplines like theoretical physics by using our ability to do abstract mathematics, especially since that ability is unlikely to have been selected for. I'm not sure.
 
  • #247
bohm2 said:
For whatever reason, there are times when I want to be sympathetic to some of Peirce's ideas.

I'm not seeing evidence from your quoting that you understand those ideas.

http://plato.stanford.edu/entries/peirce/#psych

The most important extension Peirce made of his earliest views on what deduction, induction, and abduction involved was to integrate the three argument forms into his view of the systematic procedure for seeking truth that he called the “scientific method.” As so integrated, deduction, induction, and abduction are not simply argument forms any more: they are three phases of the methodology of science, as Peirce conceived this methodology. In fact, in Peirce's most mature philosophy he virtually (perhaps totally and literally) equates the trichotomy with the three phases he discerns in the scientific method. Scientific method begins with abduction or hypothesis: because of some perhaps surprising or puzzling phenomenon, a conjecture or hypothesis is made about what actually is going on. This hypothesis should be such as to explain the surprising phenomenon, such as to render the phenomenon more or less a matter of course if the hypothesis should be true. Scientific method then proceeds to the stage of deduction: by means of necessary inferences, conclusions are drawn from the provisionally-adopted hypothesis about the obtaining of phenomena other than the surprising one that originally gave rise to the hypothesis. Conclusions are reached, that is to say, about other phenomena that must obtain if the hypothesis should actually be true. These other phenomena must be such that experimental tests can be performed whose results tell us whether the further phenomena do obtain or do not obtain. Finally, scientific method proceeds to the stage of induction: experiments are actually carried out in order to test the provisionally-adopted hypothesis by ascertaining whether the deduced results do or do not obtain. At this point scientific method enters one or the other of two “feedback loops.” If the deduced consequences do obtain, then we loop back to the deduction stage, deducing still further consequences of our hypothesis and experimentally testing for them again. But, if the deduced consequences do not obtain, then we loop back to the abduction stage and come up with some new hypothesis that explains both our original surprising phenomenon and any new phenomena we have uncovered in the course of testing our first, and now failed, hypothesis. Then we pass on to the deduction stage, as before. The entire procedure of hypothesis-testing, and not merely that part of it that consists of arguing from sample to population, is called induction in Peirce's later philosophy.
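Schematically, the cycle described in that passage can be sketched as a loop. The helper functions below are placeholders for illustration only, not an implementation of Peirce's logic:

Code:
# Schematic abduction -> deduction -> induction cycle with its two feedback loops.
def inquiry(surprise, abduce, deduce, test, rounds=10):
    hypothesis = abduce(surprise)              # abduction: guess an explanation
    for _ in range(rounds):
        prediction = deduce(hypothesis)        # deduction: derive a testable consequence
        if test(prediction):                   # induction: experimental check
            continue                           # passes: loop back and deduce more
        hypothesis = abduce(surprise)          # fails: loop back to a new guess
    return hypothesis

# Trivial stand-in usage: cycle through guesses until one survives testing.
guesses = iter(["H1", "H2", "H3"])
print(inquiry("surprising data",
              abduce=lambda s: next(guesses),
              deduce=lambda h: f"prediction of {h}",
              test=lambda p: p.endswith("H3")))   # prints: H3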

bohm2 said:
...our ability to do abstract mathematics especially since that ability is unlikely to have been selected for.

Of course it is not a result of biological evolution. There is no abstract maths instinct. But it was quite clearly a result of cultural evolution. The human mind is the product of sociocultural development - remember Vygotsky? And maths was a valued cultural product because it underwrites technological control over the world. So there is no problem here.
 
  • #248
apeiron said:
I'm not seeing evidence from your quoting that you understand those ideas. http://plato.stanford.edu/entries/peirce/#psych

I disagree. I think there is some debate on Peirce's "abductive instinct", but it seems, at least from his own writings, that he did come to believe that because we are a product of nature/natural law, we have a natural instinct for somehow being able to arrive at the laws of nature. Some of those quotes are directly from Peirce's later writings (Collected Papers of Charles S. Peirce); see pp. 415 and 421-422 (604), available in the link below:

In this way, general considerations concerning the universe, strictly philosophical considerations, all but demonstrate that if the universe conforms, with any approach to accuracy, to certain highly pervasive laws, and if man's mind has been developed under the influence of those laws, it is to be expected that he should have a natural light, or light of nature, or instinctive insight, or genius, tending to make him guess those laws aright, or nearly aright...This would be impossible unless the ideas that are naturally predominant in their minds was true...The history of science, especially the early history of modern science, on which I had the honor of giving some lectures in this hall some years ago, completes the proof of showing how few were the guesses that men surpassing genius had to make before they rightly guessed the laws of nature...

Chomsky basically agrees with Peirce's "abductive instinct" but not with Peirce's other beliefs. For Chomsky, there is an innate capacity for abduction, but he doesn't believe that

"nature fecundates the mind of man with ideas which when those ideas grow up, will resemble their father, Nature"

as Peirce suggests, because of the reasons mentioned, including the "poverty of stimulus" argument, etc. (see below). I think Chomsky makes (to me) a very convincing argument.

http://books.google.ca/books?id=G7I...trictly philosophical considerations,&f=false

http://en.wikipedia.org/wiki/Poverty_of_the_stimulus

Also, I don't agree with Vygotsky that knowledge of abstract math is the result of cultural evolution, except to the extent that such environmental input may act as a trigger. I think mathematical knowledge, like other aspects of our knowledge, is innate. I also find Platonism kind of interesting, although I have trouble understanding it, but I'm trying to. Two papers that take a different perspective - that innateness for mathematical/scientific ability may not be enough - are these:

Mathematical symbols as epistemic actions – an extended mind perspective

http://kuleuven.academia.edu/HelenDeCruz/Papers/317927/Mathematical_symbols_as_epistemic_actions

Evolved cognitive biases and the epistemic status of scientific beliefs

http://kuleuven.academia.edu/HelenD...nd_the_epistemic_status_of_scientific_beliefs

I haven't read them but I'm looking forward to it. Maybe that's what you guys are talking about?
 
Last edited:
  • #249
bohm2 said:
I think mathematical knowledge like other aspects of our knowledge is innate.

And how, neuroscientifically speaking, is this feat achieved? Where is the evidence that makes this a credible view in this day and age?
 
  • #250
apeiron said:
And how, neuroscientifically speaking, is this feat achieved? Where is the evidence that makes this a credible view in this day and age?


I don't think this author will go the full innatist distance, though, as Chomsky appears to:

The Innateness Hypothesis and Mathematical Concepts:

http://biblio.ugent.be/input/download?func=downloadFile&fileOId=911487


The Cognitive Basis of Arithmetic:

http://www.cs.mcgill.ca/~dirk/PhiMSAMP-bk_DeCruzNethSchlimm.pdf
 
Last edited by a moderator: