Is Chomsky's View on the Mind-Body Problem Redefining Materialism?

  • Thread starter: bohm2

Summary
Chomsky critiques traditional views on the mind-body problem, arguing that it can only be sensibly posed with a clear conception of "body," which has been undermined by modern physics. He suggests that the material world is defined by our scientific theories rather than a fixed notion of physicality, leading to the conclusion that the mind-body problem lacks coherent formulation. Chomsky posits that as we develop and integrate theories of the mind, we may redefine what is considered "physical" without a predetermined concept of materiality. Critics like Nagel argue that subjectivity and qualia cannot be reduced to material entities, regardless of future scientific advancements. Ultimately, Chomsky advocates for a focus on understanding mental phenomena within the evolving framework of science, rather than getting bogged down in the elusive definitions of "mind" and "body."
  • #481
bohm2 said:
This is a critical (sarcastic?) piece from Chomsky talking about Deacon's earlier book:

This seems more like Chomsky being unable to think of a good argument against the biosemiotic perspective and so resorting to the rhetorical trick of "I don't even understand."

It is one way to preserve your belief system, but pretty pathetic.

There is of course nothing particularly difficult to follow in Deacon. Some of his papers might be worth a read.

For example, this is a clever paper on the origins of life - life getting started as the most primordial interaction between self-assembling molecular construction and constraint. As an alternative to the usual RNA world story it is pretty good.

http://anthropology.berkeley.edu/sites/default/files/BioTheory2006_Deacon.pdf

Then there is this one that perhaps should bring home how the Chomskyian perspective is quite shockingly defective in regard to the whole information theoretic revolution in science.

Language is supposed to be all about communication, right? The meaning of messages? Shannon/Weaver got to the heart of this with the reciprocal notions of information and entropy. The basic semiotic story of the interaction of two worlds - the computational and the (thermo)dynamic - was established right there. Yet Chomsky lives off in his own little world, out of contact with the central thrust of science.
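For anyone who hasn't met the Shannon/Weaver notion, here is a minimal sketch (my own illustrative code, not anything from Deacon or the original papers) of entropy as average information per symbol:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average information per symbol, in bits, of a message's symbol distribution."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A uniform source carries more information per symbol than a redundant one.
print(shannon_entropy("abcd"))  # 2.0 bits: four equiprobable symbols
print(shannon_entropy("aaab"))  # roughly 0.81 bits: a highly redundant source
```

The point of the reciprocal information/entropy relation is that a message only informs relative to the distribution of everything that could have been sent instead.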

Sketching the general premise of semiotics/epistemic cut thinking, Deacon notes...

Consider the concept of “patriotism.” Despite the fact that there is no specific physical object or process that constitutes the content of this word, and nothing intrinsic to the sound of the word or its production by a brain that involves more than a tiny amount of energy, its use can contribute to the release of vast amounts of energy unleashed to destroy life and demolish buildings (as in warfare). This is evidence that we are both woefully ignorant of a fundamental causal principle in the universe and in desperate need of such a theory.

http://anthropology.berkeley.edu/sites/default/files/WhatIsMissingFromInfo.pdf

Clearly, the story is about the interaction of the internal and the external, to use Chomsky's jargon. And both are "real" (they have to be for there to be a causal interaction rather than the disjoint dualism and frank panpsychic mysticism which you get from following Chomsky's route to its natural conclusions).

Deacon makes the useful point that information gains its supra-causal power by being able to represent what does not in fact exist. It can talk about the not-A (when the material world can only be the A). You can see how this immediately knocks the props out from under supervenient notions of emergence popular with reductionists. The absence of things is precisely what cannot emerge via bottom-up constructive causes. Only top-down constraints can limit reality so that some things are definitely not there.

Ultimately, the concept of information has been a victim of a philosophical impasse that has a long and contentious history: the problem of specifying the ontological status of the representations or contents of our thoughts. The problem that lingers behind definitions of information boils down to a simple question: How can the content (aka meaning, reference, significant aboutness) of a sign or thought have any causal efficacy in the world if it is by definition not intrinsic to whatever physical object or process represents it?

In other words, there is a paradox implicit in representational relationships. The content of a sign or signal is not an intrinsic property of whatever physically constitutes it. Rather, exactly the opposite is the case. The property of something that warrants calling something information, in the usual sense, is that it is something that the sign or signal conveying it is not. I will refer to this as “the absent content problem .” Classic conundrums about the nature of thought and meaning all trace their origin to this simple and obvious fact.

Deacon here makes the argument that information theory is about the semiotic interaction between two realms, and that Chomskyian-like claims that physical information can be intrinsically meaningful, absent its interpretive context, are a corruption of the foundational work.

The danger of being inexplicit about this bracketing of interpretive context is that one can treat the sign as though it is intrinsically significant, irrespective of anything else, and thus end up reducing intentionality to mere physics, or else imagine that physical distinctions are intrinsically informational rather than informational only post hoc, that is, when interpreted.

Deacon rounds off that paper by making a claim particularly relevant to the OP...

Like so many other “hard problems” in philosophy, I believe that this one, too, appears to have been a function of asking the wrong sort of questions. Talking about cognition in terms of the mind-brain - implying a metaphysically primitive identity - or talking about mind as the software of the brain - implying that mental content can be reduced to syntactic relationships embodied in and mapped to neural mechanics - both miss the point.

The content that constitutes mind is not in the brain, nor is it embodied in neuronal processes in bodies interacting with the outside world. It is, in a precisely definable sense, that which determines which variations of neural signaling processes are not occurring, and that which will in a round-about and indirect way help reinforce and perpetuate the patterns of neural activity that are occurring. Informational content distinguishes semiosis from mere physical difference. And it has its influence on worldly events by virtue of the quite precise way that it is not present.

Attempts to attribute a quasi-substantial quality to information or to reduce it to some specific physical property are not only doomed to incompleteness, they ultimately ignore its most fundamental distinctive characteristic.

So this is another way of talking about the significance of global constraints - the role of not-A in shaping the material world. It is the kind of sophisticated systems thinking we just don't get from a Chomsky (or a Nagel when it comes to that).

A third paper Chomsky could be reading and understanding is http://anthropology.berkeley.edu/sites/default/files/Deacon_PNAS2010.pdf

Language is both a social and biological phenomenon. The capacity to acquire and use it is a unique and distinctive trait that evolved in only one species on earth. Its complexity and organization are like nothing else in biology, and yet it is also unlike any intentionally designed social convention. Short of appealing to divine intervention or miraculous accident, we must look to some variant of natural selection to explain it. By paying attention to the way Darwin’s concept of natural selection can be generalized to other systems, and how variants on this process operate at different interdependent levels of organism function, explaining the complexity of language and the language adaptation can be made more tractable.

Deacon is funny on Darwin's own adaptationist dilemma...

In a letter he wrote to Asa Gray shortly after the publication of On the Origin of Species (2), he admits that “the sight of a feather in a peacock’s tail, whenever I gaze at it, makes me feel sick!”

But we know how that turned out...the early version of the singing ape hypothesis of language evolution (has Chomsky ever offered good arguments against it? Or again, is it too complicated for his understanding :smile:)...

In his book The Descent of Man and Selection in Relation to Sex (11)— which is typically referred to by only the first half of its title—Darwin argues that language and other human traits that appear exaggerated beyond survival value can be explained as consequences of sexual selection. So, for example, he imagines that language might have evolved from something akin to bird song, used as a means to attract mates, and that the ability to produce highly elaborate vocal behaviors was progressively exaggerated by a kind of arms-race competition for the most complex vocal display.

Deacon sure understands Chomsky though...

The appeal to pure accident, e.g., a “hopeful monster” mutation, to explain the evolution of such a highly complex and distinctive trait is the biological equivalent of invoking a miracle.

And it is difficult to see what is so hard to understand about the Baldwin effect and niche construction theory...

...“niche construction” theory (28) argues that, analogous to the evolution of beaver aquatic adaptations in response to a beaver generated aquatic niche, a constellation of learning biases and changes of vocal control evolved in response to the atypical demands of this distinctive mode of communication. To the extent that this mode of communication became important for successful integration into human social groups and a critical prerequisite for successful reproduction, it would bring about selection favoring any traits that favored better acquisition and social transmission of this form of communication.

Unlike Baldwinian arguments for the genetic assimilation of grammatical and syntactic features of language, however, the niche construction approach does not assume that acquired language regularities themselves ever become innate. Rather it implicates selection that favors any constellation of attentional, mnemonic, and sensorimotor biases that collectively aid acquisition, use, and transmission of language.

Although this could conceivably consist of innate language-specific knowledge, Deacon (23, 27) argues that this is less likely than more general cognitive biases that facilitate reliable maintenance of this extrinsic niche.

As Deacon argues, a modern neurodevelopmental approach to the brain finds no problem with the idea of social information structuring the brain's functional architecture - the critical period of language learning is after all one of the most striking findings in the field.

And we can see by Chomsky's failure to engage at this level of hypothesis that he really is just past his sell-by date. He lacks the basic grounding that is now required.

Although slight tweaks of this species-general brain architecture likely play important roles in producing the structural and functional differences of different species’ brains, a significant contribution also comes from selection-like processes that incorporate both intra- and extraorganismic information into the fine-tuning of neural circuitry.

Likewise, it is indefensible that Chomsky keeps trying to handwave away the fact of memetic or cultural evolution. How can it not be the case?

But language evolution includes one additional twist that may in fact mitigate some fraction of what biological evolutionary mechanisms must explain. Language itself exhibits an evolutionary dynamic that proceeds irrespective of human biological evolution. Moreover, it occurs at a rate that is probably many orders of magnitude faster than biological evolution and is subject to selective influences that are probably quite alien from any that affect human brains or bodies.

Darwin recognized this analogical process, although he did not comment on its implications for human brain evolution. “A struggle for life is constantly going on amongst the words and grammatical forms in each language. The better, the shorter, the easier forms are constantly gaining the upper hand, and they owe their success to their own inherent virtue” (ref. 11, p. 91).

Chomsky fusses about computational optimality. Darwin had already talked about how natural selection would achieve it.

As Deacon remarks (and it is a quite critical point of evolutionary logic)...

So as brains have adapted to the special demands of language processing over hundreds of thousands of years, languages have been adapting to the limitations of those brains at the same time, and a hundred times faster.
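Deacon's two-timescale point can be illustrated with a toy model (entirely my own invention, with made-up rates, not Deacon's mathematics): two coupled traits that each adapt toward the other, with the cultural trait moving a hundred times faster.

```python
def coevolve(brain, language, brain_rate=0.001, lang_rate=0.1, steps=500):
    """Each trait moves toward the other; 'language' adapts 100x faster."""
    for _ in range(steps):
        gap = language - brain
        brain += brain_rate * gap     # slow biological adaptation
        language -= lang_rate * gap   # fast cultural adaptation
    return brain, language

b, l = coevolve(brain=0.0, language=1.0)
# The two traits converge, but the meeting point lies near the brain's
# starting value: the fast-evolving language did almost all the adapting.
print(b, l)
```

The qualitative outcome is the interesting bit: whenever two coupled adaptive processes run at very different rates, the faster one absorbs most of the mutual accommodation.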

And then the balanced conclusion from someone who understands what he is talking about...

Language is too complex and systematic, and our capacity to acquire it is too facile, to be adequately explained by cultural use and general learning alone. But the process of evolution is too convoluted and adventitious to have produced this complex phenomenon by lucky mutation or the genetic internalization of language behavior.

Chomsky talks in simplicities and mysteries. The field of language evolution has already moved on to much more sophisticated modelling.
 
Last edited:
  • #482
apeiron said:
The field of language evolution has already moved on to much more sophisticated modelling.

What is that modelling based on? Do brains fossilize? Can one tell from old fossilized skulls alone whether a particular brain had the capacity for language? Is there even agreement on what language is? Look through our posts. Unless I'm mistaken, we haven't progressed much with respect to understanding the evolution of language. Everyone can tell nice stories (to back their particular biased philosophies/viewpoints), but that's about it. And I've tried to read most of the recent papers on this topic, some of which I posted above. A recent paper by the same authors I listed before on skull size, but now discussing the effects of culture on human evolution, came out recently. But even here I see no hint of anything that explains how one gets language in the first place, though I suppose that depends on what one takes language to be.
The study suggests that this divergence is also independent of the Xavánte's geographical separation from other population groups and differences in climate. According to the team of experts, the combination of cultural isolation and sexual selection could be the driving force behind the changes observed. To conclude their study, the authors hypothesize that gene-culture co-evolution could in fact be the dominant model throughout the history of the human evolutionary lineage.
Cultural Diversification Also Drives Human Evolution
http://www.sciencedaily.com/releases/2011/12/111222161213.htm
 
  • #483
bohm2 said:
What is that modelling based on? Do brains fossilize? Can one tell from old fossilized skulls alone whether a particular brain had the capacity for language? Is there even agreement on what language is?

I don't understand. The papers you are citing themselves make strong claims that would rule out any talk of modular language evolution of the kind Chomsky favours. So yes, there is plenty of both data and theory. And on a lot of things, I hear more agreement than dispute.

So...

The study calls for a reinterpretation of modern human evolutionary scenarios. As the lecturer Mireia Esparza explains, "Evolution acts as an integrated process and specific traits never evolve independently."

And...

To conclude their study, the authors hypothesize that gene-culture co-evolution could in fact be the dominant model throughout the history of the human evolutionary lineage.

On the one hand you have Chomsky who does not actually do experiments and rambles on about rationalism and hopeful monsters. On the other you have people doing field work and having to respond to data.

It is your choice which conversations you pay closer attention to.
 
  • #484
Speaking of the need to ground theory in experiment, this has become an active approach...

Language evolution in the laboratory, by Thomas C. Scott-Phillips and Simon Kirby
http://data.cogsci.bme.hu/public_html/KURZUSOK/BMETE47MC07/2010_2011_2/readings/laborlangevol.pdf

And note the rationale...

We need to consider exactly how individuals interacting in dynamic structured populations can cause language to emerge. Once we have a better general understanding of the mechanisms of social coordination and cultural evolution, gained from the type of experimental work reviewed here, then we can combine this with models of biological evolution to gain a more complete understanding of the evolution of language. The latter without the former will inevitably give a distorted picture of the biological prerequisites for language.

i.e., start with a proven theory of E-language so as to define what I-language actually needs to explain. Contrast this empirical approach with Chomsky's rationalistic approach, where he argues from logic what he thinks must be the case, then spends all his time having to fend off the contrary evidence.

A few examples of the many empirical challenges to Chomsky...http://www.sciencedaily.com/releases/2012/01/120119133755.htm

Many prominent linguists, including MIT's Noam Chomsky, have argued that language is, in fact, poorly designed for communication. Such a use, they say, is merely a byproduct of a system that probably evolved for other reasons -- perhaps for structuring our own private thoughts.
As evidence, these linguists point to the existence of ambiguity: In a system optimized for conveying information between a speaker and a listener, they argue, each word would have just one meaning, eliminating any chance of confusion or misunderstanding. Now, a group of MIT cognitive scientists has turned this idea on its head. In a new theory, they claim that ambiguity actually makes language more efficient, by allowing for the reuse of short, efficient sounds that listeners can easily disambiguate with the help of context.
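The MIT argument is easy to make concrete. Here is a toy sketch (the words, contexts, and numbers are my own inventions, not from the study): letting two context-separable meanings share one short word lowers average utterance length without losing recoverability.

```python
meanings = ["river_bank", "money_bank", "tree", "loan"]

# One distinct word per meaning: unambiguous, but the lexicon needs longer forms.
unambiguous = {"river_bank": "riverbank", "money_bank": "moneybank",
               "tree": "tree", "loan": "loan"}
# Reuse the short form "bank" for both senses; context tells them apart.
ambiguous = {"river_bank": "bank", "money_bank": "bank",
             "tree": "tree", "loan": "loan"}

def avg_length(lexicon):
    return sum(len(word) for word in lexicon.values()) / len(lexicon)

print(avg_length(unambiguous))  # 6.5 characters per word
print(avg_length(ambiguous))    # 4.0 characters per word

# Recoverability: each (word, context) pair still maps to a unique meaning,
# so the shorter, ambiguous lexicon loses no information for the listener.
context_of = {"river_bank": "outdoors", "tree": "outdoors",
              "money_bank": "finance", "loan": "finance"}
decode = {(ambiguous[m], context_of[m]): m for m in meanings}
assert len(decode) == len(meanings)
```

That is the efficiency claim in miniature: ambiguity is only a design flaw if you ignore the disambiguating context the listener always has.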

And another study that contradicts the Chomskian claim that self-talk is genetically innate...http://www.sciencedaily.com/releases/2011/12/111227142537.htm

The results suggest that even after children learn language, it doesn't govern their thinking as much as scientists believed.
"It is only over the course of development that children begin to understand that words can reliably be used to label items," he said.

And as for the Chomskian claim that culture does not evolve...http://www.sciencedaily.com/releases/2008/01/080109100831.htm

Historically, scientists believed that behavioural differences between colonies of chimpanzees were due to variations in genetics. A team at Liverpool, however, has now discovered that variations in behaviour are down to chimpanzees migrating to other colonies, proving that they build their 'cultures' in a similar way to humans.

Or the Chomskian claims about major brain reorganisation...http://www.sciencedaily.com/releases/2008/02/080228124415.htm

An area of the brain involved in the planning and production of spoken and signed language in humans plays a similar role in chimpanzee communication, researchers report.

Or again on the claim that self-talk is genetic...http://www.sciencedaily.com/releases/2012/01/120124200103.htm

Teaching children with autism to 'talk things through in their head' may help them to solve complex day-to-day tasks, which could increase the chances of independent, flexible living later in life,

Or that there is a poverty of stimulus issue and so statistical learning can play no part in the habit of grammar...http://www.sciencedaily.com/releases/2011/12/111209150156.htm

New research from the University of Notre Dame shows that during the first year of life, when babies spend so much time listening to language, they're actually tracking word patterns that will support their process of word-learning that occurs between the ages of about 18 months and two years.
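The word-pattern tracking described in that study is usually modelled as learning transitional probabilities between syllables. A minimal sketch (the nonsense "words" are invented for illustration, in the style of Saffran-type experiments):

```python
from collections import Counter

def transitional_probs(syllables):
    """P(next syllable | current syllable) for each adjacent pair in a stream."""
    pairs = Counter(zip(syllables, syllables[1:]))
    firsts = Counter(syllables[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Three nonsense "words" concatenated in varying order with no pauses.
words = {"w1": "bi da ku", "w2": "pa do ti", "w3": "go la bu"}
order = ["w1", "w2", "w1", "w3", "w2", "w3", "w1", "w2", "w3"]
stream = " ".join(words[w] for w in order).split()

tps = transitional_probs(stream)
print(tps[("bi", "da")])  # within-word transition: 1.0
print(tps[("ku", "pa")])  # across a word boundary: lower, about 0.67
# Dips in transitional probability mark the word boundaries, which is how
# a purely statistical learner can segment words out of continuous speech.
```

No innate grammar is needed for this step, which is exactly why it bears on the poverty-of-stimulus claim.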

It is just really hard to look at actual language research - chosen here from a trawl of recent headline findings - and not find problems for the Chomskian view.
 
Last edited by a moderator:
  • #485
A lot of the stuff you posted has been debated ad nauseam on many linguistics forums, etc. I thought these were two of the more interesting blogs on Language Log on this related topic. I haven't read them fully because I like to print things out and I ran out of ink.

On Chomsky and the Two Cultures of Statistical Learning
http://norvig.com/chomsky.html

Norvig channels Shannon contra Chomsky
http://languagelog.ldc.upenn.edu/nll/?p=3172

Straw men and Bee Science
http://languagelog.ldc.upenn.edu/nll/?p=3180

Also here's an interesting blog by T. Deacon with comments that include Deacon and Derek Bickerton. This post by Deacon kind of surprised me:
Surprisingly, despite our many disagreements about innateness, I find some resonance in Noam Chomsky’s periodic suggestion that some of the complexity of grammar may have emerged from general laws of physics analogous to the way that the Fibonacci regularities exemplified in the spirals of sunflower seed and pine cone facets emerge. Natural selection has “found a way” to stabilize the conditions that support the generation of this marvelous regularity of growth because it has important functional advantages. But natural selection didn’t generate it in the first place, geometric regularities that can become amplified due to center-out growth process are the ultimate source (as has now been demonstrated also in growth-like inorganic processes). I also agree that flexibility CAN be an adaptive response to a variable and demanding habitat, but not necessarily. And I hope I have shown that there is another mechanism potentially available to explain some of the complexity (both neurologically and functionally) and some of the flexibility, besides natural selection and innate algorithms.

On the Human: Rethinking the natural selection of human language
http://onthehuman.org/2010/02/on-the-human-rethinking-the-natural-selection-of-human-language/
 
Last edited by a moderator:
  • #486
bohm2 said:
A lot of the stuff you posted has been debated ad nauseam on many linguistics forums, etc.

Well, given these six empirical findings that seem to directly contradict Chomsky's prejudices, what do these linguistics forums etc. conclude about them, exactly?

bohm2 said:
I thought these were two of the more interesting blogs on Language Log on this related topic. I haven't read them fully because I like to print things out and I ran out of ink.

Yes, more people who don't seem too impressed with Chomsky...

So how could Chomsky say that observations of language cannot be the subject-matter of linguistics? It seems to come from his viewpoint as a Platonist and a Rationalist and perhaps a bit of a Mystic. As in Plato's analogy of the cave, Chomsky thinks we should focus on the ideal, abstract forms that underlie language, not on the superficial manifestations of language that happen to be perceivable in the real world. That is why he is not interested in language performance. But Chomsky, like Plato, has to answer where these ideal forms come from. Chomsky (1991) shows that he is happy with a Mystical answer, although he shifts vocabulary from "soul" to "biological endowment."

Since people have to continually understand the uncertain, ambiguous, noisy speech of others, it seems they must be using something like probabilistic reasoning. Chomsky for some reason wants to avoid this, and therefore he must declare the actual facts of language use out of bounds and declare that true linguistics only exists in the mathematical realm, where he can impose the formalism he wants. Then, to get language from this abstract, eternal, mathematical realm into the heads of people, he must fabricate a mystical facility that is exactly tuned to the eternal realm. This may be very interesting from a mathematical point of view, but it misses the point about what language is, and how it works.

bohm2 said:
Also here's an interesting blog by T. Deacon with comments that include Deacon and Derek Bickerton. This post by Deacon kind of surprised me:

Why is it surprising that Deacon cites evo-devo views?
 
  • #487
apeiron said:
And as for the Chomskian claim that culture does not evolve.

Where does Chomsky make this claim? From my understanding he argues that a lot of the work purported to be about biological language evolution is really language history or cultural/communication evolution. That's why you can take any baby from any area of the world, at present or within the past 50,000-100,000 years, bring it up in today's society, and they would be just as capable as me or yourself with respect to acquisition of language. Do you disagree? If you don't disagree, then in what way has our biological linguistic ability evolved?
 
  • #488
bohm2 said:
That's why you can take any baby from any area of the world, at present or within the past 50,000-100,000 years, bring it up in today's society, and they would be just as capable as me or yourself with respect to acquisition of language. Do you disagree? If you don't disagree, then in what way has our biological linguistic ability evolved?

What do you mean? Chomsky's theory was that there is a genetic I-language that you flick the settings on through experience. He was arguing that cultural evolution plays no part in creating the deep structure of syntax itself. (Remember his "hopeful monster" hypothesis? :rolleyes:)

The alternative view is that key aspects of syntax - such as SOV sentence structure - were possibly culturally evolved. Or as Deacon argues, there is a complex co-evolutionary story.

Further specific empirical evidence against Chomsky that emerged last year was...

http://www.nature.com/news/2011/110413/full/news.2011.231.html
http://blogs.discovermagazine.com/8...n-universal-study-challenges-chomskys-theory/

Chomsky supposed that languages change and evolve when the parameters of these rules get reset throughout a culture. A single change should induce switches in several related traits in the language...

In Chomsky's theory, as languages evolve, certain features should vary at the same time because they are products of the same underlying parameter...

[But the study found] neither of the universalist models matched the evidence. Not only did the co-dependencies that they discovered differ from those predicted by Greenberg's word-order 'universals', but they were different for each family. In other words, the deep grammatical structure of every family is different from that of the others: each family has evolved its own rules, so there is no reason to suppose that they are governed by universal cognitive factors.

What's more, even when a particular co-dependency of traits was shared by two families, the researchers could show that it came about in different ways for each, so it was possible that the commonality was coincidental. They conclude that the languages — at least in their word-order grammar — have been shaped in culture-specific ways rather than by universals.
 
  • #489
apeiron said:

Yes, I've read it before, but I always read the comments section as well to get more details/critical discussion. From the comments section on the Discover link you provided:
There *is* no “Chomskyan idea that rules associate in certain sets”, especially where the rules in question concern WORD order (rather than the more abstract structural relations that Chomsky and his colleagues do concern themselves with). The study in Nature has literally nothing at all to do with anything Chomsky has argued for. Now this article may be interesting for other reasons — see Mark Liberman’s thoughtful discussion today in Language Log, for example. But the anti-Chomsky spin placed on the article is just nuts (though it’s a good way to get publicity for a study on language). The results have logically nothing to do with Chomsky or with Universal grammar. As a linguist, I cringe at this sort of nonsense — especially since it seems to come around every year or so (Google “Piraha”, for example).
And that paper was discussed in detail in the language log. Read the comments section of the links you provided and the link below so you get a perspective from both sides:

Word-order "universals" are lineage-specific?
http://languagelog.ldc.upenn.edu/nll/?p=3088

There's also an interesting blog discussing the "popular 'Chomsky sucks' theme" and "the Universal Grammar is nonsense" theme here:
It's bizarre. Suddenly every piece of linguistic research is spun as a challenge to "universal grammar". The most recent example involves Ewa Dabrowka's interesting work on the linguistic correlates of large educational differences — Quentin Cooper did a segment on BBC 4, a couple of days ago, about how this challenges the whole idea of an innate human propensity to learn and use language. (Dr. Dabrowska appears to be somewhat complicit in this spin, but that's another story.) It's hard for me to explain how silly I think this argument is. It's like showing that there are hematologic effects of athletic training, and arguing that this calls into question the whole idea that blood physiology is an evolved system.
Universal Grammar haters
http://languagelog.ldc.upenn.edu/nll/?p=2507

There is also recent experimental work arguably supporting Chomsky's scheme but I haven't come across discussion of this paper so I'm not sure what bearing it has on the issue, particularly because I'm not a linguist as I'm guessing you aren't either:

Artificial Grammar Reveals Inborn Language Sense, JHU Study Shows
http://releases.jhu.edu/2011/05/12/artificial-grammar-reveals-inborn-language-sense-jhu-study-shows/
 
Last edited:
  • #490
bohm2 said:
Yes, I've read it before, but I always read the comments section as well to get more details/critical discussion. From the comments section on the Discover link you provided:

Yes, there were of course counter-attacks from the loyal troops. Harnad must have googled every mention of the finding to post his same response. :smile:

There very definitely is a Chomsky vs the world social dynamic, which as mentioned, he encourages. So if discussion is being played out at a cartoon level, then Chomsky himself is actually to blame.

But if you want to stick to the science here, you can respond on all six bits of evidence that don't tally with his views.

There is also recent experimental work arguably supporting Chomsky's scheme but I haven't come across discussion of this paper so I'm not sure what bearing it has on the issue,

It is very funny you should cite this. The press release states that the research "shows clearly that learners are not blank slates; rather, their inherent biases, or preferences, influence what they will learn."

Well, who claims that anything is a blank slate? Already we are into the realm of academic caricature.

But then, what do we find the researchers actually think? Whoops, they want to explain the data with Bayesian models (which, you will remember from that UCL speech, Chomsky dismissed as producing "zero results", like all statistical learning approaches :rolleyes:).

Formally, this means that human learners have a bias against the Verblog orders (as well as a bias against inconsistent use of orders). Ms. Culbertson developed a mathematical model in which learners deploy Bayesian probabilistic inference to learn a probabilistic model of the artificial language, a model which they then use to generate their own utterances. Because of the Bayesian prior that encodes the biases, learners exposed to Verblog will not acquire a model of their language that corresponds to the models acquired by learners of the other languages.

http://www.igert.org/highlights/327
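The Bayesian-prior idea in the quoted passage can be sketched in a few lines. This is a toy illustration of my own, not Culbertson's actual model: the grammar names and all the probabilities below are invented for demonstration.

```python
# A toy sketch of a biased Bayesian learner (my own illustration,
# not the cited study's actual model; grammars and numbers invented).
from math import prod

# Probability each candidate "grammar" assigns to an observed pattern.
grammars = {
    "harmonic": {"N-Adj": 0.7, "Adj-N": 0.3},
    "verblog":  {"N-Adj": 0.3, "Adj-N": 0.7},
}

# Prior encoding a structural bias against the Verblog-like grammar.
prior = {"harmonic": 0.8, "verblog": 0.2}

def posterior(data):
    """Bayes: P(grammar | data) proportional to P(data | grammar) * P(grammar)."""
    unnorm = {g: prior[g] * prod(p[u] for u in data)
              for g, p in grammars.items()}
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}

# With input that is statistically ambiguous between the grammars,
# the posterior simply reflects the prior bias.
print(posterior(["N-Adj", "Adj-N"]))
```

Note that with enough unambiguous input the likelihood term eventually swamps the prior, which is exactly why a Bayesian prior works as a "soft" bias rather than an inviolable constraint.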
 
  • #491
apeiron said:
But then, what do we find the researchers actually think? Whoops, they want to explain the data with Bayesian models (which you will remember from that UCL speech, Chomsky dismissed as producing "zero results" like all statistical learning approaches :rolleyes:)

I'm not a linguist to really judge this study but yes, I think they do see the value of both methods but the author is also supporting Chomsky's position versus Dunn's and Tomasello's stuff. This assumes that her conclusions are valid. For she writes:
Taken together, the results show that learners clearly make use of the input statistics in these artificial language learning experiments (as they have been shown to do in other such contexts). Learners can track the basic word order preferences in the training input, and they appear to be extremely sensitive to transitional probabilities equal to zero. However, prior structural biases not reflected in the input statistics also influence learning. The results further support a strong regularization bias, indicating that learners do not replicate the variability present in the input.
Statistical Learning Constrained by Syntactic Biases in an Artificial Language Learning Task
http://www.bcs.rochester.edu/people/jculbertson/papers/CulbertsonetalBUCLD36.pdf

And in another recent study she writes:
The hypothesis that universal constraints on human language learning strongly shape the space of human grammars has taken many forms, which differ on a number of dimensions including the locus, scope, experience-dependence, and ultimate source of such biases (Christiansen & Devlin, 1997; Chomsky, 1965; Croft, 2001; Hawkins, 2004; Kirby, 1999; Lightfoot, 1991; Lindblom, 1986; Newmeyer, 2005; Newport & Aslin, 2004; Talmy, 2000; Tesar & Smolensky, 1998). However, the general hypothesis that language universals arise from biases in learning stands in contrast to hypotheses that place the source of explanation outside the cognitive system (Bybee, 2009; Dunn, Greenhill, Levinson, & Gray, 2011; Evans & Levinson, 2009)...If Universal 18’s substantive bias against a particular type of non-harmonic language is in fact specific to the language system, then the empirical findings reported here constitute clear evidence against recent claims that no such biases exist within cognition (Bybee, 2009; Dunn et al., 2011; Evans & Levinson, 2009; Goldberg, 2006; Levinson & Evans, 2010)...

To be more specific, the existence of typologically-relevant cognitive biases, and in particular the substantive L4 bias, is the primary conclusion we draw from the experimental results. Importantly, the finding that such biases exist on the time scale of our experiment—that is, revealed by individual participants in the course of a single experimental session—is not consistent with theories according to which typological asymmetries are the result of factors external to cognition. This includes theories which explain recurrent patterns as resulting from accidental geographic or cultural factors (Bybee, 2009; Dunn et al., 2011; Levinson & Evans, 2010, p. 2743), and those which hypothesize that functional factors induce asymmetries through language change across generations only (Bader, 2011, p. 345; Blevins & Garrett, 2004, p. 118; Christiansen & Chater, 2008; Levinson & Evans, 2010, p. 2738).

Learning biases predict a word order universal
http://www.bcs.rochester.edu/people/jculbertson/papers/Culbertsonetal11.pdf
 
Last edited by a moderator:
  • #492
bohm2 said:
I'm not a linguist to really judge this study but yes, I think they do see the value of both methods but the author is also supporting Chomsky's position versus Dunn's and Tomasello's stuff. This assumes that her conclusions are valid. For she writes:

But the paper makes the careful distinction between hard and soft "innate" constraints. So it is not really supporting Chomsky except in the most watered down version where everyone agrees that something is probably genetic/innate about language learning.

By formulating our theory of the bias as probabilistic we differ from most linguistic theories, which generally treat universals as the result of inviolable constraints specific to the linguistic system...

[As opposed in particular to]...even in Optimality Theory, typological asymmetries of the sort we discuss here are standardly explained by rigid, universal, inviolable requirements on the relative ranking of specified constraints (Prince & Smolensky, 1993/2004, chap. 9).

http://www.bcs.rochester.edu/people/jculbertson/papers/Culbertsonetal11.pdf

The only question then is whether Bayesian/abductive reasoning is a language-specific adaptation in H sap. or the general story of brain architecture (just as with hierarchical processing structure or "recursion"). And you already know my answer.

Though, as Culbertson argues, that does not yet rule out that there might be specific genetic biases that are language-specific rather than cognition-general. I have no problem with that hypothesis because it is working at a suitably fine-grain level of analysis with a plausible neurodevelopmental mechanism. We would already expect cognitive learning biases to be both general and specific.

BTW Culbertson seems to have hooked up with Newport for further work. So the statistical learning approach is chugging along nicely now.

However, extensive work by Carla Hudson-Kam and Elissa Newport suggests that creole languages may not support a universal grammar, as has sometimes been supposed. In a series of experiments, Hudson-Kam and Newport looked at how children and adults learn artificial grammars. Notably, they found that children tend to ignore minor variations in the input when those variations are infrequent, and reproduce only the most frequent forms. In doing so, they tend to standardize the language that they hear around them. Hudson-Kam and Newport hypothesize that in a pidgin situation (and in the real life situation of a deaf child whose parents were disfluent signers), children are systematizing the language they hear based on the probability and frequency of forms, and not, as has been suggested, on the basis of a universal grammar.

http://en.wikipedia.org/wiki/Universal_grammar
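The contrast Hudson-Kam and Newport describe, between probability-matching the input and regularizing it, can be sketched as a toy of my own (the form labels and counts are invented):

```python
# Toy contrast (my own illustration): a probability-matching learner
# versus a regularizing learner of the kind Hudson-Kam & Newport
# report in children. Forms and counts are invented.
from collections import Counter
import random

random.seed(0)
input_forms = ["form-A"] * 70 + ["form-B"] * 30  # inconsistent input

def probability_matcher(forms, n=1000):
    # Reproduce the input variability in proportion to its frequency.
    return Counter(random.choice(forms) for _ in range(n))

def regularizer(forms, n=1000):
    # Reproduce only the most frequent form, ignoring minor variants.
    majority = Counter(forms).most_common(1)[0][0]
    return Counter([majority] * n)

print(probability_matcher(input_forms))  # roughly a 70/30 split
print(regularizer(input_forms))          # all "form-A"
```

The point of the toy is that "regularization" needs no innate grammar, only a bias to discount infrequent variants in favour of the dominant statistical pattern.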
 
Last edited by a moderator:
  • #493
It may pay to go back to Newport's very diplomatic summary of the story so far...
http://www.bcs.rochester.edu/people/newport/pdf/Newport_%20LLD11.pdf

Undoubtedly there are the radical proposals: positions arguing that nothing (of any kind) is innate, that languages have no universally shared principles or structures, and that language acquisition is just the learning of lexical and constructional forms. But this does not seem to me to be the dominant nonmodular view, and certainly not the most compelling or likely one.

Most nonmodularists, thanks to the profound importance of Chomsky’s work and its enormous impact on our field, believe that there are striking universal principles that constrain language structure and also that there are innate abilities of humans that are foundational for language acquisition and language processing. However, one can agree that there are innate abilities required for language and yet not be certain whether these abilities are specific to language. Though many nativists believe also in modularity, the question of innateness and that of modularity are in principle distinct (see Keil, 1990, for discussion).

In addition, few nonmodularists think that all of perception or cognition is homogeneous, characterized by the same principles of organization throughout. Rather, the question is where and how to divide the differing components of cognition/perception — and, in particular, whether language will turn out to be one of the components proper or rather is best described as the outcome of interactions among the other components and their constraints.

Certainly most cognitive psychologists believe that there is a difference in organizational principles between the visual and auditory systems, between iconic or echoic memory, short-term memory, and long-term memory, and between implicit learning and explicit problem solving. The nonmodularist’s question is whether language qualifies as one of those systems that handles information in its own way, or rather whether its characteristics are the outcome of squeezing information in, through, and out of the others.

A mild version of the nonmodularist view is that there may be some elements or principles specific to language—perhaps the basic primitives (e.g., features, syllables) at the base of the system—and perhaps some characteristics of the system that have become grammaticized or conventionalized within the life of the individual or the species. But the nonmodularist believes that relatively little of the structure of language falls into the specialized type of constraint, whereas many or most aspects of language derive from the interaction of other modules or systems of cognition/perception.

This paper by Newport then summarises evidence for cognition-general Bayesian reasoning - infant learning of speech and visual patterns...
http://www.bcs.rochester.edu/people/newport/pdf/Aslin-Newport_CDinpress.pdf

What is clear, however, is that statistical learning is not simply a veridical reproduction of the stimulus input; learning is shaped by a number of perceptual and memory constraints, at least some of which may apply not only to languages but also to nonlinguistic patterns.
 
Last edited by a moderator:
  • #494
What I find interesting about language acquisition/language evolution in connection with the Hard Problem is how it illustrates a general shift in science towards developmental systems thinking. Things don't exist. Things have to emerge.

It is what people a century ago called the process philosophy view. And it requires a holistic view of causality, such as Aristotle's "four causes", where top-down constraints are part of what is ontically "real".

So Chomsky vs the Behaviourists represented some weird broken view of causality.

The Behaviourists wanted to argue for simple-minded reductionism - the construction of the mind from atomistic "learning" events. The blank slate view. Although, as was then argued, Behaviourists did invoke contextual/situational factors - so holism was in there at the back of things, as it must be. And even though Behaviourism seemed very much focused on individual learning - adaptation on the timescale of the developing organism - it still also accepted species-level learning, adaptation on the genetic timescale.

So Behaviourism - once reined back from the cartoon version of Watson in particular - does not seem so objectionable from the systems view. It just did not have an actual model of emergent mental organisation.

Chomsky on the other hand does seem to come at all this from a strange and anti-science position. His focus is on the top-down constraints aspect of a developmental systems perspective. But he does this from a dualist/rationalist/Platonist standpoint which denies many crucial things.

So Chomsky fails to see that this is an interactionist story - the bottom-up in interaction with the top-down. He thus wants to explain everything in terms of Platonic principles and exclude anything to do with the other side of the story.

He doesn't see it as a developmental story either. So his strong Universal Grammar principles have to "exist" somewhere prior to their emergence in human communication. They can't be seen to have a naturalistic evolutionary or developmental story, such as one where small and subtle biases (ie: informational constraints) early in growth can strongly shape the final outcome. Thus when forced to give some evolutionary account of how human grammar emerged, Chomsky makes ridiculous statements about "hopeful monsters".

Chomsky ends up tangled in knots, even though he is "right" in that a systems view stresses the importance of global constraints in the development of any kind of organisation. And semiotics in particular gives a theory of how living systems construct such constraints.

The link with the Hard Problem is that this also is a false dilemma that arises out of a cartoon reductionist view of causality. And it is resolved by taking a full systems view of causality where downwards causation is taken to be ontic, and all real objects are understood to be developmentally emergent.
 
  • #495
I thought this was an interesting and pretty neat and easy to understand piece on this topic (I wish I knew who wrote it?), arguing for "mind" as an intrinsic property of matter:
The core of Strawson’s argument is that since the mental cannot possibly emerge from anything non-mental, and because we know that some macroscopic modifications of the world are intrinsically mental, the intrinsic nature of the basic constituents of the material world has to be mental as well. But now it seems that Strawson is confusing here the possibility of the emergence of mind from scientifically described properties like mass, charge, or spin, with the possibility of the emergence of mind from the intrinsic properties that correspond to these scientific properties. It is indeed the case that mind cannot emerge from scientifically described extrinsic properties like mass, charge, and spin, but do we know that mind could not emerge from the intrinsic properties that underlie these scientifically observable properties? It might be argued that since we know absolutely nothing about the intrinsic nature of mass, charge, and spin, we simply cannot tell whether they could be something non-mental and still constitute mentality when organised properly. It might well be that mentality is like liquidity: the intrinsic nature of mass, charge and spin might not be mental itself, just like individual H2O-molecules are not liquid themselves, but could nevertheless constitute mentality when organised properly, just like H2O-molecules can constitute liquidity when organised properly (this would be a variation of neutral monism). In short, the problem is that we just do not know enough about the intrinsic nature of the fundamental level of reality that we could say almost anything about it.

Finally, despite there is no ontological difference between the micro and macro levels of reality either on the intrinsic or extrinsic level, there is still vast difference in complexity. The difference in complexity between human mentality and mentality on the fundamental level is in one-to-one correspondence to the scientific difference in complexity between the brain and the basic particles. Thus, even if the intrinsic nature of electrons and other fundamental particles is in fact mental, this does not mean that it should be anything like human mentality—rather, we can only say that the ontological category their intrinsic nature belongs to is the same as the one our phenomenal realm belongs to. This category in the most general sense is perhaps best titled ‘ideal’.
Mind as an Intrinsic Property of Matter
http://users.utu.fi/jusjyl/MIPM.pdf
 
Last edited by a moderator:
  • #496
bohm2 said:
I thought this was an interesting and pretty neat and easy to understand piece on this topic (I wish I knew who wrote it?), arguing for "mind" as an intrinsic property of matter:

Here is your guy - http://users.utu.fi/jusjyl/

Welcome to Jussi Jylkkä's website
I am a postdoc researcher working mainly on issues in philosophy of mind, philosophy of language, and metaphysics. My current research focuses on the mind-body problem from a transcendental perspective. Other research interests include history of philosophy, Asian philosophy (zen), experimental philosophy and metaphilosophy.

Interesting to look at his extrinsic vs intrinsic property argument in the light of a systems approach.

The systems/pansemiotic view would suggest every "element of reality" indeed would have further "intrinsic" degrees of freedom.

Every locale has unlimited degrees of freedom (is vague) until some constraints are imposed top-down to limit the degrees in strong fashion, so creating an element of reality with some now definite, or extrinsic, properties.

But constraint is not absolute, and so further degrees of freedom remain, but in unexpressed fashion.

So taking his example of H2O, we could say an unexpressed degree of freedom of a water molecule is its ability to collaborate in the broader organisation that we call liquidity. This "property" lurks intrinsically until it gets the chance to emerge and be expressed as a collective extrinsic property.

The same would be true of mentality. If you really want to insist on defining subjective experience as a property of a material object, you could in some sense say the necessary degrees of freedom exist at the level of the neuron, or the molecule, or the particle, or the quantum field. However far you want to drill down. If something emerges, you can claim there must have been the local degrees of freedom waiting to be harnessed. And give them the label of intrinsic (as opposed to latent, or potential, or whatever).

But it is an unnecessarily clunky story IMO. It becomes just a way of avoiding talking about formal causes and reducing your descriptions to "nothing but hidden properties of matter". It takes you further away from useful models for the sake of preserving a reductionist ontology.
 
Last edited by a moderator:
  • #497
There is a recent paper discussing a possible solution to the "combination problem" of panpsychism. One major criticism is that panpsychism must itself resort to some form of emergentism, and this has led even "panpsychist-friendly" philosophers (e.g. Goff?) to be critical of panpsychism:
between panpsychist emergentism and physicalist emergentism, the physicalist version is preferable for reasons of ontological economy
Coleman, who favours panpsychism, in the paper below tries to argue that some of the assumptions of critics like Goff may be mistaken. I'm not sure I buy or understand his argument for phenomenally-qualitied but subjectless ultimates:
Crucially, the relationship presently envisaged between the phenomenal character of the phenomenally-qualitied ultimates composing him and that of Goff’s o-consciousness ('o' for organism) is quite different. On the present view, the phenomenal characters of the ultimates composing Goff’s brain jointly constitute the phenomenal character of his o-conscious phenomenal field, they do not spawn it as a separate entity. This feature enables us to overcome an objection lurking in Goff’s account concerning the unity of o experience: “The existence of a subject having a unified experience of feeling cold and tired and smelling roast beef does not seem to be a priori entailed by the existence of a subject that feels cold, a subject that feels tired, and a subject that smells roast beef”...In our model the phenomenal elements of cold, tiredness and the smell of roast beef come together closely enough to form a phenomenal unity: they are experienced together as overlapping features of the same phenomenal field. This is thanks to the pooling of the intrinsic natures of the phenomenally-qualitied ultimates, possible due to their subjectless nature.
Mental Chemistry: Combination for Panpsychists
http://onlinelibrary.wiley.com/doi/10.1111/j.1746-8361.2012.01293.x/pdf

I have trouble understanding the meaning of subjectless qualia/phenomenology or even how such subjectless ultimates can lead to a "unified" subject/organism without some type of emergentism?
 
Last edited:
  • #498
I thought this was an interesting dissertation (abstract only) that this guy is doing. He argues against treating consciousness as a genuinely emergent phenomenon, suggesting instead that information at the micro-level leads to consciousness at the macro-level:

Naturalized Panpsychism
A central problem in the mind-body debate is the generation problem: how consciousness occurs in a universe understood as primarily non-conscious...I argue that the generation problem stems from a non-critical presupposition about the nature of reality, namely, that the mental is an exception in the universe, a non-fundamental property. I call this presupposition mental specialism...I argue that consciousness emerges from proto-consciousness, the fundamental property that is disposed to give rise to consciousness. Proto-consciousness is not an arbitrarily posited property; following an important contemporary approach in neuroscience (the integrated information account), I understand proto-consciousness as information. The thesis that consciousness emerges from proto-consciousness elicits a fatal problem with panpsychic theories, the combination problem. This problem is how to account for higher order conscious properties emerging from proto-conscious properties. I solve the combination problem by adopting Giulio Tononi's Integrated Information Theory of Consciousness and demonstrating that emerging higher order conscious properties just are a system integrating information. Thus information is the fundamental property that, when integrated in a system such as a human being, is consciousness. Proto-consciousness is thus a natural property and the formulated panpsychic theory based upon information is a naturalized panpsychism.
http://www.marquette.edu/grad/documents/Cookson.pdf

For an overview of Tononi's model and an interesting quote:
There are also some points of contact between the notion of integrated information and the approach advocated by relational quantum mechanics (Rovelli, 1996). The relational approach claims that system states exist only in relation to an observer, where an observer is another system (or a part of the same system). By contrast, the IIT says that a system can observe itself, though it can only do so by “measuring” its previous state. More generally, for the IIT, only complexes, and not arbitrary collections of elements, are real observers, whereas physics is usually indifferent to whether information is integrated or not. Other interesting issues concern the relation between the conservation of information and the apparent increase in integrated information, and the finiteness of information (even in terms of qubits, the amount of information available to a physical system is finite). More generally, it seems useful to consider some of the paradoxes of information in physics from the intrinsic perspective, that is, as integrated information, where the observer is one and the same as the observed.

Consciousness as Integrated Information: a Provisional Manifesto
http://www.biolbull.org/content/215/3/216.full.pdf
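As a very rough illustration of the "integrated information" intuition, here's a toy of my own where the whole system's past predicts its present while each part's past predicts nothing about its own present. This is NOT Tononi's actual Φ measure (which involves minimum-information partitions over perturbed states); it only shows the sense in which a whole can carry information its parts don't.

```python
# Drastically simplified illustration of "the whole carries
# information its parts don't" (not Tononi's actual phi).
from collections import Counter
from itertools import product
from math import log2

def step(state):
    # Toy dynamics: each binary unit copies the *other* unit's
    # previous state.
    a, b = state
    return (b, a)

def mutual_info(pairs):
    # Mutual information in bits over a uniform sample of (x, y) pairs.
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

states = list(product([0, 1], repeat=2))

# Whole system: the past state fully determines the present state.
whole = mutual_info([(s, step(s)) for s in states])

# Each unit in isolation: its own past says nothing about its own
# present, because it copies the other unit.
part_a = mutual_info([(s[0], step(s)[0]) for s in states])
part_b = mutual_info([(s[1], step(s)[1]) for s in states])

phi_like = whole - (part_a + part_b)
print(whole, part_a, part_b, phi_like)  # 2.0 0.0 0.0 2.0
```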
 
Last edited:
  • #499
bohm2 said:
I think Nagel is actually agreeing with you that no matter how far a future science/physics changes, qualia will forever remain subjective. Chomsky, on the other hand, in one paper-“Linguistics and Cognitive Science: Problems and Mysteries” (p. 39) questions Nagel's premise arguing that:

“this argument presupposes some fixed notion of the ‘objective world’ which excludes subjective experience, but it is hard to see why we should pay any more attention to that notion, whatever it may be, than to one that excludes action at a distance or other exotic ideas that were regarded as unintelligible or ridiculous at earlier periods, even by outstanding scientists.”

Elsewhere on that page he argues that there is nothing unique about the mind-body problem:

But from this we do not conclude that there was then (or now) a body-body problem, or a color-body problem, or a life-body problem, or a gas-body problem. Rather, there were just problems, arising from the limits of our understanding

I’m not sure what to make of this? I think Nagel’s position is clear. Nagel is simply arguing that the mind-body problem is different from all these other problems because, unlike the others, subjectivity/qualia cannot be reduced to any “material” entity regardless of future revisions of our “physical” theories. Whether Chomsky is arguing that some type of “micropsychism” is possible, I’m not sure, but I doubt it. Maybe Chomsky means that we should treat the mental just as "real" as other stuff in science even though unification may be beyond our cognitive limits (I'm thinking McGinn's cognitive closure stuff here)?

Panpsychism is a very interesting position even though it's not taken seriously by many. I really find the "intrinsic" argument as set out by Russell, Eddington and now Strawson very interesting. One difficulty with panpsychism is that it also "faces a severe problem of understanding how more complex mental states emerge from the mental features of the fundamental features." An interesting paper on this topic is this one by Seager:

http://www.scar.utoronto.ca/~seager/panagg.pdf

One panpsychist physicist is Bohm. In his papers, he argues that his interpretation suggests a proto-mental aspect of matter. He has been called a panprotopsychist. When you look at the guiding wave properties and how it affects the "particle" (trajectory) in Bohm's ontological interpretation of QM, you can't help but notice the analogy between pilot wave/particle and mind/brain. In fact, Bohm argues just that (see quote below). Some interesting properties of Bohm's guiding wave:

1. The quantum potential energy does not behave like an additional energy of classical type. It has no external source, but is some form of internal energy, split off from the kinetic energy. Furthermore, if we look at traditional quantum mechanical problems and examine the quantum potential energy in mathematical detail, we find that it contains information about the experimental environment in which the particle finds itself, hence its possible role as an information potential.

2. In the case of the quantum wave, the amplitude also appears in the denominator. Therefore, increasing the magnitude of the amplitude does not necessarily increase the quantum potential energy. A small amplitude can produce a large quantum effect. The key to the quantum potential energy lies in the second spatial derivative, indicating that the shape or form of the wave is more important than its magnitude.

3. For this reason, a small change in the form of the wave function can produce large effects in the development of the system. The quantum potential produces a law of force that does not necessarily fall off with distance. Therefore, the quantum potential can produce large effects between systems that are separated by large distances. This feature removes one of the difficulties in understanding the non-locality that arises between particles in entangled states, such as those in the EPR paradox.

4. In Bohmian mechanics the wave function acts upon the positions of the particles but, evolving as it does autonomously via Schrödinger's equation, it is not acted upon by the particles...The guiding wave, in the general case, propagates not in ordinary three-space but in a multidimensional-configuration space and is the origin of the notorious ‘nonlocality’ of quantum mechanics.

5. Unlike ordinary force fields such as gravity, which affect all particles within their range, the pilot wave acts on only one particle: each particle has a private pilot wave all its own that “senses” the location of every other particle of the universe. Although it extends everywhere and is itself affected by every particle in the universe, the pilot wave affects no other particle but its own.
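Points (1)-(3) in the list above all trace back to the standard polar-form expression for the quantum potential. This is the textbook de Broglie-Bohm formula, not anything specific to the papers linked below:

```latex
% Writing the wave function in polar form, \psi = R\,e^{iS/\hbar},
% the quantum potential is
\[
  Q \;=\; -\frac{\hbar^{2}}{2m}\,\frac{\nabla^{2} R}{R}.
\]
% Since R appears both inside \nabla^{2} and in the denominator,
% rescaling the amplitude R \to cR leaves Q unchanged: only the
% form (second spatial derivative) of the wave matters, not its
% magnitude, and Q need not fall off with distance.
```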

Bohm and Hiley have coined the expression “active information” for this sort of influence and suggest that the quantum potential is a source of this kind of information.

"There are many analogies to the notion of active information in our general experience. Thus, consider a ship on automatic pilot guided by radar waves. The ship is not pushed and pulled mechanically by these waves. Rather, the form of the waves is picked up, and with the aid of the whole system, this gives a corresponding shape and form to the movement of the ship under its own power. Similarly, the form of radio waves as broadcast from a station can carry the form of music or speech. The energy of the sound that we hear comes from the relatively unformed energy in the power plug, but its form comes from the activity of the form of the radio wave; a similar process occurs with a computer which is guiding machinery. The 'information' is in the program, but its activity gives shape and form to the movement of the machinery. Likewise, in a living cell, current theories say that the form of the DNA molecule acts to give shape and form to the synthesis of proteins (by being transferred to molecules of RNA).

Our proposal is then to extend this notion of active information to matter at the quantum level. The information in the quantum level is potentially active everywhere, but actually active only where the particle is (as, for example, the radio wave is active where the receiver is). Such a notion suggests, however, that the electron may be much more complex than we thought (having a structure of a complexity that is perhaps comparable, for example, to that of a simple guidance mechanism such as an automatic pilot). This suggestion goes against the whole tradition of physics over the past few centuries which is committed to the assumption that as we analyze matter into smaller and smaller parts, their behaviour grows simpler and simpler. Yet, assumptions of this kind need not always be correct. Thus, for example, large crowds of human beings can often exhibit a much simpler behaviour than that of the individuals who make it up."


http://www.tcm.phy.cam.ac.uk/~mdt26/local_papers/bohm_hiley_kaloyerou_1986.pdf
http://www.geestkunde.net/uittreksels/db-relationmindmatter.html
http://www.mindmatter.de/resources/pdf/hileywww.pdf
http://plato.stanford.edu/entries/qm-bohm/


So, I was lurking on this forum, and reading Bohm's interpretation in regard to the Mind-Body problem brought up some interesting questions for me. Keep in mind, I'm more of a science enthusiast than a scientist, my understanding is simple. So please forgive me and let me know if I've made ridiculous logical jumps, it's entirely probable.

Could it at all be possible that this "Mind Wave" is the quantum consideration of your observations? Because couldn't one infer that sapience is just increased/altered quantum potential energy due to the unique shape of our brain?
 
Last edited by a moderator:
  • #500
Anachronaut said:
So, I was lurking on this forum, and reading Bohm's interpretation in regard to the Mind-Body problem brought up some interesting questions for me...Could it at all be possible that this "Mind Wave" is the quantum consideration of your observations?
I don't understand how Bohm gets from "quantum potential" to "information potential" to a "mental pole/wave"? Why can't there just be a transfer of energy from the wave field to the quantum particle during a measurement process as argued by Peter Riggs:
The Active Information Hypothesis opens up a whole host of questions and issues that are extremely problematic. Consider first the difficulties encountered with particle structure. Quantum particles would require complex internal structures with which the ‘active information’ is processed in order that the particle be directed through space...

Instead, Riggs, using a "Bohmian" perspective, argues:
The quantum potential is the potential energy function of the wave field. It gives the amount of the wave field’s potential energy that is available to quantum particles. The well-established principle of energy conservation holds in classically-free quantum systems. This is achieved by energy exchanges between the quantum particles and wave field. The quantum potential facilitates these exchanges and provides an explanation of quantum phenomena such as tunnelling from a potential well.
Reflections on the deBroglie–Bohm Quantum Potential
http://www.tcm.phy.cam.ac.uk/~mdt26/local_papers/riggs_2008.pdf

Maybe there are physical reasons why Riggs's model will not work and why Bohm/Hiley thought it necessary to advance their "active information" model?
 
Last edited:
  • #501
Locked pending a reality check. Thread unlikely to be re-opened.
 
