Why Do Some People Obsess Over IQ Scores?

  • Thread starter: cswhisper
  • Tags: Iq Thread
Summary
The discussion centers on the relationship between intelligence, as measured by IQ tests, and societal perceptions of genius. It highlights that many individuals who achieve significant accomplishments—often considered geniuses—are not preoccupied with IQ scores, suggesting that those who obsess over IQ may possess only marginal intelligence. The conversation critiques the validity of IQ statistics, particularly regarding racial comparisons, arguing that data can be manipulated to support specific agendas. Participants express skepticism about the interpretation of IQ scores and their implications, emphasizing that raw data can be misleading when not analyzed correctly. The dialogue also touches on the philosophical aspects of intelligence, questioning the existence of genius and the importance of self-perception in understanding one's intelligence. Ultimately, the thread underscores the complexity of measuring intelligence and the potential biases inherent in statistical analysis.
  • #31
Originally posted by Thallium
I pray you will realize that some day, though it is very unlikely in your state, Nachtwolf.
Hahaha! By all means, pray away! I'm sure the gods all agree with you!

--Mark
 
  • #32
hitssquad wrote: Statistics is also being applied to other types of data sets to amplify the signal-to-noise ratio. In the area of astronomy, Earth based telescopes are now challenging orbiting telescopes in clarity of image. How is this done? It is done with statistical tools. The noise added by Earth's atmosphere is literally filtered out by modern statistical tools to give pictures as clear as those from orbiting telescopes. Do you want pudding proof? All they have to do to see if their statistically-based clean-up feats are accurate is compare cleaned-up images from a terrestrial telescope with images of the same astral bodies from an orbiting telescope.
And it's not just high-end telescopes, you can buy image enhancement tools for amateur 'scopes too. One has a tilt-tip mirror, and some deconvolution image processing software. I've not heard FFTs and deconvolution described as statistical processing before, and am not sure that the techniques have much in common with meta-analysis.
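The deconvolution mentioned above can be illustrated with a minimal sketch. This is not the pipeline any particular telescope uses; it is a toy Wiener deconvolution in one dimension, with a made-up blur kernel and a made-up noise-power setting, just to show how dividing out a known blur in the Fourier domain works:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_power=1e-3):
    """Estimate the original signal from a blurred one.

    Classic Wiener deconvolution: divide in the Fourier domain,
    regularised by an assumed noise-to-signal power ratio so that
    frequencies where the kernel response is near zero don't blow up.
    """
    H = np.fft.fft(kernel, n=len(blurred))
    G = np.fft.fft(blurred)
    # Wiener filter: conj(H) / (|H|^2 + noise_power)
    F_est = G * np.conj(H) / (np.abs(H) ** 2 + noise_power)
    return np.real(np.fft.ifft(F_est))

# Toy example: a point source (spike) blurred by a small kernel.
signal = np.zeros(64)
signal[20] = 1.0
kernel = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(signal, kernel, mode="full")[:64]
restored = wiener_deconvolve(blurred, kernel, noise_power=1e-6)
```

After deconvolution the spike is recovered at its original position, illustrating how a known point-spread function can be (approximately) divided out of an image.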
 
  • #33
Signals and noise

Originally posted by Nereid
And it's not just high-end telescopes, you can buy image enhancement tools for amateur 'scopes too. One has a tilt-tip mirror, and some deconvolution image processing software. I've not heard FFTs
20,800 hits:
http://www.google.com/search?q=fft+fourier+statistical


and deconvolution
23,900 hits:
http://www.google.com/search?q=deconvolution+statistical


described as statistical processing before, and am not sure that the techniques have much in common with meta-analysis.
Meta-analysis does not imply sampling randomness of the same quality as the sampling of physical phenomena. High-order trends generally need to be accounted for when sampling study data, whereas they generally do not when sampling physical phenomena:
http://www.qmk-online.de/pdfs/epidemiology_episcope.pdf

And no one seems to use the term "meta-analysis" to refer to what is done when telescope images are compared and signal is inferred.
http://www.google.com/search?q=image+telescope+"meta-analysis"

However, I would submit that if you are enhancing a signal/noise ratio in any other way than by moving your sampling instrument to an area that lends itself to good signal/noise ratio sampling, you are applying statistical tools to the data in order to infer the signal from the noise. And statistical tools are statistical tools. They don't care where you got the data. They just help you filter out noise from signals that might already be present. Therefore the statistical analysis of meta-analysis, and the mathematics involved in image correction of telescope data, is essentially the same.
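The point that statistical tools "don't care where you got the data" can be illustrated with the simplest such tool: averaging repeated noisy measurements, where the noise shrinks as 1/sqrt(N) while the signal stays put. The signal shape and noise level below are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

true_signal = np.sin(np.linspace(0, 2 * np.pi, 100))  # a stand-in "real" signal
noise_sigma = 1.0

# One noisy exposure: the signal is buried in noise of comparable amplitude.
one_frame = true_signal + rng.normal(0, noise_sigma, 100)

# Stack 400 exposures: noise averages toward zero, the signal does not.
stack = np.mean(
    [true_signal + rng.normal(0, noise_sigma, 100) for _ in range(400)],
    axis=0,
)

rms_one = np.sqrt(np.mean((one_frame - true_signal) ** 2))
rms_stack = np.sqrt(np.mean((stack - true_signal) ** 2))
# Expect roughly a factor of sqrt(400) = 20 improvement in RMS error.
```

The same averaging works whether the rows come from a CCD, a survey, or a pooled set of studies; that is the sense in which the tool is indifferent to the data's origin.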


From the M-W Unabridged 3.0:

---
Main Entry: meta·analysis
Function: noun
Etymology: meta- + analysis

: quantitative statistical analysis that is applied to separate but similar experiments or studies of different and usually independent researchers and that involves pooling the data and using the pooled data to test for statistical significance
---
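The pooling the definition describes can be sketched in a few lines. The effect sizes and variances below are hypothetical numbers for three imaginary studies; the method shown is standard inverse-variance (fixed-effect) pooling:

```python
import math

# Hypothetical effect sizes and variances from three independent studies.
effects = [0.30, 0.45, 0.25]
variances = [0.04, 0.09, 0.02]

# Inverse-variance (fixed-effect) pooling: more precise studies weigh more.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# z-statistic for testing the pooled effect's statistical significance.
z = pooled / pooled_se
```

With these made-up numbers the pooled estimate sits near the most precise study's value, and z exceeds 1.96, i.e. the pooled effect would be deemed significant at the 5% level.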

---
Main Entry: Monte Carlo
Function: adjective
Usage: usually capitalized M&C
Etymology: from Monte Carlo, Monaco, city noted for its gambling casino

: of, relating to, or involving the use of random sampling techniques and often the use of computer simulation to obtain approximate solutions to mathematical or physical problems especially in terms of a range of values each of which has a calculated probability of being the solution *Monte Carlo methods* *Monte Carlo calculations*
---
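The textbook instance of the random-sampling technique the definition describes is estimating pi: throw random points into a unit square and count the fraction landing inside the quarter-circle. A minimal sketch:

```python
import random

random.seed(0)

# Monte Carlo estimate of pi: the fraction of uniform random points in
# the unit square that fall inside the unit quarter-circle approximates
# pi/4, with accuracy improving as the sample grows.
n = 200_000
inside = sum(
    1 for _ in range(n)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4.0 * inside / n
```

Note the hallmark of Monte Carlo methods: each run gives an approximate answer with a calculable probable error (here on the order of 0.004), rather than an exact solution.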

---
Main Entry: statistical inference
Function: noun

: the making of estimates concerning a population from information gathered from samples
---
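A minimal sketch of that definition in action: estimate a population mean from a sample and attach a 95% confidence interval. The population parameters below (IQ-like mean 100, SD 15) are purely illustrative:

```python
import math
import random

random.seed(1)

# A hypothetical population whose mean we pretend not to know.
population_mean, population_sd = 100.0, 15.0
sample = [random.gauss(population_mean, population_sd) for _ in range(400)]

# Statistical inference: a point estimate of the population mean from
# the sample, plus an approximate 95% confidence interval around it.
mean = sum(sample) / len(sample)
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (len(sample) - 1))
half_width = 1.96 * sd / math.sqrt(len(sample))
ci = (mean - half_width, mean + half_width)
```

The sample of 400 pins the mean down to within roughly +/-1.5 points; quadrupling the sample would halve that interval.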

Google returns 11,600 hits for the search argument <image telescope "monte carlo">:
http://www.google.com/search?q=image+telescope+"monte+carlo"

Google returns 40,700 hits for the search argument <image telescope statistical>:
http://www.google.com/search?q=image+telescope+statistical



-Chris
 
  • #34
Peace! It's just semantics.

The folks who get paid to do astronomy often use statistical tools in the work they do; some use such tools and techniques extensively.

Those astronomers who, as part of their work, improve the quality of images taken from ground-based optical and IR telescopes, use a wide variety of techniques. There are a number of image-enhancement algorithms which are now widely deployed, in standard packages (e.g. Lucy-Richardson, maximum entropy), and a number of metrics quoted for how close the image comes to a theoretical 'best' (e.g. Strehl ratio). There are other processing and enhancement techniques which have more limited use.

In the case of deconvolution (e.g. reducing the smearing caused by atmospheric 'seeing'), it's not so much extracting a signal from noise, as transforming the signal to a form which can be better analysed. In this sense, it's similar to reconstructing a 'clean' image of a distant galaxy, from what we see through a gravitational lens. Another example, for a totally different field, would be a CDMA mobile phone message - the spread spectrum technique makes the signal look like noise, but if you have the keys ('code'), you can extract the signal easily. No statistics involved, at least not directly.
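The CDMA point can be made concrete with a toy direct-sequence sketch. This is not a real CDMA stack; the spreading code below is a short made-up +/-1 sequence, standing in for the "key" that makes an apparently noise-like chip stream trivially decodable:

```python
import random

random.seed(7)

# A short +/-1 spreading code ("the key"); real CDMA codes are far longer.
code = [random.choice((-1, 1)) for _ in range(64)]

def spread(bit, code):
    """Transmit one data bit (+1 or -1) as bit * chip for every chip."""
    return [bit * c for c in code]

def despread(chips, code):
    """Correlate received chips with the code; the sign recovers the bit."""
    corr = sum(r * c for r, c in zip(chips, code))
    return 1 if corr >= 0 else -1

# The transmitted chip stream looks like random noise to anyone without
# the code, but correlating with the right code recovers the bit exactly.
tx = spread(-1, code)
recovered = despread(tx, code)
```

With the matching code the correlation is +/-64 (full length); with an unrelated code it hovers near zero, which is exactly the "looks like noise without the key" property described above — and no statistics are involved, just a deterministic correlation.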

A good example of extracting a weak signal from noise, in astronomy, would be the timing of pulsars. The tests of GR done by using such observations rely heavily on statistical techniques to extract the signal.
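The pulsar case is a genuine weak-signal-from-noise problem, and the standard first step is epoch folding: cut the time series at the known period and average the segments, so the pulse adds coherently while the noise averages away. The period, pulse amplitude, and noise level below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

period = 50          # samples per pulse period (assumed known)
n_periods = 500
n = period * n_periods

# A weak pulse: amplitude-0.3 spike once per period, buried in unit noise.
signal = np.zeros(n)
signal[17::period] = 0.3
data = signal + rng.normal(0, 1.0, n)

# Epoch folding: stack all periods on top of each other and average.
# The noise in each folded bin shrinks by 1/sqrt(n_periods).
profile = data.reshape(n_periods, period).mean(axis=0)
pulse_phase = int(np.argmax(profile))
```

In any single period the pulse is invisible (0.3 against unit noise); after folding 500 periods the residual noise per bin is about 0.045, and the pulse phase stands out clearly.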
 
  • #35
Nachtwolf, please read my PMs.
 
  • #36


Originally posted by hitssquad
20,800 hits:
http://www.google.com/search?q=fft+fourier+statistical
-Chris
A little terminology correction (pet peeve of mine, sorry). I know this is nitpicky, but those aren't called hits; they're the number of websites that contain at least one of the words in your search. The actual site may not have anything to do with the subject you were searching for.

A "hit" is when someone goes to a page on a website. If you go to more than one page on the same site, it will be more than one hit.

Tallying the number of "hits" on a site is used in e-commerce to gauge a site's popularity and this can be used by the site's owner to lure advertisers.

Sorry, off topic.

I do enjoy your posts though.
 
  • #37
Website hits and database hits (was Signals and noise)

Originally posted by Evo
A "hit" is when someone goes to a page on a website. If you go to more than one page on the same site, it will be more than one hit.
I would qualify that and call it a website hit (IOW, in the category of a qualified hit). Before there were websites, database front-ends typically used the term hit to refer to positive matches to query arguments. They still do, today. The copy of the world wide web contained in the memory of a search engine computer network is a form of database and when we query the Google front-end with the argument <"a hit is" database -"file transfer">, it returns 3,460 database hits, of which I would submit the following item is a typical example:

---
The search proceeds in a fast continue mode displaying, for each hit, the reference information in the current display mode.
---
http://www.sdsc.edu/CCMS/Packages/cambridge/volume1/z1c11089.html


I do enjoy your posts though.
Thanks. At least now I know I'm not the only one who does.

In case you're interested, more hitssquad posts (and other very good posts such as those by Graham Cowen, Jim Hoerner, Karl Johanson, gumshoe, Nuke Bob, et al.) can be found here:
http://groups.yahoo.com/group/Know_Nukes
 
  • #38
Originally posted by hitssquad
I would qualify that and call it a website hit (IOW, in the category of a qualified hit).
Yes, you're right, it can be called a "hit" in that sense. I just deal with nimnals all day and I have developed a very short fuse when it comes to people's understanding of how the internet works. The people I deal with are supposedly trained in their field (at least that's what their employers think), and in truth are clueless. It is just amazing what some people believe.
Thanks. At least now I know I'm not the only one who does.
Well, I wanted to smack you upside the head the first time I saw one of your three-page posts. I admit that I haven't had time to read all of your posts, but I agree with many of the reasons you have cited stating that IQ can be affected by a number of external sources, including malnutrition, etc.

At first I thought you were another Nachtwolf (which I have placed in the "chicken little" category, running around shouting that "the sky is falling"), since you seem to be his friend here?

I have not seen any "chicken little" traits from you so far and your posts are informative.

In case you're interested, more hitssquad posts (and other very good posts such as those by Graham Cowen, Jim Hoerner, Karl Johanson, gumshoe, Nuke Bob, et al.) can be found here:
http://groups.yahoo.com/group/Know_Nukes
I will take a look
 
  • #39
Prevention of age-related IQ decay

Originally posted by Evo
The people I deal with are supposedly trained in their field (at least that's what their employers think), and in truth are clueless.
And this, in turn, is supposedly the value of g in everyday life. Part of the practical-validity-of-g case states that two people trained in the same thing and to the same degree but who have different levels of g will tend to diverge farther and farther from each other in terms of value to their employers over the years of their terms of service.

Or, as Jensen says:
One of the most important conclusions that can be drawn from all this research is that mental ability tests in general have a higher success rate in predicting job performance than any other variables that have been researched in this context, including (in descending order of average predictive validity) skill testing, reference checks, class rank or grade-point average, experience, interview, education, and interest measures.
(The g Factor, p. 282.)
http://www.questia.com/PM.qst?a=o&d=24373874

So, what he is saying, I think, in a nutshell, is that people who don't like credentialism in the first place should take a look at g.


Evo continued
I wanted to smack you upside the head the first time I saw one of your three page posts.
If you ever do that, please be careful not to spill my formaldehyde (see avatar). Thanks in advance.


Then Evo said
I agree with many of the reasons you have cited stating that IQ can be affected by a number of external sources, including malnutrition, etc.
But the implications are complex. If you simply want to ensure that you personally have roughly the same IQ when you are 100 as you have now, I can show you how to do that and be able to expect a reasonable degree of success. As your peers decline in general intellectual power over the decades by 5, 10, 15, 20, 25 IQ points, you will stay the same. This might be easier to do with you, however, than it would be to do with someone who has a much lower IQ to begin with. You could understand a complicated regimen, its theoretical basis in biochemistry and neurochemistry (i.e., its raison d'etre), and how to plan your lifestyle to incorporate it. Based on what I've studied about the practical general intellectual powers of humans in general at various IQ levels, I think a population of people below an IQ cutoff of 130 (Mensa level; the 97.7th percentile) would ultimately achieve very little success in implementation of such an anti-senescence/IQ-preservation regimen. So, the age-related decline in most of the population would continue.

Anyway, this has interesting implications for eugenics. If IQ-elite people chose not to breed, and implemented smart-nutriceutical regimens instead (in a sort of pseudo implementation of eugenics), they might change their minds at age 120 and find themselves unable to have children, but getting closer and closer to their own extinctions as they near the theoretical hard-limit to human longevity.

Seeing that possibility that lies in the future, people might decide instead on a mid-course action that incorporates both traditional eugenics -- but with limited fecundity -- and this type of anti-senescence I am talking about. I am imagining here that childbearing would be moved up to the forties instead of the twenties (since there wouldn't be the normal rapid die-off of oldsters that traditionally made room for the youngsters), and this might cause a lot of conflict in the form of DNA errors and whatever else makes a young mother biologically safer as a breeder than a mid-forties mother.


After that, Evo said
... Nachtwolf ... you seem to be his friend here?
Nachtwolf is one of my friends.


And, finally, Evo said
I have not seen any "chicken little" traits from you so far and your posts are informative.
Perhaps I am a sleeper waiting to go off once I have gained your confidences.
 
  • #40
Perhaps I am a sleeper waiting to go off once I have gained your confidences.
I never had the patience or duplicity for such tactics. The idiotic "Chicken Little" references remind me of something, however.

I wonder how many people remember the anti-smoking ads which depicted a pair of blackened, tar-stained lungs. You remember them, Chris? "This is what happens when you smoke," was the sober message, accompanied by horrific images of blackened, brutalized organs.

Apparently, it was discovered that these ads had little effect on the general public. The lungs were disgustingly black. The imagery was too strong. It looked surreal. People said to themselves, "What nonsense - the thought that my lungs look as bad as that!" And they took another drag.

The interesting fact is that, I'm told, these ads weren't at all disingenuous. Those of us who have wrapped a sock around the exhaust pipe of mommy’s Ford know what that sock looks like after one week. It's a simple experiment, and when it's done you have this marvelously stiff article of non-clothing to impress your fifth-grade teacher. If that's what happens to a sock after one week, just what should we expect our lungs to look like after years and years of inhaling tars and toxins on a regular basis?

If intelligence is primarily genetic - and if the more intelligent consistently reproduce less than the less intelligent - and if this continues for a century or two - and if nothing is done to change the situation - well just what should we expect?

It really isn't very difficult to see that, if something doesn't change within the next 200 years, the sky will indeed fall. But only those who have the necessary foresight to understand why this is so will be able to emotionally internalize the truth about how black our lungs are getting.

The rest will reach for another cigarette.


--Mark
 
  • #41
distortion vs noise

hitssquad wrote: Statistics is also being applied to other types of data sets to amplify the signal-to-noise ratio. In the area of astronomy, Earth based telescopes are now challenging orbiting telescopes in clarity of image. How is this done? It is done with statistical tools. The noise added by Earth's atmosphere is literally filtered out by modern statistical tools to give pictures as clear as those from orbiting telescopes. Do you want pudding proof? All they have to do to see if their statistically-based clean-up feats are accurate is compare cleaned-up images from a terrestrial telescope with images of the same astral bodies from an orbiting telescope.
and However, I would submit that if you are enhancing a signal/noise ratio in any other way than by moving your sampling instrument to an area that lends itself to good signal/noise ratio sampling, you are applying statistical tools to the data in order to infer the signal from the noise.
I replied to this once before; I now realize that there is an important distinction which hasn't been drawn out, and which may well cause confusion.

The image processing and enhancing techniques to which hitssquad refers have both a hardware and software component. Their objective is to create an image of a distant object as it would appear if the telescope were above the Earth's atmosphere, by removing distortion in the image introduced by rapid changes in the refractive index of the air cells in the line of sight. You can think of it as a more complicated version of making a 'flat' picture from an image taken through a fish-eye lens. The atmosphere does NOT add noise.
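Nereid's distinction between distortion and noise can be sketched in code. A known, deterministic distortion is invertible: apply the inverse map and the original comes back exactly, with no statistics involved. Random noise is not. The "distortion" below is a toy pixel permutation, crudely standing in for a known refraction map:

```python
import numpy as np

rng = np.random.default_rng(0)
image_row = np.linspace(0.0, 1.0, 32)  # a stand-in "true" image row

# A known, deterministic distortion: a fixed permutation of the pixels.
perm = rng.permutation(32)
distorted = image_row[perm]

# Because the distortion is known and invertible, it is undone exactly.
inverse = np.argsort(perm)
restored = distorted[inverse]

# Random noise, by contrast, cannot be inverted away from a single frame;
# only repeated measurement and averaging can beat it down.
noisy = image_row + rng.normal(0, 0.1, 32)
```

The restored row matches the original to machine precision, while no single-frame operation will recover the original from the noisy version — which is exactly why the two problems call for different tools.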

These techniques are quite different from those astronomers use to extract a signal from noisy data. If you take images of stars in broad daylight you would have a lot of noise to remove.

Hitssquad has been trying hard to make a case that clever statistical analyses, to lift a faint signal above the noise, can show clearly that (for example) Lynn's hypothesis is validated. He has used the astronomy analogy to add credence to his proposal.

The analogy is flawed. It is flawed because the example he quotes (amelioration of distortions produced by known causes) is not at all analogous to the case he's trying to make.

To make a better analogy, look at what astronomers do when they think they have found a faint signal buried in noise: first they do exactly what hitssquad says - you "enhanc[e] a signal/noise ratio [...] by moving your sampling instrument to an area that lends itself to good signal/noise ratio sampling" - or they simply repeat the work. Where the signal remains stubbornly faint, enormous efforts are made to identify and characterise every conceivable source of bias and systematic error that there might be (and weaknesses in this search for error dragons to slay are one of the most common forms of challenge).

Contrast this to hitssquad's responses to just such challenges, from me and others: no data, no analyses, ... no answers ... just 'the following high correlations demonstrate both statistical reliability and validity; no need to look for systematic errors'. You think this an unfair summary? Please read the threads and make up your own minds.
 
