Saturday, October 20, 2007

Intelligence, Assumptions, and the g Conundrum

Cosma Shalizi at Three-Toed Sloth has written a long (but well worth the read) article entitled "g, a Statistical Myth".

This article really elucidates a lot of the issues I have with the usual attempts to quantify "intelligence" and explain what causes it. An excerpt:

the case for g rests on a statistical technique, factor analysis, which works solely on correlations between tests. Factor analysis is handy for summarizing data, but can't tell us where the correlations came from; it always says that there is a general factor whenever there are only positive correlations. The appearance of g is a trivial reflection of that correlation structure. A clear example, known since 1916, shows that factor analysis can give the appearance of a general factor when there are actually many thousands of completely independent and equally strong causes at work. Heritability doesn't distinguish these alternatives either. Exploratory factor analysis being no good at discovering causal structure, it provides no support for the reality of g.

These purely methodological points don't, themselves, give reason to doubt the reality and importance of g, but do show that a certain line of argument is invalid and some supposed evidence is irrelevant. Since that's about the only case which anyone does advance for g, however, which accords very poorly with other evidence, from neuroscience and cognitive psychology, about the structure of the mind, it is very hard for me to find any reason to believe in the importance of g, and many to reject it. These are all pretty elementary points, and the persistence of the debates, and in particular the fossilized invocation of ancient statistical methods, is really pretty damn depressing.

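For the statistically curious: the "clear example, known since 1916" is Godfrey Thomson's sampling model, and it's easy to see in action. Below is a minimal simulation of my own (the sizes and the use of Python/NumPy are arbitrary illustrative choices, not anything taken from Shalizi's article): each person gets thousands of independent, equally strong "abilities", each test sums a random subset of them, and yet the test correlations all come out positive, with factor analysis duly producing a dominant first factor, an apparent "g", even though no general cause exists anywhere in the model.

```python
# Minimal sketch of Thomson's (1916) sampling model, as summarized by
# Shalizi: many independent causes, yet an apparent "g" emerges.
# All sizes here are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests = 2000, 5000, 10

# Each person has thousands of independent, equally strong "abilities".
abilities = rng.normal(size=(n_people, n_abilities))

# Each test samples a random half of the abilities and sums them.
scores = np.empty((n_people, n_tests))
for t in range(n_tests):
    subset = rng.choice(n_abilities, size=n_abilities // 2, replace=False)
    scores[:, t] = abilities[:, subset].sum(axis=1)

corr = np.corrcoef(scores, rowvar=False)
print("smallest correlation:", corr.min())  # all positive, around 0.5

# One dominant eigenvalue, i.e. an apparent "general factor", even
# though no single general cause exists in the generating model.
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("share of variance per factor:", eigvals / n_tests)
```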

I've avoided writing much about this particular subject so far, because I wanted to wait until I either wrote or found something that would make it clear that I am not basing my opinions about intelligence on mere "political correctness", or on emotional appeals to some notion that every individual has the exact same set of abilities (which obviously isn't true).

I've read a lot of literature on theories of intelligence, including a fair number of papers on g and on psychometrics. I've also been professionally tested twice (on the Wechsler Pre-School and Primary Scale of Intelligence at age 4, and on the Wechsler Adult Intelligence Scale at age 20), so I have direct experience with at least one type of IQ test.

I don't dispute the fact that people who score well on certain types of tests are statistically more likely to, say, graduate from college or hold down a particular kind of job, but I do dispute the utility of IQ testing in evaluating an individual's "potential", or their ability to eventually process and understand intellectual and practical problems. It has always seemed to me that much of the "intelligence" literature doesn't tell the whole story, and that it is rife with implicit assumptions that are rarely examined.

One thing that gives me some hope that this might not always be the case, though, is that some studies are approaching intelligence in a way that does demonstrate awareness of some of these assumptions. This article in Science Daily describes a study meant to (at least in part) bypass the language difficulties commonly observed in autistic persons:

Led by psychologist Laurent Mottron of the University of Montreal, the team gave both autistic kids and normal kids two of the most popular IQ tests used in schools. The two tests are both highly regarded, but they are very different. The so-called WISC relies heavily on language, which is why the psychologists were suspicious of it. The other, known as the Raven's Progressive Matrices, is considered the preeminent test of what's called "fluid intelligence," that is, the ability to infer rules, to set and manage goals, to do high-level abstractions. Basically the test presents arrays of complicated patterns with one missing, and test takers are required to choose the one that would logically complete the series. The test demands a good memory, focused attention and other "executive skills," but--unlike the WISC--it doesn't require much language.

The idea was that the autistic kids' true intelligence might shine through if they could bypass the language deficit. And that's exactly what happened.

The difference between their scores on the WISC and the Raven's test was striking: For example, not a single autistic child scored in the "high intelligence" range of the WISC, yet fully a third did on the Raven's. Similarly, a third of the autistics had WISC scores in the mentally retarded range, whereas only one in 20 scored that low on the Raven's test. The normal kids had basically the same results on both tests.


EDIT: Here's a link to the paper describing the study referenced in the Science Daily article. Recommended reading, since researcher Michelle Dawson has offered a few clarifications regarding the paper and how it was characterized in the press release.

I'd be curious to know what some of you statistically-minded folk think of the idea of "g as a statistical myth", as described in the first article I linked to. A lot of the discussions of intelligence and "g" I read around the Web are dominated by people who seem highly confident that factor analysis supports the notion of g, but I would like to know whether that confidence also extends to assuming that supposedly "g-loaded" tasks are actually accomplished as a function of the same underlying "property".

It seems to me that to make such an assumption, a person would have to ignore all the evidence that different kinds of brains may, in fact, operate and solve problems differently (and that while one skill might correlate with another in a typical person, this isn't necessarily the case for a less typical person).

8 comments:

Xuenay said...

I'm a little confused about what the author's actual point is. First he seems to argue that g is simply a statistical artifact, but then he mentions that g does correlate with working memory capacity, and that "all of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by age 30." And since IQ has indeed been shown to predict life success, I'm unsure what the actual criticism is.

AnneC said...

xuenay:

I took the author's main point to be that the thing people have come up with (via factor analysis) and called "g" isn't necessarily traceable to one single cognitive property or "style", but that scores on particular tests could occur for different underlying reasons.

That is, brains might be solving supposedly "g-loaded" problems in completely different ways, to the point where positing a "general intelligence factor" to explain why brains can solve problems of a particular type is somewhat misleading.

It's an important observation to make, IMO, because there are a lot of people going around talking about how it would be a good thing to "raise people's IQs" -- that is, the notion of "cognitive enhancement" is often discussed in terms of somehow modifying the brain so that the person whose brain has been modified will score higher on an IQ test.

While I'm certainly all in favor of people being able to modify themselves however they wish, I don't think it's very likely that we'll discover some kind of "g module" in the brain that can be modified the same way in everyone, to the same effect.

I think that as cognitive science and neuroscience progress, we're going to find a lot more differences between individual brains than most people imagine there are today. As that happens, the idea that there's one "general intelligence" factor you can simply find and manipulate to make people "smarter" will probably wane somewhat.

I would actually be leery of a modification which claimed to be capable of "raising IQ" -- it would be important to know exactly what systems it affected, and what other effects it might have on the brain. IQ tests do not, after all, test for things like artistic ability, musical ability, writing ability, creativity, ability to understand specific academic areas like physics, etc.

Also, with regard to IQ predicting "life success", since the very inception of IQ testing (which, incidentally, began in France and was intended to identify schoolchildren who needed extra help with academics), most such tests have been predictive merely of a person's likely success relative to the prevailing status quo. If you look back at some of the earliest IQ tests, they seem almost laughably inane -- one of them, which presumed to test "mental age", required that a person with a mental age of 6 be able to classify pictures of people as either "pretty" or "ugly and deformed".

Nowadays, aesthetic preference is not generally considered a hallmark of intelligence. But it certainly was at one point in time, and there are probably things on modern IQ tests that will eventually have us scratching our heads as to why we ever thought they mattered.

Xuenay said...

I don't know if the article really challenges the cognitive enhancement claims - after all, if g is normally distributed, that in itself already heavily suggests that it's made up of a variety of different factors, so that's hardly news.
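
(Quick sanity check of that intuition, for anyone who wants it - a sketch with arbitrary made-up numbers, just showing the central limit theorem at work: a sum of many independent factors comes out looking normal even when each individual factor is strongly skewed.)

```python
# The central limit theorem at work: a trait that is the sum of many
# independent factors looks normal even if each factor is badly skewed.
# All numbers below are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_factors = 20_000, 500

# Each factor is exponential (strongly skewed), nothing like a normal.
trait = rng.exponential(size=(n_people, n_factors)).sum(axis=1)

# Standardize and compare quantiles with the standard normal's.
z = (trait - trait.mean()) / trait.std()
print(np.quantile(z, [0.025, 0.5, 0.975]))  # close to [-1.96, 0.00, 1.96]
```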

I suspect that IQ is indeed mostly a test of working memory, and that the most g-loaded tests are the ones that measure working memory most directly. Looking at this from the viewpoint of cognitive enhancement, it certainly will make it a bit harder to modify our brains so that everybody has a genius-level working memory - but on the other hand, if working memory is made up of a variety of "submodules", then it will probably be easier to increase everybody's IQ at least somewhat, since we can concentrate on whichever submodules happen to be the easiest to improve in each particular person.

AnneC said...

Hmm. Working memory for what? In my experience, people seem to have different levels of working memory for different types of data and information. Additionally, some people who test as having an apparently poor working memory are still able to figure out complex problems over time.

Xuenay said...

Working memory in general, and working memory capacity in particular. See the paper linked in the Three-Toed Sloth article, page four in particular.

As for having different working memories, it's well known that practice in a particular domain can help you pack more stuff into your working memory. The classical figure of holding 5-7 "chunks" at a time has been discredited, IIRC, but the concept is the same - with training you can learn to pack more information into each chunk. For an illiterate person, each word in a letter might require a separate chunk, while for others a well-remembered poem could fit into a single chunk. Practice doesn't increase your working memory capacity as such, but helps you better compress information so that it fits in there, so those with a larger WMC maintain an advantage. SciAm's "The Expert Mind" covers this issue, especially pages 4 and 5.
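
(If it helps, here's a toy illustration of the compression point - the span of five and the chunk sizes are invented for the example, not measured values: with the same fixed budget of chunks, a richer learned "codebook" holds far more raw letters.)

```python
# Toy illustration of chunking: working-memory span is fixed in chunks,
# but practice changes how much raw material fits inside each chunk.
# The span of 5 and the per-chunk sizes are invented for the example.

CHUNK_SPAN = 5  # chunks held at once (illustrative, not a measured value)

def letters_held(letters_per_chunk: int) -> int:
    """Raw letters retained, given how much one learned chunk packs in."""
    return CHUNK_SPAN * letters_per_chunk

# One letter per chunk (no codebook), one word (~5 letters) per chunk,
# or one well-remembered poem line (~40 letters) per chunk.
for label, size in [("letters", 1), ("words", 5), ("poem lines", 40)]:
    print(f"chunking by {label}: {letters_held(size)} letters held")
```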

Tardigrade said...

An analysis of Raven's "Advanced" Progressive Matrices (compared to "high-range IQ tests") by someone who believes in a general "g" factor, but who thinks g may not be entirely related to supposed high-range IQ: http://www.paulcooijmans.com/statistics/rapm_r.html

Tardigrade said...

My thinking is that the testing of g is often biased by the format of the test modalities (a la what Three-Toed Sloth mentioned vis-a-vis the WISC's verbal loading).

I also think that, to the extent g exists, it partly reflects people's ability to flexibly apply the abilities they use by default to problems those abilities didn't evolve to handle in the first place. Secondarily, it may reflect the ability to fall back on weaker or less-used abilities when it seems that the default ability isn't going to work well. Whether most IQ tests can actually test for this switching ability is another question I don't have an answer to.

Josh said...

That piece you linked by Cosma Shalizi, "g, a Statistical Myth", is incorrect.

The main thrust of his argument is that test data do not statistically support a g-factor. Gould tried to discredit g, but his argument was statistically incompetent (for a statistician's critique, see Measuring Intelligence: Facts and Fallacies by David J. Bartholomew, 2004). Shalizi's criticism is incredibly sophisticated, but likewise incorrect. In a nutshell, Shalizi is trying to argue around the positive correlations between test batteries. If those correlations didn't exist, his argument would be meaningful. However, these intercorrelations are among the best-documented patterns in the social sciences.

Cosma Shalizi also misrepresented Spearman and his two-factor model. He tried to present Spearman as ignorant of group factors (he should have called them out as such, or noted that they belong to the second stratum). The fact is that Spearman gave up on the two-factor model and accepted group factors. Beyond that, the predictive validity of group factors typically falls in the range from zero to about 4%. In other words, the two-factor model is not rigorously correct, but it captures virtually all of the practical validity of any test.
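
(To make the positive-manifold point concrete, here is a sketch of my own, with made-up loadings: under Spearman's two-factor model each score is a loading on a single g plus an independent specific factor, which forces every between-test correlation to be positive and equal to the product of the two tests' g-loadings.)

```python
# Sketch of Spearman's two-factor model: score_i = loading_i * g plus a
# specific factor independent of everything else. Under this model the
# correlation between tests i and j is loading_i * loading_j, so the
# whole matrix is positive. Loadings and sample size are made up.
import numpy as np

rng = np.random.default_rng(2)
loadings = np.array([0.8, 0.7, 0.6, 0.5, 0.4])
n_people, n_tests = 50_000, len(loadings)

g = rng.normal(size=(n_people, 1))
specifics = rng.normal(size=(n_people, n_tests))
scores = loadings * g + np.sqrt(1 - loadings**2) * specifics

observed = np.corrcoef(scores, rowvar=False)
implied = np.outer(loadings, loadings)

# Off the diagonal, observed and implied correlations agree closely.
off_diag = ~np.eye(n_tests, dtype=bool)
print(np.abs(observed - implied)[off_diag].max())  # small, ~0.01
```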

For a discussion of the neurological correlates of g, see this discussion by Paul Thompson, Professor of Neurology at UCLA:

www.loni.ucla.edu/~thompson/PDF/nrn0604-GrayThompson.pdf