Sleight of Mind
“The fact is, science has developed a concept of hard, sober intelligence that makes the old metaphysical and moral ideas of the human race simply intolerable, even though all it has to put in their place is the hope that a distant day will come when a race of intellectual conquerors will descend into the valleys of spiritual fruitfulness.”
— Robert Musil, The Man Without Qualities
Look around and you will not fail to find that intelligence has become one of our most fetishized ideas. We have a toxic fixation on the “prestige” of elite universities and a guilty-pleasure-like fascination with vapid metrics: SAT, ACT, IQ, EQ, GPA, ... Child prodigies make regular appearances on our talk shows and have become the heroes of our blockbuster movies. I mean, just think of Apple’s “Genius Bar” or “Think Different [sic!]” slogan. And, in a quasi-religious culmination of this social attitude, our culture’s cultish techno-dreamers are enraptured by the fantasies and doom of artificial intelligence. Not all of this, of course, falls under the explicit guise of ‘intelligence’. But the idea that this word conveys remains undoubtedly the common thread, however frayed it may be.*
In the wake of Christianity and the Scientific Revolution, we have inherited a somewhat paradoxical ideology of intelligence. Intelligence is conceived as being at once a transcendental power and a scientific phenomenon composed of some socially isolatable cognitive structures: memory, creativity, foresight, awareness, problem solving, and the like. In creating such a conception of intelligence, people of this mindset treat the word as though its true meaning, its real definition, were, comparatively, quite removed from social or affective matters—like, say, mathematical objects or physical phenomena. But despite the general acceptance of this ideology, science has actually been unable to provide us with a canonical definition of intelligence. In an ironic twist, reflexivity seems to blunt Descartes’ most precious tool...
This definitional elusiveness notwithstanding, we all seem to have a pretty decent ability to recognize intelligence in a person. Such a fact, of course, is, as words go, neither surprising nor problematic in itself: We simply know how to employ the word ‘intelligent’ in different contexts. The trouble lies in the scientistic privileging of this structural conception of intelligence. The trouble lies where these Scientists—some consciously and fanatically, others subtly and unthinkingly—allow this conception of intelligence to account for others which are in fact only tenuously connected. And so this asocial (or, purportedly, supra-social), structural intelligence—which, by construction, ignores most of our intuitions about human intelligence—attempts to hegemonize our various understandings of the phenomenon by explaining all the rest in its own terms.
Naturally, then, many of us accept as self-evident the asociality of intelligence’s causal mechanisms. But here we are falling prey to mystification. Our failure to examine critically the various ways we use the word ‘intelligent,’ I believe, is unacceptable. For in this impoverished view of intelligence, we wrongly leave out the social interactions and dynamics that constitute an essential dimension of the word’s meaning. As I hinted at above, it is this social aspect of intelligence’s original use—invariably involving judgments of other people: how they appear to and engage with us, the behaviors that they exhibit in certain settings—that I am emphasizing. For at the core of human intelligence lies an illusion—or, more precisely, an inequality of understanding between the person deemed ‘intelligent’ and the person carrying out this speech act, which begets an illusion. And this illusion is a very telling one, indeed. For our desires here are clouding our judgment, commanding our perception. What we wish to be the meaning behind this mystery is merely illusion. What we see is nothing more than fog. This may not, of course, be a bad thing in itself—but why would we want a quantitative, scientific theory of fog? Moreover, the categories and analogies we use to understand our own minds unavoidably affect the ways in which we behave and think. If we think of ourselves as mere computers, is it really that surprising if we start to act and think like them, too?
So it is my goal in what follows to undermine our deeply entrenched divinization of intelligence by expounding this social-performative dimension as well as its corresponding notion of the specialist genius. I do so not because I have some positive or useful theory of intelligence to offer but rather because I think examining intelligence in this way will provide a suitable antidote to our culture’s current conception of intelligence—and to its intellectualist dogma, which many of us continue to cling to so dearly. Finally, we will see how the burden falls upon us, in the very language we use and narratives we espouse, to upend the sacred throne of intelligence.
We often observe someone so advanced in a particular domain of activity that we judge them to be “clever,” “intelligent,” or, in extraordinary cases, “genius.” (I am glossing over the trivial distinctions between these words to focus on some common sentiment conveyed by them all, which should become clearer below.) It is invariably a domain that we value: some academic discipline, an art form, a profession, conversation, or games like chess. But often their advancement seems to us (i.e., those not quite as experienced in this particular domain as the person in question) so instinctual, so effortless, so inexplicable, that we want to conclude it is indicative of a “natural gift”—some fluke of nature that cannot possibly be accounted for by experience alone. For, after all, our experience cannot attest to that. Such judgments of intelligence, it seems, bear an aprioristic connotation, a certain sense of innateness, as if the advancement itself, or some kernel of it, lay dormant in the individual as a pre-realized pre-adult. Intelligence here is thus viewed as being less performative than other attributes which might emerge out of, or be ascribed to, the social roles people play. In addition to the innateness there is a sense of interiority in this conception: Intelligence here is some intrinsic thing which only unravels publicly, as opposed to being the public display itself.
Now, of course, in many (if not all) of us there may lie certain predispositions (that, too, is a loaded word) which contribute to the dogged pursuit of self-improvement in some particular domain. For example, someone who has absolute pitch may find more immediate rewards in learning a musical instrument than someone who does not have this gift. In fact, some recent (and some not so recent) scientific research in learning argues that there are peculiar kinds of rewards for making progress in learning, like there are for, say, having sex, and that people choose to learn and explore domains in which they can make the greatest “learning progress.” Predispositions, then, can and do contribute in this way to the development of proficiency in a domain, and in domains where progress toward mastery is perhaps more clearly defined, improvement is even more palpable. Think chess, math, music—anything with rather explicit levels of achievement—in contrast to, say, applied ethics, which clearly cannot be mastered as quickly as, say, the bebop idiom.
I’m guessing that you may already have some objections to this line of thinking. “Is there not a sense in which intelligence is general—i.e., domain-independent? And isn’t that precisely what we mean by the word?” Yes, that does seem to be the suggestion. But here we are wishing some generalized abstract entity into existence without, perhaps, due reason. And, as a result, we assign an almost mystical power to it. For “general intelligence” could simply refer to someone who has displayed proficiency in several domains. That their skill levels in these various domains are perhaps somehow connected to one another (like pretty much everything else in the brain) does not mean that they possess some special, more general “gift.” The reasons compelling us to posit such a gift, then, must lie elsewhere—and such reasons are precisely what I’d like to extirpate in writing this essay.
Now this notion of general intelligence that I’m attacking is not the same as the fact that we make judgments of human intelligence holistically. Take, for instance, the contention that a computer may one day be able to write a Great American Novel. I believe that we would not be able to build a computer that could just write a Great American Novel. We would have to build a whole person, one that could live in, absorb, and interact with American society. Only then could it produce the kind of book that has the potential to move so many people. But this does not mean that we should speak of all the writers of Great American Novels as possessing some peculiar metaphysical power, some genius, or some kind of intelligence. This may be part of the social role they play in society, but it is mystification to speak of innate talent or genius.
The psychologist Howard Gardner has proposed a theory of “multiple intelligences” in which there is no general intelligence but only domain-specific ones, and it seems to sound a lot like what I’m saying above. But this is only superficially true, for Gardner is simply pluralizing the notion of intelligence, while still taking it to mean some kind of asocial, inner aptitude. As such, he does not actually overcome this misunderstanding but, rather, reinforces it. So do the other “theories,” like Spearman’s g factor, which measures the positive correlations between a subject’s performance on different cognitive tasks, supposedly an indication of some underlying general intelligence. But it seems to me that it’s rather an indication of the underlying similarity and narrowness of the tasks themselves. While this is something that seems like it could be empirically verified (certain tasks are more similar to others and so should have higher correlations), this is not necessarily the case. Perhaps subjects simply have different levels of experience (or skills that would allow for success) in certain tasks with trends that are not discernible to the experimenter.
Now I’ll show my hand. I want to claim that what we mean by calling someone intelligent in the ‘original’ social context is not simply, among other putative implications, that they have incredible passion or motivation or intensity, as someone like John Stuart Mill might say. Although these qualities, and related ones, may very well be necessary causes of someone’s appearance to us as intelligent, none is sufficient to explain this phenomenon. Motivation in itself won’t get us intelligence; what matters more is precisely what the motivation is directed at.
I don’t pretend to offer some fixed set of necessary and sufficient conditions for our judging someone to be intelligent. In fact, I’d be skeptical of anyone who attempts such a quixotic project, and even more so of anyone who claims to have solved it—skeptical of both what they have to say and their motives for doing so. But I do have another necessary condition to offer: What happens when we call someone a genius is that the person who receives this loaded judgment (and often dangerous compliment) is in a sense masking the development, the hard work and focused attention, which their apparent effortlessness belies. This concealment may be deliberate (if one is trying to impress) or it may not be. But, either way, something crucial is hidden from the judge—not everything is hidden, of course, which is why a computer can pass the Turing test and yet we still might not want to call it truly intelligent—but something crucial is hidden, or left mysterious to the judge, and that makes all the difference. Consider how once one spells out the process whereby some product has emerged—a product worthy enough for us to deem its producer ‘intelligent’—invariably the “genius” or “intelligent” label is mollified, and sometimes exposed as mistaken, like when a magician reveals the secrets to his illusion...
Take, for instance, the theatrics, deception, and overall mystery of Sherlock Holmes as well as the depictions of other “genius”-types in literature. Invariably, certain (usually banal) aspects of these characters are left out, creating incomplete, flawless personae, whom we, in our dumbfounded ignorance, judge to be superior to the ordinary person. But all we know about these kinds of characters are contrived pictures of their personality and behavior—contrived to accord with our preconceptions. Though we are certain of his genius, we know nothing of Sherlock Holmes’s IQ and SAT scores—although we would love to know them, wouldn’t we? And, even more, wouldn’t these writers just love to give these characters absurdly high IQ and SAT scores so as to fit their personalities neatly into that most precious box which all their viewers/readers are just waiting to fill? What we have here, ladies and gentlemen, is a self-creating, self-perpetuating stereotype—and a mythology to boot. But for what?
Think of some of the characteristics of this stereotype. Think of how many of them are symptoms of Asperger’s syndrome. What does this fact reflect about our society’s values? In other words, what does our society have to be like, what does it have to prioritize, for us to hold as the ideal of human greatness and genius the possessor of certain cognitive abilities? What impoverished imagination! It’s almost as though our ideal for intelligent computers (automated, specialized, asocial, deeply non-human in a sense) has become our ideal for human genius as well. But whatever happened to the generalist genius—the polymathic humanism of Goethe, da Vinci, Galileo, Leibniz, etc., or the Romantic genius of Beethoven—and why was it replaced by this ideal of the worker-bee specialist genius?
So what I’m reacting to is something wrong, deeply wrong, that I find in the typical language we use to conceive of intelligence, genius, and the like. The fact that human intelligence can only occur in a whole person, who is embedded in countless social structures, has been completely forgotten or dismissed as irrelevant; on that view, all that matters is the internal cognitive and organizational structures. But the social structures, for example, are absolutely essential to any understanding of human intelligence. For many of the words used to define intelligence in scientific circles are themselves utterly inextricable from social considerations. This is a fact which many researchers pretend to ignore in order to make mathematical models, general theories, and the like—in short, to do science in the style of mathematical physics, which is all well and good. But, much like the egoistic homo economicus of classical economic theory, this intelligence is an abstraction that both presupposes and shapes certain social attitudes which we have very good reason to challenge. That is, this conception of intelligence seems to be a rather pernicious receptacle for a whole cluster of outdated -isms. All this is not to say that the social-performative dimension is intelligence’s sole meaning, but rather that to remove the social dimension in explanation is to neglect an essential aspect of the meaning.
Think back to what Wittgenstein says about words as tools—even words about mental content. We need to see ‘intelligence’ in these different contexts as a verbal tool for communicating judgments of, and expressing sentiments about, others. I’ll put on my Nietzschean cap for a moment: ‘Intelligence’ first served as a mainly social tool in describing human behaviors or appearance in various domains, but it then took on another, less human, more ethereal meaning. The scientific mindset, in its insatiable lust for abstraction, has attempted to replace this original concept of intelligence with a new one so general, so abstract, that it is free from human qualities. Apart from being simply narrow-minded, this view itself has some rather abhorrent social repercussions, perpetuating many insidious superstitions. I think a large portion of these social ramifications can be explained as follows. Because these judgments invariably express some kind of wonderment at this concealment and the person performing it (whether they conceal it unwittingly or not), they immediately establish a hierarchy: The subject of the wonderment is ineluctably put in a position of subordination to the object of the wonderment.† It is not hard to see how this kind of hierarchy can lead to dangerous, misguided questions about race and IQ, to wrongheaded educational policies, and even to the completely preposterous but pervasive belief that the sheer complexity of what we mean by ‘intelligence’ could be somehow captured by a single (!) number, IQ, or g, or what have you.‡ It is thus on account of the hierarchical inevitability of this narrow conception of intelligence that I think we ought to decisively put the kibosh on it and to recognize the inequality which underlies it.
“But—but hold on for just one second here,” you might be saying to yourself.
“Intelligence, broadly conceived, is not just a bag of tricks, even though its social origin may be. And, regardless, are you saying that because some kind of illusion is at the core of intelligence, we will want to think that we have intelligent computers only insofar as they continue to fool us in some capacity?”
To the first point, I’ll agree. In fact, I’d add that the word has come to take on a plurality of meanings—though some of them are not very well-defined. Outside of the haughty scientific context, this fact, of course, is hardly a problem for the term—and I, personally, couldn’t care less. But think of the astounding difference between this social account I just offered of the label ‘intelligent’ and the transcendent meaning of intelligence that is championed by those in the AI community, a distinction which I hinted at in the first section. It remains self-evident that all these loftier meanings have emerged from some kind of loose resemblance to this original social judgment. And things start to become worrisome when one view of the word (i.e., the scientific intellectualist one here) hegemonizes all the others.
As for the second point, it’s close, but not quite. It won’t be surprising that we will have intelligent computers, if we have an automaton conception of intelligence. What could be so surprising about having computers that have been designed or trained to perform highly specialized tasks—and only those tasks? My point is that we should get rid of all this talk of “true” or “pure” intelligence—that is, a notion of intelligence so general that it captures the principles which underlie both human intelligence and computer intelligence. You know, the kind of thing that critics of AI appeal to when they dismiss Deep Blue’s chess-playing as a mere computational trick and not a “genuine” capacity for chess-thought. This is all to say: There will simply never be some single turning point (viz., Singularity) in the development of intelligence; there will never be some single program that we all agree displays “true” intelligence—because this is a fantastical notion that we’ve unduly abstracted from the complexities of human intelligence. The actual, human usage of the word ‘intelligence’ lives in, and was created by, a certain dialectical tension between (a) “knowing the trick” (i.e., the causal mechanism), which results in our reluctance to ascribe true intelligence, and (b) being the witness of seemingly miraculous feats that we can barely grasp in real time, and which consequently become deified as the manifestation of true intelligence. This is simply not the case with computers, which humans have built for whatever purposes, and not necessarily in their own image.
In light of this, take the example that Ryle gives in The Concept of Mind of judging whether a soldier’s bull’s-eye shot was a demonstration of his skill or just pure luck. Ryle says that to decide this “we should take into account his subsequent shots, his past record, his explanations or excuses, the advice he gave to his neighbor and a host of other clues of various sorts.” This example is meant as an analogy for how we judge whether someone’s actions exhibit intelligence. The point that I take from this is that such judgments are made holistically, synthesizing innumerable factors. There can be no single deciding factor and, so, no underlying principle as a sufficient cause.§ This is all because human intelligence is an inextricably social phenomenon in origin. It may be partially explained by biology or with computer models, but these will never fully explain human intelligence.
Since this holism is part and parcel of our judgments, we can see that the Turing test is rather simplistic. It is attempting to apply our judgments of human intelligence to how we ought to judge the intelligence of computers. But while such primitive functionalism might have been useful early on in the development of AI, it is clear that computer and human intelligence diverge too greatly for us to sensibly continue this analogy.‖ Human intelligence, thought, mental life, etc., are not just about producing sentences or performing certain operations. Every little thing—from the way one blinks during a conversation, to the barely perceptible intonations in one’s voice—matters. In short, the whole person matters. (And so does the “beyond,” the interior life, that these apparent behaviors represent.) But this is just not the case with computers. There is simply no reason—aside from some fanatical a priori commitments to behaviorism or functionalism or what-have-you, which fly in the face of the undeniable facts of our experience—to believe that computers, simply because they can perform some similar intellectual tasks or operations as us, have a comparable interior life.
Relatedly, we can see that it isn’t so remarkable that we will have intelligent machines, because in many respects we already do. While Turing’s contributions in this regard are invaluable, it is about time that we separate our notion and evaluation of computer intelligence from those of human intelligence: The meanings have just diverged too greatly. In his Principles of Psychology, William James provides a famous criterion for the presence of intelligence in some phenomenon: “the pursuance of future ends and the choice of means for their attainment.” This is a rather astute operational definition—for his purposes. But, if this is all there is to ‘intelligence,’ why would it, by itself, be so special, so supremely interesting to our society? Isn’t it a rather banal thing to be obsessed with? In other words, if James’s barren, value-empty definition were really all we meant by ‘intelligence,’ then I wouldn’t even have to be writing this essay.
I am asking us to look behind the curtain—to interrogate ourselves and our own values here. (What has compelled you, O curious reader, to read thus far?) What should be more concerning in the development of technologies is precisely what we keep developing the technologies for—what the values reflected are—because that will make all the difference as to what sense(s) of ‘intelligence’ these computers will come to embody. “Superintelligence,” of course, is not inherently a threat because there is nothing inherently threatening in this hyper-specialized automaton conception of intelligence. (There are, indeed, genuine technical concerns about building more autonomous computers—but none of these is inherent to the structural understanding of intelligence; they are all specific to the particular ways such computers are constructed—what they are built to do.) As I already said, intelligence’s pernicious hierarchizing aspect is inherent to our judgments of human intelligence, with all its social baggage, but not necessarily to computer intelligence. And this social baggage has almost nothing to do with how we conceive and evaluate computer intelligence.
If we are to keep using ‘intelligence’ responsibly, we need to stop thinking of it as some concept so special, so divine, that it is removed in its meaning from such “banal” considerations as social interactions and human objectives. As Wittgenstein reminds us, ‘intelligence,’ the word itself, is a tool. Here I have been attempting to draw out some particular, overlooked uses for this word-tool. In particular, I have been attempting to make you conscious of this subliminal, hierarchizing phenomenon for the purpose of overcoming it. Thus my aim here has not been accuracy but provocation. I surely overstated, misstated, and contradicted, but do know that this damnable erring was all in the service of your contemplation.
Perhaps it would be best to get rid of this confounded notion of intelligence altogether—to stop asking ourselves whether we will have intelligent computers and simply focus on better-posed questions about these technologies, questions that use less loaded vocabularies. We might ask ourselves, instead, in concrete terms, What do we want computers to do for us? Or, Why in the first place do we want ultra-proficient computational agents—perhaps ones that are versatile, too, proficient in various domains? For what exact purpose? Whatever it is, if the technologists choose to continue using this word ‘intelligence,’ they, by means of their AIs, ineluctably imbue it with a new, divergent meaning, washing away the old one in favor of an ‘intelligence’ that is thoroughly stripped of flesh and blood, fear and ambition, pathos and beauty. Intelligence here is no longer human but becomes post-human. And, as with any innovation, this leaves open a hopeful possibility: that one day, upon realizing the deeply unheroic, unmysterious, wholly utilitarian value of intelligent machines, we will do away with this divinization of intelligence once and for all.
* Nevertheless, given all our historical baggage, this current state of affairs is not really much of a surprise. The notion of genius, for instance, runs deep in our liberal tradition. Central to John Stuart Mill’s On Liberty are those most individual of individuals, the geniuses—hence, according to Mill, society’s corresponding duty to “preserve the soil” in which this fragile, eccentric elite can blossom.
† Tangentially, I speculate that this social view of intelligence explains why certain highly accomplished or precocious people often suffer from “impostor syndrome,” or the feeling that they are a fraud and are merely “fooling people.” Because judgments of intelligence are founded upon an asymmetry of perspective—i.e., the one deemed ‘intelligent’ often knows explicitly the banal processes, or magic-spoiling secrets, of the trick—they feel that what others find amazing is in fact not so special. The supposed ‘genius,’ unable to convey this fact properly to people who have not themselves experienced such steps toward self-improvement but who instead revere the genius from a distance, feels deceptive, because she is conscious of something which her admirers are not.
‡ It is worth repeating here the well-known fact that when IQ tests were first developed, women scored on average higher than men. The male psychologists devising the test, finding this result absurd, changed the exam so that the men performed on average at least as well as the women.
§ Compare judgments of intelligence to judgments of beauty. The beautiful whole is, so to speak, greater than the sum of its parts. (Or at least ought to be.)
‖ Consider, as merely one example, the difference between how psycholinguists understand human language processing and the completely divergent natural language processing computer programs that have been developed, with the use of “deep learning” and “deep neural networks.” The chasm is just too great, and it’s only getting wider, proving the Turings and Hofstadters of yore to be naïvely simplistic on this point.