I've been following this critique (and the various debates it seems to have prompted) with great interest.
You see, when I first began to transition from being someone who mainly reads online to someone who also interacts online, one of the first groups/subcultures I encountered was the one consisting of self-described "transhumanists".
Upon encountering these folks, I was mightily intrigued -- here, at long last, seemed to be a congenial community of science geeks who were all interested in many of the same things I was: life extension, robots, neurology, etc.
Additionally, many of the transhumanists I encountered seemed considerably more positive and appreciative of the sheer grandeur of existence than average -- that is, after years of thinking that I might very well be a rarity in my general tendency toward exuberance, here were a whole slew of other folks that saw conscious awareness and its interaction with the myriad patterns of reality as a good thing.
That was my first impression.
And -- it was a decent first impression, to be sure. I started communicating with people who seemed to have interests very similar to my own, and I really enjoyed the discussions I ended up in.
It wasn't anything I saw as a Really Big Deal. It certainly wasn't a "turning point in self-discovery" or some kind of grand identity revelation. It was more that I'd simply discovered that there were others who shared with me a particular set of interests and concerns. So I stuck around.
And now? Well, lately I've been doing a major round of mental housecleaning with regard to how I think about ethics, politics, and what it means to associate with a group.
I generally align myself only uneasily and tentatively with "subcultures" that aren't directly and concretely tied to something I'm a fan of (e.g., science fiction or Buffy), because part of me always feels like I'm "lying" if I call myself X, without truly understanding what X stands for.
Additionally, if I find a particular position to be consistent with the principles I hold, I don't bother running it through a "subcultural filter" before adopting it. Nor do I hesitate to reject things I'm told are consistent with my supposed affiliations if I don't happen to agree with those things.
I used to think this was "normal", but I'm beginning to suspect it's actually quite rare. One of the things I'm finding difficult to deal with in some of the discussions I tend to end up in is the apparent tendency on the part of many others to cling to rigid, abstract ideologies.
Basically, it seems sometimes like everyone is looking for a proper "ism" to adhere to and execute in their navigation of reality and of the various ethical dilemmas and decision cases it presents.
And -- I can't do that.
Not because I've got some kind of elitist, "I'm above all that" attitude, but because my brain simply doesn't work that way. What understanding I currently have of the few "isms" I'm somewhat familiar with has not come from reading books or articles that introduce and explain the tenets of those "isms", but from first looking directly at the world and its patterns, and at how people behave in certain situations, and then (often by chance) recognizing some occasional correlation between a pattern I've observed, and something I read about in a philosophy article on Wikipedia, or on someone's blog.
This is not "anti-intellectualism" on my part (per my understanding of "intellectualism", which is probably limited to begin with) -- it's just that I think it's perfectly possible to process information in an intellectually robust manner without obsessing over what predetermined system one's ethical and rational tendencies adhere to.
I've found something useful in most of the philosophy I've read, though most of the usefulness has been uncovered after the fact of coming up with some observations about The Universe And How It Works. For instance, I was practically bowled over when I discovered the concept of "existentialism" at around age 20, because some of what I read in that literature so keenly seemed to reflect the sudden sense of transparency, vulnerability, and ephemerality I fell into at that age (the usual postadolescent "Wait, you mean I'm not invincible?!" stuff).
But: I simply cannot start with a description of some philosophy or abstract way of representing people as "resources" or "data points", or of representing reality as something to be fed into a "utility maximizing function", and figure out how to make decisions or behave from that point.
My friend Amanda wrote an article a while back entitled "Politics, Ethics, and Mental Widgets". This article expresses very neatly some of the cognitive aspects of understanding and discussing politics and ethics that can make certain kinds of discourse inaccessible to some of us.
I have a confession to make that might startle some people: I’m not capable of holding a complex ideology — what I call a set of abstract mental widgets all connected to each other in the sky — in my head. If I try, it falls apart rapidly. I can’t sustain it, I can’t even fully build it, and I certainly can’t believe in it. I used to try, because I thought that it was a measure of my stupidity or something that I couldn’t. And my brain turned to mush every time and I got really frustrated and miserable. I’ve since learned that that’s simply not my strong point and there’s no way on earth I could do it and would be better off putting my cognitive resources somewhere more useful.
My experience is very similar to hers -- I do not, and cannot, hold "complex ideologies" in my head, at least not under the heading of "complex ideology". Sometimes I end up coming up with something that turns out to resemble an existing complex ideology or "mental widget set", which occasionally compels me to try to identify with that existing ideology (for the sake of having common terminology with which to discuss certain ideas I've never before been able to put in terms anyone else can relate to).
But really, I think this might tend to mislead people at times into thinking that I'm much more capable of grokking abstract widget-systems from the "top down" than I actually am. I know that there have also been times when I've misled myself via sloppy pattern-matching of things I think to ideological constructs that look like they're similar to what I think.
It seems, though, that for some people it’s either easier or preferred (it’s hard to tell which) to memorize all the proper mental widgets, and to violently force the world (or at least make a serious attempt) to bend to the shape of the widgets.
This doesn’t mean that people who apply mental widgets this way always get things wrong, or that I and others like me always get things right, or that I always disagree with people who use mental widgets (whether both of us are right or both wrong). We’re all fallible human beings, and sometimes mental widgets can provide a shortcut to the right answer. But overall the mental-widget approach to ethics and politics strikes me as far more violent, hateful, impractical, disconnected, and damaging, even if it’s also aesthetically pretty from a certain standpoint and fits very well into academia.
In many respects, I see Amanda's point regarding the "violent, hateful, impractical, disconnected, and damaging" nature of the "mental widget" approach to politics and ethics as very pertinent to something in one of Dale Carrico's latest posts -- something that very nearly made me exclaim aloud, "Yes, that's exactly it!" Dale writes:
it actually seems to me that what little general public traction bioconservative discourse actually gets (since the fact is that almost everybody actually champions healthcare in the service of longer healthier lives, and since most people who live in secular multicultures actually prefer them to police states) derives from its appeal to people's very sensible anti-corporatist and anti-militarist attitudes. Bioconservatives commandeer what should be a technoprogressive critique in the service of their own actually socioculturally reactionary aspirations. Since my own critique of Superlative Technology Discourses foregrounds this very connection of so much prevailing "pro-technology discourse" with elitism, reductionism, indifference, and exploitation it seems to me it actually functions to deprive bioconservative rhetoric of its one current advantage as a technocentric analytic framework.
Superlative technocentrics themselves typically respond to bioconservative formulations instead by misframing all of this as what amounts to a battle between Science and Religion, in which they cast themselves in the role of the Champions of (a reductively and monolithically misconstrued) Science and all of their foes as champions of a fundamentalist or New Age religiosity (misconstrued as a matter of epistemology when fundamentalism is more usually and more crucially a matter of politics) -- all of which has the misfortune of being both mostly wrong and also completely stupid.
The way I interpret Dale's observation here, and relate it to Amanda's observation noted above, is that "superlativity" discourse in many ways represents a grand Battle of the Widgets. In the misframing of technodevelopmental discourse as an epistemological argument between the self-proclaimed defenders of rationality and their perceived opponents (who are assumed to be entrenched in backwater dogmatism and the rudest of intellectual poverties), real people stand to get hurt, or neglected, or simply disregarded.
I am, like Dale, quite concerned about the "elitism, reductionism, indifference, and exploitation" that exists in some ostensibly "pro-technology discourse" -- even though I've long considered myself an enthusiast when it comes to neat gadgets and nifty machines. And it's not devices I take issue with, but the conclusions about reality (and people) that are sometimes drawn from their existence and how these conclusions are applied to individuals.
For instance, if you consider IQ tests to be a kind of technology, the manner in which many such tests are administered and interpreted can quite literally decide the course of a person's future -- there are, for instance, people living in squalid and/or abusive institutional conditions right now on the basis of a test score. This is preposterous. But no matter how preposterous, our present culture functionally enables it through defining, in the form of a grand tapestry of mental widgets, some persons as nonpersons (or sub-persons, or "persons with low mental age") via a number on a test that supposedly indicates their level of awareness or capacity for complex thought, or something.
This is only one example of what I see as a damaging form of reductionism. And it is only one symptom of a problem I've had for a long time with transhumanism, though I realize it is not a problem unique to transhumanism. I've actually seen more frightening and hateful and fearful statements coming from more mainstream sources than from nominally "transhumanist" or "futurist" sources -- but -- and this might be an aspect of the superlativity critique that some are missing -- for a movement that wants to consider itself at the forefront of radical positive change, transhumanism isn't doing enough (in my estimation) to avoid slipping into stodgy parochialism.
The way I see it, anyone who can't tolerate a world in which Deaf, autistic, or otherwise-atypical persons continue to exist is not prepared to deal with a world in which forms and functions vary beyond the dreams of generations of sci-fi and fantasy authors. Anyone who cannot see anything other than the standard set of normative human abilities as a means to a "good start" in life is going to have a seriously hard time with a world of prostheticized and implanted and exuberantly decorated beings -- the very sort of world the technologies they claim to encourage may bring about.
I've at times been horrified to see important issues like disability rights (which are really very much civil rights, and strongly tied to concepts of morphological liberty) being framed by some as a battle between the "progressives" (who, for some reason, are expected to agree with Peter Singer on how to deal with disabled children) and the "disability activists" (who, for some reason, are often considered "reactionary" without anyone really bothering to consider what they are actually saying). Can't one be a progressive disability activist? And can't one discuss the ethics of a very difficult dilemma without being relegated to the "extremist" camp for merely raising questions like, "Well, how valid are concepts like 'mental age'?" I should certainly hope so!
Per my own philosophical tendencies, I'm much more about people choosing for themselves which of their "limits" they'd like to push or overcome via modification than about some grand council ("We") deciding which factors constitute unacceptable "inequalities" and working to systematically eliminate them. Too many people in general seem to lack the ability to tell the difference between equality and sameness. As a result, even very well-meaning folks can end up overly dazzled by ideas like "maximizing the utility function of the universe" in ways that ignore and harm individuals who might, you know, have a different take on matters, or who are simply not in much of a position to have their thoughts heard.
The post of Dale's I quoted above describes precisely the weird sense I've gotten at times, wherein I've found myself dissatisfied with the dismissal of certain important concerns as "luddite". I've found myself seriously pondering at times where, if anywhere, I "belong" on the philosophical spectrum as someone who supports "weird" ideas like cryonics and consensual self-modification (to the point of someday finding it commonplace for people to replace functioning natural parts with mechanical ones if they happen to like the mechanical ones better), but who also finds it abhorrent that potential parents might ever be coerced or pressured into using genetic techniques to assure a nondisabled neurotypical child. And at this point I'm at the stage of realizing that it doesn't matter whether there's a word to describe where I sit in relation to anything or anyone else. I don't need a widget to tell me I can't support X so long as I support Y, or that in order to be a "real X" I need to support Z, even if Z conflicts with my principles.
"Progress", in my mind, is not something that can be found in imagining a world full of shiny, normal-plus people running around enjoying shiny superlative luxuries that everyone will somehow magically have access to once the "anti-technology" folks see the light. Rather, "progress" consists in the making of consistent, simultaneous improvements in scientific understanding, innovation, and ethical reasoning. Progress is only partly about building objects; it is also about cultural self-examination of the sort that enables people and groups to see how their apparent "objectivity" is often grounded not in revolutionary vision, but in status-quo-supporting theories about people and reality.
Therefore, I would hope that any group expressing a desire to seek and enable extreme levels of morphological freedom (as well as opportunities to self-modify or keep non-normative forms) would naturally understand that a proliferation of mutually exclusive, yet equally rich and valid, life paths -- not a race of superlative superpersons all meeting 2007 fashionability criteria -- ought to be the direction of thrust into the future. And I wouldn't say that transhumanism (at least in its usual formulations) couldn't stand some critique in this regard.