Sunday, April 29, 2007

Of Boxes and Bias

An Analogy

Picture yourself in a room. There's a desk in the middle of the room with a pad of paper and some markers on it. You go over to the desk, sit down, and start to draw. You like drawing. You draw a detailed sketch of some of the trees you can see outside the window. You're just adding the final touches of shading onto it when someone snatches your drawing away from you. You're told that it is a good drawing, but that you couldn't possibly have done it -- because you're a Purple, and Purples cannot draw.

Frustrated but resigned, you go outside and take out your lunch -- apple slices and cheese. You are then told by someone walking by (who knows you are Purple) that you aren't actually eating apple slices and cheese, because Purples don't eat that sort of thing. Never mind the fact that you are sitting there in plain view, eating apple slices and cheese.

Later on, when another person asks what you had for lunch, you say, "Apple slices and cheese". The other person (who has also somehow heard that you are Purple) insists that you must be lying or mistaken. After all, Purples don't eat that sort of thing -- it said so recently in an article about Purples in a popular magazine. You think back to when you were first identified as Purple, and you don't recall there being any mention of apple-and-cheese aversion then -- it certainly isn't in the official criteria, and you tell the other person this. The other person then gets very suspicious and suggests that maybe you're not actually Purple at all, but pretending to be Purple in order to distinguish yourself.

He talks out loud to himself then, muttering something about attention-seeking kids latching onto a label for the sake of being special and different. Never mind that you were identified as Purple by multiple professionals, and that your developmental history reveals plenty of evidence of Purpleness starting in infancy -- the fact that you eat apple slices and cheese has somehow, in this person's view, "exposed" you as a Non-Purple with a desire to distinguish yourself. He says that you really ought to just suck it up and do everything in Non-Purple ways because that's just the way the world is.

You decide at that point to go home, so you get on your bicycle. You're riding along when someone flags you down, hands you three lemons, and asks if you wouldn't mind juggling them. You apologize and say that no, you can't juggle lemons. The other person is shocked -- they protest and insist that you must be able to juggle lemons. After all, you're riding a bicycle! And everyone knows that anyone capable of riding a bicycle is also capable of juggling lemons. You must just be too lazy to juggle lemons -- that, or you are refusing to juggle lemons at that moment because you want to make a spectacle of yourself.

You start to protest again, but another passerby shows up and interrupts. He looks at you and pegs you for a Purple instantly -- his nephew is Purple, so he knows the signs fairly readily on sight. He says that he saw a television show about Purples recently, in which it was explained that Purples can usually ride bicycles, but it is very uncommon for them to be able to juggle lemons. Relieved, you start to walk away, since this explanation seems to have silenced the Lemon Guy.

But you are stopped once again -- by the person who identified you as Purple, who saw the television show on Purples. He tells you that he really admires your bicycle-riding skills, but that he feels terrible that you will probably never experience the sublime joy that is lemon-juggling. You tell him that it's okay -- that as far as your priorities go, lemon-juggling is pretty far down on the list. He insists that while you may be able to get a job someday (particularly since you're a Light Purple, or at least, you appear to be while you are riding your bike), and while you may find some modicum of happiness, there is always going to be something fundamental missing in your experience of being alive. Because you can't juggle lemons. He can't imagine life without lemon-juggling. You tell him that you are a pretty happy person overall, but he mumbles something about "not knowing what you're missing".

You point out that he doesn't know what he is missing in not being Purple, and he says that while it's admirable you seem to have adapted to your condition, you are still living a diminished existence. It doesn't matter that you can do all kinds of other things (some of which he can't even do) -- the lack of lemon-juggling capacity is going to prevent you from enjoying parts of existence that ought to be every person's birthright. In his opinion. You ask why he gets to decide what parts of existence are more important than others, and he says that it should be obvious by just looking around -- that there are just some things that every person ought to be able to do in order to succeed in life. You ask him to define "success". His definition of success involves becoming a high-level lemon juggler, because, after all, "we are a lemon-juggling species".

This kind of thing happens to you regularly. And it is always this absurd.

The above analogy, while admittedly (and deliberately) bizarre in its content, is meant to demonstrate how "the box" -- that is, the limited set of traits and characteristics that a person (by virtue of belonging to a particular category) is supposed to exhibit -- can produce much in the way of absurdity and cognitive dissonance. This is why it is important to bring the concept of the box into the foreground -- a lot of people probably don't even realize that they're using the box, since social consensus often makes boxes of this sort functionally invisible to most people in a society. And this is why the box concept is particularly relevant for people who are configured in ways that don't permit them to sit comfortably in the various readily-available categories offered by default in their culture -- when you can't sit comfortably in what others consider to be a firm and fixed category, you run up against the walls of the box simply by existing.

Boxes Within Boxes

Biology defines "human-ness" through empirical factors -- from the biologist's standpoint, human is a function of DNA. However, not all people are biologists, and most probably tend to define humanity on the basis of fuzzier, messier factors like culture, language, behavior, and shared experience. Furthermore, human societies have a tendency to, on the basis of silent consensus, define particular subgroups of humans as more essentially human than other subgroups. The predominant shared experiences, behavior patterns, and even physical characteristics of the "essential" subgroup become reference points for determining the relative humanity of outgroups.

However, even a brief survey of history -- particularly history within the last two centuries or so -- reveals that the consensus as to who is essentially human (and more recently, essentially a person) cannot remain silent and unquestioned indefinitely. Genetic diversity, environmental change, scientific discovery, technological development and economic fluctuations all help enable periodic, unanticipated paradigm shifts. One such shift occurred when the practice of slavery ceased in the United States. Another occurred when women achieved suffrage. And now modern citizens look back upon times prior to those events as very dark indeed.

But despite the clarity of hindsight with regard to the transgressions and prejudices of our ancestors, the tendency to put some humans outside humanity (and outside personhood, for that matter) on the basis of what seem like rather arbitrary factors persists. Clearly, the question of who (or what) has human DNA is not asking the same question as, "Who can be a citizen?" or "Who is granted membership in the worldwide community of persons by default?" Personhood theory, formulated one way, offers that persons have particular qualities which are (or should be) substrate-independent -- that is, you shouldn't need to be human in order to be considered a person. I am in favor of this formulation of personhood theory since I have long believed that many nonhuman animals ought to be included in the category of persons -- persons fully deserving of the same respect accorded to humans.

I definitely think that personhood theory is probably already benefitting the Great Apes and dolphins of the world, which indicates at least some positive ethical development. But despite those ethical developments in the area of animal rights as a result of personhood theory, I still come across articles and comments presuming that autistic people are somehow missing some fundamental element of personhood. This seems to be, at least in part, because a human box (or perhaps more appropriately, a person box) has been established over time. So the problem with boxes isn't just limited to the box meant to comfortably contain those of us who are atypical in some way, but also concerns the overly-narrow set of parameters that are often invoked as conditions necessary for full personhood. In an article entitled The Big Question: How much do we know about the causes and incidence of autism?, Jeremy Laurance writes:

In the social world in which we live, the capacity to read situations and respond appropriately is crucial to success and can mean the difference between popularity and loneliness. Autism disturbs something that is core to our being human.

Notice how it is simply assumed that "we" live in something called a "social world" -- a world that apparently excludes autistics, not only from nebulous states like success and popularity, but from humanity altogether. Additionally, there is the assumption here that there is some kind of dichotomous relationship between loneliness and popularity -- as if somehow if you're not popular, you are doomed to a lifetime of miserable solitude. Aside from the fact that popularity simply isn't a priority for many people, including many who aren't autistic, it is not specified which subgroups of humanity a person needs to be popular within in order to be considered popular enough. Obviously no person is popular in every group -- perhaps the quote above is referring to the state of being "popular with one's peers", but even if that is indeed the case, it doesn't make sense to assume that popularity with peers is a good yardstick by which to measure a person's quality of life.(1)

Phrases like "the core to our being human" always need to be questioned when they are encountered. It is obvious that the quote above is not referring to human DNA when invoking the existence of a "core". Rather, it is more likely that the author is invoking something like "human nature" or perhaps Francis Fukuyama's "Factor X", as described in a review of Our Posthuman Future: Consequences of the Biotechnology Revolution in the Harvard Human Rights Journal:

[Fukuyama] is in favor of simply referring to the combination of these elements [of being human] as “Factor X.” The black box referred to as Factor X can be envisioned as an amalgamation of such elements as moral choice, reason, language, sociability, sentience, emotions, and consciousness. The crux of Fukuyama’s argument for regulation of biotechnology rests on the sanctity of Factor X. “We want to protect the full range of our complex, evolved natures against attempts at self-modification,” he writes. “We do not want to disrupt either the unity or the continuity of human nature, and thereby the human rights that are based on it.”

The similarities between the Laurance quote on autism and Fukuyama's notions of the "unity and continuity of human nature" are readily apparent. In both cases we have someone who first attempts to assert that there is some kind of ineffable "human-ness", and then goes on to explain the implications of this assertion in terms of what kinds of people or practices are a threat to this human-ness. In the first case, autism is the threat, and in the second case, biotechnology and self-modification are the threats -- human essentialism is invoked first to exclude a category of humans from humanity, and second to suggest that there is something sacred about human nature that demands a restriction on technologies that might make contemporary ideas about "what is human" fall to pieces.

Part of the reason I am generally supportive of the right to consensual prosthetic self-determination, and of efforts to further progress in biotechnology, is based on what I see as a tremendous need to break down the overall box that "humans" are supposed to be satisfied with. Humans are supposed to be satisfied with being typical humans -- we are not supposed to be satisfied with being autistic, just as we are not supposed to want to change ourselves in ways that threaten established (yet nebulously-defined) ideas about what "human nature" is.

In short, the "human nature" box exists as a direct threat both to the right to change and to the right to exist as an atypical being (since autistic and other disabled people are already functionally excluded from many formulations of what a "human" should be). Additionally, since it is likely that modification technologies will eventually become very easy to access and to apply, it is important to make sure that those technologies are not guided in the direction of keeping boxes intact, but in the direction of making them irrelevant and meaningless to all who encounter them. Morphological freedom, not the naturalistic fallacy, ought to be a guiding principle in envisioning society's future. And morphological freedom is in no way served by systematically denying that certain people do (or should) exist just because their attributes seem to place those people outside presently-delineated boxes.

Deconstructing The Box: Methodology and Action

But how can a person work toward helping to expose and break down boxes where they seem to be inordinately confining people and perpetuating over-limited interpretations of personhood? One potential way is, as Amanda Baggs has pointed out, contingent upon the willingness of people who are themselves atypical to live and to describe their lives in ways that ignore the box entirely. She writes:

If everyone who gives accounts of our life (publicly or privately) is busy letting fear of the box censor those accounts to only the parts acceptable to the box, then nobody facing similar experiences can learn from us, because our accounts of those experiences will be skewed if we mention them at all. While I can’t blame some people for force-fitting themselves to a box, it perpetuates the power of the box and makes others more likely to force-fit themselves as well, and becomes a self-perpetuating cycle that’s hard to break.

The "force-fitting" being referred to here is something that happens when people (who may be autistic, who may be atypical in some other way) are afraid to talk about the parts of their lives that don't conform to popular stereotypes about people "like them" -- that is, the things that don't fit in the box. And in many cases this isn't a matter of being "insecure" (as in the case of a man who won't admit he watches romantic comedies because he doesn't want people questioning his manhood), but a matter of not wanting to be constantly picked apart, scrutinized, or accused of malingering.

I honestly have no idea how I come across in most situations now. Part of me still can't believe I come across as anything but "normal", but I can't really disregard all those times as a youngster when I was told that I "must be trying to get attention" (even though I had no idea what I was doing, let alone that what I did could possibly affect the amount and type of attention I received). Or the whole thing that started when I was around thirteen or fourteen when people started accusing me of being on drugs (which I didn't always argue with, even though I wasn't on drugs, because I figured it was an improvement over "retard"). I don't know if I look like anyone's idea of "autistic", and I figure my presentation probably varies significantly from one environment to another, but in any case, I have certainly felt the pressure to "force-fit" at times. Even before I knew I was on the autistic spectrum (2) I remember feeling like I had to hide the things about me that weren't "consistent" with what I thought I was supposed to be.

The messages I received from my environment over time gradually led to a feeling that maybe I wasn't really human at all -- I was "supposed" to be all kinds of things that I just wasn't, and yet, there was a kind of internal consistency to how I was that I could not easily express or explain. By fourth grade I was seriously considering the idea that I'd actually been sent from outer space into the body of a human girl so that I could observe the people of Earth and take notes on them. This idea didn't last very long, as I soon realized there just wasn't any reasonable evidence that I was an alien, but the feelings of alienation and "otherness" remained. I tried to deny these feelings, not least in part because I knew there was a perception that anyone who lay outside established norms wanted to be there and was placing themselves there deliberately.

I told myself over and over again that I was just like everyone else, figuring that if I didn't, I was guilty of trying to be "special". Though I certainly wasn't about to give up my real interests or preferences (and though I couldn't exactly change my ability set to fit a more standard profile), I did start questioning myself constantly, and I was actively afraid that my interests and ability manifestations represented some kind of subconscious desire to distinguish myself and in doing so inconvenience everyone around me. I was terrified for years that I was fundamentally bad and selfish. This fear did not come out of nowhere, but was rather prompted by many real incidents, such as when a girl in my tenth-grade math class actually grabbed a book I was reading away from me ("The Fourth Dimension", by Rudy Rucker) and threw it across the room, while telling me that I shouldn't be reading something so boring and stupid and indicating that I was somehow "showing off". Even though on one level I knew this girl was completely mistaken, I couldn't help but fear that she might be right -- that maybe she knew something I didn't.

But over time I came to realize that that was part of the box as well -- the assumption that everyone is, or should be, fundamentally the same, and that differences don't actually exist. The idea that differences don't exist is just as dangerous as the idea that some people are so different that they don't deserve to be thought of as people at all, which is why I don't have any patience with people who try to claim that (for instance) people who identify as autistic, or who have unpopular interests, or atypical ability sets are placing themselves in a box. The box is not the set of variables and constraints that delineate (if fuzzily at times) the boundaries of an individual, but the forceful and irrational imposition, from the outside, of particular constraints on ideas of how an individual should be.

In addition to living as if there are no boxes whenever possible, people can also help defuse the influence of the box through conscious and deliberate reduction of cognitive biases of the sort that influence heuristics. Heuristics are only as good as their ability to provide accurate information about reality, so it is important that people applying those heuristics develop the capacity to notice when they fail. Additionally, it is important to learn to avoid mistaking a heuristic for a fact about reality.

The problem is that when looking at reality, sometimes preconceptions get in the way of being able to recognize inappropriate heuristics. The analogy about Purples and lemon-jugglers I included at the beginning of this article was deliberately written to avoid (at least to some extent) triggering people's preconceived ideas about any particular known "condition" or state of being, in addition to preconceived ideas about the presence or absence of particular ability sets. I wanted to demonstrate that a lot of what probably passes for "common sense" when it comes to evaluating an atypical person's quality of life, character, and even personhood is actually quite frequently informed by prejudice and bias.

Perhaps when it comes to making decisions that could impact the way a person is treated, a place needs to be made for "specific, contextualized knowledge" (3). Not for irrationality, and certainly not for superstition, or for wishful or magical thinking -- but for the sort of data you get from knowing a person as opposed to just reading charts and graphs and articles about a kind of person. The data for those charts and graphs needs to come from somewhere, after all, and it has become abundantly clear to me that many of the models used to evaluate atypical (autistic, disabled, differently-configured, etc.) people are needlessly limited by boxes that should never have existed in the first place.

What needs to be acknowledged in the end is that personhood is far more heterogeneous than it might initially appear to be. It's not that people who don't fit standard patterns are alien-like or "other" at all -- it's that the standard patterns themselves are too flawed and limited to encompass the tremendous and vibrant variety of personhood that actually exists.

(1) Certainly, being able to form and maintain friendships is important, but there is nothing about being autistic that precludes friendship -- friendship requires at least two people who enjoy each other's company. And while autistics may have more difficulty than average finding people to relate to, this doesn't mean that no such people exist, or that because someone is observed to have difficulty with "age-appropriate peer relationships" in early childhood that they will never be able to make friends with anyone. It also doesn't mean that someone ceases to be autistic on the day they make their first friend, any more than it means that a girl ceases to be female when she gets an engineering degree.

(2) I use the term "autistic spectrum" not to imply something linear, with "high functioning" at one end and "low functioning" at the other, but as a useful term for expressing the heterogeneity seen in the autistic population. We have enough in common to share a particular designation, but we're all different "shades" (so to speak) within that designation.

(3) I borrowed the phrase "specific, contextualized knowledge" from a paper by Rob Breton and Lindsey McMaster, entitled Dissing the Age of MOO: Initiatives, Alternatives, and Rationality, in which the authors write:

[The Buffy-Initiative Alliance] fails because she is integrated or integrates herself into the underworld’s core of assumptions: meaning, ironically, a refusal to de-humanize or alienate. She depends on specific, contextualized knowledge. Whereas Buffy is interested in questions of demon motivation, asking, “What do they want? Why are they here? Sacrifices, treasures, or did they just get rampagy?”, the Initiative is indifferent to questions which would thus lend consciousness to the monsters, positing that the creatures are “not sentient, just destructive” (“The I in Team”). Where the Initiative looks for empirical and tested facts, Buffy looks for factors, variables; her sense of herself and of monsters is suffused with personal motivations and individual desires, and indeed, her victories over adversaries are as much victories of personality and wit as of physical force.

Thursday, April 12, 2007

Disability and Economic Relevance

A recent poll on the Institute for Ethics and Emerging Technologies site posed the question:

Is the economic cost to society from disability relevant?

Three multiple-choice options were provided as possible responses to this question:

1. No, it should be ignored because of eugenic implications

2. No, because it can't really be calculated

3. Yes, it is part of any public policy calculus

At the time of this writing, final poll results aren't quite in yet, but I'll be interested to see them. I'd be even more curious to know how the various people encountering the poll interpreted the question, but I am not sure there's any feasible way to obtain that information. I will say, however, that I personally found the question to be somewhat vague and difficult to interpret definitively. My first response to reading it was, "Relevant to what?"

I definitely don't think people should ever be discriminated against, or denied certain rights, on the basis of their configuration -- which means that if nondisabled people aren't forced or coerced into undergoing (possibly experimental) treatments in an effort to make them "less of a drain on the system", disabled people shouldn't be either. Additionally, I am horrified by the idea that parents would ever, say, be penalized or denied services for a disabled child on the basis that they could have learned of that child's condition (and aborted the fetus) but chose not to.

While I'm all about offering people plenty of opportunities to develop and enhance their talents, modify themselves, or even gain abilities that they weren't necessarily born with the potential for, it is vital that individuals retain maximum control over their own bodies and lives -- not states, not corporations, and not The Economy. And while I realize that some policies and programs I generally tend to support (e.g., those designed to help people quit smoking or overcome drug or alcohol addiction) would, if successful, result in a larger pool of people capable of traditional employment and less likely to need particular medical services in the long run, economic cost alone is neither a necessary nor sufficient reason for implementing such programs.

Many of the things that are often put into the category of "disability" won't necessarily kill you -- at least, they won't kill you so long as you receive medical care appropriate to someone with your configuration. And I think it's dangerous to promote policies that take an "all or nothing" approach to addressing disability and potential associated medical and social issues -- that is, those which mistakenly assert that unless a person is made "normal", terrible economic and social consequences will ensue.

My own ever-evolving philosophy of disability rights is very much rooted in the notion of morphological liberty, and I find it annoying and distracting when people keep trying to turn debates over disability policy into arguments over whether disabilities are "good or bad". That's not what those debates are about. It doesn't matter, in the context of such discussions, whether a given configuration can or cannot be defined as essentially problematic from the societal or individual standpoint -- at the core, disability rights are civil rights, and it is a civil right that a person's existence not be subject to whether or not they happen to be "in demand" at any particular moment in time, economically speaking. So while I can certainly appreciate that some policies might have a positive economic impact, and that I might agree with the premise of some of those policies, I will definitely draw the line at supporting anything which seems to place economic gain above morphological liberty.

Wednesday, April 11, 2007

Longevity - It's About Life, Not Money

Lately I've been trying to parse out some of my thoughts on the economic matters associated with longevity research, the ramifications of healthy life extension, and disability. The way I see it, longevity research is a good thing regardless of how much it costs -- just as sanitation and clean water are good things if people need them, regardless of how much they cost.

But right now, one of the most widely-known longevity research initiatives is The Longevity Dividend. I am definitely in favor of any initiative that seeks to help make sure people who need particular kinds of health care in order to stay alive and healthy get it -- but part of me always squirms a bit when I read about the supposed "catastrophic" financial burden of caring for sick people. Now, I'm not trying to deny that cost is a monumental difficulty to deal with -- I just find it rather sad that it might take an economic argument to garner widespread support for healthy life extension research and medicine. Mind you, this doesn't mean I don't support the Longevity Dividend or similar potential efforts -- it just means that, just as I am not satisfied with the idea of 80-or-so years of life, I am also not satisfied with a society that needs some kind of financial incentive in order to recognize the value of people's lives.

Explicating further, one of the two top killers of American adults is heart disease. Changes in the body associated with age generally lead to an increased susceptibility to heart disease. If the bodily changes that increase susceptibility to heart disease (e.g., hardening of the arteries) could be delayed or diminished, people would be less likely to get heart disease, and therefore less likely to experience the pain and mortality associated with heart disease. This is intrinsically a good thing. (And I should note that in all my readings of disability rights literature, I have never once come across anyone in opposition to treatment of heart disease, or health practices likely to decrease the incidence of heart disease. The same goes for cancer. Ditto for pneumonia.*)

Of course, some expenditure is required in order to successfully treat anyone's heart disease, cancer, or pneumonia. And of course, the money associated with the treatment of these conditions (a) needs to come from somewhere, and (b) be appropriately managed and organized. However, the main reason we treat things like cancer and heart disease and pneumonia is not -- or at least, should not be -- the fact that people without cancer tend to be more productive workers than people with cancer. We treat cancer, heart disease, pneumonia (and think in terms of preventing these conditions) because of the suffering and death they impose upon us and those we care about. In this context, "aging" should be considered no different from any of the aforementioned conditions, because without some kind of intervention, it will most assuredly kill you. There is absolutely no basis for arguing that somehow it's good for people to die of aging but not good to die of anything else -- the exception people tend to make for age-related death is unacceptable and hypocritical.

If it's bad for people to suffer and die against their will, then it shouldn't matter what the source of that suffering and death is. And it also shouldn't matter how much it supposedly "costs" to permit people who would otherwise die to live -- obviously it costs something, but what could possibly be more valuable than the lives and health of irreplaceable persons? All I'm saying is, people ought to get their priorities in order.

As mentioned earlier, it might be necessary, at times, to invoke primarily economic arguments when dealing with people whose own main argument in opposition to healthy life extension is that "older" old people will decrease the amount of resources available for activities not related to health crisis management. Here, the economic argument is appropriate in the sense that it corrects what is more than likely factually untrue from an economic standpoint -- it is obvious that if a person doesn't get heart disease, nobody is going to need to spend any money to treat heart disease in that person, which means that money is free to be used elsewhere. But the reason we want to prevent heart disease -- at least, the primary reason -- isn't an economic one, but one that I hope stems from compassion.

Heart disease left untreated will most likely kill you. Aging left unaddressed will definitely kill you, whether indirectly or directly. So of course I'm in favor of things like longevity research -- because it has tremendous potential to save many, many innocent lives. Sure, it might end up having a particular economic effect that will make some people happy, but even if there were no chance of that, I would still support such research. The dragon is bad; it destroys people.

This is a point I've been trying to make for a long time (and it's a point that, I think, represents what I see as an underlying ethical consistency in support of longevity advocacy and disability rights simultaneously): Things that kill people get a category all their own. While I understand that the line between "therapy" and "enhancement" might be blurring more every day, I'm fairly certain that most people can understand the difference between something that makes you different and something that makes you dead.

Saturday, April 07, 2007

The Future Is For Everyone (Or At Least, It Should Be)

Recently, a short informational article was posted to the IEET site entitled, Autism Bad For Siblings And Society in response to an autism-spectrum-themed issue of the Archives of Pediatrics & Adolescent Medicine. This article referenced two studies: one on social and communication "problems" in siblings of autistic children, and one on the expenses incurred by autistic individuals over the course of their lifetime.

The article about siblings seems to indicate mainly that siblings of autistics can have autistic traits (or perhaps even be autistic themselves), which of course makes sense when you consider that autism has a strong genetic component. The characterization of this phenomenon as autism being "bad for siblings" is more than a bit misleading -- it's not as if, somehow, if the autistic sibling hadn't been born, the children being studied would not have exhibited the same social and communication patterns. If someone is going to be autistic, or to exhibit the broader autistic phenotype, they're going to be that way regardless of whether they have siblings or not.

What struck me about the article on siblings, though, was the manner in which the siblings' performance was described:

"Younger siblings of children with autism spectrum disorders demonstrated weaker performance in non-verbal problem-solving, directing attention, understanding words, understanding phrases, gesture use and social-communicative interactions with parents, and had increased autism symptoms, relative to control siblings,"

The reason that description struck me was because in all that verbiage, there was absolutely no questioning of the underlying assumptions in place. These assumptions are common in autism-related literature but very few people even notice them -- to me, they're like the proverbial "elephant in the room". And just what are these assumptions? Well, first of all, the tests being used to evaluate the performance of the siblings of autistics (many of whom were probably autistic themselves) were probably not written with autistic cognition in mind. Second of all, I'm almost certain that the tests being used in this context assumed quite a bit about the children's level of understanding based on their compliance.

To make an analogy, watch any cat navigate around a house and you'll definitely get the sense that you're dealing with a creature with a highly developed understanding of physics, but tell the cat to fetch your slippers and you'll probably not get much in the way of a response.

This isn't to say that all autistics are good at physics and bad at following instructions -- but rather, that it doesn't really make sense to assume an autistic person must be able to perform well on tests normed to a typical population in order to be happy or successful.

I somehow doubt that cats wake up every morning lamenting that they're not dogs -- but who knows, they might if their human companions constantly punished them for not acting like dogs or doing things that dogs tend to do.

Whenever I read articles on autism so utterly dripping with unquestioned assumptions, I can't help but think back to elementary school, when quite a lot about me was considered to be "problematic" or worrisome, even the aspects of myself that I really liked.

If you'd asked my fifth-grade classmates about me then, they'd most certainly have said that there was definitely something very wrong with me, that I didn't relate normally, and even that they felt sorry for me.

In sixth grade a few girls came up to me and told me that they were being mean to me "for my own benefit", since in high school, "everyone was going to hate me anyway". I remember people wondering if I was sad or depressed because I often preferred to read or draw rather than engage in group activities -- in fact, the main thing that made me tend toward sadness at times was the perception that whatever I liked to do was some kind of symptom or problem.

I even once got in trouble for being really interested in a particular subject -- the teacher assumed that my interest was a sign of being "too lazy to learn about anything else".

I'm not saying all this to invoke a pity party -- that's the last thing I would want, especially considering one of the things that always infuriated me while growing up was the "we feel sorry for you for being you!" bit I used to run into at school. Rather, I'm just trying to make the point that kids like the ones I grew up with have also grown up. Some of them might even be in professions now where they're evaluating kids. And unless they've had some kind of intense mind-opening experience over the course of growing into adulthood, it's more than likely they've retained the same biases and playground prejudices that they had as preadolescents -- not to mention the fact that teachers sometimes demonstrated similar biases.

And I'm guessing that these biases weren't somehow native to where I grew up.

Some researchers agree that bias is a problem here, and are suggesting that it would probably be a good idea to involve actual autistics in autism research -- even though autistics are extremely diverse, having some autistics involved in the research side of things is certainly better than having none.

Morton Ann Gernsbacher writes:

Why haven’t autistics’ own voices been heard? Why haven’t autistics been as actively recruited to participate in all aspects of the research process as they’ve been recruited to participate as research subjects (even posthumously by donating their brain tissue)?

Perhaps it’s assumed that autistics just wouldn’t be able to handle high-level research. If so, someone ought to tell Vernon Smith, who was awarded the 2002 Nobel Prize in Economics (alongside APS Fellow Daniel Kahneman) for pioneering the field of experimental economics. And somebody better alert Richard Borcherds, who was awarded the mathematics equivalent of the Nobel Prize -- the Fields Medal -- in 1998. Both academics are diagnosed autistics.

It takes just a cursory stroll through history to view the shocking collage of groups deemed incapable of stepping up to the research plate. In 20th century psychological science alone, we have Mary Whiton Calkins, the brilliant protégé of William James who, by lack of a Y chromosome, was denied her PhD at Harvard (but who later became APA’s first female president). It’s quite unlikely that APA’s founder and first (male) president, G. Stanley Hall, believed that members of ethnic minority groups would be suitable research collaborators, given his disturbing attribution of "adolescent races" who "would be better in mind, body, and morals if they knew no education."

Morton Ann Gernsbacher is one of those who, I think, really seems to "get it" with regard to helping reduce research bias and understanding what autistic people are really like. I realize that the notion of autistics being valid people with actual minds is as weird for many as a discovery of a live Sasquatch might be, but at least there are some people helping to raise the right kinds of awareness, so hopefully that won't be the case in years to come.

And I'm sorry, but the "cost to society" stuff seems to me to be a matter of misplaced priorities (not to mention reminiscent of T4 propaganda). While of course people with actual problems should have access to help, it seems bizarre and downright backward to make the existence of certain kinds of people contingent on market demand.

Everything has a cost, and while some costs are certainly best alleviated, to propose that certain kinds of people are "bad for society" is to make a very serious claim (and on that note, why aren't people more gung-ho about finding "cures" for conditions like sociopathy? Is it because sociopaths tend to be superficially likable and able to hold traditional jobs? Are those really the criteria that ought to be held up in evaluating the worth of a person?)

Autistic writer Joel Smith comments here on the subject of "cost to society":

You see, the value of someone depends much on how you look at it. The value of the automotive industry is great when you look at the salaries of the workers, the profits of the shareholders, and the tax revenues to the government. It’s horrible when you look at the complete costs of the car - the costs to the environment in particular. TV isn’t seen as a "waste" but rather as a "necessity" nowadays. And air travel "enables global business," whatever that means.

At the same time, there are teachers, psychologists, researchers, drug companies, alternative medicine practitioners, therapists, health aides, group home workers, etc, who make their living providing a service to autistic people (however ethical that service might be) - just as the people who made your TV make a living by providing you with something you want or need. That $3 million is not vaporized, but it’s used to pay for cars, TVs, and air tickets for people making money providing those services, the stockholders of commercial interests, etc. An expense is not necessarily "costly" to society.

It just strikes me as odd that the blanket statement "autism bad for siblings and society" was used here, without anything even resembling an examination of what "bad" and "good" are in this context.

This isn't about political correctness, it's about realizing the extent to which cultures create themselves out of language.

It's fine to acknowledge some of the serious problems that can be faced by autistic people and their families, and I do see the need for better services (of the sort that take individual needs and abilities into account, rather than working on the basis of stereotyping and heavy-handed overgeneralization). I realize that care and education can be expensive, but there must be a way to acknowledge that expense that doesn't denigrate an entire, highly diverse group of people as "bad for society".

Think about it: for all the whining and griping that goes on with regard to helping autistics and other atypically-functioning persons lead long, healthy, enriched lives, people don't seem to have much trouble coming up with all the resources necessary to build things like baseball stadiums or stage live wrestling matches or fund any number of arguably extravagant pursuits. Once people decide to value something, it always seems that resources sufficient to sustain or obtain it appear -- sometimes seemingly out of nowhere. Funny, that.

The fact of the matter is that when you invest in people, there's almost always something given back, even if that something is not necessarily monetary -- sure, you might end up getting more economic production out of a person if they are properly cared for and educated, but that kind of reasoning should never be invoked to determine the supposed worth of an individual anyway. These arguments, by the way, apply to longevity research as well -- the earth most certainly has more than enough resources for the accomplishment of more than one goal.

And if anyone thinks that the arbitrary and bias-based devaluation of people who don't function in ways that meet societal norms isn't a problem for self-described transhumanists (and others of the "I want to be a cyborg when I grow up!" persuasion), think again.

Imagine walking into an airport with your neural-infrared prosthesis attached and being turned away at the security gate because your device "makes people uncomfortable" or poses some kind of perceived security risk.

The more intimately machines are integrated into the body, the more difficult it is going to be for existing human institutions to accommodate them. "Reasonable accommodations" could just as easily apply to willing cyborgs as to people presently defined as "disabled", and unless we want to live in a world where ancient monkey energies tied in with notions of dominance hierarchies and exclusion of the different shape our societies, we need to come to an acceptance that there's more than one valid way to be.

And I'll admit that one reason I'm enthusiastic about transformative technologies (in the context of prosthetic self-determination) is because it could be that once particular kinds of modification become affordable and widespread, variations like autism will suddenly look pretty insignificant in comparison to the newly-expanded range of possible shapes and modes a being might take on.

Friday, April 06, 2007

Free Radicals, Oxidative Stress, and You

One "anti-aging" buzzword you'll frequently find in mainstream media and supplement catalogs is "free radical". Free radicals are definitely real, and there's at least some indication that they play a role in some of the major health concerns associated with aging (and that antioxidant activity is worth examining). But it's important to avoid thinking in terms of "magic bullet" approaches to longevity medicine, and free radicals are quite commonly cited in literature intended to help sell antioxidant supplements or present a simplified mainstream-media-friendly description of changes that occur with aging (and what might be done to help people maintain good health well into what we now think of as "old age").

Diet And Lifestyle Optimization Aren't Enough

Most people know the term "free radical" due to the fact that dietary and supplementary antioxidants enjoyed a few years in the spotlight as potential harbingers of prolonged youth and vibrance for all. However, more recent and broader study results have been mixed at best. There are many different kinds of antioxidants, with different bioavailabilities, side effects, and chemical behaviors in the body. Some studies indicate that certain kinds of antioxidants can even be deleterious in certain populations -- the most prominent example of this is the study which indicated increased cancer rates in smokers taking beta-carotene supplements. Vitamin E was being hailed as a sort of panacea for elderly ills a few years back, but its status has drifted back into "questionable" in response to data indicating possible increased mortality in supplement-takers.

While there are certainly weaknesses in all these studies, it is clear that nobody really interested in prolonging their healthspan can simply pop a few drugstore vitamins and expect definitive positive results. Eating a healthy diet rich in fresh vegetables and low in processed starches and "empty calories" (e.g., chips, sugary sodas) can help many people lower their risk factors for particular health problems (such as diabetes), but just being alive and having metabolic processes going on in the body all the time means that oxidative stress will be present and will contribute toward the accumulation of damage no matter what you're eating.

Many health-oriented sites on the Web and popular magazine articles and books will emphasize the role of nutrients, diet, and moderate exercise in promoting longevity. If you follow the advice from the better sources in that particular pool of information, you might indeed end up gaining yourself a few extra years of health in old age. But when I think of "longevity", I don't think in terms of "living to age 80 and still being able to play golf", as most of the aforementioned sources probably do. I think in terms of "living to age 80 and not having to worry about increased risk of cancer, immune collapse, organ failure, heart disease, atherosclerosis, Alzheimer's, or any number of other things that have long resulted in pain and death for people in your age group". Why should any group of people be expected to just accept pain and death, particularly on account of a factor as ludicrously arbitrary as how old they are?

Free radical activity and resultant oxidative stress is only one subject of interest in the quest for effective longevity medicine, but it's an important one, and it certainly falls into the category of an area of science worth pursuing. Though I can certainly appreciate that many sites and books these days are promoting the value of healthy living, I think they're aiming too low. We as a society, and we as individuals who care deeply about other individuals, need to realize that dietary changes and (possibly) certain supplements might only gain us a few additional years at best. In order to deal effectively with the damage caused by oxidative stress -- and by "effectively" I mean "effective at a level that no amount of dietary tweaking or lifestyle optimization can presently touch" -- we need to find ways of cleaning up the damage, and of helping the body protect itself from damage.

I'll focus on two aspects of physiology that relate to oxidative stress: mitochondrial mutation and advanced glycation end-product (AGE) formation.

Dealing With The Vulnerable Mitochondrial Genome

The role of mitochondria in age-related health decline is thought to be twofold: mitochondria produce free radicals in the course of performing their necessary metabolic activity, and additionally, they contain their own DNA separate from the nuclear DNA that characterizes us as individuals at the genetic/molecular level. During ATP production, the free radicals emitted by a mitochondrion can in turn damage that mitochondrion to the point where its DNA mutates. Mutant mitochondria are problematic both because they perform their duties less effectively and because they frequently continue to replicate, effectively overwhelming the cell with poorly functioning components and stepped-up production of free radicals.
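The runaway dynamic described above -- a trickle of new mutants plus a replication advantage, inside a cell with a roughly fixed mitochondrial population -- can be made concrete with a toy numerical sketch. This is purely illustrative: the per-generation mutation rate and the replication advantage below are invented numbers, not measured biology.

```python
def mutant_fraction(generations, mutation_rate=0.01, mutant_growth=1.05):
    """Toy model of mutant mitochondria taking over a cell.

    Each generation: a small fraction of healthy mitochondria mutate
    (oxidative damage), mutants replicate slightly faster (an assumed
    advantage), and the cell's total mitochondrial population is
    renormalized to a fixed budget. Returns the mutant fraction at
    each generation.
    """
    healthy, mutant = 1.0, 0.0
    history = []
    for _ in range(generations):
        mutated = healthy * mutation_rate   # damage converts healthy copies
        healthy -= mutated
        mutant += mutated
        mutant *= mutant_growth             # assumed replication advantage
        total = healthy + mutant            # renormalize to a fixed budget
        healthy, mutant = healthy / total, mutant / total
        history.append(mutant)
    return history
```

The point is the qualitative shape, not the specific numbers: even a 1% per-generation mutation rate, compounded with a modest replication edge, yields a mutant fraction that climbs monotonically and eventually dominates the cell.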

One theorized method by which mutant mitochondria and associated damage might be mitigated is being explored in MitoSENS research. From the MitoSENS page:

The goal of MitoSENS is to obviate mtDNA mutations by expressing the mtDNA genes from the nucleus. Fortunately, we would be completing a process that evolution has already started. The mitochondrial genome originally had thousands of genes, but evolution has reduced it to a mere 13 (protein encoding) genes in humans. By studying how nature transferred expression of other genes from the mitochondria to the nucleus, we can identify the necessary steps to transfer the remaining 13 genes (in humans).

The concept of MitoSENS hinges upon the idea that the 13 remaining protein-encoding genes in the mitochondrial genome would be better protected from mutation-causing damage if they were moved to the nucleus of the cell. An important component of this idea is the fact that we've got something of an evolutionary head start when it comes to this endeavor -- if thousands of the original mitochondrial genes moved into the nucleus over the course of evolution, it would be quite prudent to reverse-engineer the process by which that occurred, and see if aspects of that process could be applied to the 13 laggards of concern here.

Practically speaking, copies of these 13 genes might be placed in the nucleus (after being modified so that the mechanisms by which the mitochondrion draws in the proteins it needs will operate on them), where they would function as necessary to produce the required proteins for the mitochondrion. This would reduce the impact of oxidative stress, since it is the vulnerability of mitochondrial DNA to oxidative damage that predisposes mitochondria to dysfunction and mutation in the first place.

The success of MitoSENS depends, among other things, on the fine-tuning of effective gene therapy. But if lab results do end up indicating its potential effectiveness in humans, we'll be that much closer to helping obviate the problems caused by mitochondrial DNA mutations, provided that it isn't discovered that the 13 genes in the mitochondrion actually need to be there for some reason.

John Allen and Carol Allen of the School of Biological Sciences, Queen Mary, University of London theorize that the presence of mtDNA relates to the division of labor between the male and female sexes in the reproductive sense -- that is, since babies are born with undamaged mitochondria, they must therefore have inherited a "protected" copy of mitochondria from their mother. Mitochondrial DNA cannot be inherited from the father, since sperm are energy-intensive themselves, meaning that the mitochondria they use would be predamaged even if they weren't destroyed in the process of fertilization.

In short, mitochondria themselves might actually be sorted into two groups with two distinct purposes: genetic template (from the mother) and somatic (for energy conversion). The Allens posit that since the mitochondrion is one of the "worst possible environments" for genes, there must therefore be a corresponding good evolutionary reason for the presence of these genes -- possibly the necessity of having proteins in the closest possible proximity to the genes that code for them in order to assure efficient energy transfer.

If this theory turns out to be true, moving the 13 remaining mitochondrial genes into the nucleus might not work -- or at least, doing so might make it impossible for mitochondria to function as effective energy converters, which would of course mean they wouldn't be of much use to us. However, regardless of whether the mitochondria really are sorted into "genetic template" and "somatic" sets, it seems that a truly effective implementation of MitoSENS would make any possible gene-protein proximity requirement moot. Whether this turns out to be possible or not remains to be seen, but at any rate, the sooner experimental data is obtained, the better.

EDIT: Commenter daedalus2u expresses his skepticism about moving mitochondrial genes into the nucleus as follows -

I am quite sure that moving mtDNA into the nucleus won't work, particularly for large cells like nerves. Mitochondria necessarily spend a lot of time away from the nucleus, out at the tippy end of the axon. The proteins that are coded in mtDNA are the active sites of the respiratory chain. The ones coded by the nucleus are regulatory proteins. It is the active sites that will get damaged and need to be replaced. That can't happen away from the nucleus if only nuclear coded proteins are available.

AGEs and Oxidative Stress

Advanced Glycation End-products (AGEs) are, quite predictably, the chemicals produced following the conclusion of a glycation event. Glycation occurs when a sugar molecule bonds to a protein or lipid molecule in the absence of an enzyme (a protein that accelerates a particular reaction)(1). AGEs can enter the body through a person's diet (they are particularly present in "browned" and caramelized foods), and they are also produced in the body during sugar metabolism. No matter their origin, though, AGEs are thought to disrupt the functioning of cells and molecules in the body and to increase oxidative stress, causing further damage -- and promoting the development of conditions such as diabetes, stroke, neuropathy, and cancer.

A person could, presumably, make some dietary changes that might reduce the prevalence of AGEs in their system -- by, for instance, being more careful about what sugars they consume (since certain sugars produce more glycations than others, and fewer glycations means fewer AGEs). Additionally, food producers might do well to avoid using AGEs as they have more commonly been doing recently (as flavor and color enhancers). But regardless of what a person eats, there is no way to completely avoid AGE production -- glycation is going to happen no matter what, and it would be extremely unrealistic to expect that metabolism itself could be reverse-engineered and modified not to result in AGEs any time in the foreseeable future. Metabolism has quite a bit of evolutionary clout behind it, and rather than trying to deconstruct it (which could be time-consuming at best and disastrous at worst), it's probably best to just find out what the more detrimental aspects of metabolism are and see what can be done about them while leaving the basic metabolic mechanisms intact. This is where the concept of repairing, rather than trying to prevent, damage comes in.
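The repair-versus-prevention distinction above can be illustrated with a deliberately simple accumulation model (the rates below are invented, purely for illustration): prevention only slows the steady growth of damage, whereas periodic repair that clears a fixed fraction of existing damage keeps the total bounded.

```python
def damage_after(years, rate=1.0, prevention=0.0, repair_fraction=0.0):
    """Toy damage-accumulation model (illustrative numbers only).

    Each year, `rate` units of damage accrue, reduced by whatever
    fraction `prevention` blocks; then `repair_fraction` of all
    accumulated damage is cleared. With any repair_fraction > 0,
    damage approaches a bounded steady state instead of growing
    without limit.
    """
    damage = 0.0
    for _ in range(years):
        damage += rate * (1.0 - prevention)   # slowed accumulation
        damage *= (1.0 - repair_fraction)     # periodic cleanup
    return damage
```

In this sketch, halving the accumulation rate still produces unbounded damage given enough time, while clearing even 10% of accumulated damage per year caps the total near a fixed point of rate * (1 - repair) / repair -- which is the intuition behind repairing damage rather than trying to re-engineer metabolism to prevent it.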

It could be that AGE-breakers and related compounds might comprise some of the most easily-achievable rungs on the ladder toward actuarial escape velocity, especially when you consider that AGEs and the stress they induce on the body contribute to so many of the common conditions leading to mortality in old age (and, in fact, contribute considerably toward many of the obvious manifestations most people associate with aging). Compounds like Alagebrium have demonstrated at least some efficacy in human clinical trials in the treatment of hypertension, aortic stiffness, and kidney dysfunction.
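The "actuarial escape velocity" idea mentioned above has a simple arithmetic core, sketched below with invented numbers: each calendar year costs you one year of remaining life expectancy simply by aging, but if ongoing medical progress adds back more than one year per year, remaining expectancy grows instead of shrinking.

```python
def remaining_expectancy(initial_remaining, annual_gain, years):
    """Track remaining life expectancy year by year (toy arithmetic).

    Each calendar year subtracts one year of remaining expectancy
    (aging) and adds back `annual_gain` years (medical progress).
    Escape velocity is simply the condition annual_gain > 1.
    """
    remaining = initial_remaining
    trajectory = [remaining]
    for _ in range(years):
        remaining += annual_gain - 1.0
        trajectory.append(remaining)
    return trajectory
```

With annual_gain above 1.0 the trajectory drifts upward indefinitely; below 1.0, progress still buys extra calendar years compared to no progress at all, but mortality eventually catches up. The real question is which interventions (AGE-breakers among them) contribute how much to that per-year gain.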

One thing I can't help but wonder in reading about AGE-breakers is how long it will take to get such compounds into the standard pharmacopoeia, so that they are regularly prescribed for such conditions as hypertension (or even as general health-maintenance drugs, used by the same demographic that might seek out cholesterol-lowering medications, perhaps). So far, clinical trials of Alagebrium (ALT-711) seem to be indicating a low incidence of side effects, and the compound has even been approved for inclusion in an Avon skin care product.

However, there are different types of AGEs in the body, and further research and clinical trials must continue in order to identify compounds effective at addressing the most prevalent types of AGEs found in elderly humans.

In Conclusion...

Knowing how oxidative stress affects the body, and how this stress might be reduced or mitigated, takes some of the mystery out of what goes on in aging bodies -- and as the mystery dissolves further and further, it will become more and more difficult to perceive age-related death as a kind of cosmically significant force or personified figure as many today still do. I look forward to following further developments in this and other areas of science.

1 - The enzyme-catalyzed version of sugar-molecule bonding has a different name, glycosylation.

Monday, April 02, 2007

Mini-Review and Miscellany

As of this month, I'll have been writing in this blog for a year. So much has happened over the past twelve months that I doubt I've even begun to process it all to any significant degree, and I don't see things slowing down anytime soon.

When I started writing here on Existence is Wonderful, I had no idea what would happen as a result. I started writing here weeks before actually giving anyone the URL, and I initially used a pseudonym (Nydra) on the basis that all I was doing was collecting information about, and arguments in favor of, healthy life extension in one place. Around this time last year, I was heavily involved in a powerfully intense discussion about longevity, its feasibility, and its implications, on a BBS I've been a member of for several years. Some of my BBS friends seemed amenable to the idea of healthy life extension, while others were more cautious and wary (mainly on the basis that radical life extension wasn't "natural", though there was some debate over what "natural" truly meant, and I'm sure that debate persists in many circles today and will likely continue to do so for some time).

Over the past year, I've covered a lot of ground with regard to my initial intentions for this blog. I've stated my original purpose. I've discussed some of my rationale for doing whatever possible to ensure longer, healthier lives for all people. I've also branched out into topics that I didn't originally intend to cover on this blog, but that became sort of unavoidable when I saw what kinds of discussions were going on in the circles I found myself getting more and more acquainted with.

My goal in writing is never to simply produce content for content's sake. I'm not sure how people who write professionally, and manage to consistently write well (in terms of producing texts that are informative, interesting, and possibly significant in helping to hasten progress), do what they do. In my case, I can't just produce words that mean something on command, even if my material survival depends on it. In my case, writing is a lot like breathing -- it happens because it has to happen, as something my brain and body do in response to being a self-aware person in a complex environment. But unlike breathing, writing isn't rhythmic or consistent, even though it's something I'm frequently compelled to do.

Having spent a significant percentage of my life as a student, I know full well the difference between forcing words in order to fulfill an obligation, and producing them when they seem to want to come of their own accord. I do my best writing when my head is full of information that has somehow managed to organize itself in a particular way -- in those cases, it's almost like I am looking at some sort of large, complicated, multidimensional mechanism and my role as a writer is simply to describe that mechanism. At work, one of my primary tasks is to write test procedures -- sets of instructions for assessing the performance of, or troubleshooting, particular pieces of hardware. I enjoy this aspect of my work because it allows for plenty of chances to get into that zone where I take on the role of the person standing there looking at something and committing its essence to legible code and symbol.

An essential component of being able to write test procedures (or more generally, descriptions of how something works and how it might be honed in order to work better) is getting to know the hardware. This can be done through many means -- through direct physical contact with it, through seeing it from different angles, through disassembling it as much as possible without being irreversibly destructive, through reassembly, through reading of specifications, through charts and diagrams, and through hours and layers of background-processing after taking in huge chunks of information at a stretch. The more data I have about the hardware, the better the procedure will turn out, because in some sense in the process of observation I end up internalizing something of the hardware's essential structure (or at least, that's what it feels like -- sort of like I'm building up detailed models in my head over time).

So, when it comes to writing about things I care deeply about -- ethics, longevity, social justice of various sorts -- I am compelled to make the best attempts I can to develop the same kind of intimacy with the data -- with the feel and content of the various important systems involved -- that I would if I were evaluating an object or piece of hardware with intent to understand and describe it. And it can take different amounts of time to develop that level of intimacy, depending on the subject matter, on how much access to which kinds of information I have at any given time, etc.

In other words, I'm in something of an "absorption" phase at the moment. But expect more content soon... I've become quite interested in the evolutionary role of parasites in terms of the human immune system (and how that might relate to aging). Additionally, I've been formulating some thoughts on the fine-line dichotomy between "heroes" and "monsters" (inspired by my recent discovery of "Buffy the Vampire Slayer" -- there are a few episodes in there that made me think about some of James Hughes' zombie essays), and I've also got a partially-written piece on free radicals and antioxidants that was prompted by a conversation I had with my father the other day.

Also -- and this might be something of a tangent, but this entire post is pretty tangential at this point -- I was discussing the whole phenomenon of online writing with a friend recently. One concern she expressed was this: if she writes something now and posts it publicly, what happens if she changes her mind later on about what she wrote? I responded by stating that if I were following the course of someone's developing opinion set and self-concept over time, it would look a lot weirder if nothing about that person's opinions or interpretations of events changed over time than if their later writing and apparent mindset didn't resemble their earlier material in the least.

People are not static entities, and as each of us encounters and integrates new information about ourselves and about the world, it's perfectly valid and undeniably sane for our expressed opinions and observations to change in terms of their tone and content. Existence is Wonderful is barely a year old at this point, and already I can look in the archives and find examples of statements that sound both awkward and ignorant in comparison to my present understanding of things. I expect that to be the case for years to come (and, more than likely, so should you, if you're in the habit of writing and posting your writing online). Though there's nothing wrong with holding principles, and there are certainly points at which any person is likely to encounter a "best possible fit" explanation or an undeniable fact that continues to be true into the indefinite future, viewpoint evolution is part and parcel of existence as a dynamic entity, as a mind equipped with a feedback system.

So, in other words, don't be afraid to write because you think you might change your mind later. Be more afraid if you find yourself writing and writing and never changing your mind!