This entry is Part 1 of a comprehensive report on the talks I attended at the Human Enhancement Technologies and Human Rights conference, presented by the Institute for Ethics and Emerging Technologies. The narrative here reflects my own personal observations and interpretations of the event and the subject matter discussed, and I encourage anyone who reads this and was also in attendance to note any factual errors I may have made and to offer any criticisms of my responses that seem merited.
9:00 - 9:15 AM - Welcome & Opening Comments
Delivered by Dr. James Hughes, this brief introductory statement explained how transhumanism / enhancement / modification of human form and function could be framed as a rational extension of liberal, democratic ideals. The basis of human, transhuman, or personhood rights (and the definitions of these three terms were discussed and questioned throughout the day) was presented as "conscious existence", and a being with conscious existence has a basic right to self-determination.
However, in contrast to earlier models of transhumanism (such as Extropianism), this democratic transhumanism (sometimes referred to as, or alongside, "tech progressivism") distinguishes itself by seeking to retain or emphasize cooperation, discussion, and the retention of governmental bodies for the good of individuals and common welfare. That is, Dr. Hughes seems to be taking the position that transhumanist and individual rights to self-determination need not be tied to a libertarian or anarchistic social model.
Hughes described a state in which basic rights start at the individual level and then work their way up -- the reasoning here being that healthier, happier people will in turn build and maintain a healthier, happier society (or at least, this is my interpretation -- if so, I really like this notion). In short, "society should enable individuals to meet their full potential".
9:15 - 10:45 AM - Enhancement & Human Rights
This was a panel moderated by Nick Bostrom, Ph.D. (whom I am familiar with mainly as the author of The Fable of the Dragon-Tyrant). Panel members were Patrick Hopkins, Ph.D., whose talk was entitled "Why Human Rights Are A Problem For Enhancement"; Chris Gray, Ph.D., who discussed "Cyborg Political Technologies"; and Nigel Cameron, Ph.D., who delivered "Some Caveats For Enhancers".
Dr. Hopkins was primarily concerned with whether humans can be said to possess a "fundamental right to enhancement" and, if so, in what context this right can be defended. Three different possible bases for a right to enhancement were put forth:
1.) Nature or "natural law" basis, meaning that through some inherent property of whatever it means to be human, we might be said to have a sort of fundamental right to make use of technologies that assist in allowing us to meet various goals we might have.
2.) Interests, meaning that rights are fundamental due to their capacity to protect what we care about, while keeping in mind that exercising certain rights of our own may impose duties on others.
3.) Autonomy, meaning that individuals have the right to absolute liberty so long as exercising this liberty does not impose on the liberty of others -- a libertarian model, in other words, in which the structure of interaction between individual rights should be self-correcting. The Autonomy basis was described as the "most common" and "most flexible" argument given for a person's right to access enhancement technology.
Dr. Hopkins pointed out that "rights", particularly things described as "fundamental rights", are difficult to define, seeing as they are invisible and intangible yet also considered by many to be self-evident. The problem of how an invisible, intangible construct can be viewed as fundamental contributes to the challenge of attempting to frame enhancement from a rights perspective. It was also noted that establishing certain rights as fundamental -- even as it creates various freedoms -- necessarily imposes certain restrictions and conditions on individuals and societies, and that the nature of these conditions and restrictions needs to be taken into consideration.
At the beginning of this talk, I was fairly supportive of the idea of using the Autonomy basis to justify access to enhancement (which should definitely not be construed as a mandate for people to undergo compulsory enhancement, or as support for the idea that all possible enhancements should be provided free of charge). By the end, I was still hovering around this position, though Dr. Hopkins sought to make the point that the Interests basis was perhaps the most valid. He expressed some concern about the "fetishization of liberty" and the fact that there will always be some people who abuse freedoms granted purely for the sake of liberty as a value in and of itself. Some historical context was given for this -- it was noted that societies with a certain level of granted freedoms do tend to swing back to a reactionary mode after a time.
I can understand the support for the Interests position in the sense that, using this model, people are urged to consider (when seeking enhancement or modification): what good will this serve, for me, and for those around me? Rather than doing whatever one wants simply because one is legally permitted to do so, people who think in terms of interests are likely to make healthier choices. Philosophically, I do agree with this -- though I would not go so far as to attempt to legislate anything that would require individuals to prove or demonstrate a degree of "common good" or even immediate self-based good that would come of a particular enhancement. Rather, I would definitely attempt to educate people so that they would be enabled and empowered to make healthier choices. The example given by Hopkins of a destructive use of technology was the common sci-fi notion of "plugging in" to some device that constantly stimulated the brain's pleasure centers, producing a state of unrelenting bliss.
I do think that it would be a shame if everyone on the planet plugged into a device like this, but at the same time, I think that the chances of that actually happening would be quite slim. As much as human biology compels us to seek pleasure, I think that personality development figures very strongly into the likelihood of someone choosing escapism or productive living. I don't believe that there is any sort of physically-based pleasure signal that would satisfy me more than, say, reading a good book or figuring out an interesting puzzle. If such devices did exist, it is almost certain that some people would choose to "plug in" and stay there, but I am convinced that no matter how supposedly pleasurable it was, plenty of people would actually get bored and seek other kinds of pleasure. Rather than making such devices illegal, it seems that working to create a society that people do not feel compelled to escape from -- in which they are enabled and empowered to reach all sorts of productive goals -- would support both the Interests ideal philosophically, and the Autonomy ideal from a legislative standpoint.
The Nature basis was dismissed to a certain extent, or at least not discussed as comprehensively as the distinction between Interests and Autonomy, in part because the Nature basis commonly rests upon assumptions related to particular religious beliefs. However, Hopkins did not dismiss the possibility of defining and using a concept of a "human nature" or "person nature" -- he just noted that the basis for enhancement rights could not rest upon traditional or religious notions of this "nature". Closing statements suggested that when considering any individual enhancement or goal, it would be wise to consider whether the enhancement is geared toward "worthwhile and noble ends". At the individual level, I wholly agree with this, and believe that people should be encouraged to consider their choices carefully -- but this need not entail legislation that bans the very development of certain technologies, in my estimation.
Dr. Gray delivered a very colorful talk that first described "rights" as granted on the basis of what freedoms a person is willing to risk physically to obtain. This is a rather useful definition and certainly seems logical when viewed in terms of history -- over time, the ongoing struggle for freedom and autonomy has led to numerous persons risking their lives, jobs, and property for the sake of assuring liberties.
I had a bit more of a difficult time taking notes on Gray's presentation (and throughout this report, my descriptions and reactions to various talks will vary in their length and depth -- the sheer diversity of presentation styles made some much more conducive to note-taking than others; plus, my non-cyborgified human hand simply needed a rest now and then!). The main point I got out of Gray's presentation was that culture itself may seem transparent to the citizen / person, but it is still there, forming a framework for what we do and think, and how we assess information. Concepts like rights, and liberty, and even knowledge must be recognized as products of what is most certainly a limited understanding of reality.
One thing I appreciated from this talk was that I finally got a clear sense of what epistemology means -- the definition of this word makes a curious amount of sense when phrased as a question: how do we know what we know? Epistemology is crucial to keep in mind when attempting to make judgements based on what we think we know, because as Gray pointed out, "you cannot have perfect information". Mathematics, though it serves as a practical means for quantifying data and making comparisons, is not (as Gödel showed with his Incompleteness Theorems) a system capable of proving every true statement within itself, nor of proving its own consistency. This was admittedly my first exposure to the Incompleteness Theorems in the formal sense, but I remember something vaguely along these lines from the film Labyrinth, when the protagonist Sarah had to choose her direction in the maze based on some very confusing instructions from a pair of creatures -- one who always lies, and one who always tells the truth.
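As an aside, the Labyrinth riddle can actually be solved mechanically, which is part of its charm. The classic solution is to ask either creature, "Which door would the other one say leads out?" and then take the opposite door. Here is a small Python sketch of my own (nothing presented at the conference) checking that this strategy works no matter which creature you ask or which door is truly correct:

```python
# Two-guards puzzle: one guard always lies, one always tells the truth,
# and you may ask only one question. The classic trick: ask either guard
# which door the OTHER guard would point to, then take the opposite door.

DOORS = ("left", "right")

def other(door):
    """The door that isn't this one."""
    return "right" if door == "left" else "left"

def direct_answer(guard_lies, truth):
    """What one guard says when asked directly, 'Which door leads out?'"""
    return other(truth) if guard_lies else truth

def nested_answer(asked_lies, other_lies, truth):
    """Ask one guard, 'Which door would the other guard say leads out?'"""
    # What the other guard would actually claim...
    others_claim = direct_answer(other_lies, truth)
    # ...filtered through the asked guard's honesty or dishonesty.
    return other(others_claim) if asked_lies else others_claim

# Exhaustively check every case: whichever guard we ask, and whichever
# door truly leads out, the opposite of the nested answer is correct.
for truth in DOORS:
    for asked_lies in (True, False):
        reply = nested_answer(asked_lies, not asked_lies, truth)
        assert other(reply) == truth

print("Taking the opposite of the nested answer always finds the exit.")
```

The reason it works: the question routes the answer through exactly one liar in every case, so the reply is always the wrong door, and its opposite is always right.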
Gray pointed out that mathematics is one thing, but real life is quite another. He amused the audience by noting that if we were walking down the street and asked someone if they'd seen which way a thief went, our response to an answer like, "He went that way, but I always lie" would be, "You asshole!" at which point we'd go and ask someone who did not suffer from terminal asshattery.
Now, bringing this back to the notion of enhancement -- for which Gray invoked such terms as "cyborgification" and "participatory evolution" (and I'll admit a decided fondness for the phrase "participatory evolution") -- I got the impression that rather than directly making statements or judgements about specific enhancements and the ethics thereof, Gray was providing a particular way to think about the decision-making process associated with seeking enhancements or determining whether they are worthwhile in terms of whether one is willing to work or fight for a right to something in particular. One concept, again invoking the notion of epistemology, that Gray also noted was the fact that whenever we choose to know something, we are simultaneously choosing not to know something else.
I would extend this to enhancements as well, though Gray did not do this directly -- when a person chooses to gain a particular skill or device or modification, s/he is simultaneously choosing not to gain something else. This, I think, is an important consideration for everyone, and it should also be pointed out that people already do this all the time, such as when they select a particular college major (to the exclusion of other majors) or have dinner in one restaurant rather than another. In contrast with some speakers, who seemed to attempt to lay out their points and explain all of them, Gray made points and statements that (at least for me) prompted a lot of tangential thoughts and the formation of a sort of mental list of "things I need to study up on". This, I would certainly say, is a positive outcome of having listened to a speech.
Next came Dr. Nigel Cameron, who was perhaps the most conservative speaker at the entire conference. In all honesty, I expected to disagree with him more than I did (though I really only agreed fully on one point) -- I was surprised and pleased to hear that he was "less concerned about life extension" and primarily concerned, in the negative sense, about possible implications of genetic alteration and cognitive enhancement. His talk focused on five caveats for would-be enhancers, and while I definitely disagreed with the apparent fatalism regarding human potential and tendencies put forth in these caveats, I think that his viewpoint importantly brought to the arena many of the concerns held by people who oppose enhancements and even transhumanism at large. That is, he served as a representative for the people transhumanists / tech progressives are likely to encounter in many contexts, who at least attempt to formally or academically deconstruct transhumanist arguments.
Firstly, Dr. Cameron said something I wholly agreed with -- that perhaps the term "enhancements" is too subjective and relative to properly describe the processes of genetic, physiological, or technological alterations of the human form and ability set. He suggested the term "interventions" might be better, and while "intervention" has a negative connotation to me for reasons entirely unrelated to transhumanism, in this context it does seem appropriate. The term "modifications" might also be suitable. Simply put, I agree that "enhancement" is indeed a non-neutral term and that it must be acknowledged that one person's enhancement is another person's disability -- and also that whether a given modification or intervention is even perceived by the individual as enhancement or detriment can be largely environmentally dependent. But for the purposes of this article, I will occasionally use the terms "enhancement", "intervention", and "modification" interchangeably, with the qualification that I do not necessarily consider an "enhancement" to be something definable in the value-sense by any agent outside a particular individual in a particular environment.
Secondly, Dr. Cameron's five caveats were introduced and explained.
1.) Human nature is "experientially flawed", and we "know" that we humans have the capacity to become blundering and corrupt. Cameron seems to espouse the "technology is power" viewpoint, and is concerned about its potential for use as what would amount to weaponry of mass destruction.
While I do think that it is important to keep the potential for corruption in mind when embarking on any endeavour that could result in abilities and capacities not originally held by a person or group, I do wonder what Dr. Cameron thinks of the notion that many transhumans, or posthumans, or otherwise-modified individuals consider the elimination or diminishing of corrupt tendencies to be a goal of posthuman existence. That is, yes, we will have more powerful technology but at the same time it is possible that modifications to brains and / or ethics-memes will make corruption, "blundering", and mass destruction far less likely.
2.) The "Lewis Paradox" -- here, Cameron invokes C.S. Lewis' The Abolition of Man. This caveat is a criticism of using "science in the wrong way, that is, to 'debunk' values." In a sense, I would interpret this caveat as a critique of postmodern relativism and of attempts to take decision-making out of real human experience (however one might define that) and into the realm of mechanistic means. Technology could very well be a "zero-sum game" situation wherein, by defending its proliferation, application, and availability, humans might end up losing their very existence in the process.
Again, this caveat -- while it does present what could be construed as a reasonable "reality check" in the flavor of Dr. Hopkins' suggestion that actions should be based on whether they are in our best interests -- rests on some very arbitrary assumptions that necessarily trace back to the idea of natural (or supernatural / religious) law and the utilization of such law for the definition of "human nature". I would argue that the attempts by some to justify technological advance and enhancement in no way lead to a society wholly devoid of values. It would seem that any means by which technology might be controlled, banned, or heavily restricted might entail means of control much more insidious than the very things that this control seeks to prevent.
3.) The "New Feudalism" concept -- continuing the theme of viewing technology as an embodiment of power, Cameron suggests that we look at the already-existing disparity between the "haves" and the "have-nots" when it comes to such things as computer technology. As technology advances further and further, more power will be placed into the hands of a smaller group of people over time.
Taken as a cautionary statement, I have no issue with this; however, I do not see how it could be used as a basis for pre-emptive legislation. Disparities must be acknowledged, certainly, but there are ways to remedy such disparities without resorting to preventing the development or proliferation of technology. I would need to read up on history and world statistics in order to make a more definitive statement about the social equalization effects of technology, so I will not say more on this matter here, but I will say that I don't think that "feudalism" is an inevitable consequence of technology. As another speaker noted later in the day (and I will go into this further in the next chapter of this report), there is no reason to believe that competitiveness is a more primary defining attribute of humanity than cooperation.
4.) "Where Freedom Truly Lies" -- like Hopkins, Cameron warns against "radical and random notions of autonomy" and points out that a freedom in one context constitutes a restriction in another. Exercising a choice, after all, necessarily leads to a situation where other choices can no longer be made.
This point touches on some notions covered by both Dr. Hopkins and Dr. Gray, in the sense of making a philosophical statement regarding the application of rights to the arena of choice. Though I think that different people are likely to have very different ideas as to what constitutes "radical and random", I can see how it is indeed necessary to not go blindly into anything based on a superficial perception of the goodness of an action.
5.) This seemed to be, as far as I could tell, a "human nature"-based caveat. The phrase "This Human Thing" was invoked here; however, I think that Dr. Cameron needs to write a book or paper (or perhaps he already has) that describes "This Human Thing" coherently. As far as I could tell, "This Human Thing" was reminiscent of Francis Fukuyama's "Factor X", which supposedly represents an irreducible collection of traits or commonalities that make humans human. Cameron questioned whether his Human Thing could coexist with exponentially-advancing technology, and urged that transhumanists continue to question this as well.
The question-and-answer session following this set of talks included a very cogent point made by someone (who may have been Dr. Anders Sandberg -- if anyone reading this knows otherwise, please let me know, and I will correct the attribution) regarding the fact that human musculature has basically been rendered nearly irrelevant. That is, we can flick or twitch a finger and activate devices that hurtle along at great speeds (i.e., automobiles) or (more ominously) result in the explosion of massive bombs. I will need to locate and listen to the audio recorded during this discussion to comment further on it, but one thing I will say about the conference as a whole is that the discussion sessions could have been a bit longer, so as to allow the speakers to answer questions more completely. This is a minor picky point, though -- for the most part, the speakers were impressive in their ability to concisely offer responses without sacrificing accuracy. And as I commented to several people at the event, I observed a good level of intellectual integrity on the part of people when they were asked a question that would have required an answer beyond the scope of what the speaker had studied and considered pertaining to his or her topic of interest.
Stay tuned for the next installment of this report!