I tend to see tools as extensions of the self. Tools are not friends or pets, but ME.
This is a good working definition of "tools", and one I thank Mr. Welch for reminding me of. Thinking of tools in this way -- as a particular class of objects that can serve as extensions of self (or extensions of will, perhaps) -- is very useful, particularly when viewing the "person" as embedded in and part of the environment, as opposed to somehow distant from it.
This actually isn't very far from a favorite bit of personal philosophy (it's even in my blog profile): I am a small piece of the universe observing itself. I see all people and things as simultaneously discrete and joined -- if I had to sculpt a geometric model of reality (a daunting task if there ever was one!), one possible model might resemble a big rubber sheet pulled to tiny points in some areas, stretched thin in others, pushed to a smooth roundness in still others, etc. Basically, while parts of the sheet would certainly have their own identities and local characteristics, and while each part would consequently be an entity in its own right, all parts and the interconnections between them would still comprise a larger entity.
Sticking with that model for now, let's say a person is initially represented by a point on the sheet pulled sharply upward. As this person grows, develops, learns, and interacts with the other local surface irregularities, relationships will be established with those irregularities. Depending on the type and nature of each irregularity, the relationship between it and the person will effectively change the shape of the person in some way. Some irregularities might make the person-representing point poke out further from the plane of the sheet, whereas others might smooth it out and draw it closer.
Yet all the while, the person maintains a sense of continuity, and certain aspects of his trajectory through time will always show the influence of his initial conditions. Just as the sheet itself provides fertile ground for a tremendous diversity of individual forms, each person-point is simultaneously capable of evolving in any of a fantastic array of directions and of maintaining a distinct sense of continuous personhood.
Along these lines, Nato continues in his comment:
You may command and own your body in that same fashion as you might "ruthlessly" dispose of tools. We see that as fundamental right.
In reading this statement, I get a much clearer idea of how tools in particular might differ from other machines (or beings, for that matter). Invoking the "sheet model" again, tools would represent those irregularities that can be effectively "absorbed" by the person-points to the point of becoming part of them. Similarly, tools can also be discarded and/or removed when the person no longer finds them useful, or when they begin to pose some problem.
The "body" over time cannot be said to be a static clod of matter -- rather, the body is a dynamic process that winds its way through spacetime, memory and sensation incrementally bridging the piecewise generations of cellular turnover. In some respects, cells and eyeglasses and hair and prosthetic limbs and tattoos and iPods and lungs are all of the same ilk: things that individually are not persons, but that can be aspects of persons that in turn define those persons -- at least on a moment to moment basis.
Basically, I concur with Nato that each person -- each body -- should indeed be considered to have a fundamental right to self-configure. And at this point I'd be willing to suggest that tools (if defined as non-autonomous objects that can nonetheless merge with -- and become part of -- autonomous entities) figure heavily into this process.
Nato also makes an interesting observation about robots. He writes on this subject:
It's common for people to confront the concept of robots as others, as, literally, autonomous automatons (auto meaning "self"). But personally, they made much more sense when I started conceiving of them in the traditional sense of extensions of their users. And indeed, the most prominent uses of robots we have today, before we really have any generally competent software to make them useful as truly autonomous entities, is in capacities where they are remotely controlled as untethered extensions of the bodies of remote operators.
I personally do tend to think of robots (at least in theory) as "autonomous automatons", but this is probably my sci-fi sensibilities coming through. Nato is right in noting that today's robots are not autonomous -- every robot I've ever made the acquaintance of in real life has been either an industrial robot, a toy, or an experimental "kit" bot equipped with a few sensors and/or actuators.
And even the more impressive "robots" I've heard of (such as the DARPA Grand Challenge cars) haven't been autonomous in the sense that humans, many animals, and fictional robots (like R2-D2) are -- at best, they can do one thing quite well, but they aren't capable of deciding they'd rather do something else, and it seems to me unlikely that they've experienced existential despair over this fact.
Part of what was lingering in the back of my mind as I wrote my prior post was the question of why, if at all, humans might want to actually build truly autonomous machines. I've observed, as Nato has, that humans tend strongly to use technology prosthetically. That is, as the collective pool of knowledge about How Stuff Works (and How To Make Stuff Do Other Stuff) grows over time and is communicated more effectively to more and more people, the trend has been toward applications that allow people to assert their ideas, desires, and will over a greater distance, or with greater strength, or with greater precision, than was feasible before the adoption of the application. The trend has not been toward trying to (forgive the terminology) "ensoul" machines (except perhaps in the context of university lab projects, none of which have exactly panned out in that direction so far).
I'm not going to get into the whole debate over whether it is or is not possible to "create consciousness" in a substrate other than an animal brain. Intuitively, my sense is that consciousness is not a substrate-dependent phenomenon, but I don't know nearly enough about neuroscience or robotics to make any strong claims, so I'm content for now to keep reading and watching what the relevant research reveals.
But in any case, that debate is irrelevant to the question of whether humans would want autonomous automatons running around. The world is already pretty well populated by autonomous agents (animals), and half the time it seems like humans are more concerned with trying to decrease the autonomy of these agents than with increasing it. Hence, the idea of large groups of humans deciding to create autonomous robots and "release them into the wild" for the sake of allowing new life to flourish seems a mite farfetched.
Plus, there's the ethical problem with creating an autonomous entity in a lab -- as far as I'm concerned, once you've established that an entity is autonomous, you have no right to keep it confined (in a lab or otherwise), nor is it acceptable to subject it to non-consensual or coerced experimentation. This fact alone makes it seem unlikely to me that truly autonomous robots are going to be a major human goal anytime in the foreseeable future -- right now, robots outside the movies are pretty much thought of as being "tools" (extensions of human will), and people don't want their tools to talk back or say "No!".
Part of what is meant by some uses of the word "progress" is a kind of ongoing emancipatory process that involves seeking to recognize more and varied forms of personhood, to develop and provide tools that assist with individual flourishing, and to ensure that new technological developments (or proposed developments) benefit more than a few privileged folks. So while I certainly enjoy talking and thinking about robots, and while I would be overjoyed to someday wander through bright jungles populated by colorful mechanical fauna who have been set free to flourish as beings in their own right (rather than as means to some "end"), I think it's important to stay grounded in the present when considering what actions would likely lead to the greatest progress in the sense described above.
"Real" autonomous robots would in effect be non-tools. And non-tools (people, other autonomous entities, etc.) cannot be used, absorbed, and/or discarded by others in the sense that tools can. One reason I find myself intrigued by "roboethics" discussions these days is actually tied into the very real civil rights struggles faced by already-existing persons. And again with the disclaimer that this is a science fiction scenario, I can't help but wonder whether humans are at the point of being able to recognize very atypical persons (such as sentient robots would be) as non-tools. My guess is "not quite", and I see a potential (if not exactly imminent) danger of people creating entities that are autonomous and sentient, but that are not acknowledged as such.
It's not as if there isn't a precedent for this. Some of the worst abuses in history have been perpetrated as a result of people trying to use, absorb, and ignore or deny the personhood and autonomy of other people. Ethnic minorities, women, children, disabled persons, and individuals of any configuration in positions of disadvantage for whatever reason have all had to deal with being treated like tools (in the sense of being considered non-autonomous, and only worth what they can "produce", whether it be slave labor, sons to carry on the family lineage, or in the case of disabled persons, "proof" of full personhood in the first place).
And this isn't something we're exactly past as a species yet.
Regardless of the general sense I still have that all things in reality have a kind of "character" to them, I'm well aware that some things are tools, and that people are not tools, though tools can be extensions of people. Robots, perhaps, are interesting because they stand in a strange area where they have the potential to be considered either non-autonomous things or people (or both, context permitting!), depending on what direction the research goes in.
And given this, I think that anyone who finds himself or herself obsessing over "robot rights" would do very well to learn a bit more about general civil rights. Not only is a much greater consciousness of civil rights gravely needed in the present, but it is going to be vital to broaden the common concept of what a full person is if anyone really wants to see the kind of wide-ranging prosthetically-enabled vibrant diversity that may at least become physically feasible within the lifetimes of many alive today.