Monday, January 14, 2008

Response to Brint Montgomery's Transhumanism and the so-called "future good" of humanity.

Bravo to this article: Brint Montgomery's Transhumanism and the so-called "future good" of humanity

Brilliant point about the so-called "paradox" of transhumanism.

If I gathered correctly, here is the basic premise Mr. Montgomery is making. Human morality is not relative; however, the values that the human race embraces as an ethics system at any given time are subject to change, and therefore relative. The values that compose the ethics system we abide by are subject to biological and environmental constraints. As a result, as the human species continues to exist, values may change based on present conditions.

Therefore, what may be considered good or moral today may not be so within 100 years. Thus, transhumanists are faced with a problem. What if what transhumanists deem a moral imperative (the modification, enhancement, and growth of human beings into something greater) is seen as bad, or evil, by the posthumans the transhumanists of today create?

This is clearly a problem because the transhumanist would have created a self-destructive transhuman (or at least self-hating).


Let's go back to the part where we assume that "This is clearly a problem because the transhumanist would have created a self-destructive transhuman." This statement is a paradox within a paradox. Is it really a problem that we've created a self-destructive transhuman? Based on our current ethics system, yes: a contradiction of a moral system, even if it's a moral system that will be used in the future, is considered bad. But...is what we define as good and bad subject to change? Yes it is. So, it could very well be that it is "good" to have a self-destructive transhuman, or good that the value systems of humans vs. the transhuman are different. The point is that even though moral values may change, we don't know what impact that change will ultimately have on a posthuman entity.

It should be noted that many transhumanists believe in a morally superior posthuman, in addition to the posthuman's other enhanced aspects; therefore, a posthuman would have a moral system that rises above the evolutionary and biological constraints that came before. As such, it doesn't matter if the moral systems clash; the posthuman maintains its own. For example, Taiwan and China might have different political and moral systems (let's just pretend they do for the sake of argument). Just because Taiwan branched off from China (as a posthuman will branch from humanity), does it matter that they have different moral systems? Not really; they both function independently of each other.


By the way, when I read this blog, I thought of Mary Shelley's Frankenstein. The monster created by Frankenstein was clearly greater than human, possessing great intelligence, reasoning, and strength, but it hated its existence. It was consumed by its self-destructive nature because what Frankenstein deemed good at first, the monster detested later. The monster loathed its creation, and as a result, only destruction followed. Whooo, tangent.

Now, this is not the best example of what Mr. Montgomery is trying to describe; nevertheless, it still touches on the important fact that transhumanists really have no clue how to be transhuman.

So I take from this blog article the important point that transhumanism is really a hit-and-miss philosophy. We are trying to philosophize about something that is not constant and is bound to change. We are blindly trying to predict the future without even knowing the future.


In my opinion, a huge flaw in the thinking of many transhumanists is that they believe there is only one ultimate path to being transhuman: unknown, but predefined. However, I hold that "transhuman" is an umbrella term for varying human modifications and enhancements. Maybe all of these different transhumans will merge into one superior posthuman, but there is no one way to be transhuman, and if there is, there is no way in hell we know what it is.

2 comments:

Brint Montgomery said...

Hi, thanks for the careful engagement with my article. Although it is possible, as you point out, that a self-hating transhuman could arise because of the decisions we make now, that is not demanded by my position. (Again, though, it is a possibility consistent with my position.) My main point is that transhumanists are a bit too optimistic and presumptive that our understanding of the 'good' now will be seen (or even be understandable) as a 'good' later. The moral link with the future human-like or post-human constructs of the human race is tenuous at best.

Robot Suit said...

Thanks for your response; I agree, then, with your main point.

Like I noted, everything we are talking about is completely unknown. We don't know what good and bad will be in the future; we don't even know whether our moral framework will still be centered on good vs. bad, or be binary in nature, in the future (oh! it rhymes...)


Nevertheless, I still think my point stands that it doesn't matter what moral values we will hold in the future. We don't base our choices on a to-be-held moral system, nor do we fail to function in the future when we uphold a new moral system.