Reading for Chapter
Jul. 31st, 2006 03:13 pm
This is sheer, unadulterated procrastination, but possibly it will help. Because I have a pile of transhumanist and anti-transhumanist books on my desk, and I want to smack the authors of all of them.
Ray Kurzweil, The Singularity Is Near.
Posited: "Evolution" can be charted on a graph of state changes, starting with the emergence of life and including major technological developments. Humans are a step above non-sapient animals. Strong AI will be a step above us.
Conclusion: The AIs we create will be intensely grateful to us and devoted to our well-being.
Unstated assumption required for this chain of logic to work: Humans have been utterly devoted to the well-being of our evolutionary predecessors.
SMACK!
Michael Crichton, Prey. (Skip to the next SMACK, if you don't want the whole plot spoiled.)
Set-up: A new military camera is created, using a swarm of flying nanobots. Their movements are based on an artificial life program, in turn based on the movements, but not the actual motivations or hunting behavior, of a population of generic predators. An evolutionary programming algorithm, not entirely under the programmers' control, is used to produce swarms that don't blow apart in a strong wind.
Result: The swarms develop A) the ability to not blow apart in a strong wind, B) a method of drawing energy by eating meat (note, need not be human--they just happen to be carnivores), C) sapience (strong AI), and D) the ability to create utility fog (highly advanced nanotech, capable of eliminating poverty and reliance on non-renewable resources forever).
Conclusion: The only possible way to deal with a fellow sapient that speaks English, has already demonstrated a capacity for becoming fond of humans, and knows that you basically have a gun to its head... is to destroy it entirely, without getting records of how it developed technologies that could save millions of lives.
Bonus Assumption: A developmental psychologist, given the opportunity for first contact with a non-human intelligence, would have to be out of her mind to want to test its mental capacities.
SMACK!
Bill McKibben, Enough: Staying Human in an Engineered Age.
Posited: "Just because I'm writing an anti-technological screed doesn't mean I'm a Luddite."
In Support: Overview of several upcoming genetic technologies, described in such a way as to get the maximum possible kneejerk negative reaction. Use of rhetorical questions about "Is this a good idea?" to which the reader is obviously supposed to answer "No," but to which my answer is, "Well, maybe."
SMACK!
Martin Rees, Our Final Hour.
I haven't picked this one up yet, but it came out in 2003. Perhaps he ought to change the title.
no subject
Date: 2006-07-31 11:06 pm (UTC)
I've got nothing against anyone who wants to upload themselves onto a computer (although I'd be kind of insulted if I were sleeping with them). More room for the rest of us! But the assumption that any significant portion of people would want to makes me think that somebody needs to spend more time outside the lab.
no subject
Date: 2007-01-27 06:45 am (UTC)
I think that "it sounded even better when I was on hallucinogens" should be a red flag for anyone.