Future computer robots will support humans and enable a very high quality of life. Genetic engineering may provide cures for most diseases. Nanotechnology can greatly improve our lives. Molecular electronics will soon allow us to build computers a million times more powerful than today's personal computers. This is all fine and good, assuming the great danger from self-replicating engineered organisms is understood. But I am astounded at how many singularitarians and transhumanists seem to have no problem accepting the idea that robots will eventually succeed human beings and make us extinct!
In the ethical dimension of these new technologies, human-directed natural evolution will at first compete with human-directed cyborg evolution, until the cyborgs supposedly replace human beings. I consider it unethical, and dangerous for future human and post-human evolution, to fuse with robots or to become robots.
Assuming we survive, we will develop computers that can do most things better than humans, but humans, and post-humans, need to retain control over the machines. As Bill Joy and others have pointed out, computer robots making their own decisions could view humans as consuming resources the robots require, and so could decide to get rid of us.