Super-Intelligence Could Help Us

"One consideration that should be taken into account when deciding whether to promote the development of super-intelligence is that if super-intelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of super-intelligence no matter what. But once in existence, a super-intelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to super-intelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from super-intelligence. The overall risk seems to be minimized by implementing super-intelligence, with great care, as soon as possible."

~Nick Bostrom
