Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains.

If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, biological cognitive enhancement, and collective intelligence.

This profoundly ambitious and original book picks its way carefully through a vast tract of forbiddingly difficult intellectual terrain. Yet the writing is so lucid that it somehow makes it all seem easy. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
## Many science-fiction films tell the same story: the artificial intelligence that humans design, that is, robots, rebels against humanity and comes to rule it. But why would these superintelligent machines want to rule humans at all? Without exception, everyone reasons anthropomorphically, assuming that machines must likewise protect themselves and compete for resources; this includes the author of this book, and it includes Kubrick's much-deified 2001: A Space Odyssey. This is a...
Originally published in Chinese Medical Ethics, No. 7, 2020. [Abstract] The rapid advance of artificial intelligence has made the construction of AI ethics increasingly urgent, and how to keep AI within controllable bounds is one of its central issues. Superintelligence, published in 2014 by the Oxford scholar Bostrom, argues forcefully that artificial intelligence poses real dangers. Bostrom's "instrumental convergence thesis" and...
The final challenge to humanity's destiny: reading Superintelligence: Paths, Dangers, Strategies (by Peng Zhongfu). In an age of ever-accelerating technology, research teams around the world are going all out, racing to develop the next generation of robots. If robots become smarter than humans, if we build robots yet cannot control what they think and do, if robots achieve self-evolution the way humans have, then...
## Recommended by ustc teacher l