Future of Humanity Institute

The Future of Humanity Institute (FHI) is an interdisciplinary research centre focused on predicting and preventing large-scale risks to human civilization. It was founded in 2005 as part of the Faculty of Philosophy and the Oxford Martin School at the University of Oxford, England.[1] Its director is philosopher Nick Bostrom, and its research staff and associates include futurist Anders Sandberg, engineer K. Eric Drexler, economist Robin Hanson, and Giving What We Can founder Toby Ord.[2]

The Institute's stated objective is to develop and utilize scientific and philosophical methods for reasoning about topics of fundamental importance to humanity, such as the effect of future technology on the human condition and the possibility of global catastrophes.[3][4] It engages in a mix of academic and outreach activities, seeking to promote informed discussion and public engagement in government, businesses, universities, and other organizations.

History

Nick Bostrom established the Institute in November 2005 as part of the Oxford Martin School, then the James Martin 21st Century School, to bring together futures studies researchers.[1] Its research staff reached full capacity in December 2006.[5] Between 2008 and 2010, FHI hosted the Global Catastrophic Risks conference, wrote 22 academic journal articles, and published 34 chapters in academic volumes. FHI researchers gave policy advice at the World Economic Forum, to BAE Systems and Booz Allen Hamilton, and to governmental bodies in Sweden, Singapore, Belgium, the United Kingdom, and the United States. Bostrom and bioethicist Julian Savulescu also published the book Human Enhancement in March 2009.[6]

In 2012, the Oxford Martin School ceased funding the Institute, as it transitioned to full support by independent donors.[1] Most recently, FHI has focused on obstacles to space colonization and on the dangers of advanced artificial intelligence. In 2014, its researchers published several books on AI risk, including Stuart Armstrong's Smarter Than Us and Bostrom's Superintelligence.[7][8]

Existential risk

The largest topic FHI has spent time exploring is global catastrophic risk, and in particular existential risk. In a 2002 paper, Bostrom defined an "existential risk" as one "where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential".[9] This includes scenarios where humanity is not directly harmed, but it fails to colonize space and make use of the observable universe's available resources in humanly valuable projects, as discussed in Bostrom's 2003 paper "Astronomical Waste: The Opportunity Cost of Delayed Technological Development".[10]

Bostrom and Milan Ćirković's 2008 book Global Catastrophic Risks collects essays on a variety of such risks, both natural and anthropogenic. Possible catastrophic risks from nature include supervolcanism, impact events, and energetic astronomical events such as gamma-ray bursts, cosmic rays, solar flares, and supernovae. These dangers are characterized as relatively small and relatively well understood, though pandemics may be an exception, being both more common and more likely to dovetail with technological trends.[11][4]

FHI pays particular attention to anthropogenic threats, such as synthetic pandemics caused by weaponized biological agents; in emphasizing risks from future technologies, its focus overlaps with that of the Centre for the Study of Existential Risk and the Machine Intelligence Research Institute.[12][13] FHI researchers have also studied the impact of technological progress on social and institutional risks, such as totalitarianism, automation-driven unemployment, and information hazards.[14]

Anthropic reasoning

FHI devotes much of its attention to exotic threats that have been little explored by other organizations, and to methodological considerations that inform existential risk reduction and forecasting. The Institute has particularly emphasized anthropic reasoning in its research, as an under-explored area with general epistemological implications.

Anthropic arguments FHI has studied include the doomsday argument, which holds that humanity is likely to go extinct relatively soon, because it would be surprising for a given observer to occupy an extremely early point in human history; instead, present-day humans are likely to be near the middle of the distribution of humans who will ever live.[11] Bostrom has also popularized the simulation argument, which suggests that if humanity is likely to avoid existential risks, then it is not unlikely that humanity and the world around us are a simulation.[15]
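
The quantitative core of the doomsday argument can be sketched as follows (the birth-rank figure below is a rough illustrative estimate, not a figure from FHI's publications): if one's birth rank $r$ among all $N$ humans who will ever live is treated as uniformly distributed, then

\[
P\!\left(\frac{r}{N} > \frac{1}{20}\right) = 0.95
\quad\Longrightarrow\quad
P\!\left(N < 20\,r\right) = 0.95 .
\]

Taking $r \approx 10^{11}$, roughly the number of humans born to date, yields 95% confidence that fewer than about $2 \times 10^{12}$ humans will ever live.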

A recurring theme in FHI's research is the Fermi paradox, the surprising absence of observable alien civilizations. Robin Hanson has argued that, to account for the paradox, there must be a "Great Filter" preventing space colonization. That filter may lie in the past, if intelligence is much rarer than current biology would predict, or it may lie in the future, if existential risks are even larger than is currently recognized.
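
One hedged way to formalize the argument, in the spirit of the Drake equation (the decomposition below is illustrative, not Hanson's exact formulation), writes the expected number of visible expansionist civilizations as a product of step probabilities:

\[
\mathbb{E}\!\left[N_{\text{visible}}\right] \;=\; N_{\text{stars}} \cdot p_{\text{life}} \cdot p_{\text{intelligence}} \cdot p_{\text{survival}} \cdot p_{\text{expansion}} .
\]

With $N_{\text{stars}}$ on the order of $10^{22}$ and $\mathbb{E}[N_{\text{visible}}] \approx 0$, the product of the transition probabilities must be smaller than roughly $10^{-22}$, so at least one factor, the Great Filter, must be extremely small. If that factor is $p_{\text{life}}$ or $p_{\text{intelligence}}$, the filter lies behind us; if it is $p_{\text{survival}}$ or $p_{\text{expansion}}$, it lies ahead.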

Human enhancement and rationality

Closely linked to FHI's work on risk assessment, astronomical waste, and the dangers of future technologies is its work on the promise and risks of human enhancement. The modifications in question may be biological, digital, or sociological, and an emphasis is placed on the most radical hypothesized changes, rather than on the likeliest short-term innovations. FHI's bioethics research focuses on the potential consequences of gene therapy, life extension, brain implants and brain–computer interfaces, and mind uploading.[16]

FHI's focus has been on methods for assessing and enhancing human intelligence and rationality, as a way of shaping the speed and direction of technological and social progress. FHI's work on human irrationality, as exemplified in cognitive heuristics and biases, includes an ongoing collaboration with Amlin to study the systemic risk arising from biases in modeling.[17][18]
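
A minimal Monte Carlo sketch of the underlying concern, with purely hypothetical parameters (an illustration of correlated model error, not the FHI-Amlin methodology): when many insurers rely on one shared, biased loss model, their reserving errors move together, so industry-wide shortfalls become far more frequent than when each firm's model errs independently.

import random

def shortfall_rate(shared_model: bool, n_insurers: int = 10, n_years: int = 100_000) -> float:
    """Fraction of simulated years in which every insurer under-reserves at once."""
    shortfalls = 0
    for _ in range(n_years):
        true_loss = random.lognormvariate(0.0, 1.0)   # realized industry-wide loss
        if shared_model:
            # One common model: a single shared error biases every insurer the same way.
            error = random.gauss(-0.2, 0.3)           # systematically optimistic model
            estimates = [true_loss * (1 + error)] * n_insurers
        else:
            # Independent models: estimation errors are uncorrelated across insurers.
            estimates = [true_loss * (1 + random.gauss(-0.2, 0.3)) for _ in range(n_insurers)]
        # Systemic shortfall: all insurers reserved less than the realized loss.
        if all(est < true_loss for est in estimates):
            shortfalls += 1
    return shortfalls / n_years

print("shared model :", shortfall_rate(shared_model=True))    # roughly 0.75
print("independent  :", shortfall_rate(shared_model=False))   # roughly 0.05

With a shared model, either every insurer under-reserves or none does, so the systemic failure mode occurs whenever the common bias points the wrong way; with independent models, simultaneous failure requires every error to align by chance.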

Selected publications

  • Nick Bostrom and Milan Ćirković: Global Catastrophic Risks
  • Nick Bostrom and Anders Sandberg: Brain Emulation Roadmap
  • Nick Bostrom and Julian Savulescu: Human Enhancement

References

  1. ^ a b c "Humanity's Future: Future of Humanity Institute". Oxford Martin School. Retrieved 28 March 2014. 
  2. ^ "Staff". Future of Humanity Institute. Retrieved 28 March 2014. 
  3. ^ "About FHI". Future of Humanity Institute. Retrieved 28 March 2014. 
  4. ^ a b Ross Andersen (25 February 2013). "Omens". Aeon Magazine. Retrieved 28 March 2014. 
  5. ^ Nick Bostrom (18 July 2007). Achievements Report: November 2005 – July 2007 (Report). Future of Humanity Institute. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.136.7706&rep=rep1&type=pdf. Retrieved 31 March 2014.
  6. ^ Achievements Report: 2008–2010 (Report). Future of Humanity Institute. Archived from the original on 21 December 2012. http://web.archive.org/web/20121221144029/http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0019/19900/Achievements_Report_2008-2010.pdf. Retrieved 31 March 2014.
  7. ^ Mark Piesing (17 May 2012). "AI uprising: humans will be outsourced, not obliterated". Wired. Retrieved 31 March 2014. 
  8. ^ Coughlan, Sean (24 April 2013). "How are humans going to become extinct?". BBC News. Retrieved 29 March 2014. 
  9. ^ Nick Bostrom (March 2002). "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards". Journal of Evolution and Technology 9. Retrieved 31 March 2014. 
  10. ^ Nick Bostrom (November 2003). "Astronomical Waste: The Opportunity Cost of Delayed Technological Development". Utilitas 15 (3): 308–314.  
  11. ^ a b Ross Andersen (6 March 2012). "We're Underestimating the Risk of Human Extinction". The Atlantic. Retrieved 29 March 2014. 
  12. ^ Kate Whitehead (16 March 2014). "Cambridge University study centre focuses on risks that could annihilate mankind". South China Morning Post. Retrieved 29 March 2014. 
  13. ^ Jenny Hollander (September 2012). "Oxford Future of Humanity Institute knows what will make us extinct". Bustle. Retrieved 31 March 2014. 
  14. ^ Nick Bostrom. "Information Hazards: A Typology of Potential Harms from Knowledge". Future of Humanity Institute. Retrieved 31 March 2014. 
  15. ^ John Tierney (13 August 2007). "Even if Life Is a Computer Simulation . . .". The New York Times. Retrieved 31 March 2014. 
  16. ^ Anders Sandberg and Nick Bostrom. "Whole Brain Emulation: A Roadmap". Future of Humanity Institute. Retrieved 31 March 2014. 
  17. ^ "Amlin and Oxford University launch major research project into the Systemic Risk of Modelling" (Press release). Amlin. 11 February 2014. Retrieved 2014-03-31. 
  18. ^ "Amlin and Oxford University to collaborate on modelling risk study". Continuity, Insurance & Risk Magazine. 11 February 2014. Retrieved 31 March 2014. 

External links

  • FHI website
  • Nick Bostrom's Homepage