Currently there is no government panel or program to address global catastrophic risks, including risks of human extinction ("existential risks"). Such a body could collect proactive measures to prevent extinction-scale disasters, build resilience against less severe global catastrophes, and help coordinate (inter)governmental initiatives, including research grants, to reduce the likelihood and severity of extinction threats. One significant threat to the future of humanity that has received virtually no governmental attention is the development of a human-indifferent artificial general intelligence able to alter its own source code and become "superintelligent," that is, smarter than any group of humans in multiple domains. Please note that these are a different set of issues than those covered by the Nano-Bio-Info-Cogno (NBIC) Convergence events, though they likewise encompass information technology, nanotechnology, and biotechnology. Institutions currently addressing such risks, from which panel members might be drawn, include the Future of Humanity Institute at Oxford University (http://www.fhi.ox.ac.uk/research/global_catastrophic_risks), the Singularity Institute for Artificial Intelligence (http://singinst.org/aboutus/ourmission), and the Institute for Ethics and Emerging Technologies (http://ieet.org/index.php/IEET/about). If we get global catastrophic risks wrong, there may be no future for humanity, period.
Idea No. 272