Articles:
- Yoshua Bengio et al., Managing extreme AI risks amid rapid progress, Science, 20 May 2024, Vol 384, Issue 6698, pp. 842-845. (arXiv)
- Jiaming Ji et al., AI Alignment: A Comprehensive Survey, arXiv:2310.19852. (website)
- Jack Stilgoe, Technological risks are not the end of the world, Science, 18 Apr 2024, Vol 384, Issue 6693.
- Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2016. (Traditional Chinese edition: 超智慧:AI風險的最佳解答, translated by 唐澄暐, 感電出版, 2023)
- Brian Christian, The Alignment Problem: Machine Learning and Human Values, W. W. Norton & Company; 1st edition, 2020.
Courses:
- Dan Hendrycks (Director, Center for AI Safety), Introduction to ML Safety, 2023.
Organizations:
- Center for AI Safety
- Statement on AI Risk: Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
- 2023 Impact Report
- Stanford Center for AI Safety
- U.S. Artificial Intelligence Safety Institute