Platform feature suggestions
Metaculus Meta
113 · 3.1k comments

At the end of May 2023, will these individuals' signatures be in the FLI open letter to pause giant AI experiments?
Mark Zuckerberg: No · Sundar Pichai: No · Sam Altman: No · 2 others
9 comments · 40 forecasters

$5,000 in Prizes for Comments in the AI Pathways Tournament: Submit Before November 1
AI Pathways Tournament
8 · 4 comments

Ragnarök Question Series: If an artificial intelligence catastrophe occurs, will it reduce the human population by 95% or more?
26% chance
21 comments · 299 forecasters

Will leading AI labs have their models evaluated for dangerous behavior before 2026?
Google DeepMind: 20% · Microsoft: 4% · xAI: 3% · 14 others
14 comments · 95 forecasters

Before 2029, will a new international organization focused on AI safety be established with participation from at least three G7 countries?
40% chance
4 comments · 42 forecasters

Ragnarök Question Series: If a global catastrophe occurs, will it be due to an artificial intelligence failure-mode?
28% chance
44 comments · 432 forecasters

Will Any Major AI Company Commit to an AI Windfall Clause by 2025?
Result: No
14 comments · 121 forecasters

Will the CEO of OpenAI, Meta, or Alphabet (Google) publicly commit to specific limitations on their company’s AI system autonomy before January 1, 2027?
22% chance
5 comments · 45 forecasters

Will there be a leading AI lab with no internal safety team in the following years?
7 comments · 35 forecasters