$5,000 in Prizes for Comments in the AI Pathways Tournament: Submit Before November 1 — AI Pathways Tournament (8 · 3 comments)
Platform feature suggestions — Metaculus Meta (112 · 3.1k comments)
Will leading AI labs have their models evaluated for dangerous behavior before 2026? (14 comments · 85 forecasters)
Google DeepMind 13% · Microsoft 5% · xAI 3% · 14 others
Contributed by the AI 2025 Forecasting Survey community.
Will an AI system be reported by OpenAI as of December 31st 2025 as having a pre-mitigation score of High or higher on Persuasion? (0 comments · 12 forecasters)
30% chance (20% this week)
Before 2029, will a new international organization focused on AI safety be established with participation from at least three G7 countries? (3 comments · 37 forecasters)
40% chance (6.4% this week)
Ragnarök Question Series: If a global catastrophe occurs, will it be due to an artificial intelligence failure-mode? (44 comments · 422 forecasters)
25% chance (5% this week)
Five years after AGI, will AI philosophical competence be solved? (26 comments · 43 forecasters)
11% chance (3% this week)
Conditional Trees: AI Risk (2 · 3 comments · 16 forecasters)
Condition: CTs Policy Response After AI Catastrophe
If yes — CTs AI Extinction Before 2100: 77%
If no — CTs AI Extinction Before 2100: 30%
Suggest questions to launch — Metaculus Meta (121 · 2.8k comments)
Will there be a leading AI lab with no internal safety team in the following years? (7 comments · 35 forecasters)