Our aim is to generate a massive dataset of artificial intelligence (AI) forecasts over a range of time frames, train models to optimally aggregate those forecasts, and use those aggregates to build a highly accurate map of the future of AI.
Participants will be contributing to the development of important new insights about the future of AI. Our results will be made publicly available.
The tournament will provide a unique proof of concept for aggregate probabilistic forecasting in the realm of AI.
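The page does not specify how forecasts will be aggregated, but as one illustration of what "aggregating probabilistic forecasts" can mean, here is a minimal sketch of a common pooling method: averaging individual probability forecasts in log-odds space, with optional per-forecaster weights. All function names here are hypothetical, not part of the tournament's actual methodology.

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def aggregate(probs, weights=None):
    """Pool probability forecasts by taking a (weighted) mean in
    log-odds space, then mapping back to a probability.

    This is one standard pooling rule, shown purely for illustration;
    a trained aggregation model could instead learn the weights (or a
    richer combination rule) from forecasters' track records.
    """
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    mean_logit = sum(w * logit(p) for p, w in zip(probs, weights)) / total
    return inv_logit(mean_logit)

# Three forecasters' probabilities for the same binary question:
forecasts = [0.6, 0.7, 0.55]
pooled = aggregate(forecasts)
```

Log-odds pooling tends to be less sensitive to forecasts near 0 or 1 than a simple arithmetic mean of probabilities, which is one reason it is often used as a baseline aggregator.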
Research Scientist on the Policy team at OpenAI, where she’s worked on topics including responsible AI development, safety via debate, and publication norms in ML.
Cognitive scientist and VP of Research at the AI Foundation. Previously an AI researcher at the MIT Media Lab and the Harvard Program for Evolutionary Dynamics.
Research Scientist on the Policy team at OpenAI and Research Affiliate at the University of Oxford's Future of Humanity Institute.
Research Scientist working on AI Safety at the Future of Humanity Institute, and member of the Board of Directors of Ought.
Manager of the AI Index Program at Stanford Institute for Human-Centered Artificial Intelligence (HAI).
Carroll ‘Max’ Wainwright
Research Scientist focused on technical aspects of AI safety and Co-founder of Metaculus.