Context
Buck Shlegeris is a researcher at the Machine Intelligence Research Institute, where he works on existential risk from artificial intelligence. Before joining MIRI, he worked as a software engineer at PayPal and was the first employee at Triplebyte. This talk has some background on his views about AI.
Competition
In this competition, your goal is to predict how Buck will answer the question below after he has read all comments in this thread and considered the arguments and evidence they mention.
There are two prizes:
- The most significant update (reasoning or evidence) according to Buck;
- The most accurate prediction of Buck’s posterior distribution submitted through an Elicit snapshot.
We’ll give out a $50 prize for each.
Question
The question is:
What will be the compute used in the largest ML training run before 2030, measured in log(petaflop/s-days)?
A petaflop/s-day consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations.
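For concreteness, here is a minimal Python sketch of the unit conversion; the example operation count is purely hypothetical:

```python
import math

# A petaflop/s-day in raw operations: 10^15 ops/s sustained for 86,400 seconds.
PFS_DAY_OPS = 1e15 * 60 * 60 * 24   # about 8.64e19, i.e. roughly 10^20 operations

def log_pfs_days(total_ops: float) -> float:
    """Convert a total operation count into log10(petaflop/s-days),
    the scale this question is measured in."""
    return math.log10(total_ops / PFS_DAY_OPS)

# Hypothetical example: a run using 10^25 operations sits at about 5.1 on this scale.
print(round(log_pfs_days(1e25), 1))
```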
You can see and build on Buck's prior guess on Elicit. This guess is based on a quick extrapolation of the trend in OpenAI's AI and Compute analysis.
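To illustrate what a quick extrapolation of that trend can look like (a sketch, not Buck's actual calculation), assume the roughly 3.4-month doubling time reported in AI and Compute and take AlphaGo Zero (about 1,860 petaflop/s-days, late 2017) as the anchor point:

```python
import math

# Assumptions taken from the AI and Compute post, not from Buck's snapshot:
DOUBLING_TIME_YEARS = 3.4 / 12          # largest-run compute doubled every ~3.4 months
ANCHOR_LOG_PFS_DAYS = math.log10(1860)  # AlphaGo Zero, ~3.27 log(pfs-days)
ANCHOR_YEAR = 2017.75                   # late 2017

def extrapolated_log_pfs_days(year: float) -> float:
    """Naive log-linear extrapolation of largest-run compute to a given year."""
    doublings = (year - ANCHOR_YEAR) / DOUBLING_TIME_YEARS
    return ANCHOR_LOG_PFS_DAYS + doublings * math.log10(2)

# Blindly extending the trend to the end of 2029 gives ~16 on the log scale,
# which is one reason to doubt the trend continues unmodified.
print(round(extrapolated_log_pfs_days(2029.9), 1))
```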
Once this Metaculus question closes, Buck will look over the comment thread and any linked Elicit snapshots and build a new posterior distribution. Your goal is to predict that distribution.
Motivation
This project is similar in spirit to amplifying epistemic spot checks and other work on scaling up individual judgment through crowdsourcing. As in these projects, we’re hoping to learn about mechanisms for delegating reasoning, this time in the forecasting domain.
The objective is to learn whether mechanisms like this could save people like Buck work. Buck wants to know: What would I think if I had more evidence and knew more arguments than I currently do, but still followed the sorts of reasoning principles that I'm unlikely to revise in the course of a comment thread? To get there, participants (a) provide relevant evidence and arguments and (b) predict what Buck’s distribution will be in light of that evidence.
Why not just resolve against the actual outcome? By resolving predictions against Buck's prediction, we can apply the process to questions where we can't observe outcomes. If this initial trial run goes well, we'll run a conditional or counterfactual amplified forecasting experiment next.
Resolution details
- We will evaluate both the Metaculus community’s prediction and individuals’ predictions on accuracy by estimating the KL divergence between Buck’s final distribution and each submitted distribution (a minimal sketch of this comparison follows the list). To keep the setup as similar as possible between this run and future (counterfactual or conditional) runs, this question will not resolve as a Metaculus question.
- To participate, create your forecast using Elicit, click “Save Snapshot to URL” and post your snapshot URL in a comment below. Share your reasoning and sources in the “Notes” column of the Elicit snapshot.
- If you do not want to make your forecast public, you are welcome to forecast as usual on Metaculus. Your prediction will be incorporated into the community prediction.
- If you submit multiple predictions, Buck will evaluate the one that you explicitly identify as your final submission, or, if none is identified, the last one submitted before the competition closes.
- If multiple users’ submissions are very close to Buck’s final distribution, the one submitted first will win.
- The prize for “most significant update (reasoning or evidence) according to Buck” will be entirely at Buck’s discretion. For example, Buck may choose to give a prize for the best reasoning even if it does not cause him to update his beliefs. Or he may choose not to give out a prize in this category.
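For illustration, here is a minimal sketch of how such a comparison could be computed; the bins and probabilities below are made up:

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) for two distributions discretized over the same bins."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Hypothetical example: Buck's final distribution vs. one submission,
# both binned over the same log(petaflop/s-day) intervals.
buck = np.array([0.05, 0.20, 0.40, 0.25, 0.10])
entry = np.array([0.10, 0.25, 0.35, 0.20, 0.10])
print(kl_divergence(buck, entry))  # lower means the submission is closer to Buck's
```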