Amplified forecasting: What will Buck's informed prediction of compute used in the largest ML training run before 2030 be?
Buck Shlegeris is a researcher at the Machine Intelligence Research Institute where he works on existential risk from artificial intelligence. Before joining MIRI, he worked as a software engineer at PayPal and was the first employee at Triplebyte. This talk has some background on his views about AI.
In this competition, your goal is to predict what Buck's forecast on the following question will be after he has read all comments in this thread and considered the arguments and evidence mentioned.
There are two prizes:
- The most significant update (reasoning or evidence) according to Buck;
- The most accurate prediction of Buck’s posterior distribution submitted through an Elicit snapshot.
We’ll give out a $50 prize for each.
The question is:
What will be the compute used in the largest ML training run before 2030, measured in log(petaflop/s-days)?
A petaflop/s-day consists of performing 10^15 neural net operations per second for one day, or a total of about 10^20 operations.
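To make the units concrete, here is a minimal sketch of converting a total operation count into the question's metric, log10(petaflop/s-days). The GPT-3 figure used below (~3.14e23 total operations, from the GPT-3 paper) is an illustrative reference point, not part of this question:

```python
import math

def log10_pfs_days(total_ops):
    """Convert a total operation count to log10(petaflop/s-days).

    One petaflop/s-day = 1e15 ops/s * 86,400 s/day ~= 8.64e19 operations.
    """
    ops_per_pfs_day = 1e15 * 86_400
    return math.log10(total_ops / ops_per_pfs_day)

# GPT-3 (2020) reportedly used about 3.14e23 operations in training:
print(round(log10_pfs_days(3.14e23), 2))  # roughly 3.56
```

So a training run at 10^6 petaflop/s-days would score 6 on this question's scale.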
Once this Metaculus question closes, Buck will look over the comment thread and any linked Elicit snapshots and build a new posterior distribution. Your goal is to predict that distribution.
This project is similar in spirit to amplifying epistemic spot checks and other work on scaling up individual judgment through crowdsourcing. As in these projects, we’re hoping to learn about mechanisms for delegating reasoning, this time in the forecasting domain.
The objective is to learn whether mechanisms like this could save people like Buck work. Buck wants to know: What would I think if I had more evidence and knew more arguments than I currently do, but still followed the sorts of reasoning principles that I'm unlikely to revise in the course of a comment thread? To get there, participants (a) provide relevant evidence and arguments and (b) predict what Buck’s distribution will be in light of that evidence.
Why not just resolve against the actual outcome? By resolving predictions against Buck's prediction, we can apply the process to questions where we can't observe outcomes. If this initial trial run goes well, we'll run a conditional or counterfactual amplified forecasting experiment next.
- We will evaluate both the Metaculus community’s prediction and individuals’ predictions on accuracy by estimating the KL divergence between Buck’s final distribution and each submitted distribution. To keep the setup as similar as possible between this run and future (counterfactual or conditional) runs, this question will not resolve as a Metaculus question.
- To participate, create your forecast using Elicit, click “Save Snapshot to URL” and post your snapshot URL in a comment below. Share your reasoning and sources in the “Notes” column of the Elicit snapshot.
- If you do not want to make your forecast public, you are welcome to forecast as usual on Metaculus. Your prediction will be incorporated into the community prediction.
- If you submit multiple predictions, Buck will evaluate the one that you explicitly identify as your final submission, or, absent that, the last submission before the competition closes.
- If multiple users’ submissions are very close to Buck’s final distribution, the one submitted first will win.
- The prize for “most significant update (reasoning or evidence) according to Buck” will be entirely at Buck’s discretion. For example, Buck may choose to give a prize for best reasoning even if it does not cause Buck to update his beliefs. Or, he may choose to not give out a prize in this category.
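The accuracy criterion above can be sketched as follows. This is a hedged illustration, assuming both distributions are discretized over the same bins of log(petaflop/s-days); the actual binning and evaluation details are up to the organizers:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Estimate KL(P || Q) for two distributions over the same discrete bins.

    p: the reference distribution (here, Buck's final distribution).
    q: the submitted forecast. Lower values mean a closer match.
    """
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q) if pi > 0)

# Hypothetical example: Buck's posterior vs. one submitted forecast,
# both over four bins of log(petaflop/s-days):
buck = [0.1, 0.3, 0.4, 0.2]
forecast = [0.2, 0.3, 0.3, 0.2]
print(round(kl_divergence(buck, forecast), 4))
```

A forecast identical to Buck's distribution would score a KL divergence of 0, and submissions with the smallest divergence would rank highest.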