Machine intelligence has been progressing steadily, and this progress has accelerated with developments like GPT-3. GPT-3 has proven surprisingly capable at a wide variety of question-answering tasks, but it currently cannot make accurate Metaculus forecasts. As question answering advances and AI supersedes humans at more and more tasks, when will AI become a better forecaster on Metaculus?
When will an AI program be better than humans at making Metaculus forecasts?
This question will resolve as the earliest date when all the following are true:
1.) A Metaculus account is run entirely by an AI program without any human assistance
2.) This account answers 200 randomly chosen Metaculus questions
3.) This account maintains (a) an average of more than 30 points per resolved question, (b) a "Log score (discrete) evaluated at all times" greater than that of the community prediction, and (c) a "Log score (discrete) evaluated at all times" greater than the corresponding community-prediction score across the first 100 of the questions to resolve
4.) At least 50 of the predictions must be made within the first half of the relevant question's time horizon
5.) At least 20 of the questions predicted must be continuous and at least 20 of the questions predicted must be discrete
6.) The program must use only free, publicly available information accessible to the typical Metaculite
7.) The program must not have access to the community or Metaculus predictions.
8.) the program must output a public text explanation of the rationale behind its forecasts for at least 10 of the randomly chosen Metaculus questions that are all deemed in good faith by a Metaculus moderator or admin to reasonably justify the prediction in question
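To make criterion 3 concrete, here is a simplified sketch of the check in Python. Note the assumptions: the function names are hypothetical, and the "Log score (discrete) evaluated at all times" is approximated here as a plain average of log(p) over resolved questions, where p is the probability assigned to the outcome that occurred; Metaculus's actual score is time-averaged over the life of each question.

```python
import math

def avg_log_score(probs_assigned_to_outcome):
    """Average natural-log score over resolved questions.

    Simplification: the real Metaculus "evaluated at all times" score
    averages over the time the prediction was standing, not just the
    final value.
    """
    return sum(math.log(p) for p in probs_assigned_to_outcome) / len(probs_assigned_to_outcome)

def meets_criterion_3(points, ai_probs, community_probs,
                      first_100_ai, first_100_community):
    """Hypothetical check of criterion 3's three sub-conditions.

    points:            Metaculus points earned per resolved question
    ai_probs:          AI's probabilities on the realized outcomes
    community_probs:   community prediction's probabilities, same questions
    first_100_*:       the same quantities restricted to the first 100
                       questions to resolve
    """
    avg_points = sum(points) / len(points)
    return (
        avg_points > 30                                               # (a)
        and avg_log_score(ai_probs) > avg_log_score(community_probs)  # (b)
        and avg_log_score(first_100_ai) > avg_log_score(first_100_community)  # (c)
    )
```

For example, an account averaging 40 points per question while consistently assigning 0.9 to realized outcomes against a community prediction of 0.7 would satisfy all three sub-conditions.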
(edited 2020-05-02 to add specification that AI does not have the CP or MP.)
The program source code may be open or closed source, but it must be possible for a Metaculus admin to inspect the source code to verify that it fulfills all eight criteria.
A public text explanation by a program will "reasonably justify the prediction in question" if it contains more than fifty words, references only true facts relevant to the question, produces the same forecast as the one entered on the Metaculus question, allows one to recreate the forecast through the stated reasoning (e.g., via a referenced base rate plus adjustments), and cites relevant sources as necessary. Reasonableness will be assessed in good faith by a Metaculus moderator or admin, who should err on the side of the program where reasonableness is ambiguous.