With the progressive centralization of social policy comes a conflict:
- The decreasing practicality of using experimental control groups to infer social causality.
- The increasing ethical responsibility to predict the outcomes of policies that affect ever-larger numbers of people who did not individually give informed consent to the experimental treatments.
Social scientists play a critical role in resolving this conflict – a conflict that is contributing to a decline in political civility. Radically conflicting macrosocial models, assembled from a vast grab-bag of microsocial models, are ill-suited to this resolution. The resulting incommensurable macrosocial models, and their unprincipled selection during partisan politics, may be reconciled by a result from Artificial General Intelligence (AGI) theory: given a set of observations, the most predictive of the existing models is the one that most compresses those observations without loss.
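The selection principle can be illustrated with a toy sketch. Assumptions worth flagging: the data are synthetic, and off-the-shelf zlib stands in for a lossless compressor; this is an illustration of compression-based model selection, not a practical approximation of algorithmic probability.

```python
import random
import zlib

# Toy observations: a slow "random walk" time series, standing in for
# data that one candidate model might explain better than another.
rng = random.Random(0)
walk = [128]
for _ in range(9_999):
    walk.append((walk[-1] + rng.choice((-1, 0, 1))) % 256)
observations = bytes(walk)

def residuals(data: bytes) -> bytes:
    # Model A ("today is like yesterday"): predict each byte from the
    # previous one and keep only the prediction errors.
    return bytes([data[0]] + [(b - data[i]) % 256 for i, b in enumerate(data[1:])])

def compressed_size(payload: bytes) -> int:
    # Proxy for "length in bits of the losslessly compressed data".
    return 8 * len(zlib.compress(payload, level=9))

size_null = compressed_size(observations)             # Model B: no structure assumed
size_walk = compressed_size(residuals(observations))  # Model A: walk structure removed
# The criterion selects the model whose total encoding is smaller.
better_model = "random-walk" if size_walk < size_null else "null"
```

Because the walk model's residuals take only three values, they compress far better than the raw series, so the criterion prefers the model that captured the data's structure.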
This is the topic of Marvin Minsky's final advice to predictors:
It seems to me that the most important discovery since Gödel was the discovery by Chaitin, Solomonoff and Kolmogorov of the concept called Algorithmic Probability which is a fundamental new theory of how to make predictions given a collection of experiences and this is a beautiful theory, everybody should learn it, but it’s got one problem, that is, that you cannot actually calculate what this theory predicts because it is too hard, it requires an infinite amount of work. However, it should be possible to make practical approximations to the Chaitin, Kolmogorov, Solomonoff theory that would make better predictions than anything we have today. Everybody should learn all about that and spend the rest of their lives working on it.
— Marvin Minsky, panel discussion on "The Limits of Understanding," World Science Festival, NYC, Dec 14, 2014
For some insight, you can watch the Nature video "Remodeling Machine Learning: An AI That Thinks Like a Scientist," based on H. Zenil, N. A. Kiani, A. A. Zea, and J. Tegner, "Causal deconvolution by algorithmic generative models," Nature Machine Intelligence, vol. 1, no. 1, p. 58, 2019.
Question: Prior to 2030, will fewer than 10 social science papers use the size of losslessly compressed data as the model selection criterion among macrosociology models?
A paper is counted toward resolution if it satisfies all of the following:
- It compares at least 2 macrosociology models by the degree to which they have losslessly compressed the same dataset.
- It contains the keywords "macrosociology" or "macroeconomic", or an obvious derivation of these such as "macrosocial" or "macroeconomics".
- It defines "size" as the length of the decompression program plus the length of the compressed data, with length measured in bits; i.e., the program and data together serve as a self-extracting archive of the dataset and may be measured in that unified form. This definition of "size" is used to award cash in the Hutter Prize for Lossless Compression of Human Knowledge and is also used as a language modeling benchmark.
- It defines a runtime environment that affords all competing models the same algorithmic resources, e.g. each model reproduces the original dataset on the same virtual machine (a Universal Turing Machine environment).
- It is included in the Social Sciences Citation Index.
The question resolves ambiguously if the Social Sciences Citation Index is discontinued before the above criteria are met.
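Under the "size" definition in the criteria, a model submission behaves like a self-extracting archive: decompression program plus compressed data, measured together in bits. A minimal sketch of that accounting, assuming zlib as the compressor and a trivial (illustrative, hypothetical) Python decompression program rather than any actual prize harness:

```python
import zlib

# Illustrative decompression program: together with the compressed
# payload it forms a self-extracting archive of the dataset.
DECOMPRESSOR_SRC = b"""\
import sys, zlib
sys.stdout.buffer.write(zlib.decompress(open(sys.argv[1], "rb").read()))
"""

def archive_size_bits(dataset: bytes) -> int:
    # "Size" = length of the decompression program + length of the
    # compressed data, both counted in bits.
    compressed = zlib.compress(dataset, level=9)
    return 8 * (len(DECOMPRESSOR_SRC) + len(compressed))

dataset = b"the same dataset given to every competing model\n" * 100
print(archive_size_bits(dataset))
```

Charging for the decompressor as well as the data is what keeps the comparison fair: a model cannot shrink its compressed payload by smuggling the dataset's structure into an ever-larger program for free.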