With the progressive centralization of social policy comes a conflict:
- The decreasing practicality of experimental control groups for inferring social causality.
- The increasing ethical responsibility to predict the outcomes of policies that affect ever-larger numbers of people who did not individually give informed consent to the experimental treatments.
Social scientists play a critical role in resolving this conflict, which is contributing to a decline in political civility. Radically conflicting macrosocial models, assembled from a vast grab bag of microsocial models, are ill-suited to this resolution. The resulting incommensurable macrosocial models, and their unprincipled selection for application in partisan politics, may be resolved by an advance in Artificial General Intelligence (AGI) theory: given a set of observations, the most predictive of the existing models is the one that most compresses those observations without loss.
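As a toy sketch of that selection rule (hypothetical models and data; `zlib` merely stands in for a model-specific encoder, since true Kolmogorov complexity is uncomputable): each candidate model is scored by its two-part size, the model's program text plus its compressed prediction residuals, and the smallest total wins.

```python
import zlib

# Hypothetical observations: a linear trend plus a small periodic disturbance.
data = [100 + 3 * t + (t * 7 % 5) for t in range(120)]

# Each candidate "model" pairs a prediction rule with the source text that
# reconstructs it -- a crude stand-in for the decompression program's length.
models = {
    "constant": ("return 280", lambda t: 280),
    "linear": ("return 100 + 3 * t", lambda t: 100 + 3 * t),
}

def two_part_size_bits(program_text, predict, observations):
    """Two-part code: |program| + |compressed residuals|, both in bits."""
    program_bits = 8 * len(program_text.encode())
    residuals = bytes((x - predict(t)) % 256 for t, x in enumerate(observations))
    data_bits = 8 * len(zlib.compress(residuals, 9))
    return program_bits + data_bits

sizes = {name: two_part_size_bits(src, f, data)
         for name, (src, f) in models.items()}
best = min(sizes, key=sizes.get)  # the model that most compresses the data
```

The linear model's residuals collapse to a short repeating pattern, so its two-part size comes out far smaller, mirroring the claim that the most predictive model is the one that most compresses the observations.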
This is the topic of Marvin Minsky's final advice to predictors:
It seems to me that the most important discovery since Gödel was the discovery by Chaitin, Solomonoff and Kolmogorov of the concept called Algorithmic Probability which is a fundamental new theory of how to make predictions given a collection of experiences and this is a beautiful theory, everybody should learn it, but it’s got one problem, that is, that you cannot actually calculate what this theory predicts because it is too hard, it requires an infinite amount of work. However, it should be possible to make practical approximations to the Chaitin, Kolmogorov, Solomonoff theory that would make better predictions than anything we have today. Everybody should learn all about that and spend the rest of their lives working on it.
— Marvin Minsky, panel discussion on "The Limits of Understanding," World Science Festival, NYC, Dec 14, 2014
For some insight, you can watch the Nature video "Remodeling Machine Learning: An AI That Thinks Like a Scientist," based on H. Zenil, N. A. Kiani, A. A. Zea, and J. Tegner, "Causal deconvolution by algorithmic generative models," Nature Machine Intelligence, vol. 1, no. 1, p. 58, 2019.
Question: Prior to 2030, will fewer than 10 social science papers use the size of losslessly compressed data as the model selection criterion among macrosociology models?
A paper is counted toward resolution if it satisfies all of the following:
It compares at least two macrosociology models by the degree to which each losslessly compresses the same dataset.
It includes the keywords "macrosociology" or "macroeconomic," or an obvious derivation of these such as "macrosocial" or "macroeconomics."
It defines "size" as the length of the decompression program plus the length of the compressed data. The salient characteristic of "length" is that it be measured in bits. i.e. the combination serves as a self-extracting archive of the dataset and may, indeed, be measured in that unified form. This definition of "size" is used to award cash in The Hutter Prize for Lossless Compression of Human Knowledge and is also used as a a language modeling benchmark.
It defines a runtime environment affording all competing models the same algorithmic resources; e.g., each model reproduces the original dataset on the same virtual machine, i.e., a Universal Turing Machine environment.
It is included in the Social Sciences Citation Index.
The question resolves ambiguously if the Social Sciences Citation Index is discontinued before the above criteria are met.
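For concreteness, a minimal sketch of the "self-extracting archive" measurement in the third criterion, using a made-up CSV dataset and `zlib` as a stand-in for whatever encoder a model supplies: the archive is a decompression program with the compressed data embedded, and its total length in bits is the "size" being compared.

```python
import zlib

# Hypothetical dataset that a competing model must reproduce byte for byte.
dataset = ("year,gdp_growth\n" +
           "\n".join(f"{2000 + i},{2.0 + 0.1 * i:.1f}" for i in range(20))).encode()

# Build a self-extracting archive: a decompression program plus its payload.
payload = zlib.compress(dataset, 9)
archive_source = f"import zlib\nDATA = zlib.decompress({payload!r})\n"

# The criterion's "size": the length of the whole archive, measured in bits.
size_bits = 8 * len(archive_source.encode())

# Round-trip check: executing the archive reproduces the original dataset.
ns = {}
exec(archive_source, ns)
assert ns["DATA"] == dataset
```

Two models would be compared by building one such archive each for the same dataset and ranking them by `size_bits`; the lossless round-trip check is what makes the comparison principled.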