In a blog post, Paul Christiano argues that we should consider leaving messages to be uncovered by a future civilization that might arise on Earth if humanity goes extinct. He writes,
If we humans manage to kill ourselves, we may not take all life on Earth with us. That leaves hope for another intelligent civilization to arise; if they do, we could potentially help them by leaving carefully chosen messages.
In this post I’ll argue that despite sounding kind of crazy, this could potentially compare favorably to more conventional extinction risk reduction.
He offers a conditional prediction:
If humanity drives ourselves extinct (without AI), I think there is a ~1/2 chance that another intelligent civilization evolves while the earth remains habitable.
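To see how this conditional estimate combines with an unconditional extinction probability, here is a minimal sketch. The 10% extinction figure below is a hypothetical placeholder for illustration, not a number from the post; only the ~1/2 conditional is Christiano's.

```python
# Hypothetical unconditional probability that humanity drives itself
# extinct without AI (an assumption chosen purely for illustration):
p_extinction_without_ai = 0.10

# Christiano's ~1/2 conditional estimate that another intelligent
# civilization evolves, given such an extinction:
p_new_civ_given_extinction = 0.5

# Joint probability that humanity goes extinct (without AI) AND a
# successor intelligent civilization later evolves on Earth:
p_successor_civ = p_extinction_without_ai * p_new_civ_given_extinction

print(p_successor_civ)  # 0.05
```

Under these illustrative numbers, the unconditional chance of a successor civilization is 5%; the value scales linearly with whatever extinction probability one assumes.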
If humanity goes extinct, will another intelligent civilization evolve while Earth remains habitable?
For the purposes of this question, humanity has gone extinct when (1) no human is alive, and (2) none of our artificial or biological descendants (including our AI) are alive either. This question will resolve ambiguously if humanity is not extinct by the year 10,000 AD. Otherwise, it will wait until Earth becomes uninhabitable for complex multicellular life before resolving.
Another intelligent civilization is said to evolve on Earth if some group of organisms develops any of the following technologies: agriculture, writing, or mathematics. The organisms must be direct descendants of current Earthly life, so visiting aliens do not count. If such an intelligent civilization evolves on Earth after humanity goes extinct, this question resolves positively. Otherwise, it resolves negatively.