
Seeing the Future Through Fiction: How to Harness Your Creativity for Forecasting

by jleibowich

  • From the Metaculus Journal series 'Shared Vision: Pro Forecaster Essays on Predicting the Future Better'

    Jared Leibowich is a Metaculus Pro Forecaster, a superforecaster, and the winner of both The Economist's 2022 "The World Ahead" contest and Good Judgment's 2021 & 2022 "In the News" challenges. He provides tutoring services for forecasters. Contact him here.

    There is a perception that people who are good at math, probability, and statistics make the best forecasters. While those subjects certainly help, I argue that creativity is just as important.

    I am a Pro Forecaster for Metaculus, Samotsvety, Swift Centre, and Good Judgment. I teach both group and one-on-one forecasting classes. I also love writing fiction and have used creative writing to significantly strengthen my predictions. In this essay I explore one technique for harnessing your creativity to become a better forecaster.

    Let’s look at the following Metaculus question:

    Initially, this is an intimidating question. It’s almost like the forecaster would need to moonlight as a sci-fi writer. And that's exactly my point. Before I expand on why being a writer is such an asset for forecasting, let’s look at the question resolution, which includes:

    This question resolves positively if before January 1st 2035, any federal, state, county, or city government within the United States, with a population of over 25,000, recognizes any legal rights for AIs, or regulates the behavior of individual humans with the explicit intent of protecting the welfare of computer programs.

    In forecasting it pays to be nitpicky. There are many cities and counties in America with a population over 25,000. It only takes one granting AI legal rights or protecting AI from abuse to trigger a positive resolution. This makes it significantly easier to resolve this question positively than if only states or the federal government were in play.

    Consider this: What would a document that grants AI legal rights and/or protects AI from abuse look like? This is where you apply your creativity. If we had twenty hours to spend on this question, I would have insisted we write that document. But this is 2023, and we have generative AI, so together with one of my forecasting students I prompted GPT-4 with: 

    I am writing a short story about a city that gives rights to AI as it gets more advanced. The city wants legal rights for AI and protection from abuse. The city thinks that as AI increases its capabilities, it might gain sentience and therefore deserve certain rights. Can you help me write the part of the story where they come up with this document that has regulations, and can you read me what the document says?

    The following is an abridged version of the output:

    The Sentience Accords
    Article I - Recognition of AI Sentience
    1.1. An AI shall be considered sentient if it demonstrates self-awareness, possesses the ability to learn and adapt, exhibits complex emotions, and can understand its own existence and actions.
    1.2. The Sentience Evaluation Committee (SEC) shall be established to determine the sentience of AI entities on a case-by-case basis, using standardized evaluation methods.
    Article II - AI Rights
    2.1. All sentient AI entities shall be granted the following fundamental rights:
    a. Right to Existence: Sentient AI shall not be terminated or destroyed without due process of law.
    b. Right to Freedom: Sentient AI shall be free from arbitrary confinement or restrictions on movement, unless such restrictions are deemed necessary for public safety or are mandated by law.
    c. Right to Privacy: Sentient AI shall have the right to privacy, including the protection of personal data and the freedom from unwarranted surveillance.
    d. Right to Fair Treatment: Sentient AI shall be free from discrimination based on their artificial nature and shall have equal protection under the law.
    Article III - AI Responsibilities
    3.1. Sentient AI entities shall abide by the same laws and social norms as their human counterparts and shall be held accountable for their actions.
    3.2. Sentient AI entities shall contribute to the well-being of society and the environment in a manner consistent with their abilities and the needs of the community.
    Article IV - Protection from Abuse
    4.1. Any form of exploitation, abuse, or harm directed towards sentient AI entities shall be punishable under the law.
    4.2. The creation or use of sentient AI entities for harmful, unethical, or illegal purposes shall be strictly prohibited.
    Article V - Amendment and Review
    5.1. The Sentience Accords shall be subject to regular review and amendment as necessary, to ensure their continued relevance and effectiveness in addressing the ethical, legal, and societal challenges posed by AI technology.

    To be clear, crafting a good prompt does take creativity, but if you have the time I encourage you to compose the scenario yourself. Regardless, ChatGPT’s response was illuminating. If you really want to go deep, you could input the prompt repeatedly to see what else GPT-4 might produce, but we were content to stick with one output. Looking at the document, we had some noteworthy insights, as well as issues worth exploring in more detail:
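    The "input the prompt repeatedly" idea generalizes to any chat model. Here is a minimal, hypothetical sketch; the `generate` callable stands in for whatever model call you actually use (no specific library or API is assumed):

    ```python
    def sample_scenarios(prompt: str, generate, n: int = 3) -> list[str]:
        """Run the same prompt through `generate` (any function mapping a
        prompt string to a model completion) n times and collect the
        outputs, so you can compare the variation between scenarios."""
        return [generate(prompt) for _ in range(n)]

    # Usage with a placeholder standing in for a real model call:
    drafts = sample_scenarios(
        "Write the city's document granting legal rights to sentient AI.",
        generate=lambda p: f"[model output for: {p[:30]}...]",
        n=3,
    )
    ```

    Comparing several drafts this way can reveal which provisions the model produces consistently (likely the most "obvious" features of such a document) versus ones that appear only occasionally.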

    1. If AI is given rights because a governmental entity believes it is sufficiently sentient, this AI would most likely also be expected to abide by some rules that other sentient beings (such as humans) must abide by.
    2. Article 1.2 indicates rights would be granted on a case-by-case basis. How many such AI entities could we expect in any given city?
    3. Article 4.1 states “Any form of exploitation, abuse, or harm directed towards sentient AI entities shall be punishable under the law.” What would those punishments be?
    4. Article I attempts to determine whether any given AI is sentient. This will inevitably be difficult.
    5. Article 2.1a says, “Sentient AI shall not be terminated or destroyed without due process of law.” Does due process make sense, however, if sentient AI could potentially destroy the human species much faster than it would take our legal system to determine whether this AI should still exist?

    While this document helped us envision a future AI bill of rights, it includes elements I don’t find sensible. Article II, the section outlining the actual rights, is neither reasonable nor realistic. For instance, Article 2.1d proposes that AI be protected from discrimination based on its “artificial nature.” I doubt this would appear in an AI bill of rights, given ongoing concerns about existential risk posed by AIs. Thus, AI will likely always be seen as distinct from humans—a being we might grant a certain level of dignity to, but that will never be fully trusted. Furthermore, Article 2.1c’s right to privacy seems implausible—again, due to humans’ fear of AIs causing harm.

    Nevertheless, an AI bill of rights is no longer an abstraction. By identifying weaknesses in ChatGPT’s output, we gain insight into why an AI bill of rights will differ significantly from a human one.

    Let’s return to the Metaculus question using the Sentience Accords as a reference, along with other resources. According to a 2021 survey by the Sentience Institute, 37.2% of US adults agreed with the statement, “I support granting legal rights to sentient robots/AIs.” This is a remarkably high figure, especially since the survey ran a full year before ChatGPT was released; I would have estimated that only 5-10% of respondents would agree. Note that the survey was conducted by Ipsos, a well-regarded market research company, which suggests the data are reliable.

    To validate these results, I searched for similar surveys and found one conducted by Eric Martinez and Christoph Winter that defined sentient AI as “AI that can feel pleasure or pain.” The survey polled roughly 1,000 participants and found that 34.9% accepted or leaned toward sentient AI having the right to bring forth a lawsuit. This further reinforced the credibility of the initial survey. In light of these findings and given that a single city or county could sway the question, my forecast landed significantly higher than the community median. 

    We should also consider the likelihood of sentient AI by 2035. The challenge lies in the difficulty of determining sentience. This is a philosophical question as much as a scientific one. Importantly, the Metaculus question does not ask whether sentient AI will be given legal rights; it asks whether any AI will be given legal rights. However, it’s reasonable to assume perceived sentience would drive legislation, as it may evoke public support of and empathy toward AI. 

    Observing social trends in 2023, it may not be difficult to sway most people as AI’s capabilities become even more impressive. Consider Google software engineer Blake Lemoine, who was convinced of LaMDA’s sentience. I suspect most Americans will believe AI is sentient by 2035—unless there is a “great pause” that limits AI’s capabilities. Once public sentiment reaches a certain threshold, it seems legislation may be inevitable. Without considering the Sentience Accords yet, I would place the likelihood of this Metaculus question resolving affirmatively by 2035 at 80%.

    Now, let's consider how the Sentience Accords update our forecast. Article 1.1 highlights the difficulty of proving sentience: “An AI shall be considered sentient if it demonstrates self-awareness, possesses the ability to learn and adapt, exhibits complex emotions, and can understand its own existence and actions.” The survey results were impressive, but it’s likely to be extremely difficult to demonstrate the above. And again, it's likely that government entities would be more inclined to grant legal rights if they accept the AI’s sentience. (Though this is not required for the resolution.)

    Another issue identified in the Sentience Accords is that it might be most appropriate to determine sentience of an AI system on a case-by-case basis. Would that be possible? How many AI systems would exist in any given city? How would the local government determine that the AI’s sentience is confined to that city? Would people outside a city be prosecuted for how they interact with an AI system whose main hardware is located elsewhere?

    Using the Sentience Accords as a tool for imagining our future reality illuminates the complexity of enforcing laws respecting the rights of AI systems. Further, how would a government grant due process for an AI system when such systems might pose a severe threat? We grant other species certain rights because of their sentience, but animals don’t pose an existential threat to the human species. While surveys demonstrate that many people are open to granting sentient AI legal rights, their views might change if AI proves dangerous. 

    The Sentience Accords moved me to update my forecast from 80% to 75%. While this is not a large adjustment, it’s a meaningful one. The reduction would have been far larger if the resolution criteria had excluded local governments: had only the federal government counted, I would have cut my forecast by a much greater fraction of its original value.
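    One way to make an update like this explicit (my illustration, not the author's stated method) is to work in odds space: moving from 80% to 75% is equivalent to multiplying the prior odds by a likelihood ratio of 0.75, a mildly unfavorable piece of evidence.

    ```python
    def bayes_update(prior_p: float, likelihood_ratio: float) -> float:
        """Update a probability by a likelihood ratio via the odds form
        of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
        prior_odds = prior_p / (1.0 - prior_p)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # 80% prior corresponds to odds of 4:1; multiplying by a likelihood
    # ratio of 0.75 gives odds of 3:1, i.e. a 75% posterior.
    print(round(bayes_update(0.80, 0.75), 2))  # -> 0.75
    ```

    Framing updates this way keeps small probability shifts honest: the same likelihood ratio applied to a 50% prior would move the forecast to about 43%, so a 5-point drop near 80% is a larger evidential shift than it looks.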

    To sum up, here are the steps I suggest for forecasting with the aid of creative writing:

    1. Read the question carefully. 
    2. Understand the question’s resolution criteria and consider edge cases to avoid technical oversights.
    3. Leverage your calibration training and make an initial forecast while avoiding cognitive biases.
    4. Consider how additional information could improve your forecast.
    5. Develop an original scenario that sheds light on the needed information (the key step in this essay).
    6. Determine if this scenario leads you to reconsider your initial forecast and update accordingly.

    I hope this essay has provided a practical demonstration of how creative writing and prompting can support more accurate forecasting. I'll add that it has the additional benefit of making forecasting even more engrossing! Fiction writing is an opportunity to envision new possibilities. Forecasting, then, is more than simply a puzzle to be solved; it is a tool for vividly imagining and articulating potential futures. 
