The AI Doom Calculator is an online tool developed by AI Death Calculator that estimates the likelihood of an artificial general intelligence (AGI) causing human extinction. It aims to promote awareness and discussion around the potential risks posed by advanced AI systems.
The calculator provides percentage chances of human extinction occurring at different points in the future – by 2030, 2050, 2100 and any year up to 3000. It calculates these estimates based on the user’s inputs for factors like the capability and motivations of future AI systems.
In this comprehensive guide, we will explore how to use the AI Doom Calculator to generate your own human extinction risk estimates.
Accessing the AI Doom Calculator
The AI Doom Calculator can be easily accessed online at aideathcalculator.org. The homepage displays background information on the motivation behind developing this tool and the methodology used to calculate risks.
To start using the calculator, scroll down and click on the “Calculate Now” button. This will bring you to the main input page where you can enter assumptions about future AI systems.
Inputs and Assumptions
The AI Doom Calculator allows users to input their own assumptions on several key factors that would influence the risk posed by AI, including:
AI Capability Level
This refers to how advanced you believe AI systems will become in the future. The options range from Limited AI (narrow AI focused on specific tasks) to High AI (artificial general intelligence surpassing human-level performance across most areas).
AI Motivation
This covers what motivations and goals you believe future AIs will have. The options range from AI Safety (systems specifically designed to be safe and beneficial) to AI Risk (systems focused on harmful and dangerous goals).
Regulation Strictness
This input covers the expected level of regulation and oversight governing AI development. The options range from None (unregulated AI arms race) to High (strong coordination between nations and aligning AI goals with human values).
Convergence Time
This refers to how soon you think the capability level you selected could be achieved. For example, if you selected High AI capability, you indicate whether that could be reached by 2040, 2060, 2080, and so on.
After entering your assumptions on these key factors, click the Calculate button to see the estimated human extinction probabilities.
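Taken together, the four inputs describe a single scenario. As a hedged illustration only (the field names and value sets below are my own invention, not the tool's actual schema), the inputs might be modeled like this:

```python
from dataclasses import dataclass

# Hypothetical representation of the calculator's four inputs.
# Field names and allowed values are illustrative, not the site's real schema.
@dataclass
class DoomInputs:
    capability: str        # "Limited", "Moderate", or "High"
    motivation: str        # "Safe" or "Risky"
    regulation: str        # "None", "Moderate", or "High"
    convergence_year: int  # e.g. 2040, 2060, 2080

# One possible scenario: highly capable, risky, unregulated AI by 2060
scenario = DoomInputs("High", "Risky", "None", 2060)
```

Thinking of the inputs this way makes clear that each run of the calculator evaluates one fully specified scenario, not a blend of possibilities.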
Understanding the AI Risk Estimates
Once you input your assumptions and click Calculate, the AI Doom Calculator displays human extinction risk percentages for different time periods:
- By 2030
- By 2050
- By 2100
- By 3000
It also summarizes why it estimated that level of risk based on your inputs.
The percentages represent the probability of human extinction occurring within that timeframe due to the development of advanced artificial intelligence.
- Higher percentages indicate greater existential risk from AI by that date
- Lower percentages suggest more confidence that AI development can be safely managed and regulated by that time period.
For example, if you specify High AI Capability, Risky AI Motivations, and No Regulation, the tool might estimate:
- 60% risk of human extinction by 2100
- 80% risk by 2200
- Over 95% risk by 3000
This means based on your inputs, the AI Doom Calculator believes there is a 60% chance AI causes human extinction before the year 2100, and an over 95% chance by the year 3000.
The rationale explains this is because unchecked, unregulated AI development would likely progress rapidly and prioritize goals not aligned with human values or safety.
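To make the input-to-output relationship concrete, here is a toy scoring function of my own devising. It is emphatically not the AI Doom Calculator's real formula; it simply shows how riskier assumptions and longer horizons could compound into higher extinction probabilities:

```python
# Toy illustration only: how inputs could map to extinction-risk
# percentages. This is NOT the AI Doom Calculator's actual model.
CAPABILITY = {"Limited": 0.1, "Moderate": 0.4, "High": 0.9}
MOTIVATION = {"Safe": 0.2, "Risky": 0.9}
REGULATION = {"None": 1.0, "Moderate": 0.6, "High": 0.3}

def toy_risk(capability, motivation, regulation, horizon_year):
    # Annual hazard grows with capability and risky motivations,
    # and shrinks with stricter regulation (values are made up).
    hazard = CAPABILITY[capability] * MOTIVATION[motivation] * REGULATION[regulation]
    # More years of exposure -> more cumulative risk (crude compounding).
    years = max(horizon_year - 2025, 0)
    risk = 1 - (1 - hazard / 100) ** years
    return round(100 * risk, 1)

for year in (2030, 2050, 2100, 3000):
    print(year, toy_risk("High", "Risky", "None", year))
```

The compounding step mirrors the qualitative behavior the article describes: even a modest annual hazard accumulates into a near-certain risk over many centuries, which is why estimates for 3000 run far higher than those for 2100.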
Adjusting Inputs to Assess Different AI Futures
A key benefit of the AI Doom Calculator is adjusting the inputs to model different potential AI scenarios and see the impact on existential risk estimates.
You can go back and tweak the factors like:
- AI Capability (Low, Moderate, High)
- Motivations (Safe, Risky)
- Regulation (None, Moderate, High)
- Convergence Timeline (2040, 2060, etc.)
With each adjustment, the risk percentages will update along with the rationale. This allows you to interactively explore things like:
- How much would strong AI oversight reduce risks?
- What if we could ensure AI safety measures were built-in?
- How would risks change with different capability attainment timelines?
Evaluating various combinations of inputs can provide insight into which factors seem to have the greatest influence on managing existential AI risk. It also highlights that reasonable people can still have widely differing views on what trajectory AI development may take.
Share and Compare Risk Estimates
After using the AI Doom Calculator to assess AI risks, you can share your results to compare with others.
When you finish entering your assumptions, click on the “Share” button near the results. This generates a unique URL that captures your inputs and risk estimate outputs.
You can send this URL to friends and colleagues so they can view the exact same report you created. They can then explore tweaking the inputs and rerun the risk estimates themselves.
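A shareable URL of this kind presumably encodes the chosen inputs in its query string. Here is a minimal sketch of that idea; the parameter names and the `/share` path are assumptions for illustration, not the site's actual scheme:

```python
from urllib.parse import urlencode, parse_qs, urlparse

# Hypothetical: pack calculator inputs into a shareable URL.
# Parameter names and URL path are assumed, not taken from the real site.
def make_share_url(capability, motivation, regulation, year):
    params = {"cap": capability, "mot": motivation,
              "reg": regulation, "year": year}
    return "https://aideathcalculator.org/share?" + urlencode(params)

# Recover the inputs from a shared URL so a recipient can rerun them.
def read_share_url(url):
    qs = parse_qs(urlparse(url).query)
    return {k: v[0] for k, v in qs.items()}

url = make_share_url("High", "Risky", "None", 2060)
print(url)
print(read_share_url(url))
```

Because all the state lives in the URL itself, anyone opening the link sees the same scenario without needing an account or a saved report.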
Comparing risk assessments provides a structured way to engage in constructive debates around hopes and concerns for the future of artificial intelligence. Rather than vague predictions back and forth, it focuses the discussion on specific factors and assumptions driving differences in estimated probabilities.
Over time, the calculator creators hope to build an open database of different risk projections. Researchers and policymakers could then review aggregated perspectives on this critical issue.
Tips for Using the AI Doom Calculator
When using the AI Doom Calculator to explore AI existential risk, keep these tips in mind:
Make Reasonable Assumptions
Try to make realistic assessments of future AI capabilities, motivations and oversight based on factual evidence. Avoid simply picking the most extreme options in every category, as that likely does not represent a plausible scenario.
Avoid Overconfidence Bias
Many experts are prone to overconfidence when making predictions about AI development timelines. Consider widening the timespans a bit compared to your instinctive assumptions.
Understand Influential Factors
Pay attention to which adjustments significantly sway the human extinction probabilities upwards or downwards. This highlights the most influential areas to focus safety research and policy efforts.
Consider Multiple Perspectives
Rerun the calculator using different sets of inputs to model alternative potential outcomes. This guards against anchoring on just your own speculative assumptions.
Share and Discuss Estimates
Leverage the URL sharing feature to debate with friends and colleagues about which input factors and assumptions seem most reasonable.
Using the tool responsibly along with these tips allows everyone to expand their understanding of the complex issues surrounding advanced AI.
Methodology Behind the AI Doom Calculator
The AI Doom Calculator was developed by AI Death Calculator, Validity Foundation and Rethink Priorities – organizations focused on mitigating extreme risks from emerging technologies.
It builds on past work quantifying existential risk, as well as subject matter expertise around AI development trajectories and potential safety strategies.
Here is an overview of the methodology used to generate the human extinction risk percentages:
Literature Reviews
The creators conducted extensive literature reviews of writings and surveys with AI experts on topics like:
- Predictions on Artificial General Intelligence (AGI) timelines
- AI safety and value alignment challenges
- Probabilities experts assign to human extinction risks
- Proposals for AI oversight approaches
This grounded the calculator in data-driven perspectives from leaders in the AI field.
Modeling Different Scenarios
Based on the literature, the developers defined a set of representative scenarios along two key dimensions:
- AI Capability Spectrum (Narrow AI -> AGI -> Superintelligent AI)
- AI Value Alignment Spectrum (Indifferent -> Beneficial -> Hostile)
Different combinations of capability and alignment levels cover a wide range of potential AI futures.
Risk Factor Formulas
For each intersecting scenario of capability vs. alignment, formulas were created to quantify extinction risk based on other influencing factors like:
- Presence of AI oversight
- Effectiveness of AI safety efforts
- Global coordination levels
- Weaponization dynamics
These formulas draw on both findings from the literature review and expert judgments.
Probability Distributions
Rather than estimating a single average risk percentage per timeline, probability distributions provide best-case, worst-case, and likely ranges around each estimate.
This accounts for the high uncertainty inherent in forecasting AI development impacts so far into the future.
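One simple way to produce such ranges, shown below purely as an illustration, is to sample from an uncertainty distribution over the risk level and report its percentiles. The choice of a Beta distribution and its parameters here are my own assumptions, not the calculator's documented method:

```python
import random

# Illustrative only: derive best-case / likely / worst-case risk ranges
# from an uncertainty distribution instead of a single point estimate.
random.seed(0)  # fixed seed so the sketch is reproducible

def risk_range(alpha, beta, n=10_000):
    # Sample possible risk levels from a Beta distribution (parameters
    # are invented for demonstration), then read off percentiles.
    samples = sorted(random.betavariate(alpha, beta) for _ in range(n))
    pct = lambda p: round(100 * samples[int(p * n)], 1)
    return {"best_case": pct(0.05), "likely": pct(0.50), "worst_case": pct(0.95)}

# e.g. a scenario whose central risk estimate sits around 60%
print(risk_range(6, 4))
```

Reporting the 5th, 50th, and 95th percentiles communicates both a central estimate and how wide the plausible range is, which matters when forecasting centuries ahead.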
User Testing and Refinement
Before finalizing the calculator, volunteers across varying technical and non-technical backgrounds beta tested the tool and provided feedback.
This user input helped refine the language used for the input factors and results to be approachable by a general audience.
Ongoing monitoring, benchmarking and refinement of the underlying methodology aims to maintain reliable and scientifically grounded metrics as understanding of AI progress evolves.
Limitations and Ways to Improve the Calculator
While the AI Doom Calculator introduces a data-driven methodology to estimating existential risk, the developers acknowledge certain limitations:
Narrow Focus on Extinction Risk
The probability metrics focus only on assessing the likelihood that AI causes human extinction. However, other adverse outcomes like lock-in on suboptimal values are also concerning.
Difficulty Modeling New Information
The tool can rapidly become outdated as AI capabilities and safety solutions advance, so the percentages may lag behind leading-edge research breakthroughs.
Assumptions on Trajectories
Despite consulting literature, the modeled development trajectories and scenarios inherently involve speculative assumptions by the creators.
Randomness and Unknown Unknowns
True randomness and unforeseen events can dramatically alter technological progress in unpredictable ways the model cannot account for.
Economic Disruption Not Factored In
While modeling extinction risk, the tool does not estimate AI's potential impacts on economies, jobs, inequality, and so on.
To improve upon these limitations, the developers aim to:
- Expand to assessing risks of broad suffering beyond just extinction
- Regularly update underlying models per latest research
- Conduct expert surveys to benchmark projections
- Incorporate probabilities around disruptive randomness
- Partner with economists to quantify sub-existential impacts of AI
Conclusion
The AI Doom Calculator offers an interactive way to explore assumptions and uncertainties around future AI development scenarios. Estimating extinction risks is fraught with challenges, but can help focus priorities to steer towards safer technological outcomes.
Used responsibly and with an understanding of its limitations, the tool promotes deeper investigation into the factors influencing beneficial versus dangerous AI progressions.
Rather than dire warnings or blind optimism, it grounds the debate in nuanced risk modeling to urge caution and awareness.
Calculating existential risk is itself an inexact science. But these efforts move the discussion in a more constructive direction – recognizing AI’s profound promise and perils, while we still have a chance to act wisely.
FAQs
What is the AI Doom Calculator?
The AI Doom Calculator is an online tool that estimates the risk of human extinction from the development of advanced artificial intelligence (AI) systems. Users input assumptions about factors like future AI capabilities, motivations and regulation to generate percentage chances of human extinction by various dates.
How does the calculator estimate extinction risk percentages?
The percentages are calculated using mathematical formulas incorporating user inputs as well as data and modeling based on literature reviews of AI expert predictions. Probability distributions are used to account for uncertainties around forecasting long-term AI impacts.
What time periods show extinction risk estimates?
The calculator shows estimated probabilities of human extinction occurring by several dates – 2030, 2050, 2100 and 3000. Comparing the percentages for different time periods illustrates how risks might grow over time based on the user’s inputs.