The future is always
twenty years out.
Every notable figure in AI has a public guess for when artificial general intelligence arrives. They have been making those guesses for seventy years, and the typical guess has, almost without exception, landed about twenty years from the moment it was made. Said in 1965. Said in 1993. Said again last week.
Two hundred and fifty forecasts from one hundred and forty named figures. Press play and watch the predictions accumulate from 1950 forward. The twenty-year horizon emerges as a faint glowing crest along which most predictions terminate, regardless of when they were made.
Everyone agrees AGI is coming. They have for seventy years.
In 1965, Herbert Simon said machines would be capable of doing any work a man could do within twenty years. In 1970, Marvin Minsky told Life Magazine the problem would be solved in three to eight years. In 1993, Vernor Vinge said the singularity would arrive within thirty years. In 1999, Ray Kurzweil drew a line at 2029. In 2025, Demis Hassabis said five to ten years. Five predictions spread across six decades, each pointing roughly five to thirty years past the speaker, most clustering near twenty years out.
The plot below is every public prediction we could find, organized so that the year a person spoke is on one axis and the year they pointed to is on the other. There is a faint crest across the chart where most of the dots terminate. That crest is twenty years from the speaker. It has been there the whole time.
Each lift is one prediction. It starts at the year the prediction was made and rises to the year it forecast. Press play to watch them appear in order. Hover over any lift for the quote.
ChatGPT shipped in November 2022. In the eighteen months that followed, the median expert prediction for AGI moved closer by roughly twenty-five years. It is the largest single shift in expert opinion in the history of the field.
The same word, four very different distributions. The frontier lab average and the survey-of-academics average are nearly twenty years apart, on the same question, in the same year.
Six figures who have publicly revised their AGI timeline more than once. Almost every revision has moved the date earlier.
Drag the slider. See who agrees with that year, what they said, and what camp they belong to.
The 2023 ESPAI survey asked researchers when AI would reach human-level performance on specific tasks. The answer is a calendar, not a date. The cracks come first.
DeepMind's "Levels of AGI" framework (Morris et al., 2023). One word, six different definitions. Where we stand and where each rung is projected to land. A frontier-lab framework, optimistic by design.
The same 2023 expert survey gave two answers to two slightly different questions. By when can AI do every task better than a human? Median answer: 2047. By when does AI actually do every task, replacing every job? Median answer: 2116. The gap between "possible" and "deployed" is bigger than the gap between today and "possible."
The Armstrong-Sotala study put a name on the pattern. Across sixty years of dated AGI predictions, the central tendency has been fifteen to twenty-five years from the time of utterance, regardless of the year. Simon to 1985, Minsky to the late seventies, Vinge to 2023, Kurzweil to 2029, Hassabis to 2030. Twenty years is what "soon, but not yet" sounds like in the mouth of an expert.
Three forces probably keep the horizon stable. Frontier-lab executives benefit from short timelines for fundraising. Safety researchers benefit from "soon enough to matter." Tenured academics benefit from "current paradigm will not work." The camps disagree on the year, but each camp has a structural reason to keep the year where it is.
And the definitions keep moving. AGI in 2015 meant something different from AGI in 2025. Jensen Huang's bar for "achieved" is an AI that can build a billion-dollar business. Sam Altman calls AGI "a very sloppy term" and now talks about "superintelligence" instead. As capabilities arrive, the goalposts slide forward to cover the still-missing pieces.
So: are 2026's predictions the first to break the pattern, or are they the same diagonal in new clothes? The honest answer is that we will know in 2046.
Forecasts answer when. They don't answer what. The most-discussed AI futures are the surface ones, utopia and extinction, but the probability mass sits below the waterline, in scenarios where humans don't get conquered so much as made redundant.
Click a label to read the scenario. Press Esc or the × to close.
Tegmark, Life 3.0 (2017), Ch. 5; Kulveit et al., "Gradual Disempowerment," arXiv:2501.16946 (2025). Tegmark deliberately does not assign probabilities to these scenarios. They are structural possibilities, not forecasts.
Two hundred and fifty rows assembled across four research batches in April 2026. Sources include the BLS-style synthesis of public-record predictions, the 2023 AI Impacts ESPAI survey (n=2,778), Metaculus aggregate snapshots, frontier-lab CEO statements, podcasts, blogs, and primary press. Each row carries a verification level and a source URL where available.
For each prediction we record the year it was said, the year (or year range) it forecasted, the speaker's role at the time, and the concept used (AGI, HLMI, ASI, TAI, "powerful AI"). When a person made multiple predictions across years, each is its own row. Definitions vary across speakers; the concept is preserved per row rather than normalized.
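The per-row shape described above can be sketched as a small record type. This is an illustrative sketch only: the field names, the `midpoint` rule for flattening ranges, and the `horizon` calculation are assumptions for clarity, not the dataset's actual column names or code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Prediction:
    """One row: a single dated public forecast (illustrative schema)."""
    speaker: str
    said_year: int                       # year the prediction was made
    forecast_start: int                  # earliest year pointed to
    forecast_end: Optional[int] = None   # set only when the forecast is a range
    role: str = ""                       # speaker's role at the time
    concept: str = "AGI"                 # AGI, HLMI, ASI, TAI, "powerful AI"

    def midpoint(self) -> float:
        """Single-year midpoint of the forecast (flattens ranges)."""
        end = self.forecast_end if self.forecast_end is not None else self.forecast_start
        return (self.forecast_start + end) / 2

    def horizon(self) -> float:
        """Years between utterance and forecast midpoint, i.e. how far out it points."""
        return self.midpoint() - self.said_year
```

For example, Minsky's 1970 "three to eight years" would be stored as `Prediction("Minsky", 1970, 1973, 1978)`, giving a horizon of 5.5 years, while Kurzweil's 1999 line at 2029 gives a horizon of 30.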
AI Impacts ESPAI 2023 ↗
Metaculus weakly-general-AI ↗
AI 2027 (Kokotajlo et al.) ↗
Situational Awareness (Aschenbrenner) ↗
Many quotes are paraphrased from secondary press; rows are flagged with a verification level. Predictions given as ranges are flattened to single-year midpoints, which discards the stated uncertainty. Survey aggregates appear as one row each despite representing thousands of underlying respondents. The sample is heavily anglophone and tilted toward public figures. The dataset is not exhaustive and is not meant to be.