The inputs are bending up.
Is intelligence following?
Seven charts drawn from public data. Epoch AI, METR, Stanford's AI Index, the IEA, and the major capability leaderboards. Each chart asks a single question. Together they answer a bigger one: are we, in any concrete and measurable sense, approaching escape velocity?
Scroll through seven interactive charts. Each one comes from a real public dataset you can check yourself. The pro-singularity case and the skeptical case are both drawn from the same numbers. You decide which story fits.
Created with Kimi K2.6 · Language models can hallucinate. Verify primary sources.
In 1965, the British mathematician and cryptologist I. J. Good described the last invention humanity would ever need to make.
He called it an "ultraintelligent machine," and predicted that once it could improve its own design, "there would unquestionably be an intelligence explosion." Sixty years later, the CEOs building today's AI say much the same thing. Other serious researchers say the curve is about to flatten.
This page doesn't pick a side. It shows the data. Seven charts, each from a public dataset you can verify. The question underneath is simple: are the trends that feed AI actually accelerating, or are they about to hit a wall?
Frontier training compute, 1950–2026. Zoom to Recent to read the individual labels. Toggle the reference curves to see whether we're tracking ordinary exponential, super-exponential, or something steeper still.
What this means
The y-axis is logarithmic. Each gridline up means 10× more compute, not 10 more units. A straight line on this chart is already exponential growth. A curve that bends upward on a log chart is the rare thing — super-exponential. Today's frontier doubles every ~6 months; Moore's Law for transistors took ~24 months. The orange "super-exp" line shows what the next decade looks like if the doubling time itself keeps shrinking. The data is currently tracking the blue 6-month line — fast, but not yet bending above it.
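The log-axis point is worth making concrete: with a constant doubling time, log-compute grows linearly, which is why a straight line on this chart already means exponential growth. A minimal sketch with illustrative numbers, not Epoch's actual fit:

```javascript
// Exponential growth C(t) = C0 · 2^(t/T) plots as a straight line on a
// log10 axis, with slope log10(2)/T per month. Numbers are illustrative.
function log10Compute(startExp, months, doublingMonths) {
  return startExp + (months / doublingMonths) * Math.log10(2);
}
```

Starting at 10^25 FLOP with a 6-month doubling time, a year adds two doublings: `log10Compute(25, 12, 6)` ≈ 25.6, roughly 4× the compute. Only a line whose slope itself increases, the orange curve, counts as super-exponential.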
Counterpoint
Epoch's own data shows that growth in the very largest training runs slowed to about 4.2× per year after 2018. And here is the key caveat: more compute does not automatically mean smarter AI. Compute is just an ingredient, not the finished product.
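A quick consistency check on these figures: an annual growth factor g converts to a doubling time via T = 12 · ln(2) / ln(g). A sketch:

```javascript
// Convert an annual growth factor into a doubling time in months:
// T = 12 · ln(2) / ln(g). Used here only as a sanity check on the chart.
function doublingMonths(annualFactor) {
  return 12 * Math.LN2 / Math.log(annualFactor);
}
```

`doublingMonths(4.2)` ≈ 5.8, so even the "slowed" post-2018 rate of 4.2× per year is still roughly a six-month doubling time, consistent with the headline figure above.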
AI gets cheaper in two ways at once: the chips get faster, and the code gets smarter. This chart shows both curves. Click "Combined" to see what happens when you multiply them together.
What this means
Two curves are climbing at once. The blue line is raw compute — the total FLOP poured into a frontier training run, doubling every ~6 months. The gold dashed line is algorithmic efficiency — the same answer for fewer FLOP because the code got smarter, doubling every ~8 months. Click "Combined" and the chart shows their product: effective compute, doubling every ~3.5 months. That is why Phi-3-mini matches the 2022 PaLM-540B benchmark at 1/142 the parameters, and why GPT-4-level intelligence costs 100× less than it did 18 months ago.
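The ~3.5-month figure is not a separate measurement; it follows from the two rates adding. For simultaneous exponentials, 1/T_combined = 1/T_compute + 1/T_algorithms. A one-line check:

```javascript
// Growth rates of simultaneous exponentials add, so doubling times
// combine harmonically: 1/T_combined = 1/T1 + 1/T2.
function combinedDoublingMonths(t1, t2) {
  return 1 / (1 / t1 + 1 / t2);
}
```

`combinedDoublingMonths(6, 8)` ≈ 3.43 months, which is the chart's ~3.5-month effective-compute figure.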
How long a task can an AI complete on its own? METR measures this in minutes of human-equivalent work. Since 2019 that number has doubled every 7 months. In 2024–2026 it sped up to every 4 months. Drag the handle to see where that trend lands.
Caveats
The robustness of the early data points in this series has been questioned. And METR's own July 2025 study found that experienced developers were actually 19% slower when using AI tools on their own real codebases. A 50% success rate on a benchmark is not the same as working reliably in production.
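The headline extrapolation in the chart above is a plain doubling model. A minimal sketch for readers who want to reproduce the projection themselves; this is illustrative, not METR's actual log-linear regression:

```javascript
// Project a task horizon (in minutes) forward under a constant doubling time.
// h0Minutes = horizon today; doublingMonths = observed doubling period.
function projectHorizon(h0Minutes, doublingMonths, elapsedMonths) {
  return h0Minutes * Math.pow(2, elapsedMonths / doublingMonths);
}
```

At a 4-month doubling, a 60-minute horizon reaches `projectHorizon(60, 4, 24)` = 3,840 minutes, about 64 hours, in two years.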
GPT-4-level intelligence cost $36 per million tokens in 2023. By early 2026 it costs about $0.18. That is not a typo. Scrub the timeline or press play to watch the price fall.
What this means
Each dot is the cheapest model on the market that hits a given quality bar. In 2023, GPT-4 cost $36 per million tokens — the equivalent of $13/hr if you ran it as a knowledge worker at 100 tokens per second. By April 2026 the same quality runs at $0.18 per million tokens — about $0.06/hr, more than 100× below the US federal minimum wage. Toggle "Human-labor-equivalent cost" to put the y-axis in dollars-per-hour. The dashed rose line is $7.25/hr — the US minimum. Frontier intelligence crossed that line at the end of 2024 and is still falling.
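The dollars-per-hour conversion above is straightforward arithmetic: price per million tokens times millions of tokens generated per hour. A sketch at the chart's assumed 100 tokens per second:

```javascript
// Convert an API price ($ per million tokens) into a human-labor-equivalent
// hourly cost at a given generation speed (tokens per second).
function dollarsPerHour(pricePerMillionTokens, tokensPerSecond) {
  const millionTokensPerHour = tokensPerSecond * 3600 / 1e6; // 100 tok/s → 0.36
  return pricePerMillionTokens * millionTokensPerHour;
}
```

`dollarsPerHour(36, 100)` ≈ 12.96 for 2023-era GPT-4, and `dollarsPerHour(0.18, 100)` ≈ 0.065, matching the $13/hr and $0.06/hr figures in the text.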
Researchers design a hard test. Frontier models break it inside a year. They build a harder one. Same thing. Each row below is one benchmark. The bar runs from the day the test was first scored to today; the open circle marks the day a model crossed human parity.
What this means
A short red bar means a benchmark fell quickly — the field invented it, and within months a frontier model crossed human parity. A long blue bar means the test is still beating frontier systems. Notice the pattern: tests built in 2021 took two-to-three years to fall. Tests built in 2024 are falling in under a year. This is "saturation watch" — and the meta-trend is that the time between "we built a hard test" and "the test got cracked" keeps shrinking. Below: each tile is a sparkline of model scores over time. Dashed line = human baseline.
The mirage warning
A 2023 study showed that many "sudden leaps" in AI ability disappear when you change how you measure them. And the same models that score 80% on one coding test score 45% on a harder, cleaner version. The tests may be easier to game than they look.
Exponentials always hit a ceiling eventually. We are running out of training data, power grid capacity, and money for bigger runs. Drag the dashed lines to see when each wall might arrive.
Every past technology that looked like a runaway exponential eventually bent into an S-curve or flattened out. Moore's Law slowed. Genome sequencing plateaued. Airline speeds peaked in 1969. Toggle the layers to compare AI's trajectory against history's other "this time is different" moments.
Editorial note
This chart is the most opinionated one here. Kurzweil's "Law of Accelerating Returns" sounds convincing, but it is a guess, not a law of physics. You only know an exponential has bent after it has already happened.
The raw ingredients of AI are accelerating. Frontier compute doubles every ~6 months. Algorithms get twice as efficient every ~8 months. Multiply those together and effective compute doubles every ~3.5 months.
AI agents can now handle tasks that took humans ~7 hours, and that horizon is doubling every 4–7 months. The price of GPT-4-level intelligence has fallen ~200× in two years. Benchmarks designed to be "uncrackable" in 2023 were solved in 12–18 months.
If even half these trends hold for another 2–3 years, the world looks qualitatively different. It does not matter whether you call that "singularity" or just "really fast progress."
Every exponential in history eventually bends. Moore's Law slowed after 2015. Genome sequencing collapsed in cost, then plateaued. Airline speeds peaked in 1969 and have not moved since.
AI faces real walls. We may run out of quality training data by 2028–2032. Individual training runs are projected to hit a practical duration ceiling of about nine months around 2027. Power demand is on track to strain the electrical grid.
And the benchmarks are messier than they look. The same models that score 80% on one coding test score 45% on a cleaner version. METR's own study found experienced developers were slower with AI tools on real tasks. These are the fingerprints of an S-curve in its early stages.
Both stories fit the same data. The single question that splits them is this: Can AI meaningfully speed up AI research itself?
Right now the answer is "maybe, on short tasks." METR's RE-Bench shows AI agents beat human experts on quick ML engineering problems but still lose on projects that take a full workday. The gap is closing, but we will not know which way it breaks until 2026–27.
Until then, the honest read is simpler than either camp admits. We are watching the fastest input-side acceleration in technological history run headfirst into the first credible physical and data ceilings. That collision, not the headline, is the real story.
What would change your mind?
- METR horizon hits 1 month with >80% reliability
- ARC-AGI-3 cracked above 50%
- Data-wall projections move past 2030
- Frontier compute growth slows below 10%/year
- AI R&D agents beat humans on day-long tasks
- First $100B+ training run announced
Every chart is drawn with plain JavaScript and SVG. No frameworks, no build step, no analytics trackers. The data lives as simple JSON files right next to the page. Once loaded, it works offline.
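Stripped down, that kind of no-framework chart code is just a pure function from data points to an SVG path string. A hypothetical sketch; these names are illustrative, not this page's actual source:

```javascript
// Map (year, FLOP) points onto an SVG polyline path with a log10 y-axis.
// width/height are the drawing area in px; yMinExp/yMaxExp bound the axis
// as powers of ten. Illustrative only, not the page's real chart code.
function logPath(points, width, height, yMinExp, yMaxExp) {
  const years = points.map(p => p.year);
  const x0 = Math.min(...years), x1 = Math.max(...years);
  return points.map((p, i) => {
    const x = ((p.year - x0) / (x1 - x0)) * width;
    const yExp = Math.log10(p.flop);                 // log scale: each gridline = 10×
    const y = height - ((yExp - yMinExp) / (yMaxExp - yMinExp)) * height;
    return `${i === 0 ? "M" : "L"}${x.toFixed(1)},${y.toFixed(1)}`;
  }).join(" ");
}
```

The returned string drops straight into an SVG `<path>` element's `d` attribute; everything else on a page like this is axis ticks and labels built the same way.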
Every number comes from a public dataset you can check yourself. Primary sources: Epoch AI (compute trends), METR (agent capability), Stanford HAI (AI Index), IEA (energy projections), and the major benchmark leaderboards. Last updated April 2026.
Epoch AI · Compute Trends ↗
METR · Time-Horizon Paper ↗
Stanford HAI · AI Index 2025 ↗
IEA · Energy and AI ↗
Schaeffer et al. · Emergent Abilities Mirage ↗
Several anchor numbers are current as of early 2026 and will move. Benchmark scores are contested and subject to contamination. The "intelligence explosion" framing carries unavoidable editorial weight. This page shows signals, not prophecies. The data argues with itself by design.
This page was generated by a large language model (Kimi K2.6). While every dataset is anchored to a real public source, LLMs can hallucinate figures, misattribute findings, and confabulate citations. Do not treat any number here as verified fact without checking the primary source. The links in the reading list are the ground truth.