
Nobody's Building AGI

The AGI race isn't about who builds it first. It's about who gets to redefine what 'AGI' means just before their next funding round...

Kamil Korczyński
Tags:
#ai
#agi
#thoughts

Nobody's Building AGI. They're Just Moving the Goalposts.

Here's what nobody wants to admit: the AGI race isn't about who builds it first. It's about who gets to declare victory by changing what "AGI" means.

Every few months, another CEO hints we're "close to AGI." Two years out, maybe three. Investors write checks with more zeros. And if you've been paying attention, really paying attention, you've noticed the definition keeps shifting. What used to mean "human-level general intelligence" now means "economically disruptive" or "PhD-level on benchmarks."

That's not progress. That's marketing.

The Bait-and-Switch You're Watching Happen

I remember when AGI meant something specific. A system that could reason like a human across any domain: learn to drive, write code, navigate office politics, solve novel problems, all without being explicitly trained on each task. The kind of intelligence that invents solutions rather than pattern-matching against trillion-token datasets.

Now? Look at the corporate blog posts. "AGI is technology that disrupts large parts of the economy." Or "systems that handle most knowledge work." Or my personal favorite: "PhD-level performance on academic benchmarks."

Notice what happened? We went from "thinks like a human" to "completes tasks we can measure."

If you define AGI as "economically transformative," congrats. ChatGPT might already qualify. If it's "beats our carefully designed benchmarks," that's achievable in the next product cycle. But if you stick with the original definition (actual human-level general intelligence that reasons, learns from few examples, and solves genuinely new problems), that's a completely different beast.

And here's the thing: definitions naturally evolve. Fields mature, terminology shifts. That's normal. What's not normal is when those shifts conveniently align with what's achievable right before a funding round.

The Scaling Myth Everyone Believed

The entire AI boom of the last five years rested on one simple bet: more data + more compute = more intelligence.

Train bigger models. Scale the infrastructure. Pour in the GPU hours. And it worked. The jump from GPT-3 to GPT-4 was real. The improvements were measurable and impressive.

This created a quasi-religious belief that scaling would eventually produce AGI. Just keep pushing the curve upward. The breakthrough is inevitable.

I've seen this movie before.

Anyone who's done optimization work recognizes the pattern: diminishing returns. That first 80% of capability gains came from scaling. Every dollar of compute, every terabyte of training data produced visible improvements. We captured that 80%.

The last 20% is different. The capabilities that would constitute actual AGI (robust reasoning, true generalization, reliable performance on edge cases) require exponentially more resources for incremental gains. You're not climbing a linear slope anymore. You're trying to summit Everest with each step costing ten times the previous one.
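To make that curve concrete, here's a toy sketch of the power-law assumption behind scaling laws. The exponent is invented for illustration, not measured from any real model; the point is the shape of the curve, not the numbers.

```python
# Toy illustration of diminishing returns under a power-law scaling assumption.
# ALPHA is illustrative, not a measurement of any real model.

ALPHA = 0.05  # assume loss ~ compute ** -ALPHA

def compute_multiplier(loss_cut: float, alpha: float = ALPHA) -> float:
    """How much more compute it takes to cut loss by the given factor."""
    return loss_cut ** (1.0 / alpha)

for cut in (1.1, 1.25, 1.5, 2.0):
    print(f"Cutting loss by {cut}x costs ~{compute_multiplier(cut):,.0f}x more compute")
```

Under this toy assumption, halving loss costs about a million times more compute than the baseline. The exact exponent doesn't matter; every curve in this family bends the same way.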

Look at the infrastructure investments. The data center announcements. The power grid requirements. Models are improving, but the cost per improvement is growing faster than the improvements themselves.

You can only scale this approach as long as capital scales. And capital only scales as long as investors believe you'll reach the destination.

Hence: redefine the destination.

We've Been Here Before (And Nobody Learned)

IBM Watson was going to revolutionize medicine. It got sold for parts.

Expert systems in the 1980s were going to automate expertise. They found narrow success and broad failure.

Self-driving cars have been "two years away" for over a decade.

The pattern is always the same: impressive demos, difficult deployment, eventual recalibration of expectations.

Yes, breakthroughs happen. Yes, they're unpredictable. But "a breakthrough might happen" isn't a strategy. It's a hope. And building a $100 billion valuation on hope tends to end badly.

The 95% Accuracy Trap

Here's what separates people who deploy AI from people who demo it: the last mile problem.

Demos are curated. Cherry-picked examples under ideal conditions. Production systems face edge cases, unexpected inputs, adversarial users, and the accumulated complexity of reality.

What 95% accuracy actually means in practice: if you can't identify which 5% is wrong, you can't trust any of it. Teams end up reviewing every output anyway, negating the efficiency gain. A tool that promises to save 10 hours a week can easily create 15 hours of validation work instead.

The gap between "works 90% of the time" and "works 99% of the time" is enormous: a tenfold cut in the error rate. The gap between 99% and 99.9% is another tenfold cut, and each additional nine costs far more to achieve than the last.
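To put rough numbers on that, here's a back-of-the-envelope sketch. The task volume and minutes-per-check figures below are assumptions for illustration, not data from any real deployment.

```python
# Back-of-the-envelope on the "last mile" problem.
# TASKS_PER_WEEK and MINUTES_PER_CHECK are assumed values, not measurements.

TASKS_PER_WEEK = 1_000
MINUTES_PER_CHECK = 3  # assumed time for a human to verify one output

# If you can't predict which outputs are wrong, every output gets checked.
review_hours = TASKS_PER_WEEK * MINUTES_PER_CHECK / 60
print(f"Review burden at any accuracy: ~{review_hours:.0f} hours/week")

for accuracy in (0.90, 0.95, 0.99, 0.999):
    bad = TASKS_PER_WEEK * (1 - accuracy)
    print(f"{accuracy:.1%} accuracy -> ~{bad:.0f} bad outputs/week if nobody checks")
```

The review burden doesn't shrink because accuracy gained a few points; it shrinks only when you can predict where the failures are. That's the gap between a demo and a deployment.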

This is why enterprise AI adoption is slower than the headlines suggest. Most proof-of-concepts die before production. Not because AI is useless, but because "almost reliable" isn't the same as reliable.

I've shipped enough AI features to know: current systems excel at specific, well-defined tasks with clear success criteria and tolerance for occasional errors. AI augmentation (helping humans) works. AI automation (replacing humans) mostly doesn't.

That's genuinely useful. But it's not AGI. It's very good narrow AI. And the gap between those two isn't closing as fast as the announcements suggest.

What We're Actually Building

If we're not approaching AGI, at least not the original version, what are we actually building?

Probably something like horizontal scaling: more specialized models for specific domains. Better tooling for particular use cases. Incremental improvements in reliability and efficiency. AI as a powerful tool integrated into workflows.

This is valuable. It's just not the revolution being marketed.

Here's what I tell my team: evaluate AI on current, demonstrated capability, not roadmap promises. Be skeptical of timelines tied to redefined goals. Focus on specific problems these tools solve today, not general intelligence tomorrow.

The companies making real progress are usually quieter than the ones making announcements.

Could I be wrong? Sure. A genuine architectural breakthrough, something beyond scaling transformers, could change everything. But I've been in tech long enough to know: don't build strategy around unpredictable breakthroughs. Build strategy around demonstrated capability.

One is planning. The other is gambling.

The Part Nobody Wants to Hear

I work with LLMs daily. I ship AI features to production. The technology is genuinely capable in ways that would've seemed like science fiction a decade ago.

The problem isn't that AI is overhyped or useless. The problem is the framing.

Confusing "very good AI" with "AGI" benefits VCs writing checks and executives making promises. It doesn't benefit engineers deciding where to allocate resources, what to build, or how to think about the next two years.

The interesting question isn't whether AGI is coming. It's whether it will even matter if the redefined version arrives first and everyone declares victory in a race whose finish line quietly moved.

So What Complexity Are You Willing to Manage?

Here's your choice: you can believe the hype cycle and plan for AGI that may never arrive in the form promised. Or you can accept that we're building very sophisticated narrow AI that requires careful deployment, constant monitoring, and realistic expectations about what it can reliably do.

One path leads to disappointment when the demos don't match production. The other leads to incremental wins that compound over time.

I know which bet I'm making. The question is: are you building for the AI that exists, or the AI you've been promised?

That answer determines what kind of engineer you'll be.
