David Shapiro
I'm reading far fewer books lately, but far more words. I wonder what impact this will have on society. Are books becoming irrelevant?
Read the full article here: daveshap.substack.com/p/books-dont-stack-up-to-ai-…
8 hours ago (edited)
David Shapiro
Which chatbot do you find to be the most emotionally intelligent?
1 day ago
David Shapiro
Radical acceptance is the only way to orient towards the coming wave of AI as it surpasses humans across the board.
Read the full article here: daveshap.substack.com/p/you-are-not-smart-enough-t…
2 days ago
David Shapiro
Every job you can imagine falls into one of four buckets: agriculture, manufacturing, services, or the experience/attention economy. That’s it. These are the four paradigms of human labor.
Agriculture fed us. Manufacturing built the modern world. Services became the dominant employer in the 20th century. And now, in the 21st, the experience economy—where meaning, identity, and attention are the commodities—has emerged as the frontier.
Here’s the catch: artificial intelligence, robotics, and automation are devouring the first three. Farming, factories, and even white-collar services are increasingly machine territory. That leaves only the experience economy as a refuge.
But not everyone can be an influencer, creator, or curator of meaning. The experience economy is powerful, but it cannot absorb the billions of workers displaced from the other three sectors.
And there is no fifth paradigm. Once machines take over agriculture, manufacturing, and services, and the experience economy reaches its limits, we arrive at a post-labor reality.
The question ahead of us is not whether jobs will survive—it’s how we will redefine value, purpose, and human meaning in a world where labor is no longer the foundation of the economy.
3 days ago
David Shapiro
Bro just make a YouTube channel about your special interest.
4 days ago
David Shapiro
AI will write 350 BILLION lines of code by 2030*.
METR's data shows that agentic autonomy is accelerating.
But what about total Lines Of Code (LOC) written?
After all, just because an AI can complete tasks that take a human 4 hours, succeeding 50% of the time, does NOT mean that it is producing useful, integrated output.
For today's graph, we combed through the scattered information coming out of enterprises and frontier labs, in an attempt to assess: what is the growth rate of AI-written LOC that actually gets integrated into production?
The graph below shows our current estimate (and it is an estimate, as there are very few good sources of hard data!)
With that being said, we reach the reasonable (to us) conclusion that AI will probably write about 60% of all code globally by 2030.
Personally, I feel like this undersells exponential progress on metrics like agentic autonomy. Consider that two labs just dominated ICPC in 2025 and show no signs of slowing down!
Even so, this data is rooted in empirical trends and conservatively forecast out.
One thing we had to do was differentiate between "humans using AI to write more code" and "AI writing code fully autonomously," which is of course hard to model, as agentic autonomy is really only ramping up this year.
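To make the shape of that forecast concrete, here's a minimal sketch of the kind of curve we're talking about. Every parameter here (midpoint, ramp speed, saturation ceiling) is an illustrative assumption, tuned only so the curve lands near 60% in 2030; these are not the fitted values behind the graph:

```python
import math

def ai_code_share(year, midpoint=2028.5, steepness=0.73, ceiling=0.80):
    """Logistic (S-curve) share of production code written by AI.

    All three parameters are illustrative assumptions chosen so the
    curve crosses ~60% in 2030; they are not fitted to real data.
    - midpoint: year the curve reaches half its ceiling
    - steepness: how quickly adoption ramps
    - ceiling: saturation level (some code likely stays human-written)
    """
    return ceiling / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in range(2025, 2031):
    print(year, f"{ai_code_share(year):.0%}")  # 2030 prints ~60%
```

An S-curve rather than a pure exponential is the conservative choice: the share can't compound past 100%, and integration into production lags raw generation ability.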
But what do you think? Does this data oversell or undersell AI code generation?
6 days ago
David Shapiro
Okay, here's the simple metaphor for how OpenAI cut "scheming" rates from 13% to 0.3% in some models:
You know how in math class you'd get graded as much (or more) on the METHOD you used? SHOW YOUR WORK.
They want to see you following their first principles, their reasoning, their methods, to ensure that you've internalized the protocol. That is, to many math teachers, far more important than just getting the right answer.
(For me personally, I often derived my own methods, which were admittedly brittle, and my teacher marked them wrong even when I got the right answer.)
For AI? It's not so different.
Rather than just grading based on "did your final answer align with the correct values" they measure "did you actually internalize the principles we wanted each step of the way?"
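As a toy illustration (not OpenAI's actual training code), here's the difference between the two grading schemes; `step_checker` is a hypothetical stand-in for whatever process signal the lab actually uses:

```python
def outcome_reward(final_answer, correct_answer):
    # Grade only the final answer: right method or wrong, full credit
    # as long as the output matches. This is the "answer key" teacher.
    return 1.0 if final_answer == correct_answer else 0.0

def process_reward(steps, step_checker, final_answer, correct_answer):
    # Grade the work shown: each reasoning step is inspected, so a
    # right answer reached the "wrong" way no longer earns full marks.
    step_score = sum(step_checker(s) for s in steps) / max(len(steps), 1)
    answer_score = 1.0 if final_answer == correct_answer else 0.0
    return 0.5 * step_score + 0.5 * answer_score
```

The 50/50 weighting is arbitrary; the point is simply that the method term is now part of what gets rewarded.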
This is a similar-but-different insight to the recent anti-hallucination paper. For tamping down on hallucinations, the cause was pretty straightforward: it came down to the same logic as SAT scoring.
With the SAT, guessing costs you nothing, so a wild-ass guess (made confidently) strictly beats a blank. And when you RL a model and ONLY reward correct answers, you're inherently training it to do the same thing: a guess at least has a chance, while "I don't know" is a guaranteed zero.
In other words, you have to score more like the German education system to avoid hallucinations: wrong answers actively cost you points, so "I don't know" beats a confident bluff.
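The incentive math is easy to see with made-up numbers. Suppose a confident guess is right 20% of the time:

```python
p = 0.20  # assumed probability that a confident guess is correct

# Right-answer-only scoring: being wrong costs nothing.
guess   = p * 1.0 + (1 - p) * 0.0   # expected score: 0.20
abstain = 0.0                        # "I don't know" scores nothing
print(guess > abstain)  # True -> training rewards confident bluffing

# Negative marking: wrong answers are penalized.
guess   = p * 1.0 + (1 - p) * -0.5  # expected score: -0.20
abstain = 0.0
print(guess > abstain)  # False -> training rewards admitting uncertainty
```

Under the first scheme, bluffing dominates abstaining at any nonzero accuracy, which is exactly the hallucination incentive.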
In this case, you tamp down on scheming by teaching like Mrs. Stewart did back in high school.
Full paper here: static1.squarespace.com/static/6883977a51f5d503d44…
I suspect this method will apply broadly to many domains in LLM pedagogy:
1. not just alignment
2. probably also math
3. science
4. planning
5. etc
This paper is potentially HUGE.
And it gets there via a relatively (in hindsight) simple three-step pipeline: essentially, make the "path of least resistance" become "actually learn to be aligned" rather than "just pretend to be aligned."
Elegantly simple overall.
1 week ago
David Shapiro
Robots are not yet priced into the economy.
1 week ago