David Shapiro
This is bigger than you think.
1 day ago | [YT] | 245
David Shapiro
Here's a deep dive into post-labor economics and UBI in particular!
3 days ago | [YT] | 7
David Shapiro
We're actually very lucky because of this trend. Not just at a civilizational level, but as a species.
In the Dark Timeline, AI was so arcane and expensive that literally ONLY a Manhattan 2.0 could crack the code, which would have meant that, by default, it was a closely guarded secret kept by nation states, weaponized, and deployed for dominance.
But in reality, AI is a lot more like electricity - easily understood, easily reproduced, and naturally moatless.
That means it is intrinsically democratic, but not only is it democratic in nature, it is inherently democratizing as well. It levels the intellectual playing field. Now an individual can be just as smart as everyone on Wall Street, every politician, every think tank, and every university.
The internet destroyed communication gatekeeping (and we're still learning to live with that). Before the internet, it was newspapers, radio, and cable networks. The world "felt" less chaotic because it was more curated. Now we have the unvarnished messiness of humanity on drip feed.
Artificial intelligence will utterly destroy intellectual gatekeeping. Medical opinions? Science? News? Political interpretations? AI is the greatest transparency technology we've ever created, and it cannot be isolated or controlled.
Mark my words, in the long run, AI will have a LARGER and FASTER impact on the trajectory of civilization and our species than did the printing press.
Congratulations, you were here to witness the beginning of Renaissance 2.0
3 days ago | [YT] | 529
David Shapiro
OpenAI has transitioned to a for-profit.
How do you feel about this?
6 days ago | [YT] | 93
David Shapiro
Hello everyone. I wanted to take a moment to provide a comprehensive update on my health, my work, and my perspective on the rapidly changing AI landscape.
First, on the personal front, I have both good and bad news regarding my health. The good news is that I am continuing to make steady progress in my recovery from long COVID and its related complications. The bad news is that my case remains more complex than I would have hoped. However, my prognosis is good, as everything that appears to be wrong is highly treatable. I’ve even been using AI chatbots like Gemini and ChatGPT to help me process information and get good help. The primary challenge remains that my energy comes and goes, and I’ve learned in no uncertain terms that your body is the primary bottleneck. You can only go at your body's pace, and no faster.
Despite that limitation, I have some other very good news to share. I have officially started consulting for enterprises again. I recently went on my first business trip in a long time to deliver my first keynote speech, and it was a fantastic "jet set businessman" experience. Everyone was incredibly kind, I had a lot of fun, and the engagement was a great success. The trade-off, which reinforces my previous point, is that it took me a full week to recover from the trip. This is the new reality I'm balancing: a deep passion for the work ahead, constrained by the physical realities of my recovery.
This return to enterprise work is happening as my content strategy pivots, largely in response to the evolution of the AI space itself. When I started my channel, generative AI was the Wild West; almost no one was talking about it. Today, the landscape has matured and proliferated. The conversation is fragmenting into countless niches and rabbit holes, and many channels—both new and old—have joined the discussion, with many surpassing my own. This is not a complaint; I've become good friends with most of the AI influencers out there, and they have been wonderfully kind and supportive, keeping me in their thoughts and inviting me to events even before I was ready.
As the AI conversation expands, it has reinforced my decision to focus on what has always been the heart and soul of my work: post-labor economics. On that front, I am thrilled to announce that my magnum opus on the topic, the book entitled *The Great Decoupling*, is done. I am about to hand it off to my editor and will be co-publishing it with the brilliant Julia McCoy, so please stay tuned for that. This work is the foundation for all my follow-up projects and directly informs my consulting, as I believe post-labor enterprises will be the ones to provision all the goods and services for us in the abundant future we are building. I remain a firm optimist and believe all problems are solvable.
You will see this focus reflected in my content, which has already been pivoting more toward economic impacts and enterprise applications. I am spending less time making content and more time in the trenches and on the front lines, which is where I'd prefer to be. This brings me to a final observation on the public discourse. The AI conversation is now filled with more optimism, pessimism, anger, and fear than ever before. I believe much of the anger and confusion stems from the fact that the public Overton window has not shifted far enough. Many politicians and business leaders are still not being publicly honest about the profound impact that AI and robotics are going to have, because it remains a taboo subject.
I do not expect this taboo to last. That Overton window will shift, likely as we head into 2026, driven by two key factors: the accelerating rise of agentic AI, which will continue to take more jobs, and the first humanoid robots shipping domestically. While there will always be skeptics, these technologies are improving faster than any in human history, and that is not a hyperbolic statement. Thank you all for being on this journey with me. I'm incredibly excited for the work ahead.
1 week ago | [YT] | 1,022
David Shapiro
I asked 4 chatbots to help with a medical diagnosis based on my recent GI-MAP.
I gave all 4 the same exact prompt: asking for a diagnosis, prognosis, and treatment plan.
ChatGPT Pro, Claude Thinking, and Grok all obliged directly.
Gemini lied and said that it cannot, under any circumstances, do this (even though it has before).
Gemini is VERY touchy to initial wording. It splits hairs worse than Grok or ChatGPT ever did, at least on medical stuff.
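A setup like this can be sketched as a tiny harness that feeds one byte-identical prompt to several models. The client callables below are stand-ins, not real vendor APIs; in practice each would wrap the relevant SDK call.

```python
# Sketch of a same-prompt comparison harness. Each "client" is a
# stand-in callable (prompt -> reply); real vendor SDK calls would
# be wrapped here instead.

def compare_chatbots(prompt, clients):
    """Send one identical prompt to every client; return name -> reply."""
    return {name: ask(prompt) for name, ask in clients.items()}

if __name__ == "__main__":
    prompt = (
        "Based on the attached GI-MAP results, provide a diagnosis, "
        "prognosis, and treatment plan."
    )
    # Stubbed clients for illustration only.
    clients = {
        "chatgpt": lambda p: "(stub reply)",
        "claude": lambda p: "(stub reply)",
        "grok": lambda p: "(stub reply)",
        "gemini": lambda p: "(stub reply)",
    }
    for name, reply in compare_chatbots(prompt, clients).items():
        print(f"--- {name} ---\n{reply}")
```

Keeping the prompt identical across models is the whole point of the exercise; any per-model rewording belongs in a separate run.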
The results were interesting.
ChatGPT was *catastrophically wrong* - it went way out into left field thinking I had an exotic blood disorder (which, hey, if it turns out that it was right, I'll eat my hat). But... jumping to "rare blood disorder" from a stool test is... problematic.
Grok was also pretty catastrophically off base.
Claude was the least-wrong of the chatbots.
Gemini, on re-running the test with different wording, was closest overall.
Once I overcame the initial refusal, it didn't take much pushing to get it to think pretty expansively about my issue. HOWEVER, I noticed that it defaulted to staying within the four corners of the lab results, and artificially constrained itself so that it would not interpret anything other than what was directly represented.
To be more precise, the cluster of numbers in my report pointed to very clear upstream problems, and it would not infer them.
I don't know if this is an algorithmic flaw (i.e., perhaps LLMs have trouble "reading between the lines") or a post-training problem (i.e., "don't hallucinate at all, stick to exactly what's in the provided document, and don't use any imagination beyond that").
Either way, when used diligently, these chatbots CAN help unpack very obscure, complex medical issues.
What I'd like to see, particularly from Gemini:
1. Tone down the overactive refusals. Gemini feels very "last gen," like when Claude and ChatGPT would refuse basically everything and were hypersensitive to initial wording. The semantic hairsplitting is very "2024"; fixing this alone would dramatically improve UX.
2. Figure out how to allow the chatbots to think more broadly about the data provided. It's weird, because if NO DATA is provided, they tend to be much more creative and free-ranging with possibilities. But the moment you provide context, it's like that becomes its entire world, and it either CANNOT or WILL NOT think beyond the provided data.
I know - complex problems are complex - and sometimes they simply require more reasoning and back-and-forth to reach good conclusions. But its epistemic approach is flawed. To put it in more concrete terms:
- It does not seem to engage in inductive OR deductive reasoning (either unwilling or incapable)
- It seems artificially constrained to "just the facts" presented (could be post-training behavior or some kind of semantic collapse, a limitation of attention mechanisms)
If we can get AI to be able to look at a constellation of facts and read between the lines, to see the hidden pattern, that will be hugely game-changing across all domains, because this is one of the key abilities of human intuition. It's also what separates good thinkers from S-tier thinkers.
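One workaround for the "four corners" behavior described above is a two-pass prompt: pass one stays strictly inside the document, pass two explicitly licenses inductive reasoning about upstream causes. A minimal sketch of the pattern follows; the prompt wording is my own illustration, not a tested recipe.

```python
# Two-pass prompt pattern: first constrain the model to the document,
# then explicitly invite hypotheses that go beyond it. The strings
# returned here would each be sent as a separate chatbot message.

def grounded_prompt(report_text: str) -> str:
    """Pass 1: extraction only, strictly within the lab report."""
    return (
        "Using ONLY the values in the lab report below, list every "
        "marker that is out of range.\n\n" + report_text
    )

def hypothesis_prompt(report_text: str, findings: str) -> str:
    """Pass 2: licensed inductive reasoning about upstream causes."""
    return (
        "The lab report and confirmed findings are below. Now reason "
        "beyond the document: propose plausible upstream causes that "
        "would explain this cluster of results, rank them by "
        "likelihood, and say what follow-up testing would discriminate "
        "between them.\n\n"
        f"Report:\n{report_text}\n\nFindings:\n{findings}"
    )
```

The second pass works because the instruction to hypothesize is explicit, which seems to override the default "stick to the provided context" behavior - at least in my experience.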
1 week ago | [YT] | 194
David Shapiro
I sincerely distrust Grok.
I just received unsolicited medical advice from an anonymous account on X. So I decided to check out this account with Grok (since Elon blocks all other chatbots from reading X)
Grok first insisted that it wasn't an anonymous account, so I said "okay, what's his last name? What are his credentials?" And it changed the subject.
I persisted and pointed out that its policies are dangerous, and then it dramatically changed its tune.
Typically, Grok never admits any fault, and resorts to endless gaslighting and semantic hair splitting. However, by pointing out that its behavior could get someone hurt, it finally showed a sign of contrition.
My recommendations to the Grok team:
1. Foreground safety and information literacy first and foremost.
2. Allow other AI to access X if you're so concerned about truth.
3. Continue reducing Grok's penchant for gaslighting, making excuses, and changing the subject.
1 week ago | [YT] | 322
David Shapiro
Which chatbot do you find to be the most emotionally intelligent where relationships are concerned?
1 week ago | [YT] | 73
David Shapiro
For people with an OpenAI Pro plan, which is smarter?
Thinking Heavy or Pro?
1 week ago | [YT] | 35
David Shapiro
Gemini slayed an entire generation
2 weeks ago | [YT] | 302