Consciousness, meaning, and being human with AI
👋 Hi, my name is Chad — a computer scientist wandering in philosophy.
This is a space for conscious technologists, curious seekers, and anyone who feels the weight of building and living in an AI-transformed world.
Made with ❤️ from New Orleans.
Mindful Machines
"Science fiction writers are scouts sent out by mankind to survey the future." — Isaac Asimov
Y'all seem to like it when I explore our current-day AI reality through the lens of sci-fi.
What story/themes do you want me to dive into next?
Have something else in mind? Drop it in the comments.
3 days ago | [YT] | 7
Mindful Machines
How do you think about consciousness? It is…
1 month ago | [YT] | 3
Mindful Machines
How do you feel about the current state of AI development?
2 months ago | [YT] | 12
Mindful Machines
Working on a few ideas. Which video do you want to see next?
4 months ago | [YT] | 7
Mindful Machines
I've been having fun getting Claude to role play for my next video.
We've reached peak meta: AI playing AI to show why AI isn't what you think it is.
6 months ago | [YT] | 6
Mindful Machines
Do you believe in God?
How does a conscious machine fit into your worldview?
6 months ago | [YT] | 8
Mindful Machines
How do you feel about AI-generated content?
AI can now write text, create images, videos, and music, and even code. How do you personally relate to this?
7 months ago | [YT] | 1
Mindful Machines
We don't know how to align a safe, super-intelligent AI system.
That's the dirty truth that the major AI labs don't like talking about.
But I just started reading this new paper:
arxiv.org/abs/2504.15125
And it's changed how I think about this problem.
We've been approaching AI alignment like helicopter parents.
We try to create rigid frameworks of rules and safeguards, but this control-based approach creates a fundamental paradox: the more powerful AI becomes, the more our rigid control mechanisms fail.
It's like trying to contain water in our hands: the tighter we squeeze, the more slips through our fingers.
Instead, these researchers propose four axiomatic principles to instill a resilient "Wise World Model" into AI systems:
- Mindfulness: enables self-monitoring and recalibration of emergent subgoals
- Emptiness: forestalls dogmatic goal fixation
- Non-duality: dissolves adversarial self-other boundaries
- Boundless care: motivates the universal reduction of suffering
The paper dives much deeper into all of this. I find it incredibly fascinating!
Let me know in the comments if you want me to make a video on all this.
7 months ago | [YT] | 28
Mindful Machines
An information hazard is an idea considered too dangerous to spread widely.
But the most dangerous information hazards might not be about developing strong AI or building nuclear weapons.
Sometimes they are simple truths that would unravel systems of control built up over centuries.
Perhaps the most dangerous of these ideas is in realizing that our economic system isn't designed to solve scarcity — it's designed to maintain it.
This is the core idea I explore in my next video: what happens when AI fully eliminates scarcity?
Because Silicon Valley's billionaires want to use AI to build a utopia, but they're not planning on inviting you.
Capitalism as we know it is dying a slow and painful death, but what's coming to replace it may be much worse: technofeudalism.
A world where feudal lords and serfs re-emerge, but this time within a technologically advanced landscape.
Is the pace of AI development propelling us toward a cyberpunk, technological dystopia?
Or will AI usher in a golden age — a utopia? Is the idea of utopia even possible?
Stay tuned!
9 months ago | [YT] | 23
Mindful Machines
What if AI is already partially conscious? 🤔
If it turns out sentience is a spectrum and the thinking machines of today already have a degree of consciousness, I would…
11 months ago | [YT] | 9