Science, Technology & the Future

Breakthrough insights in science, technology & the future; philosophy & moral progress; artificial intelligence/robotics, biotechnology.


Science, Technology & the Future

Most viable AI safety strategy?
Which approach do you think has the best chance in practice of preventing existential risk from advanced AI?

2 days ago | 0

Science, Technology & the Future

If AI can become more moral than us, should we let it?

1 week ago | 2

Science, Technology & the Future

It will take wisdom and understanding to discern when AI is genuinely being more moral than us, and humility to accept it if this is the case.
The study by Eyal Aharoni et al., 'Attributions toward artificial agents in a modified Moral Turing Test', is interesting - for me it raises important questions:

- Can AI be more moral than us?
- Is current AI engaging in genuine moral reasoning, or is it merely faking morality?
- What is required for AI to be truly more moral than us?
- And if AI can be more moral than us, should we let it?

1 week ago | 3

Science, Technology & the Future

Halloween special: a scary futures tier list that is spooky in theme & sobering in content. This tier list isn't scientific and it isn't the final say - it's a bit of a gimmick, and a fun intuition pump.

Anders Sandberg is a neuroscientist and futurist well known for sizing up the biggest canvases we’ve got. Formerly a senior research fellow at Oxford’s Future of Humanity Institute, he has worked on AI, cognitive enhancement, existential risk, and those deliciously unsettling Fermi-paradox puzzles. His forthcoming books include “Law, Liberty and Leviathan: human autonomy in the era of existential risk and Artificial Intelligence” and a big one, “Grand Futures”, a tour of what’s physically possible for advanced civilisations. He authored classic papers like "Daily Life Among the Jupiter Brains" (1999), and co-authored “Eternity in Six Hours” on intergalactic expansion and “Dissolving the Fermi Paradox.”

#halloween #xrisk #ai #superintelligence

Many thanks for tuning in!

2 weeks ago | 1

Science, Technology & the Future

Stefan Lorenz Sorgner on Nietzsche, the Overhuman, and Transhumanism.

Dr. Stefan Lorenz Sorgner is director and co-founder of the Beyond Humanism Network, a Fellow at the Institute for Ethics and Emerging Technologies (IEET), and teaches philosophy at the University of Erfurt.

1 month ago | 2

Science, Technology & the Future

Ben Goertzel on the Singularity!

An oldie but a goodie!

Recorded in Hong Kong in late 2011

#AGI #Singularity #AI

1 month ago | 2

Science, Technology & the Future

Ben Goertzel on whether AIs can reason adequately.
Can current AI really reason - or are large language models just clever parrots, skipping the "understanding" step humans rely on?

1 month ago | 2

Science, Technology & the Future

Interview with Colin Allen - Distinguished Professor of Philosophy at UC Santa Barbara and co-author of the influential 'Moral Machines: Teaching Robots Right from Wrong'. Colin is a leading voice at the intersection of AI ethics, cognitive science, and moral philosophy, with decades of work exploring how morality might be implemented in artificial agents.

We cover the current state of AI, its capabilities and limitations, and how philosophical frameworks like moral realism, particularism, and virtue ethics apply to the design of AI systems. Colin offers nuanced insights into top-down and bottom-up approaches to machine ethics, the challenges of AI value alignment, and whether AI could one day surpass humans in moral reasoning.

Along the way, we discuss oversight, political leanings in LLMs, the knowledge argument and AI sentience, and whether AI will actually care about ethics.


See the blogpost: www.scifuture.org/are-machines-capable-of-morality…

2 months ago | 3

Science, Technology & the Future

Anders discusses his optimism about AI in contrast to Eliezer Yudkowsky's pessimism.

Eliezer sees AI safety as something to be achieved through mathematical precision, where a good AI sort of folds out of the right equations - but get one bit wrong and it's doom. Anders sees AI safety through a sort of Swiss cheese security model:
en.wikipedia.org/wiki/Swiss_cheese_model

#AGI #Optimism #pdoom

9 months ago | 2

Science, Technology & the Future

Interview with @KeithWiley on his new novel 'Contemplating Oblivion' - "A spectacular vision of the far future, in which the functioning of the mind is separated from the substrate of the brain, enabling light-speed interstellar travel and wild explorations of consciousness. Wiley depicts a society consumed with curiosity about the nature of consciousness, but also vexed by existential angst over the end of the universe, when even the immortals will perish."
—RANDAL KOENE PhD * Founder & CEO of Carboncopies Foundation * past Director of the Department of Neuroengineering at Tecnalia

1 year ago | 1