Science, Technology & the Future

Breakthrough insights in science, technology & the future; philosophy & moral progress; artificial intelligence/robotics; biotechnology.


Science, Technology & the Future

"AI as a moral hypothesis generator" - with David Enoch.

AI may be useful for generating moral hypotheses: using pattern recognition to find deep ethical patterns (meta-patterns) that are invisible to humans or hard to see, proposing new moral ideas or testing existing ones within ethical frameworks like utilitarianism or deontology, and acting as a powerful research assistant that challenges assumptions and explores complex moral dilemmas. Humans remain the ultimate moral judges for the time being, but we should consider the possibility that AI could at some stage become more morally capable than us across the board.
Creating a highly capable AMA (artificial moral agent) may require designing AI with explicitly robust reasoning structures suited to ethical reasoning, training on large datasets of moral scenarios, and applying the right kinds of algorithms to sort through, aggregate and/or prioritise different ethical theories in order to suggest novel or refined moral insights.
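The idea of algorithms that aggregate and prioritise different ethical theories can be made concrete with a toy sketch. Everything below (the `FrameworkVerdict` class, the scores, and the weights) is hypothetical and illustrative only, not a method from the post:

```python
# Toy sketch: combining verdicts from different ethical frameworks
# into a single prioritised recommendation via a weighted average.

from dataclasses import dataclass

@dataclass
class FrameworkVerdict:
    framework: str   # e.g. "utilitarianism", "deontology"
    approval: float  # score in [-1, 1] for a proposed action
    weight: float    # how much we trust this framework in this context

def aggregate(verdicts):
    """Weighted average of framework scores; a crude stand-in for the
    kind of aggregation algorithm the post alludes to."""
    total_weight = sum(v.weight for v in verdicts)
    if total_weight == 0:
        raise ValueError("at least one verdict must carry weight")
    return sum(v.approval * v.weight for v in verdicts) / total_weight

verdicts = [
    FrameworkVerdict("utilitarianism", approval=0.8, weight=0.5),
    FrameworkVerdict("deontology", approval=-0.2, weight=0.3),
    FrameworkVerdict("virtue ethics", approval=0.4, weight=0.2),
]
score = aggregate(verdicts)
print(f"aggregate approval: {score:.2f}")
```

A real AMA would of course need far richer machinery than a weighted average, but the sketch shows the basic shape: per-framework judgments in, a prioritised composite out.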

1 week ago | [YT] | 0

Science, Technology & the Future

Most viable AI safety strategy?
Which approach do you think has the best chance in practice of preventing existential risk from advanced AI?

1 month ago | [YT] | 0

Science, Technology & the Future

If AI can become more moral than us, should we let it?

1 month ago | [YT] | 2

Science, Technology & the Future

It will take wisdom and understanding to discern when AI is genuinely more moral than us, and humility to accept it if this is the case.
The study by Eyal Aharoni et al., 'Attributions toward artificial agents in a modified Moral Turing Test', is interesting; for me it raises important questions:

- Can AI be more moral than us?
- Is current AI engaging in genuine moral reasoning, or is it merely faking morality?
- What is required for AI to be truly more moral than us?
- And if AI can be more moral than us, should we let it?

1 month ago | [YT] | 3

Science, Technology & the Future

Halloween special: a scary futures tier list that is spooky in theme and sobering in content. This tier list isn't scientific and it isn't the final say; it's a bit of a gimmick, and a fun intuition pump.

Anders Sandberg is a neuroscientist and futurist well known for sizing up the biggest canvases we've got. Formerly a senior research fellow at Oxford's Future of Humanity Institute, he's worked on AI, cognitive enhancement, existential risk, and those deliciously unsettling Fermi-paradox puzzles. His forthcoming books include "Law, Liberty and Leviathan: human autonomy in the era of existential risk and Artificial Intelligence" and a big one, "Grand Futures", a tour of what's physically possible for advanced civilisations. He authored classic papers like "Daily Life Among the Jupiter Brains" (1999), and co-authored "Eternity in Six Hours" on intergalactic expansion and "Dissolving the Fermi Paradox".

#halloween #xrisk #ai #superintelligence

Many thanks for tuning in!

1 month ago | [YT] | 1

Science, Technology & the Future

Stefan Lorenz Sorgner on Nietzsche, the Overhuman, and Transhumanism.

Dr. Stefan Lorenz Sorgner is director and co-founder of the Beyond Humanism Network, Fellow at the Institute for Ethics and Emerging Technologies (IEET) and teaches philosophy at the University of Erfurt.

2 months ago | [YT] | 2

Science, Technology & the Future

Ben Goertzel on the Singularity!

An oldie but a goodie!

Recorded in Hong Kong, late 2011.

#AGI #Singularity #AI

2 months ago | [YT] | 2

Science, Technology & the Future

Ben Goertzel on whether AIs can reason adequately.
Can current AI really reason, or are large language models just clever parrots, skipping the "understanding" step humans rely on?

2 months ago | [YT] | 2

Science, Technology & the Future

Interview with Colin Allen - Distinguished Professor of Philosophy at UC Santa Barbara and co-author of the influential 'Moral Machines: Teaching Robots Right from Wrong'. Colin is a leading voice at the intersection of AI ethics, cognitive science, and moral philosophy, with decades of work exploring how morality might be implemented in artificial agents.

We cover the current state of AI, its capabilities and limitations, and how philosophical frameworks like moral realism, particularism, and virtue ethics apply to the design of AI systems. Colin offers nuanced insights into top-down and bottom-up approaches to machine ethics, the challenges of AI value alignment, and whether AI could one day surpass humans in moral reasoning.

Along the way, we discuss oversight, political leanings in LLMs, the knowledge argument and AI sentience, and whether AI will actually care about ethics.


See the blogpost: www.scifuture.org/are-machines-capable-of-morality…

4 months ago | [YT] | 3

Science, Technology & the Future

Anders discusses his optimism about AI in contrast to Eliezer Yudkowsky's pessimism.

Eliezer sees AI safety as achievable through mathematical precision, where a good AI sort of folds out of the right equations; get one bit wrong and it's doom. Anders sees AI safety through something like Swiss cheese security:
en.wikipedia.org/wiki/Swiss_cheese_model
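The Swiss cheese intuition can be put in numbers with a toy calculation. Assuming (hypothetically) that safeguard layers fail independently, the chance that every layer fails at once shrinks multiplicatively; the per-layer failure probabilities below are made up for illustration:

```python
# Toy Swiss cheese model: several imperfect safeguards stacked together.
# Assumes layers fail independently, which real systems rarely guarantee.

def combined_failure(layer_failure_probs):
    """Probability that every layer fails simultaneously
    (a 'hole' lines up through every slice of cheese)."""
    p = 1.0
    for q in layer_failure_probs:
        p *= q
    return p

layers = [0.3, 0.2, 0.1]  # hypothetical per-layer failure probabilities
print(combined_failure(layers))  # product of 0.3 * 0.2 * 0.1
```

The point of the model is that no single layer needs to be perfect: three leaky safeguards together can outperform one near-perfect one, which is the contrast with the "get one bit wrong and it's doom" picture.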

#AGI #Optimism #pdoom

10 months ago | [YT] | 2