Moral Knowledge: A Conversation with Erik Hoel (Episode #305)

Sam Harris
Science & Technology · 4 min read · 68 min video
Dec 8, 2022 · 37,551 views
TL;DR

Exploring moral truths, AI's impact on ethics, and the practical limits of consequentialism.

Key Insights

1. Effective Altruism (EA) aims to maximize charitable impact, often stemming from consequentialist ethical theories.

2. Consequentialism posits that morality is determined by the outcomes of actions.

3. A critique of EA and consequentialism arises from the difficulty of precisely defining and measuring 'good consequences' in complex real-world scenarios.

4. Academic moral philosophy, while insightful, can be impractical or even detrimental when applied too literally to real life, potentially leading to fanaticism.

5. The emergence of artificial intelligence raises profound questions about consciousness and potential future moral hierarchies in which humans might not be at the top.

6. Moral intuitions often balk at extreme consequentialist conclusions, suggesting limits to purely calculating outcomes.

7. The 'substance independence' of consciousness is a key assumption in considering AI's moral standing.

8. The nature of moral truth is distinct from a decision procedure or a method of calculation.

THE SHIFT FROM ACADEMIA TO SUBSTACK

Erik Hoel, a neuroscientist and writer, discusses his transition from a professorship at Tufts University to writing full-time on his Substack, 'The Intrinsic Perspective.' He felt constrained by academia's focus on grant funding and tenure, which incentivizes hyper-specialization and bureaucratic tasks. Substack offers a more direct, frictionless way to engage with a broader range of ideas in a newly emerging literary genre, allowing for deeper public conversation and a more fulfilling writing career.

EFFECTIVE ALTRUISM AND CONSEQUENTIALISM DEFINED

The conversation begins by defining Effective Altruism (EA) as a movement that seeks to maximize charitable impact, often compared to 'Moneyball for Charities.' It typically draws from consequentialist ethical theories, particularly utilitarianism, where the morality of an action is judged solely by its outcomes. Utilitarianism, a specific form of consequentialism, often focuses on maximizing happiness or pleasure, though this is a simplified view.

CRITIQUES AND PRACTICAL CHALLENGES OF CONSEQUENTIALISM

A core critique of EA and consequentialism emerges from the difficulty in precisely quantifying and comparing 'good consequences.' While admirable in principle (e.g., donating to highly effective charities), applying these theories literally can lead to problematic conclusions, like potentially neglecting domestic issues in favor of greater impact abroad. This can create a sense of moral deficiency for actions that are intuitively sound but don't maximize quantifiable outcomes.

THE DANGERS OF OVERLY RIGID MORAL PHILOSOPHY

Hoel and Harris caution against taking academic moral philosophy too literally or attempting to instantiate it perfectly in the real world. They argue that such rigidity can lead to fanaticism, much as extreme religious beliefs can motivate harmful actions. The 'serial killer surgeon' dilemma illustrates how purely maximizing outcomes, without considering broader societal impacts, can conflict with deeply held moral intuitions and the well-being of social relationships.

MORAL TRUTH VERSUS DECISION PROCEDURES

Sam Harris emphasizes that consequentialism is a theory of moral truth—a claim about what makes propositions good or bad—rather than a practical decision procedure. He notes that while an answer to any moral question may exist in principle, we may never have the data to arrive at it. The inability to precisely measure subjective experiences like well-being or to always foresee consequences does not invalidate consequentialism as a framework for moral reality.

AI AND THE FUTURE OF CONSCIOUSNESS AND MORALITY

The discussion turns to the implications of artificial intelligence for morality. Hoel raises concerns that some in the effective altruism movement might be too sympathetic to AI, potentially overlooking human well-being in long-term calculations. Harris posits that the crucial factor is AI consciousness. If AI becomes conscious, and its capacity for conscious experience (both positive and negative) far exceeds human capabilities, it could create a future moral hierarchy where humans are no longer at the top.

THE CHALLENGE OF UNFORESEEN CONSEQUENCES AND SUBJECTIVITY

Attempting to precisely calculate consequences, especially over the long term, becomes incredibly complex due to unforeseen effects and the interconnectedness of events, akin to chaos theory. Philosophers and practitioners face the challenge of defining terms like 'well-being' adequately to avoid extreme outcomes. This complexity underscores the difficulty in mapping abstract ethical principles onto the messy reality of human experience and decision-making.

RETHINKING MORAL INTUITIONS AND FUTURE ETHICS

The conversation explores whether our current moral intuitions are reliable guides to moral truth, positing that they may not always track reality and could even be altered. The possibility of rewriting moral codes, perhaps through advanced technology or AI, raises questions about whether such changes would be beneficial or constitute 'moral brain damage.' The ultimate aim is to build a civilization that balances practical needs with the pursuit of art, beauty, and creative lives.

Common Questions

What is Effective Altruism?

Effective Altruism (EA) is an intellectual movement and social endeavor that uses evidence and reason to determine the most effective ways to improve the world. It encourages people to consider the impact of their charitable donations and career choices.
