JRE #1736 – Tristan Harris & Daniel Schmachtenberger

18-Nov-21

Joe Rogan Podcast #1736 – Tristan Harris & Daniel Schmachtenberger Topics

This podcast delves into the intersection of technology, society, and existential risk. Tristan Harris, a former Google design ethicist and co-founder of the Center for Humane Technology, and Daniel Schmachtenberger, a founding member of The Consilience Project, engage in a wide-ranging conversation about the profound impact of social media on our lives. They explore how the race for attention on these platforms leads to polarization, misinformation, and societal fragility, ultimately jeopardizing democracy itself.

Provocative topics include the manipulation of human psychology through persuasive technology, the influence of algorithms on our beliefs, and the emergence of powerful AI that could reshape civilization in unpredictable ways. Underlying themes explore the dangers of unchecked technological advancement, the importance of collective sensemaking, and the need for a more humane approach to technology.

Key Topics:

  1. The Ethicist’s Dilemma: Harris describes the challenge of ethically influencing billions of users in the social media landscape. He highlights the asymmetric relationship between users and tech companies, with companies wielding immense knowledge about human psychology while users remain largely unaware of the manipulations at play.
    • Persuasive technology: Harris explains how persuasive technology is used to capture attention and manipulate behavior, drawing parallels to magic and the magician’s advantage over the audience.
    • The “race to the bottom of the brainstem”: Harris discusses the competitive nature of social media platforms, where they constantly strive to engage users by triggering primal desires and emotions.
    • The ethical framework: Harris emphasizes the lack of an ethical framework for social media platforms, raising concerns about the consequences of influencing billions of people without proper guidelines.
  2. Algorithms and Polarization: The podcast explores how social media algorithms amplify controversial content, contributing to the polarization of societies. Harris and Schmachtenberger discuss how these algorithms create personalized “filter bubbles,” where users are bombarded with information that confirms their existing beliefs, hindering their ability to engage in productive dialogue with opposing viewpoints.
    • The “bogeyman” effect: Harris explains how algorithms create an endless stream of content that triggers emotional responses, reinforcing existing biases and fueling polarization.
    • Confirmation bias: Harris and Schmachtenberger acknowledge that confirmation bias is a natural human tendency but argue that social media exacerbates it by feeding users content that reinforces their pre-existing beliefs.
    • The breakdown of democracy: They argue that polarization, fueled by social media algorithms, undermines the capacity of democracies to function effectively, leading to gridlock and inaction on crucial issues.
  3. The Rise of Deepfakes and AI: The podcast delves into the growing capabilities of AI, particularly the ability to create realistic deepfakes that can manipulate information and sow distrust. This discussion highlights the potential for AI to exacerbate existing problems of misinformation and polarization.
    • GPT-3 and the Turing test: The podcast discusses GPT-3, a powerful AI model that can generate text indistinguishable from human writing, raising concerns about the future of truth and authenticity in a world where AI can create convincing deepfakes.
    • The impact on science and trust: Harris and Schmachtenberger explore how AI can be used to create fake research papers and manipulated data, eroding trust in scientific institutions and furthering the spread of misinformation.
    • Decentralized Godlike powers: The podcast underscores the potential for AI to become a decentralized tool of manipulation and destruction, accessible to individuals and groups beyond the control of governments or corporations.
  4. China’s Approach to Tech Regulation: The podcast examines China’s stringent approach to regulating its Internet, where the government exerts significant control over social media platforms and seeks to shape public discourse for specific goals. This discussion raises questions about the balance between freedom and control, and the potential implications of such a centralized approach.
    • The “social credit score” system: Harris expresses concern about the Chinese social credit score system, which uses technology to monitor and control citizen behavior, and argues that similar systems could emerge in the West.
    • Limiting access and influence: The podcast describes how China limits the amount of time children can spend on social media, uses algorithms to promote educational content, and restricts access to certain platforms for its military personnel.
    • The potential for a “Chinese model”: Harris and Schmachtenberger acknowledge that China’s approach to technology is alarming, but also recognize its effectiveness in achieving specific objectives. This raises a question about whether Western societies will need to adopt similar measures to counter the potential threats posed by powerful technologies.
  5. The Urgent Need for a “Third Attractor”: Harris and Schmachtenberger argue that the world is facing a choice between dystopian centralized control and catastrophic decentralized chaos. They advocate for a “third attractor,” a new model for society that leverages technology in a way that promotes human flourishing and safeguards democratic values.
    • The “bowling alley” metaphor: They use the analogy of a bowling alley to illustrate the two potential paths: centralized control (one gutter) and decentralized chaos (the other gutter). Their goal is to find a way to bowl a strike down the middle, creating a path toward a more balanced and sustainable future.
    • Taiwan as an example: They point to Taiwan’s “Polis” system as a model for a more participatory and tech-enabled democracy, where citizens can engage in online deliberation and decision-making.
    • Humanity’s test: They argue that we are facing a critical test, where we must learn to wield the power of advanced technologies with wisdom, love, and prudence.
  6. The Importance of Individual Responsibility: While acknowledging the systemic challenges posed by social media and AI, Harris and Schmachtenberger emphasize the importance of individual responsibility in navigating these turbulent times. They encourage listeners to become more discerning consumers of information, to cultivate a broader perspective, and to seek out genuine human connection.
    • The “mind warp”: They discuss the ways in which social media can warp our perceptions of reality, making it difficult to distinguish truth from fiction.
    • The need for critical thinking: They advocate for critical thinking, fact-checking, and a healthy skepticism toward information encountered online.
    • The power of personal choices: They highlight the power of individual choices, such as limiting social media use, engaging in offline communities, and seeking out diverse perspectives.
  7. The Role of Companies and Governments: The podcast explores the responsibility of both corporations and governments in shaping the future of technology. They discuss the limitations of current regulatory frameworks and argue for a more proactive approach, where these institutions work together to create a more humane and equitable tech landscape.
    • Facebook’s role: They criticize Facebook’s business model, arguing that it is fundamentally incompatible with democratic values and prioritizes profit over societal well-being.
    • The need for regulation: They advocate for regulation of social media platforms, not to censor speech, but to address the harmful effects of their algorithms and design choices.
    • The importance of transparency: They argue that both corporations and governments need to be more transparent in their operations, allowing for greater public scrutiny and accountability.
  8. The Metaverse and the Future of Reality: The podcast examines the emerging metaverse and its potential impact on human relationships and the fabric of society. They express concerns that the metaverse could lead to further detachment from reality, addiction, and the erosion of real-world connection.
    • The “attention casino”: They discuss how the metaverse could become a new frontier for the “attention casino,” where companies compete for user engagement and time.
    • The importance of the “offline world”: They emphasize the importance of nurturing real-world relationships and communities, arguing that a humane future requires technologies that complement and enhance our offline lives.
    • The need for ethical design: They advocate for a more ethical approach to the design of the metaverse, where technologies are created with the well-being of individuals and communities in mind.
  9. The Psychedelic Renaissance: The podcast concludes with a discussion about the potential role of psychedelics in fostering a more humane future. They acknowledge the limitations of relying solely on individual choices and argue that a broader cultural shift is needed.
    • The power of introspection: They explore how psychedelics can facilitate deep introspection, allowing individuals to see their own patterns of behavior and make radical changes.
    • The need for community: They emphasize the importance of community support and guidance in navigating psychedelic experiences, avoiding potential harms and fostering meaningful transformation.
    • A shift in consciousness: They suggest that a psychedelic renaissance could contribute to a broader shift in consciousness, leading to a greater emphasis on compassion, empathy, and collective well-being.

Memorable Quotes:

  1. “When you were studying at Stanford, what year was this? It was 2002 to 2006. I was an undergrad, and then in 2006 I got involved with Professor B. J. Fogg, who actually studied ways that persuasive technology could be used for positive purposes: How do you help people be healthier? How do you help people floss? How do you help people work out more often, things like that. It could be used in a positive way. But I got concerned, because there was this increasing arms race to use persuasive tools to harvest and capture people’s attention, now known as the race to the bottom of the brainstem: going down the brainstem to more social validation, more social narcissism, all of that.” – Tristan Harris
  2. “The fundamental problem of humanity is we have paleolithic emotions, brains that are easily hackable, like for magicians; we have medieval institutions, government that’s not really good at seeing the latest tech, whether it was railroads or now social media or AI or deepfakes or whatever’s coming next; and then we have Godlike technology.” – Tristan Harris
  3. “If I’m Tucker Carlson or Rachel Maddow or anybody who’s a political personality, are they really saying things just for their TV audience? Are they also appealing to the algorithm? Because more and more of their attention is gonna happen downstream, in these little clips that get filtered around. So they also need to appeal to how the algorithm is rewarding saying negative things about the other party.” – Tristan Harris
  4. “We are the product. And how do you know, since there isn’t rigorous identity, if a user is really who they say they are, or if they’re a troll farm, or if pretty soon they’re an AI GPT-3 algorithm? ... And you should explain what you can do. GPT-3? ... Yes, the ability to generate text-based deepfakes, so people know what a deepfake is. There’s a whole Reddit thread with people arguing with each other that are all fake. Do you know about that? ... No, I don’t, actually. ... Here, I’m just gonna... Jamie... Duncan just sent this to me the other day, and I was like, what in the fuck? I could only look at it for a couple moments before I started freaking out. But the idea... it’s not far off. This ability that deepfake AI has to recreate, especially in text... Yes, exactly, that’s specifically what GPT-3 is: a text model that trains on trillions of parameters and basically the entire corpus of the Internet.” – Tristan Harris (in exchange with Joe Rogan)
  5. “The question is, what would the wisdom to steward the power of exponential tech be? What would the minimum required level be? And that’s the experiment right now. That’s the opportunity for us. We’re in the opportunity. ... But you’re talking about a radical shift in human nature. ... Well, it’s a possibility in human conditioning. Why don’t you give some examples?” – Tristan Harris (in exchange with Joe Rogan)