Revolutionizing AI: Tackling the Alignment Problem
In this episode, we delve into the frontier of AI and the challenges surrounding AI alignment. The AI / Crypto overlap at Zuzalu sparked discussions on topics like ZKML, MEV bots, and the integration of AI agents into the Ethereum landscape.
However, the focal point was the alignment conversation, where perspectives ranged from pessimism to resigned optimism. We hear from Nate Soares of MIRI, who offers a sobering view of AI risk, and Deger Turan, who argues that aligning humans with one another is a prerequisite for aligning AI. Their discussions touch on epistemology, individual preferences, and the potential of AI to assist in personal and societal growth.
Timestamps
0:00 Intro
1:50 Guests
5:30 NATE SOARES
7:25 MIRI
13:30 Human Coordination
17:00 Dangers of Superintelligence
21:00 AI’s Big Moment
24:45 Chances of Doom
35:35 A Serious Threat
42:45 Talent is Scarce
48:20 Solving the Alignment Problem
59:35 Dealing with Pessimism
1:03:45 The Sliver of Utopia
1:14:00 DEGER TURAN
1:17:00 Solving Human Alignment
1:22:40 Using AI to Solve Problems
1:26:30 AI Objectives Institute
1:31:30 Epistemic Security
1:36:18 Curating AI Content
1:41:00 Scalable Coordination
1:47:15 Building Evolving Systems
1:54:00 Independent Flexible Systems
1:58:30 The Problem is the Solution
2:03:30 A Better Future
Resources
Nate Soares
https://twitter.com/So8res?s=20
Deger Turan
https://twitter.com/degerturann?s=20
MIRI
Less Wrong AI Alignment
https://www.lesswrong.com/tag/ai-alignment-intro-materials
AI Objectives Institute