Demis Hassabis — Google DeepMind: The Podcast
Problems (2)
Jagged Intelligence Consistency Problem
AI systems achieve PhD-level performance in complex domains (IMO gold medals) while failing at tasks a bright high-schooler handles (playing decent chess, counting letters in a word). This inconsistency blocks AGI deployment in scenarios requiring reliable general reasoning.
Give me an AI system I can trust to reason consistently across all domains — not brilliant in narrow areas and incompetent everywhere else.
They can win gold medals at the International Maths Olympiad but can't really play decent games of chess yet, which is surprising. So there's something missing still from these systems in terms of their consistency.
Physics Verification in World Models
AI-generated simulated worlds look realistic but contain physics errors invisible to casual observation. This blocks reliable training of agents that need to transfer skills to real-world robotics and physical tasks.
Give me simulated physics that are accurate enough to train real-world agents — not just visually plausible but mechanically correct.
They look realistic when you just casually look at them, but they're not accurate enough yet to rely on for, say, robotics.
Solutions (2)
50-50 Scaling-Innovation Resource Allocation
Instead of betting purely on scaling or purely on new research, Hassabis splits DeepMind's effort roughly 50-50 between scaling existing approaches and pursuing fundamental innovations. The combination maintains competitive performance while building breakthrough capabilities.
Mechanism: The mechanism exploits a structural asymmetry: scaling has predictable returns but capped upside, while innovation has unpredictable returns but unlimited upside. By running both in parallel rather th...
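The asymmetry can be made concrete with a toy Monte Carlo sketch. All payoff numbers here are invented for illustration, not from the podcast: scaling is modeled as a reliable return capped at a modest ceiling, innovation as a rare large breakthrough, and a portfolio as a weighted mix of the two.

```python
import random

def scaling_payoff(rng):
    # Predictable returns, capped upside: near 1.0, never above 1.3.
    # (Illustrative numbers, not from the source.)
    return min(rng.gauss(1.0, 0.1), 1.3)

def innovation_payoff(rng):
    # Unpredictable returns, unbounded upside: usually 0,
    # but a 5% chance of a 20x breakthrough. (Also illustrative.)
    return 20.0 if rng.random() < 0.05 else 0.0

def simulate(split, trials=10_000, seed=0):
    # `split` = fraction of effort on scaling; the rest goes to innovation.
    rng = random.Random(seed)
    return [split * scaling_payoff(rng) + (1 - split) * innovation_payoff(rng)
            for _ in range(trials)]

all_scaling = simulate(1.0)   # competitive floor, but upside is capped
all_innov   = simulate(0.0)   # breakthrough exposure, but no floor at all
fifty_fifty = simulate(0.5)   # keeps a nonzero floor AND breakthrough upside
```

Under these toy assumptions, the pure-scaling portfolio never exceeds its cap, the pure-innovation portfolio frequently returns nothing, and only the 50-50 mix keeps a guaranteed performance floor while retaining exposure to outlier breakthroughs — the structural point the mechanism above is making.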
AGI as Mind Simulation for Consciousness Research
Rather than trying to understand consciousness directly, Hassabis proposes building AGI first, then using it as a controlled simulation of mind to compare against human consciousness and identify the differences. This makes consciousness research empirically tractable.
Mechanism: The mechanism inverts the research approach: instead of trying to understand consciousness from the inside (introspection) or outside (neuroscience), you build a working model of intelligence and then...