More on AlphaGo. Such respect to Sedol for choosing to play the last match against what he views as the stronger incarnation of the machine. It didn’t help, of course.
Fan Hui, the three-time European champion, has improved from around 600th in the world rankings to the 300s while playing AlphaGo as part of its training. It’s been emotional, mostly sad, but he has found some beautiful moments.
Fabulous long Facebook reflections from Eliezer Yudkowsky. My take-aways (though it’s all worth reading):
- AlphaGo is superhuman with bugs, not near-human.
- Optimised strategies may look stupid, occupy strange edges of probability space, and feel alien to us. We might need multiple AlphaGos to help us explain the “meaning” of moves.
- We can’t necessarily recognise AlphaGo’s moves as powerful, because we can’t see their consequences: we are searching too small a slice of the probability space.
- “…when you’ve been placed in an adversarial relation to something smarter than you, you don’t always know that you’ve lost, or that anything is even wrong, until the end.”
- “AI is either overwhelmingly stupider or overwhelmingly smarter than you.” There’s not much space for human-level competence.
More on how DeepMind works. We should all start understanding this, as best we can. Suleyman thinks it is too early to be talking about AGI rights: many people “know how difficult it is to get these things to do anything.”