UPDATE 7: Final game — another one for AlphaGo. 4-1 for AI.
UPDATE 5: Lee Sedol WON game #4!
UPDATE 3: Done. AlphaGo won 3 out of 5 games. Now Google can feed it all our data. Brilliant:
UPDATE 2: AlphaGo won again. AI-Human 2-0. Game here:
UPDATE 1: AlphaGo won! Skynet is around the corner! Discussion during the Q&A today!
I’m not going to stay up for it, but this is important. This is not like last time around, when the AI played “just” a Go champion. This guy is miles above that, the world’s best — or so I’ve been told; I can only play Go, not win at it.
The point is, if Google DeepMind’s AlphaGo wins, this is a milestone for AI based on advanced, many-layered forms of what is in essence “simple” pattern recognition (rather than on a set of complex rules and symbolic representations combined with fast decision-tree search, as in chess). As with symbolic AI, whether it works like the human mind remains to be seen; at best, it works like part of the human mind. Perhaps that part will turn out not to be enough. But perhaps the part of the mind that humans have developed on top of it hampers, rather than helps, certain types of performance, and certain types of pattern-recognition-based learning.
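To make the contrast concrete, here is a toy sketch (my own illustration, not AlphaGo’s actual architecture; the game and function names are invented). A chess-style engine exhaustively searches the game tree with minimax, while a pattern-based player maps a position directly to a preferred move. The “policy” below is a hand-coded stand-in for what, in AlphaGo, is a deep network trained on millions of positions.

```python
# Toy game: take 1-3 stones from a pile; whoever takes the last stone wins.

def minimax(pile, maximizing=True):
    """Chess-style exhaustive tree search: returns (best_score, best_move).
    Score is +1 if the maximizing player wins with optimal play, -1 otherwise."""
    if pile == 0:
        # The previous player took the last stone and won.
        return (-1 if maximizing else 1), None
    best_score, best_move = (-2, None) if maximizing else (2, None)
    for take in (1, 2, 3):
        if take > pile:
            break
        score, _ = minimax(pile - take, not maximizing)
        if maximizing and score > best_score:
            best_score, best_move = score, take
        elif not maximizing and score < best_score:
            best_score, best_move = score, take
    return best_score, best_move

def policy(pile):
    """Pattern-recognition stand-in: a direct mapping from position to move,
    hand-coded here. In AlphaGo this mapping is a learned network, not a rule."""
    return pile % 4 or 1  # the 'pattern': leave the opponent a multiple of 4
```

In every winning position the memorized pattern picks the same move as the exhaustive search, at a tiny fraction of the cost; in losing positions (a multiple of four stones) no move helps, and both fall back to an arbitrary choice.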
Humans have evolved for learning. We are the species whose offspring are born as unfinished as possible, so as to allow maximal adaptation to, and learning from, the environment. We didn’t evolve longer claws, wings, or webbed feet: we evolved a capacity for learning, enabling us to far surpass what instinct and innate behaviour gave us. The younger we are, the better we learn associatively, implicitly, based on pattern recognition, without top-down control. Later we learn to re-represent information (if there are representations at all), manipulate symbols, and make sense of concepts. But perhaps this stands in the way of some types of learning. Like, the learning that enables kids to pick up language and context and primitive concepts. Now let’s imagine that AI manages to outpace us in that.
Also, think about how the development of concepts, the rise of language, and social interaction are intertwined, and how human (adult) consciousness is made up mostly of concepts. Could AlphaGo develop concepts? Would it need to in order to develop consciousness? And, given that it “lives” in a different world, would we recognise it as conscious? How does intentionality, or observing agency in something, influence how “conscious” we believe it to be? People talk of AlphaGo “making some beautiful, creative moves” — is that intentionality? Is it already conscious in some way?