Chris Frith, éminence grise of consciousness, social cognition, and neuroimaging research, talks about “What is the point of consciousness” in the series Mind Bites. (audio)
New paper by Micah Allen (who is, together with Hakwan Lau and Steve Fleming, currently doing the most interesting work on metacognition):
The ability to introspectively evaluate our experiences to form accurate metacognitive beliefs, or insight, is an essential component of decision-making. Previous research suggests individuals vary substantially in their level of insight, and that this variation is related to brain volume and function, particularly in the anterior prefrontal cortex (aPFC). However, the neurobiological mechanisms underlying these effects are unclear, as qualitative, macroscopic measures such as brain volume can be related to a variety of microstructural features. Here we leverage a high-resolution (800 µm isotropic) multi-parameter mapping technique in 48 healthy individuals to delineate quantitative markers of in vivo histological features underlying metacognitive ability. Specifically, we examined how neuroimaging markers of local grey matter myelination and iron content relate to insight as measured by a signal-theoretic model of subjective confidence. Our results revealed a pattern of microstructural correlates of perceptual metacognition in the aPFC, precuneus, hippocampus, and visual cortices. In particular, we extend previous volumetric findings to show that right aPFC myeloarchitecture positively relates to metacognitive ability. In contrast, decreased myelination in the left hippocampus correlated with better metacognitive insight. These results highlight the ability of quantitative neuroimaging to reveal novel brain-behaviour correlates and may motivate future research on their environmental and developmental underpinnings.
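The "signal-theoretic model of subjective confidence" mentioned in the abstract refers to type-2 signal detection analysis. As a generic, minimal sketch (not the authors' actual pipeline; the function name and simulated data here are my own illustration), metacognitive sensitivity can be estimated as the area under the type-2 ROC, i.e. how well trial-by-trial confidence ratings discriminate correct responses from errors:

```python
import numpy as np

def type2_auc(correct, confidence):
    """Area under the type-2 ROC: the probability that a randomly
    chosen correct trial carries higher confidence than a randomly
    chosen error (ties count half). 0.5 = no insight, 1.0 = perfect."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    c = confidence[correct]    # confidence on correct trials
    e = confidence[~correct]   # confidence on error trials
    greater = (c[:, None] > e[None, :]).mean()
    ties = (c[:, None] == e[None, :]).mean()
    return greater + 0.5 * ties

rng = np.random.default_rng(1)
n = 1000
correct = rng.random(n) < 0.75  # 75% task accuracy

# Observer with insight: higher confidence when correct.
conf_insight = np.where(correct, rng.integers(2, 5, n), rng.integers(1, 4, n))
# Observer without insight: confidence unrelated to accuracy.
conf_blind = rng.integers(1, 5, n)

print(type2_auc(correct, conf_insight))  # well above 0.5
print(type2_auc(correct, conf_blind))    # close to 0.5
```

Individual differences in this kind of score are what the study correlates with myelin and iron markers in the aPFC and elsewhere.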
Recall the experiment with the letter grids we did in class (Sperling), and how I criticised Block’s idea of a rich phenomenology that was “overflowing” access, instead suggesting a more constructivist idea whereby the brain, based on piecemeal information, continuously predicts and creates a conscious experience of the world?
Well, it’s all a pendulum in science, and in this recent short opinion piece “Are we underestimating the richness of visual experience?”, Andrew Haun, Giulio Tononi, Christof Koch, and Naotsugu Tsuchiya suggest that perhaps consciousness *is* in fact much richer than we measure:
It has been argued that the bandwidth of perceptual experience is low—that the richness of experience is illusory and that the amount of visual information observers can perceive and remember is extremely limited. However, the evidence suggests that this postulated poverty of experiential content is illusory and that visual phenomenology is immensely rich. To properly estimate perceptual content, experimentalists must move beyond the limitations of binary alternative-forced choice procedures and analyze reports of experience more broadly. This will open our eyes to the true richness of experience and to its neuronal substrates.
Anil Seth (Sackler Centre for Consciousness Science) being extremely eloquent on why understanding core aspects of consciousness is perhaps more important than finding out “what it is” or “why it exists at all” — and as such this piece in Aeon perfectly fits with how the field has matured over the past decade.
Great stuff by Louise Goupil from Sid Kouider’s lab: “Behavioural and Neural Indices of Metacognitive Sensitivity in Preverbal Infants”
Brilliant new review in Nature Reviews Neuroscience: http://www.nature.com/nrn/journal/v17/n5/abs/nrn.2016.22.html
Post will be expanded; Michael Graziano, author of “Consciousness and the Social Brain”, writes this interesting short piece on how popular theories of consciousness can be emotionally satisfying, but wrong. As he puts it: “The machine has no way of realizing that this self-description is, well, not totally wrong, but distorted. What it has is a deep processing of information. What it concludes it has is something else—conscious experience.”
Another nice paper by Brian Maniscalco & Hakwan Lau, looking at how different models best capture performance and subjective perceptual awareness: “The signal processing architecture underlying subjective reports of sensory awareness” — as Lau says, “the nitty-gritty can drive one insane. But conceptually this sets up the debate against the Dehaene group’s dual route model, which I think should be the major target against which we all compare our theory, if we are serious about this business.”
UPDATE 7: Final game — another one for AlphaGo. 4-1 for AI.
UPDATE 5: Lee Sedol WON game #4!
UPDATE 3: Done. AlphaGo won 3 out of 5 games. Now Google can feed it all our data. Brilliant:
UPDATE 2: AlphaGo won again. AI-Human 2-0. Game here:
UPDATE 1: AlphaGo won! Skynet is around the corner! Discussion during the Q&A today!
I’m not going to stay up for it, but this is important. This is not like last time around, when the AI played “just” a Go champion. This guy is miles above that, the world’s best — or so I’ve been told; I can only play Go, not win at it.
The point is, if Google DeepMind’s AlphaGo wins, this is a milestone for AI based on advanced forms (many layers) of what is in essence “simple” pattern recognition (and not a set of complex rules and symbolic representations combined with fast decision tree search like chess). As with symbolic AI, whether it works like the human mind remains to be seen. It works like part of the human mind. Perhaps this will turn out not to be enough. But perhaps the part of the mind that humans have developed on top of it hampers, rather than helps certain types of performance — and certain types of pattern-recognition based learning.
Humans have evolved for learning. We are the species whose offspring are born as unfinished as possible, so as to allow maximal adaptation to, and learning from, the environment. We didn’t evolve longer claws, wings, or webbed feet: we evolved a capacity for learning, enabling us to far surpass what instinct and innate behaviour gave us. The younger we are, the better we learn associatively, implicitly, based on pattern recognition, without top-down control. Later we learn to re-represent information (if there are representations at all), manipulate symbols, and make sense of concepts. But perhaps this stands in the way of some types of learning — like the learning that enables kids to pick up language, context, and primitive concepts. Now imagine that AI manages to outpace us at that.
Also, think about how the development of concepts, the rise of language, and social interaction are intertwined, and how human (adult) consciousness is made up mostly of concepts. Could AlphaGo develop concepts? Would it need to in order to develop consciousness? And, given that it “lives” in a different world, would we recognise it as conscious? How does intentionality, or observing agency in something, influence how “conscious” we believe them to be? People talk of AlphaGo “making some beautiful, creative moves” — is that intentionality? Is it already conscious in some way?
In this amazeballs study just out in Current Biology, Emilie Caspar, Julia Christensen, Axel Cleeremans & Patrick Haggard (a collaboration between ULB/Brussels & UCL/London) show that when coerced into harming others, people effectively experience less agency than when they do it voluntarily (using the well-documented fact that a stronger sense of agency over an action relates to a subjective perception of a shorter interval between the action and its outcome). Awesome stuff — Befehl ist Befehl, and rightly so, because we don’t just feel less responsible, we experience our actions as being less causal!
It’s very hot at the moment, leading to nice digest versions in Nature, Scientific American (MIND), and on the IFLS website. (Also of interest to the consciousness course is a very recent more general overview of consciousness, free will and intentionality by Caspar & Cleeremans.)
The basis for the Current Biology experiment is the infamous Milgram obedience experiment, conducted in the early 1960s (WW2 had ended only 15 years earlier) to test whether people would simply follow orders and hurt fellow human beings. Participants were asked to train another subject in a memory task. Every time the other (a confederate) got it wrong, they were to administer an electric shock. To his horror, Milgram saw that most people, at the simple request of the experimenter, would continue up to the lethal shock level. This was the first illustration that almost anyone could be brought to do atrocious things to others.
Caspar et al. wanted to know to what degree people saw their actions as not their own, experienced less self-agency. Their study relies on the intentional binding effect (on which Patrick Haggard has worked a lot), which is the well-established finding that a stronger sense of agency about an action relates to a subjective perception of a shorter time between your action and its outcome, and the reverse for a weaker sense of agency. In other words, the impression (some would say illusion) of free will, of voluntarily doing something relates to how close we perceive our actions to be to an effect in the world.
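To make the binding measure concrete, here is a toy sketch with made-up numbers (hypothetical interval estimates, not Caspar et al.’s data): each trial yields a perceived action–outcome interval in milliseconds, and coercion shows up as longer perceived intervals, i.e. weaker intentional binding:

```python
import statistics

# Hypothetical perceived intervals (ms) between keypress and tone;
# shorter perceived intervals index a stronger sense of agency.
free_choice = [180, 200, 190, 175, 185, 195]   # chose to act
coerced     = [240, 250, 230, 245, 235, 255]   # ordered to act

binding_free = statistics.mean(free_choice)
binding_coerced = statistics.mean(coerced)

# A positive difference means coercion lengthens the perceived
# interval, i.e. reduces intentional binding / sense of agency.
coercion_effect = binding_coerced - binding_free
print(f"Coercion lengthens perceived interval by {coercion_effect:.0f} ms")
```

The actual study of course uses proper trial-level statistics; the point here is only the direction of the measure: longer perceived interval = weaker experienced agency.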
As it turns out, when people are told to administer a shock, they effectively perceive the interval between action and outcome as longer than when they chose to administer the shock themselves. In other words, when someone orders me to do something, at a very basic level I experience my actions as less my own! So yes, when someone tells us to do an atrocious thing, we experience ourselves as less of a free agent with respect to our own actions.