Monday, November 30, 2009

Opaque Introspection

Some time ago, Psyblog posted a series called "What Everyone Should Know About Their Own Minds". Appropriately enough, it covers the different ways in which humans typically misunderstand (or misjudge) components of their own behavior, such as motivations, reasons, and predictions about their own responses. Sometimes this involves resolving cognitive dissonance, where a subject revises some of her beliefs to bring them in line with other beliefs they had previously contradicted; other times the explanation is less clear.


The whole subject is a curious one, since, subjectively, most of us feel like we have fairly transparent access to the insides of our own heads. For example, if asked to give reasons why we find one face more attractive than another, we usually think we can do so; and we feel, moreover, that the reasons we come up with will be true to whatever processes actually go on inside our heads. Not so, as one of the Psyblog posts reports; at least not with anything like the accuracy we imagine we have.

My impression is that we are sometimes alien to our own minds, insofar as we don't understand most of our inner workings. Yet, strangely, we have this need to "tell stories", even to ourselves, about why we've done things a particular way: a need to explain and justify our inner landscape, much like our need to explain (and understand) the outer world. This isn't such a bad thing, since seeking explanations is the heart of science (and rational inquiry). But it does emphasize the necessity of being relentlessly critical and skeptical if your aim is truth -- skeptical even about your own thought processes -- lest you settle too firmly on the first story that seems plausible to you.

A downside of the above skeptical strategy, in my own experience, is that it tends to drive you a little bit crazy. Constantly doubting your own motivations, values, and judgments is not a very enjoyable way to go about your day. After a certain point, if you lose too much faith in your understanding of the contents of your own head, you may end up impeding your own progress, constantly looking for solid ground that isn't there. From a practical "getting things done" standpoint, it may be better to be wrong about a few little details here and there if you're still able to function well in the main.

We may suppose, perhaps, that this is why we did not evolve to be more naturally self-critical and self-reflective creatures. Don't get me wrong: compared to any other form of intelligence we know about, we go pretty far in that direction -- even the least reflective of individuals tries to purge logical inconsistencies from her thoughts, though tolerance for inconsistency obviously varies from person to person. But, judging from experiments like those linked above, plausible-seeming beliefs about your own motivations were evolutionarily more relevant to fitness than elaborate safeguards for internal accuracy. Apparently.

Wednesday, November 11, 2009

Markram Speaks On Simulating the Brain



Henry Markram, director of the Blue Brain project, (relatively recently) gave a talk at TED about simulating the neuronal activity of the entire brain. This will require a supercomputer, in this case supplied by IBM, since the brain has some 10^11 neurons. So far, the team has replicated a rat's neocortical column, which comprises some 10^4 neurons, and Markram anticipates being able to fully model the human brain within 10 years.
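
To get a feel for the gap, here is a rough back-of-the-envelope comparison (my own arithmetic, using only the two round figures above):

    # Rough scale comparison between the simulated column and a whole brain.
    # The two counts are the round numbers cited above; the rest is simple arithmetic.
    neurons_in_column = 10**4    # rat neocortical column, already simulated
    neurons_in_brain = 10**11    # commonly cited estimate for the human brain

    scale_factor = neurons_in_brain // neurons_in_column
    print(f"A whole-brain model is about {scale_factor:,}x the size of the column")
    # -> A whole-brain model is about 10,000,000x the size of the column

And that factor of ten million counts only neurons; with each neuron making on the order of thousands of synaptic connections, the real problem is larger still.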

One of the curious things I noticed in the early part of his talk was how he kept saying "decisions" to refer to activity over which we have no direct control, such as the processing that scales perceived object size with distance. I'm not fond of that choice of words, since it seems to suggest that we, as conscious entities, could actually decide to perceive things differently from how we do -- something which should appeal to fans of neo-mystical idealism. I think it would be better to avoid that kind of terminology, lest it breed the same confusion that followed physicists' choice of the term "observe" for a particular type of interaction in quantum mechanics. Physicists also started the use of "God particle" for the Higgs boson, which again misleads the general public, although I think the media has done more to propagate that usage than any actual scientists.

Curious things, words.

Tuesday, November 10, 2009

Inquiry Into Impossibility

(Briefly).

I am a being with desires. Roughly, this means that I, as a sentient system, feel impelled to relieve an urge. So, I create a mental simulation of some thing, some state of affairs that differs from the current state of the world, that I expect (or hope) will alleviate that urge.
A simple example: Debra is hungry. Instinct and memory tell her that moving her body so as to bring comestibles to her mouth, chew, and swallow will make her less hungry. Thus, her object of desire is a state where she has eaten food, or where her stomach is full, or something to that effect.
The fundamental principle, as everyone already knows, is that actions modify our environment, and our bodies reward us – with dopamine and the diminishment of the urge – for modifying the environment in particular ways.
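
The loop can be caricatured in a few lines of code (a toy sketch of the feedback cycle; the numbers and the single hard-coded action are purely illustrative assumptions, not a model of real physiology):

    # A deliberately crude caricature of the urge -> action -> reward loop above.
    hunger = 10                  # current strength of the urge (arbitrary units)

    def eat_something():
        """The chosen action: modify the environment by consuming some food."""
        return 4                 # how much each helping diminishes the urge

    while hunger > 0:
        # Act on the simulated goal state, then let the body's 'reward'
        # (the diminished urge) close the loop.
        hunger = max(0, hunger - eat_something())
        print("urge level now:", hunger)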

What happens, now, if I desire something impossible?

Let me sit and contemplate a bowl of fruit. My desire is for one of the fruit – a mango, perhaps – to be in my hand. I have, then, a clearly defined goal (a simulated aspect of the environment which will sate my want), and all that remains is to use the power of action to translate the object of desire into actuality. Common sense says I should move myself within range of the bowl (if I'm not there already), extend an arm, and pick it up. However, my desire is slightly more complex than that: I want to achieve my goal without going through those steps. Part of my envisioned goal includes the condition that I collect the mango in the absence of gross physical movement.
There might be other options: maybe I have a friend (or servant) nearby who will interpret a very slight gesture on my part as a request for the mango. Or maybe there's a mechanical hand and conveyor belt set up that leads directly to my own biological hand, and it is activated through some minute action – moving my eyes or blinking in a particular pattern, for example.
But let's say none of these things have been arranged: the most likely situation is that I'm sitting in place alone, unaided, merely willing the mango to somehow appear in my hand through no great effort of my own.
That this should happen is improbable to the point of impossibility.
So, I review my options: supposing that I stubbornly stick to my original constraints (no gross physical movement), there is little to nothing I can do to change the situation. To effect change, action is required; and only a particular subset of available actions leads to particular (desired) outcomes. If the set of actions-available-to-me happens to be disjoint from the set of actions-leading-to-my-goal, it seems I am utterly powerless. There must be overlap between those two sets; otherwise it is logically impossible for me to achieve my goal through action.
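The point can be put in plain set terms; here is a tiny sketch (the action labels are hypothetical, chosen only to mirror the mango scenario above):

    # Achievability through action = a non-empty intersection of two sets.
    actions_leading_to_goal = {"reach for the bowl", "signal a friend", "blink at the conveyor"}
    actions_available_to_me = {"sit still", "think", "simulate the mango in my hand"}

    overlap = actions_available_to_me & actions_leading_to_goal
    if overlap:
        print("Achievable via:", overlap)
    else:
        print("Disjoint sets: no action of mine realizes this goal.")

Under the self-imposed constraint, the intersection comes up empty, which is just the powerlessness described above.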
Is there any other way to achieve a goal than through action? By the very nature of the word "achieve", I think the answer is no. Is it possible for me to accomplish something while doing nothing? Pace Laozi, the very idea reeks of contradiction. "But," I protest, "I am doing something – I'm thinking and simulating." The problem is that, apparently, thinking and simulating exert no influence on the universe unless they are accompanied by physical forces to realize their aims. Alone, they overlap with hardly any sets of actions-that-lead-to-goals.

How can I shape the universe to match my desires? Only through the channels that the universe allows me. And so the question becomes, how can I change those channels?

Wednesday, November 4, 2009

Science/Philosophy discussion on PF

A fascinating discussion about science and philosophy looks to be shaping up on the Philosophy Forums. At least, fascinating for my tastes, since I'm perennially interested in the question of what use philosophy really is these days compared to science.


Anyway, the thread starts out by directing several questions to John Searle (who is actually a member of the forums, though I suspect far too busy to be very active). Searle himself did give one response, and since then the thread has been drawing other participants; so far it has been a high-caliber, well-reasoned discussion, by my estimation.

Here's a sample to whet your appetite, from the original poster (HamishMacSporran), responding to Searle's reply:
... [M]any scientists take the view that the scientific revolution was made possible not by an accumulation of philosophical analysis, but instead by a rejection of the existing philosophical systems in favour of a new experimental method. From this point of view, scientists don't need philsophers [sic] to do their groundwork for them, but should instead ignore their clever arguments and focus on doing experiments. Hence the motto of the Royal Society, "Take nobody's word for it".
...

You [Searle] also raise the issue of computationalism within cognitive science, and your arguments against it. Whatever their merits, these arguments have not led to a conclusive rejection of computationalism. Dennett, for example, continues to deny your conclusions are valid, and the mind as a computer program metaphor continues to be common currency among cognitive scientists.

To many scientists this is just another symptom of philosophy's malaise: nothing ever gets resolved. That's why, instead of arguing for another thousand years about whether universals exist, they believe they need to focus on questions that can be definitively answered by experiment, with some even claiming that questions outside this domain are meaningless.

Does the computationalist hypothesis really have any experimental implications for cognitive scientists? What effects would you expect your arguments to have on a cognitive scientists research program? Are there benefits of philosophical dispute even in the case that no definite conclusion is reached?