Thursday, April 30, 2009

Why Do People Do Bad Things?

A question that has troubled humankind for as long as we've been able to formulate it, no doubt.

Let us consider a moral agent, Agent Q. (Not 007, no.)
Q commits an immoral act B (B for bad. Or maybe for /b/.).


  1. Q performs B because she doesn't know any better. She may have no conception or understanding of morality, or at least not the required kind of morality at hand here. Perhaps she is an animal or mindless drone and not a moral agent at all.
  2. Q performs B but does not believe it is wrong. This is basically a variation of 1, only now I assume Q does understand why other people might think B is wrong, yet she disagrees. Maybe she's Nietzsche.
  3. Q believes that B is wrong, yet she wants to benefit from B (in whatever way she does) badly enough to overcome any moral hesitations. She could be Judas, I suppose.
  4. Q believes that B is wrong and does not think that B's selfish good will outweigh the evil it entails. Somehow, Q performs B nonetheless. Q now seems to be conflicted – was this a lapse of will (akrasia)? How do we account for what has happened? Did Q choose to do B? Why, when she knows that it's wrong?
  5. Q had no genuine control over her actions.
The real one we're concerned with is naturally number 4. Why does anyone, myself included, do that which disagrees with her own judgment?

Are we simply animals? Do we have impossible standards?
What gives here.

Wednesday, April 29, 2009

Reading A New Kind of Science

Recently checked out the gargantuan tome (1,197 textbook-sized pages) A New Kind of Science, by Stephen Wolfram. Not that I expect to get through the entire thing, or even a significant portion of it. But I've been wanting to take a look at it for ages.

Loosely, Wolfram intends to present some kind of alternative framework for conceptualizing science (and, if I understand him correctly, practically every other field--philosophy, art, etc.), building from the principles of cellular automata. The main theme of the intro thus far is that simple systems can yield very complex results.

That's all well and good, a fascinating project. I have to say, however, that I'm a bit irritated by his style of prose. From what I've read so far, Mr. Wolfram has repeated the same basic idea--"complexity can arise from simplicity"--about 400 times more than he has actually needed to. He changes the words he uses, but essentially he keeps repeating the same idea without really adding anything to it. For the span of several pages he talks about how his new framework will benefit specific disciplines (biology, physics, mathematics, etc.), running through a list with a paragraph for each. And each paragraph essentially states the same basic idea, generically adapted to the subject at hand.

Seriously, Stephen. Your book is already an ungodly length without you adding what feels to me very much like pointless filler. I'm getting the impression that he likes to "hear himself write", so to speak.

I also take issue with a seeming arrogance Wolfram displays: he can't quite emphasize enough that this is all due to his discoveries and ideas, and this is the first time anyone has approached these problems from this particular angle, etc.

Which may be true to some degree. Certainly Wolfram's earlier work with cellular automata introduced the world to new classes of automata that had not been previously examined. But I feel that he relishes telling us about the magnitude of his own accomplishments a little much.

All that said, I'm just being picky here. I still intend to read more of the book, and I hope it will improve as it gets more into the heart of the matter.

Wednesday, April 15, 2009

Writer's Block and the Overcoming Thereof

The solution to writer's block for any endeavor (music, essays, blog posts, fiction, art) is simple.

Just write anyway, no matter how terrible it's going to be. No more excuses, no more worrying about the final product, no more stopping yourself in the middle of a sentence (melody, line stroke, bit of dialog) because it isn't good enough, no more getting sidetracked by picayune concerns. Just write.

Create in spite of yourself, if necessary.

I'll let y'all know how it goes as I attempt to put words to practice.

Update as of 04-16-09: Eh. Not sure how well that worked. It's still very discouraging.

Thursday, April 9, 2009

The Ability to Repeat (Certain)

...Against The Ability to Grow (Uncertain)

From an article on Carol Dweck and motivation (incidentally, I've mentioned her research before in a post from last year):

Dweck’s next question: what makes students focus on different goals in the first place? During a sabbatical at Harvard, she was discussing this with doctoral student Mary Bandura (daughter of legendary Stanford psychologist Albert Bandura), and the answer hit them: if some students want to show off their ability, while others want to increase their ability, “ability” means different things to the two groups. “If you want to demonstrate something over and over, it feels like something static that lives inside of you—whereas if you want to increase your ability, it feels dynamic and malleable,” Dweck explains. People with performance goals, she reasoned, think intelligence is fixed from birth. People with learning goals have a growth mind-set about intelligence, believing it can be developed. (Among themselves, psychologists call the growth mind-set an “incremental theory,” and use the term “entity theory” for the fixed mind-set.) The model was nearly complete...

Thursday, April 2, 2009


How can we (or I?) motivate ourselves to do things that we do not want to do?

Seems to me that "instrumental" and "punitive" approaches have been most popular over the years. Which is to say, under the instrumental approach we think, "X is unpleasant, but I need to accomplish/tolerate X in order to achieve Y, which I do desire", and thence derives motivation to do X.

The punitive approach--maybe it should be called the "threat of punishment" approach, but that just sounds long and awkward--is more along the lines of, "Q is unpleasant; but Z is even more unpleasant than Q, and Z will happen if I do not do Q".

Actually, these are analogous (although not perfectly so) to modus ponens and modus tollens, like so:

Instrumental (modus ponens):     Punitive (modus tollens):
  X → Y                            ~Q → Z
  X                                ~Z
  ∴ Y                              ∴ Q

From a motivational standpoint, one looks toward the conclusion for a desired outcome. In the first case, we want Y to happen, and we know that one of the ways to effect Y is to do X. That is, if we do X, unpleasant though it may be, we will be rewarded with Y.

Similarly, in the second case Z is even more odious than Q, but we know that if Z is going to happen, it must be in Q's absence. Thus, we can prevent Z from happening (effect ~Z) by doing Q.

These also (unsurprisingly) map pretty handily onto the psychological principles of operant conditioning: what I've called the "instrumental approach" (modus ponens) is analogous to positive reinforcement and positive punishment, and the "punitive approach" (modus tollens) to negative reinforcement and negative punishment.

All four approaches are logically equivalent, depending on how we choose our premises. It is worthwhile to note, however, that they are not psychologically equivalent--and that people may react better to what they understand as a positive stimulus versus a negative stimulus, etc.
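For what it's worth, the analogy can be checked mechanically. Here's a quick Python sketch (the names are mine, purely illustrative) that brute-forces the truth tables for both inference forms:

```python
# A throwaway sketch that brute-forces the truth tables to confirm
# both motivational patterns are valid inference forms.
from itertools import product

def implies(p, q):
    # Material conditional: p -> q is false only when p is true and q is false.
    return (not p) or q

# Instrumental / modus ponens: from (X -> Y) and X, infer Y.
mp_valid = all(
    y
    for x, y in product([False, True], repeat=2)
    if implies(x, y) and x
)

# Punitive / modus tollens: from (~Q -> Z) and ~Z, infer Q.
mt_valid = all(
    q
    for q, z in product([False, True], repeat=2)
    if implies(not q, z) and not z
)

print(mp_valid, mt_valid)  # True True
```

Each pattern comes out valid: in every row of the truth table where the premises hold, the conclusion holds too.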

Wednesday, April 1, 2009

Living in the Past

Apparently David Eagleman and Terrence Sejnowski hold that the human conscious awareness of certain events--such as visual events--actually occurs a substantial period after the event has occurred (~80 ms).

See "Motion, Integration and Postdiction in Visual Awareness". I don't actually have access to the full article, so I'm just basing this off the abstract and some secondary news reports, but anyway.

Other studies, such as those by Benjamin Libet (yes, I'm too lazy to dig out a proper citation), suggest that conscious decisions may begin up to ~350ms (give or take a few hundred milliseconds) before we become aware of them. Or at least that the brain begins to make neural preparations that correlate with decisions that far in advance of our awareness.

I need to look into these and related studies more thoroughly, but, prima facie, what the hell does this imply about our perception of time? Is our awareness for pretty much any event constantly lagging behind the actual occurrence of that event?

How does that work when we're consciously trying to sync ourselves up to events, as with playing musical rhythms in time? To a human, a latency of > 15 ms (or even lower for sensitive musicians) becomes very noticeable very quickly when playing on an electronic instrument; but how does that fit with the rest of our consciousness experiencing such a time delay?

What about computer games and real-life activities where reactions need to happen on the scale of around ~250 - ~300 ms? Does our awareness delay factor into that too?

Does that have any implications about free will during these and other activities? Would that make a sort of delayed epiphenomenalism seem the most logical position for the philosophy of mind?

Questions, questions, questions that I should seek out answers to. But I'm too lazy, so questions they shall remain.