Monday, November 30, 2009

Opaque Introspection

Some time ago, Psyblog posted a series called "What Everyone Should Know About Their Own Minds". Appropriately enough, it covers different ways in which humans typically misunderstand (or misjudge) components of their own behavior, such as motivations, reasons, and predictions about their own responses. Sometimes this involves resolving cognitive dissonance, where a subject changes some of her beliefs in order to better match other beliefs she had been opposing; other times the explanation is less clear.


The whole subject is a curious one, since, subjectively, most of us feel like we have pretty transparent access to the inside of our own heads. For example, if asked to give reasons why we find one face more attractive than another, we usually think we can do so; and we feel, moreover, that the reasons we come up with will be true to whatever processes actually go on inside our heads. Not so, as one of the Psyblog posts reports; at least not with the accuracy we'd expect of ourselves.

My impression is that we are sometimes alien to our own minds, insofar as we don't understand most of our inner machinations. Yet, strangely, we have this need to "tell stories", even to ourselves, about why we've done things a particular way: a need for explanation or justification of our inner landscape, much in the fashion of the need that we have to explain (and understand) the outer world. This isn't such a bad thing, since seeking explanations is the heart of science (and rational inquiry). But it does emphasize the necessity of being relentlessly critical and skeptical if your aim is truth -- skeptical even about your own thought processes -- lest you settle too firmly on the first story that seems plausible to you.

A downside of the above skeptical strategy, in my own experience, is that it tends to drive you a little bit crazy. Constantly doubting your own motivations, values, and judgments is not a very enjoyable way to go about your day. After a certain point, if you lose too much faith in your own understanding of the contents of your own head, you may end up impeding your own progress, constantly looking for nonexistent solid ground. From a practical "getting things done" standpoint, it may be better to be wrong about a few little details here and there if you're still able to function well in the main.

We may suppose, perhaps, that this is why we did not evolve to be more naturally self-critical and self-reflective creatures. Don't get me wrong: compared to any other form of intelligence that we know about, we still go pretty far toward self-reflection -- even the least reflective of individuals tries to purge logical inconsistencies from her thoughts, though tolerance for inconsistency obviously varies from person to person. But, based on experiments like those linked above, plausible-seeming beliefs about your own motivations were evolutionarily much more relevant to fitness than safeguard mechanisms ensuring internal accuracy. Apparently.

Wednesday, November 11, 2009

Markram Speaks On Simulating the Brain



Henry Markram, director of the Blue Brain project, (relatively recently) gave a talk at TED about simulating the neuronal activity of the entire brain. This will require a supercomputer, in this case supplied by IBM, since the brain has some 10^11 neurons. So far, they've replicated a rat's neocortical column, which comprises some 10^4 neurons, and Markram anticipates being able to fully model the human brain within 10 years.

One of the curious things I found in the early part of his talk was how he kept saying "decisions" to refer to activity over which we have no direct control, such as the processing that goes into scaling perceived object size with distance. I'm not fond of that choice of words, since it seems to suggest that we, as conscious entities, could actually decide to perceive things differently from how we do -- something which should appeal to fans of neo-mystical idealism. For that reason, I think it would be better to avoid that kind of terminology, lest it breed a confusion similar to the one that resulted from physicists' choice of the term "observe" to describe a particular type of interaction in quantum mechanics. Physicists also started the use of "God particle" to refer to the Higgs boson, which is again misleading to the general public, although I think the media has been more responsible for propagating that usage than any actual scientists.

Curious things, words.

Tuesday, November 10, 2009

Inquiry Into Impossibility

(Briefly).

I am a being with desires. Roughly, this means that I, as a sentient system, feel impelled to relieve an urge. So, I create a mental simulation of some thing, some state of affairs that differs from the current state of the world, that I expect (or hope) will alleviate that urge.
A simple example: Debra is hungry. Instinct and memory tell her that moving her body so as to bring comestibles to her mouth, chew, and swallow will make her less hungry. Thus, her object of desire is a state where she has eaten food, or where her stomach is full, or something to that effect.
The fundamental principle, as everyone already knows, is that actions modify our environment, and our bodies reward us – with dopamine and the diminishment of urge – for modifying the environment in particular ways.

What happens, now, if I desire something impossible?

Let me sit and contemplate a bowl of fruit. My desire is for one of the fruit – a mango, perhaps – to be in my hand. I have, then, a clearly defined goal (a simulated aspect of the environment which will sate my want), and all that remains is to use the power of action to translate object of desire into actuality. Common sense says I should move myself within range of the bowl (if I'm not there already), extend an arm, and pick it up. However, my desire is slightly more complex than that: I want to achieve my goal without going through those steps. Part of my envisioned goal includes the condition that I collect the mango in the absence of gross physical movement.
There might be other options: maybe I have a friend (or servant) nearby who will interpret a very slight gesture on my part as a request for the mango. Or maybe there's a mechanical hand and conveyor belt set up that leads directly to my own biological hand, and it is activated through some minute action – moving my eyes or blinking in a particular pattern, for example.
But let's say none of these things have been arranged: the most likely situation is that I'm sitting in place alone, unaided, merely willing the mango to somehow appear in my hand through no great effort of my own.
That this should happen is improbable to the point of impossibility.
So, I review my options: supposing that I stubbornly stick to my original constraints (no gross physical movement), there is little to nothing that I can do to change the situation. To effect change, action is required; and only a particular subset of available actions leads to particular (desired) outcomes. If the set of actions-available-to-me happens to be disjoint from the set of actions-leading-to-my-goal, it seems I am utterly powerless. There must be overlap between those sets; otherwise it is logically impossible for me to achieve my goal through action.
Is there any other way to achieve a goal than through action? By the very nature of the word "achieve", I think the answer is no. Is it possible for me to accomplish something while doing nothing? Pace Laozi, the very idea reeks of contradiction. "But," I protest, "I am doing something – I'm thinking and simulating." The problem is that, apparently, thinking and simulating exert no influence on the universe if they are not accompanied by physical forces to realize their aims. Alone, they do not overlap with very many sets of actions-that-lead-to-goals at all.

How can I shape the universe to match my desires? Only through the channels that the universe allows me. And so the question becomes, how can I change those channels?

Wednesday, November 4, 2009

Science/Philosophy discussion on PF

Looks to be a very fascinating discussion about science and philosophy shaping up on the Philosophy Forums. At least, fascinating for my tastes, since I'm perennially interested in the question of what use philosophy really is these days compared to science.


Anyway, the thread starts out by directing several questions to John Searle (who is actually a member of the forums, though I suspect far too busy to be very active). Searle himself did give one response, and since then the thread's been drawing other participants; so far it has been a high-caliber and well-reasoned discussion, by my estimation.

Here's a sample to whet your appetite, from the original poster (HamishMacSporran)'s response to Searle's own response:
... [M]any scientists take the view that the scientific revolution was made possible not by an accumulation of philosophical analysis, but instead by a rejection of the existing philosophical systems in favour of a new experimental method. From this point of view, scientists don't need philsophers [sic] to do their groundwork for them, but should instead ignore their clever arguments and focus on doing experiments. Hence the motto of the Royal Society, "Take nobody's word for it".
...

You [Searle] also raise the issue of computationalism within cognitive science, and your arguments against it. Whatever their merits, these arguments have not led to a conclusive rejection of computationalism. Dennett, for example, continues to deny your conclusions are valid, and the mind as a computer program metaphor continues to be common currency among cognitive scientists.

To many scientists this is just another symptom of philosophy's malaise: nothing ever gets resolved. That's why, instead of arguing for another thousand years about whether universals exist, they believe they need to focus on questions that can be definitively answered by experiment, with some even claiming that questions outside this domain are meaningless.

Does the computationalist hypothesis really have any experimental implications for cognitive scientists? What effects would you expect your arguments to have on a cognitive scientists research program? Are there benefits of philosophical dispute even in the case that no definite conclusion is reached?


Friday, October 23, 2009

Philosophy as counter-productive in a therapeutic sense?

From Irvin D. Yalom's Existential Psychotherapy:

As nature abhors a vacuum, we humans abhor uncertainty. One of the tasks of the therapist is to increase the patient's sense of certainty and mastery. It is a matter of no small importance that one be able to explain and order the events in our lives into some coherent and predictable pattern. To name something, to locate its place in a causal sequence, is to begin to experience it as under our control. No longer, then, is our internal experience or behavior frightening, alien or out of control; instead, we behave (or have a particular inner experience) because of something we can name or identify. The "because" offers one mastery (or a sense of mastery that phenomenologically is tantamount to mastery). [pp. 189-190]
Does the study of philosophy, I wonder, run counter to that need we have for certainty? Or, on the contrary, does it perhaps assist it? After all, though philosophy does not provide us with cut-and-dried answers, and its study exacerbates our awareness of the human's dismal epistemic and existential plight, nonetheless any sort of rational investigation (of which philosophy purports to be the discipline par excellence) will invariably encompass the latter part of Yalom's quote, viz., the bit about "naming" and explaining/ordering events.

The Ancients make for a good example here: reducing the physical world's phenomena to a limited number of substances--e.g., fire/change for Heraclitus, water for Thales, air for Anaximenes, all four elements for Empedocles--allows for a more succinct and manageable comprehension of that world. Rational reductionism of all shades and hues affords a better sense of psychological certainty, and consequently an increased feeling of "mastery". Science, as the eventual granddaughter of this impulse, performs the same function, although our methods have since grown much more mathematical and empirical.

But by no means ought we to think this strategy unique to science and philosophy. Human mythology is cross-culturally rife with proposed explanations for phenomena ("just so" stories). Tellingly, many of them explicitly exalt the importance of words and naming in the development of humanity; for current Euro-American culture, the most obvious example is Adam's naming of the beasts in Genesis. Sigmund Freud asserted that religion attempts to reconcile our need for control with the volatile chaos of nature: if the natural world is controlled utterly by a superior being who acts as a father for us, we are thus assured that there is order lurking behind ostensible chaos; and, more importantly, it is an order with our own best interests in mind (eventually, that is; because God's will is inscrutable and ineffable, we must accept that bad things will happen to us in the now). Finally, it is an order which is not set forever in stone, but one which may be bargained with and appealed to, since it is ruled fundamentally by a person of sorts, not by a blind, unintelligent, and uncaring force. (The "Communication/Negotiation" section of my post "The Immutability of Vicissitudes, Part 1" addresses this too.)

It may in fact be fruitful to analyze the scientific thirst for knowledge in light of the psychoanalytic need for control, and I am positive that I'm not the first to suggest this. We might say that science-lust is an extension of Freud's interpretation of religion: science serves similar psychological needs, although it requires a deep paradigm shift as well. The scientific Weltanschauung does not privilege humanity by pretending that the universe operates in humanity's best interest, nor does it offer us a kindly father-figure. However, it does give us an explanation--an understanding of the order inherent behind the horrible confusions nature presents us with; and it supplies us, through understanding, with a means to combat our own helplessness.

Thus, science may be seen nearly as an outgrowth of religion (more properly, the religious impulse); but it is one that replaces an anthropomorphic epistemology with a mechanical epistemology. As Quine said of physical objects and gods,
Both sorts of entities enter our conception only as cultural posits. The myth of physical objects is epistemologically superior to most in that it has proved more efficacious than other myths as a device for working a manageable structure into the flux of experience. ["Two Dogmas of Empiricism"]
That is to say, we put forth the common understanding of physical objects (as opposed to the skeptic's or the idealist's) as an explanation for our collected observations in the same way that, and for similar reasons as, our ancestors put forth gods. However, gods are not very good explanations, and they do not enable us to do things the way that believing in the reality of physical objects does; hence the latter's epistemic superiority.

Though we cannot appeal to a God anymore, science does furnish us with a means to control nature ourselves, which is something that religion and mythology could never adequately supply. From this we get the term "playing God" and our species' tradition of shunning technological progress for the power it takes away from God. Now, the contemporary existentially-minded human finds herself sitting down on God's throne after having killed Him, terrified by the loneliness, the lack of direction, and the growing awareness that, if God had ever existed, He wouldn't have been any better off than she is now.


So, all respect due to Boethius, is philosophy actually a consolation? Cautiously, I say that it can be; but I hasten to add that stopping there is woefully (and willfully?) near-sighted. Rational inquiry enables us to satisfy some of the needs Yalom detailed above, in a capacity similar to that which religion served in the past. Unfortunately, we no sooner find a good reason to believe something than we recognize that there can be no absolute certainty (in a broad sense). Naming (and, these days, quantifying) the surrounding world is a valuable ability, but the relentless and open-eyed pursuit of exhaustive naming schemes leads one to the conclusion that such things are impossible--and that certainty is more so.

(It seems to me that Mark Z. Danielewski deals with these themes among others in his novel House of Leaves, although I may not be able to say exactly how.)

Monday, September 28, 2009

Link Dump

So here are some blog entries (or whatever) I have recently found interesting.


At Theoretical Atlas, Jeffrey Morton recounts a talk given by Gregory Chaitin where Chaitin urges us to create a more rigorous "theoretical biology". Something where we could prove interesting theorems about evolutionary models, for example.



This is a somewhat older paper (1997) by Allen F. Randall entitled "Quantum Phenomenology". He attempts something of an updated version of Descartes' cogito (maybe mixed with Kant's transcendental project), trying to logically derive the principles of quantum mechanics from the basic awareness of experiential existence (my phrase, not Randall's). Randall claims, "Far from being 'bizarre' and 'weird', as is usually thought, the strangest paradoxes of quantum theory turn out to be just what one ought to expect of a rational universe." I admit I've only read the first part of this paper, but it's still a very fascinating idea.

At Mixing Memory, Chris reports on several studies where unpleasant stimuli (such as foul odors, messy/sloppy workspaces, disgusting video footage) harshen the severity of people's moral judgments.


Saturday, July 4, 2009

Why we're here

It doesn't seem as though the universe should exist. It is common to think there should be a reason--a truthmaker--for why the universe exists as opposed to not existing.

Which is a mighty peculiar thing, since we've certainly never had any observations of a non-existent universe, nor can we even properly imagine it (in my opinion). All we have is the assumption that the pre-natal and natural state of anything is non-existence, or absence; accordingly, there must be something else, some kind of metaphysical mechanism, that causes that non-existence to become existence.
This stems from our observations about causality, perhaps coupled with innate physical intuitions. Maybe it isn't so peculiar to have this belief, then, since every time that we witness an effect in everyday life, it is preceded (anteceded) by a cause. Hence cosmological arguments; hence the conclusion that there must have been either a single "unmoved mover" or an infinite chain of antecedent causes to explain the universe's existence. Well, that or the universe shouldn't exist at all, being causeless.
I wonder if we're really justified in this conclusion. We have never witnessed something come from nothing (which is presumably what must have happened when a pure void became material substance?) but then... we have truthfully never witnessed "nothing". As our scientific instruments grow more and more powerful, we end up discovering that even that which has appeared empty before--vacuums, space--is nonetheless filled with a frothing mass of virtual particles and other bizarre fauna of the micro-universe. (See, e.g., this Wikipedia article on the vacuum state).
And if there is always something there materially... I guess then we have to suppose that those minute, practically non-existent material bits must be able to exert causal influence on other bits of matter. In that case, how can we be sure that it's even physically possible for the universe (or at least matter) not to exist? Maybe it is logically possible, insofar as we can imagine it (which I have my doubts about, as I said before). Metaphysically possible? Hmmm....
Consider another Easter egg laid by the goose of science: the first law of thermodynamics. The total amount of energy in an isolated system must remain constant, though it may change form. (Similarly so with physical information, as came up in the black hole information paradox; I believe the conservation of one quantity--information or energy--may be derived from the other.) If the universe is an isolated system, and if we presume that this law holds invariably, then we must conclude that the universe has always existed--or rather that the energy within it has always existed, which is close enough for our purposes, since energy may be converted to matter.
Thus, the laws of physics do not allow the energy of the universe to come into or out of existence--thus, we must presume it has always existed.
But then, all I'm doing is shifting the question back a step. Now we ask instead, "Why is it that the total energy in the universe equals a positive value, not a zero or negative value?"
I don't know. Should we expect there to be a reason that gravity exists? That time exists? Some physicists do look for causes there, I suppose. And to a certain degree, we may get explanations about gravity and other forces as they (we think) split off from a single unified source in the very early universe. Still, if we accept that laws or forces need no justification for their existence, maybe we shouldn't expect a justification for the existence of energy either?

Edit as of 07-05-2009: I should clarify that under some theories, the total energy of the universe actually does add up to zero; it's just that the way it's distributed gives us the material and energy formations we're used to. (I think gravity is typically suggested as the negative counterpart for the energy released during the Big Bang. Here's a handy and relevant (though brief) link for further reading.)

Friday, June 26, 2009

Controlling oneself

Fascinating post over at Less Wrong about the influence of control systems on human behavior, and the role they seem to play in the brain. A control system here basically means a feedback device that catalyzes or inhibits some variable in order to maintain it within an accepted range. (Thermostat, homeostasis, etc.)
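
To make the idea concrete, here is a minimal sketch of a thermostat-style feedback loop, my own toy example rather than anything from the Less Wrong post; all names and numbers are invented:

```python
# A feedback control system: measure a variable, compare it to a set
# point, and push back proportionally whenever it drifts out of range.
# (Toy illustration only; the gain and tolerance values are made up.)

def thermostat_step(temperature, set_point=20.0, tolerance=1.0, gain=0.5):
    """Return a corrective nudge pushing temperature toward the set point."""
    error = temperature - set_point
    if abs(error) <= tolerance:
        return 0.0           # within the accepted range: do nothing
    return -gain * error     # outside it: counteract the deviation

temp = 26.0
for step in range(8):
    temp += thermostat_step(temp)
    print(f"step {step}: {temp:.2f}")
# The temperature settles inside the accepted band around 20.0.
```

The relevant feature, for the post's purposes, is that the circuit neither knows nor cares why the variable drifted; it just pattern-matches on the error and pushes back.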


Here's a lengthy quote that seems most pertinent to me:
In a primitive, tribal culture, being seen as useless to the tribe could easily be a death sentence, so we [likely] evolved mechanisms to avoid giving the impression of being useless. A good way to avoid showing your incompetence is to simply not do the things you're incompetent at, or things which you suspect you might be incompetent at and that have a great associated cost for failure. If it's important for your image within the tribe that you do not fail at something, then you attempt to avoid doing that.

You might already be seeing where this is leading. The things many of us procrastinate on are exactly the kinds of things that are important to us. We're deathly afraid of the consequences of what might happen if we fail at them, so there are powerful forces in play trying to make us not work on them at all. Unfortunately, for beings living in modern society, this behavior is maladaptive and buggy. It leads to us having control circuits which try to keep us unproductive, and when they pick up on things that might make us more productive, they start suppressing our use of those techniques.

Furthermore, the control circuits are stupid. They are occasionally capable of being somewhat predictive, but they are fundamentally just doing some simple pattern-matching, oblivious to deeper subtleties. They may end up reacting to wholly wrong inputs. Consider the example of developing a phobia for a particular place, or a particular kind of environment. Something very bad happens to you in that place once, and as a result, a circuit is formed in your brain that's designed to keep you out of such situations in the future. Whenever it detects that you are in a place resembling the one where the incident happened, it starts sending error signals to get you away from there. Only that this is a very crude and unoptimal way of keeping you out of trouble - if a car hit you while you were crossing the road, you might develop a phobia for crossing the road. Needless to say, this is more trouble than it's worth.
(P.S., I should note that the author of the quoted blog post, Kaj_Sotala, draws this conceptual material largely from a self-help article by PJ Eby).

Fascinating stuff. Maybe not without its problems though, as commenter Silas Barta notes:
The explanations here for behavioral phenomena look like commonsense reasoning that is being shoehorned into controls terminology by clever relabeling. (ETA: Why do you need the concept of a "feedback control system" to think of the idea of running through the reasons you're afraid of something, for example?)
The thought concerns me a bit too. Are we really getting any benefit from describing these aspects of behavior as control mechanisms? Are we getting a more accurate model of behavior? At an individual, practical level, does it help us to conceive of our thought processes in this way?

Thursday, June 18, 2009

"Following From" Addendum

Ah, this is a perfect example of why I should actually read philosophical work that has already been done on subjects that I wonder about.


My last post, Following From, concerned itself (loosely, and among other things) with the nature of metaphysical laws. However, I uncritically assumed a view analogous to the regularity theory of laws of nature--as opposed to the necessitarian theory. That is, I took it for granted that laws (though of course I have in mind metaphysical laws, not just those in the physical world) are simply descriptions of behavior rather than forces which "govern" or "command" objects to act in particular ways.

Yet, while I took the regularity view for granted, I speculated about what it is that "causes" or "makes" things behave the way they do, while at the same time rejecting the necessitarian view which would--we hope--give just that kind of explanation. Now, this isn't really a solution to whatever problem I had in mind, because I would still be inclined to ask of necessitarians, "But what, in turn, makes necessitarian laws hold the sway they do?", thus leading us obnoxiously into a typical infinite regress. But my point is that if I'd already been aware of these existing philosophical positions, and read at least a modicum about them, I would have had a basis from which to work when asking my own questions--it may not have furnished me with answers right off the bat, but I believe that marking this distinction has helped to clarify the matter in my mind.

Wednesday, June 17, 2009

Following From

[I wish that I had the interest/dedication to actually pursue these thoughts more rigorously by studying existent philosophical work on the topic, but alas.]


Consider a state governed by absolutely no physical or metaphysical laws. Does this mean anything can happen? Or for "something to happen", does that require that there be something to guide or direct "happenings"?

Natural laws, of course, probably don't "guide" action by any means. They're simply descriptions of how phenomena behave. But... how do phenomena "know" how to behave? What makes them behave in a particular way versus some other way?

I suppose I'm inquiring about how causation works in general--what exactly goes on when some observed or postulated event (which we call the antecedent) is supposed to cause, or in some way be responsible for, another event (the consequent). (As a side note, "event" is too loose of a term. Really, I suppose I mean "states of affairs" or "sets of circumstances" that obtain at a certain point in time. But event is a bit quicker to write, and most of the discussion examples that I can think of are events in the more standard sense too.)

If nothing else, we can at least say that human minds (and thus what we call rational thought) work best thinking under the following paradigm: to understand how/why a circumstance came to be, the circumstance must have followed from, or been enabled by, a pre-existing framework. {{And it is this necessity of thought unchecked that Kant rebukes in his Critique. The search for the unconditioned condition--a final explanation--the prime mover--God--is an attempt to step outside of the infinite regress that otherwise results, and thus to give us a circumstance which needs no further explanation. Kant (perhaps rightfully) claims that reason oversteps its justifiable boundaries when it tries to make this move.}} This paradigm seems to have served us fairly faithfully so far, but we nonetheless cannot discount the possibility that this fundamental "strategy" of thought might be flawed. {{As you will notice, the current topic is regrettably plagued by difficulty (impossibility?) of discussion. Like many other areas of philosophy, we are trying to grapple with notions that extend into the core of our most basic assumptions, and even trying to think about them will be difficult, much less to question their "accuracy".}}

What, however, does any of this mean? The ideas I suggest now may be completely nonsensical, possibly incoherent as well. And surely there is nothing to be gained by indulging nonsense.

I believe the question ties into (is enrooted in?) regress and the problem of finding first causes. Which in its own way mirrors the confusing interrelationship between objectivity and subjectivity.

Friday, June 5, 2009

Systemic Therapy

Holy cripes, there's a form of psychotherapy based on cybernetics / systems theory?


Crazy. I doubt it's as cool as it sounds to me, and really I don't even know that much about the aforementioned subjects, but I really like the idea of understanding a psychosocial situation in terms of interacting systems.

Thursday, May 21, 2009

Want? Choice?

Why is it so hard to be a particular way by choice?

Why is it that I cannot simply decide one day, hey, I want to accomplish X--and then pursue it?

Nothing prevents me.

Akrasia...?

Saturday, May 16, 2009

Probability - BAH!

Yes.

Something vaguely bothers me about probability and the overall use of statistics as means for collecting data and making projections from it. And not just the fact that I find them harder to understand than I believe I should, considering how comparatively simple their application and execution are.

I can't quite articulate what it is yet, and in any case it's likely that my worries here are pretty groundless, as with the concerns I've felt about other aspects of science and mathematics. But hey, the investigation is the fun part, right? And I truly seem to learn the most easily when I'm mentally "assailing" a position: hunting for weak points, discrepancies, internal conflicts, etc.

Friday, May 1, 2009

Cats and the Qualia of Happiness

My cat, having satisfied himself with the food available indoors, ambles back to the door and sits expectantly. He'll look up at the portal, look around, look at me occasionally. If I draw close, he'll rise up and paw the side of the door, possibly rub himself against me, make movements that seem to express a readiness to go forth. He meows on occasion, if the exit remains barred for too long.

As near as I can tell, he experiences a desire to go outside.

Would he feel pleasure from his desire being fulfilled, or would he simply feel relief from the pressing urge?

Thursday, April 30, 2009

Why Do People Do Bad Things?

A question that has troubled humankind for as long as we've been able to formulate it, no doubt.

Let us consider a moral agent, Agent Q. (Not 007, no.)
Q commits an immoral act B (B for bad. Or maybe for /b/.).

Possibilities:

  1. Q performs B because she doesn't know any better. She may have no conception or understanding of morality, or at least not the required kind of morality at hand here. Perhaps she is an animal or mindless drone and not a moral agent at all.
  2. Q performs B but does not believe it is wrong. This is basically a variation of 1, only now I assume Q does understand why other people might think B is wrong, yet she disagrees. Maybe she's Nietzsche.
  3. Q believes that B is wrong, yet she wants to benefit from B (in whatever way she does) badly enough to overcome any moral hesitations. She could be Judas, I suppose.
  4. Q believes that B is wrong and does not think that B's selfish good will outweigh the evil it entails. Somehow, Q performs B nonetheless. Q now seems to be conflicted – was this a lapse of will (akrasia)? How do we account for what has happened? Did Q choose to do B? Why, when she knows that it's wrong?
  5. Q had no genuine control over her actions.
The real one we're concerned with is naturally number 4. Why do I (or does anyone) do that which disagrees with her own judgment?

Are we simply animals? Do we have impossible standards?
What gives here.

Wednesday, April 29, 2009

Reading A New Kind of Science

Recently checked out the gargantuan tome (1197 textbook-sized pages), A New Kind of Science, by Stephen Wolfram. Not that I expect to get through the entire thing, or even a significant portion of it. But I've been wanting to take a look at it for ages.

Loosely, Wolfram intends to present some kind of alternative framework for conceptualizing science (and, if I understand him correctly, practically every other field--philosophy, art, etc.), building from the principles of cellular automata. The main theme of the intro thus far is that simple systems can yield very complex results.

That's all well and good, a fascinating project. I have to say, however, that I'm a bit irritated by his style of prose. From what I've read so far, Mr. Wolfram has repeated that same basic idea--"complexity can arise from simplicity"--about 400 times more than he has actually needed to. He changes the words he uses, but essentially he keeps repeating the same idea without really adding anything to it. For the span of several pages he talks about how his new framework will benefit specific disciplines (biology, physics, mathematics, etc.), running through a list with a paragraph for each. And each paragraph essentially states the same basic idea, generically adapted to the subject at hand.

Seriously, Stephen. Your book is already an ungodly length without you adding what feels to me very much like pointless filler. I'm getting the impression that he likes to "hear himself write", so to speak.

I also take issue with a seeming arrogance Wolfram displays: he can't quite emphasize enough that this is all due to his discoveries and ideas, and this is the first time anyone has approached these problems from this particular angle, etc.

Which may be true to some degree. Certainly Wolfram's earlier work with cellular automata introduced the world to new classes of automata that had not been previously examined. But I feel that he relishes telling us about the magnitude of his own accomplishments a little too much.



All that said, I'm just being picky here. I still intend to read more of the book, and I hope it will improve as it gets more into the heart of the matter.

Wednesday, April 15, 2009

Writer's Block and the Overcoming Thereof

The solution to writer's block for any endeavor (music, essays, blog posts, fiction, art) is simple.

Just write anyway, no matter how terrible it's going to be. No more excuses, no more worrying about the final product, no more stopping yourself in the middle of a sentence (melody, line stroke, bit of dialog) because it isn't good enough, no more getting sidetracked by picayune concerns. Just write.

Create in spite of yourself, if necessary.



I'll let y'all know how it goes as I attempt to put words to practice.

Update as of 04-16-09: Eh. Not sure how well that worked. It's still very discouraging.

Thursday, April 9, 2009

The Ability to Repeat (Certain)

...Against The Ability to Grow (Uncertain)

From an article on Carol Dweck and motivation (incidentally, I've mentioned her research before in a post from last year):

Dweck’s next question: what makes students focus on different goals in the first place? During a sabbatical at Harvard, she was discussing this with doctoral student Mary Bandura (daughter of legendary Stanford psychologist Albert Bandura), and the answer hit them: if some students want to show off their ability, while others want to increase their ability, “ability” means different things to the two groups. “If you want to demonstrate something over and over, it feels like something static that lives inside of you—whereas if you want to increase your ability, it feels dynamic and malleable,” Dweck explains. People with performance goals, she reasoned, think intelligence is fixed from birth. People with learning goals have a growth mind-set about intelligence, believing it can be developed. (Among themselves, psychologists call the growth mind-set an “incremental theory,” and use the term “entity theory” for the fixed mind-set.) The model was nearly complete...

Thursday, April 2, 2009

Motivation

How can we (or I?) motivate ourselves to do things that we do not want to do?

Seems to me that "instrumental" and "punitive" approaches have been most popular over the years. Which is to say, under the instrumental approach we think, "X is unpleasant, but I need to accomplish/tolerate X in order to achieve Y, which I do desire", and thence derives motivation to do X.

The punitive approach--maybe it should be called the "threat of punishment" approach, but that just sounds long and awkward--is more along the lines of, "Q is unpleasant; but Z is even more unpleasant than Q, and Z will happen if I do not do Q".

Actually, these are analogous (although not perfectly so) to modus ponens and modus tollens, like so:

X -> Y
X
∴ Y

Z -> ~Q
Q
∴ ~Z

From a motivational standpoint, one looks toward the conclusion for a desired outcome. In the first case, we want Y to happen, and we know that one of the ways to effect Y is to do X. That is, if we do X, unpleasant though it may be, we will be rewarded with Y.

Similarly, in the second case Z is even more odious than Q, but we know that if Z is going to happen, it must be in Q's absence. Thus, we can prevent Z from happening (effect ~Z) by doing Q.
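
For the skeptical, a brute-force truth table confirms that both schemas are valid. Here's a quick sketch in Python (my own check, nothing fancier than enumerating assignments):

```python
from itertools import product

def valid(premises, conclusion):
    """An argument form is valid iff every truth assignment that makes
    all the premises true also makes the conclusion true."""
    return all(conclusion(a, b)
               for a, b in product([True, False], repeat=2)
               if all(p(a, b) for p in premises))

implies = lambda p, q: (not p) or q

# Instrumental / modus ponens: X -> Y, X, therefore Y
print(valid([lambda x, y: implies(x, y), lambda x, y: x],
            lambda x, y: y))          # True

# Punitive / modus tollens: Z -> ~Q, Q, therefore ~Z
print(valid([lambda z, q: implies(z, not q), lambda z, q: q],
            lambda z, q: not z))      # True
```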


These also (unsurprisingly) map pretty handily onto the psychological principles of operant conditioning: what I've called the "instrumental approach" (modus ponens) is analogous to positive reinforcement and positive punishment, and the "punitive approach" (modus tollens) to negative reinforcement and negative punishment.


All four approaches are logically equivalent, depending on how we choose our premises. It is worthwhile to note, however, that they are not psychologically equivalent--and that people may react better to what they understand as a positive stimulus versus a negative stimulus, etc.

Wednesday, April 1, 2009

Living in the Past

Apparently David Eagleman and Terrence Sejnowski hold that the human conscious awareness of certain events--such as visual events--actually occurs a substantial period after the event has occurred (~80 ms).

See "Motion, Integration and Postdiction in Visual Awareness". I don't actually have access to the full article, so I'm just basing this off the abstract and some secondary news reports, but anyway.

Other studies, such as those by Benjamin Libet (yes, I'm too lazy to dig out a proper citation), suggest that conscious decisions may begin up to ~350ms (give or take a few hundred milliseconds) before we become aware of them. Or at least that the brain begins to make neural preparations that correlate with decisions that far in advance of our awareness.

I need to look into these and related studies more thoroughly, but, prima facie, what the hell does this imply about our perception of time? Is our awareness of pretty much any event constantly lagging behind the actual occurrence of that event?

How does that work when we're consciously trying to sync ourselves up to events, as with playing musical rhythms in time? To a human, a latency of > 15 ms (or even lower for sensitive musicians) becomes very noticeable very quickly when playing on an electronic instrument; but how is that... how does that fit with the rest of our consciousness experiencing such a time delay?

What about computer games and real-life activities where reactions need to happen on the scale of ~250-300 ms? Does our awareness delay factor into that too?

Does that have any implications about free will during these and other activities? Would that make a sort of delayed epiphenomenalism seem the most logical position for the philosophy of mind?

Questions, questions, questions that I should seek out answers to. But I'm too lazy, so questions they shall remain.

Monday, March 2, 2009

Review of "Two Dogmas"

(Gasp, who could have imagined I'd ever do something that approximates real philosophy on this blog?)

Quine's paper on the analytic/synthetic divide, "Two Dogmas of Empiricism", has always bugged me--chiefly because it feels very important to me, yet I have a bloody devil of a time understanding it. And it certainly is a seminal work for the twentieth century, so it can't hurt to improve my understanding of it a little.

To that end, I'm making a rough outline for what I understand Quine to be doing in the darned piece. That is, I'm going to peruse "Two Dogmas" on a paragraph-by-paragraph basis and try to give a feel for how the pieces fit together in his overall argument. It seems to me that the hardest parts of this essay are keeping track of where you are and identifying how his specific strategies function, so I'll endeavor to make that clear. Critiquing may also occur at a few points.

Anyway, tally ho! Onward!


Quine's "Two Dogmas of Empiricism"

Quine formally divides his paper into six sections, but I think of the big picture as dividing into three larger "macro sections": first, the preface where Quine gives context and explains what he means by analyticity (intro remarks and §1); second, the parts where he considers methods for understanding analytic statements but rejects them as inadequate (§§2, 3, and 4); and finally, the remaining parts where he studies the ramifications for empiricism (§§5 and 6). For my purposes, I'm most interested in §§1-4 (the first two "macro sections"), and I'm going to completely ignore the remainder.

1. Background for Analyticity
Here, Quine paints a basic backdrop for the concepts "analytic" and "synthetic": Kant's characterization, the Humean and Aristotelian precursors, intension/extension, Carnap's take on analyticity, blah blah blah. As I see it, a lot of this may be dismissed as irrelevant to his actual argument except insofar as it makes sure we're all on the same page when he uses these terms. (Much like the beginning of any other philosophical paper.) Quine considers a few different interpretations or conceptions of these things I don't care to describe, then rejects a few for reasons I don't care to get into. The important points to take away from this section are
  1. Analytic statements are true either by virtue of their being simple logical truths or by virtue of their using synonyms. (Example of logical truth: "All unmarried men are unmarried"; example of synonymous truth: "All bachelors are unmarried").
  2. Logical analytic truths are unproblematic; Quine's got no beef with them. This "synonymy" business, however, needs further explication.
  3. Synonymy cannot be explained away by referring to analogous logically true analytic statements, because any such account presupposes the notion of synonymy itself.
A further note on point 1: the key way to recognize logically true analytic statements, as opposed to synonymously true ones, is that the former may be directly formulated as tautologies whose truth relies on their syntactic form, while the latter may not. To wit, "All unmarried men are unmarried" may be translated to ∀x:(Ux & Mx)->Ux, where U means "is unmarried" and M means "is a man"; this proposition is tautologically true no matter what we substitute for U, M, or x. However, to formulate "All bachelors are unmarried", we would have to write ∀x:Bx->Ux, where B means "is a bachelor", which is not a tautological truth. (Quine does not state this in precisely the same manner, but this is my construal of "a statement which is true and remains true under all reinterpretations of its components other than the logical particles" (pp. 22-23).)
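
To see the contrast concretely, we can brute-force the propositional skeletons of the two statements. This is a sketch of my own; it drops Quine's quantifiers entirely, which is enough to show why only the first statement is true by form alone:

```python
from itertools import product

def tautology(formula, n_vars):
    """True iff the formula holds under every truth assignment."""
    return all(formula(*vals)
               for vals in product([True, False], repeat=n_vars))

implies = lambda p, q: (not p) or q

# "All unmarried men are unmarried": (U & M) -> U holds whatever
# U and M are taken to mean.
print(tautology(lambda u, m: implies(u and m, u), 2))   # True

# "All bachelors are unmarried": B -> U fails when B is true and
# U is false, so its truth cannot rest on syntactic form alone.
print(tautology(lambda b, u: implies(b, u), 2))         # False
```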

An obvious reply might be, "What if we add a premise which states that Bx <-> (Ux & Mx)?" Quine seems to think that this move presupposes synonymy itself, and thus we cannot use it. I'm a bit skeptical about that response, and I wish he would give more argument against it, but we'll see what he says in later sections.

Anyway, Quine's basic strategy from now until §5 is to investigate possible explanations for synonymy, but then to reject each, more or less because they always rely on an unclear or circular definition.


§§2, 3, AND 4 TO COME LATER. ...Possibly in separate posts.


---------------------
Reference:

Quine, W.V.O. 1961. "Two dogmas of empiricism". In From A Logical Point of View, 20-46. New York: Harper Torchbooks.

(Also available online for free.)

Tuesday, February 24, 2009

Will, Belief, Experience

Do we consciously choose what to believe?

It seems to me that I cannot choose to disbelieve that the world exists, in the same way that I cannot disbelieve that my throat is sore (at this moment, it actually is sore). There's something about the immediate presence of certain stimuli and observations that seems to force states of mind upon me, states which dictate that "things are a particular way".

Now, I can sort of construct an artificial barrier between myself and the belief. I can do some dialectical footwork to maybe convince myself that what I'm thinking of as "being sore" really isn't. Or maybe my throat (or my conscious self, for that matter) doesn't exist, so there's nothing to be sore. Or, maybe, I can detach myself slightly from the experience: focus on other things, or meditate on the painful sensation in such a way as to attenuate its sting.

But I can't seem to rid myself of this persistent belief that there is something painful I am experiencing. I can't simply decide, "I no longer believe that it hurts when I swallow"--because somehow all that I feel at a given moment conspires to reject that belief as untenable? ... Or something?

Maybe it's not "belief" that I'm talking about. Maybe it's a primal awareness, and I'm simply attributing implicit "beliefs" to any states of awareness at all. But okay then, does that mean when I decide, "My throat no longer hurts", I really do now believe that my throat doesn't hurt, in spite of the continued discomfort I feel? That seems obviously wrong, somehow.

Probably, I can't convince myself that there is no table in front of me (unless I have good reason to believe that it's an illusion). I can try, and I can reach forward as though to swipe my hand clear through the illusory table without resistance. But I will be lying to myself as I do so, because I will know (or at least strongly believe), in some sense, that my hand will in fact meet resistance.

Really, all I'm doing is convincing the conversational (logical?) part of my mind, whereas the neuro-hardware that processes these things remains unconvinced. Probably it needs other triggers.

Can I look down at a patch of grass and convince myself that I'm looking at trees and vegetation from two miles up, thus inducing vertigo?

How is it that being told "X was just in this room" can elicit such an emotional response in me, whereas simply thinking "X was just in this room" in isolation, when I have no good reason to actually believe it was true, elicits nothing? Why can't I manufacture conviction artificially?


{Is there a connection between sanity and "proper" beliefs?}


Beliefs like, "The world is round/flat" and "God does (not) exist". For these propositions, I don't see anything which sticks out so obviously as a barrier toward changing belief; they're much too abstract. Maybe I really can change these through will alone. But in these cases, there's still a corpus of other beliefs and actions which conflict with my supposed espousal of a contrary belief: I can declare "I was an atheist just one moment, but now I fervently believe in the Judeo-Christian God", yet I won't really accept or embrace that conclusion. I won't feel the truth of it, yet I will feel like a duplicitous moron when praying or trying to act as though God really did exist.

Is, then, a vital component of belief feeling? The experience of certainty?

Thursday, February 5, 2009

Fallibility vs Infallibility

And binary.

This is another variant of the same old theme I keep rehashing.

If we have no perfect (infallible) knowledge, then we cannot know with certainty that we have no perfect/infallible knowledge. Even as fallible beings, we must know at least that--but then, we're not completely fallible after all. Just, mostly.

Maybe 99.99%.

...

In the mystical traditions, "one" is unity, "two" is a division (from a whole to distinct things). Dualistic thinking. If we had no powers of discrimination whatsoever, presumably we would experience all things as one giant homogeneous, indistinguishable conglomerate.

That is, assuming it is possible to "experience" at all without discriminating in some way or another. Don't our senses detect contrast best? When exposed to a single, unwavering stimulus, that stimulus loses its edge, its flavor, its ability to be sensed at all?

From the concept of "two"--of one thing distinct from another--we can build up, perhaps, the entirety of mathematics (and thus, we think, physics, nature, human thought) through binary; binary being a meaningful alternation between two distinguishable states (represented in computers as "0" and "1").

If we can reduce any piece of information to "yes" or "no" questions--true or false statements--then we can represent it as a series of bits. Just as, even before George Boole came along, logic traditionally separated all propositions into those which are the case and those which are not, knowing that something cannot both be the case and not be the case simultaneously. Presumably, a mind simply needs to know, from the infinite list of propositions, which are true and which false. (Perhaps we would need to know which propositions are senseless or lack truth values too; but then we shouldn't have included them in the list to begin with.)
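
A trivial sketch of that reduction (mine, purely illustrative; the propositions are invented placeholders): fix an ordering of propositions, and any assignment of truth values collapses into a single integer.

```python
# Three propositions, one bit each; a "state of the world" (relative
# to this list) is just the integer whose bits record their truth values.
propositions = ["snow is white", "2 + 2 = 5", "grass is green"]
truth_values = [True, False, True]

world = 0
for i, v in enumerate(truth_values):
    if v:
        world |= 1 << i              # set bit i when proposition i is true

print(bin(world))                    # 0b101
print(bool(world & (1 << 2)))        # "grass is green"? -> True
```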

This would be sufficient for omniscience...? Leibniz thought so, or at least that God comprehended the universe through such eyes.

Infinity does seem to present a problem, among many other obstacles. Recursion, describing oneself while describing the universe.

And what about the infinity of imaginary possibilities? Are conditional/subjunctive statements "true" or "false"?

Thursday, January 29, 2009

Leibnizesque Proclivities

I recently borrowed Bertrand Russell's A Critical Exposition of the Philosophy of Leibniz, and I'm rather enjoying what little I've read of it so far.

One thing that struck me is the description (in the beginning) of Leibniz's disinclination toward publishing a fully developed system. Rather, it seems Leibniz tended to craft arguments in response to private correspondence, or to issues which he bore some personal connection to. Unfortunately, this makes giving a comprehensive take on his views a bit difficult, since most of it is fragmentary.

The amusing thing is that I think I find myself doing something similar. I rarely write for the sake of writing--I will always be most motivated to give an in-depth argument when it's directly in response to someone else, peculiarly enough.

Maybe I should work on that.

Sunday, January 25, 2009

Creation

Is it satisfying to create a world in which your creatures achieve their desires, so that you may vicariously effect wish fulfillment? To craft a fantasy where your own dreams (and perhaps others') come true?

Is it satisfying to create a world in which your creatures suffer through a morass of confusion, despair, and ever-frustrated longing, so that you spitefully ensure they are never happier than yourself? To craft a fantasy where no one's dreams come true?

Thursday, January 22, 2009

Vacuous Truth

From my post in a topic in the xkcd forums:

"Every woman currently living on Pluto is male."

"Every true falsity is false."
"Every false truth is true."

"Every circular square is neither circular nor square." (This one almost makes an intuitive sense... but then, it really doesn't, when we consider that every circular square is also circular and square.)

"Every present King of France is simultaneously bald and not bald."

By the mathematical/logical principle that universal propositions are vacuously true when their subject class is empty, no matter what is predicated of that subject, the above statements should be considered true.
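
Programming languages, incidentally, bake this convention in: a universally quantified claim over an empty collection comes out true. A quick illustration of my own (not from the forum thread):

```python
women_on_pluto = []   # the subject class is empty

# "Every woman currently living on Pluto is male."
print(all(person == "male" for person in women_on_pluto))   # True

# Even a flatly contradictory predicate holds vacuously:
print(all(False for _ in women_on_pluto))                   # True
```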

This relates to the problem of negative existentials in the philosophy of language, but I don't have more to say about it at the moment. May investigate further.

Must Perfection Equal Stagnation?

At various times I have encountered, or myself espoused, the view that anything which is perfect must be, in some sense, static. Religious critics might apply it to the idea of heaven, to show that any possible heaven must be a boring, stagnant place; similarly, we might argue that we're better off not being able to reach a state of complete perfection, because perfection would admit no change--and then what would we do?

The reasoning goes something like this: for change to occur, an entity or circumstance must become other than it is currently. That is to say, at one time we may say of x that Px, while at another time we may say ~Px. So, if x is currently in a state of perfection but that state changes, surely that change now negates the previous, perfect state of x, thus rendering x imperfect.

An analogy with which we should all be familiar is grades: suppose a student has the perfect grade of 100% in the first week of a course. Suppose further that the teacher never allows extra credit, so at this moment the student possesses the highest grade attainable. From here on out, the only possible change to her grade would be to a lower value; so if at any point during the remainder of the semester her grade experiences any form of change, it must be to a state of imperfection (< 100%).

So far, so good. If perfection be defined as having a grade equal to 100%, this is all true. Any change to that number necessarily yields an imperfect grade. The problem comes when we assume that, because the state of being perfect must not change, therefore nothing else pertaining to the object or circumstance in question can change. And that assumption is simply unwarranted.

To continue with the grade analogy, we should recognize that even while the total grade percentage does not change, a number of other factors do: as the school term progresses, the student continues to do assignments and turn them in. The teacher then grades these new assignments, sums the student's earned points so far, and divides that number by the highest possible point total. So while the grade percentage does not change, the student's earned sum continually rises. And of course, throughout the term, the course progresses, and the student strives and sweats over homework, studies for tests, etc.
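
A toy calculation makes the point (the weekly point values are invented for the sketch): the percentage sits fixed at 100% while the underlying sums keep growing.

```python
# Each week the student earns full marks; the percentage never moves,
# but the earned and possible totals rise continually.
earned = possible = 0
for week, points in enumerate([10, 25, 15, 30], start=1):
    earned += points
    possible += points
    print(f"week {week}: {earned}/{possible} = {100 * earned / possible:.0f}%")
```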

To me, this makes an obvious case that perfection can occur even while the qualities being judged for perfection change. In my example, the percentage remains at 100% even while the sums increase. This suggests the more general point that "perfection" can describe processes, not merely singular, unchanging states. Which really ought to be obvious, since, after all, we can easily set up (artificial) criteria for a perfect performance or the perfect execution of a technique in music, dance, etc. Yet clearly change does occur in these cases, for they are activities, not frozen states, after all.

Now, the point about lack of change is still true insofar as a perfect performance or process must never deviate from its perfect criteria on pain of falling from perfection. That is to say, the perfect student must continue to score 100% for as long as we judge her, and in this way her score will be predictable and therefore stagnant. However, we can hardly say that the student's homework and grade as a whole are stagnant, since she continues to accumulate new points and produce new work.

Similarly, I say that heaven would be static only insofar as its residents and contents would not deviate from a particular type of perfection; but their behavior could easily be a process or performance, or at least analogous to such. In simplistic terms, if perfection in heaven could be gauged as a percentage of points achieved out of points possible, the points could rise (or hey, they could fall too) without becoming imperfect so long as the total points possible matches them. It could be like a series of games in which one continually succeeds.


It is also important to note that, every time we talk about "perfection," we must qualify ourselves by answering the question, "Perfect according to what criteria?" Would it not be possible to have a constantly changing set of criteria for perfection? And could there not easily be criteria which require change of some sort, as perhaps to run a perfect marathon requires that one change one's location and move one's limbs?

Monday, January 19, 2009

Gödelizing Gödel (and random thoughts)

It strikes me that Gödel's incompleteness theorem bears a bit of a resemblance to the types of skepticism I keep criticizing, insofar as it makes a universal statement about formal systems which seems to limit or hinder their power. Yet I wonder--could it be that making this kind of statement successfully requires an implicit perspective subject to the same critique the incompleteness theorem makes in the first place?

This is an idle thought; it does not seem to me that Gödel's theorem could be undermined or subverted by its own conclusion, nor am I qualified to investigate much further.

Analogously, that might have implications for the halting problem.

Just wondering.


And on a completely (?) unrelated note, I feel obliged to mention how Jorge Luis Borges' story "Funes the Memorious" features a man with complete eidetic/photographic memory, who experiences and recalls the "sensory manifold" in toto, rather than in bits and pieces as we do. (Here "sensory manifold" is meant roughly in Kant's sense, or whatever its modern analogue is.) As a consequence, the man sees little point in abstraction and in fact has difficulty recognizing the similarities between different individual dogs, or even between the same object seen from different angles. I'm reminded next (through reading Gregory Chaitin) of information theory and Kolmogorov complexity, where we judge the complexity of algorithms or objects by the smallest instruction set necessary to recreate them.

...

On the other hand

Just when you think there's hope...

Sunday, January 18, 2009

And sometimes...

[And sometimes, perhaps, desires can be trusted. Or, miraculously and all the more mysteriously, they can shape.]

I wonder then, were perfection nearly within grasp, how terrifying might that be? And too, how exhilarating, giddying! For Tantalus's hand to brush the grapes, his lips to graze the waters.

Imagine the moment following this unprecedented anomaly: his pulse races, a hitherto dulled and pessimistic mind comes alight, afire! Certainty (and its incumbent predictability) had blunted and wearied his existence. For surely Tantalus had realized – real-ized! – the despotic futility that ruled and overruled his every action, that denied him possibility of relief. Surely, he at last reached a point where unrelenting failure wore away the last of his persistence, leaving him stupefied, resigned, and stultified.

Imagine the moment prior: the cusp of despair, his head bows, and he begs penitently to the gods, as he has done countless times before. He knows well the gesture's uselessness: they will not heed him--and is it that they do not hear or do not care? Or is there anyone to hear? So long has been his imprisonment that who can say which beings existed, or whether fancy alone conjured up his divine imprisoners? Did he really host that profane and awful banquet? Or did he imagine his feat of hubris merely to justify his own torment? But watch now, as his head bows, his parched and broken lips touch for one instant the inconceivable--the impossible, the unreal--and shatter his dreadful certainty.

How this moisture? How this incomprehensible moisture, there long enough to shock yet gone before it can be tasted? What does it mean, what can it mean? That the gods have heard, relented? Or their powers wane and may now be circumvented?

Imagine his reckless rejuvenation as thoughts careen throughout his jolted mind. An intoxicating force invigorates his ailing hopes and catapults him beyond Reason; and with that same resurgence comes a creeping, dawning horror: he is poised now at the brink of lunatic conclusions, and if he stretches just a little further, will his lips find long-sought respite? Or will the conscious act of striving revoke his supposed progress and rebuke him all the more?

Be this redemption, or a god's mocking laughter?

Is it chance that furnishes him with hope, or insidious design?

Is this opportunity, or is it a desire's wishful mirage? Is there any action he can possibly take to sway the outcome either way? If he chooses wrongly, will he ever get this chance again? And so horrible, if not, to live on knowing that he'd once come so near perfection, but failed and damned himself.

Saturday, January 17, 2009

Hegelian Precursor to Derrida

If we cannot indeed make true statements about the whole (because they inevitably lead to an incomplete thesis-antithesis-synthesis triad, which in turn becomes a leg of a new triad...), then yes, I think Derrida's theory of deconstruction--wherein all statements undermine themselves--follows more naturally.

Note, by the way, that I know next to nothing about either Hegel or Derrida here. But anyway.

My objection to deconstructionism hitherto has been similar to my complaints about extreme skepticism generally: by attempting to demolish all foundations, one must necessarily presuppose a new foundation from which to do that demolishing. In other words, the statement "all propositions undermine themselves" necessarily undermines itself, rendering it false. Discussing that with a lit-theory friend of mine, he laughed and called that a beautiful part of the theory; to him, it "shows" the theory in operation on itself. It bothers me, however, since it leaves us with a paradox and/or a jumble of contradiction.


Parallels

We can view the deconstruction of "All propositions undermine themselves" as analogous to the eternal war between a skeptic and a dogmatist, or an unraveling of the Liar's Paradox. To wit, each valid step may be succeeded by a contradictory, equally valid step in the argument. E.g.,

(1) All propositions undermine themselves.

(2) "All propositions undermine themselves" undermines itself, by (1). Ergo, (1) is false.

(3) If (1) is false, then there can be propositions which do not undermine themselves after all. In which case, (2)'s reasoning is incorrect, because (1) might be one of those propositions, and so (1) might be true. Since (1) was given as an initial premise, we should then consider (1) true.

(4) (3)'s conclusion is a proposition which, by (1), undermines itself. Hence (3) must be false, and if (3) is false, then (1) is not true after all.

...etc, etc. This may be argued back and forth as long as we like without resolution.

[Note again: as said before, I know crap-all about Derrida. I have no idea if genuine deconstructions follow the form I just gave, and at the moment am too lazy to verify. Hooray, I'm a bad scholar. You caught me, want a prize?]

Just as with the Liar's Paradox,

(5) This statement (5) is false.

(6) Because (5) is false, (5)'s negation, "This statement (5) is true", must be true. Hence (5) is true.

(7) Since (5) is true, we know that the proposition "(5) is false" is true. Thus, (6)'s conclusion is false, because (5) is not true after all.

(8) Yet, if (5) is false, then its proposition "(5) is false" is false itself, meaning that (5) is really true. That means (7)'s conclusion is false.

... etc. Clearly, an infinite succession of licitly derived contradictions. I'm trying to make a point about each step directly contradicting the one before it, but that's probably confusing, and under an ordinary analysis it is not necessary, so let me show a more intuitive route which is equivalent:

(9) (9) is false.

(10) Because (9) is false, "(9) is false" is false. Thus (9) is true.

(11) But if (9) is true, then "(9) is false" is true. And that means (9) is really false.

(12) If (9) is false, the same reasoning as from (10) shows that (9) is true.

(13) But if (9) is true, then the same reasoning as from (11) shows that (9) is false.

(14) The same reasoning from (10) and (12) shows that (9) is true.

(15) The same reasoning from (11) and (13) shows that...

The main difference is that I'm not spelling out the contradiction of the last step so much as just reasserting either (10) or (11) to refute the last conclusion. Which is really the same thing, so why am I making a fuss about it? The Lord only knows. Really, the smart thing to do is to stop as soon as you've found a contradiction in the argument (since otherwise we run into problems with explosion), but I'm trying to make the analogy to Hegel more explicit.
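
If it helps, the oscillation can be modeled as a toy loop; this is only a sketch in Python, where the "derivation step" is my own stand-in for the reasoning of (10) and (11). Each pass refutes the previous conclusion by flipping the assigned truth value, and no number of passes ever settles on a fixed point:

    def liar_steps(n):
        """Yield the truth value assigned to (9) after each derivation step."""
        value = True                # suppose a step like (10) concludes that (9) is true
        for _ in range(n):
            yield value
            value = not value       # the next step refutes the last conclusion

    print(list(liar_steps(6)))      # [True, False, True, False, True, False]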


And speaking of whom, back to Hegel.

Suppose we call the assertion that "The Liar's Paradox statement is true" our thesis, and "The Liar's Paradox statement is false" our antithesis. Clearly, we can always reason toward thesis or antithesis, successfully proving or disproving each conclusion however many times we like, without ever reaching a final resolution. To Hegel, I believe, we should then realize the futility of this exercise, at which point we need to step outside of the system and create a synthesis between thesis and antithesis. Being a Hegelian neophyte, I don't know what the synthesis should be in this case, but it might be something like, "The Liar's Paradox is both true and false," or "partially true, partially false," or "true at one time, false at another," or some other means of effecting reconciliation.

Now, the fun part about Hegel is that he says the new synthesis, whatever it is, now becomes the thesis or antithesis of a new thesis-antithesis-synthesis triad, which will need its own extra-dichotomous resolution. And we approach (but never reach?) truth through an infinite chain of these triads--reminiscent of Kant's moral progression, where practical reason must postulate an infinity of time (or lifespans?) through which we imperfect beings aspire toward perfection.

Now, where this is relevant more specifically to my thoughts, is that an infinite chain of dialectical syntheses reminds me very strongly of the warring double-helix Ouroboros I mentioned earlier. It seems to me wishful thinking on Hegel's part to claim that the addition of a synthesis makes a new kind of "progression" or development. Rather, the chain of triads fighting with each other is precisely isomorphic to the chain of contradicting (10)s and (11)s I outlined above: a simply unending contradiction. So, either the Liar's Paradox and Derrida's deconstruction already exemplify Hegel's described growth, or there exists simply no progression to speak of either way.

Hegel's syntheses are attempts to establish a new "groundless ground" or self-supporting justification in each controversy. By my thinking, however, this is not progression, since the new synthesis remains just as much a part of the very system it attempted to escape. This is just like trying to defuse Gödel's incompleteness theorem by adding axioms to a formal system: it doesn't matter how many or what axioms you introduce; so long as the system remains consistent, recursively axiomatizable, and strong enough to express arithmetic, you will leave yourself open to a new version of the incompleteness theorem. (Unless you weaken your system's axioms to a point where they express less than you originally wanted.)

Now, we might be able to argue that there's a kind of progression/development/growth/whatever here anyway, but I'm not going to investigate further at this point. It may explain (in part) the impossibility of halting philosophical inquiry.

By now, it's beginning to seem to me, naively, that the history of thought is little more than a horribly convoluted deception, an intricate illusion, contrived to hide the fact that all we've been doing is saying "Nuh-uh!" and "Yeah-huh!" to each other for the last two thousand years.

Silly children.

Friday, January 16, 2009

Volition

Why is it that dreams do not permit free manipulation to suit our own ends (save in the case of the comparatively rare lucid dreams)? Since they are created entirely by our minds, and since we can exert some measure of control over them through thought and will, what is it that prevents a more thorough control/influence?

How does action differ from willing...?

Thursday, January 15, 2009

Skepticism and the Root of All Things

Vancouver Philosopher at The Chasm makes the following point about Heidegger:

Heidegger's denial about a fundamental unifying characteristic to philosophy is foolish. When we employ such skepticism about the reality of the ground/central concept, we only wind up grounding such skepticism in denial. The very act of denial becomes its own ground, and this strategy winds up being self-defeating in the end. Instead, we shouldn't think of various philosophical systems as themselves foolhardy in establishing a ground or framework. We should interrogate the framework or ground on its own merits.

This follows my own thoughts about skepticism regarding truth: we cannot sensibly make a statement like "There is no truth" without simultaneously undermining that statement by implying that it itself is true. For skepticism to assert a meaningful proposition (that is, a proposition which supplies us with genuine information), the skepticism itself must be grounded in something. And if the brand of skepticism at hand denies the existence or reliability of all grounds, it necessarily denies its own conclusions. Hence, as Vancouver Philosopher says, denial becomes its own ground, and it invariably ends up defeating itself.

Similarly, skepticism about our epistemic relation to truth undermines itself as well. Suppose our skeptic offers us "an irrefutable argument" that, whether there exist truths or not, we simply cannot possess knowledge of them. E.g., all claims must be justified by other claims, and there are no unjustified, self-supporting claims; ergo every justification launches an infinite regress, and no claim is ever ultimately justified or acceptable. And yet, if we accept the skeptic's conclusion, we (presumably) have now acquired a new bit of knowledge, viz., the knowledge that "All knowledge is impossible", or "We possess no knowledge"! Wait, how did that happen? How do we know this when we can't know anything? Again the skeptical conclusion undermines itself; it presupposes its own new ground from which to criticize the whole of another position, yet in doing so creates and relies upon that which it seeks to demolish.

Similar arguments apply to skepticism toward the legitimacy of reason generally.


Roots

The essential problem, as I see it, is that the skeptic must engage the opponent on her own territory, so to speak, and using her own tools. This makes strategic sense, since otherwise why should the anti-skeptic (let's say "dogmatist") accept whatever point the skeptic tries to make? Unfortunately, this is a rigged game for the extreme skeptic: when playing by the dogmatist's rules, there is simply no way to "win" or "break outside" the system, because every attempt to do so places you right back into a new system. (This all ties into the Liar's Paradox, Gödel's Incompleteness Theorem, and the Halting Problem; I don't know how to make this more explicit yet, but see Gödel, Escher, Bach by Douglas Hofstadter for related discussion).

Now, life isn't exactly a bed of roses for the dogmatist, either, since she can never really answer the skeptic's demands for an "unmoved mover" in the realm of logic, so to speak. But it's fascinating to imagine the two positions, intertwined together as an infinity of recursive, dialectic contradiction. The skeptic asserts, "You know nothing"; the dogmatist responds, "Then I must know that I know nothing"; the skeptic rejoins, "But to know that, you must presuppose other principles for which you have no justification either!"; and the dogmatist counters, "But in order to know that I need those principles, surely I must know that I need them, so I do know something after all!", and so on ad infinitum. Or ad nauseam, take your pick. The two warring sides endlessly wrap around each other, neither overcoming the other--perhaps like twin strands of intertwined DNA which end up consuming their own tails as an involuted Ouroboros?

But let's not get ahead of ourselves, nor lost in mystical/metaphorical speculation. The key here is that we could apparently resolve the tension if we could ever find a groundless ground. What I call a "groundless ground" shows up in many places: as Aristotle's "prime mover," as Aquinas' "uncaused cause," as the idea of a necessary entity or fact in general, as a self-caused being, as that which needs no justification, as a "primitive fact," as Kant's description of "the unconditioned." To find such a ground and look out from it would yield the fabled view from nowhere, the view sub specie aeternitatis, the God's-eye view. This perspective, and none other, would be sufficiently "outside of the system" to satisfy both skeptic and dogmatist. (Hopefully.)

It's no surprise, then, that my description of the clash above mirrors Kant's Antinomies of Pure Reason, where my "dogmatism" would map onto the rationalist understanding of metaphysics, and skepticism onto the empiricist understanding. Roughly, anyhow. To be more precise, Kant thought that both rationalist and empiricist sought a "groundless ground" (the "unconditioned"), but they sought it from different starting premises; whereas my dogmatist/skeptic divide does not show both sides seeking an unconditioned ground so much as the skeptical side denying that there is any such thing. So take the comparison with a grain of salt--I'd have more to say about justifying my analogy/mapping, but I'm running out of motivation at this point.

Tuesday, January 13, 2009

...

.. to desire a thing so much that one is paralyzed by the very possibility of it being unattainable...

Friday, January 2, 2009

Silly details about numbers

The ancient Greeks found numbers fascinating in an unprecedented way (or so Morris Kline's Mathematics for the Nonmathematician informs me). Rather than adopting the more practical attitude of the Egyptians and Mesopotamians toward numbers, the Greeks recognized a conceptual beauty in the ability to use the self-same methods on any possible collection of objects. That is to say, abstraction, and the universality thereof.

To assign a single, unique label to every quantity, and then to perform mental operations upon them which, amazingly, correctly modeled comparable situations in reality--astonishing. And I do mean that without irony.

Take a pie or any appropriately divisible object. Cut it into halves, then cut each half into thirds. We take it for granted today that the answer may be computed easily through a mechanical procedure: (1 * 1/2) * 1/3 = 1/6. This is to say that, without having made any cuts or further measurements, we already know with certainty what size the smaller pieces will be! (Or the size that they will approximate, because it is not an ideal world.)
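
In fact, the procedure is mechanical enough that a few lines of Python, using the standard fractions module, reproduce the computation exactly (the pie-cutting framing is just this post's running example):

    from fractions import Fraction

    # Halve the pie, then cut each half into thirds.
    piece = Fraction(1) * Fraction(1, 2) * Fraction(1, 3)
    print(piece)   # 1/6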

But this is a fantastic discovery, for by beginning from only a few known facts, we deduce what must be the case for physical objects once they are manipulated--all without having left our armchairs.