Sunday, May 31, 2009

The Future of Rationality

It occurred to me I haven't bothered you with my random thoughts for a while. Not that I lacked them; they just possibly were a little too random to make it into written words.

I wrote this sentence with my toes, just to amuse you.

So here's a thought I've been pushing around for a while. Are we on the path towards more or less rationality? The last several hundred years were marked by increasing rationality: the rise and success of the scientific method, the Age of Enlightenment, the decline of religion and superstition, and so on. But looking around these days, it seems that more and more people are scared by the prospect. If you extrapolate that trend, where will it lead us? Maybe there are just things we don't want to know. (See also The Right Not to Know.)

It seems to me there's a sentiment in the air that we need more "spirituality," more "magic," more "wonders" in our increasingly technological world based on mechanical engineering and computer algorithms. Some people want to "reinvent the sacred," others emphasize "emotional intelligence" or "the power of thinking without thinking." Blink.

While I think some of these arguments aren't very insightful, there are two aspects I'm sympathetic to.

For one, I think there is at any one time a limit to what humans can possibly know, possibly even a limit to what we can ever know, and we should be more aware of that. That means, for example, that instead of being scared by gaps in our knowledge or dismissing them as a failure of scientists, we should recognize the relevance of acknowledging and dealing with uncertainty, incomplete knowledge, and 'unknown unknowns,' as well as be wary of The Illusion of Knowledge.

But besides that, putting the emphasis on rationality neglects other cognitive abilities we have. For example, many of us have on some occasion met somebody who, through their experience, has developed a strong intuition for what might or might not work. Even though they might not be able to come up with any precise "rational" argument, they have a feeling for what seems right or doesn't. Granted, they might be mistaken, but more often than not you'll benefit from listening to them. One of the most important gifts of the human mind, so I believe, is to make what Plato, as was mentioned on some occasion at this blog, called an 'intuitive leap' into the unknown. Without such leaps our space of discoveries would be severely limited. Rationality isn't always the path towards progress. (While not many insightful points were raised in the aftermath of the publication of Lee's book, I found it very interesting what Joe Polchinski had to say on the role of rigor in physics.)

Now let me step away from the human brain and consider, instead of a system of neurons, the systems that govern our everyday lives, like for example our political systems. They have some "rational" processes to deal with input and to decide on actions. They also have some emergency shortcuts resembling unconscious reactions. If somebody throws a pillow at you, you'll raise your arms and close your eyes without long deliberation on whether or not that's a good thing to do. If somebody drops a bomb on your territory, you don't want to get stuck in endless discussions about what to do.

But what about intuitions and emotions? Where is the space for them?

Let us take the credit crisis as an example. It's not that the people who were actively involved in building up the problem were completely unconcerned. They just had no way to channel their uneasy feelings. From a transcript of the radio broadcast "This American Life" (audio, pdf transcript, via):

    mortgage broker: ...it was unbelievable... my boss was in the business for 25 years. He hated those loans. He hated them and used to rant and say, “It makes me sick to my stomach the kind of loans that we do.”

    Wall St. banker: ...No income no asset loans. That's a liar's loan. We are telling you to lie to us. We're hoping you don't lie. Tell us what you make, tell us what you have in the bank, but we won't verify? We’re setting you up to lie. Something about that feels very wrong. It felt wrong way back when and I wish we had never done it. Unfortunately, what happened ... we did it because everyone else was doing it.

Italics added. My favourite part, though, was this:
    Mike Garner: Yeah, and loan officers would have an accountant they could call up and say “Can you write a statement saying a truck driver can make this much money?” Then the next one, came along, and it was no income, verified assets. So you don't have to tell the people what you do for a living. You don’t have to tell the people what you do for work. All you have to do is state you have a certain amount of money in your bank account. And then, the next one, is just no income, no asset. You don't have to state anything. Just have to have a credit score and a pulse.

    Alex Blumberg: Actually that pulse thing. Also optional. Like the case in Ohio where 23 dead people were approved for mortgages.

Well, so much for rationality. The point is, it's not that people didn't feel there was something wrong. It was just that the system itself had no way to address that feeling. The negative feedback they could have provided went nowhere.

Or take the academic system, one of my pet topics as you know. It's not that people think it's all well and great. In fact, they can tell you all kinds of things that don't work well, and some can complain seemingly endlessly. But the system itself has no way to address these concerns. The only way to improve it is external intervention, which however usually only takes place once things have gone really wrong.

It's like going out with a guy who, even though you can't say exactly what's wrong, makes you feel really awkward. But instead of just breaking it off, you go see a shrink who looks up in a book what you're supposed to do. That's about what's wrong with our political systems.

So what's the future of rationality? I think we'll need to find its proper place.

Aside: I believe that many of the arguments we have about rationality are based on a lack of definition. For example, if I intend to buy a new gadget, I will typically look at the first few offers and pick the one I like best, finito. Sure, if I had looked a little harder or a little longer I might have saved some bucks. But frankly, I'd rather pay more than spend an infinite amount of time on customer reviews. I think this is perfectly rational. Others might disagree. (And now encode that in your utility function.) This is to exemplify that rationality might not be easily or objectively quantifiable.
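The shopping strategy in this aside, looking at only the first few offers and stopping, is what decision theorists call "satisficing," as opposed to exhaustively optimizing. A minimal sketch of the difference (the prices and the cutoff of three offers are made-up illustration values, not anything from a real model):

```python
# Satisficing vs. exhaustive optimizing over a list of offers.
# Offers are prices listed in the order you'd encounter them while searching.

def satisfice(offers, k=3):
    """Look at only the first k offers and pick the cheapest among them."""
    return min(offers[:k])

def optimize(offers):
    """Compare every single offer and pick the globally cheapest."""
    return min(offers)

offers = [120, 99, 105, 89, 110, 95]  # hypothetical prices

quick_pick = satisfice(offers)   # 99: good enough, found after 3 lookups
global_best = optimize(offers)   # 89: cheapest, but required the full search
```

Whether the ten bucks saved by the exhaustive search are worth the extra time spent reading reviews is exactly the part that doesn't have an objective answer, which is the point of the aside.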
