Friday, 3 April 2009
My Theory of Morality
I have always been non-religious, believed that nothing is black or white, and that life is inherently meaningless. A religious fundamentalist would cringe at the thought of having a person with my beliefs be a part of his society. However, if he were to examine my life’s history, he would find me to be a pretty moral, law-abiding person. For as long as I can remember, I have been struggling to find a logical justification for being a moral person. No matter how often I thought about the meaninglessness of being moral, I continued striving to do good and to stay away from doing evil. Well, the confusion is over; I have finally come up with a rational explanation of why everyone should be moral.

The bad definition of “moral”

Part of why morality is so confusing is how the word is conventionally used. People think that a moral rule (e.g. it is bad to murder) should apply in any situation and to any thinking agent. This way of using the word makes any moral guide (think Ten Commandments) vulnerable to all sorts of moral dilemmas. For example, should someone kill a person to save 100 others? Should someone die for their country, or for anything? I have broadened the definition of “moral” in a way that leads to no moral ambiguity.

My definition of “moral,” which I will refer to as “moralL”

MoralsL: the set of behaviors and actions (that affect others in any way) that one should use in a collaboration or co-existence of two or more people in order to maximize personal utility. Whenever there is a group of two or more people who can benefit from each other, there exists a set of rules-of-interaction which, if followed correctly, will maximize the individual’s utility/happiness. These rules may change depending on the people within the group and on the number of people in the group. You want to be moralL because doing so benefits you.
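The definition above amounts to a simple decision rule: among the actions available to you that affect others, pick the one with the highest personal utility. Here is a minimal sketch of that rule; the action names and utility numbers are hypothetical, made up purely for illustration:

```python
# Hypothetical sketch of the moralL rule: among candidate actions that
# affect others, choose the one with the highest personal utility.
# Action names and utility numbers are invented for illustration only.

actions = {
    "cooperate": 0.8,  # personal utility from cooperating with the group
    "cheat":     0.3,  # short-term gain, but retaliation lowers utility
    "withdraw":  0.1,  # forgo the gains of interacting at all
}

def moral_l_choice(actions):
    """Return the action that maximizes personal utility."""
    return max(actions, key=actions.get)

print(moral_l_choice(actions))  # cooperate
```

Note that nothing in the rule itself says cooperation wins; it wins here only because, for most people in most groups, the rules-of-interaction make cooperation the utility-maximizing move.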
Evolutionary explanation for moral misconceptions

It makes sense that natural selection favored the humans and human ancestors who interacted with others in ways that increased their fitness. Humans can realize non-zero-sum gains in fitness by interacting with other humans in certain ways. I’ll call this set of rules-of-interaction moralsE. The genes that predisposed or programmed humans to interact in a fitness-maximizing way (i.e. morallyE) would come to dominate the gene pool. So now we have a population of humans who get positive emotional reinforcement (though not always) from behaving morallyE. The set of moralsE may be only a sub-set of the set of moralsL, or it may be completely different! Today’s religions and societies try to follow a moral code that contains some combination of moralsL and moralsE. Our emotional impulses bind us to moralsE that aren’t contained in the set of moralsL.

Why we should only follow moralsL and not moralsE

First of all, understand that our reproductive fitness can be a function of our happiness and therefore part of our moralL code. Suppose there are two conflicting actions (that affect others): option one leads to a 10% increase in happiness, while option two leads to a 5% increase in happiness along with a 2% increase in reproductive success (assuming the happiness associated with that 2% increase in reproductive success is already accounted for in the 5%). I don’t see why we should pick option two over option one. There is no reason to increase our reproductive fitness other than the fact that doing so sometimes makes us happy. In a universe empty of absolute right and wrong, why bother following any moral code that isn’t about maximizing personal happiness?

Now to answer questions that I know are on your mind

If someone maximizes their happiness by going on murderous rampages, why should that count as a moral deed?
You would first need to accept the unlikely premise that someone maximizes their happiness by dramatically increasing their probability of dying (capital punishment, avengers, self-defense by victims) and of sitting in prison, in exchange for killing some people. Some such people do exist, so this isn’t a trivial example. Sure, they are acting morallyL, but expect natural selection to weed them out quickly. Natural selection has favored genes that make our happiness a function of other people’s well-being. Of course, not everyone has these genes or has them turned on, but there isn’t much to worry about. If natural selection doesn’t weed them out, a benevolent society (one containing a majority of people who care about its members) will.

How this moral theory explains moral dilemmas

Should someone die for their country, or for anything? You should only die for something if not dying for it would leave you with a life of such negative utility that you would commit suicide anyway. If I don’t die for my country, will I live with such embarrassment, sadness, guilt, etc. that I would want to commit suicide and die either way? The same reasoning applies to decisions about whether you should die, or kill someone, in order to save n other people. What about joining the army and merely risking your life for your country? In that case you have to ask yourself whether the positive expected utility of risking your life for your country (or any cause) outweighs the negative expected utility of doing so. For example, if joining the army raised your probability of dying by 10%, you might consider it, but not if it raised it by 50%. Yet if saving your family from death required a 50% increase in your probability of dying, you might still consider it. I am just demonstrating that risking death for a cause is all relative to your expected utility.

Should we stop the genocide in Darfur?
Well, that answer depends on whether the net expected utility is positive or not. The surprising conclusion is that it may not even be worth it to stop the genocide (maybe we are better off just deterring it or slowing it down). There is no absolute right or wrong; there just is what there is: a bunch of evolved beings with different yet similar utility functions, interacting with other beings in ways they hope will maximize their individual utility.

How this theory affected my moral behavior

It hasn’t. I still do and believe the same things; my preferences haven’t changed. What has changed is my perspective on moral ambiguity. With my theory, I no longer see moral dilemmas as unanswerable problems. There is always a rational moralL answer, and it differs from person to person. That moralL answer is the one that leads to maximizing happiness. We shouldn’t let anything stray us from acting morallyL, not even a religion telling us what it thinks is best for us.
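The expected-utility reasoning used above, for risking your life for a cause and for intervening in Darfur, can be made concrete with a small sketch. The utility numbers here are made-up placeholders, and the linear expected-value form is my reading of the argument, not anything the post itself specifies:

```python
# Sketch: risking death for a cause is worth it iff the expected utility
# of acting exceeds the baseline (doing nothing = 0 utility).
# All numbers below are hypothetical placeholders.

def expected_utility(p_die, u_success, u_death):
    """Linear expected utility: survive with probability (1 - p_die)."""
    return (1 - p_die) * u_success + p_die * u_death

u_cause = 10.0   # hypothetical utility if you survive and the cause wins
u_death = -50.0  # hypothetical (very negative) utility of dying

for extra_risk in (0.10, 0.50):
    worth_it = expected_utility(extra_risk, u_cause, u_death) > 0
    print(f"extra risk of dying {extra_risk:.0%}: worth it? {worth_it}")
```

With these particular placeholders, a 10% extra risk comes out positive and a 50% extra risk comes out negative, matching the army example; raise the stakes (say, saving your family) by making u_cause large enough and even the 50% risk flips to worth it. That is the sense in which the answer is relative to each person's utilities.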