Flaws in Human Investment Behavior

Source: 阅读投资
Prospect Theory
Prospect Theory holds that people usually do not think about problems in terms of overall wealth but in terms of winning and losing, caring about how much they gain or lose.
The Origins of Prospect Theory
Prospect Theory (PT) was first explicitly proposed by Kahneman and Tversky (1979) (hereafter KT), who argued that the characteristics displayed by individuals' choices under risk are inconsistent with the basic principles of von Neumann-Morgenstern expected utility theory.
First, they found that, compared with certain outcomes, individuals underweight probabilistic outcomes; they called this the certainty effect. KT also pointed out that the certainty effect leads to risk aversion when a choice includes a sure gain and to risk seeking when it includes a sure loss.
Second, they discovered the isolation effect: when individuals choose among different prospects, they disregard the components that all the prospects share. As a result, merely changing how a prospect is described can change the decision an individual makes. Third, KT found the reflection effect: when positive and negative prospects have equal absolute values, choices among the negative prospects mirror the choices among the positive ones. To account for these features of individual behavior that von Neumann-Morgenstern utility theory (VMUT) cannot explain, KT proposed their new model, PT.
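To make the certainty effect concrete, the sketch below uses the probability weighting function from Tversky and Kahneman's later (1992) cumulative version of the theory, with their estimated parameter γ = 0.61 for gains. The 1979 paper argues the point with choice experiments rather than this exact formula, so treat this as an illustration, not KT's original derivation.

```python
# Illustration of the certainty effect via the probability weighting
# function w(p) = p^g / (p^g + (1 - p)^g)^(1/g) from Tversky &
# Kahneman (1992), using their estimate g = 0.61 for gains.

def w(p: float, g: float = 0.61) -> float:
    """Decision weight attached to an outcome with probability p."""
    return p**g / (p**g + (1 - p) ** g) ** (1 / g)

print(w(1.00))   # 1.000: a sure thing keeps its full weight
print(w(0.95))   # ~0.79: a 95% chance "feels" like much less than 95%
print(w(0.05))   # ~0.13: while small chances are overweighted
```

The sharp gap between w(0.95) and w(1.00) is the certainty effect: shaving the last 5% of probability off a sure gain costs far more decision weight than the same 5% elsewhere in the scale.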
The Two Laws of Prospect Theory
Faced with a risky decision, do people shy away or forge ahead? The answer cannot be simple and absolute, because the decision-maker's circumstances, the state of the enterprise, and other conditions must also be considered. Setting those aside and considering only the risk itself, we can examine how psychology shapes decisions, and the conclusions turn out to be quite interesting.
Kahneman's Prospect Theory has two major laws: (1) when facing gains, people are cautious and unwilling to take risks; when facing losses, everyone becomes a risk-taker; (2) people's sensitivity to losses and gains differs: the pain of a loss far exceeds the pleasure of an equal gain.

Consider two instructive experiments. In the first, there are two options: A is a sure win of 1,000 yuan; B is a 50% chance of winning 2,000 yuan and a 50% chance of winning nothing. Which would you choose? Most people choose A, which shows that over gains people are risk averse. In the second, A is a sure loss of 1,000 yuan; B is a 50% chance of losing 2,000 yuan and a 50% chance of losing nothing. Most people choose B, which shows that over losses they are risk seeking.
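For illustration only, here is how the two experiments come out under the value function Tversky and Kahneman later estimated, v(x) = x^0.88 for gains and v(x) = -2.25·(-x)^0.88 for losses. The parameter values come from their 1992 paper and probability weighting is ignored, so this is a plausible sketch rather than the 1979 model itself.

```python
# Sketch: the two experiments under the value function v(x) = x^a for
# gains and -l * (-x)^a for losses, with a = 0.88 and l = 2.25 (the
# Tversky & Kahneman 1992 estimates); probability weighting is ignored.

A, LAM = 0.88, 2.25

def v(x: float) -> float:
    """Prospect-theory value of a gain or loss x (reference point = 0)."""
    return x**A if x >= 0 else -LAM * (-x) ** A

# Experiment 1 (gains): sure 1000 vs. a 50% chance of 2000.
print(v(1000), 0.5 * v(2000))    # ~436.5 > ~401.6: the sure gain is preferred
# Experiment 2 (losses): sure -1000 vs. a 50% chance of -2000.
print(v(-1000), 0.5 * v(-2000))  # ~-982.1 < ~-903.5: the gamble is preferred
```

Because the value function is concave over gains and convex over losses, the same 50/50 gamble loses to the sure thing on the gain side and beats it on the loss side, which is exactly the pattern the two experiments elicit.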
Looked at closely, however, the two experiments are really the same; only the wording differs. Suppose in the first experiment you have just won 2,000 yuan (taking that as the reference point). Then choosing A amounts to a sure loss of 1,000 yuan; and in B, the 50% chance of winning 2,000 yuan amounts to a 50% chance of losing nothing, while the 50% chance of winning nothing amounts to a 50% chance of losing 2,000 yuan.
This seemingly verbal experiment has real significance for management decisions. Suppose a company faces two options: investment plan A earns a sure profit of 2,000,000 yuan, while plan B has a 50% chance of earning 3,000,000 yuan and a 50% chance of earning 1,000,000 yuan. If the company sets a low profit target, say 1,000,000 yuan, then plan A looks like an extra gain of 1,000,000 yuan, while plan B either just meets the target or beats it by 2,000,000. Both read as gains, so most people choose plan A. But if the company sets a high target, say 3,000,000 yuan, most people choose plan B: employees reason that the target might still be reached and take the gamble. This means the boss (the decision-maker) can change employees' attitude toward risk simply by moving the profit target, as the sketch below illustrates.
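Under the same illustrative value function (again the 1992 parameter estimates, used only as a plausible sketch), the target-setting example can be computed directly: moving the reference point from 1,000,000 to 3,000,000 yuan recodes the same outcomes from gains to losses and reverses which plan is preferred.

```python
# Sketch: how the profit target (reference point) flips the preferred
# plan. Profits are in units of 10,000 yuan; same illustrative value
# function and 1992 parameter estimates as above.

A, LAM = 0.88, 2.25

def v(x: float) -> float:
    return x**A if x >= 0 else -LAM * (-x) ** A

def plan_value(outcomes, target):
    """Value of a plan whose (probability, profit) outcomes are coded
    as gains or losses relative to the profit target."""
    return sum(p * v(profit - target) for p, profit in outcomes)

plan_a = [(1.0, 200)]               # sure profit of 2,000,000 yuan
plan_b = [(0.5, 300), (0.5, 100)]   # 50/50 between 3,000,000 and 1,000,000

for target in (100, 300):
    print(target, plan_value(plan_a, target), plan_value(plan_b, target))
# target 100: A ~57.5 > B ~52.9 -- both plans read as gains, A wins
# target 300: A ~-129.4 < B ~-119.0 -- both read as losses, B wins
```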
Prospect Theory can also explain other decision-making phenomena. For example, when groups decide on rewards and punishments, rewards tend to err on the low side and punishments on the lenient side. Why this pull toward the middle? One explanation: those rewarded are not the majority, and a modest bonus suffices because what matters is the honor; people will not much mind a few yuan more or less. Those punished are also not the majority; having already been punished in reputation, a symbolic material penalty leaves them room to reform. This resembles the ancient Chinese doctrine of the mean: pardon others where you can.
Shortcomings of Prospect Theory
First, as a descriptive model, Prospect Theory shares the weaknesses of all descriptive models. Compared with normative models (models built on rigorous mathematical derivation), it lacks strict theoretical and mathematical foundations and can only describe behavior, so research on it can only make its descriptions better and better. In other words, it tells us what people do, not what they ought to do.
Second, applied research on Prospect Theory, especially in China, is still insufficient. As a descriptive model of decision-making under risk, it has great practical value and a very wide range of potential applications, yet current applied research is concentrated on financial markets, so its range of application remains to be broadened.
(Edited by 冯春影)
availability error
Most important human judgments are made under conditions of uncertainty. We use heuristics, or rules of thumb, to guide us in such instances as we try to determine what belief or action has the highest probability of being the correct one in a given situation. These rules of thumb are often instinctive and irrational. Social psychologists such as Thomas Gilovich, Daniel Kahneman, and Amos Tversky have studied several important heuristics and discovered errors associated with their use. One of these heuristics is the availability heuristic, determining probability "by the ease with which relevant examples come to mind" (Groopman 2007: p. 64) or "by the first thing that comes to mind" (Sutherland 1992: p. 11).
The problem with the availability heuristic is that what is available at any given time is often determined by factors that lead to an irrational or erroneous decision. Dr. Jerome Groopman gives the example of a doctor who had treated "scores of patients" over a period of several weeks with "a nasty virus" causing viral pneumonia. Then a patient presented herself with similar symptoms except that her chest x-ray "did not show the characteristic white streaks of viral pneumonia." The doctor diagnosed her as being in the early stages of the illness. He was wrong. Another doctor diagnosed her correctly as suffering from aspirin toxicity. The diagnosis of viral pneumonia was available because of the recent experience of many cases of the illness. Had his recent experience not included so many cases of viral pneumonia it is likely the doctor would have made the right diagnosis. After he realized his mistake, he said "it was an absolutely classic case--the rapid breathing, the shift in her blood electrolytes--and I missed it. I got cavalier."
The availability error explains, in part, the irrational behavior of those who, after 9/11, assaulted anyone they thought looked Middle Eastern. It explains, in part, the current rash of attacks on Mexicans and Mexican Americans in the U.S. Some politicians and some journalists have made immigration a hot-button issue and made ethnicity available as a reason for venting frustration at the economic situation in the country.
Lotteries do not try to sell tickets by emphasizing the statistical odds any ticket has of winning. Those who advertise lotteries do not want the first thing that comes to a potential ticket-buyer's mind to be the thought that he has a one-in-40-million chance of winning. Instead, they put forth recent winners in the hope that what will come to mind when the chance to buy a ticket arrives is the happy winner. A person is more likely to buy a ticket if the first thing that comes to mind is winning rather than losing. (You've probably got a better chance of being killed in a car accident while driving to buy your lottery ticket than you do of winning the lottery.)
No matter how much knowledge one has, one's experiences can undermine that knowledge when it comes time to apply it in a concrete situation. Experiences with deep emotional impact will affect one's judgment and trump one's knowledge. Your brother was killed in a plane crash so you decide to never fly in an airplane. But you drive thousands of miles a year to do concerts, even though the odds of your being killed in a car crash are significantly greater than the odds of your being killed in an airplane crash.
Anything that creates a vivid image is likely to override other, perhaps more rational, choices one might make. Advertisers don't worry that many of their words make no sense when looked at carefully. What matters are the images. Stuart Sutherland claims there are studies that show people can remember thousands of photographs a week after seeing them just once (1992: p. 15). How many words would we remember from a list of thousands a week later? Images stick in the mind, whereas words are often quickly forgotten. Seeing the Rodney King beating over and over again or seeing the Humane Society undercover film of cattle being tormented in a meat-processing plant can affect a viewer's judgment profoundly. But are the films representative of a general problem with police brutality and animal abuse, or were they aberrations? Even if the films completely misrepresent the truth, the images will overpower any words that try to make that point.
For most people, concrete data is more available than abstract data. Some think this is why the rate of correct solutions to the Wason selection task (a logic puzzle about which cards to turn over to test a rule) goes up when the problem is put in concrete rather than abstract terms.
The stock market is another place where the availability error exemplifies itself. Most people wouldn't think of buying a stock that has recently fallen in value, yet a good way to make money in the market is to buy low and sell high. (It's not the only way, of course. You can make money by earning dividends and holding on to a good stock for a long time or you can do it the way Martha Stewart did with insider information.) Yet, most people will only consider buying a stock if it's doing well, i.e., at a high value. Some people, apparently, buy stock on the advice of their hairdresser or of a stranger who sent them an email. The advice is concrete and readily available, but probably wrong.
As a teacher, I experienced a kind of reverse availability problem. After the fourth or fifth student during a term had told me that they missed an exam because a grandparent had died, I became suspicious. I usually had about 150 students a semester and some of them were old enough to be grandparents themselves. My guess is that they couldn't have had more than 400 grandparents among the lot of them. What are the odds that four or five of 400 grandparents of students in my classes in the Sacramento area would die within a 16-week period? I had no idea, so I said nothing. But it did give me pause.
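Out of curiosity, the question can be sketched with a binomial model. The 3% annual mortality rate assumed below is a made-up round number for the grandparent age bracket, not an actuarial figure, so the output is only suggestive.

```python
from math import comb

# Rough binomial sketch of the grandparent question. The 3% annual
# mortality rate is an ASSUMED round figure for the grandparent age
# bracket, not an actuarial number.
n = 400                  # the author's estimate of total grandparents
p = 0.03 * 16 / 52       # chance one grandparent dies during a 16-week term

p_at_most_3 = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(4))
print(f"expected deaths ~{n * p:.1f}, P(4 or more) ~{1 - p_at_most_3:.2f}")
# ~3.7 expected deaths; four or more occur about half the time under this
# assumption, so the excuses may be less improbable than they seem.
```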
One of the things that is disturbing about the availability error is the ease with which we can be manipulated by writers, filmmakers, pollsters, or anybody who presents us with a stream of words or images over whose sequence they have control. The order in which ideas, words, and images are presented affects our judgment. Sutherland notes experiments done by Solomon Asch demonstrating that earlier items in a sequence influence judgment more than later ones. Earlier items are more available to our minds than later items. It has been known for some time that you get different results when you reorder questions in a poll. Earlier answers influence later ones. In my critical thinking text, I advise that when evaluating extended arguments one try to read the argument at least once without making any judgments about the claims made. The reason is that once you start classifying or categorizing items, it will affect how you understand and evaluate later items. In short, you will bias your judgment if you start making judgments too early. This is true of any kind of investigation. If you make an early assessment, it will color your later evaluation of items and you will often find that your seemingly brilliant work was simply confirmation bias.
First impressions make lasting impressions.
representativeness error
As with the availability heuristic above, the representativeness heuristic is a rule of thumb we use to make judgments under uncertainty, and one whose associated errors social psychologists such as Thomas Gilovich, Daniel Kahneman, and Amos Tversky have studied. In judging items, we compare them to a prototype or representative idea and tend to see them as typical or atypical according to how they match up with our model.
The problem with the representativeness heuristic is that what appears typical sometimes blinds you to possibilities that contradict the prototype. Jerome Groopman, M.D. gives the example of a doctor who failed to diagnose a cardiac problem with a patient because the patient did not fit the model of a person likely to have a heart attack. The patient complained of all the things a person with angina would complain of, but he was the picture of health. He was in his forties, fit, trim, athletic, worked outdoors, didn't smoke, and had no family history of heart attack, stroke, or diabetes. The doctor wrote off the chest pains the patient complained of as due to overexertion. The next day the patient had a heart attack.
Another example from Groopman illustrates both the representativeness error and the availability error. A patient who appeared to be the model for bulimia, anorexia nervosa, and irritable bowel syndrome was misdiagnosed by some thirty doctors over a period of fifteen years. The more doctors who confirmed the diagnosis, the more available the diagnosis became for the next doctor. But it wasn't until she saw Dr. Myron Falchuk that she found a physician who looked beyond the model that the other doctors had used. Falchuk correctly diagnosed the patient as having celiac disease, an autoimmune disorder (an allergy to gluten) that causes an irritation and distortion in the lining of the bowel, making it nearly impossible for nutrients to be absorbed.
The key to avoiding the representativeness error is to always be open to the possibility that the case before you isn't typical. Force yourself to consider other possibilities. Something may look like a giant airplane flying across the sky, but it may be an illusion caused by having no reference point to correctly estimate the distance between you and the lights you see moving across the sky.
The gambler's fallacy is a type of representativeness error. Because, say, red has come up four times in a row on the roulette wheel, the gambler bets on black, thinking the odds are against five reds in a row. His model of the odds is wrong, however. The ball is as likely to land on red as on black on any given spin (assuming a fair wheel), including a spin that follows four reds, as the simulation below illustrates.
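A quick simulation makes the independence point concrete. It assumes a single-zero European wheel (18 red, 18 black, 1 green); the conditional frequency of red after four reds matches the unconditional frequency.

```python
import random

# Simulate a fair single-zero wheel (18 red, 18 black, 1 green) and
# compare P(red) with P(red | the previous four spins were all red).
random.seed(0)
wheel = ["red"] * 18 + ["black"] * 18 + ["green"]

spins = [random.choice(wheel) for _ in range(1_000_000)]
after_four_reds = [
    spins[i + 4]
    for i in range(len(spins) - 4)
    if spins[i:i + 4] == ["red"] * 4
]

overall = sum(s == "red" for s in spins) / len(spins)
conditional = sum(s == "red" for s in after_four_reds) / len(after_four_reds)
print(f"P(red) ~ {overall:.3f}, P(red | four reds) ~ {conditional:.3f}")
# Both hover around 18/37 ~ 0.486: past spins do not change the odds.
```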
Note: This article is a repost, provided for personal study and research only.