Heuristics, or thinking by shortcuts
In everyday reasoning, we often use heuristics, i.e. simplified rules of thinking. They spare us from analyzing all the information that reaches us, but on the other hand such shortcuts lead to many mistakes.
One of these simplified ways of thinking is the availability heuristic: we judge the probability of events by how easily examples of them come to mind. For many people, this heuristic produces a fear of flying. Plane crashes are very rare and are therefore always covered in the media. This publicity makes them seem more likely than, for example, car accidents, even though driving is objectively more risky. In general, we overestimate the likelihood of dying from unusual causes such as flooding or suicide and underestimate the likelihood of dying from common causes such as heart disease and diabetes (Lichtenstein et al., 1978).
In one study (Bates & Gabor, 1986, as cited in Tyszka, 1999), respondents were asked to state the prices of various food products, such as sugar, flour, tea, bread, and butter, and then to answer questions about changes in those prices (e.g. “Has the price of the bread you usually buy changed in the last month? What was the old price and what is the new one?”). Respondents generally remembered both the current prices and, when a price had changed, the previous ones quite well. They fared worse, however, on general questions: “Have food prices changed in the last month?”, “Up or down?”, “By what percentage?” Many interviewees believed that overall prices had risen much more than they actually had. This was probably because it was easier for them to recall products whose prices had recently risen sharply than those whose prices had risen slightly or not at all. The greater mental availability of the products that had gone up sharply inflated the perceived overall increase. The availability heuristic also explains why spouses often overestimate their own contribution to household chores. What we did ourselves is simply more accessible in memory than what our partner did: we may have washed the dishes while our partner was not at home, or our last shopping bags were very heavy, so we remember them better.
We also make frequent use of the anchoring heuristic. When we estimate the probability of an event, a frequency of occurrence, or any other numerical value, we take some readily available number as our starting point (we anchor on it), and our estimate then depends largely on that number. An example is a study in which students and professional real estate agents assessed the value of a house (Northcraft & Neale, 1987, as cited in Wojciszke, 2006). They were given the asking price and inspected the house so that they could judge how much it was worth. They always assessed the value as lower than the asking price, but the higher the asking price, the higher their estimate. They anchored on the price given by the experimenters and adjusted their valuation from it. This was true not only of the students, who had no specialist knowledge of real estate, but also of the experts (although the experts' estimates depended less on the asking price than the students' did). This effect can be exploited, and many people probably do exploit it, for example when selling a used car. If we want not merely to sell the car for what it is really worth but to make as much profit as possible, it pays to inflate its value in the advertisement. Potential buyers will try to negotiate the price down, but the higher the starting price, the higher the final price someone agrees to pay is likely to be, even though it will obviously be lower than what was asked.
Interestingly, we can also anchor on a number that has nothing to do with what we are estimating. It can even be a random number, as in the study by Tversky and Kahneman (1974). Participants were asked to estimate various quantities (e.g. what percentage of African countries belong to the United Nations), and before they answered, a number was drawn on a wheel of fortune. Although the number was completely random, and respondents surely knew consciously that it had nothing to do with the correct answer, it still influenced their estimates. When the wheel stopped at 10, respondents estimated the percentage of African countries in the UN at 25% on average; when it stopped at 65, at 45%.
Another heuristic we use frequently is the representativeness heuristic. It is illustrated by one of the studies by Kahneman and Tversky (1973). Subjects were asked to estimate how likely it was that a person described in a brief sketch was a lawyer and how likely that he was an engineer. The subjects were divided into two groups: one group was told that the person came from a group of 100 people comprising 30 engineers and 70 lawyers, while the other group was told that the group comprised 70 engineers and 30 lawyers. One of the descriptions read: “Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful, and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies which include home carpentry, sailing, and mathematical puzzles. The probability that Jack is one of the 30 engineers in the sample of 100 is …%” The respondents took into account only to a small extent how many engineers and how many lawyers were in the group. In making their estimates, they were guided primarily by the description. If it matched the picture of a typical engineer, they assumed it described an engineer, even when lawyers were the majority in the group. This is the essence of the representativeness heuristic: making judgments based on similarity to typical (representative) cases, often disregarding other data.
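The error in this study is known as base-rate neglect, and Bayes' rule makes it concrete. The sketch below is a hypothetical calculation, not part of the original study: it assumes, purely for illustration, that Jack's description is four times as likely for an engineer as for a lawyer (a made-up likelihood ratio). Under that assumption, the normative answer should differ sharply between the 30-engineer and 70-engineer conditions, yet subjects gave roughly the same answer in both.

```python
def posterior_engineer(base_rate: float, likelihood_ratio: float) -> float:
    """P(engineer | description) via Bayes' rule in odds form.

    base_rate: prior probability of being an engineer (from the group composition)
    likelihood_ratio: how many times more likely the description is for an
                      engineer than for a lawyer (assumed value, not from the study)
    """
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

LR = 4.0  # hypothetical likelihood ratio for Jack's description

# 30 engineers / 70 lawyers condition
p_low = posterior_engineer(0.30, LR)   # ~0.63

# 70 engineers / 30 lawyers condition
p_high = posterior_engineer(0.70, LR)  # ~0.90

print(f"P(engineer) with 30% base rate: {p_low:.2f}")
print(f"P(engineer) with 70% base rate: {p_high:.2f}")
```

Even with a description that strongly suggests an engineer, the two group compositions should pull the rational estimates roughly 27 percentage points apart; ignoring the base rate, as the subjects did, collapses that difference.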
- Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80(4), 237–251.
- Lichtenstein, S., Slovic, P., Fischhoff, B., Layman, M., & Combs, B. (1978). Judged frequency of lethal events. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 551–578.
- Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
- Tyszka, T. (1999). Psychologiczne pułapki oceniania i podejmowania decyzji. Gdańsk: Gdańskie Wydawnictwo Psychologiczne.
- Wojciszke, B. (2006). Człowiek wśród ludzi. Zarys psychologii społecznej. Warszawa: Wydawnictwo Naukowe Scholar.
Author: Maja Kochanowska