A book that will change the way you think... about how you think. Key takeaway: humans are deeply irrational and often make judgements based on poor intuition.
We use resemblance as a simplifying heuristic to make difficult judgments, causing predictable biases in our predictions.
Social scientists in the 1970s broadly accepted that people are generally rational, and emotions such as fear, affection, and hatred explain departures from rationality.
People tend to assess the relative importance of issues by the ease with which they are retrieved from memory, which is largely determined by the media.
Accurate intuitions of experts are better explained by the effects of prolonged practice than by heuristics.
Valid intuitions develop when experts have learned to recognise familiar elements in a new situation and to act in a manner that is appropriate to it.
When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.
When intuition fails, because neither an expert solution nor a heuristic answer comes to mind, we resort to slower, deliberate, and effortful thinking.
All operations of System 2 require attention and are disrupted when attention is drawn away.
Most of what System 2 thinks and does originates in your System 1, but System 2 takes over when things get difficult, and it normally has the last word.
One of the tasks of System 2 is to overcome the impulses of System 1. System 2 is in charge of self-control.
System 2 is too slow and inefficient to substitute for System 1. The best we can do is to recognise when mistakes are likely, and to try harder to avoid significant mistakes when the stakes are high.
We avoid cognitive overload by breaking up current tasks into small steps to be committed to long term memory; we are naturally drawn to solutions that use as little mental effort as possible.
Pupils are sensitive indicators of mental effort. The more System 2 exerts mental effort, the more they dilate.
We decide what to do, but we have limited control over the effort of doing it. The task at hand decides this.
Orienting and responding quickly to the gravest threats or most promising situations improved the chance of survival.
In the economy of action, effort is a cost, and the acquisition of skill is driven by balancing benefits and costs. Laziness is in our nature.
System 2 is the only one that can follow rules, compare objects on several attributes, and make deliberate choices between objects.
A crucial capability of System 2 is that it can program memory to obey an instruction that overrides habitual responses.
Multitasking is effortful. Time pressure is another driver. Any task that requires keeping several ideas in mind simultaneously has the same hurried character.
One of the main functions of System 2 is to monitor and control suggestions from System 1; however, it is often lazy and places too much faith in intuition.
Mihaly Csikszentmihalyi's flow is a state of effortless concentration so deep that people lose their sense of time, of themselves, and of their problems.
Flow neatly separates the two forms of effort: concentration on the task and the deliberate control of attention.
People who are cognitively busy are more likely to yield to temptation, make selfish choices, use sexist language, and make superficial judgments in social situations.
Controlling thoughts and behaviors is one of the tasks that System 2 performs.
If you exert self-control for a task, then you are less willing or able to exert self-control for a following task. This is called ego depletion.
When people believe a conclusion is true, they are also very likely to believe arguments that appear to support it, even when those arguments are unsound.
Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed.
"Engaged" people are more alert, less willing to be satisfied with superficially attractive answers, and more skeptical about their intuitions.
People who are not "engaged" are impulsive, impatient, and keen to receive immediate gratification.
System 1 provides impressions that often turn into beliefs and actions; even the most insignificant of ideas can trigger other ideas and so on.
The responses by System 1 are associatively coherent, yielding a self-reinforcing pattern of cognitive, emotional, and physical responses.
Cognition is embodied. You think with your body, not only with your brain.
Ideas are nodes in a vast network called associative memory, where causes link to effects, things to their properties, and things to their categories.
Priming is not restricted to concepts and words. Events that you are not even aware of prime your actions and emotions.
The ideomotor effect is the influence of an idea on an action. It also works in reverse: thinking of old age makes you act old, and acting old primes thoughts of old age.
System 1 provides the impressions that often turn into your beliefs, and is the source of the impulses that often become your choices and actions.
You act differently when experiencing cognitive ease versus cognitive strain; when strained, you'll probably make fewer errors, but you won't be as creative.
Cognitive strain is affected by both the current level of effort and the presence of unmet demands. This mobilises System 2.
A repeated experience, clear display, primed idea, and good mood all increase cognitive ease. This in turn makes things feel familiar, true, good, and effortless.
When strained, you are vigilant, suspicious, invest more effort, feel less comfortable, and make fewer errors. But you are less intuitive and less creative.
Predictable illusions occur if judgment is based on an impression of cognitive ease or strain. For example, frequent repetition makes people believe lies.
To craft a persuasive message, use high-quality paper, bright colours, simple words, and memorable verse, and quote sources with simpler names.
Cognitive strain, whatever the source, mobilises System 2, which is more likely to reject the intuitive answer suggested by System 1.
The mere exposure effect links the repetition of an arbitrary stimulus and the mild affection that people have for it. It's stronger for stimuli that we don't consciously see.
Mood affects the operation of System 1. When we are uncomfortable and unhappy, we lose touch with our intuition.
The main function of System 1 is to maintain and update a model of your personal world, which represents what is normal in it.
Norm theory: after a surprising event occurs, subsequent similar events appear more normal because they are interpreted in conjunction with the first.
We have norms for a vast number of categories, which provide the background for the immediate detection of anomalies.
System 1 is adept at finding a coherent causal story that links the fragments of knowledge at its disposal.
We are ready from birth to have impressions of causality, which do not depend on reasoning about patterns or causation. They are products of System 1.
We are prone to apply causal thinking to situations that require statistical reasoning, but System 1 cannot reason statistically, and System 2 can do so only with the necessary training.
System 1 is radically insensitive to both the quality and the quantity of the information that gives rise to impressions and intuitions.
Jumping to conclusions is efficient if the jump saves time and effort, the conclusion is likely correct, and the cost of an occasional mistake is acceptable.
System 1 bets on an answer, with recent events and the current context carrying the most weight in determining an interpretation; absent those, more distant memories govern.
System 1 is gullible and biased to believe, while System 2 is in charge of doubting and unbelieving, but System 2 is sometimes busy and often lazy.
Unlike scientists, who test hypotheses by trying to refute them, we seek data that are likely to be compatible with the beliefs we currently hold.
The halo effect increases the weight of first impressions, sometimes to the point that subsequent information is mostly wasted.
To derive the most useful information from multiple sources of evidence, you should always try to make these sources independent of each other.
The standard practice of open discussion gives too much weight to the opinions of those who speak early and assertively, causing others to line up behind them.
System 1 excels at constructing the best possible story that incorporates ideas currently activated, but it cannot allow for information that it does not have.
Jumping to conclusions facilitates the achievement of coherence and of the cognitive ease that causes us to accept a statement as true. It explains overconfidence.
When making judgements, we often compute much more information than we need (the mental shotgun), and we match underlying scales of intensity across dimensions (intensity matching).
System 1 continually assesses the problems that an organism must solve to survive. We equate good mood and cognitive ease with safety and familiarity.
Faces with a strong chin and a slight confident-appearing smile exude confidence.
Because System 1 represents categories by a prototype or a set of typical exemplars, it deals well with averages but poorly with sums.
System 1 allows matching intensity across diverse and unrelated dimensions. This mode of prediction by matching is statistically wrong, although acceptable to both systems.
The control over intended computations is far from precise, and we often compute much more than we want or need. This is called the mental shotgun.
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and answer it instead.
If we can't satisfactorily answer a hard target question, then System 1 invokes substitution by recalling and answering an easier heuristic question.
After answering a heuristic question, System 1 uses intensity matching to translate this answer to an answer of the target question.
The dominance of conclusions over arguments is most pronounced when emotions are involved.
While self-criticism is one of the functions of System 2, it is more of an apologist for than a critic of the emotions of System 1.
We have a strong bias towards believing that small samples closely resemble the population from which they are drawn.
System 1 is inept when faced with statistical facts, which change the probability of outcomes but do not cause them to happen.
Extreme outcomes, both high and low, are more likely to be found in small samples than in large ones.
Even statistical experts pay insufficient attention to sample size, and have poor intuitions of sampling effects.
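A quick simulation (my illustration, not the book's; the 50% base rate and the 70% cutoff for "extreme" are arbitrary choices) makes the small-sample effect concrete:

```python
import random

# Draw samples from a population where a trait occurs 50% of the time.
# "Extreme" means an observed rate of 70% or more, or 30% or less.
def extreme_share(sample_size, trials=2_000, threshold=0.7):
    extreme = 0
    for _ in range(trials):
        hits = sum(random.random() < 0.5 for _ in range(sample_size))
        rate = hits / sample_size
        if rate >= threshold or rate <= 1 - threshold:
            extreme += 1
    return extreme / trials

for n in (10, 100, 1000):
    print(f"n={n:4d}: share of extreme samples = {extreme_share(n):.1%}")
# Small samples produce extreme results routinely; large samples almost never do.
```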
System 2 is capable of doubt, but sustaining doubt is harder work than sliding into certainty.
System 1 runs ahead of the facts and constructs a rich image based on scraps of evidence, causing us to exaggerate the consistency and coherence of what we see.
The associative machine seeks causes. But instead of focusing on how the event came to be, the statistical view relates it to what could have happened instead.
We do not expect to see regularity produced by a random process. When we detect what appears to be a rule, we quickly reject the idea that the process is truly random.
The anchoring effect occurs when a particular value for an unknown quantity influences your estimate of that quantity.
Adjusting your estimate away from the anchor is an effortful activity. Insufficient adjustment, stopping too close to the anchor, is a sign of a weak or lazy System 2.
Anchoring is also a priming effect, which selectively evokes compatible evidence. This is the automatic operation of System 1.
Anchors that are obviously random can be just as effective as potentially informative anchors.
When negotiating, don't respond to an outrageous proposal with an equally outrageous counteroffer; instead, make a scene and make it clear that you won't continue the negotiation with that number on the table.
To resist anchoring effects, search your memory for arguments against the anchor. This negates the biased recruitment of thoughts that produces these effects.
System 2 is susceptible to the biasing effect of anchors that make some information easier to retrieve, and it has no control over or knowledge of the effect.
The ease with which we can think of examples is often used to judge the frequency of events.
Salient events, dramatic events, and personal experiences versus experiences by others bias the ease with which instances come to mind.
This explains why everyone in a group may feel as though he or she does more than his or her fair share.
By asking people to provide more instances of a given behaviour, you increase their struggle, and consequently they conclude that they don't adopt that behaviour.
Judgment is influenced more by the ease of retrieval than by the number of instances retrieved, so requesting more instances, which makes retrieval harder, can paradoxically weaken the judgment.
When you provide a spurious reason for the difficulty of retrieving a large number of instances, the difficulty is explained away and no longer weakens the judgment.
People who are personally involved in the judgment are more likely to consider the number of instances and less likely to go by fluency.
Fluency of instances is a System 1 heuristic, which is replaced by a focus on content when System 2 engages.
Merely reminding people of a time when they had power increases their apparent trust in their own intuition.
We try to simplify our lives by creating a world that is much tidier than reality; in the real world, we often face painful tradeoffs between benefits and costs.
Our expectations about the frequency of events are distorted by the prevalence and emotional intensity of the messages to which we are exposed.
Jonathan Haidt said "the emotional tail wags the rational dog."
Policy is about what people want and what is best for them. The availability cascade is the mechanism through which biases flow into policy.
When dealing with small risks, we either ignore them or give them far too much weight, with no middle ground.
Availability cascades may have the long-term benefit of calling attention to classes of risks and of increasing the risk-reduction budget.
It’s common practice to overweight evidence and underweight base rates; how do you know that your case is different?
The proportion of a particular class in a population is called the base rate of that class.
How much an instance conforms to the stereotype of a particular class is called the representativeness of that instance.
To determine how likely it is that an instance belongs to a class, we ignore the base rate of the class and focus on the representativeness of the instance.
Probability by representativeness is more accurate than chance guesses, but neglecting base rate information that points in another direction is a statistical sin.
When endorsing representativeness, System 1 will automatically process the available information as if it were true, unless you decide immediately to reject it.
Bayes's rule governs how we should combine base rates with the diagnosticity of the evidence, such as an account of representativeness.
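The book's taxicab problem makes the arithmetic concrete: 85% of cabs are Green, 15% are Blue, and a witness who is right 80% of the time says the cab in an accident was Blue. A minimal sketch of the Bayesian update:

```python
# Bayes's rule applied to the taxicab problem.
prior_blue = 0.15             # base rate of Blue cabs
p_say_blue_if_blue = 0.80     # witness accuracy
p_say_blue_if_green = 0.20    # witness error rate

posterior_blue = (prior_blue * p_say_blue_if_blue) / (
    prior_blue * p_say_blue_if_blue + (1 - prior_blue) * p_say_blue_if_green
)
print(f"P(cab was Blue | witness says Blue) = {posterior_blue:.0%}")  # ~41%
# Representativeness alone suggests 80%; the base rate drags it down to ~41%.
```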
The most coherent stories are not the most probable, but they are plausible. And we confuse the notions of coherence, plausibility, and probability.
Consequently, adding details to a scenario can make it more persuasive, but less likely to come true.
When performing single evaluation instead of joint evaluation, the less-is-more pattern appears: we evaluate a collection of items by its average, not its sum.
The sum-like nature of a variable is less obvious for probability than something more enumerable like money.
A question phrased as "how many" makes you think of individuals, while "what percentage" does not. This decreases the incidence of the conjunction fallacy.
You're more likely to learn something from an individual case or example than from facts and statistics.
Statistical base rates are facts about a population to which a case belongs, but they are not relevant to the individual case.
Causal base rates change your view of how the case came to be.
We neglect statistical base rates, and we easily combine causal base rates with other case-specific information.
Resistance to stereotyping is a laudable moral position, but neglecting valid stereotypes inevitably leads to suboptimal judgments.
System 1 can deal with stories in which the elements are causally linked, but it is weak in statistical reasoning.
Individuals feel relieved of responsibility when they know that others have heard the same request for help.
When the outcome surprises us, we are unwilling to deduce the particular from the general, but are willing to infer the general from the particular.
Consequently, we are more likely to learn something by finding surprises in our own behaviour than by hearing surprising facts about people in general.
It’s important to understand the natural fluctuations of quantifiable performance.
Regression to the mean is the tendency of extreme performance to be followed by performance closer to the average: poor performance is followed by improvement, and excellent performance by deterioration.
Because we tend to be nice to other people when they please us and nasty when they do not, we are statistically punished for being nice and rewarded for being nasty.
The discrepancy between two trials does not need a causal explanation; often luck explains why one is a significant outlier.
Whenever the correlation between two scores is imperfect, there will be regression to the mean.
Causal explanations will be evoked when we detect regression, but they will be wrong because regression to the mean has an explanation but does not have a cause.
To produce unbiased predictions, start with the average and move systematically from there, based on intensity matching and the estimated correlation of the evidence.
We are capable of rejecting information as irrelevant or false, but adjusting for smaller weaknesses in the evidence is not something System 1 can do.
If we are asked for a prediction but substitute an evaluation of the evidence, we generate biased predictions that completely ignore regression to the mean.
To create an unbiased prediction, start with a baseline estimate and an estimate derived from the evidence, then move from the baseline toward the evidence-based estimate in proportion to your estimate of the correlation between evidence and outcome (sketched in code below).
Such unbiased predictions make errors, but they are smaller and do not favor either higher or lower outcomes.
Unbiased predictions permit predicting rare or extreme cases only when the information is very good, so you'll never have the satisfaction of calling an extreme case.
Unbiased predictions are less preferred when error has varying types and severity, and extreme cases must be called correctly even if it accumulates error elsewhere.
Your intuitions will deliver predictions that are too extreme and you will be inclined to put far too much faith into them.
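A minimal sketch of this correction recipe; the GPA numbers and the helper name corrected_prediction are illustrative, not from the book:

```python
def corrected_prediction(baseline: float, intuitive_estimate: float,
                         correlation: float) -> float:
    """Move from the baseline toward the intuition-matched estimate by a
    distance proportional to the evidence/outcome correlation."""
    return baseline + correlation * (intuitive_estimate - baseline)

# Illustrative numbers: predicting a student's GPA. Baseline (average GPA)
# is 3.0; the evidence intensity-matches to 3.8; the evidence correlates
# with the outcome at roughly 0.3.
print(corrected_prediction(baseline=3.0, intuitive_estimate=3.8,
                           correlation=0.3))  # -> 3.24, far less extreme
```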
We believe that we understand the past because we constantly adjust our view of the world, which implies that the future should be knowable as well; in fact, we understand the past less than we think.
We are always ready to interpret behavior as a manifestation of general propensities and personality traits, or causes that you readily match to effects.
In a story, the many important events that involve choices tempt us to exaggerate the role of skill and underestimate the part played by luck.
When you adopt a new view of the world, you immediately lose much of your ability to recall what you used to believe before your mind changed.
Hindsight bias leads us to assess the quality of a decision not by whether the process was sound but by whether its outcome was good or bad.
The sense-making System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is. So we think that we can predict the future.
The comparison of firms that have been more or less successful is to a significant extent a comparison between firms that have been more or less lucky.
Subjective confidence isn’t a reasoned evaluation that a judgment is correct, but rather a feeling that reflects the coherence of information and the ease of processing it.
Declarations of high confidence tell you that an individual has constructed a coherent story in his or her mind, not necessarily that the story is true.
The persistence of individual differences in achievement is the measure by which we confirm the existence of skill in someone.
Facts that challenge basic assumptions, and thereby threaten our livelihood and self-esteem, are simply not absorbed. The mind does not digest them.
People can maintain an unshakeable faith, however absurd, when they are surrounded by a community of like-minded believers.
A person who acquires more knowledge develops an enhanced illusion of skill and becomes unrealistically overconfident. They have many excuses ready when proven wrong.
Errors in prediction are inevitable because the world is unpredictable. And high subjective confidence is not an indicator of accuracy.
Whenever you can replace intuition and impressions with a structured, yet simple formula, you should at least consider it.
Roughly 60% of studies have shown significantly better accuracy for algorithms in comparisons of clinical and statistical predictions.
One reason experts may be inferior is that they try to be clever, think outside the box, and consider complex combinations of features. This actually reduces validity.
Assigning equal weights to all predictors is often superior to the varying weights found by multiple regression, because equal weights are not affected by accidents of sampling.
Consequently, we can develop useful algorithms without any prior statistical research, and back-of-the-envelope judgments are often good enough.
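Dawes's famous improper model, cited in the book, predicts marital stability as frequency of lovemaking minus frequency of quarrels. A sketch of such an equal-weight formula, with invented data:

```python
# An "improper linear model": standardize each predictor and combine them
# with equal weights (sign set by the cue's direction). No regression fit,
# so the weights cannot overfit accidents of sampling. Data are made up.
def standardize(xs):
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return [(x - mean) / sd for x in xs]

lovemaking = [3, 1, 4, 2, 5]   # frequency score per couple (positive cue)
quarrels   = [1, 4, 2, 5, 1]   # frequency score per couple (negative cue)

z_love = standardize(lovemaking)
z_fight = standardize(quarrels)
scores = [a - b for a, b in zip(z_love, z_fight)]  # equal weights
print([round(s, 2) for s in scores])  # rank couples by this simple sum
```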
The aversion to algorithms making decisions is rooted in the strong preference that many people have for the natural over the synthetic or artificial.
Intuition adds value, but only after a disciplined collection of objective information and disciplined scoring of separate traits.
When creating your own formula for an interview procedure, pick at most six dimensions, and develop a 1 to 5 scale for each one.
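A sketch of how such an interview formula might look in code; the six trait names and the candidate ratings are illustrative assumptions, not the book's:

```python
# Structured-interview scorer: six traits, each rated 1-5 independently,
# then summed; commit in advance to hiring the highest total.
TRAITS = ["technical skill", "reliability", "communication",
          "sociability", "diligence", "composure"]

def total_score(ratings: dict[str, int]) -> int:
    assert set(ratings) == set(TRAITS), "rate every trait, and only these"
    assert all(1 <= r <= 5 for r in ratings.values()), "use the 1-5 scale"
    return sum(ratings.values())

candidate = {"technical skill": 4, "reliability": 5, "communication": 3,
             "sociability": 2, "diligence": 4, "composure": 3}
print(total_score(candidate))  # 21 out of a possible 30
```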
Under normal conditions you can usually trust an expert's intuition; when dealing with less regular environments, however, be more skeptical.
In the recognition-primed decision model, System 1 comes up with a plan. System 2 simulates it. If it works, it's implemented. Otherwise it's tweaked or discarded.
Emotional learning may be quick, but expertise takes time to develop because it requires building a large collection of mini-skills.
Confidence does not imply truth. The associative machine suppresses doubt and evokes ideas that are compatible with the current dominant story.
Skilled intuitions arise in environments that are sufficiently regular to be predictable, where we can learn those regularities through prolonged practice.
Human learning is normally efficient. If a strong predictive cue exists, human observers are likely to find it, given a sufficient opportunity to do so.
Algorithms perform better in noisy environments because they detect weakly valid cues, and they use such cues consistently.
The unrecognised limits of professional skill help explain why experts are often overconfident.
Judgments that answer the wrong question can also be made with high confidence.
We have a tendency to plan projects based on best-case scenarios and without taking into account all of the previous similar cases out there.
Confidentially collecting the judgment of each person in a group makes better use of the knowledge of its members.
Our inside view judgment is overly optimistic, while the outside view judgment rightly adjusts a baseline prediction.
We routinely discard statistical information, such as that offered by the outside view, when it's incompatible with personal impressions of a case.
To counter the planning fallacy, reference class forecasting uses distributional information to create a baseline prediction. This adopts an outside view.
People often, but not always, take on risky projects because they are overly optimistic about the odds they face.
The suppression of doubt contributes to overconfidence; try using a premortem to legitimise your doubts.
The people who have the greatest influence on the lives of others are likely to be optimistic and overconfident, and to take more risks than they realise.
The optimistic risk taking of entrepreneurs contributes to the economic dynamism of society, even if most risk takers end up disappointed.
We rate ourselves below average on any task we find difficult, and so we are overly optimistic about our standing on any activity we do moderately well.
Entrepreneurs imagine a future in which their own actions, not those of their competitors, determine the firm's outcome, because they know so little about those competitors.
A wide confidence interval is a confession of ignorance, which is not socially acceptable for someone who is paid to be knowledgeable. So we must be overconfident.
Optimism is highly valued, both socially and in the market, and so we reward the providers of dangerously misleading information more than we reward truth tellers.
Optimism contributes to resilience by defending one's self image, where we take credit for successes but little blame for failures.
A premortem asks you to assume that a year has passed, the plan was implemented as it now exists, and the outcome was a disaster; participants then write a brief history of that disaster.
This counters overconfident optimism by escaping groupthink, and unleashing the imagination of knowledgeable individuals in a much needed direction.
Bernoulli's expected utility model lacks the idea of a reference point: the value of something depends largely on a person's current situation.
Choices between simple gambles provide a simple model that shares important features with the more complex decisions that researchers actually aim to understand.
Expected utility theory defined axioms of rationality; prospect theory examines why we deviate from this theory when choosing under risk.
A risk-averse decision maker will choose a sure thing that is less than the expected value of a gamble, paying a premium to avoid uncertainty.
In mixed gambles we are naturally risk averse, while in bad choices, where a sure loss is compared with a larger loss that is merely possible, we are more likely to seek risk.
You know you have made a theoretical advance when you can no longer reconstruct why you failed for so long to see the obvious.
Your personal wealth does not determine your attitudes to gains and losses. We like winning and dislike losing, and we dislike losing more than we like winning.
In financial outcomes, the reference point can be the status quo, what you expect, or what you feel entitled to. Better outcomes are gains. Worse outcomes are losses.
A principle of diminishing sensitivity applies to changes of wealth, both for gains and losses.
We typically accept a bet only if the potential gain is at least 1.5 to 2.5 times the potential loss; this ratio is the loss-aversion coefficient.
In mixed gambles, where both a gain and a loss are possible, loss aversion causes extremely risk-averse choices.
In bad choices, where a sure loss is compared to a larger loss that is merely possible, diminishing sensitivity causes risk seeking.
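Both patterns fall out of prospect theory's value function. Below is a sketch using the functional form and parameter estimates from Tversky and Kahneman's 1992 paper (alpha of about 0.88, lambda of about 2.25, squarely inside the 1.5 to 2.5 loss-aversion range noted above); the book's text does not itself give these numbers:

```python
# Sketch of the prospect-theory value function (Tversky & Kahneman, 1992).
# The parameters are their published estimates, used here illustratively.
ALPHA = 0.88   # diminishing-sensitivity exponent
LAMBDA = 2.25  # loss-aversion coefficient

def value(x: float) -> float:
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

print(value(100), value(-100))  # a loss of 100 looms ~2.25x larger than a gain of 100
# Diminishing sensitivity: going from -1000 to -1100 hurts far less than
# going from 0 to -100, which is why sure losses push people toward gambles.
print(value(-1100) - value(-1000), value(-100) - value(0))
```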
But prospect theory cannot deal with disappointment. It also cannot deal with regret, such as losing the gamble and foregoing the sure option.
We naturally assign more value to things just because we own them.
Indifference curves assume that your utility depends entirely on the present situation, and that the evaluation of a possible job does not depend on your current job.
In labor negotiations and bargaining, the reference point and loss aversion are well understood, but their omission from most other economic analysis is theory-induced blindness.
Loss aversion says that the disadvantages of a change loom larger than its advantages, which introduces a bias that favors the status quo.
The endowment effect applies to goods "for use," or consumed or enjoyed. It doesn't apply to goods "for exchange," or traded for other goods.
Loss aversion is built into the automatic evaluations of System 1. Consider the baby who holds on fiercely to a toy and shows agitation when it is taken away.
Selling goods activates regions of the brain associated with disgust and pain. Buying activates these areas too when the price is too high.
Veteran traders are unaffected by the endowment effect. They ask "How much do I want to have this good, compared with other things I could have instead?"
Being poor is living below one's reference point: small amounts of money received register as reduced losses, not as gains, so all of a poor person's choices are between losses.
We generally work harder to avoid losses than we do to secure gains.
Our brain prioritizes threats above opportunities. It can process hostile images that we can't consciously see, and can quickly pick hostile faces out from a crowd.
Even symbolic threats evoke, in attenuated form, many reactions to the real thing, including fractional tendencies to avoid or approach, recoil or lean forward.
Given a goal, not achieving it is a loss, while exceeding it is a gain. But the aversion to the failure of not reaching a goal is stronger than the desire to exceed it.
An existing wage, price, or rent sets a reference point that feels like an entitlement; we think that exploiting market power to impose losses on others is unacceptable.
A firm has its own entitlement, which is to retain its current profit. But we think it's not unfair for a firm to reduce its workers' wages when its profitability is falling.
If a merchant lowers the price of a good, customers who bought at the higher price think of themselves as having sustained a loss.
We are just as risk seeking in the domain of losses as we are risk averse in the domain of gains.
Contrary to the expectation principle, the decision weights we assign to outcomes are not equal to the probabilities of those outcomes.
We also have inadequate sensitivity to intermediate probabilities. The range of decision weights is much smaller than the range of probabilities.
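One standard way to model this compression is the probability weighting function from Tversky and Kahneman's 1992 paper (an assumption here; the book presents the pattern as a table of decision weights rather than a formula):

```python
# Sketch of the prospect-theory probability weighting function
# (Tversky & Kahneman, 1992); gamma is their estimate for gains.
GAMMA = 0.61

def decision_weight(p: float) -> float:
    return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.2f} -> weight = {decision_weight(p):.3f}")
# A 1% probability gets a weight near 0.055 (overweighted), while a 99%
# probability gets a weight near 0.91 (underweighted): the possibility
# and certainty effects in a single curve.
```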
Paying a premium to eliminate a worry with certainty is compatible with the psychology of worry but not with the rational model.
Between a sure loss and a gamble with a high probability of a larger loss, diminishing sensitivity makes the sure loss more aversive, and the certainty effect reduces the aversiveness of the gamble.
This explains why people accept a high probability of making things worse for a small hope of avoiding a large loss, which can turn manageable failures into disasters.
These same two factors enhance the attractiveness of the sure thing and reduce the attractiveness of the gamble when the outcomes are positive.
Systematic deviations from expected value are costly in the long run. This rule applies to both risk aversion and to risk seeking.
People often overestimate the probabilities of unlikely events; this causes us to overweight them in our decisions.
Emotion and vividness influence fluency, availability, and judgments of probability, and thus account for our excessive response to the few rare events we don't ignore.
If the event we are asked to estimate is very unlikely, we instead focus on its alternative. We focus on the odd, different, and unusual.
A rich and vivid representation of the outcome, whether or not it is emotional, reduces the role of probability in the evaluation of an uncertain prospect.
We weight low-probability events more when stated in terms of relative frequencies (how many) than when stated in more abstract terms of "chances," "risk," and "probability" (how likely).
In choices from experience, as opposed to choices from description, we are exposed to variable outcomes from the same source, and so we do not overweight rare events.
In order to avoid exaggerated caution induced by loss aversion, take a broad frame; think as if the decision is just one of many.
Every simple choice formulated as gains and losses can be deconstructed in innumerable ways into a combination of choices, yielding preferences that are likely to be inconsistent.
If the premium paid for sure gains and the premium paid to avoid sure losses come out of the same pocket, the discrepant attitudes are unlikely to be optimal.
We naturally prefer narrow framing, considering each simple decision on its own, even when we could consider the decisions jointly under a broad frame.
Narrow framing of gambles leads to loss aversion. Broad framing treats each gamble as one of many, blunting the emotional reaction to loss and increasing the tolerance of risk.
Closely following daily fluctuations is a losing proposition, because the pain of small losses exceeds the pleasure of equally frequent small gains.
While an outside view protects you from the exaggerated optimism of the planning fallacy, a risk policy protects you from the exaggerated caution induced by loss aversion.
Rewards and punishments shape our preferences and motivate our actions, all kept track of by different mental accounts.
The emotions that people attach to the state of their mental accounts are not acknowledged in standard economic theory.
The disposition effect is the bias in finance to sell winners rather than losers, and is an instance of narrow framing.
The sunk-cost fallacy is investing additional resources in a losing account when better investments are available, preferring an unfavorable gamble to accepting a sure loss.
Members of a board will replace a CEO with one who does not carry the same mental accounts and is therefore better able to ignore sunk costs of past involvements.
A poignant story evokes more regret if it involves unusual events, because such events attract attention and are easier to undo in our imagination.
People expect to have stronger emotional reactions to an outcome that is produced by action than to the same outcome when it is produced by inaction.
When you deviate from the default, you can easily imagine the norm. If the default is associated with bad consequences, the discrepancy can be a source of painful emotions.
You will be more loss averse in situations that are more important than money, and more reluctant to sell important endowments when it might lead to an awful outcome.
To inoculate yourself against regret, remind yourself before deciding that things can go badly and that you considered that possibility; this precludes the hindsight that would otherwise fuel regret.
Single evaluations call upon the emotional responses of System 1, whereas comparisons involve more careful assessment, typically by System 2.
We normally experience situations in which contrasting alternatives are absent, and so moral intuitions that come to your mind in different scenarios are inconsistent.
Preference reversal can occur because joint evaluation focuses attention on an aspect of the situation that was less salient in single evaluation.
The emotional reactions of System 1 likely determine single evaluation, while the comparison and careful evaluation required by joint evaluation calls for System 2.
Judgments and preferences are coherent within categories but potentially incoherent when the objects being evaluated belong to different categories.
Be wary of joint evaluation when someone who controls what you see has a vested interest in what you choose.
Logically different statements can evoke different reactions depending on how they are framed.
Losses evoke stronger negative feelings than costs do, and so the cost of a lottery ticket that did not win is more acceptable than losing a gamble.
"Rational" subjects, which are least susceptible to framing effects, showed enhanced activity in the frontal area of the brain that is implicated in combining emotion and reasoning.
Broader frames and inclusive accounts generally lead to more rational decisions.
Opt-in versus opt-out is another framing effect. We check a box if we've already decided what we wish to do. But if unprepared for the question, laziness prefers the default.
We have an experiencing self and a remembering self; the latter of which keeps score and governs what we learn in order to make decisions.
The decision maker who pays different amounts to achieve the same gain or be spared the same loss is making a mistake.
The peak-end rule states that a global retrospective rating is well predicted by the average of the most intense moment (the peak) and the final moment (the end).
Duration neglect means that the length of the evaluated episode has no effect whatsoever on its retrospective evaluation.
What we learn from the past is to maximize the qualities of our future memories, not necessarily our future experience.
This is a feature of System 1, which represents sets by averages, norms, and prototypes. Not by sums.
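A minimal sketch of how the remembering self scores an episode under these two rules; the pain series are invented, loosely echoing the book's cold-hand experiment:

```python
# The retrospective rating of an episode is the average of its peak and
# its end; duration is ignored. Pain samples use a 0-10 scale.
def remembered_pain(samples: list[int]) -> float:
    return (max(samples) + samples[-1]) / 2

short_trial = [8, 8, 7]            # brief, ends at high pain
long_trial = [8, 8, 7, 5, 3, 2]    # strictly more total pain, milder ending
print(remembered_pain(short_trial))  # 7.5
print(remembered_pain(long_trial))   # 5.0: remembered as less bad overall,
# so people choose to repeat the longer, objectively worse experience.
```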
In some cases rats who can stimulate their brain by pressing a lever will die of starvation without taking a break to feed themselves.
A memory that neglects duration will not serve our preference for long pleasures and short pains.
Duration neglect is normal in a story. A story is about significant events and memorable events like its ending, not about time passing.
The peak-end rule and duration neglect also influence our evaluation of others' lives.
We use the word "memorable" to describe vacation highlights, which explicitly reveals the goal of the experience.
People’s evaluations of their lives and their actual experience are related, but different.
A small fraction of the population endures most of the suffering, whether because of illness, an unhappy temperament, or misfortunes or tragedies.
Our emotional state is largely determined by what we attend to, and we are normally focused on our current activity and immediate environment.
In Gallup data, some life aspects, such as education, are associated with a higher evaluation of one's life but not with greater experienced well-being.
The word happiness doesn’t have a simple meaning and should not be used as if it does.
Affective forecasting is the forecast of one's personal state in the future. An error happens when you think statistics don't apply to you.
Both experienced well-being and life satisfaction are largely determined by the genetics of temperament.
Setting goals that are especially difficult to attain leads to a dissatisfied adulthood; any concept of well-being must take into account what people want.
The focusing illusion ("nothing in life is as important as you think it is when you are thinking about it") can cause people to be wrong about their present well-being, about the happiness of others, and about their own future happiness.
The term miswanting describes bad choices that arise from the errors of affective forecasting. The focusing illusion is a rich source of it.
The focusing illusion favors goods and experiences that are initially exciting but eventually lose their appeal, while underappreciating experiences that retain their attention value over time.